Endovascular treatment for basilar artery occlusion: whether the "weekend effect" affects time metrics and clinical outcomes at a comprehensive stroke center
Objectives This study aimed to evaluate whether the "weekend effect" affects the time metrics and prognosis of acute ischemic stroke (AIS) patients who underwent endovascular treatment (EVT) due to basilar artery occlusion (BAO). Methods Clinical data of AIS patients who underwent EVT due to BAO between December 2019 and July 2023 were retrospectively analyzed. According to the time of admission, the study population was divided into a weekdays daytime group and a weekends nighttime group. In the subgroup analysis, the cohort was divided into four groups: weekdays daytime, weekdays nighttime, weekend daytime, and weekend nighttime. A good outcome was defined as a modified Rankin Scale score of ≤3 at 90 days after EVT. Time metrics [e.g., onset-to-door time (ODT) and door-to-puncture time (DPT)] and clinical outcomes were compared using appropriate statistical methods. Results A total of 111 patients (88 male; mean age, 67.7 ± 11.7 years) were included. Of these, 37 patients were treated during weekdays daytime, while 74 patients were treated during nights or weekends. There were no statistically significant differences in ODT (P = 0.136), DPT (P = 0.931), or clinical outcomes (P = 0.826) between the two groups. Similarly, we found no significant differences in time metrics or clinical outcomes among the four subgroups (all P > 0.05). Conclusion This study did not reveal any influence of the "weekend effect" on time metrics or clinical outcomes in AIS patients who underwent EVT due to BAO at a comprehensive stroke center.
Introduction
Basilar artery occlusion (BAO) accounts for 1% of all acute ischemic stroke (AIS) cases and 5% of AIS cases due to large vessel occlusion (LVO) (1). It is a devastating subtype of AIS with an extremely poor prognosis. Given its proven superiority over best medical management in both real-world studies and randomized controlled trials, endovascular treatment (EVT) has become an important strategy for treating patients with BAO (1-7). However, as indicated in previous studies, the proportion of patients achieving functional independence at 90 days after EVT is <40%, even when the occluded artery is successfully recanalized (1,3). Therefore, clarifying the variables associated with clinical outcome is crucial for doctor-patient communication and for establishing a reasonable treatment strategy.
The "weekend effect", which was defined as an increased rate of worse outcomes and mortality for hospitalization occurring on weekends or nighttime vs. weekdays, attracts increasing attention (8).It is presumably due to fewer in-hospital personnel and resources during off-hours.Previously, the influence of the "weekend effect" on the clinical outcomes after EVT has been studied sporadically in patients with AIS and mainly in the anterior circulation (8)(9)(10)(11)(12).Potts et al. have reported that the door-togroin times were delayed in patients presenting on the weekends nighttime group compared to weekdays; however, the incidence of symptomatic intracerebral hemorrhage and 90-day good functional outcomes did not differ between the two groups (8).Similarly, Lin et al. found that it took longer during non-working hours than working hours in door-to-image times and door-to-groin puncture times.The change in the National Institute of Health Stroke Scale (NIHSS) scores in 24 h was potentially better in the workinghour group than in the non-working-hour group (9).However, the influence of the working time on the clinical outcomes of patients with BAO after EVT has not been explored until now.
Therefore, the purpose of this study was to explore whether the "weekend effect" exists and to assess its potential influence on the procedural metrics and outcomes of patients with BAO after EVT.
Patient selection
This retrospective study was approved by the institutional review board of our institution. The requirement for written informed consent was waived. We searched our stroke database for all patients with posterior circulation AIS who received EVT from December 2019 to July 2023. The inclusion criteria were as follows: patients (1) aged ≥18 years, (2) with stroke due to basilar artery occlusion, (3) with an onset-to-door time (ODT) of <24 h, (4) with a baseline modified Rankin Scale (mRS) score of 0-2, and (5) who underwent EVT. Patients were excluded according to the following criteria: (1) baseline mRS score of >3, (2) concomitant anterior circulation stroke, and (3) incomplete clinical data (e.g., 90-day mRS or time metrics).
Patient groups
Nighttime was defined as the interval between 6:00 p.m. and 8:00 a.m., and daytime as the remaining hours (12). Weekdays were defined as Monday to Friday, and weekends as Saturday and Sunday (8). Accordingly, the study population was divided into four groups based on the time of admission to our stroke center: the weekdays daytime group, weekdays nighttime group, weekend daytime group, and weekend nighttime group. After combining the latter three groups, the cohort was also divided into two groups: the weekdays daytime group and the weekends nighttime group.
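For illustration only (this code is not from the study), the grouping rule above can be written as a short function; whether the 6:00 p.m. and 8:00 a.m. boundaries themselves count as nighttime is our assumption, as the text does not specify it.

```python
from datetime import datetime

def admission_group(admitted: datetime) -> str:
    """Classify an admission time into the four study groups.

    Nighttime: 18:00 (inclusive) to 08:00 (exclusive); weekend: Saturday or Sunday.
    """
    night = admitted.hour >= 18 or admitted.hour < 8
    weekend = admitted.weekday() >= 5  # Monday == 0, ..., Sunday == 6
    if weekend:
        return "weekend nighttime" if night else "weekend daytime"
    return "weekdays nighttime" if night else "weekdays daytime"

# A Friday 7:30 p.m. admission falls into the weekdays nighttime group
print(admission_group(datetime(2023, 7, 21, 19, 30)))  # -> weekdays nighttime
```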
Clinical variables
Demographic and clinical data were collected from the database of our stroke center. The following stroke-related risk factors were recorded: age, sex, hypertension, hyperlipidemia, diabetes, smoking, and atrial fibrillation. Baseline characteristics were also collected, including ODT, door-to-puncture time (DPT), NIHSS score at admission (NIHSSpre), NIHSS at 24 h after EVT (NIHSS24h), administration of intravenous tissue-type plasminogen activator (IV tPA), recanalization status, hemorrhagic transformation (HT) status, and patient outcomes. Successful recanalization was assessed using the modified Treatment in Cerebral Infarction (mTICI) scale and defined as mTICI 2b-3 (13). HT was evaluated on follow-up CT; high density persisting in the infarcted area without rapid disappearance was defined as HT. A good outcome was defined as an mRS score of ≤3 at 90 days after treatment (14).
Image evaluation and endovascular treatment
CT scans were performed on a 128-section multi-detector CT scanner (Optima CT 660; GE Healthcare). Standard noncontrast computed tomography (NCCT) (120 kV, 100-350 auto-mAs, contiguous 5-mm axial sections) and a whole-brain volumetric CT perfusion (CTP) scan were performed to evaluate patients with AIS. The CTP parameters were as follows: four-dimensional adaptive spiral mode, periodic spiral approach, 80 mm z-coverage, 100 kVp, 200 mAs, rotation time of 0.4 s, maximum pitch of 0.984, and 5-mm thickness. A total of 50 ml of nonionic iodinated contrast (Iopromide, Ultravist 370, Bayer Schering Pharma) was administered intravenously at 5 ml/s using a power injector, followed by 30 ml of saline at the same rate. The total acquisition time was 53 s. Simulated CTA images with a section thickness of 0.625 mm were reconstructed from the peak arterial phase of the CTP data to assess whether a basilar artery occlusion was present.
EVT was performed using the method reported in a previous study (13). Briefly, EVT was carried out under local anesthesia or conscious sedation. The Solumbra technique was usually performed using a Solitaire FR device (Medtronic, Irvine, California, USA). If necessary, contact aspiration via a 5F or 6F distal access catheter (Penumbra, Alameda, California, USA) was performed. After each intervention, angiography was performed to evaluate blood flow restoration. For patients with residual stenosis but acceptable reperfusion, antiplatelet and/or statin medications were suggested. For patients with residual stenosis and in situ thrombosis, balloon angioplasty and/or stent implantation could be considered according to the operator's experience. Intra-arterial thrombolysis or tirofiban administration was also used as rescue therapy.
Statistical analysis
Statistical analyses were performed using SPSS version 26.0 (IBM Corporation). Continuous variables were presented as mean ± standard deviation (SD) or median with interquartile range (IQR), depending on the distribution. Normality was evaluated using Shapiro-Wilk tests. Categorical variables were presented as numbers and percentages. Continuous variables were compared between two groups using independent-samples t-tests if normally distributed or the Mann-Whitney U test if not. Continuous variables were compared among the four groups using one-way analysis of variance, followed by multiple comparisons with the least-significant difference or Tamhane's T2 test, as appropriate, to identify where the differences lay. Categorical variables were compared using the Chi-square test or Fisher's exact test. A two-sided P-value of <0.05 was considered statistically significant.
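A minimal sketch of the decision rules described above, using SciPy rather than SPSS (the function names are ours; the tests and thresholds follow the text):

```python
import numpy as np
from scipy import stats

def compare_continuous(a, b, alpha=0.05):
    """Shapiro-Wilk normality check on both groups, then an
    independent-samples t-test (normal) or Mann-Whitney U test (non-normal)."""
    normal = stats.shapiro(a).pvalue > alpha and stats.shapiro(b).pvalue > alpha
    return stats.ttest_ind(a, b) if normal else stats.mannwhitneyu(a, b)

def compare_categorical(table):
    """Chi-square test on a contingency table, falling back to Fisher's
    exact test (2 x 2 tables only) when any expected cell count is below 5."""
    chi2, p, _, expected = stats.chi2_contingency(np.asarray(table))
    if (expected < 5).any():
        return stats.fisher_exact(table)
    return chi2, p
```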
Results

Patients' characteristics are detailed in Table 1. Of the 111 patients (88 male; mean age, 67.7 ± 11.7 years) finally included, the mean ODT and DPT were 433.2 ± 332.2 min and 126.3 ± 214.9 min, respectively. A total of 32 patients (28.8%) were administered IV tPA before EVT. Successful recanalization was achieved in 94 (84.7%) patients, and 33 (29.7%) patients achieved good outcomes at 90 days after EVT. A total of 37 patients were treated during weekdays daytime, and 74 patients were treated during nights or weekends. Table 2 compares the baseline demographics, risk factors, and presenting characteristics between the two groups. There were no statistically significant differences in any variable between the two groups (all P > 0.05), except that a higher proportion of patients received IV tPA in the weekend nighttime group (P = 0.046).

To further investigate differences in time metrics and outcomes, we divided the cohort into four subgroups: the weekdays daytime group, weekdays nighttime group, weekend daytime group, and weekend nighttime group (Table 3). Similarly, we did not find any significant differences in demographics, risk factors, time metrics, or patient outcomes among the four subgroups (all P > 0.05). The differences in ODT, DPT, HT, and mRS among the four groups are shown in Figure 1.
Discussion
The "Weekend effect, " which was termed as a poor outcome due to fewer in-hospital personnel and resources during off-hours, had been focused on for several years.Previously, several studies had tried to explore the potential influence of the "weekend effect" on the clinical outcomes after EVT in patients with AIS (8)(9)(10)(11)(12); however, these studies mainly focused on stroke due to anterior circulation LVO.The present study first focused on the potential influence of working time on the clinical outcomes of patients with BAO after EVT and found that there were no statistically significant differences in ODT, DPT, and also clinical outcomes among groups with different working times.Our study indicated that the "weekend effect" might not exist in patients with AIS who underwent EVT due to BAO at a comprehensive stroke center.
Previously, several studies investigated the potential existence and influence of the "weekend effect" on EVT in patients with AIS, especially those with anterior circulation LVO (8-12). Mpotsaris et al. reported that patients admitted during nighttime and weekends showed significantly prolonged door-to-reperfusion times; however, this did not affect the rate of revascularization or favorable outcome (12). This main conclusion was also supported by similar studies by Potts et al. (8), Lin et al. (9), and Omura et al. (10). They explained that outpatient clinics did not provide services during non-working hours, which led to crowding of the emergency department and slightly increased door-to-image times. In addition, the team at a comprehensive stroke center includes several specialties (e.g., emergency physicians, stroke neurologists, neurointerventional surgeons, radiologists, and nurses), some of whom are on-call from home during non-working hours (9). Prolonged door-to-reperfusion times might therefore result from waiting for specialists on duty during non-working hours. However, further analysis did not find a significant influence of the increased time intervals on functional outcome. This might be due to the wide application of perfusion imaging for patient selection: an accurate assessment of the "tissue window" might offset the influence of a prolonged "time window" on functional outcome (15).
Recently, numerous case series and trials have reported the efficacy of EVT in patients with AIS due to BAO (1-7,16). With so many positive results reported, the number of EVT procedures for treating patients with BAO can be expected to increase significantly. However, the impact of the "weekend effect" on time metrics and clinical outcomes has not been fully studied to date. Our study focused on this topic for the first time, and we found no statistically significant differences in ODT, DPT, or clinical outcomes between the weekdays daytime group and the weekends nighttime group, nor in the further subgroup analysis. It was not surprising that the difference in clinical outcomes was not significant, especially because the treatment time window of posterior circulation stroke is longer than that of anterior circulation stroke (17). Nevertheless, we also did not observe prolonged time metrics in the weekends nighttime group. In the authors' opinion, this might be because our center is a comprehensive stroke center in an academic setting. Our team includes emergency physicians, radiologists, and a neurointerventional team, with 24/7 availability of all personnel involved in the emergency treatment of AIS. Except for the nurses and technicians of the neurointerventional team, who were on-call from home, all other team members were on duty in the hospital during off-hours. Because our hospital is an academic unit, in-hospital residents and fellows could further expedite the treatment process.
There were some limitations to our study. First, this was a retrospective study conducted in a single center with a relatively small sample size; therefore, selection bias was inevitable. Second, our results are specific to our EVT setting (a high-volume stroke center) and might differ from centers with different organizations of acute stroke therapy.
In conclusion, based on a retrospective cohort of patients with AIS due to BAO from a comprehensive stroke center, we did not observe any differences in time metrics or clinical outcomes after EVT between the weekdays daytime group and the weekends nighttime group. The "weekend effect" might not exist in patients with BAO who undergo EVT in a well-organized comprehensive stroke center. Further multicenter studies with larger sample sizes are warranted to confirm our results.
Table 1. Patients' characteristics.

Table 2. Comparisons between the weekdays daytime and weekends nighttime groups. SD, standard deviation; ODT, onset-to-door time; DPT, door-to-puncture time; NIHSSpre, National Institutes of Health Stroke Scale score at admission; NIHSS24h, NIHSS at 24 h after treatment; IV, intravenous; tPA, tissue-type plasminogen activator. n indicates the number of patients. Categorical variables are expressed as numbers (percentages). Continuous variables are presented as mean (standard deviation) due to normal distribution.
"year": 2024,
"sha1": "9dc0783339a89c27c2f102fbf77de235007f12fd",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.3389/fneur.2024.1413557",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "785945eceda80e1734bf26e7155d646357fa56ef",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": []
} |
A Web-Based Training Resource for Therapists to Deliver an Evidence-Based Exercise Program for Rheumatoid Arthritis of the Hand (iSARAH): Design, Development, and Usability Testing
Background The Strengthening and Stretching for Rheumatoid Arthritis of the Hand (SARAH) program is a tailored, progressive exercise program for people having difficulties with wrist and hand function due to rheumatoid arthritis (RA). The program was evaluated in a large-scale clinical trial and was found to improve hand function, to be safe to deliver, and to be cost-effective. These findings led to the SARAH program being recommended in the UK National Institute for Health and Care Excellence guidelines for the management of adults with RA. To facilitate the uptake of this evidence-based program by clinicians, we proposed a Web-based training program for SARAH (iSARAH) to educate and train physiotherapists and occupational therapists on delivering the SARAH program in their practice. The overall iSARAH implementation project was guided by the 5 phases of the analysis, design, development, implementation, and evaluation (ADDIE) system design model. Objective The objective of our study was to conduct the first 3 phases of the model in the development of the iSARAH project. Methods Following publication of the trial, the SARAH program materials were made available to therapists to download from the trial website for use in clinical practice. A total of 35 therapists who downloaded these materials completed an online survey to provide feedback on practice trends in prescribing hand exercises for people with RA, perceived barriers and facilitators to using the SARAH program in clinical practice, and their preferences for the content and Web features of iSARAH. The development and design of iSARAH were further guided by a team of multidisciplinary health professionals (n=17) who took part in a half-day development meeting. We developed the preliminary version of iSARAH and tested it among therapists (n=10) to identify and rectify usability issues and to produce the final version. Results The major recommendations made by therapists and the multidisciplinary team were having a simple Web design and layout, clear exercise pictures and videos, and compatibility of iSARAH on various browsers and devices. We rectified all usability issues in the preliminary version to develop the final version of iSARAH, which included 4 short modules and additional sections on self-assessment, frequently asked questions, and a resource library. Conclusions The use of the ADDIE design model and engagement of end users in the development and evaluation phases have rendered iSARAH a convenient, easy-to-use, and effective Web-based learning resource for therapists on how to deliver the SARAH program. There is also huge potential for adapting iSARAH across different cultures and languages, thus opening more opportunities for wider uptake and application of the SARAH program into practice.
Introduction
Rheumatoid arthritis (RA) is a chronic inflammatory joint disease that presents with pain, inflammation, stiffness, and reduced muscle strength, joint movements, and joint function [1,2].
Joints of the hands and wrists are very commonly affected in people with RA [2,3], resulting in reduced functional ability of the hands [4][5][6][7]. The Strengthening and Stretching for Rheumatoid Arthritis of the Hand (SARAH) program is an individually tailored, progressive exercise program for people with pain and hand function problems due to RA [8,9]. It includes mobility exercises for the hand, wrist, and shoulder and strengthening exercises for the hand and wrist muscles. The exercises are delivered by a therapist with behavioral support strategies for exercise adherence, such as exercise diaries, goal setting, action planning, confidence building, and problem solving, along with routine advice on joint protection, assistive devices, and splints. Between 2009 and 2011, a large, pragmatic, multicenter randomized controlled trial (ISRCTN 89936343) evaluated the SARAH program across 17 National Health Service (NHS) hospitals in the United Kingdom [10]. A total of 490 adults with diagnosed RA, and who had been on a stable drug regimen for at least 3 months, were randomly assigned to receive best practice usual care either alone or in conjunction with the SARAH program. Significant improvements in overall hand function and self-efficacy were seen at 4 and 12 months in participants who received the SARAH program. The program was also found to be safe and cost-effective [10]. Based on this research, the exercise program is now recommended in the UK National Institute for Health and Care Excellence (NICE) guidelines for patients with RA affecting their hands [11].
Due to the success of the program and the NICE recommendations, we are now aiming to disseminate the evidence-based SARAH program to facilitate its use in clinical practice. In the original clinical trial, therapists attended a face-to-face training session (one-half to 1 day in duration) to learn how to deliver the SARAH program. Following the publication of the SARAH clinical trial results, all the patient and therapist materials required to deliver the SARAH program were made available for health care professionals worldwide downloadable from the Oxford Clinical Trials Research Unit (OCTRU) website [12].
However, we recognized the need for a knowledge dissemination tool with the potential to facilitate wider and systematic uptake of the SARAH program by physiotherapists and occupational therapists and its implementation in clinical practice. We, therefore, proposed a free Web-based training program for SARAH, iSARAH [13], to serve this purpose. Web-based training programs use modern telecommunication and information technologies to deliver information and have the capacity to accommodate multimodal learning formats (eg, written materials, multimedia, animations, feedback, and assessments) [14,15]. They can reach many people at their convenience, can overcome geographical barriers, and are cost-effective in terms of time, effort, and travel [15]. Web-based training has the potential to be an effective method of reaching and training health professionals globally [16][17][18][19][20].
The iSARAH implementation project is based on the analysis, design, development, implementation, and evaluation (ADDIE) model, one of the common instructional system design models used for constructing Web-based programs [21][22][23][24].
The analysis stage comprises defining the problem, identifying the target knowledge users, and looking for possible solutions to bridge the knowledge-action gap and user-specific needs for the dissemination tool. In the context of the SARAH program, the knowledge-action gap is the evidence-based SARAH program (current knowledge) and its application in practice (action). The targeted users are the physiotherapists and occupational therapists who routinely treat and prescribe hand exercises to people with RA. We proposed to bridge the knowledge-action gap by educating and training the therapists on the SARAH program with a knowledge dissemination tool (iSARAH).
The design stage consists of finding ways to organize and present the content, identifying modes of delivery, and developing an evaluation plan of the dissemination tool. This stage involves conceptualizing and adapting the SARAH program to fit the Web-based iSARAH.
The development stage involves building iSARAH, evaluating its usability issues, and refining iSARAH to develop the final version.
The implementation stage involves making iSARAH available for NHS therapists.
The evaluation stage will include evaluation of learning outcomes such as knowledge, attitudes, intention to implement and user satisfaction with iSARAH, and evaluation of actual use of the SARAH program by iSARAH-trained therapists in real-world settings.
Here we describe the first 3 phases of the iSARAH implementation project.
Phase 1: iSARAH Needs Analysis
Specific objectives of this phase were (1) to explore routine exercise prescription practices and outcomes use among therapists who treat people with RA affecting the hands and wrists, (2) to identify barriers and facilitators to implementing the SARAH program, and (3) to collect therapists' opinions and preferences on the design, content, and features of iSARAH.
A convenience sample of physiotherapists and occupational therapists from different countries who had downloaded the SARAH program materials from the OCTRU website and given permission to be contacted by the SARAH team was considered eligible for participation in the SARAH survey. Willingness to provide consent to take part in the survey was the other inclusion criterion.
We developed a survey questionnaire (Multimedia Appendix 1) that focused on routine therapist practice patterns in prescribing hand exercises for people with RA, and their experiences of using the SARAH program in clinical practice since they downloaded the SARAH program materials. We also asked therapists about barriers and enablers to using the SARAH program, and their preferences for the content, design, and structure of iSARAH. We sent invitation emails with a weblink containing information about the survey, along with a consent form and some questions relating to the therapists' professional background and experience. Access to the survey was allowed for those therapists who provided online consent. Those who consented were asked to complete the survey within 2 weeks. For nonresponders, a reminder email was sent after 2 weeks, followed by a final reminder a week later.
The survey protocol was reviewed and approved by the medical sciences Inter-Divisional Research Ethics Committee at the University of Oxford, Oxford, UK (reference number R43362/RE001). The SARAH survey was developed using LimeSurvey (LimeSurvey GmbH), an open source survey tool, and was hosted by OCTRU, University of Oxford.
Phase 2: iSARAH Design
Specific objectives of this phase were (1) to design a paper prototype of iSARAH, and (2) to gain feedback from a multidisciplinary group of health professionals and to agree on the content, delivery methods, frequently asked questions (FAQs), and the navigation, layout, and visual appeal features of iSARAH.
The SARAH research team and information technology experts mapped the SARAH program from the SARAH clinical trial to a 3-to 4-hour Web-based training package for therapists and designed a paper prototype. We proposed a half-day meeting with rheumatology clinicians, researchers, and technology experts based on their convenience and availability to attend the meeting. The purpose of this meeting was to gain collective feedback on the prototype and the survey findings to finalize the design of iSARAH. The paper prototype was presented at a half-day multidisciplinary team meeting (n=17) involving a rheumatologist (n=1), occupational therapists and physiotherapists (n=10; 7 of whom were part of the SARAH trial), SARAH trial researchers (n=4), and information technology experts (n=2).
Phase 3: iSARAH Development and Usability Testing
Specific objectives of this phase were (1) to develop the iSARAH website, (2) to gain end user feedback on the usability, usefulness, ease of use, and confidence in using iSARAH, and (3) to rectify usability issues and further refine iSARAH prior to its implementation. This phase involved building iSARAH (preliminary version) and evaluating its usability, usefulness, and ease of use and user confidence [25,26]. The usability evaluation protocol was reviewed and approved by the medical sciences Inter-Divisional Research Ethics Committee, University of Oxford (reference number R47560/RE001).
NHS hand therapists (physiotherapists and occupational therapists) who were treating people with RA and lived within 2 hours of travel to Oxford were considered eligible for participation in the usability testing. Willingness to provide signed consent was the inclusion criterion. We invited volunteers via the Centre for Rehabilitation Research in Oxford Twitter page and the online community forum of the Chartered Society of Physiotherapy.
Based on the available evidence that 80% of usability issues can be identified by testing with 5 participants and that 95% can be identified with 9 participants [27,28], we proposed to include 10 therapists who fulfilled the inclusion criteria.
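The 5- and 9-participant figures are consistent with the standard problem-discovery model; as a sketch, assuming the commonly cited average per-participant detection probability of p ≈ 0.31 (an assumption, not a value stated in this paper):

$$P(\text{found}) = 1 - (1 - p)^{n}, \qquad 1 - (1 - 0.31)^{5} \approx 0.84, \qquad 1 - (1 - 0.31)^{9} \approx 0.96$$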
We coordinated individual appointments to attend usability sessions through telephone calls and conducted the sessions at the Botnar Research Centre, University of Oxford. Before evaluation, participants provided signed consent and completed a series of demographic questions. The usability testing procedure was then explained emphasizing that the session was about evaluating iSARAH and not the user. Each session took approximately 90 minutes. The usability testing involved the following procedures.
Think-Aloud Procedure
The procedure was facilitated by 1 of the members of the SARAH implementation team. Participants were asked to log on to the iSARAH website by registering with test usernames and passwords. They were then asked to navigate through the website, starting from the home page. They were simultaneously encouraged to talk about what they felt, saw, or thought while browsing. The facilitator observed and took notes as participants were asked to verbalize their thoughts. When participants had difficulties in verbalizing, they were encouraged by a "keep talking" signboard and were minimally assisted with prompts (only when required) by the facilitator. All think-aloud sessions were audio recorded.
Self-Reported Questionnaires
We used the Computer System Usability Questionnaire (CSUQ) [29] to evaluate user satisfaction, ease of use, information, and interface of the program on a 7-point Likert scale (1=strongly disagree to 7=strongly agree).
Interviews
Using a semistructured interview guide, we asked participants about their experiences in navigating iSARAH. Interviews were conducted for approximately 10 to 15 minutes and were audio recorded. We summarized users' comments on iSARAH by listening to the audio files and cross-checking a second time. Figure 1 shows the flow of participants through the 3 phases of iSARAH.
Phase 1: iSARAH Needs Analysis
We sent SARAH survey invitations to a total of 102 physiotherapists and occupational therapists who had downloaded the SARAH program materials. Figure 1 displays the flow of the survey participants. Table 1 shows the demographic characteristics of those who took part in the SARAH survey. Pain, self-reported hand function, joint range of motion, stiffness, grip and pinch strength, and joint deformities were more commonly evaluated as part of their current practice. Performance-based hand function, 28-joint Disease Activity Score, and activities of daily living were the least evaluated outcomes.
The most common type of exercise prescribed by therapists was active range of motion exercises. Strengthening exercises were also frequently used, as were tendon gliding exercises. Nerve gliding, passive, and isometric exercises were much less commonly prescribed.
Self-management strategies, joint protection, and splinting were more commonly prescribed than thermotherapy, therapeutic gloves, work support, advice on activities of daily living, and electrotherapy.
On average, therapists had 4 sessions with their patients (mean 4, SD 3.9). The frequency of review sessions was mostly either once every 15 days, reported by 11 therapists, or every 1 to 2 months, reported by 9 therapists.
Most therapists used exercise sheets and review appointments to encourage adherence with home exercise programs. Exercise diaries, exercise contracts, and telephone reminders were less commonly used. About 74% (n=26) of the therapists delivered the SARAH program in their clinical practice, and on average had prescribed the program to 17 (SD 22) of their patients since downloading the materials. More than 50% (17/26) of the therapists who delivered the SARAH program did not find any aspect of SARAH that made it difficult to put into practice. They reported that the SARAH therapist manual, the exercise sheets with photographs, and the strong evidence base facilitated their use of the SARAH program in their daily practice. Other therapists reported issues with time, funding for exercise equipment, and inability to complete review assessments and exercise contracts.
Therapists who did not use the SARAH program (n=9) reported a lack of appropriate patients to be prescribed the SARAH program, budget, time, and their routine prescription of hand exercises like the SARAH program as main reasons for nonimplementation. Table 3 presents barriers and facilitators identified by therapists who completed the survey (n=35).
We asked therapists what they would like to see in a Web-based training program if one were available. Textbox 1 lists their suggestions.
Phase 2: iSARAH Design
Following the multidisciplinary team meeting, we identified the specific need to educate and train therapists on the behavioral support strategies and proposed a separate module on this topic. We agreed that a section addressing common questions that might be raised by therapists about the SARAH program in real-world settings should be included in an FAQ section of iSARAH. Attendees provided suggestions for framing these questions. Based on discussions about the iSARAH prototype and SARAH survey findings, the team suggested the following recommendations: (1) to provide weblinks within the text for additional information on a topic, for example, Splints in RA, (2) to provide a progression status bar to enable users to know where they are in the training, (3) to use consistent names for exercises, (4) to have a separate educational video on joint protection advice, (5) to have a separate module on behavioral support strategies, (6) to have SARAH exercises demonstrated through videos and photographs, (7) to have brief modules, (8) to have a plain layout and use optimal font sizes (14 point), (9) to have an official email support to address technical enquiries, and (10) to ensure iSARAH adapts across different types of Internet browsers and computers at NHS settings and other telecommunication devices.
Specific recommendations were also made regarding the behavioral strategies module: (1) to provide examples of general goals relating to upper limb function to aid therapists with goal setting, and (2) to include model scenarios on filling in the personal exercise guide and the Barriers and Facilitators form.

To facilitate effective implementation of the SARAH program by iSARAH-trained therapists in actual practice, we also discussed ways to minimize the major implementation barriers reported in the survey (time limitations, forgetting, and difficulties in access to and cost of SARAH exercise equipment and patient materials). Clinicians who had worked on the SARAH clinical trial raised some issues with the original forms used in the trial and proposed ways to streamline these forms to make them easier to use.

Textbox 2 lists suggestions to guide the implementation and evaluation phases of the SARAH program:

1. Make it clear in iSARAH that the SARAH program is flexible and will be feasible to complete at the user's convenience.
2. Send monthly email reminders to iSARAH-trained therapists.
3. Signpost therapists and their patients to resources needed to deliver the program, which could, ideally, be purchased at a discounted rate (eg, therapeutic putty, resistance bands).
4. Provide multiple hard copies of the SARAH patient materials at no cost to iSARAH-trained therapists for use in clinical practice, if required.
5. Demonstrate high credibility by incorporating information about the SARAH research team and all SARAH peer-reviewed publications.
6. Propose pain and self-reported hand function as the main outcomes for the evaluation phase.
Development: Preliminary iSARAH
iSARAH was built on a Moodle platform (release version 3.1; Moodle Pty Ltd) by the OCTRU information technology team, customized and styled using the Essential Theme add-on. An overview of iSARAH (preliminary version) is provided below.
Landing Page
The landing page introduced iSARAH with a brief statement about the purpose of the website, site contact information, the privacy policy, and the modules. Other features included a 2.5-minute preliminary iSARAH promotional video and a prominent widget for logging in to the training.
Modules
Module 1 covered clinical aspects of RA, benefits of exercises in RA, UK guidelines in the management of RA, and information about the SARAH clinical trial.
Module 2 covered development and physiological principles of the SARAH program, behavioral support strategies, and instructions on how to deliver the SARAH program.
Module 3 covered the self-assessment.
Module 4 included FAQs to inform the delivery of the SARAH program in different practice settings and patient scenarios.
Resource Library
All text materials required to deliver the SARAH program (eg, exercise booklets and videos, exercise diary, RA patient education booklets) and additional reference documents, such as SARAH trial publications, were archived in the resource library.
Delivery of Content
A combination of text, photographs, tables, and videos was used to deliver the training. Preliminary videos were produced for iSARAH promotion and instruction purposes of the training.
Visual Design and Navigation
A simple Web layout was used consistently across modules to reduce distraction and information overload.
iSARAH Usability Testing

Table 1 presents demographics of participants in the usability testing.
Think-Aloud Procedure
One of the major usability issues we observed was the difficulty in navigating from the end of one module to the next (eg, from the last page of Module 1 to the first page of Module 2), as there were no direct buttons to take users to the following module. Instead, participants had to click the respective module tabs on the top of the screen to navigate between modules or to proceed to the next module. We also noticed that some additional tabs appearing within the Moodle platform were confusing for the participants.
Hyperlinks to reference documents such as SARAH trial publications and patient materials were reported to be repetitive and distracting. Participants said that photographs showing RA hands and activities of daily living, and other illustrations, did not add to iSARAH but instead occupied screen space and forced them to scroll down frequently to read the whole page. In the self-assessment module, when participants entered an incorrect response to a question, they could not find a feature signposting them to the correct response in the respective module. They also reported that information about the SARAH team on the home page was not adequate.
Self-Reported Questionnaires
The CSUQ showed that participants overall found iSARAH simple, easy to use, and easy to understand, and they were satisfied in using it (Table 4). There was an overall agreement that participants could complete their work quickly and efficiently and recover from any unexpected technical mistakes.
There was some uncertainty as to whether the system gave error messages and informed users how to fix problems. Results from the Likert scales (Table 4) indicated that participants rated iSARAH as useful and easy to use, and that they were confident about using it.

Table 4. Questionnaire scores of iSARAH usability testing (n=10). Computer System Usability Questionnaire items rated on a 1-7 scale (1=strongly disagree, 2=disagree, 3=somewhat disagree, 4=neither, 5=somewhat agree, 6=agree, 7=strongly agree); values are median (interquartile range).

- Overall, I am satisfied with how easy it is to use this system: 6 (0.75)
- It was simple to use this system: 6 (0)
- I can effectively complete my work using this system: 5 (1.0)
- I am able to complete my work quickly using this system: 5 (0)
- I am able to efficiently complete my work using this system: 5 (1.0)
- I feel comfortable using this system: 6 (1.5)
- It was easy to learn to use this system: 6 (0.75)
- I believe I became productive quickly using this system: 6 (1.0)
- The system gives error messages that clearly tell me how to fix problems: 4 (0)
- Whenever I make a mistake using this system, I recover easily and quickly: 5 (1.0)
- The information (such as online help, on-screen messages, and other documentation) provided with this system is clear: 6 (1.0)
- It is easy to find the information I needed: 6 (2.0)
- The information provided for the system is easy to understand: 6 (1.0)
- The information is effective in helping me complete the tasks and scenarios: 6 (0.75)
Interviews
In general, users found that iSARAH was a detailed and helpful learning resource for therapists. The most common comments were that participants liked the Web layout, tabs for modules, exercise videos, and the whole content. Some key suggestions provided were to create videos of good sound quality, and to remove excess text and photographs to keep the information relevant and clear.
Modifications Made to Produce the Final Version of iSARAH
We revised iSARAH to address all major usability issues identified from the think-aloud procedure and interviews (Table 5). We produced good-quality promotional (Multimedia Appendix 2) and instructional videos using media professionals and removed all irrelevant photographs to allow more screen space. We minimized repetitive links to reference documents and patient materials within modules. We set up clear-cut tabs to navigate between the end of a module and the start of the subsequent module. The SARAH implementation team further reviewed the final version of iSARAH (Multimedia Appendix 3) for content, navigation issues, and grammar.

Table 5. Usability issues and the solutions implemented in the final iSARAH.

- Issue: Navigation between the last and first pages of consecutive modules was difficult. Solution: Buttons were added to take the user from the last page of the previous module to the first page of the next module.
- Issue: Different-colored text was hard to follow. Solution: Only 2 colors were used: black for text and blue for weblinks.
- Issue: Sections A, B, and C of Module 2 were confusing. Solution: These sections were recategorized as separate modules: Modules 2, 3, and 4.
- Issue: Having FAQs and self-assessment labelled as modules was irrelevant. Solution: FAQs and self-assessment were labelled with their own names for more clarity.
- Issue: Resource library documents were not opening in a separate window, and it was confusing when participants closed a document and wanted to return to their last seen page of the training. Solution: Documents were set to open and close easily in a separate window, allowing users to stay on their last seen page of the training.
- Issue: Too many links within the modules were distracting. Solution: Repetitive links were removed.
- Issue: Too much scrolling was annoying because photographs occupied space. Solution: Photographs were removed to allow more space for text and less scrolling.
- Issue: For the self-assessment, when an incorrect answer was entered, participants were not directed to find correct answers in the respective modules. Solution: The self-assessment section was set to point out incorrect responses; when the user provides an incorrect response, he or she is directed to the relevant module to learn more about the particular question.
- Issue: The home page did not cover all essential information about the SARAH program and SARAH team. Solution: More information on the SARAH program, the SARAH team, and the host organization was added, and a promotional video was produced.
- Issue: Some Moodle features (eg, tags, buttons) were distracting. Solution: All irrelevant buttons and tags were removed.
- Issue: The quality of videos could be improved. Solution: Good-quality videos were produced.
- Issue: A patient could demonstrate exercises in exercise videos. Solution: Exercise videos with a patient volunteer demonstrating the exercises were produced.
- Issue: There was too much text to read. Solution: The text was reduced, and more bullet points were used.
Discussion
The overall purpose of this paper was to present how we developed a Web-based implementation tool (iSARAH) and produced the final version suitable for implementation. The strength of this work is that it followed a recognized model for the construction of Web-based programs [21][22][23][24].
Principal Findings
Engagement with users through the SARAH survey allowed us to identify current practice and learning needs to ensure iSARAH was fit for purpose. From the survey, we established that the exercises included in the SARAH program were commonly used by therapists [6,7,30,31] but the behavioral change techniques were likely to be less familiar [8][9][10]. It also gave us insight into potential barriers to implementation. Respondents provided information about the features they would like to see in a Web-based training program, and this directly informed the design of the program. Survey findings also directly influenced the selection of outcomes for the evaluation phase of implementation.
Engagement with users continued during the design phase with a face-to-face meeting, as well as through usability testing. Usability testing was essential to producing a user-friendly website that could be deployed for implementation. We believe this has resulted in a flexible learning experience for users, one that is easy to navigate with unlimited access. We included FAQs and self-assessment to ensure that therapists have adequate training and skills to efficiently apply the SARAH program in actual practice. The next step is to evaluate the impact of iSARAH training on actual implementation of the SARAH program, including the impact on knowledge and skills of therapists, implementation rates, and patient outcomes. We know from our previous work [32] that training alone may not result in implementation [33]. A Web-based training developed to facilitate the implementation of a cognitive behavior approach for low back pain was shown to be as effective as face-to-face training regarding knowledge and confidence, but actual implementation rates were low and further enhancement of the training was required [33]. We have tried to identify potential barriers to implementation during the development phase of this project so that these are addressed by the Web-based training.
Limitations
This study has some limitations. First, we neither used observational analysis with video recordings in the think-aloud procedure to observe users' interactions with iSARAH, nor conducted a systematic qualitative analysis of participants' interviews. Second, the CSUQ and Likert scales have not been tested for reliability and validity in the target population. Hence, the range of scores should be interpreted with caution. Third, the SARAH survey participants were familiar with the SARAH program and hence their responses were prone to the risk of volunteer bias. Additionally, with a low consent rate (39 of 102 participants, 38.2%), the survey findings are at the risk of nonresponse bias from people who did not participate or respond. Fourth, we did not employ iterative cycles of usability testing-that is, consecutive cycles of testing until the point when no further usability issues were identified-but we used the feedback from all participants in a one-off cycle to refine iSARAH.
Evidence-based therapies have been found to be poorly disseminated into routine practice [34]. Barriers often reported by health professionals to practicing evidence include a lack of access to evidence resources [35-37] and the nonavailability of evidence resources in usable formats [38]. In the context of implementing the evidence-based SARAH program, we believe that easy and free access for health professionals to the SARAH program in a simplified Web-based format overcomes these barriers. We foresee that training qualified health professionals directly involved in the rehabilitation of people with RA of the hands will increase their knowledge of the evidence (the SARAH program) and build their skills and confidence to deliver it in practice. The Web-based training is also a time-saving learning resource that is potentially flexible in terms of learning [39] for health professionals with diverse Internet use habits and computer skills. Further, the content of iSARAH can be adapted [19] for language and cultural differences to assist wider implementation, opening opportunities to disseminate the SARAH program among therapists across the world who have limited or no access to SARAH training.
Next Steps
As a next step toward wider uptake and application of the SARAH program in clinical practice, there is huge potential for adapting iSARAH across different cultures and languages around the world.
Conclusions
To our knowledge, iSARAH is the first Web-based learning resource for therapists on an evidence-based hand exercise program. A systematic design approach by using the ADDIE model and involving end users has been successful in developing a user-centered iSARAH.
Our ongoing work on the impact evaluation among therapists who completed iSARAH and a service evaluation in people treated by SARAH-trained therapists will provide more insights on the uptake of the SARAH program in actual practice.
"year": 2017,
"sha1": "f0d21bb78082d2b3645d11f2dd0889e4800f2a0f",
"oa_license": "CCBY",
"oa_url": "https://www.jmir.org/2017/12/e411/PDF",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "339259a115dc90ab6d21108eb934712d1bd92993",
"s2fieldsofstudy": [
"Medicine",
"Psychology"
],
"extfieldsofstudy": [
"Psychology",
"Medicine"
]
} |
Synthesis, Structure, and Properties of β-Vinyl Ketone/Ester Functionalized AzaBODIPYs from FormylazaBODIPYs
Postfunctionalization of azaBODIPY (the BF2 complex of azadipyrromethene) is highly desirable because of the strong, tunable absorption bands of this class of dyes at wavelengths above 650 nm and their wide-ranging applications in biomedicine and materials science. Currently available postfunctionalization methods for this class of dyes have been limited to Pd-catalyzed coupling reactions on β-halogenated (brominated or iodinated) azaBODIPY platforms. In this work, we report a new strategy for the facile postfunctionalization of the azaBODIPY chromophore with various vinyl ketones and vinyl esters, based on a Wittig reaction on our previously developed β-formylazaBODIPYs and our recently developed β-bromo-β′-formylazaBODIPYs. Our strategy uses easily accessible starting materials and mild reaction conditions. It is highly compatible with various common phosphonium ylides (aliphatic, aromatic, and ester-substituted ones). The resultant bromo-containing β-vinyl ketone/ester functionalized azaBODIPYs are potential photosensitizers and can be further functionalized via coupling reactions. The ester groups on some of these azaBODIPYs can be further hydrolyzed to achieve the desired water solubility and to enable conjugation with biomolecules and solid surfaces.
■ INTRODUCTION
AzaBODIPY (the BF2 complex of azadipyrromethene, Chart 1) shows a strong absorption band at wavelengths above 650 nm, large molar extinction coefficients, and high photostability. This class of dyes has attracted wide-ranging research interest in biomedicine and materials science, for example, as photosensitizers (for photodynamic therapy), sensors, and near-infrared labeling agents. 1−5 BODIPY, the meso-carbon analogue of azaBODIPY (Chart 1), has been extensively studied, 6,7 and many well-established methods are available for its facile postfunctionalization. By contrast, few research efforts have been devoted to the postfunctionalization of azaBODIPY. Recently, some postfunctionalization methods have been developed based on Pd-catalyzed coupling reactions (Suzuki, 8 Stille, 9 and Sonogashira, 10 Figure 1) 8−11 on β-halogenated (brominated or iodinated) azaBODIPYs. These postfunctionalization methods, although elegant, generally require the use of a transition metal catalyst. In addition, the yields are far from optimal (<50%). In some coupling reactions, a tedious purification process is required due to the partial removal of the BF2 unit under the reaction conditions.
As part of our continuous research efforts in the preparation of functionalized azaBODIPYs, we have recently reported several strategies (thiophene-fusion, 12a "push-pull", 12b and "conformation-restriction" 12c ) for the fine tuning of the optical properties of azaBODIPYs. We previously reported the regioselective β-formylation of BODIPYs and successfully extended this reaction to the azaBODIPY system. 13 The resultant β-formylBODIPYs have received wide-ranging interest in highly diverse research fields. 14,15 We reasoned that these β-formylazaBODIPYs could be applied in the Wittig reaction 14d,16 to achieve the facile postfunctionalization of the azaBODIPY chromophore (Figure 1). With our recent progress in the regioselective β-bromination of β-formylazaBODIPYs, we anticipated that the resultant β-bromo-β′-formylazaBODIPY could be used as a privileged platform for the facile postfunctionalization of the azaBODIPY system. Herein, we report the facile preparation of a series of β-vinyl ketone/ester functionalized azaBODIPYs via a straightforward Wittig reaction on β-formylazaBODIPYs, as well as the X-ray structures, photophysical properties, and electrochemical properties of the resultant dyes.
■ RESULTS AND DISCUSSION

The resultant β-formylazaBODIPYs 2a−c and 3a−c smoothly reacted with various readily available phosphonium ylides 5a−c 14d,16 under Wittig reaction conditions (80 °C in toluene), from which the desired β-vinyl ketone/ester functionalized azaBODIPYs 4a−h were isolated as the major products in 43−83% yields (Scheme 2). These azaBODIPYs 4a−h were characterized by NMR and HRMS analysis. The structures of azaBODIPYs 4a and 4b and their key synthetic precursors 2a and 2b were further confirmed by X-ray analysis (Figure 2). This Wittig reaction uses readily available starting materials (β-formylazaBODIPYs and phosphonium ylide reagents) and mild reaction conditions and shows good compatibility with various functionalities. It was found that electron-donating substituents in azaBODIPYs 1 increase the reactivity and yields of this formylation and the subsequent Wittig reaction. The ester moiety may be further hydrolyzed to generate a carboxylic acid group, which provides the desired water solubility and a valuable site for conjugation with biomolecules and solid surfaces.
The remaining bromo substituent in azaBODIPYs 4d, 4e, and 4f provides the desired heavy-atom effect to facilitate their applications as photosensitizers. In addition, this bromo substituent provides a valuable site for various coupling reactions, including the Suzuki coupling reaction demonstrated in this work (Scheme 2). The Suzuki coupling of azaBODIPY 4d with 4-(diphenylamino)phenylboronic acid proceeded smoothly to give azaBODIPY 4k in 47% isolated yield. The installation of triphenylamine, an important unit in many electronic devices, results in an interesting donor−azaBODIPY−acceptor (D−π−A) structure.
X-ray Structures. Crystals of azaBODIPYs 2a, 2b, 4a, and 4b suitable for X-ray analysis (Figure 2) were obtained via the slow diffusion of petroleum ether into their dichloromethane solutions under ambient conditions. As expected, these azaBODIPY dyes all show an almost planar structure for the azaBODIPY core (defined by the central six-membered C2N3B ring and the two adjacent five-membered pyrrole rings) and a perpendicular arrangement of the plane defined by the F−B−F atoms relative to that of the azaBODIPY core. The B−N distances for these azaBODIPYs are within 1.55−1.58 Å, indicating the usual delocalization of the positive charge. These azaBODIPYs show a characteristic core structure similar to most previously reported azaBODIPY systems. 1c,12c The dihedral angles of the four phenyl rings with respect to the dipyrrin core (Table S2) indicate intramolecular hydrogen bonding between the fluorine atoms and the hydrogen atoms on the four phenyl moieties. The dihedral angles defined by the two pyrrole units in azaBODIPYs 2a and 2b are 12.4 and 14.8°, respectively, which were reduced to 3.8 and 1.1°, respectively, in azaBODIPYs 4a and 4b. This indicates that the presence of the β-vinyl ester moiety reduces the distortion of the planar structure of the azaBODIPY chromophore. The average root-mean-square deviations of the 19 atoms (atoms 1−19 labeled in Figures S5 and S7) from the mean planes of the azaBODIPY 4a and 4b cores are 0.0037 and 0.0019 Å, respectively. This indicates that the β-vinyl ester moiety and the C8BN3 central core lie in nearly the same plane. Thus, the installation of this β-vinyl ester moiety indeed extends the π-conjugation of the chromophore.
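For readers who wish to reproduce such mean-plane deviation figures from deposited coordinates, a minimal sketch (ours, not from the paper) of the standard SVD plane fit:

```python
import numpy as np

def plane_rmsd(coords: np.ndarray) -> float:
    """RMS deviation of atoms from their best-fit mean plane.

    coords: (N, 3) array of atomic positions. The plane normal is the
    right singular vector with the smallest singular value of the
    centered coordinates; deviations are projections onto that normal.
    """
    centered = coords - coords.mean(axis=0)
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    distances = centered @ vt[-1]  # signed distance of each atom to the plane
    return float(np.sqrt(np.mean(distances**2)))
```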
Photophysical Properties. The photophysical properties of the resultant azaBODIPYs 4a−k and their key synthetic precursors 2a−c and 3a−c were investigated in chloroform, as summarized in Table 1. Each of these dyes, except for 4k, shows an intense absorption band in the range of 650−690 nm with a molar extinction coefficient (57 400−82 600 cm−1 M−1) comparable to that of most azaBODIPYs reported in the literature. 1c In comparison with azaBODIPYs 2 and 3, each of the azaBODIPYs 4 shows a redshift of the absorption spectrum of roughly 3−30 nm. AzaBODIPY 4k shows two absorption bands (a strong band centered at 688 nm and a broad shoulder tailing up to 850 nm). The relatively low molar extinction coefficient (31 600 cm−1 M−1) observed for azaBODIPY 4k with respect to azaBODIPYs 4a−j may be partially attributed to the extremely broad absorption bands of this dye (Figure 3a).
Each of the azaBODIPYs 4 shows weak fluorescence emission in the range of 690−730 nm (Table 1). The relatively lower fluorescence emission of the bromo-containing azaBODIPYs may be attributed to the heavy-atom effect of the bromo substituent, which makes these azaBODIPYs potential photosensitizers for dye-sensitized solar cells. 18 The weak fluorescence of azaBODIPY 4k may be attributed to nonradiative decay associated with free rotation of the triphenylamine moiety and to internal charge transfer from the triphenylamine moiety to the azaBODIPY core.
The solvatochromic effects of three common organic solvents (toluene, chloroform, and methanol) on the absorption and emission properties of azaBODIPYs 4a−k were investigated (Figures S11−S27), as summarized in Table S2. A slight blueshift of the absorption and emission bands and a slight decrease in the fluorescence quantum yield were observed with increasing solvent polarity. For example, a slight blueshift of the absorption band (from 685 to 672 nm) and of the emission band (from 719 to 715 nm), and a slight decrease in the fluorescence quantum yield (from 0.10 to 0.04), were observed for azaBODIPY 4a upon changing the solvent from toluene to methanol. Similar solvatochromic behavior has been reported previously for solutions of azaBODIPY 1b. 1c

Electrochemical Properties. Cyclic voltammetry of 1a, 2a, 3a, 4d, and 4k was performed in deoxygenated dichloromethane at room temperature with tetrabutylammonium hexafluorophosphate (TBAPF6) as the supporting electrolyte (Figure 4). Most of these dyes show two reversible reduction waves and one reversible oxidation wave. AzaBODIPYs 2a and 4d have reduction potentials less negative than that of 1a. This indicates that the installation of electron-withdrawing substituents (formyl and vinyl ester groups) at the β-position increases the electron deficiency of the chromophore and makes azaBODIPYs 2a and 4d more susceptible to reduction than 1a. A similar effect was observed for the installation of a bromo substituent (3a). AzaBODIPY 4k, containing a strong electron-donating triphenylamine substituent, shows one irreversible reduction at −1.10 V. The highest occupied molecular orbital (HOMO) and lowest unoccupied molecular orbital (LUMO) energy levels of 4d and 4k were estimated from the onset potentials of their first oxidation and reduction waves (−5.59 and −4.08 eV for 4d; −4.95 and −3.51 eV for 4k, respectively). The calculated electrochemical energy band gap of 4k is 1.45 eV, in good agreement with its optical band gap.
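For readers unfamiliar with this estimate, the conversion from onset potentials to orbital energies is usually done with an empirical offset tied to the reference electrode. The exact offset adopted in this work is not stated, so the sketch below (Python) assumes the common relation E = −(E_onset + 4.4) eV for potentials referenced to SCE; the onset values shown are illustrative placeholders, not the measured ones.

def frontier_levels(e_ox_onset, e_red_onset, offset=4.4):
    # Empirical estimate of HOMO/LUMO energies (eV) from onset
    # potentials (V vs SCE); 'offset' depends on the reference convention.
    homo = -(e_ox_onset + offset)
    lumo = -(e_red_onset + offset)
    return homo, lumo, lumo - homo  # electrochemical band gap (eV)

# Illustrative onsets only:
homo, lumo, gap = frontier_levels(e_ox_onset=0.55, e_red_onset=-0.89)
print(f"HOMO = {homo:.2f} eV, LUMO = {lumo:.2f} eV, gap = {gap:.2f} eV")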
■ CONCLUSIONS
In conclusion, we have developed a new strategy for the facile postfunctionalization of azaBODIPYs based on a classic Wittig reaction on our previously reported β-formylazaBODIPYs and our recently developed β-bromo-β′-formylazaBODIPYs. The strategy uses readily available starting materials, requires mild reaction conditions, and features good yields and good compatibility with various functionalities. The installation of the β-vinyl ketone/ester extends the π-conjugation of the system while having negligible influence on the planarity of the chromophore. The resultant β-vinyl ketone/ester functionalized azaBODIPY dyes show intense absorption in the NIR range (660−690 nm). The bromo-containing β-vinyl ketone/ester functionalized azaBODIPYs are potential photosensitizers and can be further subjected to coupling reactions to generate various β,β′-difunctionalized D−π−A dyes.
■ EXPERIMENTAL SECTION
General. Reagents and solvents were used as received from commercial suppliers unless noted otherwise. All reactions were performed in oven-dried or flame-dried glassware unless otherwise stated and were monitored by thin-layer chromatography (TLC) using 0.25 mm silica gel plates with a UV indicator (60F-254). 1H and 13C NMR spectra were recorded on a 300 or 500 MHz NMR spectrometer at room temperature. Chemical shifts (δ) are given in ppm relative to CDCl3 (7.26 ppm for 1H and 77 ppm for 13C) or to internal TMS. High-resolution mass spectra (HRMS) were obtained using APCI-TOF or MALDI-TOF in positive mode.
Photophysical Measurements. UV−visible absorption and fluorescence emission spectra were recorded on commercial spectrophotometers (190−900 nm scan range) at room temperature (10 mm quartz cuvette). Relative fluorescence quantum efficiencies of the BODIPY derivatives were obtained by comparing the areas under the corrected emission spectra of the test samples in various organic solvents with that of Rhodamine B (Φ = 0.36 in chloroform). 10b Nondegassed, spectroscopic grade solvents and a 10 mm quartz cuvette were used. Dilute solutions (0.01 < A < 0.05) were used to minimize reabsorption effects. Quantum yields were determined using eq 1: 19

Φ_x = Φ_r (F_x/F_r) [A_r(λ_ex)/A_x(λ_ex)] (n_x/n_r)^2    (1)

where the subscripts x and r refer, respectively, to our sample x and to the reference (standard) fluorophore r with known quantum yield Φ_r in a specific solvent; F stands for the spectrally corrected, integrated fluorescence spectrum; A(λ_ex) denotes the absorbance at the excitation wavelength λ_ex; and n is the refractive index of the solvent (in principle at the average emission wavelength). A numerical sketch of this relation follows the next paragraph.

Cyclic Voltammograms. Cyclic voltammograms of 1 mM 1a, 2a, 3a, 4d, and 4k were measured in dichloromethane solution containing 0.1 M TBAPF6 as the supporting electrolyte, with a glassy carbon working electrode, a Pt wire counter electrode, and a saturated calomel reference electrode, at a 50 mV s−1 scan rate at room temperature (Figure 4).
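As a worked example of Eq. 1, the sketch below (Python) evaluates the relative quantum yield from the integrated emission areas, absorbances, and refractive indices; the numerical inputs are placeholders, with Rhodamine B in chloroform (Φ_r = 0.36) as the reference.

def relative_quantum_yield(F_x, A_x, n_x, F_r, A_r, n_r, phi_r):
    # Eq. 1: Phi_x = Phi_r (F_x/F_r) (A_r/A_x) (n_x/n_r)^2
    # F: corrected, integrated emission area; A: absorbance at lambda_ex
    # (kept within 0.01 < A < 0.05); n: solvent refractive index.
    return phi_r * (F_x / F_r) * (A_r / A_x) * (n_x / n_r) ** 2

# Placeholder spectral areas; both sample and reference in chloroform:
phi_x = relative_quantum_yield(F_x=1.2e6, A_x=0.030, n_x=1.446,
                               F_r=4.1e6, A_r=0.028, n_r=1.446, phi_r=0.36)
print(f"Phi_x = {phi_x:.3f}")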
Crystallography. Crystals of 2a, 2b, 4a, and 4b suitable for X-ray analysis were obtained via the slow diffusion of petroleum ether into their dichloromethane solutions under ambient conditions. The vial containing the solution was left loosely capped to promote crystallization. A suitable crystal was chosen and mounted on a glass fiber using grease. Data were collected at room temperature using a diffractometer equipped with a graphite crystal monochromator situated in the incident beam. Cell parameters were retrieved using the SMART 20 software and refined using SAINT 21 on all observed reflections. The determination of unit cell parameters and the data collection were performed with Mo Kα radiation (λ = 0.71073 Å). Data reduction was performed using the SAINT software, which corrects for Lp and decay. The structures were solved by direct methods using the SHELXS-97 program and refined by the least-squares method on F2 with SHELXL-97, 22 incorporated in SHELXTL V5.10. 23 CCDC-1547895 (2a), CCDC-1547896 (2b), CCDC-1519967 (4a), and CCDC-1519968 (4b) contain the supporting crystallographic data for this article. These data can be obtained free of charge from the Cambridge Crystallographic Data Centre via www.ccdc.cam.ac.uk/data_request/cif.
General Procedure for the Bromination of β-FormylazaBODIPYs 2a−c To Generate 3a−c. To β-formylazaBODIPYs 2a−c (0.6 mmol) in dichloromethane (50 mL) was added liquid bromine (0.7 mmol). The mixture was stirred at room temperature for 5 min and quenched via the addition of an aqueous solution of sodium bicarbonate (30 mL, 1 M). The reaction mixture was washed with brine and extracted with dichloromethane (50 mL × 3). The combined organic layers were dried over anhydrous Na2SO4, and the solvent was removed under vacuum. The residue was purified by column chromatography on silica using a mixture of petroleum ether and dichloromethane (v/v = 2/1) as the eluent to afford the target azaBODIPYs 3a−c as dark cyan solids in 91−95% isolated yields. | 2019-04-09T13:01:41.015Z | 2017-06-08T00:00:00.000 | {
"year": 2017,
"sha1": "af88736ccabdf81900e35a3f005bc3ba78beac8c",
"oa_license": "acs-specific: authorchoice/editors choice usage agreement",
"oa_url": "https://doi.org/10.1021/acsomega.7b00393",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "78e719bcb977cd07ad17cdc349bcd730f5fbb500",
"s2fieldsofstudy": [
"Materials Science",
"Chemistry"
],
"extfieldsofstudy": [
"Materials Science",
"Medicine"
]
} |
119081355 | pes2o/s2orc | v3-fos-license | Stellar energetic particles in the magnetically turbulent habitable zones of TRAPPIST-1-like planetary systems
Planets in close proximity to their parent star, such as those in the habitable zones around M dwarfs, could be subject to particularly high doses of particle radiation. We have carried out test-particle simulations of ~GeV protons to investigate the propagation of energetic particles accelerated by flares or travelling shock waves within the stellar wind and magnetic field of a TRAPPIST-1-like system. Turbulence was simulated with small-scale magnetostatic perturbations with an isotropic power spectrum. We find that only a few percent of particles injected within half a stellar radius from the stellar surface escape, and that the escaping fraction increases strongly with increasing injection radius. Escaping particles are increasingly deflected and focused by the ambient spiralling magnetic field as the superimposed turbulence amplitude is increased. In our TRAPPIST-1-like simulations, regardless of the angular region of injection, particles are strongly focused onto two caps within the fast wind regions and centered on the equatorial planetary orbital plane. Based on a scaling relation between far-UV emission and energetic protons for solar flares applied to M dwarfs, the innermost putative habitable planet, TRAPPIST-1e, is bombarded by a proton flux up to 6 orders of magnitude larger than experienced by the present-day Earth. We note two mechanisms that could strongly limit EP fluxes from active stars: EPs from flares are contained by the stellar magnetic field; and potential CMEs that might generate EPs at larger distances also fail to escape.
INTRODUCTION
The definition of planet habitability has been based in recent decades on the orbital distance (or habitable zone, hereafter HZ; Kasting et al. 1993) at which the steady stellar irradiation allows for a temperature consistent with the presence of liquid water on the planetary surface. However, charged energetic particles (hereafter EPs) produced by stellar flares or by shock waves driven by Coronal Mass Ejections (hereafter CMEs) and travelling into the interplanetary medium may significantly impact the conditions for life to exist on planets beyond the solar system (Segura et al. 2010; Ribas et al. 2016). In the case of the solar wind, in-situ measurements of EP irradiation are used to assess shielding requirements for astronauts at 1 AU (Mewaldt 2006; Mewaldt et al. 2007). Multi-spacecraft observations of solar eruptive events during the maximum of solar cycle 23 (2002−2006) show that between 0.4 and 20% of the kinetic energy of CMEs in the energy range 10^31−10^32 erg (in the solar wind frame) is expended in accelerating solar EPs (Mewaldt et al. 2008; Emslie et al. 2012).
Stellar EPs are in some cases expected to cause depletion of planetary ozone layers (Segura et al. 2010; Tilley et al. 2017). Such depletion allows penetration of UV radiation, with consequent degradation of proteins (Kerwin & Remmele 2007) but also, in contrast, catalysis of pre-biotic molecules (Airapetian et al. 2016). Loyd et al. (2018) found that ozone depletion by photolysis alone is expected to be significant only for very major flares occurring monthly or yearly, but noted that the effects of the much more common weaker flares in their study could be enhanced by EPs. Such multiple lines of evidence suggest that EPs are a component of the star/planet interaction worthy of detailed investigation in relation to habitability.
Propagation of EPs from the injection location to a planet is mediated by the large-scale and turbulent components of the stellar magnetic field. Studies of the effect of EPs on the ionization of protoplanetary disks (Turner & Drake 2009) or on the synthesis of short-lived nuclides in the early solar system (see, e.g., Dauphas & Chaussidon 2011) assumed that EPs propagate rectilinearly, unimpeded by the magnetic field structure. However, both components of the magnetic field have been shown to lead to an efficient confinement of EPs close to young active stars (see, e.g., Fraschetti et al. 2018). M dwarfs, the most abundant and long-lived stars in the Milky Way, are currently among the primary targets of exoplanet searches. This is largely due to their small radii, which increase the likelihood of detecting orbiting Earth-sized planets with transit techniques, and to their low masses compared with other spectral types, which increase the planet-induced radial velocity Doppler shift in the stellar spectrum. Youngblood et al. (2017) recently used the MUSCLES (Measurements of the Ultraviolet Spectral Characteristics of Low-mass Exoplanetary Systems) Treasury Survey (France et al. 2016) to determine that large flares on M dwarfs, i.e., with a soft X-ray (hereafter SXR) peak flux ≥ 10^−3 W m^−2 at 1 AU, or class X10.0 in the GOES (Geostationary Operational Environmental Satellite) classification, lead to a > 10 MeV proton flux on planets in the HZ up to ∼4 orders of magnitude higher than at the present-day Earth.
Likewise, assuming for T Tauri stars a solar-like correlation between the peak emission of large flares (X-ray luminosity > 10^30 erg s^−1) and energetic proton enhancements (Feigelson et al. 2002; Turner & Drake 2009) suggests an enrichment by ∼4 orders of magnitude over the present-day proton density at 1 AU. These fluxes imply that the ionization of protoplanetary disks can locally exceed the ionization due to stellar X-rays as a result of EPs being channeled and concentrated by magnetic turbulence (Fraschetti et al. 2018).
Such cases show that the EPs emitted by stars more active than the Sun can play a crucial role in the evolution of the circumstellar medium, or inner "astrosphere" (here within ∼ 100 stellar radii), and potentially in the habitability of exoplanets. However, while active stars might generate copious EPs, it is necessary to understand how they propagate within the stellar and interplanetary magnetic field in order to assess their potential impact.
The seven Earth-sized transiting exoplanets recently discovered in the TRAPPIST-1 system (Gillon et al. 2017) are surprisingly packed within a distance of 0.062 AU from the host star (Delrez et al. 2018). Three planets (TRAPPIST-1e, f, g) have been found to orbit within the HZ, which spans the range ∼0.029−0.047 AU (Delrez et al. 2018), raising the question of whether the enhanced EP flux at such a close distance affects the atmospheres and planetary habitability.
In this work we determine the flux of EPs impinging onto the HZ planets in the TRAPPIST-1 system by using a realistic, turbulent, magnetized wind model of an M dwarf star as a proxy for the as yet poorly constrained wind of TRAPPIST-1. We adopt the extended magnetic field structure computed using a three-dimensional magnetohydrodynamic (MHD) model previously calibrated to the solar wind and recently applied to study the coronal structure, winds, and inner astrospheres of Sun-like stars (Alvarado-Gómez et al. 2016a,b) and M dwarfs (Garraffo et al. 2017), together with the propagation of EPs in stellar turbulence (Fraschetti et al. 2018). We directly solve for the propagation of individual EPs in the turbulent inner astrosphere of an M dwarf wind. The turbulence is calculated via the prescription defined in Giacalone & Jokipii (1999) and Fraschetti & Giacalone (2012).
In Section 2, the general properties of the MHD model simulations are outlined. Section 3 describes the assumptions adopted regarding EP propagation and the magnetic turbulence. Section 4 presents the numerical model. Section 5 contains the main results, and Section 6 quantifies the flux impinging on the HZ planets in the TRAPPIST-1 system. The discussion and conclusions are in Sections 7 and 8, respectively.
TRAPPIST-1 MAGNETOSPHERIC MODEL
TRAPPIST-1 is a low-mass M dwarf (0.089 M_sun) with a 3.3 day rotation period and a radius R* ∼ 0.114 R_sun according to the latest observations (Luger et al. 2017). It has been confirmed to host seven planets orbiting in a coplanar system (within ∼30 arcmin) viewed nearly edge-on (Gillon et al. 2017). All the planets reside close to the host star, with semi-major axes from 0.01 AU to 0.062 AU (Mercury orbits at 0.39 AU) and orbital periods from 1.5 days to 20 days.
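The quoted orbital periods can be checked against the semi-major axes with Kepler's third law; a minimal sketch (Python) for the stated stellar mass of 0.089 M_sun:

import numpy as np

def orbital_period_days(a_au, m_star_msun):
    # Kepler's third law in solar units: P[yr]^2 = a[AU]^3 / M[M_sun]
    return np.sqrt(a_au ** 3 / m_star_msun) * 365.25

for a in (0.01, 0.062):  # innermost and outermost semi-major axes
    print(f"a = {a:5.3f} AU -> P = {orbital_period_days(a, 0.089):4.1f} d")
# -> ~1.2 d and ~18.9 d, consistent with the quoted 1.5-20 day range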
As a background medium for studying the propagation of EPs within the TRAPPIST-1 system, we adopt the wind and magnetosphere model computed by Garraffo et al. (2017) using the 3D MHD code Block Adaptive Tree Solarwind Roe Upwind Scheme (BATS-R-US; Powell et al. 1999; Tóth et al. 2012), in the version that incorporates the Alfvén Wave Solar Model (AWSoM) (van der Holst et al. 2014). A data-driven global MHD method is used that was initially developed to reconstruct the solar atmosphere and the solar wind. BATS-R-US employs a radial field magnetogram as a boundary condition for the stellar photospheric magnetic field. In the case of application to the Sun, this is a solar magnetogram, but stellar magnetograms obtained using the Zeeman-Doppler Imaging technique (Donati & Brown 1997) can also be used.

[Figure 1 caption: The wind and magnetosphere model used as a proxy for TRAPPIST-1; the Z-axis is aligned with the stellar rotation axis. Up: the stellar surface color-coded by the radial magnetic field component (Br), with a slice perpendicular to the line of sight showing the radial wind speed (Ur); the white translucent half-sphere at R = 20R* denotes the maximum R at which the transition between closed (magenta) and open (black, with arrows) field lines is observed; field of view 75R*. Bottom: same color code for Br and Ur, with Ur projected onto the equatorial plane (z = 0); open field lines in the equatorial plane are denoted by black arrows, open field lines extending to other latitudes (cyan) are probed on the half-sphere at R = 60R*, and selected closed field lines are shown in magenta; field of view 135R*.]
Zeeman-Doppler Imaging is presently limited to luminous, fairly rapidly rotating stars. TRAPPIST-1, despite its relatively fast spin, is optically faint (M_V = 18.8; Gillon et al. 2017) and out of reach of current Zeeman-Doppler Imaging capabilities. Unfortunately, both the distribution of the magnetic field on its surface and the direction of the rotation axis are unknown due to the extreme faintness of the star; moreover, both are subject to change on timescales of years or longer, due to periodic changes of magnetic polarity and to axis precession, respectively. Its average magnetic field, however, has been estimated to be ∼600 G using Zeeman broadening (Reiners & Basri 2010). There is growing agreement that the geometry of the magnetic field depends on the rotation period and spectral type of the star (Vidotto et al. 2014; Garraffo et al. 2015; Réville et al. 2015; Finley & Matt 2018). Garraffo et al. (2017) therefore used as a proxy for TRAPPIST-1 the magnetogram observed for GJ 3622 (Morin et al. 2010), an M4 dwarf with a rotation period of 1.5 days. The field on its surface reaches a maximum of 1.4 kG, yielding an average field of ∼600 G, consistent with the TRAPPIST-1 observations. The magnetic structure is not expected to change significantly between stars with periods of 1 to 3 days. We note that our approach differs from that of Dong et al. (2018), who estimated the ion escape rates of the seven planets using a wind model based on a solar magnetogram under solar minimum conditions, rescaled to a magnetic field strength more typical of M-dwarf values (Morin et al. 2010).
The GJ 3622 magnetic field is roughly dipolar, with a notable misalignment between the rotation axis and the magnetic axis amounting to a few tens of degrees (∼40°−50°). The wind and magnetosphere model is illustrated in Figure 1.
STELLAR ENERGETIC PARTICLES IN THE TRAPPIST-1 ENVIRONMENT
General assumptions on EPs: origin and propagation
Our general goal here is to explore the effect of small-scale magnetic turbulence on the propagation of EPs through the magnetosphere of the host star TRAPPIST-1, out to the outermost planet located at a distance of ∼0.062 AU. In particular, we focus on a comparison of the EP flux generated at the star itself with that which propagates out to planets 1b, 1e, and 1h.
Two processes are assumed to produce the nonthermal particles (Fraschetti et al. 2018): 1) shock waves driven by CMEs, travelling in the interplanetary medium and therein accelerating and releasing EPs; 2) flares occurring within the stellar corona and releasing EPs within a small distance from the stellar surface (∼0.5R*). Both processes are assumed to produce the ∼GeV kinetic energy protons studied here. This assumption can be justified by solar analogy: GOES measurements correlating solar proton enhancements at 1 AU with SXR flares do not unequivocally pinpoint flares as the only sources of particle acceleration, as CME-driven shocks are consistent with such a correlation as well (Belov et al. 2007).
In our simulations only the location of injection of EPs (at a distance R_s from the star), rather than the acceleration mechanism, is assigned. As for the abundance of accelerated particles in the circumstellar medium at a given distance from the host star, we use the estimate based on solar scaling relations between EP fluence and far-UV and SXR fluence during flares by Youngblood et al. (2017). This scaling provides a time-averaged EP enrichment on time scales comparable with a statistically typical flare duration (Vida et al. 2017).
We calculate the propagation of the EPs using a test-particle approach within a realistic representation of the interplanetary medium that includes magnetic field fluctuations. The large-scale structure used here for the TRAPPIST-1 magnetic field (see Fig. 1) is approximately dipolar, with no significant field lines wrapping around the star as might be expected for T Tauri stars and some fast rotators (see, e.g., Gregory et al. 2009; Cohen et al. 2010; Fraschetti et al. 2018). Nevertheless, it is still uncertain whether the average ∼kiloGauss magnetic field of TRAPPIST-1 allows for CME escape and the outward driving of EPs accelerated at shocks (Osten & Wolk 2015). Under the assumption that EPs can be steadily supplied by flares and CMEs, the dominant magnetic effects we are concerned with for EP propagation in TRAPPIST-1 are expected to be scattering and perpendicular diffusion in the turbulent stellar field.
The MHD wind solution and the magnetic turbulence are, to a good approximation, stationary on the time-scale of EP propagation. The EPs travel at speeds close to c, whereas the stellar rotation speed close to the surface is ∼2 km s^−1, and the Alfvén wave speed in the circumstellar medium is ∼10^4 km s^−1 (∼10^3 km s^−1) at a distance of ∼10R* (110R*, the semi-major axis of the outermost planet) from the host star. This holds for M dwarfs in general. The visible-light periodograms of M dwarfs (with radii in the range 0.08−0.6 R_sun), presumably dominated by rotational modulation signatures, typically peak at a few days over a range of periods of ∼1−100 days (Hawley et al. 2014), with corresponding surface rotation speeds in the range 0.04−30 km/s (Barnes et al. 2014; Jeffers et al. 2018). Only the earliest M dwarfs (0.6 R_sun) with rotation periods ≤ 3 days have surface rotation speeds > 10 km s^−1. Dynamical timescales are therefore much longer than the EP travel time in our simulations (typically < 1 hour).
Turbulent stellar magnetic field
In analogy with measurements of interplanetary magnetic turbulence (e.g., Jokipii & Coleman 1968) and of interstellar density turbulence (Armstrong et al. 1995), we prescribe a magnetic turbulence power spectrum having the shape of a power law (Kolmogorov) in the 3D turbulent wavenumber k (see Fig. 2). Scale-dependent anisotropic turbulence (à la the Goldreich & Sridhar 1995 model), invoked to explain the origin of the anisotropy of solar wind turbulence at MHD scales (e.g., Horbury et al. 2008), has unsettled theoretical transport properties (Laitinen et al. 2013; Fraschetti 2016a,b) and would require a more cumbersome numerical code.
The test-particle simulations presented here naturally track the pitch-angle scattering and cross-field motion of EPs caused by the small-scale turbulence. An alternative approach to EP transport involves Monte Carlo simulations that reproduce the pitch-angle scattering but neglect perpendicular transport (see, e.g., Ellison et al. 1981). The nearly radial spreading of the open magnetic field lines of the astrosphere used here leads to an observable consequence (see Sect. 5) of the turbulent transport across field lines (Fraschetti & Jokipii 2011; Strauss et al. 2017). In contrast, in the case of the T Tauri star studied in Fraschetti et al. (2018), the wrapping of magnetic field lines around the star prevented an assessment of the effect of transport across field lines.
Due to the lack of observational estimates of the correlation length, or injection scale, L_c, of the magnetic turbulence within the circumstellar medium (see Fig. 2), we adopt the uniform value L_c = 10^−5 AU throughout the simulation box. A simulation set carried out with a smaller uniform L_c = 10^−6 AU shows that the statistical properties of the EPs are not significantly affected by the choice of L_c, provided that the resonance condition is satisfied. In this regard, L_c = 10^−5 AU is a reasonable value for the rather small range in radial distance of the planets in the TRAPPIST-1 system, within 0.062 AU.
The chosen value of L_c ensures resonance with turbulent inertial scales at each EP energy considered (see Fig. 2) during the entire propagation. Such a condition requires the resonant wave-number to lie within the inertial range, i.e., k_min ≲ 1/r_g(x) ≲ k_max; here, r_g(x) = p_⊥ c / e B_0(x) is the gyroradius of a proton with momentum p_⊥ perpendicular to the unperturbed, space-dependent magnetic field B_0(x) of TRAPPIST-1, e is the proton electric charge, and c is the speed of light in vacuum.
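To make the resonance condition concrete, the sketch below (Python) evaluates r_g/L_c for the proton energies used in this work; the field strength B_0 = 2 G is an illustrative value (not read from the MHD solution), chosen so that a 10 GeV proton gives r_g/L_c ∼ 0.1, the value quoted later in Sect. 5.

import numpy as np

M_P = 0.938            # proton rest energy [GeV]
GEV_ERG = 1.602e-3     # erg per GeV
E_ESU = 4.803e-10      # proton charge [statC]
L_C = 1e-5 * 1.496e13  # correlation length L_c = 1e-5 AU, in cm

def gyroradius_cm(e_kin_gev, b_gauss):
    # r_g = p_perp c / (e B_0); here the full momentum is taken as p_perp,
    # i.e., an upper bound on the gyroradius.
    pc = np.sqrt((e_kin_gev + M_P) ** 2 - M_P ** 2)  # momentum times c [GeV]
    return pc * GEV_ERG / (E_ESU * b_gauss)

for e_kin in (0.3, 1.0, 10.0):
    ratio = gyroradius_cm(e_kin, b_gauss=2.0) / L_C
    # resonance requires 1/r_g within [k_min, k_max] = [2pi/L_c, 200pi/L_c]
    print(f"E = {e_kin:5.1f} GeV: r_g/L_c = {ratio:.3f}")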
The power of the magnetic fluctuation δB(x) relative to B_0(x) is defined as

σ^2 ≡ ⟨δB(x)·δB(x)⟩ / B_0^2(x)    (2)

Here, σ^2 is assumed to be independent of space throughout the simulation box as well. The spherical average ⟨B_0(x)⟩_Ω of the unperturbed field produced by the 3D-MHD simulations (see Sect. 2) drops with radius R from 2R* as ∼R^−2.2. On the other hand, solar wind measurements of the turbulence amplitude δB between 0.3 and 4 AU yield a power-law dependence on heliocentric distance with a very similar index (≃2.2) at a variety of helio-latitudes (Horbury & Tsurutani 2001). Thus, in the absence of any current measurement of the magnetic turbulence around TRAPPIST-1, it seems reasonable to assume a uniform σ^2, following Fraschetti et al. (2018). The turbulence might be generated by the stirring of the plasma at the outer scale L_c, followed by a cascade, or by plasma instabilities at kinetic scales generated, e.g., by the streaming of EPs along the field; we neglect the latter here as we are restricted to the test-particle limit. The turbulence within the violently active M dwarf magnetosphere is likely to be much stronger than that in the solar wind (σ^2 not greater than 0.1; Burlaga & Turner 1976). Thus, we considered values of σ^2 spanning the range 0.01−1.0. The interpretation of our simulations makes use of the scattering mean free path, λ, given by quasi-linear theory (Jokipii 1966), which reads (Giacalone & Jokipii 1999; Fraschetti et al. 2018)

λ ∝ (r_g(x)/L_c)^{1/3} L_c / σ^2    (3)

up to a numerical factor of order unity. The choices of uniform L_c and σ^2 imply that λ depends on the spatial coordinates only via r_g(x) (i.e., B_0(x)).
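A numerical sketch of Eq. 3 (Python): the order-unity prefactor is not fixed by the proportionality above, so it is left as a free parameter here, set only so that the example reproduces the λ ≃ 3.3 × 10^9 cm quoted in Sect. 5 for a 10 GeV proton with r_g/L_c ∼ 0.1 and σ^2 = 0.1.

def parallel_mfp_cm(rg_cm, lc_cm, sigma2, prefactor=4.7):
    # Quasi-linear scattering mean free path (Eq. 3):
    # lambda ~ prefactor * (r_g/L_c)^(1/3) * L_c / sigma^2
    return prefactor * (rg_cm / lc_cm) ** (1.0 / 3.0) * lc_cm / sigma2

LC = 1.496e8  # L_c = 1e-5 AU in cm
lam = parallel_mfp_cm(rg_cm=0.1 * LC, lc_cm=LC, sigma2=0.1)
print(f"lambda = {lam:.2e} cm")  # ~3.3e9 cm, about 0.5 R* for R* = 0.114 R_sun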
NUMERICAL METHOD
In our numerical experiments, we have directly integrated the trajectories of ∼10^4 energetic protons propagating in a turbulent magnetic field that can be decomposed as

B(x) = B_0(x) + δB(x)    (4)

where the large-scale component, B_0(x), is the 3D magnetic field generated by the 3D-MHD simulations as calculated in Garraffo et al. (2017) and described in Section 2, and the random component δB = δB(x, y, z) has a zero mean (⟨δB(x)⟩ = 0). Here δB(x, y, z) is calculated as the sum of plane waves with random orientation, polarization, and phase, following the prescription in Giacalone & Jokipii (1999) and Fraschetti & Giacalone (2012). We use an inertial range k_min < k < k_max, with k_max/k_min = 10^2, where k_min = 2π/L_c and k_max is the magnitude of the wavenumber corresponding to some turbulence dissipation scale. In Fraschetti et al. (2018) we verified that an inertial range extended by one decade to smaller scales does not substantially change the resulting distribution of a large number of EPs hitting a protoplanetary disk, despite being computationally much more expensive; we assume that a larger inertial range is not relevant for the M dwarf circumstellar turbulence either. The turbulence power spectrum within the inertial range (Fig. 2) is assumed to be a three-dimensional Kolmogorov power law (index −11/3). At scales larger than k_min^−1 (k_0 < k < k_min), the power spectrum is taken as constant (see, e.g., Jokipii & Coleman (1968) for the solar wind case).
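A simplified sketch of the turbulence synthesis (Python with numpy): a finite number of static, transverse plane waves with random propagation directions, polarizations, and phases, with per-mode amplitudes weighted so that the power follows k^−11/3 over the inertial range and the total variance equals σ^2 B_0^2. The full Giacalone & Jokipii (1999) prescription additionally uses complex (circular) polarizations and a specific mode discretization, which are omitted here.

import numpy as np

rng = np.random.default_rng(1)

def make_turbulence(n_modes, k_min, k_max, sigma2, b0=1.0):
    # Logarithmically spaced wavenumbers across the inertial range
    k = np.logspace(np.log10(k_min), np.log10(k_max), n_modes)
    # Per-mode amplitude: |dB_k|^2 ~ k^(-11/3) * dk; normalized so that
    # <|dB|^2> = sigma2 * b0^2 (each cosine contributes amp^2 / 2).
    amp = k ** (-11.0 / 6.0) * np.sqrt(np.gradient(k))
    amp *= np.sqrt(2.0 * sigma2) * b0 / np.sqrt(np.sum(amp ** 2))
    # Random unit propagation directions and transverse polarizations
    khat = rng.normal(size=(n_modes, 3))
    khat /= np.linalg.norm(khat, axis=1)[:, None]
    pol = np.cross(khat, rng.normal(size=(n_modes, 3)))
    pol /= np.linalg.norm(pol, axis=1)[:, None]   # guarantees pol . khat = 0
    phase = rng.uniform(0.0, 2.0 * np.pi, n_modes)

    def delta_b(x):
        # delta B(x) = sum_n amp_n * pol_n * cos(k_n khat_n . x + phase_n)
        arg = k * (khat @ x) + phase
        return (amp[:, None] * pol * np.cos(arg)[:, None]).sum(axis=0)

    return delta_b

dB = make_turbulence(n_modes=256, k_min=2 * np.pi, k_max=200 * np.pi,
                     sigma2=0.3)  # k_max/k_min = 100, as in the text
print(dB(np.array([0.1, 0.2, 0.3])))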
In our simulations, the EPs are injected uniformly on spherical surfaces at a variety of radii, R_s, with a velocity distribution isotropic in pitch-angle. The number of particles is then rescaled by using the enhancement in EP flux inferred at a given distance from the star in Youngblood et al. (2017). After propagation through the inner astrosphere, the EP angular locations are recorded on spherical surfaces at distances R_p. We verified that the particle energy is conserved to a relative accuracy of 10^−3−10^−4.

[Figures 6−8 captions: Maps of EP hitting points for injection at R_s = 10R* (Figs. 6, 7) and R_s = 5R* (Fig. 8), recorded at the spheres R_p = R_b, R_e. The total number of injected EPs (N_inj) is the same in all cases. Different rows correspond to different values of σ^2, increasing from top to bottom; different columns correspond to different planets, 1b (left) and 1e (right). The colorbar is scaled to the maximum number of EPs per pixel and varies strongly between panels; thus, the same color in different panels does not indicate the same absolute number of EPs. The plane θ = 90°, where θ is the colatitude (latitude + 90°), marks the plane of the planetary orbits, coplanar to within 30 arcmin (Delrez et al. 2018).]
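The text does not specify the trajectory integrator; a standard choice for test particles in static fields, sketched below in Python, is the relativistic Boris push, which conserves the particle energy to round-off when the electric field vanishes, so the quoted 10^−3−10^−4 relative conservation is then a check on field interpolation and time stepping rather than on the pusher itself.

import numpy as np

C = 1.0  # speed of light in code units

def boris_push(x, u, e_field, b_field, q_over_m, dt):
    # One step of the relativistic Boris scheme; u = gamma * v.
    u_minus = u + 0.5 * q_over_m * dt * e_field
    gamma = np.sqrt(1.0 + u_minus @ u_minus / C ** 2)
    t = 0.5 * q_over_m * dt * b_field / gamma       # rotation half-angle vector
    s = 2.0 * t / (1.0 + t @ t)
    u_plus = u_minus + np.cross(u_minus + np.cross(u_minus, t), s)
    u_new = u_plus + 0.5 * q_over_m * dt * e_field
    gamma = np.sqrt(1.0 + u_new @ u_new / C ** 2)
    return x + dt * u_new / gamma, u_new

# Gyration test in a uniform field: gamma (i.e., energy) stays constant.
x, u = np.zeros(3), np.array([0.5, 0.0, 0.1])
g0 = np.sqrt(1.0 + u @ u)
for _ in range(10000):
    x, u = boris_push(x, u, np.zeros(3), np.array([0.0, 0.0, 1.0]), 1.0, 0.01)
print(np.sqrt(1.0 + u @ u) - g0)  # drift at round-off level only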
In Figs. 6 and 7, for weak turbulence (σ^2 = 0.01, upper row), the distribution of hitting points spreads fairly uniformly over the R_p-sphere. Such a distribution mirrors the uniform distribution of the EP injection points and results from the outward propagation of EPs close to the scatter-free limit (i.e., uniform and static electric and magnetic fields) along the open field lines intercepted on the sphere at R_s (greater λ for small σ^2, from Eq. 3).
The perpendicular diffusion coefficient κ_⊥ grows, regardless of the model, as κ_⊥ ∼ σ^2 (Giacalone & Jokipii 1999; Fraschetti & Jokipii 2011; Strauss et al. 2017), leading, for small σ^2, to a negligible decorrelation of EPs from the direction of the average magnetic field. Thus, the resulting distribution of hitting points at R_p is close to the injection distribution at R_s, and the trajectories nearly map the unperturbed magnetic field B_0. However, we note that the ratio of the number of EPs at the R_p-sphere (N_Rp) to N_inj is limited to 20−25% (see also Fig. 9, left panel), as a large fraction (75−80%) collapse back to the star. The latter EPs are released on closed field lines, which are prevalent at R_s = 10R* (see Fig. 1), and propagate along those closed field lines back to the star, due to the large λ (see Eq. 3) and negligible perpendicular diffusion.
We also note in Fig. 9, left panel, that for each value of σ^2 the ratio N_Rp/N_inj decreases for greater R_p, i.e., decreasing from 1b (red) to 1h (blue). This occurs because some EPs that propagate past an inner R_p-sphere undergo pitch-angle diffusion that leads them to move backward and collapse to the star without reaching the outer R_p-sphere. In addition, Fig. 9, left panel, shows, for each value of σ^2, a smaller difference between the blue and green curves as compared with the green and red ones: this change results from the transition of the large-scale B_0-field structure from closed/open to prevalently open field lines between 1b (red) and 1e (green), whereas between 1e and 1h (blue) all field lines are open (cf. Fig. 1), so that no significant difference is expected between the green and blue curves. We note that the likelihood of backward trajectories decreases further out due to the increase of the mean free path: λ increases outward as r_g^{1/3} (Eq. 3), since B_0 decreases outward and σ^2 is uniform, so most EPs channelled onto an open line that reach 1e will also reach 1h.
We have run an additional set of simulations with R_s = 1.5R*, i.e., at a distance of 0.5R* from the stellar surface, for particles with E = 0.3 GeV. For these simulations, negligible turbulence was adopted (σ^2 = 10^−8), since within the chosen turbulent inertial range the EPs would not scatter resonantly, as r_g is suppressed by the strong B_0 field close to the surface. We find that the ratio N_Rp/N_inj is in the range 3.0−3.7% for R_p = R_b or R_h.
Effect of Stronger Turbulence
The histogram on the R_p-sphere changes dramatically in the presence of stronger turbulence (σ^2 = 0.1, 1.0; middle and lower rows in Figs. 6, 7 and in Fig. 8): the EP hitting points on the R_p-sphere are confined to equatorial caps. We find a depleted region, in white, that is barely discernible at R_p = R_b but conspicuous at R_p = R_e, and that oscillates azimuthally in the middle and bottom rows of Figs. 6, 7, and 8. This arises from the inclination of the magnetic axis to the rotation axis, and traces the azimuthal variation of the slow wind (see the spherical map of the wind speed, upper row in Fig. 10).
Inspection of the structure of the average magnetic field (see Fig. 1) confirms that closed (open) field lines populate mainly the slow (fast) wind region. Moreover, a comparison of the middle row of Fig. 7 with Fig. 8 shows that injection further out (R_s = 10R* rather than 5R*) reduces the chance of intercepting a closed field line, due to the opening of field lines in the slow wind region as one proceeds outward. Consequently, the depleted white regions narrow as the injection radius is increased from R_s = 5R* to 10R*.
The broadening of the depleted regions as σ^2 increases, shown in the bottom rows of Figs. 6 and 7, can be explained as follows. A greater amplitude of the magnetic fluctuations, i.e., a greater σ^2, leads to a reduced λ (see Eq. 3) and to enhanced perpendicular diffusion: EPs decorrelate more frequently via cross-field transport. Near the boundary between open and closed field lines, a fraction of the particles diffusing from open onto closed field lines will collapse back to the star, depleting the region corresponding to the current sheet. There is then a net migration from open to closed field lines due to this loss of particles at the stellar surface.
The diffusive motion in the opposite direction, i.e., from a closed field line near the boundary to an open line, and the subsequent escape, is less likely due to the smaller B_0 of the closed-line regions (see Fig. 10, lower row), i.e., the larger λ, which might lead EPs rapidly to the stellar surface. Indeed, EPs can travel only a short distance before falling to the star, as the path length of the closed field lines is only a few times λ (from Eq. 3, a 10 GeV proton at R_s = 10R*, with r_g/L_c ∼ 0.1, for σ^2 = 0.1 has λ ≃ 3.3 × 10^9 cm ≃ 0.5R*, which increases outward as shown in Sect. 5.2). We note that for the case of weak turbulence (σ^2 = 0.01; Figs. 6 and 7, upper row) the depleted regions seen at higher σ^2 are not visible on the R_p-sphere, as on the spheres at R_p = R_b, R_e the points intercepted by open field lines are approximately uniform and closed lines do not reach such distances.
As for the escaping EPs, once they are channelled into the fast wind region, the large B_0 (see Fig. 10, lower row) keeps them confined and focussed toward the caps, where B_0 is larger and hence r_g smaller.
Particularly relevant to the influence of EPs on planets in our simulated magnetic field configuration is the approximate symmetry of the caps (see Sect. 6) with respect to the equatorial plane (θ = 90°); such a pattern results, within the fast wind region, from the approximately symmetric and greater B_0 (lower row in Fig. 10), which reduces r_g, thus favouring the confinement and focussing of EPs within the caps.
In the case of a Sun-like B_0-field, i.e., an approximate alignment of B_0 with the rotation axis, with σ^2 ≪ 1 (within the solar system typically σ^2 < 0.1), EPs would be directed preferentially into the polar regions, leaving planets relatively unaffected. The latitudinal dependence of EPs in large solar wind events is, however, poorly constrained due to the limited number of events with high-latitude in-situ measurements (see Sect. 7).
Surprisingly, we find that EPs are focussed toward the equatorial plane even when injected at high latitude, i.e., close to the pole. Such an effect is shown in Fig. 11, where EPs are injected, with an isotropic velocity distribution, in a latitudinal ring in the upper hemisphere close to the geographic north pole, with θ = 160−170°. In this case, EPs are focused on the R_p-sphere within 40° of the equatorial plane, mostly in the upper hemisphere, except for a few points in the lower hemisphere (180° < φ < 230°) due to additional diffusion in the azimuthal direction.
We note that, despite the reduced filling factor of the EP caps for greater values of σ^2 shown in Figs. 6 and 7, which would seem to suggest a smaller N_Rp, the ratio N_Rp/N_inj actually increases for greater σ^2 (see Fig. 9). This effect results again from (1) more efficient perpendicular diffusion at the boundary between open and closed field lines and from (2) the increase of λ with distance from the star (λ ∝ B_0(r)^−1/3). For most EPs injected on open field lines near the boundary, the former enhances the frequency of decorrelation from a given field line, as discussed above, and the latter favours EPs moving outward with an increasing λ rather than back to the star. Such combined effects ultimately prevent most particles from collapsing to the star and allow them to propagate outward toward the equatorial caps.
At larger EP energies, the escape of EPs injected at the open/closed field line boundary is favoured, as suggested by Fig. 9, right panel: 10 GeV protons arrive more copiously on the R_p-spheres than 1 GeV ones. This is a result of a larger perpendicular transport coefficient at larger energy, regardless of the particular model.
Finally, the features in the bottom rows of Fig. 7 protruding out of the caps toward greater φ, also present to a lesser extent in Fig. 6, map the constant-latitude stripe of maximal wind flow visible in red in Fig. 10, lower panels. On the other hand, the EP caps are shifted to smaller φ as a result of the stellar rotation.
ENERGETIC PARTICLE FLUX WITHIN THE TRAPPIST-1 SYSTEM
The total output of EPs from M dwarf stars cannot be measured directly at present. A possible approach to estimating the EP abundance relies on solar correlations between the observed properties of coronal flares and in-situ spacecraft measurements of EP fluxes at 1 AU. GOES observations of 800 SXR solar flares (1.5−12.4 keV) at the Sun and measurements of the associated > 10 MeV energetic proton events have shown an approximately linear correlation of the SXR flux to the proton flux (Belov et al. 2007). Youngblood et al. (2017) found two correlations: (1) between the SXR peak flux and the flux of > 10 MeV protons from GOES data only; (2) between the SDO/EVE He II 304 Å emission line fluence over the entire duration of flares and the > 10 MeV GOES proton fluence. By using a sample of stellar flares observed by the Hubble Space Telescope (HST) and Chandra/ACIS, Youngblood et al. (2017) then inferred the proton enhancement for other stars. The He II 304 Å (41 eV) flare fluence was related to the HST far-UV (7.3−13.6 eV) fluence with the M dwarf synthetic spectrum created in Fontenla et al. (2016). The solar flaring rates for M- and X-class flares (corresponding to SXR peak flare fluxes of 10^−5 and 10^−4 W/m^2 at 1 AU in the [1−8] Å band in the GOES classification, respectively) are estimated to be 0.02 hr^−1 and 2.3 × 10^−5 hr^−1, respectively, based on flare observations in the period 1976−2000 (Veronig et al. 2002). Therefore, the estimated rates for M- and X-class flares on the M4 dwarf GJ 876 are ∼0.4 hr^−1 (Youngblood et al. 2017), i.e., 20 and 1.7 × 10^5 times more frequent than on the Sun for M- and X-class, respectively. Rescaling to the average HZ radius r_HZ^876 ∼ 0.18 AU (Youngblood et al. 2017, via the empirical scaling in Kopparapu et al. 2014) leads to an increase of the flux by a factor of 30 for the HZ of GJ 876 (a flaring rate 600 and 5 × 10^5 times higher for M- and X-class, respectively); it should also be noted that, due to the closer HZ, M-class flares are scaled up to X10. Therefore, Youngblood et al. (2017) estimate that large GJ 876 flares (SXR peak flux ≥ 10^−3 W m^−2) lead to a > 10 MeV proton flux (F_876^max) on HZ planets of up to 10^3 protons cm^−2 s^−1 sr^−1, enhanced by up to ∼4 orders of magnitude over the present-day Earth value by both the higher flaring rate and the closer distance.
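The distance rescalings in this argument are simple inverse-square factors; a minimal check (Python):

def rescale_flux(flux_ref, r_ref_au, r_new_au):
    # Inverse-square rescaling of a proton flux between radial distances
    return flux_ref * (r_ref_au / r_new_au) ** 2

print((1.0 / 0.18) ** 2)                 # ~31: 1 AU -> GJ 876 HZ (factor ~30)
print(rescale_flux(1e3, 0.18, 0.0056))   # ~1e6: F_876^max -> R_s = 10 R*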
Since the Youngblood et al. (2017) scaling applies to EPs of any energy > 10 MeV, it should be noted that here we implicitly assume a uniform EP energy spectrum, although different spectral shapes, e.g., power-law or log-parabola, normalized to > 10 MeV could be used.
The TRAPPIST-1 HZ is dramatically closer to the host star (R_e = 0.029 AU) than the GJ 876 HZ, leading to a much higher EP flux. Rescaling the flux from r_HZ^876 = 0.18 AU to the injection radius in our simulations, R_s = 10R* = 0.0056 AU, we find an EP flux enhancement

F(R_s) ≃ 10^3 × F_876^max ≃ 10^6 protons cm^−2 s^−1 sr^−1    (5)

The relation above holds for very intense flares.
By using the maximal EP flux in Eq. 5, we can determine the flux F(R_p) of EPs impinging on planet 1e along its 6 day orbit around the star. The EP flux impinging on a ring of the R_p-sphere with semi-aperture ∆θ = 5° centered on the equatorial plane is given by

F(R_p) = F(R_s) (R_s/R_p)^2 (N_Rp/N_inj) (2/A)    (6)

where N_Rp is the number of EPs hitting the ring, 2πA is the solid angle subtended by the ring, and we have used A = ∫_{85°}^{95°} sin θ dθ = 0.17. The flux of 10 GeV EPs with σ^2 = 1 and R_s = 10R* along the orbit of planet 1e is shown in Fig. 12. The maximal flux, ∼1.2 × 10^5 protons cm^−2 s^−1 sr^−1, exceeds by roughly 6 orders of magnitude the EP abundance at the present-day Earth. However, such an estimate is subject to several caveats, which we discuss in the following section.
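A sketch of the Eq. 6 bookkeeping (Python): the isotropic inverse-square flux is weighted by the fraction of test particles landing in the ring, normalized by the ring's fractional solid angle A/2. The ratio N_Rp/N_inj used below is an illustrative placeholder, not a value read from the simulations.

import numpy as np

def ring_flux(f_rs, r_s, r_p, n_ring, n_inj, semi_aperture_deg=5.0):
    # Eq. 6: F(R_p) = F(R_s) (R_s/R_p)^2 (N_Rp/N_inj) (2/A), where
    # A = integral of sin(theta) over the ring = 0.17 for a 5 deg semi-aperture
    th1 = np.radians(90.0 - semi_aperture_deg)
    th2 = np.radians(90.0 + semi_aperture_deg)
    a = np.cos(th1) - np.cos(th2)
    return f_rs * (r_s / r_p) ** 2 * (n_ring / n_inj) * (2.0 / a)

# A placeholder hit fraction of 27% reproduces the ~1.2e5 maximum quoted above:
print(f"{ring_flux(1e6, 0.0056, 0.029, 2700, 10000):.2e}")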
DISCUSSION
The results described in Sect. 5 show that the magnetic fluctuations not only affect the small-scale particle motion but change drastically the behaviour of EPs over the entire inner astrosphere.
The spatial distribution of propagating EPs
The EP-depleted angular regions on the R_p-sphere track the slow wind populated by closed field lines, on which EPs are trapped and lost as their trajectories lead back to the stellar surface. For relatively large values of σ^2, particles are lost due to enhanced perpendicular diffusion into the closed field region (see Figs. 6 and 7). The opening of the closed field lines further out results in the narrowing of the depleted regions for the larger particle injection radius R_s = 10R* as compared to R_s = 5R* (see Fig. 8).
The stronger unperturbed magnetic field in the fast wind region on the equatorial plane (see Fig. 10, lower row) favours EP focussing. The EP caps are centered on the region of fast wind speed at ∼800−1,000 (∼950−1,100) km/s at planet 1b (1e).
A key characteristic of the GJ 3622 proxy magnetogram we adopted for TRAPPIST-1 is its resemblance to a tilted dipole. This gives rise to the focusing of EPs at low latitudes, and into the planetary orbital plane. The location of the spherical caps of EPs hitting the R_p-sphere has potentially important consequences for the energetic particle flux experienced by the planets in our TRAPPIST-1-like system (the TRAPPIST-1 planets themselves are all in coplanar orbits to within 30 arcmin). It should be noted that the locations of the EP caps are subject to shifts both along the orbital plane, due to differences between the stellar rotation and planetary orbital periods, and in latitude, due to the evolution and probable cyclic behavior of the stellar surface magnetic field. Both time scales associated with these processes are much greater than the EP propagation time scale. We investigate below the EP flux variation that planets could experience.
We also point out that the EP focussing onto planets seen in our simulations is not expected to occur in a stellar wind driven by a dipolar magnetic field closely aligned with the stellar rotation axis (such as the solar wind), where the wind is fast at high latitudes (see Fig. 10, lower row). Moreover, σ^2 might attain values greater than 0.1 only in transients, such as CME-driven shocks or corotating interaction regions. In-situ solar wind measurements following large solar flares (> 10^30 erg) do not strongly constrain the latitudinal dependence of the EP intensity: for instance, in the Bastille day event (Zhang et al. 2003), the Ulysses high-heliolatitude EP intensity, in the fast wind, was measured at 3.2 AU from the Sun, whereas the lower-latitude intensity, in the slow wind, was measured at a different distance (1 AU). The spatial distribution of EPs centered on the equatorial plane might raise the question of a possible relation with the spatial distribution of CMEs in active M dwarfs found in numerical simulations by Kay et al. (2016): regardless of the latitude of injection, CMEs are deflected further out (∼60R*) along the near-equatorial current sheet, where the B-field is minimal and therefore CME expansion encounters the least magnetic confinement, as the ratio of the CME ram pressure to the stellar magnetic pressure is highest there. In our simulations, EPs are unleashed from the bulk motion of CME-driven shocks at the initial time, so their motion is independent of the subsequent CME trajectory. We expect that in a stellar wind with a highly tilted magnetic-to-rotation axis, such as the one in Fig. 1, particles emitted at R > 5R* by CMEs along the current sheet (blue-purple stripe in Fig. 10) will be transported toward the fast wind region for σ^2 > 0.1 (cf. Fig. 11). However, for σ^2 < 0.1 we expect that the fewer escaping EPs will concentrate along the current-sheet stripe.

[Figure 9 caption: Left: fraction of EPs hitting the R_p-sphere for planets 1b (red), 1e (green), and 1h (blue), relative to the total injected EPs, as a function of σ^2, for 10 GeV protons injected at R_s = 5 and 10R*. Right: the same fraction (same color legend as the left panel) for 10 GeV (solid) and 1 GeV (dashed) protons injected, with equal N_inj, at R_s = 10R*.]
On the absolute EP flux and trapping of EPs and CMEs
Since EPs can be trapped by closed field line regions, they can also be liberated from these regions when the closed field is perturbed or broken open. Such a disruption of the stellar magnetic B_0-structure can result from a CME-driven shock (not accounted for in our static-solution MHD simulations), increasing the chances for EPs to fill the depleted regions on the R_p-sphere.
On the other hand, EPs accelerated and injected directly by coronal flares at R_s < 2R*, rather than by the travelling-shock scenario considered in Figs. 6, 7, and 8, are efficiently trapped by the very intense stellar magnetic field and by the closed field lines. Figure 9, left panel, shows that doubling R_s approximately doubles N_Rp. The low N_Rp/N_inj (3.0−3.7%) for R_s = 1.5R* described in Sect. 5 might be considered a lower limit if disturbances of the B_0 topology by flares or CMEs enable a larger N_Rp/N_inj.
These results indicate that a fairly simple dipole-like magnetic field structure on a magnetically active star prevents coronal flares from contributing significantly to the steady abundance of EPs further out. Thus, at face value, in the undisrupted magnetic topology used here, CME-driven shocks might be expected to be the dominant suppliers of EPs within the interplanetary medium of a very active star.
In this context, the underlying assumption that CMEs can successfully escape the strong confinement of the stellar magnetic field to drive shock waves that accelerate EPs is uncertain and needs further investigation. Drake et al. (2016) presented a preliminary simulation of an event that would have been a large CME on the Sun, induced on the surface of the very active K dwarf AB Dor, and found the event to be entirely contained by the strong overlying magnetic field. An indication that a 75 G dipolar field prevents the escape from the stellar corona of CMEs with kinetic energies < 10^32 erg has also been found by Alvarado-Gómez et al. (2018), based on a number of detailed numerical CME simulations.
There are thus two potentially powerful mechanisms that could strongly limit EP fluxes from active stars: EPs from flares are contained; and CMEs that might generate EPs at larger distances also fail to escape.
The morphology of N_Rp/N_inj in Figs. 6 and 7 is, to a good approximation, independent of the EP energy. In addition, the Youngblood et al. (2017) correlation is determined for > 10 MeV protons, with an unspecified EP energy dependence. Regardless of its specific shape, we expect the EP energy spectrum to decrease at larger energies; thus, the EP flux of ∼10^5 protons cm^−2 s^−1 sr^−1 impinging on 1e (see Sect. 6 and Fig. 12) will be lower at 10 MeV. We will investigate this effect in a forthcoming work.
We emphasize that our estimated number of injected EPs (Sect. 6) is based on strong SXR flares observed from GJ 876 and classified as large, i.e., with time-integrated SXR output larger than 10^29−10^30 erg, due to the small distance to the star. The extrapolation of the correlation between SXR and EP fluence to such large events is uncertain, due to the scatter of the observations and to the fact that no solar events beyond a certain energy have been observed (> X10; Hudson 2007; Drake et al. 2016). However, K2 constraints (Vida et al. 2017) on TRAPPIST-1 white-light flares lead to an estimated total flare energy (in the optical) between 10^31 and 10^33 erg, similar to other very active M dwarfs (Hawley et al. 2014) and beyond the total estimated energy of the Carrington event (10^32 erg; Carrington 1859), which is among the most energetic geomagnetic storms ever recorded on Earth. Thus, we argue that the dramatic EP enhancement in the HZ of M dwarfs like TRAPPIST-1 or GJ 876, as compared to the present-day Earth, might not be uncommon. Such EP fluxes could have a significant impact on exoplanet atmospheric ionization.
We do not consider the spatial distribution of the EP hitting points on the planetary surfaces or through the planetary atmospheres, since these depend strongly on the propagation through the planetary magnetosphere and atmosphere: the magnetospheric properties of the TRAPPIST-1 HZ planets, or of any other exoplanets, are at present unknown. The effect of EPs on atmospheric evolution also depends on the atmospheric mass and chemical composition, which are likewise unknown for TRAPPIST-1. Lyman α detection of variability during transits (observed for planets 1b and 1c, but not 1e; Bourrier et al. 2017) could be useful for further atmospheric characterization, although more detailed constraints will likely have to await observations by next-generation facilities.
By using preliminary 3D-MHD simulations here, we instead consider simply the geometrical flux impinging onto a latitudinal ring centered on the equatorial plane. We have integrated fluxes over a 5° semi-aperture, which is much broader than the dispersion of the planetary orbits, in order to obtain a sufficient signal from our test-particle results (see Fig. 12).
CONCLUSIONS
We have carried out numerical test-particle simulations to calculate, for the first time, the propagation of stellar energetic particles through a realistic, turbulent magnetic field of an M dwarf star and its wind. Our simulations have been tailored to a proxy for TRAPPIST-1, and we have investigated the flux of energetic particles throughout the habitable zone of the TRAPPIST-1 system out to the outermost planet. Particle acceleration by flares close to the stellar surface, and further out by CME-driven shocks, is mimicked here by injecting particles at various distances from the star over the full sphere and with an isotropic velocity distribution. We highlight three important aspects of the results.
Particles injected close to the stellar surface, regardless of their energy, are trapped within the strong stellar magnetic field. In our simulations, only 3−4% of particles injected within half a stellar radius of the surface escape. The escaping fraction increases strongly with increasing injection radius: particles accelerated further from the stellar surface have a much greater chance of escaping the closed stellar magnetic field.
Particles are increasingly focussed and directed toward the equator and toward the open-field fast wind regions with increasing turbulence amplitude. This results from asymmetric perpendicular diffusion from stronger to weaker field regions. In our TRAPPIST-1 proxy, strong turbulence produces two concentrated streams of energetic particles, 180° apart, in the fast wind regions, focussed on the planetary orbital plane regardless of the angular location of the injection. Based on the scaling relation between far-UV emission and energetic protons for solar flares by Youngblood et al. (2017), we estimate that the innermost putative habitable planet, TRAPPIST-1e, is bombarded by a proton flux up to 6 orders of magnitude larger than that experienced by the present-day Earth. Such a bombardment of the planets in this study is found to result largely from the misalignment of the magnetic and rotation axes assumed for the star proxy. Since the exact magnetic morphology and alignment of the magnetic field are currently unknown for TRAPPIST-1, and for M dwarfs in general, our results indicate that determination of these quantities for exoplanet hosts would be of considerable value for understanding their radiation environments.
The trapping of EPs produced close to the stellar surface suggests that particles directly accelerated in flares do not generally escape, and that the ambient energetic particle environment of planets is dominated by particles accelerated in CME shocks. However, recent findings that CMEs can be strongly suppressed by strong stellar magnetic fields (Alvarado-Gómez et al. 2018) point to a consequent large uncertainty in our understanding of the EP fluxes that exoplanets around active stars sustain.
"year": 2019,
"sha1": "d61cd18f9fd0023f5d9754b58a6e7bc261b67e7e",
"oa_license": null,
"oa_url": "https://repository.arizona.edu/bitstream/10150/633277/1/Fraschetti_2019_ApJ_874_21.pdf",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "4a07ec59126bd83eb6a594f3b43dfbdb941691c2",
"s2fieldsofstudy": [
"Physics",
"Geology"
],
"extfieldsofstudy": [
"Physics"
]
} |
15356902 | pes2o/s2orc | v3-fos-license | Detection of pairing from the extended Aharonov-Bohm period in strongly correlated electron systems
Inspired by Sutherland's work [Phys. Rev. Lett. {\bf 74}, 816 (1995)] on detecting bound spin waves, we propose that bound electron states can be detected from the dependence of interacting electron systems on the Aharonov-Bohm flux in the `extended zone' scheme, where electron pairing halves the original period of $N_a$ flux quanta in a system of linear size $N_a$. Along with the Bethe-ansatz analysis, a numerical implementation for keeping track of the adiabatic flow of energy levels is applied to the attractive/repulsive Hubbard models and the $t-J$ ladder.
The response to an adiabatic change of external parameters is an interesting way to probe the nature of interacting electron systems. Some thirty years ago, Byers and Yang [1] proposed a notable example of detecting the Cooper pairing: while a normal state responds to an Aharonov-Bohm (AB) flux periodically with the period of the flux quantum, $\Phi_0 \equiv hc/e$, a BCS state will have a halved period, $\Phi_0/2$. The anomalous flux quantization has actually been applied to various strongly correlated electron systems that are intended to describe high-$T_c$ cuprates [2][3][4][5].
This is in fact one out of several ways to detect superconductivity in purely electronic systems. Since an effective electron-electron attraction per se does not guarantee the Cooper pairing, a usual way is to search for a long-tailed pairing correlation function, but the finite-size effect must be carefully analyzed. The quantum Monte Carlo method (QMC) [6][7][8], the density-matrix renormalization group [9], and evaluation of the superfluid density (helicity modulus) [10,4] are along this line of approach. Thus the detection of the Cooper pairing (or bound electrons in more general terms) is a demanding problem in a correlated electron gas, except for one-dimensional (1D) systems, where exact analytic treatment is feasible with the Bethe-ansatz solutions coupled with conformal field theory. This is where the Byers-Yang flux quantization comes in. The test, however, remains some way from a clear-cut criterion, since the flux dependence of the ground-state energy may give the half-flux periodicity even for the repulsive Hubbard model [3]. An origin of the obscured period is the spin degrees of freedom [11], which is most prominent in 1D.
In the present paper, we propose a new way to detect bound electron states from a more global look at the response to the flux. The idea has been inspired by a recent analysis of bound spin waves by Sutherland [12]: bound complexes of spin waves in 1D Heisenberg magnets (or equivalently a gas of charged bosons) may be detected from their response to a boosted total momentum. The momentum boost is achieved by twisting the boundary condition, $\Psi(\ldots, x_j + N_a, \ldots) = e^{i2\pi\phi}\,\Psi(\ldots, x_j, \ldots)$, for an $N_a$-site lattice, which uniformly shifts the set of $k$ points. He has shown that, if all of the $N$ particles (i.e., $N$ flipped spins in a magnet) form one bound state, the energy returns to its initial value by a twist of $\phi = N_a/N$, which is $1/N$ times the twist $\phi = N_a$ required to shift the set of $k$ points for the free particles back to the original position. Intuitively, this discerns whether the momentum boost acts on individual particles or on a 'center-of-mass' of bound particles. A key in Sutherland's idea is to keep track of the $\phi$-dependence of the state not over the one period ($0 \le \phi < 1$) but over the 'extended zone' ($0 \le \phi < N_a$).
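To make Sutherland's counting explicit, a minimal worked version of the argument (our restatement, in the notation above) is:

```latex
% Twisted boundary condition on an N_a-site ring:
%   \Psi(\dots, x_j + N_a, \dots) = e^{i2\pi\phi}\,\Psi(\dots, x_j, \dots),
% so each single-particle momentum is shifted by
\[
  k \;\to\; k + \frac{2\pi\phi}{N_a},
\]
% and the set of k points for free particles returns to itself only at
% \phi = N_a.  A single N-bound complex instead responds through its
% center-of-mass momentum K = \sum_j k_j, which acquires N times the
% single-particle shift:
\[
  K \;\to\; K + \frac{2\pi N\phi}{N_a}
  \qquad\Longrightarrow\qquad
  \phi_{\mathrm{period}} = \frac{N_a}{N}.
\]
```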
Sutherland does not argue what happens in electron systems, or in the situation where more than one bound complex exists, but we conjecture here that the extended AB method should, conceptually, hold in general. We would then be able to treat, e.g., the Cooper pairing problem. Natural questions are: (i) Can we really extend Sutherland's spin-wave analysis to electron systems? (ii) Can we apply the method to two or higher dimensions?
In the present Letter we first give a straightforward extension of Sutherland's spin analysis to electron systems, with the Bethe-ansatz analysis of the 1D Hubbard model as a prototype. There we introduce an AB flux $\Phi$ that couples to the charge degrees of freedom to twist the boundary condition. We then look at the energy levels against $\Phi$ over $0 \le \Phi < N_a\Phi_0$ (which we call the 'extended AB' spectral flow) to discriminate the bound states, as opposed to the conventional wisdom that one period of $\Phi_0$ suffices. We next propose a method to numerically implement the extended AB test for arbitrary systems, including 2D systems, to go beyond the Bethe-ansatz analysis.
We start with confirming for the 1D Hubbard model that the electrons have a reduced period of $N_a\Phi_0/N$ as well for $N$-bound states with the Bethe-ansatz analysis [13]. Consider an $N_a$-site ring (with $N_a$ even for simplicity) containing $N$ electrons, and thread a magnetic flux $\Phi$. A change of $\Phi$ by $\Phi_0$ shifts the $k$ points exactly by their spacing $\Delta k = 2\pi/N_a$ for noninteracting electrons, so that the set of $k$ points accomplishes a full travel across the Brillouin zone when $\Phi/\Phi_0$ reaches $N_a$. When interacting, the electrons are subject to the Bethe-ansatz equations [14], for which there are two types of solutions, i.e., real and complex roots [15]. A real charge rapidity $k_j$ represents the quasi-momentum of a beam of charges, while complex $k_j$'s, which are sometimes called string solutions because they appear in a linear group with a common real part, represent bound states of electrons.
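The free-electron statement can be checked in a few lines; the sketch below uses spinless electrons for simplicity (an assumption of ours) and follows one fixed set of quantum numbers, i.e., one adiabatic branch:

```python
import numpy as np

# Free tight-binding electrons on an N_a-site ring threaded by a flux Phi
# (Peierls substitution).  Following one adiabatic branch means keeping the
# same quantum numbers {I_j} while Phi grows:
#   k_j(Phi) = (2*pi/N_a) * (I_j + Phi/Phi0),   eps(k) = -2 t cos(k).
t, Na, Ne = 1.0, 10, 6
Ij = np.arange(Ne) - Ne // 2           # occupied integers around k = 0

def branch_energy(phi):                # phi measured in units of Phi0
    k = 2.0 * np.pi / Na * (Ij + phi)
    return np.sum(-2.0 * t * np.cos(k))

# The followed branch is one smooth curve whose period is Na*Phi0:
assert np.isclose(branch_energy(0.0), branch_energy(Na))
assert not np.isclose(branch_energy(2.5), branch_energy(5.0))
print("E(0) = E(Na*Phi0) =", branch_energy(0.0))
```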
For a set of real $k_j$'s, we can take the logarithm of the Bethe-ansatz equations, where $M$ is the number of down spins, $t$ the transfer, $U$ the Hubbard interaction, and $I_j$ ($j = 1,\ldots,N$) is an integer (half odd integer) for an even (odd) $M$. We assume that the spin rapidities $\Lambda_\alpha$ are real, as is the case with the ground state of the repulsive model. Equation (1) has a periodicity of $N_a$ in $I_j$. Since an increase of $\Phi$ by $\Phi_0$ causes a uniform shift of $I_j$ by unity, a real solution indeed has a periodicity of $N_a\Phi_0$ (except at half-filling; see below). Now, the string solutions for the charge rapidities, $\{k_{nl}\ (l = 1,\ldots,2n)\}$, are specified by a single real parameter, $\Lambda'_n$ [15]. When all the $N$ particles form a single bound state (an $N$-bound state in Sutherland's words), the equations reduce to a single condition, from which we can see that $\Lambda'_n$ is determined by the total momentum $\sum_l k_{nl}$, so that a change of $\Phi$ by $\Phi_0/N$ is enough to shift $\Lambda'_n$ to the next position among the $N_a$ solutions. Thus we end up with a period of $N_a\Phi_0/N$ for a single $N$-bound state.
Solutions having more than one set of strings are known to exist for, e.g., the attractive Hubbard model, which has sets of two-strings (electron pairs) [16,17]. However, their high-energy spectra have not been fully understood, so that the spectral flow has to be obtained numerically even in 1D.
So we move on to the numerical implementation of tracking the flow, which is readily applicable to two or higher spatial dimensions, and also to various models such as the extended Hubbard or t-J models. In determining the extended period, we have to keep track of the ground state over a range of the flux well beyond the flux quantum, where the energy level soars to become a high-energy state. In addition, the level has to be traced straight through level crossings, so that the numerical method must be good enough to reproduce (a) level crossings and (b) high-energy states. Conventional methods such as exact diagonalization or quantum Monte Carlo would then be inadequate.
The algorithm we propose consists of successive estimations of the energy and wave function at $\Phi + \Delta\Phi$. The new state $\Psi$ is estimated by multiplying by a connection, $e^{iA(\Phi)\Delta\Phi}$; one increases the order $m$ of the estimate until a convergence inequality is fulfilled, and then moves on to step ii).
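As an illustration of the bookkeeping involved (a simple overlap-based variant of ours, not the authors' connection-plus-refinement scheme), one can continue a branch step by step by picking, at each new flux value, the eigenvector with maximal overlap with the previously tracked state:

```python
import numpy as np

def track_branch(H, params, start_level=0):
    """Follow one adiabatic branch of H(p) along the parameter list `params`
    by maximal-overlap continuation of the eigenvector."""
    w, v = np.linalg.eigh(H(params[0]))
    state, flow = v[:, start_level], [w[start_level]]
    for p in params[1:]:
        w, v = np.linalg.eigh(H(p))
        idx = int(np.argmax(np.abs(v.conj().T @ state)))  # best continuation
        state = v[:, idx]
        flow.append(w[idx])
    return np.array(flow)

# Toy two-level Hamiltonian: g = 0 gives a true level crossing (the tracker
# sails straight through it), g > 0 turns it into an anticrossing.
def make_H(g):
    return lambda phi: np.array([[np.cos(phi), g], [g, -np.cos(phi)]])

phis = np.linspace(0.0, np.pi, 401)
print(track_branch(make_H(0.0), phis)[-1])  # ~ +1.0: diabatic branch followed
print(track_branch(make_H(0.2), phis)[-1])  # ~ -1.02: adiabatic lower branch
```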
Applied over the extended zone, the method can count the periodicity (or the 'winding number') of the state. The tractable matrix size is similar to that for conventional Lanczos diagonalization.
Around level crossings the calculation becomes subtle. We first observe that level crossings come in two classes: one class occurs for two levels that differ in some symmetry of the system; the second occurs at a phase-transition point, e.g., the normal-superconductor transition. The first class can be dealt with simply by tightening the convergence criterion in step iv), with typically $\varepsilon = 10^{-8}$ for the systems considered here, with matrix sizes up to a few tens of thousands. The second class requires a scaling analysis by varying the distance from the critical point, as illustrated for the $t$-$J$ ladder below.
Our first example is the 1D Hubbard model, whose spectral flow is shown in Fig. 1. For comparison, we have superposed the anomalous flux-quantization test for $0 \le \Phi < \Phi_0$ in Fig. 1. We can see that the latter test is indeed ambiguous. To be more precise, we can observe the following. If the Fermi sea has a closed shell ($N \equiv 2 \pmod 4$, as is the case with Fig. 1), the true ground state is always spin singlet, irrespective of the sign of $U$. In addition to the branch starting from the ground state, there is a second branch, which is degenerate with the first one at $\Phi = \Phi_0/2$ for $U = 0$ but shifts upward for a repulsive interaction, or downward for an attractive one. Thus a dip appears in a continuous fashion as a negative $U$ is turned on. However, the half periodicity can appear even for repulsive interactions for open-shell Fermi seas [3]. This is due to the existence of a spin-triplet state, which is degenerate with the ground state at $\Phi = 0$ and stabilized over the singlet state for $U > 0$ [18]. If we go further into the strong-correlation limit, an even more anomalous $1/N_e$ periodicity can appear, as shown by Kusmartsev [11]. Thus we have to worry about these finite-size effects, due to other branches lying around, that obscure the period in the anomalous flux quantization. In two or higher dimensions such situations may be improved [10,5], but we should re-emphasize that the winding-number counting here concentrates on the adiabatic evolution of a single state, where the 'global' period is determined independently of other branches.
The abrupt change from $N_a\Phi_0$ to $N_a\Phi_0/2$ in the extended AB period takes place exactly at $U = 0$, which is the critical point in 1D. The change occurs despite the fact that the three spectral flows (for $U < 0$, $U = 0$, $U > 0$) are almost identical around the respective minima except for some offsets. This corresponds to the known fact [10] that the charge stiffness (or Drude weight) does not exhibit singular jumps even at the critical point when the system is finite.
A closer inspection shows that the long AB period for $U = 0$ is dominated by some level crossings that turn into level repulsions, or anticrossings, where different sets of anticrossings are selected according to whether $U > 0$ or $U < 0$. It is at first puzzling how such a qualitative change can possibly occur for an infinitesimal $|U|$, since the charge rapidities $k_j$ coalesce into doubly occupied $k$ points in the Fermi sea no matter how the critical point ($U = 0$) is approached from the repulsive or attractive side (see Ref. [19]).
The puzzle is resolved by looking at the weak-coupling Bethe-ansatz behavior. As the $k$ points shift with $\Phi$, the crucial level anticrossing occurs when the uppermost doubly occupied $k$ point reaches $k_0 = \pi/2$, situated at $\varepsilon(k_0) = 0$, the center of the band dispersion.
This special point accommodates highly degenerate states that are connected by two-particle scattering processes comprising the normal ones, $c^\dagger_{k_0\uparrow} c^\dagger_{k_0\downarrow} \to c^\dagger_{k_0+p,\uparrow} c^\dagger_{k_0-p,\downarrow}$, with $p$ the momentum transfer, along with the Umklapp process. When $U$ is switched on, degenerate perturbation theory dictates that some of these states must mix to give anticrossings.
The normal process with $p = 2\pi N/N_a$ selectively produces the anticrossing for the repulsive case. In contrast, the anticrossing in the attractive case is caused by the Umklapp process that transfers a pair to other $\varepsilon = 0$ points, which is indeed the only possible process if the pairs are not dissociated by the adiabatic change in $\Phi$. The fact that $k_0$ is the key position is illustrated in Fig. 1: it takes $\Phi_c/\Phi_0 = (k_0 - k_F)/\Delta k$ for the uppermost doubly occupied $k$ point (at $k_F$) to reach $k_0$, where $\Phi_c/\Phi_0 = 1.5$ for $N_e = 6$, $N_a = 10$, in exact agreement with Fig. 1.
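For concreteness, the arithmetic behind the quoted value is:

```latex
% N_e = 6 electrons on an N_a = 10 site ring: three doubly occupied k points
% at k = 0, +-2*pi/10, so the uppermost one sits at k_F = pi/5, and
\[
  \frac{\Phi_c}{\Phi_0}
  \;=\; \frac{k_0 - k_F}{\Delta k}
  \;=\; \frac{\pi/2 - \pi/5}{2\pi/10}
  \;=\; \frac{3}{2}.
\]
```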
Given this property, we can readily show that a series of dissociations of doubly occupied states for $U > 0$ gives the extended period of $N_a\Phi_0$, while a series of Umklapped doubly occupied states for $U < 0$ halves it. Thus an infinitesimal interaction is enough to change the global topology (the winding number) of the connection in this 1D example.
A fuller understanding in terms of the spin-charge separation in 1D emerges if we look more closely at the Bethe ansatz, where $\Lambda_\alpha$ (the 'spin degrees of freedom'), not directly coupled to $\Phi$, stays approximately constant, while the 'charge degrees of freedom' $k_i$ progressively change with $\Phi$ for a repulsive $U$. This is accomplished by $\Lambda_\alpha$ sequentially parting company with one $k_i$ to meet another, which is exactly where the anticrossings occur. On the other hand, each $\Lambda_\alpha$ is attached to a pair, $k_j, k_j^*$, for an attractive $U$, and the center-of-mass momentum, $2\mathrm{Re}(k_j)$, of the pair remains within the first Brillouin zone due to the Umklapp process; equivalently, the $k_j$'s have to satisfy $\cos(k_j) \ge 0$ for the two-bound solutions to exist.
The details will be published elsewhere.
In order to demonstrate that the present method is applicable to cases in which the Bethe ansatz is inapplicable, and also to provide a step toward higher dimensions, we move on to the $t$-$J$ ladder model. This model, originally conceived for some copper or vanadium oxides, is believed to exhibit superconductivity for small hole doping away from half-filling [20]. Here we perform the extended AB test on a $6\times2$-site system with 4 electrons, which corresponds to a hole concentration of $n = 2/3$.
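The quoted filling follows from simple counting:

```latex
% 6 x 2 = 12 sites holding N_e = 4 electrons:
\[
  n_h \;=\; \frac{N_a - N_e}{N_a} \;=\; \frac{12 - 4}{12} \;=\; \frac{2}{3}.
\]
```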
In the result, Fig. 2, we can clearly see that the periodicity is halved to $N_a\Phi_0/2$ as $J$ is increased beyond $J > 2.04t$, which indicates pairing. Here we can illustrate a nice feature of the extended AB test: since the transition is associated with a level crossing (a cusp) turning into an anticrossing, we can numerically plot (inset of Fig. 2) the size, $\Delta$, of the level repulsion against the relevant parameter ($J$ here) to identify the critical point ($J_c \simeq 2.04$ here) at which $\Delta$ vanishes. Thus we can estimate the critical value $J_c$ for a finite system in a well-defined manner, which may then be cast into a finite-size scaling analysis for a more pertinent definition of the critical point. The present method thus provides a possible way to determine the phase diagram of a given model. So far we have not discussed anything directly about the coherence of the pairs, so that we are talking about a necessary condition for superconductivity. For that matter, the Byers-Yang theorem also gives a necessary condition for superconductivity (or the Meissner effect). It is an intriguing future problem to see if the coherence can possibly appear in the spectral flow.
Another comment is that the present test also gives information on metal-insulator transitions such as the Mott-Hubbard transition. For the half-filled Hubbard model (a Mott insulator), the extended AB period reduces down to $\Phi_0$. The sudden change from the full period $N_a\Phi_0$ to the minimum one is due to the appearance of a charge gap, across which the flow is inhibited from jumping, so that the system has to return to the original state as soon as the flux reaches $\Phi_0$, in analogy with the gauge argument of the quantum Hall effect. Thus we can expect the present method to detect the existence or otherwise of a Fermi surface [21]. A more interesting example is phase separation, where the system should respond as a single bound state, as will be reported elsewhere. The different winding numbers for paired, metallic and insulating phases may be analyzed in terms of the homotopy of the phase space (or the fiber bundle) of correlated electron systems.
"year": 1995,
"sha1": "c3d8432a630f20c2d8a3c0ad63ca9d8047f6c8b5",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/cond-mat/9512177",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "c3d8432a630f20c2d8a3c0ad63ca9d8047f6c8b5",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
Collective Oscillations of Majorana Neutrinos in Strong Magnetic Fields and Self-induced Flavor Equilibrium
We study collective oscillations of Majorana neutrinos in some of the most extreme astrophysical sites such as neutron star merger remnants and magneto-rotational core-collapse supernovae which include dense neutrino media in the presence of strong magnetic fields. We show that neutrinos can reach flavor equilibrium if neutrino transition magnetic moment $\mu_\nu$ is strong enough, namely when $\mu_\nu/\mu_{\rm{B}} \gtrsim 10^{-14}-10^{-15}$ with $\mu_{\rm{B}}$ being the Bohr magneton. This sort of flavor equilibrium, which is not necessarily flavor equipartition, can occur on (short) scales determined by the strength of the magnetic term. Our findings can have interesting implications for the physics of such violent astrophysical environments.
I. INTRODUCTION
Neutrinos can play a vital role in the physics of the most violent astrophysical phenomena, such as neutron star mergers (NSM) and core-collapse supernovae (CCSNe) [1][2][3][4]. Due to their weak interactions, they can act as the major channel of energy transport. Moreover, they can be crucial to heavy-element nucleosynthesis, since they can modify the neutron-to-proton ratio through the weak reactions $\bar\nu_e + p \rightleftharpoons n + e^+$ and $\nu_e + n \rightleftharpoons p + e^-$ [5]. Neutrinos can experience flavor conversions, which can change their energy spectra. This, in principle, can change their interaction rates and consequently influence their effects on the dynamics and nucleosynthesis in these extreme astrophysical environments. In addition, on the observational side, any flavor conversions can modify the neutrino signal which may be observed from these events on Earth.
Neutrinos can experience collective flavor oscillations in NSM remnants and CCSNe due to their coherent forward scattering off the high-density background neutrino gas. The presence of the neutrino-neutrino interaction makes the problem of neutrino evolution in a dense neutrino medium very demanding and remarkably different from the one in vacuum and matter. Indeed, it makes this problem nonlinear, with strong coupling among different neutrino momenta [6][7][8][9][10].
The first studies on this problem were carried out in maximally symmetric models. For example, to study collective neutrino oscillations in the supernova context, a stationary spherically symmetric SN model, i.e., the so-called neutrino bulb model, was used [7]. The most important feature of the results obtained in the bulb model is the presence of the spectral swapping phenomenon, in which $\nu_e$ ($\bar\nu_e$) exchanges its spectrum with $\nu_x$ ($\bar\nu_x$) for a certain range of neutrino energies [8,9,[11][12][13][14][15]. This phenomenon is a direct consequence of collective neutrino oscillations.
However, regarding the evolution of neutrinos in NSM remnant accretion disks, the geometry is much more complicated and a self-consistent one-dimensional model is unavailable. The first studies were done in the so-called single-angle approximation [16], in which it is assumed that all neutrinos emitted from the neutrino-emitting accretion disk experience similar flavor evolution. The salient characteristic of the results obtained in these calculations is the occurrence of the matter-neutrino resonance (MNR) [17][18][19][20][21][22][23][24][25][26]. This phenomenon results from the cancellation between the neutrino-neutrino interaction and matter potentials, which can happen because in the NSM environment $\bar\nu_e$ can be more abundant than $\nu_e$, i.e., $n_{\bar\nu_e}/n_{\nu_e} > 1$. The MNR phenomenon is currently thought to be absent in the supernova environment, where normally $n_{\bar\nu_e}/n_{\nu_e} < 1$ and, as a result, the neutrino and matter potentials have similar signs 1.
Nevertheless, it was then realised that such oversimplified maximally symmetric models are not appropriate for studying neutrino flavor evolution in dense neutrino media. On the one hand, the spatial and time symmetries of the neutrino gas can be broken spontaneously in the presence of collective neutrino oscillations [10,[30][31][32][33][34][35][36][37][38][39]. This can allow for neutrino flavor conversions at very large matter/neutrino densities. On the other hand, it has been shown that neutrinos can experience so-called fast flavor conversion modes in dense neutrino media, probably provided that the $\nu_e$ and $\bar\nu_e$ angular distributions cross each other [38,[40][41][42][43][44][45][46][47][48][49][50][51][52][53][54][55][56]. Such fast conversion modes can occur on scales $\sim G_F^{-1} n_\nu^{-1}$, which can be as short as a few centimeters in the aforementioned extreme astrophysical environments. This must be compared with slow modes, expected to occur on scales $\sim \mathcal{O}(1)$ km (for a 10 MeV neutrino), determined by the neutrino vacuum frequency $\omega = \Delta m^2_{\rm atm}/2E$. In addition, neutrinos are expected to have tiny but nonzero magnetic moments (see, e.g., [57][58][59] for a review), which can influence their flavor evolution in the presence of magnetic fields. In particular, the presence of ultra-strong magnetic fields ($B \gtrsim 10^{15}$ Gauss [60]) in NSM and magneto-rotational CCSNe (in which rapid rotation and large magnetic fields are thought to play an important role) makes them ideal settings for studying the impact of the coupling between neutrinos and the magnetic field (photon) on collective neutrino oscillations. While such a coupling leads to active-sterile neutrino oscillations in the case of Dirac neutrinos, it results in neutrino-antineutrino oscillations for Majorana neutrinos.
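As a rough back-of-the-envelope check of the quoted centimeter scale (the neutrino density used below is an illustrative value we assume for definiteness, not one taken from the paper):

```python
import numpy as np

# Fast-mode scale ~ (sqrt(2) G_F n_nu)^(-1), evaluated in natural units and
# converted back to cm.
hbar_c = 1.97327e-11            # MeV * cm
G_F    = 1.16638e-11            # Fermi constant, MeV^-2
n_nu   = 1.0e32                 # cm^-3; assumed illustrative neutrino density

n_natural = n_nu * hbar_c**3                # density in MeV^3
mu = np.sqrt(2.0) * G_F * n_natural         # nu-nu interaction energy, MeV
print(f"mu = {mu:.2e} MeV  ->  scale ~ {hbar_c / mu:.2f} cm")   # ~ a few cm
```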
In the minimally extended Standard Model (MESM), the diagonal magnetic moment of Dirac neutrinos can be written as [61]
$$\mu_\nu \simeq 3.2\times10^{-19}\left(\frac{m_\nu}{1\ \mathrm{eV}}\right)\mu_B,$$
where $m_\nu$ is the neutrino mass and $\mu_B = 5.788\times10^{-9}$ eV Gauss$^{-1}$ is the Bohr magneton. The transition magnetic moment is smaller than the diagonal one by approximately four orders of magnitude. As for Majorana neutrinos, while the diagonal magnetic moment is dictated to be zero, the transition magnetic moment is similar to that of Dirac neutrinos. Although MESM predicts $\mu_\nu \lesssim 10^{-19}\mu_B$, some theories beyond the SM predict (or at least can explain) much larger values for $\mu_\nu$ 2. In fact, current experiments can only provide an upper bound on $\mu_\nu$ (see, e.g., Refs. [65,66]), which is many orders of magnitude larger than the value suggested by MESM. This constraint is valid for both Dirac and Majorana neutrinos, and for both diagonal and transition magnetic moments. The coupling between neutrinos and the magnetic field can provide new channels for changing the neutrino lepton number and can possibly lead to new physics if it is strong enough. A number of papers have studied this phenomenon in astrophysical environments [67][68][69][70][71][72][73][74][75][76][77][78][79][80][81][82]. In particular, in Refs. [78,79] the authors reported that collective oscillations of Majorana neutrinos can be nontrivially affected by the magnetic term for level-of-SM or even smaller $\mu_\nu$'s.
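A quick consistency check of this scaling (the neutrino mass below is an assumed illustrative value):

```python
mu_B = 5.788e-9                  # Bohr magneton in eV/Gauss, from the text
m_nu = 0.1                       # eV; assumed illustrative neutrino mass
mu_nu_in_muB = 3.2e-19 * m_nu    # MESM scaling, in units of mu_B
print(f"mu_nu ~ {mu_nu_in_muB:.1e} mu_B"
      f" = {mu_nu_in_muB * mu_B:.1e} eV/Gauss")  # well below 1e-19 mu_B
```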
In this paper, we study collective oscillations of Majorana neutrinos in the presence of strong magnetic fields with $B \gtrsim 10^{15}$ Gauss, thought to be present in NSM remnants and magneto-rotational CCSNe. To achieve this goal, we use a schematic multi-angle one-dimensional model for the neutrino gas in the two-flavor (Sec. II A) and three-flavor (Sec. II B) scenarios. We show that if the neutrino magnetic moment is large enough, the neutrino gas can reach a sort of flavor equilibrium (which is not necessarily equipartition) on scales determined by the magnetic term.

2 There can indeed exist some difficulties here [62][63][64]. In particular, since the neutrino magnetic moment can depend linearly on the neutrino mass, any attempt to increase the neutrino magnetic moment leads to an increase in the neutrino mass as well. Thus, to be consistent with current constraints on the neutrino mass while having large values of $\mu_\nu$, one may need a sort of fine-tuning. Though for Dirac neutrinos this necessary fine-tuning leads to theoretical difficulties in producing $\mu_\nu \gtrsim 10^{-15}\mu_B$, it is almost harmless in the case of Majorana neutrinos, since it only becomes problematic when $\mu_\nu \gtrsim 10^{-9}\mu_B$, which is already excluded by experiments.
II. COLLECTIVE OSCILLATIONS OF MAJORANA NEUTRINOS IN THE PRESENCE OF MAGNETIC FIELDS
To study the evolution of Majorana neutrinos in the presence of strong magnetic fields, we consider a single-energy, multi-angle neutrino gas in both two- and three-flavor scenarios, in which neutrinos are emitted with emission angles in the range $[-\vartheta_{\max}, \vartheta_{\max}]$. This model is similar to the one used in Ref. [49].
At each space-time point $(t, r)$, the flavor state of a neutrino traveling in direction $\vartheta$ can be specified by its density matrix $\rho_\vartheta(t, r)$. The evolution of $\rho_\vartheta(t, r)$ in the absence of collisions is governed by the Liouville-von Neumann equation of motion [83][84][85][86][87],
$$i D_t \rho_\vartheta(t, r) = [H_\vartheta(t, r), \rho_\vartheta(t, r)],$$
where $D_t = \partial_t + \boldsymbol{v}\cdot\nabla$ and $H_\vartheta = H_{\rm vac} + H_{\rm mat} + H_{\nu\nu,\vartheta}$ is the total Hamiltonian, with $H_{\rm vac}$, $H_{\rm mat}$ and $H_{\nu\nu,\vartheta}$ being the contributions from the vacuum, matter and neutrino-neutrino interaction potentials, respectively. Here, the contribution from the coupling between neutrinos and the magnetic field is included in the vacuum term. In our study, the evolution of neutrinos is considered in two models, namely a stationary one-dimensional model and a time-dependent homogeneous neutrino gas. In the one-dimensional model $D_t = \cos\vartheta\, d_r$, while one has $D_t = d_t$ in the time-dependent homogeneous gas. As will be seen in what follows, the occurrence and nature of the equilibrium do not depend on the employed model, since the outcome is purely determined by the presence of the strong magnetic coupling term. Nevertheless, the amplitude of the oscillations around the equilibrium can be smaller in the stationary one-dimensional model. We also assume that physical quantities such as the matter/neutrino densities and the magnetic field are constant during the propagation of the neutrinos. This is justified by noting that the scales associated with neutrino oscillations in this problem (induced by the strong magnetic coupling) are much shorter than the relevant scales of the astrophysical problems of interest.
A. Two-flavor scenario
To demonstrate the idea and to show how the presence of a strong coupling between neutrinos and the magnetic field can influence their oscillations in a dense neutrino medium, we first consider the two-flavor scenario. We follow the formalism developed in Refs. [78,79] and take $\rho$ to be a $4\times4$ matrix that includes the flavor content of both neutrinos and antineutrinos, with $\rho_\nu$ and $\rho_{\bar\nu}$ being the usual $2\times2$ flavor matrices carrying the flavor content of neutrinos and antineutrinos, respectively. It is very convenient to follow this formalism here since, for a nonzero Majorana neutrino magnetic moment, neutrinos and antineutrinos are coupled in the presence of the magnetic field and there is a nonzero $\nu$-$\bar\nu$ transition amplitude. Within this formalism, the vacuum and matter potentials are written in terms of $\lambda_{e(n)} = \sqrt{2} G_F n_{e(n)}$, with $n_e$ ($n_n$) being the electron (neutron) number density, and $\theta_v$ and $\omega = \Delta m^2_{\rm atm}/2E$, the neutrino vacuum mixing angle and the vacuum frequency ($\Delta m^2_{\rm atm} > 0$ ($< 0$) for the normal (inverted) mass hierarchy) for a neutrino with energy $E$. In our calculations, we set $\theta_v = 0.1$ and $\omega = 1$, though the results do not qualitatively depend on the choice of these parameters. Note that the vacuum term has a new contribution, $\Omega = \mu_\nu B_T$, from the coupling of the Majorana neutrino with the component of the magnetic field transverse to the neutrino momentum, $B_T$. Furthermore, unlike the case of collective neutrino oscillations in the absence of a magnetic field, the neutral-current contribution of neutrons to the matter potential cannot be ignored, since it has different signs for neutrinos and antineutrinos and cannot be removed as a common phase when these two are coupled.
In addition, the neutrino-neutrino interaction potential $H_{\nu\nu,\vartheta}$ involves a matrix $\rho_c$ whose definition is somewhat different from the one in Refs. [78,79], so that there is no contribution to $\nu$-$\bar\nu$ transitions from the neutrino-neutrino interaction term [88] (see also [87,89,90]). The last term in Eq. (9) refers to a phase factor which has different signs for neutrinos and antineutrinos and therefore cannot be removed here. One can then recover the usual equations of motion of traditional collective oscillations if $B = 0$.
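To illustrate the machinery (a minimal toy of ours: homogeneous, single-angle, with the multi-angle $H_{\nu\nu,\vartheta}$ omitted and an assumed antisymmetric form for the magnetic block, so it shows the coherent $\nu\leftrightarrow\bar\nu$ conversion driven by $\Omega$ rather than the paper's decoherence-driven equilibrium):

```python
import numpy as np

# Toy integrator for i d(rho)/dt = [H, rho] with a 4x4 rho in the basis
# (nu_e, nu_x, nubar_e, nubar_x).  Two-flavor vacuum block as in the text;
# the magnetic block couples nu_e <-> nubar_x and nubar_e <-> nu_x
# (antisymmetric form assumed); the nu-nu term is omitted in this sketch.
omega, theta_v, Omega = 1.0, 0.1, 10.0
s1 = np.array([[0.0, 1.0], [1.0, 0.0]])
s3 = np.diag([1.0, -1.0])
Hnu  = 0.5 * omega * (-np.cos(2 * theta_v) * s3 + np.sin(2 * theta_v) * s1)
Hmag = Omega * np.array([[0.0, 1.0], [-1.0, 0.0]])
H = np.block([[Hnu, Hmag], [Hmag.T, -Hnu]])  # -Hnu for antineutrinos (assumed)

# Occupations normalized to n_{nu_e} = 1: n_{nu_x} = 0.4, n_{nubar_e} = 0.7.
rho = np.diag([1.0, 0.4, 0.7, 0.4]).astype(complex)

def rhs(r):
    return -1j * (H @ r - r @ H)

dt = 1e-3
for _ in range(20000):                        # fixed-step RK4 up to t = 20
    k1 = rhs(rho); k2 = rhs(rho + 0.5 * dt * k1)
    k3 = rhs(rho + 0.5 * dt * k2); k4 = rhs(rho + dt * k3)
    rho = rho + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

print("total nu + nubar content conserved:", np.isclose(np.trace(rho).real, 2.5))
print("nu_e occupation at t = 20:", rho[0, 0].real)
```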
Results
In our simulations, we took $\vartheta_{\max} = \pi/3$ and a fixed magnetic field with $B_T = 5\times10^{15}$ Gauss. Such strong magnetic fields may not exist on very large scales in the astrophysical problems of interest. However, the scales associated with neutrino oscillations for strong $\Omega$'s are much shorter than the other relevant scales in the problem, and we therefore intend to consider the local effects of large $\Omega$'s rather than the global ones. Note also that, since the Hamiltonian is only sensitive to $B$ via $\mu_\nu B_T$, for smaller/larger magnetic fields one can simply rescale $\mu_\nu$. We also set $n_{\nu_\mu} = n_{\bar\nu_\mu} = n_{\nu_\tau} = n_{\bar\nu_\tau} = 0.4\, n_{\nu_e}$ in our calculations.
The angle-averaged survival probabilities of neutrinos and antineutrinos are shown in Figs. 1 and 2. We considered two cases, with $n_{\bar\nu_e}/n_{\nu_e} = 0.7$ and 2, for a number of $\Omega$'s and two neutrino number densities; for each panel, the corresponding neutrino magnetic moment is indicated. We have confirmed that our results do not qualitatively depend on the choice of $n_{\bar\nu_e}/n_{\nu_e}$ and $n_{\nu_x}/n_{\nu_e}$, as well as the electron and neutron densities (as long as $\Omega$ is the dominant term) and the mass term in the Hamiltonian.

FIG. 2. The same information as in Fig. 1 for $n_{\bar\nu_e}/n_{\nu_e} = 2$.
As can be clearly seen in Figs. 1 and 2, the neutrino gas experiences an interesting sort of flavor equilibrium, in which $\nu_e$ ($\bar\nu_e$) reaches an approximate equalisation with $\bar\nu_x$ ($\nu_x$), so that
$$\rho_{\nu_e\nu_e} \simeq \rho_{\bar\nu_x\bar\nu_x} \simeq \frac{n_{\nu_e} + n_{\bar\nu_x}}{2}, \qquad \rho_{\bar\nu_e\bar\nu_e} \simeq \rho_{\nu_x\nu_x} \simeq \frac{n_{\bar\nu_e} + n_{\nu_x}}{2},$$
with some small-amplitude oscillations around the equilibrium value, which can become smaller for $\Omega > \sqrt{2} G_F n_{\nu_e}$. The special form of the equilibrium arises from the specific structure of the vacuum Hamiltonian, which couples $\nu_e \leftrightarrow \bar\nu_x$ and $\bar\nu_e \leftrightarrow \nu_x$. For strong $\Omega$'s, the vacuum term dominates the evolution of the neutrinos. This, combined with the decoherence induced by the neutrino-neutrino interaction term, can then lead to the flavor equilibrium. Note that here the flavor conversion does not arise from a cancellation between the diagonal terms in the Hamiltonian as in Refs. [69,72], where resonant conversion is responsible for the neutrino flavor oscillations.
B. Three-flavor scenario
Three-flavor oscillations of a dense neutrino gas in the presence of strong coupling between neutrinos and magnetic field can be studied as a straightforward generalisation of the two-flavor case.
The $6\times6$ neutrino density matrix includes the flavor content of both neutrinos and antineutrinos of all three flavors, with $\rho_\nu$ and $\rho_{\bar\nu}$ the usual $3\times3$ flavor matrices of neutrinos and antineutrinos, respectively. In addition, the vacuum Hamiltonian contains $H_{\rm vac}$, the usual $3\times3$ three-flavor vacuum Hamiltonian described by two mass-squared differences $\Delta m^2_{12}$ and $\Delta m^2_{13}$, three mixing angles $\theta_{12}$, $\theta_{13}$ and $\theta_{23}$, and one $CP$-violating phase $\delta$ 3, for which the values were taken from the Particle Data Group [91]. The contribution from the magnetic term is described by $\Omega_{\alpha\beta} = \mu_{\alpha\beta} B_T$, which are assumed to be real quantities. Moreover, in the neutrino-neutrino interaction term, Eq. (9), $G$ and $\rho_c$ are straightforward $6\times6$ generalisations of the corresponding $4\times4$ ones.
Results
Collective neutrino oscillations in a strong magnetic field in the three-flavor scenario are very similar to those in the two-flavor scenario. In particular, the angle-averaged survival probabilities can reach some sort of flavor equilibrium, as indicated in Fig. 3. However, the magnetic term is more complicated in the three-flavor scenario. Thus, different flavors can, in general, reach different equilibrium values. For example, in our calculations with $n_{\nu_\mu} = n_{\bar\nu_\mu} = n_{\nu_\tau} = n_{\bar\nu_\tau}$, we observed that $\nu_e$ and $\bar\nu_e$ reach a flavor equilibrium in which
$$\rho_{\nu_e\nu_e} \simeq \frac{n_{\nu_e} + n_{\bar\nu_\mu} + n_{\bar\nu_\tau}}{3}, \qquad \rho_{\bar\nu_e\bar\nu_e} \simeq \frac{n_{\bar\nu_e} + n_{\nu_\mu} + n_{\nu_\tau}}{3}.$$
This can be explained by noting that the magnetic term couples $\nu_e$ to $\bar\nu_\mu, \bar\nu_\tau$ and $\bar\nu_e$ to $\nu_\mu, \nu_\tau$. In general, the equilibrium values of the different neutrino species are functions of the neutrino number densities only, independent of other quantities such as the mass term in the Hamiltonian, $\vartheta_{\max}$, the matter density, and so on. Although individual neutrino (angle) beams can experience large-amplitude flavor oscillations, the rapid variations of the angular distributions of the neutrino survival probabilities, as shown in Fig. 4, allow the neutrinos to reach a flavor equilibrium with relatively small-amplitude oscillations around the equilibrium value.
III. CONCLUSION
We have studied collective oscillations of Majorana neutrinos in a dense neutrino gas in the presence of strong magnetic fields. Such a physical environment is thought to exist in NSM remnants and magneto-rotational CCSNe.
Collective oscillations of Majorana neutrinos can lead to a sort of approximate flavor equilibrium in the presence of a strong magnetic field, provided that the neutrino transition magnetic moment is strong enough, i.e., when $\Omega = \mu_\nu B_T$ is comparable to the other terms in the Hamiltonian. The equilibrium state is determined by the number densities of the neutrino/antineutrino species coupled through the magnetic term.
In the presence of a nonzero Majorana neutrino magnetic moment, although the total number of neutrinos plus antineutrinos is conserved, the numbers of neutrinos and antineutrinos are not individually conserved. This is different from the case of the usual collective neutrino oscillations in the absence of magnetic fields (and other beyond-SM terms). We do not consider the case of Dirac neutrinos in this study, where the total number of active neutrinos can also change due to the possibility of transitions between active and sterile neutrinos.
In our calculations, the magnetic coupling term can play a noticeable role only if $\Omega$ is not much smaller than the other terms in the Hamiltonian. This is different from the observation of Refs. [78,79], where the presence of the magnetic term can significantly modify collective neutrino oscillations even if it is many orders of magnitude smaller than the other terms in the Hamiltonian. This means that one should observe a remarkable impact from the magnetic term even if the length scale associated with $\Omega$ is orders of magnitude larger than the size of the SN 4. Note, however, that any comparison between the results presented here and the ones in Refs. [78,79] must be made with great caution, since the employed models differ in several respects. Apart from providing an example of a physical situation in which collective neutrino oscillations can lead to a generic flavor equilibrium, our findings provide useful insight into how the presence of strong lepton-number-violating channels can impact collective neutrino oscillations. Our results can have important implications for the physics of the most extreme astrophysical environments, such as NSM remnants and magneto-rotational CCSNe.
ACKNOWLEDGMENTS
I would like to thank C. Volpe, S. Shalgar and M. Obergaulinger for valuable discussions and H. Duan for insightful conversations and his helpful comments on the manuscript. I am also grateful to V. Cirigliano for providing me with his notes on neutrino quantum kinetics. This work is partially supported by Physique fondamentale et ondes gravitationelles (PhysFOG) of the Observatoire de Paris.
"year": 2020,
"sha1": "7ddffa9c9c241d290e73f3fbcae95270e2af44c4",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/2001.04876",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "9413ff87d7baf7b01a09b9b83b087e33fdf39437",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
Post-colonial development in Africa – Samir Amin's lens
ABSTRACT The paper is an attempt at applying Samir Amin's lens in the analysis of socio-economic development in Africa. Social and economic development in Africa has been substandard, largely because of the economic system followed and because effective structural transformation has not taken place – Samir Amin's works explained what needed to be done to transform Africa (and the broader global south). It is in this context that the paper posits that post-colonial Africa has had to contend with disruptive socio-economic and political realities instituted by European colonialism, the slave trade and the inappropriate integration of Africa into the so-called global economy. The fundamental explanation for the poor socio-economic development in Africa is global capitalism, and one of the possible solutions lies in Samir Amin's delinking proposal as well as the restructuring of the African economies.
Introduction
The wave of political independence in Africa started in earnest in the 1950s with Libya (1951), Morocco (1956), Sudan (1956), Tunisia (1956), Ghana (1957) and Guinea (1958). Earlier, much earlier, Liberia (1847) and, before the wave of the 1950s, Egypt (1922) attained political independence. Ethiopia was never colonised. South Africa is relatively complex, but it is probably safe to place its political independence in 1994, the year when apartheid formally ended. Many African countries became politically independent in the 1960s and 1970s. It is only Zimbabwe that attained political independence in the 1980s. It is in this context that the analysis in this paper focuses on 1960-1980 and 1980-2000, over and above examining each decade since the 1950s. It is important to study social and economic development over a longer period of time because socio-economic transformation does not happen speedily and there is usually a lag (as economists would say). Arguably, the results of the work of administrations that took over from colonial administrations started showing in the late 1960s onwards, with some exceptions (e.g. Zimbabwe, South Africa and South Sudan). There have been many studies that deal with socio-economic development in Africa during the 2000s and the later period, hence this paper focuses on the first five decades of political independence in Africa. It is also worth noting that the studies referred to mainly focus on economic development, or economic growth in particular. As a disclaimer, because I have written about Samir Amin previously and about post-independent Africa, the paper focuses on some of Amin's ideas. In any case, it is not feasible to exhaustively deal with Amin's huge archive about Africa (and the global south) in a journal paper. Samir Amin has also published comprehensive autobiographies that capture his thinking, and there are published interviews with and of him that clarify his views on many critical issues pertaining to Africa and the global south in particular.
The paper is an attempt to apply Samir Amin's views and perspectives in explaining socio-economic development in Africa for the period under review. The focus is on some of Amin's major ideas that are relevant for the period immediately after the political independence of many African countries. The paper draws from Amin's perspectives in relation to socio-economic development, with a specific focus on Africa. The paper starts by providing a brief background to its problematic (viz. post-independent development in the early years of political independence in Africa). That is followed by an interpretation of development through Samir Amin's lens, in the context of the early years of political independence. Before the conclusion, the paper discusses what could bring about inclusive socio-economic development in Africa.
The context
Among the critical issues is that the political transformation of the 1960s and 1970s ushered in new energies during the first two decades of independence. There were robust efforts towards socio-economic development, largely shaped by nationalist agendas. In addition to the issues highlighted in the introductory section, socio-economic development remained a huge challenge because many of the post-independence African leaders rejected the market economy, which they viewed as a colonialist system. They mostly embraced socialist and communist systems as the best possible path of socio-economic development, which did not go down well with the former colonisers. Hyden (1983) makes the point that many countries in Africa during the first two decades of independence pursued what can be viewed as an 'economy of affection', where an indigenous form of economic and social organisation dealing with peasant production modes, governance, policymaking and management issues was pursued. The 'affection economy' (not to be confused with socialism or communism) represents a system of support, interactions and communications among groups connected by blood, kin, communities and village affinities.
It could be argued that while the 'affection economy' could have served worthwhile needs such as basic survival, social maintenance and development of the economy, it could also have been responsible for holding back development by procrastinating on changes in behavioural and institutional patterns capable of sustaining economic growth and social development. Fundamental, however, is that the 'economy of affection', which was associated with socialist and communist socio-economic development approaches, went against the trend of capitalist accumulation and was therefore frustrated by the powers that be of the times. This continues through the skewed distribution of political power globally. Decolonial scholars term this the 'global colonial matrix' or 'colonial matrix of power', referring to power structures that limit prospects for socio-economic development in the global south because of the control that the West exerts over the global south. Samir Amin's analysis took this into account, largely from a Marxist perspective, and he argued for delinking, among other possible solutions to this challenge.
It is also worth highlighting that the socio-economic success or failure of African countries depended on the economic, political, legal and social institutions of the time. Such institutions could have created incentives for investment and the adoption of technology by business, and the opportunity for workers to amass human capital. In this view, discouraging such activities could have been responsible for stagnation. There is, however, sufficient literature and data providing argumentation and evidence that external influence, neo-liberal dogma and structural adjustment programmes, among other issues that had little to do with discouraging any economic activities, are responsible for poorer socio-economic outcomes than what was expected on the eve of political independence. It is in this context, as indicated earlier, that this paper focuses mainly on the early period of political independence and attempts to apply Samir Amin's lens to the question of how Africa could advance wellbeing and ensure inclusive socio-economic development.
Among his critical development ideas was the categorisation of African economies into three macro-regions: Africa of the colonial economy, Africa of the concession-owning companies and Africa of the labour reserves (Amin 1972). This is written about in many other publications, including in Gumede (2022). Samir Amin (as explained in Gumede 2022) categorised the eastern and southern parts of Africa as the 'Africa of labour reserves', the western parts of Africa as the 'Africa of the colonial economy' and the Congo River Basin (i.e. Congo Kinshasa, Congo Brazzaville, Gabon and the Central African Republic) as the 'Africa of the concession-owning companies'. The Africa of labour reserves included Kenya, Uganda, Tanzania, Rwanda, Burundi, Zambia, Malawi, Angola, Mozambique, Zimbabwe, Botswana, Lesotho and South Africa. The Africa of the colonial economy entailed former French West Africa, Togo, Ghana, Nigeria, Sierra Leone, Gambia, Liberia, Guinea Bissau, Cameroon, Chad and the Sudan.
Another critical aspect of Samir Amin's works that is relevant for this paper relates to the evolution of social formations in Africa. As explained in Gumede (2022), Samir Amin makes the point that an analysis of a concrete social formation must therefore be organized around an analysis of the way in which the surplus is generated in this formation, the transfers of surplus that may be effected from or to other formations, and the internal distribution of this surplus among the various recipients (classes and social groups); a social formation is an organized complex involving several modes of production (Amin 1976, 18). According to Samir Amin (1976, 59), African formations were integrated at an early stage (the mercantilist stage) 1 in the nascent capitalist system … they were broken off at that stage and soon began to regress (and might not have been able to generate by themselves the capitalist mode of production, because the large-scale trade of pre-mercantilist Africa was linked with relatively poor formations of the communal or tribute-paying types). Samir Amin's periodisation of the mercantilist period is approximately the seventeenth to the early nineteenth century, which would include the slave trade.
Arguably, Samir Amin's characterisation or categorisation of the different parts of the economy still holds today, and, as he demonstrated, some of the categories/characterisations overlap. Similarly, the structure of the African economy as captured in Samir Amin's works still largely holds today. Therefore, changing the structure of the African economy is one of the critical answers for socio-economic development in Africa (as argued by many).
In other words, even if other constraints, such as low savings and low investment, were addressed, economies in Africa would not perform well enough and would be unlikely to sufficiently advance wellbeing. Indeed, policies, particularly social policies, can help. However, to ensure that economies in Africa perform well sustainably and that levels of human development sufficiently improve, the structure of the African economy should be reconfigured.
Africa's socio-economic development
Although data and estimates are not perfect, and some have been critiqued, it is important to examine social and economic development through empirical data in order to have a better sense of the phenomenon or phenomena, instead of only talking in broad terms about Africa's socio-economic development. Samir Amin used data in his analysis of the various phenomena and to support his arguments, recommendations and activism. Indeed, it is important to be circumspect with some data and estimates.
In order to have a better sense of wellbeing, the Human Development Index (HDI) is commonly used. The HDI is a composite index that includes measures of income per head, education and life expectancy. It is generally used as an indicator of the level of development of a country, region or sub-region. In the analysis of the HDI, the focus is on Sub-Saharan Africa (SSA). The geographic focus on Sub-Saharan Africa is generally used in many studies in order to acknowledge that North Africa is different from SSA, in both economic and political terms. In addition, ideologically and historically, Arab countries have pursued a different political agenda compared to SSA countries. This paper is not about such issues, as it is mainly applying Samir Amin's lens to understanding socio-economic development in the early years of political independence in Africa, with the view of advancing an argument about what Africa could do to improve socio-economic development. Gumede (2019) wrestles with the complex question of pan-Arabism and pan-Africanism.
Table 1 indicates how selected development indicators performed during 1950-1990 for Sub-Saharan Africa (SSA). The HDI changed from 0.081 in 1950 to 0.185 in 1990, which is a relatively substantial improvement given how slowly the HDI can change over time. The increase in the HDI during 1950-1990 is a result of improvements in life expectancy and educational attainment, which increased from 0.076 to 0.161 and from 0.030 to 0.139, respectively. By implication, wellbeing improved relatively significantly during 1950-1990. More people in the various Sub-Saharan African countries received education and were increasingly living longer. It must be noted, though, that longevity (i.e. living longer) is not necessarily related to having education. The point the HDI makes is that there were commendable improvements in access to education and in people living relatively longer. It would seem that longevity improved more than access to education. The HDI also improves when per capita incomes increase. Per capita income is a measure of the standard of living. If income per head improves, it implies that the standard of living is improving.
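For readers unfamiliar with the index, a minimal sketch of an HDI-style computation (using the geometric-mean formulation and goalposts the UNDP adopted in 2010, with purely illustrative input values; the historical series in Table 1 is constructed somewhat differently):

```python
import math

# Schematic HDI: geometric mean of three normalized dimension indices.
def dim_index(value, lo, hi):
    return (value - lo) / (hi - lo)

life = dim_index(62.0, 20.0, 85.0)                  # life expectancy, years
edu = (dim_index(6.0, 0.0, 15.0) +                  # mean years of schooling
       dim_index(10.0, 0.0, 18.0)) / 2.0            # expected years of schooling
income = dim_index(math.log(3500.0),                # GNI per capita (PPP $),
                   math.log(100.0), math.log(75000.0))  # on a log scale

hdi = (life * edu * income) ** (1.0 / 3.0)
print(f"HDI = {hdi:.3f}")
```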
As indicated earlier, data and estimates should be handled cautiously. The HDI, for instance, has been criticised by some who argue that it is not comprehensive. Others argue that per capita income is not a sound measure of the standard of living because it is based on income per head in average terms: there could be people who have very low or no income while a country's per capita income is increasing.
There were many countries in Africa that were still colonised during the 1950s. It is during the 1960s that countries in Africa were becoming politically independent. It would seem that substantial improvements took place from 1960 to 1980/1990, immediately after African countries became politically independent. Improvements during 1980-1990 do not seem as significant. It can therefore be argued that 1960-1980 is the period with substantial improvements in Africa in terms of social and economic development, as Figure 1 demonstrates.
Both GDP and GDP per capita (GDP/C) have been improving since political independence in Africa, but their growth has been fluctuating (see Figure 1). Hirsch and Lopes (2020, 35) confirm what Mkandawire (2001) had said, that 'during the first decade or so of independence, many African countries grew impressively, particularly considering their circumstances at the time of the transition.' GDP/C, however, did not maintain the same consistency as GDP, having shown a relatively small decline from 1516.392 to 1309.799 between 1980 and 1990, as shown in Table 1 (a decline of roughly 14%). It is not surprising that per capita incomes declined during 1980-1990. Economies in Africa took a while to recover from the oil crisis. Further, the structural adjustment programmes imposed by the World Bank (WB) and the International Monetary Fund (IMF) in the 1980s and 1990s affected economic performance and development outcomes in Africa. Figure 2 confirms that the economic crises of the 1970s and the subsequent structural adjustment negatively impacted economic performance and living standards.
Given that most African countries attained political independence in the 1960s, the outcomes of their administrations, at least as far as the economies in Africa are concerned, would have started to show in the 1970s and the 1980s. It is worth also examining the 1980-1990 and 1990-2000 decades, because some countries got their respective political independence in the 1970s. Zimbabwe attained its political independence in the 1980s, while South Africa is a latecomer, so to speak. Studying economic growth by region demonstrates that developing economies in Africa performed well above the global economy during 1970-80.

Figure 1. GDP growth and GDP/C growth rate trend. Source: Author's plot based on the WDI dataset.
For 1990-2000, as Table 2 shows, economic growth in developing economies in Africa was at the same level as in the global economy. The impact of the oil price shock in the 1970s and the structural adjustment programmes in the 1980s resulted in African developing economies' growth rate being below that of the global economy during 1980-90.
All developing economies combined performed above the global average for the period studied. Developing economies in Asia have been the best performers and have ensured higher growth rates than all developing economies combined, resulting in growth performance above the global average for the period studied. The fundamental point the data makes regarding economic growth is that the economic performance of African countries (combined) was not as dismal during the immediate post-independence period as some claim. The Asian economic crisis and other economic crises negatively impacted many African economies in the 1990s. Therefore, the various economic crises account for the declines in economic performance in Africa during the post-independence era in general and the 1980s in particular. This worsened wellbeing in Africa, and structural adjustment programmes further weakened socio-economic development in Africa.
It is in this context that Samir Amin becomes relevant and insightful. The weakening of economies in Africa from the 1980s is largely linked to the global economy. In addition, it is linked to how Africa, or African economies, were integrated into the global economy. African economies have either continued declining in performance or have not recovered from the global economic shocks of the 1970s and 1980s. There have since been more economic crises, and the great recession that started as a global financial crisis in 2007 further caused a deterioration in socio-economic development in Africa. As argued elsewhere, it is important to acknowledge the culpability of leaders in Africa and other factors that have contributed to the worsening socio-economic outcomes in Africa. Among such factors is the interference by external players in the affairs of African countries and/or in the affairs of the African continent. Some of the leaders in Africa have not only allowed this but actively sought 'partnerships' with countries and leaders outside Africa at the expense of socio-economic development in Africa.

Figure 2. GDP growth and GDP/C growth rate trend. Source: Author's plot based on the WDI dataset.
Samir Amin, underdevelopment and development
To start with, Samir Amin attributes the pattern of underdevelopment in Africa to global capitalism and its impediments (Amin 1997). It is important to highlight that Amin explains that capitalism is not just about the 'generalized market'. It should be addressed in relation to power beyond the market, because the logic of capitalism is inseparable from class struggle, politics and the state. As he put it, capitalism is a 'regime in which the world economy functions in a hierarchical, unequal and exploitative way; where "first world" countries dominate and have developed at the cost of the Third World countries' (Amin 2014, 16). The pattern of capitalist development that Amin writes about enabled 'first world' countries to resort to the mechanism of imperialist control of Third World countries of the South, culminating in what he terms a 'permanent phase of capitalism' (viz. globalised historical capitalism being built up with no intention of ceasing to reproduce and deepen the polarisation of centre-periphery relations). Indeed, capitalism continues to victimise people of the periphery by imposing direct control of the whole production system, where small and medium enterprises (and even the large ones outside the monopolies), like the farmers, were literally dispossessed, reduced to the status of sub-contractors, with their upstream and downstream operations subjected to rigid control by the monopolies. This has ensured that African countries do not progress sufficiently, or that wellbeing, as shown in the previous section, remains weak and fragile in Africa.
There are those who have argued that development on the continent has been obscured because of the adoption of ineffective policies, the adoption of ineffectual sustainable-livelihood strategies, as well as the notion that the erstwhile colonisers did not provide Africa with enough space to develop but instead soon returned with new imperialistic inclinations such as structural adjustment policies, globalisation and contract farming (Cheru 2009). Even if the correct policies were implemented, socio-economic development would still be constrained by various factors, including those that Samir Amin so eloquently wrote about. Essentially, Africa has found it difficult to progress because it has been functioning within an economic system that constrains Africa's development. In other words, global capitalism has not worked in favour of development in Africa.
As indicated earlier, Samir Amin also writes about contemporary Black Africa, which can be separated into expansive regions that are distinctly dissimilar. There is traditional West Africa, there is the historic Congo River Basin, and there are the eastern and southern regions of the continent. Indeed, the regions that Samir Amin wrote about still exist. It is not surprising that socio-economic development has not been impressive in Africa. Most parts of Africa, and the regions that Samir Amin distilled, have not changed much. Put differently, there has not been effective structural transformation of economies in Africa, and the relationship that Africa has with the so-called developed world is still largely characterised by centre-periphery relations.
Because one of colonialism's objectives has been to create markets for European commodities and natural resources, a connection between the African economy, the market and the global order, under the control of and managed by the colonisers, was required. This highlights the reason why African nations continue to be significant in the global economy. According to the Eurocentric ideology of westernisation, development of African countries remains elusive due to their scarce resources and productive base; overvalued national currencies; the presence of massive and ineffectual public service bureaucracies that intrude in 'purely economic matters' (Erunke 2009); and the maintenance of subsidies in certain economic communities that ultimately overburden the state.
It is in that context that Samir Amin (1990) argues that in order for development to take place within the continent of Africa and throughout the Third World, there is a pressing need to delink from the global capitalist system through the adoption of new marketing tactics and values that differ significantly from those of the so-called developed countries. According to Amin, it is possible for poor countries to achieve economic progress without necessarily adopting rich countries' production system approaches. Only by delinking economically from the industrialised nations and eliminating unequal exchange can countries in the periphery embark on a healthy path of growth and eventually exceed the established capitalist countries economically. Amin thinks that for Third World nations to realise the socialist structure and establish a new world economic system, independence is necessary. Self-sufficient development must be mass-oriented, since only 'mass' development may result in a 'national and self-sufficient economy'.
In short, delinking refers to 'the strict subjection of external relations in all fields to the logic of internal choices without regard to the criteria of the world capitalist rationality' (Amin 1990, 60). In addition, according to Samir Amin (1990, 55), delinking 'is associated with a "transition", outside capitalism and over a long time, towards socialism'. To be clear, Samir Amin (1990, 62) explains that delinking is not synonymous with 'absolute or relative "autarky", that is withdrawal from external, commercial, financial and technological exchanges.' Samir Amin (1990, 62) is at pains to explain that delinking actually means the pursuit of a system of rational criteria for economic options founded on a law of value on a national basis with popular relevance, independent of such criteria of economic rationality as flow from the dominance of the capitalist law of value operating on a world scale.
An understanding of Marx's conception of value, as highlighted earlier, helps to make better sense of what Samir Amin says.
In an interview with Ray Bush, published in the Review of African Political Economy (Amin and Bush 2014, 41:1), Samir Amin said: 'I understand delinking as compelling the dominant forces, imperialists, to adjust at least partly or to retreat, in two areas, political and economic. At the political level, delinking implies political solidarity between countries of the south to defeat the project of military control of the planet by the US, Europe and Japan. Second, at the economic level, there is an area where I think we could start moving ahead by dismantling the current global economic control. This is to move away from financialised globalisation, that is, not globalisation in all its dimensions, particularly trade, but controlling the flows of capital, including direct foreign investment, but also portfolio investments, speculatory investments and so on.' (Amin and Bush 2014, 113) Further on, Amin sees 'building a sovereign project, diversifying the economy, moving along towards its modern industrialisation, completed by growing food sovereignty [as] delinking, in the sense of compelling the global system to adjust to it' (Amin and Bush 2014, 112).
Arguably, it is this major proposal of delinking that would have unlocked Africa's development if it had been pursued. It is very clear that the development of Africa is synonymous with equality as underdevelopment is with inequality; therefore, any quest towards achieving development in Africa would need to address the issue of inequality. Amin recognises that First World countries are growing at the expense of Third World countries, and the growth thereof is not equally distributed. Recognising that the capitalist system reduced countries of the periphery to being the subcontractors of central monopoly capital, and as a measure of addressing the issue of inequality for development, Amin emphasises the need for underdeveloped countries to move away from the capitalist system. He proposes socialism as an answer. It might very well be that Africa needs to come up with its own approach to socio-economic development, and not necessarily socialism. The fundamental point in Samir Amin's delinking proposal is that Africa was wrongly integrated into the so-called world economy.
To elaborate briefly, delinking as used by Amin refers to the process of compelling imperialist countries to adjust to the needs, or part of the needs, of Third World countries of the South, rather than Third World countries simply going along with having to unilaterally adjust to the needs of the First World countries of the North. According to Yong-Hong (2013, 4), it is the 'refusal to bow to the dominant logic of the world capitalist system' by insisting on a change in the terms, and not just the content, of the conversation (Mignolo 2007, 459). The delinking strategy, as Samir Amin argued, is a type of revolution that liberates Third World countries from the grip of imperial power by transferring economic hegemony to a new centre.
Samir Amin provides four propositions in justifying delinking. The first is that it is the logical political outcome of the unequal character of the development of capitalism. Unequal development, in this sense, is the origin of essential social, political and ideological evolutions. The second is that it is a necessary condition of any socialist advance, in the North and in the South. This proposition is essential for a reading of Marxism that genuinely takes into account the unequal character of capitalist development. The third is that the potential advances that become available through delinking will not guarantee certainty of further evolution towards a pre-defined socialism. Rather, socialism is a future that must be built. Fourth, the option for delinking must be discussed in political terms. This proposition derives from a reading according to which economic constraints are absolute only for those who accept the commodity alienation intrinsic to capitalism and turn it into a historical system of eternal validity (Yong-Hong 2013, 5).
Amin's argument for delinking highlights delinking from all forms of exploitation, arguing that unequal exchange is the main means whereby capitalism reproduces inequalities. He sees delinking as associated with a 'transition', outside capitalism and over a long time, towards socialism, arguing that, contrary to orthodox belief, the ongoing economic growth crisis in the West and the perpetual development crisis in Africa derive from the problem of capitalism (Amin and Bush 2014, 15). He argues that the situation in Africa of high prices, massive unemployment and stunted growth is a result of the structure of capitalism, which is founded on the world capitalist law of value and its role in the accumulation of capital (Amin 1990).
Amin also stresses delinking as the strict subjection of external relations in all fields to the logic of internal choices without regard to the criteria of world capitalist rationality (Amin 1990). He argues that the concept of capitalism cannot merely be addressed in relation to the 'generalized market' but rather needs to be addressed in relation to power beyond the market, because the logic of capitalism and inequality is inseparable from class struggle, politics and the state. Based on this, delinking is a process that would compel imperialist countries to adjust to the needs, or part of the needs, of the South, rather than Third World countries simply going along with having to unilaterally adjust to the needs of the First World countries of the North (Amin 2018). Amin (2018) emphasises that African economies/countries should follow the Bandung spirit of the revival of the states and nations of Asia and Africa. At the Bandung Conference, which was a watershed moment in the history of countries in the periphery, African states and nations aligned with countries non-aligned to neo-colonialism, whose rights had also been denied by the historical colonialism/imperialism of Europe, the United States and Japan, in spite of differences in size, cultural and religious backgrounds and historical trajectories. In solidarity and unity, they rejected the pattern of colonial and semicolonial globalisation that the Western powers had built to their exclusive benefit and declared their will to complete the re-conquest of their sovereignty by moving into a process of authentic and accelerated inward-looking development inspired by Marxism on a socialist path, which was the condition needed for their participation in shaping the world system on an equal footing with the states of the historic imperialist centres.
In the 1970s and later periods, Samir Amin argued for the process of 'delinking' from all Eurocentric approaches to development (globalisation, adjustment programmes, contract farming, etc.) in pursuit of home-grown alternatives. It is his earlier work, and that of Thandika Mkandawire and others, that has informed the view that Africa's socio-economic development is constrained by inappropriate policies in the face of a hegemonic global political economy (Gumede 2013, 2015). Amin (2016) argued that in order to bring about effective development, Third World countries needed to 'delink' themselves from the global capitalist structure that promotes unequal development. Essentially, for Samir Amin, Africa needs to adopt market approaches and standards which are different from those in the developed world in order to achieve its own peasant-based futures. This could be accompanied by the promotion of prospects of autonomous industrialisation (Ndhlovu 2020). The approaches adopted need to promote the renewal of the peasant economy, which was interrupted, distorted and disfigured by the imperialistic tendencies of the Euro-North, which promoted coloniality by supporting the ascendency of comprador bourgeoisie puppets who would sustain its hegemonic rule at independence (Rodney 1972).
Inclusive socio-economic development in Africa
Many argue that the lack of robust socio-economic development on the continent stems from the fact that Africa has not implemented its own indigenous theories of social and economic development. I have called for an alternative socio-economic development approach that takes into account the history, the initial conditions and the economic realities that many African countries face (see Gumede 2016). Hirsch and Lopes (2020, 35) make the point that 'the colonial period was mediocre for African economic development, and independence did not change the economic trajectory significantly.' There has been too much talk and too little implementation. Scholars such as Samir Amin, Claude Ake and Thandika Mkandawire, among many others, have been vocal about the need for inward-looking development. However, their contributions have not reached the point of implementation by governments. This is not to say that other constraints, such as those imposed by the global matrix of power, are not frustrating socio-economic development in Africa or the global south broadly. This paper is not an analysis of global capitalism; many others have undertaken that, including Gumede (2018).
Initiatives such as the Lagos Plan of Action, the Abuja Treaty and the New Partnership for Africa's Development, among others, have remained ignored as potential concrete solutions. These different plans have also not been informed by any clear overarching framework that should be guiding inclusive development in Africa. As a result, there has not been a clear inward-looking socio-economic development agenda in and for Africa, although the African Union Agenda 2063 was a step in the right direction. The Agenda envisions an African future of unity, integration, prosperity and peace (African Union 2013). Thus, by and large, Africa has mainly relied on borrowed theories and perspectives which, in most cases, do not speak to the cultural contexts and situational realities of the continent's socio-economic needs.
As argued in the preceding section, one intervention that would most likely have significantly improved socio-economic outcomes in Africa is Samir Amin's notion of delinking. One approach that can result in better socio-economic development in Africa was also pioneered by Samir Amin and popularised by other African scholars, especially those associated with the Council for the Development of Social Science Research in Africa (CODESRIA). The approach referred to is based on the need for an agrarian revolution on the continent as part of Africa's collective and continuous effort to pursue culturally context-specific development using its 'own rules'. Claude Ake, Archie Mafeje and Sam Moyo are among those who elaborated this approach, largely based on the view that the majority of households in Africa directly relied on agro-based livelihood activities (at least in the 1970s/80s). Samir Amin's agrarian revolution proposal points to the idea of designing and implementing inward-looking and 'home-grown' approaches that display a clear link between social and economic policies.
Many have argued for an alternative development approach (see Gumede 2016). Julius Nyerere, the former president of the Republic of Tanzania, proposed Ujamaa not only as a development model but also as a political-economic management model (Nyerere 1967). The Ujamaa concept prohibited personal acquisitiveness and promoted the horizontal rather than vertical distribution of wealth throughout society. Due to its positive results in terms of socio-economic development, particularly for the rural poor, the approach gained widespread support, not only in Tanzania but also across the African continent, where most post-independence governments focused on agricultural development more than on the rest of the economic sectors. Ujamaa was partly driven by affordability in terms of capital availability and by African communities' sectoral expertise in land and agricultural activities.
The majority of post-independence governments were resource-poor and therefore opted to begin their development agenda with agriculture, mainly because the general populace in the region already had indigenous skills in agriculture, and not in the other economic sectors (Ndhlovu 2020). It was in this context that Nyerere introduced the villagisation of production, which fundamentally collectivised all forms of local productive dimensions and, although it had its own challenges such as soil exhaustion, brought about improved household livelihood outcomes. As a result, while the approach is criticised in some circles for land degradation caused by over-cultivation near villages, the indigenous people who benefited from the programme supported it for its capacity to improve their socio-economic fortunes (Himmestrand 1994). In support of Nyerere's model, Erunke (2009) argues that the alternative indigenous paradigm for development by post-independence African governments needed to place emphasis on the creation of conducive political and socio-economic environments and an effective resource mobilisation which could translate into sustainable development, so as to guarantee the right balance between the private and public sectors of the economy in pursuit of a more useful approach.
The Pan-African concept and spirit gained momentum in the 1960s as African scholars and activists rallied behind the Organisation of African Unity to condemn domination, suppression, enslavement and imperialism. The concepts gave birth to terminologies such as Africa's rebirth, political liberation and sovereignty, regeneration, reconstruction, revitalisation and re-engineering. New terms such as re-Africanisation and re-membering should be added to the list. All these terms have been coined by African thought leaders as part of continuous attempts to regain Africa's values and identity on the global scene. Pan-Africanism, as an ideology of the revolutionary movement, was used to mobilise African countries to stand up and reconstruct themselves after a century of dehumanisation by the imperialistic powers of the Global North. The founding fathers of Pan-Africanism argued that: 'No independent African state today by itself has a chance to follow an independent course of economic development, and many of us who have tried to do this have been almost ruined or have had to return to the fold of the former colonial rulers. This position will not change unless we have a unified policy working at the continental level' (Nkrumah 1963).
However, although Africa managed to gain political independence, its economic, social and political conditions remain a serious matter of concern. In addition to major records of intra-state conflicts and political instability, there have also been major abuses of human rights and dignity since independence. There has also been the 'overseer' role still maintained by some colonial masters, which has continued to be a setback for the African people (Ndhlovu 2020). There has also been increased terrorist activity, which has disrupted lives in Nigeria, Kenya, Libya, Mali, Egypt and Somalia, while xenophobia erupts from time to time in South Africa.
Conclusion
This paper revisited post-colonial social and economic development in Africa, focusing on the period immediately after many countries gained political independence and making use of Samir Amin to explain weak socio-economic development in Africa. It posits that post-independence Africa has had to contend with disruptive socio-economic and political realities instituted by European colonialism. The fundamental explanation for the poor socio-economic development in Africa is global capitalism, and the main answer lies in Samir Amin's delinking proposal. Indeed, other constraints, such as those imposed by the global matrix of power, have to be acknowledged because they are limiting socio-economic development in Africa and in the global south broadly.
Based on what leading development thinkers in Africa have said, particularly Samir Amin, the paper proposes a major rethinking of development approaches with the intention of setting aside imported development approaches which are not cognisant of the context and thus do not relate to African socio-economic and political realities. A new development approach based on the pan-African agenda and Africa's renaissance should be at the centre of an African future not designed by the continent's colonial past.
Table 2. Annual average GDP growth rates, by region. | 2024-01-24T17:00:00.505Z | 2023-10-02T00:00:00.000 | {
"year": 2023,
"sha1": "0dda6e8c7c1be04fc9b808ab688dc3338f93f00a",
"oa_license": "CCBY",
"oa_url": "https://www.tandfonline.com/doi/pdf/10.1080/02589346.2023.2300893?needAccess=true",
"oa_status": "HYBRID",
"pdf_src": "TaylorAndFrancis",
"pdf_hash": "381f640138d5164c2e3566b24e1dc76daa976ced",
"s2fieldsofstudy": [
"Political Science",
"Economics",
"History"
],
"extfieldsofstudy": []
} |
204814747 | pes2o/s2orc | v3-fos-license | Efficient gene correction of an aberrant splice site in β‐thalassaemia iPSCs by CRISPR/Cas9 and single‐strand oligodeoxynucleotides
Abstract β‐thalassaemia is a prevalent hereditary haematological disease caused by mutations in the human haemoglobin β (HBB) gene. Among them, the HBB IVS2‐654 (C > T) mutation, which lies in an intron, creates an aberrant splicing site. Bone marrow transplantation for curing β‐thalassaemia is limited due to the lack of matched donors. The clustered regularly interspaced short palindromic repeats (CRISPR)/CRISPR‐associated protein 9 (Cas9) system, a widely used tool for gene editing, is able to target specific sequences and create double‐strand breaks (DSBs), and it can be combined with single‐stranded oligodeoxynucleotides (ssODNs) to correct mutations. In this study, following two different strategies, the HBB IVS2‐654 mutation was seamlessly corrected in iPSCs by the CRISPR/Cas9 system and ssODNs. To reduce the occurrence of secondary cleavage, the more efficient strategy was adopted. The corrected iPSCs retained pluripotency and genome stability. Moreover, they could differentiate normally. Through the CRISPR/Cas9 system and ssODNs, our study provides improved strategies for gene correction of β‐thalassaemia and restoration of HBB gene expression, which can be used for gene therapy in the future.
As the disease is caused by mutations, gene therapy provides a potential treatment for the disorder.
With the advent of technological breakthroughs in gene editing, an efficient means of curing genetic diseases is to correct the mutation directly via sequence-specific endonucleases. Endonucleases can create double-strand breaks (DSBs), which then activate DNA repair by two highly conserved competing mechanisms: nonhomologous end joining (NHEJ) or homology-directed repair (HDR). 6,7 NHEJ repairs breaks via ligation of DNA ends throughout the cell cycle, and it causes nearly random insertion and deletion mutations.
Nevertheless, HDR can be exploited to make the desired sequence replacement at the DSB site by homologous recombination with a donor DNA template, and it is normally most active during the S or G2 phase of the cell cycle. This allows us to utilize HDR to generate targeted gene deletion, mutagenesis, insertion or gene correction. 8,9 Recently, the clustered regularly interspaced short palindromic repeats (CRISPR)/CRISPR-associated protein 9 (Cas9) system, an RNA-guided nuclease from an adaptive immune mechanism present in many bacteria and the majority of characterized Archaea, has been widely used as an endonuclease for gene correction through HDR. The system binds DNA through a 20-bp gRNA adjacent to a protospacer adjacent motif (PAM) by Watson-Crick base pairing and then generates cleavage via the Cas9 protein. 10,11 Because the CRISPR/Cas9 system is easier to design and construct, and offers high gene-editing efficiency, many studies have reported using the system for correcting disease-related mutations in animal somatic 12 and germ line cells, 13,14 as well as in human stem cells 15 and induced pluripotent stem cells. 16,17 The single-stranded oligodeoxynucleotide (ssODN) has been applied as the repair template to generate point mutations. 18 In contrast to dsDNA, an ssODN can be synthesized more easily and quickly and does not require excision of a selection marker. In addition, it has been reported that ssODNs are more efficient for HDR. 19 In this study, combining CRISPR/Cas9 and ssODN, we successfully repaired the biallelic HBB IVS2-654 mutation in induced pluripotent stem cells (iPSCs) via a seamless approach in one step. Because the gRNA could not be designed at the mutant site (the PAM must be NGG) and the mutation lies in intron 2, where a synonymous mutation cannot be introduced, a more efficient strategy was adopted to correct the mutation by reducing the occurrence of secondary cleavage. After gene correction, the iPSCs still retained pluripotency and genome stability. The corrected iPSCs could undergo hematopoietic differentiation normally, and the expression of the HBB gene was restored. Therefore, our study offers improved strategies for gene correction of β-thalassaemia.
| Cell culture and hematopoietic differentiation
The iPSCs derived from a patient with β-thalassaemia were provided by the Third Affiliated Hospital of Guangzhou Medical University, where the experiments were approved by the ethics committee.
The medium was replaced daily. iPSCs at 80% confluence in a 35 mm dish were treated with 1 mg/mL dispase (Gibco), and small scraped clumps were harvested and cultivated on Matrigel-coated 12-well plates at a 1:60 dilution. Following a five-step hematopoietic differentiation strategy, the cells were expanded in different media containing different cytokines (PeproTech) as previously reported. 20 At days 12 and 22, differentiated cells were collected via fluorescence-activated cell sorting (FACS).
| Differentiation of three germ layers and teratoma formation
The iPSCs were treated with dispase and cultured in ultra-low attachment plates to form embryoid bodies (EBs).
| Karyotype analysis and short tandem repeat assay
The iPSCs were incubated with culture medium supplemented with 0.25 mg/mL colcemid (Invitrogen) for 4 hours and then incubated in a mixed solution containing 0.4% sodium citrate and 0.4% potassium chloride.
| T7 Endonuclease I assay
We designed primers near the gRNA off-target sites predicted throughout the whole genome by the online software CCTop. When using CCTop, we chose custom target selection with in vitro transcription, with the species set as Human (Homo sapiens GRCh37/hg19); other parameters were left unchanged. Fragments of genomic DNA extracted from iPSCs were amplified with these primers by PrimeSTAR GXL DNA Polymerase (TAKARA) and then purified with the Universal DNA Purification Kit (Tiangen).
The purified products were used for the T7 Endonuclease I (T7E1) assay following the manufacturer's protocol (New England BioLabs) and analysed on 2% agarose gels using a Gel Imaging System (Bio-Rad).
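As a rough illustration of what such off-target prediction involves (a simplified toy stand-in for a tool like CCTop, not its actual algorithm; the guide and genome sequences below are invented), a naive scan looks for NGG-adjacent 20-mers within a few mismatches of the gRNA:

```python
def find_off_targets(genome: str, guide: str, max_mismatches: int = 3):
    """Naively scan one strand for NGG-adjacent sites similar to the guide."""
    hits = []
    k = len(guide)
    for i in range(len(genome) - k - 2):
        site = genome[i:i + k]
        pam = genome[i + k:i + k + 3]
        if pam.endswith("GG"):  # NGG PAM required by SpCas9
            mismatches = sum(a != b for a, b in zip(site, guide))
            if mismatches <= max_mismatches:
                hits.append((i, site, mismatches))
    return hits

guide = "GACGTACGTACGTACGTACG"  # invented 20-bp guide, for illustration only
genome = ("TTT" + guide + "TGG"             # perfect on-target site
          + "AAAA"
          + "GACGTACGTACGAACGTACG" + "AGG"  # candidate off-target, 1 mismatch
          + "TT")
for pos, site, mm in find_off_targets(genome, guide):
    print(f"position {pos}: {site} ({mm} mismatch(es))")
```

A real tool additionally scans the reverse strand and the whole genome, and ranks hits by mismatch position, since PAM-proximal mismatches are less well tolerated.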
| Whole exome sequencing and Sanger sequencing
The
| Statistical analysis
The data were subjected to statistical analysis by unpaired two-tailed Student's t test and are presented as means ± SEM. A value of P < .05 was considered statistically significant. Editing efficiency = (number of indel clones + homozygous repaired clones + heterozygous repaired clones) / (total number of clones tested) × 100%. Repairing efficiency = (number of homozygous repaired clones + heterozygous repaired clones) / (total number of clones tested) × 100%.
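As a minimal sketch of how these two metrics are computed (the clone counts below are hypothetical, chosen to mirror the two-step-group proportions reported in the Results), in Python:

```python
def editing_efficiency(n_indel, n_homo, n_hetero, n_total):
    """Any Cas9-induced change: indels plus precise (HDR) repair events."""
    return (n_indel + n_homo + n_hetero) / n_total * 100

def repairing_efficiency(n_homo, n_hetero, n_total):
    """Precise HDR-mediated correction only."""
    return (n_homo + n_hetero) / n_total * 100

# Hypothetical screen of 100 clones (proportions mirror the two-step group)
n_indel, n_homo, n_hetero, n_total = 26, 9, 12, 100
print(f"editing efficiency:   {editing_efficiency(n_indel, n_homo, n_hetero, n_total):.0f}%")  # 47%
print(f"repairing efficiency: {repairing_efficiency(n_homo, n_hetero, n_total):.0f}%")         # 21%
```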
| The design of different gene correction strategies for the HBB IVS2-654 mutation and ssODN selection
According to the PAM requirement, a 20-bp gRNA adjacent to an AGG PAM was designed near the HBB IVS2-654 (C > T) mutation.
This gRNA was also shown to have higher efficiency in previous research. 21 Compared with dsDNA, ssODN is more efficient for HDR. 19,22 We therefore used a 127-bp ssODN complementary to the gRNA strand as the donor. However, because the designed gRNA did not span the IVS2-654 mutation, which lies in intron 2 of the HBB gene where a synonymous mutation cannot be introduced, we devised a two-step strategy to avoid the occurrence of secondary cleavage. In the first step, we corrected the HBB IVS2-654 mutation and simultaneously introduced a new mutation within the gRNA target region. In the second step, we repaired the introduced mutation. We designed two different introduced mutations; thus, two different ssODNs were used in the first step and two different gRNAs were used in the second step (Figure 1A,B).
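The rationale for the two steps can be made concrete with a small sketch (illustrative sequences only, not the real HBB locus): Cas9 can re-cut an allele as long as the 20-bp protospacer plus an NGG PAM remain intact, so a deliberate extra mutation inside the protospacer blocks re-cutting:

```python
import re

def still_cleavable(allele: str, protospacer: str) -> bool:
    """Exact-match check for protospacer + NGG PAM; real Cas9 tolerates some
    mismatches (especially PAM-distal ones), so this is a simplification."""
    return re.search(re.escape(protospacer) + "[ACGT]GG", allele) is not None

protospacer = "ACGTACGTACGTACGTACGT"  # invented 20-bp gRNA target
flank = "AGGTTTT"                     # AGG PAM plus downstream sequence

# One-step correction: the corrected base lies outside the protospacer,
# so the target site is unchanged and can be cut again.
one_step_allele = protospacer + flank

# Two-step, part 1: correct the mutation AND place a new mutation
# inside the protospacer, destroying the target site.
mutated = protospacer[:17] + "A" + protospacer[18:]  # C -> A at position 18
two_step_allele = mutated + flank

print(still_cleavable(one_step_allele, protospacer))  # True  -> risk of re-cutting
print(still_cleavable(two_step_allele, protospacer))  # False -> secondary cleavage avoided
```

In step 2, a second gRNA matching the now-mutated site is used to revert the introduced change, restoring the wild-type sequence without leaving a footprint.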
To correct the biallelic HBB IVS2-654 mutation, the CRISPR/Cas9 system and ssODN were utilized in iPSCs derived from reprogrammed fibroblasts of a patient. To evaluate which ssODN had higher HDR efficiency for the two-step strategy, flow cytometric analysis was performed using the mCherry reporter on the gRNA vector and the GFP reporter on the Cas9 plasmid at 48 hours after electroporation of the gRNA vector, Cas9 plasmid and ssODN 1 or ssODN 2 (Figure 1C). Double-positive cells were harvested and re-plated at low density. After about a week, clones were picked and identified via PCR. In the ssODN 1 group, 29% of clones carried indels, 5% were heterozygous at the mutant site and 2% were homozygous. In the ssODN 2 group, 29% of clones likewise carried indels, while 16% were heterozygous and 4% homozygous (Figure 1D). The results indicated that ssODN 1 and ssODN 2 had similar editing efficiency, but the efficiency of ssODN 2 was more stable, and ssODN 2 was also more efficient than ssODN 1 for repairing. Therefore, gRNA2 and ssODN 2 were adopted in the subsequent experiment for the two-step strategy. For the main experiment, we electroporated iPSCs and sorted cells double-positive for mCherry (gRNA) and GFP (Cas9; Figure 2A), and then cultured them at low density.
Once clones had formed, we picked 40-60 clones each time and screened for corrected cell lines at the mutant site via PCR. We could repair the HBB IVS2-654 mutation with both strategies; for the two-step strategy, the introduced mutation was detected after the first step but not after the second (Figure 2D). In the one-step group, 45% of clones carried indels, 3% were heterozygous for gene editing and 1% were homozygous. In the two-step group, 26% of clones carried indels, while 12% were heterozygous and 9% homozygous. As demonstrated, the editing efficiency of the second strategy did not differ much from that of the first, whereas the repairing efficiency of the two-step strategy was much higher than that of the one-step strategy (Figure 2B,C).
| Characterization of pluripotency in the gene-corrected iPSCs
To determine whether the iPSCs retained normal pluripotency after gene repair, two gene-corrected iPS cell lines (corrected C1-iPS and corrected C2-iPS) from the two-step strategy were chosen for further characterization. The iPSCs before (pre-iPS) and after gene correction displayed typical morphology, and their AP staining was positive (Figure 3B). Quantitative PCR analysis showed that the iPSCs before and after gene correction had higher expression of pluripotency-related genes, such as OCT4, SOX2, NANOG, GDF3 and DPPA4, compared with the patient's fibroblasts (Figure 3A). Immunofluorescence results also revealed that the typical pluripotency markers OCT4, SSEA4, SOX2 and TRA-1-81 were expressed in these iPSCs (Figure 3C,D). Moreover, the pre-iPS, corrected C1-iPS and corrected C2-iPS cell lines could differentiate into the three germ layers in vitro after the formation of EBs, as shown via immunofluorescence (Figure 3E). We also obtained teratomas containing the three germ layers in vivo (Figure 3F). All the above results indicated that the gene-corrected iPSCs retained pluripotency.
| The stability of the genome in the gene-corrected iPSCs
First, we confirmed through STR assay that pre-iPS, corrected C1-iPS and corrected C2-iPS were derived from the same patient (Figure 4A).
These iPSCs had normal karyotypes (Figure 4B). We obtained the predicted off-target sites using the online software CCTop. Then, the T7E1 assay was performed after PCR on DNA extracted from the iPSCs. The T7E1 assay and Sanger sequencing revealed no off-target mutagenesis at the top 9 predicted sites in corrected C1-iPS and corrected C2-iPS cells (Figure 4C, Table S1). To further assess genome integrity, whole exome sequencing was performed (Figure 4D, Tables S2-S4). None of the SNV and indel sites identified corresponded to the predicted off-target sites, and the sequences at these sites had too many mismatches with the gRNA for it to target them readily. Thus, the CRISPR/Cas9 system was not the direct cause of the mutagenesis.
| The restoration of HBB gene expression in the gene-corrected iPSCs
The patients with β-thalassaemia show issues related to β-globin chain production.
| DISCUSSION
Thalassaemia is one of the most common genetic diseases, resulting from an imbalance of globin chain production mostly caused by gene mutation. 23 Given the limitations of current clinical treatments, patients cannot achieve effective recovery, which emphasizes the importance of seeking new therapies targeting thalassaemia. Engineered nucleases offer hope here. Zinc-finger nucleases (ZFNs) and transcription activator-like effector nucleases (TALENs) are produced via proteins fused with the nuclease domain of the restriction enzyme FokI, and they work through protein-DNA interaction. 24 Previous reports have shown HDR rates varying between 33% and 68% when TALENs and dsDNA were used for editing. They are not used widely because a new protein must be engineered and cloned for each new target site, which is a complicated process, and the variation in repairing efficiency is large as well. 21,25 However, CRISPR/Cas9, an RNA-guided system, is easier to design and construct, with lower costs.
Gene correction is based on the sequence-specific targeting of the gRNA, the DSB caused by the Cas9 protein, and the repair of the gene via the donor template for HDR. In a previous report, the HBB IVS2-654 mutation in β-thalassaemia iPSCs was corrected by CRISPR/Cas9 with dsDNA. 21 Another team disrupted genomic elements to create indels removing the mutation using LbCas12a RNP, with efficiency up to 76.6%, although whether changing genome sequences in this way carries potential risks is unknown. 26 Nevertheless, compared with dsDNA, ssODN is more efficient for HDR and has lower cytotoxicity. 19 In this study, we demonstrated an efficient approach for correcting the biallelic HBB IVS2-654 mutation in β-thalassaemia iPSCs by combining the CRISPR/Cas9 system with ssODN, which were electroporated into iPSCs. Because antibiotic selection may interfere with the expression of the corrected gene, 27,28 positive cells carrying the fluorescent reporters were harvested by FACS.
We finally acquired corrected iPS cell lines after Sanger sequencing of expanded clones. Subsequent assays revealed that the corrected iPSCs retained pluripotency, genome stability and differentiation ability. Most importantly, we examined the expression of the HBB gene and concluded that its function was successfully restored in the gene-corrected iPSCs.
Because most patients who carry the heterozygous HBB IVS2-654 mutation do not show symptoms, correcting the mutation on even one allele can be curative. In the one-step group, the repairing efficiency was 4% on average, including both heterozygous and homozygous repair events. However, for that strategy, we could not design a gRNA covering the HBB IVS2-654 mutant site, because SpCas9 requires the target to be adjacent to an NGG PAM. Therefore, the gRNA was designed near the mutation, with the consequence that the gRNA can still target the sequence after gene correction. The mutation is in an intron, and we cannot make a synonymous mutation. To reduce the occurrence of secondary cleavage, we adopted a two-step strategy. In the first step, we corrected the HBB IVS2-654 mutation and at the same time introduced a new mutation within the gRNA target region. Next, we repaired the introduced mutation in the second step. The two strategies had similar editing efficiency, whereas the repairing efficiency of the two-step strategy reached 21% on average, about 5 times that of the one-step strategy. The HDR rate was also higher than that achieved with CRISPR/Cas9 and dsDNA in a previous report (12.3%). 21 This revealed that the two-step strategy of introducing a new mutation can indeed reduce the occurrence of secondary cleavage and improve the repairing efficiency (Figure 2). In addition, we found that the repairing efficiency, though not the editing efficiency, differs significantly depending on which nucleotide is used for the introduced mutation (Figure 1). Nevertheless, comparatively speaking, the two-step strategy is not simple. Hence, we tried another method within the one-step format: we electroporated all the gRNAs and ssODNs as well as Cas9 used in the two-step strategy into iPSCs together at a single time, assuming that they could act twice in cells because plasmids can persist in cells for several days.
FIGURE 5 Hematopoietic differentiation of gene-corrected iPSCs. A, Experimental scheme for a five-step hematopoietic differentiation strategy from iPSCs. B, Images of representative morphology changes at different hematopoietic differentiation stages. Scale bar, 500 μm (day 0, day 2, day 4, day 6, day 12); scale bar, 200 μm (day 22). C, Flow cytometric analysis of CD34+ expression at day 6 during the hematopoietic differentiation of n-iPS, pre-iPS, corrected C1-iPS and corrected C2-iPS. D, Representative images of colony morphologies for the CFU assay after another 14 days of differentiation using the CD34+ cells at day 12 of hematopoietic differentiation. Scale bar, 100 μm (CFU-E); scale bar, 500 μm (CFU-G, CFU-M, CFU-GM, CFU-MIX). E, Flow cytometric analysis of CD235a+ expression at day 22 during the hematopoietic differentiation of n-iPS, pre-iPS, corrected C1-iPS and corrected C2-iPS. F, Agarose gel images of RT-PCR products amplifying HBB cDNA of CD235a+ cells derived from the hematopoietic differentiation of the n-iPS, pre-iPS, corrected C1-iPS and corrected C2-iPS cell lines. CD34+ cells from cord blood were used as a positive control (Cord blood-E), and normal iPSCs were used as a negative control. G, Quantitative PCR analysis of HBB gene expression (normalized to β-actin) in CD235a+ cells derived from the hematopoietic differentiation of cell lines before or after gene correction. Results are presented as mean ± SEM for n = 3 individual experiments; **, P < .01; ***, P < .001; t test.
But the final result showed that the editing efficiency with the mixed gRNAs and ssODNs was lower than that of the one-step strategy mentioned above (Figure S1). The most likely reason is that the mass of each of gRNA and gRNA2 was only half the mass of gRNA used in the one-step strategy, because the same total mass of gRNA had to be kept in the experiment. In this respect, the problem could be solved by utilizing a vector that carries multiple gRNAs in a further study. Moreover, there are many other ways to improve the efficiency of gene repair, for example, adding small molecules, synchronizing the cell cycle, and adjusting delivery timing and methods. [29][30][31] Besides, ribonucleoprotein (RNP) delivery of Cas9 with gRNAs consistently increases activity in cells and can thus enhance the efficiency. 19,31 Since a gRNA can target similar sequences in a genome, off-target effects are possible when using the SpCas9 system. 32,33 Thus, we performed the T7E1 assay for 9 potential off-target sites predicted by the online software CCTop and confirmed the results by Sanger sequencing.
This revealed that the corrected iPS cell lines had no indels at these sites. Next, whole exome sequencing was performed to assess off-target effects, and we found some SNVs and indels in the corrected iPS cell lines relative to the iPS cell line before repair. However, none of these SNV and indel sites fell within the potential off-target regions predicted from the gRNA targeting sequence (Figure 4, Tables S2-S4). We considered that high-throughput sequencing can sometimes yield false-positive results, and that cells may accumulate mutations with increasing passage number. To address these problems, experiments can be performed with cells at early passages, and Sanger sequencing can be used to verify the results of whole exome sequencing. To improve the targeting specificity of the SpCas9 system, the following strategies can also be adopted: designing shorter gRNAs, or gRNAs with two unpaired Gs at the 5′ end, which are more sensitive to mismatches, 34,35 or using paired nCas9s. 36 iPSCs have the ability to differentiate into all cell types. Thus, we performed hematopoietic differentiation and found that the function of the HBB gene was successfully restored in the gene-corrected iPSCs (Figure 5).
| CONCLUSIONS
In this study, we describe a one-step approach to correct the biallelic HBB IVS2-654 mutation in β-thalassaemia iPSCs through CRISPR/Cas9 and ssODN-mediated HDR. For a mutation in an intron with no appropriate gRNA covering the mutant site, a two-step strategy can be adopted to reduce the occurrence of secondary cleavage and improve the repairing efficiency. The corrected iPSCs retain pluripotency and genome stability. Moreover, the expression of the HBB gene can be restored in vitro after hematopoietic differentiation. Therefore, our findings demonstrate that the strategies of gene correction reported here will facilitate the development of cell therapy for genetic diseases using iPSCs.
ACKNOWLEDGEMENTS
The authors would like to thank Yi Liang for karyotype and STR analysis.
CONFLICT OF INTEREST
The authors declare no competing interests.
DATA AVAILABILITY STATEMENT
The raw data of whole exome sequencing reported in this paper have been deposited in the Genome Sequence Archive of the BIG Data Center, Beijing Institute of Genomics (BIG), Chinese Academy of Sciences, under accession number CRA001893, publicly accessible at https://bigd.big.ac.cn/gsa. | 2019-10-22T13:03:27.308Z | 2019-10-21T00:00:00.000 | {
"year": 2019,
"sha1": "3597a3846d85642a3b1e792cd0d418051369e9a6",
"oa_license": "CCBY",
"oa_url": "https://onlinelibrary.wiley.com/doi/pdfdirect/10.1111/jcmm.14669",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "a0ed9b920a7db7c0858a8e75d395b00746bd2e15",
"s2fieldsofstudy": [
"Biology",
"Medicine"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
208784112 | pes2o/s2orc | v3-fos-license | Macrolides and community-acquired pneumonia: is quorum sensing the key?
Combination therapy with two antimicrobial agents is superior to monotherapy in severe community-acquired pneumonia, and recent data suggest that addition of a macrolide as the second antibiotic might be superior to other combinations. This observation requires confirmation in a randomised control trial, but this group of antibiotics have pleiotropic effects that extend beyond bacterial killing. Macrolides inhibit bacterial cell-to-cell communication or quorum sensing, which not only might be an important mechanism of action for these drugs in severe infections but may also provide a novel target for the development of new anti-infective drugs.
Outcome in community-acquired pneumonia (CAP) is adversely affected by increasing severity of illness, comorbidity and age. Organisational factors such as timely administration of appropriate antibiotics, prompt admission to critical care and adherence to antibiotic policies, however, are also important in influencing outcome [1-3]. Combination therapy with two antimicrobial agents seems superior to monotherapy in severe CAP, and this approach is recommended by a number of organisations [4,5]. The Infectious Diseases Society of America/American Thoracic Society guidelines suggest therapy with a β-lactam antibiotic, with the addition of either a macrolide or fluoroquinolone antibiotic [4], whilst the British Thoracic Society recommends initiating a β-lactam/macrolide antibiotic combination [5].
Martin-Loeches and colleagues recently conducted a prospective, observational cohort, multicentre study involving 218 mechanically ventilated CAP patients to see what effect different antibiotic combinations had on mortality [6]. These investigators reported that the addition of a macrolide, but not a fluoroquinolone, to standard antibiotic therapy was associated with reduced mortality in patients admitted to critical care with CAP. Death in critical care occurred in 26.1% of individuals receiving combination therapy with a macrolide, compared with 46.3% in those receiving fluoroquinolones [6]. These results support data from other observational studies that suggest β-lactam/macrolide combinations offer a survival advantage in severe CAP. This body of data is not scientifically robust enough, however, to adequately answer the question of whether adding a macrolide to a β-lactam confers a survival advantage; this will only be satisfactorily addressed by a large prospective randomised control trial.
In addition to activity against atypical bacteria, macrolides have ubiquitous immunomodulatory effects. Speculating how this group of drugs might offer a survival advantage when added to a β-lactam is therefore of interest, and several plausible mechanisms exist. Treatment of undiagnosed atypical pneumonia could occur, since 53% of patients in the reported study had no microbiological diagnosis [6]; however, this seems unlikely as one might expect fluoroquinolones to be equally effective [7]. Moreover, studies limited to pneumococcal disease demonstrate that addition of a macrolide improves survival [8]. It also seems improbable that synergistic killing is responsible, as equivalency with fluoroquinolones would be expected.
Many researchers have focused on the pleiotropic immunomodulatory effects [9] observed with macrolides as the reason why these agents may be beneficial in CAP. Macrolides, at doses lower than those required for antibacterial activity, alter the production of cytokines and chemokines, and reduce cellular infiltrates and mucus production [9]. The immunomodulatory effects of macrolides are illustrated by diffuse panbronchiolitis. A chronic progressive lung disease found largely in Japan, diffuse panbronchiolitis is characterised by mixed restrictive and obstructive pulmonary function, interstitial infiltrates and Pseudomonas aeruginosa infection. Long-term, low-dose macrolide treatment improves lung function and increases 10-year survival rates from around 15 to 90% [9].
Macrolides are now being explored in new therapeutic strategies for a wide range of pulmonary and extrapulmonary conditions, including asthma, cystic fibrosis, rhinosinusitis, inflammatory bowel disease, psoriasis and rosacea [9]. Clearly immunomodulatory effects could be important in altering mortality in CAP, but these drugs also have direct effects on bacteria through inhibiting quorum sensing.
Quorum sensing describes bacterial cell-to-cell communication that occurs as a function of changing cell density. These communication pathways are important in the pathogenesis of bacterial species causing human disease, including Staphylococcus aureus, Streptococcus pneumoniae, Escherichia coli and P. aeruginosa [10,11]. Quorum-sensing bacteria produce and release signal molecules or autoinducers, which regulate gene expression within the bacterial population and are closely linked to both biofilm formation and expression of virulence factors. Biofilms are structured populations of bacteria within a polysaccharide matrix, and these growth forms are more resistant to antibiotics. The discovery of biofilms as an entity did not occur until the late 1970s, and they are often still only considered in the context of chronic or device-associated infections; however, pneumonia caused by S. pneumoniae exists as a biofilm in lung tissue [11]. Acute bacterial infections associated with biofilm formation might also be relatively common. One of the diagnostic criteria for biofilm infection is a culture-negative result despite a clinically documented infection [12], a situation encountered in 30 to 50% of severe sepsis and septic shock [6].
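The density dependence at the heart of quorum sensing can be illustrated with a deliberately simple model (all numbers are arbitrary, for intuition only): each cell secretes autoinducer at a constant rate, the signal degrades at a first-order rate, and the population switches on virulence and biofilm genes once the steady-state signal concentration crosses a threshold.

```python
def autoinducer_steady_state(cell_density, secretion_rate=1.0, degradation_rate=0.5):
    # dA/dt = secretion_rate * N - degradation_rate * A  ->  A* = s * N / d
    return secretion_rate * cell_density / degradation_rate

QUORUM_THRESHOLD = 100.0  # arbitrary units

for density in (10, 40, 60, 200):
    signal = autoinducer_steady_state(density)
    if signal >= QUORUM_THRESHOLD:
        state = "quorum reached: virulence/biofilm genes ON"
    else:
        state = "below quorum: genes OFF"
    print(f"cell density {density:>4}: steady-state signal {signal:6.1f} -> {state}")
```

In this picture, a quorum-sensing inhibitor acts like raising the threshold or lowering the effective secretion rate, keeping the population below quorum even at high cell density.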
Macrolides at subminimum inhibitory concentrations have been demonstrated to antagonise quorum sensing in P. aeruginosa, resulting in diminished virulence, biofilm formation and oxidative stress response [13]. Significantly, inhibition of quorum sensing reduces the pathogenicity of bacteria and impedes formation of antibiotic-resistant biofilms, and therefore offers an attractive mechanism whereby the addition of a macrolide could reduce mortality in CAP [6]. If macrolides do confer additional efficacy because of immunomodulatory effects or inhibition of quorum sensing, or both, one might expect them to be an effective therapeutic strategy applicable to many other infections encountered in critically ill patients. Indeed, the addition of clarithromycin in patients with ventilator-associated pneumonia accelerated resolution of pneumonia and weaning from mechanical ventilation [14].
It may be possible to approach the question of whether immunomodulation or inhibition of quorum sensing is more important in reducing mortality experimentally. Lesprit and colleagues described the important role of P. aeruginosa quorum sensing in rat pulmonary infection using the virulent wild-type strain P. aeruginosa PAO1 and the less virulent mutant strain P. aeruginosa PAOR with a deficient quorum-sensing pathway [15]. Using this model system, it would be beneficial to examine whether macrolides act predominantly through disrupting quorum sensing, as one would then expect to see little reduction in mortality caused by a large inoculum of the mutant PAOR but a significant effect on pneumonia caused by a smaller dose of the wild-type PAO1.
At a time when few new antimicrobial agents are being commercially developed for clinical use and the burden of infection caused by multiresistant bacteria is increasing, the need for novel approaches to the management of infection is essential. Quorum sensing determines both bacterial virulence and biofilm formation; it is a common pathway for pathogens and represents an attractive new target for the development of drugs in the fight against infection [10]. | 2014-10-01T00:00:00.000Z | 2010-07-20T00:00:00.000 | {
"year": 2010,
"sha1": "5c513d71a1dafbddf580064f450ee37e9c515e52",
"oa_license": "CCBY",
"oa_url": "https://ccforum.biomedcentral.com/counter/pdf/10.1186/cc9084",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "5c513d71a1dafbddf580064f450ee37e9c515e52",
"s2fieldsofstudy": [],
"extfieldsofstudy": []
} |
259480122 | pes2o/s2orc | v3-fos-license | The constitutional standing of philosophical and non-religious beliefs in Bulgaria
Abstract: This paper explores freedom of philosophical and non-religious beliefs in Bulgaria. It outlines the constitutional framework of this freedom both in the contemporary Bulgarian constitutional model based on the current 1991 Constitution and in Bulgarian constitutional history as well as previous fundamental laws – the 1879, 1947, and 1971 Constitutions. The paper explores both the legal and socio-legal aspects of freedom of philosophical and non-religious beliefs, analysing the role of normative ideologies in the context of secularism – specifically, the largely secular Bulgarian society. The paper demonstrates the relationship between freedom of philosophical and non-religious beliefs and freedom of religion in Bulgaria. Attention is devoted to organised philosophical and non-religious beliefs. The article explains why freedom of philosophical and non-religious beliefs is protected as an individual and not as a collective right. The paper also explores the practical problems of philosophical and non-religious beliefs in Bulgaria.
Introduction
This paper aims to explore the freedom of philosophical and non-religious beliefs. It provides a case study focused on the constitutional system of Bulgaria. This country might be of interest for the comparative study of law and religion due to the fact that it has been influenced by three civilisational models, each with its own ideas of moral autonomy and freedom of independent judgement, as well as a range of philosophical beliefs. The three models form three Weltanschauungs (worldviews) – European, Ottoman, and Soviet-type communist socio-legal cultures. 1 In this paper, I analyse the evolution of the freedom of philosophical and non-religious beliefs in Bulgarian constitutional history. I explore its relationship with freedom of religion and conscience. I offer an outline of the legal design and the socio-legal dilemmas related to freedom of philosophical and non-religious beliefs in the contemporary Bulgarian constitutional order. This is done with a view to particular concepts of normativity – or normative ideologies – that serve as framing concepts of the country's symbolic-imaginary constitutionalism. 2 The paper demonstrates the embeddedness of ideas about freedom of philosophical and non-religious beliefs in the context of secularism that predominates in Bulgarian constitutional anthropology. Moreover, it shows the main factors that impact the design of freedom of philosophical and non-religious beliefs in the Bulgarian constitutional and socio-legal order. It also explores the relationship between the right to religious freedom and the right to non-belief in Bulgarian constitutionalism and Bulgarian constitutional history. 3 1 For a fundamental analysis of the interplay between the religious and the secular in the Balkans and Bulgaria, see: Evstatiev, Eickelman 2022, 27-48. 2 Belov 2022, 156-169. 3 See: Belov 2021a, 171-194; Belov 2021b, 187-221.
Freedom of philosophical and non-religious beliefs in the Bulgarian constitutional order and its repercussions in the socio-legal realm
The national debate on philosophical and non-religious beliefs in Bulgaria is rather limited. It is limited both in terms of theory and public discourse. Such limitations exist in the constitutional domain, as well as in discourses on law and religion that impact different areas of law. Indeed, philosophical and non-religious beliefs are considered to be relevant to constitutional debates, mostly as part of larger discussions on religious freedom. Thus, freedom of such beliefs has definitely been overshadowed by freedom of religion. It would not be an exaggeration to say that freedom of philosophical and non-religious beliefs represents part of the "missing constitution" in both theory and practice. This remains the case despite the fact that this freedom has relatively extensive standing in the written constitution, being explicitly provided for in several constitutional texts. 4 During the early years of transition from communism to democracy, the public debate was dominated by the theme of de-communisation. It concerned the replacement of communist ideas, Marxism, paternalism, and etatism with the normative ideologies of liberal democracy supplemented by other supportive normative ideologies. 5 This discourse contained reasoning related to liberalism, the welfare state, and global neoliberalism. More recently, the above-mentioned discourse has been paralleled, upgraded, and gradually replaced by debates centred on the cleavages of "neoliberalism vs. neo-conservatism" and "globalism vs. neo-nationalism." During the years of the COVID-19 pandemic, there has been a rise in, and an intense clash between, pro- and anti-vaccination movements, both based on a specific set of non-religious beliefs. Apart from the discourses largely devoted to political ideologies, a national debate on philosophical and non-religious beliefs in Bulgaria is limited. Bioethical discussions that usually fuel contemporary philosophical debates and engage both religious and non-religious beliefs are not intense. Instead, they are limited to an elitist and self-enclosed scientific discourse.
4 Very limited analysis of philosophical and non-religious beliefs, mostly in the context of freedom of thought and freedom of religion and thus overshadowed by them, can be found in Drumeva 2013, 244-245; Iliev 2015, 72-85. These works all largely explore freedom of religious beliefs while only mentioning its non-religious counterpart – freedom of philosophical and non-religious beliefs. A possible reason for this could be that Bulgarian society is highly secular and became extremely detached from religion during communism; thus, protecting philosophical and non-religious beliefs was, to an extent, unproblematic during the years of transition, while the re-establishment of real freedom of religion was of much greater concern. 5 Regarding the concept of normative ideology, see: Belov 2022.
However, there are two exceptions to this. The first is the intense debate on gender issues triggered by the non-ratification of the Istanbul Convention 6 by the Republic of Bulgaria. The core of the debate has been the binarity or plurality of gender and the concept of marriage according to the Bulgarian Constitution. 7 The second exception concerns the introduction of anti-pandemic measures in the context of the COVID-19 pandemic. It has produced a massive cleavage between vaxxers and anti-vaxxers, that is, those in favour of or against vaccines in general and the COVID-19 vaccines in particular. This debate has involved a range of philosophical beliefs related to the essence, scope, and limits of liberty, autonomy, and self-determination. As in many countries around the world, the polarisation of society around these two groups has rapidly gained an ideological dimension and has resulted in pro-vaccination and anti-vaccination movements based on particular ideologies. These pro-vaxx and anti-vaxx ideologies have gradually and partially taken the shape of philosophical and non-religious beliefs.
Normative ideologies in a largely secular society
Bulgarian society is largely secular. According to Article 13 para. 3 of the Constitution, Eastern Orthodox Christianity is a traditional religion in the Republic of Bulgaria. However, the Constitution also proclaims, in Article 13 paras. 1 and 2, that religious denominations shall be free and religious institutions shall be separate from the state. The country's statistical surveys differentiate citizens into multiple groups according to their religion, with Eastern Orthodox Christianity being considerably predominant. Nevertheless, Bulgarians generally define themselves in religious terms on the grounds of tradition, habit, and a feeling of historical belonging rather than for intrinsically religious reasons. In other words, while most people define themselves as Christians, Muslims, or members of other religious groups, they in fact hold largely secular attitudes towards the state, society, and social relations.
Consequently, Christian Orthodoxy and other religions (particularly Islam and Judaism) comprise an important part of the heritage of Bulgarian society. Such religions are typically practiced as part of a given tradition and are tied to a specific social and group identity, as well as to sacral intentions in some cases. Hence, Bulgarian society is composed of various religious communities, with the clear majority being Christian Orthodox; nevertheless, there is also a substantial group of non-believers, some of whom are atheists or practice religious rites for other cultural-traditional reasons. Most people in Bulgaria participate in religious practices only on formal occasions (e.g., big holidays such as Christmas, Easter, Bayram, Hanukkah, etc.). This is especially true among the Christian Orthodox majority, whose members mainly participate in religious activities for traditional rather than genuinely religious reasons. 8
This is why the map of normative beliefs in Bulgaria consists mainly of secular ideologies. Marxism and communism were the official beliefs in the period 1947-1989; due to a lack of sufficient data, it is difficult to determine the extent to which these beliefs enjoyed substantial social support or were simply imposed official ideologies. Feminism is an ideology that has also been intensely promoted by the state: the full equality of men and women in all spheres of life was a key priority of the communist government and the Bulgarian Communist Party. 9 After 1989, some parts of Bulgarian society have continued to adhere to leftist ideologies (e.g., Marxism, communism, socialism, and social democracy). In recent years, though, in times of constitutional polycrisis and when the rule of law and constitutionalism are in a state of flux, 10 nationalism has been on the rise. Until recently, this ideology had been quite limited. Now, it serves mostly as a response to concerns stemming from perceived crisis (mis)management by public power institutions and growing anxiety among parts of the population resulting from pandemic-related, financial, economic, and migration challenges to the constitutional and political order.
8 An original approach to the religious cultural heritage in post-atheist and post-communist Bulgaria is proposed in Kalkandjieva 2022, 134-160.
9 For more about feminism in Bulgaria, see: Daskalova, De Haan, Loutfi 2006.
10 See: Belov 2022.
The predominant ideology in contemporary Bulgaria appears to be liberalism. 11 However, liberalism has taken different forms (liberal-democratic, liberal, libertarian, and neo-liberal) and has been used in conjunction with a range of other ideologies. For example, neoliberal approaches have massively affected the philosophy of the economic transition. Nevertheless, a predominant part of society seems to believe in a highly awkward mix of ideas that are often scarcely compatible, including liberalism, state paternalism, individualism, and versions of egalitarianism.
It is interesting to note that egalitarianism, which was not only inherited from the period of communism but also has deep roots in the period before the reinstatement of modern Bulgarian statehood in 1878, 12 coexists with individualism, mostly in the form of the consumerist individualism established during the transition from communism to democracy after 1989. Both egalitarianism and individualism have deeply influenced the landscape of philosophical and non-religious beliefs throughout Bulgarian constitutional history. 13 Hence, these two normative ideologies, which have typically had an uneasy coexistence, are now parallel parts of Bulgarian constitutional anthropology.
Traditionally, the ideological code of Bulgarian society is moderation. This is a feature that makes fully fledged democracy and a liberal society impossible, while also preventing genuine and extreme forms of authoritarianism. Unfortunately, Bulgarian society has inherited egalitarianism from its Ottoman and communist past while still adhering to the radical individualism and egoism borrowed from the currently predominant neoliberalism. This leads to a strange mixture of sometimes contradictory ideologies: for example, a belief in the need for substantial equality combined with unrestrained individualism, or distrust in state and public institutions combined with resilient forms of paternalistic attitudes.
11 For a highly interesting analysis of the interplay between liberalism, atheism, and ethnophyletism in the constitutive phase of the Bulgarian nation before and after the liberation from Ottoman rule, see: Denkov, Vulchev, Gueorguieva 2020, 9-33.
12 Until 1878, the Bulgarian territories were part of the Ottoman Empire.
13 For more information about the shapes of Bulgarian constitutional identity, see: Belov 2017.
Finally, apart from moderation, the predominant anthropological mood of Bulgarian society is scepticism. Scepticism is both implicit in diffuse ideologies and widely applied in practice. It takes the form of rule scepticism, institutional scepticism, and substantial (result-oriented) scepticism. Rule scepticism and institutional scepticism concern the formal legal model, whereas substantial scepticism is focused on the "law in action" and socio-legal practices.
The concept of alternative spirituality is not recognised as a theoretical, legal, or socio-legal concept in Bulgarian constitutional science. Thus, it must be defined negatively, as the antipode of religion as mainstream spirituality. In fact, in Bulgaria, non-religion in its different forms, such as atheism 14 and non-belief, constitutes the overarching mainstream spirituality, whereas religion is the alternative spirituality. Apart from this general finding, there are various institutionalised forms of spirituality (e.g., Freemasonry) practiced in the country. Nevertheless, it is difficult to assess how widespread these institutionalised forms of spirituality are, due to the partially non-public character of their practices. A typical Bulgarian quasi-religious belief is so-called Danovism, a mixture of religious, bioethical, and psychological ideas and practices shared by a community of followers of Petar Danov, 15 a spiritual leader active during the interwar period. This quasi-religious movement also has international followers, but its appeal is largely limited to Bulgaria.
Compared to religious beliefs, philosophical and non-religious beliefs are naturally embedded in secularism. Thus, secularism serves as an overarching paradigm in this context. Secularism is a constitutional principle that runs parallel to the secular character of the state. It is also the general intellectual paradigm under which philosophical and non-religious beliefs are subsumed in counter-position to religion and religious beliefs. Apart from this counter-position, however, philosophical and non-religious beliefs are not necessarily interrelated with secularism.
14 Dimitar Denkov, Georgi Vulchev, and Valentina Gueorguieva distinguish three forms of atheism that emerged in the course of the development of Bulgarian modernity: literary-poetic, political, and scientific atheism. See: Denkov, Vulchev, Gueorguieva 2020, 16.
15 For more about Danovism, see: Tončeva 2015; Bončo 2002, 415. For the unofficial and implicit promotion of Danovism and other occult and mystic movements in late communism due to the influence of Lyudmila Zhivkova (the daughter of the communist leader Todor Zhivkov), see: Denkov, Vulchev, Gueorguieva 2020, 19-20.
Relationship between the right to religious freedom and the right to non-belief in Bulgarian constitutionalism and Bulgarian constitutional history
Freedom of philosophical and non-religious beliefs has never been considered a form of religious freedom in Bulgaria. It has either not been provided for as a separate human right in the constitutional text, although implicitly safeguarded by other constitutional rights (e.g., in the 1879 Tarnovo Constitution), 16 or it has been regulated both as a constitutional right separate from religious freedom and in conjunction with it (e.g., in the 1947, 17 1971, 18 and 1991 Constitutions). Freedom of philosophical and non-religious beliefs was generally de facto safeguarded in socio-political practice in the period from 1879 to 1944, albeit with substantial curtailment and limitations during World War II. Subsequently, it was suppressed during the communist regime. This freedom has largely been safeguarded both in law and in practice since 1989, especially after the adoption of the 1991 Constitution. Freedom of philosophical and non-religious beliefs has been addressed as a distinct right, although it has typically been subsumed into discussions of the freedom of conscience and thought. Nevertheless, it has also been included in the triad of freedom of thought, conscience, and religion, as these are considered the three main forms of the expression of human beliefs. The current 1991 Constitution uses in Article 37 para. 1 the formulas "freedom of conscience, freedom of thought and choice of religion and of religious or atheistic views" and "freedom of conscience and religion." It has to be noted, at least on the basis of the data at my disposal, that non-believers have thus far not made any claims for the recognition of their beliefs in the name of freedom of religion. Where such claims have been raised, this has been done with recourse to freedom of thought and conscience or via other rights that might secure such positions, for example, information and communication rights or freedom of association and assembly.
16 For the constitutional principles and constitutional rights and freedoms in the first Bulgarian Constitution, see: Belov 2015, 859-897; Belov 2013, 41-57.
17 Конституция на Народна република България от 6.12.1947 г.; hereinafter: 1947 Constitution.
The first Bulgarian Constitution, the 1879 Tarnovo Constitution, provided for freedom of religion. Nevertheless, it did not contain any provisions related to freedom of philosophical and non-religious beliefs. These beliefs were not prohibited but simply did not have constitutional status and consequently enjoyed no constitutional protection. The reason for this is that 19th-century constitutionalism was preoccupied with freedom of religion and much less with freedom of philosophical and non-religious beliefs, despite the increasing importance of the latter.
Indirect safeguards of freedom of philosophical and non-religious beliefs include freedom of education, freedom of assembly, and the indemnity of MPs for their speeches in Parliament. Although not explicitly mentioned in the respective constitutional provisions, these constitutional rights and the safeguard for the independence of MPs also indirectly protect freedom of philosophical and non-religious beliefs, which can be defended in Parliament, taught and discussed in schools, and disseminated and debated in a range of possible types of assemblies. The same is true for freedom of the press, which was proclaimed and safeguarded in the Tarnovo Constitution in a liberal-democratic manner, although it was infringed upon via numerous legislative provisions and in the course of socio-political practice.
The first communist Constitution of 1947 is also the first Bulgarian Constitution that explicitly proclaims freedom of conscience together with freedom of religion. The misuse of religion for political purposes and the establishment of political organisations on a religious basis are prohibited in this document. The 1947 Constitution requires that education be secular and conducted in a "democratic and progressive spirit." The dissemination of fascist, antidemocratic, and imperialist ideology is prohibited.
The second communist constitution, the 1971 Constitution, is far more ideologically indoctrinated. It is entirely preoccupied with the imposition of the official communist ideology. This ideology also predetermines the framework for freedom of philosophical and non-religious beliefs, which is curtailed and legally restricted to official and permitted forms of thought. The same is in fact true for the period of validity of the 1947 Constitution, but from 1947 to 1971 the severe restrictions on freedom of philosophical and non-religious beliefs stemmed from political practice and ordinary legislation and did not have such manifest and clear constitutional standing.
The 1971 Constitution requires that all societal organisations work towards the establishment of "socialist consciousness." This political imperative clearly constitutes the general formula for the permitted scope of philosophical and non-religious beliefs, predetermining the spectrum of, or more precisely the lack of, ideological pluralism. While the guidelines of ideology were set by the Bulgarian Communist Party in this Constitution, the role of the so-called Fatherland Front (an organisation that included all official political organisations) was to serve, according to Article 11 of the 1971 Constitution, as "a mass school for patriotic and communist education of the population." Moreover, according to Article 39 of the 1971 Constitution, the education of the youth in the communist spirit is an obligation of the whole society. Furthermore, a range of educational and ideological tasks are ascribed to the family, school, state institutions, and societal organisations. Article 45 para. 3 provides that education should be grounded in the achievements of modern science and Marxist-Leninist ideology.
The 1971 Constitution provides for freedom of conscience and freedom of religion. It allows for the performance of religious rites and explicitly permits anti-religious propaganda. The separation of the state from the church and religion, and thus the principle of the secular state, are also constitutionally proclaimed.
In sum, freedom of non-religious philosophical beliefs was proclaimed for the first time in Bulgaria by the communist constitutions of 1947 and 1971. Unfortunately, this was done as part of the Marxist ideological framework, with a predominantly negative attitude towards religion and freedom of religion (despite its constitutional proclamation) as well as rigid and wide-ranging limits to the pluralism of ideas and ideologies. Hence, the communist period was marked by the rise of atheism and secularism, 19 while freedom of non-religious and philosophical beliefs was nominal, fictitious, or limited to general legal declarations, and the practice of such beliefs was largely restrained. 20
The current 1991 Constitution provides for and protects freedom of philosophical and non-religious beliefs in multiple ways and in a range of provisions. According to Article 6 of the Constitution, no discrimination is allowed on the basis of religious or non-religious beliefs. Furthermore, Article 11 provides for the principle of political pluralism and prohibits the proclamation of any ideology as an official ideology of the state. This provision is a clear rejection of the establishment of official normative ideologies by the two communist constitutions. Moreover, Article 13 provides for the principle of the secular state. It declares that Eastern Orthodox Christianity is the traditional religion of Bulgaria, but there is no official religion, and the principles of the separation of religion and state and of religious pluralism shall prevail. The Constitution prohibits the use of religious or non-religious convictions and beliefs for political purposes.
19 For a comparison between communist atheism and Western secularism, see: Metodiev 2022, 115-134.
20 For state atheism and the usage of the past during the period of 1944-1989, see: Denkov, Vulchev, Gueorguieva 2020, 15-20.
The most important constitutional provisions regarding freedom of philosophical and non-religious beliefs are Articles 37 and 38 of the 1991 Constitution. According to Article 37, freedom of conscience, freedom of thought, and freedom of religion and of religious or atheistic views are inviolable. The state shall assist in the maintenance of tolerance and respect among believers of different denominations, and between believers and non-believers. The limitations to the freedom of conscience and religion are set out in Article 37 para. 2 of the 1991 Constitution, according to which the freedom of conscience and religion shall not be practiced to the detriment of national security, public order, public health and morals, or the rights and freedoms of others.
A central role in the protection of freedom of philosophical and non-religious beliefs is also played by Article 38 of the Constitution. According to it, no one shall be persecuted or restricted in his rights because of his views, nor shall anyone be obligated or forced to provide information about his own or another person's views. 21 Furthermore, freedom of philosophical and non-religious beliefs is also related to freedom of opinion and the right to information, because the information and communication rights set out in Articles 39-41 of the 1991 Constitution are the instruments for forming and disseminating philosophical and non-religious beliefs. 22
There is no case law of the European Court of Human Rights that has affected the Bulgarian Constitution or legislation and the national approach to freedom of philosophical and non-religious beliefs. There have been numerous Court decisions against Bulgaria related to freedom of religion, 23 but they have not concerned freedom of philosophical and non-religious beliefs. The same is true for the rules of international law protecting philosophical and non-religious beliefs.
Organised philosophical and non-religious beliefs: the protection of the right to non-belief as a collective right
The Bulgarian constitutional order and Bulgarian legislation do not recognise any form of collective rights. They do, however, recognise collective forms of practicing individual rights (e.g., associations, assemblies, etc.), and there is widespread consensus in the literature that these rights remain individual rights that are simply practiced in a collective form. Thus, no legal protection is provided for philosophical and non-religious beliefs as a collective right; only the individual human rights through which philosophical and non-religious beliefs are practiced enjoy legal protection.
There are different forms of protection available to philosophical and non-religious believers. They can appeal acts of the administration infringing upon their rights before the superior administrative authorities. Moreover, they may approach the national ombudsman. The most intense form of protection is offered by the judiciary, which includes both the courts and the state prosecutor's office. Unfortunately, there is no direct constitutional complaint through which interested parties can immediately approach the Constitutional Court; they can obtain the Constitutional Court's protection only via the medium of the ombudsman or the Supreme Bar Association, which can eventually use their competence to approach the Constitutional Court. Finally, there is the Commission for Protection against Discrimination, a specialised administrative body that serves as a safeguard of freedom of conscience and thus of freedom of philosophical and non-religious beliefs. This commission was established and provided for by the Protection against Discrimination Act, adopted in 2004. 24
Philosophical and non-confessional beliefs can be expressed, practiced, and disseminated in two main ways. First, this can take a non-institutionalised form, although this type of practice is difficult to research systematically due to its dispersed character. Second, such beliefs can be represented via different organisational forms. Typically, this is done via non-governmental organisations using freedom of association as their organisational form.
21 Dimitar Denkov, Georgi Vulchev, and Valentina Gueorguieva differentiate between three phases of secularity and desecularisation in contemporary Bulgaria: the "religious boom" of the 1990s; the coexistence of new religious practices, old (socialist) atheism, and other secular inspirations; and the recent intensification of religious dogmatism running parallel to the growing popularity of ethnic nationalism and the rise of the alt-right. See: Denkov, Vulchev, Gueorguieva 2020, 20-21.
22 Tančev 2003.
The period since the reestablishment of modern Bulgarian statehood has witnessed a proliferation of pluralism of philosophical and non-religious beliefs, which spread alongside the emergence and gradual establishment of modern Bulgarian civil society. The scope and number of such beliefs increased exponentially and in parallel with the cultural and educational development of Bulgarian society. After the coup d'état of 19 May 1934, state intervention in civil society and the public sphere increased markedly. This resulted in the establishment of state-controlled official associations in different spheres of civil society. Most of them concerned freedom of association in the spheres of media, labour and social welfare protection, and art, among others. However, this also indirectly affected the modes and channels for the spread of religious and philosophical ideas. After Bulgaria's entry into World War II, censorship and state influence on the practice of philosophical and non-religious beliefs rose tremendously. 25
State control over forms of expression of philosophical and non-religious beliefs continued and was severe during communism. Pluralism of philosophical and non-religious beliefs was restored in 1989 and entrenched in the 1991 Constitution. Article 53 para. 1 of the 1971 Constitution was amended in 1990 to provide that citizens enjoy freedom of conscience and religion and are thus also allowed to conduct religious or atheistic propaganda. The wording of this amendment was still visibly influenced by communist constitutional rhetoric, especially in terms of the right to propaganda. According to Article 37 of the 1991 Constitution, freedom of conscience, freedom of thought, and choice of religion and of religious or atheistic views shall be inviolable. Moreover, the state shall assist in the maintenance of tolerance and respect among believers of different denominations and between believers and non-believers.
25 See: Belov 2015, 859-897.
There is a wide range of philosophical and non-confessional associations present in Bulgaria. Their diversity ranges from scientific associations to NGOs, even including quasi-religious organisations such as masonic lodges. Some of them have party-political leanings, e.g., associations promoting various political ideologies and ideas. Others are of a purely scientific nature. Further still, there are organisations that function as grassroots pressure groups for the promotion of different ideals, ideologies, beliefs, and interests.
Philosophical and non-confessional associations play a social role at the national level. They have a multitude of functions that are performed separately or in conjunction with one another. These associations form public opinion, promote educational goals, enhance competition among ideas and ideals, and shape the national ideals and the ideological, conceptual, and imaginary landscapes. Such associations also function as pressure groups for the promotion of their aims and ideals. Nevertheless, there is no safeguarded place or role for them in their relations with either the state or religious organisations. Moreover, no concrete institutionalised forms or results can be identified as clear-cut examples of a pivotal or systematic role of philosophical and non-confessional associations at the national level.
Practical problems of philosophical and non-religious beliefs
Freedom of association has been provided for by all four Bulgarian constitutions. During the period of the 1879 Tarnovo Constitution, this right played an important role in structuring and organising the emergent civil society in newly liberated Bulgaria. In fact, some non-religious societal organisations were of pivotal importance to the establishment of Bulgarian civil society in the course of the 19th century, even before the reestablishment of modern Bulgarian statehood. There were cultural societies that aimed at spreading modernisation, knowledge, and national-cultural awareness in line with the concepts of rationalism and the Enlightenment. In particular, the network of the so-called chitalishte ("reading society," "читалище" in Bulgarian) functioned as the neural network of the Bulgarian intellectual and political revival. Notably, the Bulgarian Academy of Sciences was established in 1869, ten years before the reestablishment of Bulgarian statehood.
Although Christian Orthodoxy was formative for the national identity and served as an instrument for the construction of modern Bulgarian spirituality until World War II, the emergent Bulgarian society and state were largely secular. The Orthodox Church was indeed proclaimed as predominant by the Tarnovo Constitution, which also recognised freedom of belief for other religious denominations, and there were lessons in Orthodox religion in schools. Nevertheless, the state was separated from the church, and civil society was structured mostly around secular organisations. Thus, Christian Orthodoxy served much more as a formant of national identity and belongingness than as an active element in the construction and functioning of public life and debate.
Hence, there were two pillars of the formative phase of the intellectual-spiritual design of Bulgarian statehood: the Orthodox Church and a range of non-religious and non-confessional organisations. The latter included the Bulgarian Academy of Sciences, the "reading societies," and various clubs and non-governmental organisations, some of which leaned towards or were affiliated with various political movements.
During the communist period, religion was not prohibited, but it was widely out of favour. Both communist constitutions provided for freedom of religion and conscience. Article 78 of the 1947 Constitution provided that citizens have freedom of conscience and religion, as well as freedom to perform religious rites. Furthermore, according to this Constitution, the church was separate from the state, and the misuse of religion for political purposes, including the establishment of political organisations based on religion, was prohibited. According to Article 53 of the 1971 Constitution, citizens enjoyed freedom of conscience and religion. They had the right to perform religious rites and to conduct anti-religious propaganda. The separation of the state from the church and the prohibitions on the misuse of religion for political purposes and on the forming of political parties on religious grounds were preserved.
Religious organisations, the Orthodox Church included, were marginalised. Only official and traditional religious denominations (e.g., Orthodox Christianity, Islam, and Judaism) were half-heartedly permitted. This does not mean that public discourse was based on a network of philosophical and non-belief organisations leading to free deliberation in a religiously detached context. The role of religion as a dominant ideology (although traditionally weaker in Bulgaria in comparison to other Orthodox states such as Greece, Romania, or Russia) was replaced by the official communist ideology. Thus, there was no space for philosophical and non-religious beliefs other than Marxism (or, in the language of the official ideology, Marxism-Leninism). This is despite the formal proclamation of freedom of association and freedom of assembly by both communist constitutions, the 1947 and 1971 Constitutions of the People's Republic of Bulgaria. Moreover, religious teaching in schools was replaced with highly ideological teaching in a context indoctrinated with communist ideology.
The 1991 Constitution provides for a wide range of human rights that enable the practice of philosophical and non-religious beliefs. The current Bulgarian Constitution pays tribute to Christian Orthodoxy as a traditional religion (replacing the "dominant religion" formula used by the Tarnovo Constitution) while allowing for wide-ranging freedom of other religious or non-religious beliefs. Philosophical and non-confessional organisations are formed, organised, and operate on the basis of the proclamations in Articles 37 and 38 as well as the freedom of association provided by Article 44 of the 1991 Bulgarian Constitution. Philosophical and non-religious beliefs are practiced both in organised and non-organised ways. This entails practices that result in the establishment of different organisations, societies, etc., aimed at promoting a given organisation's beliefs. Moreover, freedom of assembly as well as freedom of opinion and information are also tools for such practices.
Since 1947, education in Bulgaria has been fully detached from religion. After 1989, democracy was restored, and since 1991, when the current Bulgarian Constitution was adopted, no compulsory religious education exists. Thus, Bulgarian education remains largely secular and open to the pluralism of philosophical ideas.
The right to spiritual assistance for philosophical and non-religious believers has never been provided for in Bulgarian legislation. Freedom of conscience and non-religious beliefs is provided for by the 1947, 1971, and 1991 Constitutions; however, no general right to assistance, advice, or aid in forming or expressing philosophical or non-religious beliefs exists. Such assistance is provided in practice and is among the aims of many NGOs 26 and various cultural or educational organisations. Nevertheless, it has never been established as a legal right by constitutional or ordinary legislation. The educational system at all levels is among the main formants of philosophical and non-religious beliefs, in parallel with traditional and new media and the family.
Criminal protection of philosophical and non-religious beliefs is also not provided for by Bulgarian legislation. This is in contrast with the criminal protection of religious or political beliefs offered by the Criminal Code. 27 Hence, philosophical and non-religious beliefs are protected only insofar as they can be subsumed into or associated with political beliefs. The reason for this is that Bulgarian legislation offers special protection to beliefs that are deemed traditionally vulnerable to different forms of violence, e.g., religious and political beliefs.
The only form of criminal protection offered to philosophical or other non-religious beliefs is granted by Article 172 of the Criminal Code, which provides protection against discrimination based on such beliefs when applying for a job. The means of protection against such discrimination are the criminal procedure commenced by the state prosecutor's office and the administrative procedure available to claimants before the Commission for Protection against Discrimination.
Baptism is not required by the state or by Bulgarian legislation as a prerequisite for any aspect of civic status. Religious marriage is optional and can be concluded only after a couple is secularly married; the legal consequences of marriage attach only to the civil, not the religious, marriage. There is no requirement for children to be baptised. Thus, non-baptism is the legal norm, while baptism is simply an option available to believers. The administration of baptism is governed by the rules of the religious communities, which are also responsible for administering the related data in their registers. There are no legal norms concerning the recognition of baptism by the various religious and denominational communities.
The main bioethical issues on which philosophical and non-confessional associations have taken an official position are issues of gender. These associations have participated in the overall social debate in the media and on different deliberative platforms. They have also been constituted as amici curiae in cases before the Constitutional Court and, in parallel with the main official religious denominations (various Christian, Muslim, and Jewish denominations), have been offered the opportunity to provide advisory opinions on the issues of sex and gender under the 1991 Bulgarian Constitution. This debate was largely triggered by the non-ratification of the Istanbul Convention by the Republic of Bulgaria. Philosophical and non-confessional associations have also been active in the sphere of ecology and environmental protection.
It is extremely difficult to summarise the attitudes of traditional religious denominations with regard to individual and organised philosophical and non-religious beliefs. Generally, there is a mood of tolerance, reconciliation, and peaceful coexistence, for several reasons. Bulgarian society has always been based on religious tolerance, and both the state and the religious denominations have tolerated freedom of conscience and the pluralism of beliefs, apart from the communist period, when Marxism was imposed as the official ideology of the state. The communist period massively promoted atheism and non-religious beliefs; thus, after the fall of communism, religious denominations have been rather passive and self-restrained, not expanding into the spheres of secularism. The only exceptions are some forms of radical Islam, which have been promoted in some geographically remote parts of Bulgaria. 28 The best example of a dialogue between religious denominations and philosophical and non-confessional associations at the national level is the one that took place to save the Bulgarian Jewish population during World War II with the aid of the Bulgarian Orthodox Church. The active role of its head, the Bulgarian Exarch Stephan, deserves special attention. 29
Conclusion
Freedom of philosophical and non-religious beliefs is a permanent part of Bulgarian constitutionalism. It has been implicitly established since the liberation of Bulgaria in 1878-1879 and gained explicit constitutional recognition in the 1947, 1971, and 1991 Constitutions. Nevertheless, the 1947 and 1971 Constitutions, while providing for this freedom, did not actually serve as reliable safeguards for it. On the contrary, during the communist period, freedom of philosophical and non-religious beliefs was massively curtailed and infringed upon in political practice. This was a logical result of the official imposition of a single normative ideology, namely the Marxist-Leninist version of communism.
Freedom of philosophical and non-religious beliefs in Bulgaria has always been provided for in conjunction with freedom of religion. However, it has been largely overshadowed by freedom of religion, which is regulated in much greater detail at the constitutional and legislative levels and is much more intensely discussed in theory and in socio-political discourse in Bulgaria.
28 Some years ago, there was also a major case against some radical imams in the city of Pazardzhik. For the origins, developments, and tendencies in the development of Islam in Bulgaria and in Bulgarian society, see: Evstatiev 2022, 74-112.
29 Conversely, for an analysis of some instances of religious hatred in Bulgaria, see: Ilieva 2011.
In some cases, a lack of legal institutionalisation is not necessarily an indicator of the underdevelopment of a constitutional phenomenon. In other words, the sparseness of the legal provisions on freedom of philosophical and non-religious beliefs should not lead us to conclude that it is less protected than its counterpart, freedom of religion. In fact, both freedom of religion and freedom of philosophical and non-religious beliefs were endangered and infringed upon during the communist period, and both have been generally promoted and secured since the reestablishment of the rule of law and democracy with the 1991 Constitution.
"year": 2023,
"sha1": "4d8b8dc1f7a09c932ee46496afe49dbb9695ee06",
"oa_license": "CCBY",
"oa_url": "https://czasopisma.kul.pl/index.php/spw/article/download/15128/14449",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "baa647a835898e93dccf73f88a42e3b01015334a",
"s2fieldsofstudy": [
"Law",
"Philosophy",
"Political Science"
],
"extfieldsofstudy": []
} |
E3-ubiquitin ligase/E6-AP links multicopy maintenance protein 7 to the ubiquitination pathway by a novel motif, the L2G box.
Ubiquitin ligases are generally assumed to play a major role in substrate recognition and thus provide specificity to a particular ubiquitin modification system. The multicopy maintenance protein (Mcm) 7 subunit of the replication licensing factor-M was identified as a substrate of the E3-ubiquitin ligase/E6-AP by its interaction with human papillomavirus-18E6. Mcm7 is ubiquitinated in vivo in both an E6-AP-dependent and -independent manner. E6-AP functions in these reactions independently of the viral oncogene E6. We show that recognition of Mcm7 by E6-AP is mediated by a homotypic interaction motif present in both proteins, called the L2G box. These findings served as the basis for the definition of substrate specificity for E6-AP. A small cluster of proteins whose function is intimately associated with the control of cell growth and/or proliferation contains the L2G box and is thereby implicated in an E6-AP and, by default, HPV-E6-dependent ubiquitination pathway.
Selective proteolysis represents a fast and irreversible way to control the regulation of transition states in biology and is commonly employed from bacteriophages to human cells (1). Post-translational modification of a lysine residue of an acceptor protein by ε-amidation with the C-terminal glycine residue of a poly-ubiquitin chain serves as a signal for selective proteolysis by the 26 S proteasome (2). This modification is a particularly effective form of regulation for many cellular control processes where a certain unidirectional and irreversible sequential order of events is crucial for the fidelity of a system, such as the regulation of S-phase entry or the anaphase cell cycle transition (3). Ubiquitin is transferred onto substrate proteins by an enzymatic cascade (4). The ubiquitin is first activated as a thiol ester on a ubiquitin-activating enzyme in an ATP-dependent reaction; the ubiquitin thiols are then transferred to the ubiquitin-conjugating E2 enzyme and finally ligated to the target protein in concert with the E3 specificity factors. These E3 components provide substrate recognition and are thus generally considered to give specificity to the ubiquitin-mediated proteolysis system. E6-AP in association with the oncogenic E6 proteins from the human papillomavirus (HPV) resembles an E3 enzyme in that it targets the cellular tumor suppressor p53 for ubiquitin-mediated degradation (5)(6)(7). In contrast with other E3 proteins, such as the anaphase-promoting complex (3) or the S-phase entry-specific SKP1, Cdc53, F-box protein complex (8,9), E6-AP functions not only as an adapter between a ubiquitin acceptor substrate protein and the E2 enzyme but has intrinsic ubiquitin ligase activity. Thus E6-AP is a thiol ubiquitin acceptor from its E2 enzyme, selects substrate, and serves as the ultimate ubiquitin donor that directly couples activated ubiquitin to target proteins (10). Besides its E6-associated function, E6-AP mutations have been linked with Angelman syndrome, which serves as the first example of a genetic disorder associated with the ubiquitination pathway in mammals (11)(12)(13). Despite its E6-dependent association with p53 in HPV pathology, the role of the E6-AP ligase function in cell growth and proliferation is poorly understood. Very little is known about other cellular substrates of E6-AP, but the link with Angelman syndrome suggests the existence of additional, essential targets for the E6-AP E3-ligase system.
A sequential order of cell cycle transitions ensures that DNA replication takes place only once per cell cycle. This regulation is provided by the replication licensing factor (RLF), which is activated on exit from metaphase; as a consequence, cells become competent for the initiation of DNA replication. Before replication is ultimately initiated at the G1/S transition, however, a series of checkpoint controls during G1-phase must ensure the fidelity of the entire cell for duplication. Once started, the competence for the initiation of DNA replication is erased by inactivating RLF (14-16). Differential polyethylene glycol precipitations and subsequent purification from Xenopus egg extracts separate RLF into two components, RLF-B and RLF-M (17). RLF-M represents a hetero-hexamer complex that is conserved in all eukaryotes analyzed so far and consists of six members of the Mcm family of proteins. Mcm proteins are essential for the initiation of DNA replication (18) and were initially isolated as genetically defined mutants from Saccharomyces cerevisiae by their inability to replicate plasmids containing certain yeast replication origins (19). Mcm proteins are loaded at the origin of replication and move with the replication fork (20); they show homologies with DNA-dependent ATPases (21) and co-fractionate with a helicase activity in vitro (22), which taken together suggests that RLF-M is a putative candidate for a eukaryotic hexameric replication fork helicase.
In this study we show that Mcm7 interacts, in the context of the entire Mcm complex, with the HPV-18E6 oncoprotein, and this discovery led us to identify Mcm7 as a novel E6-AP substrate. We describe how Mcm7 is ubiquitinated in an E6-AP-dependent manner, and we define the substrate recognition sequence for E6-AP in Mcm7. Intriguingly, the E6-AP/Mcm7 interaction is obtained through a novel homotypic motif, and this motif in turn is also used by the HPV-18E6 protein for interaction. We call this motif the L2G box.
EXPERIMENTAL PROCEDURES
Yeast Two-hybrid Screen-2 × 10^7 yeast transformants from a human lymphocyte library in the vector pACT were screened with selection at 30 μg/ml 3-aminotriazole using pAS1-GAL4 fused in frame to full-length HPV-18E6 as a bait. Positive cells were analyzed for β-galactosidase activity. For confirmation of the screen, the plasmids from 3-aminotriazole-resistant and β-galactosidase-positive colonies were rescued in Escherichia coli DH5α and then retransformed into S. cerevisiae.
HeLa Cell Nuclear Extracts and Superose 12 Gel Filtration-HeLa cell nuclear extracts were prepared essentially as described previously (23). HeLa cells were grown in suspension culture and harvested at mid-logarithmic phase at a cell density of 4-5 × 10^5 cells/ml. Prior to harvesting, the cells were grown in the presence of the peptide aldehyde LLnL (25 μM, Sigma) for 2 h (the addition of the proteasome inhibitor proved to be essential for the isolation of 18E6 as a high molecular weight component associating with the RLF-M complex). Lysis buffer A contained 20 mM Hepes, pH 7.5, 3.5 mM MgCl2, 25 mM KCl, 0.2 mM EDTA, 0.2 mM EGTA, 10% glycerol, 20 μM L-1-tosylamido-2-phenylethyl chloromethyl ketone, 20 μM 1-chloro-3-tosylamido-7-amino-2-heptanone, 50 μM LLnL, 5 mM N-ethylmaleimide (NEM, Sigma), 0.2 mM NaF, and 30 mM 4-nitrophenyl phosphate. Nuclei were isolated and separated at 100,000 × g, and the supernatant was designated cytoplasmic extract. For the peptide assays shown in Fig. 2B, nuclei were first extracted in buffer B (as for A but with 20% glycerol and 100 mM KCl), and after centrifugation at 100,000 × g, the supernatants obtained were designated soluble nuclear extract. The pellet was then extracted in buffer A containing 0.4 M KCl. After centrifugation at 100,000 × g, the supernatant was designated high salt nuclear extract. The remaining pellet contained virtually no Mcm7 or Mcm3 protein as judged by Western blotting of aliquot fractions. For gel filtration experiments as shown in Fig. 1, A and B, the nuclei were extracted with buffer B containing 0.4 M KCl (NXT), diluted after centrifugation 1:3 in buffer A, and stepwise precipitated with polyethylene glycol employing 3% and 9% steps as described (17). The 9% fraction was immediately loaded onto a Superose 12 column of a Pharmacia fast protein liquid chromatography system equilibrated with buffer A containing 150 mM KCl. The loading volume was 100 μl. 1-ml fractions were collected, and the column was standardized with marker proteins as indicated. The fractions obtained were concentrated by 7.5% trichloroacetic acid precipitation in the presence of 0.01% sodium deoxycholate prior to analysis on reducing SDS-polyacrylamide gel electrophoresis.
Peptide Affinity Assays-Immobilization of cysteine-containing peptides onto SulfoLink (Pierce) resins and the estimation of the coupling efficiency, as measured with the Ellman's Reagent (Pierce) reaction, were done according to the supplier's instructions. Coupling efficiency was comparable among the peptides used, and resins contained approximately 400 μg of peptide/ml of 50% gel slurry. For the assay, 100 μg of extract in 250 μl of buffer A and 30 μl of 50% peptide-containing gel slurry were incubated at 4°C for 20 min and then washed intensively with buffer A containing 0.4 M KCl. Proteins were eluted at 94°C in sample solution containing 2% SDS, and aliquots were analyzed in Western blots as indicated.
Immunoprecipitations and Western Blots-HeLa cell nuclear extracts or Superose 12 fractions were precleared in a mixture of protein A- and protein G-Sepharose prior to incubation with specific antibodies (2 μg/ml) on ice and subsequent purification with a mixture of protein A- and protein G-Sepharose (preincubated in 10 mg/ml bovine serum albumin). Precipitations from extracts shown in Fig. 1C and Fig. 2E were done from 150 μg (NXT) diluted 1:1 in buffer A in the presence of 0.05% SDS and 0.05% sodium deoxycholate. Samples were washed six times in buffer A containing 0.4 M KCl and twice in buffer A alone. For Western blots, the resins were boiled in loading buffer containing 2% SDS and 0.1 M DTT. Western blots were developed using peroxidase-conjugated secondary antibodies and the enhanced chemiluminescence substrates (Amersham Pharmacia Biotech).
In Vitro Binding Assays-Assays were performed with either in vitro translated 35S-labeled proteins produced with the TNT kit (Promega) or E. coli-expressed proteins at the concentrations indicated. Binding reactions were done for 2 h, either at 4°C for the assays containing TNT proteins or at room temperature, in buffer A containing 0.9 M (NH4)2SO4 and 0.2 μg/μl bovine serum albumin, and then washed as indicated in Fig. 2C with the same buffer containing Tween 20 at the concentrations indicated. Binding with 35S-labeled proteins was detected by exposure to x-ray films, and in Western blots for binding reactions with the recombinant E. coli-expressed proteins.
Recombinant Proteins, Antibody Production, and Purification-E. coli proteins were purified from the strain BL21(DE3) by glutathione-Sepharose affinity chromatography for the GST fusion proteins and by Ni2+-NTA-agarose (Qiagen) chromatography for the (His)6-tagged proteins. Antibodies were raised and purified essentially as described (26). Briefly, polyclonal antibodies to His6-18E6 and His6-Mcm7 (554-720) were raised in rabbits. The antibodies used were affinity purified against His6-18E6 or His6-Mcm7 proteins covalently linked to CNBr-activated Sepharose (Amersham Pharmacia Biotech). The antibodies to GST were obtained and affinity purified in a similar way. Polyclonal peptide antibodies were raised against the peptide CATLGVGSSGRGTTYQS-RPA for Mcm3 and against SAYLENSKGAPNNSC for E6-AP (peptide-E6-AP antibody), each coupled to keyhole limpet hemocyanin. These antibodies were affinity purified against the respective peptide coupled to SulfoLink resins (Pierce). Antibodies to GST-E6-AP were a gift from M. Scheffner and were purified in batch against a GST-E6-AP protein that was bound to GST-Sepharose and immobilized by cross-linking with dimethyl suberimidate (GST-E6-AP antibody).
Ubiquitination Assays-For the Mcm7 in vivo ubiquitination assay, we used the protocol essentially as described (25) with minor modifications as follows. The pH 5.8 washing step of the Ni2+-NTA-agarose was omitted, and proteins were eluted with 1.8 ml of 200 mM imidazole. For detection of the HA-ubiquitin we used the 16B12 (Babco) monoclonal anti-HA antibody; Mcm7 was detected on parallel blots. HEK 293 cells were maintained in Dulbecco's modified Eagle's medium supplemented with 10% fetal calf serum and transiently transfected at a density of 0.8 × 10^5 cells/10-cm dish, using the calcium phosphate precipitation method. Cells were collected 24 h after transfection. LLnL, if used, was added 2 h prior to harvesting of the cells. For all the experiments shown, supernatants were included in the protein purifications.
RESULTS
HPV-18E6 Interacts with a Subpopulation of the Mcm Holocomplex in Vivo
We used the HPV-18E6 oncoprotein fused to the Gal4 DNA binding domain as a bait in a yeast two-hybrid interaction screen (27). A cDNA identical to the C-terminal part of the p85 Mcm sequence (28), now referred to as the human Mcm7 protein, was isolated in the screen and represented approximately 15% of the positive colonies obtained. Control strains carrying Gal4-p53, Gal4-HPV-16E5, or Gal4-HPV-16E7 baits did not show interaction with Mcm7 in a yeast two-hybrid assay, thus proving the specificity of the selection procedure used (not shown). Six different types of Mcm proteins have been described, and together they form a hetero-hexamer complex, which is the functional unit (29-31) of the replication licensing factor M (RLF-M) (17,32,33). We established procedures to analyze the entire Mcm hexamer complex from HeLa cells, a natural source of HPV-18E6 protein (29-31) (see "Experimental Procedures"), in order to ask whether E6 is present within this complex. From the 9% polyethylene glycol fraction (a fraction previously shown to contain RLF-M (17)) of high salt nuclear extracts (NXT), HPV-18E6 and Mcm7 co-elute in overlapping fractions on a Superose 12 column with an apparent molecular mass of 400-600 kDa (Fig. 1A), which is within the range reported previously for an active RLF-M fraction (29-31). Immunoprecipitations of the relevant peak fractions, using either HPV-18E6 or Mcm7 monospecific antibodies, show precipitation of the Mcm7 protein from these Superose fractions (Fig. 1B). HPV-18E6 co-precipitates Mcm7 only from the early Mcm7 peak fractions around 600 kDa, whereas immunoprecipitations of Mcm7 show the presence of Mcm7 proteins in fractions ranging from 600 to 400 kDa, thus indicating that HPV-18E6 interacts with a subpopulation of the Mcm complex. In addition, Mcm3 co-precipitates from these extracts with antibodies against both HPV-18E6 and Mcm7 (Fig. 1C), demonstrating the integrity of the functional Mcm holocomplex. Mcm3 was previously shown to be less tightly associated with the RLF-M hexamer complex and was therefore used as an indicator for the integrity of the entire complex (31).
Mcm7 Is a Direct Target for HPV-E6 Proteins in Vitro-Having shown that the HPV-18E6/Mcm7 interaction takes place in vivo, we were next interested in determining whether E6 proteins from other HPV types could interact with Mcm7. To do this, the E6 proteins from the low risk types HPV-6 and 11 as well as from the high risk types HPV-16 and 18 were expressed as glutathione S-transferase (GST)-E6 fusion proteins, and binding to in vitro translated 35S-labeled Mcm7 protein was assessed. As can be seen from Fig. 2A, Mcm7 interacts with GST-18E6, GST-16E6, GST-11E6, and GST-6E6, indicating a strong conservation of binding to Mcm7 among both high and low risk HPV types. In order to define the region of Mcm7 bound by the E6 proteins, a deletion analysis of the Mcm7 protein was performed. A region of 78 amino acids was defined that is required for binding to HPV-18E6 (Fig. 2A), and furthermore, a binding assay using purified recombinant His6-Mcm7 (fragment 577-719) and GST-E6 proteins verified that the Mcm7/E6 association is direct (Fig. 2B). As Mcm7 interactions showed a high background with GST protein alone, the nature of the specific binding conditions was determined in more detail (Fig. 2C). The interaction was stabilized in the presence of 0.9 M (NH4)2SO4 or 2 M KCl, whereas elution of specifically bound proteins required the ionic detergent SDS, conditions indicating that the interactions observed are specific and hydrophobic in nature (Fig. 2C and data not shown).
Homotypic Interaction of UBE3A/E6-AP and Mcm7 Defines a Novel Motif, the L2G Box-Sequence analysis of the Mcm7 region essential for its interaction with HPV-18E6 (Fig. 2A) defines a stretch of amino acids that has significant homology with a region of the ubiquitin ligase E6-AP that was previously shown to be sufficient for interaction with HPV-16E6 in vitro (34) (Fig. 2A). In addition, this region shows similarity to a sequence recently described as an E6 consensus binding site that was derived from mapping data obtained for the E6-binding protein (E6BP) (35) and from a random peptide library screen (36). This region is not conserved between the different Mcm family paralogues but is nearly identical among the Mcm7 vertebrate orthologues. The corresponding region in E6-AP is not conserved in any of the recently discovered E6-AP/hect domain (37,38) containing members of the ubiquitin ligase family. To evaluate the significance of this homology between Mcm7 and E6-AP for the E6 interactions, we used peptides spanning the core of the relevant regions (Fig. 3A). These peptides were covalently coupled, via an additional C-terminal cysteine and iodoacetyl groups, to a non-ionic chromatographic "SulfoLink"-agarose matrix and used to test for specific precipitation of the Mcm7 and HPV-18E6 proteins, respectively, from HeLa cell extracts. Cytoplasmic extracts, low salt nuclear fractions, or high salt nuclear fractions (chromatin-bound proteins) were passed through these peptide columns, and after several washes with buffers containing 0.4 M KCl, specifically bound proteins were eluted with 2% SDS and analyzed by Western blot for HPV-18E6 and Mcm7. Peptides of the homologous region in the E6-AP protein were able to specifically retain the HPV-18E6 protein from cytoplasmic and nuclear high salt fractions (Fig. 3B). An in vitro binding assay was performed using purified GST-E6-AP and purified His-Mcm7 proteins. The results demonstrate that the E6-AP/Mcm7 interaction observed in the peptide assays is direct (Fig. 4A) and argue that no additional bridging or auxiliary factors are required for the E6-AP/Mcm7 interaction. Moreover, in vitro translated Mcm7 proteins deleted for the core E6-AP homology region (Δ640-646) failed to bind GST-E6-AP (Fig. 4B). To determine whether E6-AP interacts with Mcm7 in mammalian cells, HeLa cell nuclear extracts (Fig. 4C) were immunoprecipitated with specific antibodies to both of the proteins (Mcm7 and E6-AP) and a control antibody (GST) and analyzed in Western blots for cross-immunoprecipitation. A subpopulation of E6-AP as well as of Mcm7 specifically co-precipitate with each other (lanes 2 and 3), whereas no co-immunoprecipitation is seen with the control antibodies (lane 1). Again, as was seen for the HPV-18E6/Mcm7 interaction, these complexes were detectable only from log phase nuclear extracts prepared after incubation and in the presence of isopeptidase and proteasome inhibitors (see "Experimental Procedures"; negative results with no inhibitors not shown). Collectively, these data show that the E6-AP and Mcm7 proteins are able to interact in vitro and in vivo and that this interaction is mediated by homotypic motifs which we call the L2G box. In addition, the small region of Mcm7 found to be essential for the Mcm7/E6-AP interaction also represents a specific contact site for HPV-18E6 (Fig. 3B). In contrast to the
FIG. 2. Analysis of the HPV-E6/Mcm7 interaction in vitro.
A, interaction studies were performed with E6 proteins from both high risk (16E6 and 18E6) and low risk HPV types (6E6 and 11E6). The GST-E6 fusion proteins (0.5 μg) or GST (5 μg) were incubated with in vitro translated, 35S-labeled Mcm7 protein.
FIG. 3. Binding of Mcm7 by HPV-18E6 and E6-AP requires a novel structural motif, the L2G box.
A, a stretch of amino acids that is conserved between Mcm7 and E6-AP was identified in a region that was previously shown to be necessary for the HPV-18E6 interaction with E6-AP. Alignment of the conserved region is shown. Unbroken line, smallest region in E6-AP previously mapped for the HPV-18E6 interaction (34); red, conserved residues; blue, region spanning the peptide sequences used in B; dashed line, deletion introduced into the Mcm7 used in Fig. 4B and Fig. 6. B, affinity purification of Mcm7 and HPV-18E6 from HeLa cell extracts with peptides spanning the conserved regions indicated in A. HeLa cell extracts were incubated with peptides immobilized on SulfoLink resins, and specifically retained proteins were analyzed by Western blotting. L, peptide with wild-type sequence; T, peptide with a Leu644-to-Thr substitution for the Mcm7 peptide and a Leu402-to-Thr substitution for the E6-AP peptide. Assays were performed with cytoplasmic (CYT), nuclear low salt (NLS), and nuclear high salt (NHS) extracts.

[Displaced body-text fragment:] (25) and probably reflects a high turnover of the ubiquitinated versus the non-ubiquitinated protein fraction, consistent with the ubiquitination step being rate-limiting.
The L2G Box in Mcm7 Is Functional in Vivo and Is the Substrate Recognition Site for the Ubiquitin Ligase E6-AP-To ask whether the ubiquitination observed for Mcm7 is directly linked to the E6-AP ubiquitin ligase system in vivo, co-transfection experiments with either HPV-18E6- or E6-AP-encoding plasmids were performed in the assay system described above. The presence of HPV-18E6 (Fig. 6A, lane 2) or additional E6-AP (Fig. 6B, lane 3) results in a sharp decrease in the levels of both the ubiquitinated and non-ubiquitinated His6-Mcm7 fractions. This decrease was not observed when specific peptide aldehyde proteasome inhibitors were added 2 h prior to protein extraction (Fig. 6, A, lane 3, and B, lane 4), suggesting E6-AP- and HPV-18E6-targeted degradation of the Mcm7 protein by the proteasome. The fact that the addition of proteasome inhibitors for as short as 2 h caused a stabilization of the Mcm7-specific ubiquitin ladders proved to be a valuable and essential test for the specificity of the E6-AP- or HPV-E6-dependent degradation of Mcm7. Notably, a His6-c-Jun protein used in a similar assay was not affected by E6-AP or HPV-18E6 (not shown). The decrease of the Mcm7 product observed in the presence of additional E6-AP indicates that E6-AP can target Mcm7 in the absence of E6. An E6-AP mutant protein deleted for the HPV-16E6/Mcm7 interaction domain (E6-APΔE) did not reduce the Mcm7 product in the same assay (Fig. 6B, lanes 5 and 9), consistent with the interpretation that this site is also essential for an HPV-E6-independent function of the E6-AP protein.
We then further analyzed an Mcm7 protein with a seven-amino acid deletion in the E6-AP interaction site (Fig. 6, A, lanes 4-6, and B, lanes 5-8). Interestingly, His6-Mcm7ΔL2G, which is no longer capable of binding E6-AP (see Fig. 3B), is still polyubiquitinated in vivo (see "Discussion"), but no degradation is observed in response to HPV-18E6 or E6-AP (Fig. 6, A, lane 5, and B, lane 7), confirming that the L2G box is a substrate recognition site for the E3 ubiquitin-protein ligase E6-AP in vivo. Similar results were obtained using Leu644-to-Thr, Leu645-to-Thr, or Glu646-to-Ala substitution mutations of the Mcm7 L2G box (C. Kühne, unpublished observations). We did not map the binding site for non-oncogenic E6 proteins further and do not yet know whether these proteins interact with the Mcm7 L2G box or through other regions of the Mcm7 protein. It has been shown that low risk E6 proteins do not interact with the homotypic region now defined as the L2G box in E6-AP (34) and that only oncogenesis-associated HPV-E6 types can efficiently target p53 for ubiquitin-mediated degradation via E6-AP (5,7,40,41). However, recent studies have shown that low as well as high risk E6 proteins can interact with p53 (40). Our preliminary studies suggest that Mcm7 is not degraded by HPV-11E6 (C. Kühne, unpublished observations).

Additional L2G Box Candidates, Involvement in a Common Pathway of Regulation-The region of p53 involved in the interaction with HPV-18E6 and E6-AP has not been mapped to the extent presented here for Mcm7 (40). However, the L2G box consensus (S/T)XXXLLG can also be found in the p53 core region that spans the putative interaction site (Fig. 7). Interestingly, this L2G consensus is located between the two DNA-contacting residues Arg248 and Arg273, the region of p53 most frequently mutated in human tumors (42). This site is not conserved in the more recently discovered homologue p73 (43). A database search for additional (S/T)XXXLLG-containing proteins revealed a small cluster of proteins that function as regulators of DNA replication initiation and/or progression (Fig. 7). Notably, cyclin D, which has previously been shown to be ubiquitinated (44), and a member of the c-Abl tyrosine kinase family (c-Abl2) (45) contain an L2G consensus motif. Further matches were seen with essential DNA-modifying enzymes such as DNA polymerase-α, DNA polymerase-ε, the telomerase catalytic subunit (EST2) (45-49), and BLM, a DNA helicase previously identified as the Bloom's syndrome gene product (50). Strikingly, the translation initiation factor EIF3 (51,52), a prime candidate for an effector protein in the regulation of general protein translation turnover and thus of cell growth, contains the L2G motif. Although interaction of these proteins with the homotypic L2G motif in E6-AP is speculative at present, this cluster of L2G box-containing proteins may be recognized by E6-AP in a manner similar to Mcm7.
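As an aside, a database search for the (S/T)XXXLLG consensus of the kind described here can be sketched as a simple regular-expression scan over protein sequences; the sequences below are hypothetical placeholders, not the real database entries, and the function name is our own.

```python
import re

# L2G box consensus: (S/T), any three residues, then L, L, G.
L2G_PATTERN = re.compile(r"[ST]...LLG")

def find_l2g_boxes(sequences):
    """Return (name, 1-based position, matched span) for every consensus hit."""
    hits = []
    for name, seq in sequences.items():
        for m in L2G_PATTERN.finditer(seq):
            hits.append((name, m.start() + 1, m.group()))
    return hits

# Hypothetical example sequences (illustration only).
example = {
    "toy_protein_A": "MKTAYSAQILLGVDD",
    "toy_protein_B": "MSSPELTNNNLGQQ",
}

for name, pos, span in find_l2g_boxes(example):
    print(f"{name}: L2G consensus {span} at residue {pos}")
```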
DISCUSSION
Proteins from small DNA tumor viruses interfere with central cellular control proteins such as p53 or the retinoblastoma protein pRB (53), and this results in a loss of tumor suppression, a hallmark of tumor development (54,55). Based on this, viral proteins serve as valuable tools for screening for new candidate proteins that are involved in cellular regulatory pathways. In recent years, interaction screens with various viral oncogenes have become a "classical" tool in molecular biology, not only for the discovery of particular interaction partners but also for finding cellular components which in turn can be linked to pathways that are affected in virus pathology. We reasoned that the oncogenic HPV-18E6 protein, a viral component previously shown to be involved in E6-AP-dependent degradation of p53, should help to identify new E6-AP substrates. As a result of an interaction screen, we demonstrate an HPV-E6 association with the RLF-M component Mcm7. The characterization of this interaction in turn led to the discovery that Mcm7 is a substrate for both E6-AP-dependent and -independent ubiquitination and is specifically targeted for degradation by the 26 S proteasome. Subsequent detailed mapping of the HPV-18E6/Mcm7 binding requirements revealed two features: first, the interaction domains used by the virus and by the enzyme E6-AP are contained within 14 amino acids, suggesting an overlapping binding site; and second, the E6-AP interaction is mediated by a homotypic motif present in both the substrate and the enzyme, which we call the L2G box. The fact that Mcm7 is still polyubiquitinated in the absence of a specific binding site in cis for E6-AP argues for an additional, E6-AP-independent process involved in the regulation of the polyubiquitin-mediated turnover of the Mcm7 protein. This probably regulates the basic turnover of the Mcm7 protein. We propose that this turnover is modulated by L2G-binding proteins such as HPV-18E6 and E6-AP in response to as yet unidentified regulators, placing the L2G box as a highly entropically structured module.
It is striking that substrate and enzyme use the L2G box for interaction and that this interaction is sufficient for ubiquitin-mediated degradation by E6-AP; the E6 protein, however, does not contain an L2G motif yet interacts with, and thus functions via, the L2G motif. This suggests that HPV E6 proteins have evolved to interfere with a regulatory pathway as a whole, in that they interfere with the substrate recognition site (L2G box) of the ubiquitin ligase E6-AP. This observation gives important information concerning tumor virus/host interactions and has exciting evolutionary implications for virus/host adaptation. Database searches for this L2G motif identify a small cluster of proteins that are likely candidates for a similar regulation and suggest that E6-AP has more in vivo substrates than was previously anticipated. We would speculate that, at least for some of these proteins, the L2G box was adopted by HPV-E6 proteins for host protein recognition. The E6-binding site motif has been proposed as a basis for "anti-HPV drug" design (35,36), although the cellular (evolutionary) context presented above will now have to be considered.
Implications for other Hect Domain Containing E3-Ubiquitin Ligases-A family of structurally and functionally related E3 ubiquitin-protein ligases was recently identified whose members have in common a C-terminal homology motif with the E6-AP ubiquitin ligase catalytic domain (the hect domain) (37). The hect domain spans approximately 350 amino acids within the C-terminal regions of the proteins, but the N terminus of every individual member shows distinct features. The human genome encodes at least 20 different hect domain proteins (38). We find in the case of E6-AP that the substrate recognition site is in the non-conserved N terminus of the protein, and we show that substrate recognition is facilitated by a homotypic interaction of the enzyme with its substrate. This might well be a precedent for a general mode of specificity selection for hect domain E3 ubiquitin ligases. Given that the specificity of a particular ubiquitin-dependent degradation pathway is provided mainly by its E3 enzymes, the assumption of substrate selection by homotypes should help to define individual ubiquitination pathways for other hect-type E3 enzymes.
E6-AP and Licensing for DNA Replication-Our findings link the E3-ubiquitin ligase E6-AP, originally discovered as the E3 ligase for HPV E6-dependent degradation of p53, to a key mediator of the "once-per-cell-cycle control" of DNA replication. Loss of cell cycle control is one of the hallmarks of cancer (54,56), and DNA tumor viruses have been invaluable in dissecting these controls in higher eukaryotes. Mcm7 could represent an E6-AP-regulated checkpoint control element for DNA licensing, consistent with a licensing model for DNA replication in which the activated and thus functional RLF-M hexamer for initiation would be irreversibly destroyed by proteolysis after successful initiation (note that this might well be an RLF-M subpopulation). Moreover, we speculate that the outcome of a malfunction of E6-AP should be a loss of checkpoint function rather than a priori over-replication. The binding of the Mcm complex to chromatin is strongly inhibited by the recently identified, anaphase-promoting complex-regulated geminin proteins (57). These proteins were proposed to sequester the Mcm complex after DNA replication initiation from its chromatin loading and thus are thought to constitute part of a "licensing surveillance" mechanism. It will be interesting to test whether the E6-AP control of Mcm7 represents either a parallel or a linear succession of these proposed mechanisms and thus whether geminin links E6-AP to the cell cycle clock. We favor the idea that shortly before or after initiation of replication the Mcm7 protein in the RLF-M complex becomes a repressor of progression which is resolved by degradation. These speculative predictions of an involvement of Mcm7 in a "licensing checkpoint" are further supported by a more recent discovery of a possible inhibition of Xenopus DNA replication initiation due to an interaction of Mcm7 with the retinoblastoma tumor suppressor proteins (58).

[Fragment of the Fig. 7 legend displaced into the text: ...(62); eukaryotic translation initiation factor 3 (p116) (EIF3, P55884); cyclin-D2 is identical in the boxed region indicated for cyclin D1 (not shown). Only MAGE-1 is shown; family members 1, 2, 3, 6, and 8 are identical for the region indicated.]
An early G1-phase arrest point was identified, called the origin decision point, that ensures specific recognition of the dihydrofolate reductase origin locus by Xenopus egg extracts (59). Intriguingly, transformation by SV40 can override this arrest point (60). With the knowledge presented above, it should be possible to test whether the modulation of Mcm7 abundance plays a direct role in this early G1 decision point and whether the HPV oncogene E6 can bypass the origin decision point, as shown for the SV40 proteins.
E3 enzymes are of central interest because they are potential regulators of ubiquitination timing and substrate selection. We analyzed substrate selection for Mcm7, and we define a substrate recognition motif for the E3-ubiquitin ligase UBE3A/E6-AP in vivo, which we call the L2G box. Substrate recognition defined to the extent presented above now allows analysis of the timing of E6-AP function in the cell, and it is this aspect that we expect to be bypassed, not only by viral but also by cellular oncogenes in cancer. Identification of the L2G box as a specific homotypic protein-protein interface for E6-AP and its substrates implies the existence of a common regulation for both enzyme and substrate and might serve as the basis for the regulation of an E6-AP-dependent pathway.
"year": 1998,
"sha1": "02ff7e8d12c8bafe13280e14b4721222242d3490",
"oa_license": "CCBY",
"oa_url": "http://www.jbc.org/content/273/51/34302.full.pdf",
"oa_status": "HYBRID",
"pdf_src": "Adhoc",
"pdf_hash": "e6a27f54c712de39672175a5fdb848749892dec7",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
Reactivity studies of pincer bis-protic N-heterocyclic carbene complexes of platinum and palladium under basic conditions
Bis-protic N-heterocyclic carbene complexes of platinum and palladium (4) yield dimeric structures 6 when treated with sodium tert-butoxide in CH2Cl2. The use of a more polar solvent (THF) and a strong base (LiN(iPr)2) gave the lithium chloride adducts: the singly deprotonated complex 7 or the analogous doubly deprotonated complex 8.
Introduction
N-Heterocyclic carbenes (NHCs) have been extensively researched for a number of purposes since 1991, when Arduengo first isolated free NHCs [1][2][3]. NHCs as ligands have been known even longer: in 1968, Wanzlick and Öfele separately synthesized mercury(II) and chromium(0) imidazol-2-ylidene complexes [3]. Nearly 50 years of NHC ligand research have demonstrated the importance of the electronic and steric effects that can be modified by altering the alkyl or aryl groups on each nitrogen atom. Less common are protic imidazol-2-ylidene (PNHC) ligands, with a hydrogen atom on one or both of the stabilizing nitrogens. The synthesis of PNHC complexes has proven to be a challenge, which has limited studies of their reactivity [4][5][6][7][8].
Protic imidazol-2-ylidene ligands (e.g., 1) have been shown to form an imidazolyl ligand (e.g., 2), with a basic proton-accepting nitrogen, after deprotonation (Figure 1). We are unaware of reports of an experimentally determined pKa value for a PNHC imidazol-2-ylidene complex, but looking at related derivatives, Isobe showed that a 2-palladated pyridine was 3.57 pKa units more basic than pyridine [9,10]. Considering reactions other than simple proton transfer, imidazol-2-yl complexes have recently been used to bind a second transition metal [11]. Additionally, Cp*Ir complexes from our group [12] demonstrated heterolysis of the H-H bond of H2 and of the C-H bond of acetylene. The same ligand in CpRu complexes 2 and 3 showed heterolysis of dihydrogen [13]. Complex 1 had a much faster ligand exchange rate after ionization than the Cp*Ir analog (ethylene bound in 5 min at rt for CpRu instead of 16 h at 70 °C for Cp*Ir). Species 1 could be converted in situ to the hydride and isolated, or generated in situ and used as a transfer hydrogenation catalyst. Interestingly, both the rate of ethylene ligand substitution and the rate of dihydrogen heterolysis were much greater for 3 than for 2.
With only a few papers exploring the utility of these imidazol-2-yl complexes, we aim to extend this chemistry to our recently reported pincer bis-PNHC complexes 4-PdCl and 4-PtCl and their triflato analogs [14]. The design of these complexes was inspired by studies of Kunz et al. on aprotic analogs [15,16].
Results and Discussion
The loss of one NH proton from the bis-PNHC complex 4 could lead to structure 5, a complex concurrently containing a PNHC proton donor and a bond-activating imidazol-2-yl unit. In an attempt to form 5, 4-PdCl was dissolved in CD2Cl2, and the solution was saturated with ethylene, followed by the addition of sodium tert-butoxide. After 2 h at room temperature, an NMR spectrum was acquired that showed a new, unsymmetrical species, as expected for 5. Crystals were grown by vapor diffusion of pentanes into benzene and analyzed. Surprisingly, the data showed that the dimer 6-Pd had formed, such that the open site was not filled with ethylene but rather was occupied by an imidazolyl nitrogen from a second complex (Figure 2). The palladium and platinum dimer complexes, 6-Pd and 6-Pt, could be formed by addition of sodium tert-butoxide to the chloride analogs (Scheme 1), and were isolated in 50-56% yields.

Footnote a (Table 1): the metal-to-plane distance defined by the five corresponding N-coordinated imidazole atoms; this value would be near zero in the absence of strain.
Examination of the dimer crystal structure (see Figure 2 for 6-Pd) shows strain in the Pd1-N1' (and Pd1'-N1) bond. This arises because the metal remains in the plane defined by the three coordinating atoms of the tridentate ligand (i.e., C1, N3, and C4). The fourth donor atom, from the other half of the dimer, has to bend out of this plane together with the N1' imidazole ring because of the adjacent steric bulk of the tert-butyl groups on the imidazole. The strain can be quantified by examining how far the metal is from the N1 (or N1')-bound imidazole plane (the C1-C2-C3-N1-N2 plane and the symmetry-equivalent atoms): 1.241 Å for 6-Pd and 1.094 Å for 6-Pt (Table 1). The NMR results are completely consistent with persistence of the dimers in solution. For monomeric species such as 4-PdCl and 4-PtCl, the NH proton resonance is typically downfield, with a chemical shift of ca. 11 ppm, whereas this signal is strongly shifted upfield to 8.03 (6-Pd) or 8.19 ppm (6-Pt). The crystal structures for both 6-Pd and 6-Pt show that the NH is located above the pi system of one imidazole ring of the other half of the dimer, which would be expected to shield the NH and cause a significant upfield chemical shift. Moreover, a ROESY experiment on 6-Pt (Figure S6, Supporting Information File 1) confirms that the NH (N5', Figure 2) has a through-space interaction with the proton on the imidazole ring (C3, Figure 2), a situation that would not be possible for a monomeric structure.
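The metal-to-plane distances quoted in Table 1 can, in principle, be computed from crystallographic coordinates by fitting a least-squares plane through the five imidazole ring atoms and measuring the metal's offset from it. A minimal sketch follows, assuming hypothetical Cartesian coordinates rather than the actual structure data.

```python
import numpy as np

def point_to_plane_distance(ring_atoms, metal):
    """Distance from `metal` to the least-squares plane through `ring_atoms`.

    ring_atoms: (N, 3) Cartesian coordinates (here the five imidazole ring
    atoms C1-C2-C3-N1-N2); metal: (3,) coordinate of the metal atom.
    """
    ring = np.asarray(ring_atoms, dtype=float)
    centroid = ring.mean(axis=0)
    # The plane normal is the right singular vector with the smallest
    # singular value of the centered coordinates.
    _, _, vt = np.linalg.svd(ring - centroid)
    normal = vt[-1]
    return abs(np.dot(np.asarray(metal) - centroid, normal))

# Hypothetical coordinates for illustration only (not the real 6-Pd data).
ring = [(0.0, 0.0, 0.0), (1.4, 0.0, 0.0), (1.8, 1.3, 0.0),
        (0.7, 2.1, 0.0), (-0.4, 1.3, 0.0)]
metal = (0.7, 0.9, 1.24)
print(f"metal-to-plane distance: {point_to_plane_distance(ring, metal):.3f} Å")
```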
Attempts to synthesize 5 using sodium alkoxide bases led to the formation of dimer structures 6 with presumed loss of NaCl. Therefore, lithium chloride adducts 7 were targeted, because the LiCl adduct 3 was isolable yet highly reactive. As demonstrated by NMR spectroscopy, dissolution of 4-PdCl in a mixture of THF (0.7 mL) and C6D6 (0.1 mL) followed by the addition of one equivalent of LiN(iPr)2 deprotonates one of the PNHC ligands. This gives 7-Pd, without evidence of dimer formation (Scheme 2). The addition of a second equivalent of LiN(iPr)2 deprotonates the second PNHC ligand, giving 8-Pd. The 1H NMR spectrum of compound 7-Pt consists of a single NH peak at 10.90 ppm and six aromatic peaks, which all integrate to one proton. The asymmetry is also observed in the 13C NMR spectrum, which consists of 18 peaks between 100 and 170 ppm. As for 8-Pt, the 1H NMR spectrum has no peak where the NH peak is typically located, and in the aromatic region there are three peaks. The 13C NMR spectrum thus consists of 9 peaks between 100 and 170 ppm, showing the reappearance of symmetry. 15N chemical shift data give structural insight (Table 2), as exemplified by 1-3 [10]. The Δx (difference in 15N shifts for compound x) is near zero for a PNHC (1), maximum for the imidazolyl conjugate base 2, and slightly less for an imidazolyl lithium chloride adduct 3. The δN for the aprotic nitrogen incapable of acid-base chemistry (N2) hardly changes, whereas for the protic nitrogen (N1), the changes depend on its environment. To see if faster ligand exchange would lead to LiCl loss with palladium, 7-Pd was synthesized. Unfortunately, results similar to those for the platinum analog were observed: 1-heptene did not react with 7-Pd (which was then converted to 8-Pd by addition of LiN(iPr)2). AgOTf was then added to 8-Pd, which formed a deprotonated dimer complex. Even with palladium, the loss of the chloride ligand seemed to be too slow.
Conclusion
In conclusion, attempts at forming an imidazolyl complex from 4-MCl using sodium alkoxides led to strained dimers 6. However, 4-MCl could be deprotonated with either 1 or 2 equiv of LiN(iPr)2 to give 7, an intriguing species with one PNHC ligand and one Li-imidazolyl adduct, or 8, a bis-imidazolyl complex. Unfortunately, substrates could not displace the chloride ligand without formation of dimer 6, and the deprotonated complexes were water-sensitive. Attempts at deprotonating the more labile triflate complex 4-PtOTf led to the formation of dimer 6-Pt. To increase the lability of the chloride ligand, species 4-PdCl and 4-PdOTf were examined but gave dimer 6-Pd. In summary, the reactivity of bis-PNHC complexes 4 with bases appears to be dominated by the formation of the dimeric structures. Studies to reduce dimer formation by various means, such as increasing steric hindrance at the imidazolyl nitrogens, will be reported in due course.
Supporting Information File 1
Experimental information and NMR spectroscopy figures.
"year": 2016,
"sha1": "3e3e718803ba2fab03b494c1478fdf9eb3721f54",
"oa_license": "CCBY",
"oa_url": "https://www.beilstein-journals.org/bjoc/content/pdf/1860-5397-12-126.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "39f539b03e5eac30ce48f5fa9c83196d1a03712f",
"s2fieldsofstudy": [
"Chemistry"
],
"extfieldsofstudy": [
"Chemistry",
"Medicine"
]
} |
Coping with Prospective Memory Failures: An Optimal Reminder System Design
Forgetting is common in daily life, and 50-80% of everyday forgetting is due to prospective memory failures, which have significant impacts on our lives. More seriously, some of these memory lapses can bring fatal consequences, such as forgetting a sleeping infant in the back seat of a car. People tend to use various techniques to improve their prospective memory performance, and setting up a reminder is one of the most important of these. Existing studies provide evidence in support of using reminders to cope with prospective memory failures. However, people are not satisfied with existing reminders because of their limitations in different aspects, including reliability, optimization, and adaptation. Through analysing the functions and features of existing reminder systems, this book draft summarizes their advantages and limitations. We are motivated to improve the performance of reminder systems. For these improvements, the relevant theories and mechanisms of prospective memory from psychology must be complied with, incorporated, and applied in this new study. Based on the literature review, a new reminder model is proposed, which includes a novel reminder planner, a prospective-memory-based agent, and a personalized user model. The reminder planner is responsible for determining the optimal reminder plan (including the optimal number of reminders, the optimal reminding schedule, and the optimal reminding way). The prospective memory agent is responsible for executing the reminding processes. The personalized user model is proposed to learn from users' behaviors and preferences through human-system interactions and is responsible for adapting the reminder plan to meet users' preferences as much as possible.
Background and Motivation
Forgetting is common in our everyday life. You may forget to do something at a particular time, such as forgetting to take medicines after a meal or forgetting a meeting at 3:00 PM, or on a particular occasion, such as forgetting to buy milk when you are passing by a grocery store. What these tasks have in common is that they lie in the future. Remembering to perform a future task is referred to as prospective memory (ProM) [1]. Everyone experiences ProM failures, and a significant 50-80% of everyday forgetting is due to them [2]. Some of these memory lapses can bring fatal consequences. Every year we see news reports of an infant dying in a hot car after the parents left the car, forgetting that the child was sleeping quietly in the back seat. People are eager to learn more techniques for improving ProM performance and coping with ProM failures.
ProM is a complex cognitive function, as it consists of several stages and various components [1,3]. A specific prospective task has its own characteristics, which determine the nature of the ProM task. First of all, whether the prospective task is to be performed in a specific situation or at a particular time determines the type of ProM (event-based or time-based).
Second, the prospect of attending a stressful and exhausting meeting may not be appealing, while the prospect of meeting a friend could be something to look forward to. Motivation is associated with the performance of prospective tasks. However, the stressful meeting may nevertheless be important, since your absence could lead to serious consequences, while meeting a friend may not be so important. Personal awareness of task importance also influences ProM performance. On the other hand, a ProM task is embedded in ongoing activities and requires us to perform it when an appropriate condition is satisfied.
The degree of attentional resources occupied by ongoing tasks can influence prospective task performance. Improving ProM performance requires understanding ProM, from its mechanisms to the various factors that act on it.
ProM problems are common with age [4]. Older adults tend to use ProM reminders to cope with their ProM decline [5]. Without reminders, the challenge of performing a ProM task is that the intention needs to be initiated while you are simultaneously engaged in ongoing tasks [6]. Reminders provide a solution to initiate the prospective task at the intended time, and they range from paper notes to advanced technology-based systems.
Originally, most technology-based reminders (e.g., MEMOS, Memojog) were designed for users with cognitive impairment to promote their independence and support their health and wellbeing [7]. Currently, technology-based reminders are popularly used by individuals, and we acknowledge their help in personal time management, such as Google Calendar.
Research Questions
In some cases, we are not satisfied with our reminder system because of its failures to remind, burdensome reminders (annoyance), or disagreeable signals (e.g., some people prefer a sound reminder to a visual one). We are motivated to investigate improvements that make the generic reminder system more reliable, optimal, and adaptive. Therefore, the trade-off between reliability and annoyance, i.e., how many reminders should be issued to the user for a specific ProM task, is one of our main objectives.
Naturally, the question of when to remind also arises, since reminders are issued between the time of the first reminder and the time of executing the ProM task. Considering the reminder as a created cue for the ProM task, another question comes up: how to make the reminder salient and associated with the ProM task. Therefore, how many times to remind, when to remind, and how to remind are our research questions. Consequently, the number of reminders, the reminding schedule, and the reminding way constitute our reminder plan.
In addition, we learn from some existing intelligent memory assists, such as Autominder with its adaptive feature [7]. We propose to develop a personalized user model that observes the user's behaviors, learns the user's preferences for each feature of the reminder plan (e.g., the preference for an audio or visual signal), and adapts the reminder plan to meet personal preferences as much as possible. Therefore, our last research question is how to make the reminder plan adaptive and personalized according to the knowledge learned from users.
Based on the discussion above, the four research questions are as follows (a data-structure sketch of the resulting reminder plan follows this list):
• How to calculate the optimal number of reminders for a specific ProM task?
• How to determine the optimal reminding schedule for a specific ProM task?
• How to determine the optimal reminding way for a specific ProM task?
• How to make the reminder plan adaptive and personalized based on the knowledge learned from users?
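To tie these questions together concretely, the reminder plan can be represented as a simple data structure. The sketch below is purely illustrative; all class and field names are our own assumptions, not part of any existing system.

```python
from dataclasses import dataclass, field
from datetime import datetime
from enum import Enum


class RemindingWay(Enum):
    """Signal modality for a reminder (an illustrative set)."""
    SOUND = "sound"
    VISUAL = "visual"
    VIBRATION = "vibration"


@dataclass
class ReminderPlan:
    """One plan answering the three questions: how many, when, and how."""
    task: str                    # description of the ProM task
    deadline: datetime           # when the task must be performed
    num_reminders: int           # the optimal number of reminders
    schedule: list = field(default_factory=list)  # datetimes: when to remind
    way: RemindingWay = RemindingWay.SOUND        # how to remind
```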
Approaches
To address the research questions above, our approach includes two interdependent aspects.
Firstly, we establish a structural model of the reminder system, which includes three components: the Reminder Planner, the ProM Agent, and the Personalized User Model. The reminder planner is responsible for determining the optimal reminder plan. The ProM agent is responsible for executing the reminding processes. The personalized user model is responsible for adapting the reminder plan. This structural model is developed by reasoning about the various factors which potentially influence ProM performance, by complying with the process model of ProM, and by overcoming the limitations of, and learning the advantages from, existing reminder systems. Secondly, we use mathematical functions to calculate the optimal number of reminders, the reminding schedule, and the reminding way; a sketch of such a calculation is given below. We also plan to apply machine learning algorithms in the personalized user model to adapt the reminder plan in the future.
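As one possible illustration of such a mathematical function, a minimal sketch under our own simplifying assumptions could trade off the cost of missing the task against the annoyance of extra reminders. The notice probability and cost constants below are assumed values, not empirical results, and the evenly spaced schedule is just one of many possible choices.

```python
def expected_cost(n, p_notice=0.6, cost_miss=10.0, cost_annoy=0.5):
    """Expected cost of issuing n reminders.

    Assumes each reminder is independently noticed with probability
    p_notice, a missed task costs cost_miss, and each reminder adds an
    annoyance cost cost_annoy. All parameter values are illustrative.
    """
    p_miss = (1.0 - p_notice) ** n   # task missed if no reminder is noticed
    return p_miss * cost_miss + n * cost_annoy


def optimal_num_reminders(max_n=10, **kwargs):
    """Reminder count with the smallest expected cost over 1..max_n."""
    return min(range(1, max_n + 1), key=lambda n: expected_cost(n, **kwargs))


def even_schedule(first, deadline, n):
    """Spread n reminder times evenly between `first` and `deadline`."""
    step = (deadline - first) / n
    return [first + i * step for i in range(n)]


if __name__ == "__main__":
    from datetime import datetime, timedelta

    n = optimal_num_reminders()
    start = datetime(2024, 1, 1, 9, 0)
    print(n, even_schedule(start, start + timedelta(hours=3), n))
```

With the illustrative defaults above, three reminders minimize the expected cost; in practice, the parameters would have to be estimated per user and per task, and could themselves be adapted by the personalized user model.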
Contributions
The main contributions based on literature review, model design and implementation, are summarized as follows: • We have done a comprehensive literature review, including three aspects: 1)ProM theories of four stages and six components, 2) a series of potential factors (type, age, the complexity of ongoing task, the importance of ProM task, and the motivation of ProM task) which influence the ProM performance, and 3) the existing reminder systems to cope with ProM failures, all of which provide theatrical and practical supports for our reminder system development.
• Based on the literature review, we constructed the structural model of the reminder system, which includes the Reminder Planner, the ProM Agent, and the Personalized User Model.
• We came up with the concept of the reminder plan with the optimal number of reminders, the optimal reminding schedule, and the optimal reminding way, which can provide reliable and optimal reminders.
• We have integrated the reminding function into ProM processes so as to clearly understand how a ProM task fails and how reminders act within prospective memory processes.
• We proposed the personalized user model to adapt the reminder plan to meet users' preferences.
Organization
In the following chapters, we firstly investigate the various factors which potentially influence ProM performance through reviewing ProM theories and empirical studies. Secondly, we review a series of existing ProM assists and compare their advantages and drawbacks. Thirdly, by applying theoretical knowledge, incorporating factors drawn from empirical studies, and learning from the existing ProM assists, we develop our reminder system, including the three components of the reminder planner, the ProM agent, and the personalized user model. Finally, we propose future work: conducting a series of experiments to determine the weight of each factor on ProM performance, and integrating reinforcement learning and supervised learning to make our reminder system adaptive (a minimal sketch of such adaptation follows).
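As a hint of what the proposed reinforcement-learning adaptation might look like, the following minimal epsilon-greedy sketch learns the user's preferred reminding way from accept/dismiss feedback. The modalities, reward scheme, and class name are our assumptions for illustration only.

```python
import random
from collections import defaultdict


class RemindingWayLearner:
    """Epsilon-greedy bandit over reminding modalities (illustrative)."""

    def __init__(self, ways=("sound", "visual", "vibration"), epsilon=0.1):
        self.ways = list(ways)
        self.epsilon = epsilon
        self.counts = defaultdict(int)    # times each way was tried
        self.values = defaultdict(float)  # running mean reward per way

    def choose(self):
        if random.random() < self.epsilon:               # explore
            return random.choice(self.ways)
        return max(self.ways, key=lambda w: self.values[w])  # exploit

    def update(self, way, accepted):
        """Reward 1.0 if the user acted on the reminder, else 0.0."""
        reward = 1.0 if accepted else 0.0
        self.counts[way] += 1
        # Incremental update of the mean reward for this modality.
        self.values[way] += (reward - self.values[way]) / self.counts[way]


# Usage: pick a way, then record whether the user acted on the reminder.
learner = RemindingWayLearner()
way = learner.choose()
learner.update(way, accepted=True)
```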
Chapter 2
Theoretical and Empirical Background
ProM Theory
Prospective Memory (ProM) refers to remembering to perform the intended task after a delay [1]. The intended task is stored in memory and will be executed in the future [8].
Remembering to take medication at 7:00 PM, remembering to buy milk on the way home after work, or remembering to deliver the mail when passing by a mailbox are examples of ProM tasks. ProM pervades our health and social life and directly influences our quality of life.
A significant 50-80% of memory failures are ProM problems [2]. The challenge of ProM tasks is that the intention has to be triggered while you are simultaneously engaged in the ongoing task [6].
Process Model: A ProM task involves four stages [3]: 1) intention formation (a future task is planned and encoded), 2) intention retention (the intention is maintained and waits for perception of the target while one engages in ongoing tasks), 3) intention initiation (the moment at which execution of the intention is initiated), and 4) intention execution (the maintained task is performed). From these stages, we can see that, to completely accomplish a ProM task, people have to remember not only what they are supposed to do, but also when to do it. People who successfully retrieve what they intend to do from memory rarely fail to remember what the task is. Therefore, people with ProM problems mostly fail to initiate the intention at the appropriate moment.
Six-Component Model: A number of studies have been devoted to investigating the cognitive processes of ProM in order to explain which components are involved in encoding, retention, and retrieval [9,10,11]. Dobbs and Reeves (1996) developed a comprehensive six-component model comprising: 1) meta-knowledge, 2) planning, 3) monitoring, 4) recalling the content of the intention, 5) compliance, and 6) awareness of output [9]. This model emphasizes that retrospective memory (the content of intentions) is involved in ProM processes. Meanwhile, it demonstrates that executive functions, such as planning and monitoring, play important roles in ProM. Additionally, it identifies both the knowledge of how to remember and personal abilities as potential factors affecting ProM performance.
The Type of ProM as A Factor
ProM tasks have primarily been classified into two types: time-based and event-based [12].
Time-based ProM refers to performing an intention at a specific time or after a period (e.g., remembering to take medicines at 7:00 PM or 30 minutes later); it depends on internal cues of time monitoring and self-initiation. Event-based ProM refers to performing the delayed intention upon an event (e.g., remembering to deliver the mail when passing by a mailbox); the event is an external environmental cue, not requiring self-initiation.
A thorough review of existing studies makes clear that there are mechanistic differences between event-based and time-based ProM. A majority of researchers stated that performance on event-based tasks is much better than on time-based tasks, because the latter are particularly dependent on time monitoring and self-initiation [13,14,15]. For example, in the studies of Einstein et al. [14], experiment 3 involved both time- and event-based tasks. The participants were asked to answer a set of general questions (the ongoing task). In the time-based task, the participants were required to press a keyboard key every 5 minutes, and in the event-based task, they were required to press the keyboard key when they encountered a question about presidents. The results indicated that the participants' (both 18-21 and 61-78 years old) performance on the event-based task was higher. However, some researchers presented different results. For example, d'Ydewalle et al. used face identification as the ongoing activity, without the answering of general questions used by Einstein et al. According to their results, performance in the time-based task was better than in the event-based task among old adults (55-81 years) (see also [16,17,14]). They demonstrated that when participants were involved in simple problems (low cognitive load), their performance in the time-based task was higher than in the event-based task, whereas when they were required to resolve complex problems (high cognitive load), their performance in the time-based task decreased, particularly for old adults (60-86 years) [18].
d'Ydewalle et al. (2001) reproduced the experiment with a 2 × 2 × 2 design (old vs. young, event- vs. time-based task, and low vs. high complexity of the ongoing task). This study confirmed that the complexity of the ongoing task is an important factor that influences ProM performance, and their result provides a better explanation of the type-related discrepancy.

Developmental studies of ProM have covered various age ranges. Some studies compared children of very young age and early school age (4-7 years old) (e.g., [22]), some focused on preschool children [20], and a few concentrated on schoolchildren's development over a relatively wide age range of 5 years (e.g., [23,24]).

From 2 to 5 years old. Two studies examined the development of ProM in children aged from 2 to 5 years [20,21]. These studies found that children have ProM ability as early as the age of 2. In Somerville et al. (1983), children from 2 to 4 years old were involved in deliberate ProM activities: 2-, 3-, and 4-year-old children were given 8 different ProM tasks by their mothers over a period of 2 weeks. These tasks varied in motivation from high, such as "remind me to buy candy at the store", to low, such as "remind me to bring in the washing". The delay before carrying out the task varied from short (a few minutes) to long (a few hours). The results showed that even 2-year-old children could recall the ProM tasks and perform very well, with 80% success in remembering tasks with high interest and short delay.
Unlike the naturalistic study of Somerville et al. (1983), Guajardo and Best (2000) studied 3- to 5-year-old children on a ProM task in a laboratory setting. In their study, the children were introduced to a computer-based game in which they received 6 blocks of 10 pictures at a rate of one picture per 5 seconds. The ProM task required the children to press a key on the keyboard once they saw a picture of a house (or a duck). The results showed that the 5-year-old children were reliably better at remembering to press the keyboard than the 3-year-old children; their study thus obtained a significant effect of age, meaning that 5-year-old children can perform better on ProM tasks than 3-year-old children [25].
From 5 to 7 years old. Some studies showed that ProM develops rapidly after the age of 5 (e.g., [26,22]). For example, Kvavilashvili et al. (2001) examined two groups of children (5 and 7 years old) on an event-based ProM task in experiment 1. The children were engaged in an ongoing activity of naming picture cards.
They were asked to hide the cards when they saw animal cards. They found that the 7-year-old children performed better than the 5-year-old children [22]. However, some other studies obtained discrepant findings. For example, Meacham and Colombo (1980) asked children (from 5 to 8 years old) to play a card game with experimenters. The ProM task was that, when they finished the game, the children should remind the experimenters to open the surprise box which had been placed on the table before the game started. The data analysis revealed no age effect during this age period [27].

From 7 to 12 years. There are two studies on children's development in time-based ProM from 7 to 12 years old [23,24]. The computer game CyberCruiser was applied in Kerns' (2000) study. Children from 7 to 12 years old controlled a car using a joystick. They were required to be very careful about the surrounding traffic and hazards in order to earn points, which were calculated from whether they hit other vehicles and how fast they drove. The time-based ProM task was to check the fuel gauge by hitting a button in case the gas ran out. The results revealed a significant ProM development from 7 to 12 years old [23]. In the study of Mackinlay et al. (2009), the ongoing activity was a one-back picture task which required the children (7-12 years old) to judge whether the current picture had been seen before by pressing yes or no keys. The time-based ProM task was to remember to press the clock key every 2 minutes. The results indicated that older children had better performance [24]. Passolunghi et al. (1995) likewise found that older children's (10-11 years old) performance on an event-based ProM task was better than younger children's (7-8 years old) [28].

These findings on developmental patterns are not entirely consistent, for at least two possible reasons. Firstly, the difficulty of the ongoing task may not be matched across ages: in the study of Kerns (2000), although the game as an ongoing task was equally interesting to both 7- and 12-year-old children, it was still difficult for 7-year-old children to play. Secondly, it is also possible that some paradigms cannot eliminate ceiling effects. For example, in the study of Somerville et al. (1983), the ProM task of reminding mom to buy candies in a few minutes was too exciting for either 2- or 3-year-old children; the children could not wait to carry out the exciting ProM task. It is thus not surprising that 2-year-old children's performance on the ProM task was the same as older children's. Similarly, in the study of Meacham and Colombo (1980), the ProM task of reminding the experimenter to open the surprise box was also a tempting target.
Aging with ProM
In fact, age was initially the focus of ProM studies. Craik (1986) was the first to point out that age would be a factor affecting ProM performance; his study found that older adults have more complaints about their forgetfulness in terms of accomplishing deferred tasks [13]. Later, Dobbs and Rule (1987) examined age differences in ProM performance [30]. They also found a deficit for older adults (from 70 to 99 years old) compared with younger adults (from 30 to 65 years old).
Taking ProM types into account, Einstein et al. (1995) pointed out that an age effect exists for time-based ProM, based on comparing older and younger adults. At the same time, they stated that there was no age effect for event-based ProM [14]. d'Ydewalle et al. argued against their results through a set of experiments [18]. d'Ydewalle (1996) also conducted three experiments to test performance on time- and event-based ProM tasks [16].
Unlike Einstein et al. (1995), they found that age effects existed in the event-based task. Additionally, Maylor (1996) also conducted experiments to examine the discrepancies.
She pointed out that the difficulty of the ongoing activity might interfere with detecting age-related differences [4]. In other words, manipulating the difficulty of the ongoing task can equate ProM performance between younger and older participants, as in Einstein and McDaniel's (1990) study, in which they eased the older participants' ongoing task by using fewer words in the list of recalled words. Maylor's (1996) experiment asked participants to recognize faces and respond when the person had a beard. This experiment required attention shifting from face recognition (one level of processing) to facial features (another level of processing).
Attention shifting is an executive-function process. Compared with younger adults, older adults have diminished executive functions [31]. Therefore, Maylor (1996) concluded that older adults also have deficits in event-based ProM tasks which require executive functions.
Looking across ProM development over the lifespan, compared with young adults, children have less-developed ProM and older adults show a gradual decline in ProM, both of which are attributed to executive functions. These age differences draw an inverted U-shaped developmental trajectory across the lifespan when relying on measures of executive function [1].
Conclusions: It is clear that age-related differences in time-based ProM performance are related to the internal cues of time monitoring and self-initiation. Age-related differences in event-based ProM depend on executive-function variables such as attention shifting between different levels of cognitive processing [4]. Therefore, both time-based and event-based ProM seem to require executive control processes [3,32,33].
In the laboratory setting, the ongoing task is manipulated by experimenters. Failure to adjust the difficulty of the ongoing task across age groups may lead to confusing and contradictory results, as mentioned above.
The Complexity of the Ongoing Task as A Factor
In laboratory settings, the ongoing task is designed as background work, such as answering a set of questions [14], solving mathematical or puzzle problems [20], naming a set of pictures [16,18,22,4], or playing a computer-based game [23], from which the participants need to shift their attention to perform the ProM task.
Firstly, consider some studies comparing ProM performance between younger and older adults. In the studies of Einstein et al. (1995), the ongoing task required participants to answer a long set of general-knowledge and problem-solving questions in both time-based and event-based ProM [14]. d'Ydewalle et al. (1996, 1999) employed a paradigm similar to that of Einstein et al. (1995), but with face identification as a different ongoing task [16,18]. Similarly, in the study of Maylor (1996), the ongoing task was designed as writing down the names of various famous male faces [4].
In the developmental studies of ProM, Nigro et al. (2002) designed their ongoing task as solving a series of mathematical operations and puzzles for children. The difficulty of the ongoing task was adjusted in advance according to the age of the participants in both time-based and event-based ProM [20]. In the study of Kvavilashvili et al. (2001), children aged 4, 5, and 7 years were asked to look at four stacks of picture cards one by one and tell the experimenter as accurately as possible what each picture was. The children were also told that they could draw one picture for each stack [22]. Kerns (2000) employed a computer-based game in the experiment. The ongoing task required children aged 7 to 12 years to play the CyberCruiser game, in which they used a joystick to control a car on a road with traffic and hazards [23].
Naturalistic studies carry out experiments in real-life environments. The study of Somerville et al. (1983) is a pioneering naturalistic study. In this study, children aged from 2 to 4 years were given eight ProM tasks in their real life by their mothers. The mothers also observed the children's ProM performance and acted as experimenters [21]. Ceci and Bronfenbrenner (1985) applied a video game to mimic a naturalistic study with a real-life scenario. The children were allowed to play the video game, but they needed to remember to take the cupcakes out of the oven after a delay of 30 minutes [34]. The ongoing task of playing the game is very likely to happen in real life; it is the same as watching TV while remembering to turn off the oven after a certain number of minutes. In the study of Kvavilashvili and Fisher (2007), the participants went about their daily routines as usual. They were required to remember to phone the experimenter either at a pre-arranged time or after receiving a certain text message on the seventh day of their experiment session [29]. The ongoing task was also their real life, in which the ProM task of making a phone call at an intended time was embedded.
The performance of the ongoing task. Most ProM studies pay attention only to ProM performance, whereas performance on the ongoing task is not reported. For instance, Kerns (2000) emphasized that the computer-based CyberCruiser game made children aged from 7 to 13 years equally engaged in playing, but did not analyse ongoing-task performance (game score) to identify age-related differences. Although Nigro et al. (2002) adjusted the complexity of the ongoing task (mathematical problems and puzzles) in advance according to the class level of the children aged from 7 to 12 years, they did not report the children's performance on the ongoing task.
d'Ydewalle et al. (1999), for example, conducted the experiment with a 2 × 2 × 2 design (old vs. young, event- vs. time-based task, and low vs. high complexity of the ongoing task). In their results, they analysed the ongoing tasks of both answering questions and face identification.
Under the condition of answering questions (high complexity), older adults even performed better on the ongoing task than younger adults in the time-based task, whereas there was no age difference in the event-based task. On the other hand, under the condition of face identification (low complexity), younger adults performed better on the ongoing task than older adults in both the time-based and event-based tasks, but the age-related difference in the event-based task was not significant. The analysis of interactions (ProM and ongoing task; ProM and age; age and ongoing task) suggested that age differences in ProM would disappear under some conditions (low complexity of the ongoing task) when taking the performance of the ongoing task into account [18].
The study of Rendell et al. (2007) involved face recognition as the ongoing task, in which a target cue ("with glasses" or "John") was embedded. The ongoing task was measured by the proportion of famous faces correctly named. In experiment 2, the ongoing task was more slowly paced than in experiment 1. They found that younger participants named a greater proportion of faces than the older adults did in both experiments. More importantly, the pattern of findings across experiments 1 and 2 indicated that when the ongoing task is less challenging (experiment 2), older adults perform on ProM as well as younger adults [35]. It was also found that even when older adults, as well as younger adults, failed to perform the ProM task, the older adults performed similarly to or better than younger adults on the ongoing task, and that emphasizing the importance of the ongoing task can improve ongoing-task performance [36].
Conclusions: On the one hand, existing studies have already paid attention to the complexity of the ongoing task. On the other hand, the studies indicate that the complexity of the ongoing task is an important factor that interferes with age-related and type-related differences.
The Importance of the ProM Task as A Factor
Existing research supports the idea that the importance of the ProM task can affect ProM performance, such that ProM performance improves when the ProM task leads individuals to allocate increased attention [37,36]. Kliegel et al. (2001), in order to understand the importance of the ProM task, conducted two experiments in which task importance was manipulated. They found that ProM performance improved with higher importance of the ProM task. The Smith et al. (2013) study also investigated the effects of task importance. In this study, one group of participants received instructions emphasizing the importance of the ProM task (PMI), compared with another group of participants who received instructions emphasizing the importance of the ongoing color-matching task (CMI). The results showed that participants performed better on the ProM task under the PMI condition. More interestingly, age-related differences existed in the PMI condition: for younger adults, emphasizing the importance of the ProM task substantially improved their ProM performance and decreased their performance on the ongoing task, whereas older adults in the PMI condition only slightly improved their ProM performance and kept the same level of performance on the ongoing task [36]. In other words, younger adults can vary their allocation of resources between the ongoing task and the ProM task as a function of task emphasis, whereas older adults are less capable of shifting their attention from the ongoing task to the ProM task, presumably assuming that the ongoing task is more important for them than the ProM task.
Conclusions: We clearly understand that emphasizing the importance of the ProM task improves ProM performance, although the degree of improvement varies by age.
The Motivation of the ProM Task as A Factor
Some studies have examined the effect of motivation on ProM performance, especially in naturalistic studies [21].
The study of Meacham and Singer (1977) was the first to investigate motivation in ProM. In their study, the participants were instructed to send postcards to the experimenter at prearranged times. The results revealed that participants who expected to receive a reward performed better than those who did not [38].
In the study of Somerville et al. (1983), for example, children from 2 to 4 years old were involved in deliberate ProM activities. They were given 8 different ProM tasks by their mothers over a period of 2 weeks. These ProM tasks varied in motivation from high, such as "remind me to buy candy at the store", to low, such as "remind me to bring in the washing". The results found that even 2-year-old children could recall the ProM tasks, with 80% success in remembering high-motivation tasks, performing as well as 4-year-old children [21].
Recently, in order to test motivation as an assumed factor behind the paradox of age-related declines in the laboratory versus age benefits in naturalistic settings, Aberle et al. (2010) conducted an experiment in which participants were instructed to remember to contact the experimenter repeatedly over the course of one week. One group had a monetary incentive and the other group had no incentive. The results showed that young adults in the high-motivation group overcame their age-related deficits and performed better than those in the low-motivation group [39]. In other words, increasing the motivation of the ProM task can improve ProM performance.
ProM Assists
As mentioned before, ProM is vital for our health and social life. ProM failures pose great challenges and directly influence people's quality of life; a significant 50-80% of memory failures are ProM problems [2]. To avoid the consequences of ProM failures, people are likely to use memory assists to help their remembering, especially older people [40]. Most studies demonstrated that both young and old people benefit from using memory assists (e.g., [12,41]). Memory assists help users to store information or remind them of an event they might forget [42]. In this study, we focus on the reminder function of memory assists, which relates to ProM, rather than the storing function, which relates to retrospective memory.
According to Harris (1978), reminders are generally categorized as active or passive [43]. Diaries, lists, and calendars are examples of passive reminders, which require the user to actively check them, whereas Google Calendar and mobile phones are examples of active reminders, which attract users' attention and instruct them when and how to perform an intention. ProM reminders vary from the traditional way of pen and paper to technology-based electronic devices. The purpose of designing a ProM reminder also varies from specific to generic use. The current study targets a technology-based reminder for generic use, since our ultimate aim is to produce a reminder system with more flexible and adaptable features.
NeuroPage
Originally, technology-based memory assists commonly targeted cognitively impaired people. One of the earliest ProM assists was NeuroPage, whose primary users are brain-injured patients [44]. NeuroPage is very simple and requires little learning from its users. However, according to a study by Caprani et al. (2006), there are two potential ProM-related functions that NeuroPage could add. The first is a postpone function; the second is a task confirmation function [7]. A user may receive a reminder at an unsuitable time; the ProM assist would therefore benefit from a task-postponement function, so that the user can be reminded at a suitable time when they are available to successfully perform the intention. Meanwhile, the caregiver should know whether the intended task has been carried out or not. Adding these two functions would improve NeuroPage's assistive ability.
MEMOS (Mobile Extensible Memory Aid System)
MEMOS is a mobile interactive ProM assist designed for brain-injured and older users to remind them of essential facts and dates [46]. MEMOS consists of two parts: a personal memory assistant (PMA) and a base station. The PMA is a mobile electronic device that reminds the user of important tasks and provides feedback, while the base station coordinates the activities of caregivers and notifies them about feedback on task execution. An evaluation compared electronic memory assists among users with brain injury. The results showed that users performed best using the PMA from MEMOS compared with the other electronic memory assists (palm pilot and mobile phone) [47]. MEMOS overcomes the major drawbacks of NeuroPage. It supports caregivers in encoding and inputting information, and displays important information for patients to successfully perform the task. Furthermore, it allows patients to confirm that the task has been carried out by pressing a button, and the PMA can detect this information.
Similar to NeuroPage, the primary users of MEMOS are brain-injured patients. The research group has recognized its potential as a ProM assist for healthy older people, and planned to extend the system for application in other fields [48].
Memojog
Memojog was designed as a ProM assist built on a personal digital assistant (PDA) platform for memory-impaired persons [49]. Memojog consists of three components: the PDA, a central server, and a web-based database. The user, a caregiver, or a care professional can input the user's schedule and action prompts into the PDA or the web-based database. The user can accept, postpone, or ignore a reminder. The caregiver and care professional can learn the user's response from data transmitted to the central server. Memojog also has additional functions for storing personal information for the user.
The Memojog system was evaluated with a group of older adults and memory-impaired users [50]. There were two field evaluations comprising 6 participants each (different participants in each evaluation). The results showed that the participants were happy with the Memojog system and could use it easily. Users appreciated that the system reminded them of their intended tasks accurately.
However, participants also gave some negative comments, such as coverage problems (the inability to connect to the relevant website for changing or updating their schedules) and some hardware-related problems, e.g., an insensitive touch screen [50].
Google Calendar

Both paper-based and electronic calendars are used as reminders. Paper-based calendars have a few limitations associated with their use, such as no alerts, forgetting to look at them, and not enough space. An electronic calendar, such as Google Calendar, is an alternative solution that overcomes these limitations. Google Calendar not only provides email reminders and pop-up reminders, but also enables users to link their calendar with their mobile phones. Users can set how far in advance of the task reminders are issued and how many reminders there are, which enables users to set up the system according to their personal needs [51]. Google Calendar is also a simple system to use; adding an entry is as simple as clicking on the box corresponding to the appropriate date. In the evaluation reported in [51], Google Calendar was more effective than a standard diary in supporting people to achieve their ProM tasks. The authors also found that Google Calendar was rated higher than the standard diary in assist preference. Compared to the diary, Google Calendar reduces the need for monitoring by alerting the participants to complete the events; the active reminder (linking to the mobile phone) reduces the need to actively or frequently check the calendar. Therefore, Google Calendar appeared to support the retrieval of ProM tasks and to maximize the probability that ProM tasks can be carried out within the necessary response window [51].
However, the participants in McDonald et al. (2011) complained that they still sometimes failed in task execution even when they noticed a timed reminder to perform a task. The main reason is that the reminder was issued at a time when the user was unavailable, and the reminders could not interact with users to be updated in real time.
AutoMinder
Autominder is one of the most advanced technological reminder systems, designed to assist a broader population, such as older adults in general [52]. The purpose of Autominder is to help older adults live independently in their home environment. Compared to the previous ProM assists, Autominder has the ability to model the user's daily plans, track the user's task execution through behaviors detected by sensors at home, and make decisions about whether and when to issue reminders [52].
Autominder has three main components [52]. The first component is the Plan Manager, which is responsible for storing the user's initialized daily plan and updating the user's plan as the day progresses to avoid inconsistent or conflicting activities. The second component is the Client Modeler, which tracks the user's execution of the plan through the behaviors detected by the sensors at home; the third makes decisions about whether and when to issue reminders. However, according to Caprani et al. (2006), the sensors and observable information are not always reliable, which may result in assumption failures. Assumption failures decrease the reliability of reminders. Furthermore, older users may be afraid of intelligent technology; for example, they may feel wary of a mobile robot and sensors working at home [7].
Comparisons of ProM Assists
Based on a review paper [7], we list functions and memory supports of several electronic ProM assists.
Each memory assist has its own evaluation studies supporting its effective use in helping users to execute prospective tasks. From the review above, each has its own advantages and limitations. We compare their relevance to each stage of ProM theory: 1) intention formation: all of them support encoding, as they require users to form intentions, and some of them provide voice input (e.g., MEMOS, Autominder); 2) intention retention: all of them support short- or long-term delays, unlike the brief retention intervals typical of laboratory settings.

Chapter 3 Our ProM Reminder System
Introduction
Based on the literature review in Chapter 2, we find that existing ProM assists 1) support all stages of the process model [3], and 2) meet the requirements of ProM components (e.g., planning, monitoring, content recall). However, some prospective tasks are still not carried out successfully, mainly because the reminder is issued at a time when the user is unavailable, or because the reminder system fails to remind.
Some ProM assists, such as Google Calendar, can provide multiple reminders, and users can increase the number of reminders by manually changing the settings. Increasing the number of reminders is one way to raise the probability that a reminder is issued at a time when the user is available. In this case, how many reminders should be issued to a user? A single reminder is not enough to guarantee the completion of a ProM task, while too many reminders may lead to annoyance.
We are motivated to investigate how many reminders should be presented for a specific ProM task, a question which has not been addressed by existing approaches. Naturally, the question of when to remind also arises, because reminders are issued between the time the reminding starts and the time the ProM task is executed. Returning to the reminder itself, it is in essence a deliberately created cue for a ProM task. So the third question is how to make this cue (the reminder) salient and highly associated with the ProM task.
Therefore, we propose to develop a reminder plan to answer these three questions. In the following, we explain the three outcomes of this reminder plan: the optimal number of reminders, the optimal reminding schedule, and the optimal reminding way.
1. The optimal number of reminders depends on factors which have effects on ProM performance. Although Autominder's Client Modeler has an optimization function that reasons about the user's behavior patterns to decide whether and when to remind, the observable information is not always reliable. Particularly for older people, advanced and intelligent technology can already make them feel uncomfortable and unsafe; even worse, assumption failures could cause confusion and apprehension [7]. Therefore, the optimal number of reminders, determined from the input factors, should avoid system assumption failures and the annoyance of redundant reminders.
2. The optimal reminding schedule decides when to issue reminders before the ProM task.
The first factor is the user's assumption of how long they need to be ready for the intended task. For example, suppose you have a meeting at 2:00 PM in the auditorium and you plan to work at the office before the meeting. If you assume it will take you 30 minutes to get to the auditorium, you are likely to set up the reminder at 1:20 PM; if you assume it will take you only 5 minutes to get there, you are likely to set it at 1:50 PM. The second factor is the retention interval of the ProM task, because intervals range from hours to months. Even if there are multiple reminders for a ProM task which needs to be executed 7 days later, you do not want to start receiving reminders today. The third factor is the user's current location and situation. For example, suppose you set the reminder at 1:30 PM for the 2:00 PM meeting, but the travel time between your current location and the meeting place is actually much longer than 30 minutes. In this case, we propose to adjust the reminding schedule to an earlier time according to the actual location-based time cost (a small code sketch following this discussion illustrates the adjustment).

3. The optimal reminding way. The nature of a reminder is a deliberately created cue.
Several studies support the view that cues strongly associated with the intention can produce effective retrieval [53, 54]. Reminders consisting of simple alarms or alerts cannot provide a cue strongly associated with a specific ProM task. Users may become confused if all ProM tasks are announced by the same sound, especially when visual reminders are unavailable or unreachable. Our reminder system can use the type of the ProM task (e.g., doctor appointment) to produce highly associated reminders.
At the same time, because of the interplay between the ProM task and the ongoing task, we also propose that our reminder system should understand the setting of the ongoing task. For example, when you are engaged in an important meeting and you have a doctor appointment 30 minutes later, you normally do not want to be reminded loudly, which is interruptive. In an office-based environment, users are more likely to prefer text-based reminders, such as pop-up windows.
However, how to remind is also an intricate question, concerned not only with the nature of the ProM task and the environmental context, but also with individual differences.
Some users prefer an audio reminder so as to make sure it is heard; some users with cognitive impairment may prefer a reminder accompanied by a picture. This view of how to remind is consistent with the call for reminder systems to incorporate human-factors analysis [7].
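To make the location-based schedule adjustment from point 2 concrete, here is a minimal Python sketch. It is only an illustration: the function name `adjust_reminder` and the assumption that a travel-time estimate is available from some external location service are ours, not part of the system specification.

```python
from datetime import datetime, timedelta

def adjust_reminder(task_time: datetime,
                    assumed_prep_minutes: int,
                    travel_minutes: int) -> datetime:
    """Return the time at which the reminder should be issued.

    task_time            -- when the ProM task must be executed
    assumed_prep_minutes -- how long the user assumed they need to get ready
    travel_minutes       -- actual location-based time cost (assumed to come
                            from an external map/location service)
    """
    # Use the larger of the user's own assumption and the actual travel
    # time, so the reminder moves earlier when the current location
    # demands more time than the user planned for.
    lead = max(assumed_prep_minutes, travel_minutes)
    return task_time - timedelta(minutes=lead)

# The 2:00 PM meeting example from the text: the user assumed 30 minutes,
# but the actual travel time from the current location is 45 minutes.
meeting = datetime(2016, 1, 22, 14, 0)
print(adjust_reminder(meeting, assumed_prep_minutes=30, travel_minutes=45))
# -> 2016-01-22 13:15:00, earlier than the user's own 1:30 PM setting
```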
After generating the reminder plan, we propose a ProM agent to implement the plan.
First, in our ProM agent, the reminder as a target cue appears at the initiation stage.
Meanwhile, the reminder plan is also encoded and maintained in storage together with the intended task, waiting for the moment to be issued. Second, the ProM agent issues the reminders and determines the next step based on the user's response to the reminder.
Finally, the ProM agent supports updates before and after the reminding process.
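As a rough sketch of the agent behaviour just described, the following Python fragment shows one way the issue-and-respond cycle could be organized. The response values mirror the accept/postpone (Acc/pos) responses defined later in this chapter; the postponement interval and all names are illustrative assumptions, not the system's actual implementation.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta
from typing import Callable, List

@dataclass
class ReminderPlan:
    schedule: List[datetime]                   # when to issue reminders
    way: str                                   # e.g. "pop-up", "ring", "speech"
    delay: timedelta = timedelta(minutes=10)   # postponement interval (assumed)

def run_prom_agent(task: str,
                   plan: ReminderPlan,
                   issue: Callable[[str, str], None],
                   get_response: Callable[[], str]) -> str:
    """Drive one ProM task through the initiation stage.

    `issue` delivers a reminder in the chosen way; `get_response` returns
    the user's reply: 'accept', 'postpone', or 'none'.
    """
    pending = sorted(plan.schedule)
    while pending:
        when = pending.pop(0)
        # (a real system would wait until `when` before issuing; omitted here)
        issue(task, plan.way)
        response = get_response()
        if response == "accept":
            return "executing"                  # user is carrying out the task
        if response == "postpone":
            pending.append(when + plan.delay)   # re-issue the reminder later
            pending.sort()
        # 'none': fall through to the next scheduled reminder
    return "expired"
```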
In addition, ProM assists should be adaptive. By observing the user's behaviors, the system can reason about and determine whether and when to issue a reminder. We propose to develop a personalized user model to observe the user's behaviors, learn the user's preferences for each feature of the reminder plan, and adapt the reminder plan to meet the personal preferences as much as possible.
The current study targets a technology-based reminder of generic use, since our ultimate objective is to produce a reminder system with more flexible and adaptable features.
Key Ideas for Developing Our Reminder System
The research problems discussed above need to be addressed at the levels of theory, model, and practical knowledge, based on an analysis of the relevant psychological theories and the limitations of currently used reminder systems. To design our reminder system, we first overcome the limitations of the existing ProM assists and learn from their advantages.
Secondly, we develop a computational reminder model based on ProM theories. Thirdly, we incorporate and reason about the factors which potentially influence ProM performance.
Ideas from Existing ProM Assists
Google Calendar is one of the most popular memory assists. It is welcomed by a large population for being easy to learn and simple to use. However, according to McDonald et al. (2011), participants still failed in some prospective activities in their everyday life even when they used Google Calendar [51]. As we discussed above, Google Calendar as a memory agent supports the memory processes of encoding, retention, and retrieval. However, in daily life there are various situations in which a reminder arrives at an inappropriate time, such as while the user is busy in a meeting. The users cannot carry out the intended task at that time; they would rather do it later. Therefore a reminder would benefit from a postponement function, so that users can be reminded at a time when they are available. At the same time, when the current situation, such as the user's location, requires more time to get ready for the ProM task, the reminder should be adjusted to an earlier time.
Unfortunately, Google Calendar does not provide a function for adjusting reminders dynamically according to the current situation. The advanced reminder system Autominder, although adaptive, lacks a user-driven encoding stage. Its observable information is not always reliable, and system assumption failures could cause confusion and apprehension for users [7].
Our reminder system has the features of reliability, optimality and adaptivity to meet each individual's requirements as much as possible.
A Computational Model Developed from ProM Theories
With research on ProM gradually growing, several theories have been developed to explain the processes and components involved in ProM. The process model [3] divides a ProM task into the stages of intention formation (encoding), retention, initiation, and execution. We propose that our reminder system can help users to plan and organize this information clearly. In the encoding (formation) stage, our reminder system requires the user to input the information, to avoid unreliability. The system provides salient and distinctive categories to fill in (e.g., what, when, where, who), so that users consequently receive cues that are highly related to the prospective task. Uniquely, this system encodes the optimal number of reminders, the optimal reminding schedule, and the optimal reminding way. In the retention stage, the system can update the information according to user responses. For example, the system will set the number of reminders to null when the user accepts the task, and will maintain the number of reminders and issue reminders later when the user postpones the task. The initiation stage is the most important one, since people with ProM problems mainly fail to initiate the intention at the appropriate moment [6].
In particular, maintaining monitoring is demanding of cognitive resources. A reminder system initiates the task and issues a reminder to users at an appropriate time, which substitutes for human time-monitoring or cue-capturing. Our system is therefore capable of eliminating the necessity of the human initiation stage. In the execution stage, if the system detects an accepting response from the terminal, it assumes that the user is carrying out the task. Ideally, users with cognitive impairment should receive step-by-step guidelines to ensure that the task will be successfully executed; our system considers this function for such users.
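A minimal sketch follows of how the encoded task record and the retention-stage update rules above could be represented. The field names restate the what/when/where/who categories; everything else (class names, the default of three reminders) is an assumption for illustration.

```python
from dataclasses import dataclass
from datetime import datetime
from typing import Optional

@dataclass
class ProMTask:
    what: str                       # title of the ProM task
    when: datetime                  # execution time
    where: str                      # place of the ProM task
    who: str                        # person related to the task
    remind_num: Optional[int] = 3   # optimal number of reminders (computed)

def on_user_response(task: ProMTask, response: str) -> None:
    """Retention-stage update rules described in the text."""
    if response == "accept":
        task.remind_num = None      # task accepted: no further reminders
    elif response == "postpone":
        pass                        # keep remind_num; reminders issued later
```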
Incorporate and Reason Factors Affecting the ProM Performance
From the literature review, whether in laboratory or naturalistic settings, we know that various factors influence ProM performance, such as the task type (time-based vs. event-based), the user's age, the complexity of the ongoing task, and the importance and motivation of the ProM task. Some factors are negatively related to ProM performance; for example, participants' performance is better when they are involved in simple problems than when they are involved in complex ones [17, 18]. Others, like the importance of the ProM task, are positively related to ProM performance: many studies have demonstrated that participants perform better on the ProM task if its importance is emphasized [37, 36]. So far, there is no computational model that incorporates all these factors into a memory agent.
In daily life, the situation and context of ProM are variable and flexible. Generally, no single factor can guarantee the success of prospective tasks. However, we can maximize the probability of ProM completion as much as possible, and we can set the optimal number of reminders so as to eliminate redundancy. Consequently, we can achieve a reliable and optimal reminder system. Therefore, we first need to incorporate the relevant factors into the reminder system, then work out the weight of each factor, and finally use the relevant principles to calculate the optimal number of reminders, the optimal reminding schedule, and the optimal reminding way.
Implementation of Our Reminder System
Our reminder system includes three components: the reminder planner, the ProM agent, and the personalized user model (see Figure 3.1). The reminder planner is responsible for producing the reminder plan according to factors that potentially affect ProM. The ProM agent is responsible for encoding the task and the plan, maintaining the task and the plan, initiating the task, and performing the task, where reminders are triggered and issued at the initiation stage. The personalized user model is responsible for adapting the reminder plan according to human-system interactions. In the following sections we discuss the detailed implementation of each component.
Modeling of the Reminder Planner
The reminder planner is designed to produce the reminder plan according to factors that potentially affect ProM. The optimal number of reminders, the optimal reminding schedule, and the optimal reminding way together constitute this plan. The reminder planner has three functions. The first function is to compute the optimal number of reminders. The second function is to compute the optimal reminding schedule.
The third function is to compute the optimal reminding way (see Figure 3.2).
Function - the optimal number of reminders: Pn(Com, Imp, Mot, Age) is used to compute the optimal number of reminders. This function includes four variables: the complexity of the ongoing task, the importance of the ProM task, the motivation of the ProM task, and the user's age. The possible values for the first three variables are low, medium, and high; the value of the last variable can be either young or old. Each variable has an associated weight reflecting its influence on ProM performance. Therefore, we apply the weighted average method to calculate the optimal number of reminders.
Now we formally define the variables and the implementation of this function. The variables are:
• Remind_num is the optimal number of reminders;
• Com is the complexity of the ongoing task;
• Imp is the importance of the ProM task;
• Mot is the motivation of the ProM task;
• Age is the user's age;
• Loc is the place of the ProM task;
• Rem is the user's input of the first reminder's time;
• Whe is the execution time of the ProM task.
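The sketch below shows one plausible reading of the weighted-average computation for Pn. The numeric codings of the low/medium/high levels, the direction in which each factor pushes the reminder count, and the weights are all our assumptions; the thesis itself leaves the concrete weights to future work.

```python
# Hypothetical codings: a higher value means more reminders are needed.
# Ongoing-task complexity hurts ProM performance, so it counts directly;
# importance and motivation help performance, so their scale is inverted;
# older users are assumed to need more support.
COMPLEXITY = {"low": 1.0, "medium": 2.0, "high": 3.0}
INVERTED   = {"low": 3.0, "medium": 2.0, "high": 1.0}   # for Imp and Mot
AGE        = {"young": 1.0, "old": 3.0}

WEIGHTS = {"com": 0.3, "imp": 0.25, "mot": 0.25, "age": 0.2}  # placeholders

def pn(com: str, imp: str, mot: str, age: str) -> int:
    """Weighted-average estimate of Remind_num, the optimal number of reminders."""
    score = (WEIGHTS["com"] * COMPLEXITY[com]
             + WEIGHTS["imp"] * INVERTED[imp]
             + WEIGHTS["mot"] * INVERTED[mot]
             + WEIGHTS["age"] * AGE[age])
    return max(1, round(score))

print(pn("high", "low", "low", "old"))     # demanding case -> 3 reminders
print(pn("low", "high", "high", "young"))  # easy case -> 1 reminder
```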
Modeling of the ProM Agent
The ProM agent performs the following activities: encoding the task and the plan, maintaining the task and the plan, initiating the task, and performing the task, where reminders are triggered and issued at the initiation stage (see Figure 3.3).
(i) Encoding:
• Wha is the title of the ProM task;
• Per is the person related to the ProM task.
Remind_num, Remind_sche, Remind_way, Wha, and Per are encoded in the ProM agent.
(ii) Maintaining: Remind_num, Remind_sche, Remind_way, Wha, and Per are maintained in the ProM agent, waiting for reminders to be triggered or updated. This stage accepts updates according to the users' requirements, and it also propagates these updates to the original inputs of the task.
(iii) Initiating:
• Acc and Pos are defined as the user's responses to the reminder (accept or postpone);
• R_delay is defined as how long the reminder is to be delayed;
• Reminding(Remind_way) is defined as the function that issues the reminder in the chosen way (visual or audio, long or short, music or ring).

We propose to develop a new reminder system to improve ProM performance. We draw lessons from the existing reminder systems by learning from their strengths and overcoming their limitations. At the same time, we analyze the mechanisms of ProM and follow ProM theory. In this context, our reminder system has been developed with three components: the reminder planner, the ProM agent, and the personalized user model. The reminder planner is responsible for producing the reminder plan according to factors that potentially affect ProM.
The ProM agent is responsible for encoding the task and the plan, maintaining the task and the plan, initiating the task, and performing the task, where reminders are triggered and issued at the initiation stage. The personalized user model is responsible for adapting the reminder plan according to human-system interactions.
To realize the functions of different components of our new reminder model, a series of principles and algorithms are presented in our report. Four potential factors (the complexity of the ongoing task, the importance of the ProM task, the motivation of the ProM task, and user's age) that influence the ProM performance determine the optimal number of reminders. The environmental factors such as the locations of the ProM task and the current task, the objective factor of the user's initial expectation of the reminder's starting time, and the optimal number of reminders determine the optimal reminding schedule.
Besides the four factors determining the optimal number of reminders, another factor (the type of the ProM task, such as personal, work, health, or finance) also determines the reminding way. Reminders strongly associated with the ProM task type help users retrieve the intention as successfully as possible. In summary, the three components of the reminder planner, the ProM agent, and the personalized user model constitute our reminder system, which ensures system reliability and makes the system optimal and adaptive.
Future Work
We have built a reminder model to generate the optimal number of reminders, the optimal reminding schedule, and the optimal reminding way. These results depend on formulating and optimizing a series of factors, such as the user's age, the ongoing tasks, the environmental context, and individual differences. Although these factors have been identified as influencing ProM performance, and we know how they influence it, we still need to determine the weight of each factor. Similarly, we also need to work out the distribution of reminders between the time the reminding starts and the time the ProM task is performed. These results would help to achieve the optimal reminder plan, which maximizes the probability of remembering to perform the prospective task and minimizes potential annoyance.
Meanwhile, we propose a user model to learn the user's behaviors and preferences for each feature of reminding, and to adapt the reminder plan to meet the user's preferences as much as possible. In the future, we are going to integrate machine learning techniques, such as reinforcement learning and supervised learning, into this line of study to make our reminder system adaptive.
"year": 2016,
"sha1": "71cbba4bc8dc30824b9d744ec936e4ba4bcf315f",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "71cbba4bc8dc30824b9d744ec936e4ba4bcf315f",
"s2fieldsofstudy": [
"Psychology"
],
"extfieldsofstudy": [
"Computer Science"
]
} |
Heating design of cowshed floor heating system based on solar energy / air source heat pump in plateau cold area
Given the high-altitude, cold environment of agricultural and pastoral areas, appropriately adjusting the ambient temperature of the cowshed is of great significance for the growth of yaks, giving them a more comfortable growth environment despite low external temperatures. Floor heating is a new scientific feeding technology that saves labor, increases efficiency, reduces investment, and is easy to operate, suiting the development of the modern yak breeding industry. At the same time, the solar energy system and the air source heat pump system are coupled as the heat source, which is energy-saving and environmentally friendly with the best economic benefits. In this paper, by coupling the advantages of these three systems, we design a "cowshed floor heating system based on solar energy / air source heat pump".
Introduction
With the continuous improvement of people's living standards, domestic consumption demand for yak meat and related dairy products will continue to grow. How to ensure the supply of yak meat and related dairy products is an important problem for yak breeding. Therefore, for high-altitude, cold farming and animal husbandry areas, heating technology that safeguards the milk production of female yaks and improves the survival rate of yak calves is very important for improving the economic benefits of the cowshed and the marketing rate of yaks. As a scientific breeding technology, floor heating has attracted more and more attention [1]. The purpose of floor heating in the cowshed is not to raise the ambient temperature of the whole cowshed; it is mainly used to heat the rest area of the yaks and provide a suitable hotbed for them. Its heating principle is to arrange the floor heating pipeline under the ground of the yak rest area so that it radiates heat upward into the air, thereby providing heat where the yaks rest. As the heat source of the floor heating system, renewable energy is the first consideration in this paper. As a kind of low-grade energy, air can be easily obtained without any environmental pollution, and air source heat pump technology is a mature and widely used technology. At the same time, when the ambient temperature is low, it is difficult for an air source heat pump system to achieve efficient energy saving [2]. Tibet's Naqu area is rich in solar energy resources, and solar energy is also a green and clean energy. Therefore, on the basis of making full use of solar energy, this paper couples the solar energy system and the air source heat pump system [2]. When solar radiation is insufficient, the hot water temperature is raised by the air source heat pump system; when the ambient temperature is low, the air source heat pump system uses the radiant heat obtained by the solar system to improve system energy efficiency. The operation of the two heat utilization technologies is then optimized so as to achieve energy conservation and environmental protection.
In this paper, the advantages of the solar energy system, the air source heat pump system, and the floor heating system are effectively coupled. For the "One village one union" cowshed in Jiagong village, Luoma Town, Naqu City, Tibet Autonomous Region, a "cowshed floor heating system based on solar / air source heat pump" was designed. The purpose of this system is to use solar energy and air energy, two kinds of renewable energy, to provide 12℃ regional heating for the yak rest area in the cowshed during the cold winter season (average outdoor temperature of -30℃), in order to improve the milk yield of female yaks and the survival rate of yak calves in winter.
Local meteorological data
Latitude and longitude: 31°29' N, 92°04' E. Annual average temperature: 7.5℃; annual sunshine hours: 3130.4 h; annual irradiation: 8705.22 MJ/(m²·a); annual average daily irradiation: 23.85 MJ/(m²·d). The annual average daily irradiation on the inclined collector surface is 23,850 kJ/m² (since the installation angle of the transverse-tube collector is 15°, the compensation ratio is not considered).
Basic water temperature: 9℃. The make-up water of the system is groundwater, and the change trend of water supply temperature of groundwater is basically consistent with that of atmospheric temperature. Therefore, the calculated temperature of cold water is the average groundwater temperature.
Water quality requirements of the solar hot water system: water quality conditions differ between regions, and in areas with poor water quality the performance of a solar hot water system is seriously affected. Therefore, the quality of the supply water must meet the following indicators (with reference to the standards for drinking water and sanitary water), as shown in Table 1.
Building overview
The project is a yak cowshed floor heating project in Naqu City, Tibet Autonomous Region, at an altitude of 4800 m; the site is a farm. The cowshed is 3.2 m high, 63 m from east to west and 9 m from south to north, and covers an area of 601 m². The solar collector is arranged beside the cowshed, and the water tank is placed inside the cowshed, covering an area of 10 m². The floor heating area of the cowshed is about 273.46 m²; a heat load index of 200 W/m² is adopted according to the national standard, giving a total heating load of 54.692 kW. The plan of the cowshed is shown in Figure 1. Below a wall elevation of 1.5 m the wall is 370 mm thick brick; the brick wall above elevation ±0.00 is built with MU10 sintered KP1 brick and M5 mixed mortar, while MU10 ordinary shale brick and M7.5 cement mortar are used for masonry below elevation ±0.00. The wall above an elevation of 1.5 m is a 100 mm thick metal color-steel sandwich panel.

Fig.1 The plan of the cowshed
System principle
The main working principle of the solar / air source heat pump coupled floor heating system for the cowshed in winter operation is as follows. First, when the weather is clear and solar energy is sufficient, the air source heat pump system does not operate; only the solar system operates, continuously heating the hot water in the heat storage tank to 70℃. Second, when it snows or the weather is bad and solar energy is insufficient, the solar system operates first as the main heat source, and the air source heat pump system operates second as the auxiliary heat source, heating the hot water in the heat storage tank to 70℃. Under both operation modes, from 5:00 to 6:00 p.m. the hot water in the heat storage tank flows to the floor heating coil through the circulating water pump, and the hot water in the coil transfers heat to the yak rest area from the bottom up by radiant heat exchange, so as to maintain the temperature of the yak rest area at about 12℃. When the temperature of the hot water in the storage tank falls below 40℃, the air source heat pump system starts automatically and heats the hot water in the storage tank. The system principle is shown in Figure 2.
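To make the operating logic above easier to follow, here is a minimal control-loop sketch in Python. The 70℃ set point, the 40℃ heat-pump start threshold, solar priority, and the 5:00-6:00 p.m. circulation window come from the description above; the sensor and actuator interfaces (objects with on()/off() methods) are hypothetical placeholders.

```python
def control_step(now_hour: float,
                 tank_temp: float,
                 solar_sufficient: bool,
                 solar, heat_pump, floor_pump) -> None:
    """One pass of the coupled-system control logic described above.

    solar, heat_pump and floor_pump stand for hypothetical actuator
    objects exposing on()/off() methods.
    """
    TARGET = 70.0     # storage-tank set point, deg C
    HP_START = 40.0   # heat-pump cut-in temperature, deg C

    # The solar system is always the priority heat source.
    solar.on() if tank_temp < TARGET else solar.off()

    # The air source heat pump assists when solar is insufficient,
    # or starts automatically once the tank drops below 40 deg C.
    if tank_temp < HP_START or (not solar_sufficient and tank_temp < TARGET):
        heat_pump.on()
    else:
        heat_pump.off()

    # From 5:00 to 6:00 p.m. the circulating pump sends tank water
    # through the floor heating coil under the yak rest area.
    if 17.0 <= now_hour < 18.0:
        floor_pump.on()
    else:
        floor_pump.off()
```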
3.2.1. Collector area. The daylighting area of the direct-system collector is calculated as [3]:

Ac = Qw · Cw · (tend - ti) · f / (JT · ηcd · (1 - ηL))

Where: Ac is the daylighting area of the direct-system collector, m²; Qw is the daily average water consumption, 20,000 kg; tend is the end temperature of the water in the storage tank, 40℃; Cw is the constant-pressure specific heat capacity of water, 4.18 kJ/(kg·℃); ti is the initial temperature of the water, 8.8℃; JT is the annual average daily solar radiation on the daylighting surface of the local collector, kJ/m²; f is the solar energy guarantee rate, dimensionless; ηcd is the collector's all-day heat collection efficiency, determined according to the actual test results of the collector products, here taken as 0.44; ηL is the heat loss rate of the pipeline and water storage tank, dimensionless, here taken as 0.2.

3.2.2. Design flow. When the temperature difference of the circulating water is 8℃, the circulating flow of the collector per square meter is 0.012 L/(s·m²). For a solar hot water system in which the collecting and circulating pipeline forms a closed circuit, the calculated flow of the pipeline is the circulating flow, calculated according to the following formula [4]:

q = Qs · A

Where: q is the circulating flow, L/h; Qs is the circulating flow of the collector per unit area, L/(h·m²); A is the total area of the solar collector, m².
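As a quick numeric check of the two formulas above, the short Python sketch below plugs in the values listed in the text. The solar guarantee rate f is not stated in the paper, so the f = 0.5 used here is purely an assumption for illustration.

```python
# Values from the text (guarantee rate f assumed for illustration).
Qw = 20000.0            # daily average water consumption, kg
Cw = 4.18               # specific heat of water, kJ/(kg*degC)
t_end, t_i = 40.0, 8.8  # end / initial water temperature, degC
JT = 23850.0            # daily irradiation on the collector plane, kJ/m^2
eta_cd = 0.44           # all-day collector efficiency
eta_L = 0.2             # pipe and tank heat-loss rate
f = 0.5                 # solar guarantee rate (assumed)

Ac = Qw * Cw * (t_end - t_i) * f / (JT * eta_cd * (1 - eta_L))
print(round(Ac, 1), "m^2 full direct-system collector area")   # ~155.3

# Design flow at 0.012 L/(s*m^2) = 43.2 L/(h*m^2):
Qs = 0.012 * 3600       # L/(h*m^2)
q = Qs * Ac
print(round(q), "L/h circulating flow")                        # ~6711
```

Note that even with this modest assumed guarantee rate, the full collector area is far larger than the 7.6 m² collector actually installed, which is consistent with the paper's decision to undersize the solar field and let the air source heat pump cover the balance.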
Pipe diameter calculation
The calculated inner diameter of the pipe follows from the design flow and velocity:

dj = sqrt(4q / (π · v))

Where: q is the design flow, m³/s; dj is the calculated inner diameter of the pipe, m; v is the velocity, m/s.
Head of water pump
The lift of the circulating water pump of the solar collection system is:

Hx = hjx + hj + hf

Where: Hx is the lift of the circulating water pump of the solar energy collection system, m; hjx is the along-path and local resistance loss of the collection-system circulation pipeline, m; hj is the resistance loss of the heat collection cycle through the collector, m; hf is the additional pressure, 2-5 m.
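A small sketch applying the two hydraulic formulas above; the flow, velocity, and loss values below are illustrative assumptions, not design figures from the paper.

```python
import math

def pipe_inner_diameter(q_m3s: float, v: float) -> float:
    """dj = sqrt(4q / (pi * v)), with q in m^3/s and v in m/s."""
    return math.sqrt(4.0 * q_m3s / (math.pi * v))

def pump_head(h_jx: float, h_j: float, h_f: float = 3.0) -> float:
    """Hx = hjx + hj + hf, where hf is the 2-5 m additional pressure."""
    return h_jx + h_j + h_f

q = 6700 / 1000 / 3600   # ~6700 L/h converted to m^3/s (assumed flow)
print(round(pipe_inner_diameter(q, v=1.0) * 1000), "mm inner diameter")  # ~49
print(pump_head(h_jx=4.0, h_j=2.5), "m circulating-pump head")           # 9.5
```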
Equipment selection
3.3.1. Selection of the air source heat pump. Considering that the solar heat collection system is greatly affected by weather and environmental factors, and that in extreme situations there may be no solar heat contribution at all, five air source heat pumps with built-in water pumps from one manufacturer are selected. The heat production capacity of the air source heat pump is 23.5 kW, the total power consumption for heat production is 9.8 kW, the circulation flow is 5.9 m³/h, and the heating COP is 2.4. The working principle of the air source heat pump is shown in Figure 3; site photos are shown in Figure 4.

Fig.3 Working principle of the air source heat pump

Fig.4 Field photo of the air source heat pump

3.3.2. Selection of the solar collector. According to the above formula, and considering that in the non-heating season the solar collection system only provides users with domestic hot water, too large an area would easily cause a waste of resources and increase the total investment of the system. Therefore, a "high temperature resistant, cold resistant, high-efficiency absorption" vacuum-tube solar collector from one manufacturer is selected in this paper. The number of vacuum tubes is 50, and the heat collection area is 7.6 m². Polyurethane insulation material is used. A field photo of the solar collector is shown in Figure 5; the pipeline insulation is shown in Figure 6.
Economic analysis
Since the completion of the project in November 2018, and as of March 2020, the winter milk production of the 45 female yaks has increased from 550 L/month to 1100 L/month; in January 2020, 23 yak calves were born, and the mortality rate of calves during the cold winter period was 0 (calves are generally born in May to June each year, with a mortality rate of about 10%), as shown in Figure 7.
Fig.7 New-born yak calves in January 2020

At the same time, in order to evaluate the economy of the heating system, this paper uses the dynamic investment benefit evaluation method to analyze its cost economy. The calculation formula is [5]:

Z = K · i(1 + i)^n / ((1 + i)^n - 1) + D

Where: Z is the annual cost, yuan/year; i is the interest rate; K is the total investment in equipment, yuan; D is the annual operating cost of the system, yuan/year; n is the service life of the system, years. It is assumed that the annual interest rate is 10%, the service life of the air source heat pump system equipment is 15 a, the service life of the solar heat collection system is 15 a, and the service life of the electric heating boiler system is 10 a [6].
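For reference, the dynamic annual cost formula can be checked in a few lines of Python. Only the 10% interest rate and the 15/15/10-year service lives come from the text; the investment K and operating cost D figures below are invented placeholders, since the paper does not tabulate them.

```python
def annual_cost(K: float, D: float, i: float, n: int) -> float:
    """Z = K * i(1+i)^n / ((1+i)^n - 1) + D (capital recovery plus operation)."""
    crf = i * (1 + i) ** n / ((1 + i) ** n - 1)
    return K * crf + D

i = 0.10
# Placeholder total investment K (yuan) and annual operating cost D (yuan/year):
systems = {
    "solar / air source heat pump": (200_000, 15_000, 15),
    "air source heat pump only":    (150_000, 50_000, 15),
    "electric boiler":              (100_000, 80_000, 10),
}
for name, (K, D, n) in systems.items():
    print(f"{name}: {annual_cost(K, D, i, n):,.0f} yuan/year")
```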
The calculation results for the dynamic annual cost of the three systems are shown in Figure 8. The annual dynamic cost of the electric boiler is 1.49 and 1.62 times that of the air source heat pump system and the solar / air source heat pump system, respectively. The initial investment of the electric boiler system is the smallest, but its service life is shorter than those of the other two systems, so the annualized value of its initial investment is 40.03% and 6.2% higher than those of the other two systems, respectively. The annual operating cost of the solar / air source heat pump system is 37.2% lower than that of the electric boiler system, and 70.1% lower than that of the air source heat pump system alone. In general, the solar / air source heat pump system has the best economy.
Conclusion
In this paper, through the effective coupling of solar energy system, air source heat pump system and ground heating system, a "Cow shed ground heating system based on solar energy / air source heat pump" is designed in high altitude and cold area. Although the initial investment of the system equipment is a little high, the system has a long service life and relatively low annual operation cost, and effectively improves the milk production of the Female Yak in winter and the survival rate of the small yak, which can effectively shorten the payback period of the investment. Solar energy and air energy, as a kind of high-quality renewable energy, are strongly supported by national policies. Therefore, the full use of solar energy composite system is more conducive to solve the contradiction between high energy consumption and energy shortage in China. | 2020-07-09T09:03:31.089Z | 2020-07-03T00:00:00.000 | {
"year": 2020,
"sha1": "e3d03ca687687b464816696b178d539924f088fc",
"oa_license": null,
"oa_url": "https://doi.org/10.1088/1755-1315/514/4/042075",
"oa_status": "GOLD",
"pdf_src": "IOP",
"pdf_hash": "3044fd31eba826d742680f67b9573a68f1185b81",
"s2fieldsofstudy": [
"Engineering",
"Environmental Science",
"Agricultural and Food Sciences"
],
"extfieldsofstudy": [
"Materials Science",
"Physics"
]
} |
Coexistence of osteopoikilosis with seronegative spondyloarthritis and Raynaud's phenomenon: first case report with evaluation of the nailfold capillary bed and literature review
Summary

Osteopoikilosis (OPK) is a rare autosomal dominant bone disorder characterized by numerous hyperostotic areas that tend to localize in periarticular osseous regions. It is usually asymptomatic and is often diagnosed incidentally during X-rays. OPK may be an isolated finding or associated with other pathologies, e.g. skin manifestations, rheumatic and/or skeletal disorders. We report a literature review and, for the first time, the coexistence of OPK with seronegative spondyloarthritis and Raynaud's phenomenon in a 48-year-old female. To the best of our knowledge, this is the first case of OPK studied by videocapillaroscopy, demonstrating the absence of specific microvascular abnormalities of the nailfold capillaries.
INTRODUCTION
Osteopoikilosis (OPK, also known as disseminated condensing osteopathy, spotted bones, or osteopecilia) is a rare hereditary bone disease (1). It is characterized by the presence of hyperostotic spots, preferentially localized in the epiphyses and/or metaphyses of the long bones and in the carpal and tarsal bones (2). The first case was reported in 1915 by Albers-Schönberg (3-5). There are three clinical variants of OPK. The most common is the speckled type, followed by the striated form, while the mixed form is rarer. OPK has an estimated prevalence of 1 per 50,000, while the incidence in skeletal X-rays is reported to be 1 per 20,000 (1, 6). Nowadays, OPK is known to be an autosomal dominant hereditary disease caused by heterozygous mutations in the LEMD3 gene (locus 12q14). LEMD3 encodes an inner nuclear membrane protein that seems to play a role in bone morphogenetic protein signaling (7, 8). Apparently there is also a sporadic form (1, 7-9). There is no difference between genders, with males and females being equally affected; however, other authors report that this pathology is more predominant in the male population (ratio 2:1) (1, 10). OPK is often observed incidentally when X-rays are taken for trauma. Although this pathology is usually asymptomatic, 15-20% of patients may have slight articular pain and/or joint effusion (10-12). Specific diagnosis is based exclusively on X-ray, as usually no haematochemical alterations are involved. Bone biopsy reveals focal condensations of compact lamellar bone within the spongiosa (13). We report a literature review and, for the first time, the coexistence of OPK with seronegative spondyloarthritis and Raynaud's phenomenon in a 48-year-old female. To the best of our knowledge, this is the first case of OPK studied by nailfold videocapillaroscopy (NVC).
CASE REPORT
A 48-year-old female (CM) presented to our Rheumatology Outpatient Service in February 2010 complaining of diffuse joint pain of 3 years' duration, above all in the lumbar and sacral region (with nocturnal exacerbation), wrists, shoulders and knees. Her condition had previously been controlled with analgesic drugs by her G.P. She had also been affected by Raynaud's phenomenon for ten years and complained of morning stiffness in the hands that lasted about 30 minutes. She had a history of slipped disk in 2004 and a diagnosis of hip osteoarthritis in 2007 that had led to a left hip prosthesis in the same year. In 2009, when she was 47 years old, she had an X-ray for pain in the pelvic girdle, and a diagnosis of OPK was made on the basis of the presence of widespread round, oval, focal radiodense lesions localized in the pubic region, the distal epiphyses and metaphyses of the left femur, and the proximal tibia. She was given a thorough clinical examination, and laboratory tests were carried out. The clinical examination revealed a normal range of motion of the peripheral joints, a slight swelling of her wrists, and painfulness in the articular areas mentioned above. Her range of motion of the spine was normal, but there was mild tenderness at the sacroiliac joints. There were no skin lesions. Raynaud's phenomenon was not associated with sclerodactyly. Hand and knee X-rays were negative for both calcium deposits and bone erosions. The X-ray of her hands and wrists confirmed the presence of speckled OPK in the radial, ulnar and metacarpal bones (Fig. 1). X-rays of the spine demonstrated interapophyseal sclerosis of the joints in the lumbar region along with slight convexed right lumbar scoliosis in hyperlordosis. A pelvic girdle X-ray documented a bilateral sclerosis of the sacroiliac joints, along with secondary osteoarthritic manifestations. There were also round focal radiodense lesions (hyperostotic spots) localized in the proximal epiphyses and metaphyses of the left hip and in the pubic region (Fig. 2). Magnetic resonance imaging confirmed the presence of arthritic manifestations at the sacroiliac joints, more evident on the right side (bilateral subchondral bone marrow edema and sclerosis, bilateral irregularities of the sacroiliac joints with erosions at the right iliac site). There was an increase in the inflammatory markers erythrocyte sedimentation rate and C-reactive protein (38 mm/h and 23 mg/L, respectively). Other laboratory tests were unremarkable, including renal and liver function tests, rheumatoid factor, anti-cyclic citrullinated peptide antibodies, antinuclear antibodies, and human leukocyte antigen B27. The osseous markers, parathormone, calcium, phosphorus, vitamin D and alkaline phosphatase were all within normal ranges. Serology for chlamydia and mycoplasma, and urine cultures for Ureaplasma urealyticum and Mycoplasma hominis, were all negative.
A diagnosis of seronegative spondyloarthritis was made on the basis of the ASAS criteria (14). Treatment was started with prednisone at a dosage of 5 mg/day for two months, with limited benefit. Therefore, the dosage of prednisone was increased to 7.5 mg/day and methotrexate was added at 10 mg weekly. Symptoms gradually improved over a 2-month period and inflammatory markers also progressively normalized. NVC was performed to evaluate Raynaud's phenomenon (15). Results were in the normal range for capillary morphology, without any scleroderma pattern: 10 capillaries per linear millimetre were present with a perpendicular distribution to the periungual edge, without pericapillary oedema. The only abnormalities observed were tortuous capillaries and rare irregular capillary ectasias, without giant capillaries (Fig. 3A and B). At the time of writing (20 months after the first visit) the NVC status was unchanged.
DISCUSSION
Osteopoikilosis is known to be an autosomal dominant hereditary disease characterized by sclerosing bone dysplasia, even though there was no documented family history in the case reported herein. There are three clinical types of OPK: speckled, striated, and mixed. Our patient was affected by the speckled type, characterized by widespread round and oval focal radiodense lesions and hyperostotic spots (Fig. 1 and 2). The most frequently affected sites are the bones of the hands (phalanges, carpal, metacarpal) and feet (phalanges, tarsal, metatarsal). Less frequently involved sites include the legs (femur), the axial skeleton (pelvis, sacrum), and arms (radius, ulna), with rare involvement of the ribs, clavicles, skull and vertebrae (6, 16, 17). Our patient's lesions were localized in both proximal and distal epiphyses and metaphyses of the hip and in the proximal tibia, with symmetrical spotted lesions at the wrists (radial, ulnar and metacarpal). In speckled OPK the lesions appear nodular, diffuse and symmetrical; parallel and longitudinal striae are observed in striated OPK, while there is a combination of striae and mottling in mixed OPK (1). Our patient's lesions were multiple, oval and round, regular and symmetrical, around her wrists, pelvis and hip (Fig. 1 and 2). The diagnosis of OPK is usually incidental, made in concomitance with X-rays for other pathologies. In fact, only 15-20% of patients may have slight articular pain and/or joint effusion (10, 17-19). Differential diagnosis in cases of widespread focal round or oval radiodense lesions is between osteoblastic metastases, primary bone tumors, mastocytosis, tuberous sclerosis and synovial chondromatosis. In the presence of only a few striae, the differential diagnosis involves melorheostosis. As far as osteoblastic metastases are concerned, these are more frequently observed in the axial skeleton and are not uniform but asymmetric, varying in size, with osseous destruction and positive scintigraphy findings; OPK has symmetric, regular, oval or rounded lesions, localized predominantly around the joints (19-25). No therapy is required, except in particular cases in which either the patient complains of severe articular pain (for which analgesic drugs are indicated) or symptoms associated with other pathologies are present. In our case, no pain relief was obtained from non-steroidal anti-inflammatory drug treatment until we considered the associated pathology (spondyloarthritis) and modified the treatment regimen accordingly; remission of symptoms was then achieved. Occasionally, OPK may coexist with dermatological pathologies (scleroderma, dermatofibrosis lenticularis disseminata, keloid formation, plantar and palmar keratoma), heart or renal malformations, endocrine disorders, or skeletal diseases (exostosis, spinal stenosis, chondrosarcoma, osteosarcoma) (18-21, 26). OPK has also been described in association with rheumatoid arthritis, reactive arthritis, discoid lupus erythematosus, familial Mediterranean fever, psoriatic arthritis, or fibromyalgia (10-13).
In conclusion, the present work reports for the first time the case of a patient affected by OPK associated with spondyloarthritis and Raynaud's phenomenon. NVC examination helped exclude the presence of a scleroderma pattern of microangiopathy. Furthermore, this is the first paper to have analyzed the nailfold capillary bed in a patient affected by OPK, demonstrating the absence of specific microvascular abnormalities in this clinical condition. We believe that, in cases in which a diagnosis of OPK is made, particular attention must be paid not only to the differential diagnosis, but also to the possible association of the disease with other pathologies. This will allow the best therapeutic regime to be selected, particularly in the presence of pain.
Figure 1 - X-ray of hands and wrists: the presence of speckled osteopoikilosis in the radial, ulnar and metacarpal bones is seen.
Figure 2 - X-ray of pelvis and hip: the presence of oval and round, multiple, regular and symmetrical radiodense lesions of osteopoikilosis, localized around the pelvis and hip, is seen.
Figure 3 - (A-B) Normal nailfold videocapillaroscopy in patient CM: 10 capillaries per linear millimetre are seen with a perpendicular distribution to the periungual edge. Capillaries are U-shaped, sometimes tortuous or with irregular ectasias; neither giant capillaries nor capillary ramifications are present.
"year": 2012,
"sha1": "e9b096be77bda721349ad649368abed94f2c3ecd",
"oa_license": "CCBYNC",
"oa_url": "https://www.reumatismo.org/index.php/reuma/article/download/reumatismo.2012.335/pdf",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "e9b096be77bda721349ad649368abed94f2c3ecd",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Predictors of religious participation of older Europeans in good and poor health
Religious attendance is an important element of activity for older Europeans, especially in more traditional countries. The aim of the analysis is to explore whether it could be an element contributing to active ageing, as well as to assess differences between the religious activity of older individuals with and without multimorbidity, defined as the occurrence of two or more illnesses. The analysis is conducted on the SHARE database (2010-2011), covering 57,391 individuals aged 50+ from 16 European countries. Logistic regressions are calculated to assess predictors of religious activity. The results indicate that religious activity often occurs with multimorbidity, which could be driven by the need for comfort and compensation from religion. It is also significantly correlated with other types of social activities, such as volunteering or learning, even among the population with multimorbidity. There is a positive relation between religious activity and age, although its effect is weaker in the case of multimorbidity; the same holds for being female. Mobility limitations are found to decrease religious participation in both morbidity groups and might be related to the discontinuation of religious practices in older age. The economic situation of older individuals is an insignificant factor for religious attendance. Religious attendance can be an element of active ageing, but also a compensation for and adaptation to the disadvantages occurring in older age and multimorbidity. At the same time, religious activities are often provided at the community level and targeted at populations in poorer health.
Introduction
Strategic documents of the European Union (EU) and the World Health Organization (WHO) indicate that the activity of the older population is one of the main factors of active ageing, underlining its impact on decreasing the economic costs of ageing, including reducing the need for extensive healthcare, postponing disability, and reducing costly long-term care (WHO 2002). Typically, activities that might contribute to more active ageing include higher employment, involvement in life-long learning, and physical and cultural activities (WHO 2002). In the more traditional European societies, such as Poland or Italy, participation in the labour market, life-long learning, or even volunteering at an older age is low, with cultural and social activities closely linked to religious participation, which remains a dominant activity for older people.
Various studies note religious involvement as an important element of healthy ageing, related to quality of life and compensating for social isolation, poor family networks, and incapacities in various fields of life (Koenig et al. 1988; Benjamins 2004; Woźniak 2012).
The analysis presented in this study examines the hypothesis of religious activity as an element of successful ageing, highlighting differences in motivations for religious involvement depending on health status.
Background
Studies of religious attendance and health underline the importance of religious participation not only due to the intrinsic, internalised, and spiritual value of religious beliefs and participation, but also due to the extrinsic values of religious life related to networking, social support, and the cultural life of the religious community (Jarvis and Northcott 1987; Sloan et al. 1999; Huguelet and Koenig 2009; Koenig et al. 2014; Krause and Hayward 2014).
Health and religious participation
The discussion on the relation between religious participation and health has continued for many years without a firm conclusion on the direction of the relationship (Hummer et al. 2004).
There is evidence of the impact of religious involvement, especially religious activities of a public character, on adult mortality risks (Levin 1994; Hummer et al. 2004; Huguelet and Koenig 2009). Some studies find a positive impact of religion on mental and physical health (Levin 1994, 2012; Siegel 2012), indicating that a greater level of religiosity is positively related to better health outcomes: lower morbidity and better psychological well-being (Levin 1994). It is argued that active involvement in religious activities might even improve longevity (Hummer et al. 1999; Hybels et al. 2012; Siegel 2012). A study of the poor older population in Connecticut (Jarvis and Northcott 1987) concludes that religiousness and attendance were positively correlated with a reduction in mortality. Other studies find that different dimensions of religious involvement have a protective effect against functional decline among the older population (Park et al. 2008; Hybels et al. 2012). The study by Park et al. (2008) concludes that attending religious services is related to lower levels of functional limitations and decreases the risk of developing limitations in the instrumental activities of daily living. A similar relation has not been found for private religious practices, such as watching and listening to religious media and prayer. The authors also state that the mechanism by which religious involvement appears to influence mortality includes aspects of social integration, social regulation, and psychological resources. Huguelet and Koenig (2009) indicate that religious practices might prevent patients from developing symptoms of depression and, if the symptoms do occur, recovery is quicker. Important explanations of the positive, bilateral relation between religious participation and health refer to the lifestyle factors, social behaviours, psychological factors, and social support that are given in a religious community (Jarvis and Northcott 1987; Levin 1994; Iannaccone 1998; Siegel 2012; Krause and Hayward 2014). Particular attention is given to health behaviours, self-perception, and social support.
Religious beliefs promote the adoption of a healthy lifestyle, imposing strict rules on the use of alcohol, tobacco, and drugs, on diet, and on sexual behaviours. In general, the values of different religions discourage risk-taking behaviours, which are important risk factors for morbidity and mortality (i.e. alcohol abuse and smoking). Benjamins and Brown (2004) argue that religion might be related not only to the avoidance of risky health behaviours, but also positively related to health awareness and preventive care use. Their study shows that, controlling for possible confounders of age and sex, physical and mental health, and socioeconomic status, religious individuals are more likely to receive flu vaccinations, cholesterol screenings, and prostate screenings (males).
The psychological effects of belief systems, rituals, and faith, stimulating the locus of control and self-perception, are also important for health status (Levin 1994). Beliefs of particular religions might encourage a peaceful state of mind or a greater sense of optimism due to a feeling of purpose in life. The psychological effects of participation in rituals might have a great impact on emotions, creating an effect that might be referred to as a 'placebo' effect. Conversely, in some cases, the belief system might produce guilt or low self-esteem (Huguelet and Koenig 2009).
An important factor in the positive bidirectional relation between religion and health is social networking and social support. Religious participation is related to reduced feelings of isolation, greater social participation, and closer family ties. For single people, involvement in religious institutions may protect against loneliness later in life by integrating older adults into larger and supportive social networks (Woźniak 2012; Rote et al. 2013). Studies by Krause (2002), Koenig et al. (2014), and Krause and Hayward (2014) point out that religious participation is related to gratefulness and greater social support, which are positively related to better self-concept, optimism, and better health in older age.
Finally, religion operates as a cushion. It mitigates the impact of stressful events, such as illness, work problems, involuntary residential changes, or hospitalisation. Huguelet and Koenig (2009), analysing the situation of older patients with neurologic symptoms, note that religion has been the most important coping factor, being a source of comfort, helping patients reframe poor conditions or loss into a positive situation, and providing a feeling of purpose or meaning.
Demographic, social, and economic correlates of religious participation
The character of religious participation has changed over the past decades, with an observable decrease in church participation accompanied by a turn to less institutionalised forms of sharing values and norms (Luckmann 2011). At the same time, participation in religious communities varies depending on traditions, religious human capital, socialisation within the family, social networks, community relations, and relations with peers, as well as by social and demographic characteristics (Cornwall 1989; Ammerman and Roof 1995).
Life course trends in participation
A life course pattern of involvement in religious activities can be identified. Bahr (1970) described four life course patterns of religious involvement: a traditional pattern of the highest involvement in childhood and older age; stable involvement throughout life with no relation between ageing and religious participation; the highest religious involvement related to family life and the religious education of children; and a decrease in religious practices in older age that accompanies withdrawal from social activities. Stable involvement in religious activities throughout life supports the continuity theory of ageing, pointing to the internal and external coherence of individual behaviours in older age and the consistency of behaviour throughout life (Atchley 1989). Many recent studies point to a 'U'-shaped pattern of involvement in religious activities, with the highest levels of participation in the early years of life and for older people, though participation for the oldest old (80 or more) tends to decrease due to mobility limitations in favour of private religious practices such as prayer (Wink and Dillon 2001; Heineck 2001; Timonen et al. 2011). Involvement in religious activities in older age might be an element of adaptation to losses in health and social networks and of selection of a meaningful activity that gives a sense of purpose in older age, providing compensation in situations of loss (Baltes and Baltes 1990; Freund and Baltes 1998). The higher religious participation of older people might also be related to compensation for perceived lower social security (Borowik 2002; Woźniak 2012), existential fears, and adaptation to insecurity while approaching the end of their lives. Participation in religious practices and communities might also be a substitute for the vanishing social networks of older people (Woźniak 2012). On the other hand, although a similar age pattern of religious involvement is observed in all countries in Europe and the US (Smith 2009), a cohort effect should be accounted for, as cohorts reaching older ages tend to be religious throughout their lives (Woźniak 2012).
Gender differences
The level of religious participation is typically higher among women. This pattern has been observed for different age groups (Iannaccone 1998; Heineck 2001; Timonen et al. 2011). Among the explanations of higher female attendance in religious activities are their involvement in the religious socialisation of children and better opportunities for allocating time to religious activities due to a lower involvement in the labour market, especially in traditional societies (Levin 1994; Heineck 2001). However, the higher labour market participation of women in contemporary societies might result in a lower involvement in religious life than in the past (Ammerman and Roof 1995). At an older age, higher religiosity is observed for the widowed (Heineck 2001). Additionally, higher attendance at religious services, accompanied by declarations of receiving comfort and strength from religion, is observed more often among women than men (Timonen et al. 2011).
Marital status and family life might also play a role in religious attendance. Ammerman and Roof (1995) show that single men are more likely to be involved in non-religious activities, while single women tend to be more often involved in religious activities. Married couples are also more inclined to participate in organised religious activities than their single counterparts. Attendance is also higher in traditional families than in non-traditional families, such as single-parent or stepparent families (Petts 2015).
Education and income
The results of research on religious participation and level of education are complex. In some studies, religiousness and attendance are found to stimulate better educational achievement, work activity, better labour market performance, higher income (Lipford and Tollison 2003), and lower involvement in deviant activities (i.e. crime, alcohol, and drugs) (Iannaccone 1998; Heineck 2001; Keister 2011). Iannaccone (1998) notes that the character of religious involvement differs depending on education level, with more orthodox religious values more often observed among the less educated and the poor. Other empirical analyses support a secularisation hypothesis that higher education decreases individual religious attendance, noting a strong negative relation between attendance and higher education across religious groups (Halicka and Halicki 2002; Pędich 2002; Woźniak 2012; Zhang 2012). Higher education and higher incomes are also constraints to religious attendance due to high opportunity costs, as time devoted to religious practices could be used for labour-related purposes (Ammerman and Roof 1995; Heineck 2001; Woźniak 2012). In the future, in less traditional societies, the negative correlation between higher levels of education and lower religious participation might also appear in later life as less religious cohorts reach older ages (Hungerman 2011; Woźniak 2012).

Participation in religious activities might be important for active and healthy ageing, being related to positive emotions, better self-perception, social networking, and support; however, involvement in activities might depend upon a variety of factors. Previous research shows that the sense of engagement, a purpose in life, generosity, and involvement are prominent predictors of healthy ageing, even more important than health status itself (Reichstadt et al. 2007). The sense of purpose and social networking might be related both to religious involvement (Koenig et al. 2014; Krause and Hayward 2014) and to health status. Religious involvement in older age under different morbidity statuses might be an element of compensation for losses in health, social networks, and lower security, and of adaptation to changing life circumstances in age-related multimorbidity. On the other hand, for religious individuals morbidity might not be an obstacle to the continuation of their religious involvement in older age, giving a meaning to life that arises from internal continuity. To test these relations, the article investigates patterns of religious involvement of people with and without morbidity to identify the resources (functional capacity, age, education, social involvement, and others) that stimulate engagement in religious activities. The definition of multimorbidity refers to the disablement process as described by Verbrugge and Jette (1994), in which multimorbidity is an expression of the chronic conditions and impairments experienced by older people, while religious involvement is an intra-individual factor that might prevent further disablement.
Methods
The analysis uses data from the Survey of Health, Ageing and Retirement in Europe (SHARE) of 2010 and 2011 (Wave 4, Release 1.1.1), a cross-national survey study covering individuals aged 50+ from 16 European countries. The analysis of religious activities covers 57,391 individuals. Individuals who did not answer the question on religious attendance or answered 'don't know' were excluded from the original SHARE sample (300 respondents, 0.52 % of the total SHARE sample). More than every fifth person in the sample suffers from multimorbidity and every tenth individual reports mobility limitations (Table 1). Dementia occurs occasionally. Females constitute over half of the sample and more frequently suffer from a higher number of morbidities. Almost half of the sample is below the age of 65, while every fifth person is over the age of 75. In this age group, multimorbidity is the most frequent. Additionally, in the 50-64 age group, almost every third person suffers from two or more morbidities. More than half of the individuals live in households consisting of two members. Approximately 80 % of the sample has a primary or secondary education, while 20 % reports higher educational attainment. More than half of the sample are retirees and 27 % are either employed or self-employed. Unemployment or receiving some type of sickness benefit is much less common. When social activities are in question, every fifth respondent is involved in sports and club activities or is providing some type of care. Voluntary work and educational activities are less common, with 16 % of individuals reporting involvement in some type of voluntary activity. 13 % of the sample reports participating in religious activities, while fewer report participating in educational activities.
Participation in religious activities
Participation in religious activities is assessed by the question of whether an individual has attended or taken part in activities of a religious organisation (church, synagogue, mosque) in the past twelve months. This question does not specify the types of activities individuals might be involved in. While it might cover various types of activities (including community meetings and voluntary activities), it is assumed that attendance at public religious services is the primary activity. The dependent variable is binary, identifying whether an individual has or has not attended religious activities.
Multimorbidity assessment
The analysis is performed for two groups based on morbidity level. Multimorbidity is assessed using an indicator based on the number of self-reported morbidities. The list of morbidities includes myocardial infarction, stroke, or cerebrovascular disease; diabetes or high blood sugar; chronic pulmonary diseases, including pneumonia, emphysema, or asthma; arthritis, including osteitis and rheumatic disease; cancer, including leukaemia and lymphoma (without minor skin cancers); gastric or duodenal ulcer; Parkinson's disease; cataracts; and hip, femoral, and other fractures.
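As an illustration of how such an indicator can be constructed, the following minimal sketch (in Python) counts self-reported conditions and flags multimorbidity. The column names are hypothetical and do not correspond to the actual SHARE variable labels, and the threshold of two or more conditions is an assumption based on the usage above, not the original code.

# A minimal sketch of the multimorbidity indicator described above.
# Column names are hypothetical; the actual SHARE variables differ.
import pandas as pd

CONDITIONS = [
    "heart_attack", "stroke", "diabetes", "chronic_lung_disease",
    "arthritis", "cancer", "ulcer", "parkinson", "cataracts", "fracture",
]

def add_multimorbidity_flag(df: pd.DataFrame) -> pd.DataFrame:
    # Each condition column is assumed to be coded 0/1 (self-reported).
    df = df.copy()
    df["n_morbidities"] = df[CONDITIONS].sum(axis=1)
    # Multimorbidity: two or more self-reported chronic conditions (assumed cutoff).
    df["multimorbidity"] = (df["n_morbidities"] >= 2).astype(int)
    return df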
Predictors of religious activity
Potential predictors of religious activity include health status, demography, human capital, labour market position, income, and social participation. Models controlling for functional health status and excluding these types of predictors are presented and compared. This is to control for the fact that health status might not only be on the pathway between morbidity and disability (Verbrugge and Jette 1994), but also might be an important determinant of religious participation, especially in the case of poor functional abilities (Sloan et al. 1999).
Functional abilities and mental health are assessed by the ability to perform basic activities of daily living, a depression scale, and the occurrence of dementia. To assess mobility limitations, a binary variable of reporting at least one limitation in activities of daily living has been created. The mobility items specified in the survey include walking 100 m; sitting for 2 h; getting up from a chair after sitting for a long period; climbing several flights of stairs without resting; climbing one flight of stairs without resting; stooping, kneeling, or crouching; reaching or extending arms above shoulder level; pulling or pushing large objects, such as a living room chair; lifting or carrying weights over 10 pounds (5 kg), such as a heavy bag of groceries; and picking up a small coin from the table (Jagger et al. 2011). Mental health is assessed using the Euro-D scale (Prince et al. 1999), assigning poor mental health if the number of symptoms is greater than three. A separate binary variable is created for the occurrence of cognitive dysfunctions, such as dementia, e.g. Alzheimer's disease. The presence of cognitive illnesses was assessed in the survey through self-reporting.
Basic demographic factors include age, sex, marital status, and household size. Three age groups have been differentiated: 50-64, 65-74, and 75+. Households have been categorised into three groups depending on size: single, two members, and three or more members. Marital status has been categorised into single, married, divorced, and widowed. Human capital is measured by level of education. An original SHARE variable corresponding to the ISCED-97 scale was simplified into the three categories of primary, secondary, and tertiary education.
Socio-economic status is measured by labour market position and income level, with the latter calculated separately for each country.
Social participation is assessed by a set of dichotomous variables on participation in volunteering, educational activities, clubs and sports, as well as the provision of regular but informal care of any type (to a spouse, children, or others) in the previous year.
The main part of the analysis consists of multivariate country-pooled logistic regression models identifying the predictors of religious participation by morbidity level, which is a grouping variable. Coefficients indicating the strength of the relation in a logistic model should not be directly compared between models (Allison 1999); thus, the average marginal effects of each model are presented and discussed. Models have been calculated separately for individuals with and without multimorbidity to present differences in the set of predictors of religious activity depending on morbidity status. Next, a control variable of mobility limitations has been introduced, and the models have been compared again to assess whether mobility might be a significant factor explaining religious activity depending on morbidity and whether it affects the other relations. Presenting models separately for each morbidity status allows for a simple and clear understanding of the possible set of relations for individuals with or without multimorbidity.
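A minimal sketch of this estimation strategy follows, using the data frame from the previous sketch; the variable names are hypothetical and the specification is much reduced compared to the full predictor set described above.

# Separate country-pooled logit models by morbidity status, reported as
# average marginal effects, since raw logit coefficients are not directly
# comparable across models (Allison 1999). Variable names are hypothetical.
import statsmodels.formula.api as smf

formula = (
    "religious_activity ~ female + C(age_group) + C(education)"
    " + volunteering + informal_care + C(country)"
)

for label, group in df.groupby("multimorbidity"):
    result = smf.logit(formula, data=group).fit(disp=0)
    margeff = result.get_margeff(at="overall")  # average marginal effects
    print(f"multimorbidity = {label}")
    print(margeff.summary())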
Results
The participation in religious activities among the older population in the case of multimorbidity is slightly higher than the participation of those without multimorbidity (Table 1). Participation is more frequent among females with multimorbidity when compared to healthy females; among individuals below retirement age with multimorbidity when compared to 'younger' older people without morbidities; among the better educated in poor health when compared to their healthier counterparts; and among employed or self-employed and wealthier individuals with multimorbidity when compared to individuals without illnesses (Table 2).
Among the main predictors of religious participation of older people are sex, age, functional ability, and active participation in social life (Table 3).
Females are more likely to participate in religious activities than males, and the result is significant in both morbidity groups. The likelihood of religious involvement increases with age for both morbidity groups; the significance of the effect is smaller in the case of no multimorbidity. In both groups of older people, with and without multimorbidity, dementia negatively affected religious participation, while the role of mental health was not significant. In the group with no multimorbidity, religious practices are negatively correlated with divorce; in the group with multimorbidity, this relation is not significant.
Living in a two-person household decreases the probability of involvement in religious activities, but only among individuals with multimorbidity. In the group with no multimorbidity, household composition is an insignificant determinant of religious participation.
Regardless of the level of morbidity, secondary education decreases the likelihood of religious practices when compared to primary education. This relation is not observed for individuals with higher educational attainment. The labour market status of older individuals is, in most cases, an insignificant predictor of religious involvement, regardless of morbidity level. The only exception is the category of 'other', which includes the labour-market inactive who are often involved in family care or household work. The probability of religious involvement of this group is higher than that of the retired, but only in the case of healthier individuals. Similarly, income level is insignificant for participation in religious practices at an older age, with the only exception being a decrease in the probability of religious participation for those with the highest incomes and no multimorbidity.
Social participation is a significant predictor of religious participation. Involvement in sports, clubs, educational activity, and, especially, volunteering is significantly and positively related to participation in religious activities regardless of morbidity level. In the case of the provision of informal care, the positive relation is less significant for the multimorbidity group than for healthier people; however, it is still an important predictor of religious participation.
Including the mobility items (Table 4) shows that mobility limitations are an important constraint on religious participation, decreasing the probability of religious attendance, while the other relations depicted earlier remain similar. This is observed in people with and without multimorbidity.
Discussion
Analysing religious participation based on multimorbidity is not easy given the complexity of the relation between religious participation and health (Sloan et al. 1999) as well as the multidimensionality of religiosity (Jarvis and Northcott 1987; Sloan et al. 1999) and the complexity of religious participation. On the one hand, participation could be driven by deep faith, but also by other needs, including those driven by age and feelings of frailty related to poor health, or by the need for cultural or social participation when the church might be one of the few locally available institutions fulfilling the need (Kędziora 2013). The analysis of the predictors of religious participation of older people confirms the previous findings of typically higher activity among women (Iannaccone 1998; Heineck 2001; Timonen et al. 2011), with possible explanations including a greater involvement in religion throughout life related to the socialisation of children or grandchildren and better opportunities to allocate time for religious activities due to a decreased involvement in the labour market, especially in more traditional religious societies (Levin 1994; Heineck 2001). This effect could also be attributable to the adaptation and compensation effect, as women not only have a higher attendance at religious services, but also declare finding comfort and strength in religion more often than men (Timonen et al. 2011). The observed higher religiosity of widowed women of an older age is often explained by the effect of finding comfort and consolation in religious practices and beliefs in times of loss. This study, however, does not support this hypothesis.
The research confirms the importance of age for involvement in religious practices (Heineck 2001; Smith 2009; Timonen et al. 2011; Kędziora 2013). The increased participation of the oldest cohorts might be related to various factors, including compensating for decreased involvement in family life or social networks and more available time, as well as the cohort effect of older individuals being more traditional and oriented towards religion. The lower significance of age among people with multimorbidity might imply lower capabilities of involvement in religious activities due to health limitations and point to the possible discontinuation of public religious practices in favour of the continuity of private practices for religious individuals (Sloan et al. 1999). The analysis indicates differences in the likelihood of religious participation depending on marital status and multimorbidity. The lower probability of participation among healthier divorced people can be explained by their exclusion from the religious community in most religious denominations. This effect is not visible in the case of poorer health, when adaptation and compensation motivations might be of greater importance than the rules of religious exclusion. Other types of marital status were insignificant, despite the results of other research pointing to stronger support and social networks for married couples in religious communities (Wilcox and Wolfinger 2008; Petts 2015). At the same time, people living in a two-person household, most likely couples, are found in the case of multimorbidity to be less motivated to participate in religious activities than singles. This might imply that, in the case of poor health, they find support at home from their spouse and are less likely to search for psychological support outside, in church.
The relations between educational attainment and religious participation partly confirm the findings of Hungerman (2011), where higher educational attainment was negatively related to religious participation later in life. Here, against primary education, the relation is significant only for secondary education. For the higher educated, no motivation for religious activity is found even in the case of higher morbidity. The poor involvement in religious activities of the working population has been found in other studies (Heineck 2001), with an explanation of higher opportunity costs of such involvement. Religious participation also conflicts with employment, especially for professional groups and those with higher incomes, due to the scarcity of time available for religious practices. This concept has been only partially confirmed in this study, where higher incomes are found to decrease the likelihood of religious participation for the group of older people without multimorbidity. Only the labour-market inactive who are involved in home activities show a positive relation with religious participation. This could occur for individuals living in more traditional families or in communities that offer more opportunities to allocate time for religious activities.
The analysis confirms that participation in social activities is positively related to religious activity. Previous studies show a positive relation between volunteering and religious attendance (Smith 1994), though the effect differs between denominations, an element not tackled by the current analysis. In addition, members of a religious community are more willing to be involved in the voluntary activities supported by their churches (Wilson and Janoski 1995). Involvement in clubs or educational activities might also be stimulated by religious groups.
Finally, the results point to the health constraints of religious attendance. Sloan et al. (1999) underline that functional disabilities might be a significant constraint to public religious participation and the results confirm this relation. This study also adds dementia and other cognitive disorders to the list of constraints of religious participation.
This study adds to the existing literature by comparing behaviours in the two morbidity groups. While the level of religious participation for those with morbidities is only slightly higher than that for those in good health, there are subtle differences in predictors between the morbidity groups. They point to lower obstacles to religious participation among individuals with higher morbidity, with age being a less important predictor of participation and family situation and family decomposition (divorce) being insignificant. These results might be related to the greater need for comfort and consolation of individuals with poor health, who face higher insecurity due to their condition. Such results might support the selection, optimisation, and compensation theory, pointing to religious involvement as one of the possible adaptation mechanisms in the less secure situation of poor health. On the other hand, even in the case of multimorbidity, religious attendance is positively correlated with other types of social participation, which might indicate that health deterioration is not an obstacle to religious practices among more active individuals, or that religious organisations are often the providers of cultural or educational activities to people in poor health, which enables their participation.
SHARE is a unique database providing evidence on, among other types of activity, the religious participation of the older population, and it allows for comparison with information on health status, morbidity, and functional abilities, as well as information on the social and economic status of individuals. However, the definition of religious activity in the SHARE questionnaire is blurred, with unknown types of religious activities and a reference to a broad timeframe of the year preceding the survey. As a result, it might not only cover regular attendance at religious practices, but might include more occasional religious attendance or participation in activities organised by churches but not related to religious practices, such as volunteering and participation in church clubs or educational meetings. Another drawback of the survey question is that it does not differentiate between denominations, as the level of involvement in public religious activities might differ between denominations, i.e. being higher for Catholics and Muslims and lower for Anglo-Saxon Protestants (Heineck 2001). This is a field for further study. The imprecise definition of religious participation might result in slightly different participation statistics across countries than in other surveys. For example, compared to other European research (Smith 2009; Eurobarometer 2010), the SHARE statistics indicate a higher frequency of religious participation in the Netherlands and a lower frequency of participation in Italy and Portugal. An analysis of the religious involvement of older people across countries would be an interesting field for further research. Acknowledging differences in the level of participation depending on morbidity, there is also space for studying relations with the severity of diseases.
Conclusion
The analysis shows that, alongside other activities, older Europeans participate in religious activities, which could be related to a need for comfort or to cultural or social needs. Religious participation could be driven by spiritual and psychological reasons, but also by the cultural and institutional offers of religious organisations. The occurrence of multimorbidity differentiates religious participation. This should be interpreted bearing in mind that religious activities are often oriented towards those sick and in pain and account for mobility issues, including individuals unable to leave their home or care facility. The wide range of activities of churches and religious organisations is specifically aimed at older and ill people, and religious services are typically provided to older persons at the community level, which increases their accessibility and might be combined with educational or cultural activities.
The Phase Transformations Induced by High-Pressure Torsion in Ti–Nb-Based Alloys
Abstract The study of the fundamentals of the α → ω and β → ω phase transformations induced by high-pressure torsion (HPT) in Ti–Nb-based alloys is presented in the current work. Prior to HPT, three alloys with 5, 10, and 20 wt% of Nb were annealed in the temperature range of 700–540°C in order to obtain the (α + β)-phase state with different amounts of the β-phase. The samples were annealed for a long time in order to reach the equilibrium Nb content in the α-solid solution. Scanning electron microscopy (SEM), transmission electron microscopy (TEM), and X-ray diffraction (XRD) techniques were used for the characterization of the microstructure evolution and phase transformations. HPT results in a strong grain refinement of the microstructure, a partial transformation of the α-phase into the ω-phase, and a complete β → ω phase transformation. Two kinds of the ω-phase with different chemical compositions were observed after HPT. The first one was formed from the β-phase, enriched in Nb, and the second one from the Nb-lean α-phase. It was found that the α → ω phase transformation depends on the Nb content in the initial α-Ti phase. The lower the Nb content in the α-phase, the larger the amount of the α-phase transformed into the ω-phase.
Introduction
Ti-Nb-based alloys are very promising candidates for biomedical applications, since they can show excellent corrosion resistance with a low elastic modulus close to that of human bone and good ductility, allowing the production of precise and versatile geometries of medical devices (Bönisch et al., 2013; Panigrahi et al., 2015). However, Ti-Nb-based alloys are expected to have lower strength than conventional biomaterials (Geetha et al., 2009). It is well known that mechanical properties can be significantly improved by the application of severe plastic deformation (SPD) as a result of an increased density of crystal lattice defects and strong grain refinement down to the ultrafine or even nanometer scale (Valiev et al., 2006). In this context, SPD methods, including equal channel angular pressing and high-pressure torsion (HPT), have been successfully applied to strengthen ternary and quaternary β-type Ti-Nb-based alloys (Matsumoto et al., 2005; Lee et al., 2013). Another issue is that SPD can, under certain conditions, cause phase transformations (Straumal et al., 2014). Phase transformations driven by SPD are especially effective in materials with allotropic modifications. Titanium possesses three allotropic modifications: the low-temperature α-Ti with a hexagonal close-packed crystal structure (space group P63/mmc), the high-temperature β-Ti with a body-centered cubic structure (space group Im-3m), and the high-pressure ω-Ti with a hexagonal structure (space group P6/mmm) (Murray, 1981). It was found that, in Ti-based alloys, the high-pressure ω-phase forms more easily from the β-phase during SPD and also from the α-phase at a hydrostatic pressure between 2 and 12 GPa, depending on the experimental technique, pressure environment, and alloying additions (Ivanisenko et al., 2008). The α → ω and β → ω phase transformations induced by SPD are typical martensitic (diffusionless) transformations. Shear stress and the alloying of Ti with β-stabilizing elements (such as Co, Ni, Fe, Nb, or Mo) facilitate the formation of the ω-phase. Recently, much attention has been paid to the formation of the high-pressure ω-phase from the α-phase of commercial purity titanium (Shirooyeh et al., 2014; Zhilyaev et al., 2014) or from the β-phase or α′ martensite of Ti-Fe-based alloys during HPT (Kilmametov et al., 2017; Kriegel et al., 2018). However, in the case of the Ti-Nb-based alloys, detailed knowledge about the fundamentals of the α → ω phase transformation under the influence of SPD and the thermal stability of the ω-phase is still lacking. Therefore, the main goal of this work is to study the effect of the addition of an alloying component (Nb) on the mechanisms of the α → ω and β → ω phase transformations caused by HPT in Ti-Nb-based alloys with 5, 10, and 20 wt% of Nb, as well as to study the thermal stability of the ω-phase.
Material and Methods
Pure titanium (99.98%) and niobium (99.99%) were used for the preparation of the Ti-Nb-based alloys with 5, 10, and 20 wt% of Nb. The alloys were melted in an induction furnace in a pure argon atmosphere. The obtained cylindrical ingots of the alloys, each with a diameter of 10 mm, were cut by spark erosion into disks with a thickness of 0.7 mm. The samples of the examined alloys were sealed in quartz ampoules and annealed at 700, 670, and 520°C for 168 h for the alloys with 5, 10, and 20 wt% of Nb, respectively, in order to obtain the (α + β)-phase state with different amounts of the β-phase. The samples were annealed for a very long time in order to reach the equilibrium niobium content in the α-solid solution. After annealing, the samples, together with their ampoules, were quenched in water. The annealed samples were subjected to HPT at room temperature under a pressure of 7 GPa for five full rotations with a deformation rate of 1 rpm in a Bridgman anvil-type unit, using a custom-built computer-controlled device manufactured by W. Klement GmbH, Lang, Austria. X-ray diffraction (XRD) studies were carried out using a Siemens D-500 X-ray diffractometer with Cu Kα radiation. Transmission electron microscopy (TEM) investigations were carried out using a TECNAI G2 FEG super TWIN (200 kV) (FEI, Hillsborough, OR, USA) with an energy dispersive X-ray spectrometer (EDS) manufactured by EDAX (AMETEK, Inc., Berwyn, PA, USA). The thin foils for TEM observation were prepared by a twin-jet polishing technique using a D2 electrolyte manufactured by the Struers company (Cleveland, OH, USA). The focused ion beam (FIB) technique was applied by means of an FEI Quanta 3D microscope (30 kV) (FEI, Hillsborough, OR, USA) for the preparation of thin foils of the deformed material, in order to capture the interface between the second phase and the α-matrix. Spot diffraction was analyzed with the TIA software for the Tecnai microscope. First, the d_hkl distances of the selected reflections and the angles between them were measured. Phase identification was made with CARINE V3 software. Prior inspection of the initial material was also carried out using an FEI E-SEM XL30 scanning electron microscope (SEM) (FEI, Hillsborough, OR, USA) equipped with an EDAX Genesis EDS spectrometer. The SEM images were taken using the backscattered electron signal (BSE mode) in order to obtain composition contrast between different phases. All samples for microstructural studies were cut at a distance of half the radius of the deformed samples. The in situ XRD studies were carried out using a Panalytical Empyrean diffractometer (Malvern Panalytical, Malvern, UK) (Cu Kα radiation) equipped with an Anton Paar HTK 1200 high-temperature chamber. The bulk samples were placed on an Al2O3 sample holder and introduced into the chamber, which was subsequently evacuated, then flushed and filled with high-purity Ar gas. Samples were heated at a rate of 5°C/min, and diffraction patterns were collected in the 40-940°C temperature range with a step size of 20°C. The 2θ range was chosen between 30 and 80° with a step size of 0.033°. The acquisition time per single pattern was 25 min, preceded by 10 min of temperature stabilization. The collected data were refined using the Rietveld-type FullProf software (Wojdyr, 2010).
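For orientation, the quoted acquisition parameters imply roughly the following duration for a single in situ run; this back-of-the-envelope estimate is ours and not part of the original protocol.

# Rough duration of the in situ XRD run described above (our estimate).
t_min, t_max, step = 40, 940, 20           # temperature range and step, degrees C
ramp_rate = 5.0                             # heating rate, degrees C per min
t_acq, t_stab = 25.0, 10.0                  # per-pattern acquisition and stabilization, min

n_patterns = (t_max - t_min) // step + 1    # 46 patterns
ramp_time = (t_max - t_min) / ramp_rate     # total time spent ramping, min
total = n_patterns * (t_acq + t_stab) + ramp_time
print(f"{n_patterns} patterns, ~{total / 60:.1f} h in total")  # ~29.8 h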
Results and Discussion
The microstructure observations of the Ti-5 wt%Nb, Ti-10 wt%Nb, and Ti-20 wt%Nb alloys after annealing by SEM (Figs. 1a-1c) showed the presence of β-phase lamellas uniformly distributed in the α-matrix. The lamellas of the β-phase are enriched in niobium and show a bright contrast in comparison to the dark α-matrix on the micrographs taken in BSE mode. The study of the selected area electron diffraction (SAED) patterns obtained by TEM also allowed the identification of the second-phase precipitates as the β-phase surrounded by the α-matrix (Figs. 1d-1f). The measurement of the chemical composition showed that the niobium content reaches 4.2, 6.5, and 10.0 wt% in the α-matrix and 14.7, 29.2, and 48.0 wt% in the β-phase for the Ti-5 wt%Nb, Ti-10 wt%Nb, and Ti-20 wt%Nb alloys, respectively. In other words, an increase in the niobium content of the alloys leads to increased niobium content in both the α- and β-phases.
It seems that the more niobium in the alloys, the larger the volume fraction of the β-phase and the finer the grain size of the microstructure. The refinement of the microstructure can also be associated with a decrease in the annealing temperature. The volume fraction of the β-phase was calculated based on XRD analysis and reaches about 12, 28, and 42% for the Ti-5 wt%Nb, Ti-10 wt%Nb, and Ti-20 wt%Nb alloys, respectively. The XRD curves (Fig. 2) of the samples in the initial state confirmed the presence of the α- and β-phases. After HPT, all peaks in the XRD patterns were broadened and their intensity decreased, which is usual for the strong grain refinement caused by SPD. Moreover, the ω-Ti phase appeared in all samples, a certain amount of the α-Ti phase remained, and the peaks from the β-phase completely disappeared. It was found earlier that even a 0.1 rotation of HPT deformation of the Ti-4 wt% Fe alloy in the β-state led to the formation of 90% of the ω-phase. This confirms that the high-pressure ω-phase forms more easily from the β-phase during HPT. The crystallographic mechanism of the α → ω phase transformation implies shear deformation along the (00.1)α planes (Trinkle et al., 2003). The XRD patterns show that the strong (00.2) peak of the α-phase completely disappeared, and the high-intensity (11.0 + 10.1) doublet of the ω-phase appeared after HPT in the sample pre-annealed at 400°C. This confirms the α → ω phase transformation mechanism observed earlier in pure Ti under HPT conditions (Ivanisenko et al., 2008).

After the HPT process, the microstructure refinement and deformation of the bright phase are clearly visible on the micrographs obtained by SEM (Figs. 3a-3c). If the β-phase completely disappears, then to which phase do the bright precipitates belong? The TEM study of the microstructure revealed the answer to this question. Figures 3d-3f present, as an example, the microstructure of the thin foil (obtained by a twin-jet polishing technique) of the Ti-5 wt%Nb alloy after HPT. It can be seen that there is a strong grain refinement of the microstructure down to nanometer sizes. The SAED pattern showed many reflections from the ω-phase and some reflections from the α-phase. The TEM observations of the microstructure showed the presence of small grains of rounded shape and large grains of irregular shape with a distinctive streaky contrast. The morphology of the large grains is similar to that of the ω-phase (Shurygina et al., 2018), while the small grains probably belong to the α-phase. The microstructure of the HPT-deformed Ti-10 wt%Nb and Ti-20 wt%Nb alloys is similar to that described above.
The preparation of thin foils using the FIB method was performed in order to find out what happened to the β-phase, which remained visible in the micrographs after the HPT process. The foil was cut at the interface between some bright-contrast particles and the deformed matrix. It should be noted that the observed microstructure, in this case, corresponds to the cross-section of the deformed sample, which is different from the microstructure in Figures 3d-3f taken from the surface of the sample. The bright-field (BF) image (Fig. 4a) showed the presence of large elongated precipitates with a morphology similar to the β-phase in the initial state (Fig. 1d). The SAED pattern (Fig. 4b) taken from one of them, marked as circle 1 on the BF image, showed that it belongs to an ω-phase grain. Moreover, the high-angle annular dark-field observation of the microstructure (Fig. 4d) and the mapping of Nb and Ti elements (Figs. 4e, 4f) showed that these large grains are enriched in Nb up to 14.8 wt%. It should be noted that this chemical composition is close to that of the β-phase in the initial state (14.7 wt% Nb). Since no reflections of the β-phase were observed after HPT and the β → ω transformation is a martensitic process proceeding without a change of composition, it can be concluded that the β-phase completely transformed into the ω-phase. The SAED pattern (Fig. 4c) taken from the deformed matrix, marked as circle 2 on the BF image, showed the presence of nanocrystalline grains of the α- and ω-phases. Therefore, two kinds of ω-phase are observed. The first comes from the β-phase enriched in niobium, and the second one from the Nb-lean α-phase.

Fig. 2. XRD patterns of the Ti-5 wt%Nb (curve 1), Ti-10 wt%Nb (curve 2), and Ti-20 wt%Nb (curve 3) alloys before (lower curves) and after (upper curves) HPT deformation; the XRD patterns of the Ti-10 wt%Nb alloy are shown at an enlarged scale in the right corner of the figure.
The amount of the ω-phase transformed from the β-phase corresponds to the amount of the β-phase in the initial state, that is, 12, 28, and 42% for the alloys with 5, 10, and 20 wt% Nb, respectively. However, according to the XRD analysis, the total amount of the ω-phase for the examined samples reached about 71, 76, and 86%, respectively (Table 1). This means that the remaining amount of the ω-phase appeared from the partial transformation of the α-phase. Moreover, it turned out that the smaller the amount of Nb in the α-phase, the greater the amount of the α-phase transformed into the ω-phase (Table 1). Similar results were also obtained in a Ti-4 wt%Co alloy subjected to HPT under the same conditions (Korneva et al., 2021). HPT of the Ti-4 wt%Co alloy resulted in a strong grain refinement of the microstructure and a partial α → ω phase transformation. It was found that the HPT-induced α → ω phase transformation depends on the cobalt content in the initial α-phase and on the morphology of the microstructure. A lower cobalt content and a smaller grain size of the α-phase lead to a higher amount of the ω-phase induced by HPT (Korneva et al., 2021). Synchrotron X-ray analysis was performed along the radius of the HPT-deformed samples. Figure 5a shows the XRD pattern measured for the Ti-5 wt%Nb alloy as an example. Based on the XRD patterns of the examined alloys, the amount of the ω-phase was calculated as a function of the shear stress (Fig. 5b). The higher the shear stress, the larger the amount of the deformation-induced ω-phase.
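Since the β → ω transformation is complete and diffusionless, the ω-phase inherited from β equals the initial β fraction, and the remainder must originate from α. A small sketch of this bookkeeping, using the volume fractions quoted above:

# Bookkeeping of omega-phase origin (volume % of the whole sample),
# using the fractions quoted in the text.
alloys = ["Ti-5Nb", "Ti-10Nb", "Ti-20Nb"]
beta_initial = [12, 28, 42]   # initial beta fraction = omega inherited from beta
omega_total = [71, 76, 86]    # total omega after HPT (from XRD)

for name, b, w in zip(alloys, beta_initial, omega_total):
    from_alpha = w - b        # omega formed from the alpha phase
    print(f"{name}: omega from beta = {b}%, omega from alpha = {from_alpha}%")
# Prints 59%, 48%, and 44% from alpha, i.e. the largest alpha-to-omega
# conversion occurs in the alloy with the lowest Nb content in alpha.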
The thermal stability of the metastable ω-phase was studied by in situ high-temperature XRD measurements. An in situ XRD map of the Ti-5 wt%Nb alloy before and after HPT is presented in Figure 6 as an example. Heating of the initial state above 600°C resulted in a slight shift of all observed peaks toward lower diffraction angles. This shift can be associated with the increase of the lattice parameters due to thermal expansion. After deformation, the (11.0) + (10.1) peak of the ω-phase as well as the (10.0) and (10.1) peaks of the α-phase are clearly visible in the 33-42° range of 2θ angles on the standard XRD curves in Figure 2. However, only the (11.0) + (10.1) peak of the ω-phase can be distinguished in the in situ XRD map before heating. The absence of the α-phase peaks in this range of 2θ angles in the in situ XRD map is related to the different duration of the XRD measurements: in the in situ method, there is not enough time to register as many X-ray counts as in the standard method. Analysis of the in situ XRD map showed that heating of the deformed samples up to 350°C resulted in the complete disappearance of the ω-phase and the appearance of the (10.0), (00.2), and (10.1) α-phase peaks. Therefore, the decomposition of the ω-phase into the α-phase is observed. Since the ω-phase is enriched in niobium as an alloying element (by analogy with the ω-phase enriched in iron in Ti-Fe alloys (Ivanisenko et al., 2008; Kilmametov et al., 2017)), it is assumed that the α-phase arising after the decomposition of the ω-phase is also enriched in niobium. Further heating to the highest temperatures resulted in a slight shift of the α-phase peaks toward lower diffraction angles. This shift can also be associated with the increase of the lattice parameters due to thermal expansion and with the appearance of a new α-phase with lower niobium content. The same situation was observed in the case of the Ti-4 wt%Co alloy subjected to the HPT process (Korneva et al., 2021). It should be noted that the HPT-induced ω-phase volume fraction in pure Ti (under the same conditions) reached only approximately 40%, and the process of the reverse ω → α transformation is finished at 180°C for heating at a rate of 10°C/min. Therefore, alloying with niobium results in a twofold increase of the ω-phase volume fraction and an increase in its thermal stability up to 350°C. The thermal stability of the cobalt-doped ω-phase, observed in the HPT-deformed Ti-4 wt%Co alloy, is reached at around 450°C (Korneva et al., 2021). In the case of the Ti-Fe-based alloys, the HPT-induced ω-phase completely decomposes slightly above 600°C. In other words, the thermal stability of the Nb-doped ω-phase is higher than that of pure Ti and lower than that of the Co- or Fe-doped ones.
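The link between the observed peak shifts and lattice expansion follows from Bragg's law; a short illustration, assuming Cu Kα radiation with λ ≈ 1.5406 Å (the exact wavelength value is not stated in the text):

import math

WAVELENGTH = 1.5406  # Cu K-alpha wavelength in angstroms (assumed value)

def d_spacing(two_theta_deg: float) -> float:
    # Bragg's law: lambda = 2 d sin(theta)
    theta = math.radians(two_theta_deg / 2)
    return WAVELENGTH / (2 * math.sin(theta))

# A peak shifting to a lower diffraction angle implies a larger d-spacing,
# i.e. thermal expansion of the lattice:
print(d_spacing(40.0))  # ~2.25 A
print(d_spacing(39.5))  # ~2.28 A, larger d at the lower angle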
Conclusions
HPT of the Ti-Nb alloys resulted in a strong grain refinement of the microstructure, a partial transformation of the α-phase into the ω-phase, and a complete β → ω phase transformation.
Two kinds of the ω-phase with different chemical compositions were observed after HPT. The first one was formed from the Nb-enriched β-phase, and the second one from the α-phase. The α → ω phase transformation depends on the Nb content in the initial α-Ti phase. The lower the Nb content in the α-phase, the greater the amount of the α-phase transformed into the ω-phase.
The reverse ω → α transformation takes place at about 350°C. The thermal stability of the ω-phase is thus higher than in pure Ti (180°C) and lower than in Ti-Co (450°C) and Ti-Fe-based alloys (600°C) subjected to HPT.
Transient QED effects in absorbing dielectrics
The spontaneous emission rate of a radiating atom reaches its time-independent equilibrium value after an initial transient regime. In this paper we consider the associated relaxation effects of the spontaneous decay rate of atoms in dispersive and absorbing dielectric media for atomic transition frequencies near material resonances. A quantum mechanical description of such media is furnished by a damped-polariton model, in which absorption is taken into account through coupling to a bath. We show how all field and matter operators in this theory can be expressed in terms of the bath operators at an initial time. The consistency of these solutions for the field and matter operators are found to depend on the validity of certain velocity sum rules. The transient effects in the spontaneous decay rate are studied with the help of several specific models for the dielectric constant, which are shown to follow from the general theory by adopting particular forms of the bath coupling constant.
I. INTRODUCTION
The rate and the spectral and spatial characteristics of the spontaneous decay of an atom depend on the properties of the atom and of the radiation field, and on the interaction between them. The radiation field is changed by the presence of other matter [1]. One can try to manipulate the emission properties once the influence of this medium is understood.
In quantum optics of linear dielectrics, one tries to describe the material medium in an effective way with the help of the classical dielectric function ε(r, ω), which in general is a complex function of both position and frequency and in this full generality describes the propagation and loss of light at each point in the dielectric. Sometimes it is possible to neglect the spatial variations (including local field effects), dispersion and losses altogether. The spontaneous emission rate of an atom in such a simple dielectric is the refractive index n of the medium times the rate Γ0 in vacuum [2] - [4].
The situation becomes more complicated when material dispersion has to be taken into account [5] - [11]. Since the Kramers-Kronig relations tell us that dispersion and loss always come together (though not always at the same frequencies), one would like to include losses as well, in order to describe all frequencies within one theory. The damped-polariton model [12] - [15] provides us with such a microscopic theory. From that theory it was shown that the radiative spontaneous emission rate equals Γ0 times the real part of the refractive index at the transition frequency [16].
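To make this statement concrete, the following sketch evaluates Γ/Γ0 = Re n(ω) for an assumed single-resonance (Lorentz) dielectric of the kind discussed in section V; the functional form and all parameter values are illustrative and are not taken from the paper.

import numpy as np

def lorentz_eps(w, w0=1.0, wp=0.5, gamma=0.05):
    # Illustrative single-resonance dielectric function (parameters arbitrary).
    return 1.0 + wp**2 / (w0**2 - w**2 - 1j * gamma * w)

w = np.linspace(0.5, 1.5, 1001)
n = np.sqrt(lorentz_eps(w))   # principal branch: Re(n) >= 0 when Im(eps) > 0
rate_enhancement = n.real      # radiative rate relative to vacuum, Gamma / Gamma_0

# Near the resonance Re(n) varies strongly, so the equilibrium emission rate
# of an atom is strongly frequency dependent there.
print(rate_enhancement[w.searchsorted(0.95)], rate_enhancement[w.searchsorted(1.2)])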
The quantum mechanical treatment of dissipative systems is more complicated than the classical one, because of the extra requirement that equal-time commutation relations do not change over time [17,18]. Based on the damped-polariton model and on the fluctuation-dissipation theorem, phenomenological quantization theories were constructed that meet these requirements. In these theories, the dielectric function is an input function and the Maxwell field operators satisfy quantum Langevin equations with both loss and quantum noise terms [19,20]. With the use of a Green-function approach, the phenomenological quantization theories have been generalized to inhomogeneous dielectrics, first for multilayer systems and later for general ε(r, ω) [21] - [22]. Field commutation relations turn out only to depend on the analytical properties of the Green function. However, the calculation of spontaneous emission inside such a medium would involve the actual computation of the Green function, which for general ε(r, ω) is not easy.
A special case of the former theories is the quantum optical description of inhomogeneous systems at frequencies where both dispersion and losses can be neglected. Then a description in terms of modes is possible, where the mode functions are harmonic solutions of the classical wave equation featuring a position-dependent dielectric "constant" ε(r) [23]. This encompasses the now theoretically and experimentally very active research area of the so-called photonic crystals [24], where a periodic modulation of the refractive index at the scale of the wavelength of light can drastically modify the mode structure compared to vacuum. By increasing the refractive-index contrast, even a photonic bandgap can open up, giving rise to a frequency interval for which waves cannot travel in the crystal in any direction, so that spontaneous emission would be inhibited completely. Until now, such a bandgap has not been found conclusively in the optical regime [25]. It has been proposed to look for frequencies close to material resonances, where refractive indices can be quite substantially higher or lower than 1 [26].
Interesting new effects have been predicted for bandgap systems, such as photon-atom bound states and non-exponential spontaneous decay at the edges of the gap [27]. A current debate is whether the Weisskopf-Wigner approximation can be used in the calculation of spontaneous emission near an edge of a photonic bandgap. This question seems to depend strongly on the analytic or singular behavior of the density of states at the edges of the gap, which has recently been calculated for face-centered cubic and diamond-like crystal structures [28]. If near the edge of the bandgap a large part of the modes has a cavity-like structure, producing nonzero dwell-times near the emitting atom, then an emitted photon has a nonzero probability of being reabsorbed, which would give Rabi-like oscillations of the atomic population that are missed in the Weisskopf-Wigner approximation.
Non-exponential decay can also be caused by the interference of possible decay-channels: for short times after the excitation of the atom, a larger frequency interval of the medium states plays a part in the decay process than for later times. Ultimately, only the refractive index at the atomic transition frequency plays a role, all in concordance with the energy-time uncertainty relation. This interference process already happens for spontaneous emission in vacuum. However, when the medium has a strong jump in the density of states around the atomic transition frequency, the interference effect will change substantially.
To separate the latter cause of non-exponential decay from the former, it is interesting to consider the spontaneous emission inside homogeneous lossy dielectrics with strong and narrow material resonances, where the density of states can also change very rapidly. Here all states correspond to simple plane wave modes, so that real reabsorption processes do not play a role. In this article, we use the damped-polariton model formulated by Huttner and Barnett [13,14] to study the interference effects of spontaneous emission. If absorption is neglected in the damped-polariton model, then we are left with the Hopfield model of a dielectric [8,29], which has a frequency bandgap inside which the refractive index is purely imaginary. The analogy between this polariton band gap system and photonic crystals was drawn in [30].
The organization of the paper is as follows: in section II we introduce the theory and solve its equations of motion using Laplace transformations. In section III we show that the consistency of our solutions depends on the validity of a number of velocity sum rules, which are then proved. In section IV, we find that for long times all field operators can be expressed in terms of the initial bath operators, and we give an interpretation of the result. We also show how to relate the result to phenomenological quantization theories. Before we can discuss transient effects of spontaneous emission in section VI, we discuss in section V the Lorentz oscillator model and the point scattering model. We show how both these models can be found from the damped-polariton theory by choosing a suitable coupling to the bath. The paper ends with a discussion of the results and with conclusions in section VII.
II. THE MODEL AND SOLUTIONS OF THE EQUATIONS OF MOTION
The damped-polariton theory describes the interaction of light with an absorbing homogeneous medium. The coupling of the matter to a frequency continuum is the cause of the light absorption. The continuum could be a phonon bath or something else, but for the moment that is not specified: it is a collection of harmonic oscillators with a frequency-dependent coupling to the matter fields. Since the medium is homogeneous, the dynamics can be separated into a transverse and a longitudinal part. In this article we concentrate on the transverse excitations as described by the Hamiltonian of equations (1) and (2) [13,14]. We use the same notations as in [14]. In particular, k̃c stands for √(k²c² + ω_c²), where the frequency ω_c equals α/√(ρε₀), with α the coupling constant between field and matter, and ρ the density. The resonance frequency ω₀ of the polarization field is renormalized to ω̃₀, which is the positive-frequency solution of equation (6). The k-integrals in the Hamiltonian are understood to also denote a summation over the two transverse polarization directions labeled by λ. The creation and annihilation operators satisfy standard bosonic commutation relations. The Heisenberg equations of motion for the bath annihilation operators, and similarly for the creation operators, follow from the Hamiltonian. In the following we drop the (λ, k)-labels. We solve implicitly for the bath variables, as was done in [31] in a classical treatment of the model; this yields the solution (8). The annihilation operators are defined in terms of the (transverse) physical fields, and similarly for the creation operators. Here A and E are the vector potential and the electric field, X the polarization field and P its canonical conjugate. Insertion of the solution (8) and its Hermitian conjugate in the equations of motion gives the coupled equations (9) and (10). In the last equation, the bath operator B(t) is defined in (11), whereas the function F in the convolution in (10) is given by (12). We get a system of algebraic equations (13) by taking the Laplace transform, which we denote by a bar. Through the operator B̄(p) the bath remains part of the system of equations: this is as far as we can "integrate out" the bath variables. Now we can determine the dielectric function ε(ω), which is a classical quantity, by putting the determinant of the (4 × 4) coefficient matrix to zero. The determinant gives the dispersion relation D(p) ≡ ε̄(p)p² + k²c² = 0, (14) with the "Laplace dielectric function" ε̄(p) given in (15). The function F̄(p) is the Laplace transform of F(t), which was defined in equation (12). From this we find the dielectric function ε(ω) of (16), with infinitesimal positive η and with F(ω) defined in (17). The difference between F(ω) and F(t) is denoted by their arguments. The dielectric function satisfies the Kramers-Kronig relations and has the property of a response function that ε(−ω*) equals ε*(ω). It can be shown that it has no poles in the upper half plane, provided that the integral in (6) exists. Previous authors [14,31,32] assumed that the analytical continuation of V²(ω) to negative frequencies is anti-symmetrical in frequency. Then (16) reduces to the dielectric constant in [31], where it was shown to be identical to the more complicated expression in [14]. We combine (13) and (15) and write the Laplace fields in terms of the fields at time t = 0, with coefficients that are functions of the Laplace dielectric function ε̄(p) and susceptibility χ̄(p) = ε̄(p) − 1. For the electric field we find the expression (18). The other Laplace operators can be found in the same way and are listed in the Appendix.
The inverse Laplace transform gives the fields at time t in terms of the fields at time t = 0; the result is equation (19), where, for instance, the coefficient M_EE(t) is given by the integral (20). The operator B_E(t) in equation (19) is the contribution of the t = 0 bath operators to the electric field. This term will be analyzed in more detail in section IV. The equal-time commutation relations of the field operators are the canonical ones of (21). All other inequivalent combinations of operators commute. In particular A and X are independent canonical variables. Hence, we have the property that the commutator [A, −D], with D the microscopic displacement field, gives the canonical result as well. With the help of (19) and (21), we can also calculate non-equal-time commutators, for example the one in (22). In principle we have solved the complete time evolution of the field operators. In section III we analyze in more detail their short-time behavior, whereas in section IV we consider the long-time limit.
III. SHORT-TIME LIMIT: SUM RULES
For fixed k, the zeroes of the dispersion relation (14) are the poles of the integrand in (20). We assume that they are simple first-order poles and rewrite the integral (20) as an integral over frequencies ω = ip. Then, using contour integration in the lower frequency half plane, we find the coefficients (23) for the electric field. Some details of the calculation and a list of coefficients M_mn(t) of other operators can be found in the Appendix. In these expressions, the frequencies Ω_j = Ω_j(k) are the complex-frequency solutions of the dispersion relation (14); if Ω_j is a solution, then −Ω_j* is also a solution of the dispersion relation. We can choose Ω_j(k) to be the solution with a positive real part. The summation over j is a summation over all the polariton branches of the medium. For each branch, the complex phase velocity is defined as v_p,j(k) = Ω_j(k)/k and the group velocity as v_g,j(k) = dΩ_j(k)/dk. For convenience, we leave out their explicit k-dependence in the following.
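Because the branch frequencies Ω_j(k) are defined only implicitly by the dispersion relation, it may help to see how they can be obtained numerically. The sketch below assumes, for concreteness, a single-resonance Lorentz dielectric function of the kind discussed in section V A; the parameter values, the exact form of ε(ω) and the starting guesses are illustrative assumptions, not quantities taken from the paper.

```python
import numpy as np
from scipy.optimize import fsolve

# Illustrative parameters (units of omega_0); the single-resonance Lorentz form
# of eps below is an assumption, of the kind discussed in section V A.
w0, wc, kappa0, c = 1.0, 0.5, 0.05, 1.0

def eps(w):
    """Lorentz dielectric function with damping kappa0 (assumed form)."""
    return 1.0 + wc**2 / (w0**2 - w**2 - 2j * kappa0 * w)

def dispersion(w, k):
    """D = eps(w) w^2 - k^2 c^2; its complex zeros are the branch frequencies."""
    return eps(w) * w**2 - (k * c)**2

def branch(k, guess):
    """Complex root Omega_j(k) near `guess`, via fsolve on (Re D, Im D)."""
    f = lambda x: [dispersion(x[0] + 1j * x[1], k).real,
                   dispersion(x[0] + 1j * x[1], k).imag]
    xr, xi = fsolve(f, [guess.real, guess.imag])
    return xr + 1j * xi

k, dk = 1.2, 1e-6
for name, guess in [("lower polariton", 0.85 - 0.02j), ("upper polariton", 1.40 - 0.02j)]:
    Om = branch(k, guess)
    vg = (branch(k + dk, Om) - branch(k - dk, Om)) / (2 * dk)  # group velocity dOmega/dk
    print(f"{name}: Omega = {Om:.5f}, v_p = {Om / k:.5f}, v_g = {vg:.5f}")
```

The same root-plus-finite-difference pattern yields the complex phase and group velocities that enter the sum rules below.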
From equation (19) we can see that the "diagonal" coefficient M_EE(t) in (23) must approach unity for t → 0, whereas all other coefficients must vanish in that limit. Velocity sum rules can be derived in a systematic way by evaluating the two types of integrals (24) and (25). Here ε(ω) is an arbitrary dielectric function that satisfies the Kramers-Kronig relations, so it is not necessarily of the specific form (16). The integrals can be evaluated using contour integration in the complex frequency plane. We can close the contours either in the upper or in the lower half plane. Equating the two answers gives a velocity sum rule. In this way one finds for all wavevectors k the sum rules (26) and (27); these can be found from (24) with n = 1 and n = −1, respectively. Both relations have been obtained before [8,14,33,34]. The second, Σ_j Re(v_g,j/v_p,j) = 1, was coined the Huttner-Barnett sum rule in [33], because of its importance in phenomenological quantum theories of dielectrics. A second group of sum rules, (28), states that certain imaginary parts of sums of products of phase and group velocities vanish. The rules with q = −1 and q = 1 follow from (25) with m = 0 and m = 2, respectively; the case with q = 0 follows from (24) with n = 0. All of these sum rules are independent of any specific form of the dielectric function, as long as it satisfies the Kramers-Kronig relations. Other sum rules do depend on the behavior of ε(ω) for high or low frequencies. For example, from (25) one obtains the sum rule (29), which depends on the static limit of the dielectric function. For conductors the dielectric function is singular at ω = 0 [35], but for dielectric functions which can be found from the damped-polariton model, ε(0) is finite. Two other sum rules can be derived when for high frequencies ω²χ(ω) approaches a constant value that we name −ω²_lim. From (25) with m = 3 we then find the sum rule (30). Moreover, if ω²χ(ω) + ω²_lim falls off faster than ω⁻¹, then the integral produces the sum rule (28) with q = 2.
Returning now to the time-dependent coefficients (23) (and the other ones in the Appendix), one finds by inspection that one needs all the above sum rules except (29) to prove that the coefficients have the right limits for t = 0. In particular, from equation (16) it follows that the frequency ω_lim as defined above exists in the damped-polariton model and equals ω_c. Then with (27) and (30) we see that indeed one has M_EX(0) = 0 in (23).
It is easy to prove the above sum rules in the following one-resonance model: ε(ω) = 1 + ω_c²/(ω₀² − ω²). (31) This ε(ω) is real and violates the Kramers-Kronig relations, but it can be considered as a limiting case of an acceptable dielectric function. The high-frequency limit of ω²χ(ω) indeed equals −ω_c². The two sum rules (26), (27) were shown to be valid for this model [14] and we want to check (30) as well. The dispersion relation is ω⁴ − ω²(ω₀² + ω_c² + k²c²) + k²c²ω₀² = 0, (32) which has two (real) solutions Ω₊² and Ω₋² with sum (ω₀² + ω_c² + k²c²) and product k²c²ω₀². It follows that for all k one has Σ_j v_p,j³ v_g,j = c⁴ + c²ω_c²/k², in agreement with (30). The other sum rules can also be checked for this simple model. The sum rules (28) obviously hold, because all group and phase velocities are real in this model. In models that respect the Kramers-Kronig relations, these sum rules are nontrivial.
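The closed-form branch frequencies of this one-resonance model also allow a quick numerical cross-check of the sum rules. The sketch below (illustrative parameter values; c = 1) verifies Σ_j v_p,j v_g,j = c², Σ_j v_g,j/v_p,j = 1 and the model result Σ_j v_p,j³ v_g,j = c⁴ + c²ω_c²/k² derived above; all velocities are real here, so no real parts need to be taken.

```python
import numpy as np

w0, wc, c = 1.0, 0.5, 1.0  # illustrative values

def branches(k):
    """Real branch frequencies Omega_+/- from w^4 - S w^2 + P = 0."""
    S = w0**2 + wc**2 + (k * c)**2   # sum of the squared branch frequencies
    P = (k * c)**2 * w0**2           # product of the squared branch frequencies
    disc = np.sqrt(S**2 - 4.0 * P)
    return np.sqrt((S + disc) / 2.0), np.sqrt((S - disc) / 2.0)

for k in [0.3, 1.0, 3.0]:
    dk = 1e-6
    (up_p, lo_p), (up_m, lo_m) = branches(k + dk), branches(k - dk)
    vg_u, vg_l = (up_p - up_m) / (2 * dk), (lo_p - lo_m) / (2 * dk)  # group velocities
    up, lo = branches(k)
    vp_u, vp_l = up / k, lo / k                                      # phase velocities
    print(f"k = {k}:")
    print("  sum v_p*v_g   =", vp_u * vg_u + vp_l * vg_l, " (should be c^2 = 1)")
    print("  sum v_g/v_p   =", vg_u / vp_u + vg_l / vp_l, " (should be 1)")
    print("  sum v_p^3*v_g =", vp_u**3 * vg_u + vp_l**3 * vg_l,
          " (should be c^4 + c^2*wc^2/k^2 =", 1.0 + wc**2 / k**2, ")")
```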
IV. LONG-TIME LIMIT
A. Field and medium operators
The coefficients M_EE(t) etc. in (23) damp out exponentially in time. Every polariton branch has its own characteristic damping time τ_j(k) = 1/Im Ω_j(k). After a few times the maximum characteristic damping period, with the maximum taken over all branches, the exponentially damped coefficients can be neglected. We call this the long-time limit. The speed at which it is attained depends on ε(ω) and on k. For long times, only the bath operator B_E(t) in (19) survives, because it has poles on the imaginary axis in the complex p-plane. Hence, in the long-time limit, all field operators are functions of the initial bath operators alone. For the electric field we find the long-time solution E_l, where the subscript l denotes the long-time limit. The temporal (and spatial) Fourier components of the long-time solutions are given in (37), where the superscript + denotes the positive-frequency component of the operator. For future reference we also give in (38) the long-time limit of the electric field operator as a function of position and time. Similar expressions can be given for the other operators. Notice that these long-time solutions indeed are solutions of the equations of motion (10) and of the Maxwell equations. The canonical commutation relations (21) should be preserved in this long-time limit. Also, the non-equal-time commutation relations like in equation (22) should be time-translation invariant. The commutation relations can be verified with the equality (39), which follows from equations (16) and (17). Since ε_i(ω) is anti-symmetric in ω, all commutators can be shown to be proportional to integrals over the whole real frequency axis. Contour integration then leads to the required results. The solutions found above can be related to those obtained by explicit diagonalization of the full Hamiltonian of the model. In [14] this diagonalization was carried out by using Fano's technique. In that way the field and medium operators were written in terms of the diagonalizing annihilation operators (called C(k, ω) in [14]) and the corresponding creation operators. If one replaces the bath annihilation operators b_ω(k, 0) in the long-time solutions (37) by the diagonalizing annihilation operators C(k, ω), and if one makes similar replacements for the creation operators, the expressions for the field and medium operators in [14] are recovered.
The long-time solutions can be interpreted as follows: when the dielectric medium is prepared in a state that is not an eigenstate of the Hamiltonian and if the coupling V (ω) is nonzero for all frequencies, then the medium tends to an equilibrium that is determined by the state of the bath. The time it takes for this equilibrium to settle down is the time after which the long-time solutions can be used for the field operators. So one can always use the long-time solutions in the calculations, unless the medium has been specially prepared in a non-equilibrium state a short time before one does the experiment. The interpretation of the long-time solution will become clearer in section VI where we calculate spontaneous emission.
In summary, for times long after t = 0, all field operators can be expressed solely in terms of the bath operators at time t = 0. The time evolution is governed by the bath Hamiltonian alone. The field operators still satisfy Maxwell's equations and the canonical commutation relations. Classical expressions for the Maxwell fields would have died exponentially to zero in this long time limit.
B. Relation with phenomenological theories
The long-time solutions of the field operators can be related to expressions in phenomenological theories, as we will show presently. In phenomenological quantum mechanical theories of homogeneous absorbing dielectrics [19] - [21], a noise current density operator J is added to the Maxwell equations in order to preserve the field commutation relations. The displacement field D̃⁺ in the last equation is defined in terms of the electric field and the dielectric function as D̃⁺ = ε₀ ε(ω) E⁺. (42) We write D̃ to stress the difference with the microscopic displacement field D in section II. After taking the spatial Fourier transform, and using B⁺ = ∇ × A⁺ and E⁺ = iωA⁺, so that the first of the Maxwell equations is satisfied, one finds from the second equation the wave equation (43). The vector potential and all Maxwell fields can be calculated in terms of the noise current density J.
The canonical commutation relations are preserved if for the noise current one chooses the correlations of (44) [19,20]. Instead of using the noise current operator, one defines basic bosonic operators in (45), so that these operators satisfy the simple commutation relations (46). Now we turn to the long-time solutions of the field operators that we determined in section IV A. The long-time solution of the vector potential in (37) obviously is a solution of the inhomogeneous wave equation (47). This kind of equation is well-known in Langevin theories [17,18]: the coupling to a bath gives a damping term (here: a complex dielectric constant) in the equations of motion of the system. Besides damping, there is an extra term that is neglected classically. This term is the quantum noise operator, which features the bath operators at time t = 0. The long-time solution of (47) can justify the phenomenological equation (43), if we make the identification (48), where we used equation (39). We see that up to a phase factor, the bath operators b_ω(λ, k, 0) from the microscopic theory serve as the basic bosonic operators f(λ, k, ω) in the phenomenological theories. We want to stress that the identification (48) is only valid in the long-time limit when the medium is in equilibrium with the bath.
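For reference, the standard phenomenological relations alluded to here take the following form in much of the Langevin-noise literature; the normalization below is the commonly quoted one rather than a verbatim copy of equations (44)-(46), so the prefactor should be treated as an assumption:

```latex
% Hedged sketch of the phenomenological noise-current relations (notation of section IV B):
\begin{align}
  \mathbf{J}^{+}(\mathbf{k},\omega) &= \omega
    \sqrt{\frac{\hbar\varepsilon_0\,\varepsilon_i(\omega)}{\pi}}\;
    \mathbf{f}(\mathbf{k},\omega),\\
  \bigl[f_{\lambda}(\mathbf{k},\omega),\,
        f^{\dagger}_{\lambda'}(\mathbf{k}',\omega')\bigr]
    &= \delta_{\lambda\lambda'}\,\delta(\mathbf{k}-\mathbf{k}')\,
       \delta(\omega-\omega').
\end{align}
```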
In section II we saw that −ε₀E is the canonical conjugate field of A and that [A, −D] gives the canonical result as well. Since we can make the identification (48), the same relations hold in the phenomenological theory that was described in this section. But now let us calculate the commutator [A, −D̃], with D̃⁺ defined as in equation (42) and D̃⁻ as its Hermitian conjugate. We can use the long-time solutions, because the commutation relations are preserved; the result is (49), with ε_r(ω) the real part of the dielectric constant. The symmetry of the integrand enables us to rewrite the right-hand side as an integral over all real frequencies. When using contour integration, one cannot replace ε*(ω) by ε(−ω*), but the analytical continuation to complex frequencies of ε*(ω) = ε(−ω) must be used instead. This yields (50), where we assumed as before that all poles of the dispersion relation are first-order poles. Note that ε(−Ω_j) depends on the behavior of the dielectric function in the upper half plane. Contrary to a statement in [20], the commutator does not give the canonical result, because in general there is no sum rule for the right-hand side of the equation. In other words, (D − ε₀E) is canonically independent from E, but (D̃ − ε₀E) is not. The operator (D − D̃) is proportional to the Langevin noise term in the wave equation for the electric field. Now let us neglect absorption at all frequencies. Strictly speaking, the limit ε_i(ω) → 0 is unphysical because it violates the Kramers-Kronig relations, but the limit is sometimes taken for dielectrics that show negligible absorption at optical frequencies [21,33]. When ε(ω) becomes real, the solutions Ω_j become real and in that limit one has ε(−Ω_j) → ε(Ω_j) = (c/v_p,j)². Inserting this in (50) and using the Huttner-Barnett sum rule Σ_j Re(v_g,j/v_p,j) = 1, we immediately find the canonical result for [A, −D̃]. We compare this with the results in [33], where the dielectric function is assumed to be real. There a phenomenological Lagrangian was introduced and the fields A and −D were correctly identified as a canonical pair. The Huttner-Barnett sum rule was invoked to show that their commutator indeed had the canonical form. It was concluded that it is misleading that also [A, −ε₀E] has the canonical form.
Here we have learnt that this misleading result is not surprising: in the limit of real dielectric constants, and only then, both [A, −ε₀E] and [A, −D̃] can have the canonical form in the same gauge, the reason being that D̃ approaches D in that limit.
V. MODEL DIELECTRIC FUNCTIONS
Phenomenological theories as discussed in section IV B have expressions for ε(ω) as input. In practice, this input will be the outcome of measurements of the dielectric function. By choosing the appropriate microscopic coupling constants and resonance frequencies in the damped-polariton model, one can hope to find a given dielectric function, thus providing a connection with phenomenological theories. It was argued in [31] that the well-known Lorentz oscillator form of the dielectric function could not be found from the damped-polariton theory in this way. We shall reconsider this issue below.
A dielectric function that follows from the damped-polariton Hamiltonian (2) will have a single resonance, because there is only one resonance frequency ω 0 in the matter fields. Experimentally, one may find more resonances in the ε(ω). This should not be used as an objection to the damped-polariton model, because in principle one could easily extend the theory with more material resonances. In this section, we consider two of these one-resonance models.
A. The Lorentz oscillator model
We want to find microscopic coupling constants in the damped-polariton theory so that the resulting ε(ω) has the Lorentz oscillator form (51). Here ω_res is the resonance frequency of the medium and ω_c,Lor is a frequency that is related to the coupling strength between the electromagnetic and the matter field. Identifying ε(ω) from equation (16) with ε_Lor(ω), we find, apart from the trivial identification ω_c = ω_c,Lor, the relation (52), where the frequency shift ∆ is defined such that ω_res² = ω̃₀² − ω̃₀∆/2. The coupling V²(ω₁) is fixed by the identification of the imaginary parts and for all frequencies it equals V²(ω₁) = 4κ₀ω₁/(πω₀). However, if we insert this coupling in the equation for the real parts, we find that the frequency shift ∆ is infinitely large. Also, the renormalized frequency ω̃₀ in equation (6) blows up. We can solve this problem by introducing a frequency cut-off in the coupling, namely V²(ω₁) = 4κ(ω₁)ω₁/(πω₀), with the cut-off function κ(ω₁) given in (53). With this choice one finds ω̃₀ = √(ω₀² + 2κ₀Ω), which clearly has a strong dependence on the cut-off frequency Ω. The shift ∆ becomes both finite and frequency-dependent, as given in (54). The principal value integral can be evaluated by means of contour integration in the complex frequency plane. In this way we arrive at the expression (55) for the dielectric function.
It is well-known that there are two branches of solutions of the dispersion relation when the dielectric function is of the form (51): there is an upper and a lower polariton branch. The dielectric function (55) gives rise to another branch: it has a purely imaginary frequency with magnitude of the order of the cut-off frequency. This "cut-off branch" has negligible k-dependence. In fact, the leading k-dependent term for large Ω is 2iω_c²κ₀k²c²/Ω⁴. Clearly, the group velocity on this branch is practically zero, so that the contribution of the cut-off branch to the velocity sum rules of section III can be neglected.
We conclude that high cut-off frequencies can be chosen such that in the optical frequency regime the dielectric function cannot be discerned from a Lorentz dielectric function with resonance frequency ω res = ω 0 and damping constant κ 0 . The solutions of the dispersion relation of the upper and the lower polariton branch together satisfy the sum rules of section III.
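For orientation, the standard Lorentz shape that this optical-regime limit corresponds to can be written down explicitly. The expression below is a hedged reconstruction: it is consistent with the resonance frequency ω_res = ω₀, the damping constant κ₀, and the branch-cut locations ω₁ and ω₂ quoted in section VI, but it is not a verbatim quote of (51) or (55).

```latex
% Hedged reconstruction of the Lorentz dielectric function (optical regime):
\begin{equation}
  \varepsilon_{\mathrm{Lor}}(\omega) \simeq 1
  + \frac{\omega_c^2}{\omega_0^2 - \omega^2 - 2i\kappa_0\,\omega}.
\end{equation}
```

Its pole and zero lie at ω = −iκ₀ ± √(ω₀² − κ₀²) and ω = −iκ₀ ± √(ω₀² + ω_c² − κ₀²), which match the branch points ω₁ and ω₂ given in section VI.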
B. The point scattering model
In general the dielectric function ε(ω) describes the propagation of a coherent light beam in a fixed direction in an isotropic medium. A complex ε(ω) means that there is extinction, which can be caused either by scattering or absorption, or both. The dielectric function does not contain information about the extinction mechanism. A well-known dielectric medium showing polariton behavior is the dilute gas, which can be described as a collection of point dipoles that scatter light independently. If only one type of elastic scatterers is present, each having only one resonance, then the dielectric function is given by the expression (56) of [36], where n = N/V is the density of the scatterers (not to be confused with the refractive index n(ω)) and Γ_e = e²/(4πε₀m_ec²) is the classical electron radius. This dielectric function can also be found if one supposes that the medium consists of classical harmonically bound point charges whose motion is described by the Abraham-Lorentz equation. The dielectric function (56) has the property that the corresponding T-matrix t(ω) satisfies the optical theorem, with t(ω) defined as ε(ω) = 1 − nt(ω)(c/ω)². However, (56) is not a proper response function, since it has a pole near the very large positive imaginary frequency 3ic/(2Γ_e). This can be related to the need for the a-causal phenomenon called pre-acceleration to avoid so-called runaway solutions of the Abraham-Lorentz equation [37]. Although we know that in the damped-polariton theory only proper response functions can be found, we proceed like in the previous subsection and try to find coupling constants that in the optical regime give rise to the dielectric function (56). Equating with (16) we get ω_c² = 4πc²Γ_en and V²(ω₁) = 4Γ(ω₁)ω₁³/(3πω₀c), with the cut-off function Γ(ω₁) given in (57). Here we have inserted a convenient frequency cut-off from the start in order to keep finite the frequency ω̃₀ and the shift ∆. Contour integration gives ω̃₀² = ω₀² + √2 Γ_eΩ³/(3c), and the shift ∆ is given in (58). The dielectric function then has the form (59). In this case, the resonance frequency shifts to frequencies lower than ω₀ and the shift is larger for larger cut-off frequencies. However, since the classical electron radius is so much smaller than an optical wavelength, it is very well possible to choose a cut-off frequency such that ω₀ ≪ Ω ≪ c/Γ_e. Then for optical frequencies, the dielectric function (59) is of the form (56). Note that for high frequencies ω²χ(ω) → −ω_c² for the dielectric function (59), but not for (56). Again, the frequency cut-off introduces a cut-off branch. In Fig. 1 we plot the real parts of the three solutions Ω_j(k) of the dispersion relation. As a measure of the damping, we introduce κ, which is given by Γ_eω₀²/(3c). For purposes of presentation, the numerical values of both ω_c and κ were chosen artificially large for a dilute gas. The frequencies on the cut-off branch are of the same magnitude as the cut-off frequency Ω, much higher than the optical regime. The imaginary parts of the upper and lower polariton branches are plotted in Fig. 2. The imaginary part of the cut-off branch is large negative and practically constant for parameters as given in Fig. 1. Again, since the group velocity on the cut-off branch is practically zero, the upper and lower polariton branches together satisfy the sum rules of section III. In particular, Fig. 2 illustrates that the upper and lower polariton group velocities v_g,u and v_g,l satisfy the sum rule Im(v_g,u + v_g,l) = 0.
The cut-off, which was necessary to produce the dielectric function in the damped-polariton theory, neatly removes the pre-acceleration behavior associated with a pole in the upper half plane and leads to a good response function. The form of the coupling V(ω) given above (57) has the following physical interpretation. By equating the damped-polariton dielectric function with (56), we assumed that the dilute gas can be described as a homogeneous dielectric. The light scattering by the gas molecules can be accounted for by an absorptive coupling to the free electromagnetic field, as long as only single scattering of light is relevant. Then scattered light is lost for propagation in the original direction. If the matter-bath coupling is dipole coupling, then for optical frequencies the product V²(ω₁)/ω₁ should be proportional to the density of states of the electromagnetic field, which goes quadratically in frequency. This is indeed the case.
VI. SPONTANEOUS EMISSION
The spontaneous emission rate in principle is a time-dependent quantity. In this section we investigate the transient dynamics of the spontaneous emission rate of a guest atom in an absorbing medium, when the transition frequency of the guest atom is close to a material resonance of the medium. We show how our results relate to previous treatments of spontaneous emission in absorbing dielectrics, where Fermi's Golden Rule was used to show that the time-independent (equilibrium) value for the spontaneous emission rate equals Γ₀ Re[n(ω_A)] [11,16]. Recently, local field effects have been included in quantum electrodynamical formulations of the problem [3,6,9,11,38], but we shall not focus on them in this paper.
We model the guest atom as a two-level atom with ground state |g⟩ and excited state |e⟩ and Hamiltonian H_A = ℏω_A |e⟩⟨e|. The medium (with fields and bath included) is described by the damped-polariton model, with Hamiltonian H_M given by (1). The total Hamiltonian is H = H₀ + V, with H₀ = H_M + H_A and V = −μ_A · E(r_A), the dipole interaction between the atom and the medium; μ_A is the atomic dipole moment operator and E(r_A) is the electric field operator at the position r_A of the atom.
Suppose that the damped-polariton system is prepared at time 0 in a state described by a density matrix ρ_M(0). We do not assume that ρ_M(0) commutes with H_M, nor that it factorizes into a product of a density operator for the bath and a density operator for the undamped-polariton system (as is often assumed for convenience [39]). At time t₀ > 0 we bring the guest atom into its excited state and couple it to the damped-polariton system. Using perturbation theory, one can calculate [18] the time-dependent probability that the guest atom has emitted a photon at time t > t₀. We define the derivative of this quantity as the instantaneous spontaneous emission rate Γ(t), given by the expression (60), where μ is now the dipole transition matrix element of the guest atom.
If the guest atom is excited a long time after the initial preparation of the medium, all transient effects in the electric field have damped out. Hence, the field may be replaced by its long-time limit E_l(r_A, t), which is given in (38). Since E_l depends only on the bath operators at t = 0, we may write (60) in the form (61). Here ρ_red is the reduced density matrix obtained by tracing out the electromagnetic and material degrees of freedom: ρ_red(0) = Tr_em,mat ρ_M(0). For the special case that the initial density matrix ρ_M(0) factorizes, the reduced density matrix is the bath density matrix ρ_bath(0) at t = 0. In general, the initial state of the electromagnetic and material degrees of freedom at t = 0 does not play a role in the emission rate. Spontaneous emission in its pure form arises if the reduced density matrix describes the ground state of the bath. Let us assume this is indeed the case. Upon inserting (38) in (61) we can perform the t′-integral, the integrals over the wavevector and the summations over the polarization directions. This leads to the rate formula (62), with n(ω) = √ε(ω) the complex refractive index. For times (t − t₀) that are large enough, one may replace sin[(ω − ω_A)(t − t₀)]/(ω − ω_A) by πδ(ω − ω_A). However, the time scale at which this replacement is valid depends on the resonance structure of the refractive index n(ω). Since we want to study just this time scale, we will not make the replacement. To evaluate the integral we multiply the integrand by a convergence factor Ω⁴/(Ω⁴ + ω⁴), with Ω ≫ ω_A. The specific choice of the cut-off frequency Ω will only affect Γ(t) at time differences t − t₀ much smaller than a single optical cycle. We need to use a high-frequency cut-off at this point, because the dipole approximation is incorrect for high frequencies.
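The qualitative buildup of Γ(t) can also be reproduced by brute-force quadrature instead of contour deformation. The sketch below is illustrative only: the spectral weight ω³ Re n(ω) (one factor ω² Re n(ω) for the radiative mode density, one factor ω for the dipole coupling) and the overall normalization, chosen so that the long-time limit is Γ₀ Re n(ω_A), are assumptions, since the exact prefactors of (62) are not quoted here; the dielectric function is the hedged Lorentz form of section V A.

```python
import numpy as np

# Illustrative parameters in units of omega_0.
w0, wc, kappa0 = 1.0, 0.5, 0.05
wA = w0                  # on-resonance guest atom, as in Fig. 4
Omega = 10.0 * w0        # high-frequency cutoff, the value quoted in the text

def n_re(w):
    """Real part of the complex refractive index n(w) = sqrt(eps(w))."""
    eps = 1.0 + wc**2 / (w0**2 - w**2 - 2j * kappa0 * w)
    return np.sqrt(eps).real

# Grid fine enough to resolve the resonance (width kappa0) and the kernel
# oscillations (period 2*pi/tau) at the largest tau below.
w = np.linspace(1e-3, 200.0 * w0, 400_000)
weight = (w / wA)**3 * n_re(w) * Omega**4 / (Omega**4 + w**4)  # assumed weight

def rate(tau):
    """Gamma(t)/Gamma_0 at tau = t - t0 (brute-force quadrature; slow but simple)."""
    kern = tau * np.sinc((w - wA) * tau / np.pi)  # = sin[(w - wA) tau]/(w - wA)
    return np.trapz(weight * kern, w) / np.pi

for tau in [5.0, 10.0, 20.0, 50.0, 100.0, 400.0]:
    print(f"tau = {tau:6.1f}   Gamma/Gamma0 = {rate(tau):.4f}")
print("long-time value Re n(wA) =", n_re(wA))
```

The printed values build up towards Re n(ω_A) on the time scale 1/κ₀, in line with the contour-integration result discussed next.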
For the dielectric function, we take the Lorentz oscillator form (55), and we choose the cut-off frequency in that model to be identical to the one inserted in (62). In Fig. 3, we give the real part of the refractive index, which clearly changes rapidly near ω = ω₀. It is a familiar figure and it shows that the refractive index does not change much while increasing the cut-off frequency Ω from 10ω₀ to infinity. The density of radiative modes around the material resonance is proportional to ω² Re[n(ω)]. With this model for the dielectric function and the parameters as in Fig. 3, we calculated Γ(t) in the case that the transition frequency ω_A exactly equals ω₀. Since the integrand in (62) is rapidly fluctuating, it is expedient to use complex contour deformation to evaluate the integral. We add an infinitesimal positive imaginary part to the denominator and split the sine into two complex exponentials. The contour of the integral with exp[i(ω − ω_A)(t − t₀)] in the integrand is deformed towards the positive imaginary axis. The contribution from the pole arising from the convergence factor can be neglected at time scales t − t₀ ≫ ω_A⁻¹. Likewise, the integration contour of the integral with exp[−i(ω − ω_A)(t − t₀)] is deformed towards the negative imaginary axis. Again, the pole contribution from the convergence factor is negligible. Further contributions, which cannot be neglected, arise from the branch cuts of n(ω) = √ε(ω) and from the pole at ω_A. The latter contribution is easily evaluated and yields the equilibrium value Γ(∞) = Γ₀ Re n(ω_A). In contrast, the branch cuts yield time-dependent contributions to Γ(t). For large Ω they are situated at ω₁ = −iκ₀ + √(ω₀² − κ₀²) and ω₂ = −iκ₀ + √(ω₀² + ω_c² − κ₀²). Around ω₁ and ω₂, we can approximate the dielectric function by the expansions (63) and (64). The branch cut at ω₁ gives the contribution (65) to the spontaneous emission rate, where J(t) is defined in (66). The branch cut around ω₂ gives a similar contribution. The integrals arising from the branch cuts and from the imaginary axis can easily be evaluated numerically, since their integrands are no longer rapidly fluctuating. The result is the solid line in Fig. 4. We see that the spontaneous emission rate builds up until it finally reaches the time-independent equilibrium value Γ₀ Re n(ω_A). The dashed line in Fig. 4 is an analytical approximation for Γ(t), which captures the main features of the time dependence, at least qualitatively. It is derived by retaining only the contribution (65) in the time-dependent part of Γ(t), as this is dominant for large t. Moreover, we approximate J(t) by the first term in its asymptotic expansion for large t − t₀. In this way we arrive at the approximate expression (67) for Γ(t).
As explained, this approximation contains only the contribution from the branch cut at ω₁; the branch cut at ω₂ gives a faster decaying term, which goes like exp[−κ₀(t − t₀)]/(t − t₀)^{3/2}. The contributions from the integrals along the imaginary axis decay even faster. It can be seen from equation (67) that the amplitude of the time-dependent part of Γ(t) falls off as exp[−κ₀(t − t₀)]/(t − t₀)^{1/2} and also that the amplitude of the extra term is largest around resonance, when ω_A ≃ √(ω₀² − κ₀²). Away from resonance, oscillations with frequency √(ω₀² − κ₀²) − ω_A are present. Fig. 4 shows the on-resonance case, when the time-dependent term shows no oscillations but has a relatively large amplitude.
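The features just listed (an equilibrium offset, an exp[−κ₀(t − t₀)]/(t − t₀)^{1/2} envelope and detuning oscillations) fix the structure of (67) up to an amplitude and a phase; a hedged sketch of that structure, with C and φ left unspecified, is:

```latex
% Structural sketch of the asymptotic rate (67); C and phi are not determined here.
\begin{equation}
  \Gamma(t) \approx \Gamma_0\,\mathrm{Re}\,n(\omega_A)
  + C\,\frac{e^{-\kappa_0 (t-t_0)}}{\sqrt{t-t_0}}
    \cos\!\Bigl[\bigl(\sqrt{\omega_0^2-\kappa_0^2}-\omega_A\bigr)(t-t_0)+\varphi\Bigr].
\end{equation}
```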
The main result of the present discussion is the time-dependence of the spontaneous emission rate. The time-independent value is not reached instantaneously, but at a time scale that is governed by the resonance characteristics of the medium. In fact, the smaller the resonance width κ₀, the longer it takes to reach the time-independent value. Typically, it takes ω₀/κ₀ optical cycles, as follows from the exponential exp[−κ₀(t − t₀)] in the approximate expression (67). For narrow resonances with ω₀/κ₀ large, the transient dynamics may take a substantial amount of time.
VII. DISCUSSION AND CONCLUSIONS
We have solved the equations of motion for the field operators in the damped polariton model using Laplace transformations. The solutions of the field and medium operators are the sum of a transient and a permanent part. The latter are expressed solely in terms of the initial bath operators. Long after the initial time all field and medium operators are functions of the bath operators alone, provided the coupling to the bath is nonzero for all frequencies. The long-time solutions satisfy quantum Langevin equations in which the initial bath operators figure as the quantum noise source. The same continuum that produces the absorption also forms the noise source that keeps the commutation relations in order. This is conceptually simpler than expressing the quantum Langevin noise in terms of the creation and annihilation operators that diagonalize the total Hamiltonian of the damped-polariton model [14].
The effects of the initial state of the field and medium variables on the expectation values at a later time are noticeable only during a short period that is determined by the characteristic relaxation times of the damped polariton modes. Once these transient effects have died out the expectation values are determined by the reduced density matrix which follows from the full density matrix at the initial time by taking the trace over the degrees of freedom of field and matter (without bath). If the full density matrix at the initial time factorizes, the reduced density matrix equals the initial bath density matrix.
The method of long-time solutions can be used for other dissipative quantum systems as well. For models in which the Hamiltonian can be diagonalized completely, it is an alternative to the Fano diagonalization technique. The latter can be quite complicated [14,40], whereas our long-time solutions are found after the simple inversion of a 4 × 4 matrix, as one sees from sections II and IV. More generally, the long-time method may be useful for dissipative systems with a bilinear coupling to a harmonic oscillator bath whose dynamics can be integrated out.
We employed the method of long-time solutions to study transient effects in a medium described by a Lorentz oscillator dielectric function. This dielectric function (and that of the point-scattering model as well) can be derived from the damped-polariton model by taking a suitable bath coupling. Although a cut-off procedure turns out to be indispensable, the essential physics in the optical regime can be represented adequately in this way. Once the connection with the damped-polariton model has been established, spontaneous emission processes by a guest atom in a Lorentz oscillator dielectric can be investigated by means of the long-time method. Although transient effects due to the initial preparation of the dielectric have damped out after a few medium relaxation periods, transient behaviour of a different type shows up in the initial stages of the decay process. This transient behaviour, which is related to the preparation of the guest atom in its excited state, leads to a non-exponential decay -or in other words to a time-dependent spontaneous emission rate -if the atomic transition frequency is near a resonance of the dielectric. The non-exponential dynamics takes place at time scales that are inversely proportional to the width of the resonance. As we have shown, the characteristics of the time-dependent decay rate can be captured in an analytic asymptotic expression of which the qualitative features are corroborated by numerical methods. | 2016-03-01T03:19:46.873Z | 2001-01-16T00:00:00.000 | {
"year": 2001,
"sha1": "bfb224f26732ed8c238b095b565ed4a4deff946f",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/quant-ph/0101075",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "6d7483912fed75b3ff56089232dfdb18bb9df91e",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
247368802 | pes2o/s2orc | v3-fos-license | Successful IMRT and concurrent chemotherapy for a patient with intrathoracic extensive‐stage small cell lung cancer
Abstract Treatment of extensive‐stage (ES) small cell lung cancer (SCLC) is a challenge with poor local control and dismal overall survival. Although single extrathoracic metastasis was defined as M1b according to the eighth edition of the tumour–node–metastasis (TNM) classification of lung cancer, M1b also includes involvement of a single intrathoracic nonregional lymph node (LN) such as pericardial, internal mammary or paravertebral LNs. Here, we report a successfully treated case of a 50‐year‐old female with ES‐SCLC with right pericardial LN involvement, cT1cN3M1b (LYM). She initially received two cycles of induction chemotherapy consisting of cis‐diamminedichloroplatinum/cisplatin (CDDP) and etoposide and achieved a very good partial response. She then received curative chemoradiotherapy with intensity‐modulated techniques (45 Gy in 30 fractions BID), followed by an additional cycle of chemotherapy. She has been free of recurrence for more than 2.5 years.
INTRODUCTION
Small cell lung cancer (SCLC) is the most common and aggressive pulmonary neuroendocrine carcinoma, accounting for about 15% of all diagnosed lung cancer cases. 1 Since the late 1950s, the Veterans Administration Lung Cancer Study Group (VALSG) staging system divided SCLC into limited-stage (LS) or extensive-stage (ES) SCLC. 2 LS-SCLC was initially characterized as tumoural involvement limited to one hemithorax (with or without local extension) with no distant extrathoracic metastatic disease and inclusion in a single radiation port, and then a modified version of VALSG staging for SCLC included ipsilateral supraclavicular lymph nodes (LNs) and contralateral mediastinal or supraclavicular LNs and ipsilateral pleural effusions. However, the International Association for the Study of Lung Cancer (IASLC) recommended to use the seventh edition of the American Joint Committee on Cancer (AJCC) tumour-node-metastasis (TNM) staging system for lung cancer instead of the VALSG staging system. 3 The recent criteria of the eighth edition of the TNM staging system also correspond to stages I-III and stage IV for LS-SCLC and ES-SCLC, respectively. 4 Although most clinicians and clinical trials blend the modified VALSG and IASLC criteria by classifying contralateral mediastinal and ipsilateral supraclavicular LN involvement as LS-SCLC, tumoural involvement with intrathoracic LNs beyond the nodal stations shown in the IASLC LN map of lung cancer, such as internal mammary, peri(para)cardiac and paravertebral LNs was classified as ES-SCLC.
Although patients with SCLC have been treated with systemic chemotherapy with or without radiation therapy (RT), and a significant minority of patients with SCLC are amenable to surgical resection, immune checkpoint inhibitors have recently been incorporated into the treatment of ES-SCLC. 1 Consolidative thoracic RT (TRT) is beneficial for selected patients with ES-SCLC with a complete or good response to systemic therapy, especially those with residual thoracic disease and low-bulk extrathoracic metastatic disease. 5,6 Intensity-modulated RT (IMRT) is an innovative radiation technique that optimizes the dose distribution in three dimensions (3D) by focusing radiation on tumour burdens from multiple directions with nonuniform dose intensity in the radiation field, thereby reducing the dose to normal tissues around the tumour and surrounding organs. 7 It has a very wide range of applications, including large tumours or targets that are difficult to treat with ordinary 3D-RT. The NRG/RTOG 0617 trial showed that patients treated with IMRT had significantly less G3-5 pneumonitis and lower heart doses in locally advanced non-SCLC. However, limited data on IMRT are available in SCLC. Here, we report a case of a patient with ES-SCLC with right pericardial LN involvement who was successfully treated with chemoradiotherapy and remained recurrence-free for more than 2.5 years.
CASE REPORT
The patient was a 50-year-old woman. She visited the Department of Otolaryngology at a local general hospital for a lump in the neck in February of a certain year. The biopsied specimen of the right cervical LN showed SCLC; positron emission tomography-computed tomography (PET-CT) revealed accumulation in the right middle lobe tumour and in the bilateral supraclavicular, bilateral mediastinal and right pericardiac LNs, with a very small amount of right pleural effusion (Figure 1); and brain magnetic resonance imaging (MRI) showed no metastasis. The patient, with ES-SCLC, cT1cN3M1b (LYM), was referred to our hospital in the same month. She had a high tumour marker level of neuron-specific enolase (NSE) of 183 ng/ml (normal range, <16.3 ng/ml), and aspiration cytology of the left supraclavicular LN revealed SCLC (Figure 1). Three days later, chemotherapy with CDDP plus etoposide was started, and a PET scan for re-evaluation after two cycles of chemotherapy showed little accumulation compared with that at pretreatment (Figure 2). Forty-five Gy of volumetric modulated arc therapy, an extension of dynamic multi-leaf collimator IMRT, was delivered at 1.5 Gy twice a day concurrently with the third cycle of chemotherapy (Figure 2). V20 (volume of lung receiving 20 Gy or more), V5 (volume of lung receiving 5 Gy or more) and the mean lung dose were 26.23%, 63.42% and 1403 cGy, respectively. The fourth cycle of chemotherapy had to be delayed due to G2 radiation oesophagitis and G4 neutropenia. Since she did not receive prophylactic cranial irradiation (PCI), surveillance with CT and head MRI was performed every 3-6 months for the first 2 years of treatment, and a CT scan at 2 years and 3 months after the last chemotherapy confirmed that the complete response had been maintained. The tumour marker NSE has remained stable within normal limits for more than 2.5 years.
DISCUSSION
The seventh and the eighth editions of the TNM staging system are useful in the treatment of SCLC. 3,4 Stages I-III and stage IV in these systems correspond to LS and ES in the VALSG staging system. In the seventh edition, intrathoracic and extrathoracic metastases were classified as M1a and M1b, respectively. M1b in the seventh edition was divided into M1b and M1c in the eighth edition. In the eighth edition, M1b was defined as involvement of a single extrathoracic metastasis, while the newly created M1c was defined as involvement of multiple extrathoracic metastases. Lung cancer that extends to an intrathoracic nonregional LN beyond the nodal stations of the IASLC LN chart is considered a distant metastasis (M1b disease). 4 M1b disease had a similar prognosis to intrathoracic metastases (M1a disease), and a better prognosis than M1c disease. Single-site metastases (SSM) to the brain alone had a better prognosis than SSM to other sites, 4 but the prognosis for patients with intrathoracic nonregional LN metastases, such as in our case, was unclear due to rarity.
Consolidative TRT is recommended for ES-SCLC patients who had a complete or good response to chemotherapy. 5,6 While platinum-based chemotherapy is the mainstay of treatment for ES-SCLC, the CREST phase III trial was conducted to evaluate TRT (30 Gy in 10 fractions) in ES-SCLC patients who responded to chemotherapy. There was a significant reduction in intrathoracic recurrence and improved 2-year survival in the intervention group compared to the control group. A post hoc analysis showed that patients with two or fewer distant metastases had better survival after TRT, suggesting that this approach should be considered in ES-SCLC patients with residual thoracic disease and low-volume extrathoracic metastases who have a complete or good response to chemotherapy. On the other hand, in the recent NRG Oncology RTOG 0937 phase II trial, patients with ES-SCLC and one to four extracranial metastases after a complete or partial response to chemotherapy were randomized to PCI alone or PCI plus TRT (45 Gy in 15 fractions) for intrathoracic disease and/or consolidative radiation (30-45 Gy) for extracranial metastases. Although this trial was terminated at interim analysis because of slow recruitment and no significant difference in 1-year overall survival (OS), the lower risk of first thoracic recurrence and the higher proportion of patients with failures at new sites in the TRT group suggested the need for better systemic therapy and better RT, including the timing, dose and fractionation of RT. In this regard, the CALGB 30610 trial, which compared dose-escalated TRT of 70 Gy in 35 fractions to accelerated hyperfractionated TRT (AHF-TRT) of 45 Gy twice daily in 30 fractions (the Turrisi method), showed no significant difference in OS and progression-free survival (PFS), 8 supporting high-dose once-daily RT as an acceptable option for patients with LS-SCLC. This notion is also supported by the CONVERT study. Furthermore, in a Norwegian phase 2 study, 9 the experimental arm involving high-dose (60 Gy in 40 fractions) AHF-TRT resulted in a substantial survival improvement without increased toxicity, compared with 45 Gy of the Turrisi method, suggesting that AHF-TRT with 60 Gy is an alternative to the Turrisi method in LS-SCLC.
The number of metastases is a prognostic factor, and the majority of long-term survivors with ES-SCLC have been reported to have either a single metastatic site or metastases limited to the contralateral hemithorax and/or contralateral cervical or axillary nodes. 10 In addition, subsequent subgroup analyses of the CREST trial revealed that the OS benefit of TRT was limited to patients with residual thoracic disease, and the PFS benefit of TRT was conferred to patients with two or fewer metastatic sites and no liver or bone metastases. Furthermore, a retrospective study using a large cohort of patients from the National Cancer Database showed a significant difference in survival with the additional use of TRT with chemotherapy, and also with a greater number of radiation treatments (higher radiotherapy doses). 11 As our case was controlled by IMRT and chemotherapy, ES-SCLC patients with SSM such as contralateral cervical or axillary or intrathoracic nonregional LN metastasis, excluding liver or bone metastases, might be good candidates for chemo-IMRT. As IMRT is performed from multiple directions, the area of low-dose irradiation increases, and attention should therefore be paid to radiation pneumonitis and the risk of secondary cancer. The combination of chemotherapy and consolidative TRT while managing systemic toxicities for ES-SCLC patients might be the paradigmatic model of multidisciplinary treatment.
For the first-line treatment of ES-SCLC, chemoimmunotherapy of platinum and etoposide combined with anti-PD-L1 antibodies, including atezolizumab and durvalumab, has been shown to improve survival and has become the standard of care. 1,[5][6][7] In this treatment setting, the role of consolidative TRT is less clear. In this regard, the phase II/III RAPTOR trial (NRG Oncology LU-007) is very important; it compares the effect of adding RT (up to five sites including primary thoracic disease) to the usual maintenance therapy with atezolizumab versus atezolizumab alone in ES-SCLC patients without progressive disease after four to six cycles of platinum plus etoposide chemotherapy combined with atezolizumab.
In SCLC, better local and systemic therapies are necessary to improve OS. We here present a case of a female ES-SCLC patient with right pericardial LN involvement. To our knowledge, this is the first case report of intrathoracic ES-SCLC successfully treated with IMRT and concurrent chemotherapy. Continued advances in multimodal | 2022-03-11T16:22:04.080Z | 2022-03-09T00:00:00.000 | {
"year": 2022,
"sha1": "c20304600e74b2e32dc1619a9197c4f0ccaaaabb",
"oa_license": "CCBY",
"oa_url": "https://onlinelibrary.wiley.com/doi/pdfdirect/10.1002/rcr2.919",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "13b92681733cf57f362d2c846f5cb70e65318227",
"s2fieldsofstudy": [
"Medicine",
"Engineering"
],
"extfieldsofstudy": [
"Medicine"
]
} |
32396072 | pes2o/s2orc | v3-fos-license | Avaliação da estrutura dos centros de atenção psicossocial do município de São Paulo, SP / Evaluation of psychosocial healthcare services in the city of São Paulo, Southeastern Brazil
METHODS: Twenty-one CAPS providing adult care, affiliated with the Health Department of the city of São Paulo (SP), were included in the study between 2007 and 2008. Information on the services' physical facilities, available human resources, and patient care procedures was collected using a standardized instrument. A descriptive analysis of the data was performed, and the chi-square test was used to test the association between types of activities and the origin and location of the services.
INTRODUCTION
The first psychosocial healthcare service (CAPS) in Brazil was created in the city of São Paulo in March 1987 on account of the country's political redemocratization and a process of re-examination of conceptual framework models. New conceptual frameworks brought about the introduction of new mental care and financing models that gained momentum in Latin America and especially in Brazil 7 in the late 1980s. In 1989 the Psychosocial Healthcare Centers were established in Santos. 4 They were directly originated from the Basaglian tradition of deconstructing the asylum model, which has influenced the establishment of CAPS as the building blocks of care for patients with severe chronic mental conditions in the community.
The Ministério da Saúde (Brazilian Ministry of Health) Decree No. 224 of January 29, 1992 established the roles of partial hospitalization services and CAPS.a Decree No. 336/GMb (02/19/2002) established that CAPS should focus their care actions on adults, children, and adolescents with severe chronic mental conditions and substance abuse. CAPS for adult care should include the following types of services: CAPS I, CAPS II and CAPS III, from lower to higher complexity/size and population coverage.b These three types of public services have the same role and their priority should be to provide care to patients with severe chronic mental disorders in their community on an intensive, semi-intensive or non-intensive basis. Day hospitals are not mentioned in this decree. There has been a considerable growth of CAPS over the last two decades. In July 2006, there were 1,220 specialized mental rehabilitation services in Brazil.c The most recent Brazilian studies assessing quality of mental health services, especially CAPS, have focused on the implementation of these services 8 or they are qualitative studies evaluating a single service. 6 Few studies have developed instruments to assess these services regarding satisfaction of users and family with care provided, 2 and others have focused on characterizing CAPS clientele. 1 There are many issues involving the implementation of CAPS designed as a "synthesis service" gathering together all different levels of care in a single unit within local health systems. It is important to assess how adequate the CAPS setting is for mental health rehabilitation in the community and to what extent these services may be deviating from their original role and becoming outpatient services that provide extensive care to chronically ill patients. Mbaya et al 5 (1998) conducted a one-day survey in ten day hospitals in the United Kingdom and found that, although most services were originally described as focused on mental health rehabilitation, in fact more than half of their patients with psychotic disorders received psychotherapy but were not enrolled in rehabilitation programs.
The city of São Paulo has a peculiar history regarding care of patients with severe chronic mental illnesses. In the 1980s, the State Health Department promoted the expansion of outpatient mental care clinics. In 1987, the first CAPS in Brazil was created in the city of São Paulo and became a landmark as a care option to replace the almost all-exclusive approach of hospital treatment. In the early 1990s, a new mental care model was implemented in the city. Attention to patients with severe mental conditions would be provided preferentially at day hospitals during crises and at outpatient clinics for disease follow-up. Mental rehabilitation activities would be carried out preferentially at the Centers for Socialization and Cooperatives (CECCO). The introduction of the Health Care Plan (PAS) by the mid-1990s completely ruined this mental health program: day hospitals and CECCOs continued to exist but they operated as isolated units with no connection between them. Starting from 2002, with local strengthening of the Sistema Único de Saúde (SUS - National Brazilian Health Care System), there has been an increasing number of CAPS in São Paulo.
In light of that, and in parallel with the recent restructuring of SUS and mental care services in the city of São Paulo, the objective of the present study was to describe CAPS operation with respect to their physical infrastructure, human resources, and activities.
PROCEDURES FOR ANALYSIS
A study on the CAPS infrastructure and activities provided as part of adult patient care was conducted in the city of São Paulo, Southeastern Brazil, in coordination with the city's Health Department. Twenty-one out of 22 CAPS I and II operating by December 2007 participated in the study (one CAPS refused to participate). Five CAPS were located in the north area of the city, two in the south, four in the midwest, five in the east, and five in the south. There were no CAPS III. The study was based on guiding principles proposing the evaluation of three dimensions: infrastructure, process, and results.3 Structured interviews with service staff were carried out to assess care processes. A cohort study including users receiving intensive care, i.e., who went to the CAPS three or more times per week, was conducted to evaluate results.a Data were collected at each service using a standard questionnaire with questions about the physical infrastructure of facilities, human resources, admission, follow-up, and discharge protocols, and activities provided in-site and outside CAPS. The data collection instrument was completed by the study team with information provided by service managers or a professional staff person during the week of ethnographic observation. Data were double entered using EpiData software and then checked for consistency. Data were collected from April 2007 to April 2008. All activities provided at CAPS were grouped into empirical categories constructed over the study based on their approaches and purposes. A descriptive analysis was carried out and the chi-square test was performed to test the association between CAPS activities and service background and location.
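To illustrate the chi-square procedure used here, the sketch below runs the test on a hypothetical contingency table of activity counts by service background; the numbers are invented for demonstration and do not come from the study (the real distributions appear in Figure 2 and the Table).

```python
import numpy as np
from scipy.stats import chi2_contingency

# Hypothetical counts of group activities by CAPS background (rows) and
# activity type (columns); these numbers are illustrative only.
#                 arts  psychotherapy  craftwork  psychophysical
table = np.array([[40, 25, 50, 15],    # formerly outpatient clinics
                  [35, 20, 15, 40],    # formerly day hospitals
                  [20, 10,  8, 10]])   # created as CAPS

chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p:.4f}")
# p < 0.05 would indicate that the activity profile differs by service
# background, analogous to the p = 0.02 association reported in the study.
```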
All service teams were informed about the study objectives and procedures and service managers agreed with the participation of their facility in the study.The study was approved by the Research Ethics Committee of the Health Department of the City of São Paulo (Protocol No. 0306/06 -CEP/SMS).
ANALYSIS AND DISCUSSION OF RESULTS
Of the 21 CAPS studied, ten were originally established as outpatient clinics (four located in the north area of the city), eight were originally psychiatric day hospitals (three located in the east area) and only three were originally created as CAPS. The background of services was quite mixed; some went through all three different types of organization and others were created from the merger or splitting of preexisting services.
Physical infrastructure
Eleven CAPS operated in rented buildings (houses); some buildings had inadequate room distribution as well as inadequate in-site access areas. All other CAPS operated in the city's Health Department buildings. CAPS that were originally outpatient clinics operated in large buildings with many offices for individual consultations and few adequate rooms for group activities, while others operated in adapted rented houses with limited space. Eleven CAPS operated in two-storey buildings and ten of them did not have access for people with special needs. Only two CAPS had semi-industrial size kitchens where income generation activities were developed; all the rest had only regular-size home kitchens. The number of offices ranged from one to seven (median = 2); the number of rooms for group activities/workshops ranged from one to six (median = 3); and the number of rooms for management ranged from one to four (median = 2). All CAPS had room for outdoor activities and 17 of them had a special room for providing care during crises.
All services kept patient records and other information, and patient records were completely legible in 14 CAPS. Some services had daily records of users' attendance at CAPS and their involvement in activities, while others had only specific records covering the first consultation, management, changes of treatment plan, and description of crisis.
Human resources
Many different professionals worked in the capacity of service manager: psychologists (in 11 CAPS), psychiatrists (in four), occupational therapists (in four), social workers (in one), and a nurse (in one). Four CAPS did not have occupational therapists and two CAPS did not have social workers on their staff. Eleven CAPS had at least one pharmacist on their staff. All CAPS had at least one psychiatrist, although in one CAPS there was only one psychiatrist, who worked in the capacity of manager and was in charge of all patient consultations. More than half of the CAPS had at least four psychologists in their team, and the mean number of nursing providers (assistants, aids, and specialists) was six per service (Figure 1). Most providers were females.
Only seven CAPS had professional supervision. Team meetings were carried out on a weekly basis, except in three CAPS where these meetings were held daily.
Types of care
Seventeen CAPS provided care to both walk-in patients and referrals, while the remaining four services provided care only to patients referred from other services. As for referrals, 14 CAPS had one provider in charge of referrals and three CAPS had a small team of providers in charge of referrals. Users' meetings took place at 16 CAPS. Treatment plans at five CAPS did not include a discharge protocol. At six CAPS, users were discharged but the facility remained their reference service, and they could be seen at the service without having to go through triage. At six services, users were discharged and referred to primary care units. Understaffing of mental health providers and an inadequate referral system were the main difficulties pointed out as preventing user discharge from the service. Across all CAPS during the study period, 457 in-site group activities were recorded, categorized as follows: arts and culture (arts, cultural, body expression, and music activities); psychotherapy (including verbal groups and other types of psychotherapy, clinical/therapy group follow-up, and community therapy); socialization-related activities (reunions, outing groups, and other support groups, play activities, and social gatherings); craftwork; income generation; physical and psychophysical integration activities; daily life activities (self-care, cooking, vegetable gardening, and gardening); and other mixed groups (age-specific, citizen's rights, and others).
The type of activities provided was associated with the CAPS original background (Figure 2). The majority of activities offered at CAPS were arts and culture; however, services that were originally outpatient clinics offered mostly craftwork activities and services that were day hospitals offered mostly psychophysical activities (p = 0.02). The types of activities also differed by city area: in the north area most were craftwork and in the south most were socialization activities (Table; p < 0.001). In all CAPS, more than 20% of group activities were arts and culture. Forty-one group activities involving the community in the CAPS area of coverage were recorded, of which 31 were physical and/or socialization activities and ten were cultural activities. They were generally carried out at CECCOS, local clubs, cultural centers, and libraries.
GENERAL CONSIDERATIONS
CAPS for adult care affiliated to the city's Health Department showed diverse organization and operation characteristics. As for infrastructure, facilities were quite different physically. Team staff also comprised many different providers; some services had mostly psychologists and others had providers with diverse backgrounds. The Ministério da Saúde Decree 336a establishes that a CAPS II team should consist of at least one clinical psychiatrist, one mental health nurse, four licensed providers including psychologists, social workers, nurses, occupational therapists, educators, or other providers involved with mental health care, and six mid-level staff including nursing assistants and administrative, education, and craftwork staff. It can be noted that the mean number of psychiatrists at the CAPS is well above the recommended minimum, as is the total number of other licensed providers (especially psychologists). The number of mid-level staff was also well above the recommended minimum.
There was also a variety of in-site group activities. In some CAPS, these activities were mostly craftwork workshops (sewing, crochet, upholstery); in other services these workshops focused on psychophysical activities (lian gong, tai chi) and/or psychotherapy approaches. The different profiles of activities may be due to local area differences, the different socioeconomic conditions of the community where CAPS are located, and the availability of health, leisure, sports, and culture resources in the area. In addition, service background also influenced the approaches used at services. Services that were originally outpatient clinics in the 1980s and then became CAPS put an emphasis on craftwork activities, which have been traditionally included as part of the care of stable patients with severe chronic mental illnesses. Services originally created as day hospitals continued to provide many activities based on psychophysical approaches, which can be provided during crisis when patients have marked disorganization. CAPS in the city of São Paulo do not share a linear common development: some services were outpatient clinics until 2002 and became CAPS; others were day hospitals or were created from the merger of day hospitals and outpatient clinics; and others resulted from the splitting of other services. In many instances the transition from the original model of operation (outpatient clinic or day hospital) has been incomplete. Many staff persons still answered the phone by saying "day hospital" or "mental care outpatient clinic", and the same is true for the signs at the front of the facility.
Another major factor explaining the diversity of activities provided at CAPS is that they are developed and implemented based on providers' skills and preferences, as reported by managers or other professional staff when they were asked about CAPS objectives and methods. While non-standard activities can offer room for creativity and a customized approach, they can paradoxically create a significant gap between what is offered by the service and what users need. Services providing many handmade activities (e.g., craftwork) may not be as attractive to young people and males, who are culturally not interested in this kind of work. On the other hand, services focusing on group psychotherapy approaches can be disappointing to those looking for an occupation or inclusion in other daily life settings. Many professional staff reported difficult coordination between mental health and rehabilitation services and other health resources as a major obstacle to patient discharge. Services diverged regarding discharge after patient follow-up at CAPS, and again the overlapping of care models can be observed. Some CAPS had taken up the position of "synthesis service", where users would always be provided care regardless of the level of care required and would never be discharged. Other CAPS were concerned about patient discharge, and referral to other services was considered during the development of the treatment plan at admission. They prioritized care during crisis or symptom exacerbation, using an approach similar to that of day hospitals. The availability of only 22 CAPS for adult care in a city with nearly 10 million people makes the idea of CAPS as "synthesis services" quite unfeasible, because they are always operating over the limit. The overall poor condition of all other mental health resources in the city (understaffing of mental health providers, lack of admission and user referral systems) makes follow-up of users outside CAPS difficult or even unfeasible. It is a deadlock situation that service teams must resolve.
It is expected that the analysis of information obtained from ethnographic observations of services and interviews with professional staff can help to further the understanding of the different models of CAPS implemented in the city of São Paulo.
Figure 1. Means and standard errors of the number of providers at psychosocial healthcare services. City of São Paulo, Southeastern Brazil, 2007-2008.
Figure 2. In-site group activities provided at psychosocial healthcare services, by service background. City of São Paulo, Southeastern Brazil, 2007-2008.
In October 2008, the state of São Paulo had 196 CAPS, of which 48 were CAPS I, 64 CAPS II, 17 CAPS III, 22 CAPSi (for children and adolescents), and 45 CAPSad (for substance abuse treatment).d The city of São Paulo, in February 2009, had 53 CAPS affiliated to the city health department, of which 25 were CAPS for adults, 16 CAPSad, and 12 CAPSi.e Ethnographic observations and semi-structured interviews
a Ministério da Saúde. Decree No. 224, of January 29, 1992. Establishes guidelines and regulations for mental health care. Diario Oficial Uniao. 30 Jan. 1992; Seção 1;1168.
Table. In-site group activities provided at psychosocial healthcare services, by city area. City of São Paulo, Southeastern Brazil, 2007-2008.
CAPS managers and professional staff reported mainly in-site activities. Activities provided outside CAPS and family activities were often not considered "CAPS activities." Organizational activities (e.g., team, mental care, and board of directors meetings, and participation in mental health forums) were not included in the schedule of activities at most services. These activities are most likely not recognized as work, or they are not considered to have the same relevance. This may indicate that services are more concerned with what goes on within their facilities, to the detriment of their inclusion in the community area. Few CAPS had common activities and most treatment plans involved only in-site activities | 2019-03-08T14:03:05.818Z | 2009-08-01T00:00:00.000 | {
"year": 2009,
"sha1": "127643e0a1648818d4c8295203e81fc33ca3fc86",
"oa_license": "CCBY",
"oa_url": "https://www.scielo.br/j/rsp/a/TznvNn3jqnb7B6kDYNscxts/?format=pdf&lang=pt",
"oa_status": "GOLD",
"pdf_src": "ScienceParseMerged",
"pdf_hash": "2ae6616fbc957f064758c15f9611b594369d4331",
"s2fieldsofstudy": [
"Medicine",
"Political Science",
"Psychology"
],
"extfieldsofstudy": [
"Political Science",
"Medicine"
]
} |
26745816 | pes2o/s2orc | v3-fos-license | Conformable fractional Dirac system on time scales
We study the conformable fractional (CF) Dirac system with separated boundary conditions on an arbitrary time scale T. Then we extend some basic spectral properties of the classical Dirac system to the CF case. Eventually, some asymptotic estimates for the eigenfunction of the CF Dirac eigenvalue problem are obtained on T. So, we provide a constructive procedure for the solution of this problem. These results are important steps to consolidate the link between fractional calculus and time scale calculus in spectral theory.
Introduction
Fractional calculus means differentiation and integration of noninteger order. The idea of fractional calculus was introduced by Leibniz and L'Hopital in 1695. However, the study of noninteger order derivatives did not appear in the literature until 1819, when Lacroix [] presented a definition of the fractional derivative based on the usual expression for the nth derivative of the power function. Over the following years the fractional calculus became a very attractive topic for mathematicians. Fractional calculus has many applications in science and engineering, such as the memory of a variety of materials, signal identification, temperature field problems in oil strata, diffusion problems, etc. (see [-]). Many different forms of fractional differential operators, like the Grunwald-Letnikov, Riemann-Liouville, Hadamard, Caputo, Riesz and conformable ones, have been presented (see [-]). Recently, researchers have started to deal with discrete versions of fractional calculus, benefitting from the theory of time scales (see [-]). For example, Benkhettou et al. [] introduced the concept of the CF derivative of order α on T and explained all of its properties. The CF derivative of a function defined on T reduces to the Hilger derivative when α = 1. Before expressing the CF derivative of order α ∈ (0, 1] on T, we should give a brief historical development of time scale calculus. Time scale calculus was first considered by Hilger [] in 1988 in his doctoral dissertation, written under the supervision of Aulbach [, ], to unify difference and differential equations. However, similar ideas had been used before and go back at least to the introduction of the Riemann-Stieltjes integral, which unifies sums and integrals. More specifically, T is an arbitrary, non-empty, closed subset of R. Many results as regards differential equations carry over quite easily to related results for difference equations, while other results seem to be totally different in nature. Time scale calculus can be applied to any field in which dynamic processes are described by discrete or continuous time models. So, it has various applications involving non-continuous domains, like the modeling of certain bug populations, chemical reactions, phytoremediation of metals, wound healing, and maximization problems in economics and traffic problems. In recent years, several authors have obtained many important results on different topics on T (see [-]). Although there are many studies in the literature on T, very little work has been done as regards boundary value problems (BVPs) (see [-]). To fill this gap, we consider below the CF Dirac eigenvalue problem on an arbitrary time scale.
Let us consider the CF Dirac eigenvalue problem with separated boundary conditions, where λ > 0 is a spectral parameter and y^σ = y(σ(t)). Throughout this study, we assume that q, r ∈ L_α(J) are real-valued, continuous functions. Here, T_α(y(t)) indicates the CF derivative of the function y of order α, and (γ^2 + δ^2) × (η^2 + β^2) ≠ 0. Moreover, y(t, λ) = (y1(t, λ), y2(t, λ))^T ∈ C(J, R) denotes the eigenfunction of problem (.)-(.), where C(J, R) is the space of all continuous functions on J and T denotes the transpose. We want to look at the classical spectral theory of the Dirac system from a different perspective. The spectral properties and results on the solution of problem (.)-(.) are discussed for the first time in this study. By setting α = 1 in (.)-(.), the problem reduces to the classical Dirac eigenvalue problem involving the Hilger derivative []. In the case of T = R and α = 1 in (.)-(.), we get the classical Dirac system (.); a common presentation of this system is recalled in the sketch below. Equation (.) is known as the first canonical form of the Dirac system. The Dirac operator is the relativistic Schrödinger operator in quantum physics. It is a modern presentation of the relativistic quantum mechanics of electrons, intended to make new mathematical results accessible to a wider audience. It treats in some depth the relativistic invariance of a quantum theory, self-adjointness and spectral theory, qualitative features of relativistic bound and scattering states, and the external field problem in quantum electrodynamics, without neglecting the interpretational difficulties and limitations of the theory. There are several studies about the classical Dirac system from many perspectives in the literature (see [-]). Let us give a brief description of the structure of our study. In Section 2, we express some fundamental notations and definitions as regards CF calculus on T. In Section 3, we prove some basic theorems for the CF Dirac system on T. Using some methods, we get asymptotic estimates of the eigenfunction for the problem (.)-(.) in Section 4. Some conclusions are presented in Section 5.
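For orientation, one common presentation of the classical Dirac system referred to above is sketched below. Sign and naming conventions vary between sources, so this should be read as an illustrative reconstruction rather than as the paper's own display.

```latex
% A common first canonical form of the classical one-dimensional
% Dirac system; conventions vary between sources.
\[
B y'(x) + \Omega(x)\, y(x) = \lambda\, y(x), \qquad
B = \begin{pmatrix} 0 & 1 \\ -1 & 0 \end{pmatrix}, \qquad
\Omega(x) = \begin{pmatrix} q(x) & 0 \\ 0 & r(x) \end{pmatrix},
\]
% or, componentwise, with y = (y_1, y_2)^T:
\[
y_2'(x) + q(x)\, y_1(x) = \lambda\, y_1(x), \qquad
-\,y_1'(x) + r(x)\, y_2(x) = \lambda\, y_2(x).
\]
```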
Methods
In this section, we want to recall notations, lemmas and theorems for CF calculus on T. To give basic results for the problem (.)-(.), we should express some fundamental notions of time scale calculus. The following definitions are crucial for this theory. The forward and backward jump operators at t ∈ T, for t < sup T, are defined as σ(t) = inf{s ∈ T : s > t} and ρ(t) = sup{s ∈ T : s < t}, respectively. The distance from an arbitrary element t ∈ T to the closest element on the right is called the graininess of T and is determined by μ(t) = σ(t) - t. We also need to explain T^κ along with the set T to express the Hilger derivative of a function: if T has a left-scattered maximum m, then T^κ = T - {m}; otherwise, T^κ = T. Benkhettou et al. [] defined the CF derivative of order α and its properties on T to generalize the Hilger derivative. Let h : T → R, t ∈ T^κ and α ∈ (0, 1]. For t > 0, one can define T_α(h)(t) to be the number, provided it exists, characterized by the ε-inequality recalled in the sketch below. In this instance, the corresponding α-fractional integral is defined for all a, b ∈ T.
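The sketch below spells out the defining ε-inequality for T_α and the associated α-fractional integral, following the published definitions of Benkhettou et al.; it is a reconstruction from that source rather than the paper's own display.

```latex
% Conformable fractional derivative of order alpha on a time scale T
% (after Benkhettou et al.): T_alpha(h)(t) is the number, provided it
% exists, such that for every eps > 0 there is a neighborhood U of t with
\[
\bigl|\,[h(\sigma(t)) - h(s)]\, t^{\,1-\alpha}
   - T_\alpha(h)(t)\,[\sigma(t) - s]\,\bigr|
\le \varepsilon\,\lvert \sigma(t) - s \rvert
\quad \text{for all } s \in U.
\]
% The associated alpha-fractional integral is then
\[
\int_a^b h(t)\,\Delta^{\alpha} t \;=\; \int_a^b h(t)\, t^{\alpha-1}\,\Delta t,
\qquad a, b \in \mathbb{T}.
\]
```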
Some spectral properties of CF Dirac system on time scales
In this section, we give some important results for the CF Dirac system on T. It is well known that (.)-(.) has only real eigenvalues and that its eigenfunctions are orthogonal when T = R and α = 1 []. The following results generalize these basic consequences to the CF case for the problem (.)-(.). Let us first give a lemma to be used in the proofs of the main theorems.
Lemma. Let h, g : T → R be continuous functions, a, b ∈ T and α ∈ (0, 1]. Then
Proof The proof can easily be obtained by using a procedure similar to that of [].
Theorem. The CF Dirac operator L_α is self-adjoint on L_α(J).
Proof Let x(t) = (x1(t), x2(t))^T and y(t) = (y1(t), y2(t))^T be solutions of the CF Dirac eigenvalue problem (.)-(.). By considering the definition of the inner product on L_α(J) and the boundary conditions, we get the required equality for every right-dense t ∈ J. This completes the proof.
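Self-adjointness arguments of this kind typically rest on a Lagrange-type identity; a sketch of the classical pattern for a two-component system is given below. The precise CF version used by the paper is not shown here, so the display is illustrative rather than the paper's formula.

```latex
% Classical Lagrange-type identity for a two-component Dirac system:
% for solutions x = (x_1, x_2)^T and y = (y_1, y_2)^T,
\[
\langle Lx, y \rangle - \langle x, Ly \rangle
   = W[x, y](b) - W[x, y](a),
\qquad
W[x, y](t) = x_1(t)\, y_2(t) - x_2(t)\, y_1(t).
\]
% Separated boundary conditions force the boundary terms to vanish,
% which yields the symmetry of the operator.
```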
Proof Let λ be a complex eigenvalue and y(t, λ) = (y1(t), y2(t))^T be an eigenfunction corresponding to the eigenvalue λ of the problem (.)-(.). Since q and r are real-valued functions and η, β, γ and δ are also real, taking the α-CF integral of the resulting equality from ρ(a) to b with respect to t and using the fact that λ is not equal to its complex conjugate, we get y1^σ(t) = y2^σ(t) ≡ 0. This is a contradiction. Hence, the eigenvalues of the problem (.)-(.) are real.
Proof Here C_α(J, R) denotes the space of all functions whose CF derivatives of order α are continuous.
(a) The definition of W and the product rule for the CF derivative of order α yield the first identity. (b) By using the inner product on L_α(J), we get the second identity. Hence, the proof is completed.
Asymptotic estimates of eigenfunctions for CF Dirac system on time scales
In this section, we get the asymptotic estimates of the eigenfunction of the problem (.)-(.) on T.
This completes the proof.
Conclusions
Fractional-type eigenvalue problems have attracted the attention of many authors. Because of this, we consider a CF Dirac equation system with boundary conditions on T to obtain some spectral properties. Finally, we get asymptotic estimates of the eigenfunction for the problem (.)-(.). These results are important steps for fractional spectral theory on time scales. As work in this area progresses, we believe that many specific results will be obtained on this topic, and this study will be useful for that purpose.
As further work, we think the ideas can be extended to obtain asymptotic estimates of the eigenvalues for eigenvalue problems. After this stage, we can define the inverse problem on time scales. Thus, further important results will be achieved. | 2018-04-03T00:47:51.713Z | 2017-07-10T00:00:00.000 | {
"year": 2017,
"sha1": "e8b05fa629eed192e58a67b40cbd0e730d1c562d",
"oa_license": "CCBY",
"oa_url": "https://journalofinequalitiesandapplications.springeropen.com/track/pdf/10.1186/s13660-017-1434-8",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "e8b05fa629eed192e58a67b40cbd0e730d1c562d",
"s2fieldsofstudy": [
"Mathematics"
],
"extfieldsofstudy": [
"Mathematics",
"Medicine"
]
} |
211760401 | pes2o/s2orc | v3-fos-license | Efficiency Assessment of Batik Industry Wastewater Treatment Plant in Center for Handicraft and Batik Indonesia
— Batik as an Indonesian cultural heritage has gained growing interest from international as well as local customers. However, the increasing production is also followed by negative impacts on the environment in the form of wastewater. Most batik industries dispose of their wastewater directly into the environment without prior treatment. Therefore, it is necessary to build a wastewater treatment plant (WWTP) as a pilot project for the industries. This paper focuses on the efficiency evaluation of the wastewater treatment process for the batik industry WWTP at the Center for Handicraft and Batik, which can serve as a model for small-scale industries. Wastewater samples were taken from each treatment unit outlet. Eleven parameters were analyzed from the samples: pH, temperature, biochemical oxygen demand (BOD), chemical oxygen demand (COD), total suspended solids (TSS), total dissolved solids (TDS), phenol, total chrome, total ammonia (NH3-N), sulphides (S), and oil and grease. Treatment efficiency was calculated for all parameters, and the effluent analysis results were compared with the permissible maximum values stated in the Local Regulation of Special Region of Yogyakarta No. 7 of 2016 on Wastewater Discharged Standards. The results indicate that each treatment process could reduce the concentration of the pollutants. All values in the final effluent were below the standard, so the effluent could be discharged safely into the environment.
INTRODUCTION
Batik is one of the traditional textile industries in Indonesia. Since UNESCO's official recognition of batik as an Intangible Cultural Heritage of Humanity in 2009, Indonesia's batik industries have grown rapidly, contributing significantly to Indonesia's economic growth. The increase in batik demand has caused batik manufacturers to increase their production capacity, which in turn has caused greater effects on the environment. In Indonesia, batik is mostly produced by Small and Medium Enterprises (SMEs). They usually build their processing units alongside rivers, residential areas, or other places which are not designed as industrial areas. Therefore, facilities to treat their industrial effluents are not available. The SMEs usually discharge the effluents into a special vessel or directly into a river or drainage system after minimal or no treatment [1,2]. The discharge of wastewater without proper treatment is one of the major problems faced by the batik industries [3].
The untreated batik wastewater leads to several environmental problems. The dyes are chemically stable and non-biodegradable, and some of them are suspected to be carcinogenic and toxic [4,5]. Thus, an appropriate treatment is needed to remediate the effluents in compliance with local standards and regulations.
Measures have been taken by the government, industries, universities, research institutes and other organizations to prevent the water pollution caused by batik industries. Some wastewater treatment plants (WWTPs) have been built using various wastewater treatment technologies. However, to the best of our knowledge, little attention has been paid to investigating the efficiency of batik WWTP performance. The efficiency of wastewater treatment is important, as it serves as a basic indicator of WWTP function [6]. It depends on the amount and composition of the wastewater, the condition and type of the sewer network, the producers, the technical equipment used, climatic conditions, and other factors [7]. Performance evaluation of a WWTP is required to assess the existing effluent quality in order to meet higher treatment requirements, and to know whether the treatment plant is likely to handle higher hydraulic and organic loading [8]. Since there are only a few batik WWTPs in Indonesia, the efficiency assessment is also needed to determine the feasibility of a WWTP as a pilot project to be implemented in other batik industries. The assessment results can also be used as recommendations for WWTP optimization [9].
The Center for Handicraft and Batik (CHB) is a research institution located in Yogyakarta, Indonesia. Its main purpose is to provide services regarding research, development, training, testing, certification and standardization for handicraft and batik industries. Batik industry wastewater is generated from research and training activities. The wastewater is treated in the wastewater treatment plant before being discharged into the environment.
In this study, the performance efficiency of the WWTP at CHB was evaluated. The aim was to assess the performance of the batik WWTP at CHB with regard to its accordance with the permissible standards stated in the Local Regulation of Special Region of Yogyakarta No. 7 of 2016 on Wastewater Discharged Standards.
II. LITERATURE REVIEW
A. Batik Wastewater Characteristics
The chemical reagents used in batik manufacture vary in chemical composition, ranging from inorganic to organic compounds [2]. The wastewater generated from the processes contains a large amount of organic compounds of complex structure. If the batik wastewater is not treated well, it will lead to several environmental problems. The dyes are chemically stable and non-biodegradable, and some of them are suspected to be carcinogenic and toxic [4,5]. Thus, an appropriate treatment is needed to remediate the effluents in compliance with local standards and regulations [3,10]. The locals use traditional methods for producing batik, so the untreated effluents contain dyes, waxes, and heavy metals [11], with high total dissolved solids (TDS), total suspended solids (TSS), biochemical oxygen demand (BOD) and chemical oxygen demand (COD) contents [12]. The effluents are known to be among the most difficult substances to treat due to the recalcitrant nature of dyes and other chemicals [11,13].
B. Batik Wastewater Treatment Technologies
There are several methods applied in batik wastewater treatment, such as chemical, physical and biological methods. One of the most common chemical processes in wastewater treatment is coagulation [14]. It is the process of adding a coagulant to destabilize colloidal particles so that the particles collide and grow [15]. However, the residue of coagulants such as alum and ferric chloride can cause Alzheimer's disease and similar health-related problems [14].
The use of oxidizing agents such as Fenton's reagent, ozone, hydrogen peroxide (H2O2) and ultraviolet light is promising as an alternative for better treatment [16][17][18]. However, some studies show that while these oxidants offer effective decolorization, the COD removal is not significant [19,20].
Sorption has gained wide attention over the last decades as a physical method to remove impurities from wastewater [17]. This process has been found to be effective and economical in removing dyes and reducing BOD [17,21]. Some examples of commonly used adsorbents are activated carbon, inorganic oxides, and natural adsorbents (such as clays and clay minerals, cellulosic materials, chitin and chitosan) [19]. However, a study by [14] indicated that some adsorbents have limited adsorption capacities for the dyes.
A membrane is a layer that is able to separate a mixture of two or more components [14]. Filtration using membranes has been employed to remove dyestuff from textile wastewater effectively [17,21]. A membrane is resistant to temperature, adverse chemical environments and microbial attack [22]. However, it has some issues regarding residue disposal, the possibility of clogging, and membrane displacement [19].
Biological methods are by far the most universal technique for dye wastewater decolorization [22]. The microbes used in these methods degrade the organic matter in the wastewater [5]. Compared to chemical and physical methods, biological methods have some advantages, such as being the most cost-effective, producing less sludge, being applicable to a wide range of dyes, and yielding non-toxic end products [14,21]. A wide range of microorganisms have been applied to treat dye wastewater, such as bacteria [18], algae [19] and filamentous fungi [15,19,20].
To effectively remove the pollutants in the wastewater, a batik WWTP usually employs a combination of the three methods in various sequences [22].
A. Profile of WWTP
The wastewater treatment plant comprises primary and secondary levels. The primary level consists of a wax trap tank, sedimentation tank, coagulation-flocculation tank, anaerobic filters, and activated carbon adsorption. The secondary level is formed by the sludge drying bed used to dewater the sludge.
Wax trap tank. The wax trap tank is located near the wax removal process unit. The wax is removed from the batik fabric by boiling the fabric in hot water to dissolve the wax and rinsing it with clean water. The wastewater from those processes, which contains a large amount of wax, is treated in the wax trap tank. The wastewater is allowed to sit in the tank until the wax floats to the surface due to its lower density. The wax is then removed regularly to be recycled and reused in the next batik process.
Sedimentation tank. The sedimentation tank allows the suspended solids to settle out of the wastewater because of their greater specific gravity compared to water.
Coagulation-flocculation tank. In this tank, alum is added as a coagulant and the mixture is then rapidly mixed. Coagulation is the process of destabilizing colloidal particles so that particle growth can occur as a result of particle collisions [5]. Bigger particles can settle more easily and separate from the liquid phase. The solid phase is removed to a sand bed filter, while the liquid phase flows into the next treatment unit.
Sludge drying bed. The sludge drying bed is provided to dewater sludge through filtration and evaporation. Its construction consists of stones, gravel and sand. Perforated pipes at the bottom of the system carry the liquid to the anaerobic filter. The dry solids are removed periodically and stored in the hazardous waste storage.
Adsorption tank. In this unit, activated carbon adsorbs the heavy metals and remaining dyes from the wastewater. The activated carbon used is made from coconut shells and wood.
B. Sampling and Analysis
Wastewater samples were taken every three months from January to December 2018. Grab water samples were collected at four sampling points: the inlet of the WWTP (P1); the outlet of the sedimentation tank (P2); the outlet of the coagulation-flocculation tank (P3); and the outlet of the WWTP (P4). The layout of the WWTP and the sampling points is illustrated in Fig. 1. The samples were immediately transported in two-liter plastic bottles to the accredited environmental laboratory of BBTKL, Ministry of Health, to be analyzed. The parameters analyzed and the corresponding permissible limits are listed in Table I. The effluent of the WWTP has to meet the permissible limits in the standards before being discharged into the environment.
C. Efficiency Calculation
The removal efficiency was assessed for all observed parameters. The efficiency of the cleaning process EA (%) is defined by the standard ČSN 75 6401 as the ratio between the removed concentration of pollutants and their initial concentration. The removal efficiency of component A in the system is given by the equation EA = ((CI - CE) / CI) × 100, where CI is the concentration at the system input (mg/L) and CE is the concentration at the system output (mg/L).
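A minimal sketch of this calculation in Python is given below; the demonstration values are the influent and final-effluent BOD concentrations reported in the Results section.

```python
def removal_efficiency(c_in: float, c_out: float) -> float:
    """Removal efficiency EA (%) = (CI - CE) / CI * 100 (per ČSN 75 6401)."""
    if c_in <= 0:
        raise ValueError("Influent concentration must be positive.")
    return (c_in - c_out) / c_in * 100.0

# Demonstration with the BOD values reported in the Results section:
# influent 2050 mg/L, final effluent 26 mg/L.
print(f"Total BOD removal: {removal_efficiency(2050.0, 26.0):.1f}%")  # ~98.7%
```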
IV. RESULTS AND ANALYSIS
The efficiency of each treatment unit is presented in Tables II to IV.
A. BOD
The BOD test measures the oxygen required by microorganisms to degrade the organic substances in wastewater. It is one of the most important tests in monitoring river pollution. By measuring the BOD level, it is possible to determine the level of environmental contamination at any time [23]. The BOD value is also used to measure the abundance of organic waste in order to plan and evaluate the efficiency of biological treatment systems for organic waste management. Changes in organic matter content, expressed by the BOD value, occur in every cleaning process of rivers polluted by organic wastes [24].
The BOD value in the WWTP influent was 2050 mg/L on average. This value was around 24 times higher than the standard. During the monitoring period, the WWTP reduced the BOD with a total efficiency of up to 98.7%. The effluent BOD concentration was 26 mg/L, which was below the permissible limit.
The biggest BOD removal occurred in the wax trap and sedimentation tanks. In this stage, BOD decreased to 180 mg/L with an efficiency of 91.2%. Wax contributed a large amount of BOD to the wastewater; thus, the removal of the wax at the initial stage of the treatment reduced the BOD value significantly. In the sedimentation tank, most of the solids settled down. The sampling point for this stage was located in the coagulation tank, before the wastewater was chemically treated. The wastewater was pumped up from the sedimentation tank to the coagulation tank; during that process, aeration might have occurred in the pump and contributed to the decrease in BOD. The coagulation-flocculation process did not contribute significantly to BOD reduction.
After the biological process in the anaerobic filter and activated carbon adsorption, the BOD value fell below the permissible limit. Microorganisms in the anaerobic filter degraded organic compounds in the wastewater, such as azo groups, thus reducing the BOD. The efficiency was 76.4% and the final BOD concentration was 26 mg/L.
B. COD
Chemical Oxygen Demand (COD) is the amount of oxygen required to oxidize the organic substances in wastewater through chemical reactions. The chemical reactions convert the organic substances into CO2 and H2O [25].
Batik wastewater containing wax, resin, dyes and fixing agents such as silicate results in high COD [13]. COD removal followed a similar pattern to BOD removal. The influent COD value (7817.5 mg/L) was far above the standard. The overall efficiency of the WWTP for COD was 99.1%.
Based on the analysis results, the COD values decreased in each treatment process. In the early stage (P2), the percentage of COD removal was 91.2%. The decline in the COD value occurred because the solid material had started to settle and had been oxidized in the pumping process [25]. The efficiency of COD removal in the coagulation process was 34.6%; after the coagulation and flocculation process, the COD value was only slightly above the standard. In the biological and adsorption processes, the efficiency was 74.9%. According to [26], the characterization of anaerobic filter performance versus the organic load added is important. The removal of COD in the processes prior to the biological process prevented organic shock loads in the anaerobic filter, resulting in relatively high removal efficiency.
C. TSS
TSS (Total Suspended Solids) is the amount of suspended particles that are not dissolved in wastewater. The use of dyes, wax and fixing agents was responsible for the high concentration of TSS in the WWTP influent (1315 mg/L). The initial stage of the WWTP removed more than half of the TSS concentration (59.7%). The removal efficiency increased drastically in the coagulation-flocculation process (83.8%). Coagulation is the process by which colloidal particles and very fine solid suspensions are destabilized, so that they can begin to agglomerate if the conditions are appropriate. The colloids commonly found in wastewater are stable because of the electrical charge that they carry. The charge of colloids can be positive or negative; however, most colloidal particles in wastewater have a negative charge. The addition of alum as coagulant created positively charged ions and neutralized the repulsive charges between the particles. The van der Waals force then caused the particles to agglomerate and form microflocs. Flocculation refers to the process by which destabilized particles conglomerate into larger aggregates so that they can be separated from the wastewater [27]. The TSS concentration decreased considerably during biological treatment, reaching 12 mg/L, which was below the permissible limit. The removal efficiency was 86%, indicating that the biological content of the wastewater contributed to the TSS concentration.
D. TDS
TDS (Total Dissolved Solids) is the amount of dissolved particles present in wastewater. The particles are small enough to survive the filtration process. In the WWTP influent, the TDS concentration (483 mg/L) already met the standard. During the treatment processes, TDS decreased further, reaching 143 mg/L in the effluent. The total removal efficiency was 70.4%. The highest removal efficiency was obtained in the biological treatment process (44.4%). These results indicate the ability of microorganisms to remove TDS from wastewater.
E. Phenol and Total Chrome
Phenolic compounds have hazardous effects and high toxicity even at low concentrations. In batik wastewater, phenol originates from alcohol groups used as a means of removing wax. The phenol concentration in the influent was far below the permissible limit.
Chrome is usually found in synthetic batik dyes and is thus carried into the wastewater. However, in this WWTP influent, the total chrome concentration was below 0.0213 mg/L, which was the detection limit of the AAS instrument used. Therefore, it can be said that the chrome concentration was very low.
F. Total Ammonia (NH3-N)
Ammonia is typically found in synthetic batik wastewater due to the use of sodium nitrite as an oxidizing agent in the coloring process using indigosol dyes [28]. However, the concentration of ammonia in the WWTP influent (0.2463 mg/L) was already in accordance with the standard (3 mg/L). This might be because most of the coloring processes were conducted using other dyes such as naphtol and remazol. Every treatment process in the WWTP reduced the ammonia further, reaching 0.1444 mg/L in the effluent.
G. Sulphides
The concentration of sulphides in the WWTP influent (0.5218 mg/L) was fairly above the permissible standard (0.3 mg/L). Sulphides are often found in synthetic batik dyes and in sulfuric acid (H2SO4), which is used as a solvent for indigosol dyes. The highest sulphide removal efficiency occurred during the coagulation and flocculation process (22.9%). The sulphide concentration reached below the permissible limit after the biological and adsorption processes, at 0.29 mg/L.
H. Total Oil and Grease
Oil and grease were found in the batik wastewater because wax is used as a dye-resisting agent in the production process. After the fabric is dyed, the wax is removed and thus ends up in the wastewater. The initial oil and grease concentration was 11 mg/L, more than twice the allowed standard. Oil and grease cause damage to aquatic organisms, plants, and animals, and are mutagenic and carcinogenic to human beings [29]. During the treatment processes, oil and grease decreased to 4.8 mg/L in the effluent. The total removal efficiency was 56.4%. The highest removal efficiency was observed in the biological treatment process (31.4%).
I. pH and Temperature
pH and temperature are indicators of the biological processes in the WWTP. In this WWTP there is no unit process aimed at adjusting the pH. The use of both acidic and alkaline substances in the production process makes the pH of the influent neutral (7.2). After the coagulation and flocculation process, the pH dropped slightly to 6.9 due to the alum used as coagulant. However, this value was still within the permissible range according to the standard. The temperature was constant (29.1°C) throughout the units and remained within the range of room temperature.
J. Total efficiency of WWTP
The measured values of all parameters at the inlet and outlet of the WWTP are shown in Table V. The WWTP reduced all the parameters to below the permissible limits in the standards. The total average efficiency was 65.2%. The highest efficiency was in COD and TSS removal (99.1%) and the lowest was in phenol removal (12.0%). However, the initial concentration of phenol did not exceed the standards.
V. CONCLUSION & RECOMMENDATION
The objective of this study was to make an evaluation of the performance of WWTP. Conclusions were drawn from the results of the sampling and its analysis. The main conclusion points of the study can be summarized into the following points: (1) the removal efficiencies of all parameters were acceptable according to the process guidelines; (2) all the effluents from every sampling points of the WWTP were in accordance with the Local Regulation of Special Region of Yogyakarta No. 7 of 2016 on Wastewater Discharged Standards; (3) the WWTP was feasible to be pilot project for batik SMEs. | 2019-11-22T00:44:32.347Z | 2019-01-01T00:00:00.000 | {
"year": 2019,
"sha1": "8eec3d4800d35ca22a1983793211d10edc2406b1",
"oa_license": "CCBYNC",
"oa_url": "https://doi.org/10.2991/icosite-19.2019.23",
"oa_status": "GOLD",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "6567d984b1157df7c69ace765454dc7e366d6812",
"s2fieldsofstudy": [],
"extfieldsofstudy": [
"Business"
]
} |
235481787 | pes2o/s2orc | v3-fos-license | Orthopedics and 3D technology in Turkey: A preliminary report
Objectives In this study, we present the use of case-specific three-dimensional (3D) printed plastic models and custom-made acetabular implants in orthopedic surgery. Materials and methods Between March 2018 and September 2020, surgeries were simulated using plastic models manufactured by 3D printers for two patients with pilon fractures. In addition, custom-made acetabular implants were used in two patients with acetabular bone defects for revision of total hip arthroplasty (THA). Results Pilon fracture surgeries were performed more comfortably using the preoperative plastic models. Similarly, during the follow-up period, the patients who received custom-made acetabular implants showed well-positioned and stable implants on radiographic examination. These patients did not experience any surgical complications and achieved an excellent recovery. Conclusion Preoperative surgical simulation with 3D printed models can increase the comfort of fracture surgeries. Also, custom-made 3D printed acetabular implants can perform an important task in patients treated with revision THA surgery due to severe acetabular defects.
Three-dimensional (3D) printing technology was invented in the 1980s. In the last decade, 3D printing has had a huge impact on the manufacturing industry. [1] With the advancement of medical visualization, the use of 3D printing materials has become more common in healthcare, education, and research. [2] The 3D printing technology helps us to achieve two major goals in orthopedic surgery: first, the production of 3D printed anatomical models for planning and surgery simulation and, second, the production of custom-made 3D-printed prostheses. [3] For surgeries involving irregular bones, such as pelvic or tibial pilon fractures, preoperative planning is particularly difficult in cases of complex anatomy and severe deformity. [4,5] Pilon fractures are usually caused by a high-energy axial load and are accompanied by metaphyseal bone comminution, articular impaction and comminution, and severe soft tissue injuries. The main goals of surgical treatment are anatomic reduction of the articular surface, restoration of length, and reattachment of the metaphysis to the diaphysis. [6] These plastic bone models can facilitate a better understanding of the pathoanatomy, and surgeons can use them to simulate surgery. [2,3] In recent years, the use of 3D printed models has enabled the normal 3D anatomy of the tibial plafond at the fracture site to be evaluated and has also helped to determine the intraoperative contours of the plates. [6,7] Currently, custom-made prosthetic implants that exactly match the patient's bone pathology can be produced with 3D printing technologies.
The management of massive acetabular bone defects remains a challenging problem, particularly in revision total hip arthroplasty (THA). [8,9] Custom-made triflange monoblock acetabular components can provide individualized treatment based on the presence of a severe acetabular bone defect.
Such implants can cover a pelvic discontinuity and reconstruct the anatomical hip center. [8][9][10][11] However, there is no meta-analysis or randomized controlled study on 3D print-assisted revision hip arthroplasty surgery in the literature; only case series have been reported on this topic. The University of Health Sciences of Turkey is a specialized health-themed university that has 10 medical faculties, both in different cities and abroad, and has affiliation agreements with more than 60 training and research hospitals across the country. At the same time, there are more than 20 application and research centers with different focuses within its institutional structure. [12] The Gülhane Medical Design and Manufacturing Application and Research Center (GMDMC) is one of these centers, established in Ankara in 2011. The main goal of the center is to design and manufacture custom-made medical implants and to prepare presurgical diagnostic or study models for unusual and specific cases. With its know-how, design, and manufacturing capability, GMDMC is an important center in Europe and the world, and it is supported by the university and the state (Figure 1). [13] In recent years, the center has produced custom-made medical implants, particularly for veterans, tumor patients, and patients with hard tissue losses in difficult-to-shape areas. Thus, the service network of the center covers almost all surgical branches. Recently, many customized titanium cranial, sternocostal, pelvic, acetabular, and craniomaxillofacial reconstruction implants, as well as plastic models for surgical preparation, have been designed and manufactured in this center. [14] In the present study, four cases are reported to illustrate the surgical experience with the implementation of 3D printing in orthopedic surgery. The objectives of the study were (i) to describe the workflow of plastic anatomic models in two patients with pilon fractures and (ii) to evaluate the follow-up results of custom-made monoblock acetabular components in two patients with massive acetabular bone defects.
Patients
Between March 2018 and September 2020, surgeries were simulated using plastic models manufactured by 3D printers for the two patients with pilon fractures. Also, custom-made acetabular implants were applied to the two patients with acetabular bone defects. Written informed consent was obtained from each patient. The study protocol was approved by the University of Health Sciences, Istanbul Kanuni Sultan Süleyman Training and Research Hospital Ethics Committee (Date/no: 12.02.2021/KAEK-2021.02.45). The study was conducted in accordance with the principles of the Declaration of Helsinki.
Printing the 3D model
All the models were produced at GMDMC. The 3D bone geometry of the patient's extremity was reconstructed using medical image processing software (Mimics ® 19.0, Mimics Innovation Suite, Materialise Medical, Leuven, Belgium). For this purpose, three steps were followed: (i) data acquisition, (ii) image processing of computed tomography (CT) data, and (iii) 3D computer-aided design (CAD) modeling of the extremity.
Firstly, CT scans of the patients were obtained (Siemens Somatom Emotion 16 CT Scanner; Siemens Healthineers AG, Erlangen, Germany) at the Istanbul Kanuni Sultan Süleyman Training and Research Hospital, University of Health Sciences. The CT data were provided in the Digital Imaging and Communications in Medicine (DICOM) format, which is used for medical imaging, in thin slices (512×512 pixels, pixel size: 0.6-19.2 mm and slice thickness: 0.75-1.0 mm). Then, the CT images were introduced into the Mimics ® software to reconstruct the geometrical bony structure of the extremity. The CT scans were processed in the Mimics ® software through masking, segmentation, 3D model formation, and reconstruction of the patient's bone model. Finally, exact 1:1 plastic bone models of the extremity and the mirrored contralateral extremity were obtained. The plastic models were produced by plastic manufacturing machines using either material jetting (J750, Stratasys, USA) or binder jetting technology (Z Printer 650, Z Corp., USA).
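Mimics is proprietary software, but the CT-to-model pipeline described above (stacking DICOM slices, thresholding bone, extracting a printable surface) can be approximated with open-source tools. The sketch below is a simplified stand-in, not the study's workflow: the file paths and the ~300 HU bone threshold are illustrative assumptions.

```python
import glob

import numpy as np
import pydicom
from skimage import measure

# Load and sort the CT slices of one series (path is illustrative).
slices = [pydicom.dcmread(f) for f in glob.glob("ct_series/*.dcm")]
slices.sort(key=lambda s: float(s.ImagePositionPatient[2]))

# Stack into a 3D volume and convert raw values to Hounsfield units.
volume = np.stack([s.pixel_array for s in slices]).astype(np.float32)
volume = volume * float(slices[0].RescaleSlope) + float(slices[0].RescaleIntercept)

# Segment bone with a simple global threshold (~300 HU is a common,
# illustrative choice; Mimics uses interactive masking instead).
verts, faces, _, _ = measure.marching_cubes(volume, level=300.0)

# Scale vertices from voxel indices to millimetres.
spacing = np.array([
    float(slices[0].SliceThickness),
    float(slices[0].PixelSpacing[0]),
    float(slices[0].PixelSpacing[1]),
])
verts = verts * spacing

# Write an ASCII STL for the printer; facet normals are left as zeros,
# which most slicers recompute on import.
with open("bone_model.stl", "w") as f:
    f.write("solid bone\n")
    for tri in faces:
        f.write("  facet normal 0 0 0\n    outer loop\n")
        for v in verts[tri]:
            f.write(f"      vertex {v[0]:.3f} {v[1]:.3f} {v[2]:.3f}\n")
        f.write("    endloop\n  endfacet\n")
    f.write("endsolid bone\n")
```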
Surgeries
Patient 1-A 57-year-old man presented to the clinic with a history of a motor vehicle accident. He had swelling, abrasions, and ecchymosis on his ankle. X-ray and CT scans showed articular and metaphyseal involvement of the tibial plafond and fibula (Figure 2). A two-stage surgery was performed: initial closed reduction of the fracture with stabilization by external fixation, followed by definitive management with open reduction and internal fixation once the soft tissue swelling improved (usually within 10 to 14 days). During this time, CT images in DICOM format were introduced into the Mimics ® software to reconstruct the 3D geometrical bony structure of the ankle (Figure 3). Then, 3D plastic models of the ankle were manufactured (J750, Stratasys, USA).
Surgical simulation
Before surgery, the intraoperative fixation and reduction maneuvers were accurately simulated on the models. The dimensions of the implant could be selected by attaching the plate to the real-size bone model (Figure 4). Furthermore, the plate could be bent to fit the model at the appropriate position with the ideal length, location, and orientation. Then, the selected titanium plate and screws were sterilized for surgery.
Surgical methods
After removal of the external fixator, anatomical reduction and stabilization of the articular surface were performed as planned. Reduction of the metaphyseal comminution followed once the joint surface was re-established. Finally, the fracture was fixed with the chosen plate and screws (Figure 5). The pilon fracture surgery was performed easily with the help of the models used before surgery. During the 16-month follow-up, the patient did not experience any surgical complications and achieved an excellent recovery.
Patient 2-
A 25-year-old man who fell from a height was diagnosed with a pilon fracture. Similar procedures were performed for this patient as for Patient 1. The exact (1:1) prototype model of the fracture was produced with 3D printing technology (J750, Stratasys, USA). Simulation of the surgical technique to be used was first performed on the plastic model, which was produced in 1:1 dimensions of the fractured area. The result of the simulated operation is shown in Figure 6. The simulated operation was then used to guide the actual operation. Postoperative radiographic examination showed satisfactory fracture reduction and fixation, with the plate and screws in a good position (Figure 7). This patient, with a follow-up of 14 months, did not experience any surgical complications and achieved excellent ankle function.
Patient 3-
A 65-year-old female patient, mobilized in a wheelchair, presented with worsening right groin and lateral hip pain for six months. She had a history of severe right hip coxarthrosis treated with a cementless THA in 2015. Revision surgery was performed in 2016 due to medial protrusion of the acetabular medial wall. The acetabular deficiency was treated with a conventional acetabular component (reinforcement ring). After two years, acetabular insufficiency developed again. Firstly, the pelvis of the patient was examined with an anteroposterior (AP) X-ray, and then a CT scan was taken. A specific metal artefact reduction protocol was used to reduce noise and improve image quality. The 3D CT scan revealed a failed uncemented THA with a Paprosky 3B defect (Figure 8). The DICOM images were transformed into a 3D geometrical bony structure of the hip using the Mimics ® software (Figures 9a, b).
Firstly, for preoperative planning, the right hip plastic acetabulum model was printed on a 3D printer (Z Printer 650, Z Corp., USA) using a high-performance composite powder material that binds with a special liquid binding material. Then, a triflange acetabular implant was designed (3-matic, Materialise, Leuven, Belgium) through multidisciplinary collaboration between surgeon and engineer. Before manufacturing the titanium implant as the final product, a plastic model of the custom-designed medical implant was printed (J750, Stratasys, USA) (Figure 10). After the necessary adjustments were made to the design, the data were sent to the metal printer for printing as the final product. The titanium medical implant was 3D printed using the direct metal laser sintering (DMLS) technique (M2, Concept Laser GmbH, GE Additive, Lichtenfels, Germany).
Surgical methods
Using a posterolateral approach to the hip, the previous implant was removed. The patient's anatomical landmarks were identified using the preoperative anatomical plastic model. The position and depth of the acetabular cavity were determined, and the cavity was prepared accordingly. The acetabular cup position was assessed, and the implant was stabilized with screws (Figure 11). Then, an ultra-high-molecular-weight polyethylene acetabular liner and a metal head were inserted. After reduction of the joint, the stability of the implant was checked under fluoroscopic guidance. There was a fracture in the shaft of the femur, which was fixed with a titanium plate, screws, and cables. Postoperative radiographs showed accurate placement of the acetabular implant (Figure 12). Patient 4-Following a complex history of left THA revisions for aseptic loosening, a 64-year-old female with coxarthrosis presented with significant left hip pain, inability to mobilize, a Paprosky type 3 acetabular defect, and pelvic discontinuity (Figures 13a-d). Similar procedures were performed for this patient as for Patient 3 (Figures 14-16).
RESULTS
Surgery for the pilon fractures was performed more comfortably using the preoperative plastic models. Also, postoperative pelvic radiographs of the arthroplasty patients showed well-positioned and stable custom-made acetabular implants during follow-up. The patients did not experience any surgical complications and achieved an excellent recovery.
DISCUSSION
In this study, operations were first simulated on plastic models produced with 3D printing technology for two trauma patients. In addition, custom-made acetabular implants were successfully applied in two patients with massive acetabular defects after failed THA surgery.
Orthopedic surgery, closely related to biomedical engineering, is one of the first medical fields to use 3D printing technology. Preoperative planning is the most important part of all orthopedic surgeries. The preoperative design of anatomical models can provide a significant increase in the understanding of the patient's bone anatomy and orthopedic deformity. In a multi-center study conducted by Bagaria and Chaudhary, [15] 3D printed models were found to be valuable tools in orthopedic surgeries involving complex pathoanatomy such as pelvic trauma, periarticular fractures, and revision arthroplasty. Chen et al. [16] also used a 3D-printed guide template to assist in achieving accurate placement of plates and screws in the pelvises of 14 adult cadavers. In a similar study, Kang et al. [7] showed that the use of a full-size 3D printed model altered surgeons' choice of preoperative locking plates, particularly when inexperienced surgeons evaluated a complex fracture. Zheng et al. [6] showed that it was possible to use 3D printing technology to treat pilon fractures in clinical practice. In addition, the models produced by 3D printers have been reported to increase the comfort of orthopedic surgery, with shorter operation times, less blood loss, smaller incision sizes, and less use of fluoroscopy. [17][18][19] However, no significant difference has been reported between the groups regarding the rate of infection, fracture union time, traumatic arthritis, and malunion. [19] In our study, the surgical procedures performed for each patient provided comfortable surgery for both the surgeon and the patient, despite the short-term follow-up, similar to the findings of previous studies. Also, the plates preoperatively selected and bent on the plastic models were used during surgery without any modification.
Aseptic loosening is the most prevalent cause of THA revision. Acetabular cup revision is a major challenge in the presence of a severe bone defect. Therefore, multiple surgical reconstruction options have been reported for severe acetabular defects, such as structural allografts, porous tantalum components, jumbo acetabular cups, and anti-protrusio cages. [8][9][10] As the optimal method for acetabular revision of large defects is still unknown, research is ongoing. Custom-made 3D-printed acetabular implants were therefore developed to achieve primary implant stability, even in cases of pelvic discontinuity. [8][9][10]20] Çıtak et al. [8] reported that 3D-printed acetabular components had a 56% complication rate in severe acetabular defects, although the technology remains promising for future revision THA. In contrast, Aprato et al. [9] showed encouraging results with the Bespoke acetabular system for Paprosky type 3B acetabular defects, and all procedures were rated positively. Likewise, Kieser et al. [21] reported encouraging mid-term results with the Ossis custom 3D-printed triflange acetabular implant for the management of severe acetabular defects. Kavalerskiy et al. [22] presented 17 revision hip replacement patients whose surgeries were preoperatively planned with 3D models to understand the anatomy and bone defects; however, only three custom-made acetabular implants were used for these patients. In our study, no complications occurred during the follow-up period with the custom-made prostheses applied to the patients with Paprosky type 3B acetabular defects. No loosening was seen on the patients' radiographs, and clinical findings gradually improved throughout follow-up.
Nonetheless, the main limitations of the present study are the limited sample size and the relatively short follow-up without a control group. Even so, this study may be of interest to readers, as it is one of the first applications in our country to be designed, manufactured, and applied under the roof of a single university.
In conclusion, 3D printers have begun to be used successfully in orthopedics and traumatology, as well as in many other fields of medicine. However, their use in orthopedics is relatively new, and their considerable effects and benefits are not yet well known to most surgeons. In many difficult orthopedic cases, preoperative surgical simulation with 3D-printed models can increase the comfort of fracture surgery. In addition, customized 3D-printed implants can perform important tasks in joint arthroplasty when prefabricated implants do not fit the patient's anatomy in severe defects.
"year": 2021,
"sha1": "578c896caa1a750cd1867b2240346b3786915936",
"oa_license": "CCBYNC",
"oa_url": "https://jointdrs.org/full-text-pdf/1241",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "fba12f2a8561e683822714773cba8bbdd8a203d2",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Comparison of Machine Learning Methods With National Cardiovascular Data Registry Models for Prediction of Risk of Bleeding After Percutaneous Coronary Intervention
Key Points
Question: Can machine learning techniques, bolstered by better selection of variables, improve prediction of major bleeding after percutaneous coronary intervention (PCI)?
Findings: In this comparative effectiveness study that modeled more than 3 million PCI procedures, machine learning techniques improved the prediction of post-PCI major bleeding to a C statistic of 0.82 compared with a C statistic of 0.78 from the existing model, and improved the identification of an additional 3.7% of bleeding cases and 1.0% of nonbleeding cases.
Meaning: By leveraging more complex, raw variables, machine learning techniques are better able to identify patients who are at risk for major bleeding and who can benefit from bleeding avoidance therapies.
I. Motivation
We sought to improve prediction of bleeding using machine learning methods compared with an existing model that was derived from the same dataset. With the machine learning methods, we started with the variables that had been selected or defined in the existing model, and then extended this set with any additional variables related to those (e.g., the continuous pre-procedure hemoglobin value rather than the 2 dichotomous variables of pre-procedure hemoglobin ≤13 and >13 g/dL). We conducted additional experiments to determine whether any improved performance from the machine learning models was a result of new variable selection or of the use of a different analytic approach. Finally, we conducted supplementary analyses to determine how effective the machine learning models would be if they selected a smaller set of the most predictive variables, so that this targeted set could be more easily acquired for incorporation into post-PCI clinical care and decision making.
For this reason we evaluated two different methods. The first is a traditional statistical technique, logistic regression, but with lasso regularization, to understand whether a difference in how logistic regression selects the variables impacts performance. The second is gradient descent boosting, which is better equipped to develop models that combine binary, categorical, and continuous data. We conducted analyses to determine whether the variables, the methodology, or the combination of the two provided the greatest impact, while still using techniques that maintain a level of interpretability for clinical understanding.
II. Inclusion Criteria and Outcomes Definitions
Our initial sample used the inclusion and exclusion criteria for the existing full NCDR bleeding-risk model, 1 updated to include all index PCI procedures from July 2009 through April 2015 (eFigure 1). Briefly, this study population excluded patients who had repeated PCI procedures per admission (197,412 cases), who died in the hospital or had missing bleeding information (10,231 cases), or who came from sites with no bleeding events (1,165 PCI cases from 22 sites). We also excluded patients who underwent coronary artery bypass grafting (CABG) during the index admission, because the high risk of bleeding after CABG may obscure the bleeding risk attributable to PCI alone. 2 We must note a limitation in our work regarding de-identified procedure admission information. Namely, because our dataset is de-identified, we can only identify PCI procedures as related if they occur in the same admission. Therefore, the distinct procedures we isolated do not necessarily each represent a new patient. This has the potential to introduce some bias through patients with repeated admissions and PCI, and is a limitation of our work.
The primary outcome, as in the existing NCDR bleeding model, was major post-PCI bleeding. The outcome definitions are identical to those in the work by Rao et al. 3 Major bleeds are: 1) a site-reported bleed occurring within 72 hours after the PCI or before discharge (external bleeding, or a hematoma >10 cm for femoral, >5 cm for brachial, or >2 cm for radial access); 2) a post-PCI hemoglobin decrease of 3 g/dL in patients with a pre-PCI level of at least 16 g/dL; 3) any non-surgery blood transfusion with a pre-procedure hemoglobin level of at least 8 g/dL; or 4) intracranial hemorrhage, cardiac tamponade, or gastrointestinal or retroperitoneal bleeding. 3
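As a concrete illustration, this composite outcome can be expressed as a simple flagging function. This is a minimal sketch only: the column names are hypothetical placeholders rather than actual NCDR CathPCI field names, and the logic simply mirrors the four criteria as stated above.

```r
# Minimal sketch of the major-bleeding flag; column names are hypothetical.
flag_major_bleed <- function(d) {
  site_bleed <- d$site_reported_bleed == 1                        # criterion 1
  hgb_drop   <- d$pre_hgb >= 16 & (d$pre_hgb - d$post_hgb) >= 3   # criterion 2
  transfused <- d$non_surgery_transfusion == 1 & d$pre_hgb >= 8   # criterion 3
  event      <- d$intracranial_hemorrhage == 1 | d$tamponade == 1 |
                d$gi_bleed == 1 | d$retroperitoneal_bleed == 1    # criterion 4
  as.integer(site_bleed | hgb_drop | transfused | event)
}
```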
III. Extreme Gradient Boosting (xgboost)
Gradient descent boosting was selected as the primary machine learning technique for this analysis for several reasons. First, decision tree-based methods are inherently more interpretable than popular deep learning techniques: by seeing how often variables are selected across the variety of decision trees made, we are able to interpret how important each variable is. Second, decision tree-based methods are able to make use of multimodal data seamlessly. In other words, they are able to develop models that combine binary variables, categorical variables, and continuous variables alike. Finally, among the variety of decision tree-based methods that exist, we chose xgboost because of the advantages this technique provides. This method develops one decision tree at a time with limited depth, which requires each tree to find the variables that best split the population, in the hope of having leaf nodes at the bottom of the tree that best separate bleeding cases from non-bleeding cases. After a tree is developed, through a series of tests that identify the best variable for each split (accounting for potential outliers in the data), the model determines how much of the training-set variation can be explained by that tree. Based upon this, it develops the next tree with the inherent goal of better explaining the portion of the training-set variation not explained by the first decision tree. This procedure continues until a group of trees forms a robust predictive model. For more details, we encourage readers to see the authors' explanation of why this technique is potentially preferable to other decision-tree techniques such as random forest. 4 When the trees are finished training, our predictive model has several interpretable factors. First, the top variables selected across the trees indicate which variables are important in distinguishing low- versus high-risk procedure cases. Second, the higher a variable appears in a tree, the more important it was deemed to be in distinguishing bleeding cases from non-bleeding cases. Finally, we can understand the risk for each individual patient by tracing the paths in each tree that predicted that patient's risk.
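To make the interpretability point concrete, the per-variable importance of a trained xgboost model can be extracted as sketched below; `bst` is assumed to be a model trained as described in section IV, and `x_train` the training matrix (both names are illustrative).

```r
library(xgboost)

# Rank variables by Gain: the total loss reduction attributable to splits
# on each variable across all trees in the trained model.
imp <- xgb.importance(feature_names = colnames(x_train), model = bst)
head(imp, 10)                     # top-10 most important variables
xgb.plot.importance(imp[1:10, ])  # visual summary of the same ranking
```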
IV. Cross-Validation and Model Hyperparameters
The cross-validation process described in the main text for the new models was also repeated for the 2 existing NCDR bleeding-risk models, to detect any differences in discrimination that might arise from using a cross-validation approach rather than the single derivation/validation cohort split used in prior work. 1 This allowed us to directly compare the performance of the existing technique in this stratified cross-validation approach to that of the new methods and variables considered.
All analyses were conducted in R, with the base GLM function used for logistic regression with the pre-selected variables to recreate the existing models, the GLMNET package used for the logistic regression with lasso regularization, 5 XGBOOST for the gradient descent boosting, 4 and pROC for the ROC and c-statistic calculations. 6 We used mgcv and sandwich for the continuous smoothing functions for calibration curves. 7,8 For the GLMNET package in R, the hyperparameters for the model were set by using the default values in the cv.glmnet function with a 10-fold internal cross-validation, which is pre-built in the glmnet package. This method creates, from the training set, an internal 10-fold training and testing split. It iterates through and compares different tuning parameters and selects the lambda that provides the highest AUROC within the training data. This lambda is learned from the training set (along with the other parameters) and is used as the default for prediction on the testing set.
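In code, the lasso step described above reduces to the following sketch. The object names (`x_train`, `y_train`, `x_test`) are assumptions for illustration; the calls themselves are standard glmnet usage.

```r
library(glmnet)

# 10-fold internal CV on the training set; lambda is chosen to maximize AUROC.
cvfit <- cv.glmnet(x_train, y_train, family = "binomial",
                   type.measure = "auc", nfolds = 10)

# Predict bleeding probabilities on held-out data at the selected lambda.
pred_lasso <- predict(cvfit, newx = x_test, s = "lambda.min", type = "response")
```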
For xgboost, we set the number of trees to 1000, with an eta of 0.1 and a maximum tree depth of 6. We used the default learning rate and tree depth, and preset the number of trees for computational efficiency while still providing a sufficient number of trees, since boosting learns slowly. Future work should address this limitation by grid searching a wider range of tree counts (100 to 10,000) with varying depths (from 1 to 10).
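Expressed as an xgboost parameter list in R, these settings would look like the sketch below; the `eval_metric` entry is an assumed convenience choice, since only the tree count, eta, and depth are stated above.

```r
# Hyperparameters as stated in the text: 1000 trees, eta = 0.1, max depth = 6.
params  <- list(objective   = "binary:logistic",
                eta         = 0.1,
                max_depth   = 6,
                eval_metric = "auc")  # assumed convenience choice
nrounds <- 1000
```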
V. Implementation
In order to implement these methods, the data must be taken through several preprocessing steps. First, the cohort must be extracted, as in section II above. Then the data must be split for internal validation: in our case, we randomly separate 80% of the data for training and hold out 20% for validation, while keeping the event rate consistent between the two. At this point we run imputation techniques based on the 80% training data. We then feed this training data and the training labels to the machine learning methods; in the case of xgboost, the parameters used to tune the model are discussed in the prior section (IV). Finally, we generate a prediction on the held-out 20%. We repeat this process five times, with a new 20% of the data serving as the test set in every iteration. In R, this amounts to three lines of code: the first trains the xgboost model on a data matrix in which each procedure is a row and each column is a variable; the second generates a prediction using the model and the test data; the third compares that prediction with the ground truth for the test set in pROC to plot the ROC curve (see the sketch below). Source code is available at https://github.com/bobakm/NCDR_CathPCI_MajorBleed_Public.
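A minimal sketch of one fold of this pipeline follows, using `params` and `nrounds` from the previous sketch; the split and object names are illustrative stand-ins for the released source code, not a copy of it.

```r
library(xgboost)
library(pROC)

# Build DMatrix objects from the imputed training/testing matrices.
dtrain <- xgb.DMatrix(data = x_train, label = y_train)
dtest  <- xgb.DMatrix(data = x_test)

bst  <- xgb.train(params = params, data = dtrain, nrounds = nrounds)  # line 1: train
pred <- predict(bst, dtest)                                           # line 2: predict
plot(roc(response = y_test, predictor = pred))  # line 3: ROC; auc() gives c-statistic
```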
VI. Using the Final Model
In order to recreate these models for use, we have made our source code available to extract the same patient cohort if one has access to the CathPCI registry data. The specific variables and hyperparameters used to train the xgboost model are provided. When the model is used, new test cases will have missing data imputed based upon the training set, as described in the body of the paper. The model produces a probability corresponding to the risk of major bleeding post-PCI.
VII. Additional Dataset Comparisons
We sought to better understand the updated samples. Specifically, we split analyses by year, for cases considered in the existing NCDR bleeding-risk models and for newly collected cases, to confirm that changes in bleeding rates did not affect model discrimination (they did not) and that we were recreating the performance of the existing technique. Additionally, we provided supplementary analyses to ensure that our top-feature selection technique was a fair selector of variables.
The datasets added variables in a specific order to separate the impact of the variables from that of the methods. The blended variable set had 28 additional variables that primarily included continuous versions of variables that the existing model had converted to dichotomous variables; continuous variables such as pre-procedural hemoglobin, previously used as 2 dichotomous variables (≤13 and >13 g/dL), were also added to the dataset. This variable set provides the best-performing model, and methods are compared within it to separate the improvement resulting from the method from that resulting from the additional 28 variables. The post-PCI variable set was used to provide a direct comparison to the existing technique, to evaluate improvements resulting from the machine learning techniques specifically. The pre-PCI variable set was created to evaluate the impact on the risk score, which uses different decision thresholds than the post-PCI model; the continuous versions of these variables were used for this dataset. This variable set provided a direct comparison to the existing risk-score technique, to evaluate improvements resulting from the machine learning techniques specifically for data-driven decision thresholds.
To verify that the additional samples did not affect discrimination, we compared the data available to us from July 2009 to April 2011 (similar to the data range of the existing NCDR bleeding model), which had an event rate of 4.6%, with all the subsequent additional observations (May 2011 to April 2015), which had an event rate of 4.9%. The c-statistic for the additional samples, using the existing NCDR bleeding model in a 5-fold cross-validation, was 0.78 (0.77-0.78). This c-statistic was similar to that of the existing NCDR bleeding model on the original sample.
Additionally, we compared the final blended model to a blended model using only 10 variables, for a variety of reasons. First, the pre-PCI model also had 10 variables, so these variables could be used to replace the pre-PCI case. Second, the eleventh variable (cardiogenic shock within 24 hours) is collinear with the ninth variable (cardiogenic shock within 24 hours or at the start of PCI).
VIII. Feature Selection and Ranking
The feature selection and ranking techniques for xgboost are detailed in the manuscript. The full xgboost ranking of selected features can be found in eTable 2, which includes not only the ranked features from the entire dataset but also those features not selected by the model. However, as mentioned in the main manuscript, the ranking of the top 10 variables, and their contribution to the forward-selected c-statistic, could be considered an unfair comparison. In particular, the model is trained on the entire dataset, and we can assume it is a good fit based on the 5-fold cross-validation; however, the c-statistic calculated by the forward selection results from using training data and testing data together, which is not ideal. We wanted a technique that identifies the top contributors, so to show that the results are a fair indicator, we ran several other feature-ranking tests.
Using the blended dataset, we ensured that the top-10 comparisons were fair by validating them with several different tests. First, in a 5-fold cross-validation using the training data as the testing data, we verified that the model is not severely overfitting. Second, we verified that the top-10 variables are stable across the folds, that each top-10 variable was in the final top 10 of multiple folds, and that each fold is consistent with the total dataset, by showing the average ranking of each feature and its standard deviation across the 5 folds. Third, we verified that the stepwise feature selection was fair by showing a stepwise selection with a 90/10 training/testing split that had similar incremental gains.
By running the 5-fold cross-validation on the blended dataset again, we can check off the first 2 tests together. First, the mean c-statistic (and 95% confidence interval) when using the training data as testing data in each fold was 0.838 (0.838-0.839), only modestly higher than the 0.82 achieved with a held-out testing set. This means that using the entire training set to determine feature importance cannot alter the stepwise results by a large margin. eTable 3 uses the variables (in rank order from the main manuscript) to show their average feature rank and the number of times they appear in the top 10 in each fold of the 5-fold cross-validation; the average ranking and low standard deviation show the stability of the selection of the top 10 variables. Finally, eTable 4 shows the forward stepwise c-statistic calculation for the top-10 variables ranked in a more traditional fashion: the variables are selected on a 90% training dataset and then tested in a forward stepwise fashion on the remaining 10% testing data (a sketch of this procedure follows). The values show incremental improvements very similar to those listed in the manuscript.
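The forward stepwise check can be sketched as follows. Everything here is illustrative: `top10` is assumed to hold the ranked variable names, `x`/`y` the full data matrix and labels, and the reduced round count is a speed assumption rather than the setting used in the study.

```r
library(xgboost)
library(pROC)

set.seed(1)
idx <- sample(nrow(x), size = round(0.9 * nrow(x)))  # 90/10 split

# Add top-ranked variables one at a time; record the test-set c-statistic.
stepwise_auc <- sapply(seq_along(top10), function(k) {
  cols <- top10[1:k]
  fit  <- xgboost(data = x[idx, cols, drop = FALSE], label = y[idx],
                  objective = "binary:logistic", eta = 0.1, max_depth = 6,
                  nrounds = 200, verbose = 0)  # fewer rounds, for speed only
  p <- predict(fit, x[-idx, cols, drop = FALSE])
  as.numeric(auc(roc(y[-idx], p)))
})
```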
IX. Additional Results on Decision Thresholds
The evaluation of the decision threshold and the model calibration gives an assessment of the model's performance when it is used prospectively to decide whether a patient should receive bleeding-avoidance therapies; we evaluate this performance via the f-score, positive predictive value, and false discovery rate (the ratio of false positives to all positive predictions). eTable 1 shows the f-score for the best model in each variable set. The existing pre-PCI NCDR bleeding-risk model achieved a mean f-score of 0.25 (0.25-0.26). The existing post-PCI NCDR bleeding-risk model achieved a mean f-score of 0.26 (0.26-0.26), which did not change when switching between modeling methods.
X. Additional Evaluations of Risk
We intended to show that risk is somewhat dynamic and to understand the difference between the bedside risk score and the full risk model. eFigure 2 shows, for each patient, the difference between the risk calculated by the best-performing model using the 10 variables from the pre-PCI model and the risk calculated by the best-performing full post-PCI model. The clustering of bleeds toward positive differences shows that the full post-PCI model raises the risk score of many patients, most of whom have a bleeding outcome. This visually demonstrates the improvement in the c-statistic and provides further evidence that a specific threshold can separate bleeds from non-bleeds. eFigure 2 shows the risk difference when calculating the bleeding risk from the blended variable set and from the existing post-PCI NCDR bleeding-risk model. Overall, the model trained on the blended variable set identified post-procedural bleeding risk more accurately than the model trained on the existing NCDR bleeding-risk variable set, as evidenced by the higher concentration of bleeds in the region of largest difference in risk between the blended model and the existing post-PCI NCDR bleeding-risk model (right side, eFigure 3).
Additionally, we evaluated the performance of our models with a variety of decision thresholds. Specifically, in order to develop an ROC curve, a decision threshold is varied between a probability of 0 and a probability of 1, and at each of these evaluation points it is possible to calculate threshold-specific metrics. We ultimately present the threshold-specific metrics based upon the threshold that results in the highest f-score (a minimal sketch of this threshold sweep follows below). The balance used in this study assumes an equal cost between false positives and false negatives; however, this may not be the case. Certain institutions may wish to use bleeding avoidance medications on patients with minimal risk, while others may wish to treat different risk thresholds with different strategies. In order to evaluate this performance, we also compared the positive predictive values and false discovery rates of the methods by using the highest decile of risk, rather than the data-driven threshold, to show that the performance gains still exist. eFigures 3 and 4 show the quantity of bleeds and non-bleeds identified when selecting a threshold at the decile boundary and at the mean rate of the decile. Quantities, however, might be misleading due to the number of people at or above the mean rate of the decile versus in the decile entirely, so we show the rates in eFigure 5. We see that the false discovery rate drops when using the full post-PCI model trained by xgboost, and that the parsimonious model using the top predictors performs similarly well.
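A minimal sketch of the threshold sweep, assuming `pred` holds predicted probabilities and `y_test` the 0/1 outcomes; the grid spacing is an illustrative choice.

```r
# Sweep decision thresholds and compute threshold-specific metrics;
# the operating point reported in the text maximizes the f-score.
threshold_metrics <- function(pred, truth, grid = seq(0.01, 0.99, by = 0.01)) {
  t(sapply(grid, function(th) {
    tp  <- sum(pred >= th & truth == 1)
    fp  <- sum(pred >= th & truth == 0)
    fn  <- sum(pred <  th & truth == 1)
    ppv <- tp / (tp + fp)   # positive predictive value
    rec <- tp / (tp + fn)   # sensitivity/recall
    c(threshold = th, ppv = ppv, fdr = 1 - ppv,
      f_score = 2 * ppv * rec / (ppv + rec))
  }))
}

m <- threshold_metrics(pred, y_test)
m[which.max(m[, "f_score"]), ]  # data-driven operating point
```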
Since the calibration plot identifies the largest difference in the highest decile, we further analyzed the predictive behavior of the models in this decile. eFigure 4 shows the correctly identified bleeds when using the decile threshold and the mean decile rate, respectively. The highest decile of risk is any predicted risk ≥9.5% for the existing NCDR post-PCI bleeding-risk model, 10.9% for the blended post-PCI model, and 10.8% for the blended post-PCI bleeding model using only the top-10 predictive variables. The mean predicted rate for the highest decile of risk was 18.2% for the existing NCDR post-PCI bleeding-risk model, 22.0% for the blended post-PCI model, and 21.5% for the blended post-PCI model with 10 variables. eFigure 4 also shows the incorrectly treated non-bleeds. While the false positives (FPs) drop greatly when viewing the highest decile of risk and using the blended model, the quantity of FPs increases when using the mean predicted risk as a decision threshold. Note that the optimally selected thresholds in Table 4 are between these 2 rates; however, fewer cases are at a level of risk at or above the mean in the existing NCDR post-PCI bleeding-risk model. eFigure 5 plots the false discovery rates and positive predictive values for each model at the respective thresholds, showing an improvement in both scenarios.
XI. Limitations in Implementation
Implementing these models for clinical use has been shown to be practical in a number of settings. For example, Huang et al. discussed implementation details of their prediction of acute kidney injury, citing that such implementations were possible if the appropriate fields were extracted from the electronic health record, and gave an example of a real-time risk calculator implemented at the Cleveland Clinic. 9 However, extraction of such variables may be a limitation and may require advanced techniques such as natural language processing to properly extract the needed variables. Even with this limitation, the curated data and model presented in this work can still be used for retrospective benchmarking of quality of care, and to enhance understanding of when to employ bleeding avoidance strategies in case reviews.
XII. Additional Discussion and Future Directions
A machine learning model's strong discriminatory abilities, present even with incremental improvements using only the top-10 predictors, allow for confident selection of a subset of clinically useful predictive variables. If the collection of extraneous and collinear variables is not desirable, the blended top-10 variable model with a c-statistic of 0.81 performs well. Selecting a small, parsimonious set of predictors selected by the modeling technique could facilitate development of bedside tools that gather pertinent variables from the electronic medical record, calculate relevant scores, and characterize patients' risk profiles in a variety of ways that clinicians could use to better care for patients.
A third enhancement of this work (in addition to the two presented in the main text) is the prospective prediction as demonstrated in the top-predictors method. By selecting risk thresholds and evaluating treatment vs. non-treatment cases, it is possible to compare risk models with how they would be used clinically. For example, comparing the false discovery rates and quantities of predicted bleeds of the blended model and the blended model using only 10 variables versus the existing NCDR bleeding risk model shows specific improvements with each added layer of complexity. This improvement occurred because gradient descent boosting extracted the full continuous ranges of variables that had previously been used only as dichotomous variables.
Dichotomous versions of continuous variables were rarely selected by the model, illustrating the power of gradient descent boosting in selecting its own decision cut-points for continuous variables.
A fourth enhancement is the evaluation of the predictive method in a prospective manner. While the thresholds should vary based upon the use case and costs of each setting in which the models would be used, the data-driven thresholds selected here can greatly reduce the false discovery rate, which helps focus bleeding avoidance therapies on those at greatest risk and also reduces the costs associated with mistreatment. The f-score approach is an enhancement beyond the c-statistic discrimination and calibration plots; specifically, if the model is used prospectively, it better pinpoints when to expect a bleeding event and its consequences.
These enhancements allow for the opportunity to extend this work in several areas. The first is to explore enhancements to the bleeding model by considering the wider array of available data in the CathPCI registry. Other laboratory values, prior-history variables, and values that were not found to be statistically significant in the prior work 1 should be re-evaluated with these machine learning modeling techniques. The second is to explore the dynamic nature of bleeding risk throughout the patient encounter. Two models were developed in this work, one as a pre-PCI model and one as a post-PCI model before treatment with bleeding avoidance therapies. The data within the CathPCI registry can be split at a variety of key decision points, including the choice of access site, the choice of bleeding avoidance therapies, and even the choice of closure method for femoral PCIs, allowing for multiple models that show the varying risks before and immediately following key treatment decisions. The third is to extend beyond the bleeding model, applying the techniques presented here to a variety of the models available in NCDR across the registries collected by the American College of Cardiology, evaluating discrimination improvements, identifying predictive factors, and evaluating risk thresholds and prospective prediction performance measures.
It will be essential to use electronic medical records to implement machine learning methods if we wish to verify their successes and potential shortcomings in future prospective studies. Registry data are highly curated, and it is unlikely that all potentially pertinent variables for the entire span of a patient's admission would be available for immediate use from the electronic medical record. Two considerations are essential: first, it is important to recognize that electronic medical records have only so many variables readily available, so models will need to be adjusted to make the most of the variables that are available. Second, certain variables in the registry may matter greatly yet not be available within the electronic medical record, and these should be identified specifically.
"year": 2019,
"sha1": "70b138c71256416c2b881b27e5d082b98382e4de",
"oa_license": "CCBY",
"oa_url": "https://jamanetwork.com/journals/jamanetworkopen/articlepdf/2737843/mortazavi_2019_oi_190275.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "70b138c71256416c2b881b27e5d082b98382e4de",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Codon usage and protein length-dependent feedback from translation elongation regulates translation initiation and elongation speed
Abstract
Essential cellular functions require efficient production of many large proteins, but synthesis of large proteins encounters many obstacles in cells. Translational control is mostly known to be regulated at the initiation step; whether the translation elongation process can feed back to regulate initiation efficiency is unclear. Codon usage bias, a universal feature of all genomes, plays an important role in determining gene expression levels. Here, we discovered that there is a conserved but codon usage-dependent, genome-wide negative correlation between protein abundance and CDS length. The codon usage effects on protein expression and ribosome flux on mRNAs are influenced by CDS length; optimal codon usage preferentially promotes production of large proteins. Translation of mRNAs with long CDS and non-optimal codon usage preferentially induces phosphorylation of the initiation factor eIF2α, which inhibits translation initiation efficiency. Deletion of the eIF2α kinase CPC-3 (GCN2 homolog) in Neurospora preferentially up-regulates large proteins encoded by non-optimal codons. Surprisingly, CPC-3 also inhibits translation elongation rate in a codon usage- and CDS length-dependent manner, resulting in slow elongation rates for long CDS mRNAs. Together, these results revealed a codon usage- and CDS length-dependent feedback mechanism from translation elongation to regulate both translation initiation and elongation kinetics.
Synthesis of large proteins encounters many obstacles in cells. Errors in transcriptional, posttranscriptional and translational processes are expected to increase as coding sequence (CDS) length increases. Long mRNAs and large proteins are more likely to be degraded or damaged than shorter ones (35). In addition, large proteins have more protein domains, resulting in increased complexity in the protein folding process and greater likelihood of misfolding than small proteins. Previous studies have shown that CDS length negatively correlates with protein abundance, translation initiation rate, and ribosome density (22,(36)(37)(38)(39)(40), suggesting the existence of mechanisms that preferentially inhibit translation of large proteins. Modeling studies suggest that the negative influence of CDS length on translation is likely due to less efficient ribosome recycling on mRNAs with longer compared to shorter CDS regions (39,41). Despite these issues, many large proteins are critical for cellular functions in diverse biological processes. The mechanisms that allow efficient production and proper folding of large proteins are not clear.
The best characterized mechanism that regulates translation efficiency is the initiation process, which largely determines the number of protein molecules that can be made from an individual mRNA transcript (42)(43)(44). Ribosome queuing near the AUG start codon caused by ribosome stalling or collision impacts translation initiation efficiency (45,46). A previous study using a reporter gene in Saccharomyces cerevisiae suggested that rare codons near the start codon could inhibit translation initiation, probably due to ribosome queuing near the start codon, whereas optimal codons near the start codon presumably result in rapid liberation of the start codon and therefore high translation initiation rates (47). However, other studies showed that codon usage near the start codon appears to influence translation initiation rate through its effects on mRNA structure rather than translation elongation (48,49). Thus, the mechanism underlying the coordination between translation initiation and elongation under nutrient-replete growth conditions, and the role of codon usage in this process, are still unclear.
In eukaryotes, translation initiation begins with the binding of the ternary complex (the aminoacylated initiator methionyl-tRNA (Met-tRNAi), GTP and the initiation factor 2 (eIF2)) to the 40S ribosome to form the pre-initiation complex (42-44,50). Phosphorylation of eIF2α, a subunit of eIF2, at serine 51 is an important regulator of translation initiation and is known to be induced by many types of stress conditions to result in global inhibition of translation initiation. The phosphorylation of eIF2α-GDP inhibits the guanine nucleotide exchange activity of eIF2B and blocks the recycling of unphosphorylated eIF2α-GDP into the translationally active form eIF2α-GTP. In higher eukaryotes, GCN2 is one of several kinases responsible for the phosphorylation of eIF2α at serine 51 after its activation from an autoinhibited state (42,50,51). In fungi, however, the GCN2 homolog is the major and the only known eIF2α kinase responsible for eIF2α phosphorylation at serine 51 (52). Upon nutrient limitation, amino acid starvation or other stress conditions, GCN2 is activated and phosphorylates eIF2α to initiate the adaptive pro-survival integrated stress response, resulting in temporary translation repression of most mRNAs and activation of amino acid biosynthesis (53-56). In addition, mutation or depletion of enzymes required for tRNA modification can also trigger eIF2α phosphorylation (57,58). Although GCN2 can be activated by interacting with uncharged tRNA caused by amino acid starvation, recent evidence suggests that GCN2 can also be activated by other mechanisms, such as by interacting with the ribosomal P-stalk (59-61). Although stress-induced eIF2α phosphorylation is expected to cause global translation repression, recent studies showed that this is not the case for low levels of eIF2α phosphorylation, suggesting that, under certain conditions, eIF2α phosphorylation has a specific rather than a broad inhibitory effect on general translation (50,62).
The filamentous fungus Neurospora crassa exhibits a strong codon usage bias for C/G at wobble positions and has been an important experimental model system for studying the functions of codon usage biases (6,7,16). We have previously shown that codon usage plays an important role in regulating elongation speed and the co-translational protein folding process in Neurospora (9,16,21,63). Use of preferred codons speeds up the local rate of translation elongation while rare codons slow down translation elongation and potentially result in ribosome pausing and premature termination, a mechanism that can affect translation efficiency (21,23). We also showed that codon usage could influence gene expression levels by affecting transcription efficiency (13,33,64). These results led us to propose that codon usage represents a code within the genetic codons that regulates both gene expression level and protein structure.
In this study, we used N. crassa as a model system to understand the CDS length-dependent mechanism that regulates translation. We discovered that there is a conserved codon usage-dependent genome-wide negative correlation between protein abundance and length, suggesting that optimal codon usage is a mechanism that allows for efficient production of large proteins critical for cell functions. We found that translation of mRNAs with non-optimal codon usage preferentially induced eIF2α phosphorylation and reduced protein levels in a CDS length-dependent manner, indicating a feedback mechanism from translation elongation to control translation initiation. Furthermore, we showed that the GCN2 homolog, CPC-3, which is the only known kinase responsible for eIF2α phosphorylation in Neurospora (52), also regulates translation elongation rate in both codon usage- and CDS length-dependent manners, resulting in slow elongation rates for mRNAs with long CDS regions. Together, these results revealed a codon usage- and CDS length-based feedback mechanism from translation elongation to regulate both translation initiation and elongation kinetics.
Neurospora strains and growth conditions
The N. crassa wild-type (WT) strain FGSC 4200 (a) and the cpc-3 strain (65) were used in this study. Strains were cultured on slants containing 1× Vogel's, 3% sucrose, and 1.5% agar. Liquid cultures were grown in 2% glucose medium (1× Vogel's, 2% glucose). Specifically, fresh conidia (7-10 days post inoculation on slants) of the WT or cpc-3 strains were cultured in 50 ml of 2% glucose medium in petri dishes at room temperature for 2 days. The cultures were cut into small discs with a diameter of 1 cm, and the discs were then transferred into flasks with the same liquid medium and grown with orbital shaking (200 rpm) for 12 h before the various experiments. Race tube medium contained 1× Vogel's, 0.1% glucose, 0.17% arginine, and 1.5% agar. All strains were cultured under constant light at room temperature unless otherwise specified. For chemical treatment experiments, the indicated drugs were added to the 2% glucose medium of the WT strain for 20 min before harvesting. 3-Aminotriazole (3-AT, Sigma-Aldrich, Cat. No. 8056), puromycin (Puro, Sigma-Aldrich, Cat. No. 540411) and tigecycline (TIG, Sigma-Aldrich, Cat. No. PZ0021) were used at final concentrations of 50 mM, 0.06 mg/ml and 0.5 mg/ml, respectively. Anisomycin (ANS, Sigma-Aldrich, Cat. No. A9789) and cycloheximide (CHX, Sigma-Aldrich, Cat. No. C1988) were used at the indicated concentrations.
Plasmid construction, transformation and cpc-3 complementation
For gene expression at the csr-1 locus in N. crassa, a basta-resistance (bar) gene was inserted downstream of the ccg-1 promoter (Pccg-1) of a parental plasmid, Pcsr1, to create the Pcsr1-bar plasmid. Pcsr1-bar is a csr-1-targeting expression vector with an expression cassette in which Pccg-1 and bar flank the gene of interest (edp or cpc-3 in this study), and this cassette is flanked by csr-1 downstream and upstream fragments, which serve as the recombination sites for double homologous recombination (66). The resulting plasmid was transformed into N. crassa strains by electroporation, and transformants were screened for resistance to both glufosinate-ammonium (0.25 mg/ml, Sigma-Aldrich, Cat. No. 45520) and cyclosporin A (5 µg/ml, Sigma-Aldrich, Cat. No. 30024), which resulted in >90% positive transformants. Homokaryotic strains were obtained by microconidia purification. To generate the cpc-3 complementation strains, a construct expressing the WT cpc-3 (cloned from genomic DNA) with a 3× Flag epitope tag under the control of the ccg-1 promoter was introduced at the csr-1 locus in the cpc-3 strain. The expression of the Flag-tagged CPC-3 and the rescue of eIF2α phosphorylation in the complementation strains were confirmed by immunoblotting (Supplementary Figure S1).
RNA isolation and quantitative reverse transcription PCR (qRT-PCR)
The culture conditions were the same as described above. RNA extraction and qRT-PCR were performed as previously described (58). The β-tubulin transcript (NCU04054) was quantified as an internal control. The primer pairs 5′-ACAACCCCTCACATCAACCAA-3′/5′-CCGCCCTTGTCATCGTCATCC-3′ and 5′-GCGTATCGGCGAGCAGTT-3′/5′-CCTCACCAGTGTACCAATGCA-3′ were used to amplify the reporter gene edp and the β-tubulin gene, respectively. The primers for the different versions of edp were designed to amplify the 5′ end region of the transcripts (5′ UTR, 3× Flag and 8× Gly linker), which is common to all the transgenes, to ensure the same amplification efficiency.
In vitro translation and protein analyses
The in vitro translation assay was performed as previously described (21). Specifically, the N. crassa cell-free lysate was obtained as previously described (21,67), except that the protease inhibitor cocktail from MedChemExpress (Cat. No. HY-K0010) was used. Equal moles (0.65 pmol for each reaction) of the different versions of mRNAs were individually added into N. crassa cell-free lysate and translated for 15 min at 26°C unless otherwise specified. SDS-PAGE loading buffer was added to the samples immediately, followed by heating the samples at 98°C for 10 min.
Protein extraction was performed as previously described (58). For western blot analysis of the Flag-tagged EDP, the anti-Flag (Sigma-Aldrich, Cat. No. F3165) antibody was used. Densitometric analyses of the western blot results were performed using Image J.
For phosphatase treatment, total proteins from the WT and cpc-3 strains were obtained by using protein lysate buffer with or without phosphatase inhibitors (PP inhibitors: 25 mM NaF, 10 mM Na4P2O7·10H2O, 2 mM Na3VO4 and 1 mM EDTA). The protein extracts prepared in lysate buffer without PP inhibitors were further treated with Lambda protein phosphatase (Lambda PP, NEB, Cat. No. P0753S) according to its protocol.
Cell-free translation assay to determine TFAs
To prepare the mRNA templates for in vitro translation, in vitro transcription of different mRNAs was performed as previously described (21). In vitro translation assays to determine TFAs were performed as previously described (21) and the luminescence of luciferase signal was recorded continuously in 20-s intervals.
Ribosome profiling and mRNA-seq
The WT and cpc-3 strains expressing 1× WT-EDP, 3× WT-EDP, 1× OPT-EDP and 3× OPT-EDP, respectively, were used for ribosome profiling and accompanying mRNA-seq experiments under nutrient-replete conditions at room temperature (1× Vogel's, 2% glucose). At least three biological replicates of the ribosome profiling experiment were used for each strain. Ribosome profiling experiments were performed as previously described (58). CHX was not added to cultures before sample collection and was only added to the lysate buffer at a final concentration of 0.1 mg/ml. The ribosome profiling and mRNA-seq methods were as described in the protocol for the ARTseq Ribosome Profiling Kit (Yeast) (Illumina, Cat. No. RPYSC12116). The adaptor in the kit was replaced with synthesized 5′-/5rApp/NNNNAGATCGGAAGAGCACACGTCT/3ddC/ to avoid potential adaptor ligation biases. The RT primer in the kit was replaced with synthesized 5′-/5Phos/RNCGTCGGACTGTAGAACTCTG/iSp18/AGACGTGTGCTCTTCCGATCT to avoid potential ligation biases during the circularization process. The resulting libraries were sequenced on the BGI DNBseq platform.
Bioinformatics analyses of ribosome profiling results
The workflow for the ribosome profiling data analysis is the same as previously described (58), with the following modifications: both the ribo-seq and RNA-seq reads were mapped to the CDS regions of genes; both the ribosome-protected fragment (RPF) and mRNA levels for each gene were measured in Transcripts Per Million (TPM); and ribosome density was measured as the TPM of RPFs normalized by the TPM of the mRNA. The calculation of the relative codon decoding time (RCDT) is the same as previously described (21,23). The raw reads of the biological replicates of each strain were merged when we analyzed the ribosome profiling data. Ribo-seq and RNA-seq data from the WT and cpc-3 strains expressing 1× WT-EDP, 3× WT-EDP, 1× OPT-EDP and 3× OPT-EDP, respectively, were regarded as four independent biological replicates when we performed genome-wide analyses for the WT and cpc-3 strains.
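The TPM and ribosome-density calculations described above reduce to a few lines of R. This is a minimal sketch assuming per-gene CDS read-count vectors `rpf_counts` and `rna_counts` and CDS lengths `cds_len` in bp; it is not the authors' released pipeline.

```r
# TPM: normalize counts by feature length (per kb), then scale to one million.
tpm <- function(counts, len_bp) {
  rpk <- counts / (len_bp / 1000)  # reads per kilobase of CDS
  rpk / sum(rpk) * 1e6             # transcripts per million
}

rpf_tpm <- tpm(rpf_counts, cds_len)
rna_tpm <- tpm(rna_counts, cds_len)

# Ribosome density: RPF TPM normalized by mRNA TPM for each gene.
ribo_density <- rpf_tpm / rna_tpm
```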
Metabolic labeling
Fresh Neurospora conidia of the WT and cpc-3 strains were cultured separately in 50 ml of 2% sucrose medium (1× Vogel's, 2% sucrose) in flasks with orbital shaking (200 rpm). After culturing at 30°C for 8 h, EasyTag L-[35S]methionine (PerkinElmer) was added to the medium for 45 min before sample collection. The same amounts of protein extracts (100 µg) from each sample were used to determine the levels of 35S incorporation as previously described (68).
Polysome profiling
The culture conditions for polysome profiling were the same as for the metabolic labeling experiment. Cultures of the WT and cpc-3 strains were frozen in liquid nitrogen immediately after collection. Tissue samples were ground into powder in liquid nitrogen, and an equal volume of lysis buffer (1× polysome buffer from the ARTseq Ribosome Profiling Kit (Illumina, Cat. No. RPYSC12116), 1% Triton X-100, 0.1 mg/ml CHX, 1× protease inhibitor cocktail (EDTA-free, MedChemExpress, Cat. No. HY-K0010), 0.2 U/µl SUPERase•In (Invitrogen, Cat. No. AM2694) and 2 mM DTT) was added to the same volume of tissue powder for each sample. The lysates were then centrifuged at 15,000 rpm for 10 min before the A260 of the supernatant was measured with a NanoDrop Microvolume Spectrophotometer. The A260/ml of the lysate was calculated according to the protocol for the ARTseq Ribosome Profiling Kit. The same OD amount (20 OD260) of lysate for each sample was loaded onto a 10-50% sucrose gradient containing 20 mM HEPES (pH 7.6), 0.1 M KCl, 5 mM MgCl2, 10 µg/ml CHX, the 1× protease inhibitor cocktail and 10 units/ml SUPERase•In. The sucrose gradients were then centrifuged at 35,000 rpm for 2 h at 4°C using an SW41Ti rotor in a Beckman Coulter (Optima L-80 Ultra) centrifuge. Sucrose gradients were analyzed using a BioLogic LP chromatography system (Bio-Rad, Cat. No. 731-8350).
Mass spectrometry analyses
Cell culturing was performed as described for the metabolic labeling experiment above. For mass spectrometry (MS) analysis to compare the relative amounts of different proteins within the same sample, ∼100 µg of protein from each sample of the WT strain was run 1 cm into a 7.5% SDS-PAGE gel. Gel slices were cut into small pieces for quantitative MS analysis. MaxQuant was used to analyze the MS data (69), and an intensity-based absolute quantification (iBAQ) value was used as a measure of protein abundance (70)(71)(72). For the quantitative MS analysis comparing protein levels between the cpc-3 strain and its complementation strain, the cultures were ground into powder in liquid nitrogen before an equal volume of lysis buffer (50 mM triethylammonium bicarbonate (TEAB) and 5% SDS) was added to an equal volume of the powder for each sample. The protein extracts were centrifuged at 12,000 rpm for 10 min and the supernatants were used for the subsequent TMT mass tagging and MS analyses. The protein concentration of each sample was measured, and equal amounts of total protein were used during the experiment. There were four replicates for each strain. The abundance data of the quantitative MS (TMT labeling) analysis were not normalized by molecular weights. The protein levels were determined from the abundance data normalized by the sum of all the raw abundance data in each sample. The P-values for the four replicates were determined by Student's two-tailed t-test and were further adjusted by the False Discovery Rate (FDR). Proteins with FDR values <0.05 were identified as differentially expressed, either significantly up-regulated or down-regulated. All the MS analyses were performed by the UT Southwestern Proteomics facility.
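The normalization and testing steps described above can be sketched in a few lines of R; `abund` is a hypothetical proteins × samples matrix of raw TMT abundances, with four cpc-3 columns followed by four complementation columns.

```r
# Normalize each sample (column) by its total raw abundance.
norm_abund <- sweep(abund, 2, colSums(abund), "/")

# Student's two-tailed t-test per protein (cpc-3 vs complementation),
# followed by FDR adjustment of the P-values.
pvals <- apply(norm_abund, 1, function(v)
  t.test(v[1:4], v[5:8], var.equal = TRUE)$p.value)
fdr <- p.adjust(pvals, method = "fdr")

diff_expressed <- rownames(abund)[fdr < 0.05]
```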
Codon manipulation, indices calculation and data collection from databases
The codon usage of the luciferase and edp genes was optimized based on the N. crassa codon usage frequencies from the Codon Usage Database (https://www.kazusa.or.jp/codon/). The WT and optimized (OPT) versions of the luciferase genes are the same as previously described (21). The sequences of WT/OPT-Luc, WT/OPT-edp, and WT/OPT-GFP are shown in Supplementary Figure S2. The CBIs and tAIs were calculated using CodonW (http://codonw.sourceforge.net/) and stAIcalc (73), respectively. The tRNA copy number data used for calculating tAIs were collected from the GtRNAdb database (http://gtrnadb.ucsc.edu/GtRNAdb2/). For the protein abundance data, the em-PAI data for N. crassa were obtained previously in our lab (13) and the iBAQ data were obtained in this study. The publicly available protein abundance data of S. cerevisiae, Drosophila melanogaster, Caenorhabditis elegans and Mus musculus were used in the analyses (74)(75)(76)(77).
Gene functional enrichment analysis
The functional category (including Gene Ontology (GO), Interpro, and KEGG terms) enrichment analyses were performed with the functional annotation tool of the DAVID bioinformatics web server (http://david.abcc.ncifcrf.gov/), with the whole genome annotation used as background. The genes of each enriched functional category, the enrichment fold change, and the various statistical parameters of the enrichment analysis, including P-values, Bonferroni-corrected P-values, Benjamini-corrected P-values, and FDR values, were determined.
Optimal codon usage has a conserved role in promoting the production of large proteins
To determine the potential role of codon usage in determining the levels of proteins of different sizes, we performed MS analysis of WT N. crassa whole-cell extracts and determined the relative abundances of ∼3000 proteins (Supplementary Table S1). As expected, a negative correlation (Pearson's correlation coefficient R = −0.37) was observed between protein abundance and protein length proteome-wide (Figure 1A, left). However, when the analysis was limited to genes with strong codon usage biases (codon bias index (CBI) > 0.5, genes with a strong preference for optimal codons) (63,78), the negative correlation between protein abundance and protein length was mostly abolished (R = −0.08) (Figure 1A, center). When genes were limited to those with low CBI values (<0.2, genes with weak codon usage biases), the negative correlation between protein abundance and protein length became stronger (R = −0.47) (Figure 1A, right). We also quantified gene codon usage bias using the tRNA adaptation index (tAI), a measure that takes into account tRNA concentrations and the efficiencies of codon-anticodon pairing (79). As gene tAI values increase (i.e. codon usage becomes more optimal), the negative correlation between protein abundance and protein length in a scanning window progressively weakened. The same observation was made for the MS results obtained in this study using the iBAQ method and for our previous result using the em-PAI method (13) (Figure 1B).
To determine whether the codon usage effect observed above in Neurospora is conserved in other eukaryotes, we determined the correlations between protein abundance and protein length as a function of gene tAI values in S. cerevisiae, D. melanogaster, C. elegans and different mouse tissues using previously reported proteomic MS data (74)(75)(76)(77). As in N. crassa, there are negative correlations between protein abundances and protein lengths in all these eukaryotic organisms, and the negative correlations progressively weaken as codon usage becomes more optimal (Figure 1B), regardless of the protein quantification methods used. These results suggest that optimal codon usage can counter the negative impact of CDS length on protein expression to allow large proteins to be efficiently expressed.
(Figure 1 legend: Proteome-wide protein abundance data are described and cited in Materials and Methods. All genes with detected protein levels were ranked by their tAIs, and Pearson's correlation coefficients were calculated in continuous scanning windows from low to high tAIs. Each scanning window has 500 genes for N. crassa, S. cerevisiae, D. melanogaster and C. elegans, and 1000 genes for the different tissues of M. musculus. Methods used to quantify the relative protein levels are indicated.)
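The scanning-window analysis behind Figure 1B can be sketched as follows, assuming a hypothetical data frame `df` with per-gene columns `tAI`, `length` (protein length) and `abundance`; the log transform is an illustrative choice, not necessarily the exact transformation used in the study.

```r
# Rank genes by tAI, then compute Pearson's R between protein abundance
# and protein length within a sliding window of 500 genes (1000 for mouse).
df  <- df[order(df$tAI), ]
win <- 500
starts <- seq(1, nrow(df) - win + 1)

r <- sapply(starts, function(i) {
  w <- df[i:(i + win - 1), ]
  cor(log10(w$abundance), log10(w$length), method = "pearson")
})

plot(df$tAI[starts + win %/% 2], r, type = "l",
     xlab = "window tAI", ylab = "Pearson's R (abundance vs length)")
```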
To determine whether the codon usage- and length-dependent effect on protein levels is due to regulation at the mRNA level, we determined the correlations between mRNA levels and CDS lengths as a function of gene codon usage using our RNA-seq results from the Neurospora WT strain (see below). As shown in Supplementary Figure S3, codon usage does not appear to affect the negative correlation between mRNA level and CDS length, suggesting that the codon usage effect on the correlation between protein abundance and length is likely regulated at the translational level.
The codon usage effects on mRNA translation and ribosome density are CDS length-dependent
To confirm the codon usage effect on protein abundance in a CDS length-dependent manner, we created four N-terminally Flag-tagged reporter constructs with different codon usage biases and CDS lengths for expression (Figure 2A). The CDS regions of the reporters correspond to the sequence of NCU05784, which encodes a small (125 aa), hypothetical protein, which we named elongation-dependent phosphorylated protein (EDP, see below). To determine the effect of CDS length on protein expression independent of codon usage, we created the 1× EDP and 3× EDP (3 tandem EDP repeats) constructs (Figure 2A). The EDP open reading frames are composed of either the WT codons or OPT codons. These expression constructs were targeted to the csr-1 locus in the N. crassa genome. Homokaryotic transformants containing each reporter construct were obtained. For the WT codon usage constructs, 1× EDP was produced at a considerably higher level than 3× EDP (Figure 2B and C). Codon optimization resulted in higher protein levels for both 1× EDP and 3× EDP, but the codon optimization effect on protein up-regulation was much more robust for 3× EDP than for 1× EDP, such that their abundances were comparable after codon optimization (Figure 2B, C and Supplementary Figure S4A). Thus, consistent with the bioinformatics analysis results, codon usage has differential effects on protein expression in a CDS length-dependent manner: optimal codon usage preferentially allows large proteins to be efficiently expressed, and non-optimal codon usage has a more potent inhibitory effect on the expression of larger proteins than smaller ones.
[Figure 1 legend (displaced in extraction): Proteome-wide protein abundance data are described and cited in Materials and Methods. All genes with detected protein levels were ranked by their tAIs, and the Pearson's correlation coefficients were calculated in continuous scanning windows from low to high tAIs. Each scanning window has 500 genes for N. crassa, S. cerevisiae, D. melanogaster and C. elegans and 1000 genes for different tissues of M. musculus. Methods used to quantify the relative protein levels are indicated.]
To determine whether codon usage affects translation efficiency in a CDS length-dependent manner, we performed ribosome profiling using the WT strains expressing the different reporter proteins. Ribosome profiling is a powerful approach for studying mRNA translation dynamics in vivo as it provides codon-level resolution of ribosome locations and ribosome occupancy on mRNAs (21,80,81). Ribosome density on a given mRNA can be determined by the number of its RPFs normalized by its RNA level within the CDS region, and can reflect the ribosome flux of that mRNA (80). The relative ribosome density on the WT 3× EDP mRNA was significantly lower than that on the WT 1× EDP mRNA (Figure 2D and Supplementary Figure S4B). Note that the CDS regions of these two mRNAs have the identical codon usage profile. In contrast, the relative ribosome densities were comparable for the optimized (1× OPT and 3× OPT) mRNAs (Figure 2D and Supplementary Figure S4B). These results suggest that non-optimal codon usage preferentially inhibits translation of mRNAs with longer CDS regions. In addition, despite the higher EDP protein levels for the OPT reporters, their ribosome densities were actually lower than that for the 1× WT reporter (Supplementary Figure S4B). Because we previously showed that optimal codons can dramatically reduce ribosome densities on mRNAs in Neurospora due to increased elongation speed (21,23), this result suggests that the reduction of ribosome density caused by increased elongation speed due to codon optimization more than counters the increase of ribosome density caused by increased translation initiation efficiency.
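To make the density metric concrete, the following minimal Python sketch computes relative ribosome density as the ratio of RPF counts to RNA-seq counts over the same CDS window; this is not the authors' exact pipeline, the counts are hypothetical, and library-size normalization is assumed to have been applied already.

```python
def relative_ribosome_density(rpf_count: float, rna_count: float) -> float:
    """Relative ribosome density of one CDS.

    Both counts are taken over the same CDS window, so length
    normalization cancels in the ratio; the result reflects ribosome
    flux per mRNA up to a library-size scaling factor.
    """
    return rpf_count / rna_count

# Hypothetical counts for two reporters with identical codon usage
density_short = relative_ribosome_density(rpf_count=5200, rna_count=800)
density_long = relative_ribosome_density(rpf_count=9800, rna_count=2600)
print(density_short, density_long)  # the longer CDS shows the lower density
```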
To examine the CDS length-dependent effect on translation genome-wide, we determined the ribosome densities within CDS regions of all predicted Neurospora genes using the ribosome profiling results of the WT strain and calculated the correlations between tAIs and ribosome densities as a function of CDS lengths. It is important to note that codon usage has a major impact on elongation rates and optimal codons can dramatically reduce ribosome densities on mRNAs in Neurospora (21,23). Thus, ribosome density measurement will overestimate the ribosome flux for mRNAs with poor codon usage and underestimate the flux for those with optimal codon usage. The correlations between gene tAIs and ribosome densities are weakly negative for short mRNAs (Figure 2E). As CDS length increases, however, the correlations gradually become positive, suggesting that optimal codon usage of long mRNAs positively correlates with ribosome density (Figure 2E). Because optimal codons result in fast elongation rates, which lower ribosome density, the weak positive correlation actually indicates a strong positive effect of optimal codon usage on ribosome flux. Thus, codon usage affects ribosome flux on mRNAs in a CDS length-dependent manner: codon optimality preferentially enhances translation efficiency/ribosome flux of long mRNAs. These results also indicate the existence of a feedback mechanism from translation elongation to regulate translation initiation.
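The scanning-window correlation analysis described here can be sketched as follows. The 500-gene window follows the text; the step size and the use of Python's statistics.correlation (available from Python 3.10) are illustrative choices, not the authors' pipeline.

```python
from statistics import correlation  # Pearson r; Python 3.10+

def windowed_correlation(genes, window=500, step=100):
    """genes: list of (cds_len, tai, ribo_density) tuples.

    Rank genes by CDS length, then compute Pearson r between tAI and
    ribosome density in consecutive windows. Returns (mean_len, r) pairs.
    """
    genes = sorted(genes, key=lambda g: g[0])  # rank by CDS length
    out = []
    for start in range(0, len(genes) - window + 1, step):
        win = genes[start:start + window]
        tais = [g[1] for g in win]
        dens = [g[2] for g in win]
        mean_len = sum(g[0] for g in win) / window
        out.append((mean_len, correlation(tais, dens)))
    return out
```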
Codon usage and CDS length-dependent separation of gene functions in the genome
Because codon usage differentially affects protein expression levels in a CDS length-dependent manner, we hypothesized that large proteins with critical functions would have optimal codon usage profiles to allow their efficient synthesis. To examine this, we grouped all predicted N. crassa genes based on their CBI values and CDS lengths and performed gene functional enrichment analyses for four different groups of genes (each with 1000 genes) (Supplementary Figure S5): (i) those with the longest CDS regions among those with strong codon usage biases (CBI ≥ 0.3), (ii) those with the longest CDS regions among those with non-optimal codon usage biases (CBI ≤ 0.15), (iii) those with the shortest CDS regions among those with strong codon usage biases and (iv) those with the shortest CDS regions among those with non-optimal codon usage biases. There was no significant functional enrichment (P-value < 1e−10) for genes in group (iv). In contrast, many genes with similar functions or in the same biological process were significantly enriched in the other three groups (Figure 2F, left panels and Supplementary Table S2). As predicted, the genes with long CDS regions and optimal codon usage are enriched for functional categories associated with essential cellular processes such as amino acid activation and amino acid metabolic process, tRNA aminoacylation, plasma membrane components, and non-coding RNA metabolic process (Figure 2F, left panels and Supplementary Table S2). In contrast, the genes with long CDS regions and poor codon usage profiles are mostly enriched for functional categories involved in responses to environmental stimulus, cell communication, and transcriptional regulation (Figure 2F, left panels and Supplementary Table S2). The genes with the shortest CDS regions and optimal codons are significantly enriched for functional categories related to translation, ribosomal proteins and ribosome biogenesis, and mitochondrial components and the respiratory chain (Figure 2F, left panels and Supplementary Table S2).
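The four-way grouping can be sketched as below; the CBI thresholds (≥0.3, ≤0.15) and group size (1000) follow the text, while the data layout is an assumption for illustration.

```python
def select_groups(genes, n=1000):
    """genes: list of dicts with 'cbi' (codon bias index) and 'len' (CDS aa).

    Returns the four groups used for the enrichment analysis: the n
    longest/shortest CDSs among optimal- and non-optimal-codon genes.
    """
    optimal = [g for g in genes if g["cbi"] >= 0.3]
    nonopt = [g for g in genes if g["cbi"] <= 0.15]

    def by_len(pool, longest):
        return sorted(pool, key=lambda g: g["len"], reverse=longest)[:n]

    return {
        "long_optimal": by_len(optimal, True),    # group (i)
        "long_nonopt": by_len(nonopt, True),      # group (ii)
        "short_optimal": by_len(optimal, False),  # group (iii)
        "short_nonopt": by_len(nonopt, False),    # group (iv)
    }
```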
We also classified the genes of the three groups in Figure 2F, left panels, into six mutually exclusive lineage specificity groups based on their conservation in other organisms (82): (i) eukaryote/prokaryote-core (genes with homologs in non-fungal eukaryotes and/or prokaryotes), (ii) dikarya-core (genes with homologs in Basidiomycota and Ascomycota species), (iii) Ascomycota-core, (iv) Pezizomycotina-specific, (v) N. crassa-orphan genes and (vi) others (genes with homologs identified in prokaryotes or non-fungal eukaryotes in addition to Pezizomycotina, but not in members of the Basidiomycota, Saccharomycotina or Taphrinomycotina). As shown in the right panels of Figure 2F, the genes with long CDS regions and optimal codon usage are mostly genes in classes (i) and (ii), indicating that they are conserved beyond fungi and likely have functions critical for cell survival. In contrast, the genes with long CDS regions and poor codon usage profiles are mostly N. crassa-specific and Pezizomycotina-specific. Systematic deletion studies of Neurospora genes previously revealed that the essential genes are mostly genes that are conserved beyond fungi, while the Neurospora-specific genes are not critical for cell survival (65). Together, these results are consistent with our hypothesis that optimal codon usage is a mechanism that allows large proteins required for critical cellular functions to be efficiently produced.
Ribosome stalling at a stage between pre-accommodation and pre-translocation induces eIF2α phosphorylation
The effect of codon usage on ribosome flux suggests that codon usage-dependent elongation can feed back to regulate translation initiation under nutrient-replete growth conditions. Phosphorylation of eIF2α is an important regulatory mechanism of translation initiation, and is known to be induced by many types of stress conditions to result in global inhibition of translation initiation of many mRNAs (42-44,50). Since codon usage has been shown to play an important role in determining elongation speed and rare codons cause ribosome pausing with an empty A site in Neurospora (21,23), we first examined whether ribosome stalling can trigger eIF2α phosphorylation by using different pharmacological inhibitors that block translation elongation at different steps of the eukaryotic translation elongation cycle (Figure 3A).
We first treated Neurospora cultures with 3-AT, a competitive inhibitor of the his-3 gene product, which results in accumulation of uncharged tRNAs and cellular amino acid starvation. This treatment resulted in a significant elevation of eIF2α phosphorylation (Figure 3B). Puromycin (Puro, tyrosyl-tRNA-like) and tigecycline (TIG, tetracycline-like) can cause ribosome stalling at the stage between pre-accommodation and pre-translocation in the translation elongation cycle (83-85) (Figure 3A). Treatment of Neurospora cultures with either agent also enhanced eIF2α phosphorylation in vivo (Figure 3B). In contrast, treatments with anisomycin (ANS) and cycloheximide (CHX), which inhibit peptide bond formation and translocation, respectively, resulted in dramatic dose-dependent decreases of eIF2α phosphorylation (Figure 3C). These results suggest that ribosome stalling at the stage between pre-accommodation and pre-translocation, but not at other stages of the elongation cycle, induces eIF2α phosphorylation. This phenomenon may be caused by distinct ribosome conformations when ribosomes stall at different functional states. Consistent with this notion, the alteration of ribosome conformation at distinct elongation states caused by CHX, TIG and ANS treatments was previously demonstrated experimentally (84). During the preparation of this manuscript, a similar conclusion on the effects of some of these inhibitors was also reached in yeast (86).
Induction of eIF2α phosphorylation by mRNA translation is dependent on CDS length and codon usage
Although eIF2α phosphorylation can be induced by treatment of cultures with translation inhibitors, it is not clear whether it can be regulated by codon usage or CDS length under normal growth (nutrient-replete) conditions. To examine these possibilities, we took advantage of the Neurospora cell-free in vitro translation system that was previously shown to accurately reflect protein translation in vivo (21,87,88). Cellular mRNAs were depleted from this system by micrococcal nuclease digestion so that the translation of a single species of mRNA and its impact on eIF2α phosphorylation could be examined. We synthesized a series of capped and polyadenylated WT luciferase (luc) mRNAs with an in-frame stop codon at different positions ranging from the 10th codon to the 310th codon from the start codon (Figure 3D). Quantification of eIF2α phosphorylation after translation of these mRNAs revealed that there was a CDS length-dependent effect on eIF2α phosphorylation: mRNAs with long CDS regions result in higher levels of eIF2α phosphorylation than those with short CDS regions (Figure 3E).
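A truncated luc series like that in Figure 3D can be designed in silico along these lines; the helper below is a hypothetical sketch, and the choice of TAA as the stop codon and retention of the downstream sequence as 3' UTR-like sequence are assumptions, not details from the paper.

```python
def truncate_at_codon(cds: str, k: int, stop: str = "TAA") -> str:
    """Return a CDS with codons 1..k-1 intact and a stop codon at position k."""
    if len(cds) % 3 != 0 or not cds.startswith("ATG"):
        raise ValueError("expected an in-frame CDS beginning with ATG")
    # Codons downstream of the inserted stop are dropped here; keeping them
    # as untranslated sequence would be an equally valid design choice.
    return cds[: 3 * (k - 1)] + stop

# Stop positions as in Figure 3D: the 10th, 110th, 210th and 310th codons
positions = [10, 110, 210, 310]
demo_cds = "ATG" + "GCT" * 400  # placeholder CDS, not the luc sequence
series = {k: truncate_at_codon(demo_cds, k) for k in positions}
```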
To determine whether eIF2α phosphorylation is dependent on codon usage, we evaluated the eIF2α phosphorylation level in the presence of the WT luc mRNA or the OPT luc mRNA. As expected, the translation of the WT mRNA resulted in a significantly higher level of eIF2α phosphorylation than the translation of the OPT version (Figure 3F and Supplementary Figure S6A). To confirm that this result is not gene specific, we synthesized and translated the WT or OPT versions of mRNAs encoding one or two copies of the GFP CDS region (1× GFP and 2× GFP, respectively) (Supplementary Figure S6B). Both of the WT GFP mRNAs induced significantly higher levels of eIF2α phosphorylation than the OPT mRNAs (Figure 3G). Strikingly, the induction of eIF2α phosphorylation by the WT 2× GFP was much higher than that by the WT 1× GFP, while the OPT 2× GFP mRNA had little effect on eIF2α phosphorylation compared to the OPT 1× GFP. Together, these results demonstrate that translation elongation can induce eIF2α phosphorylation in a CDS length-dependent and codon usage-dependent manner in the absence of translation stress. Thus, codon usage and CDS length can potentially regulate translation initiation by affecting eIF2α phosphorylation.
Loss of the eIF2α kinase CPC-3 preferentially up-regulates protein expression for mRNAs with long CDS and poor codon usage
eIF2α phosphorylation does not have to cause global down-regulation of translation initiation (44,50). Our findings that eIF2α phosphorylation depends on codon usage and CDS length in the absence of translation stress suggest that this is an elongation-dependent feedback mechanism that may alter the translation efficiency of specific mRNAs. We hypothesize that long mRNAs and those enriched with rare codons cause local accumulation of phosphorylated eIF2α, resulting in specific rather than general suppression of mRNA translation. cpc-3 (cross pathway control-3, NCU01187) encodes the Neurospora homolog of the yeast and mammalian GCN2, and is the only known kinase responsible for eIF2α phosphorylation in Neurospora (52). As expected, eIF2α phosphorylation is completely abolished in the cpc-3 deletion strain (Supplementary Figure S1). We compared the expression of the four EDP reporters described above in the WT and cpc-3 strains. Although the deletion of cpc-3 did not affect the expression levels of the WT or OPT 1× EDP or the OPT 3× EDP, it significantly increased the protein level of the WT 3× EDP (Figure 4A, B and Supplementary Figure S7). These results suggest that eIF2α phosphorylation preferentially inhibits the translation of long CDS mRNAs with poor codon usage, resulting in their specific translation inhibition rather than a general translation inhibition. Although deletion of cpc-3 increased the protein level of the WT 3× EDP, its level was still lower than that of the WT 1× EDP (Supplementary Figure S7), suggesting that the effect of CDS length on protein abundance is determined by both CPC-3-dependent and CPC-3-independent mechanisms. A CPC-3-independent mechanism may be involved in the negative influence of CDS length on translation due to less efficient ribosome recycling for mRNAs with long CDS than for short CDS mRNAs (39,41). It should also be noted that the negative effect of CDS length on protein level depends on codon usage, because unlike the WT reporters, the protein levels of the 1× OPT and 3× OPT EDP are comparable (Figure 4A and Supplementary Figure S7), which is consistent with the proteomic data analysis results (Figure 1). This result indicates that the negative effect of CDS length on protein production is attenuated for mRNAs with optimal codon usage.
Deletion of cpc-3 resulted in a decrease in ribosome density on the reporter mRNAs with long CDS
To understand how CPC-3 influences translation in vivo, we also performed ribosome profiling experiments in the cpc-3 strains expressing the different EDP reporters under nutrient-replete conditions and compared the relative ribosome densities of the EDP reporter mRNAs in the WT and cpc-3 strains by normalizing the number of RPFs on CDS regions with mRNA levels. Consistent with the result in Figure 2D, the relative ribosome density of the 3× WT EDP was significantly decreased compared to that of the 1× WT EDP in the WT strain, while that of the 3× OPT EDP was comparable to that of the 1× OPT EDP (Figure 4C). In addition, the ribosome densities of the 1× and 3× WT EDP mRNAs were both significantly higher than those of their OPT counterparts in the WT strain (Figure 4C), indicating that codon optimization results in a faster translation elongation speed, which reduced ribosome density despite the strong up-regulation of OPT mRNA translation (21). Surprisingly, compared to the WT strain, the relative ribosome density of the 3× WT EDP but not the 1× WT EDP was significantly decreased in the cpc-3 strain (Figure 4C). The up-regulation of the 3× WT EDP protein level but a decrease of ribosome density on its mRNA in the cpc-3 strain suggests that, in addition to its role in translation initiation, CPC-3 may also have a role that preferentially inhibits translation elongation on mRNAs with long CDS and poor codon usage. As a result, the increase of elongation speed on the 3× WT EDP mRNA in the cpc-3 strain more than counters the effect of the increase of translation initiation, resulting in a decrease of ribosome density.
CPC-3 slows down translation elongation rate in a codon usage-dependent manner
To confirm our conclusion above and determine the effect of CPC-3 genome-wide, we calculated the gene-specific ribosome densities for all predicted Neurospora genes using the ribosome profiling and accompanying RNA-seq results of the WT and cpc-3 strains. We found that among genes with more than 2-fold changes (FDR < 0.05) in ribosome density in the cpc-3 strain compared to the WT strain, 98% of the genes had a decreased ribosome density whereas only 2% had an increased ribosome density (Figure 5A). To further confirm this result, we also performed polysome profiling experiments, which showed that the polysome/monosome ratio in the cpc-3 strain was lower than that in the WT strain (Supplementary Figure S8A and B), consistent with the ribosome profiling result.
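The fold-change classification behind Figure 5A can be sketched as follows; the 2-fold and FDR < 0.05 cutoffs follow the text, while the FDR values are assumed to be precomputed (e.g. by a Benjamini-Hochberg correction on replicate-based tests).

```python
import math

def classify(genes, fc_cut=2.0, fdr_cut=0.05):
    """genes: list of (density_cpc3, density_wt, fdr) tuples.

    Count genes whose relative ribosome density changes more than
    fc_cut-fold (cpc-3 vs WT) at the given FDR, split by direction.
    """
    down = up = 0
    for ko, wt, fdr in genes:
        lfc = math.log2(ko / wt)
        if fdr < fdr_cut and abs(lfc) > math.log2(fc_cut):
            if lfc < 0:
                down += 1
            else:
                up += 1
    total = down + up
    if total == 0:
        return {"down_pct": 0.0, "up_pct": 0.0}
    return {"down_pct": 100 * down / total, "up_pct": 100 * up / total}
```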
The global decrease of ribosome densities on mRNAs in the cpc-3 strain is unexpected given the well-established role of CPC-3 and eIF2α phosphorylation in inhibiting translation initiation. To confirm the inhibitory role of CPC-3 and eIF2α phosphorylation in translation, we performed a 35S-methionine pulse-labeling experiment to compare the overall translation efficiencies between the WT and cpc-3 strains grown at 30°C. As expected, the cpc-3 strain showed a significantly higher level of 35S-methionine incorporation than the WT strain (Figure 5B), indicating that CPC-3 and eIF2α phosphorylation indeed inhibit general translation efficiency in Neurospora. As with the 3× WT EDP reporter mRNA, the increase of general translation efficiency but reduction of ribosome densities on most mRNAs in the cpc-3 strain suggests an overall increased translation elongation rate in the cpc-3 strain that can more than counter the effect of the increase of translation initiation on ribosome density.
To determine the role of CPC-3 in translation elongation rate, we calculated the RCDTs for all 61 amino acid-encoding codons in the WT and cpc-3 strains using the ribosome profiling results. Consistent with our previous studies (21,23,58), there was a clear codon usage bias in RCDTs for all codon families in the WT strain. The most preferred synonymous codon was always the one with the lowest RCDT in each codon family, while rare codons had the highest RCDTs (Figure 5C and D). Although the codon usage biases in RCDTs did not change in the cpc-3 strain, RCDTs were reduced for all codons (Figure 5C and D). This indicates that there was a global increase in translation elongation rate in the cpc-3 strain, resulting in the decreased ribosome density on most mRNAs despite the increase of translation initiation. We next examined whether the effect of CPC-3 on translation elongation is codon usage-dependent. We observed that the decrease of RCDTs in the cpc-3 strain compared to the WT strain was always greater for the rare codons than for the most preferred codons in all codon families (Figure 5E), indicating that CPC-3 preferentially slows translation elongation at rare codons.
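A simplified sketch of a relative codon decoding time (RCDT) calculation is given below: average A-site ribosome occupancy at each codon, normalized within each gene so that coverage differences between genes cancel. A-site offset assignment and read filtering are simplified relative to published ribosome-profiling pipelines and are not necessarily the authors' exact parameters.

```python
from collections import defaultdict

def rcdt(genes):
    """genes: list of (codon_list, footprint_counts), equal length per gene.

    footprint_counts[i] is the A-site-assigned footprint count at codon i.
    Returns the mean normalized occupancy per codon identity (a proxy for
    relative decoding time: slower codons accumulate more footprints).
    """
    sums, n = defaultdict(float), defaultdict(int)
    for codons, counts in genes:
        mean_cov = sum(counts) / len(counts)
        if mean_cov == 0:
            continue  # skip genes without footprint coverage
        for codon, c in zip(codons, counts):
            sums[codon] += c / mean_cov  # within-gene normalization
            n[codon] += 1
    return {codon: sums[codon] / n[codon] for codon in sums}
```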
To further confirm this conclusion, we utilized the Neurospora cell-free translation system and the WT and OPT luciferase reporters, which were previously used to demonstrate the codon usage effect on elongation speed (21). Because luciferase is known to be folded co-translationally and becomes functional within a few seconds after the completion of translation, the time of first appearance (TFA) values of the luciferase signal for the WT and OPT luc mRNAs reflect differences in translation elongation rates (21,58,89). In addition, translation initiation time was previously estimated to be less than several seconds (37,47,90). Thus, the TFA changes should reflect changes in elongation rates. Similar to what we reported previously (21), the TFA of the OPT luc mRNA was significantly shorter than that of the WT luc in the WT extracts (Figure 6A and B). In the cpc-3 extracts, the TFA values of both the WT and OPT luc mRNAs were reduced, confirming the effect of CPC-3 on translation elongation. Importantly, the impact of loss of CPC-3 on translation elongation rate was clearly codon usage-dependent: the TFA was significantly faster (by ∼40 s) for the WT luc mRNA in the cpc-3 extracts than in the WT extracts, but for the OPT luc mRNA, the TFA was only marginally reduced in the cpc-3 extracts and was not statistically different from that in the WT extracts (Figure 6A and B).
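A TFA readout from a luminescence time course can be sketched as below: the first time point at which the signal rises above background. The baseline-plus-3-SD threshold is an assumed detection rule for illustration, not the authors' stated criterion.

```python
from statistics import mean, stdev

def tfa(times, signal, baseline_points=10):
    """Time of first appearance of a luciferase signal.

    times, signal: parallel lists; the first baseline_points samples are
    assumed to precede any detectable translation product.
    """
    base = signal[:baseline_points]
    cutoff = mean(base) + 3 * stdev(base)  # assumed detection threshold
    for t, s in zip(times[baseline_points:], signal[baseline_points:]):
        if s > cutoff:
            return t
    return None  # signal never rose above background
```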
When the WT 1× EDP reporter was expressed in the WT strain, we noticed that there were several protein species with different mobilities in SDS-PAGE gels (Figure 4A). Phosphatase treatment of the protein extracts indicated that the bands that migrated more slowly were phosphorylated EDP (Supplementary Figure S9). EDP was mostly in the hypo-phosphorylated form in the WT strain but was mostly hyper-phosphorylated in the cpc-3 strain (Figures 4A and 6C). The expression of the OPT 1× EDP, however, resulted in hyper-phosphorylation of EDP in both the WT and cpc-3 strains (Figure 6C). This codon usage-dependent change in the protein phosphorylation profile is very similar to what we previously observed when the codon usage profiles of genes encoding the circadian clock proteins FRQ and PER from Neurospora and Drosophila, respectively, were changed (16,30,63). These results suggest that the EDP phosphorylation profile is affected by the co-translational protein folding process, which is sensitive to the translation elongation rate regulated by codon usage. Rapid translation elongation caused by either codon optimization or deletion of cpc-3 results in an altered EDP structure that promotes its phosphorylation. To further confirm this, we compared the WT 1× EDP expression profiles for cultures grown at 20 and 30°C. We have shown previously that higher temperature increases the translation elongation rate (21). As expected, in the WT strain, the protein expressed from the WT 1× EDP was hypo-phosphorylated at 20°C but became hyper-phosphorylated at 30°C (Figure 6D). On the other hand, EDP expressed from the WT 1× EDP mRNA was hyper-phosphorylated at both temperatures in the cpc-3 strain. Together, these results show that CPC-3 regulates not only translation initiation via eIF2α phosphorylation but also translation elongation in a codon usage-dependent manner. Thus, CPC-3 plays an important role in determining the codon usage effect on elongation speed so that optimal codons are decoded much faster than rare codons. Our results here also caution against the use of ribosome density as a reflection of translation efficiency because the elongation rate can have a major impact on ribosome density.
CPC-3 influences translation kinetics in a CDS length-dependent manner
Because the CPC-3-mediated eIF2α phosphorylation is dependent on both codon usage and CDS length, we examined whether the role of CPC-3 in translation kinetics is also influenced by CDS length. We calculated the proportions of genes with up-regulated (fold change of cpc-3/WT > 2) or down-regulated (fold change of cpc-3/WT < 0.5) ribosome density in a 500-gene scanning window. After ranking genes by their CDS lengths from short to long, we found that as CDS length gradually increased, the proportion of genes with down-regulated ribosome density increased markedly (Figure 7A). In contrast, the proportion of genes with up-regulated ribosome density decreased as CDS length increased in the same corresponding window (Figure 7A). When we ranked genes by their log2[fold change (cpc-3/WT)] of ribosome density from low to high and determined the averages of their CDS lengths in 500-gene scanning windows, it was clear that the genes with down-regulated ribosome density are mostly those with long CDS mRNAs (>700 aa), whereas the genes with up-regulated or unchanged ribosome density tend to be short CDS mRNAs (Figure 7B). These results are also consistent with the results of the four EDP reporters (Figure 4C), which showed that the relative ribosome density was significantly decreased for the 3× EDP mRNAs but not for the 1× EDP mRNAs in the cpc-3 strain. Together, these results suggest that the inhibitory effect of CPC-3 on translation kinetics is dependent on both codon usage and CDS length.
[Figure 7 legend, panels G-H (displaced in extraction): (G) Comparison of daily growth rates of the WT and cpc-3 strains by race tube assay at 25, 34 and 42°C; the asterisk indicates P < 0.05 (n = 6), Student's two-tailed t-test (91). (H) A schematic model of translation elongation feeding back to regulate translation initiation and elongation speed in a codon usage- and CDS length-dependent manner. Ribosomes do not pause at optimal codons, whereas rare codons cause ribosome pausing, which potentially promotes the interaction between CPC-3 and ribosomes, resulting in CPC-3 activation and phosphorylation of eIF2α. Phosphorylated eIF2α-GDP cannot be recycled to eIF2α-GTP, which inhibits the formation of the pre-initiation complex (PIC) and therefore translation initiation. An mRNA with a short CDS triggers fewer rare codon-mediated pausing events and thus less eIF2α phosphorylation-mediated inhibition of initiation, whereas an mRNA with a long CDS triggers more pausing events, resulting in a high local concentration of phosphorylated eIF2α-GDP that inhibits ribosome recycling and translation re-initiation. In addition, CPC-3 inhibits the translation elongation rate in a codon usage-dependent manner so that optimal codons are decoded faster than non-optimal codons.]
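The scanning-window proportion analysis of Figure 7A can be sketched as follows; the 500-gene window and the >2 / <0.5 fold-change cutoffs follow the text, while the step size is an illustrative choice.

```python
def window_proportions(genes, window=500, step=100):
    """genes: list of (cds_len, fold_change) with fold_change = cpc-3 / WT.

    Rank genes by CDS length and, in sliding windows, compute the
    fractions with up-regulated (>2-fold) or down-regulated (<0.5-fold)
    ribosome density. Returns (mean_len, frac_up, frac_down) rows.
    """
    genes = sorted(genes, key=lambda g: g[0])
    rows = []
    for i in range(0, len(genes) - window + 1, step):
        win = genes[i:i + window]
        frac_up = sum(1 for _, fc in win if fc > 2.0) / window
        frac_down = sum(1 for _, fc in win if fc < 0.5) / window
        mean_len = sum(length for length, _ in win) / window
        rows.append((mean_len, frac_up, frac_down))
    return rows
```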
We then performed quantitative proteomic MS analyses to identify differentially expressed proteins between the cpc-3 strain and its complementation strain using TMT mass tagging technology (Supplementary Table S3). The results showed that, after ranking proteins by their lengths from short to long, the proportion of up-regulated proteins (FDR < 0.05) in a 1000-gene scanning window increases as protein size increases (Supplementary Figure S10A). In addition, comparison of the protein length profiles showed that the up-regulated proteins are preferentially larger proteins (with an average of 645 aa) than the predicted proteome (with an average of 450 aa) (Supplementary Figure S10B). These results suggest that CPC-3 preferentially inhibits the expression of large proteins. It should be noted, however, that our MS analysis preferentially identifies abundant proteins and failed to detect the vast majority of proteins encoded by mRNAs with poor codon usage (13).
Codon decoding rates are CDS length-dependent and are regulated by CPC-3
It was previously assumed that the same codons are recognized and translated with similar efficiency on different mRNAs (21,22). The CDS length-dependent effect on ribosome density prompted us to examine whether the codon decoding rate is also affected by CDS length. Thus, we compared the RCDTs of all codons using the ribosome profiling data of the WT strain for mRNAs with long CDS regions (>600 aa) or short CDS regions (<300 aa). Remarkably, all codons have higher RCDTs on long CDS mRNAs than on short CDS mRNAs (Figure 7C, D and Supplementary Figure S11). However, the difference in the RCDT of each codon between long and short CDS mRNAs was much smaller in the cpc-3 strain than in the WT strain (Figure 7E). These results suggest that CPC-3 also regulates the CDS length-dependent effect on codon decoding rate. By analyzing previously published ribosome profiling results in S. cerevisiae (91), a similar CDS length-dependent effect on codon decoding rates was also observed for all codons (Figure 7F), suggesting that the effect of CDS length on elongation rate is conserved in eukaryotes. Because of the role of the elongation rate in regulating co-translational protein folding, and because the folding of large proteins is more complicated than that of small proteins, the CDS length-dependent regulation of translation is likely an adaptive mechanism that slows translation elongation to facilitate optimal co-translational folding of large proteins.
CPC-3 deletion resulted in increased sensitivity to heat shock treatment
Because translation kinetics influences co-translational protein folding (6,32,92,93), the deletion of cpc-3 in Neurospora should broadly affect co-translational protein folding, as indicated by the change in the EDP phosphorylation profile. If so, proteins in the cpc-3 strain may be more sensitive to conditions that trigger protein misfolding, resulting in impaired cell growth. To examine this possibility, we compared the growth rates of the WT and cpc-3 strains at normal growth temperatures (20-34°C) and at 42°C. The 42°C treatment induces a heat shock response in Neurospora and impairs cell growth. The heat shock treatment should also make nascent proteins prone to misfolding if their co-translational folding processes are not optimal. Although the two strains had similar growth rates at 25°C, the cpc-3 strain grew more rapidly than the WT strain at 34°C, which may be due to elevated protein translation (Figure 7G). At 42°C, however, the growth of the cpc-3 strain was almost completely inhibited whereas the WT strain still exhibited modest growth. This result indicates that the deletion of cpc-3 causes increased sensitivity to heat shock treatment, which is consistent with the role of CPC-3 in regulating translation kinetics, which in turn influences protein folding and function.
DISCUSSION
In this study, we showed that codon optimality regulates protein translation in a CDS length-dependent manner by regulating both translation initiation rate and elongation speed. Analyses of the proteomic results from Neurospora, yeast, fly, worm and mouse showed that protein abundance negatively correlates with protein length genome-wide. The negative correlation, however, is dependent on codon usage: as codon optimality increases, the negative correlation progressively weakens. The conserved nature of this observation suggests a common mechanism mediated by codon usage that regulates protein synthesis in a protein size-dependent manner. Using gene reporters with different codon usage biases and different CDS lengths, we showed that non-optimal codon usage preferentially reduced the production levels of large proteins and that optimal codon usage eliminated the length-dependent effect on protein production in Neurospora. Gene functional enrichment analysis showed that there is a separation of gene functions based on codon usage and CDS length: the genes encoding long mRNAs with optimal codons are significantly enriched for functional categories of essential cellular processes, whereas those encoding long mRNAs with non-optimal codon usage are enriched for functional categories involved in regulatory processes. These results suggest that optimal codon usage is a mechanism that permits efficient production of large proteins critical for cell survival.
Further, we showed that codon optimality regulates ribosome density and ribosome flux on mRNAs genome-wide in a CDS length-dependent manner. We showed that codon usage- and CDS length-dependent eIF2α phosphorylation occurs in the absence of translation stress, suggesting a mechanism for how codon usage and CDS length regulate translation by feeding back on translation initiation (Figure 7H). We propose that rare codons cause ribosomes with empty A sites to pause. Such ribosome pausing results in CPC-3 activation and eIF2α phosphorylation. For translation of mRNAs with the same codon usage, the pausing occurs more often on long CDS mRNAs than on short CDS mRNAs due to the existence of more rare codons, resulting in a higher level of eIF2α phosphorylation, which can inhibit translation initiation by blocking the formation of the pre-initiation complex. However, as in yeast and mammals, CPC-3 may also have substrates other than eIF2α, such as the methionyl-tRNA synthetase. The phosphorylation of methionyl-tRNA synthetase has been shown to inhibit its activity, thus reinforcing the transient inhibition of translation initiation exerted by eIF2α phosphorylation (94,95). Thus, the eIF2α phosphorylation-independent functions of CPC-3 may also contribute to its roles in translation.
Unlike the eIF2α phosphorylation induced under stress conditions, which results in the integrated stress response and global repression of translation (42,53-56), the codon usage-dependent and CDS length-dependent induction of eIF2α phosphorylation is mRNA specific and does not cause a global increase of eIF2α phosphorylation under nutrient-replete growth conditions. Indeed, we showed that the deletion of cpc-3, which results in loss of eIF2α phosphorylation, preferentially increased the abundances of large proteins encoded by mRNAs with non-optimal codon usage (Figure 4A and B). This result suggests that a high local concentration of phosphorylated eIF2α specifically inhibits translation initiation of long CDS mRNAs with non-optimal codon usage. This notion is also consistent with our discovery that the correlation between codon usage and ribosome density increases as CDS length increases (Figure 2E). It is important to note that such an effect on ribosome density occurs despite the known dramatic opposing effect of codon usage on ribosome occupancy because of its role in elongation rate in Neurospora (21,23), suggesting that codon usage feeds back on translation initiation in an mRNA-specific manner. Consistent with our model that translation elongation feeds back to influence mRNA-specific translation initiation, it was previously proposed that translation initiation and elongation coordinate with each other to optimize protein production: mRNAs that encode high-abundance proteins usually have high translation initiation rates, fast elongation rates, and optimal codon usage (90). In addition, low levels of eIF2α phosphorylation may have specific rather than broad effects on translation (50,62). Importantly, it was previously shown that certain chemical modifications of mRNA transcribed in vitro can specifically enhance its translation in cells through attenuating eIF2α phosphorylation and increasing translation initiation (96). The specific effect of eIF2α phosphorylation can be caused by the compartmentalization of translation or by local ribosome recycling for translation re-initiation of circularized mRNAs (97-100).
Our data indicate that the presence of rare codons activates CPC-3 to phosphorylate eIF2α. The yeast and mammalian homolog GCN2 has been shown to be associated with ribosomes, and such association is important for GCN2 activation (101-104). The interaction of GCN2 with the ribosomal P-stalk can potently activate GCN2 in the absence of uncharged tRNA (59,60). Therefore, it is likely that rare codons trigger ribosome pausing, which may promote the interaction between GCN2 (or CPC-3) and ribosomes, resulting in kinase activation and the subsequent eIF2α phosphorylation. It was also previously shown that ribosomes have different conformations at different stages of the elongation cycle (84). It is possible that the ribosome conformation at a specific functional state with an empty A site promotes the interaction between ribosomes and CPC-3 and the activation of the latter.
As expected, deletion of cpc-3 resulted in a general increase of protein synthesis, confirming the role of eIF2α phosphorylation in inhibiting translation initiation. Unexpectedly, however, deletion of cpc-3 in Neurospora also had a major impact on translation elongation rate, and this effect was also dependent on codon usage and CDS length. The increased translation elongation rate in the cpc-3 strain was demonstrated by three independent methods: the relative codon decoding rates determined by ribosome profiling, the in vitro translation assay that was used to directly compare translation elongation rates, and the in vivo protein conformation reporter that is sensitive to elongation rate changes. Although the elongation rates of all codons were increased in the cpc-3 strain, the effects were codon usage-dependent: deletion of cpc-3 preferentially increased the elongation rates of mRNAs rich in rare codons, indicating that CPC-3 regulates elongation rates in a codon usage-dependent manner. Thus, in the WT strain, CPC-3 amplifies the codon usage effect on elongation speed so that codon decoding rates for optimal codons are much faster than those for rare codons.
The negative correlations between CDS length and protein abundance, translation initiation rate, and ribosome density suggest that increasing ORF length may decrease translation initiation efficiency (22,36-40). Our results here suggest that both CPC-3-dependent and CPC-3-independent mechanisms are involved in the CDS length-dependent regulation of protein production. The CPC-3-independent mechanism may be due to less efficient mRNA circularization, ribosome re-initiation or ribosome recycling for long CDS mRNAs than for short CDS mRNAs (39,41). Because the protein production rate should be mostly determined by the translation initiation rate on mRNAs unless there are significant numbers of translation abortion events (40), the up-regulated protein synthesis rates (including that of the 3× WT EDP reporter) in the cpc-3 strain suggest their increased translation initiation. However, if there is strong ribosome stalling or premature termination during translation elongation, an increase in translation elongation can also promote translation efficiency. Our results showed that CPC-3 plays an important role in regulating translation elongation in addition to its role in regulating translation initiation. Therefore, for the feedback mechanism mediated by codon usage and CDS length, the effects of CPC-3 and eIF2α phosphorylation on translation initiation should play an important role in regulating protein synthesis levels. In addition, the role of CPC-3 in translation elongation can also contribute to translation efficiency by regulating ribosome stalling or premature translation termination events (21,23).
Codon decoding rates are also regulated by CDS length: the rates are slower for mRNAs with long CDS regions and faster for mRNAs with short CDS regions. This phenomenon was observed in both N. crassa and S. cerevisiae, suggesting a conserved mechanism regulating translation elongation speed in eukaryotes. Because the elongation rate regulates the co-translational folding process, and because large proteins have more structural domains and should be more prone to misfolding than small proteins (93,105-107), a slow elongation rate likely promotes optimal co-translational folding of large proteins.
In higher eukaryotes, in addition to GCN2, eIF2α phosphorylation at Ser51 can also be mediated by protein kinase R, the PKR-like endoplasmic reticulum kinase and the heme-regulated inhibitor (42,50,51). Therefore, it is possible that these additional kinases also contribute to the feedback process from translation elongation to initiation. In addition, although our results demonstrated the involvement of CPC-3 in regulating the feedback mechanism from elongation to initiation in a codon usage- and CDS length-dependent manner, a CPC-3-independent mechanism may also exist.
Although how CPC-3 slows down the elongation rate is not known, GCN2 in both yeast and mammalian cells has been shown to interact with the translation elongation factor eEF1A, and this interaction keeps GCN2 inactive under nutrient-replete conditions (108-110). It is possible that this interaction also negatively influences the ability of eEF1A to deliver cognate aminoacyl-tRNAs to the ribosomal A site during elongation. Together, our results here suggest that translation elongation can feed back on both translation initiation and elongation kinetics through a mechanism that depends on codon usage and CDS length to allow optimal synthesis of proteins of different sizes.
DATA AVAILABILITY
Ribosome profiling and RNA-seq data have been submitted to the NCBI Gene Expression Omnibus under accession number GSE168595. Customized scripts used for ribosome profiling analyses were deposited at https://github.com/lxlscc0715/scripts-for-ribosome-profiling-and-RNA-seq.
SUPPLEMENTARY DATA
Supplementary Data are available at NAR Online.
"year": 2021,
"sha1": "31395d101d8f33b9857f960eea2d768bf2c278fc",
"oa_license": "CCBY",
"oa_url": "https://academic.oup.com/nar/article-pdf/49/16/9404/40358629/gkab729.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "a81cb843263becd2d488e8a12e1ac202da99456e",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Appropriate locations of fixed bearings of continuous beams considering rail-bridge thermal interaction
Due to the rail-bridge thermal interaction, the high additional axial force in continuously welded rails on continuous bridges may lead to rail buckling or breaking. However, there is little research on the influence of the location of the fixed bearing of a continuous beam on the additional force in the rail. In order to study the influence of the bridge bearing arrangement on the additional longitudinal force of CWR, a thermal interaction model is established for the rail and for simple and continuous beams considering nonlinear stiffness, and methods are proposed to determine the locations of the fixed bearings of continuous beams at which the maximum additional forces in the rail reach their minimum values. Multiple continuous beams with several different lengths and simple beams with three types of bearing arrangements are taken into account to establish how the locations of the fixed bearings of continuous beams affect the maximum additional forces in the rail. The results show that, for a given number of continuous beams, the ratios of the distances between adjacent fixed bearings to the distance between the two fixed bearings of the simple beams neighbouring the first and last continuous beams, respectively, are approximately equal to each other. Furthermore, the appropriate locations of the fixed bearings of continuous beams are recommended. The results can guide the design of the fixed bearing locations of continuous railway bridges while reducing the additional axial force in continuously welded rails due to the bridge thermal effect.
Introduction
A large number of continuously welded rail (CWR) tracks are laid on railway bridges. The high additional axial force in CWR due to the rail-bridge longitudinal interaction caused by the temperature change of the bridge may lead to rail buckling or breaking. Thus, the problem of the rail-bridge thermal interaction has gained wide attention. In the early stage, analytical methods were mainly used. Frýba 1,2 and Esveld 3 put forward the basic differential equations and analytical solutions of the additional thermal force of CWR on simply or continuously supported bridges considering linear stiffness.
Then, numerical or field measurement methods were mainly employed by many scholars. For example, Larsson and Karoumi 4 established a finite element model to predict the thermal effect in a hollow concrete section. Kašpárek et al. 5 and Carvalho et al. 6 monitored the rail-bridge interaction under the daily temperature cycle for a long time, and proposed a new method to determine the given rail safety and buckling temperatures by using the ANSYS finite element software. Mirza et al. 7,8 used the finite element software ABAQUS to study the mechanical properties of rail-bridge systems under the action of thermal load and different levels of seismic load. Zakeri et al. 9,10 analysed the lateral resistance of frictional concrete sleepers and proposed a new definition for variations in the neutral and float temperatures in CWR. Mosayebi et al. 11 studied the effects of continuous or discrete supports, V-shaped rail irregularity and geometrical stiffness on the vehicle-track dynamic interaction. Jiang 12 analysed the transmission paths of the rail-bridge longitudinal interaction. Chen et al. 13 studied the influence of several factors on the additional longitudinal force of CWR on continuous beams with linear stiffness. Ruge and Birk 14 and Ruge et al. 15 investigated the longitudinal forces of CWR due to rail-bridge interaction considering nonlinear characteristics. Okelo and Olabimtan 16 analysed the nonlinear rail-structure interaction of an elevated skewed steel guideway by using the commercial software GT STRUDL. Zhang et al. 17,18 obtained the fastener resistance parameter in the laboratory and studied the rail-bridge interaction of a long-span bridge. Alfred et al. 19,20 used monitoring-based nonlinear finite element modelling to research the rail-bridge structure interaction. Yun et al. 21,22 measured and investigated the response of the rail-bridge interaction caused by the temperature change. De Backer et al. 23 used the finite element method to research the application limits of CWR on temporary bridge decks. Lou et al. 24 proposed a method to determine the locations of the rail expansion regulator and the fixed bearing of a continuous beam corresponding to the maximum additional forces of the rail reaching their minimum values. Yan et al. 25 studied the distribution of the longitudinal force of CWR on a suspension railway bridge with a length exceeding 1000 m. Wenner et al. 26 analysed two years of long-term measurements of rail stress and displacement and compared them with the model calculation results. Lou et al. 27 investigated the influences of nonlinear stiffness, span number of the bridge, constant longitudinal restoring force, bridge bearing arrangement, and span length of the bridge on the additional longitudinal stress and displacement of the rail for multi-span simply supported bridges. Ramos et al. 28 studied the bridge length limits due to track-structure interaction in continuous girder prestressed concrete bridges. Based on the bilinear resistance model, Dai et al. 29 proposed an analytical algorithm to analyse the track-bridge interaction of long-span steel bridges under thermal action.
Continuous beams are widely adopted in railway bridges. Compared with the length of a simple beam, that of a continuous beam is longer, and thus the maximum additional force in the rail induced by the temperature change of the beam is greater. If the location of the fixed bearing of the continuous beam is not appropriate, the high additional axial force induced by the rail-bridge longitudinal thermal interaction may lead to rail buckling or breaking. To the knowledge of the authors, few papers have studied the influence of the location of the fixed bearing of a continuous beam on the additional force in the rail. Therefore, this paper focuses on the appropriate location of the fixed bearing of a continuous beam to reduce the additional axial force in CWR. Compared with the existing literature, the new findings and novelties of this paper are as follows. (1) The effect of the locations of the fixed bearings of continuous beams on the maximum additional force of the rail is investigated considering nonlinear stiffness; (2) methods are presented to determine the locations of fixed bearings corresponding to the maximum additional forces in the rail reaching their minimum values; and (3) to reduce the additional axial force in continuously welded rails due to the bridge thermal effect, appropriate locations are recommended for the fixed bearings of continuous railway bridges with different numbers of beams.
Longitudinal thermal interaction model of CWR track and foundations
The foundations of a CWR track from left to right are considered as embankment, simply supported bridges, continuously supported bridges, simply supported bridges, and embankment. The CWR and the simply and continuously supported bridges are modelled as beam elements. Each continuous beam has only one fixed bearing, and the fixed bearing is assumed to be located at any position below the beam in the numerical analysis. A schematic planar model of the longitudinal thermal interaction between the CWR track and its foundations is shown in Figure 1, in which L_s and L_c denote the lengths of the simple and continuous beams, respectively; L_0 denotes the distance between the two fixed bearings of the simple beams neighbouring the first and last continuous beams, respectively; the continuous beams are numbered from left to right as 1, 2, ..., n, where n is the number of the last continuous beam; l_i denotes the distance between the left end and the fixed bearing of the i-th continuous beam (hereafter referred to as the i-th fixed bearing); l_ls1 denotes the distance between the fixed bearing of the simple beam neighbouring the first continuous beam and the first fixed bearing; l_nrs denotes the distance between the fixed bearing of the simple beam neighbouring the last continuous beam and the last (n-th) fixed bearing; o_ci denotes the midpoint of the i-th continuous beam; l_i,i+1 denotes the distance between adjacent fixed bearings of continuous beams; o_0 denotes the midpoint of L_0; and the letters SB and CB denote the simple and continuous beams, respectively. Three types of bearing arrangements of simple beams are considered. As shown in Figure 1(a) to (c), type A denotes that the fixed and movable bearings of the simple beams are arranged alternately from left to right; type B denotes that the fixed and movable bearings of the simple beams on the left side are arranged alternately, but they are reversed on the right side; and type C denotes that the movable and fixed bearings of the simple beams on the left side are arranged alternately, but they are reversed on the right side.
The track between the rails and the bridge is modelled as coupling shear elements with nonlinear stiffness, as shown in Figure 2. The descriptions of the rail-beam thermal interaction can be found in Lou et al. 27 For convenience, the key statements are listed here. The nonlinear characteristic of the coupling element depends on the displacement difference u_D relative to a critical value ũ, where

u_D = u_R − u_B, (1)

in which u_R and u_B denote the longitudinal displacements of the rail and the upper surface of the beam, respectively. It is assumed that the simple and continuous beams are not influenced by the rails during the temperature change of the beam and that they can move freely. The displacement u_B of the simple and continuous beams can be written as

u_B = α_B · ΔT · x_B, (2)

where α_B denotes the coefficient of thermal expansion of the beam, ΔT denotes the temperature change with reference to the initial temperature of the beam, and x_B represents the distance between the calculated section and the fixed bearing of the beam.
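As a quick worked example of Eq. (2), the sketch below evaluates the free thermal displacement of the deck at a section x_B metres from the fixed bearing. The coefficient α_B = 1.0 × 10⁻⁵ /°C is a typical value for concrete used here only for illustration; it is not quoted from Table 1.

```python
def beam_displacement(alpha_b: float, d_t: float, x_b: float) -> float:
    """u_B = alpha_B * dT * x_B (positive away from the fixed bearing)."""
    return alpha_b * d_t * x_b

# Example: 150 m continuous beam, fixed bearing at mid-span, section at one end
u_end = beam_displacement(alpha_b=1.0e-5, d_t=15.0, x_b=75.0)
print(f"{u_end * 1000:.2f} mm")  # about 11.25 mm of free expansion
```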
If the absolute value of the displacement difference u_D is less than the critical value ũ, there is a linear elastic relationship between the difference u_D and the longitudinal restoring force q, as shown in Figure 2(c). Taking q as the force per unit length acting on the rail, this can be expressed as

q = −c · u_D, (3)

in which c denotes the spring stiffness per unit length in the linear phase. If u_D > 0, a force q with negative value acts on the rail, and consequently a force q with positive value acts on the bridge; if u_D < 0, a force q with positive value acts on the rail, and consequently a force q with negative value acts on the bridge.
If the absolute value of the difference u_D is not less than the critical value ũ, the rail slips relative to the ballast or concrete strip. The corresponding nonlinear stiffness law shown in Figure 2(c) is applied, and the longitudinal restoring force in the coupling element is a constant magnitude

q̃ = c · ũ, (4)

with its sign following that of the linear phase. The track between the rails and the embankment is modelled as longitudinal springs with nonlinear stiffness. Eqs. (1), (3) and (4) can be used for the rail on the embankment simply by setting the value of u_B to zero.
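The bilinear law of Eqs. (3) and (4) can be sketched as a single function; the sign convention follows the text (the force on the rail opposes positive u_D), and the force on the bridge is simply the negative of the returned value.

```python
def restoring_force_on_rail(u_d: float, c: float, u_tilde: float) -> float:
    """Bilinear longitudinal resistance per unit length (Figure 2(c)).

    Linear with stiffness c for |u_D| < u_tilde; constant magnitude
    q_tilde = c * u_tilde once the rail slips.
    """
    q_tilde = c * u_tilde
    if abs(u_d) < u_tilde:
        return -c * u_d                      # elastic phase, Eq. (3)
    return -q_tilde if u_d > 0 else q_tilde  # sliding phase, Eq. (4)

# Example with the rail-foundation stiffness quoted in the Verification section
print(restoring_force_on_rail(u_d=0.0005, c=4.4e6, u_tilde=0.002))  # -2200 N/m
```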
The mechanics equations of the longitudinal thermal interaction between the CWR track and its foundations considering nonlinear stiffness can be found in Lou et al. 27 and are omitted here to reduce the length of this paper. The corresponding program for calculating the additional thermal force in the rails of the presented model was compiled.
Verification
A CWR track on a single continuous beam and two approach embankments is considered. The parameters are listed in Table 1 from the Chinese code, 30 except the parameters L_c and L_E. A spring stiffness per unit length of 4.4 × 10^6 N/m² for the rail foundation is adopted here. The fixed bearing is assumed to be located 30 m away from the left end of the beam. The four parameters ũ_B, q̃_B, ũ_E and q̃_E listed in Table 1 are not adopted in this example but are used in Section 5; they are included in Table 1 to avoid an additional table. Figure 3 plots the distributions of the additional displacement and force in the rail along its length induced by the temperature change of the beam, obtained by the finite element method and the analytical method, 2 respectively. The abscissa indicates the distance between a given section and the left end of the rail. Positive and negative ordinate values represent tension and compression, respectively, and the following signs have the same meaning. Figure 3 shows that the two solutions are in good agreement. Therefore, the presented model and the self-compiled program can be regarded as correct.
Procedures to determine the appropriate locations of the fixed bearings of continuous beams
The maximum additional tension and compression forces of the rail vary with the location of the fixed bearing of the continuous beam. The steps to determine the locations of the fixed bearings of two continuous beams at which the maximum additional tension and compression forces of the rail reach their minimum values are as follows.
(1) Firstly, the second fixed bearing is located at the left end of the beam.
(2) The first fixed bearing is also located at the left end of the beam. Then, the additional force of the rail along its length is calculated and its maximum values can be obtained. It should be noted that the maximum and minimum values of the compression force refer to its absolute values.
(3) The location of the second fixed bearing remains unchanged. The location of the first fixed bearing is changed from left to right until it arrives at the right end of the beam, and the changing interval may be taken as the length of one beam element.
(4) The maximum additional tension and compression forces of the rail corresponding to each location of the first fixed bearing are obtained and their relationship curves are plotted.
(5) The minimum values in the above curves are just those of the maximum additional tension and compression forces of the rail while the second fixed bearing is located at the left end of the second continuous beam.
(6) Then, the location of the second fixed bearing is changed from left to right until it arrives at the right end of the beam, and the changing interval may also be taken as the length of one beam element. The minimum values of the maximum additional tension and compression forces of the rail corresponding to each location of the second fixed bearing are gained by repeating steps (2)-(4).
(7) The curves of the minimum values mentioned in step (6) versus the location of the second fixed bearing are drawn. The abscissas corresponding to the minimum values in the curves are the locations of the second fixed bearing to be determined.
(8) Finally, the location of the second fixed bearing determined above remains unchanged. Using steps (3) and (4), the curves of the maximum additional tension and compression forces of the rail versus the location of the first fixed bearing can be obtained. The abscissas corresponding to the minimum values in the curves are the locations of the first fixed bearing to be determined.
The procedures to determine the appropriate locations of the fixed bearings of three continuous beams are similar to those for two continuous beams, with only the following modifications: the second fixed bearing in the above procedures is replaced by the third fixed bearing, and a new step, i.e., fixing the second fixed bearing at the point o_0, is added as step (1). A sketch of the resulting nested search is given below.
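The nested grid search of steps (1)-(8) can be sketched as follows. Here max_rail_forces stands in for the finite element solution of the rail-bridge interaction model and is a hypothetical placeholder; the paper tracks the tension and compression minima separately, whereas this sketch tracks a single governing value for brevity.

```python
def search_two_bearings(beam_len, step, max_rail_forces):
    """Grid search for the two fixed bearing locations.

    max_rail_forces(l1, l2) -> (F_max_tension, F_max_compression),
    where l1, l2 are bearing distances from the left ends of the beams.
    Returns (governing_force, l1_best, l2_best).
    """
    best = None
    pts = [i * step for i in range(int(beam_len / step) + 1)]
    for l2 in pts:            # outer loop: 2nd fixed bearing, steps (1), (6)
        for l1 in pts:        # inner loop: 1st fixed bearing, steps (2)-(4)
            f_t, f_c = max_rail_forces(l1, l2)
            governing = max(f_t, abs(f_c))  # larger of the two extremes
            if best is None or governing < best[0]:
                best = (governing, l1, l2)
    return best
```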
Results
The influence of the locations of the fixed bearings of continuous beams on the maximum additional force of the rail, and their locations corresponding to the maximum additional force of the rail reaching its minimum value, are researched in this section. The foundations of the CWR track are the same as those described in Figure 1. The total number of simple beams is 10 and they are evenly distributed on the two sides of the continuous beams. A simple beam length of 32 m is adopted. The three types of bearing arrangements of simple beams are considered, as shown in Figure 1(a) to (c). Except for the parameter c, the other parameters in Table 1 are adopted in this section. In order to make the obtained laws representative, continuous beams with different numbers and lengths are considered: continuous beams with total numbers of 1, 2, 3, 4 and 5 are used, and lengths L_c of 120, 135, 150, 165 and 180 m are adopted. The following results are based on a temperature rise of 15°C of the beam. If the temperature change is −15°C, the tension and compression in the results will be reversed.
Results of one continuous beam
Table 2 lists the minimum values of the maximum additional forces of the rail and the corresponding locations of the fixed bearing of one continuous beam, as well as the maximum additional force of the rail corresponding to the fixed bearing located at the midpoint o_c1. [Table 2 entries displaced in extraction: for types A, B and C and all beam lengths, the ratio pairs l_ls1,c/L_0 = l_1rs,c/L_0 and l_ls1,t/L_0 = l_1rs,t/L_0 are all 0.5/0.5; the rows F_maxc,fbmid and F_maxt,fbmid (×10^5 N) are not reproduced here.] Table 2 shows that: (1) For the same length of continuous beam, the bearing arrangements of simple beams have an influence on the values of min F_maxc and min F_maxt; according to the values of min F_maxc and min F_maxt in descending order, they are respectively B, A, C and C, A, B. Accordingly, the type A bearing arrangement is recommended. (2) While the maximum additional forces reach their minimum values, l_ls1,c/L_0 = l_1rs,c/L_0 = 0.5 and l_ls1,t/L_0 = l_1rs,t/L_0 = 0.5, which means the fixed bearing is located at the point o_0. (3) For type A, the values of (F_maxc,fbmid − min F_maxc)/min F_maxc × 100% are 14.79% to 24.80% for beams with different lengths, which shows that the difference between F_maxc,fbmid and min F_maxc is obvious; this indicates that a fixed bearing located at the midpoint o_c1 is not appropriate.
Here, min F_maxc and min F_maxt denote the minimum values of the maximum additional compression and tension forces of the rail, respectively. Their definitions are as follows: the maximum additional compression and tension forces of the rail vary with the location of the fixed bearing; the minimum value among all maximum additional compression forces is called min F_maxc, and the minimum value among all maximum additional tension forces is called min F_maxt. l_ls1,c and l_ls1,t denote the distances between the location of the fixed bearing of the simple beam neighbouring the first continuous beam and the locations of the fixed bearing corresponding to min F_maxc and min F_maxt, respectively. l_nrs,c and l_nrs,t denote the distances between the location of the fixed bearing of the simple beam neighbouring the last continuous beam and the locations of the fixed bearing corresponding to min F_maxc and min F_maxt, respectively. l_i,i+1,c and l_i,i+1,t denote the distances between the locations of the fixed bearings of the i-th and (i+1)-th continuous beams corresponding to min F_maxc and min F_maxt, respectively, with i = 1, 2, ..., n−1. F_maxc,fbmid and F_maxt,fbmid denote the maximum additional compression and tension forces of the rail corresponding to the fixed bearing located at the midpoint of the continuous beam, respectively. Figure 4 draws the curves of the maximum additional tension and compression forces of the rail versus the location of the fixed bearing of one continuous beam with L_c = 120 m under the three types A, B and C, in which the abscissa represents the location of the fixed bearing away from the left end of the continuous beam. The corresponding laws can be found from Figure 4.
Figure 5(a) draws the compression and tension forces, and Figure 5(b) supplements the tension force because its variation is unclear on the ordinate scale of Figure 5(a). Figure 5 shows that the locations of the second fixed bearing to be determined are 69.5 and 111 m, respectively. This means that the distances between the locations of the second fixed bearing and the point O 0 are 69.5 and 111 m, respectively. According to step (8) in Section 4, these locations remain unchanged; the curves of the maximum additional compression and tension forces of rail with the location of the first fixed bearing can then be obtained, and they are plotted in Figure 6(a) and (b), respectively. Figure 6(a) and (b) show that the locations of the first fixed bearing to be determined are 80.5 and 39 m, respectively. This means that the distances between the first fixed bearing and the point O 0 are also 69.5 and 111 m, respectively. In addition, the minimum values of the compression and tension forces in Figure 5 are equal to those in Figure 6, respectively. The figures for continuous beams with the other lengths, the other types of bearing arrangement of simple beams, and the number of three are omitted to reduce the length of this paper. It can be concluded that the locations of the first and second fixed bearings are both symmetrical with respect to the point O 0 when the maximum additional compression and tension forces reach their minimum values. Furthermore, for three continuous beams, the locations of the first and third fixed bearings are symmetrical with respect to the point O 0 when the maximum additional compression and tension forces reach their minimum values.
Modified procedures for determining the locations of fixed bearings. Based on the above symmetry law of the fixed bearings, the procedures mentioned in Section 4 for determining the locations of the fixed bearings of two continuous beams can be modified as follows. The modified procedures significantly improve the calculation efficiency because the number of cycles in the calculation process is reduced.
(1) The first and second fixed bearings are initially located at the left and right ends of the beam, respectively, for the types B and C. For the type A, the second fixed bearing is initially located at the distance L s away from the right end of the beam. The maximum values of the additional tension and compression forces of rail are then obtained. (2) The locations of the first and second fixed bearings are both moved towards the point O 0 at the same time, until the second fixed bearing reaches the left end of its beam. The maximum additional tension and compression forces of rail for each location of the first fixed bearing are obtained and the relationship curves are plotted. The abscissas corresponding to the minimum values in the curves are the locations of the first fixed bearing to be determined, and the location of the second fixed bearing follows from the symmetry law. A minimal computational sketch of this search is given below.
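The search in step (2) can be sketched in a few lines of code. In the sketch below, max_additional_forces is a hypothetical routine wrapping the rail-bridge thermal interaction model of this paper (it is not part of the original work), and each continuous beam is described simply by the coordinates of its two ends:

    # Sketch of the modified symmetric search for two continuous beams.
    # max_additional_forces(x1, x2) -> (F_compression, F_tension) is assumed
    # to wrap the rail-bridge thermal interaction model (hypothetical helper).
    import numpy as np

    def find_fixed_bearings(beam1, beam2, x_O0, step, max_additional_forces):
        """Sweep the first fixed bearing along beam1 while keeping the second
        bearing symmetric about O_0; return the locations minimising the
        maximum additional compression and tension forces of rail."""
        best = {"compression": (np.inf, None), "tension": (np.inf, None)}
        x1 = beam1[0]
        while x1 <= beam1[1]:
            x2 = 2.0 * x_O0 - x1              # symmetry law about the point O_0
            if beam2[0] <= x2 <= beam2[1]:    # second bearing must stay on beam2
                f_c, f_t = max_additional_forces(x1, x2)
                if abs(f_c) < best["compression"][0]:
                    best["compression"] = (abs(f_c), (x1, x2))
                if abs(f_t) < best["tension"][0]:
                    best["tension"] = (abs(f_t), (x1, x2))
            x1 += step
        return best

Because the symmetry law halves the number of free bearing locations, one sweep of x1 replaces the nested sweep over both bearings, which is the source of the efficiency gain noted above.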
Similarly, if the last paragraph of Section 4 is modified according to the above procedure, it can be used to find the appropriate locations of the fixed bearings of three continuous beams. As an example, the two continuous beams with L c = 150 m and the type B of bearing arrangement of simple beams are again considered using the modified procedures. The relationship curves are plotted in Figure 7. Figure 7 shows that the minimum values of the maximum additional compression and tension forces are −2.715 × 10 5 N and 2.199 × 10 5 N, the corresponding abscissas are 80.5 and 39 m, respectively, and they are equal to those in Figure 6. This illustrates that the minimum values of the maximum additional compression and tension forces and the corresponding locations of the fixed bearings obtained by the modified procedures are the same as those obtained by the procedures mentioned in Section 4.
In addition, Figure 7 shows that the tension forces lie between 2.199 × 10 5 N and 2.433 × 10 5 N, a difference of 10.64%, whereas the compression forces lie between −2.715 × 10 5 N and −5.175 × 10 5 N, a difference of 90.61%. This further shows that the location of the fixed bearing of a continuous beam has little influence on the maximum additional tension force of rail, but a very significant influence on the maximum additional compression force of rail.
Minimum values of the maximum additional forces of rail and corresponding locations of the fixed bearings of multiple continuous beams. On the basis of the modified procedures and the laws mentioned above, the values of min F maxc and min F maxt and the corresponding locations of the fixed bearings for continuous beams with numbers of 2-5, as well as the values of F maxc,fbmid and F maxt,fbmid , are obtained and listed in Tables A1 to A4, respectively. The following laws can be concluded from Tables A1 to A4.
(1) For the same number and length of continuous beams, the bearing arrangements of simple beams influence the values of min F maxc and min F maxt ; in descending order of min F maxc they are B, A, C, and in descending order of min F maxt they are C, A, B. Thus, the type A of bearing arrangement of simple beams is recommended. (2) Locating each fixed bearing at the midpoint O ci cannot ensure that the maximum additional tension and compression forces of rail reach their minimum values. The value of (F maxc,fbmid − min F maxc )/min F maxc is generally greater than that of (F maxt,fbmid − min F maxt )/min F maxt , which means that the influence of the location of the fixed bearing on the maximum additional compression force of rail is greater than that on the maximum additional tension force of rail.
(3) With the same number of continuous beams, for the same length of continuous beam and the same type of bearing arrangement of simple beams, l ls1,c = l nrs,c , l i i+1,c = l j j+1,c , l ls1,t = l nrs,t , and l i i+1,t = l j j+1,t , in which i, j = 1, 2, ..., n−1 and n is the number of continuous beams. For example, Table A3 shows that l ls1,c = l 4rs,c = 131.5 m and l 12,c = l 23,c = l 34,c = 163 m for L c = 180 m and the type A. This means that the locations of the fixed bearings are symmetrical with respect to the midpoint O 0 . (4) As shown in Table A1, when the maximum additional compression and tension forces of rail reach their minimum values, the locations of the fixed bearings of two continuous beams are not the same. Most of the values of (F maxc,fbmmt − min F maxc )/min F maxc × 100% are greater than 50%, whereas most of the values of (F maxt,fbmmc − min F maxt )/min F maxt × 100% are smaller than 2%. Therefore, it is not appropriate to arrange the fixed bearings of continuous beams at the positions where the maximum additional tension force reaches its minimum value; they should be arranged at the positions where the maximum additional compression force reaches its minimum value. (5) With the same number of continuous beams, for different lengths of continuous beams and different types of bearing arrangement of simple beams, the distances between two adjacent fixed bearings (including the fixed bearings of the simple beams neighbouring the first and last continuous beams) are not equal to each other; however, the ratios of these distances to L 0 are approximately equal to each other. For example, Table A3 shows that l ls1,c is 90.25 m for L c = 120 m and type A, and 116 m for L c = 150 m and type B. The values of l ls1,c are obviously not equal, but the corresponding ratios l ls1,c /L 0 are 0.1763 and 0.1747, respectively, a difference of only 0.92%, which can be considered approximately equal. Again, Table A4 shows that l 12,c is 112.25 m for L c = 120 m and type A, and 133.5 m for L c = 150 m and type C. Similarly, the values of l 12,c are obviously not equal, but the corresponding ratios l 12,c /L 0 are 0.1776 and 0.1780, respectively, a difference of only 0.23%, which can be considered approximately equal. This law of approximately equal ratios can be helpful for the arrangement design of the fixed bearings of continuous beams with other lengths, to reduce the maximum additional force of rail; a simple design estimate based on it is sketched below. (6) For the same length of continuous beam and the same type of bearing arrangement of simple beams, the distances between two neighbouring fixed bearings increase with the number of continuous beams. For example, the values of l ls1,c are 84.25, 88, 90.25 and 91.5 m, and those of l 12,c are 103.5, 108, 110.5 and 112.25 m, respectively, for two, three, four and five continuous beams with L c = 120 m and type A. This can be helpful for the arrangement design of the fixed bearings for more than five continuous beams.
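Law (5) suggests a quick design estimate: once the ratio of bearing spacing to L 0 is known for one configuration, spacings for another overall length can be predicted by scaling. A minimal sketch using the ratios quoted above (the function name and the example length are ours, not values from the original design tables):

    # Scale the approximately constant ratios l/L_0 (from the examples above)
    # to estimate fixed-bearing spacings for a new overall length L0_new.
    RATIO_END = 0.5 * (0.1763 + 0.1747)   # l_ls1,c / L_0 (end spacing)
    RATIO_MID = 0.5 * (0.1776 + 0.1780)   # l_12,c / L_0 (adjacent fixed bearings)

    def predict_spacings(L0_new):
        """Estimated end spacing and inter-bearing spacing for length L0_new."""
        return RATIO_END * L0_new, RATIO_MID * L0_new

    end_l, mid_l = predict_spacings(600.0)   # e.g. a bridge with L_0 = 600 m
    print(f"l_ls1,c ~ {end_l:.1f} m, l_12,c ~ {mid_l:.1f} m")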
Conclusion
A longitudinal thermal interaction model of the CWR track with simple and continuous beams, considering nonlinear stiffness, is established. Two methods are then proposed to determine the locations of the fixed bearings of continuous beams for which the maximum additional tension and compression forces of rail reach their minimum values. Furthermore, multiple continuous beams with several lengths and simple beams with three types of bearing arrangements are considered, to find how the locations of the fixed bearings of continuous beams affect the maximum additional forces in the rail. The obtained results can guide the design of the fixed-bearing locations of continuous railway bridges to reduce the additional axial force in CWR induced by the bridge thermal effect. The conclusions are as follows.
(1) Under a temperature rise of the beam, the location of the fixed bearing of a continuous beam has little influence on the maximum additional tension force of rail, but a very significant influence on the maximum additional compression force of rail. The law is reversed when the beam temperature falls. (2) Appropriate locations are recommended for the fixed bearings of continuous railway bridges with different numbers of spans, considering rail-bridge thermal interaction. The results can guide the design of the fixed-bearing location of a continuous railway bridge while reducing the additional axial force in continuously welded rails. (3) For two continuous beams, it is not appropriate to arrange the fixed bearings at the locations where the maximum additional tension force reaches its minimum value; they should be arranged at the locations where the maximum additional compression force reaches its minimum value. (4) When the maximum additional tension and compression forces of rail reach their respective minimum values, the locations of the fixed bearings of continuous beams are both symmetrical with respect to the point O 0 . Locating each fixed bearing at the midpoint of its continuous beam cannot ensure that the maximum additional tension and compression forces of rail reach their minimum values.
(5) For the same number of continuous beams, with different lengths of continuous beams and different types of bearing arrangement of simple beams, the distances between two adjacent fixed bearings (including the fixed bearings of the simple beams neighbouring the first and last continuous beams) are not equal to each other; however, the ratios of these distances to L 0 are approximately equal to each other. This can be helpful for the arrangement design of the fixed bearings of continuous beams with other lengths, to reduce the maximum additional force of rail.
Figure 1 .
Figure 1. Schematic planar model of CWR track and foundations longitudinal thermal interaction: (a) type A of bearing arrangement of simple beams, where L 0 = L s + n·L c , (b) type B of bearing arrangement of simple beams, where L 0 = 2L s + n·L c and (c) type C of bearing arrangement of simple beams, where L 0 = n·L c .
Figure 2 .
Figure 2. Longitudinal track-bridge coupling element and the nonlinear stiffness law: (a) u R > u B , (b) u R < u B and (c) nonlinear stiffness law.
Figure 3 .
Figure 3. Distribution of the additional force and displacement in the rail: (a) additional force of rail and (b) additional displacement of rail.
Figure 4 .
Figure 4. The curves of the maximum additional tension and compression forces of rail with the location of the fixed bearing of one continuous beam with L c = 120 m: (a) type A of bearing arrangement, (b) type B of bearing arrangement and (c) type C of bearing arrangement.
Figure 5 .
Figure 5. The curves of the minimum values of the maximum additional forces of rail with the location of the second fixed bearing for the case of two continuous beams with L c = 150 m and the type B: (a) compression and tension forces and (b) tension force.
Figure 6 .
Figure 6. The curves of the maximum additional forces of rail with the location of the first fixed bearing for the case of two continuous beams with L c = 150 m and the type B, in which the values of l 2 = 69.5 and 111 m remain unchanged for (a) and (b), respectively: (a) compression force and (b) tension force.
Figure 7 .
Figure 7. The curves of the maximum additional forces of rail with the location of the first fixed bearing for the case of two continuous beams with L c = 150 m and the type B: (a) compression and tension forces and (b) tension force.
Table 1 .
The parameters of CWR track on continuous beam and embankment.
Table 2 .
The minimum values of the maximum additional forces of rail and the corresponding location of the fixed bearing of one continuous beam, as well as the maximum additional force of rail corresponding to the fixed bearing located at the midpoint o c1 .
Table A1 .
Results of two continuous beams.
Table A2 .
Results of three continuous beams.
Table A3 .
Results of four continuous beams.
Table A4 .
Results of five continuous beams. | 2020-12-30T06:18:36.244Z | 2020-10-01T00:00:00.000 | {
"year": 2020,
"sha1": "e15ffda2d0437fe4f4bd5d74e2f5530792ec51c7",
"oa_license": "CCBYNC",
"oa_url": "https://journals.sagepub.com/doi/pdf/10.1177/0036850420982458",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "264f3785a5ed5d9f43c5d992214e189718ac7702",
"s2fieldsofstudy": [
"Engineering"
],
"extfieldsofstudy": [
"Medicine"
]
} |
246816769 | pes2o/s2orc | v3-fos-license | A Numerical Approach for Singularly Perturbed Nonlinear Delay Differential Equations Using a Trigonometric Spline
In this paper, a computational procedure for solving singularly perturbed nonlinear delay differential equations (SPNDDEs) is proposed. Initially, the SPNDDE is reduced into a series of singularly perturbed linear delay differential equations (SPLDDEs) using the quasilinearization technique. A trigonometric spline approach is suggested to solve the sequence of SPLDDEs. Convergence of the method is addressed. The efficiency and applicability of the proposed method are demonstrated by numerical examples.
The proposed equation usually plays an important role in illustrating different applications, such as the theory of nonpremixed combustion [2], geodynamics [3], oceanic and atmospheric circulation [4], and chemical reactions [5]. Much attention has been given in the past to the computational analysis of SPDDEs [6][7][8][9], and interest in the study and solution of SPNDDEs has been increasing in the last few years. These problems may have solutions with steep exponential boundary layers. Classical methods for solving such problems are ineffective, since a boundary layer structure develops as the perturbation parameter goes to zero. Effective numerical methods whose accuracy does not depend on ε should therefore be established for these equations. Hence, in this work, we propose a higher-order numerical scheme using a trigonometric spline, which gives more accuracy with a smaller number of mesh points. The existence and uniqueness of the solutions of a SPNDDE with shift were studied by Lange and Miura [10]. The authors in [11] presented a fixed-point strategy to solve a second-order SPDDE. The authors in [12] assembled two systematic spectral Legendre derivative methods to solve numerically the Lane-Emden, Bratu, and singularly perturbed type equations. For generating numerical spectral solutions to linear and nonlinear second-order boundary value problems, a new operational matrix approach based on shifted Legendre polynomials was introduced and studied in [13].
In [14], the authors proposed finite difference schemes for solving systems of SPNDDEs. In [15], a B-spline collocation method was constructed to solve Equations (1) and (2). In [16], the authors used shifted Legendre polynomials to study a spectral collocation approach for solving neutral functional-differential equations with proportional delays. In [17], the Legendre spectral collocation approach was suggested by the authors for handling multipantograph delay boundary value problems. In [18], a new numerical method was proposed for solving a class of delay time-fractional partial differential equations. The fractional partial differential equations are reduced into an associated system of algebraic equations that may be solved by some robust iterative solvers using the localization method, which is based on space-time collocation at some appropriate points. In [19], the authors developed a numerical technique for nonlinear singularly perturbed two-point boundary value problems based on a noniterative integration method with a small deviating argument.
The contents of the paper are summarized as follows. In Section 2, the quasilinearization approach and its convergence analysis are discussed. The continuous problem is discussed in Section 3. In Section 4, the trigonometric spline procedure for the solution of the problem is derived. Error estimates of the proposed scheme are discussed in Section 5. Numerical examples and computational results are shown in Section 6. Finally, Section 7 concludes the paper.
The Method of Quasilinearization
Using the method of quasilinearization [20], the given nonlinear differential Equations (1) and (2) are reduced into a sequence of SPLDDEs. We take an initial approximation θ 0 (s), which serves as a starting point for the function θ(s) in F, and expand F(s, θ(s), θ′(s − δ)) around the function θ 0 (s). In general, for ν = 0, 1, 2, ⋯, the quasilinearization technique transforms Equations (1) and (2) into the linear problem of Equations (6) and (7), with F (ν) = F(s, θ (ν) , θ′ (ν) (s − δ)). Thus, Equation (6) with Equation (7) is linear in θ (ν+1) (s). We then solve the problems given by Equations (6) and (7) using the nonpolynomial spline method. Theoretically, the solution of the nonlinear problem satisfies lim ν→∞ θ (ν) (s) = θ * (s), where θ * (s) is the solution of the nonlinear problem. Computationally, we require max |θ (ν+1) (s) − θ (ν) (s)| ≤ Tol. Here, Tol. is a prescribed small tolerance. Once the tolerance test is achieved, the iteration is terminated.
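The outer iteration is simple to organise in code. In the sketch below, solve_linear_bvp is a hypothetical routine (not part of this paper) that solves the linearized problem of Equations (6) and (7) for the next iterate on a fixed mesh; only the quasilinearization loop and the tolerance test are shown:

    # Sketch of the quasilinearization loop; solve_linear_bvp(theta_prev) is a
    # hypothetical solver for the linearized problem (Equations (6)-(7)).
    import numpy as np

    def quasilinearize(theta0, solve_linear_bvp, tol=1e-8, max_iter=50):
        """Iterate until max|theta^(nu+1) - theta^(nu)| < tol on the mesh."""
        theta = np.asarray(theta0, dtype=float)
        for _ in range(max_iter):
            theta_next = solve_linear_bvp(theta)
            if np.max(np.abs(theta_next - theta)) < tol:  # tolerance test
                return theta_next
            theta = theta_next
        raise RuntimeError("Quasilinearization did not converge within max_iter")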
Convergence Analysis
The convergence of the sequence of solutions ⟨θ (ν) ⟩ is obtained as follows. For convenience, we refer to F(s, θ, θ′(s − δ)) as F(θ) throughout this convergence analysis. Consider the problem εθ″(s) = F(θ), subject to the given boundary conditions. After quasilinearization, we have a sequence ⟨θ (ν) ⟩ of linear problems defined by the following recurrence relation: εθ″ (ν+1) (s) = F(θ (ν) ) + (θ (ν+1) (s) − θ (ν) (s)) F′(θ (ν) ), where F′(θ) = ∂F(θ)/∂θ. Let θ (0) (s) be an initial approximation; then, using Equation (12), we obtain Equation (14). Using Equations (12) and (14), we arrive at Equation (15), which is a second-order differential equation in (θ (ν+1) (s) − θ (ν) (s)). Thus, by using the Green's function G(s, t), determined in [21] and satisfying max s,t |G(s, t)| = 1/4, the integral form of Equation (15) is Equation (16). By the mean value theorem, Equation (18) holds with θ (ν−1) (s) ≤ t ≤ θ (ν) (s). Substituting Equation (18) into Equation (16), taking the maximum of the moduli over the region of interest on both sides, and simplifying yields max |θ (ν+1) (s) − θ (ν) (s)| ≤ K 1 max |θ (ν) (s) − θ (ν−1) (s)| 2 , where K 1 = a 2 /(8ε(1 − a 1 /4ε)) < 1. This shows that, given K 1 < 1, the sequence ⟨θ (ν) (s)⟩ of linear problems converges quadratically. As a result, to obtain the approximate solution of Equation (1) with Equation (2), it is required to estimate the solution of the sequence of SPLDDEs of the form given in Equation (22).
Continuous Problem
When the delay argument δ is o(ε), a Taylor series expansion of the term θ′ (ν+1) (s − δ) in Equation (22) yields Equation (25), with the associated boundary conditions on θ (ν+1) . The boundary layer appears on the left or right side of the interval depending on the sign of the coefficient p ν (s), i.e., according as p ν (s) > 0 or p ν (s) < 0, respectively.
where c 1 is a constant and M is a positive constant, both independent of h and ε.
Trigonometric Spline
The integration domain [0, 1] is decomposed into N equal subintervals of mesh size h = 1/N, so that s i = ih, i = 0, 1, ⋯, N. On each subinterval, the trigonometric spline is taken in the form T i (s) = a i + b i (s − s i ) + c i sin (τ(s − s i )) + d i cos (τ(s − s i )),
where a i , b i , c i , and d i are constants and τ is a free parameter.
Method of Solution
At the grid points s i , Equation (25) may be discretised using the spline relations. Using Equation (39) in Equation (38), and utilising estimations of the first derivatives of θ, we obtain Equation (41). From Equation (41), we have the following tridiagonal system (Equation (42)):
Here, the coefficients of the tridiagonal system in Equation (42) are determined by the spline parameters α, β, and ω and by the mesh size h.
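Each quasilinearization step thus reduces to solving one tridiagonal linear system, for which the standard Thomas algorithm is the natural choice. A minimal sketch (the array layout, with sub-, main, and super-diagonals a, b, c and right-hand side d, is an assumption of the sketch, not notation from the paper):

    # Thomas algorithm for a tridiagonal system; a[0] and c[n-1] are unused.
    import numpy as np

    def thomas(a, b, c, d):
        """Solve the tridiagonal system with diagonals a (sub), b (main), c (super)."""
        n = len(d)
        cp, dp = np.empty(n), np.empty(n)
        cp[0], dp[0] = c[0] / b[0], d[0] / b[0]
        for i in range(1, n):                      # forward elimination
            m = b[i] - a[i] * cp[i - 1]
            cp[i] = c[i] / m if i < n - 1 else 0.0
            dp[i] = (d[i] - a[i] * dp[i - 1]) / m
        x = np.empty(n)
        x[-1] = dp[-1]
        for i in range(n - 2, -1, -1):             # back substitution
            x[i] = dp[i] - cp[i] * x[i + 1]
        return x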
Error Estimate
The truncation error in the proposed numerical scheme is obtained by expanding the spline relations in Taylor series. For different values of ω, α, and β in the approach of Equation (42), different orders of accuracy are indicated; in particular, the highest order is obtained for α = 1/12, β = 5/12, ω = −1/20ε. Here, k 1 , k 2 , and M are positive constants, independent of h and ε.
Proof. Using Lemma 3, the stated bound follows, and the remaining estimates are obtained similarly. ☐
The matrix form of the system in Equation (42) is Aθ (ν+1) = B, where A is the matrix of the system in Equation (42), θ (ν+1) and B are the corresponding vectors, and μ i (θ (ν+1) ) is the local truncation error. Thus, the error bound follows from the bounds on μ i (θ (ν+1) ).
Numerical Examples
To show the relevance and validity of the approach, it was implemented for the following problems. The maximum pointwise errors (MAEs), E N ε , are determined by using the double mesh principle [3]: E N ε = max 0≤i≤N |θ N i − θ 2N 2i |, where θ N i and θ 2N 2i denote the numerical solutions at the common mesh points on meshes with N and 2N subintervals, respectively.
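In code, the double mesh comparison is a one-line operation on the two numerical solutions at their common mesh points. A minimal sketch (the array shapes are our assumption; the coarse solution has N + 1 entries and the fine one 2N + 1):

    # Maximum pointwise error via the double mesh principle.
    import numpy as np

    def double_mesh_error(theta_N, theta_2N):
        """Compare solutions on N and 2N subintervals at the common points s_i."""
        theta_N = np.asarray(theta_N)        # length N + 1
        theta_2N = np.asarray(theta_2N)      # length 2N + 1
        return np.max(np.abs(theta_N - theta_2N[::2]))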
Conclusion
To solve a singularly perturbed nonlinear delay differential equation, a computational technique using a trigonometric spline is proposed. The SPNDDE is reduced into a series of linear SPDDEs using quasilinearization, and a trigonometric spline approach is suggested to solve the sequence of linear SPDDEs. The scheme was implemented on two problems. The maximum absolute errors produced by the suggested scheme are compared with the results in [15, 23] in Tables 1-4. The comparisons reveal that the suggested scheme outperforms the methods given in [15, 23] in terms of maximum error. Simulation results show that as the value of the parameter N increases, the accuracy of the computed approximate solutions improves significantly. In addition, while the error values generally increase as the perturbation parameter ε decreases, they usually remain within reasonable limits even for small values of ε. It is also worth noting that the approach works well even when h ≥ ε. Figures 1-4 depict the layer behaviour at various δ values. It has been noticed that as the delay value increases, the thickness of the boundary layer increases as well. The simulation results show that the computational method proposed in this study is capable of giving accurate results for SPNDDEs.
Data Availability
| 2022-02-15T16:03:55.113Z | 2022-02-13T00:00:00.000 | {
"year": 2022,
"sha1": "072ddb39c32c548547262d591704f8fe958d5e70",
"oa_license": "CCBY",
"oa_url": "https://downloads.hindawi.com/journals/cmm/2022/8338661.pdf",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "b8a6e790a0eaf2c5a0bf2f3b42debe7634e5b5b4",
"s2fieldsofstudy": [
"Mathematics"
],
"extfieldsofstudy": [
"Computer Science"
]
} |
24067334 | pes2o/s2orc | v3-fos-license | Crystal structure of 2-(2,3-dimethoxynaphthalen-1-yl)-3-hydroxy-6-methoxy-4H-chromen-4-one
In the title compound, C22H18O6, the dimethoxy-substituted naphthalene ring system is twisted relative to the 4H-chromenone skeleton by 88.96 (3)°. The two methoxy substituents are tilted from the naphthalene ring system by 1.4 (4) and 113.0 (2)°, respectively. An intramolecular O—H⋯O hydrogen bond closes an S(5) ring motif. In the crystal, pairs of O—H⋯O hydrogen bonds form inversion dimers with R 2 2(10) loops and C—H⋯O interactions connect the dimers into [010] chains.
Supporting information for this paper is available from the IUCr electronic archives (Reference: FF2142).
S1. Introduction
Flavonols, such as quercetin, azaleatin and kaempferol, are a class of flavonoids that have a 3-hydroxyflavone backbone. Because of their wide spectrum of biological activities (Burmistrova et al., 2014; Dias et al., 2013), a variety of flavonols have been isolated from natural sources and synthesized (Bendaikha et al., 2014; Prescott et al., 2013). In addition, they have been used as fluorescent probes for sensing and imaging due to their dual fluorescence. The fluorescence of flavonols has been shown to be related to the angle between the 4H-chromene-4-one moiety and the attached aromatic ring (Klymchenko et al., 2003). Our research project has been focused on the development of novel flavonols which show a broad range of biological activities (Lee et al., 2014); therefore, the title compound was synthesized and its crystal structure was determined. A starting material, the chalcone (III), was prepared by previously reported methods (Yong et al., 2013). The flavonol was obtained by oxidative cyclization of the chalcone (III) with H 2 O 2 in alkaline methanol medium (Fig. 3). In the title compound, C 22 H 18 O 6 , the angle between the dimethoxy-substituted naphthalene ring and the 4H-chromenone skeleton is 88.96 (3)°, which shows that they are almost orthogonal to each other. In our previous report on a flavonol (Yoo et al., 2014), the angle between the 4H-chromenone and benzene rings is 5.2 (4)°. The methoxy groups on the naphthalene ring at C12 and C13 are tilted from the naphthalene ring by 1.4 (4)° and 113.0 (2)°, respectively. The methoxy group at C12 (meta position) lies almost in the plane of the naphthalene ring. The methoxy group at C13 (ortho position), however, is twisted away from the plane of the naphthalene ring. An intramolecular O-H···O hydrogen bond closes an S(5) ring motif. In the crystal, pairs of O-H···O hydrogen bonds form an inversion dimer with graph-set notation R 2 2 (10), and C-H···O interactions connect the dimers into [010] chains. Examples of structures of flavonols have been published (Narita et al., 2015; Serdiuk et al., 2013).
To the cooled reaction mixture was added 2 ml of 50% (w/v) aqueous KOH solution, and the mixture was stirred at room temperature for 20 h. At the end of the reaction, ice-water was added to the mixture, which was then acidified with 3 N HCl (pH = 3-4). The precipitate was filtered under vacuum and washed with methanol to give the chalcone compound III (yield: 48%, m.p.: 407-408 K). The chalcone compound (III, 1 mmol, 364 mg) was dissolved in 6 ml of methanol and 4 ml of THF. The reaction was cooled in a water-ice bath (2-4 °C) and a cold solution of 16% sodium hydroxide (1 ml) was added with stirring. After 10 min, 2 ml of 35% H 2 O 2 was added to the reaction mixture. The end point of the reaction was monitored by TLC. After completion of the reaction, the reaction mixture was acidified with 3 N HCl (pH = 4-5). The pale yellow precipitate obtained was filtered and washed with ethanol to give the title compound (66%). Recrystallization from ethanol gave crystals suitable for X-ray analysis.
S2.1. Refinement
The H atoms were placed at calculated positions and refined as riding, with C—H = 0.95 Å and U iso (H) = 1.2 U eq (C).
Figure 1
Molecular structure of the title compound, showing the atom-labelling scheme and with displacement ellipsoids drawn at the 50% probability level.
Synthetic scheme for the title compound.
Special details. Geometry. All e.s.d.'s (except the e.s.d. in the dihedral angle between two l.s. planes) are estimated using the full covariance matrix. The cell e.s.d.'s are taken into account individually in the estimation of e.s.d.'s in distances, angles and torsion angles; correlations between e.s.d.'s in cell parameters are only used when they are defined by crystal symmetry. An approximate (isotropic) treatment of cell e.s.d.'s is used for estimating e.s.d.'s involving l.s. planes. Refinement. Refinement of F 2 against ALL reflections. The weighted R-factor wR and goodness of fit S are based on F 2 ; conventional R-factors R are based on F, with F set to zero for negative F 2 . The threshold expression of F 2 > 2σ(F 2 ) is used only for calculating R-factors(gt) etc. and is not relevant to the choice of reflections for refinement. R-factors based on F 2 are statistically about twice as large as those based on F, and R-factors based on ALL data will be even larger. Symmetry codes: (i) −x, −y+1, −z; (ii) x, −y+1/2, z+1/2; (iii) x, y−1, z; (iv) −x, y−1/2, −z+1/2.
"year": 2015,
"sha1": "dee909df6209ffa9a217f88c4bf48ad81a7bf08d",
"oa_license": "CCBY",
"oa_url": "http://journals.iucr.org/e/issues/2015/11/00/ff2142/ff2142.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "d2b8c6deb14aeea10c61fa20cd0d393f36f6bb8a",
"s2fieldsofstudy": [
"Chemistry"
],
"extfieldsofstudy": [
"Chemistry",
"Medicine"
]
} |
238253310 | pes2o/s2orc | v3-fos-license | Investigating coronal wave energy estimates using synthetic non-thermal line widths
Aims. Estimates of coronal wave energy remain uncertain as a large fraction of the energy is likely hidden in the non-thermal line widths of emission lines. In order to estimate these wave energies, many previous studies have considered the root mean squared wave amplitudes to be a factor of $\sqrt{2}$ greater than the non-thermal line widths. However, other studies have used different factors. To investigate this problem, we consider the relation between wave amplitudes and the non-thermal line widths within a variety of 3D magnetohydrodynamic (MHD) simulations. Methods. We consider the following 3D numerical models: Alfvén waves in a uniform magnetic field, transverse waves in a complex braided magnetic field, and two simulations of coronal heating in an arcade. We applied the forward modelling code FoMo to generate the synthetic emission data required to analyse the non-thermal line widths. Results. Determining a single value for the ratio between the non-thermal line widths and the root mean squared wave amplitudes is not possible across multiple simulations. It was found to depend on a variety of factors, including line-of-sight angles, velocity magnitudes, wave interference, and exposure time. Indeed, some of our models achieved the values claimed in recent articles while other more complex models deviated from these ratios. Conclusions. To estimate wave energies, an appropriate relation between the non-thermal line widths and root mean squared wave amplitudes is required. However, evaluating this ratio to be a singular value, or even providing a lower or upper bound on it, is not realistically possible given its sensitivity to various MHD models and factors. As the ratio between wave amplitudes and non-thermal line widths is not constant across our models, we suggest that this widely used method for estimating wave energy is not robust.
Introduction
It is well known that the solar corona is heated up to millions of degrees. The primary mechanisms proposed to achieve this heating can be separated into two classes: the dissipation of stored magnetic energy and the dissipation of magnetohydrodynamic (MHD) waves (see, for example, Parnell & De Moortel 2012;Arregui 2015;De Moortel & Browning 2015;Klimchuk 2015;Van Doorsselaere et al. 2020, for reviews on coronal heating theories). In recent years, due to higher spatio-temporal resolution of imaging and spectroscopic instruments, MHD waves have been shown to be ubiquitous within the solar atmosphere. One signature of these waves is the non-thermal broadening of emission lines (e.g. Hollweg 1973;Van Doorsselaere et al. 2008). Using the slit spectrograph aboard Skylab, the broadening of transition region emission lines, as well as the broadening of the spectra in quiet Sun regions and coronal holes have been observed (e.g. Doschek et al. 1976a,b;Feldman et al. 1976). Subsequently, Hassler et al. (1990) detected the broadening of the transition region and coronal emission lines and concluded that the most likely cause was waves in the corona. Some other studies found that non-thermal broadening varies with height through the solar atmosphere. For example, Doyle et al. (1998) found an increase in the Si VIII non-thermal line width with increasing altitude above the solar limb, whereas Hahn et al. (2012) reported a decrease in line width at relatively low heights in coronal holes.
Counter-propagating waves are thought to be present in the solar atmosphere and can cause turbulence. Such turbulence can go on to broaden emission lines (e.g. Tomczyk & McIntosh 2009; Liu et al. 2014; Morton et al. 2015; Van Ballegooijen et al. 2017). However, the non-thermal broadening of emission lines is not necessarily due to the unresolved temporal Doppler velocity amplitudes caused by MHD waves; other solar phenomena can influence the non-thermal line widths as well. These include plasma upflows and plumes near magnetic footpoints (e.g. De Pontieu & McIntosh 2010; Tian et al. 2011a, 2012) and larger scale upflows within coronal holes (e.g. McIntosh et al. 2011; Tian et al. 2011b).
Enhanced non-thermal line widths are a signature of multiple unresolved plasma flows along the line-of-sight (LOS). Hence, they can account for the discrepancy between the true wave energy and the observed wave energy attained from Doppler velocities (e.g. McIntosh & De Pontieu 2012; Pant et al. 2019). In a previous study, De Moortel & Pascoe (2012) presented a 3D model of transverse waves propagating along multiple loop strands. These waves were generated by a lower boundary driver designed to mimic random footpoint motions. The authors found that estimating the kinetic energy using the LOS Doppler velocities fails to capture at least 60% of the total kinetic energy in the simulation, and hence it is essential to include the enhanced non-thermal line widths in kinetic energy estimations.
The root mean square (rms) velocity of the wave amplitude (v rms ) can be used to estimate the energy within a wave (e.g. Hollweg 1981). As such, obtaining a relation between v rms and the non-thermal line width (σ nt ) is useful for achieving a more accurate estimate of the total wave energy. Such a relation may be given by σ nt = αv rms ; however, there is some discrepancy between the values of α that have been used, as well as a lack of any convincing justification for the chosen value. Hassler et al. (1990), Banerjee et al. (1998), and Doyle et al. (1998) computed the Alfvénic wave energy using α ≈ 1/√2, where the 1/√2 accounts for the polarisation and direction of propagation of the wave relative to the LOS. This is the most commonly used value of α in estimates of the energy within an Alfvénic wave (e.g. O'Shea et al. 2005; Banerjee et al. 2009; Hahn et al. 2012). However, Chae et al. (1998) and Tu et al. (1998) both suggested that v rms = σ nt (α = 1).
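To make the stakes of this choice concrete, consider the commonly used Alfvén wave energy flux estimate F = ρ v rms 2 v A (following, e.g., Hollweg 1981) with v rms = σ nt /α. A minimal sketch, in which every numerical value is illustrative rather than taken from any observation:

    # Illustrative energy flux estimates for different assumed ratios
    # alpha = sigma_nt / v_rms, using F = rho * v_rms**2 * v_A.
    import numpy as np

    rho = 1.67e-12        # mass density [kg m^-3] (illustrative)
    v_alfven = 1.0e6      # Alfven speed [m s^-1] (illustrative)
    sigma_nt = 30.0e3     # measured non-thermal width [m s^-1] (illustrative)

    for alpha in (1.0 / np.sqrt(2.0), 1.0, np.sqrt(2.0)):
        v_rms = sigma_nt / alpha
        flux = rho * v_rms**2 * v_alfven   # energy flux [W m^-2]
        print(f"alpha = {alpha:.3f}: F = {flux:.0f} W m^-2")

Because the flux scales as 1/α 2 , moving from α = 1/√2 to α = √2 reduces the inferred energy flux by a factor of four, which is why the value of α matters for coronal heating budgets.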
In order to investigate the relationship between the wave amplitude and non-thermal line width in more detail, Pant & Van Doorsselaere (2020) (PVD2020) considered a selection of velocity drivers in a simple mathematical model. They found that for a mono-periodic linearly polarised velocity driver oscillating along the LOS, σ nt /v rms ≈ √2. On the other hand, when the oscillations act in different directions (akin to the superposition of spectra of all oscillating structures along the LOS in the optically thin corona), the ratio σ nt /v rms is approximately one. This value was also found when the authors used a multi-frequency driver or circularly polarised transverse oscillations. The authors confirmed their findings using forward modelling on numerical MHD simulations of transverse MHD waves in a gravitationally stratified plasma. They conclude that, depending on the scenario, σ nt /v rms ≥ √2 or σ nt /v rms ≥ 1; however, the ratio is never equal to 1/√2 as was used in previous studies. In other words, the root mean squared wave amplitudes are never bigger than the non-thermal line widths, and previous studies may have overestimated the wave energy.
In this study, we expand on the work of PVD2020 by examining the behaviour of the wave amplitudes and non-thermal line widths using a variety of more complex numerical models. Firstly, we investigate Alfvén waves in a uniform plasma and explore the effects of wave interference. Then, we consider observational signatures of transverse MHD waves propagating through a complex magnetic field. Finally, we investigate the relationship between non-thermal line widths and velocity amplitudes in simulations of heating in a coronal arcade. In Sect. 2, we give an overview of these three numerical models. Then, in Sect. 3, we explain the calculation of v rms and σ nt as well as analyse the results of the ratio σ nt /v rms in all three models. Finally, our findings are discussed and summarised in Sect. 4.
Numerical models
We begin by providing a brief description of the three numerical models which we analyse in this article. All three models use the Lagrangian-remap code, Lare3D (Arber et al. 2001), which solves the fully 3D non-ideal MHD equations in normalised form, given by
∂ρ/∂t + ∇ · (ρv) = 0, (1)
ρ Dv/Dt = j × B − ∇P + F visc , (2)
ρ Dε/Dt = −P ∇ · v + η|j| 2 + Q visc , (3)
∂B/∂t = ∇ × (v × B − ηj), (4)
where all variables have their usual meanings, j = ∇ × B is the current density, and D/Dt = ∂/∂t + v · ∇ is the material derivative. The non-ideal terms, resistivity (η) and viscosity (ν), dissipate energy from the magnetic and velocity fields, respectively. The viscosity term results in a force F visc in the equation of motion (2) and a heating term Q visc in the energy equation (3). It is the sum of the background viscosity and two small shock viscosity terms. These shock viscosities, which are present in all of the numerical models, are designed to prevent shocks and ensure numerical stability. With the exception of the shock viscosities, non-ideal terms are only included within one of the three numerical models (see Sect. 2.3). The effects of thermal conduction, optically thin radiation, and gravity are neglected in our simulations.
Alfvén wave model
The first and simplest of our three numerical simulations is the Alfvén wave model. The setup consists of a homogeneous plasma, with a density and temperature of 1.67 × 10 −12 kg m −3 and 1.2 MK, respectively, and a uniform magnetic field (20 G) aligned with the vertical z axis (see Fig. 1a). Alfvén waves are driven into the system using the following condition on the bottom z boundary: v y (t) = v 0 sin (ωt), where the angular frequency ω ≈ 0.42 s −1 , which results in a period of approximately 15 s. Three wave amplitudes (v 0 ) are considered: 12 km s −1 (low), 24 km s −1 (medium), and 48 km s −1 (high). A fourth configuration is also investigated, where the amplitude of the wave is 24 km s −1 but the driver is made up of two components (Eqs. 5 and 6). The LOS that we consider in the Alfvén wave model is parallel to the y axis. The first wave driver (Eq. 5) acts along the LOS and the second wave driver (Eq. 6) oscillates at an angle of 45° to the LOS. The simulations that use Eq. 5 for their velocity driver are denoted by v y:χ , where χ ∈ {L, M, H} for the low, medium, and high wave amplitudes, respectively. Finally, the fourth simulation, which uses the same amplitude as v y:M , is denoted by v mix . The x and y boundaries are periodic and the z boundaries were set to have a zero gradient for all variables, with the exception of the velocity field. All components of the velocity on the z boundaries are zero apart from the velocity driver on the bottom boundary, as described above. The velocity was set to zero on the top z boundary to ensure that the waves are reflected there. This subsequently results in wave interference between upward and downward propagating waves. Figure 2 shows a time-distance plot (along the z axis) of v y:H (similar for v y:L and v y:M ). One feature which is important to the subsequent analysis of this model (see Sect. 3.1) is the prevalence of nodes (e.g. at z ≈ 40 Mm).
The computational domain has dimensions of 2 Mm × 2 Mm × 100 Mm and uses a numerical grid of 8 × 8 × 1024 cells. As this simulation is invariant in the x and y directions, we used a coarser grid resolution for these axes than for the z axis.
Complex magnetic field model
For our second model, we consider a simulation which also uses a sinusoidal boundary driver. However, in this case, the magnetic field structure is a lot more complex (complex magnetic field model). The simulation used here was previously discussed and investigated by Howson et al. (2020b) and subsequently forward modelled by Fyfe et al. (2020).
The initial magnetic field configuration in Howson et al. (2020b) was derived from a simulation investigated by Reid et al. (2018). In the latter article, three magnetic threads were twisted at their footpoints by rotational velocity drivers. The kink instability was triggered in the central thread, which ultimately destabilised the remaining threads. The end result was a very complex magnetic field configuration, which Howson et al. (2020b) used as their initial condition. Of the two field profiles considered in Howson et al. (2020b), we only analyse the more complex state (see Fig. 1c). The initial temperatures and densities observed within this model are approximately 1.7 MK to 4.7 MK and 1.12 × 10 −12 kg m −3 to 2.15 × 10 −12 kg m −3 , respectively.
Using this initial condition, the authors excited transverse waves into the numerical domain. To do this, a wave driver (Eq. 7) is imposed on the bottom z boundary, with an amplitude and angular frequency of approximately 20 km s −1 and 0.21 s −1 , respectively. This corresponds to a period of τ ≈ 28 s. As with the Alfvén wave model, the x and y boundaries are periodic while the z boundaries have gradients set to zero for all variables except for the velocity field. On the bottom z boundary, the velocity driver (Eq. 7) is imposed and the velocity is set to zero on the top z boundary. This causes waves to reflect at the top boundary and subsequently results in wave interference from upward and downward propagating waves.
For this model, the numerical domain consists of a 256 × 256 × 1024 grid, which covers physical dimensions of 30 Mm × 30 Mm × 100 Mm. However, within the forward modelling analysis in Fyfe et al. (2020), which we subsequently used to obtain the non-thermal line widths (see Sect. 3), the grid used in Howson et al. (2020b) was spatially resampled to every fourth grid cell along x, y, and z. This was to reduce the computational cost and was shown to have no significant impact on the synthetic spectroscopic data. For more information on the behaviour and forward modelling of the simulation, we direct the reader to Howson et al. (2020b) and Fyfe et al. (2020), respectively.
Arcade model
The last of our three numerical models considers a potential coronal arcade in which a complex velocity driver is implemented. This simulation was studied by Howson et al. (2020a) and hence we direct the reader to this article for further information. The authors considered several numerical simulations (with different characteristic driving timescales) and they present results for ideal, resistive, and viscous regimes. Howson et al. (2020a) constructed a numerical arcade within an initially homogeneous plasma with a temperature and density of approximately 1 MK and 1.67 × 10 −12 kg m −3 , respectively. The arcade magnetic field has the form B x = B 0 cos (πx/2L) exp (−πz/2L), B y = 0, B z = −B 0 sin (πx/2L) exp (−πz/2L). Here, B 0 = 100 G and L = 10 Mm. Such a magnetic field is a potential field that is also invariant along the y axis (see Fig. 1b). The domain contains 256 3 grid cells with physical dimensions of −10 Mm ≤ x, y ≤ 10 Mm and 0 Mm ≤ z ≤ 20 Mm. As mentioned previously, resistivity and viscosity are included in separate simulations, alongside an ideal case. The non-ideal regimes allow for the dissipation of energy through the magnetic and velocity fields, respectively. A step function is used for the resistivity: it is zero for z < 1 Mm and η 0 for z ≥ 1 Mm, where η 0 corresponds to a magnetic Reynolds number of 10 4 . The resistivity is set to zero for z < 1 Mm to prevent the slippage of magnetic field lines through the velocity field (with the exception of numerical slippage). Finally, the viscous simulations implement a uniform viscosity which produces a fluid Reynolds number of 10 3 . Howson et al. (2020a) implemented a boundary driver which mimics the chaotic nature of photospheric motions by varying the driver in time and space. The velocity driver on z = 0 Mm was created using a summation of 2D Gaussians (Eq. 8). Here, v i , θ i , r i , and l i are the peak amplitude, direction, centre, and length scale of the Gaussian components, respectively, while t i and τ i represent the time of peak amplitude and the duration of the individual components. These quantities arise from statistical distributions of the forms N (µ, σ 2 ) and U (u 1 , u 2 ), the normal and uniform distributions with mean µ, variance σ 2 , and lower and upper bounds u 1 and u 2 , respectively. The start and end times of the simulations are denoted by t s and t f , respectively. Howson et al. (2020a) analyse three different driving timescales and here we consider the lower and upper values τ µ = 15 s and 300 s (referred to as the T S and T L simulations for the short and long timescales, respectively). To allow for a comparison between the two drivers, the spatio-temporal average of the drivers' velocity was set to 1.2 km s −1 by choosing the appropriate value for v µ . The integer N was chosen to be a function of the timescale τ µ to ensure that a similar number of components in the summation were active at any given time.
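The statistical construction of such a driver is straightforward to sketch. In the following, every numerical value (means, spreads, domain size) is a placeholder rather than a parameter used by Howson et al. (2020a):

    # Sketch of drawing the 2D Gaussian driver components from normal (N) and
    # uniform (U) distributions; all numbers below are placeholders.
    import numpy as np

    rng = np.random.default_rng(0)
    N_comp, t_s, t_f = 100, 0.0, 1000.0     # components, start/end times [s]
    v_mu, tau_mu = 1.2, 300.0               # placeholder means for v_i and tau_i

    components = [{
        "v": rng.normal(v_mu, 0.2 * v_mu),          # peak amplitude [km/s]
        "theta": rng.uniform(0.0, 2.0 * np.pi),     # direction of the component
        "r": rng.uniform(-10.0, 10.0, size=2),      # centre (x, y) [Mm]
        "l": rng.normal(2.0, 0.5),                  # length scale [Mm]
        "t": rng.uniform(t_s, t_f),                 # time of peak amplitude [s]
        "tau": rng.normal(tau_mu, 0.2 * tau_mu),    # duration [s]
    } for _ in range(N_comp)]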
The boundary conditions are periodic on the x and y boundaries. All variable gradients are set to zero on the z boundaries apart from the velocity driver, imposed on the bottom boundary. In addition, a damping layer was implemented above z = 18 Mm near the top of the domain. This damping layer prevents the reflection of upward flows back into the domain.
Non-thermal line widths and wave amplitudes
In order to investigate the relation between the non-thermal line widths and the amplitudes of the waves observed in the three models (see Sect. 2), we began by measuring the wave amplitudes using the rms velocity of the waves (v rms ). As for the numerical simulations in Pant et al. (2019) and PVD2020, v rms was calculated as a function of height (z) as v rms (z) = [(1/T) Σ t ⟨v 2 ⟩ xy ] 1/2 , where T denotes the number of simulation output times, ⟨·⟩ xy denotes the average over an xy plane, and the average is taken over the xy planes at every height.
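As a concrete sketch, for a velocity component stored as an array v[t, x, y, z] (this array layout is our assumption), the profile reads:

    # rms wave amplitude as a function of height from a velocity cube.
    import numpy as np

    def v_rms_profile(v):
        """Average v**2 over time and over each xy plane, then take the root."""
        return np.sqrt(np.mean(v**2, axis=(0, 1, 2)))   # one value per height z

    v = np.random.default_rng(1).normal(size=(50, 8, 8, 128))  # toy cube [t, x, y, z]
    print(v_rms_profile(v).shape)   # -> (128,)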
The synthetic specific intensity used in determining the non-thermal line width was obtained using the forward modelling code FoMo (Van Doorsselaere et al. 2016). It uses the CHIANTI atomic database (Dere et al. 1997;Landi et al. 2013) to produce optically thin EUV and UV emission lines, and it allows for different LOS angles. The emission lines and LOS angles used in our three models are summarised in the final two rows of Table 1, while Fig. 1 illustrates the LOS angles. Table 1 also lists the numerical cadence, exposure time, and the driver period used in the three models.
Within this article, we consider various exposure times during our simulations (see Table 1 for the exact values used in each model). The exposure times were chosen such that they are not a multiple of the model's velocity driver period, with some smaller and some greater than this period. For a given exposure time, an average of the specific intensity was taken. In each case, the observing started at the beginning of the simulation. Once the specific intensity (I λ ) was calculated, the total intensity (I), Doppler shift (λ DV − λ 0 ), line width (σ), and subsequently the non-thermal line width (σ nt ) could be calculated. This was achieved using the moments of I λ : I = ∫ I λ dλ, λ DV = (1/I) ∫ λ I λ dλ, and σ 2 = (1/I) ∫ (λ − λ DV ) 2 I λ dλ, with σ nt = (σ 1/e 2 − σ th 2 ) 1/2 , where σ 1/e is the exponential line width (i.e. √2 σ) converted into units of velocity and σ th is the thermal velocity (Fe IX: 15.7 km s −1 , Fe XII: 21.5 km s −1 , and Fe XVI: 27.9 km s −1 ) using the peak formation temperature of the emission line. As with v rms , σ nt is also a function of height; it was calculated using Eq. 12, which has been used in previous work (e.g. Testa et al. 2016; Pant et al. 2019; Pant & Van Doorsselaere 2020). To denote the axes in the plane-of-sky (POS), a prime is used (e.g. x' denotes the horizontal axis in the POS). Since all the LOS angles considered in this article are perpendicular to the vertical axis, z = z'.
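A minimal sketch of this moment computation for a single spectrum, worked directly in velocity units so that no wavelength conversion is needed (the clipping of negative arguments is our own safeguard):

    # Total intensity, Doppler velocity, 1/e width, and non-thermal width from
    # a spectrum I(v) tabulated on a velocity grid v [km/s].
    import numpy as np

    def line_moments(v, I_v, sigma_th):
        I = np.trapz(I_v, v)                                     # total intensity
        v_dop = np.trapz(v * I_v, v) / I                         # Doppler velocity
        sigma = np.sqrt(np.trapz((v - v_dop)**2 * I_v, v) / I)   # Gaussian width
        sigma_1e = np.sqrt(2.0) * sigma                          # exponential (1/e) width
        sigma_nt = np.sqrt(max(sigma_1e**2 - sigma_th**2, 0.0))  # non-thermal width
        return I, v_dop, sigma_1e, sigma_nt

    v = np.linspace(-100.0, 100.0, 801)
    spec = np.exp(-((v - 10.0) / 25.0)**2)    # toy Gaussian line, 1/e width 25 km/s
    print(line_moments(v, spec, sigma_th=15.7))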
Alfvén wave model analysis
The ratio σ nt /v rms as a function of height for all the Alfvén model simulations and exposure times is shown in Fig. 3. The first feature which clearly stands out is the presence of peaks, which correspond to nodes (see Fig. 2), where v rms is smaller on average. However, given that the velocity is on average smaller, we would also expect σ nt to decrease at these locations, leaving the ratio somewhat unaffected. This is clearly not the case, and it is due to our choice of the thermal line width. In real observations, the temperature in the region of interest is unknown, which is why we simply selected the peak formation temperature of the emission line to represent the thermal line width. Within this current simulation, the temperature of the plasma is actually ∼400,000 K hotter than the Fe IX peak formation temperature. Therefore, there is an additional component within the non-thermal line width (Eq. 12), as the thermal line width is underestimated. The non-thermal line width is then larger than it should be and can be written as σ nt = σ real + δ, where σ real is the true non-thermal line width and δ is the additional component due to our choice of thermal line width. As v rms is smaller at the altitudes which correspond to the peaks in the ratio, the ratio becomes artificially large due to the δ/v rms term.
To illustrate that this is indeed the case, another v y:L simulation was performed with a plasma temperature which is only 20,000 K above the Fe IX peak formation temperature (see Fig. 4 for the plot of its ratio versus height). As is seen in Fig. 4, the peaks become less extreme when the plasma temperature is closer to our chosen thermal line width (the peak formation temperature of the emission line). As these peaks form for v ≈ 0, it is unlikely that they will be seen in real observations, as some flows will always be present. However, there will be a significant additional component in the non-thermal line width in this calculation whenever v ≲ δ, which may occur frequently. This highlights the importance of selecting an appropriate thermal line width (e.g. through DEM analysis).
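The size of the bias δ can be estimated directly from the thermal width formula σ th = (2 k B T / m) 1/2 . A minimal sketch, in which the observed width and the two temperatures are illustrative values chosen to resemble this model, not outputs of the simulation:

    # Bias in sigma_nt from assuming the Fe IX peak formation temperature
    # when the plasma is actually hotter (all inputs illustrative).
    import numpy as np

    k_B, m_Fe = 1.381e-23, 56 * 1.673e-27    # Boltzmann constant; iron mass [kg]

    def sigma_th(T):
        """Thermal width sqrt(2 k_B T / m_Fe) in km/s."""
        return np.sqrt(2.0 * k_B * T / m_Fe) / 1.0e3

    sigma_obs = 20.0                 # observed 1/e line width [km/s]
    T_peak, T_true = 8.3e5, 1.2e6    # assumed vs actual temperature [K]
    sigma_assumed = np.sqrt(sigma_obs**2 - sigma_th(T_peak)**2)  # apparent sigma_nt
    sigma_real = np.sqrt(sigma_obs**2 - sigma_th(T_true)**2)     # true sigma_nt
    print(sigma_th(T_peak), sigma_th(T_true), sigma_assumed - sigma_real)

With these illustrative inputs the apparent non-thermal width exceeds the true one by several km s −1 , of the same order as the bias discussed above.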
Simulations v y:L , v y:M , and v y:H show a decrease in the ratio with an increase in wave amplitude. This is the result of the additional non-thermal line width component (δ) due to our estimate of the thermal line width. Similar to the behaviour of the peaks caused by the prevalence of nodes, the ratio in the v y:L simulation is most significantly impacted, because the smaller velocity perturbations lead to artificially larger ratios through the δ/v rms term. This term decreases with increasing wave amplitude, and we indeed see that the ratios for v y:L , v y:M , and v y:H decrease. To confirm that this behaviour is indeed caused by the additional component δ, the ratios of Fig. 3a, which uses the peak formation temperature as the thermal line width (15.7 km s −1 , less than the thermal line widths used in Fig. 5), were recomputed using the minimum thermal line width present in the simulation; the results are shown in Fig. 5. When the additional component of the thermal line width is reduced in this way, by changing the thermal line width from the peak formation temperature to the minimum thermal line width, the three simulations produce similar ratios, all of which are below the ratio of √2 given in PVD2020 (see Table 2). This clearly illustrates the importance of an appropriate thermal line width. In real observations, it is not always possible to determine the exact temperature of the plasma. The difference in the thermal line widths (i.e. between the minimum thermal line width present and the thermal line width at the peak formation temperature) is approximately 2-3 km s −1 and only has a noticeable effect on the v y:L and v y:M simulations; hence the approximate additional component of the thermal line width is 10%-23% of the velocity driver's amplitude for these two simulations. This suggests that any additional component in the thermal line width greater than 10% of the velocity driver's amplitude will result in artificially large ratios.
The closest comparison to the simulations v y:L , v y:M , and v y:H are the mono-periodic linearly polarised oscillations along the LOS in PVD2020; hence we would expect the ratio to be greater than or equal to √2 (see Table 2) if the spectra are averaged over one or multiple periods of the driver. However, in our study, we have chosen exposure times that are not an exact multiple of the velocity driver's period, and as seen from Fig. 3, the ratios exceed this criterion only in some cases (lying above the criterion for smaller exposure times). This effect, as discussed previously, is a consequence of the additional component in the non-thermal line width (δ): all three simulations in fact fall below this ratio when the 'minimum' thermal line width is used (see Fig. 5). To allow for a comparison between our simulations and the equivalent in PVD2020, we re-calculated the ratios with an exposure time equal to the driver's period, while still using the peak formation temperature as the thermal line width. When the new exposure time is applied to the v y:H simulation, the ratio becomes the orange line in Fig. 6a. There is little difference in the ratio between an exposure time equal and not equal to a multiple of the driver's period (see the orange line in Fig. 6a and the red line in the first panel of Fig. 3, respectively). As a comparison, we also analyse the ratio from a simulation of a standing wave (no wave interference present) which has the same amplitude as v y:H . When an exposure time equal to the period of the driver is used, we satisfy the √2 criterion (see Fig. 7), unlike the ratio in the v y:H simulation.
To explain this result, we considered the effect due to the exposure time and the effect due to wave interference separately. Firstly, we consider the effect of the exposure time with no wave interference. At a single point in the POS ((x', z) = (0.1, −28.6) Mm), a time frame was examined during the v y:H simulation before the first reflected wave front reached this altitude (z = −28.6 Mm). The ratio calculation was evaluated for two different exposure times. One exposure time is equal to the driver period (15 s) and one is not (22.5 s). These are denoted by the green and red asterisks in Fig. 6a, respectively. Figure 6b illustrates the wave behaviour and the time frames over which the asterisks in Fig. 6a were calculated. The left-hand panel shows a time frame before wave interference and the right-hand panel shows a time frame during interference. When no wave interference is present, we see a lower ratio when the exposure time is not a multiple of the driver period. Since v rms has no influence on the difference between the ratios, as it is the same for these two cases, we focus on the non-thermal line width. When anti-parallel flows are present along the LOS within an exposure time, the specific intensity becomes double peaked. Whether these peaks are symmetric about λ DV depends on the exposure time used. If the exposure time is equal to a multiple of the driver's period, then the specific intensity is symmetric. Conversely, if the exposure time is not a multiple of the driver's period, then under-sampling a wave period causes asymmetry. As the line width is controlled by the variation in the velocity profile along the LOS, this under-sampling can result in a decrease in the total line width. For example, this happens if the extrema in the velocity profile do not occur during the exposure time. Figure 8 depicts such an example by illustrating the resultant specific intensity (top row) for an exposure time equal to the period of the driver (left column) and an exposure time less than the period of the driver (right column), together with the corresponding wave profiles (bottom row). As a result of the under-sampling, the ratio σ nt /v rms typically decreases when exposure times are not a multiple of the driver's period.
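This under-sampling is easy to reproduce numerically: time-average a Gaussian line profile whose centroid follows the sinusoidal Doppler velocity, and compare the second-moment widths for an exposure equal to the driver period and one that is not. In the sketch below all values are illustrative, loosely modelled on the v y:H parameters:

    # Width of an exposure-averaged spectrum for a sinusoidal Doppler shift.
    import numpy as np

    def averaged_width(v0, period, t_exp, sigma0=15.7, n_t=400):
        """Second-moment width [km/s] of the time-averaged line profile."""
        v_grid = np.linspace(-150.0, 150.0, 2001)           # velocity axis [km/s]
        t = np.linspace(0.0, t_exp, n_t)
        shifts = v0 * np.sin(2.0 * np.pi * t / period)      # LOS Doppler velocity
        spec = np.exp(-((v_grid[None, :] - shifts[:, None]) / sigma0)**2).mean(axis=0)
        I = np.trapz(spec, v_grid)
        mu = np.trapz(v_grid * spec, v_grid) / I
        return np.sqrt(np.trapz((v_grid - mu)**2 * spec, v_grid) / I)

    print(averaged_width(48.0, 15.0, 15.0))   # exposure equal to one driver period
    print(averaged_width(48.0, 15.0, 22.5))   # non-multiple exposure (asymmetric)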
However, when considering the full simulation and using an exposure time equal to the period of the driver, the ratio does not increase as described above (readers can compare the red line in Fig. 3 to the orange line in Fig. 6a, i.e. the ratio without and with an exposure time equal to the driver's period, respectively). This is due to the presence of wave interference. To analyse this, two cases are considered, one, as before, which uses a time frame before any wave interference is present (green asterisk in Fig. 6a) and another which uses a time frame during wave interference (purple asterisk in Fig. 6a) in the v y:H simulation. Both cases use an exposure time equal to the period of the wave's driver. For the purple asterisk, asymmetric specific intensities are obtained, but in this case, this is due to the wave interference. As a result of the asymmetry, the ratio in Fig. 6a also decreases. Another contributing factor is the change in v rms due to the unequal wave amplitudes as a result of wave interference (readers are encouraged to compare the wave profiles in the two panels of Fig. 6b). These two factors explain why the criterion for the ratio is satisfied in the standing wave case (Fig. 7) and not in the v y:H simulation (orange line in Fig. 6a) even though an exposure time equal to the period of the driver was used.
One circumstance for which the √2 criterion is attained for the high amplitude Alfvén wave simulation when the peak formation temperature is the thermal line width (see red lines in Fig. 3) is when an 'infinite' exposure time is considered (see blue line in Fig. 6a), that is, when the exposure time is equal to the length of the simulation. In fact, when investigating the ratio for numerous exposure times, it was found that any exposure time greater than approximately 220 s sufficed. Since the frequency of the imposed driving does not match the natural, fundamental frequency (or one of the higher harmonics), once the wave reflects off the boundaries, a beating behaviour occurs. This leads to the presence of longer periodicities in the domain than the period of the driver. As the wave amplitude changes over short times, in order to obtain a representative view of the wave behaviour, we need the exposure time to be greater than the beating period. In other words, in order to obtain the ratio found in PVD2020, we need larger exposure times (or more periods). Within observations it is unlikely that the footpoint motions are mono-periodic, as is the case within this model, and hence this result may occur to a lesser extent within the corona.
We now consider the ratio generated from the v mix simulation (see purple curves in Fig. 3). Firstly, the ratio is approximately a factor of √2 less than the ratio achieved in the v y:M simulation regardless of the thermal line width used (i.e. either the peak formation temperature or the more accurate thermal line width). This is due to the difference in the alignment of the two drivers with respect to the y axis: the component of the velocity along the LOS is a factor of √2 smaller in the v mix simulation, but v rms is the same in both cases. From the scenarios considered in PVD2020, σ nt > v rms is a lower bound on the ratio (see Table 2). This is indeed satisfied within the v mix simulation; however, when a more accurate thermal line width was used, it was determined that the ratio did not satisfy σ nt > v rms . We did not apply the condition σ nt > √2 v rms as the LOS is no longer aligned with the direction of oscillation. Irrespective of whether the simulations meet the conditions presented in PVD2020, none of the ratios, including those of v y:L , v y:M , and v y:H , reach 1/√2, that is, the ratio which has been used over the past decade in several studies. This is also the case when a more accurate thermal line width is implemented.
Complex magnetic field model analysis
Within the complex magnetic field model analysis, some locations in the domain contained plasma at temperatures lower than the peak formation temperature of Fe XVI and hence Eq. 12 had no real solutions. Two approaches were taken. The first approach was to neglect these locations in the averaging. The second approach was to set the non-thermal line widths at those problematic points to zero. These two approaches did not differ significantly; hence we have only included the latter in Fig. 9, which illustrates σ nt /v rms as a function of the height during the complex magnetic field model simulations.
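The two approaches amount to a masking choice in the averaging step. The following minimal NumPy sketch, which assumes Eq. 12 takes the usual quadrature-subtraction form, illustrates both options; it is not the analysis code used for Fig. 9.

```python
import numpy as np

def averaged_ratio(sigma_obs, sigma_th, v_rms, mode="zero"):
    """Average sigma_nt / v_rms over the domain, handling points where the
    quadrature subtraction has no real solution (sigma_obs < sigma_th)."""
    diff = sigma_obs**2 - sigma_th**2
    if mode == "zero":                       # set sigma_nt = 0 at problem points
        sigma_nt = np.sqrt(np.clip(diff, 0.0, None))
        return np.mean(sigma_nt / v_rms)
    valid = diff >= 0.0                      # mode == "drop": exclude them
    return np.mean(np.sqrt(diff[valid]) / v_rms[valid])
```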
Even though the driver is mono-periodic and linearly polarised along LOS y , comparing this model to the simulation with the equivalent driver in PVD2020, and hence using a ratio threshold of √2, is not appropriate. This is firstly because we considered LOS x , which is not aligned along the direction of the driver, and secondly because Howson et al. (2020b) and Fyfe et al. (2020) have shown that the polarisation of the waves changes from strictly v y at the driver to also containing a v x component throughout the rest of the 3D domain. Therefore, this simulation must be compared to the lowest threshold PVD2020 present; hence we examine the ratios here with respect to the threshold σ nt /v rms > 1 (see Table 2). From Fig. 9, it is clear that this threshold is not always satisfied.
Firstly, observations along LOS y only achieve the threshold of one for larger exposure times, whereas LOS x fails to attain this target entirely. Increasing the exposure time does increase the non-thermal line width and hence the ratio σ nt /v rms (as with the Alfvén wave model). However, even when an 'infinite' exposure time was implemented, there was little difference between that and Fig. 9c which has an exposure time of 105 s. We see that observations along LOS x never attain the threshold. This is a consequence of the non-thermal line width since v rms is the same for both LOS angles. In Fig. 20 of Fyfe et al. (2020), it is shown that the mean magnitude of v y is greater than that of v x . Hence, the non-thermal line width is smaller along LOS x compared to LOS y . In essence, LOS x is not observing the dominant component of the velocity field (v y ) even though it is included in v rms , and hence the ratio is less than one. The same but less extreme effect is causing the ratio to decrease for LOS y . Indeed, it is below one for smaller exposure times. This effect not only explains why LOS x does not attain the threshold, but also reveals why there is a difference between the two LOS angles.
Two more factors which may influence the ratio are the complexity of the field and the presence of wave interference.
In the Alfvén wave model, we showed that wave interference decreased the ratio as a result of the asymmetric specific intensities. In a similar way, we see asymmetric line profiles for the complex field due to wave interference and phase mixing along the LOS (Howson et al. 2020b) and, hence, a reduction in the ratio. Here, we only consider exposure times that are not equal to a multiple of the driver's period; however, we know from the Alfvén wave model that even with an exposure time equal to the period of the driver, the presence of wave interference still decreases the ratio below the anticipated threshold of one.

Fig. 10: σ nt /v rms as a function of the height (z) for the complex magnetic field model with an exposure time of 105 s. The different LOS angles are LOS y (blue) and LOS x (red) and the emission lines are Fe XII (solid lines) and Fe XVI (dashed lines); however, the minimum formation temperature was used rather than the peak formation temperature. The dashed horizontal lines, from top to bottom, are 1 and 1/√2.
As seen in Fig. 9, not only does LOS x not reach the threshold of one, but it also sits on (or below, dependent on the emission line) the ratio 1/√2. Previously, PVD2020 found that this was not attainable in their model. We do see an increase in the ratio (see Fig. 10) when thermal line widths approximately equal to the minimum formation temperature of the ions are used (Fe XII: 18.4 km s⁻¹ and Fe XVI: 22.9 km s⁻¹). However, even in this case, LOS x still crosses the ratio of 1/√2 and hence the root mean squared wave amplitude is greater than the non-thermal line width, contrary to PVD2020. The discrepancy between the complex magnetic field model and the findings of PVD2020 lies in the complexity of the models and the LOS angles. Indeed, when the LOS is parallel to the velocity driver, the two factors which produce a decrease in the ratio are the presence of wave interference and the changing polarisation of the wave. These two factors, combined with a LOS perpendicular to the velocity driver, generated ratios even less than those aligned with the driver. All of these are factors which are not present in PVD2020. Finally, as in the Alfvén wave model, there is an additional component in the non-thermal line width (δ) due to underestimating the true thermal line width by using the peak formation temperature. This means that if a more accurate thermal line width is used, these ratios will be even smaller and a larger discrepancy will be present between this model and PVD2020.
Arcade model analysis
The final model examined within this article is the arcade model. Fig. 11 shows the ratio σ nt /v rms as a function of the height (z) for various simulations using the 29 s exposure time.
The 261 s and 739 s exposure times generated very similar ratios and hence have been neglected in the figure. Due to the damping layer close to the top z boundary and the effect it has on v rms , we neglected z > 18 Mm. The threshold of one, from the multi-frequency driver simulation in PVD2020, is used for comparison with this current model (see Table 2). From Fig. 11, it is clear that all regimes (ideal, resistive, and viscous) and driving timescales (short and long) are above the threshold. However, we need to err on the side of caution with this analysis. The thermal line width is underestimated by just under 2 km s⁻¹, as we used the peak formation temperature of the ion rather than the temperature of the simulation. This would not be an issue if the velocity perturbations present were significantly larger; however, the average of v rms is approximately 1.5 km s⁻¹. As with the peaks and ratios seen in the Alfvén wave model, the additional component in the non-thermal line width (δ), alongside the low velocity perturbations, causes the ratio to be larger than it should actually be. To confirm this, one of the simulations was analysed again with the velocity field artificially increased to be twenty times greater than in the original simulation. In this case, the ratio decreased to a value of about 1/√2. Therefore, further analysis of this model is somewhat unreliable as the term δ/v rms is dominating the behaviour of the ratio.
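To see how the additional component δ can dominate at these small amplitudes, consider the toy calculation below; the 'true' thermal width is a guessed illustrative value, while the ~2 km s⁻¹ underestimate and v rms ≈ 1.5 km s⁻¹ echo the numbers quoted above.

```python
import numpy as np

sigma_th_true, underestimate = 25.0, 2.0   # km/s; sigma_th_true is illustrative
sigma_nt_real, v_rms = 1.0, 1.5            # km/s; small real non-thermal width

sigma_obs = np.hypot(sigma_th_true, sigma_nt_real)             # observed width
sigma_nt = np.sqrt(sigma_obs**2 - (sigma_th_true - underestimate)**2)
print(sigma_nt_real / v_rms, sigma_nt / v_rms)  # ~0.67 versus ~6.6
```

Even though the real non-thermal contribution is below v rms , the recovered ratio is an order of magnitude larger, which is why the arcade-model ratios in Fig. 11 should be treated with caution.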
In Howson et al. (2020a), the authors demonstrate that more heat is generated in the T L simulation than in the T S simulation. Bearing this in mind, the increase in the ratio from the T S simulation to the T L simulation, seen in Fig. 11, may be due to the increased heating in the T L simulation, alongside the constant thermal line width used for both simulations. More specifically, there is a larger additional component in the nonthermal line width (δ) in the T L simulation than in the T S simulation.
Finally, when comparing the results for different exposure times, unlike the Alfvén wave model and the complex magnetic field model, there is very little difference between the ratios. This, however, is difficult to assess, as any exposure-time dependence is most likely overshadowed by the large additional component in the non-thermal line width.
Discussion and conclusions
In this paper, we have expanded on the work of Pant & Van Doorsselaere (2020), where the authors examine the relation between the root mean squared wave amplitudes (v rms ) and the non-thermal line widths (σ nt ). The ratio σ nt /v rms was frequently used to estimate observed wave energies. However, PVD2020 claim that the value of this ratio is incorrect and that previous wave energies have possibly been overestimated. In this article, we look at more complex MHD models than the ones investigated in PVD2020 in order to determine if their claim still holds.
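Because wave energy estimates scale with the square of v rms , the assumed ratio enters the energy budget quadratically. The sketch below assumes the commonly quoted flux form F ≈ ρ v rms² v A (prefactors of order unity differ between studies) together with illustrative coronal values, simply to show the sensitivity; none of these numbers come from the simulations.

```python
rho, v_alfven = 1.0e-12, 1.0e6   # kg/m^3 and m/s, illustrative coronal values
sigma_nt = 25.0e3                # m/s, an example observed non-thermal width

for ratio in (1.0 / 2**0.5, 1.0, 2**0.5):   # assumed sigma_nt / v_rms
    v_rms = sigma_nt / ratio
    print(f"ratio {ratio:.3f}: flux ~ {rho * v_rms**2 * v_alfven:.0f} W/m^2")
```

Moving the assumed ratio from 1/√2 to √2 lowers the inferred flux by a factor of four, which is the sense in which PVD2020 argue that previous wave energies may have been overestimated.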
To be able to estimate the non-thermal line width from observed line profiles, it is necessary to first establish the thermal component of the line width. To mimic the information available in actual observations, we based the thermal line width in our study on the peak formation temperature of the emission line (unless otherwise stated for comparative purposes), rather than the actual temperatures in the 3D simulation domains. However, when the temperature in the simulation domain is larger than the peak formation temperature, this can affect the reliability of the ratio σ nt /v rms . Indeed, by writing the non-thermal line width as σ nt = σ real + δ, where σ real is the 'true' non-thermal line width and δ is the additional component due to underestimating the thermal line width, it is clear that the ratio σ nt /v rms becomes larger than it should actually be. When velocities in the domain are small, this additional component in the non-thermal line width can dominate the ratio. By comparing simulations, we deduced that the σ nt /v rms ratio becomes unreliable in locations where the additional component in the non-thermal line width is greater than about 10% of the velocities. Another scenario is when the plasma temperatures are lower than the temperature corresponding to our chosen thermal line width. When this is the case, the non-thermal line width (see Eq. 12) has no real solution and hence the analysis breaks down.
As well as the thermal line width, the choice of exposure time was also found to affect the ratio σ nt /v rms . Again, to reflect actual observations, we chose to make the exposure times independent of the period of the drivers in our simulations. In other words, the exposure time was not chosen to be an exact multiple of the period. It was found that when a non-integer multiple of the driver's period was used as the exposure time, the ratio would decrease in comparison to an exposure time equal to a multiple of the period of the driver. This was due to the under-sampling of wave periods when the exposure time did not equal a multiple of the driver's period, resulting in smaller non-thermal line widths (see Fig. 8 for an example of under-sampling). One method, however, that was found to increase the ratio was to use larger exposure times. This increased the ratio in some of the simpler simulations (i.e. the Alfvén wave model), such that the ratio coincided with that in PVD2020, where previously it did not when smaller non-integer multiples of the driver's period were used as the exposure time. Due to the multi-frequency nature of the corona, this exposure time result may not emerge in observations, as our simulations contain mono-periodic drivers.
Another influential factor in the ratio σ nt /v rms is the presence of wave interference. It was found to decrease the ratio when comparing a simple Alfvén wave model without and with the reflection of the wave off the top boundary (i.e. generating wave interference).
Within the complex magnetic field model, both the exposure time and the wave interference played important roles in reducing the ratio between the non-thermal line width and the root mean squared wave amplitudes. In addition, the LOS angle was also found to play a critical role. Two LOS angles were considered in this model, one parallel (LOS y ) and one perpendicular (LOS x ) to the velocity driver (v y ) on the bottom boundary. Throughout the simulation, the mean magnitude of v y is greater than that of v x (see Fig. 20 of Fyfe et al. 2020). Hence, LOS x is not observing the dominant component of the velocity field, but it is included in the v rms calculation. This resulted in not only LOS x producing ratios less than LOS y , but also generating a ratio which is less than the one predicted in PVD2020 (σ nt /v rms > 1). For LOS y , the ratio only reaches one for larger exposure times and is less than one for smaller exposure times.
Our models use a static background with waves driven using mono-or multi-periodic drivers. This setup is simplistic in comparison to the corona's more dynamic behaviour, where the background is not necessarily time-independent and waves can be turbulently driven. Although we consider spatial complexity in our complex magnetic field model, the complexity of the field is still time-independent. If temporal variations to the background on timescales similar to the waves were also present, identifying waves might no longer be possible. For example, Goossens et al. (2019) show that spatial complexity mixes the properties of the MHD waves; this also holds when temporal variations are present. However, we found that using the ratio σ nt /v rms to estimate wave energies is not a robust approach and this conclusion equally holds if short-timescale temporal variations in a dynamically changing corona are present.
Our analysis has highlighted several key issues which need to be taken into account when estimating wave energy budgets from observations. For example, it is important that an appropriate thermal line width is selected and this is not necessarily the formation temperature of the emission line under investigation. One method of obtaining a more accurate thermal line width is through DEM analysis. From the numerical models presented in this article, the average value of σ nt /v rms = 1.7. Although this average satisfies the findings of PVD2020 (i.e. either σ nt /v rms > √2 or σ nt /v rms > 1 dependent on the scenario), we do find that the ratio for the different models ranges from 0.38 to 5.92 (neglecting the values caused by boundary conditions). Overall, the ratio is highly dependent on a number of factors (e.g. LOS angles, magnitude of the velocity perturbations, presence of wave interference, and the length of the exposure time) and hence it is not possible to identify a single value for the σ nt /v rms ratio. | 2021-10-04T01:15:44.710Z | 2021-10-01T00:00:00.000 |
"year": 2021,
"sha1": "c4c32e898d791e9ffb678de99ce1317edb95dcf9",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/2110.00257",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "c4c32e898d791e9ffb678de99ce1317edb95dcf9",
"s2fieldsofstudy": [
"Geology"
],
"extfieldsofstudy": [
"Physics"
]
} |
201653530 | pes2o/s2orc | v3-fos-license | Improving Visual Feature Extraction in Glacial Environments
Glacial science could benefit tremendously from autonomous robots, but previous glacial robots have had perception issues in these colorless and featureless environments, specifically with visual feature extraction. This translates to failures in visual odometry and visual navigation. Glaciologists use near-infrared imagery to reveal the underlying heterogeneous spatial structure of snow and ice, and we theorize that this hidden near-infrared structure could produce more and higher quality features than available in visible light. We took a custom camera rig to Igloo Cave at Mt. St. Helens to test our theory. The camera rig contains two identical machine vision cameras, one which was outfitted with multiple filters to see only near-infrared light. We extracted features from short video clips taken inside Igloo Cave at Mt. St. Helens, using three popular feature extractors (FAST, SIFT, and SURF). We quantified the number of features and their quality for visual navigation by comparing the resulting orientation estimates to ground truth. Our main contribution is the use of NIR longpass filters to improve the quantity and quality of visual features in icy terrain, irrespective of the feature extractor used.
I. INTRODUCTION
Scientific endeavors to glacial regions, such as Antarctica, are difficult and time-consuming. Extreme cold and lack of infrastructure restrict experiments. Some glaciers are littered with deadly crevasses, hidden under a deceiving layer of snow. Others break off or "calve" into the ocean, causing seismic events that register on the Richter scale. Glaciers are an environment ripe for automation.
Perception is a critical part of automation. Many machine vision algorithms rely on image features to extract meaning from an image. For navigation applications, these features are usually based on corners, regions in an image with large image gradients in two directions. Modern feature detectors find features that are invariant to camera translations and in-plane rotations. The motion of these features can inform a robot on where it is going or how the environment around it is changing, an integral part of robotics.
In our literature review, we found that a lack of visible features hamstrings robots in glacial environments. In many cases, successful glacial robots need to rely on other types of sensors. Featureless layers of snow and ice do not provide enough visual features for robotic decision making. However, glaciologists have tools to help them analyze snow and ice from afar. In particular, glaciologists make extensive use of near-infrared (NIR) light to differentiate between types of snow and ice. We leverage NIR light to improve the number and quality of visual features for machine vision applications. We investigate the optical properties of ice and snow to understand why glaciologists use this tool, and how we can adapt it for machine vision applications.
To test our hypothesis, we build a camera rig that detects both NIR and visible light, and use it to collect short video clips of Igloo Cave at Mt. St. Helens (Fig. 1). Igloo Cave is an ice cave that formed as the result of the volcanic activity of St. Helens. Our analysis of the video clips shows that filtered NIR vision generally outperforms unfiltered vision in feature extraction and camera orientation estimation in glacial environments such as Igloo Cave.
II. RELATED WORK
A. Glacial Robots and Vision
The NASA funded Nomad robot was the first autonomous Antarctic robot. Its mission was to find meteorites in the Elephant Moraine. It was equipped with stereo cameras, but, as reported by Moorehead et al. [1]: "In all conditions, stereo [vision] was not able to produce sufficiently dense disparity maps to be useful for navigation".
More recently, Paton et al. [2] mounted stereo cameras on the MATS rover to explore the use of visual odometry in polar environments. They found that feature-based visual odometry performed poorly in icy environments: "From harsh lighting conditions to deep snow, we show through a series of field trials that there remain serious issues with navigation in these environments, which must be addressed in order for long-term, vision-based navigation to succeed ... Snow is an especially difficult environment for vision-based systems as it is practically contrast free, causing a lack of visual features". Similar to Paton et al., Williams and Howard [3] developed and tested a 3D orientation estimation algorithm on the Juneau Ice Field in Alaska. They wrote "When dealing with arctic images, feature extraction is possibly the biggest challenge". They used contrast limited adaptive histogram equalization (CLAHE) post-processing to enhance contrast and make features stand out better. Their algorithm can extract many more features than previously possible, but they still experience significant pose drift.
To summarize, previous attempts at glacial robots have had less-than-successful performance with vision in icy environments. By and large, this is mostly due to lack of visual features in vast sheets of ice and snow.
B. Near-Infrared Filtering and Glaciology
Near-infrared (750-2500nm) imaging is a known tool in glaciology. Champollion used NIR imaging to get better images of hoarfrost in Antarctica [4]. NIR imagery from the MODIS satellite has been used to calculate continent-wide surface morphology and ice grain size measurements in Antarctica [5]. Matzl and Schneebeli took NIR photographs of roughly one square meter of ice and snow, generating a 1D spatial map of grain structure within the snowpack [6]. They found that at meter scales, differences in the snowpack are visible in NIR.
C. Near-Infrared Feature Extraction
Relatively little work on feature extraction has been done in the near-infrared. Kachurka et al. [7] evaluated standard ORB SLAM in the short-wave IR (SWIR), with the addition of a small keyframe modification to reduce the occurrence of reinitialization. Johannsen et al. [8] suggest that the ORB feature extractor performed best in their thermal IR feature extractor benchmark. Neither of these evaluates performance in the NIR waveband. Additionally, glacial environments appear drastically different than their urban test environments. Sima and Buckley [9] and Ricaurte et al. [10] discuss optimizing feature extractors in SWIR and thermal IR to enable matching to features captured in visible light, but again, not for icy environments.

Fig. 2: NIR albedo depends much more on ice grain size than visible light. For reference, the human eye is most receptive at 0.56µm [14]. Adapted from [11].
A. Scattering Models
Wiscombe's seminal work on the optics of snow and ice utilizes Mie theory to describe scattering. Their model describes the optics of ice and snow from 300nm to 5000nm. They find that the reflectance of ice grains between 750 and 1400nm is mostly dependent on the size of the grains [11] (Fig. 2), thereby exposing structure invisible outside those wavelengths. For reference, visible light ends at 740nm. Since their work was published, several other papers have confirmed that snow albedo (brightness) is sensitive to ice grain size in NIR wavelengths [12] [13].
B. Specific Surface Area and Grain Size
Ice and snow are made up of small ice crystals called ice grains that measure from tens to thousands of microns across [5]. The term "grain size" refers to the diameter of these grains, but is sometimes misleading. In optics, the grain size of ice has two meanings: the true size of the grain or the optical size of the grain. Thus far, we have referred to the optical grain size. The optical size is used in idealized lighting models to reconcile the error between modeled and observed values for a specific true grain size.
The specific surface area (SSA) of snow and ice is defined as the ratio between the surface area and volume of the ice. SSA is strongly coupled with optical grain size [15], but can also effectively represent differences in grain shape. SSA has been shown to better represent the optical bulk-properties of real-world snow and ice [16]. The SSA can also represent spatially varying properties of snow and ice, such as air content or ice age [17]. While individual ice grains are usually too small to resolve by camera, regions of snow and ice with differing SSA are not. Varying SSA regions appear differently when viewed in NIR light. These differences in NIR light produce more numerous and distinct visual features than if viewed in visible light.
IV. EXPERIMENT
We set out to compare the number and quality of features extracted from NIR and visible light imagery. First, we define the scenes where video is taken. Then, we discuss the camera rig design and camera parameters. We go over the video capture procedure and the metrics we use to evaluate each scene.
A. Cave Scenes
We analyze video from four different scenes inside Igloo Cave at Mt. St. Helens. The first scene is a featureless firn wall, the second scene is a striated firn wall, and the third scene is planar snow. The fourth scene is a walking tour around one portion of the cave. Indirect sunlight illuminates all but the planar snow scene, which is illuminated by the lamp on the camera rig.
B. Camera Rig Design
A hand-held camera rig was built to collect NIR data and compare it to visible light. We mount two identical PointGrey FLEA-3 monochrome cameras to a 3D printed structure in a stereo configuration with a 10cm baseline (Fig. 3). The right camera has a filter wheel flush with the lens assembly. The filter wheel contains five NIR longpass filters with cut-on wavelengths of 800nm, 850nm, 900nm, 950nm, and 1000nm. These filters block light below their cut-on wavelength. We also attach a terrarium lamp on the underside of the rig, centered between the two cameras. The terrarium lamp has a ceramic reflector that reflects light in both visible and IR spectrums. A 75W halogen-tungsten incandescent bulb sits in the terrarium lamp to provide smooth, continuous illumination over both the visible and infrared spectrums. Mounted between the cameras is a VectorNav VN-200 inertial measurement unit (IMU) that provides ground truth orientation data. The VN-200 provides yaw to within 0.3° and pitch/roll to within 0.1° RMS, and runs at 800Hz.
C. Camera Parameters
Varying lighting conditions and the differing transmissivity of each filter made hand-setting camera parameters for each scene very difficult. Due to the significant difference in light received by the sensors, one set of parameters would not work for both cameras. By setting camera parameters differently for each camera, we could bias the results. For these reasons, we let each camera set its exposure and gain automatically. Because the NIR camera receives less light, it has a higher gain and prolonged exposure, which results in noisier and blurrier video. This provides some advantage to the visible light camera, but we did not attempt to quantify the extent of the advantage. We set the camera to capture 20 frames per second, but due to in-situ video compression the framerate would sometimes drop as low as 15 frames per second.
D. Procedure
We hold the camera rig by hand and take short videos while trying to keep the rig from moving too much. In all scenes, the rig is between one and six feet from the region of interest. If the scene is too dark for the unfiltered camera, the illuminator is turned on. For each scene, we run the experiment five times, each time cycling to the next NIR longpass filter on the right camera. For the cave tour, the camera rig is held a few feet from the cave wall as the operator walks about the cave. The path is identical for all filters. In our videos, we observe only snow and ice. Special care is taken to ensure that no rocks or foliage appear in any of the videos. Videos that contained enough volcanic ash to affect the results were discarded, except for the cave tour.
E. Preprocessing
Each image frame goes through a preprocessing pipeline before analysis. Lens distortion causes straight lines to appear slightly curved in the image; images are rectified to remove this effect. Next, we remove vignetting created by the filter wheel. Hough circles are used to detect the vignette perimeter. Once the perimeter is determined, we inscribe a bounding square in the Hough circle. On both cameras, we only use data within the bounding square. Finally, the resulting image goes through CLAHE to improve contrast, as Williams and Howard suggest for icy environments [18] [3].
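A minimal OpenCV sketch of this pipeline is given below. The specific function choices and parameter values (Hough settings, CLAHE clip limit and tile size) are our assumptions, since the paper does not list them, and the calibration inputs K and dist_coeffs are presumed to come from a prior lab calibration.

```python
import cv2
import numpy as np

def preprocess(frame, K, dist_coeffs):
    """Rectify, crop the filter-wheel vignette via a Hough circle, then CLAHE."""
    gray = cv2.undistort(frame, K, dist_coeffs)      # remove lens distortion
    blur = cv2.medianBlur(gray, 5)                   # denoise before circle fit
    circles = cv2.HoughCircles(blur, cv2.HOUGH_GRADIENT, dp=2,
                               minDist=gray.shape[0], param1=100, param2=50,
                               minRadius=gray.shape[0] // 4, maxRadius=0)
    if circles is not None:
        x, y, r = (int(v) for v in circles[0, 0])
        half = int(r / np.sqrt(2))                   # inscribed bounding square
        gray = gray[max(y - half, 0):y + half, max(x - half, 0):x + half]
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    return clahe.apply(gray)
```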
F. Metrics
We evaluate multiple feature detectors: SIFT [19], SURF [20], and the slightly modified scale-space version of FAST used in the ORB paper [21]. All feature detectors we use are scale-invariant by way of a scale-space pyramid. Each feature detector, except for SURF, uses default OpenCV parameters to reduce the chance of biasing parameters to improve NIR imagery at the expense of visible light imagery. The minimum Hessian threshold for SURF is raised to 500 to produce features similar in quantity and quality to SIFT and FAST.
1) Feature Count: The most straightforward metric is counting the number of features in each picture. Five features is the practical lower bound for visual pose estimation [22]. With RANSAC, more features result in more samples for pose estimation at the expense of some computational overhead [23]. We take the median number of features per frame over the entire video. Then, we take the mean over all feature extractors.
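As a sketch, the detector setup and the feature-count metric might look as follows; SIFT and SURF require an OpenCV build with the contrib modules, and SURF in particular may be unavailable in recent releases for patent reasons.

```python
import cv2
import numpy as np

detectors = {
    "FAST": cv2.ORB_create(),   # scale-space FAST variant from the ORB paper
    "SIFT": cv2.SIFT_create(),  # default parameters
    "SURF": cv2.xfeatures2d.SURF_create(hessianThreshold=500),
}

def feature_count_metric(frames):
    """Median feature count per frame over a video, then the mean over extractors."""
    medians = [np.median([len(det.detect(f, None)) for f in frames])
               for det in detectors.values()]
    return float(np.mean(medians))
```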
2) Valid Orientation Percentage (VOP): Just counting the raw number of extracted features can be misleading because "false features" are counted. False features are features created from camera noise or other sources that do not persist between frames and are not useful for vision. The ultimate test for feature extraction is whether the features are good enough to provide valid visual odometry estimates.
We estimate the essential matrix E using our lab-estimated intrinsics K. We feed the keypoints from a specific extractor to OpenCV's findEssentialMat function, which uses Nistér's five-point algorithm to determine E [22] [24]. Once we have E, we decompose it into a rotation matrix and translation vector. Due to the inherent noisiness caused by double integration of accelerometer data, we cannot analyze translation. We compare the relative difference in orientation between two frames to the ground truth value recorded by the IMU. If the relative 3D rotation is within five degrees of the ground truth, we consider the estimate valid and invalid otherwise. If for any reason we are unable to construct or decompose E, we consider that estimate invalid. We provide the percentage of frame pairs with a valid estimate out of the total number of frame pairs. In other words, this metric describes how often we are able to accurately estimate relative camera orientation from the extracted features. We call this metric the Valid Orientation Percentage (VOP).
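A sketch of the VOP computation is shown below. The brute-force matching step is our assumption (the paper does not describe its matcher), cv2.recoverPose stands in for the essential-matrix decomposition, and the five-degree tolerance and the treatment of failed estimates follow the text.

```python
import cv2
import numpy as np

def rotation_angle_deg(R):
    # magnitude of a relative 3D rotation, in degrees
    return np.degrees(np.arccos(np.clip((np.trace(R) - 1.0) / 2.0, -1.0, 1.0)))

def vop(frame_pairs, imu_rotations, detector, K, tol_deg=5.0):
    """Percentage of frame pairs whose estimated relative rotation is within
    tol_deg of the IMU ground-truth rotation matrix for that pair."""
    valid, total = 0, 0
    matcher = cv2.BFMatcher(cv2.NORM_L2, crossCheck=True)  # NORM_HAMMING for ORB
    for (f0, f1), R_gt in zip(frame_pairs, imu_rotations):
        total += 1
        try:
            kp0, des0 = detector.detectAndCompute(f0, None)
            kp1, des1 = detector.detectAndCompute(f1, None)
            matches = matcher.match(des0, des1)
            p0 = np.float32([kp0[m.queryIdx].pt for m in matches])
            p1 = np.float32([kp1[m.trainIdx].pt for m in matches])
            E, _ = cv2.findEssentialMat(p0, p1, K, method=cv2.RANSAC)
            if E is None or E.shape != (3, 3):
                continue                       # construction failed: invalid
            _, R, _, _ = cv2.recoverPose(E, p0, p1, K)
        except cv2.error:
            continue                           # decomposition failed: invalid
        if rotation_angle_deg(R_gt.T @ R) <= tol_deg:
            valid += 1
    return 100.0 * valid / total
```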
V. RESULTS
Although we used filters up to 1000nm, indirect lighting conditions combined with reduced camera sensitivity resulted in pitch-black videos for the longer wavelength filters. Even with the lamp, some filter and scene combinations were too dark for analysis. For this reason, we exclude the 950nm and 1000nm filter results.
We provide our results in Figs. 4 and 5. CLAHE-modified imagery always outperformed non-CLAHE imagery, so we omit the non-CLAHE results. The overall best performing filter is 850nm, beating visible light (no filter). The 800nm filter performed almost as well as the 850nm filter, and still beat unfiltered light. Filtered light outperformed unfiltered light except in the cave tour scene, due to volcanic ash that provided additional features in the visible spectrum. Looking at performance arranged by feature extractor, filtered light outperformed unfiltered light for all tested feature extractors (Fig. 5c). This shows that the NIR performance gains are extractor agnostic: NIR provides more visual information for the feature extraction and visual odometry problems, irrespective of the extractor used.
A. Concrete Examples
We attempt to connect the results back to our SSA hypothesis through qualitative means. The planar snow scene is the best example of the spatially-varying SSA. When comparing the visible light image (Fig. 1b) to the NIR image (Fig. 1c), there is a stark difference. The NIR image almost looks like a cloudy sky or a nebula. The darker regions are those with smaller SSA. These are likely regions of older snow, where dendritic grains transition to round grains [25]. The brighter areas could be regions of new snow with higher SSA.
Also visibly interesting is the striated firn wall scene (Fig. 7). The striation in this scene is known as melt-freeze crust, where melting snow or rain creates a layer of water, then refreezes producing large ice grains [26]. These large ice grains result in a small SSA and a dark streak in the NIR image (Fig. 7b). Note that in the unfiltered image, the SSA has little effect and the streak is barely visible (Fig. 7a).
B. Practical Considerations
While other light spectrums have interesting interactions with ice crystals, NIR light is the most practical. Most silicon CMOS and CCD camera sensors are sensitive to NIR light. Many machine vision cameras come without a NIR-blocking filter, allowing them to view NIR light out of the box. Consumer cameras tend to have NIR blocking filters to restrict the sensor to the human vision range. These filters can easily be replaced with NIR longpass filters, allowing almost any commercial camera to see in only NIR wavelengths.
While camera sensitivities vary, the spectral sensitivity of the Flea3 cameras is representative of other commercial cameras. For most cameras, we expect that 800nm and 850nm pass filters with CLAHE post-processing will produce the best visual features. The noisy low-light photography produced by the 900nm and higher filters combined with noise-sensitive CLAHE results in many features created from noise. A sensor that is more sensitive to NIR light would perform better in longer wavelengths with CLAHE. Because most of the testing occurred inside a darkened cave, the darker filters will likely perform better outside in direct sunlight.

In Fig. 5b and 5c, NIR corresponds to the best performing filter for each scene.
C. Future Work
The cameras we used only touch the very beginning of the NIR spectrum. With specialized NIR sensors, it may be possible to extract even more features. Indium-Gallium-Arsenide sensors are commercially available and span the full NIR spectrum. Furthermore, other types of sensors, such as polarization sensors, may provide additional benefits.
We evaluated the feature extractors without changing the extractor default parameters to isolate light wavelength as the independent variable. We have shown that NIR generally outperforms visible light in this task; the next step is to find the optimal feature extractor and associated parameters.
All analyzed scenes are from inside Igloo Cave at St. Helens, which means that all imagery is "indoors". While this is important for NASA's future goals, future research should strive to obtain test data from outdoor environments to test far-field visual navigation as well.
Although we quantified orientation error between frames, we did not explore how NIR imagery would perform long-term in a SLAM scenario. We would have liked to test this, but we had no way to correct IMU drift while collecting ground truth data. Future experiments should focus on improving the quality of ground truth data.
VII. CONCLUSION
Our experimental results from Igloo Cave suggest that NIR light is an attractive alternative to visible light for feature extraction and visual navigation in glacial environments. In most of our cases, the NIR imagery outperformed visible light imagery. We were able to accurately estimate camera orientation much more often in NIR imagery than in visible light imagery. The biggest disadvantage of using NIR pass filters on a regular camera indoors is the reduced amount of light that hits the sensor. Longer exposures and higher gain can mitigate this to an extent, but indoors, ensuring adequate lighting is very important. Above 850nm, the light reduction started to severely impact the image quality in the form of blur or noise. With larger illuminators or more sensitive cameras, it is likely that the optimal wavelength will be higher and perhaps the performance even better. | 2019-08-27T19:29:34.000Z | 2019-08-27T00:00:00.000 |
"year": 2019,
"sha1": "38b4499fc5f95eea2e3b264b188733cc167b4c76",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/1908.10425",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "71bd134aa87248cca40e3676c7ab55b431ff55fa",
"s2fieldsofstudy": [
"Environmental Science"
],
"extfieldsofstudy": [
"Computer Science"
]
} |
10685809 | pes2o/s2orc | v3-fos-license | Comparative Genomic Analysis of Primary and Synchronous Metastatic Colorectal Cancers
Approximately 50% of patients with primary colorectal carcinoma develop liver metastases. Understanding the genetic differences between primary colon cancers and their metastases to the liver is essential for devising a better therapeutic approach for this disease. We performed whole exome sequencing and copy number analysis for 15 triplets, each comprising normal colorectal tissue, primary colorectal carcinoma, and its synchronous matched liver metastasis. We analyzed the similarities and differences between primary colorectal carcinoma and matched liver metastases in regards to somatic mutations and somatic copy number alterations (SCNAs). The genomic profiling demonstrated shared mutations in the APC (73%), KRAS (33%), and ARID1A and PIK3CA (6.7%) genes between primary colorectal and metastatic liver tumors. TP53 mutation was observed in 47% of the primary samples and 67% of the liver metastatic samples. The grouped pairs in hierarchical clustering showed similar somatic copy number alteration patterns, in contrast to the ungrouped pairs. Many mutations (including those of known key cancer driver genes) were shared in the grouped pairs. The ungrouped pairs exhibited distinct mutation patterns with no shared mutations in key driver genes. Four ungrouped liver metastasis samples had mutations in DNA mismatch repair genes along with hypermutations and a substantial number of copy number alterations. Our results suggest that about half of the metastatic colorectal carcinomas had the same clonal origin as their primary colorectal carcinomas, whereas the remaining cases were genetically distinct from their primary carcinomas. These findings underscore the need to evaluate metastatic lesions separately for optimized therapy, rather than to extrapolate from primary tumor data.
Introduction
The emerging concept of polyclonality is gaining importance in cancer biology [1]. The monoclonal evolution of a tumor from a single cancer cell has been extensively studied, and is generally considered to involve the selective clonal expansion of dominant tumor clones. More recently, the alternative concept of polyclonal evolution has emerged. This model consists of two key concepts: the self-seeding hypothesis and the mutator phenotype model. The former proposes that tumor clones leave the primary site, enter systemic circulation via tumor vasculature, and colonize a distant site, thereby establishing a new subpopulation [2,3]. The mutator phenotype model proposes a small number of highly diverse tumor cell clones (polyclonal) instead of a few competing clonal subpopulations; in fact, several solid tumor types, including colon cancers, have been suggested to be highly polyclonal [4].
Identifying the origin of cancer is pivotal in understanding the genetic events involved in tumor initiation and progression [5]. As with primary cancers, metastases can also have either a single or polyclonal origin [6,7]. Generally, metastases carry similar mutations to those of the primary cancers from which they originate, but additional mutations occur after transformation [6,8]. The continual, and often accelerating incidence of mutations results in genetic heterogeneity between primary and metastatic cancers; this mostly increases the resistance to therapy in the latter, which is the predominant cause of cancer-related death worldwide [6,9].
Colorectal carcinoma (CRC) is the third most common malignancy and the second leading cause of cancer deaths in many countries [10,11]. Nearly 50% of CRC patients develop colorectal liver metastasis (CLM) [12]. Without treatment, patients with CLMs have a median survival of only 5-10 months, with less than 0.5% surviving beyond 5 years [13]. Several studies have addressed the clonal origin and genetic heterogeneity of CRCs [14,15]. No clear consensus has emerged from this as, although one report concluded that tumors mainly originate from a single clone [15], the results of other studies suggested that the majority of tumors have a polyclonal origin [16,17]. Recently, whole genome sequencing of matched primary and metastatic acral melanoma has also revealed considerable genetic heterogeneity between the primary and metastatic tumors, as evidenced by de novo, non-synonymous single nucleotide variation [18]. Pancreatic cancer metastases have also been sequenced to evaluate the clonal relationships between primary and metastatic cancers, leading to the identification of clonal populations that gave rise to distant metastases [19,20]. It is therefore vital to understand different concepts relating to the origin of cancer and the genetic heterogeneity between a primary tumor and its distant metastases for developing effective therapeutic strategies [21,22].
In order to assess the polyclonality and genetic heterogeneity in CRC, we evaluated the genetic and clonal relationship between primary CRCs and their matched CLMs by performing targeted exome sequencing and high-resolution somatic copy number alteration (SCNA) analysis of 15 triplets of normal colorectal tissue, primary CRC, and matched CLM samples. Our results provide valuable insights into the clonal relationship and genetic differences between primary CRCs and their matched CLMs, and will consequently help in defining potential targets for systemic therapies.
Patient cohort description
The median age of patients in the study was 61 ( Table 1). The cohort included 1 T2 stage, 10 T3 stage, and 4 T4 CRC tumors, all of which are primary resection specimens. Six patients had single hepatic metastasis while 9 patients had two or more hepatic metastases at resection. Clinical and histo-pathological information for the cohort set used in the study is provided in Table 1.
SCNA analysis
We performed SCNA analysis on 15 triplets of normal colorectal tissue, CRC, and CLM samples. Using paired analysis (i.e., normal colorectal tissue vs. CRC or normal colorectal tissue vs. CLM), we identified somatic SCNAs in either the CRC or CLM samples (Table 2 and Figure S1). CRC and CLM pairs from 11 patients showed a similar number of SCNAs (Table 2); however, the remaining 4 pairs (#250, #262, #526, and #721) showed a substantial increase in the overall number of SCNAs in the CLM samples, especially those involving homozygous copy loss and loss of heterozygosity (LOH) ( Table 2).
We performed unsupervised hierarchical clustering of the somatic SCNA data from 15 pairs of CRC and CLM samples in order to evaluate the genetic diversity between the primary and metastatic CRCs. Unsupervised hierarchical clustering of SCNA data has been used previously to determine the genetic relationships between primary and metastatic cancers [23]. The present analysis was based on the assumption that genetically similar CRCs and their matched CLMs will be closely related in hierarchical clustering. Fifty-three percent (8 of 15) of the primary CRCs were closely related to their matched CLMs, indicating clonal and genetic similarity in these CRC-CLM pairs. The remaining 47% (7 of 15) of the CRC-CLM pairs were only distantly related, suggesting distinct genetic relationships between these CRCs and their matched CLMs ( Figure 1).
Of note, the SCNA patterns of the 8 closely related pairs were similar, while those of the 7 distantly related CRC-CLM pairs were distinct (Figure S2). We also calculated and compared the average numbers of one copy gains, one copy losses, high copy gains, homozygous losses, and LOH between grouped and ungrouped CRC-CLM pairs (Figure S3). A thorough comparison of the hierarchical clustering and genetic similarities is described in the next section of the results.
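As an illustration, the unsupervised hierarchical clustering of SCNA profiles can be expressed with SciPy as below; the linkage method and distance metric are our guesses, since the text does not specify which were used.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, dendrogram

def cluster_scna(profiles, labels):
    """Hierarchically cluster per-sample SCNA profiles.
    profiles: (n_samples x n_genomic_bins) matrix of copy number calls.
    Returns sample labels in dendrogram leaf order: a matched CRC-CLM pair
    with similar profiles should appear as adjacent (grouped) leaves."""
    Z = linkage(profiles, method="average", metric="euclidean")
    return dendrogram(Z, labels=labels, no_plot=True)["ivl"]
```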
Exome sequencing
We performed whole exome sequencing on 15 triplets of normal colorectal tissue, CRC, and CLM samples. Using a PCR-based microsatellite assay, we confirmed that all 15 primary CRC samples were microsatellite-stable. Mutations were detected and filtered according to our in-house bioinformatics workflow ( Figure S4). In total, 1079 and 4366 mutations were identified in the CRC and CLM samples, respectively (Table S1). The mutation spectra observed in the CRC and CLM samples were consistent with previous observations in solid tumors and were not significantly different between the CRC and CLM samples ( Figure S5) [24].
Somatic mutation profiles and significantly altered pathways
We found that 2,224 genes had at least 1 non-synonymous, splicing, or frameshift mutation in the CRCs or CLMs (Table S5). This data was then used to investigate the mutational status of the major signaling pathways altered in CRC (i.e., those centered on P53, Wnt, TGF-Beta, and VEGF), by comparing the frequencies with which the genes involved in these pathways were mutated ( Figure 3). This revealed that APC was mutated in 73% of both the CRC and CLM samples, TP53 in 47% of the CRC samples and 67% of the CLM samples, and KRAS in 33% of both the CRC and CLM samples. SMAD4, FAT4, and BRAF were also mutated in the CRC and CLM samples with varying frequencies. In the VEGF signaling pathway, we found mutations in the KDR (0% and 27%), FLT1 (7% and 7%), and FLT4 (0% and 7%) genes of the CRCs and CLMs, respectively ( Figure 3). Mutations in VEGF pathway genes were mainly confined to the CLM samples, the most striking example of which being the KDR gene which was only mutated in the hypermutated CLM samples (Table 3).
Evaluating genetic relationships on the basis of mutations and SCNA
To evaluate genetic relationships between the 15 pairs of CRC and CLM samples, we evaluated whether the same genotypic changes and mutations in major cancer driver genes occurred in each pair. This evaluation was based on the assumption that CRC and CLM pairs with similar genetic alterations are likely to share mutations in the major driver genes. In eight cases, the CRC-CLM pair did indeed share mutations in key CRC-related genes (APC, KRAS, TP53, SMAD4, BRAF, and FAT4) (Table S4). In these pairs, no significant difference was observed in the number of mutations between the CRC and CLM samples (P = 0.28) (Table 4). Conversely, in the remaining seven cases, none of the CRC-CLM pairs shared a mutation in key CRC-related genes; however, the total number of mutations differed significantly between the CRC and CLM samples (P,0.05) ( Table 4 and Table S4).
Importantly, the concordance analysis of the mutation data agreed with the SCNA data analysis. All CRC-CLM pairs that were closely related in the unsupervised hierarchical clustering of SCNA data shared mutations in key CRC-related genes (53%). However, the seven CRC-CLM pairs that were only remotely related in the hierarchical clustering of the SCNA data did not share a mutation in key CRC-related genes (47%).
Taken together, these findings indicate that in 53% of the cases, each CRC-CLM pair had similar genetic alterations, whereas those in the remaining 47% of the cases had distinct genetic alterations.
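The concordance check itself reduces to a set intersection over variant calls restricted to the key driver genes; the sketch below uses a hypothetical (gene, position, alternate allele) tuple as the variant key, which is a simplification of real variant matching.

```python
DRIVER_GENES = {"APC", "KRAS", "TP53", "SMAD4", "BRAF", "FAT4"}

def shared_driver_mutations(crc_variants, clm_variants):
    """Variants present in both members of a CRC-CLM pair that affect a key
    driver gene; each variant is a (gene, position, alt_allele) tuple."""
    shared = set(crc_variants) & set(clm_variants)
    return {v for v in shared if v[0] in DRIVER_GENES}

# A non-empty result corresponds to the 'grouped' pairs above; the seven
# ungrouped pairs returned no shared driver mutations.
```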
Discussion
By comparing the SCNA data and mutation profiles of 15 paired CRC and CLM samples, we found that approximately half of them showed genetic heterogeneity with respect to their corresponding primary CRC. To the best of our knowledge, this is the first comprehensive study to use genomic profiling of primary CRCs and their matched metastases and to define the distinct features of the metastatic lesions in terms of their mutation and SCNA profiles.
Fifty-three percent of the CRC-CLM pairs in the clustered group shared a high number of mutations, including some in the APC, KRAS, TP53, and SMAD4 genes (Table 4, Figure 1). The presence of many shared mutations (30-65%) indicates that somatic mutations may accumulate within the microenvironment of a primary cancer before disseminating to the metastatic sites, something commonly referred to as the linear progression model of tumor evolution [22]. The remaining 47% of the CRC-CLM pairs, which were grouped independently of each other, showed significant differences in their mutation profiles and SCNA data. They had no shared mutations in cancer-initiating genes and no significant similarities in SCNA profiles (Figure 1). The distinct relationship and prominent genetic heterogeneity in these pairs indicate that the CLMs might have originated from a group of genetically distinct primary CRC clones interacting in close proximity, consistent with a polyclonal model of tumor progression.
Six CRC-CLM pairs had somatic KRAS mutations in at least one sample. Three pairs had the same KRAS mutations, and the other three did not. Therefore, the discordance rate of KRAS mutation between CRC-CLM pairs was 50% (3/6) (Figure 1 and Table S4). Knijn and colleagues [25] reported a high concordance rate of KRAS mutations between primary CRC and CLM tumors. In contrast, a series of studies have demonstrated high discordance rates of KRAS mutations between CRC-CLM pairs, ranging from 8% to 60% [26][27][28]. In our study, those three CRC-CLM pairs with discordant KRAS mutation status were also clustered distinctly by SCNA analysis. The discordance of KRAS mutation, along with the distinct SCNA clustering patterns, between these CRC-CLM pairs supports our hypothesis that primary CRCs and their corresponding CLMs may have different clonal origins in these samples.
The polyclonal tumor progression model can help direct therapeutic strategies [4]. The major disadvantage of a monoclonal tumor origin model is the assumption that most of the initial events that led to the primary cancer will also be found in the metastasized cells, which overlooks the possibility that small populations of tumor cells with distinct genetic characteristics in close proximity to each other may be responsible for the metastasis [4]. Such metastatic cells, which originate from primary tumors, might have a different response to therapy.
Hypermutation caused by the loss of DNA mismatch repair activity is termed MSI [29]. We found that four CLM samples were microsatellite-instable, resulting in hypermutation. The KDR gene, a significant prognostic marker in colorectal carcinoma [30], was mutated only in the hypermutated samples. It is also noteworthy that there was an apparent relationship between hypermutation and chromosomal instability. Recently, a distinct copy number status of the DNA mismatch repair gene MLH1 was shown to be associated with elevated levels of mutation in pancreatic cancer [31]. Previous studies have provided contradictory evidence about MSI tumors and chromosomal instability in colorectal cancers. Some studies reported that MSS tumors show a higher rate of chromosomal instability than MSI tumors [32][33][34][35]. Other studies reported substantial overlap between MSI tumors and chromosomal instability [32,[35][36][37][38]. Of note, all these studies were done using primary CRC samples, and the relationship between MSI and chromosomal instability in CLM samples is yet to be revealed. We found that MSI tumors were associated with a large number of gene deletions/amplifications and an increased frequency of LOH, in other words, chromosomal instability, in CLM tumors. Further research is required to reveal the relationship between MSI, chromosomal instability, and metastasis of primary CRCs.
Angiogenesis is regulated principally by interactions between vascular endothelial growth factors and VEGF receptors and plays a central role in cancer growth and metastasis [39,40]. Several studies have reported genetic polymorphisms of the KDR gene implicating the risk of coronary artery disease [41,42]. However, the clear role of individual KDR SNPs and their physiological functions in cancer progression and prognosis remains unknown.
In the current study, all of the patients with KDR SNPs (i.e., rs187037 and rs2305948) had recurrence after curative resection of the CRC and liver lesions (p = 0.925; data not shown). However, due to the small sample size, the KDR mutation was not statistically significant. A larger number of samples is needed to validate the KDR mutation and its characteristic role in tumor recurrence. The mean survival time of patients with metastatic CRC has increased from 6-8 months to more than 2 years due to the emergence of targeted treatment and improved surgical resections. Nevertheless, the therapeutic options for non-responders to oxaliplatin- or irinotecan-based chemotherapy, with or without cetuximab or bevacizumab, are very limited. Hence, better treatment strategies for metastatic CRC have to be developed. An emerging body of evidence suggests that primary CRC may be polyclonal in nature and that the resulting metastases might therefore be genetically different from the majority of the primary tumor. In such cases, the biology and genetic profile of the primary tumor may be significantly different from those of the metastases. This would be an important concern in targeted, personalized therapy. Our results suggest that the mutational profiles of approximately 50% of metastatic liver tumors might be different from those of the primary tumors, which underscores the need to evaluate metastatic sites separately for identification of potential targets for systemic therapy.
Study population
Between June 2009 and June 2011, 53 patients underwent curative resection of CRC and liver metastasis at Gachon University Gil Hospital (Incheon, South Korea). The criteria for inclusion in this study were as follows: (1) hepatic metastasis from CRC confirmed by spiral abdominopelvic computed tomography; (2) liver metastasis as the first manifestation of M1 disease without any documented disseminated disease, as determined by preoperative imaging; (3) no prior history of neoadjuvant chemoradiation or chemotherapy, including molecular targeted agents; (4) curative resection performed for both primary colorectal and liver lesions; (5) the resected specimens should be synchronous tumors (simultaneous resection, n = 11; two-stage resection within 6 months, n = 4); and (6) microsatellite-stable primary CRCs. We selected 15 patients with CRC and matched liver metastasis based on these inclusion criteria. The basic characteristics of the patients are shown in Table 1. All tumors were reviewed by a single pathologist, and only specimens with >70% tumor content were included in the analysis. The study protocols were approved by the Institutional Review Board of Gachon University Gil Hospital (IRB approval number: GIRBA 2535). Written informed consent was required from all participants. Information such as sex, age, and tumor stage was extracted from the clinical database for this cohort.
PCR-based microsatellite assay
A set of microsatellite markers consisting of two mononucleotide repeat markers (BAT25 and BAT26) and three dinucleotide repeat markers (D2S123, D5S346, and D17S250), as recommended by the National Cancer Institute Consensus Group, was used to determine the tumor microsatellite instability (MSI) status. Aliquots containing 50 ng DNA were amplified in 20-µL reaction mixtures containing 2 µL of 10× buffer (Roche, Mannheim, Germany), 1.7-2.5 mmol/L MgCl2, 0.3 µM each primer pair, 250 µM deoxynucleotide triphosphates, and 2.5 U DNA polymerase (Roche, Mannheim, Germany). PCR was performed with an initial denaturation step of 94 °C for 5 min, followed by 30 cycles of 1 min at 94 °C, 1 min at 55 °C, and 1 min at 72 °C, and a final extension step of 10 min at 72 °C. The samples were analyzed on an ABI Prism 3100 Genetic Analyzer using 0.7 µL of amplified sample combined with 0.3 µL of GeneScan 500 Size Standard and 9 µL of HiDi Formamide according to the manufacturer's guidelines (Applied Biosystems, Foster City, CA, USA). Data were analyzed using ABI Prism 3100 Data Collection software (Applied Biosystems, Foster City, CA, USA).
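The per-marker calls from this assay are then summarized into an MSI status. Below is a minimal Python sketch of that summary step, using the standard convention for the five-marker NCI panel (instability at two or more markers is called MSI-high); the function and thresholds are illustrative conventions, not code taken from this paper's analysis.

```python
# Minimal sketch: classify MSI status from per-marker instability calls
# (True = allele shift between tumor and matched normal). The >=2 / ==1 / 0
# cutoffs follow the standard NCI/Bethesda convention, which the paper does
# not restate explicitly.
NCI_PANEL = ("BAT25", "BAT26", "D2S123", "D5S346", "D17S250")

def classify_msi(unstable_markers: dict) -> str:
    """unstable_markers maps marker name -> bool (shifted allele size)."""
    n_unstable = sum(unstable_markers.get(m, False) for m in NCI_PANEL)
    if n_unstable >= 2:
        return "MSI-H"   # microsatellite instability-high
    if n_unstable == 1:
        return "MSI-L"   # instability-low
    return "MSS"         # microsatellite stable

# Example: two shifted mononucleotide markers -> MSI-high
print(classify_msi({"BAT25": True, "BAT26": True}))  # MSI-H
```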
DNA extraction, library preparation, and targeted exome sequencing
DNA was extracted using a DNeasy Blood & Tissue Kit (QIAGEN, Valencia, CA, USA). DNA quality was checked by 1% agarose gel electrophoresis, and DNA concentration was measured using a PicoGreen dsDNA Assay (Invitrogen, Carlsbad, CA, USA). SureSelect sequencing libraries were prepared according to the manufacturer's instructions (Agilent SureSelect All Exon Kit 38 Mb; Agilent Technologies, Santa Clara, CA, USA) using a Bravo automated liquid handler. The quality of the amplified libraries was verified by capillary electrophoresis (Bioanalyzer; Agilent Technologies, Santa Clara, CA, USA), after which paired-end DNA sequences were obtained from the libraries using the Illumina HiSeq platform (Illumina, San Diego, CA, USA).
Bioinformatics analysis
Sequence data were aligned to the human reference genome GRCh37 (http://www.ncbi.nlm.nih.gov/projects/genome/assembly/grc/human/index.shtml) using the Burrows-Wheeler Aligner [43] with default parameters. We sequenced at an average depth of 52.44X for targeted regions. The PCR duplicates were removed using the Picard algorithm (http://picard.sourceforge.net). We performed realignment and quality recalibration for the sequenced data using the Genome Analysis Toolkit (GATK) [44]. After alignment, we used VarScan [45], Strelka [46], and MuTect [47] to call mutations, including insertions and deletions (indels), for each chromosomal position and also used GATK [44] for indel detection with 15 triplet specimens consisting of normal colorectal tissue, primary CRC, and matched CLMs. We annotated the mutations using ANNOVAR [48] with the Ensembl Gene annotation database for human genome build 37 (http://www.ensembl.org/) and searched for matches in dbSNP137 (http://www.ncbi.nlm.nih.gov/projects/genome/assembly/grc/human/index.shtml), 1000 Genomes data [49], and the COSMIC database [50]. We filtered the mutations from the targeted regions and selected non-synonymous, synonymous, gain or loss of the stop codon, frameshift indels, non-frameshift indels, and splicing site mutations.
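As a sketch of how the alignment stage of such a pipeline can be scripted, the snippet below drives BWA and SAMtools from Python. File names, the reference path, and the choice of `bwa mem` are illustrative assumptions (the paper reports only BWA "with default parameters"); the tools must be installed and on the PATH, and the Picard duplicate removal, GATK processing, and the three variant callers would be chained in the same manner.

```python
# Alignment stage of a triplet (normal / primary / metastasis) pipeline,
# driven via subprocess. All file names are hypothetical placeholders.
import subprocess

REF = "GRCh37.fa"  # hypothetical reference FASTA, pre-indexed with `bwa index`

def align_sample(sample: str) -> str:
    """Align paired-end reads for one sample and return the sorted BAM path."""
    sam = f"{sample}.sam"
    bam = f"{sample}.sorted.bam"
    with open(sam, "w") as out:
        # bwa mem writes SAM to stdout; redirect it to a file
        subprocess.run(
            ["bwa", "mem", REF, f"{sample}_R1.fastq", f"{sample}_R2.fastq"],
            stdout=out, check=True,
        )
    subprocess.run(["samtools", "sort", "-o", bam, sam], check=True)
    subprocess.run(["samtools", "index", bam], check=True)
    return bam

for tissue in ("normal", "primary_crc", "liver_met"):
    align_sample(tissue)
```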
SCNA analysis
The single nucleotide polymorphism (SNP) array CytoScan™ HD (Affymetrix, Inc., Santa Clara, CA, USA) was used. SCNA analysis of the CytoScan™ HD Array was performed using BioDiscovery Nexus Copy Number 6.1 (http://www.biodiscovery.com/software/nexus-copy-number/) software. The SNP-Fast Adaptive States Segmentation Technique 2 segmentation algorithm was used with default parameters.
Clustering
Complete linkage hierarchical clustering was performed to evaluate the concordance between the primary CRC and CLM samples. Average and single linkage hierarchical clustering were also applied; however, all clustering methods yielded similar results. Paired t-tests and two-sample t-tests were used for statistical analyses, and P < 0.05 was considered to indicate statistical significance.
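A minimal sketch of this step in Python with SciPy is shown below; the mutation/SCNA profile matrix is a random placeholder, and the per-sample statistic compared by the paired t-test is illustrative rather than the paper's exact feature.

```python
# Sketch of the clustering/statistics step, assuming `profiles` is a
# (samples x features) mutation/SCNA matrix; the data here are random
# placeholders for 15 primaries followed by 15 matched CLMs.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.stats import ttest_rel

rng = np.random.default_rng(0)
profiles = rng.random((30, 100))

# Complete-linkage hierarchical clustering (average/single gave similar trees)
tree = linkage(profiles, method="complete", metric="euclidean")
labels = fcluster(tree, t=2, criterion="maxclust")

# Paired t-test comparing a per-sample statistic between primaries and CLMs
primary_stat, clm_stat = profiles[:15].sum(axis=1), profiles[15:].sum(axis=1)
t_stat, p_value = ttest_rel(primary_stat, clm_stat)
print(f"t = {t_stat:.2f}, P = {p_value:.3f}")  # P < 0.05 -> significant
```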
Data Link
Whole exome sequencing data: Sequence Retrieve Archive (SRA) accession number is SRP034161.
Cytoscan array data: GEO accession number is GSE53799.
Home-Based Locational Accessibility to Essential Urban Services: The Case of Wake County, North Carolina, USA
Accessibility is an important concept in urban studies and planning, especially on issues related to sustainable transportation planning and urban spatial structure. This paper develops an optimization model to examine the accessibility from single family homes to major urban facilities for services or amenities using geographical information systems. The home-based accessibility to facilities is based upon the point-to-point direct distance from sampled homes to sampled facilities. Descriptive statistics about the accessibility, such as min/max, mean/median, and standard deviation/variance, were computed. Variations of accessibility for a range of categories by home price and year built were also examined. Multivariate linear regression models examining the housing value with respect to home-facility accessibility by facility types were implemented. The results show that desirable urban facilities, which are also more frequently used for livability, enjoy better accessibility than undesirable urban facilities. The home-based accessibility's positive or negative associations with home price, along with year built and/or residential lot size, exist for most facilities in general, and, conforming to the literature, the home-facility accessibility in particular does strongly impact home values, as evidenced by fair to excellent R² values. Accordingly, this research provides evidence-based recommendations for sustainable urban mobility and urban planning.
Introduction
Today, more than 50% of the world population live in cities. In developed countries, such as the United States, Germany, Japan, and Singapore, more than 85% of the population live in urbanized areas and work in diverse non-farming businesses [1]. The major attraction of a city is attributed to better jobs, more quality services, and a greener environment [2]. In cities, most working residents commute to workplaces, get services from urban facilities, and enjoy social/cultural well-being at urban amenities. All these urban living functions make home-based trips inevitable [3,4]. This research takes Wake County, North Carolina, in the United States, as a test bed to study the home-based accessibility to major urban facilities and amenities as a way to comprehend the urban spatial form for home-based travel. Wake County is a well-urbanized county with Raleigh as the capital city for both the county and the State of North Carolina.
Urban residents live in various types of homes, including single family homes or villas, apartments, townhouses, or condominiums, whether in low-rise, middle-rise, or high-rise buildings. In the United States, about 65% of the households live in single family homes or villas, while the rest live in apartments, townhouses, or condominiums, owned or rented [5]. In this research, the focus is on single family homes or villas located in Wake County only. Similar to homes or villas, there are also various urban facilities and amenities, such as schools, libraries, parks, shopping centers, and offices, which provide essential services and amenities in education, recreation, shopping, working, and other functions critical to high quality urban living.
Homes are the origins and destinations for a normal daily life cycle while urban facilities are intermediate stops to provide services or amenities. Therefore, home-based travel to and from urban facilities is an essential function with spatial and social dimensions for city living. Indeed, spatial relationships between homes and facilities in a city are critical to almost all planning issues, land use policies, and zoning regulations, which together affect the city's urban structure and travel pattern [6,7]. For instance, in transportation or residential planning, home-to-work travel is much affected by relative locations of homes and offices [3,6,8,9]. In school district planning, home and school locations are essential elements in planning school boundaries and bus routes [10][11][12]. In environmental planning, locations of undesirable facilities in closer proximity with respect to low-income housing and minority population have spawned intense debates on social equity [13][14][15].
Although various spatial relationships about home-facility pairs can be studied, this research emphasizes the spatial relationship between a home and a facility, or more formally, the home-based accessibility to urban facilities for essential services and amenities. The following questions are particularly pertinent to this research. (1) What are the min/max, mean/median, and standard deviation/variance of the shortest distances from single family homes to common urban facilities, as calculated by the optimization-based assignment model developed in this research? Answers to this question would help local planning agencies make policies regarding commuting to work places, routing for school buses, locating fire stations, etc.; (2) What accessibility disparity, if any, exists among single family homes of different value brackets or built at different time periods from 1790 to 2020? Findings from this inquiry are valuable to environmental justice concerns and equitable urban growth, both for the current metro county and for its future urban planning; (3) Is home-based accessibility a good explanatory factor for the single family home value using a hedonic price model for homes of different value brackets and built at various time periods? Insights into this question can certainly help inform infill and mixed-income housing development and the determination of property tax rates, among others. Accordingly, this research provides evidence-based findings and policy recommendations on sustainable urban planning for Wake County and its major cities, especially regarding residential and facility locations and associated mobility and equity aspects. This paper is organized as follows. After the introduction in Section one, Section two provides a concise review of the relevant literature. Section three presents the optimization model, in which model assumption, notation, formulation, database, and modeling steps are discussed. Section four summarizes the results and highlights the findings. Conclusions and remarks are given in Section five.
Literature Review
The accessibility concept was proposed long ago (Hansen, 1959) [16], and today competing definitions of accessibility exist in the literature [17][18][19][20][21]. Yet Bhat et al. (2000, p. 1) [22] perhaps defined accessibility in the broadest way as "a measure of the ease of an individual to pursue an activity of a desired type, at a desired location, by a desired mode, and at a desired time." Today, accessibility has been well developed from various perspectives, by different methods, and for diverse applications, such as in Geurs and Van Wee [6], Farrington [23], Curl et al. [8], Martens [14], and Geurs et al. [4], to name but a few. A concise review of common accessibility measures and applications is provided below, followed by a brief discussion of the home-based locational accessibility used in this research.
Accessibility Measures
Various models and measures can be found in the literature: (1) The isochronal measure of accessibility [24][25][26] is based on the number of destinations within a set of specified travel time or distance ranges from a home, for example, the number of parks some distance away from the home. It is simple, but arbitrarily excludes facilities beyond a certain distance. (2) The gravitation measure of accessibility [16] is based upon the gravitation push and pull between homes and facilities. Accessibility is expected to decline the further apart homes and facilities are. While conceptually simple, this accessibility measure largely depends on the impedance factor and the weights for the facilities (i.e., number of shops, number of jobs, or gross leasable areas). (3) The utility measure of accessibility considers travel behaviors [17,27,28] and incorporates individual traveler preferences into the accessibility measure, which, for example, can be used to examine urban parks chosen as utility-maximized destinations. (4) The constraint measure of accessibility examines the activities to be carried out and the resources (i.e., time) allocable to these activities [29,30]. This measure combines space-time and utility measures in a more complicated way with superimposed constraints. (5) The data-driven measure of accessibility fully utilizes detailed location data in various ways, such as digital points of interest, cell phone or GPS data, or physical parcel-building level data [15,[31][32][33].
Locational Accessibility as Used in This Research
Accessibility depends on home and facility locations and can be measured by travel distance, time, or cost. Hence, it is a spatial or locational indicator for potential trips between origins and destinations [36,52]. Based on the direct home-facility distance [17,33], rather than the traditional graph-theoretic link connectivity or network path impedance as used in Taaffe and Gauthier [53], this research is concerned with the aggregated distance or accessibility from all single family homes to all facilities for any given type of facility. Given the spatial scale of the urbanized areas being at the Wake County level and the aggregate nature of this study, it is reasonable to use direct distance, rather than network distance, to quickly and sufficiently capture the general spatial patterns of home-facility accessibility for the tens of thousands of homes and hundreds of facilities sampled.
The locational accessibility from single family homes to facilities is modeled through a shortest-distance-based optimal assignment model [54] and executed in ArcGIS. The model output is further processed to generate mean/median distances, hence accessibility, from single family homes to facilities. For desirable facilities that people prefer to live near (i.e., parks, schools, stores), the longer the mean/median distance, the less advantageous the location and accessibility for single family homes. The contrary is true for undesirable facilities that people prefer not to live close to (i.e., solid waste landfill sites, heavy industrial plants, airports).
If a subgroup of single family homes (i.e., less expensive in value, older by year built) experiences consistently worse accessibility for most desirable and/or undesirable facilities, it raises a red flag on possible patterns of questionable land uses, biased zoning practices (i.e., exclusionary zoning), unequal environmental justice, or neighborhood social inequity (i.e., including gentrification) [13,38,43].
For instance, in transportation or land use planning, jobs-housing travel is much influenced by household size, car ownership rate, and road network, as well as by relative locations of homes and work places [4,6,9]. In school district planning [12], home and school locations and their distances are critical factors for optimal school bus routing [55]. In environmental planning, some evidence of undesirable facilities being sited closer to low-income communities has caused heart-searching debates among all stakeholders [7]. In health care planning, the locations and connections of homes, hospitals, and health clinics can profoundly affect people's health [38,45,52,56].
The above review indicates that almost all models on accessibility found in the literature, regardless of whether place- or people-based, are based upon distance or its equivalent time or cost measure, yet only a few models [33,52] are similar to the assignment-based optimization model used in this research. Also, the number of facilities considered in the literature is small, varying from one single type of facility (i.e., park, school, bus stop) to a few, either positive (desirable) or negative (undesirable), while this research considers almost two dozen facility types in total, both positive and negative, the most among the models reviewed. Further, regressions on both property value brackets and year-built intervals make it possible to examine in detail value and time effects on accessibility. These three major features distinguish this research, especially its model, from those in the literature.
Model Assumptions
Several important assumptions are made in this study. First, an urban facility can be classified as either desirable or undesirable. Desirable facilities provide physical-economic-social services or amenities in education, healthcare, safety, recreation, or employment, such as banks, churches, schools, colleges, museums, daycare centers, health care and medical clinics, hospitals, recreation facilities, libraries, restaurants, stores, shopping centers, and supermarkets. In general, people prefer to live closer to these facilities. Undesirable facilities offer employment, manufacturing, transportation, and waste processing services but, at the same time, may pose life-disturbing or life-threatening risks. These may include heavy manufacturing plants, airports, landfill sites or solid waste centers, and hazardous materials processing factories. It should be noted that the undesirable facilities are all needed for cities to offer quality life in just the same way as desirable facilities. Their distinction here is mainly for classification and discussion purposes.
Second, individual facilities of the same type would provide the same quantity and quality of services to different people or households. For example, it is assumed that all supermarkets would be the same in providing quality and variety in shopping services; hence, the best one for a home is the closest supermarket. Third, with the same logic, each single-family home needs to access only one facility of each type. For example, since all banks are assumed to be the same, the household living in a single-family home only goes to the closest one. Fourth, the home-facility accessibility can be measured by descriptive statistics of the direct air distances between all homes and the facilities of the same type (Figure 1).
The use of direct home-facility distance is mainly for modeling convenience and simplicity. A more realistic alternative would be to use Wake County's real road network. Then, networked trip distances, times, or costs would have to take into account many network factors, such as traffic flow, volume, and capacity, speed limits, and link and path geometries, in addition to home and facility locations. However, this research focuses on the aggregated accessibility patterns for the entire Wake County, centered around its central city, Raleigh. The use of direct distance between homes and facilities is assumed to be able to fulfill the research questions, as they are county-wide, involve homes in the tens of thousands and facilities up to the hundreds, and primarily are based on mean or median distances for each type of facility. In other words, the research questions are answered not for a single home or a single facility. Here, classifying facilities as desirable or undesirable is somewhat arbitrary. Also, other than services, desirable and undesirable facilities may each bring opposite effects. For instance, while living near heavy manufacturing factories may mean more exposure to noise, dust, or odor, it provides better access to industrial jobs. Similarly, residing close to a recreational park or playground encourages more entertainment or physical activities, yet most people would rather not reside next to such facilities because of possible through traffic, especially from outside the neighborhood, and other reasons.
Model Formulation
Denote $i \in I$ as a facility, $i = 1, 2, 3, \ldots$, in the facility index set $I$. Similarly, denote $l \in L$ as a facility type, $l = 1, 2, 3, \ldots$, in the facility type index set $L$. Define $j \in J$ as a single-family home unit, $j = 1, 2, 3, \ldots$, in the home index set $J$. Similarly, define $k \in K$ as a home type, $k = 1, 2, 3, \ldots$, in the home type index set $K$. Also, define the binary decision variable $x_{ij}^{lk} = 1$ if facility $i$ of facility type $l$ serves home unit $j$ of home type $k$, and $x_{ij}^{lk} = 0$ otherwise.
$d_{ij}$ = direct air distance from home unit $j$ to facility $i$. Note that $d_{ij}$ can alternatively be measured by the shortest distance between $i$ and $j$ in the real transportation network.
$Z$ = objective function value:

$$\min Z = \sum_{l \in L} \sum_{k \in K} \sum_{i \in I} \sum_{j \in J} d_{ij}\, x_{ij}^{lk} \qquad (1)$$

$$\sum_{i \in I} \sum_{j \in J} x_{ij}^{lk} = |J| \quad \forall\, l \in L,\ k \in K \qquad (2)$$

$$1 \le \sum_{j \in J} x_{ij}^{lk} \le M \quad \forall\, i \in I,\ l \in L,\ k \in K \qquad (3)$$

$$\sum_{i \in I} x_{ij}^{lk} = 1 \quad \forall\, j \in J,\ l \in L,\ k \in K \qquad (4)$$

The objective function in (1) minimizes the total distance $Z$ from all single-family homes to all facilities of all types. The constraint in (2) ensures that all single-family homes are assigned to the facilities of a given type. Constraint (3) ensures that each facility is assigned at least one but not more than $M$ homes ($M$ can be set at a very high number). The constraint in (4) guarantees that each home is assigned to one and only one facility. Some key descriptive statistics based on the optimal results (labeled with *) from the model (1)-(4) are calculated below. Letting $d_j^{*}$ denote the distance from home $j$ to its optimally assigned facility of a given type:

Aggregated average or mean distance: $\bar{d} = \frac{1}{|J|} \sum_{j \in J} d_j^{*}$ (5)

Maximum distance: $d_{\max} = \max_{j \in J} d_j^{*}$ (6)

Minimum distance: $d_{\min} = \min_{j \in J} d_j^{*}$ (7)

Sample standard deviation: $s = \sqrt{\frac{1}{|J| - 1} \sum_{j \in J} \left(d_j^{*} - \bar{d}\right)^2}$ (8)
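Because each home is assigned to exactly one facility of a given type and $M$ is set very high, the optimum of model (1)-(4) reduces, in practice, to nearest-facility assignment. The Python sketch below illustrates this computation and statistics (5)-(8) for one facility type; the planar coordinates and facility count are hypothetical placeholders rather than the Wake County data.

```python
# Minimal sketch of the optimal assignment and statistics (5)-(8) for one
# facility type. With M very large, minimizing total distance reduces to
# assigning each home to its nearest facility. Coordinates are hypothetical
# planar (state-plane style) coordinates in feet.
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(42)
homes = rng.uniform(0, 150_000, size=(10_000, 2))    # single-family home centroids
facilities = rng.uniform(0, 150_000, size=(138, 2))  # e.g., 138 banks

dist, nearest = cKDTree(facilities).query(homes)     # shortest direct distances

print(f"mean = {dist.mean():10.2f} ft")              # Eq. (5)
print(f"max  = {dist.max():10.2f} ft")               # Eq. (6)
print(f"min  = {dist.min():10.2f} ft")               # Eq. (7)
print(f"std  = {dist.std(ddof=1):10.2f} ft")         # Eq. (8), sample SD
```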
Study Site and Database
The Wake County land parcel database was used. Wake County contains the City of Raleigh, the capital of NC, and the Research Triangle Park formed by Duke, UNC-Chapel Hill, and NCSU (see Figure 2). Other major cities include Apex, Cary, Chapel Hill, Wake Forest, and a few other small towns. The Wake County land parcel and building data were in two files and were not perfect, so necessary data processing and sampling were performed to ensure complete and correct data. This process included polygon-to-centroid translation, geo-coding, and address matching. The final data are in ArcGIS format and include: (1) a countywide land parcel centroid GIS layer with attributes including the parcel identification number (PIN) and (2) a countywide building attribute table with the same PIN, which was used to join the parcel centroid GIS layer. It should be noted that not all facilities of a certain type were used, nor were all single-family homes in each price or year-built category. Samples were randomly drawn based on a number of criteria, such as data completeness (i.e., no zeros for key attributes), total property value (i.e., between $50 k and $20 million), number of buildings or structures on a parcel (i.e., = 1), etc. Figure 3 visually maps the home and facility samples (large dots in different colors for facilities and grey for single family homes) along with city (light blue) and county (cream) boundaries and road networks (green blue).
(The facility sample sizes in the Figure 3 legend include Hospital (11), Medical Clinic (98), and Nursing Home (18), among others.) Figure 4 summarizes the modeling procedure. The first step is to turn parcel or structure blueprint polygons into their centroids by using the polygon-to-centroid procedure in ArcGIS. The next step is to link the building attribute table to the parcel centroid layer to sort out single family homes and facilities by land use codes. The outcome from step two is used as input to the optimal assignment model in Equations (1)-(4). The third step is to input the optimal assignment model output into SPSS to compute the descriptive statistics defined in Equations (5)-(8).
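For readers who wish to reproduce steps one and two outside ArcGIS (the tool used in the paper), the sketch below shows the equivalent centroid conversion and attribute join in geopandas; the file names and land-use codes are hypothetical stand-ins for the Wake County layers.

```python
# Steps one and two of the procedure, sketched with geopandas instead of
# ArcGIS. File names and land-use codes are hypothetical placeholders.
import geopandas as gpd
import pandas as pd

parcels = gpd.read_file("wake_parcels.shp")     # parcel polygons with PIN
buildings = pd.read_csv("wake_buildings.csv")   # building attributes keyed by PIN

parcels["geometry"] = parcels.geometry.centroid # polygon -> centroid
joined = parcels.merge(buildings, on="PIN")     # join attributes on parcel ID

# Sort records into homes and facilities by (hypothetical) land-use codes
homes = joined[joined["land_use"] == "SINGLE_FAMILY"]
facilities = joined[joined["land_use"] == "HOSPITAL"]
```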
Key Statistics for Accessibility
The descriptive statistics of overall locational accessibility for Wake County are summarized in Table 1, which is organized by the numbers and types of facilities, generally classified into four groups: (1) Environmental, Health, and Rescue; (2) Cultural, Recreational, and Educational; (3) Auto, Food, Shopping, and Other Business; and (4) Manufacturing, Waste Management, and Transportation services. The facilities in the first three groups are considered desirable, while the facilities in the last group are regarded as undesirable. The mean distances from homes to frequently used facilities, such as Restaurants, Shopping Centers (neighborhood and community), Banks, Churches, Daycare Centers, Libraries, and Elementary Schools, are shorter than the mean distances to other, less frequently used facilities. The mean distances to the Airport, Heavy Manufacturing plants, Truck Terminals, Theaters, and Landfill sites are much longer than those to other types of facilities. Churches have the highest accessibility, with a mean distance of 3170.53 feet, followed by Elementary Schools (6589.45 feet), Fire Stations (8109.94 feet), Banks (8778.47 feet), and Daycare Centers (8820.28 feet). The Airport has the lowest accessibility, with a mean distance of 58,645.84 feet, followed by Regional Shopping Centers (42,632.23 feet), Heavy Industrial Mfg. (32,351.88 feet), and Theaters (29,806.50 feet). Undesirable facilities are generally farther away from homes than desirable facilities.
It makes sense that the mean accessibility to schools decreases from Elementary Schools to High Schools, because the number of schools decreases from the primary, to the middle, to the high school level, and because the spatial distributions of single family homes and schools are made to correspond through urban planning. A similar pattern can be seen for Community, Neighborhood, and Regional Shopping Centers. In general, the larger the number of facilities, the shorter the distance and the better the local accessibility from homes. For example, the Church had the largest count at 454, while there is only 1 Airport, making their median distances 2744.24 feet and 53,252.05 feet, respectively. These patterns hold true for the min, max, median, and even the standard deviation. One interesting observation is that some homes were located very close to some desirable facilities, such as the min accessibility values for Elementary School (17.8 feet) and Middle School (30.76 feet), while even the nearest homes were some distance from others, such as the Airport (7202.66 feet) and Regional Shopping Centers (1227.62 feet). Similar patterns can be seen for the max accessibility values, for example, for the Airport (137,358.28 feet), Regional Shopping Centers (115,819.10 feet), and Heavy Manufacturing (96,215.82 feet). Finally, it is worth mentioning that the percentage of homes located in the 100-year Flood Plain was 3.63%.
The above results can be used to partially explain the locations of homes and facilities and the urban spatial structure. Facilities and homes can be regarded as the joint products of private market equilibrium and public regulatory control under certain levels of safety, health, and livability for the population in Wake County. An aggregated locational accessibility measured by the shortest distances from homes to all facilities of a type roughly represents the service or amenity catchment for that type of facility. Different types of facilities would have different catchment sizes due to facility capacity, the nature of the service or amenity provided, people's lifestyles, and, of course, their relative spatial relations, other factors being equal. These catchments and relations, while changing over time, largely form the spatial structure of the much-urbanized Wake County. Figure 5 is a series of ArcGIS visualization images illustrating the service catchments formed by shortest-distance-based accessibility for 18 desirable facilities, four undesirable facilities, and the flood plain. Each image is shown with the Wake County boundary and the shortest distances from tens of thousands of single-family homes to facilities of different numbers, ranging from 1 for the Airport to 138 for Banks to 454 for Churches. The 22 images show color-coded accessibility grouped into five classes using Jenks natural breaks classification. The first four images show the locations of facilities (in red) providing Environmental, Health, and Rescue services or amenities, and the corresponding spider lines representing five classes of home-facility accessibility in light to dark blue colors. These facilities include hospitals, medical clinics, nursing homes, and fire stations. The next nine images depict facilities (in dark blue) for Cultural, Recreational, and Educational services or amenities, and five classes of home-facility accessibility in light yellow to dark blue colors. These facilities include Schools, Libraries, Banks, Churches, etc. Clearly, the Theater has the smallest count (5) while the Church has the largest, and their accessibility catchments are quite different in size. Similar patterns can also be seen for the Auto, Food, Shopping, and Other Business facilities and the Manufacturing, Waste Management, and Transportation facilities, whose accessibilities are visually represented by lines from yellow to red or dark brown colors, respectively. Figure 6 shows three close-up images for visualization. Small grey or black dots represent homes, and larger red or dark blue dots represent Hospitals, Churches, and homes in the Flood Plain, which is in light yellow. Color-coded home-hospital and home-church accessibility maps are drawn using mean distances in feet classified into five groups with Jenks natural breaks. Figure 7 shows the frequency distribution of locational accessibility by shortest home-facility distances. Each chart's vertical axis represents frequency, and its horizontal axis represents distance ranges. Clearly, almost none of them follow a perfect bell shape or normal distribution. The distributions are more or less right-skewed, which indicates that, across the distance categories, the mean is located to the right of the mode, i.e., the highest histogram bar. In practical terms, most homes fall in the shorter distance ranges, with a long tail of a few extreme longer distances.
This holds, for example, for Medical Clinics, Daycare Centers, Schools, Churches, Banks, Community and Neighborhood Shopping Centers, Supermarkets, and Restaurants, indicating that these relatively high use-frequency facilities, visited daily or a few times per week, for instance, enjoy better accessibility than the mean or median accessibility in Table 1 indicates.
Interestingly though, some distributions are roughly close to the normal bell shape, such as that for Parks. Heavy Manufacturing and Landfill have bimodal-like distributions with two modes or peaks. However, a closer look at them indicates that the two peaks are very close, so the distributions can practically be regarded as semi-normal. The last distribution in Figure 7 is really a value distribution for homes located within the Flood Plain. Clearly, it is fairly right-skewed: the majority of these homes are lower in value and only a handful are expensive, raising a red flag on a potential environmental justice issue to be further studied.
Accessibility Disparity by Home Value and Year Built?
The descriptive statistics in Table 1 only portray an aggregated picture of home-facility accessibility or catchments for urban services or urban spatial structure. Tables 2 and 3 provide more detailed views of the accessibility by home value and year-built categories. Table 2 breaks down the mean home-facility accessibility by home value bracket, while Table 3 summarizes the mean home-facility accessibility for homes built from 1790 to 1990 by fifty-year intervals, from 1990 to 2020 by decade, and for the entire period. Please note that the sample for homes between 1790 and 2020 removes those homes without a year built from the total samples used in Table 1. Clearly, from 1790 to 2020, for (1) Environmental, Health, and Rescue facilities, the mean distances all increased from 1790-1840 (i.e., 15,896.45 to 20,615.40 feet for Hospitals), but all dropped until 1890 for Medical Clinics (i.e., 12,013.69 to 7391.49 feet), then all increased to 2020 (i.e., 5181.64 to 9401.95 feet for Fire Stations); for (2) Cultural, Recreational, and Educational facilities, similar mean accessibility patterns can be observed with various period growth or decline rates (i.e., large ups and downs for Theaters and minor changes for Churches), except for Parks and Daycare Centers; for (3) Auto, Food, Shopping, and Other Business facilities, again, similar mean accessibility patterns prevail, with larger changes for Regional Shopping Centers and mild fluctuations for all other facility types; and for (4) Manufacturing, Waste Management, and Transportation facilities, all mean distances went up or down from period to period without much overall increase or decrease. The periodical dynamics of mean accessibility for all facilities indicate the dynamic nature of urban spatial development, mainly by homes and facilities. It is very likely that they were largely developed separately yet referenced each other in terms of spatial location choices influenced by developers and consumers. Figure 8 provides three snap charts illustrating the median accessibility for all homes, the mean accessibility for homes built between 1990 and 2020, and that for homes valued between $275 k and $350 k. What is striking is that the overall patterns in terms of home-facility distances are very consistent; for example, the distances to undesirable facilities such as the Airport, Heavy Manufacturing, and Truck Terminals are larger than those to most other, desirable facilities. Also, among the desirable facilities, distances to Regional Shopping Centers, Theaters, and Hospitals are longer. Moreover, accessibility to Churches, Banks, Elementary Schools, Daycare Centers, and Restaurants is likely the most convenient. These results are very much related to the use frequencies with which homes get services from the facilities and are likely reinforced by private development practice and public planning regulation. Overall, they should reflect the spatial lifestyles of people in Wake County.
Home-Based Accessibility Good to Explain Home Value?
Various studies have reported the positive or negative impact of accessibility on home value [26,35,43]. The relevant literature concludes that accessibility to desirable facilities from homes can boost home values, while undesirable facilities can negatively influence home values. This section conducts several such home value impact analyses using multivariate linear regression. The basic idea is to use the total home value, including land and building, as the dependent variable and home-facility accessibility measures as independent variables. One such regression model is run for each of the four facility groups and one for all facilities listed in Table 1. The five regression models are executed for all sampled homes and facilities. For each model, a backward elimination process was used until the final significant variables were selected at the 95% significance level, except for a couple of variables. Since linear regression works better over a normal distribution of data points, most of the variables listed in Figure 7 were transformed into their logarithmic equivalents, except Parks, Regional Shopping Centers, Heavy Manufacturing, and the Airport, whose distributions were relatively close to normal. Table 4 summarizes the regression results. B represents the variable coefficients. Beta values are the standardized coefficients telling the positive or negative effective strength of individual variables on the dependent variable. The standard outputs from SPSS (IBM, Armonk, NY, USA) also include standard deviations, t-statistics, and lower and upper bounds of B, which are computed at p-values < 0.05. Fair to excellent R² values (from 0.454 to 0.914) for the five models indicate that locational accessibility based on home-facility distances is a good predictor of property values. Also, each model yields some coefficients whose signs do not directly make logical sense or match expectations, due to partial collinearity between some variables, non-linearity of some variables, and/or their right-skewed distributions. Finally, for each model, we added year built and lot size in acres as non-distance independent variables. In any case, comparison studies with the literature and improved accessibility or regression models are needed to further justify accessibility's impact on property values.
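A minimal statsmodels sketch of this backward-elimination hedonic regression is given below; the DataFrame, column names, and random data are placeholders for the Wake County sample, and SPSS is replaced by Python only for illustration.

```python
# Hedonic regression with backward elimination: drop the least significant
# predictor until all remaining predictors have p < 0.05. Data are synthetic.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(1)
df = pd.DataFrame({
    "value": rng.lognormal(12.5, 0.5, 5000),          # total home value, $
    "dist_bank": rng.uniform(500, 30_000, 5000),      # home-facility distances, ft
    "dist_school": rng.uniform(100, 20_000, 5000),
    "lot_acres": rng.uniform(0.1, 2.0, 5000),
})
# Log-transform the right-skewed distance predictors, as in the paper
X = pd.DataFrame({
    "log_dist_bank": np.log(df["dist_bank"]),
    "log_dist_school": np.log(df["dist_school"]),
    "lot_acres": df["lot_acres"],
})

while True:
    model = sm.OLS(df["value"], sm.add_constant(X)).fit()
    pvals = model.pvalues.drop("const")
    if pvals.max() < 0.05 or X.shape[1] == 1:
        break
    X = X.drop(columns=pvals.idxmax())   # eliminate the weakest predictor
print(model.summary())
```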
Specifically, with R² = 0.835, most facilities in the Environmental, Health, and Rescue category were significant, with some expected and some surprising coefficient signs. For example, the larger the residential lot size, the higher the property value. Also, Fire Station was assumed to be a desirable facility type; hence, closer accessibility would positively affect property value. However, this was not the case for the assumed desirable Hospitals, Medical Clinics, and Nursing Homes, meaning people actually prefer a reasonable distance from them, perhaps due to the associated to-and-from traffic and other perceived negative characteristics internal to these facilities and/or their services.
With R² = 0.545, seven out of nine variables in the Cultural, Recreational, and Educational category were significant, and most of their coefficient signs were as expected. For example, in addition to parcel lot size in acres, all the schools and the Library were statistically supported as desirable facilities, living near which could boost property value. However, Theater and Church are surprises, as they have positive coefficients, meaning that being farther away was preferred, since property values were higher. What is more surprising is the absence of Parks and Gym-Sports from the final regression variable selection, perhaps indicating low demand for these recreational outdoor and indoor facilities in Wake County.
Five out of six predictors from the category of Auto, Food, Shopping, and Other Business Services were identified as significant, with R² = 0.634. Normally, Regional Shopping Centers and Supermarkets are larger in size and offer more merchandise selection. These features perhaps made them facilities that people prefer to live in close proximity to. However, contrary to expectation, the presumed desirable Restaurants and Community and Neighborhood Shopping Centers had positive coefficients, indicating that property owners preferred living farther away from these facilities. If so, perhaps the shopping traffic and zoning policies separating local residential and commercial land uses might have played a role.
Three out of five undesirable facility types were statistically significant at the adjusted R² = 0.456, the lowest of all. Lot size in acres and year built were expected to have positive signs, meaning the newer and larger a property lot is, the higher the property price becomes. However, the Airport and Heavy Manufacturing facilities were contrary to expectation, with negative signs, indicating these facilities might be reconsidered as desirable, rather than undesirable, facilities.
Finally, when all predictors were considered together, the best R² = 0.926 was obtained. The significant variables in the regressions by category were mostly selected here as well, with the expected positive or negative coefficient signs, such as for lot size, Hospital, Clinic, Restaurant, Library, Shopping Centers, etc. However, some new variables were selected, such as Daycare, Gym-Sports, and Truck Terminal. Only Gym-Sports had the expected sign, while Daycare and Truck Terminal did not, indicating that the assumed desirability of facilities is worth a second consideration, especially together with the frequencies of facility visits linked to people's lifestyles or lifecycles. Also, some significant variables selected in different regression models changed coefficient signs, such as Church and Airport, indicating that the role of accessibility as a predictor of property valuation is really relative, depending on the factors being considered.
Conclusions and Remarks
Using Wake County as the testbed and the relevant parcel and building GIS databases, this research developed an assignment model to derive the shortest direct distances from single family homes to desirable and undesirable facilities to measure home-facility accessibility. Corresponding descriptive statistics, such as min/max, mean/median, and standard deviation/variance, were calculated as aggregated indicators for home-based locational accessibility, which is also viewed as a way to understand the spatial structure of the urbanized areas in Wake County. Snapshots of the home-facility accessibility by home value and year built were also examined. Selected GIS visualizations of the home-facility accessibility, at the county level, in close-up view, or by distance frequency, were presented.
The results show that home-based accessibility does yield some insights on accessibility for services/amenities and the urban spatial structure. Specifically, first, within the fixed space of Wake County, the accessibility in general depends on the number of facilities, the supply side of urban services/amenities, and in particular varies by the actual locations of the facilities and single family homes, the demand side of urban services/amenities. The larger the number of facilities of a given type, the shorter the distance-based accessibility. Second, the facilities existing in relatively large numbers, such as Churches, Schools, Banks, Shopping Centers, or Restaurants, provide services/amenities that are needed more frequently, for example, daily or weekly. The opposite is true as well: to reach the facilities existing in relatively small numbers, such as Theaters, Hospitals, or the Airport, people have to travel longer distances; hence, accessibility is lower. Third, undesirable facilities, such as the Airport, Landfills, Truck Terminals, and Heavy Manufacturing, are typically few in number, large in size, and negative in some aspect (i.e., noise, dust, waste) for human health and safety. They are located farther away from homes and, hence, are lower in accessibility than desirable facilities. Finally, the aggregated mean or median accessibility is only a general indicator of home-facility accessibility, as the distributions of the distances are almost all right-skewed, some extremely so. The distance skewness indicates that more homes enjoy better accessibility than the mean or median accessibility suggests. These accessibility patterns and distributions correspond well with the classic location and land use theories [6,16,24,46,57] and reported practice for recreational and leisure facilities [8,37,[39][40][41]56], physical activity [38], and health care facilities [41][42][43][44][45], among others, and for mobility [46][47][48]55] and equity [7,[13][14][15]21,40] policies in planning.
The mild to strong association results from the five multivariate linear regression models tell us that home-based accessibility is a good predictor of home value across space. This is consistent with some existing studies in the literature [26,34,35]. However, this consistency varies by facility type, number, and frequency of demand for service. The variations also exist across combinations of variables considered, for example, by category or across categories. The degree of consistency or variation implies the complexity and scale of the property value-accessibility association or relationship, especially when the accessibility is measured at the county level. More specifically, the existing studies showing stronger accessibility-property value relations focus more on a specific location, a neighborhood, or a limited spatial area of a city, while the accessibility is treated more for a specific type of facility, not across a range of facilities as considered in this research. Nevertheless, the model developed in this research, including the sample data, shortest paths, descriptive statistics, and the regression model, can certainly be further refined. For example, the single shortest direct distance from home to facility can be expanded to include multiple facilities instead of just one to better reflect household behavior, which tends to use more than one facility or service/amenity. In this case, constraint (4) in the model (1)-(4) can be relaxed to $\sum_{i \in I} x_{ij}^{lk} = p$, where $p$ can be any specified number to reflect policy orientation. Similarly, we can take the facility capacity into consideration in future research by specifying a corresponding service threshold in constraint (3), $N \le \sum_{j \in J} x_{ij}^{lk} \le M$, where $N$ and $M$ can be specified to match a facility's size or handling capacity. Of course, other than using direct distance, network distance from home to facility can also be considered in future improvements of the model by using real street networks. The model input data can also be further refined and selected, including facility groupings and functions. Moreover, facility classifications other than the binary desirable or undesirable, and visit frequencies that depend on people's lifecycles and lifestyles, should be considered for realistic accessibility modeling. Finally, more interesting reverse analyses of a facility's desirability using accessibility could also be pursued.
Facilities were classified according to their main service functions. Single family homes were defined by county land parcel and structure records. The major limitation of the binary desirable or undesirable classification is that it poorly handles mixed uses or functions of facilities and homes and ignores other housing types [33]. The research is also limited to Wake County in North Carolina. The utility of the optimization model is yet to be seen when applied to other cities or counties, regardless of urban size, so refinements of the model, especially through comparisons with other existing models and cases, can be achieved.
The findings from this research confirm classic theories, policies, and practices on urban spatial structure and development from the home-facility location and accessibility perspective. Theoretical confirmations include the land rent theory balancing land and transportation expenditures for households and businesses [19,46,57], decaying urban or population density from centers to peripheries [2,16], and the multiple nuclei urban model by Harris and Ullman [58] and the single urban center model [26,58]. Similar validations of policies can be found in Wilson et al. on zoning [7,33], land use [16], and mobility [25]. The findings also well reflect long-held urban planning practices stressing equity [40,56], environmental justice [7,39,49], and jobs-housing balance [3]. Perhaps the most important take-away from this research is that good theories, policies, and practices on zoning, land use, facility location, and site selection for more balanced location and accessibility patterns are necessary for any city in general, and for Wake County and its associated cities, for example, the capital City of Raleigh, in particular.
Author Contributions: G.S. provided the initial concept, research design, data collection, analysis approach and wrote the manuscript. Z.W., L.Z. and Y.L. helped with the analysis approach. L.Z. and X.Y. helped revise the manuscript. All authors have read and agreed to the published version of the manuscript.
Evaluation of symptomatic small bowel stricture in Crohn's disease by double-balloon endoscopy
Purpose To assess the efficacy of double-balloon endoscopy (DBE) for the detection of small-bowel strictures in Crohn’s disease (CD). Methods This tertiary-referral hospital cohort study was conducted between January 2018 and May 2022. CD patients with symptoms of small-bowel stricture were enrolled sequentially. All of the patients were subjected to both computed tomography enterography (CTE) and DBE, and their symptoms of stricture were assessed using the Crohn’s Disease Obstructive Score (CDOS). The diagnostic yield of DBE was compared to that of CTE, and the relationship between the DBE findings and CDOS was investigated. The factors influencing the DBE diagnosis were examined using Cox regression analysis. Results This study included 165 CD patients. The CDOS scores were higher in 95 patients and lower in 70 patients. DBE detected 92.7% (153/165) and CTE detected 85.5% (141/165) of the strictures. The DBE diagnostic yields were 94.7% (90/95) in the high CDOS patients and 91.4% (64/70) in the low CDOS patients (P = 0.13). Patients with a history of abdominal surgery and abscess had a lower diagnosis rate in the multivariate analysis. Conclusion DBE has been demonstrated to be an efficient diagnostic method for detecting small bowel strictures in CD patients. Additionally, there was no difference in the diagnostic yields between patients with low and high obstructive scores.
Introduction
Crohn's disease (CD) is a chronic inflammatory gastrointestinal disease that can lead to a variety of complications. Intestinal strictures are common CD complications, occurring in 15-30% of patients within the first 10 years after diagnosis [1,2]. Strictures are frequently associated with obstructive symptoms, necessitating endoscopic and surgical intervention [3,4]. Therefore, it is critical to accurately diagnose and evaluate CD stenosis.
Because of the unique anatomy and technical limitations, diagnosing and evaluating isolated small-bowel CD is difficult for gastroenterologists. Transabdominal ultrasonography (TUS), computed tomography enterography (CTE), magnetic resonance enterography (MRE), small-bowel capsule endoscopy (CE), and double-balloon enteroscopy (DBE) are some of the endoscopic and radiologic techniques for evaluating small intestinal stenosis that have been developed in recent years.
TUS is a useful tool for the diagnosis and monitoring of small-bowel strictures [11]. However, accurate and dependable results with TUS depend on having seasoned operators. Numerous studies have reported on the detection efficiency of MRE and its ability to predict surgical outcomes [12,13]. This method, however, is time-consuming and costly. Furthermore, the interobserver consistency of MRE has been variable [14]. CTE is extremely effective at detecting small-bowel CD, with a sensitivity of 83% and a specificity of 88% [15]. Additionally, the rapid acquisition and image reconstruction of CTE allow for visualization of the entire small bowel and extraintestinal lesions [16]. Although CE is a very useful noninvasive tool for evaluating intestinal mucosal lesions in CD patients with small-bowel involvement, capsule retention has been reported in up to 5-13% of patients with known Crohn's disease [17][18][19]. The risk of capsule retention is much higher in patients with small-bowel obstruction [20]. Furthermore, tissue diagnosis and endoscopic treatment cannot be performed when necessary [21].
DBE has been developed in recent decades for treating small-bowel diseases [22,23]. The benefits of this deep enteroscopy technique include more direct visualization of the small intestine, the ability to obtain tissue biopsies for histopathology, and the ability to treat strictures [24][25][26]. Hence, DBE has become a widely accepted modality for assessing small-bowel CD [27]. Previous studies assessed the efficacy of DBE for the diagnosis and treatment of CD, and the majority of these studies involved patients with isolated CD of the small bowel [28][29][30][31].
The relationship between CD small-bowel strictures detected by DBE and the severity of stenosis symptoms, however, remains unknown. In addition, the factors influencing the diagnosis of DBE in patients with small bowel CD are still unknown. Hence, we conducted a prospective cohort study to evaluate the diagnostic yield of DBE in small-bowel CD patients with a symptomatic stricture.
Patients and data collection
From January 2018 to May 2022, 165 CD patients with symptomatic small bowel strictures were enrolled at the First Affiliated Hospital of Anhui Medical University, a tertiary care center for inflammatory bowel disease (IBD). All of the included patients met the following inclusion criteria: (1) a defined diagnosis of CD; (2) small bowel stricture symptoms; and (3) isolated small bowel strictures. Patients with an intra-abdominal abscess, suspected perforation, acute strangulated intestinal obstruction, contrast media allergies, or contraindications to DBE or CTE were excluded. All of these patients underwent CTE and DBE and were prospectively followed up (Fig. 1). The time between CTE and DBE was kept to less than one month. All patients provided informed consent for the DBE procedures.
The prospectively obtained demographic and clinical data included sex, age, time from CD diagnosis to DBE, location of CD, previous surgery, laboratory values (complete blood cell count, albumin, C-reactive protein), and CD activity index [32].
The severity of stenosis symptoms
The Crohn's Disease Obstructive Score (CDOS) was used to assess and quantify obstructive symptoms. The score was developed based on four core items (obstructive pain features; nausea and vomiting; dietary restriction; and hospitalization) and was tested in a recent clinical study [33]. In the CDOS, the severity of stenosis symptoms is graded from 1 to 6. The patients were divided into low- (1-3) and high-score (4-6) groups to compare the relationship between the DBE findings and obstructive severity.
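As a trivial sketch of the grouping rule just described (the 1-3 versus 4-6 cut is from the text; the function itself is illustrative):

```python
# Minimal sketch of the CDOS dichotomization described above.
def cdos_group(score: int) -> str:
    """Map a CDOS score (1-6) to the low/high group used in the comparisons."""
    if not 1 <= score <= 6:
        raise ValueError("CDOS is graded from 1 to 6")
    return "low" if score <= 3 else "high"

print(cdos_group(2), cdos_group(5))  # low high
```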
DBE procedure and evaluation
All patients who underwent DBE procedures were sedated with a combination of intravenous and inhalation anesthesia. DBE was performed with an EN-580T enteroscope (FUJIFILM, Tokyo, Japan) and an overtube, by three IBD endoscopists each with experience of at least 200 cases. The insertion route was chosen according to the estimated location of the suspected lesion, mainly based on the results of CE or radiological findings (i.e., enteroclysis, CTE, or MRE). If the location of the small-bowel lesion was unknown or uncertain, the clinical presentation of the small bowel stricture was the basis for choosing the antegrade or retrograde approach for DBE. Oral administration of 2000 ml polyethylene glycol-electrolyte lavage solution (Beaufour Ipsen Industrie, Dreux, France) 4 h before the retrograde DBE examination was used for bowel preparation. Antegrade DBE was performed after an 8-hour fast before the procedure. The depth of DBE insertion was calculated using a method described in the previous literature [34]. A CD-associated DBE stricture was defined as failure to pass the endoscope or an internal diameter of the small-bowel lumen of less than 10 mm [35]. During the DBE examination, the small-bowel stricture site was described as the jejunum, the terminal ileum, or the proximal ileum [36]. The jejunum was defined as the section of the small bowel from the proximal part of the small bowel to the proximal part of the ileum. The terminal ileum was defined as the 10 cm section from the ileocecal valve. The proximal ileum was defined as the section of bowel between the jejunum and the terminal ileum.
CT enterography procedure and evaluation
Four hours before the CTE examinations, all patients underwent intestinal preparation with 2000 ml polyethylene glycol-electrolyte lavage solution (Beaufour Ipsen Industrie, Dreux, France). Before scanning, 1500 ml of mannitol solution was taken orally, eventually reaching the small bowel for evaluation. A 128-slice MDCT scanner was used to perform CTE (GE Medical System, Chicago, IL, USA). The scan parameters were as follows: 5 mm layer thickness and spacing; 1.375:1 pitch; 120 kV; and 300 mAs. After injecting 100 ml of contrast agent (320 mgI/mL) into the elbow vein, the entire abdomen was scanned with delays of 45 and 90 s. An experienced gastrointestinal radiologist who was blinded to the clinical and endoscopic information analyzed the CTE images.
On CTE imaging, intestinal strictures were defined as follows: a localized thickened bowel wall and constriction of the intestinal lumen, enhanced bowel wall thickness ≥ 25%, reduction in luminal diameter ≥ 50%, and dilation of the small intestine proximal to the stricture ≥ 3 cm [37].
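This definition is effectively a conjunction of thresholds; the sketch below encodes it as a checker function, with hypothetical field names and the numeric cutoffs taken from the text.

```python
# Sketch of the CTE stricture definition above as a checker. Field names are
# hypothetical; the 25% / 50% / 3 cm thresholds are those stated in the text.
from dataclasses import dataclass

@dataclass
class CTEFindings:
    wall_thickening_pct: float    # enhanced bowel-wall thickening, %
    luminal_reduction_pct: float  # reduction in luminal diameter, %
    proximal_dilation_cm: float   # dilation of bowel proximal to lesion, cm

def is_cte_stricture(f: CTEFindings) -> bool:
    return (f.wall_thickening_pct >= 25
            and f.luminal_reduction_pct >= 50
            and f.proximal_dilation_cm >= 3)

print(is_cte_stricture(CTEFindings(30, 60, 3.5)))  # True
```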
Statistics
Means ± SDs or medians and ranges are used to describe quantitative variables. The DBE and CTE diagnostic yields were expressed as percentages and compared using the chi-square test. The factors influencing the DBE diagnosis were examined using Cox regression analysis. Variables with P < 0.05 in the univariate analysis were tested further in a multivariate analysis. SPSS 21.0 was used for all statistical analyses (IBM Corporation, Armonk, NY, USA). P < 0.05 was regarded as statistically significant.
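As an illustrative re-computation, the snippet below applies the same kind of chi-square comparison to the diagnostic-yield counts reported in the Results (DBE: 90/95 high-CDOS vs. 64/70 low-CDOS); with so few failures per group, Fisher's exact test is shown alongside as a check, and neither is claimed to reproduce the SPSS output exactly.

```python
# Re-computing the DBE yield comparison from the reported counts.
from scipy.stats import chi2_contingency, fisher_exact

table = [[90, 95 - 90],   # high-CDOS group: detected, not detected
         [64, 70 - 64]]   # low-CDOS group
chi2, p, dof, _ = chi2_contingency(table)
print(f"chi-square P = {p:.3f}")
print(f"Fisher exact P = {fisher_exact(table)[1]:.3f}")
```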
Patient characteristics and severity of stricture symptoms at baseline
A total of 174 patients with symptomatic small-intestinal strictures were enrolled in the study from January 2018 to May 2022. Nine patients were excluded due to intra-abdominal abscess (n = 5), acute severe intestinal obstruction (n = 2), or contraindications to DBE or CTE (n = 2). Hence, this study included 165 CD patients with small-bowel strictures (Fig. 1). Table 1 shows the baseline characteristics of the included patients.
Results of DBE and CTE
In our study, 165 CD patients with symptomatic small-intestinal strictures underwent a total of 179 DBE procedures: the antegrade route alone was used in 14 patients, the retrograde route alone in 137 patients, and both routes in 14 patients. No patient experienced an adverse event during the DBE procedures (such as an anesthesia accident, gastrointestinal perforation or hemorrhage, or pancreatitis).
The overall diagnostic yield of DBE in CD patients with small-bowel strictures was 92.7% (153/165 patients), versus 85.5% (141/165 patients) for CTE. DBE and CTE both detected strictures in 137 patients. Sixteen patients had DBE-positive strictures but negative CTE results; of these, 15 had disease restricted to the ileum and 1 had disease in the jejunum. In 4 cases the stricture was not accessible at DBE because of adhesions from a previous intra-abdominal abscess/intestinal fistula or CD-associated abdominal surgery, but all 4 were detected by CTE.
We then associated the DBE or CTE findings with the severity of stricture symptoms. Based on the CDOS, the patients were divided into the low-score and high-score groups. The DBE diagnostic yields were 91.4% and 94.7% in the low-score and high-score groups, respectively (P = 0.13). Intriguingly, patients in the high-score group had a significantly higher CTE diagnostic yield than those in the low-score group (90.1% vs. 75.9%, P = 0.01).
Over the course of DBEs, 10 strictures in 6 patients were dilated. Obstructive symptoms were relieved after balloon dilatation in all patients. Within the study period, 5 of 6 patients remained surgery-free. In terms of surgery, stricturoplasty and bowel resection were performed in 3 and 5 patients, respectively.
Factors associated with successful detection of DBE
Univariate analysis examined the factors associated with successful detection by DBE (gender, age at diagnosis, disease duration, history of CD-associated abdominal surgery, previous intra-abdominal abscess/intestinal fistula, CRP level, CDAI, disease location, perianal disease, and severity of stricture symptoms). Variables with P < 0.05 in the univariate analysis were entered into a multivariate analysis, in which previous intra-abdominal abscess/intestinal fistula (hazard ratio = 2.021, 95% confidence interval (CI): 1.075-3.826, P = 0.021) and history of CD-associated abdominal surgery (hazard ratio = 2.852, 95% CI: 1.146-3.467, P = 0.017) were independent predictors of detection failure (Table 2).
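The Methods state that Cox regression was used, which is consistent with the hazard ratios reported here. Purely as an illustration, a minimal sketch with the lifelines package on synthetic data; the dataframe, column names, and follow-up variable are all hypothetical:

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(0)
n = 165
df = pd.DataFrame({
    "followup_months": rng.uniform(1.0, 48.0, n),      # hypothetical time axis
    "detection_failure": rng.integers(0, 2, n),        # 1 = stricture not reached
    "prior_abscess_or_fistula": rng.integers(0, 2, n),
    "prior_cd_surgery": rng.integers(0, 2, n),
})

cph = CoxPHFitter()
cph.fit(df, duration_col="followup_months", event_col="detection_failure")
cph.print_summary()  # hazard ratios with 95% CIs, analogous to Table 2
```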
Discussion
In this study, we evaluated the efficacy and safety of DBE for detecting small-bowel strictures in CD patients. Our main findings were as follows: (1) DBE was an effective method for diagnosing strictures in CD patients with obstructive symptoms; (2) the severity of stricture symptoms did not affect the diagnostic yield of DBE; and (3) a history of abdominal surgery or abscess was linked to failure of detection by DBE.

Up to 67% of CD cases involve the small bowel [39], with 10-30% of cases involving solitary small-bowel lesions [40]. Small-bowel strictures in CD patients can be difficult to diagnose, particularly in patients with extensive small-bowel involvement. The DBE technique has made examination of the entire small bowel feasible [41], including the deep small bowel [22]. According to our findings, DBE is an efficacious tool for evaluating CD-associated small-bowel strictures: the diagnostic yield of DBE procedures performed by experienced IBD endoscopists was 92.7%. Several studies have reported diagnostic yields of DBE in CD patients ranging from 22 to 70% [26,42-44]. However, in a study comparing the diagnostic yields of DBE and fluoroscopic enteroclysis, Ohmiya et al. found that DBE had a diagnostic yield of up to 95% for small-bowel obstruction [45]. Another study compared MR imaging and balloon enteroscopy for small-bowel strictures in CD [13]: strictures detected by balloon enteroscopy in 57 patients were also MR-positive, whereas in 37 patients strictures seen at endoscopy could not be detected by MR imaging. Possible explanations for the high diagnostic yield in our study are the following: (1) all DBE procedures were performed by IBD endoscopists with experience of at least 200 cases; (2) the antegrade or retrograde route was chosen on the basis of the previous medical history; and (3) the patients in this study had symptomatic small-bowel strictures rather than early-stage disease. DBE complication rates have been reported in studies assessing the safety of this technique for examining the small bowel in CD patients, with rates of complications (e.g., bleeding, perforation, and pancreatitis) between 1.2% and 1.6% [46,47]. Nonetheless, no DBE-related adverse events were observed during our diagnostic procedures, supporting the safety of DBE even in CD patients with small-bowel strictures.
Previous research has shown that CTE and MRE are both valuable techniques for investigating small-bowel lesions in CD patients [48,49]. MRE has the benefit of requiring no radiation exposure and offers high temporal and spatial resolution; however, CTE outperforms MRE in terms of scan time, freedom from artifacts, and availability in most hospitals [50]. Consequently, CTE has been recommended as a useful tool for assessing disease activity and complications in CD involving the small bowel [51,52]. Because abnormal small-bowel findings on CTE usually prompt DBE, comparing DBE and CTE findings is important. In our study, we compared the diagnostic yields of DBE and CTE in patients with small-bowel obstructive symptoms, and DBE correctly detected more strictures than CTE. An early study of the role of CT in diagnosing small-intestinal obstruction found that CT correctly identified 63% (29 of 46) of patients with small-bowel obstruction [53]. CTE outperforms conventional CT in detecting small-bowel strictures: depending on the criteria and gold standards used, the sensitivity of CTE for detecting small-bowel stenosis ranges from 85 to 93% [28,54-56]. In this study, the overall diagnostic yield of CTE for small-bowel obstruction was 85.5%, which is consistent with previous findings. Although the diagnostic ability of CTE was broadly comparable to that of DBE, the diagnostic efficacy of CTE varied with the severity of symptoms. Maglinte et al. classified patients with small-intestinal obstruction into low-grade and high-grade partial obstructions; CT detected 81% of the high-grade obstructions but only 48% of the low-grade obstructions [53]. The advantage of DBE in low-grade obstruction could be attributed to its ability to directly visualize mucosal lesions.
Our study examined not only the diagnostic ability of DBE in CD patients with small-bowel strictures but also the factors associated with DBE efficacy in these patients. Previous intra-abdominal abscess/intestinal fistula and a history of CD-associated abdominal surgery were independent predictors of DBE detection failure. Adhesions from previous surgery and a complicated phenotype of CD (such as intra-abdominal abscess or fistula) may make DBE insertion difficult. In a multicenter retrospective study investigating DBE results and their influence on CD management, the target area could not be reached in 17% of patients because adhesions from previous surgeries limited deep insertion [27]. Matsushita et al. investigated the efficacy and safety of DBE in pediatric patients after surgery; in four postoperative and two nonoperative patients, transanal pleating of the bowel was difficult because of adhesions or inflammatory thickening of the intestinal wall (P = 0.02) [57]. These findings are consistent with our conclusion that dense adhesions limit DBE in CD patients. This study has several potential limitations. First, it was a single-center study, and all patients were enrolled at a tertiary care facility. Second, all DBE procedures were performed by three experienced IBD endoscopists, which may have led to a higher diagnosis rate and better outcomes than would be achieved elsewhere. Finally, further analysis of clinical outcomes in CD patients with small-bowel strictures is needed.
Our study concludes that DBE is an effective and safe method for assessing CD patients with small-bowel strictures. Furthermore, the benefit of DBE was demonstrated in low-grade obstructions. Previous intra-abdominal abscess/intestinal fistula and a history of CD-associated abdominal surgery were independent predictors of DBE detection failure.
"year": 2023,
"sha1": "54e72d43b68f7fbb39a261c1ff2a1b04380053b1",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Springer",
"pdf_hash": "39c4d7457e42fcb9b68db3b506ddfabe7270167b",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Dyslexic Readers Improve without Training When Using a Computer-Guided Reading Strategy
Background: Flawless reading presupposes the ability to simultaneously recognize a sequence of letters, to fixate words at a given location for a given time, to execute eye movements of a given amplitude, and to retrieve phonemes rapidly from memory. Poor reading performance may be due to an impairment of at least one of these abilities. Objectives: It was investigated whether the reading performance of dyslexic children can be improved by changing the reading strategy without any previous training. Methods: 60 dyslexic German children read a text without and with the help of a computer. A tailored computer program subdivided the text into segments that consisted of no more letters than the children could simultaneously recognize, indicated the location in the segments to which the gaze should be directed, indicated how long the gaze should be directed to each segment, indicated which reading saccades the children should execute, and indicated when the children should pronounce the segments. The computer-aided reading was not preceded by any training. Results: The rate of reading mistakes dropped immediately by 69.97% when a computer guided the reading process. Computer-aided reading reached the highest effect size of Cohen d = 2.649. Conclusions: The results show which abilities are indispensable for reading, that the impairment of at least one of these abilities leads to reading deficiencies that are diagnosed as dyslexia, and that a computer-guided, altered reading strategy immediately reduces the rate of reading mistakes. There was no evidence that dyslexia is due to a lack of eye movement control or reduced visual attention.
Introduction
Dyslexia is regarded as a specific learning disorder. According to the criteria of the Diagnostic and Statistical Manual of Mental Disorders (DSM-5), dyslexia is indicated by (1) inaccurate and effortful word reading, (2) difficulty understanding the meaning of what is read, and (3) difficulty with spelling that has persisted for at least six months and remains below the skills expected for the chronological age. The DSM-5 also requires that the difficulties cannot be explained by intellectual disabilities, poor visual or auditory acuity, psychiatric or neurological disorders, psychological adversity, or inadequate educational instruction [1]. According to these criteria, approximately 5-15% of school children in the USA are dyslexic [2-4].
To date, the nature of dyslexia is unclear, and reading therapies are still unspecific, long-lasting, and of limited success. Therapies to improve the reading performance of dyslexics include approaches based on the discrimination of auditory stimuli [5-10], phonological awareness, which comprises different approaches intended to promote reading skills [11], decomposing words into syllables and sounds [12-18], identifying phonemes in words [19-21], naming letters, objects, numbers, and colors [15,22], and rhyming [23]. Other therapies are based on visual movement discrimination training [24-28], training to improve eye movement control [29-32], and syllable segmentation [33-36]. Therapy studies focused on breaking up words into syllables improved reading performance in children. However, since syllables often contain more letters than poorly reading children can recognize simultaneously, longer syllables led to an increased error rate in some children. The therapeutic effect in these therapy studies was far smaller than in studies with words broken up into units containing no more letters than the subjects could recognize simultaneously [37-39]. Some therapies, like Fast ForWord training [7-9], did not yield reproducible results. Cohen et al. [40] and Gillam et al. [41] found that Fast ForWord training was not better than other traditional therapies. A detailed meta-analysis of studies on the effectiveness of this training program showed no effect at all [42-44]. The effect of PATH therapy [27,28] was based on few patients and could not be replicated [45]. In addition, the therapy rests on assumptions about the role of magnocells in dyslexia that are still a matter of debate [46,47].
Training procedures that improved reading performance required many months of practice. During this time, many influences on reading performance could not be controlled, and the effect size did not exceed Hedges g = 0.9 [48]. It has already been shown that a reading therapy with a high training effect of Hedges g between 1.4 and 2 [37-39] can be completed within a single session in which all possible influences on reading performance are controlled. During this reading therapy, not only was a new reading strategy practiced; subjects also simultaneously practiced expanding their field of attention and focusing their attention on a specific area. It cannot be excluded that the control of eye movements was also improved by the computerized steering of eye movements during therapy. This coincides with the assumption that practicing eye-movement control alone may improve reading performance [29-32].

The present study investigated whether reading capacity can be improved in a single session if (1) only the reading strategy is changed, (2) no training is conducted that may improve the ability to expand or focus the field of attention, and (3) no eye movement training is carried out. To explore in what way the reading strategy needs to be changed, it must be investigated to what extent a dyslexic child's reading strategy leads to reading errors. To that end, we must investigate whether reading performance improves when readers do not attempt to recognize more letters of a word or word segment at a time than they can recognize simultaneously. Therefore, it is necessary to investigate how many letters dyslexic children can recognize in a pseudoword, and whether their ability to recognize pseudowords of a given length improves if fixation times are prolonged. To investigate whether premature saccades and too-large saccade amplitudes affect reading performance, reading performance with and without computer guidance of eye movements must be examined. To rule out the possibility that an improvement in reading performance is due to attention or eye movement training, neither a training to improve visual attention nor an eye movement training was performed.
When reading a text the reader must fixate the right location within a word; s/he must fixate a word or word segment for a given time interval, and the reader must be able to visually process several letters simultaneously. After the end of the fixation time that is needed to recognize the word or word segment, saccades of a given amplitude must be programmed and executed. As visual acuity is only sufficiently high in the fovea and the parafoveal area and drops dramatically up to 10 degrees eccentricity, a word to be read must be projected into the area of highest visual acuity in the middle of the retina. Therefore, a word must be fixated so that its image appears in the fovea and parafoveal region. This must be achieved by eye movements that shift the image of the word to be read onto the region of highest visual acuity. This implies that eye movements of the right amplitude must be executed. As these abilities are necessary conditions for correct reading, the absence or an impairment in one or several of these abilities causes reading problems that are diagnosed as dyslexia.
Computer programs which steer a reader's eye movements have been developed to explore the influence of appropriate eye movements on reading performance. It has been shown that children improve their reading performance after a 30-min training session when their reading eye movements are adjusted by a computer program [37-39]. This was achieved with the help of a reading therapy in which the subjects learned to subdivide the text into segments that consisted of no more letters than they could recognize simultaneously, to fixate these segments at the right location, to execute correct reading saccades, to prolong the time during which the gaze was directed to the segments, and to prolong the time needed to retrieve the sounds that belong to letters or a string of letters from memory. In this way, impaired abilities that caused halting and faulty reading were compensated for, and reading performance improved, whereas a control group which received no reading therapy showed no improvement [37-39].
Therapies that attempt to improve diminished brain functions (which were assumed to cause dyslexia) require long-term practice, have limited effects, and often encounter insurmountable biological limits. The present study investigated whether a therapy is effective that does not attempt to improve diminished brain functions but compensates for them with a new reading strategy tailored to the subjects' individual reading abilities.
Patients
Reading performance was investigated in a group of 60 German children (40 boys and 20 girls) aged 8 to 15 years (mean age: 122.4 months, SD = 19.3 months) who were diagnosed as dyslexic by the Zuercher Reading Test [68]. All children were native German speakers and attended Bavarian primary schools. Forty children were below the 6th percentile (1.5 SD), and 20 children were below the 2.5 percentile (i.e., 2 SD). All children had a pediatric, an ophthalmological, and a psychological examination. They were right-handed, had no neurological, psychiatric, visual, or auditory deficits and no speech disorders. The children were second-to-tenth graders who knew all individual letters and were expected to read fluently but were far behind the required reading ability. Their reading disabilities were not based on lack of teaching or inadequate educational instructions, as the children had had the same educational instructions as other children in the same grade. The children had been referred to the pediatric clinic (Kinderzentrum München) of the Institute for Social Pediatrics and Adolescent Medicine of the University of Munich because of their reading problems.
The children's IQ was in the normal range on the Hamburg-Wechsler Intelligence Test for children [69]. All children participated in experiment 1.
Methods
Experiment 1 tested under which conditions poor readers were able to correctly read at least 95% of a list of 20 pseudowords, using the Celeco Software-Package for the Diagnosis and Therapy of Dyslexia [70]. For this purpose, the length of the pseudowords, the presentation time, and the time to pronounce the pseudowords were altered until all subjects were able to read at least 95% of the pseudowords correctly [39].
As each letter of the pseudowords corresponded to a different phoneme in the German language, it was possible to identify the letters in the pseudowords that had been read incorrectly. A word was considered read incorrectly if at least one letter had been omitted or replaced by a letter not present in the pseudoword, if the location of at least one letter had been changed, or if at least one letter had been incorrectly added.
Lists of 20 pronounceable 2-, 3-, 4-, 5-, or 6-letter pseudowords were presented at eye level on a monitor. The letter sequences in the pseudowords also occur in colloquial German words. Each of these pseudowords contained the same number of consonants and vowels at the same locations within the word.
The distance between the eyes and the monitor was 40 cm. The words were black (luminance: 4 cd/m²; letter height: 14 mm; spacing between letters: 4 mm) on a background of 68 cd/m². The presentation times of the pseudowords varied between 250 and 500 milliseconds. The luminance and presentation time of the stimuli were assessed using a Gigahertz Optimeter P 9201 with a temporal resolution of 20 microseconds. Each trial began with the presentation of a green fixation mark (luminance: 30 cd/m²; background luminance: 68 cd/m²) in the center of the monitor. The child was asked to direct his/her gaze to the fixation mark. When the child maintained fixation, the fixation mark disappeared and was replaced by a pseudoword that was centered at the fixation point. Fixation of the word segments and saccadic eye movements were recorded using an infrared eye-tracking system (IRIS eye tracker; sampling rate: 500 Hz). In the first trial, a sequence of 20 pseudowords consisting of 4 letters was presented. Each pseudoword appeared for 250 ms. The child was instructed to read each pseudoword aloud. If the child was unable to pronounce the word correctly, s/he was asked to spell and write the word. The time between the onset of the presentation of the pseudoword and the onset of the child's speech was measured by the computer.
If more than one out of 20 words was not read correctly, it was investigated whether a prolongation of the fixation time alone was sufficient to improve the child's ability to recognize the pseudowords. If 90% (i.e., 18 out of 20 pseudowords) or fewer of a sequence were read correctly, a different sequence of 20 pseudowords of the same length was presented, with the presentation time of each pseudoword increased by 50 ms. If 90% or fewer of this sequence were still read correctly, a new sequence of 4-letter pseudowords was presented, each for 350 ms, and so on in steps of 50 ms. If 90% or fewer of a sequence of pseudowords were read correctly at a presentation time of 500 ms, a different list of pseudowords was presented and the number of letters was reduced by one. Thus, the fixation times and/or the number of letters to be read were increased or decreased until at least 95% of a list of pseudowords was read correctly.
If more than one of these 20 pseudowords with a length of n letters was not read correctly at a presentation time of 500 ms, the experiment was repeated with a different list of 20 pseudowords with a length of n − 1 letters. Again, the presentation times were increased in steps of 50 ms until the child was able to read at least 19 of the 20 pseudowords correctly. If the subject was able to correctly read at least 19 of the 20 pseudowords with a length of n letters presented for 250 milliseconds, the experiment was repeated with pseudowords with a length of n + 1 letters. The children were instructed not to start pronouncing a word immediately, but only once they were sure of it. To avoid premature pronunciation, a sound signal was given 700 ms after the pseudoword appeared; the subjects were not supposed to start speaking until they heard this signal. After each pronunciation, the subjects were given 5 to 10 s to correct themselves, if necessary. After an interval of 5 to 10 s, the green fixation mark was presented again. When the child's gaze was on the fixation mark, a different pseudoword appeared for the same time interval as the previously shown pseudoword. The children's reading performance was registered by recording their voices with a microphone. The speech onset, the presented pseudoword, the presentation time of the pseudoword, and the voice of the subject were recorded by a computer. The experiment took no longer than 45 min.
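The adaptive procedure can be summarized algorithmically. The following is a minimal Python sketch based on our reading of the rules above; `run_list` is a hypothetical callback that presents a fresh list of 20 n-letter pseudowords for t ms each and returns the number read correctly, and we assume the presentation time restarts at 250 ms whenever the word length changes:

```python
def find_capacity(run_list, n_letters=4, min_ms=250, max_ms=500, step_ms=50):
    """Return (word_length, fixation_ms) at which at least 19/20 pseudowords
    are read correctly, i.e., the child's individual recognition capacity."""
    t = min_ms
    shortened = False                      # once we shorten, never lengthen again
    while True:
        correct = run_list(n_letters, t)   # one fresh list of 20 pseudowords
        if correct >= 19:                  # criterion: at least 95% correct
            if t == min_ms and n_letters < 6 and not shortened:
                n_letters += 1             # easy at 250 ms: try longer words
            else:
                return n_letters, t        # capacity found
        elif t < max_ms:
            t += step_ms                   # prolong the presentation time
        elif n_letters > 2:
            n_letters -= 1                 # failed even at 500 ms: shorter words
            t = min_ms
            shortened = True
        else:
            return n_letters, t
```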
Statistics: Rates of reading mistakes were compared using the Bonferroni-Holm corrected Wilcoxon-test.
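For illustration, a minimal sketch of this analysis in Python with hypothetical per-child error rates (the actual data are shown in Figure 1); scipy supplies the Wilcoxon signed-rank test and statsmodels the Holm step-down correction:

```python
import numpy as np
from scipy.stats import wilcoxon
from statsmodels.stats.multitest import multipletests

rng = np.random.default_rng(1)
errors_pos1 = rng.uniform(0.00, 0.15, 60)                  # position 1, n = 60
errors_pos3 = errors_pos1 + rng.uniform(0.00, 0.20, 60)    # higher at position 3

# Collect one p-value per pairwise position comparison, then correct.
p_values = [wilcoxon(errors_pos1, errors_pos3).pvalue]
reject, p_holm, _, _ = multipletests(p_values, method="holm")
print(p_holm)
```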
Results
The results of experiment 1 are summarized in Figure 1 and Table 1. Dyslexic children differed considerably with regard to the number of letters they were able to recognize and the fixation time needed to recognize a sequence of letters. From the results of experiment 1 it follows that a sufficiently long fixation time is a prerequisite for the children's ability to recognize pseudowords of a given length. It also follows that reading mistakes occur if pseudowords are too long and readers try to recognize more letters simultaneously than they are able to. Furthermore, reading mistakes may come about when a reader pronounces a word or word segment before the sequence of sounds has been retrieved from memory (too short a speech-onset latency).

Table 1. The number of letters (columns 2-5, from left to right), fixation times (first column on the left), and mean speech onset times (bottom row) at which 60 dyslexic children were able to read at least 95% of the pseudowords correctly. First column on the left: presentation times (i.e., fixation times) of the pseudowords; second to fifth columns: number of subjects who were able to read 3-, 4-, 5-, and 6-letter pseudowords, respectively, within fixation times between 250 and 500 ms. Bottom row: means and standard deviations of speech onset latencies. TG and CG indicate the number of children who belonged to the therapy group (TG) or control group (CG).

For 3- and 4-letter pseudowords, the mean rate of misread letters increased from positions 1 to 3. For 5-letter pseudowords, it increased from positions 1 to 5. For 6-letter pseudowords, it increased from positions 1 to 4 and decreased again from positions 4 to 6 (Figure 1). For 3- and for 6-letter pseudowords, the Bonferroni-Holm corrected Wilcoxon test showed no significant differences between the mean frequencies of misread letters at the 1st to 6th positions within a pseudoword (p > 0.1). For 4-letter pseudowords, the difference between the mean rates of reading mistakes at positions 1 and 3 was p < 0.0001, at positions 1 and 4 p < 0.0003, and at positions 2 and 3 p < 0.014 (Bonferroni-Holm corrected Wilcoxon test); all other comparisons yielded p > 0.09. For 5-letter pseudowords, the differences between the mean frequencies of reading mistakes at positions 1 and 3, 1 and 4, and 1 and 5 were p < 0.0001, and the comparison of the mean rates of misread letters at positions 2 and 4 yielded p < 0.0034 (Bonferroni-Holm corrected Wilcoxon test).
The difference between the mean frequencies of reading mistakes at the 2nd and 4th positions was p < 0.021 (Bonferroni-Holm corrected Wilcoxon test). For the comparisons of the mean rates of misread letters at all other positions, the p-value of the Bonferroni-Holm corrected Wilcoxon test was p > 0.1. Table 1 also shows that the length of the pseudowords had no effect on the speech onset latency: when the speech onset latencies for 3-letter, 4-letter, 5-letter, and 6-letter pseudowords were compared, the Cohen d effect size always showed no effect.

Figure 1. Percentage of incorrectly read letters in 3-letter, 4-letter, 5-letter, and 6-letter pseudowords according to their location in the words. Vertical bars above columns indicate standard deviation. The letters were either omitted, replaced by other letters, or shifted to an incorrect position. Numbers denote the letters in the pseudowords from left to right. F denotes the letter that occurred at the fixation point. The rate of incorrectly read letters increased from left to right. The rate of reading mistakes was lowest for the first letter. In 3-letter and 4-letter pseudowords it was highest for the third letter, in 5-letter pseudowords for the fifth letter, and in 6-letter pseudowords for the fourth letter.

Experiment 2: Immediate Improvement in Reading Ability after Changing the Reading Strategy
Experiment 2 investigated whether a child's ability to read a text improves if the computer guides the children's reading strategy such that (1) the child only attempts to simultaneously recognize words or word segments consisting of no more letters than it is able to recognize simultaneously, (2) the amplitude of the reading saccades does not exceed the number of letters the child is able to recognize simultaneously, (3) the child fixates the word segments for the time interval needed (adequate fixation intervals), and (4) if the time interval between the onset of the presentation of the word and the onset of the pronunciation of the word by the child is sufficiently long (adequate speech-onset latency). The role of an increased fixation time and pseudoword length in improving reading performance had already been investigated in Experiment 1, in which the stimuli were stationary and did not require eye movements. Experiment 2 examined the role of appropriate eye movements on improvement in reading performance. The length of the sequence of letters that could be recognized simultaneously and the required fixation times found in Experiment 1 were transferred to Experiment 2. A colored cursor then indicated the length of word segments and fixation times. Experiment 2 investigated whether reading performance improved when the length of the word segments to be read and the fixation times were adapted to the performance recorded in Experiment 1 and eye movements were guided by the computer.
Children with Dyslexia
All children with dyslexia who had participated in Experiment 1 participated in Experiment 2.
Procedure
The children in the therapy group read one half of a text without the help of a computer and the other half with the help of a computer. The computer controlled how many letters the subjects attempted to recognize simultaneously, the fixation times, the eye movements, and the speech onset latency. The children of the control group read both halves of the text without the support of a computer.
The children were assigned to the therapy group (30 children) or to the control group (30 children) according to their ability to read the letters of pseudowords simultaneously. After each pseudoword experiment, the number of letters a child could recognize simultaneously was known. Children who could recognize the same number of letters simultaneously were assigned to the therapy group or the control group in such a way that there was approximately the same number of children in each group. If several children had the same ability to recognize a certain number of letters simultaneously, the children were assigned to the therapy group and control group in such a way that there were approximately the same number of children in each group who needed the same fixation time. Children who had almost the same ability to recognize a certain number of letters simultaneously and needed the same fixation time were assigned to the therapy group or the control group in such a way that in both groups there were approximately the same number of children who had almost the same age. Thus, the therapy group and the control group were similar in the ability to read letters simultaneously, and in the fixation time they needed to read a given number of letters simultaneously. Table 1 shows the distribution of the children in the therapy group and the control group (mean age in the therapy group: 120.83 months; SD 16.24 months; mean age in the control group: 124.03 months SD: 21.45 months). Comparison of the ages of both groups using the Wilcoxon-test resulted in a p-value of 0.5.
The children of the therapy group and of the control group read the same texts. In the therapy session, the children of the therapy group read with the help of a computer. Half of the children in the therapy group read the first part of four different texts before the therapy session without the help of a computer and the second part during the therapy session with the help of a computer. The other half of the children in the therapy group read the second part of the texts before the therapy session without the help of a computer and the first part during the therapy session with the help of a computer. Half of the children in the control group read the first part of the texts first and the second part later; the other half of the controls read the second part first and the first part later. The controls received no support from the computer while reading the same texts as the therapy group. The children sat in front of a monitor at a distance of 40 cm. The words were black (luminance: 4 cd/m²; letter height: 14 mm; spacing between letters: 4 mm) on a background of 68 cd/m². The luminance of stimuli and background was measured with a Gigahertz Optimeter P 9201. Fixation of the word segments and saccadic eye movements were recorded using an infrared eye-tracking system (IRIS eye tracker; sampling rate: 500 Hz). Only the therapy group was instructed to apply an adequate reading strategy using the Celeco Software-Package for the Diagnosis and Therapy of Dyslexia [39,70]. To adopt such an adequate reading strategy, the children's reading was guided by a computer program that instructed the reader (1) to read only words or word segments not containing more letters than the child was able to recognize simultaneously according to the result of Experiment 1, (2) to fixate these words or word segments for the appropriate time interval, (3) to start pronouncing the words or word segments only after an appropriate time interval, and (4) to execute eye movements of an amplitude that matches the length of the words or word segments whose letters can be recognized simultaneously (adequate reading saccades).
To support this reading strategy, a yellow mark indicated the point to be fixated within each word or word segment. A green cursor (segment cursor) to the left and/or right of the yellow fixation mark indicated which letters in the word segment were to be read simultaneously together with the letter indicated by the yellow fixation mark. The yellow and green marks thus indicated which adjacent letters in a word or word segment should be read while fixating the yellow mark. The subjects read the text aloud so that reading errors could be recognized immediately by the therapist. Whenever a word segment was recognized, the next word segment was shown: the yellow fixation mark was moved to the middle letter of the next word or word segment, indicating the goal of the saccade (i.e., the location to which the gaze should be directed when the next word segment has to be read). A green cursor to the left and/or right of the yellow fixation mark again showed which letters of the newly shown word or word segment were to be read while the eyes fixated the shifted yellow mark. The fixation mark and the segment cursor moved from one word segment to another as they were to be read in succession. The whole text was presented on the monitor, but the text to the left of the word or word segment that had to be read was blanked to prevent the child from exerting a saccade to the left and refixating a word or word segment that had already been read; only the text that had not yet been read appeared on the monitor. An acoustic signal was presented 1 s after the yellow and green cursors had moved to the new segment to be read. The acoustic signal indicated when the subject was allowed to pronounce the word segment, to prevent premature pronunciation of the segment to be read.
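To make the sequencing concrete, here is a schematic Python sketch of such a guided-reading loop. It is not the Celeco software: `show` and `play_tone` are hypothetical display and audio hooks, segmentation is simplified to fixed-size chunks, and the therapist advances to the next segment by pressing Enter once it has been read correctly:

```python
import time

def word_segments(words, capacity):
    """Split each word into chunks of at most `capacity` letters, so that no
    segment exceeds what the child can recognize simultaneously."""
    for word in words:
        for i in range(0, len(word), capacity):
            yield word[i:i + capacity]

def guided_reading(words, capacity, show=print, play_tone=lambda: print("(tone)")):
    for segment in word_segments(words, capacity):
        fixation_letter = len(segment) // 2 + 1      # the "yellow mark"
        show(f"segment {segment!r}: fixate letter {fixation_letter}")
        time.sleep(1.0)        # acoustic signal 1 s after the cursor moves ...
        play_tone()            # ... gates the pronunciation of the segment
        input("press Enter when the segment has been read correctly")

# Example: a child who can recognize 3 letters simultaneously.
guided_reading("Der Hund bellt laut".split(), capacity=3)
```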
Statistics
Rates of reading mistakes and reading times were compared using the Wilcoxon test. To show that the improvement in pseudoword recognition is an effect of the change in reading strategy, the Cohen d effect size statistic [71,72] was used:

$$d = \frac{\bar{X}_1 - \bar{X}_2}{S_w}, \qquad S_w = \sqrt{\frac{(n_1 - 1) S_1^2 + (n_2 - 1) S_2^2}{n_1 + n_2 - 2}}$$

where $\bar{X}_1$ and $\bar{X}_2$ are the means and $S_1$ and $S_2$ the standard deviations of the rates of reading mistakes, and $n_1$ and $n_2$ are the numbers of values from which each mean was calculated.
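As a quick check, the formula can be evaluated with the group means and standard deviations reported in the Results below; a minimal Python sketch, assuming n = 30 readings per condition (the 30 children of the therapy group):

```python
from math import sqrt

def cohen_d(m1, s1, n1, m2, s2, n2):
    """Cohen d with the pooled standard deviation S_w defined above."""
    s_w = sqrt(((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / (n1 + n2 - 2))
    return (m1 - m2) / s_w

# Therapy group: 16.57 +/- 6.76 errors without vs. 5.03 +/- 3.56 with
# computer guidance.
print(cohen_d(16.57, 6.76, 30, 5.03, 3.56, 30))  # ~2.136, reported as 2.137
```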
Results
The children who read the first or the second half of the text without the help of the computer read on average 16.57 (SD = 6.76) words incorrectly, corresponding to 7.67% (SD = 3.13%) of the words. When these children read the remaining half of the text with the help of the computer, on average only 5.03 (SD = 3.56) words, corresponding to 2.33% (SD = 1.65%), were read incorrectly. This corresponds to a mean decrease of 11.54 incorrectly read words, i.e., a 69.97% decrease in reading mistakes. The difference between the numbers of reading mistakes for computer-supported and non-computer-supported reading was significant (p ≤ 0.00001, Wilcoxon test). The effect size was Cohen d = 2.137 (95% CI: 1.24 to 3.034).
In the control group, the rate of reading mistakes increased from the first to the second half of the text. When reading the first half, a mean of 14.4 words (6.67%; SD = 4.0 words (1.85%)) was read incorrectly. When the controls read the remaining half later on, 17.2 words (7.96%; SD = 3.43%) were read incorrectly (p > 0.1, Wilcoxon test). The effect size was Cohen d = 0.512 (95% CI: −0.215 to 1.239). Thus, the comparison between the two halves read by the children in the control group showed only a weak effect. A comparison between the therapy group and the control group (Cohen d of the therapy group minus Cohen d of the control group) yielded an effect size of 2.649, the highest effect size ever measured for a reading therapy. This high effect size was reached immediately after changing the reading strategy, without any reading training.
Children who read half of the text without the help of the computer took 158.32 (SD = 61.98) seconds to read the texts. When the children read with the help of the computer, it took them 342.43 (SD = 77.89) seconds. Comparison of both time intervals with the Wilcoxon-test resulted in p < 0.00001. When the control group read one half of the text first they needed 138.57 (SD = 64.15) seconds. When they read the remaining text they needed 164.2 (SD = 84.74) seconds. Comparison of both time intervals with the Wilcoxon-test showed a p-value of p < 0.00001. Whereas there was no difference between the therapy group and the control group when they read the first part of the text without the help of a computer (Wilcoxon-test: p > 0.05), there was a marked difference between the reading times of the control group when reading the second half of the texts and the reading times of the therapy group who read the second half of the texts guided by the computer (Wilcoxon-test: p < 0.00001).
Discussion
The aim of Experiment 1 was to investigate how many letters pseudowords may contain in order to be recognized simultaneously and how long each individual child must fixate pseudowords of a given length to recognize them. The children were assigned to the therapy group or to the control group according to their ability to simultaneously recognize a string of letters in pseudowords and the fixation times that they needed. This ensured the comparability of the therapy group and the control group. The results of Experiment 1 determined the positions and lengths of the yellow and green cursors in Experiment 2. The cursor indicated how many letters the children should try to read at a time in order to recognize a string of letters simultaneously. As the amplitudes of the saccades that must be executed to connect each word or word segment to be read without a gap between them depend on the number of letters each child can recognize simultaneously, the results of Experiment 1 were also a prerequisite for the computer guidance of the children's reading eye movements.
The results of experiment 1 demonstrate that dyslexic readers can read at least 95% of pseudowords correctly if they fixate them at the right location, the length of the words is adjusted to the subjects' ability to simultaneously recognize letters, and the fixation time and the speech-onset latency are prolonged. This is not due to a lack of knowledge of the grapheme-phoneme correspondence: all subjects who participated in experiment 1 were familiar with the grapheme-phoneme correspondence of all letters, and this knowledge was unimpaired in all subjects. No practice was necessary to improve the subjects' ability to read pseudowords. Experiment 1 was performed with pseudowords because they cannot be guessed, and each letter must be recognized; normal words can be guessed if only a few letters are recognized. Table 1 shows that the ability to recognize a succession of letters simultaneously, the fixation times needed to recognize a given string of letters, and the speech-onset times that the subjects needed to pronounce at least 95% of the pseudowords correctly differed considerably among subjects. Whereas five subjects were able to recognize 6 letters simultaneously (Table 1), 19 subjects were only able to recognize 3 letters simultaneously, 18 subjects were able to recognize 4 letters simultaneously, and 18 subjects were able to recognize 5 letters simultaneously. If a subject who is only able to recognize 3 letters simultaneously tries to recognize 6 letters simultaneously, s/he will make reading mistakes: s/he will swap letters, displace letters, omit letters, and read letters that do not occur in the word. The finding that recognition depends on the presentation time of stimuli is in agreement with psychophysical [73-79] and neurobiological studies [80] and earlier studies on the recognition of pseudowords [37-39]. Temporal summation explains both the improvement in recognizing a sequence of letters with increased fixation time [37-39] and the prolonged fixation times of poor readers reported in previous studies [81-85]. Poor readers tend to misread words more often than good readers. If they misread a word, they often notice that the misread word is not a meaningful word or that it does not fit the context of the sentence in which it occurs. To correct this mistake, they do what everyone does on assuming that something has not been recognized correctly: they direct their gaze longer at the object to be recognized and focus their attention on it. However, this is not sufficient to recognize the word correctly if the fixation time is still too short or if the reader tries to recognize more letters simultaneously than s/he can.
In Experiment 1, the presentation time was limited to 500 ms because temporal summation is effective up to this fixation time [73]. The results of Experiment 1 and earlier studies on temporal summation [37-39,73-79] support the hypothesis that improvement of word recognition requires prolongation rather than shortening [32,86-88] of the fixation time. The finding that readers improve when they extend their fixation times demonstrates that attention does not decrease during fixation and that the readers can maintain their attention for the required fixation time. This contradicts the assumption that poor reading is caused by an attention deficit: if readers had been unable to maintain attention, the rate of misread letters would have increased as the fixation time increased.
As reading mistakes occurred not only at the beginning and at the end of words but at all positions within words, a reduced ability to recognize a succession of letters simultaneously cannot be attributed solely to a narrowing of the field of attention [58-67]. The area in which humans can detect and recognize visual stimuli varies depending on where attention is focused. As early as 1909, Balint [89] demonstrated that subjects could narrow or widen their field of attention depending on the extension of the object they were watching. In 1917, Poppelreuter [90] showed that patients with a normally extended visual field may still be unable to recognize objects next to the object on which they are focusing their attention; objects further from the fixation point were only recognized with longer fixation times. Poppelreuter described this as a narrowing and widening of the field of attention and called this phenomenon "a disturbance of overview". Williams and Gassel [91] also showed that the visual field narrows if a subject directs his/her attention vigorously to a point in the middle of the perimeter used to assess the extension of the visual field. Subsequent studies have confirmed these results and have shown that the retinal area in which many stimuli can be detected simultaneously narrows when subjects focus their attention on a given point in the visual field [58-67]. It has also been shown that the field of attention can be shifted to all areas of the visual field regardless of eye movements [92-94]. In agreement with earlier studies [37-39], the present study demonstrates that dyslexic children differ in their ability to simultaneously recognize a given number of letters in pseudowords and that this ability improves when the presentation time is prolonged. If a narrowing of the field of attention were the cause of reading mistakes, most errors would be expected to occur at the beginning and the end of the pseudowords; Figure 1 demonstrates that this was not the case. The results of experiment 1 also show that poor reading performance is not due to a different masking (crowding) effect [49-59]. If this were the case, one would expect letters in the middle of pseudowords, which are masked by other letters to the left and to the right (crowding effect), to be misread more often than letters at the right end of the pseudowords. However, subjects misread letters at the end of pseudowords, which are not masked by letters on both sides, more often than letters in the middle of pseudowords, which are masked on both sides, irrespective of word length (Figure 1). The results also show that letters at the fixation point and immediately to the right of it were not misread more frequently than letters further to the right of the fixation point. This does not support the assumption that dyslexics suffer from unusual foveal or parafoveal processing [48-56]. Children made fewer reading errors for letters at the beginning than at the end of words, in agreement with earlier studies [37-39]. Table 1 also shows that the number of letters that can be recognized at the same time depends on the fixation time: if the fixation time is prolonged, more letters can be recognized simultaneously.
This finding contradicts the assumption that attention was not focused on the area in the visual field where the pseudowords were presented: each trial began with the presentation of the fixation point, and it was verified that the subjects directed their gaze steadily to the fixation point. The children knew where the pseudoword would appear, and they could focus their attention on this area before the pseudoword was presented. The subjects were also able to read 95% of the pseudowords correctly without visual attention training and without improving their visual attention capabilities. Therefore, a reduced ability to simultaneously recognize all letters in a pseudoword should not be regarded as a consequence of an impaired visual attention span [61,62,65,66,95-98]. Many poor readers improved their ability to simultaneously recognize a string of letters by applying a longer fixation time, i.e., an increased temporal summation [73-79]. Nineteen children were unable to recognize pseudowords consisting of 4 or 5 letters even if the fixation time was prolonged up to 500 ms. These children have a more severe impairment of simultaneous recognition that cannot be compensated for by increased temporal summation. A reduced simultaneous-recognition capacity is different from an attention disorder and should be regarded separately.
The results of experiment 2 demonstrate that a reduction in the word length, a prolongation of the fixation time, and computer guidance of reading eye movements are sufficient to drastically improve reading performance. The rate of reading errors immediately dropped by 69.97%, and the effect size of the computer-guided reading strategy was Cohen d = 2.649. The subjects in the control group who read without computer assistance showed no improvement in reading performance.
The Role of Eye Movements in Dyslexia
The question of whether and to what extent irregular eye movements contribute to a reading disorder is still a matter of debate [29-32,39,81-85,99-106]. The importance of eye movements for reading results from the distribution of visual acuity in the visual field: visual acuity is highest in the fovea and a small parafoveal area and drops dramatically towards the periphery. At 5 degrees of eccentricity, visual acuity drops to 50%, and at 10 degrees of eccentricity only about 30% of visual acuity is left [107]. Therefore, we are only able to read the text of a book if the words we want to read are projected onto the area of sufficiently high visual acuity in succession. If the words a person reads exceed the area of sufficiently high acuity, letters outside that area cannot be recognized.
An ideal reader executes reading saccades that match the number of letters s/he can recognize simultaneously in the reading direction. If the amplitude of reading saccades exceeds the number of letters that a person can recognize simultaneously, reading mistakes are inevitable. If a reader who recognizes only 3 letters simultaneously exerts a saccade over 7 letters, s/he can only recognize three letters before performing the saccade, and s/he can read three letters after having completed the saccade. Then there is a gap of four unrecognized letters between the three letters that were read before and the three letters that were read after the saccade.
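The arithmetic of this example is trivial but worth pinning down; a two-line illustration (the function name is ours, for illustration only):

```python
def unrecognized_gap(saccade_amplitude_letters: int, recognition_span: int) -> int:
    """Letters skipped when a saccade overshoots the simultaneous-recognition span."""
    return max(0, saccade_amplitude_letters - recognition_span)

assert unrecognized_gap(7, 3) == 4  # the example from the text
```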
An ideal reader executes a succession of staircase-like saccades in the reading direction. Saccades opposite to the reading direction (regressions) may occur in normal readers and are frequent in dyslexics [39,81,82,101]. A reader may exert staircase-like reading saccades whose amplitudes do not exceed the number of letters s/he is able to recognize simultaneously; this reader's performance may not be hindered by regressions if they are interspersed among correct staircase-like reading saccades. Regressions may occur in the presence of suitable eye movements in the reading direction without causing poor reading performance. It may, however, be more difficult for the eye to find the correct target of a reading saccade if one or more regressions occur before the saccade to the correct location in the word to be read next is completed. It has been shown that reading therapy is successful even if regressions are interspersed among correct reading saccades after therapy [39]. In the present study, the text that had already been read was deleted, so that regressions were prevented. Guiding eye movements and preventing regressions increased reading performance more than in an earlier study in which regressions were present [39]. The present study demonstrates that inappropriate eye movements impair reading performance if saccades in the reading direction are initiated too early and if their amplitude exceeds the number of letters that the reader can recognize simultaneously [38,39]. If patients are unable to direct their gaze at a word or word segment for at least 200 ms or to execute eye movements of an appropriate amplitude, reading errors will occur. All children in the present study and in previous studies in which they learned an appropriate eye movement strategy [37-39] were able to fixate words for a sufficiently long time interval and to perform appropriate eye movements. This means that the children did not have a reduced ability to fixate words or to perform appropriate saccades [81,102,103] but used an incorrect eye movement strategy.
The result of experiment 2 shows that the reading performance of dyslexic subjects improves immediately, without any training, if their reading strategy is guided by a computer. The computer shows them (1) where a word segment to be read should be fixated, (2) how many letters s/he should try to recognize simultaneously, (3) how long s/he should fixate each segment, (4) that the amplitude of his/her reading saccades should not exceed the number of letters that the subject can recognize simultaneously, and where the goal of each reading saccade should be, (5) that the subject should not execute eye movements opposite to the reading direction, and (6) how long the time between the beginning of the fixation of a word segment and its pronunciation should be. Such a reading strategy must take into account the individual's ability to recognize a given number of letters simultaneously, as well as the fixation time and speech-onset time needed to recognize and pronounce a given word segment. The results show that under these reading conditions the rate of reading mistakes decreased dramatically and that such computer-guided reading yields the highest effect size that has ever been measured in a reading therapy.
In languages such as German, Italian, and Spanish there is a close grapheme-phoneme correspondence, and there is no reason to assume that reading strategies differ significantly between languages with about the same grapheme-phoneme correspondence. Eye movement records of normal readers whose mother tongue is English show the same staircase-like eye movements as those of normal German readers [39,82,84,99,108,109]. English-speaking readers, like German readers, must shift the word segments to be read into the foveal and parafoveal area with saccades directed to the right. The sequence of saccades in the reading direction and fixation phases is a necessary and sufficient condition for reading, presupposing that the reader has a normally developed visual system. Eye movements that are neither necessary nor sufficient for improving reading performance are inappropriate or superfluous. Only when readers must split the text into small word segments whose pronunciation depends on the letters that follow the segment to be pronounced, as is often the case in English, are other eye movements required. In this case, the reader must first look at the word segment to be pronounced, then at the following word segment, and finally back at the word segment to be pronounced.
Does Slow Reading Improve Reading Performance?
Slow reading may be due to long fixation times or to searching eye movements during reading (Figure 2a: A and B). The results of experiment 1 show that reading performance improves if readers prolong the time interval during which they fixate the word or word segment to be read.
Figure 2. (a) A and B: searching eye movements, with many regressions, of a subject who participated in experiment 2 during free reading; D and E: staircase-like eye movements of the same subject during computer-guided reading, with only one regression (arrow) and longer fixation times; C and F: speech spectrograms during free and computer-guided reading, respectively, showing that the subject pronounces more slowly during computer-guided reading. (b) A and B: eye movements of a dyslexic reader who participated in experiment 2 during free reading, consisting mainly of staircase-like eye movements; D and E: staircase-like eye movements of the same subject during computer-guided reading, with only one regression (arrow) and longer fixation times; C and F: speech spectrograms during free and computer-guided reading, respectively, again showing slower pronunciation during computer-guided reading.
Even if fixation time is prolonged up to 500 ms, some readers are unable to recognize more than 3 or 4 letters simultaneously. If slow reading is due to long fixation times, reading performance improves only if the reader does not try to recognize more letters simultaneously than s/he is able to; otherwise, long fixation times do not improve reading performance.
Slow reading may also be due to searching eye movements in and against the reading direction that are executed during a long time interval before the subject starts to pronounce the text to be read (see Figure 2). These readers' eye movements to the right typically exceed the number of letters that can be recognized simultaneously, and their fixation times are shorter than needed to recognize a given succession of letters. Therefore, slow reading improves reading performance only if the reader does not try to recognize more letters than s/he can, and if s/he does not scan the text for a long time with inadequate searching eye movements. In the present study, computer-guided reading was slower than free reading because the children were instructed not to pronounce the words or word segments to be read before they were sure that they had recognized them correctly (speech-onset latency). Even if they were inclined to pronounce a word or word segment after 600 or 800 ms, they were not allowed to do so, because they were only allowed to pronounce the word or word segment after a tone signal that was presented 1 s after the beginning of the fixation period. After the children had heard the tone signal, they could start to pronounce the word or word segment only after a given reaction time of at least 250 ms. Prolongation of the speech-onset latency was important because many children are liable to make reading mistakes when they start to pronounce before the sequence of sounds has been retrieved correctly from memory. Prolongation of the speech-onset latency prevents premature pronunciation.
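The timing constraints of the guided-reading protocol described above (tone presented 1 s after fixation onset, pronunciation permitted only at least 250 ms after the tone) can be sketched as a simple pacing loop. This is a minimal illustration assuming a console-style event loop, not the original training software; the word segmentation is hypothetical:

```python
import time

FIXATION_TO_TONE_S = 1.0     # tone presented 1 s after fixation onset (from the text)
MIN_REACTION_TIME_S = 0.25   # pronunciation allowed only >= 250 ms after the tone

def present_segment(segment: str) -> None:
    """Pace one word segment: fixation -> tone -> earliest allowed pronunciation."""
    print(f"Fixate: {segment}")
    time.sleep(FIXATION_TO_TONE_S)    # child keeps fixating until the tone
    print("Tone signal")
    time.sleep(MIN_REACTION_TIME_S)   # enforced minimum reaction time
    print("Pronunciation now permitted")

for seg in ["Le", "se", "stra", "te", "gie"]:   # hypothetical segmentation
    present_segment(seg)
```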
Conclusions and Outlook for Future Research
The results of the present study demonstrate that reading problems are not due to a lack of eye movement control or reduced visual attention. The results suggest that poor reading is caused by an inappropriate eye movement strategy which consists of executing a saccade before the word or word segment to be read has been recognized. This means that the fixation times required to recognize the words or word segments are too short. Reading mistakes occur if the saccades executed to the next word or word segment to be read are greater than the number of letters that can be recognized simultaneously. As a result, recognized words or word segments do not connect without gaps, and letters in the gaps are overlooked. When reading aloud, readers begin pronunciation too early, i.e., before the sound sequence associated with the word to be read has been correctly retrieved from memory. When these causes of reading errors are eliminated by a new, computer-guided reading strategy, the rate of misread words immediately drops dramatically.
The computer program used in the present study has also been successfully used to teach dyslexics a reading strategy tailored to their individual abilities and needs. The children reduced their rates of reading errors by nearly two-thirds in one session [37][38][39]. To further improve reading performance, parents were instructed to conduct daily reading training with this computer program. Parents reported that the children continued to improve their reading skills. If the children are no longer under the therapist's control in everyday life, it is not possible to determine which factors other than the reading training influence their reading performance. It would be desirable to conduct the study over a longer period of time during which all influences on reading performance can be controlled.
Informed Consent Statement: Informed consent was obtained from all subjects involved in the study.
Conflicts of Interest:
The author declares no conflict of interest. Neither the manuscript nor any parts of its content are currently under consideration or published in another journal. | 2021-05-01T06:17:15.380Z | 2021-04-21T00:00:00.000 | {
"year": 2021,
"sha1": "8508290ef17a1e09c73d8a1d75497024293ce477",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2076-3425/11/5/526/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "5b11bf5fe2d759e90d9705d92f1e68fb448e1458",
"s2fieldsofstudy": [
"Psychology",
"Education"
],
"extfieldsofstudy": [
"Medicine"
]
} |
119366368 | pes2o/s2orc | v3-fos-license | Stability of a modified Jordan-Brans-Dicke theory in the dilatonic frame
We investigate the Jordan-Brans-Dicke action in the cosmological scenario of an FLRW spacetime with zero spatial curvature and with an extra scalar field minimally coupled to gravity as the matter source. The field equations are studied in two ways. First, the method of group invariant transformations, i.e., symmetries of differential equations, is applied in order to constrain the free functions of the theory and determine conservation laws for the gravitational field equations. The second method that we apply for the study of the evolution of the field equations is the stability analysis of equilibrium points. In particular, we find solutions with $w_{\text{tot}}=-1$, and we study their stability by means of the Center Manifold Theorem. We show that this solution is an attractor in the dilatonic frame, but it is an intermediate accelerated solution $a \simeq e^{A t^p}, \text{as}\; t\rightarrow \infty$, and not a de Sitter solution. The exponent $p$ reduces, in a particular case, to the exponent already found for the Jordan and Einstein frames by A. Cid, G. Leon and Y. Leyva, JCAP 1602, no. 02, 027 (2016). We obtain some equilibrium points that represent stiff solutions. Additionally, we find solutions that can be a phantom solution, a solution with $w_{\text{tot}}=-1$, or a quintessence solution. Other equilibrium points mimic a standard dark matter source ($0<w_{\text{tot}}<1$), radiation ($w_{\text{tot}}=\frac{1}{3}$), among other interesting features. For the dynamical system analysis we develop an extension of the method of $F$-devisers. The new approach relies upon two arbitrary functions $h(\lambda, s)$ and $F(s)$. The main advantage of this procedure is that it allows us to perform a phase-space analysis of the cosmological model without the need to specify the potentials, revealing the full capabilities of the model.
Introduction
Various models have been proposed to explain the results that followed from the detailed analysis of recent cosmological data [1][2][3][4][5]. The observed late-time acceleration has been attributed to the so-called dark energy cosmological fluid. The nature of dark energy is unknown, and the theoretical approaches to the problem can be classified into two categories. In the first category, in the context of General Relativity, an "exotic" matter source is introduced which provides the late-time acceleration of the universe [6][7][8][9][10]. In the second category, the expansion of the universe is attributed to terms that follow from the modification of General Relativity (GR); see for instance [11][12][13][14][15][16][17] and references therein. In the latter theories, the new terms that follow from the modification of the Einstein-Hilbert action provide a geometric explanation for the acceleration of the universe.
In the context of this work we are interested in the Brans-Dicke gravitational action in cosmological studies. Brans and Dicke in 1961 proposed a gravitational action which satisfies Mach's principle [18]. In that theory a new degree of freedom is introduced, attributed to a scalar field nonminimally coupled to gravity. The importance of that theory is that it is equivalent, under a conformal transformation, to GR with a minimally coupled scalar field. Furthermore, other higher-order theories can be written in terms of a Brans-Dicke field by using Lagrange multipliers [19]. In the cosmological scenario of a spatially flat Friedmann-Lemaître-Robertson-Walker geometry we assume the existence of a second perfect fluid described by a scalar field minimally coupled to gravity. With this consideration, and in the Einstein frame, the gravitational field equations are those of GR in the so-called σ-models, that is, two scalar fields with interactions in the kinetic and dynamical parts of the Lagrangian. Exact solutions in the context of multi-scalar field cosmologies were recently presented in [20]. Two-scalar cosmology was discussed, with interesting results, in the seminal works [21,22]. Integrable cosmological models with non-minimal coupling have been studied, e.g., in [23]. In [24] it was shown that it is sometimes easier to prove the integrability of the model with non-minimal coupling than of the corresponding model in the Einstein frame. The Bianchi I model with non-minimal coupling admits a general solution in analytic form, but only in the case of zero potential [25].
In this paper we propose a modified Brans-Dicke theory where the Brans-Dicke field Φ is driven by a potential U(Φ) and the matter content is modeled by a second scalar field ψ with potential W(ψ). The potentials are not specified from the start. In order to specify the unknown potentials, we first express the action in the dilatonic frame by introducing the dilaton field ϕ with potential V(ϕ). The potentials can then be derived by applying the method of group invariant transformations. The existence of a symmetry vector is important since the latter can be used to define an invariant surface in the phase space of the dynamical system. More details on the application of group invariant transformations in cosmological studies can be found in [27][28][29][30] and references therein. On the other hand, one can consider the potentials to be free functions and then find the generic features of the dynamical system, under the assumption that the system can be written in closed form. In this regard, we propose a general method for the construction of the phase space that relies on the specification of two arbitrary functions F(s) and h(s, λ). The equilibrium points with s constant, such that h is only a function of λ (depending on the choice of W), and with F identically zero, are easily found because the problem can be reduced to one dimension. When F(s) is not trivial, we discuss a general classification that can be implemented straightforwardly, as for any of the specific choices of F for the scalar field potentials commonly used in the literature. The search for the equilibrium points with λ ≠ 0, on the other hand, is not an easy task, and success depends crucially on the choice of h(s, λ).
The main advantage of this procedure is that it allows us to perform a phase-space analysis of the cosmological model without the need to specify the potentials. This phase-space and stability examination lets us bypass the non-linearities and complications of the cosmological equations, which prevent complete analytical treatments, by obtaining a qualitative description of the global dynamics of these scenarios that is independent of the initial conditions and the specific evolution of the universe. Furthermore, in these asymptotic solutions we are able to calculate various observable quantities, such as the dark-energy and total equation-of-state parameters, the deceleration parameter, the various density parameters, etc. However, in order to remain general, we extend beyond the usual procedure [31][32][33][34][35][36][37][38][39][40][41][42][43][44][45][46][47]. As far as we know this methodology has not been introduced in the literature yet, although it is inspired by the method of F-devisers, extensively used in the relativistic setting in [45][46][47][48][49][50] and formalized in [51][52][53].
To illustrate the advantages of the method we consider some specific forms of the potentials V(ϕ) and W(ψ), which lead to specific forms of the functions F(s) and h(s, λ). For the Brans-Dicke field Φ we consider a power-law potential which, in terms of the field ϕ, has the exponential form V(ϕ) = V_0 e^{lϕ}. As for the second scalar field, we study the cases where the potential is (a) exponential and (b) power-law. Finally, we comment on general features of the equilibrium points of the dynamical system.
Comparing with the Jordan-Brans-Dicke theory introduced and studied in [26] in the Jordan and Einstein frames, we have the following. In the Jordan frame, the potentials of [26] are as given below (we have renamed the original constants as λ_U and λ_W). Therefore, the fields of this theory in the dilatonic action will be the dilaton ϕ with its potential and a second scalar field ψ with its potential. Hence, the model studied in [26] can be considered as a special case of the model studied in Section 4.1 (Case: W(ψ) = W_0 e^{kψ} and V(ϕ) = V_0 e^{lϕ}). The paper is organized as follows. Our model is defined in Section 2. The point-like Lagrangian and some exact solutions obtained by using group invariant transformations are presented in Section 3. In Section 4 we rewrite the field equations in dimensionless variables and end up with a five-dimensional first-order differential-algebraic system with two unknown functions which are related to the potentials of the two scalar fields. For some explicit forms of the potentials, we study the evolution of the field equations by using dynamical systems tools. In particular, we consider the cases where the Brans-Dicke scalar field potential is a power law while the minimally coupled field has an exponential potential or a power-law potential. The case W(ψ) = W_0 e^{kψ} and V(ϕ) = V_0 e^{lϕ} is studied in Section 4.1, whereas the case W(ψ) = W_0 ψ^k and V(ϕ) = V_0 e^{lϕ} is studied in Section 4.2. Turning to the general setup, we find generic features of the dynamics without specifying the potentials in Section 5. This allows us to find generic results that are independent of the model choice. The cosmological implications of the model at hand are discussed in Section 6. Finally, our conclusions and discussion are given in Section 7.
Gravitational model
Let us consider the gravitational action integral (2.1), where Φ is the Brans-Dicke field, ψ represents a quintessence field, and U(Φ) and W(ψ) are the corresponding potentials for the scalar fields. For the sake of simplicity and without loss of generality, we rescale the Brans-Dicke field Φ and the associated potential U(Φ). Consequently, under a conformal transformation the action (2.1) is transformed into the dilatonic action (2.3). The field equations associated with the action (2.3) follow. We assume that the geometry which describes the universe is that of the spatially flat Friedmann-Lemaître-Robertson-Walker spacetime (2.5). For the latter line element and for the comoving observer (u^a = δ^a_t, u^a u_a = −1), we calculate the field equations, where (2.6a) is the modified first Friedmann equation, equation (2.6b) is the Raychaudhuri (acceleration) equation, and equations (2.6c), (2.6d) are the "Klein-Gordon" equations that the two scalar fields should satisfy. In the following section we determine the point-like Lagrangian for the field equations and also search for solutions by using the method of group invariant transformations.
Minisuperspace approach and exact solutions
From the action integral (2.3) and for the FRW spacetime with line element (3.7), the following Lagrangian density can be defined, from which the field equations (2.6a)-(2.6d) follow via the Euler-Lagrange equations with respect to the variables {N, a, ϕ, ψ}. Lagrangian (3.8) describes a singular system of second-order differential equations, because the determinant of the Hessian matrix is zero, i.e.
det(∂²L/∂ẋ^i ∂ẋ^j) = 0. Specifically, the field equations form a constrained dynamical system [54], with constraint equation ∂L/∂N = 0. Without loss of generality we can consider that N(t) = N(a(t), ϕ(t)), where now Lagrangian (3.8) is autonomous and admits the symmetry vector field ∂_t, for which the corresponding conservation law is the Hamiltonian function H = const. However, from the first modified Friedmann equation we have that H = 0.
We consider that N = N̄(t) e^{−ϕ/2} and a = A e^{−ϕ/2}, where now the line element (3.7) becomes (3.9), while the Lagrangian of the field equations is written as (3.10), in which Ω_0 = (3 + 2ω_0)/2. Lagrangian (3.10) is nothing else than the cosmological model of two scalar fields minimally coupled to gravity, but with interaction in the kinetic and dynamic terms. Specifically, Lagrangian (3.10) describes the field equations for the action integral (3.11), where ḡ_µν = e^ϕ g_µν.
The last action belongs to the class of so-called nonlinear σ-models [55]. On the other hand, the action integral (3.11) can be seen as that of a complex scalar field where the norm of the complex plane is defined not by the unitary matrix but by a space of constant curvature E^A_B = diag(Ω_0, e^{−ϕ}), with Ricci scalar R^{(2)} = −1/(2Ω_0). Finally, because of the constraint equation, any solution of the dynamical system with Lagrangian (3.10) will also be a solution for the system (3.8) (for a discussion see [27]). Some exact solutions for cosmological models of the form of (3.11) can be found in [28,56] and references therein. In the following, without loss of generality, in (3.10) we select N̄ = 1.
In order to specify the unknown potentials V(ϕ) and W(ψ) we apply the method of group invariant transformations. We find that, for the potentials (3.12), Lagrangian (3.10) admits the Noether point symmetry vector (3.13), where the corresponding conservation law is¹ (3.14). Consider now that β = 2, and that the value of the conservation law is zero, that is, I_X = 0; then from (3.14) it follows that, by replacing in the Hamiltonian function, we have (3.16), where in the limit c = 0 the field equations correspond to those of GR with a cosmological constant and a stiff matter; the latter follows from the kinetic part of the scalar field Ψ. For a nonzero constant c, (3.16) corresponds to the first Friedmann equation of GR with a minimally coupled scalar field, whose general solution is given in [57]. In the limit where Ω_0 = −1/4, i.e. ω_0 = −5/4, from (3.17) we have the closed-form expression ψ = √c tanh Ψ, where (3.16) becomes that of a quintessence field with the hyperbolic potential W(Ψ) = sinh^4(Ψ). In general, for β ≠ 2, from the symmetry vector (3.13) we define the Lagrange system. ¹ The constraint equation ∂L/∂N = 0 has been applied.
from which we define the invariants u, v, w. A Noether symmetry is also a Lie point symmetry for the field equations. The invariants can be used to reduce the order of the differential equations or to determine a special solution. Consider that the invariants are constants, i.e. (u, v, w) → (A_0, e^{ϕ_0}, ψ_0); then we observe that these solve the gravitational field equations with Lagrangian (3.10) and N(t) = 1 for the potentials (3.12), when the constants W_0, V_0, Ω_0 and β are related as follows. Solution (3.20) is a special solution of the field equations in the Einstein frame. Going back now to the Jordan frame, with the corresponding time transformation for β ≠ 1, 2, and t = σ_2(ϕ_0) e^{t̄} for β = 1, for the scale factor one has a(τ) ≃ τ^{(5−β)/(3(β−1))}, β ≠ 1, 2, and a(τ) ≃ e^{σ_2 τ}. The latter is a de Sitter solution, while the former is a perfect-fluid solution for which there exists acceleration, i.e. w_eff < −1/3, for β ∈ (−∞, 2) ∪ (5, +∞); for β = 13/5 we have a radiation solution, and for β = 7/3 the solution is that of a pressureless fluid. We continue our analysis with the equilibrium point analysis for the gravitational field equations, but we now keep the potentials unspecified.
The dynamical system
In order to express the above equations as an autonomous closed dynamical system, we define the normalized variables and the auxiliary variables, which are related as follows. The quantity s = −V′(ϕ)/V(ϕ) depends only on ϕ, and simultaneously ϕ is an implicit function of s; λ, on the other hand, depends on both ϕ and ψ. Thus, using the implicit relation between ϕ and s through s = −V′(ϕ)/V(ϕ), and the relation between ψ and Γ_ψ, we obtain λ = g(s, Γ_ψ). Assume that λ = g(s, Γ_ψ) can be explicitly solved for Γ_ψ, say Γ_ψ = h(s, λ).
We have a dynamical system for the state vector (x, y, z, λ, s), defined in the phase space, whose evolution is given by (4.4).
Defining the function C and calculating its total derivative, we find the following. If we take the initial conditions on the surface C = 0, the solutions remain on this surface for all time; and if we take the initial conditions in the half-space C > 0, the solutions remain in this region for all time. By estimating the total derivative we can see how the errors propagate if we take the initial conditions on the surface C(x, y, z, s, λ) = C_0, with C_0 arbitrarily small.
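The constraint-monitoring idea described above can be illustrated numerically with a toy system; here a harmonic oscillator's conserved energy plays the role of C, and the drift of C along a numerical solution is tracked. This is a sketch of the numerical check only; the actual right-hand side of (4.4) and the function C(x, y, z, s, λ) are not reproduced:

```python
import numpy as np
from scipy.integrate import solve_ivp

def rhs(t, u):
    """Harmonic oscillator standing in for the actual system (4.4)."""
    x, v = u
    return [v, -x]

def C(u):
    """Conserved energy, playing the role of the constraint function C."""
    x, v = u
    return 0.5 * v**2 + 0.5 * x**2 - 0.5   # zero on the chosen initial surface

sol = solve_ivp(rhs, (0.0, 50.0), [1.0, 0.0], rtol=1e-8, atol=1e-10,
                dense_output=True)
drift = max(abs(C(sol.sol(t))) for t in np.linspace(0.0, 50.0, 201))
print(f"max |C| along the orbit: {drift:.2e}")   # stays close to the surface C = 0
```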
To explicitly obtain an autonomous dynamical system it is, in principle, necessary to determine a specific potential form V(ϕ) and W(ψ). However, one can alternatively handle the potential differentiations when F can be expressed as an explicit one-valued function of s, that is F = F(s), and an explicit function h = h(s, λ) can be defined for some examples. Therefore we arrive at a closed dynamical system for s, λ, and a set of normalized variables. A similar approach has been applied in isotropic (FRW) scenarios [45][46][47][48][49][50]; however, for the purpose of the present work we will improve it. Such a procedure is possible for general physical potentials, and for the usual ansatzes of the cosmological literature it results in very simple forms for F(s), as can be seen in Table 1.
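As a sketch of how F(s) arises for the usual ansatzes, one can verify symbolically that an exponential potential gives F ≡ 0 while a power-law potential gives F = −s²/n. This assumes the standard f-deviser construction s = −V′(ϕ)/V(ϕ) and F(s) = s²(Γ_V − 1) with Γ_V = V V″/V′², an assumption made for illustration (the paper's Table 1 is not reproduced here):

```python
import sympy as sp

phi, V0, l, n = sp.symbols('phi V0 l n', positive=True)

def F_and_s(V):
    """Return (F, s) for a potential V(phi), assuming F(s) = s**2*(Gamma - 1)."""
    s = -sp.diff(V, phi) / V
    Gamma = sp.simplify(V * sp.diff(V, phi, 2) / sp.diff(V, phi)**2)
    return sp.simplify(s**2 * (Gamma - 1)), sp.simplify(s)

print(F_and_s(V0 * sp.exp(l * phi)))   # (0, -l): exponential potential, F == 0
print(F_and_s(V0 * phi**n))            # (-n/phi**2, -n/phi), i.e. F = -s**2/n
```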
In order to continue, we consider some specific forms of the potentials V(ϕ) and W(ψ), which lead to specific forms of the functions F(s) and h(s, λ). For the Brans-Dicke field Φ we consider a power-law potential which, in terms of the field ϕ, has the exponential form V(ϕ) = V_0 e^{lϕ}. As for the second scalar field, we study the cases where the potential is (a) exponential and (b) power-law. Finally, we comment on general features of the equilibrium points of (4.4) for arbitrary h(λ, s) and F(s) functions.
The equilibrium points of the system (4.8) are the following: It is always a saddle with a three dimensional unstable manifold if l < 2.
The eigenvalues are It is a saddle.
The eigenvalues are It is a saddle.
The eigenvalues are It is a saddle otherwise.
. It is a saddle.
It is a saddle.
It is a sink for It is a source for It is a saddle otherwise.
It is a saddle with a three dimensional stable manifold provided ω_0 ≥ 45/2.
4.1.1 Center manifold of P 1 .
From the previous linear analysis we have found that the equilibrium point P_1 is nonhyperbolic with a three dimensional stable manifold provided ω_0 > −3/2, 1 < l ≤ (6ω_0 + 25)/16. In this subsection we use the Center Manifold Theorem to show that the solution corresponding to P_1 is indeed locally asymptotically stable under the above conditions.
Introducing new variables, which are real, the point P_1 is shifted to the origin and the linear part of the vector field is transformed into its real Jordan canonical form. Therefore, the evolution equations become the system (4.11), which is written in the diagonal form (4.13), where (u, v) ∈ R × R^3, C is the zero 1 × 1 matrix, P is a 3 × 3 matrix with negative eigenvalues, and f, g vanish at 0 and have vanishing derivatives at 0. The center manifold theorem asserts that there exists a 1-dimensional invariant local center manifold W^c(0) of (4.13), tangent to the center subspace (the v = 0 space) at 0. Moreover, W^c(0) can be represented as a graph, and the restriction of (4.13) to the center manifold is (4.14). If the origin of (4.14) is stable (asymptotically stable) (unstable), then the origin of (4.13) is also stable (asymptotically stable) (unstable). Therefore, we have to find the local center manifold; i.e., the problem reduces to the computation of h(u).
Substituting v = h(u) in the second component of (4.13) and using the chain rule, v′ = Dh(u)u′, one can show that the function h(u) that defines the local center manifold satisfies (4.15). Equation (4.15) can be solved approximately by expanding h(u) in a Taylor series at u = 0. Since h(0) = 0 and Dh(0) = 0, h(u) commences with quadratic terms. We substitute into (4.15) and set the coefficients of like powers of u equal to zero to find the non-zero coefficients. Therefore, the local center manifold of the origin can be expressed accordingly. The dynamics on the center manifold is given by a gradient-like equation u′ = −∇Π(u), where Π(u) = (l−1)u^4/[8(l+1)^2], for which the origin is a degenerate local minimum whenever l > 1 (recall the existence conditions for P_1 are 2ω_0 + 3 ≠ 0, l ≥ 1). This implies that the center manifold of P_1 is stable when 1 < l ≤ (6ω_0 + 25)/16. For l > (6ω_0 + 25)/16 the unstable manifold is not empty. Neglecting the terms of order O(λ^3), the center manifold can be given in the original variables by a graph. In Figure 1 we present some orbits of the dynamical system (4.8) projected on the space (x, y, z) for W(ψ) = W_0 e^{kψ} and V(ϕ) = V_0 e^{lϕ} with ω_0 = 50 and l = 8. The initial conditions were chosen randomly to show that, irrespective of the initial conditions, the orbits are attracted by the center manifold of the equilibrium point P_1. Later on, in Section 5.1, it will be shown that the cosmological solutions represented by these orbits tend to the solution associated with P_1. Furthermore, as shown in Figure 3, the cosmological parameters behave in accordance with the current cosmological paradigm. This feature makes the model very interesting from the cosmological point of view. In Figure 2 some orbits of the dynamical system (4.8) are displayed, projected on the space (x, y, z), for ω_0 = −2, l = −3. The initial conditions are chosen randomly to show that, irrespective of the initial conditions, the orbits are attracted by the equilibrium point P_6. In this example, the phase space is the interior of a hyperboloid that corresponds to the boundary of the phase space and is represented by a gray mesh. The late-time attractor is a phantom-dominated solution.
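The Taylor-coefficient procedure used above can be illustrated on a toy planar system already in the diagonal form (4.13). The system x′ = xy, y′ = −y + ax² below is a standard textbook example, not the actual four-dimensional system around P_1:

```python
import sympy as sp

x, a, a2, a3 = sp.symbols('x a a2 a3')

h = a2 * x**2 + a3 * x**3          # ansatz y = h(x) with h(0) = Dh(0) = 0

# Invariance condition: Dh(x) * x' - y' = 0, restricted to y = h(x).
residual = sp.expand(sp.diff(h, x) * (x * h) - (-h + a * x**2))

# Equate coefficients of like powers of x to zero and solve.
coeffs = sp.solve([residual.coeff(x, 2), residual.coeff(x, 3)],
                  [a2, a3], dict=True)[0]
print(coeffs)                                   # {a2: a, a3: 0}, so h(x) = a*x**2 + O(x**4)
print(sp.expand((x * h).subs(coeffs)))          # flow on the manifold: a*x**3
```

The flow on the center manifold, x′ = ax³ + O(x⁴), then decides the stability of the origin, exactly as the sign structure of Π(u) does above.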
As we have commented before, the model studied in [26] can be considered as a special case of the model W(ψ) = W_0 e^{kψ} and V(ϕ) = V_0 e^{lϕ} with k = −λ_W, l = 1 − λ_U/γ. In this section we have investigated the stability of the equilibrium solutions in the dilatonic frame. In reference [26] the stability of the equilibrium points was studied in both the Jordan and the Einstein frames, so our results complement those found in [26]. In particular, notice that the equilibrium point (in the Jordan frame) named J_4 in [26] corresponds to P_1 investigated in this section, with the identification λ_U = (1 − l)γ, γ^{−1} = ω_0 + 3/2. The stability conditions deduced in [26], in both the Jordan-frame and the Einstein-frame formulations, are λ_U < 0, γ > 0; that is, ω_0 > −3/2, l > 1. The stability condition in the dilatonic-frame formulation is 1 < l ≤ (6ω_0 + 25)/16, which is equivalent under the identifications λ_U = (1 − l)γ, γ^{−1} = ω_0 + 3/2. The equilibrium points J_4 and E_4 (the representations of P_1 in the Jordan and Einstein frames, respectively) correspond to an intermediate accelerated solution instead of a de Sitter solution (see the derivation in [26]). That is, the attractor in the Jordan frame corresponds to a solution of the form a(t) ≃ e^{α_1 t^{p_1}} as t → ∞, where α_1 > 0 and 0 < p_1 < 1 for a wide range of parameters. Furthermore, working in the Einstein frame, the attractor is also a solution of the form ā(t̄) ≃ e^{α_2 t̄^{p_2}} as t̄ → ∞, where α_2 > 0 and 0 < p_2 < 1, for the same conditions on the parameter space as in the Jordan frame. An equivalent result can be deduced straightforwardly for the dilatonic frame. We proceed as follows. According to the center manifold calculation, we have, from (4.16a), the definition λ := ke^{−ϕ/2}, and the definition (4.1), that (as ϕ → ∞), with the identifications λ_U = (1 − l)γ, γ^{−1} = ω_0 + 3/2, we obtain the same exponent p = p_1 = p_2 = 2γ/(3γ − λ_U). Since p < 2/3, P_1 is not a de Sitter solution (which would require p = 1).
It is a saddle point with a three dimensional unstable manifold for β < 3. It is a saddle.
The eigenvalues are It is a saddle.
The eigenvalues are It is a sink for β < 0, ω_0 < −3/2. It is a saddle otherwise.
. It is a saddle.
It is a saddle.
The eigenvalues and the nature of the equilibrium points has to be handled for specific choices of the parameters in the region of existence.
The eigenvalues and the nature of the equilibrium points has to be handled for specific choices of the parameters in the region of existence.
Center manifold of P 1 .
From the previous linear analysis we found that the equilibrium point P_1 is nonhyperbolic with a three dimensional stable manifold provided β > 2, ω_0 ≥ (16β − 41)/6. Introducing the new variables, the system (4.24) is written in diagonal form, where (u, v) ∈ R × R^3, C is the zero 1 × 1 matrix, P is a 3 × 3 matrix with negative eigenvalues, and f, g vanish at 0 and have vanishing derivatives at 0. The center manifold theorem asserts that there exists a 1-dimensional invariant local center manifold W^c(0) of (4.13), tangent to the center subspace (the v = 0 space) at 0. Moreover, W^c(0) can be represented as a graph for δ sufficiently small. The restriction of the dynamics to the center manifold is determined by the function h(u) that defines the local center manifold. Following the same procedure implemented in Section 4.1.1 we obtain the coefficients, with a_1 = 0. Therefore, the dynamics on the center manifold is given by the gradient-like equation under the potential Π(u) = (β−2)u^4/(8β^2), for which the origin is a degenerate local minimum whenever β > 2 (recall the existence conditions for P_1 are β > 2, ω_0 ≥ (16β − 41)/6); under these conditions, the center manifold of P_1 is stable. In the original variables (4.1) the center manifold can be locally expressed as a graph. According to the center manifold calculation, using (4.16a), the definition λ := −(2β/ψ) e^{−ϕ/2}, and the definition (4.1), and introducing the time rescaling df/dτ = (ψ/2) df/d ln a, we have (as λ → 0) the asymptotic equations. Integrating them, and using the first integral ln(a/a_0) = ∫ (ψ/2) dτ, we obtain the general solution (4.29d).
Cosmological consequences
By considering the equations written in the dilatonic frame, (2.6a)-(2.6d), we can define the following observable quantities. These cosmological parameters can be written in terms of the phase-space variables. w_tot is related to the deceleration parameter for isotropic metrics by q = (1 + 3w_tot)/2. We continue with the discussion of the interpretation of the model for the choices studied in Sections 4.1 and 4.2, and finish the section with a discussion of the generic features of the models. In Table 2 we present the cosmological parameters corresponding to the formulation in the dilatonic frame as given by Eqs. (2.6) for W(ψ) = W_0 e^{kψ} and V(ϕ) = V_0 e^{lϕ}. We have the following results: • P_1 satisfies Ω_2 = 1 − Ω_1 = (l−1)/(l+1) with w_1 = −1, w_2 = −1 and w_tot = −1. Both energy densities Ω_1, Ω_2 are of the same order of magnitude; that is, it is a scaling solution. We have proved that its center manifold is stable for 1 < l ≤ (6ω_0 + 25)/16. Hence, this point is a late-time attractor.
Table 2: Cosmological parameters corresponding to the formulation in the dilatonic frame as given by Eqs. (2.6) for W(ψ) = W_0 e^{kψ} and V(ϕ) = V_0 e^{lϕ}.
• The equilibrium points P_2, P_3 and P_4 satisfy w_1 = 1, w_2 = 1, w_tot = 1; that is, they represent stiff solutions. All three are saddles; therefore, they are relevant neither for the late-time nor for the early-time cosmology.
• The equilibrium point P_8, which exists for ω_0 > −3/2, ω_0 ≠ 0, satisfies Ω_2 = 0; that is, the energy density of the dilatonic field is dominant and the energy density of the quintessence field is negligible. The total energy density represents a standard matter source with 0 < w_tot < 1. It is a saddle; therefore, it is relevant neither for the late-time nor for the early-time cosmology.
• The equilibrium point P_10 satisfies Ω_2 = 1. That is, it is dominated by the quintessence field and the contribution of the dilatonic field to the total energy density is negligible. It satisfies w_2 = 1/3 and w_tot = 1/3. This means that the corresponding cosmological solution mimics radiation. Interestingly, it is a saddle with a three dimensional stable manifold provided ω_0 ≥ 45/2.
Figure 4 presents the evolution of the dimensionless energy densities Ω_1, Ω_2 and the observables w_2, w_tot, q vs. ln(a) for W(ψ) = W_0 e^{kψ} and V(ϕ) = V_0 e^{lϕ} with ω_0 = −2, l = −3. For the second model we have the same results as for the first nine equilibrium points in Table 2, in Section 5.1, replacing l = β − 1, k = 2β, together with the additional equilibrium points P_11 − P_15.
• The observables for P 11,12 and P 14,15 have to be evaluated for specific choices of the parameters.
Some results for arbitrary potentials
In Sections 4.1 and 4.2 we have investigated an exponential potential V(ϕ), for which s is a constant such that h is only a function of λ that depends on the choice of W, and F is identically zero. To complement these results, in this section we comment on the generic features of the equilibrium points of (4.4) for arbitrary h(λ, s) and F(s) functions.
The system is form-invariant under the change (y, z, λ) → (−y, −z, −λ); thus, without loss of generality we can investigate just the sector y ≥ 0, z ≥ 0, λ ≥ 0. Henceforth, we will focus on the stability properties of the system (4.4) for the state vector (x, y, z, λ) defined in the phase space. The equilibrium points of (4.4) that are independent of h(s, λ) are summarized below. Table 3 shows the cosmological parameters corresponding to the formulation in the dilatonic frame as given by Eqs. (2.6) for the equilibrium points in the invariant set λ = 0 for arbitrary potentials. Exists for s_c ≤ −1. This line of equilibrium points contains the cases s_c = ŝ : F(ŝ) = 0, for which the eigenvalues simplify to 0, 0, −3; in other cases the stable manifold is lower-dimensional.
The equilibrium point is a saddle and has a four dimensional unstable manifold provided ŝ > −2, F′(ŝ) > 0; otherwise the unstable manifold is lower-dimensional.
Exists for The eigenvalues are It is a saddle.
The eigenvalues are It is a saddle.
The points P_1 – P_9 were found in the previous three examples, for which s is a constant such that h is only a function of λ that depends on the choice of W, and F is identically zero (that is, the problem can be reduced to one dimension). When F(s) is not trivial, the above classification can be implemented straightforwardly, as for the specific choices of F in Table 1. The search for the equilibrium points with λ ≠ 0 is not an easy task, and success depends crucially on the choice of h(s, λ). Indeed, for a given h, there are equilibrium points on the surface x − 2λz(h(s, λ) − 1) = 0. On this surface the existence conditions of an equilibrium point are (x, y, z, λ, s) : y ≥ 0, z ≥ 0, λ ≥ 0, 2ω_0 + 3 ≠ 0.
Discussion and Conclusions
In this work the Brans-Dicke action has been considered in the cosmological scenario of an FLRW spacetime with zero spatial curvature, while a minimally coupled scalar field was considered as the matter source. We show that this action in the Einstein frame provides the dilatonic action integral and is equivalent to the σ-models. The method of group invariant transformations, i.e., symmetries of differential equations, was applied in order to constrain the free functions of the theory and determine conservation laws for the gravitational field equations. We found that for a family of potentials there exists a Noetherian conservation law. From the admitted symmetries we derived the zero-order invariants, and we derived specific solutions for the field equations which correspond to matter-like dominated eras. Additionally, we have studied the stability of the equilibrium points of the dynamical system for specific and for arbitrary potentials.
For the first model, corresponding to the formulation in the dilatonic frame as given by Eqs. (2.6) for W(ψ) = W_0 e^{kψ} and V(ϕ) = V_0 e^{lϕ}, we have obtained the following main results. The equilibrium point P_1 corresponds to a solution with w_tot = −1. We have proved that its center manifold is stable for 1 < l ≤ (6ω_0 + 25)/16. We show that this solution is an attractor in the dilatonic frame, but it is an intermediate accelerated solution a ≃ e^{At^p}, p := 2/(2+l), 32/(57+6ω_0) < p < 2/3, as t → ∞, and not a de Sitter solution. The exponent p reduces, in a particular case, to the exponent already found for the Jordan and Einstein frames in [26]. We have obtained some equilibrium points, P_2, P_3 and P_4, that represent stiff solutions and are saddles. The equilibrium point P_5 satisfies w_tot = −1; it is a sink for l > −1, ω_0 < −3/2, and a saddle otherwise. The equilibrium point P_6 corresponds to a solution where the energy density of the dilatonic field is dominant and the energy density of the quintessence field is negligible. According to whether w_tot := 2(l^2 − 1)/(3(l + 2ω_0 + 2)) − 1 satisfies w_tot < −1, w_tot = −1 or −1 < w_tot < −1/3, we have found that it represents a phantom solution, a solution with w_tot = −1, or a quintessence solution. It is a sink for l < −1, ω_0 < −3/2, and a saddle otherwise. Other equilibrium points, such as P_8, mimic a standard dark matter source with 0 < w_tot < 1; P_8 is a saddle. The equilibrium point P_9 corresponds to a solution where the energy density of the dilatonic field is dominant and the energy density of the quintessence field is negligible. Furthermore, w_1 = (3ω_0 + √(6ω_0 + 9) + 3)/(3ω_0), w_2 = −1, and w_tot = (3ω_0 + √(6ω_0 + 9) + 3)/(3ω_0). That is, the second fluid behaves as a cosmological constant, whereas the effective equation of state (of the total cosmic budget) is that of a quintessence field for −9/8 < ω_0 < −5/6, a cosmological constant for ω_0 = −5/6, and a phantom field for −5/6 < ω_0 < 0. It is a sink for −2 < l ≤ −1, (l − 4)(l + 2)/6 < ω_0 < 0, or l > −1, −5/6 < ω_0 < 0 (in both cases it is a phantom attractor). It is a source for l ≤ −2, ω_0 > (l − 4)(l + 2)/6, or l > −2, ω_0 > 0 (and it then behaves as a standard matter source). It is a saddle otherwise. Finally, the equilibrium point P_10 is dominated by the quintessence field, and the contribution of the dilatonic field to the total energy density is negligible. It satisfies w_2 = 1/3 and w_tot = 1/3, which means that the corresponding cosmological solution mimics radiation. Interestingly, it is a saddle with a three dimensional stable manifold provided ω_0 ≥ 45/2. These results illustrate the capabilities of the model. For the second model, we have V(ϕ) = V_0 e^{(β−1)ϕ}, W(ψ) = W_0 ψ^{2β}. The particular parameters were chosen to lead to Noether point-like symmetries. For this model we have the same results as for the first nine equilibrium points in Table 2, in Section 5.1, discussed above, by replacing l = β − 1, k = 2β. We have found the additional equilibrium points P_11 − P_15, whose stability and cosmological observables have to be evaluated numerically.
We recall that the points P_1 – P_9 were found in the previous examples, under the assumption that s is a constant such that h is only a function of λ that depends on the choice of W, and that F is identically zero (that is, the problem can be reduced to one dimension). When F(s) is not trivial, the above classification can be implemented straightforwardly, as for the specific choices of F in Table 1. The search for the equilibrium points with λ ≠ 0 is not an easy task, and success depends crucially on the choice of h(s, λ). For example, given h(s, λ) ≡ 1, we have the additional equilibrium points (x, y, z, λ, s) = (0, 0, 2/3, 2, s_c), where h(s_c, 2) = 1. For h(s, λ) = 1 − 1/(2β), we have the additional points P_11 – P_15 investigated in Section 4.2. A more complete study requires the specification of the free functions, and this is beyond the scope of the present research.
A possible generalization in the context of scalar-tensor theories would be of interest. In this respect, after dealing with two simple examples, we made the first steps towards a complete dynamical system analysis of dilatonic JBD cosmology keeping the potentials arbitrary, which is a major improvement since it allows for the extraction of information that is related to the foundations of the cosmological model and not to the specific potential forms. In particular, we apply an extended version of the method of f-devisers [51][52][53], in the sense that it was developed for two free functions, so that in addition to the f-deviser we have an h-deviser. Using this approach one first performs the analysis without the need of an a priori specification of the potential forms, and in the end one just substitutes the specific potential forms in the results, instead of having to repeat the whole dynamical elaboration from the start. Therefore, the results are richer and more general, revealing the full capabilities of dilatonic JBD cosmology.
"year": 2018,
"sha1": "01b80a1d94e84977041827c0a59897c280a2a7a8",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/1812.03830",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "a6c98519f290342accd15395c915030d212ef924",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
245420752 | pes2o/s2orc | v3-fos-license | GADD45B induced the enhancing of cell viability and proliferation in radiotherapy and increased the radioresistance of HONE1 cells
This study aimed to investigate the key role and mechanism of GADD45B in the radiation resistance of nasopharyngeal carcinoma (NPC) cell lines. Radiotherapy-resistant HONE1 (HONE1-R) cells with stable genetic radioresistance were cultured under continuous radiation stimulation. CCK-8 and clone formation assays were used to verify the radioresistance of the cell line. Transcriptome sequencing was used to identify the most important differential signaling pathway in the cell line. Quantitative reverse transcription polymerase chain reaction (qRT-PCR) and Western blot analysis were used to verify the sequencing results. GADD45B-siRNA was used to knock down the key gene so as to verify the downstream gene expression and analyze its mechanism. The transcriptome analysis showed that 702 genes were upregulated and 772 genes were downregulated in the HONE1-R cell lines. The core differential signaling pathway was mitogen-activated protein kinase (MAPK) signaling pathway, and the core differential gene was GADD45B. After GADD45B was knocked down, the cell viability and proliferation ability of HONE1-R cell lines significantly decreased under radiation, and the expression of cyclin B1 and p-CDK1 decreased significantly. MAPK is the core signaling pathway in radioresistance of NPC. GADD45B plays an important role by affecting cell viability and proliferation in NPC radioresistance. GADD45B is a potential target of radioresistance in NPC.
Introduction
Nasopharyngeal carcinoma (NPC) is a malignant tumor of the head and neck originating from the epithelial cells of the nasopharynx. Squamous cell carcinoma is the main pathological type of NPC. Comprehensive treatment based on radiotherapy is preferred for patients with NPC due to the complexity of the anatomical location and the high sensitivity to radiation [1]. Early-stage NPC can be cured by radiotherapy, and the 5 year survival is up to 90% [2]. Nevertheless, more than 70% of patients with NPC have middle- or advanced-stage disease at initial diagnosis. The 5 year survival of patients with Stage III and Stage IV NPC is only 53-81.8% and 28-66.39%, respectively [3,4]. Local recurrence and distant metastases of NPC caused by radiotherapy resistance are the major obstacles in clinical treatment, despite some progress in the current therapeutic strategies [5,6]. Secondary radiotherapy is an option for radioresistant or recurrent NPC, but it easily leads to severe complications and yields limited benefits [7]. Therefore, clarifying the molecular mechanism underlying the radiotherapy resistance of NPC and enhancing radiotherapy sensitivity are important for improving clinical outcomes.
The transcriptome is a set of all RNA transcripts. Transcriptome analysis of a specific tissue or cell at a specific stage or in a functional state from an overall perspective can clarify the molecular mechanism of diseases [8]. This analysis has been applied to the research on NPC, achieving a certain outcome. Zhou et al. [9] identified 420 differentially expressed long noncoding RNAs and 31 circular RNAs in the transcriptome information of NPC and predicted their targets. Chang et al. [10] analyzed the transcriptome data of NPC tissues. They found that SSX2IP was significantly upregulated in NPC tissues, which was closely linked to the invasive ability of NPC cells and predicted a poor prognosis. However, the transcriptome analysis of radiotherapy-resistant NPC cells has been rarely reported.
In this study, a stable radiotherapy-resistant NPC cell line, HONE1-R, was first established, and the transcriptome sequencing data of the HONE1-R and radiosensitive cell lines were compared. The study then focused on the key NPC-related gene GADD45B and its associated signaling pathway. The mechanism by which GADD45B confers radioresistance on the HONE1-R cell line was further explored by knocking down the expression of GADD45B.
Cell culture
The NPC cell line HONE1 was purchased from the Query Network for Microbial Species of China (http://www.biobw.org/; catalog number: bio-105809). The cells were cultivated in Dulbecco's modified Eagle medium (DMEM) containing 10% fetal bovine serum (FBS) and 1% penicillin-streptomycin in a humidified atmosphere of 5% CO2 at 37°C. Adherent cells with more than 80% confluence were passaged using trypsin.
Generation of the HONE1-R cell line
HONE1 cells in the logarithmic growth phase were digested into a single-cell suspension and implanted in a six-well plate (4 × 10^4 cells per well). After adherence, the cells were induced with 8 Gy irradiation. Surviving subclonal cells were re-irradiated with 6 Gy, and the live cells were cultured and passaged, for a total of three rounds. Finally, the surviving cells were induced with 8 Gy irradiation, and the live cells constituted the stable HONE1-R line. The HONE1-R cells were subcultured for more than five generations and remained stably inherited.
CCK-8 assay
HONE1 and HONE1-R cells in the logarithmic growth phase were used to prepare a cell suspension (2 × 10^3 cells/100 µL), which was implanted in a 96-well plate. After cell adherence, 6 Gy irradiation was applied. In each well, 10 µL of CCK-8 solution (Apexbio, USA) was added at the indicated time points. After 1 h, the optical density at 450 nm was measured using a microplate reader to calculate the proliferation rate.
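The viability calculation from the OD450 readings can be sketched as follows, assuming the common blank-corrected normalization to untreated controls (the exact formula is not stated in the text, and the readings below are hypothetical):

```python
import numpy as np

def viability_percent(od_treated, od_control, od_blank):
    """Blank-corrected viability relative to untreated controls, in percent."""
    od_treated = np.asarray(od_treated, dtype=float)
    return 100.0 * (od_treated - od_blank) / (od_control - od_blank)

# Hypothetical triplicate OD450 readings, for illustration only.
print(viability_percent([0.82, 0.79, 0.85], od_control=1.20, od_blank=0.08))
```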
Colony formation assay
The HONE1 and HONE1-R cells were implanted in a six-well plate (300 cells per well). After cell adherence, the cells were irradiated with 6 Gy X-ray radiation and cultured until the formation of visible colonies. They were fixed in methanol for 15 min, washed with phosphate-buffered saline, and stained with 1% crystal violet for 20 min. Visible colonies containing more than 50 cells were captured and counted.
Construction of a specific cDNA library and high-throughput sequencing
Cellular RNAs were isolated using TRIzol and subjected to agarose gel electrophoresis and purification. The mRNA was enriched using oligo(dT)-coated resins and fragmented into short segments by adding the fragmentation buffer. Using the mRNA as a template, the first strand of cDNA was synthesized with Random Primer 6; the second strand of cDNA was then synthesized by adding the loading buffer, deoxyribonucleotide triphosphates, DNA polymerase I, and RNase H. Double-stranded cDNA was purified using AMPure XP beads, followed by the addition of a poly-A tail and ligation with DNA ligase. Subsequently, optimally sized segments were selected using AMPure XP beads. The second-strand cDNA containing U was degraded by the USER enzyme to ensure that all sequencing information came from the first-strand cDNA. Finally, the cDNA was amplified by polymerase chain reaction (PCR), and the purified PCR products were used to generate a cDNA library. The cDNA library was first quantified using Qubit 3.0, and its insert size was checked using Qsep100. The qualified library was sequenced on the Illumina HiSeq 2000/MiSeq platform, and the obtained clean reads were aligned against a ribosomal RNA database to remove ribosomal sequences.
Analysis of differentially expressed genes
The posterior distributions (Z) of reads per million mapped reads, calculated by the Bayesian method, were analyzed using generalized fold change (GFOLD), and the GFOLD(Z) value was computed. A gene was considered differentially expressed when its GFOLD(Z) value was >0 (upregulated) or <0 (downregulated). The fold change was also calculated: a gene with log2FC ≥ 1 or ≤ −1 was considered differentially expressed, and a gene with a false discovery rate (FDR) of ≤0.05 was considered significantly differentially expressed. The functional annotation chart was produced using DAVID (https://david.ncifcrf.gov/).
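The thresholding rule stated above can be expressed directly in code; the table below uses hypothetical gene names and values purely to illustrate the |log2FC| ≥ 1 and FDR ≤ 0.05 filters, not the actual GFOLD output:

```python
import pandas as pd

# Hypothetical result table with the two columns used by the filters.
df = pd.DataFrame({
    "gene":   ["gene_A", "gene_B", "gene_C", "gene_D"],
    "log2FC": [2.4, 1.3, -0.4, -1.1],
    "FDR":    [0.001, 0.30, 0.40, 0.02],
})

deg = df[df["log2FC"].abs() >= 1]      # differentially expressed genes
sig_deg = deg[deg["FDR"] <= 0.05]      # significantly differentially expressed
print(sig_deg)
```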
Bioinformatics analysis
The gene ontology (GO) terms classified gene functions into molecular functions (MF), biological processes (BP), and cellular components (CC). GO annotations were obtained from the database. A P value of <0.05 was considered the cut-off, and statistically significant annotations were analyzed using the clusterProfiler package. Target genes in different pathways were analyzed using the Kyoto Encyclopedia of Genes and Genomes (KEGG) with the cut-off at P < 0.05, and they were later assessed using the clusterProfiler package.
The signaling pathway networks were depicted using Cytoscape 3.4.0 software (Institute for Systems Biology). Each pathway network was depicted based on the pathway terms, and the networks with P < 0.05 were analyzed using KEGG.
qRT-PCR
Ten differentially expressed genes were subjected to qRT-PCR. The primer sequences are listed in Table 1. Reverse transcription was conducted using the Prime-Script RT Reagent Kit with gDNA Eraser (Takara Biotechnology Co., Ltd); the conditions were as follows: at 42°C for 2 min with gDNA eraser and buffer 1 and then at 37°C for 15 min and 85°C for 5 s with enzyme mix, RT primer, and buffer 2. qPCR (Takara Biotechnology Co., Ltd) was performed at 95°C for 10 min, and then at 95°C for 15 s, 60°C for 30 s, and 72°C for 40 s for 40 cycles, with a final step at 72°C for 5 min. The reactions were set up in 96-well Microseal PCR plates (Bio-Rad Laboratories, Inc.) in triplicate.
TCGA database analysis
The GADD45B expression level in cancer and paracancerous tissues was retrieved from the TCGA (The Cancer Genome Atlas) database; the tumor type was head and neck squamous cell carcinoma. After downloading all the relevant data, 43 pairs of matched tumor and adjacent-tissue samples were obtained. The log2(count + 1) transformation was used to analyze gene expression in these samples.
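A minimal sketch of the paired tumor-versus-normal comparison with the stated log2(count + 1) transform is given below. The use of a paired Wilcoxon test is an assumption (the text does not name the test), and the counts are random placeholders rather than TCGA data:

```python
import numpy as np
from scipy.stats import wilcoxon

rng = np.random.default_rng(0)
tumor_counts  = rng.integers(10, 5000, size=43)   # 43 matched pairs, as in the text
normal_counts = rng.integers(10, 5000, size=43)

# Stated transformation: log2(count + 1).
tumor_expr  = np.log2(tumor_counts + 1)
normal_expr = np.log2(normal_counts + 1)

stat, p = wilcoxon(tumor_expr, normal_expr)       # paired non-parametric test
print(f"paired Wilcoxon: statistic={stat:.1f}, p={p:.3g}")
```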
Cell transfection
The si-GADD45B and negative control (si-NC) oligonucleotide sequences used in this study were commercially synthesized. The sequences were as follows: si-GADD45B-1: sense: r(CGUUCUGCUGCGACAAUGA)dTdT; antisense: r(UCAUUGUCGCAGCAGAACG)dAdT; si-GADD45B-2: sense: CGACAACGCGGUUCAGAAGUU; antisense: 5′-CUUCUGAACCGCGUUGUCGUU-3′. The cells were grown to more than 80% confluence and then switched to a serum-free medium for 4 h. The transfection mixture, gently mixed with polyethyleneimine and incubated at room temperature for 40 min, was then applied. Subsequently, the medium was replaced with Opti-MEM I reduced serum medium for 24 h of culture, followed by conventional culture in DMEM containing 10% FBS and 1% penicillin and streptomycin.
Statistical analysis
Statistical analyses were performed using GraphPad Prism 8. Data were expressed as mean ± standard deviation (mean ± SD). All data conformed to normal distribution. Comparisons among multiple groups were analyzed using one-way analysis of variance, followed by Tukey's or Bonferroni's post hoc test. A P value of <0.05 indicated a statistically significant difference. All experiments were performed in triplicate.
Verification of the radioresistance of the HONE1-R cell line
The results showed that the viability of HONE1 cells decreased by 82.64%, whereas the viability of HONE1-R cells decreased by only 28.09% (Figure 1a). The colony formation number of HONE1 cells decreased by 79.9%, and that of HONE1-R cells decreased by 32.1% (Figure 1b). The 50% inhibition dose for cell viability was 1.70 Gy for HONE1 and 6.80 Gy for HONE1-R (Figure 2a), and the 50% inhibition dose for colony formation was 2.09 Gy and 7.61 Gy, respectively (Figure 2b). These findings suggested that the HONE1-R cell line had strong, stably heritable radioresistance.
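A 50% inhibition dose like those quoted above can be read off a dose-response curve by interpolation. The sketch below illustrates the idea with invented dose/viability pairs; the actual doses used appear in Figure 2 of the paper.

```python
import numpy as np

# Invented dose-response data for illustration only.
doses = np.array([0.0, 2.0, 4.0, 6.0, 8.0])           # Gy
viability = np.array([1.00, 0.80, 0.62, 0.55, 0.42])  # surviving fraction

# np.interp needs increasing x, so interpolate dose as a function of
# the inhibition fraction (1 - viability), which increases with dose.
inhibition = 1.0 - viability
id50 = np.interp(0.5, inhibition, doses)
print(f"estimated ID50 = {id50:.2f} Gy")
```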
Mitogen-activated protein kinase (MAPK) was the most important signaling pathway in HONE1-R radiotherapy resistance
Transcriptome sequencing of the HONE1 and HONE1-R cell lines showed that 702 genes were upregulated and 772 genes were downregulated in the HONE1-R cell line (Figure 3a). GO analysis showed that the functions of the differentially expressed genes were mainly concentrated in protein heterodimerization activity (MF), cell-cell junction (CC), proteinaceous extracellular matrix (CC), and regulation of myeloid cell differentiation (BP) (Figure 3b). KEGG analysis identified 18 differential signaling pathways; the most significant included systemic lupus erythematosus, alcoholism, the interleukin 17 signaling pathway, apoptosis, the tumor necrosis factor (TNF) signaling pathway, and the MAPK signaling pathway (Figure 3c). The interaction network of the differential signaling pathways is shown in Figure 3d. The MAPK signaling pathway was the most central node of the network; the apoptosis and TNF signaling pathways were also important.
GADD45B might be a key gene in the HONE1-R radiotherapy resistance
Ten genes differentially expressed between HONE1-R and HONE1 cells were selected from the MAPK, apoptosis, and TNF pathways to verify the sequencing results; validation was performed using qRT-PCR and Western blot analysis. Among these 10 genes, IKBKG, JUN, and AKT3 were significantly changed in the MAPK, apoptosis, and TNF signaling pathways; the other six genes were key genes in these three pathways. The qRT-PCR results were consistent with the sequencing data, and the protein levels showed the same trend (Figure 4a and b). GADD45B mRNA and protein expression levels were significantly upregulated, suggesting that GADD45B might be the key gene in HONE1-R radiation resistance.
GADD45B effectively improved the activity and proliferation of HONE1-R cells under radiation
The GADD45B expression level was significantly downregulated in head and neck squamous cell carcinoma (Figure 5a). After transfection with GADD45B-siRNA1 and GADD45B-siRNA2, the expression level of GADD45B in HONE1-R cells was significantly downregulated, indicating efficient knockdown (Figure 5b). Subsequently, HONE1-R cells transfected with si-NC or GADD45B-siRNA1 were exposed to 6 Gy radiation, and cell viability was measured. The viability of HONE1-R cells was significantly inhibited after GADD45B knockdown (Figure 5c). The colony formation assay showed that HONE1-R cells transfected with GADD45B-siRNA1 also had weaker proliferation ability and radioresistance (Figure 5d). Subsequently, the downstream targets of GADD45B were examined.
The results showed that after exposure to 6 Gy irradiation, the levels of cyclin B1 and p-CDK1 in HONE1-R cells transfected with GADD45B-siRNA1 decreased markedly compared with those in cells transfected with si-NC (Figure 6a and b). These results suggested that GADD45B effectively improves the viability and proliferation of HONE1-R cells under radiation, thereby significantly enhancing the radioresistance of the HONE1-R cell line.
Discussion
Currently, a comprehensive strategy based on radiotherapy is the most common treatment option for patients with NPC [1]. Unfortunately, the development of radiotherapy resistance leads to treatment failure and poor prognosis owing to local recurrence or distant metastases [6]. Hence, the genes and signaling pathways involved in the radiotherapy resistance of NPC were analyzed in this study with the aim of improving therapeutic efficacy and prognosis. By generating the HONE1-R cell line and analyzing the sequencing data from HONE1-R and HONE1 cells, the radiotherapy-resistance genes in HONE1-R cells were found to be mainly enriched in the MAPK, apoptosis, and TNF signaling pathways; MAPK was the most significantly different signaling pathway between the HONE1-R and HONE1 cell lines, and GADD45B might be an important gene in HONE1-R radiotherapy resistance. MAPKs are serine/threonine protein kinases widely present in cells. An abnormally activated MAPK signaling pathway causes the loss of differentiation and apoptosis capacities, uncontrolled proliferation, and the development of drug resistance in cancer cells [11]. Williams et al. [12] demonstrated that the MAPK inhibitor PD0325901 significantly enhanced the radiotherapy sensitivity of pancreatic cancer cells, and that combined inhibition of MAPK and protein kinase B further enhanced this sensitivity. Sun et al. [13] suggested that the overexpression of TOB1 elevated the radiotherapy sensitivity of breast cancer and cervical cancer cells by activating the MAPK/extracellular signal-regulated kinase signaling pathway and regulating p53 phosphorylation. He et al. [14] generated ATM-deficient bladder cancer cells that were highly sensitive to irradiation, in which the MAPK and nuclear factor kappa-B (NF-κB) signaling pathways were significantly inactivated. Yu et al. [15] conducted high-throughput sequencing of irradiated HeLa and control cells; MAPK signaling, endocytosis, axon guidance, neurotrophin signaling, and soluble N-ethylmaleimide-sensitive factor attachment protein receptor (SNARE) interactions in vesicle trafficking were found to differ significantly between these cells. Cook et al. [16] showed that COX-2-derived PGE2 induces Id1 via EP4-dependent activation of MAPK signaling and the Egr1 transcription factor in glioblastoma cells; PGE2-mediated induction of Id1 was required for optimal tumor cell self-renewal and ultimately mediated radiation resistance in patients with glioblastoma. These previous findings suggest that the MAPK signaling pathway plays an important role in tumor radiation tolerance. The KEGG and signaling pathway network analyses showed that the MAPK signaling pathway was the most significant and most central signaling pathway distinguishing the HONE1-R and HONE1 cell lines. The MAPK signaling pathway was abnormally activated in the HONE1-R cell line: 18 of its genes were upregulated and 14 were downregulated. Sequencing, qRT-PCR, and Western blot analysis confirmed that GADD45B, AKT3, JUN, and FOS were significantly upregulated and IKBKG, MAP4K2, and MAPKAPK3 were significantly downregulated in the MAPK signaling pathway, indicating their important role in the radiation resistance of the HONE1 cell line.
The GADD45 family includes GADD45α, GADD45β, and GADD45γ, which respond to stress signals and regulate the cell cycle, proliferation, differentiation, survival, senescence, and apoptosis [17,18]. GADD45B is widely distributed in various mammalian tissues. The expression level of GADD45B is low in the normal physiological state, but it increases significantly under environmental stress or injury stimulation [19]; GADD45B has different expression and regulation mechanisms in different cell states and plays different, or even opposite, roles [20]. Previous studies confirmed that the regulatory effect of GADD45B on the cell cycle depends mainly on the MAPK signaling pathway: GADD45B can directly bind to MEKK4 and MKK7 in the MAPK signaling pathway and inhibit JNK/p38 signaling [21]. For example, Cho et al. [22] showed that the expression of GADD45B increased under hypertonic conditions and mediated G2/M-phase arrest of the cell cycle, which promoted DNA repair and cell proliferation. Yu et al. [23] confirmed that the activation of NF-κB also increased the expression of GADD45B, inhibited the function of GADD45α, and increased the degradation of p53, thereby inhibiting apoptosis; this suggests that the balance of GADD45α/GADD45B might determine the survival or apoptosis of cells. Recent studies have shown that GADD45B is crucial in tumor radiotherapy. Barros-Filho et al. [24] found that increased expression of GADD45B was an important marker of shortened disease-free survival in patients with papillary thyroid cancer after total thyroidectomy and radioiodine therapy. Vairapandi et al. [25] studied lung cancer cell lines and showed that GADD45B caused G2/M cell cycle arrest by acting on CDK1/cyclin B1; moreover, GADD45B could activate the G2/M checkpoint in cooperation with GADD45α and GADD45γ after the cells were exposed to ultraviolet radiation. Inowa et al. [26] studied stem cell-enriched side population (SP) cells in tumors: the inhibition of GADD45 significantly reduced the viability and invasiveness of NEC8 SP cells, suggesting that high expression of GADD45B promotes the proliferation and migration of SP cells, which might determine their stem cell-like phenotype and have an important impact on their drug resistance. In addition, Cheng et al. [27] reported that GADD45B could induce the downregulation of cyclin B1 and the dissociation of CDC2/cyclin B1, thereby inhibiting cell proliferation and inducing mitotic delay in cancer cells. GADD45B inhibited the viability and proliferation of HONE1-R cells in the normal physiological state [28]; our present study showed that upregulated GADD45B effectively increased the viability and proliferation of HONE1-R cells after radiation exposure and enhanced their resistance to radiotherapy.
Conclusion
In conclusion, a stable HONE1-R cell line with heritable radioresistance was established. Transcriptome sequencing analysis showed that the MAPK signaling pathway was the most important signaling pathway in the radiation resistance of HONE1-R cells. Particular attention should be paid to the key gene GADD45B: it inhibits the viability and proliferation of HONE1-R cells under normal physiological conditions but effectively enhances their viability and proliferation under radiation stimulation, increasing the radioresistance of HONE1-R cells. GADD45B may therefore become a new target for overcoming radiotherapy resistance.
Conflict of interest:
The authors declare no conflicts of interest related to this study.
Ethical approval: The conducted research is not related to either human or animal use.
Data availability statement: The authors declared that data and material in the manuscript are available. | 2021-12-23T16:16:57.816Z | 2021-01-01T00:00:00.000 | {
"year": 2021,
"sha1": "ac9ba9dcae196bb29e4a6f2199d7c052ef2e439f",
"oa_license": "CCBY",
"oa_url": "https://www.degruyter.com/document/doi/10.1515/chem-2021-0105/pdf",
"oa_status": "GOLD",
"pdf_src": "Adhoc",
"pdf_hash": "b4f654c10faae69d8999309b640670a95427d352",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": []
} |
203599039 | pes2o/s2orc | v3-fos-license | PEST analysis of wind energy in the world: From the worldwide boom to the emergent in Colombia
This article presents an analysis of the political, economic, social, and technological aspects of the countries with the greatest development and generation of wind energy in the world, with the aim of studying their successes and setbacks over the years and serving as a reference for a country like Colombia, which is in full technological development and is very interested in investing in alternatives to the imminent ecological damage resulting from polluting emissions. Colombia signed the Paris Agreement, which immediately created a large number of opportunities supported by state policies and economic investment, specifically in the construction of wind farms that offer a real alternative for meeting the country's energy demand. Wind energy supplies 5% of the world's electricity, a contribution to which countries such as the United States, China, and Germany are the main producers. For this reason, it is necessary to study and analyze the factors that hinder the technological development of wind energy in Colombia, given that it is a country with abundant renewable energy resources at zero fuel cost, and one that aspires to be a protagonist in the next 5 years according to the Ministry of Mines and Energy.
Introduction
Environmental problems and the increase in the global temperature of the planet are the main consequences of the emissions of CO2 and other pollutants generated by fossil fuels around the world [1,2]. This has caused collective concern due to climate change that is evident in different parts of the world, seriously affecting crops, vegetation, fauna and the balance of ecosystems [3,4]. In recent years, all efforts have been directed towards the protection and care of the environment by all organizations in the world, regardless of the area or scope of work [5]; this intention is reflected in the treaties and agreements signed throughout the world that allow for the regulation of the use of pollutants and the amount of emissions that seriously affect the environment (see Figure 1) [6].
To achieve this change requires commitment, dedication, and innovation from all entities and countries [8], with the understanding that the planet's priority is the reduction of pollutants; therefore, the possibility of creating innovative technologies and methods for clean systems that provide a real alternative to the problems of climate change, such as wind energy, has been studied. Wind energy is a renewable energy source that uses the kinetic energy of the wind to generate electricity; it currently accounts for 32% of total renewable energy generation and supplies 5% of the world's electricity [9,10]. It is important to recognize that renewable energy sources currently generate 25% of electric power, as can be seen in Figure 2. Wind energy is clearly a high-potential alternative that over the years has provided electricity to meet the world's energy demand [11], given that the traditional energy sources responsible for generating electricity are among the causes of the increase in polluting gas emissions. The pioneering countries in wind power generation are the United States, China, and Germany, with total installed capacities of 89 GW, 188 GW, and 56 GW, respectively; together they provide 61.7% of the world's wind power (source: WWEA) [12], demonstrating to the public the potential of wind power and its ability to substitute for conventional energy sources based on fossil fuels [13]. Environmental issues have concerned the leaders of many countries, but above all, society [14]. Therefore, public acceptance of this technology is crucial for its successful introduction into society [15]. For this reason, research on this type of energy has been conducted for many years and has been of great importance for analyzing the viability of projects that involve high investments and technologies unfamiliar to a given country [16,17]. It is necessary to use tools that allow society to visualize the current state of renewable energies in the world, specifically wind energy, and to compare our situation against countries that, through successes and mistakes, have found solutions in the development of new technologies [18].
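The 61.7% figure can be checked with simple arithmetic, assuming a world total of roughly 540 GW of installed wind capacity; the article later quotes "more than 500 GW", so the exact total used here is an assumption.

```python
# Combined share of world wind capacity held by the three pioneers.
us, china, germany = 89.0, 188.0, 56.0  # GW, per the WWEA figures cited
world_total = 540.0                     # GW (assumed)

share = (us + china + germany) / world_total * 100
print(f"combined share = {share:.1f}%")  # -> 61.7%
```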
The tool presented in this article is a PEST analysis, a tool that improves decision making on an issue by considering the political, economic, social, and technological factors (as its acronym indicates) that surround it [19]. Based on the above, it is possible to affirm that it is not enough to analyze the factors that directly depend on the subject under study; it is also necessary to know the state of the environment and what impact it may have on the creation, development, and use of wind energy [20]. China, the United States, and Germany are protagonists because they generate the most wind energy on the planet, although ironically they are countries with a high degree of pollution; this causes uncertainty and concern in society, because emissions are not being reduced considerably while the world's electricity demand increases. This has made it useful to analyze the factors that are most unfavorable to the growth and development of wind energy [21]. In this sense, this article seeks to establish a comparison of the current status of Colombia relative to the main producers of this energy, in order to assess the existing gap and promote development in Colombia, since the study of renewable energies is currently a priority research area in engineering [22].
Related work
In 2013, Kolios et al. [23] presented an analysis of tidal energy in the United Kingdom using the PESTLE approach (political, economic, social, technological, legal, and environmental); the study highlighted certain potential risks in the political and economic spheres that would create problems for the future of the project, particularly regarding the technology used in generation and other factors to be taken into account. The authors pointed out that the PESTLE analysis tool makes it possible to visualize the main risks concisely and facilitated decision making regarding the course of the tidal energy implementation project. A few years later, Igliński et al. [24] carried out a PEST analysis of the renewable energy sector both in the voivodship of Łódzkie, known as a renewable energy generating area, and in the Polish town of Leśmierz; the study sought to compare the current state of renewable energy in the two locations and to analyze the factors that make the optimal development of energy generation less favorable in one location than in another.
Review of concepts
This section presents the fundamentals of the PEST analysis, set against the overall position of wind energy in the world, its incidence, and the characteristics that have led to its development and implementation. It is also necessary to identify the technology used for electricity generation and then analyze its economic impact on the countries under study.
Global context of wind energy.
Grid-connected systems are usually located in wind farms, where considerable amounts of electrical energy are generated and distributed throughout the electrical network [23]. As shown in Figure 3, the vast majority of wind farms are onshore. Stand-alone systems, on the other hand, satisfy the demand of remote sites that conventional energy sources do not reach; these systems require backup storage that holds energy for periods when the wind calms and electricity generation decreases [24].
Wind energy technology.
Wind energy is one of the renewable energies with the highest production today thanks to the capacity installed worldwide; in 2017, installed wind power exceeded 500 GW globally according to the European Renewable Energy Observatory.
Different wind machines are responsible for transforming the kinetic energy of the wind into mechanical energy; these are classified in various ways, according to the position of the rotation axis, the number of blades, the electric generator used, and the control method, among others. Since the purpose of this article is energy generation to meet the world's electricity demand, the wind systems used for electricity generation are defined here. These systems can be applied in micro-grid electric power systems, which are independent (isolated) systems with generators and batteries [23], or connected to the grid, as shown in Figure 4. They consist of low-complexity generation technology, such as wind turbines connected to electricity distribution networks.
PEST analysis of wind energy
Wind power is one of the fastest-growing forms of electricity generation in the United States, holding the largest share of renewable electricity generation capacity in the country according to the American Wind Energy Association [26]. The most favorable aspect is the political factor, owing to government incentives for renewable energy and access to the electricity grid. In addition, tax credits and sales and production incentives play an important role in the promotion of wind energy.
In addition, China ranked first in 2016 in terms of new wind turbine installations, far above all other countries; it installed more than 50% of what all the other countries installed combined [27]. This accelerated growth was due to government policies and the tariffs granted to entities that invested in the wind energy sector. The most unfavorable aspect is the technological one: regulators have tried to manage the pace of construction to give grids more time to expand transmission capacity, but 12% of the total wind power generated was wasted last year, according to official figures.
Germany is the country with the highest wind power generation in the European Union, although, like the other member states, it must urgently strengthen its efforts to deploy wind energy as part of a comprehensive renewable energy strategy and develop a roadmap toward a near future of 100% renewable energy. It currently holds 42% of the installed capacity in Europe (6.58 GW), and in 2017 it set the record as the European country that increased its wind energy production the most over the previous year (15% more than in 2016). The most unfavorable aspect is insufficient commitment from the government, due to political problems and cost overruns, for example the billions in maintenance costs arising from the excessive loading of the national grid. Now, without nuclear power and without reforming the electricity grid, Germany continues to burn coal in the name of renewable energy; today, the different varieties of coal still account for over 40% of the country's electricity generation and almost a third of its emissions. In Colombia, the main disadvantage is the lack of guarantees that the system provides to investors, who fear losing their money; there are no political incentives to ensure permanence in the market for potential investments. This problem stems from the fact that the country is going through a complicated period in terms of social and economic investment, which is why the World Bank has devised strategies to guarantee development and sustainability in the country. In terms of resources, the country is extremely privileged: it has high wind energy potential in La Guajira, where an average energy density of 1530 W/m² measured at a height of 50 meters is estimated. Other regions with high wind potential, such as Atlántico and San Andrés, are also considered.
The results of the PEST analysis for the four countries are compiled in Table 1. China will invest a total of 700 billion yuan (US$102 billion) in wind energy over the period 2016-2020; the share of wind energy in the overall electricity mix should increase to 6% by 2020, compared with 3.3% in 2015, according to a plan by the National Energy Administration. Germany was the largest investor in 2017: its total investment was 6,700 million euros for the construction of new onshore and offshore wind farms, accounting for 30% of all wind energy investments made that year. Colombia will invest 700 million dollars, which will allow it to guarantee the electricity supply and increase its installed capacity over the next 15 years.
Conclusions
This article presented, in an organized and concise manner, the political, economic, social, and technological situation of the countries with the greatest electricity generation from wind energy, countries that have also ranked in the top 10 in research output on the subject for many years. A detailed analysis was made of the actions and activities that have enabled, over the years, the development and integration of wind energy into these countries' electricity grids, with the intention of supplying energy demand and reducing the use of traditional energy sources that damage the environment and worsen climate change. Colombia is in the development stage, but it is an attractive place for research due to its substantial resources; it lacks a solid policy and investment on the part of the national government, and many foreign entities have been interested in investing in order to take advantage of the resources available in the country. For this reason, this article analyzes the current state of Colombia against the pioneering countries in wind power generation and identifies the factors most unfavorable to the development of this technology. The main problem for countries seeking to increase the production of this renewable energy is political: laws that cap production without encouraging the demand for clean energy. That is, it is necessary to regulate energy production from non-renewable sources to encourage growth in renewable generation; in this way, the development of wind power throughout the world will be promoted. Construction costs must be managed by government agencies to facilitate foreign investment, technological development, and the import and export of technology. Although the initial investment is high, fuel costs are zero and operating costs are low, the renewable resource is unlimited, and the technology emits no pollutants; these are sufficient reasons to adopt it as soon as possible and meet the objectives set out in the Paris Agreement.
"year": 2018,
"sha1": "69d0d9fa02317606f0b6ae6aeb3d114e5bf4663c",
"oa_license": null,
"oa_url": "https://doi.org/10.1088/1742-6596/1126/1/012019",
"oa_status": "GOLD",
"pdf_src": "IOP",
"pdf_hash": "0f231b099a0da8f03302267933298ef95b571911",
"s2fieldsofstudy": [
"Environmental Science",
"Political Science",
"Economics"
],
"extfieldsofstudy": [
"Physics"
]
} |
233348099 | pes2o/s2orc | v3-fos-license | Prevalence of Neurocognitive Impairment and Associated Factors Among People Living with HIV on Highly Active Antiretroviral Treatment, Ethiopia
Background: The burden of HIV is concentrated in Sub-Saharan Africa. HIV-associated neurocognitive impairment is common and can occur at all stages of HIV infection. It has a significant impact on patients' daily living and on adherence to highly active antiretroviral treatment (HAART). Therefore, this study aimed to determine the prevalence and associated factors of HIV-associated neurocognitive impairment among adults on HIV treatment. Methods: A total of 423 people living with HIV/AIDS were planned for inclusion in the study. A systematic random sampling technique was used to select the study participants. Binary logistic regression analysis was used to identify factors associated with HIV-associated neurocognitive impairment. Factors with a p-value of ≤0.2 on bivariate analyses were entered into multivariable logistic regression analyses, and a 95% CI with a p-value <0.05 was considered statistically significant. Variance inflation factors for continuous variables and Spearman rank correlations for categorical variables were computed; there was no multicollinearity between the candidate predictor variables. Model fitness was checked using the Hosmer and Lemeshow test (p = 0.45). Result: A total of 422 individuals on HAART were included, giving a response rate of 99.8%. The prevalence of HIV-associated neurocognitive impairment was 41% (95% CI = 36.3, 45.6). Older age, low monthly income, comorbid depression and anxiety, lack of communication about safe sexual intercourse, longer duration of HIV illness, and poor social support were statistically significant factors associated with HIV neurocognitive impairment. Conclusion: Two in five HIV patients on HAART experienced HIV-associated neurocognitive impairment. Health professionals working at HIV/TB clinics should screen HIV patients and refer them for psychiatric evaluation and treatment. Due attention should be given to HIV patients with the identified associated factors.
Background
The human immunodeficiency virus (HIV) has infected over 76 million people worldwide. About 37.9 million people were living with HIV by the end of 2018. Sub-Saharan Africa remains among the regions hardest hit by the pandemic, with nearly one in every 25 adults (4.2%) living with HIV, accounting for nearly two-thirds of global HIV cases. 1,2 Sub-Saharan Africa carries 75% of the burden of HIV while containing only twelve percent of the world's population. 3 In Ethiopia, the prevalence of HIV infection fell from 3.3% in 2000 to 0.9% in 2017. The number of HIV infections among adult Ethiopians increased by 3,748 new infections from 2016 to 2017, reaching 722,248 in 2017. The highest estimated prevalences of HIV infection in Ethiopia are found in Addis Ababa (5%) and Gambella (4%). 4 Neurocognitive impairment is a deficit in attention or concentration, memory, motor activities, and psychological functioning at the workplace. 5 Neurological deficit is a crucial cause of morbidity and disability among people living with HIV. Based on the updated research nosology, HIV-associated neurocognitive disorders are classified into asymptomatic neurocognitive impairment (ANI), HIV-associated mild neurocognitive disorder (MND), and HIV-associated dementia (HAD). 6 HIV-associated mild cognitive impairment affects up to 30% of PLWHA, whereas HIV-associated dementia is rare, affecting up to 5% of HIV-infected persons. HIV-related cognitive impairment can occur at any stage of HIV. The severity of HIV-associated cognitive impairment might be compounded by the direct impact of the virus and the CNS toxicity of ART drugs. 7 The prevalence of HIV-associated neurocognitive impairment ranges from 22% to 90%. [8][9][10][11][12][13][14] Although the problem is significant, it has rarely been studied in Ethiopia during the HAART era. Therefore, this study aimed to determine the prevalence of HIV-associated neurocognitive impairment among adults living with HIV on HAART and to identify factors associated with it.
Study Design, Period
An institution-based cross-sectional study was conducted from 01 September 2019 to 01 September 2020. The study is part of a larger project titled "the health-related impact of HIV/AIDS among people on HAART treatment in North Shoa Zone, Amhara, Ethiopia."
Study Setting
The study was conducted in selected public hospitals of North Shoa Zone that provide ART and HIV/AIDS treatment and services. Debre Berhan Town is located 695 km from Bahir Dar, the capital of the region, and 130 km from Addis Ababa, the capital of Ethiopia. There are seven public hospitals in North Shoa Zone: one comprehensive referral hospital and six primary hospitals. During the data collection period, only three primary hospitals and the referral hospital were providing comprehensive HIV care and treatment, and all four were included in the study. According to the 2017 North Shoa zonal health office report, a total of 3,406 HIV-positive people were receiving clinical care.
Population
Source population: All adult HIV-infected patients in public hospitals of North Shoa Zone, Amhara, Ethiopia.
Study population: All adult HIV-infected patients on HAART in the selected public hospitals of North Shoa Zone who were available during the study period.
Inclusion criteria: Participants who could read and write, were aged 18 years and above, and were on HAART.
Exclusion criteria: Individuals who could not respond due to the severity of their illness were excluded.
Sample Size Determination
In this study, the sample size was determined using the single population proportion formula, assuming a 50% proportion of HIV-associated neurocognitive impairment (since no prevalence studies using the Mini-Mental State Examination (MMSE) tool were available), a 95% confidence level, a 5% margin of error, and a 10% non-response rate. The final sample size was 423 patients on HAART.
Sampling Procedures
A systematic random sampling technique was used to select study participants from each of the four study health institutions. Proportional allocation was used to distribute the final sample across the study sites. The sampling interval was determined by dividing the total study population with monthly follow-up during the study period by the total sample size, and participants were interviewed at every 8th interval.
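Both calculations described above, the single population proportion sample size and the systematic sampling interval, can be reproduced with a few lines of Python; the source population of 3,406 is the zonal health office figure quoted earlier.

```python
import math

# Single population proportion formula: n = z^2 * p * (1 - p) / d^2
z, p, d = 1.96, 0.50, 0.05
n0 = (z ** 2) * p * (1 - p) / d ** 2   # ~384.16
n = math.ceil(n0 * 1.10)               # add 10% non-response -> 423

# Systematic sampling interval k = N / n
N = 3406                               # HIV-positive people in care
k = round(N / n)                       # -> every 8th patient
print(f"sample size = {n}, sampling interval k = {k}")
```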
Data Collection Tools
A semi-structured questionnaire was used to collect data on socio-demographic characteristics, medical factors, and HIV illness-related factors. Comorbid medical disorders were reviewed in the patients' chart record. Substance use, anxiety, and depression, current health status, perceived social support, and cognitive impairment variables were assessed by using the following tools as operationalized.
Substance use: Lifetime substance use was defined as use of at least one of the specified substances at any time after starting HAART, and last-month substance use was defined as use of at least one of the specified substances in the month before data collection.
Presence of anxiety and depression: Combined anxiety and depression symptoms were defined from the total PHQ-4 score and rated as normal (0-2), mild (3-5), moderate (6-8), and severe (9-12). The presence of anxiety symptoms alone was suggested by a total score of ≥3 on the first 2 questions of the PHQ-4, and depressive symptoms by a total score of ≥3 on the last 2 questions. 15 Perceived social support: This was assessed using the Oslo 3-item Social Support (OSS-3) scale, which measures the degree of social support and is widely used in Ethiopia. 16,17 In this study, the OSS-3 was scored from total points ranging from 3 to 14: "poor support" 3-8, "moderate support" 9-11, and "strong support" 12-14. 16 Current health status of PLWHA: This was assessed by asking participants to rate their current health status according to a five-category index: HIV positive with no symptoms; have symptoms but have not had to change normal daily routines; have symptoms that have required changing parts of the normal daily routine (extra rest not required during a normal day); because of symptoms, in bed or resting for less than half of waking hours; because of symptoms, in bed or resting for more than half of waking hours. 18 Neurocognitive impairment: According to the Mini-Mental State Examination (MMSE), individuals living with HIV who scored less than 25 out of the total score of 30 were considered to have a cognitive deficit. Individuals scoring less than 13, 14-19, and 21-24 were labeled as having severe, moderate, and mild neurocognitive impairment, respectively. 19
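The PHQ-4 and OSS-3 cut-offs above translate directly into simple scoring helpers. The sketch below assumes item responses are already coded numerically (PHQ-4 items 0-3; OSS-3 total 3-14) and mirrors the cut-offs as stated; the severe PHQ-4 band (9-12) follows the standard scale maximum.

```python
def phq4_flags(items):
    """items: four PHQ-4 responses (anxiety q1-q2, depression q3-q4), each 0-3."""
    anxiety = sum(items[:2]) >= 3
    depression = sum(items[2:]) >= 3
    total = sum(items)
    if total <= 2:
        severity = "normal"
    elif total <= 5:
        severity = "mild"
    elif total <= 8:
        severity = "moderate"
    else:
        severity = "severe"
    return anxiety, depression, severity

def oss3_category(total):
    """total: OSS-3 sum, range 3-14."""
    if total <= 8:
        return "poor support"
    if total <= 11:
        return "moderate support"
    return "strong support"

print(phq4_flags([2, 2, 1, 1]))  # -> (True, False, 'moderate')
print(oss3_category(10))         # -> 'moderate support'
```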
Data Collectors and Data Quality Control
The principal investigator recruited four degree-holding nurses (one from each of the four study sites) for data collection. Data collectors were trained and oriented on how to use the questionnaire, the ethical principles of confidentiality, and data management before their involvement in data collection. The questionnaire was designed and modified appropriately; including all instruments used, it was translated into the local language, Amharic, so that it could be understood by all participants, and then translated back into English (participants were interviewed using the Amharic version). A pre-test was done on 5% of the sample size at a separate primary health care facility before the start of actual data collection to test the simplicity and understandability of the questionnaire; based on the findings, the questionnaire was revised accordingly and the time needed for an interview was estimated. The data collectors were supervised routinely by assigned supervisors, and the completed questionnaires were checked daily by the principal investigator and assistant investigators for completeness and consistency.
Data Processing and Analysis
After the data were checked for completeness and consistency, they were coded and entered into EpiData 3.1 and then exported to SPSS version 20 for analysis.
In the descriptive statistics, tables and frequencies/percentages were used to present the information. Bivariate and multivariable binary logistic regression analyses were conducted to identify factors associated with HIV-associated neurocognitive deficit. Only factors with a p-value of ≤0.2 on bivariate analyses were entered into the multivariable logistic regression analyses, and a 95% CI with a p-value <0.05 was considered statistically significant. The standard method of entry (enter method) was used to select variables. Collinearity diagnostics for continuous variables and Spearman rank correlations for categorical variables were performed; there was no multicollinearity or significant correlation between predictor variables. Model fitness was checked using the Hosmer and Lemeshow test (p = 0.45).
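A hedged sketch of this two-stage strategy is shown below using Python's statsmodels rather than SPSS: candidate predictors are screened in bivariate models (p ≤ 0.2), and the survivors are fitted together in one multivariable logistic model, yielding adjusted odds ratios with 95% CIs. The variable names and the simulated dataset are hypothetical stand-ins for the study data.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Simulated stand-in dataset (n = 422, as in the study); all values invented.
rng = np.random.default_rng(1)
n = 422
df = pd.DataFrame({
    "age": rng.normal(45, 12, n),
    "low_income": rng.integers(0, 2, n),
    "depression_anxiety": rng.integers(0, 2, n),
    "illness_years": rng.uniform(1, 15, n),
    "poor_support": rng.integers(0, 2, n),
})
p_true = 1 / (1 + np.exp(-(-4 + 0.05 * df["age"] + 1.5 * df["depression_anxiety"])))
df["nci"] = rng.binomial(1, p_true)  # outcome: neurocognitive impairment

def bivariate_p(var):
    """p-value of a predictor in a one-variable logistic model."""
    fit = sm.Logit(df["nci"], sm.add_constant(df[[var]])).fit(disp=0)
    return fit.pvalues[var]

candidates = ["age", "low_income", "depression_anxiety",
              "illness_years", "poor_support"]
keep = [v for v in candidates if bivariate_p(v) <= 0.2]

# Multivariable model (enter method: all screened variables entered at once)
model = sm.Logit(df["nci"], sm.add_constant(df[keep])).fit(disp=0)
aor = np.exp(model.params).rename("AOR")
ci = np.exp(model.conf_int()).rename(columns={0: "2.5%", 1: "97.5%"})
print(pd.concat([aor, ci], axis=1))
```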
Ethical Considerations
Helsinki declaration for medical research involving human subjects was followed.
Associated Factors of Neurocognitive Impairment
Bivariate and multivariable binary logistic regression analyses were performed to identify statistically significant factors associated with HIV-associated neurocognitive impairment among people on HAART. Among the sixteen factors eligible for multivariable analysis, seven variables remained significant in the final model. Among the sociodemographic characteristics, age and average family monthly income contributed to neurocognitive impairment. Each additional year of age was associated with 6% higher odds of a diagnosis of HIV-associated neurocognitive impairment, 1.06 (95% CI = 1.03, 1.08). Individuals earning less than 500 Ethiopian Birr (ETB) per month had nearly five times higher odds of neurocognitive impairment than those earning above 1500 ETB per month, 4.22 (95% CI = 2.02, 8.81). Moreover, individuals earning 501-1000 ETB per month had nearly four times higher odds than those earning above 1500 ETB, 3.93 (95% CI = 2.09, 7.36).
Medical factors also contributed to neurocognitive impairment among people on HAART. Individuals with comorbid depression and anxiety had nearly six times higher odds of neurocognitive impairment than those without this comorbidity, 5.51 (95% CI = 1.81, 16.79). Individuals who had no communication about safe sexual intercourse had almost three times higher odds of neurocognitive impairment than those who had effective communication, 2.88 (95% CI = 1.61, 5.16). Each additional year of HIV illness was associated with a 1% increase in the odds of HIV-associated neurocognitive impairment, 1.01 (95% CI = 1.001, 1.02). Individuals on HAART with poor social support had almost four times higher odds of neurocognitive deficit than those with strong social support, 3.65 (95% CI = 1.86, 7.17) (Table 3).
Discussion
The prevalence of HIV-associated neurocognitive deficit was 41% (95% CI = 36.3-45.6). HIV-associated neurocognitive impairment was associated with older age, low income, depression and anxiety, lack of communication about safe sexual intercourse, longer illness and treatment duration, and poor social support. The prevalence in the current study is in line with two studies done in Ethiopia (39.3% and 36.4%) 20,21 and one in Botswana (38%). 22 It is higher than studies done in Ethiopia (33.3%), 8 South Asia (22.7%), 9 and a systematic review and meta-analysis across Sub-Saharan African countries. 23 This discrepancy might arise from differences in study design and population; for example, the South Asian study included only the adult population on HAART. Moreover, systematic reviews and meta-analyses estimate the pooled prevalence of the outcome (neurocognitive deficit), which may affect the prevalence. The present estimate is also lower than studies done in Kenya (81.1%), 10 southern Ethiopia (67.1%), 24 Malawi (70%), 11 India (90.1%), 12 Northern Italy (47.1%), 13 and the United States (52%). 14 This variation might be due to sociodemographic differences, the tools used, and the study period; most of the studies listed here were done five or more years earlier, when treatment quality was lower, which may have increased the prevalence of HIV-associated neurocognitive impairment they reported.
The results reveal that older participants had 6% higher odds per year of age of a diagnosis of neurocognitive impairment compared with younger participants. This association is supported by studies done in Ethiopia, 20,25 Kenya, 10 and southern Asia. 9 A possible explanation is that HIV-associated neurocognitive impairment may be compounded by the effects of brain aging.
Individuals who earned less than 500 Ethiopian Birr (ETB) per month had nearly five times higher odds of neurocognitive impairment than those who earned above 1500 ETB per month, and individuals who earned 501-1000 ETB per month had nearly four times higher odds. A higher monthly income may allow patients to fulfill basic needs such as nutrition; patients who can afford a balanced diet may maintain better brain function.
Individuals with comorbid depression and anxiety had nearly six times higher odds of neurocognitive impairment than those without comorbidity. This association might reflect comorbid mental illness worsening neurocognitive impairment; moreover, depression can produce pseudo-cognitive impairment, which may overestimate the association.
Individuals who had no communication about safe sexual intercourse had almost three times higher odds of developing neurocognitive impairment than those who had effective communication. Effective communication may help patients adhere to HAART treatment and make life easier, allowing them to control the viral load; a lower severity of HIV infection promotes good cognition. A longer duration of HIV illness in years was associated with a 1% increase in HIV-associated neurocognitive impairment, an association supported by studies done in southern Ethiopia 24 and Kenya. 10 This might be due to the advanced stage of the HIV illness and the cumulative impact of the virus itself and of ART drugs.
Individuals on HAART who had poor social support had almost four times higher odds of developing neurocognitive impairment compared with those who had strong social support. Other people may support patients living with HIV by reminding them of medication times and by providing material support; these factors may help patients adhere to HAART treatment and preserve their cognition.
Limitation
Since this study was done in selected hospitals of North Shoa Zone, the findings on neurocognitive impairment might not generalize to all PLWHA in Ethiopia. The study included only HIV patients on HAART, which may underestimate the prevalence of neurocognitive impairment, as HAART can reduce the problem. Moreover, since sub-cortical processes are primarily affected, the prevalence of neurocognitive impairment may be underestimated because milder forms cannot be reliably detected.
Conclusion
Two in five HIV patients on HAART developed HIV-associated neurocognitive impairment. Neurocognitive impairment harms the patient's overall quality of life as well as HAART adherence. Health professionals working at HIV/TB clinics should screen HIV patients and refer them for psychiatric evaluation and treatment, and due attention should be given to HIV patients with the identified associated factors.
Data Sharing Statement
The datasets used and/or analyzed during the current study are available from the corresponding author on reasonable request.
Ethics Approval
Helsinki declaration for medical research involving human subjects was followed. Ethical clearance was obtained from the Institutional Health Research Review Committee (Ref. No. IHRERCB-0590/2019) of the college of health and medicine, Debre Berhan University. A permission letter was written for each study health institution and a permission letter was taken from the study institution administrator. Verbal informed consent was taken from each study participant. The informed verbal consent process was approved by Debre Berhan University Institutional Ethics Review Board.
| 2021-08-20T20:55:47.218Z | 2021-01-01T00:00:00.000 | {
"year": 2021,
"sha1": "ab4b97f37e3fd0dd48df7ca61995198733be5e32",
"oa_license": "CCBYNC",
"oa_url": "https://www.dovepress.com/getfile.php?fileID=68587",
"oa_status": "GOLD",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "a2fd0f41fae531cef2f1859f8b7b084ff4a531de",
"s2fieldsofstudy": [
"Medicine",
"Psychology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
5993631 | pes2o/s2orc | v3-fos-license | Mechanism of action of cytochalasin: evidence that it binds to actin filament ends.
To test the idea that cytochalasin retards actin assembly by binding to filament ends, we have designed a new assay for cytochalasin binding in which the number of filament ends can be varied independently of the total actin concentration. Actin is reacted with polylysine-coated polystyrene beads to make filament ends (Brown and Spudich, 1979, J. Cell Biol. 80:499-504) and then reacted with [3H]cytochalasin B. We have found that cytochalasin B binds to beads in the presence of actin, and that the number of cytochalasin B binding sites can be varied as a function of the number of filament ends independent of the total actin concentration by varying the bead concentration.
Recently, it has been discovered by a number of laboratories (1, 4, 5, 9, 10) that cytochalasins inhibit the rate of actin assembly. Far less than equimolar amounts of cytochalasin are required to achieve this inhibition; correspondingly, cytochalasin binds to actin filaments with a stoichiometry of one site per 500 to 10,000 actin monomers (4-6, 9), or one binding site per 1-30 µm of filament.¹ It seems likely that cytochalasin binds to the actin at the filament end where assembly is taking place, as already suggested (1, 4, 5, 9, 10, 12). The critical experiment to test this hypothesis is to show a direct relationship between the amount of cytochalasin bound and the number of filament ends. Until this report, it has not been possible to systematically vary the number of actin filament ends. One must consider the possibility that cytochalasin is not binding to actin at all, but instead is binding to some nonactin species, which would only need to contaminate an actin preparation at a level of 0.01-0.2%, given the above values. Thus, it is crucial not only to vary the number of actin filament ends, but also to do so without varying the concentration of the actin or putative contaminants.
In this report, we have varied the number of filament ends ' While all the binding data agrees to the extent that a low number of cytochalasins bind per filament, the range of values obtained in different laboratories is rather broad . It is worth noting that in our hands, average filament length is a function of how the actin is purified; our present procedure yields Dictyostelium actin filaments >10 Pan long, whereas a different procedure yields filaments <2 pm long (14) . This variation may be the result of trace contaminants which determine length . (see also reference 6) .
MATERIALS AND METHODS
Actin from Dictyostelium discoideum was purified by the method of Uyemura et al. (14) and was disassembled as previously reported (3, 4). Polylysine was covalently linked to itself while bound to polystyrene beads (via glutaraldehyde cross-linking of the amino groups). This caused the polylysine to become permanently associated with the bead (assayed using polylysine made radioactive by reductive alkylation with [³H]borohydride; reference 11).
The procedure was as follows: 1 vol of 0.11-µm polystyrene beads (Sigma Chemical Co., St. Louis, Mo.; 10% solids) or 10 vol of 1.1-µm polystyrene beads (Dow Chemical Co., Midland, Mich.; 10% solids) was added to 50 vol of 5 mg/ml polylysine (Sigma Chemical Co.; type IB) while vortexing. The mixture was stirred at 4°C for 3 h to overnight. The beads were sedimented (10,000 rpm, 10 min, in an SS-34 rotor for 1.1-µm beads; 40,000 rpm, 20 min in a type 65 rotor for 0.11-µm beads) and resuspended by sonication in 2,000 vol of 0.1 M potassium phosphate buffer, pH 6.2. While the suspension was stirred in the hood, sodium cyanoborohydride (Sigma Chemical Co.) was added to ~10⁻¹ M, and glutaraldehyde to 1.6 × 10⁻¹ M immediately thereafter. Beads were then sedimented and resuspended in 200 vol of methanol, to which ~0.2 M sodium borohydride was then added to reduce any remaining aldehydes. Beads were washed well with water and stored in 0.02% sodium azide.
Bead concentration is expressed as milligrams per milliliter of polystyrene, which was measured spectrophotometrically after dissolving the beads in dioxane (16). The number of beads per milliliter was obtained in the case of the 1.1-µm beads by counting in a hemocytometer. The 0.11-µm beads are too small to be counted in the light microscope; therefore, they were mixed with a known concentration of 1.1-µm beads and counted in the electron microscope. The number of beads per milliliter can also be calculated from the milligrams per milliliter of polystyrene (density = 1.05). The calculated number agreed with the measured number for the 0.11-µm beads, but was 40-73% of the measured number for the 1.1-µm beads (two batches). Therefore, there is error in the measurement of polystyrene concentration and/or in the bead counting.
Monomeric (G) actin was mixed with polylysine-coated beads in 3 mM imidazole, pH 7.5, 0.2 mM dithiothreitol, 0.1 mM ATP (G buffer). Assembly to filamentous (F) actin was induced by raising the salt concentration to 0.1 M KCl and 0.1 mM MgCl₂ (F buffer).
The binding of actin to beads was followed using [³⁵S]actin labeled in vivo (12). Cytochalasin binding was followed using [³H]cytochalasin B (CB) (New England Nuclear, Boston, Mass.). Binding was assayed in 80 µl total reaction mix by sampling 10-µl aliquots and comparing total counts per minute vs. counts per minute in the supernate after centrifugation for 5 s at 100,000 g in a Beckman Airfuge (Beckman Instruments, Spinco Div., Palo Alto, Calif.) to pellet the beads. The "moles CB bound" shown in the Scatchard plots (Figs. 2 and 3) are per 10 µl.
RESULTS
Actin Monomer Binds to Polylysine-coated Beads to Make Nuclei from Which Filaments Can Grow

We have previously reported that polylysine-coated beads accelerate nucleation of the assembly of actin filaments in F buffer (3). To investigate the mechanism, we examined the interaction of actin with beads in G buffer. Filaments do not assemble under these conditions, but monomers adsorb to the beads; we will refer to this as "directly bound actin." First, we improved the beads by covalently coupling the polylysine coating around them (see Materials and Methods). This removed the complication that polylysine could slowly desorb from the beads.
Next, we mixed monomeric [³⁵S]actin with polylysine-coated beads in G buffer and sedimented the beads (Fig. 1). We find that monomers bind to the bead and that the bead is saturated at ~4 × 10⁵ actin molecules/1.1-µm bead (two determinations gave 3.6 × 10⁵ and 5.0 × 10⁵) and 4 × 10³ actin molecules/0.11-µm bead. Thus, roughly the same number of actins are directly bound per unit surface area to the large and small beads. This amount of actin is enough to cover the bead surface approximately twice, based on an area of 25 nm² occupied per actin monomer. Because of the range of possible error in bead quantitation (see Materials and Methods), we do not wish to suggest that the beads bind exactly a bilayer of actin, but only that the actin is close-packed at the bead surface. This finding supports the idea that beads accelerate nucleation of actin assembly by bringing monomers close together at the bead surface. This would be expected to facilitate the interaction of the directly bound actin monomers to form nuclei from which filaments can grow when assembly is induced by addition of salt. According to this interpretation, actin can become associated with beads in two ways: it binds directly to the polylysine at the bead surface, even in low salt; and, upon addition of salt, the remaining free actin can form filaments by assembling onto those directly bound actin molecules that have interacted to form nuclei.
Actin Bound to Beads Binds Cytochalasin
The binding of [³H]CB to beads ± actin was assayed (Fig. 2). A Scatchard plot of the data revealed high-affinity binding (Kd = 6 × 10⁻⁸ M) when actin filaments were grown onto the beads. There was no high-affinity binding to beads in the absence of actin (Fig. 2).
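For readers unfamiliar with the analysis, a Scatchard plot graphs bound/free against bound; for single-site binding the points fall on a line of slope -1/Kd with x-intercept Bmax. The sketch below recovers Kd from simulated one-site data, with all input values assumed and chosen to match the Kd reported above.

```python
import numpy as np

# Simulated one-site binding data (all values assumed for illustration).
kd_true, bmax = 6e-8, 3e-9                    # M
free = np.array([1, 3, 6, 12, 30]) * 1e-8     # free CB concentration, M
bound = bmax * free / (kd_true + free)        # simple one-site isotherm

# Scatchard: bound/free = Bmax/Kd - bound/Kd, so slope = -1/Kd.
slope, intercept = np.polyfit(bound, bound / free, 1)
kd = -1.0 / slope
print(f"Kd = {kd:.1e} M, Bmax = {intercept * kd:.1e} M")
```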
High-affinity binding to the directly bound actin alone could be observed when the remaining free actin was removed before salt was added (Fig. 3). There were fewer binding sites under these conditions, but approximately the same Kd (4 × 10⁻⁸ M) was obtained as when filaments were assembled onto the beads (Fig. 3, compare open and closed circles). In this case there are no visible filaments, and CB is presumably binding to the nuclei formed at the bead surface, which have the conformation of actin filament ends. Salt is required either for nucleus formation or for the binding of cytochalasin, since no high-affinity binding is seen in the absence of salt (Fig. 3, triangles).
Cytochalasin Binding to Beads is a Function of the Number of Actin Filament Ends
To show that cytochalasin binding is a function of the number of filament ends, we wanted to vary the number of ends while holding the actin concentration constant. This was accomplished by varying the concentration of the beads. When filaments were assembled onto increasing concentrations of beads while the actin concentration was held constant at 0.2 mg/ml, there was apparently no change in the number of filaments per bead.² Thus the total number of filaments should increase linearly with bead concentration. Because the actin concentration is held constant, the filament length must get shorter; Fig. 4 confirms that this is the case.
This experiment was repeated, and beads assayed for bound actin and bound CB ; the results are consistent with our expectations (Fig . 5) . The open circles indicate that almost all of the actin is bound at all bead concentrations. At lower bead concentrations the number of high affinity CB-binding sites is seen to increase linearly with increasing bead concentration (Fig. 5, closed circles) . However, at bead concentrations greater than -4 mg/ml, the number of CB-binding sites starts to decrease . This decrease coincides with a slight increase (from 94 to 100%) in actin binding . A likely explanation for these changes is that there is no longer enough actin to saturate all of the bead surface. Thus the actin binding increases slightly 2 There were 25 t 18 filaments per bead at 1 mg/ml beads, 26 t 9 at 2 mg/ml beads, and 30 t I 1 at 3 mg/ml beads . It should be noted that since the standard deviation is large, and also the amount of F-actin seen is less than expected from the concentration of actin used, these numbers are consistent with but do not prove that the number of filaments/bead stays constant .
presumably because the critical concentration of monomer formerly in equilibrium with the filaments on the bead now becomes adsorbed to the bead . We suggest that the CB binding decreases as a result of the loss of close packing of actin monomers on the bead surface and therefore of the ability to make nuclei or filament ends .
To confirm that actin was indeed becoming limiting at the breakpoint in Fig. 5, we performed the experiment in a different way (Fig. 6). We again mixed actin with beads, but washed the beads to remove unbound actin before the addition of salt. As indicated earlier (Fig. 3), high affinity CB binding to this directly bound actin is seen under these conditions. Fig. 6 shows that when the assay is performed in this way, the amount of actin bound increases linearly with increasing bead concentration until all of the actin is bound. The amount of CB bound also increases at lower bead concentrations. At the point that all of the actin is bound, the amount of CB bound begins to decrease. Because the decrease coincides with the point where the bead is no longer saturated with directly bound actin, presumably there are fewer actins in close apposition which can interact and form nuclei to which CB can bind.
Ratio of Cytochalasin-Binding Sites to Actin Filament Ends
If the above arguments are correct, one would expect that the number of cytochalasin-binding sites would be equal to or a small multiple of the number of filament ends. Because we can measure the number of cytochalasin-binding sites and can obtain an approximation of the number of filaments, we are in a position to calculate an estimate of the number of binding sites per filament end. It proved difficult to obtain a precise number; however, the stoichiometry is consistent with cytochalasin binding reflecting the number of filament ends.
The number of filaments per bead was determined by electron microscopy. The smaller 0.11-µm beads were used since the amount of filament obscured by traversing the circumference of the bead and/or by the halo of stain around the bead is minimal. We calculated from measuring average length and number of filaments that the amount of F-actin seen using 0.11-µm beads was approximately that expected for the concentration of actin used. However, one might still expect to miss any filament shorter than ~0.1 µm using 0.11-µm beads. Furthermore, such short filaments could not be ruled out by the quantitation of F-actin, as they would constitute a small fraction of the total actin concentration. Thus we can only obtain a minimum estimate of filament number and thus of the number of filament ends. Fig. 7 is a histogram of the number of filaments per 0.11-µm bead, which averaged 8 ± 5. Each separate experiment gave the same distribution of filament number. [Fig. 7 legend: 0.2 mg/ml (measured to be 71% bound) or 2 mg/ml actin was assembled off 0.74 mg/ml 0.11-µm beads. A drop was placed on an EM grid and, in some cases, rinsed with 50 mM MgCl₂ to induce paracrystals for better visualization, before negative staining with 1% aqueous uranyl acetate.] Dividing the number of CB-binding sites per bead by the number of filaments per bead, we obtain an average of 3, and a range of 2-9 CB-binding sites per filament. Considering that our estimate of filament number may be low, the number of CB-binding sites per filament might be less than this. Again, we emphasize that these numbers are approximations, but they fall in the range expected.
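The per-filament stoichiometry just quoted is simple arithmetic; the sketch below reproduces it using the measured filament count, with cb_sites_per_bead a hypothetical value chosen only for illustration, not a measurement from this paper:

```python
# CB-binding sites per filament end: ratio of sites per bead to filaments
# per bead (8 +/- 5, Fig. 7). The sites-per-bead value is hypothetical.
cb_sites_per_bead = 24
filaments_mean, filaments_sd = 8, 5

mean_ratio = cb_sites_per_bead / filaments_mean            # ~3 sites/filament
lo = cb_sites_per_bead / (filaments_mean + filaments_sd)   # ~1.8
hi = cb_sites_per_bead / (filaments_mean - filaments_sd)   # ~8.0
print(f"mean {mean_ratio:.1f}, range {lo:.1f}-{hi:.1f} sites per filament")
```

Because the filament count is a minimum estimate, the true ratio could only be smaller, as the text notes.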
DISCUSSION
To use cytochalasin as a tool to study the functioning of actin in cells, it is first necessary to understand the details of its molecular mode of action. We provide evidence that cytochalasin retards actin assembly by binding to actin filament ends, as we can vary the number of filament ends and obtain a corresponding variation in the number of cytochalasin-binding sites. Our evidence indicates that the high affinity cytochalasin binding is not along the length of the filament, or to a possible trace contaminant, as binding can be varied while these other components are kept constant.
The ability to vary the number of actin filament ends by varying the bead concentration is demonstrated directly by EM at the lower bead concentrations, where filaments are still long enough to be seen. Our argument that there is a loss of filament ends beyond saturating bead concentrations has not been independently demonstrated, but this is the most plausible explanation of the loss of cytochalasin binding and is consistent with other findings on the mode of action of the beads (3).
The data presented here, taken together with other very recent findings, provide a reasonably convincing picture of the mode of action of cytochalasin: A number of laboratories have shown that low concentrations of cytochalasin retard actin assembly (1,4,5,9,10) and bind with high affinity to F-actin (4-6, 9) but not G-actin (5), with a stoichiometry of close to one cytochalasin per filament (6, and this paper). Because the assembly is known to occur preferentially at one end of the filament (7,8,13,17), we would expect cytochalasin to interfere with assembly by binding to that end. MacLean-Fletcher and Pollard (10), by assembling actin onto filament fragments decorated with heavy meromyosin, have demonstrated elegantly that cytochalasin blocks the preferred assembly end. In the absence of cytochalasin, there is a 6:1 bias in favor of growth off the barbed end; in the presence of cytochalasin, growth off the barbed end is preferentially blocked. It has been proposed (15) that the two ends of the actin filament have different equilibria with monomer, so that at steady state, monomer is continually coming off the "disassembly" (pointed) end and adding to the "assembly" (barbed) end to give rise to "treadmilling." If this is the case, and cytochalasin binds to the assembly end, depolymerization to a new equilibrium with the disassembly end would be expected. Several authors have observed that cytochalasin D does increase the critical concentration of monomer in equilibrium with F-actin (1, and unpublished observations of Dr. P. A. Simpson in our laboratory); however, others have not observed such an effect (6,10), perhaps because buffer conditions affect the extent of the difference, or because a small change is obscured by nonpolymerizable monomer (12). Spectrin-actin complex may be complementary to cytochalasin by blocking the other end of the filament, as it has complementary effects (2): it decreases the critical concentration, and tends to block the ability of cytochalasin to cause depolymerization to a higher critical concentration.
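The treadmilling argument can be stated compactly. The relations below are standard polymer kinetics rather than results from this paper, with k_on and k_off the association and dissociation rate constants at each filament end:

```latex
C_c^{\text{end}} = \frac{k_{\mathrm{off}}^{\text{end}}}{k_{\mathrm{on}}^{\text{end}}},
\qquad
C_c^{\text{barbed}} \;<\; C_{ss} \;<\; C_c^{\text{pointed}}
```

At the steady-state monomer concentration C_ss, subunits add at the barbed end and leave at the pointed end at equal rates. Capping the barbed end with cytochalasin removes the low-critical-concentration end, so the filament re-equilibrates toward the higher pointed-end critical concentration, i.e., net depolymerization, which is the effect discussed above.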
This new understanding of the mode of action of cytochalasin suggests that the reason for its profound effects on cell functioning may be that actin assembly-disassembly reactions are central to the regulation of many cellular processes. | 2014-10-01T00:00:00.000Z | 1981-03-01T00:00:00.000 | {
"year": 1981,
"sha1": "dad86307c51129d07cc6c16ff3d5a2de54aa3a46",
"oa_license": "CCBYNCSA",
"oa_url": "https://rupress.org/jcb/article-pdf/88/3/487/1389069/487.pdf",
"oa_status": "BRONZE",
"pdf_src": "PubMedCentral",
"pdf_hash": "dad86307c51129d07cc6c16ff3d5a2de54aa3a46",
"s2fieldsofstudy": [
"Biology",
"Chemistry"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
92847546 | pes2o/s2orc | v3-fos-license | Assessing the Impact of Climate Change and Variability on Irish Potato (Solanum Tuberosum L.) Production from 1995 to 2015 in Tubah Sub Division, North West Region, Cameroon
Climate change and variability are common phenomena that affect crop productivity globally, but with significant differences between regions of the world. Studies of the impacts of these phenomena on Irish potato (Solanum tuberosum L.) production within Tubah Sub-Division, based on records of precipitation and temperatures from 1995 to 2015, indicated an increasing mean annual temperature rate of 0.09 °C per year and a slight increasing annual rainfall rate of 25.8 mm per year. Potato yields within the same period equally increased by 1.26 t/ha per year until 2012, when the yields started decreasing due to a correspondingly marked increase in both temperature and rainfall. The drop in potato yields has also been attributed to disease infestations such as potato blight and rot, which are favoured by the increases in temperature and rainfall. Statistical correlation and regression analyses of these data revealed that the potato yields showed weak positive correlations with temperature (R = 0.02) and with rainfall (R = 0.12). Results from a questionnaire survey, focus group discussions and semi-structured interviews indicated that the potato crops of most farmers (63 %) were negatively affected by climate change and variability through the increased sporadic rainfall, which enhanced potato blight and rot. Some adaptation strategies to these climatic factors are already being practised by most farmers (81 %), who are making use of a combination of fertilizers, pesticides, improved seeds and irrigation practices to remedy the situation, although further approaches such as the use of resistant species are necessary towards improving the dwindling potato yields.
INTRODUCTION
Several pieces of evidence have shown that climate change is real and poses serious consequences on the suitability and productivity of current agriculture in specific agro-ecological zones of the world, as well as on the incidence and severity of diseases affecting crops (IPCC, 2007). Recent warming of the climate system is suggested by the observations of rising global average sea level, melting of snow and ice and increasing global average air temperatures (IPCC, 2007). Global Climate Models project a rise in global temperature by 1.8 °C to 4 °C by the year 2100. According to NCAR (2016), climate change can affect food availability, access, utilization, and the stability of each of these over time. African countries are among the most vulnerable to climate change because their economies largely depend on climate-sensitive agricultural production systems (IPCC, 2007). Short-term climatic variability has equally become more frequent, and its impact on agriculture is felt across the continent.
In Cameroon, mean annual temperature has increased by 0.7 °C since 1960 at an average rate of 0.15 °C per decade and is projected to increase by 1.0 to 2.9 °C by the 2060s and 1.5 to 4.7 °C by the 2090s, while rainfall projections are less certain, with an anticipated change of between approximately -7 % and +20 % (Ayonghe, 2001; Meehl et al., 2007 & IPCC, 2012). Agriculture remains the backbone of the economy of Cameroon and employs more than 70 % of the population. However, it has been largely affected by oscillations in climatic elements, especially rainfall and temperature, which have effects on both plantation and peasant agriculture (Molua & Lambi, 2006). Solanum tuberosum L., commonly known as Irish potato, is the world's major non-cereal food crop and the fourth largest crop after maize, rice and wheat, with production reaching a record of 325 million tons in 2007 (FAOSTAT, 2008). It is also an important crop in Cameroon, ranking fifth in tons produced among the major staple crops (after cassava, plantain, cocoyam and maize) (Badu-Apraku et al., 2009). Potato is grown in many different environments, but it is best adapted to temperate climates (Haverkort, 1990). At high temperatures, usually above 17 °C (Stol et al., 1991), tuberization diminishes, bringing about a decrease in potato yield (Reynolds & Ewing, 1989). Borah & Milthorpe (1962) and Bodlaender et al. (1964) discovered that the potato plant needs a minimum temperature of 6 °C for seed tuber sprout development and emergence above the soil surface and a maximum of 18 °C for optimum stem elongation. High air and soil temperatures can also promote the build-up of soft rot bacteria, increase tuber infection and rotting of potatoes (Davidson, 1948). The crop grows best in cool but frost-free seasons and does not perform well in heat (Bodlaender, 1963 & Hijmans, 2003).
The optimum precipitation for early potato ranges from 250 mm to 350 mm, and any precipitation higher than this optimum leads to yield loss, as it prolongs germination and sprouting and increases disease incidence (Rymuza et al., 2015). Monneveux et al. (2012) estimated the best precipitation requirement for potato to be between 500 mm and 700 mm for a 120- to 150-day growing season. Anochilli (1978) and N.P.C.K (2015) also reported that the optimum rainfall requirement of Irish potato ranges between 750 mm and 1250 mm per annum, and between 1200 mm and 1800 mm, respectively. There are many factors that affect potato yields, such as lack of water and nutrients in the soil, damage from pests and diseases, and changes in weather variables (Ogola et al., 2011). Weather variables encompass rainfall, temperature, wind, humidity and sunshine, among others, but the present study considered temperature and rainfall as the most critical elements affecting potato yields.
Over 200,000 farmers grow potatoes in Cameroon, mostly smallholders. The Northwest and West regions are the areas of greatest concentration, accounting for 80 % of national production (Badu-Apraku et al., 2009). The study was limited to Tubah Sub Division, which is an area with a high production of the crop (Fig. 1). Since potato is generally sensitive to environmental extremes, such as high temperature and limited soil moisture or low rainfall under a changing climate, which cause yields to decrease, and since there is a constantly increasing demand for potato, these changing weather patterns are leaving food security at risk in the study area and in Cameroon at large. However, there has been limited study on the state, severity and past trend of climate change and variability within this area from which adaptation strategies for growing this crop and ensuring future production can be conceived.
Thus, the present study is aimed at assessing the annual climatic trends (temperature and rainfall) and Irish potato production trends between 1995 and 2015. It also determines whether there exist any statistical correlations between the trends of the climatic variables and potato yields, and investigates how changes in temperature and rainfall have affected potato yields. Lastly, it verifies how potato farmers perceive the phenomenon of climate change and are adapting to changes in temperature and rainfall.
MATERIALS AND METHODS
Climatic data (total annual rainfall and mean annual temperatures) from 1995 to 2015 were computed from data provided by the Meteorological Unit of the Ministry of Transport, Bamenda. Irish potato production data (in tons per hectare per year (t/ha/yr)) for 21 years (1995-2015) in Tubah Sub Division were obtained at the Institute of Agronomic Research for Development (IRAD) in Bambui.
The method proposed by Ajadi et al. (2011) was used to process the agricultural production and climate data, whereby Excel 2010 was used to analyse variations in climatic trends and agricultural production trends, together with the correlation that exists between them, within 1995 and 2015. The descriptive statistical techniques used in Excel 2010 to show the correlation between climatic variables and potato yields were the mean, standard deviation and correlation, together with the regression inferential statistical technique; a minimal code sketch of this analysis is given below. These analyses were presented in graphs and tables. A questionnaire containing both structured and unstructured questions, as used by Amon (2013) in his analysis of rainfall variability on Irish potato production in Ol-joroorok Division, Nyandarua County, Kenya, was self-administered to Irish potato farmers in order to find out their perception of the impact of climate change and variability on the potato crop and their adaptation strategies. A stratified purposive and random sampling technique, as used by Regassa (2016), was adopted as the method of administering the questionnaires. The strata were the four fondoms of Tubah Sub Division, namely: Kedjom Ketinguh, Kedjom Keku, Bambili and Bambui. Because Kedjom Ketinguh was the dominant producing area in terms of Irish potato, fifty (50) questionnaires were administered randomly to farmers there, while thirty (30) were administered to each of the remaining three strata, given that they produce Irish potato in small quantities, making a total of 140 questionnaires. The questionnaires were analysed using Excel 2010, whereby results were presented in pie charts, bar charts, columns and tables. Focus Group Discussions (FGDs) were also held with some of the Irish potato farmer groups within the sub division to get information on the impact of climate change and variability on potato and how they are adapting to the changes in temperature and rainfall. A checklist was used to obtain information during the FGDs. The following potato farming groups were used: Akungni Farming Group and Ntamengon Integrated Association Farming Group in Bambui, Kwecham Nimui Social Group in Bambili, as well as the Struggling Men Farmers Group and Abohfen Promised Land Farmers Group in Kedjom Ketinguh.
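The trend, correlation and regression analyses described above can be reproduced outside Excel. The sketch below is a minimal Python equivalent using hypothetical yearly series built around the reported rates, not the study's actual data:

```python
# Minimal sketch of the trend and correlation/regression analysis
# (hypothetical data; the study itself used Excel 2010).
import numpy as np
from scipy import stats

years = np.arange(1995, 2016)                 # 1995-2015 inclusive
rng_t, rng_y = np.random.default_rng(0), np.random.default_rng(1)
temp = 20.0 + 0.09 * (years - 1995) + rng_t.normal(0, 0.3, years.size)
yield_t_ha = 1.4 + 1.26 * (years - 1995) + rng_y.normal(0, 3.0, years.size)

# Linear trend of temperature over time (slope in deg C per year)
trend = stats.linregress(years, temp)
print(f"warming trend: {trend.slope:.3f} C/yr (p = {trend.pvalue:.3f})")

# Correlation and regression of yields against temperature
reg = stats.linregress(temp, yield_t_ha)
print(f"R = {reg.rvalue:.2f}, R^2 = {reg.rvalue**2:.4f}, p = {reg.pvalue:.2f}")
```

The first slope corresponds to a warming rate of the kind reported in the Results (about 0.09 °C per year), and the second call returns R, R² and P values analogous to those summarized in Table 1.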
Ground control points were collected from Irish potato farms in the four fondoms within the sub division with the use of a Garmin GPS to produce a locational map of some Irish potato farms. The ArcGIS 9.2 software was used to produce the location map of Irish potato farms within the four fondoms (Figure 2).
RESULTS
Trend analyses of mean annual temperature and total annual rainfall between 1995 and 2015 are presented in Figures 3a and 3b, respectively. Mean annual temperature showed an increasing warming trend of about 0.09 °C per year (Figure 3a), while total annual rainfall showed an increasing trend of about 25.80 mm per year (Figure 3b). The potato yields, by contrast, showed an increasing trend from 1995 until 2012, when they started dropping with time (Figure 4).

Relationship between temperature and potato yields (1995-2015)
Correlation and regression analyses of the climatic parameters and Irish potato yields from 1995 to 2015 are presented in Figures 5 and 6 and Table 1. Potato yields were increasing with increasing temperature (Figure 5).

Relationship between rainfall and potato yields (1995-2015)
In Tubah Sub Division, potato yields increased with a slight increasing rainfall trend (Figure 6). From the regression analyses (Table 1), rainfall and yields had a weak positive correlation (R = 0.12). The relation was not significant at the 95 % Confidence Interval (P = 0.61), and rainfall explained only 1.42 % (R² = 0.014201) of the variation in yields.

Farmers' awareness of climate change and its impacts
From the pie chart (Figure 7a), most (95 %) of the respondents were aware of the impacts of climate change (changing rainfall and temperature patterns between 1995 and 2015), with the majority (53 %) indicating that climate change was affecting Irish potato yields in the study area (Figure 7b). The majority of farmers (63 %) confirmed that potato blight and potato rot affected potato yields in Tubah Sub Division (Kedjom Ketinguh, Kedjom Keku, Bambili and Bambui) (Table 2 and Figures 9a and b).
Based on Figure 8, 43 % of farmers attested to the fact that increased rain affected potato yields, followed by 36 % who said both increased rain and increased temperature affected potato yields. The majority (81 %) of farmers (Figure 10) were using climate change adaptation strategies for potato production, and 32 % of them used improved seeds, irrigation, fertilizer and pesticide as measures to adapt to climate change (Table 3).
DISCUSSION
Temperature and rainfall trends in Tubah Sub Division increased between 1995 and 2015, in conformity with the statement of the IPCC (2007) that recent warming of the climate system is evident from observations of increases in global average air temperatures, melting of snow and ice, and rising global average sea level, and that global surface temperature increased by 0.74 °C during the hundred years ending in 2005. The warming trend of about 0.09 °C per year in the study area from 1995 to 2015 is an indication that the area will be hard hit by global warming in the near future. The slight increasing rainfall trend is in line with the findings of Hulme et al. (2001) that West Africa has experienced an increase in rainfall during the past 10 years when contrasted with the extended drought years from the 1960s to the 1990s, during which annual average rainfall decreased by as much as 30 %.
The correlation analysis of potato yield and temperature showed that there was no significant relationship between the two variables, as the P value was 0.94 at the 95 % confidence interval, and potato yields had a very weak positive relationship with temperature (R = 0.02). Temperature had very little effect (0.03 %) on potato yields between 1995 and 2015. This is contrary to what Ambrose (2013) discovered in his findings, namely that temperature has a significant positive correlation with Irish potato yields in Nigeria.
Even though there has been an increasing trend between the two variables, a decrease in temperature from 21.0 °C in 1996 to 19.9 °C in 2011 was accompanied by a continuous increase in potato yields from 1.4 t/ha to 35.5 t/ha. The rise in temperature observed between 2012 and 2014 (20.7 °C to 21.8 °C) led to a rapid decrease in potato yields from 32.5 t/ha to 20.5 t/ha. This means that potato yields tend to decrease only when temperatures are 20.7 °C and above, which confirms what Rymuza et al. (2015) observed, that potato productivity is greatly reduced at temperatures higher than the optimum of 20 °C, as the rates of photosynthesis and respiration are affected, with the former being reduced and the latter increased (see also Levy & Veilleux, 2007; Monneveux et al., 2012). Forty-three percent (43 %) of the farmers who said climate change was affecting potato yields emphasized that increased rainfall was the main cause, while 36 % were of the opinion that potato yields are highly affected by increased rain followed by increased temperature. This is in line with what IPCC (2007) stated, that increasing temperature and precipitation may be the reason that potato crops in many regions which previously had no presence of late blight disease have become affected in recent years.
Sixty-three percent (63 %) of the farmers stated that potato blight and rot were caused by increases in both rainfall and temperature, which is in line with the findings of Olanya et al. (2006) that late blight was regarded by farmers in Nyandarua as the most common disease in potato, resulting from high rainfall. High air and soil temperatures can also promote the build-up of soft rot bacteria, increase tuber infection and rotting of potatoes (Davidson, 1948).
With regard to adaptation measures, 81 % of the farmers were already aware of the impacts of climate change. This confirms what IPCC (2007) said, that adaptation to climate change as far as agriculture is concerned is already taking place, as farming communities have a long record of coping with and adapting to the impacts of weather and climate on potato. Thirty-one percent were already using a combination of improved seeds, fertilizer, pesticide and irrigation as strategies to limit the impact of these changes on potato yields. Some of these strategies confirm what Moldovan et al. (2011) recommended, that improved potato seeds can be used as a measure to adapt to climate change and increase potato yields. Fry and Shtienberg (1990) also stated that protectant pesticides should be frequently applied to destroy blight in the potato plant.
CONCLUSION
The continuous increase in temperature (0.09 °C per year) and rainfall (25.80 mm per year) between 1995 and 2015 led to slight increases in yields (1.26 t/ha/yr), though with accompanying increases in disease infestations, with the greatest decline in yields (to 20.5 t/ha) recorded in 2014. The correlations of Irish potato yields with the climatic elements equally showed different relationships.
Yields have been increasing with temperature, even though the relation was a very weak positive one (R = 0.02), while potato yields showed a weak positive correlation with rainfall (R = 0.12). Yields increased with increasing rainfall but started decreasing only when rainfall was at extremes (above 3000 mm). Furthermore, temperatures of 20.7 °C and above decreased yields from 2012 to 2014 (from 32.5 t/ha to 20.5 t/ha). These changes were associated with disease infestations of the crop in the form of potato blight and rot arising from increases in both rainfall and temperature. Most farmers were already using improved seeds that are disease-resistant, together with pesticides and fertilizers, against these effects. However, potato crops could not sustain the climatic extremes that occurred between 2012 and 2014, whereby temperatures increased from 19.9 °C to 21.8 °C with an accompanying significant increase in rainfall from 2282.4 mm to 3482 mm. Even though rainfall and temperature affected potato yields in the study area, the climatic variables did not correlate significantly with Irish potato production. This means that temperature and rainfall may not be the most critical climatic variables affecting potato yields, and that other climatic variables, such as evaporation, relative humidity, sunshine hours and soil temperature, as well as climate variability, which affects the onset dates of rains and dry spells or droughts, are equally important parameters to be considered in such studies.
Other parameters that may affect potato yields include poor farm management practices, such as failure to allow the land to lie fallow, poor timing of planting the potato with respect to the start of the rainy season, and/or the impacts of insufficient rainfall at the early stages of growth. The remedial strategies include ensuring that the land is fallowed every other year and the provision of irrigation facilities whenever the rainfall is insufficient. The cultivation of other Irish potato varieties that have been recommended for Sub-Saharan Africa in the International Potato Centre's (CIP) list, especially species that are more heat-tolerant and late-blight-resistant under current climatic conditions, is highly recommended for this area.
Figure 2: Distribution of Irish Potato Farms within Tubah Sub Division
Figure 5: Correlation and Regression Analyses of Mean Annual Temperature and Potato Yield Trends in Tubah Sub Division (1995-2015)
Figure 6: Correlation and Regression Analyses of Total Annual Rainfall and Potato Yield Trends in Tubah Sub Division (1995-2015)
Figure 7a: Farmers' Awareness of Climate Change and its Impacts
Figure 9: Photographs showing (a) a potato plant affected by blight, which appears as dark leaves on the plant, and (b) potato rot on the hand
Table 1: Regression Summary Output for Rainfall and Potato Yields in Tubah Sub Division (1995-2015)
Table 3: Farmers' Adaptive Measures to Temperature and Rainfall Changes in Tubah Sub Division
Figure 10: Number of Farmers Using Climate Change Adaptation Measures for Potato Production in Tubah Sub Division | 2019-04-03T13:07:21.395Z | 2018-07-18T00:00:00.000 | {
"year": 2018,
"sha1": "076dcb6fd86b2c23f1e4aae110ee08e7c5bcb9a5",
"oa_license": "CCBY",
"oa_url": "https://www.ajol.info/index.php/jcas/article/download/174679/164070",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "a1f8761a4bd1fdc9893e95f3279d0719367f4770",
"s2fieldsofstudy": [
"Agricultural And Food Sciences"
],
"extfieldsofstudy": [
"Biology"
]
} |
29635113 | pes2o/s2orc | v3-fos-license | Balloon Vaginoplasty: A Revolutionary Approach for Treating Vaginal Aplasia
Vaginal aplasia is a rare anomaly that carries psychologic, physical and sexual problems for the female and her partner. Whereas a number of vaginoplasty methods have been developed, refined, and modified, no state-of-the-art surgical approach has been established. This is due to a number of factors, including regional differences, surgeon experience and preference for a method, and patient choice. The goal of vaginoplasty is to develop a space between the bladder and the rectum suitable for satisfactory intercourse for both partners. This review will discuss in detail the commonly available procedures of vaginoplasty, with stress on the evident pros and cons of each technique. Thereafter, it will discuss the new era of balloon vaginoplasty, whether done laparoscopically or via the retropubic space. Every procedure will be discussed meticulously, with excellent illustrations. Some tables comparing different techniques will be provided. In short, a step-by-step educational approach will be delivered to the readers to start practicing such simplified procedures in their own hospitals.
Introduction
Vaginal aplasia is a rare anomaly that carries psychologic, physical and sexual problems for the female and her partner. Whereas a number of vaginoplasty methods have been developed, refined, and modified, no state-of-the-art surgical approach has been established. This is due to a number of factors, including regional differences, surgeon experience and preference for a method, and patient choice. The goal of vaginoplasty is to develop a space between the bladder and the rectum suitable for satisfactory intercourse for both partners. This review will discuss in detail the commonly available procedures of vaginoplasty, with stress on the evident pros and cons of each technique. Thereafter, it will discuss the new era of balloon vaginoplasty, whether done laparoscopically or via the retropubic space. Every procedure will be discussed meticulously, with excellent illustrations. Some tables comparing different techniques will be provided. In short, a step-by-step educational approach will be delivered to the readers to start practicing such simplified procedures in their own hospitals.
Background
Vaginal aplasia (Figures 1-3) is a rare anomaly occurring in approximately 1 in 5,000 to 10,000 births (1). It carries an emotional, sexual, and social embarrassing effect on these women (2,3). Previously, such cases were neglected by general gynecologists and sent to be treated by a very limited number of specialized centers all over the world. Thanks to continuous refinement and innovation of reconstructive surgical techniques, some of these women have been able to conceive (4)(5)(6). Even if this anomaly is associated with uterine aplasia, there is hope for uterine transplantation within the coming few years (7,8) because of successful animal transplantation (9,10) and competent organ cryopreservation (11). These modern achievements have pushed interested centers to refine their procedures and offer these cases the best available care. Mayer-Rokitansky-Kuster-Hauser syndrome (MRKHS) is a subtype of vaginal agenesis comprising congenital absence of the vagina and a variety of Mullerian duct anomalies, with aplasia of the uterus being the most common feature. In general, these patients have normally functioning ovaries, which are often located at the pelvic brim (Figure 3). Anomalies of the urinary tract and the skeleton are frequently associated with MRKHS (12).
Therapeutic options of vaginoplasty
Numerous surgical and nonsurgical procedures with varying degrees of success have been described for correction of the condition, but none has proved to be universally accepted. As this chapter is planned to focus on balloon vaginoplasty, I just quote some references on the common techniques of vaginoplasty (13)(14)(15)(16)(17)(18)(19)(20)(21)(22)(23)(24)(25)(26)(27)(28)(29). A lot of the published work on vaginoplasty demonstrates the feasibility of a particular procedure, highlights its possible advantages, and expresses the skills of the surgeons. The question now is not whether the procedure is feasible, but whether the approach is superior and beneficial to a particular patient, cost effective for the community at large, and, more importantly, easily performed by general gynecologists without sophisticated instrumentation. The following algorithm (Figure 4) summarizes broadly the different methods of vaginoplasty.
Different therapeutic options
Vecchietti operation
Conventional surgery
Gynecologic surgeons or gynecologic endoscopists of average experience would find most of the published surgical techniques of vaginoplasty sophisticated and difficult to perform, and both cost and effectiveness must be considered conjointly when evaluating new surgical procedures. Given the rarity of the condition and the number of available methods, outcome data can be difficult to obtain (30). The traditional operative techniques (31) have major disadvantages, including prolonged recovery time and significant scarring (32). These techniques require lengthy, often embarrassing self-catheterization, which can be painful, and they may yield a vagina of only limited length (33). Many centers prefer Vecchietti's neovaginoplasty because of its low perioperative morbidity and quicker recovery period (34). These conventional surgical procedures are tedious, time consuming, and require a higher level of surgical expertise.
Laparoscopic approach
As described by Cooper et al. (35), a laparoscopic neovagina can be created by drawing an olive into the vaginal groove and applying continuous tension via sutures passed at laparoscopy to a tensioning device on the anterior abdominal wall (35), using different instrument sets (36).
Fig. 5. Laparoscopic Vecchietti procedure
However, there are risks inherent in the most difficult step, passing the thread-bearing cutting needle from the abdominal wall to the retrohymenal fossa through the vesicorectal space (37). It is important to ensure bladder and rectal integrity (38). Ultrasonographic control may increase safety (39).
Disadvantages of the Vecchietti procedure:
The Vecchietti procedure requires specialized teams utilizing sophisticated instrumentation, and it is tedious to perform. It has the drawback of requiring daily traction for 8 to 10 days. Moreover, it lifts the posterior urethrovesical angle, making it more obtuse, with the possible consequence of causing stress incontinence later on (the posterior traction on the urethral supports also places the patient at higher risk for stress incontinence). Furthermore, a change in the pelvic floor balance has been suspected (40). Another problem with the Vecchietti vaginoplasty is the use of a special abdominal traction device for a few days (41).
Other laparoscopic procedures are more complicated and tedious, such as the Davydov procedure (32) or sigmoid colpoplasty (42,43). Soong (44) described a laparoscopically assisted neovaginoplasty in which laparoscopic dissection of the rectovesical space is followed by traction of the pelvic peritoneum by a vaginal clamp and insertion of a vaginal stent for 1 week (45). The latter is not a pure laparoscopic approach and appears to be time consuming, although Soong did not comment on the operative time.
Advantages of laparoscopic approach
Apart from the common, well-established advantages of laparoscopic surgery over conventional surgery, laparoscopy permits identification of the pelvic peritoneum as well as the proper site for the summit of the neovagina, using the most mobile portion to form the vaginal fornix. Laparoscopy significantly facilitates this procedure, reduces operating time and risks, and makes the operation available to a wide range of surgeons skilled in laparoscopy (46).
Missing data in the previous studies:
The question is not whether the procedure is feasible, but whether the approach is superior and beneficial to a particular patient, cost effective for the community at large, and, more importantly, easily performed by general gynecologists without sophisticated instrumentation. These guidelines constructed the frame of our institutional research on neovaginoplasty in recent years.
Table 1. Examples and characteristics of vaginoplasty techniques
- Vaginoplasties with grafts: free skin graft (the McIndoe method), sigmoid vaginostomy, amnion graft, pedunculated skin graft, pelvic peritoneum graft, free urinary bladder (UB) graft, grafts from the buccal mucosa.
- Vaginoplasties without grafts: the Vecchietti operation [7], which can be conventional or laparoscopic; balloon vaginoplasty.
- Characteristics noted for the traction-based techniques: nonanatomic access to the vaginal dimple; the posterior urethrovesical angle is lifted, which makes it more obtuse; a change in the balance of the pelvic floor has been suggested.
Balloon vaginoplasty innovation
This concept was developed at Assiut University (Egypt) by our team in 2007. Its main goal is to introduce a simplified approach that can be performed by many gynecologists all over the world. Like any innovation, it quickly passed through sequential steps of modification to arrive at the best available safe as well as effective approach.
Advantages of native balloon vaginoplasty over foreign tissue vaginoplasty
Cancer of the neovagina created from exogenous tissue, for example bowel, skin graft, vulvar skin flaps, rectus abdominis (myocutaneous) flaps, or inverted penile skin, has been documented at younger ages than cancer of the native vagina (47). Tissue dysplasia can be expected because the tissue is suddenly subjected to new contacts or stresses (47). Therefore, recent interest has focused on dilatation as a treatment of choice (48,49). Most of the international centers promote the use of vaginal dilators (50). The success rate is reported to be up to 81% after vaginal dilatation (51). This can be attributed to the inherent nature of the vagina, in the form of a high capability of elasticity and distensibility. Maximal vaginal tissue elongation is proved to be higher than that of normal skin (49). It seems logical that dilatation or other surgical procedures based on a proper understanding of the nature of this organ would be preferred over techniques based on the idea of replacement of the vagina by skin, amniotic membrane, sigmoid, or otherwise. Moreover, replacement techniques would lead to scar formation. The prevalence of dyspareunia increases after transvaginal reconstructive pelvic surgeries (52). This concept stands behind the increased popularity of the conventional or laparoscopic Vecchietti operation, as it is devoid of vaginal scars. A peculiar advantage of balloon vaginoplasty, particularly retropubic balloon vaginoplasty, is the possibility of surgical intervention for recurrent or failed cases done by other procedures. Herein, I'll summarize the different techniques of balloon vaginoplasty.
Laparoscopic balloon vaginoplasty after dissection of the rectovesical pouch (53):
Under general endotracheal anesthesia, a standard laparoscopic evaluation is performed with two auxiliary 5-mm suprapubic portals. Dissection of the peritoneum covering the vesicorectal pouch is performed. A piece of gauze is inserted inside the rectum and is gently manipulated by a nurse in different directions, as directed by the surgeon. A metal catheter is inserted into the bladder, which is moved according to the directions of the surgeon. Gentle, sharp dissection of the vesicorectal space is done until a free area in between is achieved. Dissection should then be progressed until near the vaginal skin. The left 5-mm suprapubic trocar is extracted, followed by advancement of a blunt-ended grasper to make a gentle dissection of the peritoneum until reaching the dissected area. An 18-F silicone Foley catheter is advanced extraperitoneally to replace the left-side blunt-ended grasper up to the dissected area. From the right side, blunt-ended grasping forceps are pushed into the rectovesical space. Vaginally, a snip is made on top of it, followed by advancement of another grasper to pick up the tip of the catheter vaginally. The balloon is inflated with 6 cm³ of saline while the catheter is advanced upward; tension is maintained by applying two disposable umbilical cord clamps on the stretched catheter. To avoid skin ischemia or pain at the site of traction, a sterile dressing is insinuated beneath the clamps. To be fitted, a small hole is made at the center of the dressing before its application below the clamps. Maximal tension is achieved by continuous traction before applying the clamps (Fig. 6).
Fig. 6. Lateral view of the extraperitoneal catheter (labels: extraperitoneal silicone Foley catheter; two umbilical cord clamps)
The integrity of the bladder is easily checked by gentle testing using the blunt tip of a metal catheter. All laparoscopic instruments are extracted without any suturing. A Foley catheter is inserted into the urethra.
Disadvantages:
Despite being extraperitoneal, this procedure nevertheless requires considerable experience in laparoscopic surgery to dissect the rectum from the bladder safely.
Laparoscopic balloon vaginoplasty without dissection of the rectovesical pouch (54,55)
A silicone-coated balloon catheter is manipulated by a specially designed inserter, which is passed transperitoneally and through the pelvic floor, where the balloon is positioned at the vaginal dimple. An upward, gradual (1-2 cm/day) traction is applied on the catheter stem from the abdominal side for one week. A concomitant increase in balloon capacity (5 ml every other day) to increase the width of the neovagina is also done. Sexual relations are recommended as early as one week after surgery.
Disadvantages:
Despite being an easy procedure, it is a blind and intraperitoneal approach. Practically, an extraperitoneal approach has proved to be effective and carries no risk of coiling of loops of intestine or of peritoneal irritation.
Modified laparoscopically assisted BV (56,57):
The procedure starts with diagnostic laparoscopy (video).After full evaluation of intraabdominal and intrapelvic structures, the telescope is directed toward the pelvic floor.
A suction irrigation cannula is introduced through the ancillary abdominal puncture and pushed firmly against the pelvic floor in the region of the pouch of Douglas (Figure 11). Simultaneously, using palpation, the tip of the cannula is introduced transperineally through the vaginal dimple, positioning it so that it presses at a central point of the dimple. The surgeon's right hand holds the cannula from the abdominal side, and the left hand guides the pressed tip of the cannula from the perineal side. Next, a conventional surgical needle, its curve attenuated, is threaded with a long, double-stranded silk suture (DSSS) and passed through the vaginal dimple at the point where the cannula tip is positioned. The needle perforates the pelvic floor, and as it appears at the pouch of Douglas, the cannula is removed and a laparoscopic grasper/needle-holder is inserted so that the needle can be extracted through the ancillary abdominal puncture (Figure 12). Then, the needle is removed from the DSSS, and the suture is threaded into the opening of an 18-gauge silicone-coated Foley catheter. Traction is exerted on the DSSS from the perineal side until the catheter is pulled back through the abdominal port to the pelvic floor and through the pelvic floor to the dimple (Figure 13). This step is greatly facilitated by exerting counter-traction on the catheter, stretching it to decrease its caliber, especially while it is moved through the pelvic floor, since the channel created by the needle is very narrow. After the balloon-bearing end of the catheter appears at the dimple, it is inflated with 15 mL of saline. Traction is exerted from the abdominal side until the balloon moves up, carrying the stretched dimple above the introitus. Catheter placement in past procedures relied on a unique catheter inserter, but the catheter can instead be manipulated into position using a suction irrigation cannula and a surgical needle, as described here. Likewise, traction on the catheter is maintained without the supporting plate that had been expressly made for that purpose (a plate made of stainless steel, sterilized by autoclave, and closely resembling a DVD disk); two alternative methods of supporting the traction can be used. In the first method, a thick, multilayered dressing is tightly wrapped around the catheter until a cylinder, at least 5 cm high and 10 cm wide, is formed perpendicular to the abdomen. The outermost layer of the dressing is encircled with adhesive tape to guard against unraveling (Figure 14). In addition, the cord clamp is thereby well supported away from the abdominal wall, preventing pressure necrosis. An alternative supporting plate, made of 3 DVDs joined together with silicone and sterilized with ethylene oxide, can be used; it is placed around the catheter and over a dressing so that it distributes the force from traction over a wide surface area, preventing pressure sloughing of the abdominal skin. Postoperative care is the same as that previously described for BV and includes controlled traction and distension, prevention of infection, continued psychosocial support, and emphasis on early resumption of sexual activity; patients are instructed to use a condom and gentamicin cream during the first 10 days after discharge.
Disadvantages:
It seems not to be applicable by general gynecologists, as it is an intraperitoneal approach with a risk of subsequent intestinal coiling. It is also risky, as the authors reported a case of rectal injury out of three cases (33.3%), added to the risks of laparoscopy, particularly if the patient has a scar from correction of other malformations or from pelvic surgery.
Space of Retzius (figure 15)
In 1858, Retzius described the eponymous space, situated anterior and lateral to the urinary bladder (prevesical space) (58).It is the space between the symphysis, the bladder, and the anterior abdominal wall.It is bordered anteriorly by the pelvic bone; posteriorly by the endopelvic fascia (the urogenital, pubocervical, and pelvic fascia, which cover the bladder and the urethra); and laterally by the obturator muscle.It contains loose connective tissue and fat and affords the surgeon access to the bladder without opening the peritoneal cavity.
It is an optimal extraperitoneal approach, well addressed in the field of urogynecology. Access through it would eliminate laparoscopy and its complications. Moreover, because of its proximity, it seems logical to access the vaginal dimple through it rather than through the auxiliary laparoscopic portals.
Retropubic fine needle vaginoplasty (59)
Patients are prepared as usual for any simple gynecologic operation. Under IV propofol anesthesia, the bladder is evacuated, followed by insertion of a rigid catheter guide (bladder stylet) loaded inside a urethral catheter. This aims to mobilize the bladder neck away from the tip of the needle when it passes into the retropubic space. A 7-mm suprapubic incision is made just lateral to the midline in the suprapubic area, 2 cm above the symphysis pubis, in the same manner as the tension-free vaginal tape (TVT) operation for treating genuine stress incontinence (60). A fine single-lumen egg retrieval needle, with its stainless handle tightly fitted to a cut distal end of a Foley catheter, is used. We prefer the Swemed Sense needle (Vitrolife Sweden AB, Kungsbacka, Sweden), which has a reduced distal end of 0.9 mm OD, while the rest of the needle has 1.4 mm OD (Figure 16). An additional advantage of this needle is its good malleability, which allows bending of the needle during insertion to adapt to the curve of the space of Retzius (Figure 15). The bent needle is inserted through the suprapubic incision and directed towards the space of Retzius, with simultaneous mobilization of the bladder inwards and laterally to the ipsilateral side with the bladder stylet. To help the needle reach the correct place in the center of the vaginal dimple, a small incision is made in the vaginal dimple, which allows the introduction of the operator's contralateral index finger to guide the tip of the fine needle. Once the tip of the needle appears, the bladder catheter is removed, followed by cystoscopic examination. If the bladder is intact, the needle is advanced with some force to allow the fitted catheter to bypass the anterior abdominal wall layers. Once the balloon is seen from the vaginal side, it is disconnected from the needle, inflated with 6-8 cc of saline, and pulled upwards (Figure 16). Traction is maintained by applying two alternating umbilical cord plastic clamps on the part of the catheter adjacent to the anterior abdominal wall. To avoid skin ischemia, an intervening layer of sterile gauze is placed underneath a stainless steel fenestrated plate (Figure 17). To avoid the retention effect of the balloon, a urethral catheter is inserted and fixed. After a short postoperative interval, before discharge, the patient is instructed to maintain antibiotic coverage and to take a nonsteroidal anti-inflammatory drug whenever required. She is taught how to evacuate and care for the urine collection bag, make frequent proper vaginal douches using povidone iodine 10%, and apply sterile vulvar dressings to guard against ascending infection. Three days later, she has to come to the office for more traction on the catheter, which is maintained using a third umbilical clamp. The abdominal and urethral catheters are removed on day 8. Despite being simple, the technique is relatively unsafe, as the needle is passed blindly towards the vaginal dimple, sometimes after several trials. There is a risk of needle puncture of the surgeon's fingers. Moreover, the needle is malleable and unstable during its perforation of the retropubic space. Lastly, a vaginoabdominal approach is more comfortable than the abdominovaginal approach used during the fine needle procedure.
Transretropubic traction vaginoplasty (TRT) (60)
This technique uses an olive rather than a balloon, but it follows the same principle as balloon vaginoplasty, which is why I preferred to include it here.
Idea: In the course of a few days, a plastic olive placed on the vaginal dimple is lifted by a mesh tape inserted through the space of Retzius and anchored to the anterior abdominal wall.The upward traction exerted on the vaginal dimple is sufficient to create a neovagina.
Steps
The patients are prepared for the TRT vaginoplasty as for any simple gynecologic surgical procedure. Under general anesthesia, bladder evacuation is followed by the insertion of a urethral catheter, through which a rigid guide (or bladder stylet) is then passed. This step is taken to mobilize the bladder neck away from the tip of the needle when it passes into the retropubic space. A 7-mm incision is then made on both sides in the suprapubic area, 2 cm from the midline and 2 cm above the symphysis pubis, as if to install tension-free vaginal tape to treat stress urinary incontinence (61). A sharp, curved needle, especially designed with a wide eye attached to a plastic handle (Fig. 18), is used to perforate the vaginal dimple bilaterally, just 1 cm below the bladder neck. A strip of mesh composed of a knitted polypropylene monofilament (Pro Mesh; Ethicon, Somerville, NJ, USA), 1 cm in width and 30 cm in length, is stretched and passed through the eye of the needle. A 2 × 3 cm fenestrated plastic olive is threaded like a bead on the tape. The needle is inserted through the vaginal dimple skin without a prior incision, directed upwards and slightly laterally toward the space of Retzius, with simultaneous mobilization of the bladder inwards and laterally on the same side with the bladder stylet. The needle is then directed slightly medially toward the suprapubic incision on the same side. Once the tape is seen through the incision, the needle is withdrawn while the tape is clamped with a forceps. The bladder catheter is then removed and a cystoscopic examination performed. If the bladder is intact, the same steps are repeated on the other side. Traction on the tape is maintained by placing a plastic umbilical cord clamp on each end of the mesh tape. A layer of sterile gauze is placed underneath a fenestrated plate of stainless steel to avoid skin ischemia. A urethral catheter is inserted to prevent urinary retention from the pressure applied by the plastic olive.
After a short postoperative interval, before discharge, the patient is instructed to take an antibiotic medication for 1 week and a nonsteroidal anti-inflammatory drug (NSAID) whenever required. She is also taught how to void her bowels, care for her urine collection bag, frequently cleanse her vagina with a 10% povidone iodine solution, and apply sterile vulvar dressings to guard against ascending infection. She returns to the office 3 days later to increase the traction on the tape, and the new traction is maintained using an additional umbilical clamp on each side. If the patient is afraid of experiencing pain from the traction, she is given an intravenous injection of an NSAID 15 minutes before the procedure. The tape, plastic olive, and urethral catheter are removed on the eighth day after the procedure.
During an examination, a medium-sized speculum is inserted into the vagina, and the vaginal length is measured and recorded. The patient is encouraged to start sexual intercourse on that day. She presents to the office every 2 weeks for the next 2 months for evaluations. Each spouse is privately asked about dyspareunia and sexual satisfaction at each visit. The husbands are also asked about penetration. A score out of 100 is used for satisfaction and penetration; it is a simple scoring chart designed at our institution after the visual analog scale for pelvic pain. After the 2-month follow-up, the couple presents to the clinic only when they have operation-related complaints.
Advantages: TRT vaginoplasty seems to be superior, as it does not depend on endoscopy, does not require dissecting the rectovesical space or the vaginal dimple, and is performed relatively quickly. Moreover, the risk of stress incontinence is still present in patients undergoing balloon vaginoplasty because of the posterior traction exerted on the vaginal dimple (62).
Fig. 18. TRT vaginoplasty
Disadvantages: The technique is somewhat sophisticated, and the required instruments are not available in all operating rooms. Moreover, we reported exaggerated patient discomfort and repeated complaints, particularly during traction on day 3. In the following table, I'll summarize the differences between some vaginoplasty techniques.
Transretropubic balloon vaginoplasty approach (63)
The patients are prepared for the operation as for any simple gynecologic surgical procedure. Under spinal anesthesia, bladder evacuation is followed by the insertion of a urethral catheter, through which a rigid guide (or bladder stylet, figure 1) is then passed. This step is taken to mobilize the bladder neck away from the tip of the needle when it passes into the retropubic space. A 5-mm incision is then made in the midline in the suprapubic area, 2 cm above the symphysis pubis. An especially designed sharp, curved needle with a wide eye attached to a plastic handle (Fig. 19) is used to perforate the vaginal dimple centrally, just 1 cm below the bladder neck. The needle is inserted through the vaginal dimple skin without a prior incision, directed cephalad and slightly laterally toward the space of Retzius, with simultaneous mobilization of the bladder inwards and laterally on the same side utilizing the bladder stylet. In all cases, the perforation is controlled by perioperative ultrasonographic examination of the space of Retzius. Thereafter, the needle is directed slightly medially toward the suprapubic incision in the midline. Once the fenestrum of the needle is seen through the incision, the distal end of a silicone Foley catheter is fixed to it by a double-stranded Vicryl 2 suture. Thereafter, the needle is withdrawn downwards and the Vicryl suture is cut, leaving the distal end of the Foley catheter outside the vaginal dimple; the balloon is immediately filled with 4-6 cc of saline to avoid excessive pain. The bladder catheter is then removed and a cystoscopic examination is performed. Upward traction on the catheter is maintained by placing a plastic umbilical cord clamp on its abdominal side. A layer of sterile gauze is placed underneath a fenestrated plate of stainless steel to avoid skin ischemia (Fig. 19). A urethral catheter is inserted to prevent urinary retention from the upward traction of the inflated balloon.
Operative time is recorded for all cases. After a short postoperative interval of a few hours, before discharge, the patient is instructed to take an antibiotic medication for 1 week and a nonsteroidal anti-inflammatory drug (NSAID) whenever required. She is also taught how to void her bowels, care for her urine collection bag, frequently cleanse her vagina with a 10% povidone iodine solution, and apply sterile vulvar dressings to guard against ascending infection. She returns to the office 3 days later to increase the upward traction and the size of the balloon by reinflation with an extra 3-4 cc of saline, and the new traction is maintained using a new umbilical clamp. The suprapubic and urethral catheters are removed on the 8th day after the procedure.
Modified retropubic vaginoplasty (64-66)
Through a small suprapubic puncture, the catheter inserter is passed from above into the retropubic space just behind the pubic bone and guided to the center of the vaginal dimple.
Then, a cystoscopic examination is performed to ensure bladder and urethral integrity. This is followed by gradual, controlled distention of the balloon and traction on the catheter stem, as described in laparoscopic balloon vaginoplasty. Drawbacks: this is a relatively unsafe procedure, as the inserter is about 5 mm in caliber and there is no safety mechanism to protect the urethra from injury during perforation. Non-usage of a bladder stylet is another disadvantage, as the stylet serves to displace the urethra contralaterally, away from the perforation site. Again, the authors used perforation from the abdominal side, which is difficult and unguided compared with perforation from the vaginal side. Another technical problem of their procedure is the absence of a fenestrum to easily carry threads to the vaginal side.
Which balloon vaginoplasty should I use?
After this detailed discussion of the published studies on balloon vaginoplasty, I would recommend retropubic balloon vaginoplasty (63) for the following reasons: transretropubic balloon vaginoplasty is a simple, fast, safe, and readily available extraperitoneal procedure. It can easily be performed by any gynecologist with a basic knowledge of the anatomy of the retropubic space. Not only does it save time for the gynecologist, but it also saves a great deal of money for the patient and the community.
A recent study on laparoscopic balloon vaginoplasty by ElSaman et al reported a case of rectal injury out of three cases (33.3%). This possibility is remote if the retropubic approach is utilized. Being an extraperitoneal approach makes retropubic balloon vaginoplasty very suitable for cases with extensive intraperitoneal adhesions or with a history of intestinal surgery. Moreover, given its proximity to the vaginal dimple, it seems logical to access the dimple through the retropubic space rather than through the laparoscopic portals. Retropubic balloon vaginoplasty disturbs neither the urethral support nor the urethrovesical angle. It may even add support to the bladder neck, albeit for a short period. This technical point is considered an additional advantage of the retropubic approach over transabdominal conventional or even laparoscopic vaginoplasty.
Retropubic balloon vaginoplasty is a good example of minimally invasive surgery for vaginal aplasia, an anomaly currently corrected with very sophisticated techniques utilizing foreign tissues such as peritoneum, skin, sigmoid, or amnion grafts. Selection of the retropubic space and performance of the procedure in the same way as tension-free vaginal tape (TVT) for treating stress incontinence is reassuring regarding safety, and this is confirmed by the absence of complications in our pilot study. Transretropubic balloon vaginoplasty also seems superior to transretropubic vaginoplasty using a tape and an olive in terms of a shorter mean operative time (8.5 vs. 26.5 min), owing to a single perforation of the space of Retzius; elimination of the exaggerated, intolerable pain on traction on the tape that required analgesics in all cases of the previous study; and utilization of the silicone balloon, which allows easier traction owing to its elastic recoil and changeable balloon size. Clearly, this procedure is also much cheaper than transretropubic vaginoplasty.
A modified retropubic vaginoplasty has recently been published by ElSaman et al. The authors used an inserter carrying threads at its distal end, passed from the suprapubic side towards the vaginal dimple. This instrument is short and straight, unlike the long curved needle used in this study. Moreover, their procedure is blind and completely unsafe, as this instrument has a wide caliber with a high possibility of urethral injury. Out of three cases, they reported one case of urethral damage (33.3%) that required catheterization for 8 days. Another technical problem of their procedure is the absence of a fenestrum to easily carry threads to the vaginal side.
Fig. 16. Instrumentation and a diagram of fine needle vaginoplasty
"year": 2011,
"sha1": "aa25a69cf620723865e77574c2ca9c5be45f9195",
"oa_license": "CCBYNCSA",
"oa_url": "https://doi.org/10.5772/25120",
"oa_status": "HYBRID",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "f6353a051989993c514706b67e5c02dcc2afa2f5",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Genome-wide association study of seed coat color in sesame (Sesamum indicum L.)
Sesame (Sesamum indicum L.) is an important and ancient oilseed crop. Sesame seed coat color is related to biochemical functions involved in protein and oil metabolism, and antioxidant content. Because of its complexity, the genetic basis of sesame seed coat color remains poorly understood. To elucidate the factors affecting the genetic architecture of seed coat color, 366 sesame germplasm lines were evaluated for seed coat color in 12 environments. Genome-wide association studies (GWAS) were conducted for three seed coat color space values, best linear unbiased prediction (BLUP) values from a multi-environment trial analysis, and principal component scores (PCs) of the three seed coat color space values. GWAS for the three seed coat color space values identified a total of 224 significant single nucleotide polymorphisms (SNPs, P < 2.34×10⁻⁷), with phenotypic variation explained (PVE) ranging from 1.01% to 22.10%, and 35 significant SNPs were detected in more than 6 environments. Based on the BLUP values, 119 significant SNPs were identified, with PVE ranging from 8.83 to 31.98%. Comparing the results of the GWAS using phenotypic data from different environments and the BLUP values, all significant SNPs detected in more than 6 environments were also detected using the BLUP values. GWAS for PCs identified 197 significant SNPs, of which 30 were detected in more than 6 environments. GWAS results for PCs were consistent with those for the three color space values. Out of the 224 significant SNPs, 22 were located in the confidence intervals of previously reported quantitative trait loci (QTLs). Finally, 92 candidate genes were identified in the vicinity of the 4 SNPs that were most significantly associated with sesame seed coat color. The results in this paper will provide new insights into the genetic basis of sesame seed coat color, and should be useful for molecular breeding in sesame.
Introduction
Sesame (Sesamum indicum L., 2n = 2x = 26), which belongs to the Sesamum genus of the Pedaliaceae family, is one of the earliest domesticated crops [1]. It is mainly planted in tropical and subtropical regions.
Measurement of seed coat color and statistical analysis
Sesame seeds were harvested from five randomly chosen plants in each row at maturity and were used to evaluate the seed coat color. Seed coat color was scored using a HunterLab colorimeter (ColorFlex EZ, Hunter Associates Laboratory Inc., Virginia, USA) and decomposed into L, a, and b color space values. The L-value represents brightness (black to white; 0 for black, 100 for white), the a-value represents the color from red to green (positive represents red, negative represents green), and the b-value represents the color from yellow to blue (positive represents yellow, negative represents blue) [30]. Descriptive statistics for the sesame seed coat color values in each environment were computed using the PROC UNIVARIATE procedure (α = 0.01) of SAS 8.02 software (SAS Institute, Cary, NC, USA). Best linear unbiased predictions (BLUPs) were used to estimate seed coat color values across multiple environments using the R [31] package "lme4" [32]. The BLUP model for the phenotypic trait was y_ijk = μ + G_i + E_j + (GE)_ij + B_k(ij) + ε_ijk, where μ is the total mean, G_i is the genotypic effect of the ith genotype, E_j is the effect of the jth environment, (GE)_ij is the interaction effect between the ith genotype and the jth environment, B_k(ij) is the effect of replication within the jth environment, and ε_ijk is a random error following N(0, σ²_ε) [33]. The analysis of variance (ANOVA) was performed using QTL IciMapping V4.0 [34]. Broad-sense heritability was calculated as H² = σ²_G / (σ²_G + σ²_GE/k + σ²_ε/(kr)), where σ²_G is the genotypic variance, σ²_GE is the genotype-by-environment variance, σ²_ε is the residual variance, k is the number of environments, and r is the number of replications [33]. Principal component analysis (PCA) can transform a set of correlated variables into a substantially smaller set of uncorrelated variables, the principal components (PCs), which capture most of the information in the original data [35]. Borcard et al. [36] recommended that the variables used in PCA be scaled to zero mean and unit variance. Therefore, PCA for the three color space values was performed using the R function "prcomp" with the setting "scale = TRUE" [31]. The first 2 PCs, which explained 93%~97% of the total variance in different environments, were retained for GWAS.
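To make the model and heritability computation above concrete, a minimal R sketch is given below. It assumes a hypothetical long-format data frame pheno with columns L (one color space value per seed lot), genotype, env, and rep; object names such as blup_L are illustrative rather than taken from the original analysis scripts.

library(lme4)

# Multi-environment model y_ijk = mu + G_i + E_j + (GE)_ij + B_k(ij) + e_ijk,
# with genotype, environment, their interaction, and replication nested
# within environment all fitted as random intercepts.
fit <- lmer(L ~ (1 | genotype) + (1 | env) + (1 | genotype:env) + (1 | env:rep),
            data = pheno)

# BLUP of each genotype = grand mean + its random-effect deviation
blup_L <- fixef(fit)["(Intercept)"] + ranef(fit)$genotype[, "(Intercept)"]

# Broad-sense heritability H2 = vG / (vG + vGE/k + vE/(k*r))
vc  <- as.data.frame(VarCorr(fit))
vG  <- vc$vcov[vc$grp == "genotype"]
vGE <- vc$vcov[vc$grp == "genotype:env"]
vE  <- vc$vcov[vc$grp == "Residual"]
k   <- length(unique(pheno$env))   # number of environments
r   <- length(unique(pheno$rep))   # number of replications
H2  <- vG / (vG + vGE / k + vE / (k * r))

# PCA on the three color space values, scaled to zero mean and unit variance
pcs <- prcomp(pheno[, c("L", "a", "b")], scale. = TRUE)
summary(pcs)   # proportion of variance captured by PC1, PC2, ...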
Marker-trait association analysis
In a previous study, the association-mapping panel was genotyped using SLAF-seq, and 89,924 high-quality SNPs (minor allele frequency (MAF) ≥ 0.01 and integrity ≥ 0.7) were identified [29]. In this study, to avoid possible false SNPs affecting the GWAS results, a set of 42,781 SNP markers with MAF ≥ 0.05 and integrity ≥ 0.7 was used to perform marker-trait association analysis. The PCA matrix of the 42,781 SNPs was computed using the GCTA software [37]. The kinship (K) matrix was estimated using Tassel 5.0 software [38]. Marker-trait association analysis was performed for the three color space values, the BLUP values, and two PCs of the color space values using mixed linear models (PCA+K model) implemented in Tassel 5.0 software [38]. In the PCA+K model, a mixed linear model correcting for both the PCA matrix and the K matrix was employed to reduce errors from population structure and relative kinship. The uniform Bonferroni threshold was used to declare significant associations between SNPs and traits at the significance level of 0.01. In this study, the threshold was −log10(0.01/42,781) ≈ 6.6, where 42,781 is the number of SNP markers. Manhattan and QQ plots were drawn using the R package "qqman" [39].
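As a worked example of the threshold arithmetic and marker filtering described above (variable names are illustrative, and geno is a hypothetical genotype matrix coded 0/1/2 with NA for missing calls), the computation could be sketched in R as follows:

n_snps <- 42781
alpha  <- 0.01
alpha / n_snps           # 2.34e-07, the per-marker significance threshold
-log10(alpha / n_snps)   # ~6.63, the -log10(P) cutoff used here

# Filter markers by minor allele frequency and integrity (call rate)
maf <- apply(geno, 2, function(g) {
  p <- mean(g, na.rm = TRUE) / 2   # allele frequency from 0/1/2 dosages
  min(p, 1 - p)
})
integrity <- colMeans(!is.na(geno))
geno_f <- geno[, maf >= 0.05 & integrity >= 0.7]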
Candidate gene prediction
To define the regions of interest for the selection of potential candidate genes, LD blocks in which flanking SNP markers were in strong LD (r² > 0.6) were defined as candidate gene regions [40]. All genes within the same LD block (r² > 0.6) were considered candidate genes. For significant SNPs outside of LD blocks, the 99-kb (the LD decay distance) flanking regions on either side of the markers were used to identify candidate genes [29]. LD heatmaps surrounding the GWAS peaks were constructed using the R package "LDheatmap" [41].
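A sketch of the window logic described above, assuming hypothetical tables hits (significant SNPs with columns chr and pos) and genes (annotation with columns chr, start, end); the ±99-kb window stands in for the LD-block bounds when a SNP lies outside any block with r² > 0.6:

decay <- 99000  # LD decay distance in bp

# Build a candidate window around each significant SNP
hits$win_start <- pmax(hits$pos - decay, 1)
hits$win_end   <- hits$pos + decay

# Collect genes overlapping any window on the same linkage group
candidates <- do.call(rbind, lapply(seq_len(nrow(hits)), function(i) {
  subset(genes, chr == hits$chr[i] &
                start <= hits$win_end[i] &
                end   >= hits$win_start[i])
}))
candidates <- unique(candidates)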
Phenotypic variations of sesame seed coat color
To evaluate the phenotypic variation of seed coat color in the sesame association panel, the three color space values (L-value, a-value, and b-value) in each environment and the BLUP values across multiple environments were analyzed (Fig 1 and S1 Fig). Descriptive statistics for seed coat color are presented in S1 Table. The sesame association panel exhibited wide variation in seed coat color. The L-value exhibited a wide range of 10.53 to 63.40, with the coefficient of variation (CV) ranging from 14.08 to 22.94% among different environments. Similarly, the a-value ranged from 0.08 to 11.22, with CV ranging from 24.07 to 37.40%, and the b-value ranged from -0.47 to 18.75, with CV ranging from 15.51 to 24.50%. Because the L-value represents brightness ranging from black to white (0 for black, 100 for white), the a-value represents the color from red to green (positive represents red, negative represents green), and the b-value represents the color from yellow to blue (positive represents yellow, negative represents blue), the measured values and distributions indicate that black, white, red, and yellow are predominant in sesame seed coat color, which is consistent with the observed seed coat color distributions in the association panel (Figs 1 and 2 and S1 Fig). ANOVA was performed to reveal the effects of G (genotype), E (environment), and G × E (interaction between G and E) on the seed coat color trait in multiple environments. The results showed highly significant differences among G, E, and G × E (P < 0.01). The broad-sense heritability of the L-value was calculated to be 98.16%, while the broad-sense heritabilities of the a-value and b-value were 97.55% and 96.88%, respectively.
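For reference, the coefficient of variation reported above is the standard deviation expressed as a percentage of the mean; a one-line R sketch per environment, reusing the hypothetical pheno data frame from the methods sketch:

cv_by_env <- tapply(pheno$L, pheno$env,
                    function(x) 100 * sd(x, na.rm = TRUE) / mean(x, na.rm = TRUE))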
PCA was performed for the three color space values to investigate the relationships among the three variables. PC1 explained 56%~65% of the trait variance in different environments, and the three color space values showed high negative loadings on PC1. This result suggested that seed coats with high PC1 scores exhibited small values for the L-value, a-value, and b-value. PC2 explained 34%~43% of the trait variance. The cumulative proportion of variance for PC1 and PC2 was 93%~97%, and the PCA results were consistent across different environments (S2 Table), suggesting that PC1 and PC2 can be used as quantitative indices to characterize sesame seed coat color.
Genome-wide association analysis for sesame seed coat color
To uncover the genotypic variation of seed coat color in sesame, GWAS were performed for the three color space values from different environments and the BLUP values across all environments. Using the three color space values, a total of 224 significant SNPs (P < 2.34×10⁻⁷) were identified in 12 environments (Fig 3), and the phenotypic variation explained (PVE, R²) by these SNPs ranged from 1.01% to 22.10%. As shown in the quantile-quantile plots (S2 Fig), genomic inflation was well controlled. Among the 224 significant SNPs, 35 were detected in more than 6 environments, 24 in more than 8 environments, and 14 in more than 10 environments (S3 Table). Using the BLUP values, 119 significant SNPs were identified, with PVE ranging from 8.83 to 31.98% (S3 Fig). Comparing the GWAS results obtained using phenotypic data from different environments with those obtained using the BLUP values, all significant SNPs detected in more than 6 environments were also detected using the BLUP values (S3 Table).
Regarding the L-value, 38 significant SNPs were detected on 5 linkage groups (LGs), with PVE ranging from 8.75% to 21.90%. Among these SNPs, 24 were detected using the BLUP values of the L-value. The most significant SNP, S1_6648896 on LG1, was detected in all 12 environments and was also detected using the BLUP values. On LG2, 8 multi-environment significant SNPs (S2_12167303, S2_12178804, S2_12178823, S2_12194998, S2_12232894, S2_12232938, S2_12447358, S2_12247409) were significantly associated with the L-value in 7, 8, 8, 8, 7, 10, 8, and 9 environments, respectively, and were also detected using the BLUP values (S3 Table). Regarding the a-value, 17 significant SNPs were identified on LG2, LG3, and LG7, and 9 were detected using the BLUP values of the a-value. Of all the significant SNPs, S7_6839839 was detected in all 12 environments and was also detected using the BLUP values (S3 Table). Regarding the b-value, 169 significant SNPs distributed across LG1, LG2, LG3, LG4, LG5, LG6, LG7, LG8, LG9, LG10, LG11, and LG13 were identified, with PVE ranging from 8.68% to 31.35%. The Manhattan plots showed that 3 peaks on LG1, LG2, and LG8 were repeatedly detected in more than 6 environments and were also identified using the BLUP values of the b-value. Nine significant SNPs were detected on LG1. The SNP S1_6648896, with the lowest P value on LG1, was detected in 9 environments and was also detected using the BLUP values. Seventy significant SNPs were detected on LG2. S2_12168663 and S2_12337057 were both detected in 7 environments. S2_12336812 was detected in 8 environments. S2_12167303 and S2_12247358 were detected in 9 environments. S2_12026452, S2_12178804, S2_12178823, and S2_12194998 were detected in 10 environments. S2_12015779, S2_12015820, and S2_12247409 were detected in 11 environments. S2_12232894 and S2_12232938 were detected in 12 environments. These 14 SNPs were also detected using the BLUP values. On LG8, 4 multi-environment significant SNPs (S8_7910606, S8_8220220, S8_8311600, S8_8313501) were significantly associated with the b-value in 7, 6, 6, and 7 environments, respectively, and were also identified using the BLUP values (S3 Table).
GWAS for PC1 and PC2 identified 197 significant SNPs (P < 3.3×10⁻⁷); however, no significant SNPs were detected for PC3 (S4 Fig; S4 Table), which indicated that PC3 might be composed of nongenetic factors. The quantile-quantile plots are shown in S5 Fig. Among the 197 significant SNPs, 30 were detected in more than 6 environments, 19 in more than 8 environments, and 14 in more than 10 environments. For PC1, the GWAS results were consistent with those for the L-value and b-value. One hundred and eighty-eight significant SNPs were identified on 12 LGs, explaining 8.68-33.93% of the phenotypic variation. Four peaks on LG1, LG2, LG4, and LG8 were repeatedly detected in more than 6 environments. The most significant SNP on LG1, S1_6648896, was repeatedly detected in 9 environments, explaining 12.93%~20.51% of the phenotypic variation. Nineteen significant SNPs on LG2 were identified in more than 6 environments. The most significant SNP on LG2, S2_12232938, with PVE of 11.95~33.93%, was detected in 12 environments. The most significant SNP on LG4, S4_7766099, was repeatedly detected in 6 environments and explained 9.47%~15.26% of the phenotypic variation. Three significant SNPs on LG8 were detected in more than 6 environments. The most significant SNP on LG8, S8_8313501, was repeatedly detected in 8 environments and explained 9.47%~15.26% of the phenotypic variation. The GWAS results for PC2 were consistent with those for the a-value. Six significant SNPs on LG7 were detected in more than 6 environments. The most significant SNP, S7_6839839, was repeatedly detected in 12 environments and explained 14.14%~26.18% of the phenotypic variation.
Candidate genes associated with sesame seed coat color
To predict the putative genes associated with sesame seed coat color, we focused on the most reliable and stable peaks on the different LGs, namely S1_6648896, S2_12232938, S7_6839839, and S8_8313501 (Fig 4). The haplotype analysis showed that the SNPs S1_6648896, S2_12232938, and S7_6839839 were all in genomic regions in a state of linkage equilibrium, while S8_8313501 was involved in a 213-kbp LD block. Within the LD block (S8_8313501), or within 99 kbp on either side of the SNPs (S1_6648896, S2_12232938, and S7_6839839), a total of 21, 20, 31, and 20 genes were identified, respectively (S5 Table). Of the 92 genes, 26 had no definite annotation concerning their biological functions, and 12 were annotated as putative or probable proteins. The remaining 54 genes had domains of known function. Gene ontology (GO) analysis indicated that 40, 39, and 31 genes were involved in the cellular component, molecular function, and biological process categories, respectively. In the cellular component category, these genes were grouped into the cell (39 genes), cell part (39 genes), and organelle (36 genes) subcategories. Within the molecular function category, the majority of genes were involved in catalytic activity (14 genes), binding (15 genes), and transcription regulator activity (6 genes). In the biological process category, most genes were annotated to metabolic process (23 genes), cellular process (31 genes), and response to stimulus (20 genes).
Discussion
GWAS has become an efficient and powerful tool for identifying genetic variations and loci responsible for agronomically important traits. In 2015, a GWAS of oil quality and agronomic traits with 705 sesame lines identified several causative genes, demonstrating the feasibility of GWAS in sesame [27]. In the present study, the panel of sesame accessions with wide geographic distribution, plentiful phenotypic variation, sufficient genetic variation, and weak population structure is advantageous for GWAS implementation [29]. However, the reliability of GWAS is usually disturbed by phenotypic variance associated with the environment. Multi-environment analysis and unbiased predictions are practical ways to correct for this error [25]. The trait experiments were performed at four sites belonging to three climate classifications: temperate monsoon climate (PY and SQ), subtropical monsoon climate (NY), and tropical marine monsoon climate (SY). Among the four sites, there are large differences in geographic position and climate. ANOVA showed significant variation in G, E, and G×E. This result suggested that sesame seed coat color is controlled by genetic and environmental effects and their interaction. GWAS for the coat color traits were then performed in 12 environments, and many significant SNPs were detected only in a specific environment. However, the SNPs detected in more than 6 environments were also detected using the BLUP values in a multi-environment trial analysis. These multi-environment SNPs are reliable and will be used for further analysis. Therefore, multi-environment trial analysis can effectively avoid environmental influences and is the way forward in the study of complex quantitative traits.
PCA is an effective approach for collecting information from complex, multiple traits that are highly correlated; furthermore, it is valuable for extracting underlying factors for traits by dimension reduction [35]. As PC scores represent integrated variables, they can yield robust, reliable GWAS results [35]. In this study, PCA on the three color space values (L-value, a-value, and b-value) revealed that PC1 captured 56%~65% of the variation for all values, and PC2 captured 34%~43% of the variation for the L-value and a-value. The cumulative proportion of variance for PC1 and PC2 was 93%~97% (S2 Table). Thus, PC1 and PC2 are good indicators of sesame seed coat color. Using the three color space values, 224 significant SNPs (P < 2.34×10⁻⁷) were identified. After combining the same SNPs associated with the different seed coat color values (L-value, a-value, and b-value), 185 SNPs remained. Using the PC scores (PC1 and PC2) for GWAS, 201 significant SNPs associated with the PCs were identified. The GWAS results for PC1 and PC2 were consistent with those for the three color space values, indicating that PC1 and PC2 can represent the three color space values in GWAS.
To further confirm the significant SNPs associated with seed coat color identified in this paper, we compared our GWAS results with QTLs from previous linkage studies. Wang et al. [15] identified 4 QTLs (qSCa-4.1/qSCb-4.1/qSCl-4.1, qSCa-8.1/qSCb-8.1/qSCl-8.1, qSCl-8.2, and qSCb-11.1/qSCl-11.1) for seed coat color in a RIL population. Most of these QTLs (3/4) were verified by significant SNPs in the present study. Eighteen significant SNPs on LG2 were mapped to the confidence interval of the QTL qSCa-4.1/qSCb-4.1/qSCl-4.1. One significant SNP (S1_6648896) and three significant SNPs (S1_9324398, S1_9330855, and S1_9332327) on LG1 were mapped to the confidence intervals of the QTLs qSCa-8.1/qSCb-8.1/qSCl-8.1 and qSCl-8.2, respectively. These comparison results corroborated our findings. Zhang et al. [6] found 4 QTLs (QTL1-1, QTL11-1, QTL11-2, and QTL13-1) for sesame seed coat color; however, because AFLP markers were mainly used in the study of Zhang et al., on an independent genetic map, it is difficult to determine the relationship of the present loci to them. The remaining SNPs, which were not mapped to the confidence intervals of reported QTLs, indicate the likely existence of new seed coat color-related loci or environment-specific SNPs.
Considering the SNPs detected in the most environments and with high genetic effects, 4 reliable and stable peaks on 4 LGs were focused on, and 92 candidate genes in the vicinity of the 4 significant SNPs were identified. For the 4 SNPs (S1_6648896, S2_12232938, S7_6839839, and S8_8313501), the annotated genes included pentatricopeptide repeat-containing proteins (SIN_1006005, SIN_1006010, SIN_1012034), malate dehydrogenase (SIN_1006006), basic helix-loop-helix (bHLH) DNA-binding superfamily proteins (SIN_1006020 and SIN_1024895), cytochrome P450 94A2 (SIN_1006022), polyphenol oxidases (SIN_1016759 and SIN_1023237), F-box/LRR-repeat protein 3 (SIN_1023224), etc. SIN_1016759 encodes a predicted polyphenol oxidase (PPO), which participates in the oxidation step in the biosynthesis of proanthocyanidin, lignin, and melanin, and produces black pigments via the browning reaction in plants [42-44]. In sesame, Wei et al. [27] reported that SIN_1016759 was strongly associated with seed coat color, and Wang et al. and Wei et al. [15,18] showed that SIN_1016759 is located in the genomic region of a major QTL for seed coat color. qRT-PCR showed that SIN_1016759 was highly expressed in black sesame seeds from 11 to 20 days but not expressed in white sesame seeds [18], indicating that SIN_1016759 may play an important role in the formation of the black seed coat color. SIN_1023237 encodes laccase-3, which belongs to the multicopper oxidase family [45]. Laccase enzymes have been shown to contribute to cell morphology, secondary cell-wall biosynthesis, and resistance to biotic and abiotic stresses in plants [46]. They also play major roles in proanthocyanidin and lignin deposition and are involved in browning reactions of seed coat pigments [42,43,47]. SIN_1006022 encodes a cytochrome P450 protein and may be related to the formation of seed coat color. Cytochrome P450s play important roles in the biosynthesis of flavonoids and their colored derivatives, the anthocyanins, which are responsible for the pigmentation pattern of vegetative parts and seeds [48-51]. SIN_1023226 encodes a WRKY-type transcription factor, one of the WRKY family members [52]. The WRKY gene family in flowering plants encodes a large group of transcription factors that play essential roles in diverse stress responses and in developmental and physiological processes [53]. SIN_1024895 encodes a bHLH transcription factor. Plant bHLHs are involved in secondary metabolism (including the flavonoid pathway), organ development, and responses to abiotic stresses [54-56]. Previous reports have shown that WRKY and bHLH genes are involved in the regulation of seed coloration [57-60].
Conclusions
In this study, GWAS for sesame seed coat color were performed using 42,781 SNPs with 366 sesame germplasm lines in 12 environments. GWAS for three color space values, BLUP values from a multi-environment trial analysis and PCs of three color space values identified 224, 119, and 197 significant SNPs, respectively. The 35 significant SNPs detected in more than 6 environments were also detected using the BLUP values. Furthermore, GWAS results for PCs were consistent with those for three color space values. Multiple QTLs reported in previous studies were verified by significant SNPs in the present study, corroborating the GWAS results. Moreover, the most reliable and significant SNPs (S1_6648896, S2_12232938, S7_6839839 and S8_8313501) on 4 different LGs were focused on, and 92 candidate genes were identified. The GWAS showed great power in uncovering genetic variation in sesame seed coat color, and the results will provide new insights into the genetic basis of sesame seed coat color.
"year": 2021,
"sha1": "59f210c11d3d03b2ae463616abbfdc66c7fea06b",
"oa_license": "CCBY",
"oa_url": "https://journals.plos.org/plosone/article/file?id=10.1371/journal.pone.0251526&type=printable",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "8cdfc115e350ccba94b0a1ad25cc942cc08d5d2e",
"s2fieldsofstudy": [
"Agricultural And Food Sciences"
],
"extfieldsofstudy": [
"Medicine",
"Biology"
]
} |
Mapping of the exterior architecture of the mesocephalic canine brain
Despite extensive studies published on the canine brain, inconsistencies and disagreements in the nomenclature and representation of various cerebral structures continue to exist. This study aimed to create a comprehensive mapping of the external architecture of the mesocephalic canine brain with a focus on the major gyri and sulci. Standardized dissection techniques were used on 20 ethically sourced brains obtained from 6 to 10-year-old dogs that were free of neurological disorders. Distinct gyri and sulci with unique locations and bordering structures were observed. Thus, it was possible to identify the often-ignored subprorean gyrus. In addition, this study was able to illustrate the unique locations and bordering structures of gyri and sulci. The findings can contribute to a consensus among researchers on the canine brain anatomy and assist in clarifying the inconsistencies in cerebral structure representation. Furthermore, the results of this study may hold significant implications for veterinary medicine and neuroscience and serve as a foundation for the development of diagnostic and therapeutic approaches for various neurological diseases in dogs. Our findings offer valuable insights into the unique evolutionary adaptations and specialized behaviors of the canine brain, thereby increasing awareness about the neural structures that enable dogs to demonstrate their unique traits.
Materials and methods
All animal experiments were approved by the Animal Research Ethics Committee of United Arab Emirates University and conducted in accordance with relevant guidelines and regulations. The reported experiments comply with the ARRIVE guidelines. In our research, we carefully selected dogs from three representative mesocephalic breeds to ensure a comprehensive understanding of the cerebral hemisphere's architecture. This selection involved 7 Golden Retrievers, 7 Labrador Retrievers, and 6 German Shepherds. These breeds were chosen because they represent a standard head shape, avoiding the extreme variations seen in dolichocephalic and brachycephalic dog breeds. This strategy ensures that our findings provide a balanced representation of the canine cerebral structure. The brains were obtained from dogs (aged 6-10 years) that had died for various reasons and were free from any known neurological or behavioral disorders. The brains were collected from local animal clinics after obtaining permission from their owners. While our study provides insights into the external architecture of the canine brain, it is important to note that our findings are not influenced by parameters such as age and sex.
To study the external architecture of the dog brain, 10% formaldehyde was injected into the right and left common carotid arteries of all studied samples. A large window was created in the dorsal wall of the skull using a surgical saw. The head was then placed in a container containing 10% formaldehyde and stored in a cold room at 5 °C for 2 weeks, a duration considered optimal for properly fixing the brain tissues. The skull was then carefully opened using surgical tools to expose the brain, which was subsequently extracted. The meninges were removed from the surfaces of the hemispheres to reveal the cerebral gyri and sulci, which were examined with the naked eye and captured using a Sony a7R II camera.
To study the medial surface of the cerebral hemispheres, a surgical blade was used to cut the brain longitudinally through the longitudinal fissure. The two halves of the brain were separated and flattened to allow for better visualization of the medial surfaces. The cerebral gyri and sulci on the medial surface were examined using the same techniques as those used for the lateral and dorsal surfaces.
Finally, Adobe Photoshop was used to process the images and generate the figures. The different cerebral gyri and sulci were colored and labeled using a standardized coloring technique to improve the clarity of the figures. The brains were handled with care throughout the entire process to ensure the accuracy of the results.
Results
Several distinct gyri and sulci, each with their unique location and bordering structures, were observed. The beginning and end of the sulci were determined, and each of the identified gyri was named.
Sulci of the canine brain
Sulci are fissures that define the gyri in the brain and represent extensions of the subarachnoid space. The length and depth of these structures can vary; some are continuous, while others are interrupted (Figs. 1, 2, and 3). The nomenclature of the sulci differs significantly among studies, leading to various interpretations.
Pseudosylvian fissure
In this study, the pseudosylvian fissure was identified as the most readily recognizable landmark on the lateral hemispheric surface. Moreover, the pseudosylvian fissure was found to divide the lateral rhinal sulcus into rostral and caudal parts and was entirely enveloped by the Sylvian gyrus (Fig. 1).
Presylvian sulcus
The presylvian sulcus was identified as a clear sulcus that formed the caudal border of the prorean gyrus. It separated the prorean and frontal gyri, located in front of the sulcus, from the rostral composite and pre-cruciate gyri, positioned behind it (Figs. 1 and 3).
Lateral rhinal sulcus
The lateral rhinal sulcus lay parallel to the ventral edge of the brain and extended from the caudal end of the olfactory bulb to the caudal end of the cerebrum. It consisted of two parts, the rostral and caudal parts of the lateral rhinal sulcus, which were separated at the point of the pseudosylvian fissure (Fig. 1). The caudal part of the lateral rhinal sulcus presented as a deep groove that separated the piriform lobe, the primary olfactory cortex, from the caudal composite gyrus (Figs. 1 and 4).
Endorhinal sulcus
The endorhinal sulcus, observable in the lateral view, was situated between the olfactory peduncle and the subprorean gyrus (Figs. 1 and 4).
Ectosylvian sulcus
The ectosylvian sulcus was identified as a prominent sulcus that surrounded the Sylvian gyrus from the outside. It had a curved shape with three distinct parts: rostral, middle, and caudal. The sulcus demonstrated a parabolic pattern in the temporal lobe of the canine brain. It was named the ectosylvian sulcus owing to its location on the lateral surface of the brain, adjacent to the ectosylvian gyrus. The sulcus separated the Sylvian gyrus from the dorsally adjacent ectosylvian gyrus. Furthermore, this sulcus could be seen from the lateral view and partially from the dorsal view (Figs. 1 and 3).
Suprasylvian sulcus
The suprasylvian sulcus was identified as a long, continuous sulcus that extended dorsolaterally. Dorsally, it delineated the border between the ectosylvian and suprasylvian gyri. The suprasylvian sulcus was divided into rostral, middle, and caudal portions. The rostral portion intersected the rostral part of the suprasylvian gyrus and the rostral composite gyrus. This sulcus could be seen from the lateral and dorsal views (Figs. 1 and 3).
Ectomarginal sulcus
The ectomarginal sulcus was located dorsally in the caudal third of the brain and lay parallel to the marginal sulcus. It was clearly visible because it served as a boundary on the caudal third of the ectomarginal and middle suprasylvian gyri. From the caudal view, it coincided with the transverse fissure, shifted almost one-third perpendicularly (Figs. 1 and 3).
Marginal sulcus
The marginal sulcus ran parallel to the longitudinal fissure on the dorsal side of the brain. It started around the middle dorsal region of the brain and ran caudally to the caudal end of the brain (Fig. 3).
Ansate sulcus
The ansate sulcus was a craniomedial extension of the marginal sulcus, bent toward the longitudinal fissure. It had a unique S-shaped or curved appearance, resembling an inverted U or V, with the convex portion facing the front of the brain and the concave portion facing the back (Figs. 1 and 3). This sulcus was better viewed from the dorsal aspect of the brain.
Coronal sulcus
The coronal sulcus appeared as a rostrally situated extension of the marginal sulcus. It was located between four gyri: the post- and pre-cruciate gyri dorsally, and the rostral extension of the suprasylvian gyrus together with the dorsal aspect of the rostral composite gyrus (Figs. 1 and 3).
Cruciate sulcus
The cruciate sulcus presented as a small, deep, and distinct sulcus that lay perpendicular to the longitudinal fissure. The pre-cruciate and post-cruciate gyri were located rostral and caudal to this sulcus, respectively. The sulcus could be viewed from the lateral and dorsal aspects of the brain. It extended medially and separated the pre- and post-cruciate gyri on the medial surface of the cerebral hemisphere (Figs. 1 and 3).
Post-cruciate sulcus
The post-cruciate sulcus was a small, U-shaped sulcus found between the post-cruciate and marginal gyri. It extended laterally to the coronal sulcus and could be viewed dorsally only (Fig. 3).
Prorean sulcus
The prorean sulcus was a small sulcus located around the frontal region of the brain, between the frontal and prorean gyri (Figs. 1 and 3).
Ectogenual sulcus
The ectogenual sulcus separated the genual gyrus from the frontal gyrus in the medial aspect of the canine brain.
It usually appeared as a linear structure that extended rostro-caudally (Fig. 2).
Genual sulcus
The genual sulcus separated the rostral portion of the cingulate gyrus from the genual gyrus. It had a curved path on the rostromedial aspect of the canine brain (Fig. 2).
Splenial sulcus
The splenial sulcus was a ramified sulcus that separated the post-cruciate gyrus from the pre-splenial gyrus rostrally and bifurcated into rostral and caudal rami, thereby separating the cingulate gyrus from the splenial gyrus completely (Fig. 2). It consisted of two discontinuous, caudally situated sulci: one running almost parallel to the longitudinal fissure and the other drawn perpendicular to the transverse fissure, separating the marginal gyrus from the splenial gyrus (Fig. 2).
Sylvian gyrus
The Sylvian gyrus was found on the lateroventral side of the cerebral cortex and could be seen entirely from the lateral side of the brain. It was more compact and spread out, unlike the suprasylvian gyrus. The gyrus was further divided into rostral and caudal portions (Figs. 3 and 4) and encompassed the Sylvian fissure (Fig. 1). It was bordered by the rostral composite gyrus cranially, the ectosylvian gyrus dorsally, and the caudal composite gyrus caudally (Fig. 4).
Ectosylvian gyrus
The ectosylvian gyrus was located dorsal to the Sylvian gyrus. Like the Sylvian gyrus, it was divided into rostral, middle, and caudal parts. The ectosylvian gyrus was bordered by the rostral part of the suprasylvian gyrus rostrally, the rostral composite gyrus rostroventrally, the suprasylvian gyrus dorsally, and the Sylvian and caudal composite gyri caudally (Figs. 3 and 4).
Suprasylvian gyrus
The suprasylvian gyrus in the canine brain was thin. It extended rostrocaudally from the rostral composite gyrus to the caudal composite gyrus and consisted of rostral, middle, and caudal portions. It extended dorsal to the entire ectosylvian gyrus and ventral to the pre-cruciate, post-cruciate, marginal, and ectomarginal gyri (Figs. 3 and 4).
Ectomarginal gyrus
The ectomarginal gyrus was located between the marginal and caudal marginal gyri. It was positioned caudodorsally and bordered the suprasylvian gyrus (middle and caudal portions) and the cerebellum ventrally. The ectomarginal gyrus was seen within the occipital cortex (Figs. 3 and 4).
Marginal gyrus
The dorsal view offered the best visibility of the marginal gyrus, which emerged as the most dorsal structure when observed from the lateral aspect of the brain. It was located above the suprasylvian gyrus, mainly above its rostral and middle portions. It was bordered by the post-cruciate gyrus rostrally and the ectomarginal gyrus caudally. Part of the marginal gyrus could be seen caudally, adjacent to the ectomarginal gyrus, and this portion was referred to as the caudal marginal gyrus. Additionally, it extended to the medial aspect of the cerebral cortex in the outermost dorsocaudal portion of the canine brain (Figs. 3, 4, and 5).
Post-cruciate gyrus
The post-cruciate gyrus was located dorsal to the rostral part of the suprasylvian gyrus and rostral to the marginal gyrus. This gyrus was part of the parietal cortex and was located behind the cruciate sulcus, which separated the frontal and parietal cortices (Figs. 3 and 4).
Pre-cruciate gyrus
The pre-cruciate gyrus was located between the frontal and post-cruciate gyri. Ventrally, it was bordered by the rostral composite gyrus and the rostral part of the suprasylvian gyrus. It was separated from the frontal gyrus and the post-cruciate gyrus via the suprasylvian and cruciate sulci, respectively (Figs. 3 and 4).
Frontal gyrus
The frontal gyrus was located within the prefrontal cortex, dorsal to the prorean gyrus and rostral to the pre-cruciate gyrus; it stretched its ramus medially between the olfactory bulb and the pre-cruciate gyrus (Figs. 3 and 4).
Subprorean gyrus
The subprorean gyrus was found below, or ventral to, the prorean gyrus. This small gyrus stretched horizontally and was located dorsal to the olfactory peduncle and rostral to the lateral olfactory gyrus (Fig. 4).
Rostral composite gyrus
As a central structure, the rostral composite gyrus in the canine brain was surrounded by numerous gyri. It was bordered by the prorean and rostral suprasylvian gyri rostrally, the lateral olfactory gyrus ventrally, and the rostral ectosylvian and Sylvian gyri caudally (Figs. 3 and 4).
Caudal composite gyrus
The caudal composite gyrus was located lateroventrally, dorsal to the piriform lobe and ventral to the caudal Sylvian and ectosylvian gyri (Fig. 4). It was located within the ventral temporal cortex.
Lateral olfactory gyrus
Our study has revealed that the lateral olfactory gyrus was located lateral to the lateral olfactory tract (Fig. 4). It is also known as the para-olfactory gyrus.
Cingulate gyrus
The cingulate gyrus was the innermost medial gyrus, located just above the corpus callosum. It extended rostrocaudally and bordered all the medial gyri except the marginal gyrus (Fig. 5).
Straight gyrus
The straight gyrus, also known as gyrus rectus, was positioned on the medial aspect of the frontal lobe, caudodorsal to the olfactory bulb, and ventral to the genual gyrus (Fig. 5).
Genual gyrus
The genual gyrus was located at the medial aspect of the frontal lobe of the canine brain; it was bounded by the frontal gyrus dorsally, straight gyrus ventrally, and cingulate gyrus caudally (Fig. 5).
Presplenial gyrus
The presplenial gyrus was small and quadrilateral in shape, located within the medial aspect of the parietal cortex adjacent to the splenium of the corpus callosum (Fig. 5). It was connected to other medially located gyri, such as the post-cruciate, cingulate, marginal, and splenial gyri.
Splenial gyrus
The splenial gyrus was located in the medial aspect of the caudal third of the cerebral cortex, extending caudodorsally from the parietal lobe to the occipital lobe. It was found beneath the marginal gyrus and above the cingulate gyrus.
Discussion
This study on the nomenclature and representation of the canine cerebral sulci and gyri sought to address a significant issue that persists in the field of veterinary neuroscience. It aimed to provide an extensive topographic representation of the gross anatomy and nomenclature of the mesocephalic canine brain from a variety of perspectives, including lateral, medial, and dorsal views. Since there is some ambiguity regarding the definition of the beginning and end of gyri and sulci, and some inconsistencies and disagreements remain in their nomenclature, subjective markings are necessary. To overcome this, we created a colored, labeled mesocephalic canine brain showing the gyri and sulci, as well as their borders. For all gyri and sulci identified in our study, we primarily adhered to the terminology set forth by the Nomina Anatomica Veterinaria guidelines 16. Additionally, we expanded our reference base by consulting major anatomy textbooks 17-21. Several important characteristics of the major sulci and gyri are outlined and highlighted in this study. The present study revealed that the canine brain has a prominent longitudinal fissure between its two cerebral hemispheres and a rougher medial surface with clearly developed gyri and sulci. The more extensive gyri and sulci may point to a higher level of cortical complexity and processing power, whereas the deeper longitudinal fissure may indicate a greater degree of functional specialization between the two cerebral hemispheres. In contrast, the ovine brain appears to have a simpler pattern of sulci and gyri than the canine brain 22. According to Tillet et al. 23, the ovine brain has a shallow longitudinal fissure and a relatively smooth medial surface.
In dogs, the frontal and temporal lobes overhang the insula bilaterally, forming a prominent sulcus called the pseudosylvian fissure, located on the lateral surface. In contrast, Schmidt et al. 24 reported that the pseudosylvian fissure is less well defined in dogs than in ruminants. Different names have been used in the literature to describe the pseudosylvian fissure, i.e., lateral Sylvian fissure 25, sulcus pseudosylvius 13, sulcus sylvius 14, and lateral sulcus 15. The pseudosylvian fissure is the base around which three arched gyri (the Sylvian, ectosylvian, and suprasylvian gyri) are arranged, each separated by a corresponding sulcus.
Although the ectosylvian and suprasylvian sulci are present and well studied in numerous dog breeds, their prominence varies depending on their location and how they are depicted in the anatomical presentation 26. In ruminants, the ectosylvian sulcus has a non-continuous course and is divided into rostral and caudal parts by the pseudosylvian fissure 24. In our study, the ectosylvian sulcus in dogs was identified as a prominent sulcus that bordered the Sylvian gyrus from the outside, had a curved shape with three distinct parts (rostral, middle, and caudal), and demonstrated a parabolic pattern in the temporal lobe of the canine brain. The suprasylvian sulcus in the canine brain is an important landmark because it functions as a separation line between the dorsal and lateral hemispheric surfaces. It may be considered a prominent fissure in the camelid brain because it runs through the three limbs, extending from the caudal end of the coronal gyrus to the rostral part of the occipital gyrus 15,27. In the present study, the lateral rhinal sulcus was subdivided into rostral and caudal parts, located on the lateroventral surface of the hemisphere. The caudal lateral rhinal sulcus was found to be a shallow groove that serves as an important demarcation line for the piriform lobe (Fig. 1).
On the medial surface, the splenial sulcus was the most extensive and deepest sulcus, extending dorsally from the occipital pole and then rostrally around the cingulate gyrus, continuing on the dorsal surface to become the cruciate sulcus. The current findings are consistent with those reported by Datta et al. 28 regarding the position and extension of the splenial and suprasplenial sulci. In calves, by contrast, the splenial sulcus frequently joins the genual sulcus, whereas in sheep and goats it terminates on the dorsal surface of the hemisphere 24.
The ectogenual sulcus was identified as an extension of the genual sulcus, which separated the genual gyrus from the straight gyrus (Fig. 5). This finding differs from those reported in other studies, which mainly focused on the anatomical representation of the genual sulcus on the rostral medial aspect of the frontal lobe 8. In the present study, we successfully identified a previously unrecognized segmental and rostral extension of the genual sulcus, which we have distinguished as a separate feature named the ectogenual sulcus.
In our study, the cruciate sulcus was distinctly deep and easy to identify as it extended from medial to dorsolateral, displacing the surrounding coronal gyrus laterally on each side and forming a cruciate pattern when viewed dorsally. In ruminants, the ansate sulcus, extending from the longitudinal fissure to the dorsal surface and merging with the coronal sulcus, resembles the cross-like arrangement seen in the dog 24. The continuity usually observed between the ansate and coronal sulci was not found in the case of the cruciate sulcus 24. The ansate sulcus had a distinct S-shaped appearance in the canine brain, and we observed that it appears to be a rostromedial expansion of the marginal sulcus in dogs. The precise anatomical presentation of the prorean, coronal, and marginal sulci, as described in our study, was overlooked in several studies owing to its independence from the marginal sulcus. The marginal sulcus demonstrated a longitudinal pattern in dogs, extending up to the caudal border while following a straight path. In camels, however, it shows a wavy pattern, arbitrarily starting from the median position of the hemisphere and ending at the occipital lobe 15.
The canine brain has been the subject of numerous studies. However, disagreements persist regarding the nomenclature and representation of cerebral structures, especially gyri and sulci. The names of these structures vary between publications, with some structures carrying multiple names. For example, in some articles, the gyrus suprasylvius 19 is also known as the gyrus ectomarginalis or gyrus ectosagittalis 18. The gyrus postcruciatus 17-19 is also known as the gyrus sygmoideus posterior 29,30. Other contradictions include referring to the gyrus marginalis 17,19 as the gyrus lateralis 30. One reason for these inconsistencies is the absence of distinct borders for specific brain lobes in dogs, unlike in humans, where features such as the central sulcus clearly define the boundary between the frontal and parietal lobes 31. Additionally, variations in sulcal patterns or lengths can lead to differences in the surface morphology of canine brains 32.
In the present study, the label map emphasized the cerebral sulci and gyri but did not include any lobar distinctions, owing to the varying definitions of brain lobes in different textbooks, which make it challenging to create a standardized lobar distinction. For example, some authors 20,33 include the post-cruciate gyrus in the frontal lobe, while others include it in the parietal lobe 13,17,18,21. The parieto-occipital and temporo-occipital boundaries are not clearly defined because of variations in how far the occipital lobe is thought to extend in both the rostral and ventral directions 17,20,21,33.
Our observations regarding the Sylvian, ectosylvian, and suprasylvian gyri agreed with those described by Datta et al. 28, Louw 34, and Czeibert et al. 35. The three gyri shared similar anatomical representations and relative positions across the various dog breeds, ungulates, and camels examined in these studies. In the present study, these structures appeared to follow a stair-like arrangement in dogs. We also found that the marginal, ectomarginal, and suprasylvian gyri extend caudally to form the occipital cortex. The occipital region in the dog brain is relatively narrow compared with that of other animals, such as the camel, where the gyrus occipitalis is more prominent and clearly separated from the surrounding gyri 27.
There are two dominant gyri arranged side by side in the dorsocaudal region of the hemisphere: the marginal and ectomarginal gyri. A similar finding was reported by Evans and de Lahunta 19, Louw 34, and Czeibert et al. 35.
The pre- and post-cruciate gyri were larger and located more dorsally in the canine brain than in the camel brain 15,27,36. Gerussi et al. 37 suggested similarities between the post-cruciate gyrus of the canine and ovine brains in terms of somatotopic organization. However, when comparing the equine and canine brains, the former has a less prominent post-cruciate gyrus, which may explain why dogs have faster reaction times and better somatosensory processing than horses 38.
We observed that the frontal gyrus could be found on both the lateral and medial sides of the canine cerebral hemisphere; it is situated rostral to the pre-cruciate gyrus, adjacent to the genual gyrus on the medial side and to the prorean gyrus on the lateral side, similar to what was reported by Johnson et al. 8. In the camel brain, the frontal gyrus is not present 15,27,39.
Consistent with the findings reported by Czeibert et al. 35 and Andrews et al. 40, the rostral and caudal composite gyri were located on the lateral hemispheric side. They were named the rostral and caudal composite gyri based on their position relative to the Sylvian and ectosylvian sulci in the canine brain. These two gyri are absent in the camel brain 15,27.
One of the most striking features noticed in the present study was a small, vertically stretched gyrus located ventral to the prorean gyrus, dorsal to the olfactory peduncle, and rostral to the lateral olfactory gyrus. This gyrus, called the subprorean gyrus, is an often overlooked segment of the canine brain 28,35. However, Johnson et al. 8 noted it in their study of the canine brain. We were able to locate this segment successfully in all the studied samples.
The lateral olfactory gyrus is relatively prominent and easy to identify in the lateromedial part of the canine brain 41. The heightened significance of olfaction in the survival and perception of dogs has led to the development of more advanced olfactory pathways in their brains 42. We anticipate that further research on the cerebral arterial branches in dogs, similar to previous studies in other animals 43,44, will advance the comprehension and identification of the cerebral sulci and gyri.
In our study, we focused on mesocephalic dog breeds while excluding both brachycephalic and dolichocephalic breeds. This helped us minimize variations in brain shape within our template and ensure greater consistency. It should be noted that our findings are not influenced by factors such as age, sex, or breed-specific variations.
Conclusions
In conclusion, this study thoroughly analyzes the distinctive anatomical features of the mesocephalic canine brain. In addition to helping veterinary surgeons pinpoint the precise gyri and sulci during canine brain surgery, the findings of this study may help reduce the inconsistencies and discrepancies among studies regarding the gross anatomy of the canine brain. Furthermore, this study may assist in clarifying the variations and disagreements about the nomenclature and representation of the canine cerebral structures and aid in establishing a common understanding. We believe that these findings are crucial for veterinary medicine, animal behavior, and neuroscience research, as they provide a solid foundation for developing diagnostic and therapeutic approaches for various neurological diseases in dogs.
Figure 1. Detailed illustration of the sulci on the lateral surface of the canine brain.
Figure 2. Detailed illustration of the sulci on the medial surface of the canine right cerebral hemisphere.
Figure 3. Dorsal view of the canine brain showing the cerebral sulci (right cerebral hemisphere) and the cerebral gyri (left cerebral hemisphere).
Figure 4. Lateral view of the canine brain showing the cerebral gyri of the left cerebral hemisphere.
Figure 5. Medial view of the canine brain showing the cerebral gyri of the right cerebral hemisphere.
"year": 2024,
"sha1": "c2153db7d3cd5d76858e6672351fa5519b7c6a9a",
"oa_license": "CCBY",
"oa_url": null,
"oa_status": null,
"pdf_src": "PubMedCentral",
"pdf_hash": "3075587b6b6cb772cfc48f816dee883df3571146",
"s2fieldsofstudy": [
"Biology",
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Citrobacter koseri: A rare cause of an epidural spinal abscess
Background: Citrobacter koseri, a Gram-negative organism, rarely causes an epidural spinal abscess. Case Description: A 50-year-old male presented with mild paraparesis attributed to a magnetic resonance (MR)-documented spinal epidural abscess (SEA) at the T10 level. Following surgical debridement, cultures grew C. koseri, a rare Gram-negative organism. The abscess was subsequently managed with a prolonged course of antibiotics, resulting in complete symptomatic and MR-documented radiological resolution. Conclusion: A 50-year-old male presented with a T10 SEA attributed to a rare Gram-negative organism, C. koseri. The abscess was appropriately managed with surgical decompression/debridement, followed by prolonged antibiotic therapy.
INTRODUCTION
A 50-year-old male presented with a mild thoracic paraparesis attributed to a magnetic resonance (MR)-documented T10 epidural abscess. At surgery, the pathology proved to be a rare Gram-negative organism, Citrobacter koseri. [14] Following a decompressive laminectomy with abscess debridement and prolonged postoperative antibiotic therapy, the patient's symptoms resolved along with the radiographic findings.

Intraoperative monitoring revealed a partially calcified lesion compressing the cord; intraoperatively, after the cord was decompressed, the somatosensory potentials significantly improved. The intraoperative frozen section revealed leucocyte infiltration and calcification of the lesion, but initial postoperative cultures were negative. Therefore, the patient was routinely discharged without a diagnosis of infection.
Return 7 days later with infection diagnosed as C. koseri
The patient returned 7 days later with a wound infection; once reopened, a brown discharge was cultured. Broad-spectrum antibiotics (AB) (i.e., Ertapenem and Vancomycin) were immediately started; 5 days after the second surgery, cultures from both the first and second surgeries documented a rare Gram-negative organism, C. koseri. Within 8 postoperative days, the peripheral white blood cell count and acute phase reactants normalized; treatment was then continued with Ciprofloxacin for an additional 2 weeks, along with 4 days of Ertapenem. Six months later, the thoracic magnetic resonance imaging showed complete resolution of the epidural abscess/wound infection, and the patient fully recovered [Figure 2].
Risk factors for spinal epidural abscesses (SEAs) due to C. koseri
Citrobacter is a nonsporulating, facultatively anaerobic, Gram-negative bacterium of the Enterobacteriaceae family that was first isolated in 1932 by Werkman and Gillen. [5,15] It is frequently found in water, soil, food, and the intestines of mammals. [13] These infections can occur in the urinary tract (39%), gastrointestinal system (27%), wound/decubitus ulcers (10%), pulmonary, or other sites (11%). Although they typically occur in patients with diabetes mellitus, intravenous drug use, or compromised hosts (i.e., patients >60 years of age and neonates), other cases have been reported in younger patients without clear risk factors. [2,6,7,9,12,14]

Treatment of choice for SEA due to C. koseri

The treatment of choice for epidural spinal abscesses in patients with significant neurological deficits is often operative decompression (i.e., laminectomy)/aggressive operative debridement, followed by 4-16 weeks of postoperative intravenous AB. [7,14] While some infections may resolve by the 4th postoperative week, others will require total treatment durations of six or more weeks [Table 1]. [6,9,10,12] Notably, symptoms and inflammatory markers can help guide the efficacy and duration of antibiotic therapy. [11]

Antibiotic sensitivity of C. koseri

C. koseri is typically sensitive to ciprofloxacin, carbapenems, third-generation cephalosporins, piperacillin-tazobactam, aminoglycosides, and trimethoprim-sulfamethoxazole, but is typically markedly or moderately resistant to multiple other AB [Table 2]. [1,3,4,8]

CONCLUSION

SEA caused by C. koseri is very rare. However, once recognized in conjunction with significant neurological deficits, it typically requires surgical decompression/aggressive debridement and prolonged postoperative antibiotic therapy.
Declaration of patient consent
Patient's consent not required as patient's identity is not disclosed or compromised.
Financial support and sponsorship
Nil.
Conflicts of interest
There are no conflicts of interest. | 2023-03-12T15:18:46.617Z | 2023-03-10T00:00:00.000 | {
"year": 2023,
"sha1": "1ce3208b908d04749e19dab75b5418e4c3872032",
"oa_license": "CCBYNCSA",
"oa_url": "https://surgicalneurologyint.com/wp-content/uploads/2023/03/12186/SNI-14-83.pdf",
"oa_status": "HYBRID",
"pdf_src": "PubMedCentral",
"pdf_hash": "ae41fa915d09ea3a95c758fd3462e13a546d56c7",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": []
} |
248695618 | pes2o/s2orc | v3-fos-license | Patient safety improvement with the patient engagement in Iran: A best practice implementation project
Background Patient engagement in patient safety is aimed at increasing the awareness and participation of patients in error-prevention strategies. The aim of this project was to improve patient safety through patient engagement within the local context of a maternity hospital by implementing best practice. Methods A clinical audit was conducted using the JBI Practical Application of Clinical Evidence System tool. The current project was conducted in the surgical ward of Shahid-Beheshti maternity hospital, Iran. The sample size was 46 patients and 46 healthcare practitioners for both the baseline and follow-up. In phase 1, four audit criteria were used and a baseline audit was conducted for this project. In phase 2, barriers to compliance were identified, and strategies were adopted to promote best practice. In phase 3, a follow-up audit was conducted. Results The results showed varying levels of compliance with the four criteria used in this project. Criterion 1, which was related to training of healthcare practitioners on how they can support patients, had the highest compliance, at 87%, in the baseline and follow-up data collection. Furthermore, compared with the baseline data (criterion 2 = 52%; criterion 3 = 37%; criterion 4 = 61%), compliance with criteria 2, 3, and 4 notably improved to 85%, 76%, and 92%, respectively. Conclusions The present project successfully implements patient engagement in Iran and reveals varying results on compliance and increasing knowledge of healthcare practitioners and patients on evidence-based patient engagement to improve patient safety. The strategies used can facilitate the implementation of evidence-based procedures in clinical practice.
Introduction
Patient safety is the prevention of errors and adverse events associated with provision of medical care [1]. Patient safety and error reduction are the shared responsibility of all healthcare professionals, and improvement depends on recruitment, education, and performance of the whole multidisciplinary team [2,3]. A significant interest has been started internationally in involving patients in healthcare planning and service development [3,4].
The evidence has shown that significant numbers of harmful incidents, many of which are preventable, occur in hospitals [5]. Patients are an important source of information on potentially avoidable events, and their involvement can decrease medical risks and improve outcomes [6,7]. In general, patients agree that they should take an active role in preventing healthcare-related errors and are willing to engage with healthcare practitioners in safety practices [8]. Various strategies have been proposed to facilitate patient and/or family engagement in patient safety initiatives [8,9].
Patient engagement in patient safety is aimed at increasing the awareness and participation of patients in error-prevention strategies. Patient engagement in healthcare planning, service development and research is a key policy component in many countries [10]. Patients are dependent on healthcare practitioners, and their decision-making [10], however, their involvement in safety initiatives is crucial to the management of long-term conditions and improving safety [4,11]. Research has shown that, on average, patients are harmed in 10% of all hospital admissions and it is estimated that up to 75% of these incidents are preventable [1].
However, whilst the concept of "patient engagement" is recognized, no unanimous definition of patient engagement exists; rather, a variety of terms, including "patient involvement," "patient collaboration," "patient empowerment," "partnership" and "patient-centred care," have been used to describe a partnership with patients [12,13].
The WHO Eastern Mediterranean Regional Office (EMRO) developed the patient safety friendly hospital initiative (PSFHI) in 2007. At first, six countries were chosen as candidates to perform the programme, and later it was implemented in all other countries in the region [14]. Iran, as one of the countries of the region, participated in the programme. In the first step, 10 hospitals from around the country were selected for the pilot phase. Then, according to the achievements, the Ministry of Health and Medical Education (MoHME) ordered it to be implemented in about 100 hospitals in the country [15].
Patients who were aware of the need for their engagement and knowledgeable about patient safety were likely to engage in patient safety initiatives [16]. Providing information and education on how to detect and report changes in their clinical condition, how to communicate errors, and how they can engage in safety initiatives can improve their engagement [9,17]. Patients' health status can also influence their engagement; if unable to participate, patients' relatives may be asked to fulfil this role [4,16]. Healthcare practitioners' attitudes, encouragement, support and education about patient engagement in safety were identified as key to facilitating patient engagement in their safety. Healthcare practitioners also require education on how they can support patients to actively engage and how to communicate errors to each other appropriately and respectfully. Patient engagement is affected by healthcare practitioners' knowledge, skills, and attitudes toward patient engagement and the care environment [18,19].
Whilst research shows that patients are willing and capable of engaging in patient safety initiatives [6,20], there remains an ambiguity over how they can become engaged in patient safety activities [21,22], and whilst evidence of patient engagement in other aspects of health care has been well-documented, as regards patient safety, engagement remains an emerging field of interest with limited evidence [6,9].
Numerous studies have been conducted on the effect of patient engagement in improving patient safety. In the majority of studies, research evidence was produced and translated, whereas the present study tried to implement the translated evidence [4,6,8,11,13,18,20,22]. Also, the aim of most previous studies was to determine the factors affecting patient engagement and patient safety [4][5][6][8][9][10][11][12][13][14][15][16][17][18]. On the other hand, qualitative studies were conducted to explain the experiences of patients and clinicians regarding patient engagement in patient safety [3,8,15,31]. So, there have been few interventional studies and clinical audits implementing the best research evidence for patient safety improvement through patient engagement. This study is a clinical audit aiming to change the behavior of clinicians and patients in the hospital, and it used the most important research evidence to implement the interventions successfully.
Objective(s)
The aim of this project was to improve the patient safety with the patient engagement within the local context of a maternity hospital at Maragheh in Iran by implementing best practice recommendations.
Through the audit process, the specific objectives of the project were as follows: • To determine current compliance with evidence-based practice regarding patient engagement in patient safety by carrying out an initial audit.
• To identify barriers and facilitators to achieving compliance and develop strategies to address areas of non-compliance.
• To improve knowledge regarding best practice regarding patient engagement in patient safety in Shahid-Beheshti hospital.
• To implement strategies for obtaining patient engagement in patient safety in order to address non-compliance with criteria.
• To conduct a follow-up audit to determine improvements in compliance with evidencebased criteria regarding obtaining patient engagement in patient safety.
Design
The current project is a quality improvement project using the JBI Practical Application of Clinical Evidence System (JBI PACES) and Getting Research into Practice (GRiP) audit and feedback tool. The JBI PACES and GRiP framework for promoting evidence-based practice involves three phases of activity such as (1) establishing a team for the project and undertaking a baseline audit based on the criteria informed by the evidence; (2) reflecting on the results of the baseline audit and designing and implementing strategies to address noncompliance observed in the baseline audit by following the JBI and GRiP framework; and (3) conducting a follow-up audit to assess the results of the interventions implemented to improve practice and to identify future practical issues to be addressed in subsequent audits. This project was implemented in three stages, from May 2020 to October 2020.
Ethical considerations
This project was considered a quality improvement project within the Shahid-Beheshti hospital in Maragheh, Iran. The study was approved by ethical committee of Maragheh University of Medical Sciences (Ethical code of project: MARAGHEHPHC.REC.1398.003). An approval from the hospital ethics committee was acquired. All participants entering in this study gave an informed consent. A verbal consent form was completed by all participants.
Phase 1: Team establishment and baseline audit
Establishing the project team. A project team was established to engage key stakeholders to support the work during the process. The team included the senior research fellow, gynecologist, head nurse (HN), chief executive officer (CEO), chief nursing officer (CNO), chief quality officer (CQO), public relations manager and the patient safety improvement expert of the hospital. The team identified a senior nurse, clinician and a nurse educator as additional key stakeholders to support and endorse this project. Involvement of the project team was based on their roles in support, data collection, data entry and/or participation. The members of the team were invited to participate in the project based on their positive approach and ability to influence staff and to engage patients. The team leaders highlighted the importance of the recommended practice, conducted pre-and post-implementation audits based on the timeline chart. The team members and stakeholders used formal letter, phone and social media for meetings.
Setting and participants. The current project was conducted in surgical ward of Shahid-Beheshti maternity hospital. This hospital is a maternity health facility with a 112-bed capacity, located in the city of Maragheh. The surgical ward has 26-beds and receives approximately 2400 patients annually. The sample size included all healthcare practitioners working on this unit. There were 46 healthcare practitioners and 46 patients involved in the baseline audit, with a similar number involved in the follow-up audit.
Audit criteria. Table 1 shows the evidence-based audit criteria used in the project (baseline and follow-up audit) as well as a description of the sample and approaches to measure compliance with the best practice for each audit criterion. The audit criteria were translated prior to the data collection. A certified translator and one of the research team members independently translated the audit form into Persian following a forward translation method.
Baseline audit. The baseline audit was conducted from May 03 to 21, 2020, by the project team members using the JBI PACES program. To collect the baseline data, we designed a questionnaire consisting of demographic characteristics of the respondents (healthcare practitioners and patients) and the audit criteria. Methods used to measure percentage compliance with best practice included documentation audit and semi-structured interview. The interviewers included three researchers from the research team. The researchers have previous experience conducting semi-structured interviews of patients and healthcare professionals. Additionally, they had been trained to conduct the semi-structured interviews to improve the quality of the interviews. After the project team completed the audit form by reviewing documents and interviewing participants, the investigator reviewed each participant's chart against the criteria.
Phase 2: Design and implementation of strategies to improve practice (GRiP)
This phase of the study focused on gaining an understanding of the barriers or gaps between current practice and best practice in patient engagement in order to improve patient safety. First, the team presented the baseline audit results to the healthcare practitioners. Based on these results, the team and the surgical ward identified barriers underlying the low compliance with the identified criteria. The team encouraged the healthcare practitioners to ask questions and suggest strategies to improve the audit results. In addition, the JBI GRiP tool was used, and strategies and resources were formulated to facilitate our discussion. Then, a GRiP report was generated outlining the implementation plan on patient engagement, and each member of the surgical ward was informed.
Phase 3: Follow-up audit post implementation of change strategy
This phase assessed whether the implemented changes resulted in improved compliance with best practice in patient engagement. The follow-up audit used the same criteria as the baseline audit. A total of 46 patients and 46 healthcare practitioners were audited during this phase. The follow-up data were analyzed in PACES. Results were subsequently compared with the baseline audit to determine any change in the compliance rate. This follow-up audit was conducted in late October 2020.

Phase 1: Baseline audit results

The baseline audit results show that the healthcare practitioners have received education on how they can support patients to actively engage in patient safety practices at 87% compliance (Criterion 1). Patients (and/or their families) have received information and education on how to detect and report changes at 52% compliance (Criterion 2). Compliance with the third criterion was 37% (patients have received specific instructions from their healthcare practitioner to take a specific action to prevent harm). Visual aids have been made available in the wards to remind patients and healthcare practitioners to perform safety behaviors at 61% compliance (Criterion 4).
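As a concrete illustration of how per-criterion compliance percentages of this kind can be computed from raw audit records, a minimal sketch follows; the record layout and field names are illustrative assumptions, not the actual JBI PACES data format.

```python
# Minimal sketch: per-criterion compliance from audit records.
# The (criterion_id, met_best_practice) layout is a hypothetical format,
# not the actual JBI PACES export.

def compliance_by_criterion(records):
    """Return {criterion_id: % of audited cases meeting best practice}."""
    totals, met = {}, {}
    for criterion, ok in records:
        totals[criterion] = totals.get(criterion, 0) + 1
        met[criterion] = met.get(criterion, 0) + int(ok)
    return {c: round(100.0 * met[c] / totals[c]) for c in sorted(totals)}

baseline = [(1, True)] * 40 + [(1, False)] * 6 + [(2, True)] * 24 + [(2, False)] * 22
print(compliance_by_criterion(baseline))  # e.g. {1: 87, 2: 52} for 46 audits
```

The same function applied to the follow-up records gives the post-implementation percentages, so baseline and follow-up compliance can be compared criterion by criterion.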
Phase 2: Strategies for Getting Research into Practice (GRiP)
The ward staff identified four main barriers through interviewing the participants (healthcare practitioners and patients) and then specified strategies to improve their outcomes. The project team entered the specified barriers into GRiP. The barriers and developed strategies are shown in Table 2, including resources to implement best practice. Based on the meeting and discussion with the healthcare practitioners, the head doctor of the surgical unit, and the project team, we found that providing virtual education to healthcare practitioners and implementing the patient safety standards of the hospital are the most important strategies to enhance compliance with patient engagement in patient safety. Additionally, we conveyed the importance of patient involvement in improving safety to doctors through webinars and involved them in safety research. The webinars were designed with the topic "the importance of patient involvement in improving safety". We held the webinars using Adobe Connect and Skyroom on March 21 and 29, 2020. A total of 92 participants (46 patients and 46 healthcare practitioners) attended the webinar in two separate sessions. Each webinar was approximately 75 minutes in length. Subsequently, patients were encouraged to engage and communicate with their health care team. Other strategies were getting feedback from patients, attending to patients' complaints, using the suggestion box, and implementing a specific guideline.

Fig 2 reflects the baseline and the follow-up audit and compliance report according to each audit criterion involving 46 participants. Criterion 1 (training of healthcare practitioners on how they can support patients) remained at 100% compliance. Furthermore, compared with the baseline data, compliance with criteria 2, 3, and 4 notably improved to 85%, 76%, and 92%, respectively.
Discussion
This project was the first attempt to examine the current practice and implement evidencebased patient engagement in patient safety in a maternity hospital in Maragheh, Iran. Baseline and follow-up data were collected, and barriers, strategies, and resources were identified using the JBI PACES and GRiP tools. The project team and hospital leadership provided the resources in order to implement the identified strategies. The importance and practices of facilitators of patient engagement in patient safety were included in the educational programs of healthcare practitioners in the surgical ward. In addition, educational content were provided in unit and meetings were conducted to review patient engagement in patient safety.
The results showed that after implementation of the strategies, highest mean scores of criterion were found in relation to training of healthcare practitioners on how they can support patients (100%). The compliance rate for the criterion 2 on patients have received information and education on how to detect and report changes in their clinical condition, communicate errors, and how they can best participate in patient safety initiatives increased from 52% to 85%. The compliance rate of criterion 3 in relation to patients have received specific instructions from their healthcare practitioners to improve safety increased from 37% to 76% in the post implementation period. Finally, the compliance rate for the criterion 4 on preparation of the visual aids such as brochures to remind patients and healthcare practitioners to perform safety behaviors increased from 61% to 92%.
Despite the successful implementation of the project, several limitations exist. The first was the inability to deliver face-to-face training to healthcare practitioners because of the outbreak of coronavirus disease (COVID-19). The project was conducted during this outbreak, which was far from a quiet time for healthcare practitioners: the increased workload can influence the strategies. Additionally, due to time and funding limitations, the strategies were only implemented in one ward of one hospital. Therefore, the sample size was small and composed of patients and healthcare practitioners from one maternity hospital, so the results may not be generalisable to other hospitals.
This project has several strengths. One of the strengths of this study is that it implements the best translated research evidence related to patient safety improvement through patient engagement. We tried to change the behaviour of healthcare practitioners and patients in order to improve patient safety and engagement in the hospital. Also, our method of study was unique compared to previous studies. We established a project team, including the key stakeholders, to support the work during the clinical audit process. Then, evidence-based audit criteria were used in the project (baseline and follow-up audit). Next, we implemented evidence-based strategies to improve performance using the JBI GRiP tool. Finally, we assessed whether the post-implementation changes resulted in improved compliance with best practice in patient engagement. In fact, we modified and improved the performance of patients and healthcare practitioners using the best available research evidence.
A challenge for the implementation of the project was that some of the healthcare practitioners did not think it was necessary to conduct the assessment, because patient safety standards are implemented in the hospital. In order to overcome the challenge and improve the beliefs of healthcare practitioners of the importance of conducting a project, a webinar about patient engagement in patient safety and practice improvement of clinicians were conducted and then, strategies were provided by considering their perspectives.
The main success of the project was including the full support of the hospital management team. Hospital leadership has important responsibility to oversee the safety and quality of care provided [23]. The studies showed that leadership support for patient safety is of particular importance in small hospitals where the economic burden of safety programs is disproportionately great and leadership is closer to the frontlines [23,24]. The second success of this project was good performance of nursing team in patient safety programs related to hospital accreditation standards [25,26]. In addition to these two, the implementations of developed strategies lead to synergy effects in the audit process.
In this project, barriers to the engagement of patients in the delivery of safe care in Shahid-Beheshti hospital were identified. The results highlighted a low level of health literacy and insufficient training in patient participation, disproportionate to the number of patients, which decrease patients' ability to take an active role in safety. A systematic review also showed the insufficiency of health literacy amongst the Iranian population [27]. In order to overcome this barrier, we implemented the strategy of patient empowerment through training about patient safety. One of the most important barriers to the engagement of patients is negative attitudes toward patient engagement. In relation to the strategies used to implement best practice in patient engagement, we used a journal club, posters and webinars to supplement healthcare practitioners' education about the importance of patient involvement in improving safety. Additionally, healthcare practitioners were linked to the clinical research development unit (CRDU) of the hospital in order to involve them in safety research. Patients were encouraged by text messages to engage and communicate with their health care team. Forbat and colleagues found that direct experience of participatory working leads to positive attitudes among healthcare practitioners [28]. Another barrier to patient engagement was poor interaction between healthcare practitioners and patients. We tried to overcome this barrier by using specific guidelines regarding patient safety for healthcare practitioners and the suggestion box for investigating patient perspectives. The results indicated that relationship skills can lead to better efficiency, safety, and clinical outcomes [29][30][31]. Finally, the participants believed that the workload of healthcare practitioners was the principal barrier to an effective patient-provider relationship and patient engagement. We used the strategies of incentive mechanisms and employing more nurses to reduce workload in the unit. The evidence shows that workload in Iranian hospitals is an important barrier to effective patient engagement [32,33].
Conclusions
This project successfully provided important evidence for improving healthcare practitioners' skills and knowledge about evidence-based patient engagement in patient safety in Shahid-Beheshti hospital. The results of this project provide a positive direction for implementing evidence-based patient engagement in other hospitals. This project has the potential to raise awareness amongst healthcare practitioners and managers of the barriers to patient engagement in patient safety and of improvement strategies in Iranian hospitals.
In the future, follow-up audits engaging other clinicians from the hospital units should be conducted. Thus, healthcare practitioners and patients will be empowered to improve performance and patient safety. This project is a critical point in patient safety improvement with the patient engagement in Maragheh. We must try to provide the formal educational and practical strategies for sustainability. | 2022-05-12T06:18:05.144Z | 2022-05-11T00:00:00.000 | {
"year": 2022,
"sha1": "084ea1bf0b6d6d19a7a41ffccca0aa3bf5145f90",
"oa_license": "CCBY",
"oa_url": "https://journals.plos.org/plosone/article/file?id=10.1371/journal.pone.0267823&type=printable",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "f43a7b43d4e2ace46514b18602acba77fd411663",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
165163578 | pes2o/s2orc | v3-fos-license | Controlling skyrmion bubble confinement by dipolar interactions
Large skyrmion bubbles in confined geometries of various sizes and shapes are investigated, typically in the range of several micrometers. Two fundamentally different cases are studied to address the role of dipole-dipole interactions: (I) when there is no magnetic material present outside the small geometries and (II) when the geometries are embedded in films with a uniform magnetization. It is found that the preferential position of the skyrmion bubbles can be controlled by the geometrical shape, which turns out to be a stronger influence than local variations in material parameters. In addition, independent switching of the direction of the magnetization outside the small geometries can be used to further manipulate these preferential positions, in particular with respect to the edges. We show by numerical calculations that the observed interactions between the skyrmion bubbles and structure edge including the overall positioning of the bubbles are fully controlled by dipole-dipole interactions.
A Ta (5 nm) / Pt (4 nm) / Co (t) / Ir (4 nm) stack is used as a basis for the samples studied in this work. The two different heavy metal layers adjacent to the magnetic layer are known to induce a large interfacial Dzyaloshinskii-Moriya interaction 12, which ensures that any DWs that are formed will be of the Néel type and will have a fixed chirality. By careful tuning of the Co layer thickness (t), a balance between the DW energy and dipolar energy can be found, such that skyrmion bubbles can be stabilized using a small external magnetic field [13][14][15]. Skyrmion bubbles are studied in circular, triangular, and square shapes of sizes ranging from 4 µm to 20 µm. The different structure shapes and sizes have different symmetries and thus enable us to investigate up to which dimensions the edges influence the skyrmion bubbles, and down to which dimensions skyrmion bubbles can be stabilized. Here we only show the key results for a few of these structures that clearly demonstrate the investigated bubble-edge interaction. Additional results are included in the supplementary material.
Two different fabrication processes, one based on electron beam lithography (EBL) and one based on focussed ion beam irradiation (FIB), are used (fabrication details are discussed in the supplementary material). With FIB, the anisotropy of a magnetic film can be controlled locally, allowing us to define regions in which skyrmion bubbles are stable. The two methods lead to two distinct situations at the edge of the structures. The EBL samples correspond to edge type (I), with no magnetic material outside the structure. The FIB samples correspond to edge type (II), with magnetic material outside of the investigated structures that has a homogeneous magnetization.
In Fig. 1(a) schematic side views of a skyrmion bubble near the edge are shown for these different edge types. Figure 1 also shows Kerr microscope images of a 20 µm wide square created (b) by EBL and (c) by FIB for various applied magnetic fields. The behaviour as a function of magnetic field is comparable for both samples: at remanence a labyrinth domain structure forms, for small fields densely packed skyrmion bubbles occur, for larger fields only a few individual skyrmion bubbles remain, and for even higher fields the magnetization is in a uniform state. The skyrmion bubbles in the EBL structure have different dimensions than in the FIB structure (the average radii are 1.34 µm and 0.7 µm, respectively), and the magnetic field at which these states occur is different for the two samples, suggesting a difference in the material parameters. Both samples show a property that is useful for our study: for the FIB sample it can be seen that at µ0Hz = 0.50 mT the magnetization outside the irradiated structure switches. This coercive field is larger than the field at which the skyrmion bubbles are stabilized (µ0Hz ≈ 0.25 mT). This makes it possible to study the behaviour of the skyrmion bubbles both when the magnetization outside the shape points parallel and when it points antiparallel to the magnetization at the skyrmion core.

For the EBL sample the dimensions of the bubbles and stripes are comparable to the size of the structure itself, and they do not seem to be distributed randomly throughout the structure. The stripes at remanence are aligned with the edges of the structure 16, and for fields where skyrmion bubbles are stabilized, these bubbles are distributed such that the space in the structure is packed optimally.

Next, the FIB sample (in particular the circle with a diameter of 8 µm) is studied under the influence of a 0.25 mT field, both for the situation in which the magnetization outside of the FIB structure points antiparallel (edge type (IIA)) and parallel (edge type (IIB)) to the magnetization at the core of the skyrmion bubbles. Kerr microscope movies are analysed in the same way as in the previous section and the results are plotted in Fig. 3(a) and (b). Because the conditions inside the structure containing the skyrmion bubbles are identical, it is remarkable that there is such a distinct difference between the preferential positions in (a) and (b). This difference is also apparent in Fig. 3(c), which shows histograms with the number of observations as a function of the distance to the structure edge for both the situation in (a) (green) and (b) (orange). For situation (b) there are no observations closer than 1.9 µm from the edge, which suggests a repelling force between the skyrmion bubbles and the structure edge. The fact that in Fig. 3(a) there is a preferential spot in the middle of the structure that is not there in Fig. 3(b) suggests that the interactions between the bubbles and the edge and the inter-bubble interactions are dominant over structural imperfections in determining the preferential spots. However, the data also suggest some influence of local variations in material properties, because if they were negligible the skyrmion observations would be distributed evenly along circles.
For the 20 µm sized squares, from which some raw images are shown in Fig. 1(c), the observed skyrmion positions are shown in Fig. 3(d) and (e), again for the situation in which the magnetization outside the shape is aligned antiparallel or parallel to the cores of the skyrmions, respectively. The preferential positions seem to be distributed randomly through the FIB structure, indicating that the influence of the structure shape is no longer of relevance for this ratio between the structure size and the skyrmion bubble size. However, in the vicinity of the edge the skyrmion bubbles can clearly be controlled by the magnetization outside the structure: a fit with an error function (black curve) reveals that the number of detected bubbles rapidly drops to zero around 1.8 µm away from the edge.
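The shape of such a depletion profile can be quantified with a least-squares fit of an error function to the histogram of bubble-edge distances. The sketch below assumes synthetic distance data and illustrative bin settings rather than the actual Kerr-microscopy detections.

```python
# Fit an error-function profile to a histogram of bubble-edge distances.
# The data below are synthetic placeholders; in practice `distances` would
# hold the measured distance of each detected bubble to the structure edge.
import numpy as np
from scipy.special import erf
from scipy.optimize import curve_fit

rng = np.random.default_rng(0)
distances = rng.uniform(1.5, 10.0, 500)  # placeholder detections (in um)

counts, edges = np.histogram(distances, bins=30, range=(0.0, 10.0))
centers = 0.5 * (edges[:-1] + edges[1:])

def erf_profile(d, amplitude, d0, width):
    # Counts rise from ~0 far below d0 to a plateau far above it.
    return 0.5 * amplitude * (1.0 + erf((d - d0) / width))

popt, _ = curve_fit(erf_profile, centers, counts, p0=(counts.max(), 2.0, 0.5))
print(f"depletion edge d0 = {popt[1]:.2f} um, width = {popt[2]:.2f} um")
```

The fitted d0 plays the role of the ~1.8 µm exclusion distance quoted above.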
We will now discuss which mechanisms could be behind the observed interaction between skyrmion bubbles and the structure edge. Strong DMI has been reported for Pt/Co/Ir samples in the literature, suggesting that edge states could play a role, just as for compact skyrmions. A problem with this interpretation of our observations is the length scale: the onset of this interaction is when the skyrmion and edge state 'touch', so typically over the distance of the DW width and edge state width. These are on the order of tens of nanometers for the material stacks used here (supplementary material), while it is observed experimentally that the distance between the edge and skyrmion bubbles is on the order of micrometers. Therefore DMI-induced edge states cannot explain why our skyrmion bubbles are repelled by the structure edge.
We use a combination of the thin wall model and numerical calculations of the dipolar energy for bubbles near a sample edge to show that dipolar interactions are a plausible explanation for the observed results. Fig. 1(a) shows the dipolar fields that are involved for the three investigated edge types. Situation IIA shows the edge of a FIB structure where the magnetization beyond the edge is directed opposite to the magnetization at the bubble core. The stray fields emanating from beyond the edge help stabilize the bubble, and the dipolar energy should in principle be the same as for a bubble in an infinite film. Situation IIB shows a skyrmion bubble near the edge of a FIB structure, but now with the magnetization beyond the edge pointing in the opposite direction, i.e., parallel to the magnetization at the bubble core. The stray fields emanating from beyond the edge now increase the bubble energy. For edge type I, which corresponds to the EBL samples, there is no magnetic material and hence no stray field from beyond the edge. The bubble energy is now increased with respect to the energy of a bubble in an infinite film, because the dipolar fields that lower its energy are partially missing. Fig. 4 shows numerical calculations of how the dipolar energy varies as a function of the distance d (also indicated in Fig. 1(a)) between the bubble and the edge for these three situations (see the supplementary information for details on this calculation).
The stability and size of a skyrmion bubble can be calculated using the 'thin wall model' 13,19 .
Here the energy of a circular domain in an infinite film is calculated with respect to the uniformly magnetized state. The size and stability of this circular domain is determined by the balance between the Zeeman energy, the DW energy and the dipolar energy. We determine the relevant material parameters experimentally (see the supplementary material).
I. MATERIAL PROPERTIES
In the thin wall model 13,19 the energy of a cylindrical magnetic domain in an infinite film is calculated with respect to the uniformly magnetized situation. The magnetic material has perpendicular magnetic anisotropy, an external magnetic field is applied along the film normal, and it is assumed that the size of the domain is large compared to the width of the domain wall (DW) surrounding it.
The size and stability of skyrmion bubbles can be calculated using this model, and is determined by the balance between the Zeeman energy (E_Z), the DW energy (E_DW) and the dipolar stray field energy (E_dip):

E_sk(R) = E_Z + E_DW + E_dip. (1)

These contributions can in turn be expressed in terms of the external magnetic field (H_z), the saturation magnetization (M_S), the magnetic film thickness (t), the skyrmion bubble radius (R), the DW energy per unit area (σ), and the vacuum permeability (µ_0): the Zeeman term scales as E_Z = 2πµ_0 M_S H_z t R², the DW term as E_DW = 2πR t σ, and E_dip is a closed-form expression in the same quantities involving the complete elliptic integrals. Here d = 2R/t and u² = d²/(1+d²) are defined for convenience, and K(u²) and E(u²) are the complete elliptic integrals of the first and second kind, respectively.
The material parameters occurring in these equations determine the skyrmion bubble size, and whether or not skyrmion bubbles are stable at all. Therefore, we determine these parameters experimentally in this section, using a full sheet sample of the material stack used for the structures created by electron beam lithography.
The magnetic layer thickness is determined simply by the growth rate and deposition time, and was found to be 0.7(1) nm. The external magnetic field can be determined by checking at what field skyrmion bubbles are found. This should be done with caution: because the used magnetic fields are extremely small, any field present in the surroundings is relevant. We therefore take the field relative to the field at which skyrmion bubbles are stabilized. A further parameter follows from the typical domain size in the demagnetized state via a relation from the thin wall model in which α ≈ 0.955 is a constant 13. To obtain this typical domain size, a two-dimensional fast Fourier transform was performed on the images taken by the Kerr microscope, as shown in Fig. 5(b).
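A minimal version of this FFT-based extraction of the typical domain size might look as follows; the synthetic stripe image, the pixel calibration, and the radial-averaging details are illustrative assumptions rather than the authors' exact procedure.

```python
# Estimate the characteristic domain period of a labyrinth pattern from the
# radially averaged 2D power spectrum of a Kerr image. The input image here
# is a synthetic placeholder; replace it with the measured (binarized) image.
import numpy as np

pixel_size = 0.1  # um per pixel (assumed calibration)
n = 256
x, y = np.meshgrid(np.arange(n), np.arange(n))
image = np.sign(np.sin(2 * np.pi * x / 27.0))  # fake stripes, 27 px period

spectrum = np.abs(np.fft.fftshift(np.fft.fft2(image - image.mean()))) ** 2
ky, kx = np.indices(spectrum.shape)
radius = np.hypot(kx - n // 2, ky - n // 2).astype(int)

# Radial average of the power spectrum.
radial_power = np.bincount(radius.ravel(), weights=spectrum.ravel())
radial_power /= np.maximum(np.bincount(radius.ravel()), 1)

k_peak = np.argmax(radial_power[1:]) + 1   # skip the DC bin
period = n * pixel_size / k_peak           # dominant period in um
print(f"characteristic domain period ~ {period:.2f} um")
```

The peak of the radially averaged spectrum gives the dominant spatial frequency, and its inverse gives the typical domain period entering the relation above.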
Now these values are combined in the energy equations. It turns out that the combination of parameters needed for skyrmion bubble stabilization is delicate in the regime we are investigating: within the experimental uncertainty of the parameters, both the situation in which bubbles are stable and the situation in which they are not can be obtained. Therefore the typical radius of the bubbles is determined experimentally from the Kerr microscope images and was found to be 1.34(10) µm.
The other parameters can now be tuned (of course within their uncertainty ranges) to correspond with the existence of 1.34 µm sized skyrmion bubbles. We expect this combination of parameters to correctly represent the experimental situation, and therefore these are the parameters that are subsequently used to calculate the effect of a decrease in the dipolar energy contribution. This is done by adding an artificial factor p to equation 1, resulting in

E_sk(R) = E_Z + E_DW + p E_dip. (2)

We find that for our material parameters, the minimum in the E_sk vs R curve disappears at p = 0.92 (see again Fig. 6), indicating that skyrmion bubbles are no longer stable. In the main text this result is linked to a specific distance between the skyrmion bubble and the structure edge, and it can be checked experimentally that no skyrmion bubbles are stable anymore beyond this distance.
However, please also note the assumptions that were made during this analysis. For the calculation of the boundary, the skyrmion bubble size as determined in an unpatterned film was used.
However, because the bubble size depends on the dipolar stray fields, it is expected to be different in a confined geometry close to the edge. Because the skyrmion bubbles observed in confined structures seem comparable in size to the bubbles in the unpatterned film, we choose to neglect this effect. Also, this analysis was performed using parameters that were determined for the structures produced by EBL. The FIB structures have slightly different material parameters, as indicated by the difference in bubble size and the difference in the external field necessary for stabilization. Nevertheless, the experimentally observed skyrmion bubbles seem to adhere to the calculated boundaries adequately.
II. METHODS
All material stacks shown in this work are grown using sputter deposition. Using a deposition tool with a low base pressure of typically < 10 −8 mbar , fully automated growth sequences, and sample rotation during deposition, we reproducibly create full sheet samples in which skyrmion bubbles can be stabilized within areas of several square millimetres large. After deposition of the material stack, there are two ways to proceed, resulting in the two types of samples that are presented. For the first, a sample in which skyrmion bubbles are stable is selected and coated with ma-N 2410, which is a negative resist. The desired structures are written using EBL, and the exposed resist forms a hard mask. After development, ion-beam milling, and cleaning with acetone, a magnetic structure with a sharp edge remains. Alternatively, a region on the sample in which skyrmion bubbles are not stable can be selected, in particular where the perpendicular magnetic anisotropy is slightly too strong for skyrmion stabilization, and the material properties can be modified locally to create a small area in which they are stable. For the local modification of material properties, a Ga focused ion beam (FIB) is used. This is an established technique to lower magnetic anisotropy [20][21][22] , and the anisotropy gradient that is created at the boundary has been shown to be ±22 nm wide for the fabrication tool we use 23 . Besides the work of Zhang et al.
in which FIB is used to create circular shapes with in-plane magnetization within an out-of-plane magnetized film to create an artificial skyrmion lattice 24, this is, to our knowledge, the first time that this technique is employed to stabilize skyrmions. An energy of 30 keV and a dose of 1.25 × 10^12 ions/cm² is used.
For the presented calculations in Fig. 4 on the dipole-dipole energy, a system of 10000 cells with dimensions of 450 nm × 450 nm × 0.7 nm is defined. The direction of the magnetization for each cell is defined in such a way that the investigated structure, the skyrmion bubbles and the surrounding of the structure are mimicked. The energy resulting from dipolar interactions, E_dip, between the cells can be calculated using the classical formula for two magnetic dipoles µ_i and µ_j whose positions are connected by a vector r_ij:

E_dip = (µ_0 / 4π|r_ij|³) [ µ_i · µ_j − 3 (µ_i · r̂_ij)(µ_j · r̂_ij) ],

with r̂_ij = r_ij/|r_ij|, summed over all pairs of cells. For some geometries, it is possible to calculate the expected skyrmion bubble positions using the same type of calculations. It is experimentally determined how many skyrmion bubbles are present in a certain geometry at a certain external magnetic field, and E_dip is now calculated for various positions of these skyrmion bubbles (for these calculations lateral cell sizes ranging from
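The sketch below illustrates such a cell-based dipolar energy calculation. The grid size, material numbers, and bubble pattern are illustrative assumptions, and the code exploits the fact that for coplanar cells with moments along z the second term of the dipole-dipole formula vanishes (µ · r̂ = 0).

```python
# Dipolar energy of a grid of uniformly magnetized cells with out-of-plane
# moments (m = +1 or -1 along z). Grid size and pattern are illustrative;
# for coplanar cells with moments along z the dipole-dipole interaction
# reduces to E_ij = (mu0 / 4 pi) * mu_i * mu_j / r^3 because mu . r_hat = 0.
import numpy as np

mu0 = 4e-7 * np.pi
cell, t, Ms = 450e-9, 0.7e-9, 1.0e6        # cell size, thickness, Ms (assumed)
moment = Ms * cell * cell * t              # dipole moment per cell (A m^2)

n = 60                                     # smaller grid than the paper's 100x100
xx, yy = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
pos = np.stack([xx.ravel(), yy.ravel()], axis=1) * cell

# Pattern: uniform "up" background with a reversed bubble some distance from
# the left edge; no cells beyond the edge, loosely mimicking edge type I.
m = np.ones(n * n)
center = np.array([15, n // 2]) * cell     # bubble center (illustrative)
R = 6 * cell
m[np.linalg.norm(pos - center, axis=1) < R] = -1.0

def dipolar_energy(positions, m_z, mu):
    """Pairwise dipolar energy sum; O(N^2), done row by row to save memory."""
    E = 0.0
    for i in range(len(positions) - 1):
        d = positions[i + 1:] - positions[i]
        r = np.linalg.norm(d, axis=1)
        E += (mu0 / (4 * np.pi)) * mu * mu * np.sum(m_z[i] * m_z[i + 1:] / r**3)
    return E

print(f"E_dip = {dipolar_energy(pos, m, moment):.3e} J")
```

Repeating the calculation while shifting the bubble center (and, for edge types IIA/IIB, adding a block of fixed-moment cells beyond the edge) yields the dipolar energy versus distance curves of the kind shown in Fig. 4.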
III. OVERVIEW SHAPES AND SIZES
In the main manuscript only a selection of the investigated shapes and sizes is shown, in view of the legibility of the paper. In this section we show a more complete overview of the obtained results. Fig. 7 shows the results for all shapes created by EBL in which skyrmion bubbles could be stabilized. The same analysis procedure and data representation as in Fig. 2 are used. The larger the structures become, the smaller the influence of the edge becomes, and the more freely the bubbles can move around (though some preference for certain positions remains observable for all structure sizes investigated). The general observations agree well with our interpretation based on dipolar interactions, but in this section we will discuss some noteworthy individual cases.
First, there is the 8 µm sized circle, in which there are no distinct preferential positions visible, but rather a smeared out region. In the individual frames of the Kerr microscope movie, two individual skyrmion bubbles that occupy the structure simultaneously can be identified. In the analysis of the complete movie, it can be seen that these bubbles have a preferred distance from the centre, but other than that they seem distributed randomly. The symmetric nature of the circle is well reflected in these results, in stark contrast with the results for the square and triangles.
When examining the 15 µm sized circle, this symmetry is no longer present in the observations.
Probably, there is an energy minimum caused by local variations in the material parameters that breaks this symmetry. In the thin wall model, as worked out in section I, the equilibrium size of a bubble is co-determined by the dipolar energy, and hence it will change in proximity to another bubble or to the structure edge. Hitherto we have ignored the dependence of the bubble size on its distance to the edge and to other bubbles, so as not to overcomplicate our analysis. Also, it is conceivable that in an asymmetric environment (for example, when on one side there is interaction with the structure edge, while on the other side there is interaction with another bubble) the equilibrium shape of a bubble can deviate from a circle, while the data are represented as perfect circles. Apparently, for the two mentioned structures, the ratio between the number of bubbles and the structure area is such that these additional effects become relevant, and a representation by average bubbles in infinite films is inaccurate.
For the 6 µm sized triangle, a bubble with the equilibrium size as found for the unpatterned film does not fit within the stability region. However, when processing the Kerr microscope images, our software consistently detects a skyrmion bubble at the centre of the triangle. Possibly the same complications as for the 8 µm sized circle and 8 µm sized square could play a role, and the bubble present in this structure is smaller than a bubble in an infinite film. However, there is also the possibility that the inverted domain actually reaches the edges, and that it is not a bubble at all, but from the Kerr microscope footage it is not possible to conclude this with certainty. | 2019-05-24T15:58:55.000Z | 2019-05-24T00:00:00.000 | {
"year": 2019,
"sha1": "64baeaff191c08b2c4ad69f558240ab243a1d1ff",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/1905.10304",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "5bb3506e21377834bb91acd0b3ce580ad0d284d2",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
639375 | pes2o/s2orc | v3-fos-license | Microscopic Evolution of Social Networks by Triad Position Profile
Disentangling the mechanisms underlying the social network evolution is one of social science's unsolved puzzles. Preferential attachment is a powerful mechanism explaining social network dynamics, yet not able to explain all scaling-laws in social networks. Recent advances in understanding social network dynamics demonstrate that several scaling-laws in social networks follow as natural consequences of triadic closure. Macroscopic comparisons between them are discussed empirically in many works. However the network evolution drives not only the emergence of macroscopic scaling but also the microscopic behaviors. Here we exploit two fundamental aspects of the network microscopic evolution: the individual influence evolution and the process of link formation. First we develop a novel framework for the microscopic evolution, where the mechanisms of preferential attachment and triadic closure are well balanced. Then on four real-world datasets we apply our approach for two microscopic problems: node's prominence prediction and link prediction, where our method yields significant predictive improvement over baseline solutions. Finally to be rigorous and comprehensive, we further observe that our framework has a stronger generalization capacity across different kinds of social networks for two microscopic prediction problems. We unveil the significant factors with a greater degree of precision than has heretofore been possible, and shed new light on networks evolution.
INTRODUCTION
Disentangling the mechanisms underlying social network evolution is one of social science's unsolved puzzles. Recent advances in research bring us a wide variety of principles and models for the growth of complex networks. In most works these principles/models are validated from the perspective of macroscopic scaling laws, such as the power-law degree distribution [19], the attachment kernel [36] and the clustering coefficient as a function of node degree [19]. Anecdotal evidence that preferential attachment, where new links are established preferentially to more popular nodes in a network, is a powerful mechanism underlying the emergence of the scale-free property in social networks is ubiquitous. However, it is also evident that the preferential attachment principle is not able to explain all scaling laws [38] [39] [32]. Further study in [38] and [39] shows that an individual's link formation significantly relies on its neighbors. The principle of triadic closure has been empirically demonstrated to be relevant for the above three macroscopic scaling laws in the work of [37] [41] [42] [43] [32], expressly or implicitly. To summarize, both preferential attachment and triadic closure are strong forces shaping network dynamics. The question is whether they can be balanced and unified in one framework.
Tremendous effort has been devoted to comparing principles at the macroscopic level. However, the network evolution drives not only the emergence of macroscopic scaling but also microscopic behaviors. Different from prior research, we exploit the distinctness of principles from the microscopic perspective. The evolution of a social network affects individuals in two aspects: 1) nodal influence varies over time; 2) new links are attached to existing nodes. The two are highly intertwined. Formation of new links enhances a node's influence or prominence, and the increase of a node's influence over time will attract more links (Preferential Attachment [19]). Consider Twitter as an example: as an individual rises in prominence, he/she generates more followers or links. Likewise a website grows in prominence on the basis of its connections (PageRank [5]). We posit that a richer framework for network evolution analysis and modeling should be capable of describing both the influence evolution and the link formation mechanism. This is the central theme of our paper.
Influence analysis and modeling is a subject of focus in social networks. This includes influence maximization [1] [2], influence selection and quantification [21], and influence validation [22]. In addition, different influence models [14] [15] and centrality measures, such as PageRank [5], Betweenness [6], Closeness [7], and Clustering Coefficient [8], have been used for discovering influential nodes in a network. These methods are limited as they are not predictive about the possible rise to prominence or influence of a node in the future, and are also not consistent in their performance across different types of networks. Thus, a fundamental question that we consider in this paper is: is there a generic approach for influence analysis and the prediction of the prominence of a node?
As mentioned, the process of link formation is also an integral aspect of network evolution. In recent work [26], a measure of attractiveness that balanced popularity (i.e., preferential attachment) and similarity (i.e., common neighbors) was shown to have a better interpretation of the link formation mechanism. Additionally, in the work of Liben-Nowell and Kleinberg [24] and Lichtenwalter et al. [25] it was observed that no single feature is capable of uniformly outperforming the others in different networks, and [25] also developed a supervised learning method that included the different features and outperformed the singular features. While there is a body of work in link prediction [16] [17] [18] [35], there is a paucity of understanding of the evolutionary processes that guide link formation. A fundamental question that we consider here is: how to develop a coherent model that captures influence evolution to inform link prediction?
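To illustrate the supervised approach described above, the sketch below combines several standard topological features in a single classifier. The feature set, the synthetic snapshots, and the random-forest choice are our own illustrative assumptions, not the exact setup of Lichtenwalter et al. [25].

```python
# Supervised link prediction combining multiple topological features;
# synthetic snapshots stand in for the G_t and G_{t+dT} networks.
import networkx as nx
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def pair_features(G, u, v):
    cn = len(list(nx.common_neighbors(G, u, v)))        # triadic closure
    pa = G.degree(u) * G.degree(v)                      # preferential attachment
    jc = next(nx.jaccard_coefficient(G, [(u, v)]))[2]   # neighborhood similarity
    return [cn, pa, jc]

G_old = nx.barabasi_albert_graph(300, 3, seed=1)        # stand-in for G_t
G_new = G_old.copy()                                    # stand-in for G_{t+dT}
new_links = [(u, v) for u in range(10) for v in range(10, 30)
             if not G_old.has_edge(u, v)][:20]
G_new.add_edges_from(new_links)                         # pretend these formed

candidates = [(u, v) for u in range(60) for v in range(u + 1, 60)
              if not G_old.has_edge(u, v)]
X = np.array([pair_features(G_old, u, v) for u, v in candidates])
y = np.array([G_new.has_edge(u, v) for u, v in candidates])

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
scores = clf.predict_proba(X)[:, 1]                     # link formation scores
```

A learner of this kind can weight the two principles against each other per network, which is one reason the combined predictor can beat any single feature.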
Modeling social networks serves to help us understand how social networks form and evolve. Besides providing approaches for concrete problems, we are more interested in knowing what the fundamental principles in social network evolution are. To study this further, we ask whether the models developed are transferable from one network to another. That is: is the model for prominence prediction and link prediction generic enough to learn from one social network and make a prediction on another social network? With this rigorous analysis we unveil that triadic closure can be identified as one of the fundamental principles in social network evolution. Our contributions are summarized as follows: • In Section 3 we discuss two popular principles (preferential attachment and triadic closure) and their consequences on the microscopic evolution of social networks. We develop a framework called triad position profile where the trade-offs between the two principles are optimized.
• In Section 4 and 5 we apply our approach for node's prominence prediction and link prediction. We validate that our framework can interpret the individual influence evolution and the link formation mechanism better than has heretofore been possible.
• In Section 6 the validity of generality is tested on four real-world networks, which demonstrates that our methodology has a better interpretation of mechanisms underlying network evolution.
Overall our work provides microscopic insights about social network evolution with applications ranging from link prediction to inferring the future prominence of an individual node.
Datasets
In this paper we examine our approaches and perform our analysis on four social networks. The Condmat network [25] is extracted from a stream of 19,464 multi-agent events representing condensed matter physics collaborations from 1995 to 2000. Based on the DBLP dataset from [27], we attach timestamps to each collaboration and choose 3,215 authors who published at least 5 papers. The Enron dataset [28] contains information on email communication among 16
Problem Definition
Network evolution is usually reflected in changes of nodes' prominence and in new links formed with other nodes. First, the social network evolution impacts the prominence (social status) of a node; in addition, the node also affects its local neighbourhood and beyond (via link formation or link dissolution). In order to give insights into the network dynamics, we provide several definitions and formulate two concrete problems for the ease of evaluation and comparison.
On a global level, influential nodes or prominent nodes have intrinsically higher strength of influence than others due to the network topology. Through our study, we have found that a small number of nodes occupy a large portion of network resources. For example, in Figure 1(b) the top 20% of nodes (ranked by PageRank) occupy about 80% of the PageRank influence in the DBLP network. This satisfies the Pareto Principle (also known as the 80-20 rule) [9]. To better understand and model the effects of network evolution on a node's prominence, we partition nodes into two sets: prominent nodes and non-prominent nodes. Based on the Pareto Principle, their definitions are given as follows: the prominent nodes of a network Gt are the top 20% of nodes ranked by an influence measure (e.g., PageRank), and the non-prominent nodes are the remaining 80%. In the following sections we denote the set of prominent nodes as PN and the set of non-prominent nodes as NPN.
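The Pareto-based partition can be made concrete in a few lines of networkx; the example graph below is an arbitrary stand-in for a real network snapshot, and the 20% cutoff follows the definition above.

```python
# Partition nodes into prominent (top 20% by PageRank) and non-prominent
# sets following the Pareto-based definition; the graph is a stand-in.
import networkx as nx

G = nx.barabasi_albert_graph(1000, 2, seed=42)
pr = nx.pagerank(G, alpha=0.85)

ranked = sorted(pr, key=pr.get, reverse=True)
cut = int(0.2 * len(ranked))
PN, NPN = set(ranked[:cut]), set(ranked[cut:])

share = sum(pr[v] for v in PN)     # PageRank mass held by the top 20%
print(f"top 20% of nodes hold {share:.0%} of the total PageRank")
```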
As we postulated, an arguably generic approach to network evolution analysis should be able to predict a node's prominence in the future. Therefore we formulate a concrete task, the prominence prediction problem, on which we can directly evaluate different approaches and substantiate our findings about the underlying principles.
Link formation is a process closely intertwined with the changes in node prominence, and it is a non-negligible part of our analysis. To validate the link formation mechanism, we employ the link prediction problem as our evaluation task. The associated definitions are as follows.
Problem 1. Prominence Prediction
Let PNt be the set of prominent nodes measured in network Gt. How reliably can we infer whether a node v (v ∈ Vt) will belong to the set PNt+∆T?
In order to demonstrate the discriminative power of the different principles, ∆T is chosen large enough for node influence to evolve.
Problem 2. Link Prediction
In a time-varying network Gt = (V, E, TV, TE), the link prediction task is to predict whether there will be a link between a pair of nodes u and v at time t + ∆T, where u, v ∈ V and e(u, v) ∉ E.
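To make the notation concrete, here is a minimal sketch of one way to represent the time-varying network Gt = (V, E, TV, TE): TV maps each node to its arrival time and TE maps each edge to the (possibly repeated) times at which link actions occurred. The class name and methods are illustrative, not from the paper.

```python
from dataclasses import dataclass, field

@dataclass
class TimeVaryingNetwork:
    V: set = field(default_factory=set)     # nodes
    E: set = field(default_factory=set)     # undirected edges as frozensets
    TV: dict = field(default_factory=dict)  # node -> arrival time
    TE: dict = field(default_factory=dict)  # edge -> list of link-action times

    def add_edge(self, u, v, t):
        e = frozenset((u, v))
        for node in (u, v):
            if node not in self.V:
                self.V.add(node)
                self.TV[node] = t           # first appearance time
        self.E.add(e)
        self.TE.setdefault(e, []).append(t)

    def snapshot(self, t):
        """Edges present at or before time t (the network Gt)."""
        return {e for e in self.E if min(self.TE[e]) <= t}
```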
These concrete problems provide quantitative and microscopic views of network evolution, and they also make the comparison of principles convenient.
Besides verifying the generality of our approach across two intimately interacting processes of network evolution, we also study whether the learned predictors generalize across different domains of social networks for both problems defined above. This provides a rigorous and empirical view of the network evolution problem.
TRIAD POSITION PROFILE
An important fraction of network dynamics resides in the process of influence evolution. A generic and effective measurement should be able to infer the trend of influence evolution and aid in predicting the potential prominence of a node in the future. We first introduce the current state of the art in influence measures and discuss their limitations. In addition, we validate the fundamental principles involved, and introduce our framework, the triad position profile, which optimizes trade-offs between preferential attachment and triadic closure. Finally, based on experiments we unveil the interactions between the process of influence evolution and the process of link formation, which are well reflected in our framework.
Current State of The Art
Influence analysis in social networks has been a perennial topic of academic research. Typical directions include influence maximization [1] [2], influence selection and quantification [4] [21], and influence validation [22].
For the influence maximization problem, quite a few influence diffusion models have been proposed, such as the linear threshold model and the weighted cascade model in the work of [1]. Many algorithms have been designed to maximize influence under these diffusion models, such as DegreeDiscount [2] and "CELF" [3]. At the same time, many centrality measures have been proposed for identifying influential nodes in a network, such as degree centrality, PageRank [5], betweenness [6], and closeness [7]. In addition, Goyal et al. [21] studied the problem of learning a node's influence probability from a log of user actions.
Figure 2: Preferential Attachment vs. Triadic Closure. Based on the principle of preferential attachment, the two red nodes are most likely to be connected in the future, while the triadic closure principle suggests a link between the two blue nodes.
Limitations of Current Methods
Although these methods have proved effective for influence quantification and measurement, they inherently lack predictability. First, most of these measures assign a single value to each node, which leads to a loss of information; second, much research has focused on describing influence at the current time, that is, the consequence of influence evolution. This does not guarantee their ability to predict future prominence. To summarize, even though these existing measures of influence degree are good at evaluating the consequences of evolution, they are limited in describing the future of influence evolution.
A Case of Local Sub-structure
Social influence is a well accepted phenomenon in social networks. We posit that the influence of a node, as well as its capacity to be influenced, is a function of its neighborhood. Thus the future prominence of a node may be a function of the sub-structure surrounding the node at time t. Several canonical examples support this proposition. First, the PageRank heuristic: the importance of a node is indicated by the number of connections or links to that node. Second, Burt [20] proposed the concept of the structural hole: a node's success often depends on its access to local bridges. Both of these examples imply that the position of a node within a social network is important. This leads us to investigate the value of a node's position within local sub-structures and its impact on future prominence, which inspires the development of our framework.
Preferential Attachment and Triadic Closure
Despite the well known macroscopic scaling laws in social networks, such as the power-law degree distribution [19], the attachment kernel [36], and the clustering coefficient as a function of node degree [19], it remains undecided whether there is a common mechanism underlying these macroscopic laws [32] [33]. With the evidence that the preferential attachment process [19] is just one dimension of network evolution, much recent research has extended the preferential attachment principle with local sub-structure evolution rules [38] [39]. Li et al. [39] and Jin et al. [38] proposed that an individual's link formation relies significantly on its neighbors. In the work of [31], Granovetter proposed that a "forbidden" triad (left in Figure 3) is most unlikely to occur in social networks, which means that the probability of a new link closing a "forbidden" triad is higher than the probability of a link between two randomly selected nodes. The principle of triadic closure has been demonstrated to be relevant to social network evolution in many works [38] [39] [44] [32]. Clearly these two principles propose two distinct mechanisms of network evolution, and neither of them can act as the single origin of network evolution. In preferential attachment new links are made preferentially to high-degree nodes, while in triadic closure new links are generated to close "forbidden" triads (Figure 2). We are interested in whether there is an effective combination of these two principles.
The principles of preferential attachment and triadic closure have been empirically demonstrated, expressly or implicitly, to be relevant (though not as a single origin) to the macroscopic scaling laws in the work of [37] [41] [42] [43]. Given that these principles underlie the macroscopic scaling laws of social networks, we are interested in whether they are also valid for answering microscopic questions about social network dynamics, such as prominence prediction and link prediction. Our work differs from that of [37] and [40]: Leskovec et al. [37] employed triadic closure to reproduce the observed macroscopic laws of social networks, and Lou et al. [40] investigated how a reciprocal link develops and how relationships develop into triadic closure.
Triadic Closure Effect on Network Evolution
The effect of preferential attachment on influence evolution is obvious and evident. Here we explore the effect of triadic closure on influence evolution. The quantity of triadic closure (or structural balance) is usually defined as [38]:

balance rate = 3 × (number of closed triads) / (number of connected triads)    (1)

where a connected triad is the left triad in Figure 3 and a closed triad is the right triad in Figure 3. By studying the sub-networks among prominent or non-prominent nodes, we observe that initially the sub-network of future prominent nodes has a lower balance rate than the sub-network of future non-prominent nodes, while after long enough evolution the prominent sub-network forms a more balanced structure (Figure 4: (a) before evolution, (b) after evolution). There are several implications: 1. There exist connections between triadic closure and prominence evolution; in addition, as discussed above, new links are more likely to form between nodes located in an imbalanced sub-network. 2. The initial sub-network where future prominent nodes are located is more imbalanced than that of future non-prominent nodes, so the position of a node can be indicative of its future prominence. To some extent this implies the effect of triadic closure on both influence evolution and the link formation mechanism.
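For concreteness, here is a minimal sketch of Equation (1) on a static networkx snapshot. We read "connected triads" as all length-2 paths (wedges), in which case the balance rate coincides with the standard global clustering coefficient; if only open wedges were intended, the closed ones would be dropped from the denominator.

```python
import networkx as nx

def balance_rate(G):
    closed = sum(nx.triangles(G).values()) // 3            # each triangle is counted at 3 nodes
    wedges = sum(d * (d - 1) // 2 for _, d in G.degree())  # connected triads (wedges)
    return 3.0 * closed / wedges if wedges else 0.0        # Equation (1)
```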
As the principle of triadic closure suggests, a "forbidden" triad is more likely to evolve into a closed triad. For further validation, we provide the evolution ratios of the two types of triads in Figure 5(a). We can see that the "forbidden" triad (triad 2) has a much higher probability of becoming a closed triad than the disconnected sub-structure (triad 1). This implies that nodes in different triads have different probabilities of developing prominence and new links. This leads us to an important conclusion: the positions of nodes in sub-structures determine their future orbits in both essential elements of evolution: influence evolution and link formation. This observation leads us to develop our method, the Triad Position Profile, discussed in the next sub-section.
Triad Position Profile
Motivated by the above analysis, we start our investigation from the principle of triadic closure. According to this principle, an individual will try to close the "forbidden" triads it participates in; for example, in Figure 3 a "forbidden" triad is likely to evolve into a closed triad. Examples of all possible triads are shown in Figure 6(a) and Figure 6(b). The number labeled on an edge indicates whether two nodes are related; for instance, '1' can state that two actors are friends while '0' means they are not.
This kind of triad evolution has very nice characteristics: first, it leads to the formation of a link, and additionally it also increases the influence of a node. Thus, the different positions of a node in the corresponding triads can be indicative for influence and prominence analysis, as well as for link formation analysis. This supports our proposition that influence evolution and link formation are highly intertwined. As discussed above, the position of a node within sub-structures can provide insight into the principles underlying network evolution. We are interested in the consequences of preferential attachment and triadic closure on network evolution; second, we want to validate the proposition made in the section above and seek a solution that optimizes trade-offs between the two distinct principles. Based on these discussions, we introduce our framework, the Triad Position Profile (TPP), for influence evolution analysis. Formally, the Triad Position Profile of a node is the census of the distinct positions the node occupies across the triads it participates in (the positions illustrated in Figure 6). In order to analyze the generality and effectiveness of existing influence measures and of our method, we design an experiment to identify their correlation with a node's latent prominence. Evidence that our framework combines the two principles well is provided later.
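As a rough illustration, the sketch below computes a triad-position census for one node. The exact five-position taxonomy comes from Figure 6 of the paper; here we approximate it for an undirected network by recording whether the node sits at the center of an open wedge, at an endpoint of an open wedge, or inside a closed triangle. The position labels are illustrative, not the paper's.

```python
import itertools
from collections import Counter

def triad_position_profile(G, node):
    """Approximate census of the triad positions occupied by `node` in networkx graph G."""
    profile = Counter()
    for u, v in itertools.combinations(G[node], 2):
        if G.has_edge(u, v):
            profile["closed_triangle"] += 1    # node participates in a closed triad
        else:
            profile["wedge_center"] += 1       # node bridges a "forbidden" triad
    for nbr in G[node]:
        for w in G[nbr]:
            if w != node and not G.has_edge(node, w):
                profile["wedge_endpoint"] += 1  # node is an endpoint of a "forbidden" triad
    return profile
```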
Experimental Setup. For a time-varying network Gt at time t, we extract the set of nodes whose arrival time is t and then compute their influence measures based on the topology of Gt. At time t+∆T, for the network Gt+∆T, we classify this set of nodes into the prominent set PNt+∆T and the non-prominent set NPNt+∆T based on the topology of Gt+∆T. In order to demonstrate the discriminative power of the two principles, ∆T is chosen large enough for node influence to evolve; as we know, when ∆T is small the prominence prediction problem becomes easy. Here we take newly arriving nodes as our prediction candidates, because existing nodes are already well evolved and much easier to predict. In this way we can compare the correlations between these metrics and a node's latent prominence quantitatively; we show the p-value associated with each feature and the corresponding significance level in Table 1.
We observe (see Table 1) that the centrality measures do not perform well in describing a node's future prominence, except degree centrality and betweenness (1 sigma), while several TPP positions are significantly better at describing a node's latent prominence. For the user influential probability measure, the historical influence probability information does not give a promising partition of PN and NPN (lacking an external action log, we calculate the user influential probability as shown in Figure 6(c), treating a link construction as an action). We note that in the experiment the sets PNt+∆T and NPNt+∆T are labeled by degree centrality; however, the degree centrality metric itself does not have a very significant correlation with a node's future prominence. This implies that preferential attachment is not the only dimension of social network evolution, as stated in [38] [39] [37]. For the TPP positions, we have several observations: 1) different TPP positions have different abilities to describe a node's future prominence; 2) three of the TPP positions are much better than the centrality measures. These observations hold for the other datasets used in our work.
To summarize, even though centrality measures of influence degree are proved to be good at influence quantification, they are inherently not powerful enough to depict a node's future prominence. Additionally, we can observe that the triad position profile combines the two principles: triad positions 1 and 4 reflect the effect of preferential attachment, while triad position 3 follows the triadic closure principle. This confirms the propositions made above; the effectiveness of our framework will be further validated below.
As the triadic closure principle suggests, for the unclosed triad (triad 2) new links are formed between nodes in position 3; however, we have observed that nodes in position 4 are more likely to be prominent in the future. One possible reason underlying this phenomenon is the preferential attachment principle: nodes in position 4 have higher attractiveness for links. However, in Table 1 we observe that degree centrality does not have a significance comparable to position 4, which suggests that the preferential attachment principle is not the only mechanism at work here.
To study this effect further, we calculated the conditional probabilities of positions 3 and 4: Prob(3|4) is the probability that a node shows up in position 3 given that it is located in position 4, and Prob(4|3) is the probability that a node is located in position 4 given that it is also in position 3. The comparison of these conditional probabilities, shown in Figure 5, is consistent with the triadic closure principle. This explains why position 4 has a higher significance level than position 3, and further confirms that the triadic closure principle is more significant than preferential attachment in social network evolution. This also reveals an important characteristic of the TPP method: the position profile combines two well-known social principles (i.e., preferential attachment and triadic closure).
Influence Evolution and Link Formation
As we conjectured in Section 1, influence evolution and link formation are intertwined; here we provide a detailed investigation of this from the perspective of influence events. Goyal et al. [21] proposed the concept of user influential probability, which captures influence degrees from a historical log of user actions. However, such user actions are not always available in networks, so here we define an action, the link action, between two nodes u and v as follows: Definition 5. For a given node u in the time-varying network Gt = (V, E, TV, TE), u is said to have a link action on node w at time t if (u, w) ∈ E and t ∈ TE(u, w).
Additionally, we define the link influence of a node u on its neighbor v as follows: Definition 6. A node u is said to have a link influence on its neighbor v iff: 1) there is a link action of node u with another node w at time t; 2) there exists a link action of node v with the same node w at time t′; 3) min(TE(u, v)) < t < t′ and t′ − t < σ. Here σ is the average action delay between the two nodes u and v. An example of link influence is presented in Figure 8 (left).
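As a sketch (not the paper's code), Definition 6 can be checked on the TimeVaryingNetwork structure sketched earlier: u link-influences its neighbor v via some w if u acts on w at time t, v acts on the same w at a later time t′, the u-v tie predates t, and the delay t′ − t is below σ.

```python
def has_link_influence(net, u, v, sigma):
    """Return True if u has a link influence on its neighbor v (Definition 6)."""
    e_uv = frozenset((u, v))
    if e_uv not in net.TE:
        return False
    tie_start = min(net.TE[e_uv])               # min(TE(u, v))
    for e in net.E:
        if u in e and v not in e:
            (w,) = e - {u}
            e_vw = frozenset((v, w))
            if e_vw not in net.TE:
                continue
            for t in net.TE[e]:                 # u's link actions on w
                for t2 in net.TE[e_vw]:         # v's link actions on w
                    if tie_start < t < t2 and t2 - t < sigma:
                        return True
    return False
```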
In our work we divide nodes into two groups (prominent nodes and non-prominent nodes); in this section we further study the connection between node prominence and link formation. In Figure 8 we partition the link influence events into 8 categories based on the nodes' prominence. The three digits represent the prominence status of the three nodes u, v, and w, where '1' indicates a prominent node and '0' a non-prominent node. In Table 2 we provide the distribution of several patterns:

Table 2: Distribution of link influence patterns (columns: |1XX|, |0XX|, |X1X|, |X0X|, |XX1|, |XX0|, |11X|, |00X|, |10X|, |01X|).
Condmat   1530   365   1513   382    95   1800   1316   168    214   197
DBLP      1377   438   1329   486    15   1800    681   498    369   267
Enron    11769   249  11787   231   187  11831  11549    11    220   238
Facebook  6203  2775   6196  2782    10   8977   4794  1373   1409  1402

We observe that: 1) |1XX| > |0XX| and |X1X| > |X0X|, which means prominent nodes have a much higher probability of exerting link influence on their neighbors, and also validates the principle of preferential attachment; 2) additionally |XX0| > |XX1|, so non-prominent nodes play an important role in transferring link influence; 3) |11X| > |00X|, which states that link influence is more likely to occur between prominent nodes; 4) |10X| ≈ |01X|, so if link influence occurs between a prominent node and a non-prominent node, both have the same chance of initiating the influence. To summarize, this validates the intimate interactions between influence evolution and link formation. We therefore set out to validate the effectiveness of our framework on these two microscopic problems.
INFERRING FUTURE PROMINENCE
To demonstrate the correctness of our framework, we apply our approach to the prominence prediction problem and compare it with baseline methods. Note that we classify nodes as PN or NPN, making it a binary classification task. We first discuss the construction of the feature vector.
Feature Vector Engineering
We first integrate the various measures capturing the notion of influence into one feature vector. In addition to the different measures described in Table 1 (other than the TPPs), we also include some measures introduced in Burt's work [20], such as efficiency, constraint, and hierarchy. These features constitute the feature vector for the Baseline method.
The census of the five TPP positions constitutes the features of our TPP method for prediction. In addition, we developed a method based on a triad sub-structure influence census (TPP+), as follows.
We first compute the link influence probability (LIP) of a node u, that is, the fraction of u's link actions that result in a link influence on some neighbor. In Figure 8 we can see that a node u with high LIP is more likely to attract links for its neighbors. Our heuristic is: if a node has a large number of connections to high-LIP nodes, then it has a higher probability of becoming prominent in the future. Based on this heuristic we design two features, prominence prob and prominence index, to describe a node's prominence trend. Thus, TPP+ comprises TPP together with prominence prob and prominence index.
The features for the Baseline method, TPP, and TPP+ are listed in Table 3. For all methods, we use Bagging with Logistic Regression as the supervised learning model. Our goal here is to evaluate the utility of the additional information we impute into the feature vector rather than the quality of a particular learning algorithm; we conjecture that the benefits of another learning algorithm would apply uniformly across the task and provide improvements across the board.
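As a minimal sketch of this setup with scikit-learn, the feature matrix (rows = nodes, columns = the Table 3 features) and the PN/NPN labels are assumed to be built elsewhere.

```python
from sklearn.ensemble import BaggingClassifier
from sklearn.linear_model import LogisticRegression

def train_prominence_classifier(X_train, y_train, n_bags=10):
    """Bagging with Logistic Regression, the learner used for all feature sets."""
    clf = BaggingClassifier(LogisticRegression(max_iter=1000), n_estimators=n_bags)
    clf.fit(X_train, y_train)   # y: 1 = PN, 0 = NPN
    return clf
```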
Experimental Settings
In our experiments we only allow methods to observe features of nodes for a short duration after the nodes arrive; for example, for Condmat and DBLP we only use the first year of observations.
Classification Performance
In Table 4, we provide an empirical comparison of learning performance. We observe that our approach TPP+ outperforms the baseline method in terms of AUPR and Top@50, and has better or comparable performance in terms of AUC and Accuracy. TPP+ improves AUPR by 0.7%-16.4% and improves Top@50 by 0%-42.8%. This confirms that our approach generalizes across different domains of datasets. The performance of TPP is provided in Section 6, and it is also better than the baseline method. We draw several conclusions: 1) the preferential attachment principle is just one dimension of the mechanisms underlying nodal influence evolution; 2) the trade-offs between triadic closure and preferential attachment are well balanced in the triad position profile, and as a result it achieves better performance on the prediction task.
Impact of Different Influence Models
There are different centrality and influence measures for evaluating the prediction of a node's prominence, and it is not possible to enumerate performance across each of those dimensions. To resolve this and provide a robust evaluation, we used influence propagation models for further validation.
In the prominence prediction problem, our task is to predict whether the set of nodes arriving at time t (denoted NAN, for newly arriving nodes) will become prominent at time t + ∆T. The prominence predictors rank the nodes in NAN based on their likelihood of being prominent in the future. When applied to the influence maximization problem, a simple method is to extract the top k ranked nodes (based on different metrics, e.g., degree) as the seed set. The seed set extracted with our approach is denoted triad profile, while for the baseline method we denote the corresponding set baseline. Comparing the influence spread of these two seed sets in the future network Gt+∆T suggests which method is a better indicator of future prominence. Besides this, we also build a reference system for validating predictability. We employ DegreeDiscount [2] (an efficient and scalable algorithm for influence maximization) to identify the top k seed set from NAN based on the topology of network Gt; this top k seed set is denoted DegreeDiscount_t. If the seed set extracted from a prominence predictor's results has a better influence spread than DegreeDiscount_t, then we consider that predictor to possess predictability for dynamic influence maximization.
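For reference, here is a minimal sketch of the DegreeDiscount heuristic [2], restricted to a candidate set (e.g., NAN); p is the uniform propagation probability of the independent cascade model, and its value here is illustrative.

```python
def degree_discount(G, candidates, k, p=0.01):
    """Select k seeds from `candidates` in networkx graph G via DegreeDiscount."""
    seeds = []
    dd = {v: G.degree(v) for v in candidates}   # discounted degrees
    t = {v: 0 for v in candidates}              # count of already-selected neighbors
    for _ in range(min(k, len(dd))):
        u = max(dd, key=dd.get)
        seeds.append(u)
        del dd[u]
        for v in G[u]:
            if v in dd:
                t[v] += 1
                d_v = G.degree(v)
                dd[v] = d_v - 2 * t[v] - (d_v - t[v]) * t[v] * p
    return seeds
```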
As shown in Figure 9, our approach still outperforms the baseline method. On the DBLP, Enron, and Facebook datasets, our approach demonstrates its predictability for dynamic influence maximization, while on Condmat the ∆T is too short and thus the benefit of predictability is not significant. Note that our method is not designed for the purpose of influence maximization; the comparisons under the influence propagation model are employed to provide an empirical and solid comparison between the baseline method and TPP+.
LINK PREDICTION
As discussed above, influence evolution and link formation/dissolution are intimately connected. In this section we demonstrate that our framework can also interpret the link formation process better than has heretofore been possible.
Triad Evolution Matrix Predictor
Motivated by our discussion in Sections 3.2 and 3.4, here we introduce a method called the Triad Evolution Matrix (TEM), adapted from the triad position profile to perform the link prediction task. All possible 3-subgraphs of an undirected network are presented in Figure 10(a), and their transition relationships are provided in Figure 10(b). Based on our observations in Section 3.2, different kinds of triads have different probabilities of becoming closed triads; this inspires us to perform a census of node pairs' collocations in these triads and gain discernment in predicting new links.
The TEM is a matrix of size n × n (n = 4), where n is the number of triad types in an undirected network, and TEM[i, j] represents the percentage of triad-i instances at time t that evolve to triad-j at time t+1 (Figure 11(a)). This matrix can be computed trivially by counting triads in the network Gt at time t and then calculating the entries by checking the network Gt+1 at time t+1. Additionally, in the network G there are four possible triad collocation elements (Figure 11(b)) for two nodes s and t with e(s, t) ∉ G. Thus, for any two nodes s and t collocated in TCE i (triad collocation element i, i ∈ {0, 1, 2, 3}), we can compute the likelihood of a potential link between s and t from the corresponding TEM transition. In this way we get a likelihood vector over the TCEs. Note that here we take the dot product of two vectors of size 4, so the result is a real number; also, the case in which two nodes s and t are collocated in TCE 1 is equivalent to the case in which they are collocated in TCE 2 for undirected link prediction.
Thus for two nodes s and t we can calculate a probability vector based on the corresponding TCE vector TCE_{s,t} = (|TCE0_{s,t}|, |TCE1_{s,t}|, |TCE2_{s,t}|, |TCE3_{s,t}|), where |TCEi_{s,t}| states how many TCE i elements the nodes s and t are collocated in. The corresponding probability vector TEM_prob(s, t) is obtained by weighting these counts with the TEM transition likelihoods. In this way the vector gives a multi-dimensional description of the link likelihood of a pair of nodes s and t based on the principle of triadic closure, while also balancing well with preferential attachment.
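To make the construction concrete, the sketch below estimates the TEM from two snapshots, indexing triad types by edge count (0, 1, 2, 3) to match the four types in Figure 10(a); sampling node triples rather than enumerating them is an efficiency shortcut we introduce, not part of the paper.

```python
import itertools
import random
import numpy as np

def triad_type(G, a, b, c):
    """Triad type indexed by the number of edges among the three nodes."""
    return sum(G.has_edge(x, y) for x, y in itertools.combinations((a, b, c), 2))

def triad_evolution_matrix(G_t, G_t1, n_samples=100000):
    """Estimate TEM[i, j] = P(triad-i at time t -> triad-j at time t+1)."""
    counts = np.zeros((4, 4))
    nodes = list(G_t.nodes())
    for _ in range(n_samples):
        a, b, c = random.sample(nodes, 3)
        counts[triad_type(G_t, a, b, c), triad_type(G_t1, a, b, c)] += 1
    row_sums = counts.sum(axis=1, keepdims=True)
    return np.divide(counts, row_sums, out=np.zeros_like(counts), where=row_sums > 0)
```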
Link Influence Census
As discussed in Section 4, we can calculate the LIP of an individual node using the census of link influences. Similarly, we can compute the LIP of a node u on a specific neighbor v. Thus, for a pair of nodes v and w, we can calculate their link likelihood based on the observed link influence probability information and the most recent link actions within a ∆t window.
First, for each common neighbor u of v and w, we calculate the probability of a link between v and w due to the influence of u.
In the equations above, p^{u,v}_{v,w} represents the probability of a link between v and w due to the influence probability of u on v, while p^{u,w}_{v,w} represents the probability of a link between v and w due to the influence probability of u on w. Trivially, we can use the maximum of the two to represent the probability of a link between v and w due to node u, denoted p^u_{v,w}. Since a pair of nodes v and w may have many common neighbors, we define the probability of a link between v and w due to the link influence effect by combining the per-neighbor probabilities.
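The exact combining formula is elided above; one natural reading, stated purely as an assumption and not as the paper's formula, is a noisy-or over the per-common-neighbor probabilities: the link fails to form only if every common neighbor fails to induce it.

```python
def link_influence_probability(per_neighbor_probs):
    """Noisy-or combination of p^u_{v,w} over common neighbors u (assumed form)."""
    prob_no_link = 1.0
    for p_u in per_neighbor_probs:
        prob_no_link *= (1.0 - p_u)
    return 1.0 - prob_no_link
```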
Inferring New Links
Experimental Setup. We set default values for the classifiers used in this paper: 10 bags of 10 random forest trees for HPLP (the same setting as in [25]), and 10 bags of logistic regression for TEM−, TEM, and TEM+. The feature lists of all models are presented in Table 5. The TEM method combines TCE_{s,t} and TEM_prob(s, t) into the feature vector, which includes the node pair collocation information and the triad evolution information learned from historical data. In TEM+ we include one more feature (Link Influence Probability) than in TEM; this feature is introduced in Equation 5. TEM− only includes the node pair collocation information described by TCE_{s,t}, where only static topological information is included. In this way we can investigate the generality of TEM_prob(s, t) and the Link Influence Probability. The performance of TEM− can be found in Table 8.
The reason we select HPLP for comparison is that HPLP includes almost all centrality measures frequently used in link analysis, and it is also the best feature-based link prediction framework to date. Another reason is that the HPLP method can be considered a naive combination of preferential attachment and triadic closure, where the values of preferential attachment and common neighbors are simply combined into one feature vector. We undersample the training set to 30% positive class prevalence during training; we do not change the size or distribution of the testing data. In this paper, we restrict the prediction task to the set of two-hop node pairs [25].
Figure 13: Link Influence Probability Method
In Table 6 we present performance comparisons of our methods with HPLP [25] and the state-of-the-art methods listed in [24] (Adamic/Adar, Common Neighbors, and Preferential Attachment). As suggested in [30], given the high class imbalance, the area under the precision-recall curve (AUPR) should be used as the primary evaluation measure. We can see that the TEM+ and TEM methods significantly outperform the HPLP method, by up to 121% in terms of AUPR. We observe almost the same pattern as in Section 4; thus, the position profile methodology is consistently effective for both prominence prediction and link prediction. Additionally, we find that our framework (TEM+ and TEM) is better than the PA, CN, and AA methods. This implies that the combination of preferential attachment and triadic closure is better than either of them alone.
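As a minimal sketch of the evaluation recommended in [30] for this highly imbalanced task, using scikit-learn (the true labels and predicted link likelihoods for the two-hop candidate pairs are assumed to come from the trained models above):

```python
from sklearn.metrics import average_precision_score, roc_auc_score

def evaluate_link_predictions(y_true, y_scores):
    """Return (AUPR, AUC) for predicted link likelihoods on candidate pairs."""
    return average_precision_score(y_true, y_scores), roc_auc_score(y_true, y_scores)
```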
To be rigorous, in Figure 12 we also provide ROC curves and PR curves for three methods: TEM, HPLP, and PA. HPLP includes almost all classical predictors frequently used in link analysis and can be considered a naive combination of preferential attachment (i.e., PA) and triadic closure (i.e., common neighbors and Adamic/Adar). We observe that: 1) the TEM method outperforms the PA method significantly in terms of both the ROC curve and the PR curve, which means preferential attachment is not the single origin underlying link formation; 2) the HPLP method outperforms the PA method in all cases, which indicates that neither preferential attachment nor triadic closure is negligible in network evolution; 3) the TEM method is better than the HPLP method, which means TEM successfully optimizes the trade-offs between preferential attachment and triadic closure. All of these results demonstrate the strength of our framework in predicting new links.
GENERALIZATION ACROSS DATASETS: A CASE FOR TRANSFER LEARNING
In the sections above we have demonstrated that the triad position profile has a stronger generalization capacity than nodal-attribute-based methods in predicting future prominent nodes and predicting new links. To be rigorous, we now ask: are these features powerful enough to transfer learning from one social network to another? If our framework is able to generalize across datasets, then this further demonstrates that it captures the essential principles of network evolution.
Generalization-the Prominence Prediction
We first consider the prominence prediction problem. In Table 7, we provide the transfer learning results for the baseline model and the TPP model. Each generalization pair is trained on the row dataset and evaluated on the column dataset using Bagging with logistic regression, as before. The diagonal entries represent the performance of models trained and tested on the same dataset, which makes comparisons convenient.
There are several observations. First, we find that a few generalization entries have higher performance than their corresponding non-generalization entries (diagonal entries); for example, two improved entries of TPP belong to the generalizations from Enron and Facebook to DBLP. Second, we observe that the TPP model's performance degrades remarkably less than the baseline model's in most cases. This indicates that the position profile of a node captures principles that are more generic than those of the centrality-based model, and this still holds even when the generalization is across different domains of networks. Third, the generalization of the position profile methodology is not significantly affected by the varying difficulty of prediction across network domains, or by the imbalance between training set and testing set sizes: for example, prominence prediction is more difficult on the DBLP dataset, yet the models learned on DBLP still have performance comparable to the corresponding diagonal entries; additionally, even though the DBLP training set size is only around 100, it still works well on the testing sets of Condmat, Enron, and Facebook, which contain up to thousands of examples. Fourth, the difficulty of prediction in each dataset is not affected by the generalization: all performances in the same column are of the same order of magnitude. Additionally, by comparing the performance of TPP and TPP+ we can see that the TPP method is more stable under generalization across datasets; the reason is simple: TPP+ contains two features (prominence prob and prominence index) that are not as generic across different domains of datasets. This further confirms that the position profile is a general cross-domain property for influence evolution analysis.
In conclusion, the position profile based model is notably more generic across different domains of networks, while the centrality-based model is more particular to a specific dataset.
Generalization-the Link Prediction
As discussed before, link formation and influence evolution always accompany each other. We also conducted an empirical generalization of the link prediction task across different datasets in Table 8. Note that, in order to have a fair comparison, we use 10 bags of logistic regression for all methods. Most of the observations made for the generalization of prominence prediction still hold for the link prediction problem. First, this validates the intimate interaction between the influence evolution and link formation mechanisms; we see the same overall pattern. For example, both the baseline and HPLP show large drops in performance when generalizing from Facebook to Condmat and from Facebook to Enron. Second, this suggests that the properties captured by the position profile (in both problems) are indeed general across datasets.
In conclusion, based on the generalization of the prominence prediction and link prediction problems across datasets, we postulate that the positions in which nodes are located are more significant in determining their evolution orbits than the nodal attributes they possess. Our position profile methodology depicts network evolution with a greater degree of precision than has heretofore been possible. This is due to the optimized trade-offs between triadic closure and preferential attachment in our triad position profile methodology.
CONCLUSIONS
In this paper we analyzed several principles/mechanisms underlying network evolution, focusing on two essential elements: individual influence evolution and the link formation/dissolution mechanism. We demonstrated that the position of a node in a local structure is strongly indicative of the influence progression, or future prominence, of the node in the social network. Building on this observation, we developed a prominence prediction method as well as a method for link prediction. We showed that node prominence and the process of link formation are closely intertwined and together shape the evolution of a network. We empirically demonstrated the improvement in performance over the baseline methods for both prominence prediction and link prediction across the four different datasets. We further established the generalization capacity of our methods under a transfer learning scenario: we learned the classifier on one social network (using the proposed features) and tested it on another social network. The performance trends clearly showed that our approach captures essential properties underlying network evolution that are general across different domains of social networks.
These findings are important for several reasons. First, they provide microscopic evidence that triadic closure is a fundamental principle underlying social network evolution. Second, our methodology (the triad position profile) is validated to optimize trade-offs between essential dimensions of network evolution (preferential attachment and triadic closure); it is then not surprising that, as a consequence, our approach yields accurate and generic performance on both microscopic problems. In summary, we have developed a new perspective on network evolution and a general-purpose feature vector that can be used by different machine learning algorithms across different social networks. | 2014-09-18T06:15:46.000Z | 2013-10-05T00:00:00.000 | {
"year": 2013,
"sha1": "445d8bc362d7c71b32cb0f09bfa16e318e5da16c",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "445d8bc362d7c71b32cb0f09bfa16e318e5da16c",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": [
"Computer Science",
"Physics"
]
} |
119708376 | pes2o/s2orc | v3-fos-license | On Einstein metrics, normalized Ricci flow and smooth structures on $3\mathbb{CP}^2 # k \bar{\mathbb{CP}}^2$
In this paper, first we consider the existence and non-existence of Einstein metrics on the topological 4-manifolds $3\mathbb{CP}^2 \# k \bar{\mathbb{CP}}^2$ (for $k \in \{11, 13, 14, 15, 16, 17, 18\}$) by using the idea of Rǎsdeaconu and Şuvaina (2009) and the constructions in Park, Park, and Shin (arXiv:0906.5195v2) and in Park, Park, and Shin (2009). Then, we study the existence or non-existence of non-singular solutions of the normalized Ricci flow on the exotic smooth structures of these topological manifolds by employing the obstruction developed in Ishida (2008).
Introduction
Recent years have witnessed a drastic increase in our understanding of the topology and geometry of 4-manifolds and complex surfaces. The newest developments can be exemplified by the construction of simply connected surfaces of general type with small topology [22,23], by the unveiling of a myriad of exotic smooth structures on small 4-manifolds [1], and by how these manifolds have provided an adequate environment for the study of fundamental questions in Riemannian geometry that were previously out of reach.
In particular, intriguing questions regarding Einstein metrics ( [24]), and the relation between smooth and geometric structures (like the Yamabe invariant and the normalized Ricci flow) on a given topological 4-manifold ( [13,14]) have been immediate beneficiaries of the novel constructions. In this paper we employ the procedure of R. Rȃsdeaconu and I. Şuvaina ( [24]) to the constructions of H. Park, J. Park and D. Shin ( [23]), and to those of A. Akhmedov and B.D. Park ([1]) to study the (non)-existence of Einstein metrics, and the (non)-existence of nonsingular solutions to the normalized Ricci flow on small manifolds (although bigger than those considered in [24]).
Our main results are the following. Theorem 1. Let k ∈ {11, 13, 14, 15, 16, 17, 18}. Each of the topological 4-manifolds 3CP^2 # kCP^2 admits a smooth structure that has an Einstein metric of scalar curvature s < 0, and infinitely many non-diffeomorphic smooth structures that do not admit Einstein metrics.
Regarding the non-singular solutions to the normalized Ricci flow on the exotic smooth structures of the manifolds from Theorem 1, and in the spirit of [14], the following result (Proposition 2) is proven. We are also able to prove that for k ≥ 9, each of the reducible manifolds of Theorem 1 has infinitely many smooth structures that do not carry an Einstein metric, all of which have negative Yamabe invariant, and on which the only solutions to the normalized Ricci flow for any initial metric are singular (Proposition 11). Moreover, for k ≥ 8, the manifolds of our theorem do not admit anti-self-dual Einstein metrics (Lemma 12). Theorem 1 and Proposition 2 extend the results in [24] and [14], and improve results of [21] and [5].
The (non-)existence of Einstein metrics on different smooth structures on small blow-ups of 3CP^2 was previously considered by V. Braungart and D. Kotschick in [5]. In that paper, the authors proved the cases k = 17 and k = 18 of Theorem 1. By a result of F. Catanese [8], in the case k = 18 the manifolds with Kähler-Einstein metrics in this paper and in [5] are diffeomorphic.
The paper is organized as follows. In Section 2 we determine the homeomorphism types of the complex surfaces built by H. Park, J. Park and D. Shin; the second part of the section provides a description of their surfaces. The third section contains the construction of an Einstein metric on each of these surfaces of general type. The non-existence of these metrics on the topological prototypes is addressed in Section 4. The proof of Theorem 1 is spread throughout the first four sections. In Section 5, we study the sign of the Yamabe invariant and the solutions to the normalized Ricci flow on the exotic smooth structures; that is, Proposition 2 is proven in the fifth and last section.
Homeomorphism type
The following theorem was proven in [23]; here K denotes the canonical divisor class of the complex surface. Our enterprise starts by pinning down a homeomorphism type for each of these complex surfaces. From now on, let S be one of such surfaces. Surfaces of general type are Kähler (see, for example, [20, Lemma 2]). Thus, one has b_2^+(S) = 2p_g + 1 = 3. On the other hand, the Euler characteristic and signature of S are determined by K^2, and the claim follows by substituting σ(S) and χ(S). From these computations we also observe Corollary 5. These manifolds satisfy the Hitchin-Thorpe inequality [12, Theorem 1].
Proof. The claim is 2χ + 3σ > 0. Indeed, we have 2χ + 3σ = c_1^2(S) = K^2, which is always a positive number for the manifolds considered in this paper.
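The elided computations above can be reconstructed from standard surface theory; the following is a sketch, under the stated hypotheses p_g = 1 and q = 0, using Noether's formula and the signature theorem. It recovers both the Betti numbers used below and the quantity 2χ + 3σ.

```latex
\begin{align*}
  \chi(\mathcal{O}_S) &= 1 - q + p_g = 2,\\
  \chi(S) = c_2(S) &= 12\,\chi(\mathcal{O}_S) - c_1^2(S) = 24 - K^2,\\
  \sigma(S) &= \tfrac{1}{3}\bigl(c_1^2(S) - 2c_2(S)\bigr) = K^2 - 16,\\
  b_2^+(S) &= 2p_g + 1 = 3,\qquad b_2^-(S) = b_2^+(S) - \sigma(S) = 19 - K^2,\\
  2\chi(S) + 3\sigma(S) &= c_1^2(S) = K^2 > 0.
\end{align*}
```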
It follows from Rokhlin's Theorem [25] that the manifolds built in [23] are non-spin. We are now ready to determine the topological prototypes of the minimal surfaces of general type in question by using Freedman's Theorem [10] and Donaldson's results [9]. The possible homeomorphism types are arranged in the following proposition.
Proposition 6. Let S be a simply connected surface of general type with p_g = 1 and q = 0. Then S is homeomorphic to 3CP^2 # (19 − K^2)CP^2. We proceed to give a sketch of the construction for the example with K^2 = 6. The reader is referred to the quoted papers for details.
The starting manifold is a particular rational elliptic surface E(1), obtained by blowing up a well-chosen pencil of cubics in CP^2. Take the double cover of this rational elliptic surface E(1), and call it Y. The complex surface Y is an elliptic K3 surface; this complex manifold Y is a common ingredient in all of the minimal surfaces produced in [23].
In particular, for the surface with K^2 = 6 that we are describing in this section, one considers (within Y) two I_8-singular fibers, two I_2-singular fibers, one nodal singular fiber, and three sections.
By blowing up Y 18 times at carefully selected points (see [23, Section 4.5, Fig. 13]), one obtains a surface Z := Y#18CP^2. The surface Z contains five disjoint linear chains of CP^1's, including the proper transforms of the sections; the linear chains are denoted by dual graphs, which have been labeled for the purposes of Section 3. One proceeds to contract these five chains of CP^1's from Z. Since Artin's criterion is satisfied ([2]), the contraction produces a projective surface with special quotient singularities; denote it by X. At this step, H. Park, J. Park and D. Shin use Q-Gorenstein smoothings to deal with the singularities. Each singularity admits a local Q-Gorenstein smoothing. In Section 3 of [23], they prove that the local smoothings can actually be glued to a global Q-Gorenstein smoothing of the entire singular surface by showing that there is no obstruction to doing so. The surface of general type S with p_g = 1, q = 0 and K^2 = 6 is a general fiber of the smoothing of X; in the papers of H. Park, J. Park and D. Shin, S is denoted by X_t.
The argument regarding the minimality of S goes as follows. Let f : Z → X be the contraction map of the chains of CP^1's from Z to the singular surface X. Using the technique in, for example, Section 5 of [22], one sees that the pullback f*K_X of the canonical divisor K_X of X is effective and nef. Therefore, K_X is nef as well, which implies the minimality of S.
Existence of Einstein metrics
The existence of an Einstein metric on a given manifold is hard to prove. In the case of interest in this paper, where the manifold is a minimal complex surface of general type that does not contain any (-2)-curves, the following criterion was found independently by T. Aubin and by S.-T. Yau: a compact complex surface admits a Kähler-Einstein metric of negative scalar curvature whenever its canonical line bundle is ample (Theorem 7). In order to apply Theorem 7, the following result needs to be proven. Proposition 8. There exist simply connected surfaces of general type with p_g = 1, q = 0, K^2 = 1, 2, 3, 4, 5, 6 or 8, and ample canonical bundle.
The rest of the section is devoted to this endeavor. We carry out the argument for the surface with K^2 = 6; the other examples can be dealt with in a similar fashion.
3.1. Proof of Proposition 8. The following proof closely follows the argument of R. Rȃsdeaconu and I. Şuvaina used to prove Theorem 1.1 in [24].
Proof. Theorem 3 settles the existence part of the proposition. According to [23], Z contains five disjoint linear chains; using the labels we put on their dual graphs in Section 2, let us denote the four longer chains by G_1, ..., G_4, and let L be the chain of length one. Name F_i, i = 1, ..., 11, the eleven smooth curves of self-intersection -1 represented by the dotted lines labeled -1 in Fig. 13 of [23]. We point out that the Poincaré duals of the irreducible components of the five chains and those of the curves F_i form a basis of H^2(Z; Q).
Let f : Z → X be the contraction map. Then one can write f*K_X = K_Z + Σ_i a_i E_i, where the E_i are the irreducible components of the contracted chains. The coefficients a_i can be computed explicitly (see [22]); for our purposes it suffices to know that they are positive rational numbers. In particular, the pullback of the canonical divisor of the singular variety to its minimal resolution is effective. Let Exc(f) denote the exceptional divisor of f, i.e., the union of the irreducible components contracted by f. We wish to show that the canonical bundle K_X is ample. This implies our claim: indeed, recall that S is a general fiber of the Q-Gorenstein smoothing of X, and ampleness is an open property ([15]). Moreover, we know K_X is nef. To show that it is ample as well, we proceed by contradiction.
Suppose K_X is not ample. By its nefness and according to the Nakai-Moishezon criterion ([15]), there exists an irreducible curve C ⊂ X such that (K_X · C) = 0.
The total transform of C in Z is C′ plus a non-negative rational combination of the irreducible components of the exceptional divisor. Here C′ stands for the strict transform of C, and the coefficients w_i, x_i, y_i, z_i, t are non-negative rational numbers. It is straightforward to see that C′ is not numerically equivalent to 0 ([24]).
We compute (f*K_X · f*C) = (K_X · C) = 0. The intersection number of the curve C′ with any component of the exceptional divisor Exc(f) is greater than or equal to zero, with equality only when C′ is disjoint from all the irreducible components of Exc(f); this is equivalent to the curve C missing the singular points of X. At this point there are two possible scenarios.
• Either there is an i_0 ∈ {1, ..., 11} such that (C′ · F_{i_0}) < 0, or • the equality (C′ · F_i) = 0 holds for all i = 1, ..., 11. The first scenario would require C′ to coincide with F_{i_0}. This is not the case: since f*K_X is nef, (f*K_X · F_i) > 0 holds for all i = 1, ..., 11, which is impossible under our assumption. Thus, the intersection numbers of the curve C′ with all the F_i and with all of the irreducible components of Exc(f) must be zero. However, as remarked earlier, the Poincaré duals of the F_i and those of the irreducible components of Exc(f) generate H^2(Z; Q). This implies that C′ would have to be numerically trivial on Z, a contradiction.
Non-existence of Einstein metrics: Exotic smooth structures
Topologically there is no obstruction to the existence of an Einstein metric on the surfaces of general type we are working with (cf. Corollary 5). We now proceed to study the non-existence of Einstein metrics with respect to their exotic differentiable structures.
When one considers different smooth structures on 4-manifolds, the main obstruction to the existence of an Einstein metric is a result of LeBrun (Theorem 10 below), which generalizes work done by C. LeBrun in [17] and by D. Kotschick [16].
Proposition 11. Let 9 ≤ k ≤ 18. The topological manifolds 3CP 2 #kCP 2 support infinitely many smooth structures that do not admit an Einstein metric. Moreover, each of these manifolds admits infinitely many smooth structures, all of which have negative Yamabe invariant, and on which there are no non-singular solutions to the normalized Ricci flow for any initial metric.
Proof. We make use of the infinite family {X_n} of pairwise non-diffeomorphic 4-manifolds (with non-trivial Seiberg-Witten invariants) sharing the topological prototype 3CP^2#4CP^2, built in [1]. The first part of the lemma now follows by setting r ≥ 5 in LeBrun's result (Theorem 10); note that the blow-up formula [11, Theorem 1.4] allows us to conclude that the manifolds in the infinite family {X_n#(4 + r)CP^2} are pairwise non-diffeomorphic. For the claims regarding the Yamabe invariant and the solutions to the normalized Ricci flow, see Section 5 below.
4.1. Non-existence of anti-self-dual Einstein metrics. Using another obstruction theorem of LeBrun from [19], we obtain the following lemma.
Lemma 12. Let 7 ≤ k ≤ 18. The topological manifolds 3CP 2 #kCP 2 support infinitely many smooth structures that do not admit an anti-self-dual Einstein metric.
Proof of Proposition 2
The following argument is based on the proof of Theorem B in [14].
Proof. We start with the part of (1) concerning the sign of the Yamabe invariant. Consider the smooth structures associated with the minimal surfaces of general type taken from [23]. By [18], their Yamabe invariant is negative. The existence of non-singular solutions to the normalized Ricci flow follows from Cao's theorem ([6], [7]) by taking as an initial metric a Kähler metric whose Kähler form lies in the cohomology class of the canonical line bundle.
For Property (2), consider the smooth structures used in Theorem 1 that were built by A. Akhmedov and B.D. Park in [1]: the infinite family {X_n} of pairwise non-diffeomorphic minimal manifolds homeomorphic to 3CP^2#4CP^2. These manifolds have non-trivial Seiberg-Witten invariants, and c_1^2 > 0 holds for all of them. Thus, by [17], their Yamabe invariant is strictly negative. By a result of M. Ishida (Theorem B in [13]), there are no non-singular solutions to the normalized Ricci flow on X_i for any i and any initial metric. | 2012-08-24T09:11:01.000Z | 2010-09-06T00:00:00.000 | {
"year": 2010,
"sha1": "20e6c5e4e4e0eb06efc57fe0a0b279022026e2c8",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "20e6c5e4e4e0eb06efc57fe0a0b279022026e2c8",
"s2fieldsofstudy": [
"Mathematics"
],
"extfieldsofstudy": [
"Mathematics"
]
} |
84839378 | pes2o/s2orc | v3-fos-license | Evaluating the effectiveness of domestic abuse prevention education : Are certain children more or less receptive to the messages conveyed ?
Publisher rights © 2014 The Authors. Legal and Criminological Psychology published by John Wiley & Sons Ltd on behalf of British Psychological Society This is an open access article published under a Creative Commons Attribution License (https://creativecommons.org/licenses/by/3.0/), which permits unrestricted use, distribution and reproduction in any medium, provided the author and source are cited.
Purpose. A number of school-based domestic abuse prevention programmes have been developed in the United Kingdom, but evidence as to the effectiveness of such programmes is limited. The aim of the research was to evaluate the effectiveness of one such programme and to see whether the outcomes differ by gender and experiences of domestic abuse.
Method. Pupils aged 13-14 years, across seven schools, receiving a 6-week education programme completed a questionnaire to measure their attitudes towards domestic violence at pre-test, post-test, and 3-month follow-up, and also responded to questions about experiences of abuse (as victims, perpetrators, and witnesses) and help seeking. Children in another six schools not yet receiving the intervention responded to the same questions at pre- and post-test. In total, 1,203 children took part in the research.
Results. Boys and girls who had received the intervention became less accepting of domestic violence and more likely to seek help from pre-to post-test compared with those in the control group; outcomes did not vary by experiences of abuse. There was evidence that the change in attitudes for those in the intervention group was maintained at 3-month follow-up.
Conclusions. These findings suggest that such a programme shows great promise, with both boys and girls benefiting from the intervention, and those who have experienced abuse and those who have not (yet) experienced abuse showing a similar degree of attitude change.
In the United Kingdom, high rates of abuse in teenage dating relationships have been found (Barter, McCarry, Berridge, & Evans, 2009), highlighting the significance of the issue in the lives of many young people. Through a survey involving 1,353 young people aged 13-17 years, Barter et al. (2009) found that 22% had experienced moderate physical violence and 8% had experienced more severe physical violence. High rates of emotional abuse among teenagers were also exposed by Barter et al. (2009): three quarters of girls and 50% of boys had experienced this form of abuse. A sizeable minority (31% of girls compared to 16% of boys) reported having been pressured or forced to do something sexual such as 'kissing, touching or something else', and 18% of girls and 11% of boys reported having been pressured or forced to have sex.
Similar rates of victimization have been reported across Europe and North America. A recent review by O'Leary and Smith Slep (2012) reported rates in the low 20% range for middle-school students and between 32% and 38% for high-school students. Studies that have sampled a wide age range suggest that the peak age for perpetrating domestic abuse is between 16 and 18 years (Foshee, Reyes, & Wyckoff, 2009; Nocentini, Menesini, & Pastorelli, 2010). There is, therefore, good reason to target preventative interventions at teenagers in early adolescence.
Over the past 10 years, a number of domestic abuse prevention education programmes have emerged in the United Kingdom. However, few have been formally evaluated, and the evaluations that do exist have been small scale and methodologically limited. Rarely are experimental methods used to assess attitudinal or behavioural change. Often, qualitative methods are used to explore the perceived benefits of a programme, including young people's perceptions of what they are taught and how it has been delivered (e.g., Bell & Stanley, 2005; Hester & Westmarland, 2005; Scottish Executive, 2002), but with little account taken of whether the intended messages of the programme have actually been learnt. This is true of many school-based domestic abuse prevention programmes developed in the United Kingdom. Hester and Westmarland (2005) reported on five such small-scale UK-based projects. In two of the projects, pre- and post-test questionnaires were used to assess knowledge of and attitudes towards domestic abuse. In all these evaluations, analyses involved comparing the percentages of responses to individual questions at pre- and post-test with no attempt to match respondents at the two points of testing; the failure to use inferential statistics means that it is not known whether the changes were statistically significant. Furthermore, the absence of a control group makes it difficult to rule out alternative explanations of the positive changes, such as a local history effect. Stanley, Ellis, and Bell (2011) reported on an evaluation of a Domestic Violence Awareness Raising Programme delivered by an external agency. The programme was delivered as planned (over six sessions) in only two of the four schools originally targeted. In total, 74 young people completed measures at pre- and post-test, with analyses involving the comparison of average responses to 12 individual items tapping into their knowledge of and attitudes towards domestic violence, indicating positive changes for 6 of the 12 items. However, gender differences emerged, with many boys responding to the programme with cynicism or apathy.
Some UK programmes aim to tackle dating violence specifically, whereas other programmes have a slightly wider remit of addressing the issue of domestic abuse, focusing on abuse in teenage relationships, abuse in adult relationships, and with consideration of children as witnesses. What most UK programmes have in common, however, is a commitment to raising awareness of abuse in relationships, tackling the underlying attitudes that give rise to abusive tendencies, and encouraging more young people to seek help. The recent enlargement of the UK government's definition of domestic abuse to young people aged 16 and above renders the need to conduct research and evaluation on preventative education all the more urgent (Home Office, 2013). For consistency, the term 'domestic abuse' will be used in this study, except when referring to studies that have specifically used the term 'dating violence'.
In the United States, experimental designs have become the norm rather than the exception. Evaluations in the United States have typically involved large sample sizes of 500 or greater and experimental designs (e.g., treatment and control conditions), some with random allocation of participants, classes, or schools to conditions. Established scales are often used to measure knowledge, attitudes, and in some cases, behaviour (e.g., perpetration and victimization), with individual item analyses being the exception rather than the norm (Whitaker et al., 2006). Studies classed as high in overall quality in the review by Whitaker et al. (2006) have also used random allocation of participants or schools to conditions (Foshee et al., 1998; Wolfe et al., 2003). Whitaker et al. (2006) describe the overall quality of the 11 evaluations they review as low due to short follow-up periods, high attrition rates, and a failure to measure perpetration behaviour. They further note that experimental designs can be practically and ethically difficult, but that these are vital to rule out alternative explanations of the findings.
The evaluation of Safe Dates by Foshee et al. (1998) involved 1,700 eighth and ninth graders (13- to 15-year-olds) across 14 schools in the United States, who completed measures at pre- and post-test. The Safe Dates programme includes a curriculum delivered over ten 45-min sessions by school teachers, a theatre production, a poster competition, and community activities (e.g., crisis line, support groups). The 14 schools were matched in terms of school size and then one member of each pair was randomly allocated to a treatment or control condition, with control participants exposed to the community activities only. Analyses were conducted using the full sample and separate analyses were conducted on those who had never been victimized or perpetrated abuse (primary prevention group), as well as on those who had been victimized (secondary prevention victim group) and those who had already perpetrated abuse (secondary prevention perpetrator group). For the full sample at post-test, there was less psychological abuse perpetration and less perpetration of sexual and physical violence in the treatment condition, compared with the control condition. In addition, primary and secondary prevention effects were observed. A 4-year follow-up found that these effects were maintained and there was also less victimization reported by those in the treatment condition (Foshee et al., 2004). Such a universal preventative approach, which does address gender-based expectations, therefore shows much promise (O'Leary & Smith Slep, 2012).
A similar study by Wolfe et al. (2003) involved an evaluation of a programme targeted at 14- to 16-year-olds at risk of developing abusive relationships because of their history of maltreatment. The Youth Relationship Program involves eighteen 2-hr sessions delivered by social workers or other community professionals. The evaluation involved a comparison of 96 young people who received the intervention with 62 control participants. The findings suggested that the intervention was effective at reducing incidents of physical and emotional abuse over time. Most domestic abuse prevention programmes are typically delivered through the school system and are universal, that is, aimed at all children. The study by Wolfe et al. was one of the first to examine the effectiveness of a programme that took into account research on child maltreatment as a risk factor for abuse within intimate relationships. As noted by Capaldi and Langhinrichsen-Rohling (2012), previous programmes were designed 'prior to a full understanding of the etiology and complex dynamics associated with intimate partner violence' (p. 323).
The most controversial aspect in the field has been whether or not programmes should focus explicitly on wider gender power inequalities in society that are thought to foster violence (Capaldi & Langhinrichsen-Rohling, 2012). Others have commented that an approach that positions males as perpetrators and females as victims is ill-advised because it misrepresents the nature of domestic abuse at this age (Avery-Leaf & Cascardi, 2002;O'Leary & Smith Slep, 2012). Most programmes are empirically based. For example, acceptance of dating violence has been found repeatedly to be associated with domestic violence perpetration among adults and adolescents, which explains the focus on changing the acceptance of violence as a component of most domestic abuse prevention programmes (Foshee, Linder, MacDougall, & Bangdiwala, 2001). Programmes also typically focus on teaching skills to enable young people to identify constructive means of handling conflict; this is based on research that highlights poor conflict-resolution skills as a risk factor for perpetration of dating violence (Bird, Stith, & Schladale, 1991). Finally, most programmes focus on ways to encourage young people to seek help as many studies have shown that young people typically do not seek help for dating violence (Ashley & Foshee, 2005). In sum, domestic abuse prevention education programmes typically recognize the problem as multi-determined and this is reflected in their content.
The study we report on below aimed to improve on previous UK-based studies and evaluate a school-based domestic abuse prevention education programme, utilizing a quasi-experimental design, with pre- and post-test measures administered to those in treatment and control conditions. As noted by Leen et al. (2013), 'there is a need for additional data from countries outside North America on both intervention programs and prevalence rates' (p. 171). In pilot work with n = 213 13- to 14-year-olds who had received the intervention programme on which this study is based, there was preliminary evidence of changes in children's attitudes from pre- to post-test. This study provided a much more robust test of the effectiveness of the programme by utilizing a control group and a 3-month follow-up period.
A secondary aim of this study was to examine whether the outcomes differed by gender and experiences of domestic abuse. While Foshee et al. (1998) did examine outcomes for different sub-samples, for example, a primary prevention sub-sample with experience of abuse, no study has specifically examined whether there are certain groups of children who are more or less receptive to the messages conveyed. As recently indicated by Supplee, Kelly, Mackinnon, and Barofsky (2013), policy makers have moved on from asking, 'what works?' to asking the question, 'what works for whom?' An examination of moderated effects can help to refine theory, target interventions, and tailor interventions more appropriately to the needs of a specified group (Rothman, 2013).
Given the well-established link between witnessing domestic abuse and attitudes that are more accepting of violence in relationships (Lichter & McCloskey, 2004;Slovak, Carlson, & Helm, 2007), as well as the notion of the intergenerational transmission of violence (e.g., see Stith et al., 2000), it was predicted that the intervention would have less of an impact on young people who have already witnessed domestic abuse. As a result of witnessing domestic abuse, they may be more likely to believe that such actions are acceptable, perhaps even necessary, and these attitudes may be more entrenched and resistant to change.
Furthermore, for those young people for whom domestic abuse has already become a feature of their own relationships (as victims or perpetrators), it was hypothesized that the intervention would have a reduced impact. Even though they may begin with attitudes that are more accepting of domestic abuse and so have the potential to show the most change, we may instead see patterns of behaviour that may have become established and thus more difficult to change. In addition, as boys typically display attitudes that are more accepting of violence in relationships (Burman & Cartmel, 2005;Burton, Kitzinger, Kelly, & Regan, 1998), are less likely to seek help when a victim of 'dating violence' (Ashley & Foshee, 2005), and are harder to engage than girls (Stanley et al., 2011), it was predicted that the intervention would have more of an impact on girls than boys.
The programme
Relationships without Fear (RwF) is a 6-week Healthy Relationships and Domestic Abuse Prevention Programme, developed by the Arch RwF team in North Staffordshire, UK. The programme starts in year 4 (ages 8-9 years) and runs through to year 11 (15-16 years), with the programme tailored for different year groups. With the younger age groups the emphasis is on friendships and peer group relationships, building up to talking about abuse in intimate relationships with year 6 children (those aged 10-11 years).
The programme has been developed by Arch over a number of years using relevant theory and the empirical literature. It looks at how positive relationships can be formed and how children and young people can develop relationships that are free from fear and abuse. It aims to prevent further domestic abuse by giving young people the knowledge to enable them to recognize an abusive relationship. In addition, skills of conflict resolution are taught and the programme tackles the underlying attitudes that give rise to abusive tendencies. The programme addresses young people's attitudes towards abuse through challenging stereotypical views and the belief held by some that hitting a partner is justified in certain circumstances. Young people are made aware that domestic abuse happens to men as well as women, but they are also introduced to the notion of how gender inequality can foster violence in relationships. There is also an emphasis on help seeking, tackling the barriers that exist, as well as outlining the support that is available. The programme reinforces the message that the victim is never at fault and that the perpetrator is always responsible for his/her actions. In sum, RwF aims to contribute to the long-term overall reduction in domestic violence.
The programme runs for 6 weeks, 1 hr each week. It is usually delivered during Personal, Social, and Health Education lessons and by trained RwF staff (either domestic abuse practitioners or trained teachers). The programme is tailored for each year so that the content is age appropriate. The current evaluation focused on the programme delivered to year 9 pupils (aged 13-14 years). The six sessions, all delivered by domestic abuse practitioners, were organized into the following topics: the difference between domestic abuse and other forms of abuse; how domestic abuse affects you; the emotional effects on victims (including a focus on male victims); the attitudes of young people towards abuse; the barriers to leaving; and how can you make a difference?
The programme is designed to be interactive to encourage young people's participation. It relies heavily on using real-life stories and requires pupils to respond to the scenarios and empathize with the different actors in that story. The programme also uses question and answer sessions, fact sheets, true/false and problem page exercises, role-play, and video clips. Using these activities, pupils are encouraged to share in discussions, are given the freedom to voice their own opinions, and are required to listen to those of others.
Method
Participants
Pupils in seven schools received the RwF programme during the school year 2010-2011. These were schools that had responded to an invitation and indicated a willingness to run the programme at some point in their school year. Each school was matched with a control group school, not yet receiving the programme, taking into account the size of the school, demographic variables (e.g., proportion of students receiving free school meals), and geographical proximity. One control group school acted as a control for two intervention schools, given the small number of classes taking part in two of the intervention schools, and there were therefore six control group schools. In total, 1,203 year 9 pupils (aged 13-14 years) participated in the study from 54 classes (27 intervention group classes and 26 control group classes): 572 males and 596 females (gender missing for 35 participants). Of those participants who provided data about their ethnicity, 89.5% were White, 1% Black, 6% Asian, 3% Mixed, 0.3% Chinese, and 0.2% 'Other' (only 11 participants failed to answer this question). Making a conservative estimate of the effect of clustering (a design effect of 2), the sample size of 1,203 was sufficient to provide 80% power to detect a standardized mean difference of 0.23 or greater, at a two-tailed 5% alpha (Cohen, 1988).
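The quoted power figure can be reproduced directly. Below is a minimal sketch (not the authors' code), assuming the stated design effect of 2 simply halves the effective sample size and that the comparison is a two-arm test with equal group sizes:

```python
# A minimal power check: a design effect of 2 halves the effective sample
# size for a clustered design; the minimum detectable effect then follows
# from a standard two-sample comparison at 80% power, two-tailed alpha = .05.
from statsmodels.stats.power import TTestIndPower

n_total = 1203
design_effect = 2                      # conservative clustering estimate
n_effective = n_total / design_effect  # ~601 children
n_per_group = n_effective / 2          # assume roughly equal arms

d = TTestIndPower().solve_power(
    effect_size=None, nobs1=n_per_group, ratio=1.0,
    alpha=0.05, power=0.80, alternative="two-sided",
)
print(f"Minimum detectable standardized mean difference: d = {d:.2f}")
# prints d = 0.23, matching the value reported above
```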
Materials
Attitudes to domestic violence
The Attitudes to Domestic Violence (ADV) questionnaire (Fox, Gadd, & Sim, 2013), used as an outcome measure in this study, is a 10-item measure that aims to capture young people's normative beliefs about how wrong it is for a man to hit a woman and also a woman to hit a man, under certain conditions. The aim was to create a tool that was easy for practitioners to use and would be sensitive enough to detect the subtle shift in attitudes to more extreme disapproval of violence. Most young people regard hitting a partner as wrong; however, many are willing to condone it under certain circumstances (Burman & Cartmel, 2005). Given that theories of interpersonal aggression highlight the importance of normative beliefs in justifying such actions, it was deemed appropriate to assess attitudes towards domestic violence (see Foshee et al., 2001).
For the ADV questionnaire there are five different conditions, for example, do you think it is OK for a man to hit his partner/wife if HE says he is sorry afterwards? Each question is followed by a 4-point scale: 1 = it's perfectly OK, 2 = it's sort of OK, 3 = it's sort of wrong, 4 = it's really wrong. Depending on how the question is phrased, the response scale may be presented in reverse order (i.e., 1 = it's really wrong, 2 = it's sort of wrong, 3 = it's sort of OK, 4 = it's perfectly OK). For those questions that begin, 'Do you think it is OK…', the scale begins with 'it's perfectly OK'. The other questions that are phrased, 'Suppose [x happened] how wrong…', have the response scale appearing in reverse order, that is, 'it's really wrong' to 'it's perfectly OK'. The five situations include: saying sorry, been cheated on, been embarrassed, they deserve it, and having been hit first. For every situation where a man is being abusive to a woman, the same situation is presented with a woman being abusive to a man.
The questionnaire is scored so that a high mean score indicates beliefs that are more accepting of domestic violence (on a possible range 1-4). Over the course of three studies, the 10-item ADV questionnaire was developed. Although the measures of goodness of fit from the factor analysis are lower than the ideal benchmarks, the consistently high loadings of all items on a single factor suggest that the scale can be used as a single summative index. In addition, the scale demonstrates good internal consistency and reproducibility over time (coefficients of .93 and .72 respectively). For further details of the development of the ADV questionnaire, see Fox, Gadd, et al. (2013).
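To make the scoring concrete, here is an illustrative sketch, assuming hypothetical item names and an arbitrary choice of which items were presented with the reversed response scale (the actual direction is determined by each item's wording, as described above):

```python
import pandas as pd

# Hypothetical raw data: 10 items scored 1-4. Items listed in
# `reverse_presented` ran from "really wrong" (1) to "perfectly OK" (4),
# i.e. high already means accepting; the remaining items ran the other
# way and must be flipped so that, for every item, 4 = most accepting.
items = [f"adv{i}" for i in range(1, 11)]
df = pd.DataFrame([[4] * 10, [1, 2, 1, 1, 3, 1, 1, 2, 1, 1]], columns=items)
reverse_presented = ["adv2", "adv4", "adv6", "adv8", "adv10"]  # assumption

for col in items:
    if col not in reverse_presented:
        df[col] = 5 - df[col]  # recode 1..4 -> 4..1

df["ADV"] = df[items].mean(axis=1)  # scale score, range 1-4; high = accepting
print(df["ADV"].tolist())
```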
Experiences of abuse
At pre-test only, the children also responded to questions about their experiences of domestic abuse, as victims (VDA), perpetrators (PDA), and as witnesses of abuse in their own homes (WDA). We asked the young people to think about 'people you have dated, and past or current boyfriends or girlfriends'. They were then asked to consider the adults who look after them at home, 'for example, your parents, stepparents, guardians or foster carers', and questions that are about 'things that can happen between two partners in a relationship'. The questions were very similar to those used in the National Society for the Prevention of Cruelty to Children (NSPCC) survey, with questions assessing physical, sexual, and emotional forms of violence (for further details of the questions asked, see Fox, Corr, Gadd, & Butler, 2013). As the data were positively skewed, binary categories were formed to reflect victim status, perpetrator status, and witness status. For victimization and perpetration, participants were asked to consider 10 different behaviours in terms of whether this had happened to them or whether they had ever done it themselves: 'Never', 'Once', or 'More than once'. Participants' responses were combined to yield a score representing their responses across all the questions in that scale. Thus, there were two categories: 'Never' (they had never experienced or perpetrated any of the forms of abuse) or 'Once or more than once' (they had experienced or perpetrated at least one of the forms of abuse). For witnessing abuse there were eight different behaviours, the same as for the previous sections but omitting the questions about sexual abuse. Again, there were two categories: 'Never' and 'Once or more than once'. As very few young people reported experiences that had happened 'More than once', the 'Once' and 'More than once' categories were combined. For victimization, an average of 3.4% of the sample indicated 'More than once' across the 10 items; for perpetration, 0.95% across 10 items; and for witnessing abuse, 5.1% across eight items.
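A brief sketch of this dichotomization, assuming a numeric coding of 0 = 'Never', 1 = 'Once', 2 = 'More than once' and hypothetical column names:

```python
import pandas as pd

# Two hypothetical respondents across the 10 victimization items.
vda_items = [f"victim_q{i}" for i in range(1, 11)]
df = pd.DataFrame([[0] * 10,
                   [0, 1, 0, 0, 2, 0, 0, 0, 0, 0]], columns=vda_items)

# Binary victim status: 1 = at least one behaviour experienced once or more.
df["VDA"] = (df[vda_items].max(axis=1) > 0).astype(int)
print(df["VDA"].tolist())  # -> [0, 1]
```

The same recoding applies to the perpetration items and to the eight witnessing items.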
Help seeking
There were also two questions about help seeking used as additional outcome measures: 'Suppose a boyfriend/girlfriend ever hit you, how likely would you be to seek help from an adult?' and 'Suppose you found out that an adult who looks after you was being hit by their partner, how likely would you be to seek help from an adult outside of your friends and family (e.g., a teacher, school nurse, social worker)?' For each question there were four response options: 1 = not at all likely, 2 = not likely, 3 = somewhat likely, or 4 = very likely.
Procedure
Children in the intervention group completed the questionnaires in the first and final session of RwF and at 3-month follow-up; children in the control group schools completed the questionnaires at the same time as the children in the matched intervention schools, within at most 1 week of each other (but they did not participate at the 3-month follow-up). To enable us to match up questionnaire responses, we asked the young people to answer a series of questions on the front page: (1) What are the last three digits of your home telephone number?, (2) What month were you born in?, and (3) What was your first pet's name?
The survey questions, procedures, and ethical guidelines were developed through close consultation with user groups of young people; for example, a local Youth Parliament and a group of people known to practitioners within the local NSPCC, and also with members of our multi-agency steering group. The research was conducted consistent with the ethical guidelines of the British Psychological Society, and clearance was gained from the University Ethical Review Panel.
All data collection was overseen by a member of the research team who read out the standardized instructions, was on hand to answer any questions, and debriefed the children. Children were encouraged to read through the questions at their own pace. The questionnaire was anonymous and the young people were reassured that their responses would remain confidential. They were also told that they did not have to take part in the research if they did not want to, and could stop taking part at any time.
Parental consent was sought using the 'opt-out' method, which meant that parents had to send a form back if they did not wish their child to take part; in total, 19 children were opted out of the research by their parents (16 males, 3 females) and 28 participants opted out themselves (17 males and 11 females). It was stressed to the children that some of the questions were quite 'personal and sensitive'. They were also reassured that if they were willing to answer the questions their responses could not be traced back to them as individuals or to their family. However, they were told that if they said something to us face-to-face to suggest that they or someone else was at significant risk of harm, then we would have to pass on our concerns to one of their teachers. They were asked to answer the questions in silence, to keep their answers to themselves, and to not look at what the person next to them was doing. After they had completed the questionnaire, they were debriefed and were pointed to appropriate sources of support.
Results
ADV group differences at pre-test
A series of unrelated ANOVAs were conducted to compare the pre-test scores of males and females based on experiences of domestic abuse: victims/non-victims of domestic abuse (VDA), perpetrators/non-perpetrators of domestic abuse (PDA), and witnesses/non-witnesses of domestic abuse (WDA). The means and standard deviations and results of the ANOVAs can be seen in Table 1. At pre-test boys scored higher on the ADV compared with girls, indicating attitudes more accepting of domestic violence. In addition, there were differences between the groups based on experiences of abuse, with victims, perpetrators, and those who had witnessed abuse scoring higher than those not involved. The lack of significant interaction effects suggests that these group differences held for both girls and boys.
Attrition analyses
A series of analyses were conducted to compare the pre-test scores for those who took part at pre- and post-test (i.e., had a post-test value on at least one of three outcome variables; n = 950) with those who provided pre-test data only (i.e., had post-test values on none of the three outcome variables; n = 193). For the ADV, the mean (SD) score for pre-test-only participants was 1.47 (.39), and for pre- and post-test participants was 1.42 (.38); these values did not differ significantly (t(1141) = 1.65, p = .099). For help seeking when witnessing abuse, the median (interquartile range [IQR]) score for pre-test-only participants was 2 (1, 3), and for pre- and post-test participants was 3 (2, 3); these values differed significantly (Wilcoxon rank sum z = 2.42, p = .016). The corresponding median values for help seeking for abuse in one's own relationship were 3 (2, 4) for both groups; these values did not differ significantly (Wilcoxon rank sum z = .96, p = .335).
Chi-square analyses were conducted to compare the VDA, PDA, and WDA scores of those who took part at pre- and post-test with those of participants who dropped out of the study. A higher percentage of those who had been victims of domestic abuse were represented within the pre-test-only sample (46.6%, in comparison to 35.3% in the pre- and post-test sample); these values differed significantly (χ²(1) = 8.88, p = .003; φ = .09). However, the percentages of those who had perpetrated abuse did not differ significantly (25.9% in the pre-test-only sample and 20.0% in the pre- and post-test sample; χ²(1) = 3.37, p = .066; φ = .05), and neither did the percentages of those who had witnessed domestic abuse (36.3% in the pre-test-only sample and 34.2% in the pre- and post-test sample; χ²(1) = 0.30, p = .583; φ = .02). Although some of these differences were significant, owing to the large sample size, they were generally of small magnitude. Nonetheless, imputation was utilized to counteract any resulting bias, as will be explained in the next section.
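For readers who want to reproduce this style of attrition check, the sketch below shows the pattern of tests reported above on synthetic stand-in data; the 2 × 2 counts are reconstructed approximately from the quoted percentages, and all variable names are hypothetical:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)

# Continuous ADV scores: completers (n = 950) vs. pre-test-only (n = 193).
adv_completers = rng.normal(1.42, 0.38, 950)
adv_dropouts = rng.normal(1.47, 0.39, 193)
print(stats.ttest_ind(adv_completers, adv_dropouts))

# Ordinal help-seeking scores: Wilcoxon rank-sum test.
help_completers = rng.integers(1, 5, 950)
help_dropouts = rng.integers(1, 5, 193)
print(stats.ranksums(help_completers, help_dropouts))

# Binary victim status: chi-square on a 2x2 table
# (rows: drop-outs, completers; columns: victim, non-victim).
vda_table = np.array([[90, 103],     # ~46.6% of 193
                      [335, 615]])   # ~35.3% of 950
chi2, p = stats.chi2_contingency(vda_table)[:2]
print(chi2, p)
```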
Comparison of the intervention and control groups from pre- to post-test
Owing to the clustered nature of the data, data were analysed using multi-level models, with two levels of clustering (within classes and within schools). Values on the ADV Scale were analysed using a multi-level linear model (Rabe-Hesketh & Skrondal, 2012a), with group as a between-subjects factor and controlling for age, gender, VDA, PDA, WDA, and baseline values of the ADV Scale. Terms were included for interactions between group and each of VDA, PDA, WDA, and gender. Residuals from the analysis were homoscedastic across groups, but were found to be positively skewed; however, this was not considered problematic in view of the large sample size. To secure the baseline comparability of the groups and counteract any bias that might be induced by attrition, missing values on the outcome variables were estimated (under a 'missing at random' assumption) using multiple imputation, through five imputed data sets. Values on the two help-seeking scales were analysed using a multi-level ordered logistic model (Rabe-Hesketh & Skrondal, 2012b), with group as a between-subjects factor and age, gender, VDA, PDA, WDA, and baseline values of the scale concerned as covariates. This model would not allow the inclusion of interactions. A secondary sensitivity analysis was conducted using just participants with observed outcome data.
To determine whether change induced by the intervention in each of the outcomes was sustained at 3-month follow-up, a generalized estimating equations (GEE) model was fitted to the data in just the intervention group (Hardin & Hilbe, 2003). As such models accommodate missing values in repeated measures data, no imputation of missing values was performed.
Data analysis for the multi-level models was performed in Stata 12, using the GLLAMM program (www.gllamm.org) for the ordered logistic models. The GEE models were estimated in SPSS 20 (IBM, Hampshire, UK). Statistical significance was set at p ≤ .05 (two tailed) and 95% confidence intervals (CIs) were calculated for all estimates of effect.
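The original models were fitted in Stata/GLLAMM and SPSS. As a rough, non-authoritative Python analogue of the multi-level linear model for the ADV Scale, using simulated stand-in data, hypothetical variable names, and omitting the multiple-imputation step and interaction terms, one could write:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulated stand-in data: one row per pupil, with identifiers for the
# two clustering levels (classes nested within schools).
rng = np.random.default_rng(0)
n = 400
df = pd.DataFrame({
    "school": rng.integers(0, 13, n),
    "group": rng.integers(0, 2, n),   # 0 = control, 1 = intervention
    "age": rng.normal(13.5, 0.5, n),
    "gender": rng.integers(0, 2, n),
    "vda": rng.integers(0, 2, n),
    "pda": rng.integers(0, 2, n),
    "wda": rng.integers(0, 2, n),
    "adv_pre": rng.normal(1.4, 0.4, n),
})
df["klass"] = df["school"] * 4 + rng.integers(0, 4, n)  # classes within schools
df["adv_post"] = df["adv_pre"] - 0.10 * df["group"] + rng.normal(0, 0.3, n)

# Random intercepts for schools; classes enter as a variance component.
model = smf.mixedlm(
    "adv_post ~ group + age + gender + vda + pda + wda + adv_pre",
    data=df,
    groups="school",
    vc_formula={"klass": "0 + C(klass)"},
)
print(model.fit().summary())
```

The two ordinal help-seeking outcomes would correspondingly be modelled with a multi-level ordered logistic regression, and the follow-up analysis within the intervention group with a GEE model, as described above.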
Thirteen schools, comprising 1,203 children, were allocated to the control group (6 schools, 584 children) and intervention group (7 schools, 619 children). The baseline characteristics of the control and intervention groups are summarized in Table 2. Missing data were imputed on the outcome variables as follows: 202 values on the ADV Scale (103 controls; 99 interventions); 208 values on the Victim Help-seeking Scale (108 controls; 100 interventions); 209 values on the Witness Help-seeking Scale (109 controls; 100 interventions).
The unadjusted mean (SD) ADV scores for the control and intervention group were 1.44 (.43) and 1.35 (.39) respectively. The covariate-adjusted mean difference (control minus intervention) was 0.10 (95% CI: 0.03, 0.18), indicating that at post-test those in the intervention group were significantly less accepting of domestic violence (p = .008). All interactions were non-significant (group × VDA, p = .603; group × PDA, p = .917; group × WDA, p = .345; group × gender, p = .862), and the effect of the intervention did not therefore differ across the groups defined by these variables; that is, the magnitude of change on the ADV Scale did not depend upon participants' VDA, PDA, or WDA category. Unadjusted median (IQR) values on the Victim Help-seeking Scale were 2 (2, 3) and 3 (2, 3) for the control and intervention groups respectively. The covariate-adjusted odds ratio was 1.67 (95% CI: 1.28, 2.17); this indicates that the odds of a higher point on the scale (denoting a greater readiness to seek help) were 67% greater for the intervention group than for the control group (p < .001). Unadjusted median (IQR) values on the Witness Help-seeking Scale were 3 (2, 4) and 3 (3, 4) for the control and intervention groups respectively. The covariate-adjusted odds ratio was 1.65 (95% CI: 1.31, 2.07); this indicates that the odds of a higher point on the scale were on average 65% greater for the intervention group than for the control group (p < .001).
The results of the sensitivity analysis are shown in Table 3. The estimates from the analyses on just the observed data are 5-7% higher (suggesting that the missing data had induced a small bias), but the statistical conclusions of these analyses are unchanged from those from the analyses with imputation.
Comparison of the intervention group from pre-test to post-test and 3-month follow-up
Within the intervention group, the mean reduction on the ADV Scale between baseline and post-test (0.11) and between baseline and 3-month follow-up (0.11) was in each case significant; see Table 4. The mean score on the ADV Scale therefore remained significantly lower than baseline at both post-test and follow-up, at an equivalent level. For the Victim Help-seeking Scale and the Witness Help-seeking Scale, the odds ratios for post-test compared with baseline (1.22 and 1.31 respectively) were in both cases significant (see Table 4). However, for these two scales, the odds ratios for 3-month follow-up compared with baseline (1.08 and 1.10 respectively) were non-significant; see Table 4. For both of the help-seeking scales, therefore, the significant effect of the intervention at post-test was not sustained at follow-up.
Discussion
This is the first study in the United Kingdom to evaluate the effectiveness of a domestic abuse prevention education programme, using a pre-test, post-test, control group design.
(Table 3 note: CI = confidence interval; ADV = Attitudes to Domestic Violence questionnaire; a = numbers analysed for control and intervention groups respectively; b = mean difference, control minus intervention; c = odds ratio, control as reference category.)
Previous evaluations have been small in scale and have suffered from methodological limitations, thus limiting the conclusions that can be drawn. Using a large sample of children, with treatment and control conditions, it was found that the attitudes to domestic violence of those in the intervention condition became less accepting from pre- to post-test, in comparison to those in the control condition. In a similar way, considering just those participants in the intervention group, help-seeking scores improved from pre- to post-test, but the improvement was not maintained at 3-month follow-up. In addition, the outcomes, at least for the attitudes to domestic violence scores, did not vary by gender or experiences of abuse (as demonstrated by the non-significant interaction terms), which indicates that participants in these categories experienced similar magnitudes of change. These findings suggest that such a programme shows great promise, with both boys and girls benefiting from the intervention, and those who have experienced abuse and those who have not (yet) experienced abuse showing a similar degree of attitude change. Such interventions work on the premise of changing the acceptance of violence, as acceptance of dating violence has been found repeatedly to be associated with domestic violence perpetration among adults and adolescents (Foshee et al., 2001). Clearly, there is a need to address the attitudes of those at risk of becoming perpetrators or victims, exposing them to ideas about how healthy relationships can be formed and maintained (Wolfe et al., 2003). At the same time there is also the need to address the wider attitudes of the peer group, as peer group attitudes have been found to be important, especially for boys (Heise, 1998). What these findings suggest is that children at risk of becoming domestic abuse perpetrators or victims can still benefit from a wider school-based prevention programme, even though they would undoubtedly benefit from additional, more specialized support, perhaps on a one-to-one or small group basis. But identifying these young people is difficult as well as ethically problematic, because such interventions can also be highly stigmatizing.
The current programme adopted a very similar model to that of the Safe Dates programme, evaluated by Foshee and colleagues (Foshee et al., 1998, 2004). Both are universal programmes aimed at males and females, which incorporate notions of how gender inequalities in society can foster violence. They are both delivered over a number of sessions in schools, drawing on a range of different teaching methods. As well as seeking to tackle gender stereotypes, both programmes also aim to teach new skills in conflict resolution and challenge norms around domestic abuse. However, Safe Dates is delivered by school teachers who have undertaken extensive training, and the 10 sessions are supplemented by community activities that include enhancing the range of support services that are available to young people. Programmes in the United Kingdom will need to take note of this and consider how teachers can best be supported to incorporate such education into the curriculum. We would argue that this is the only way to ensure the long-term sustainability of such programmes. The findings of this study support the call for young people to be exposed to domestic abuse prevention education in schools.
While it can be difficult to find time within the curriculum to cover all the important issues (Maxwell, Chase, Warwick, Aggleton, & Wharf, 2010), we would argue that schools should make time and space for it, introducing this to young people before they start to form intimate relationships (e.g., at ages 11-12 years), and on a yearly basis. Indeed, while our study showed a change in attitudes towards domestic violence that was maintained at 3-month follow-up, the changes in help-seeking scores were not. Thus, young people need more than a one-off programme to convince them that it is worthwhile to seek help from adults should domestic abuse become a feature of their lives.
Certain limitations of this study are worthy of mention. First, we assessed attitudes towards domestic violence and not actual behaviour. Although associations have been identified between attitudes towards domestic violence and perpetration of abuse in relationships (see Foshee et al., 2001), further research is clearly needed to see whether such a change in attitudes does then translate into changes in behaviour. The reason for not assessing pre- and post-test changes in behaviour was that we were expecting to find a low base rate of domestic abuse at this age, which would make it difficult to detect meaningful changes, made even more difficult by assessing changes over a relatively short time frame. In future studies we will need to assess incidents of domestic abuse, as victim and as perpetrator, with assessment taking place at pre- and post-test, and at up to 1-year and perhaps even 4-year follow-up, as in the Foshee et al. (2004) study.
In addition, it is important to acknowledge the limitations of the single-item help-seeking measures, which only captured intentions to seek help in the future (i.e., perceptions) and only asked if they would seek help and not specifically where or from whom they would seek such help. Subsequent studies will need to move beyond single-item help-seeking measures to take forward the issues our research has raised.
A further limitation was that fidelity to the curriculum was not assessed in detail, nor 'dosage', that is, individual student attendance at the sessions. It has been noted that some classes received less than the prescribed 6 weeks of sessions. However, we do not know the impact of all these components, separately and in combination, on the findings. Future studies must incorporate these issues into the evaluation from the outset to enable firmer conclusions about the effectiveness of such programmes.
One of the strengths of this study was the use of a control group to rule out alternative explanations of the findings. For example, had both groups been exposed to a national awareness-raising campaign, the resulting attitude change would also have been detectable in the control group. Despite the use of the control group, participants (or classes or schools) were not randomly allocated to treatment conditions, raising the possibility that the two groups differed at the outset in relation to one or more variables for which we did not control statistically; for example, the intervention group might have been more motivated to learn or change their attitudes. It is also possible that there was more socially desirable responding from those in the intervention group, with the change reflecting young people's awareness of what we were expecting to find, by virtue of their participation in the programme.
The findings of this study provide a useful basis on which to build, with the proposed use of a randomized control group design and the assessment of behaviour as well as attitudes, at pre- and post-test and at 1-year, and perhaps also 4-year, follow-up. However, such studies are practically very difficult to implement and thus very costly. Such an approach would also rely on a more coordinated system of delivery, whereas in the United Kingdom provision at present is somewhat ad hoc, delivered by external organizations to schools that can see the benefit of such education. As already suggested, a country-wide approach is needed to ensure that all school children receive this type of education. This will need government investment, and schools and teachers will need support from external organizations to implement it. Across Europe and in North America there is increasing pressure on schools to raise academic standards and student achievement, and so there is a risk that schools 'may be unable or unwilling to devote time for violence prevention activities' (Whitaker et al., 2006, p. 162).
Another issue that must also be considered in future research is the comparison of different models of domestic abuse prevention education. In the United Kingdom, for example, a number of programmes have been developed over the past few years by organizations such as Women's Aid, the Zero Tolerance Trust and Tender, and some funded by the Home Office or through the Children's Fund initiative. There are differences between programmes and greater clarity is needed in terms of what should be taught (i.e., programme content), how it should be taught (e.g., teaching methods), and who should deliver it (e.g., teachers or external organizations). Of course, such programmes must be theoretically informed but also evidence based.
In conclusion, we would argue that domestic abuse prevention education is a worthy investment when we consider the costs to society in terms of social care, health care, and the criminal justice system. But, establishing how best to deliver effective domestic abuse prevention education merits further research and scrutiny.
"year": 2016,
"sha1": "2e3d6c430b5957b164fb173dc312ff2fdbe8406c",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.1111/lcrp.12046",
"oa_status": "HYBRID",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "6fc2e4b52a183bbeefc1d1544e511b88f5e4ba87",
"s2fieldsofstudy": [
"Psychology"
],
"extfieldsofstudy": [
"Psychology"
]
} |
Single Cancer Cell Detection by Near Infrared Microspectroscopy, Infrared Chemical Imaging and Fluorescence Microspectroscopy
Novel techniques are currently being developed and established for the accurate chemical analysis and detection of single cancer cells, single embryos and single seeds by Fourier Transform Near Infrared (FT-NIR) Microspectroscopy, Fourier Transform Infrared (FT-IR), Fluorescence and High-Resolution NMR (HR-NMR). The first FT-NIR chemical images of biological systems approaching 1 micron resolution are here reported. 400 and 500 MHz ¹H NMR analyses were carried out, allowing the selection of mutagenized embryos. Detailed chemical analyses are also being demonstrated to be possible by FT-NIR Chemical Imaging/Microspectroscopy of single cancer cells. FT-NIR Microspectroscopy and Chemical Imaging are also shown to be potentially important in Functional Genomics and Proteomics research through the rapid and accurate detection of high-content microarrays (HCMA). Multi-photon (MP), pulsed femtosecond laser NIR Fluorescence Excitation techniques were shown to be capable of Single Molecule Detection (SMD). Thus, MP NIR excitation for Fluorescence Correlation Spectroscopy (FCS) allowed not only single molecule detection, but also molecular dynamics observations and high resolution, submicron imaging of sub-femtoliter volumes inside living cells with 0.25 micron spatial resolution, in both normal and cancer cells, as well as neoplastic tissues. These novel, ultra-sensitive and rapid FT-NIR/FCS analyses have, therefore, substantial potential for numerous applications in important research areas, such as: medicine, medical/cancer research, pharmacology, agricultural biotechnology, food safety, as well as clinical diagnosis of viral diseases and cancers.
INTRODUCTION
Infrared (IR) and Near Infrared (NIR) commercial spectrometers employ electromagnetic radiation in the ranges from ~150 to 4,000 cm⁻¹ and from 4,000 to ~14,000 cm⁻¹, respectively. The utilization of such instruments is based on the proportionality of IR and NIR specific absorption bands with the concentration of the molecular components present, such as protein, oil, sugars and/or moisture. Molecular bond stretching, vibrations, bending and/or rotations cause specific absorption peaks or bands, centered at certain characteristic IR and NIR wavelengths. FT-IR/NIR spectrometers obtain spectra using an interferometer and also utilize Fourier Transformation in order to convert the interferogram from the time domain to the frequency domain. The use of interferometry in FT-IR and FT-NIR spectroscopy increases the spectral resolution, the speed of acquisition, the reproducibility of the spectra and the signal-to-noise ratio in comparison with dispersive instruments that utilize either prisms or diffraction gratings.
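As a toy illustration (not instrument code) of the interferogram-to-spectrum conversion just described, a synthetic two-line NIR interferogram can be converted to a spectrum with a discrete Fourier transform; the line positions and the optical-path sampling step are arbitrary choices:

```python
import numpy as np
from scipy.signal import find_peaks

dx = 5.0e-5                      # optical path difference step, in cm
x = np.arange(4096) * dx         # retardation axis (Nyquist limit: 10,000 cm^-1)

# Synthetic interferogram: two cosine components standing in for
# absorption lines at 5200 and 6900 cm^-1 (NIR region).
interferogram = np.cos(2 * np.pi * 5200 * x) + 0.5 * np.cos(2 * np.pi * 6900 * x)

# Apodize and transform; the magnitude spectrum recovers the two lines.
spectrum = np.abs(np.fft.rfft(interferogram * np.hanning(x.size)))
wavenumber = np.fft.rfftfreq(x.size, d=dx)          # axis in cm^-1

idx, _ = find_peaks(spectrum, height=spectrum.max() / 4)
print(wavenumber[idx].round())   # -> approximately [5200, 6900]
```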
An FT-IR/NIR image is built up from hundreds, or even thousands, of FT-IR/NIR spectra and is usually presented on a monitor screen as a cross-section that represents spectral intensity as a pseudo-color for every microscopic point in the focal plane of the sample. Special, 3D surface projection algorithms can also be employed to provide more realistic representations of microscopic FT-IR/NIR images. Each pixel of such a chemical image represents an individual spectrum and the pseudo-color intensity codes regions with significantly different IR absorption intensities. In 2002, four commercial FT-IR/NIR instruments became available from PerkinElmer Co. (Shelton, CT, USA): an FT-NIR Spectrometer (SpectrumOne-NTS), an FT-NIR Microspectrometer (NIR AutoImage), an FT-IR Spectrometer (SpectrumOne) and an FT-IR Microspectrometer (Spotlight300). The results of the tests obtained using these four instruments are shown in section 3.1.
The employment of high-power, pulsed NIR lasers for visible fluorescence excitation has resulted in a remarkable increase of spatial resolution in microscopic images of live cells, well beyond that available with the best commercial FT-NIR/IR microspectrometers, allowing even for the detection of single molecules. This happens because fluorescent molecules can absorb two NIR photons simultaneously before emitting visible light, a process referred to as "two-photon excitation." Using two-photon NIR excitation (2PE) in a conventional microscope provides several great advantages for studying biological samples. As the excitation wavelength is typically in the NIR region, these advantages include efficient background rejection, very low light scattering, low photodamage of unfixed biological samples and in vivo observation. Additionally, photobleaching is greatly reduced by employing 2PE, and even more so in the case of three-photon NIR excitation (3PE). The spatial region where the 2PE process occurs is very small (of the order of 1 femtoliter, or 10⁻¹⁵ L), and it decreases even further for 3PE. Multiphoton NIR excitation allows submicron resolution to be obtained along the focusing (z) axis in epi-fluorescence images of biological samples, without the need to employ any confocal pinholes. The 2PE and 3PE systems with ~150-femtosecond (10⁻¹³ s) NIR pulses have several important advantages in addition to high resolution. Firstly, they offer very high sensitivity detection of nanomole to femtomole concentrations of appropriately selected fluorochromes. Secondly, these systems have very high selectivity and the ability to detect interactions between pairs of distinctly fluorescing molecules for intermolecular distances as short as 10 nm, or less. 2PE and 3PE also allow one to rapidly detect even single molecules through Fluorescence Correlation Spectroscopy (FCS); FCS is usually combined with microscopic imaging. The principles of single photon FCS microscopy are briefly discussed next, in Section 2.2.
PRINCIPLES
A complete understanding of the principles of chemical imaging as well as fluorescence microscopy that allow the quantitative analysis of biological samples is necessary in order to interpret effectively and correctly the results obtained with these techniques. The underlying principles of NIR and IR spectroscopy are discussed in Chapter 1x of this book.
Principles of Chemical Imaging
Chemical, or hyper-spectral, imaging is based on the concept of image hyper-cubes that contain both spectral intensity and wavelength data for every 3-D image pixel; these are created as a result of spectral acquisition at every point of the microscopic chemical image. The intensity of a single pixel in such an image plotted as a function of the NIR or IR wavelength is in fact the standard NIR/IR spectrum for the selected pixel, and is usually represented as pseudo-color.
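A minimal sketch of the hyper-cube idea, with arbitrary dimensions: indexing a single pixel of the cube yields an ordinary spectrum, while integrating one absorption band along the spectral axis yields one pseudo-color chemical image:

```python
import numpy as np

# Synthetic hyper-cube: 64 x 64 image pixels, 500 spectral points each.
cube = np.random.rand(64, 64, 500)
wavenumber = np.linspace(4000, 8000, 500)        # spectral axis in cm^-1

pixel_spectrum = cube[10, 20, :]                 # the NIR spectrum at pixel (10, 20)

# Integrate an absorption band (limits are arbitrary here) to obtain the
# pseudo-color intensity for every pixel, i.e. one chemical image plane.
band = (wavenumber > 4500) & (wavenumber < 4800)
chemical_map = cube[:, :, band].sum(axis=2)

print(pixel_spectrum.shape, chemical_map.shape)  # (500,) (64, 64)
```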
Principles of Fluorescence Correlation Spectroscopy/Imaging
The presentation adopted here for the FCS principle closely follows a brief description recently developed by Eigen et al. (1). FCS involves a special case of fluctuation correlation techniques in which laser light excitation induces fluorescence within a very small (10⁻¹⁵ L = 1 fL) volume of the sample solution, whose fluorescence is auto-correlated over time. The volume element is defined by the laser beam excitation focused through a water- or oil-immersion microscope objective to an open, focal volume of ~10⁻¹⁵ L. The sample solution under investigation contains fluorescent molecule concentrations in the range from 10⁻⁹ to 10⁻¹² M, and is limited only by detector sensitivity and available laser power. A non-invasive determination of single molecule dynamics can thus be made through fluctuation analysis that yields either chemical reaction constants or diffusion coefficients, depending on the system under consideration.
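The quantity computed in FCS is the normalized autocorrelation of the fluorescence fluctuations, G(τ) = ⟨δF(t)·δF(t + τ)⟩ / ⟨F⟩². The sketch below computes G(τ) on a synthetic photon-count trace (a real trace would come from the femtoliter focal volume described above):

```python
import numpy as np

rng = np.random.default_rng(1)
counts = rng.poisson(5.0, size=100_000).astype(float)  # synthetic intensity trace

dF = counts - counts.mean()
max_lag = 200
G = np.array([
    np.mean(dF * dF) if lag == 0 else np.mean(dF[:-lag] * dF[lag:])
    for lag in range(max_lag)
]) / counts.mean() ** 2

# For a single species diffusing in 3-D, G(tau) would then be fitted with
#   G(tau) = (1/N) * 1/(1 + tau/tau_D) * 1/sqrt(1 + tau/(s**2 * tau_D)),
# where N is the mean number of molecules in the focal volume, tau_D the
# characteristic diffusion time, and s the axial-to-lateral focus ratio.
print(G[:5])
```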
FT-IR and FT-NIR Microspectrometers
A microspectrometer is defined as a combination of a spectrometer and a microscope that has both spectroscopic and imaging capabilities. Such an instrument is capable, for example, of obtaining visible images of a sample using a CCD camera, and chemical images with an NIR detector. Chemical images are then employed for sophisticated quantitative analyses. The results reported in this chapter for soybean seeds and embryos were obtained with FT-IR and FT-NIR spectrometers made by the PerkinElmer Co. (Shelton, CT, USA). The FT-NIR (NTS model) spectrometer was equipped with an integrating sphere accessory for diffuse reflectance. The FT-IR and FT-NIR spectrometers were, respectively, attached to microscopes for the IR region (Spotlight 300) and the NIR region (NIR Autoimage), as illustrated in Fig. 3.2.1 and Fig. 3.2.2. Each spectrometer has an internal desiccant compartment to remove the water vapor and the carbon dioxide from the air that may interfere with the spectrum of a sample. Apart from the improved resolution and acquisition time, these instrument models offer increased sensitivity and also allow the transfer of spectra to different instruments of similar design. The two microspectrometers are each equipped with two cassegrain imaging objectives and a third cassegrain before the NIR detector in order to improve focus and sensitivity, as shown in Fig. 3.
High-Resolution NMR Method for Oil Determination
The technique applied to obtain the oil content in soybean embryos was simple one-pulse, High-Resolution (HR) NMR (11). The HR-NMR technique was explained in Section 3.4 of Chapter 1x. A Varian U-400 NMR instrument was employed for oil measurements; the selected 90-degree pulse width was 19.4 µs, and the ¹H NMR signal absorption intensity was recorded with a 4 s recycling interval to avoid saturation.
Fluorescence Correlation Spectroscopy
This section presents submicron resolution imaging results that we obtained with two-photon NIR excitation of FCS. The FCS data were obtained in the Microscopy Suite of the Beckman Institute for Advanced Science and Technology at UIUC by employing two-photon NIR fluorescence excitation at 780 nm with a 180 fs Ti:Sapphire pulsed laser, coupled to an FCS Alba™ spectrometer system (recently designed and manufactured by ISS Co., Urbana, Illinois). The configuration of an Alba™ spectrometer with an inverted microscope is shown in Fig. 3; the microscope is a Nikon TE-300 special model, which has both a back illumination port and a left-hand side port. The PC employed for data acquisition, storage and processing is located behind the instrument, as is the laser illumination source (not visible in the figure).
Multi-photon (MPE) NIR excitation of fluorophores (attached as labels to biopolymers like proteins and nucleic acids, or bound at specific biomembrane sites) is one of the most attractive options in biological applications of laser scanning microscopy (12). Many of the serious problems encountered in spectroscopic measurements of living tissue, such as photodamage, light scattering and autofluorescence, can be reduced or even eliminated. FCS can therefore provide accurate in vivo and in vitro measurements of diffusion rates, "mobility" parameters, molecular concentrations, chemical kinetics, aggregation processes, labeled nucleic acid hybridization kinetics and fluorescence photophysics/photochemistry. Several photophysical properties of fluorophores that are required for quantitative analysis of FCS in tissues have already been reported (13). Molecular "mobilities" can be measured by FCS over a wide range of characteristic time constants, from ~10⁻³ to 10³ ms. At signal levels comparable to 1PE confocal microscopy, 2PE reduces photobleaching in spatially restricted cellular compartments, thereby preserving the long-term signal-to-noise ratio during data acquisition (14). Furthermore, 3PE has been reported to eliminate DNA damage and photobleaching problems that may still be present in some 2PE experiments. Whereas both 1PE and 2PE alternatives are suitable for intracellular FCS observations on thin biological specimens, 2PE can substantially improve FCS signal quality in turbid samples, such as plant cell suspensions or deep cell layers within tissues.
FT-IR and FT-NIR Chemical Imaging Tests
A series of tests were carried out for both FT-NIR and FT-IR microspectrometers in order to compare both their imaging speed and microscopic resolution (15,16). The results of such tests are presented, respectively, in the corresponding figures. In addition, one should also note that the spatial resolution increases dramatically, to ~1 micron, for the shorter NIR wavelengths, even with relatively thick samples, such as a 1 cm Zirconium single crystal (Fig. 4.1.3).
FCCS Applications to DNA Hybridization, PCR and DNA Binding
In the bioanalytical and biochemical sciences FCS can be used to determine various thermodynamic and kinetic properties, such as association and dissociation constants of intermolecular reactions in solution (18,19). Examples of this are specific hybridization and renaturation processes between complementary DNA or RNA strands, as well as antigen-antibody or receptor-ligand recognition. Although of significant functional relevance in biochemical systems, the hybridization mechanism of short oligonucleotide DNA primers to a native RNA target sequence could not be investigated in detail prior to the FCS/FCCS application to these problems. Most published models agree that the process can be divided into two steps: a reversible first initiating step, where a few base pairs are formed, and a second irreversible phase described as a rapid zippering of the entire sequence. By competing with the internal binding mechanisms of the target molecule, such as secondary structure formation, the rate-determining initial step is of crucial relevance for the entire binding process. Increased accessibility of binding sites, attributable to single-stranded open regions of the RNA structure at loops and bulges, can be quantified using kinetic measurements (20).
The measurement principle for nearly all our FCS/FCCS applications is based so far upon the change in diffusion characteristics when a small labeled reaction partner (e.g., a short nucleic acid probe) associates with a larger, unlabeled one (target DNA/RNA). The average diffusion time of the labeled molecules through the illuminated focal volume element is inversely related to the diffusion coefficient, and increases during the association process. By calibrating the diffusion characteristics of free and bound fluorescent partner, the binding fraction can be easily evaluated from the correlation curve for any time of the reaction. This principle has been employed to investigate and compare the hybridization efficiency of six labeled DNA oligonucleotides with different binding sites to an RNA target in a native secondary structure (20).
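A hedged sketch of this calibrated two-component analysis, assuming equal molecular brightness for free and bound probe, a simple 2-D diffusion term g(τ) = 1/(1 + τ/τ_D), and diffusion times fixed at values representative of those quoted in the next paragraph:

```python
import numpy as np
from scipy.optimize import curve_fit

tau_f, tau_b = 0.15e-3, 0.45e-3   # s; free and bound diffusion times (calibrated)

def g(tau, tau_d):
    return 1.0 / (1.0 + tau / tau_d)

def model(tau, N, y):
    # y is the bound fraction; N the mean number of molecules in focus.
    return (1.0 / N) * ((1.0 - y) * g(tau, tau_f) + y * g(tau, tau_b))

# Synthetic correlation curve with 35% of the probe bound, plus noise.
tau = np.logspace(-6, -1, 60)
G_obs = model(tau, 2.0, 0.35) + 1e-4 * np.random.default_rng(2).normal(size=tau.size)

(N_fit, y_fit), _ = curve_fit(model, tau, G_obs, p0=(1.0, 0.5),
                              bounds=(0, [np.inf, 1]))
print(f"bound fraction ~ {y_fit:.2f}")   # recovers ~0.35
```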
Hybridization kinetics were examined by binding six fluorescently labeled oligonucleotide probes of different sequence, length and binding sites to a 101-nucleotide-long native RNA target sequence with a known secondary structure (Fig. 6.1.1). The hybridization kinetics were monitored and quantitated by FCS, in order to investigate the overall reaction mechanism. In this "all-or-none" binding model, the expected second-order reaction was assumed to be irreversible. For nM concentrations and at temperatures around 40°C, the typical half-value reaction times for these systems are in the range of 30 to 60 min, and therefore the hybridization process could be easily followed by FCS diffusional analysis. At the measurement temperature of 40°C the probes are mostly denatured, whereas the target retains its native structure. The binding process could be directly monitored through diffusional FCS analysis, via the change in translational diffusion time of the labeled 17-mer to 37-mer oligonucleotide probes HS1 to HS6 upon specific hybridization with the larger RNA target (Fig. 6.1.1). The characteristic diffusion time through the laser-illuminated focal spot (0.5 µm diameter) ranged from 0.13 to 0.20 ms for the free probes and from 0.37 to 0.50 ms for the bound probes over the 60 min observation period. The increase in diffusion time from measurement to measurement over the 60 min could be followed on a PC monitor and varied strongly from probe to probe. HS6 showed the fastest association, while the reaction of HS2 could not be detected at all for the first 60 min. It has been shown above that FCS diffusional analysis provides an easy and comparably fast determination of the hybridization time course of reactions between complementary DNA/RNA strands in the concentration range from 10⁻¹⁰ to 10⁻⁸ M. Perturbation of the system is therefore not necessary, so the measurement can be carried out at thermal equilibrium. Thus, the FCS-based methodology also permits rapid screening for suitable anti-sense nucleic acids directed against important targets like HIV-1 RNA, with low consumption of probes and target.
Because of the high sensitivity of FCS detection, the same principle can be exploited to simplify the diagnostics for extremely low concentrations of infectious agents like bacterial or viral DNA/RNA. By combining confocal FCS with biochemical amplification reactions like PCR or 3SR, the detection threshold of infectious RNA in human sera could be lowered to concentrations of 10⁻¹⁸ M (21,22). The method is useful in that it allows for simple quantitation of initial infectious units in the observed samples. The isothermal Nucleic Acid Sequence-Based Amplification (NASBA) technique enables the detection of HIV-1 RNA in human blood plasma (2). The threshold of detection is presently down to 100 initial RNA molecules per milliliter, and possibly much fewer in the future, by amplifying a short sequence of the RNA template (24, 25). The NASBA method was combined with FCS, thus allowing the online detection of the HIV-1 RNA molecules amplified by NASBA (22). The combination of FCS with the NASBA reaction was performed by introducing a fluorescently labeled DNA probe into the NASBA reaction mixture at nanomolar concentrations, hybridizing to a distinct sequence of the amplified RNA molecule. The specific hybridization and extension of this probe during the amplification reaction resulted in an increase of its diffusion time and was monitored online by FCS. Consequently, after having reached a critical concentration on the order of 0.1 to 1.0 nM (the threshold for FCS detection), the number of amplified RNA molecules could be determined as the reaction continued its course. Evaluation of the hybridization/extension kinetics allowed an estimation of the initial HIV-1 RNA concentration that was present at the beginning of amplification. The value of the initial HIV-1 RNA number enables discrimination between positive and false-positive samples (caused, for instance, by carryover contamination). Plotted in a reciprocal manner, the slopes of the correlation curves in the HIV-positive samples drop because of the slowing down of diffusion after binding to the amplified target. This possibility of sharp discrimination is essential for all diagnostic methods using amplification systems (PCR as well as NASBA).
The quantitation of HIV-1 RNA in plasma by combining NASBA with FCS may be useful in assessing the efficacy of anti-HIV agents, especially in the early infection stage, when standard ELISA antibody tests often give negative results. Furthermore, the combination of NASBA with FCS is not restricted to the detection of HIV-1 RNA in plasma. Though HIV is presently a particularly prominent example of a viral infection, the diagnosis of hepatitis (both B and C) remains much more challenging. Moreover, the number of HIV- or HBV-infected subjects worldwide is increasing at an alarming rate, with up to 20% of the population in parts of Africa and Asia being infected with HBV. In contrast to HIV, HBV infection is not particularly restricted to high-risk groups.
CONCLUSIONS AND DISCUSSION
Our results from high-resolution NMR analysis of oil, with nanoliter precision, in mutagenized somatic embryos strongly indicate that this novel methodology is practical for producing mature soybean embryos with increased oil content that would be of significant economic value. By comparison, the rate of usable mutants in whole soybean seeds has been reported to be as low as 10⁻⁴. Therefore, methodologies starting with whole mature soybean seeds have considerably higher cost and time requirements for the experimental selection of mutant soybean lines with increased oil content than our methodology, which utilizes somatic embryos grown in vitro.
On the other hand, FT-NIR spectroscopy has major practical advantages over other techniques (such as either low- or high-resolution NMR) for the quantitative determination of oil, protein, moisture and perhaps even minor seed constituents. Such key advantages are: speed, accuracy, reproducibility, convenience (i.e., little or no sample preparation), and relatively low cost (in comparison with both pulsed and HR NMR). Furthermore, a significant advantage of the datasets obtained by adequately calibrated FT-NIR is their high internal consistency for large numbers of normal, yellow-coat soybean seed samples. These advantages are very important both for soybean breeding/selection programs and for wide-scale industrial applications of NIR composition analysis throughout the entire soybean distribution and processing chain. Major practical limitations of FT-NIR spectroscopy are the need for primary/reference methods and its lower resolution (compared with either FT-IR or HR-NMR).
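The calibration step mentioned above is typically a multivariate regression against a primary/reference method. A minimal sketch of such a chemometric workflow is given below, using partial least squares (PLS) regression on synthetic spectra; the data generation, the component count and the use of scikit-learn are illustrative assumptions only.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_score

# Synthetic stand-in for an FT-NIR calibration set: rows are spectra and
# y holds oil content (%) from a primary/reference method. The spectra
# are fabricated from a single 'oil signature' plus noise for illustration.
rng = np.random.default_rng(0)
n_samples, n_channels = 120, 600
oil = rng.uniform(15.0, 25.0, n_samples)          # assumed oil range, %
signature = rng.normal(size=n_channels)           # fake absorption pattern
spectra = ((oil[:, None] / 20.0) * signature[None, :]
           + rng.normal(scale=0.05, size=(n_samples, n_channels)))

# PLS compresses the collinear spectral channels into a few latent
# components before regressing on the reference oil values.
pls = PLSRegression(n_components=5)
scores = cross_val_score(pls, spectra, oil, cv=5, scoring="r2")
print(f"cross-validated R^2: {scores.mean():.3f} +/- {scores.std():.3f}")
```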
Microscopic resolution testing of the Spotlight 300 FT-IR chemical imaging (array) microspectrometer, using coated latex spheres and Mg-silicate particles, yielded a spatial resolution of about 6 microns. Current commercial instrumentation for FT-IR and FT-NIR microspectroscopy/chemical imaging and fluorescence microscopy is capable of in vivo, automated measurements and visualization of composition distribution in various cell types and tissue systems.
Recent FT-NIR/IR developments and the combination of FT-IR and FT-NIR spectroscopy with microscopy (i.e., microspectroscopy) allow one to obtain microscopic chemical images of soybeans and soybean embryos, in both reflection and transmission modes, in as little as 3 min at spectral resolutions up to ~8 cm⁻¹. The highest spatial resolution among the commercial FT microspectrometers investigated was close to 1 micron and was obtained with the FT-NIR AutoImage model microspectrometer. In spite of its lower sensitivity (microgram versus picogram, respectively), NMR Microscopy ('MRM') has also been reported to achieve 1 micron resolution under the most favorable conditions, at ¹H resonance frequencies significantly lower than 1 GHz (33, 34). At present, however, the typical resolution obtained by NMR microscopy on seeds is on the order of 50 µm, as in the case of castor bean imaging (33) or oil in the germ of wheat grains (35, 36). The resolution of NMR microscopy is currently limited by several factors (33), including its lower sensitivity compared with FT-NIR; nevertheless, technical improvements of NMR imaging techniques may be found that overcome such obstacles, eventually leading to submicron resolution. In this context, it is interesting that individual protein-bound water molecules could be observed in lysozyme by 2D NMR (37).
The latest developments indicate that the sensitivity range of FT-NIR microspectroscopy observations can be extended to the femtogram level, with submicron spatial resolution. Such FT-NIR/IR microspectroscopy instrumentation developments are potentially very important for agricultural and food biotechnology applications, as well as biomedical and pharmacological ones, that require rapid and sensitive analyses, such as the screening of high-content microarrays in Genomics and Proteomics research. Novel two-photon NIR-excitation fluorescence correlation microspectroscopy results were reported here with submicron resolution for concentrated suspensions of plant cells and thylakoid membranes. With advanced super-resolution microscopy designs, a further tenfold resolution increase is attainable, in principle, along the optical (z) axis of the microspectrometer. Especially promising are current developments employing multi-photon NIR excitation that could lead, for example, to novel cancer prevention methodology and the early detection of cancers using NIR-excited fluorescence. Other related developments are the applications of Fluorescence Cross-Correlation Spectroscopy detection to monitoring DNA hybridization kinetics, DNA binding, ligand-receptor interactions and HIV-HBV testing.
Very detailed, automated chemical analyses of oils/fats and phytochemicals (e.g., isoflavones in cell cultures) are now also becoming possible by FT-NIR microspectroscopy of single cells, either in vitro or in vivo. Such rapid analyses have potentially important applications in food safety, agricultural biotechnology, medical research, pharmacology and clinical diagnosis.
[Figure caption: The Nikon TE-300 (special model), which has both a back illumination port and a left-hand side port; the PC employed for data acquisition, storage and processing is located behind the instrument, as is the laser illumination source (not visible in the figure).]
[Figure caption: FT-IR reflectance chemical images compared with the visible reflectance image (middle picture) of a black-coated soybean, obtained with a PerkinElmer Spotlight 300 chemical imaging/FPA microspectrometer. The soybean region labeled "Y" shows a zone where the black coat was removed, thus unveiling the yellow soybean interior, which has a markedly different IR absorption spectrum from that of the black coat region.] | 2004-07-03T04:43:47.000Z | 2004-07-03T00:00:00.000 | {
"year": 2004,
"sha1": "27d030b259fa9beaa8ce0e9a4cc73d72acd3e9b1",
"oa_license": null,
"oa_url": "https://doi.org/10.1038/npre.2011.6207.1",
"oa_status": "GOLD",
"pdf_src": "Arxiv",
"pdf_hash": "4df81f21d63c2c7127b0284e8011b49630f7da9d",
"s2fieldsofstudy": [
"Biology",
"Chemistry",
"Medicine"
],
"extfieldsofstudy": [
"Biology",
"Chemistry"
]
} |
230674398 | pes2o/s2orc | v3-fos-license | Spatial Distribution and Contamination of Heavy Metals in Street Dust from Hamedan, Iran
In the Anthropocene, researchers pay special attention to heavy metal pollution associated with urban dust particles, amid overwhelming concerns that heavy metals may exert considerable impacts on ecosystems and human health. In this research, 60 street dust samples were taken from five different urban location types, namely high- and low-traffic streets, parks, residential, and industrial areas of Hamedan, Iran, as well as background city soils. The samples were analyzed for Zn, Cu, Pb, Mn, Cr and Co using atomic absorption spectrophotometry and ICP-MS. Data showed that mean values for Zn, Cu, Pb, Mn, Cr, and Co in the dust samples were 189.9, 63.8, 63.0, 378.5, 33.0, and 19.8 mg kg⁻¹, respectively. The street dust samples were found to contain significant levels of Cu compared to the background; however, their geo-accumulation indices (Igeo) suggested them to be uncontaminated with Zn, Pb, Mn, Cr and Co, and moderately contaminated with Cu. The Igeo values were in the following order: Cu > Pb, Zn > Mn > Cr > Co. The enrichment factor (EF) was estimated for all studied heavy metals using Mn as the reference element. Except for Cu, the mean values of EF were less than 2.
Introduction
In recent decades, characterized as the Anthropocene epoch, much attention has been paid to heavy metal contamination associated with dust particles in many parts of the world [1,2]. Dust constituents vary from elemental wastes to organic and inorganic compounds. Natural and anthropogenic heavy metals comprise the most important part of the dusts' inorganic components [3,4]. Disproportionate accumulation of heavy metals is a serious threat to living organisms.
They pollute the natural environment and exert toxic effects on humans. Some heavy metals are essential for life; however, it must be taken into consideration that at higher levels they can also pose toxicological risks [4,6]. When studying urban atmospheric pollution, identification of the source and location of the dust stands out as a high priority [7]. Hence, such studies should determine the origin, distribution, environmental damage and health effects of the heavy metals concerned [8]. Dust storms, with their important impacts on human health, have turned into a frequent phenomenon of Iran's weather system. Their adverse effects on humans, causing respiratory and cardiovascular diseases and infections, on the one hand, and on the environment, reducing visibility, creating agricultural losses, affecting industry, and complicating satellite imagery, on the other, are well known [9]. On a global scale, most dust particles come from arid and semiarid areas [10].
Recently, the amount of dust particles reaching Iran from Arabian countries has been dramatically increasing. They have severely affected western and even central parts of Iran [11].
They are an important route of human exposure to heavy metals.
In recent years, large amounts of atmospheric dust particles have been depositing in many cities after crossing the western borders of Iran. The most dominant winds of the country blow from west to east; therefore, Iran's western neighboring countries are mainly to blame as the sources of its dust storms [12]. Saeedi et al. reported that, although there is no confirmed origin, it is suspected that most of the dust particles originate from the dry wetlands of southeastern Iraq and the deserts of Iran's western neighbors [13]. The growth of population, industrial activities, and vehicle numbers in large cities are the other major causes of pollution in urban environments. Road dust is a leading pathway of human exposure to toxic elements [14]. Cities have become point sources of toxic chemicals owing to the unrestrained use of fossil fuels. Urban people are the most affected, and traffic policemen are the worst sufferers because they are particularly close to automobile exhaust fumes [15]. The complexity of dust particles makes their characterization and source identification difficult [16]. Dust particles can be regarded as an indicator of heavy metal contamination from atmospheric deposition [17][18][19][20].
The close association of heavy metals with dust particles may be enhanced in the presence of anthropogenic sources of heavy metals.
Depending on the population and economic activities, the level of contamination in a city varies from place to place. The presence of heavy metals in high concentrations in the environment leads to health hazards, such as adverse effects on the nervous, blood-forming, renal, and reproductive systems. Elements such as Cu, Cd, and Zn have been identified as able to alter the function of the human central nervous and respiratory systems, while also disturbing the endocrine system [21].
Other effects include reduced intelligence, attention deficit, and behavioral abnormalities, as well as increased cardiovascular disease in adults [22]. Dust particles may also cause other kinds of problems, such as reduced soil fertility, damage to crops and reduced solar radiation [23]. While there are many published studies on the concentration of heavy metals in the street dust of major cities in developed countries, few research projects have addressed the issue in smaller cities of developing countries. Contamination and the spatial distribution of heavy metals associated with street dust particles have therefore become a major environmental issue in many western cities of Iran, including Hamedan. The main objective of this study was thus to determine the concentrations of heavy metals (Zn, Cu, Pb, Mn, Cr and Co) in street dust samples collected from different parts of Hamedan city, and then to examine their spatial distribution.
Sample Collection
The street dust samples were collected and placed into polyethylene bags using a clean brush and dustpan. Sampling was conducted in May (prior to the rainy season) to avoid heavy metals being washed out by rain. Each street dust sample was 300-700 g in mass and was collected from a 1 m² area measured with a ruler. The samples were then transferred to the soil science laboratory of Bu-Ali Sina University, Hamedan, Iran, and dried at room temperature for 3 days. Later, they were sieved through a 1 mm stainless steel sieve.
Dust Analysis
At first, 0.6 g of every sample was digested [24] with a mixture of perchloric, nitric, and sulfuric acid in a 1:5:1 proportion and heated at 215°C until white fumes were given off and a creamy color appeared. Then, 10 ml of deionized water was added to each sample and heated at 100°C for 1 hour. The solutions were allowed to cool, then filtered and made up to 100 ml. Finally, the concentrations of Zn, Cu, Pb, Mn, Cr, and Co were determined using atomic absorption spectrophotometry.
Contamination Assessment Methods
A number of calculation methods have been put forward for quantifying the degree of metal enrichment or pollution in dust particles [25]. In this study, the geo-accumulation index (Igeo), enrichment factor (EF), and pollution index (PI) were calculated to assess the heavy metal contamination level in the road dust particles. Although the geo-accumulation index was originally used with bottom sediments [25], it is also widely used to determine the degree of heavy metal pollution in dust particles and soil [26]. Igeo is computed by the following equation (its standard form):

Igeo = log2 [Cn / (1.5 Bn)]  (eq. 1)

where Cn represents the measured concentration of element n in the sampled street dust, Bn is the geochemical background value of element n in the background sample, and the factor 1.5 accounts for natural variability of the background. The geo-accumulation index is classified as shown in Table 1. The enrichment factor (EF) is used to determine the degree of metal contamination in the studied samples [27]. It is based on standardization of a determined element against a reference element, often one characterized by low occurrence variability such as Fe, Al, Ti, Mn, Sc, etc. [28][29][30]. The EF is calculated as:

EF = (Cn/Cref)dust / (Cn/Cref)background  (eq. 2)

where Cref denotes the concentration of the reference element; the EF categories are given in Table 2. The pollution index (PI) was used to assess the pollution degree and environmental quality in this study.
The PI is defined as:

PI = Cn / Cref  (eq. 3)

where Cn and Cref stand for the concentration of the examined element and the reference value, respectively (Table 3).
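A short sketch of how these three indices might be computed is given below; the background concentrations used here are illustrative placeholders (the study's actual background values appear in its tables), while the dust means are those quoted in the abstract.

```python
import math

# Mean street-dust concentrations from the abstract (mg/kg) together with
# illustrative background values; the study's actual background
# concentrations are reported in its tables.
background = {"Zn": 95.0, "Cu": 22.0, "Pb": 44.0, "Mn": 488.0, "Cr": 41.0, "Co": 23.0}
dust = {"Zn": 189.9, "Cu": 63.8, "Pb": 63.0, "Mn": 378.5, "Cr": 33.0, "Co": 19.8}

for metal, cn in dust.items():
    bn = background[metal]
    igeo = math.log2(cn / (1.5 * bn))                        # eq. 1
    ef = (cn / dust["Mn"]) / (bn / background["Mn"])         # eq. 2, Mn reference
    pi = cn / bn                                             # eq. 3
    print(f"{metal}: Igeo = {igeo:5.2f}  EF = {ef:5.2f}  PI = {pi:5.2f}")
```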
Methods of Mapping Heavy Metal Concentrations
Spatial distribution maps of heavy metal concentrations were generated by Inverse Distance Weighted (IDW) interpolation of the data from the 60 street dust samples, using ArcMap software.
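A minimal sketch of the IDW scheme is given below; the sample coordinates, concentrations and grid extent are synthetic stand-ins, and the brute-force distance computation is chosen for clarity rather than the optimized routines ArcMap uses internally.

```python
import numpy as np

def idw(xy_samples, values, xy_grid, power=2.0, eps=1e-12):
    """Inverse Distance Weighted interpolation: each grid point takes a
    weighted mean of sampled values, with weights 1/distance**power."""
    d = np.linalg.norm(xy_grid[:, None, :] - xy_samples[None, :, :], axis=2)
    w = 1.0 / (d + eps) ** power          # eps guards against zero distance
    return (w * values).sum(axis=1) / w.sum(axis=1)

rng = np.random.default_rng(1)
pts = rng.uniform(0, 10, size=(60, 2))    # 60 sampling locations (km, assumed)
cu = rng.uniform(20, 120, size=60)        # Cu concentrations, mg/kg (illustrative)

gx, gy = np.meshgrid(np.linspace(0, 10, 50), np.linspace(0, 10, 50))
grid = np.column_stack([gx.ravel(), gy.ravel()])
cu_map = idw(pts, cu, grid).reshape(gx.shape)
print(f"interpolated Cu range: {cu_map.min():.1f}-{cu_map.max():.1f} mg/kg")
```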
Results and Discussion
Heavy Metal Concentration
Shi and Wang reported that, in their study area, the mean Cu content was almost 6 times higher than the background. Lu found the maximum Cu in samples from heavy-traffic areas. Pb is believed to be responsible for many negative effects on the human body, such as damage to the kidneys and the nervous and reproductive systems; it is also the element of most concern in environmental heavy metal pollution studies [32].
Spatial Distribution of Heavy Metals
The spatial distribution of metal concentrations is a useful way to assess possible sources of enrichment and to identify hot-spot areas with the highest metal concentrations. According to the EF results (Table 6), the order of EF values was as follows: Cu > Zn, Pb > Cr, Co, similar to the Igeo order, which can also be interpreted as the decreasing order of their overall contamination of street dusts from Hamedan city. Anthropogenic pollution is clearly identified when the maximum EF of a heavy metal is larger than 3. The mean EF of Cu was higher than 2, while the mean EFs of Zn, Pb, Cr, and Co were less than 2. These findings showed that Cu was the main pollutant in the street dusts of Hamedan city and largely originated from anthropogenic sources. The high values of Igeo, EF and PI for Cu in the sampled street dusts indicated considerable Cu pollution, mainly originating from traffic and industrial activities. The Igeo, EF and PI of the other studied heavy metals were low, revealing low pollution levels for these metals in street dusts from the studied area. These findings underscore that more attention should be paid to heavy metal contamination in the street dusts of the city, especially Cu. Some protective measures, such as encouraging the use of public transport, replacing liquid fossil fuels with gaseous fuel, and planting more green areas, are suggested to combat the problem. | 2020-12-24T09:13:21.780Z | 2020-09-02T00:00:00.000 | {
"year": 2020,
"sha1": "af75432132a95274ddd03c901c9a06d07cd3bcea",
"oa_license": "CCBY",
"oa_url": "https://biomedres.us/pdfs/BJSTR.MS.ID.004882.pdf",
"oa_status": "GOLD",
"pdf_src": "Adhoc",
"pdf_hash": "7651e82304ee92eec82bb98e7ac2785be26c6066",
"s2fieldsofstudy": [
"Environmental Science"
],
"extfieldsofstudy": [
"Environmental Science"
]
} |
54721417 | pes2o/s2orc | v3-fos-license | On the thermodynamic stability of rotating black holes in higher dimensions -- a comparison of thermodynamic ensembles
Thermodynamic potentials relevant to the micro-canonical, the canonical and the grand-canonical ensembles, associated with rotating black holes in D-dimensions, are analysed and compared. Such black holes are known to be thermodynamically unstable, but the instability is a consequence of a subtle interplay between specific heats and the moments of inertia and it manifests itself differently in the different ensembles. A simple relation between the product of the specific heat and the determinant of the moment of inertia in both the canonical and the grand-canonical ensembles is derived. Myers-Perry black holes in arbitrary dimension are studied in detail. All temperature extrema in the micro-canonical ensemble are determined and classified. The specific heat and the moment of inertia tensor are evaluated in both the canonical and the grand-canonical ensembles in any dimension. All zeros and poles of the specific heats, as a function of the angular momenta, are determined and the eigenvalues of the isentropic moment of inertia tensor are also found and classified. It is further shown that many of the thermodynamic properties of a Myers-Perry black hole in D-2 dimensions can be obtained from those of a black hole in D dimensions by sending one of the angular momenta to infinity.
Introduction
An immediate consequence of Hawking's famous result that Schwarzschild black holes in four dimensions have a temperature that is inversely proportional to their mass [1] is that any such black hole is thermodynamically unstable, due to a negative specific heat. They can however be stabilised by putting them in a box with thermal walls [2] or by introducing a negative cosmological constant of sufficient magnitude [3]. While the specific heat of a Schwarzschild black hole can be rendered positive by making it rotate sufficiently fast, this does not stabilise the black hole, as the moment of inertia tensor then becomes negative, maintaining the instability.
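This negative specific heat can be checked in a few lines of computer algebra. The sketch below uses the standard four-dimensional relation T = 1/(8πM) in geometric units (a textbook fact, not a formula taken from this paper):

```python
import sympy as sp

# 4D Schwarzschild in units G = c = hbar = k_B = 1: T = 1/(8*pi*M).
# The specific heat C = dM/dT is negative for every mass, which is the
# thermodynamic instability discussed in the text.
M = sp.symbols("M", positive=True)
T = 1 / (8 * sp.pi * M)
C = 1 / sp.diff(T, M)          # dM/dT = (dT/dM)**(-1), T(M) monotonic
print(sp.simplify(C))          # -> -8*pi*M**2, i.e. C < 0 for all M > 0
```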
The generalisation of the Kerr metric to a class of rotating black holes in D-dimensions, found by Myers and Perry [4], provides an arena for testing these ideas in a more general context. In 4-dimensions there is a maximum angular momentum that a rotating black hole can sustain, corresponding to an extremal black hole with vanishing Hawking temperature, but in higher dimensions this is not the case. There is more than one angular momentum in D > 4, corresponding to the fact that the rank of SO(D − 1) is greater than one for D > 4, and some, but not all, of the angular momenta can become arbitrarily large: the phenomenon of ultra-spinning black holes [4]. Infinite angular momentum does not however imply infinite angular velocity; rather, the corresponding angular velocity vanishes as an angular momentum diverges. The infinite angular momentum is due to a singularity in the moment of inertia of the black hole and is not due to infinite angular velocity.
It was suggested some time ago that there should be a link between the thermodynamic properties of black holes, in particular the second law of thermodynamics, and dynamical instability for D > 4, [5]. Stability of Myers-Perry black holes was analysed in [6] and an extensive literature on the subject of the thermodynamic and dynamical instability of rotating black holes in higher dimensions has since emerged [7]- [19]. In particular it has been shown, with a very general argument utilising only the Smarr relation and the first law, that all asymptotically flat electrically neutral solutions of the vacuum Einstein equations in D-dimensions are thermodynamically unstable, [15]. Nevertheless it is still instructive to examine the details of thermodynamic stability in the different ensembles and in specific cases.
The thermodynamic quantities of interest are the mass (usually identified with the internal energy, though it is more correctly thought of as the enthalpy; these are of course the same for zero pressure, and in this work we make no distinction between enthalpy and internal energy), the entropy, the temperature, the angular momenta and the angular velocities. With Newton's constant G N = 1 and c = 1 these can all be given dimensions of length to some power: M scales as L^(D−3), S and the J i as L^(D−2), while T and the Ω i scale as L^(−1). It is thus natural to consider M, S, and J i to be extensive while T and Ω i are intensive, and this classification will be adopted here.
In §2.4 a general relation between the canonical and the grand canonical ensembles, for electrically neutral rotating black holes, is derived. For the canonical ensemble the Hessian, ∂²U, of the internal energy U(J i , S) is shown to have determinant

det(∂²U) = 1 / (β C J det I T)   (1)

where C J is the specific heat at constant angular momentum and I T is the isothermal moment of inertia tensor. For the grand canonical ensemble, on the other hand, the Hessian ∂²G of the grand canonical potential, G(Ω i , T), is shown to satisfy

det(−∂²G) = β C Ω det I S ,   (2)

where C Ω is the specific heat at constant angular velocity and I S is the isentropic moment of inertia tensor. Standard thermodynamic arguments then imply that

C J det I T = C Ω det I S ,   (3)

which is one of our main results. We then compare the thermodynamics of Myers-Perry black holes in the different ensembles. In one of the first papers on the stability of Myers-Perry black holes [6] it was observed that, as one of the angular momenta is increased keeping the others zero and the mass fixed (the microcanonical ensemble), there is a minimum in the temperature. The authors suggested that this was a signal of an instability: that there should be dynamical negative modes leading to a more stable solution of Einstein's equations
with less symmetry. This gave further support, beyond the non-rotating case studied in [5], to the idea that thermodynamic and dynamical instability of black holes in higher dimensions are intimately related. Hints of the instability can be seen in the microcanonical ensemble in which the entropy is a monotonically decreasing concave function of angular momenta, at constant mass, until the temperature hits a minimum at which point the entropy has an inflection point and becomes concave in at least one direction [8].
Thermodynamic instability manifests itself in different ways in the various ensembles. In the grand canonical ensemble the grand canonical potential G(Ω i , T ) is considered as a function of intensive variables and thermodynamic stability requires that G be a totally concave function of its arguments [21]. The particular cases of asymptotically flat Kerr and Myers-Perry black holes were investigated in [10] for 4 ≤ D ≤ 6 and it was shown that the specific heat C J is negative when all angular momenta vanish, but can become positive when some of the angular momenta become large enough. However when the angular momenta are large enough for the specific heat to be positive the isothermal moment of inertia, all of whose eigenvalues are positive for zero rotation, has at least one negative eigenvalue -there is thus always an instability. One of the results of the present work is to extend the explicit analysis of [10] to all D and show that the same phenomenon persists.
The relationship between the microcanonical and the grand canonical ensembles was studied in [16] and in §3.4.4 we extend this analysis further and derive a number of relations between the temperature, the specific heat at constant angular velocity C Ω , and the isentropic moment of inertia tensor I S for Myers-Perry black holes in any dimension. We show that there is a branched hypersurface in angular momentum space where βC Ω (with β = 1/T) develops a pole and that this is the same hypersurface as the one on which the temperature is minimised in the microcanonical ensemble. This hypersurface can be obtained from the extremal T = 0 hypersurface by analytically continuing (J i )² to −(J i )², keeping the entropy constant. There is yet another significant hypersurface, one with a number of branches on which βC Ω vanishes, and on this hypersurface the isentropic moment of inertia tensor develops an infinite eigenvalue, in the form of a pole. This pole exactly cancels the zero in βC Ω in the Hessian of the grand canonical potential. The branches of this hypersurface divide the space of angular momenta into separate regions determined by the signature of I S .
A by-product of our analysis is that the thermodynamic properties of a Myers-Perry black hole in D − 2m dimensions, in the micro or the grand canonical ensemble, can be obtained from those of a Myers-Perry black hole in D dimensions by sending m of the angular momenta to infinity in the latter.
In §2 the thermodynamics of rotating black holes in the different ensembles are analysed and equation (3) derived along with other relations between the various thermodynamic quantities. In §3 Myers-Perry black holes are studied and it is shown explicitly how the specific heats and moments of inertia conspire to satisfy the general relations of §2. The results are summarised in §4 and some technical results required in the analysis are relegated to five appendices.
Thermodynamics of rotating black holes
Rotating black holes in D > 4 space-time dimensions must be treated slightly differently for even and odd D because the rotation group SO(D − 1) has different characterisations of angular momenta in the even and odd dimensional cases. The Cartan sub-algebra has dimension (D − 2)/2 for even D and (D − 1)/2 for odd D, so a general state of rotation is specified by (D − 2)/2 independent angular momenta in even D and (D − 1)/2 in odd D. Let N, the integral part of (D − 1)/2, be the dimension of the Cartan sub-algebra of SO(D − 1); then there are N independent angular momenta J i , i = 1, . . . , N. It is notationally convenient to introduce a parameter ǫ = (1 + (−1)^D)/2, equal to one for even D and zero for odd D, in terms of which N = (D − 1 − ǫ)/2. In the microcanonical ensemble the energy is fixed and we choose as thermodynamic control parameters the extensive quantities J i and M, with the entropy S(J i , M) being the thermodynamic potential, which is convenient for differentiation keeping M fixed. In the canonical ensemble the energy is allowed to fluctuate and the internal energy is used as the thermodynamic potential. In the grand canonical ensemble all extensive variables are allowed to fluctuate and the intensive variables are used as control parameters; the relevant thermodynamic potential is then the grand canonical potential G = M − T S − Ω i J i (summed over i). The grand canonical potential is related to the Euclidean formulation, since the Euclidean action I E is related to the mass by a Legendre transform, I E = βG.
Microcanonical ensemble
For completeness we summarise in this sub-section some of the results pertaining to the microcanonical analysis in [18]. With the entropy expressed as a function of J i and M, S(J, M), the first law gives dS = β dM − βΩ i dJ i , where β = 1/T. These thermodynamic quantities have a geometrical interpretation in the Euclidean formulation of the black hole, where demanding absence of a conical singularity requires periodicity in imaginary time, τ = it, with τ identified with τ + β, and periodicity in imaginary angle [18], ϕ i = iφ i , with ϕ i identified with ϕ i − βΩ i . Thus the 1-form dS determines the size of the (τ, ϕ i ) torus. For thermodynamic stability the entropy should be purely concave [21], which requires that the Hessian H AB = −∂²S/∂x A ∂x B , where x A = (J i , M), must be a positive definite matrix. An identity relating det(H AB ) to det(H ij ) for asymptotically flat black holes is derived in [15], where H ij is the N × N matrix of second derivatives restricted to the angular momentum directions. One can immediately conclude that such black holes can never be thermodynamically stable in D ≥ 4, since det(H AB ) > 0 requires det(H ij ) < 0, hence at least one eigenvalue of H ij would have to be negative and S cannot be a concave function.
Canonical ensemble
The canonical ensemble uses the internal energy U(J i , S) as thermodynamic potential, depending on extensive arguments that are the Legendre transforms of Ω i and T, and for black holes U is identified with the ADM mass, at least in the asymptotically flat case. Stability requires that U be a totally convex function of its arguments. Let x A = (J i , S), with A = 1, . . . , N + 1; then U AB = ∂²U/∂x A ∂x B must be a positive matrix. Explicitly, the angular block of the Hessian is the inverse of the symmetric isentropic moment of inertia tensor I S , while the S-S component involves C J , the specific heat at constant J; the mixed components are related by a Maxwell relation. A necessary, but not sufficient, condition for stability is thus that C J > 0 and that I S be a positive matrix.
Grand canonical ensemble
In the grand canonical ensemble stability requires that G(Ω, T) be a concave function. Let y A = (Ω i , T), with y i = Ω i and y N+1 = T; then G AB = ∂²G/∂y A ∂y B must be a negative matrix. Explicitly, the angular block of −G AB is the symmetric isothermal moment of inertia tensor I T , while the T-T component involves C Ω , the specific heat at constant Ω. A necessary, but not sufficient, condition for stability is thus that C Ω > 0 and that I T be a positive matrix.
Relation between the canonical and the grand canonical ensembles
The canonical and the grand canonical ensembles are of course related. An immediate consequence of the Legendre transform (6) is that the Hessians ∂²U and −∂²G are mutual inverses, and this has important consequences for the individual components. A relation between the specific heats was derived in [10], where the Maxwell relation (22) has been used. Similar manipulations can be used to relate the isentropic and isothermal moment of inertia tensors. The stability conditions (17) and (23) can thus be expressed in either set of variables, and equation (24) now gives the identity C J det I T = C Ω det I S . A new instability would be expected to develop every time one of the eigenvalues of −∂ A ∂ B G changes from a positive to a negative value, either by going through zero or infinity. In general this might be expected to happen on a hypersurface on which det(−∂²G) is either zero or infinity, but we shall see that, at least in the case of Myers-Perry black holes, there are some subtle cancellations so that the change is not reflected in the determinant. The form of the Hessians (13) and (19) can be simplified by using a Legendre transform on the scalar variable, respectively S and T. In the canonical ensemble let x A′ = (J i , T); the corresponding co-ordinate transformation partially diagonalises the Hessian in (J i , T) co-ordinates (the stability properties of the canonical ensemble are determined by the signature of the Hessian U AB , which is the same as that of U A′B′ ; the matrix ∂²F/∂x A′ ∂x B′ has a different signature, and hereinafter we shall not distinguish between upper and lower indices). Similarly, in the grand canonical ensemble, we can transform the scalar variable from T to S. Note that, although the canonical ensemble implicitly involves I S −1 , its stability properties are most easily seen using I T −1 in (36) and, while the grand canonical ensemble implicitly involves I T , its stability properties are most easily studied using I S in (37).
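The determinant identity quoted in (3) follows from this inverse-Hessian structure alone, so it can be checked on any smooth two-variable potential. The sketch below does this in sympy for an arbitrary illustrative U(J, S) with a single spin (N = 1), not for an actual black-hole potential:

```python
import sympy as sp

# Check det(d^2 U) = 1/(beta*C_J*det I_T), eq. (1), for a single spin
# (N = 1) and an arbitrary smooth illustrative potential U(J, S); this is
# a property of the Legendre-transform structure, not of a specific metric.
J, S = sp.symbols("J S", positive=True)
U = J**4 / S + S**3 + J**2 * S

T = sp.diff(U, S)                               # temperature T = (dU/dS)_J
C_J = T / sp.diff(U, S, 2)                      # C_J = T * (dT/dS)_J**(-1)
# Isothermal moment of inertia (dJ/dOmega)_T via implicit differentiation:
I_T = 1 / (sp.diff(U, J, 2) - sp.diff(U, J, S)**2 / sp.diff(U, S, 2))

hessian = sp.Matrix(2, 2, lambda a, b: sp.diff(U, (J, S)[a], (J, S)[b]))
rhs = T / (C_J * I_T)                           # 1/(beta*C_J*det I_T), beta = 1/T
print(sp.simplify(hessian.det() - rhs))         # 0: the identity holds
```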
Myers-Perry black holes
Myers-Perry black holes in D space-time dimensions have an event horizon which has the topology of a (D − 2)-dimensional sphere. This can be described in terms of Cartesian co-ordinates x a on the unit sphere in R^(D−1), and we can write these as z i = x 2i−1 + i x 2i = ρ i e^(iφ i ), where the z i , i = 1, . . . , N, are complex co-ordinates for both the even and odd cases while y = x D−1 is only present for even D.
Then ρ i , φ i and y are co-ordinates that can be used to parameterise the sphere and, for the black hole, the J i are angular momenta in the (x 2i−1 , x 2i )-planes. The Myers-Perry line element can be expressed in terms of two functions Z and U of the co-ordinates; the a i are rotation parameters in the (x 2i−1 , x 2i )-planes and µ is a mass parameter. We use units in which the D-dimensional Newton's constant and the speed of light are set to one. There is an event horizon at r h , the largest root of Z − 2µ = 0, and the area of the event horizon involves ̟, the volume of the round unit (D − 2)-sphere. The Bekenstein-Hawking entropy is one quarter of the horizon area, and the Hawking temperature follows from the surface gravity. The angular momenta, the entropy and the ADM mass, M, of the black hole are related to each other, and to the metric parameters µ and a i , while the angular velocities are Ω i = a i /(r h ² + a i ²).
Microcanonical ensemble
The microcanonical ensemble was developed for Myers-Perry black holes in [12], [15] and [18]. In particular the Hessian (11) was evaluated explicitly in [18] and is reproduced in appendix A; there the ω i are dimensionless angular velocities and Ω² = Σ i ω i ². Black hole thermodynamics in D > 4 has a subtle relation with dynamical instability. It was noted in [6] that, for D ≥ 6, the temperature of a Myers-Perry black hole with only one J i ≠ 0 has a minimum as the spin increases at fixed mass. Taking J 1 ≠ 0 and J i = 0 for i = 2, . . . , N, the minimum is at j 1 ² = (D − 3)/(D − 5), and in [6] it was suggested that this minimum signals the onset of a dynamical instability for a rotating black hole. Thermodynamic functions are thus giving hints of possible dynamical instability, and this was studied in [15] and [18], where some special cases of non-zero spin were analysed. These authors studied the matrix (48) in the symmetric cases where the non-zero a i are all equal, a 1 = . . . = a n = a ≠ 0, a i = 0 for i = n + 1, . . . , N.
The entropy and the temperature decrease as the angular momenta are increased, at constant M, until the temperature reaches a minimum, and at precisely that point the matrix H ij develops a zero eigenvalue, signalling the fact that the entropy ceases to be concave in that direction. The temperature has a minimum for configurations of the form (49) at the critical angular momentum given in (50). In particular n = 1 gives the original expression in [6], and the two special cases n = 1 and n = N were considered in [18], while (50) for general n appeared in [16].
In appendix A all extrema of the temperature, in the microcanonical ensemble with fixed mass, are found and classified. For finite J i they are all of the same form as (49) (up to permutations of the J i ), with j * = 2πJ * /S the dimensionless angular momentum in units of entropy. The value of the temperature at the extrema (53) is given by (54). The temperature is a maximum, T max = (D − 3)/(4πr h ), for non-rotating Schwarzschild-Tangherlini black holes (n = 0). For finite J i the stationary points J * are saddle points of the temperature, with minima along the directions J * = (J * , · · · , J * , 0, . . . , 0) satisfying (53) and maxima in the directions orthogonal to these. At the same time H ij in equation (48) develops a single zero eigenvalue at J * , in the direction J * , indicating an inflection point in that direction. There are also (n − 1) negative eigenvalues of H ij at J * , (137), indicating convexity of the entropy in these directions with associated thermodynamic instabilities, while the entropy is concave in all other finite directions. At all stationary points of T , det(H ij ) vanishes.
When some number m of the J i are allowed to become infinite, equations (53) and (54) are modified accordingly. Indeed in many of the following formulae the thermodynamic properties of a Myers-Perry black hole in D dimensions, with m angular momenta sent to infinity, are seen to be the same as those of a D − 2m dimensional black hole with all angular momenta finite and the moment of inertia tensor, which has zero eigenvalues in the infinite directions, suitably truncated (a caveat to this statement is that we must restrict to m < (D − 3)/4, equation (110)). This can be seen in the formulae in the appendix, though C J and det(I T ) are exceptions, and so thermodynamic dimensional reduction using this limit does not work in the canonical ensemble. In this sense lower dimensional black holes can be obtained by starting from large D and sending more and more of the J i to infinity.
Heat capacities
The heat capacity at constant J is derived in appendix B. It can be expressed fairly concisely by using auxiliary functions defined there. The specific heat at constant J i is then given by equation (56), which generalises the formulae for the specific cases D = 4, 5 and 6, given in [10], to arbitrary D.
As is well known the specific heat is negative for J i = 0, but can be positive for non-zero J i . The specific heat at constant J i (the canonical ensemble) is related to the specific heat at constant Ω i (the grand canonical ensemble) by equation (27). Alternatively C Ω can be evaluated directly for a general D without knowing the moment of inertia explicitly. The details are left to appendix C and here we just quote the result, which generalises the D = 4 result of [11] to D ≥ 4.
To simplify some later formulae it will be convenient to define, in analogy with (57), the quantity t, in terms of which C Ω takes a compact form; note the signs in this notation. There is a curious parallel between the singularities of βC Ω , where t = 0, and extremal black holes for which t, and hence T , vanishes. Since t(j²) = t(−j²), these are related by mapping (J i )² → −(J i )², keeping the entropy constant.
Moment of inertia tensor
The isothermal moment of inertia tensor is derived in appendix D and is given in equation (62), which generalises the formulae for the particular cases D = 4, 5 and 6 derived in [10]. The isentropic moment of inertia tensor was given for general D in [22]; a derivation is outlined in appendix E, leading to the form (63).
The determinant of the isentropic moment of inertia tensor is given in (65). Note that, in the determinant of the Hessian for the grand canonical ensemble (2), the factor D − 2 + 2Σ − 1 in the denominator of (65) exactly cancels the same factor in the numerator of (59).
Stability analysis in the canonical and grand canonical ensembles
In this section we examine the thermodynamic stability of Myers-Perry black holes in the canonical and the grand canonical ensembles, using the formulae of sections §3.2 and §3.3. We first summarise the well known case of D = 4 and the results of [10] for D = 5 and 6, before going on to describe the situation for general D.
D=4
The case D = 4 is well known, but is included here for completeness. In four dimensions N = 1 and there is only one J. The temperature is proportional to (1 − j²), so we must restrict to 0 ≤ j ≤ 1, with j = 1 being extremal. C J is positive for (2√3 − 3)/3 < j² < 1; in terms of J and M, these limits correspond to 2√3 − 3 < J²/M⁴ < 1, the lower end being the Davies point. The isothermal moment of inertia is only positive when C J is negative, clearly illustrating that Kerr metrics are thermodynamically unstable in the grand canonical ensemble for all values of the angular momentum: when the specific heat is positive the moment of inertia is negative and vice-versa. Note that the pole in C J exactly cancels a zero in I T , a phenomenon that we shall see persists for all D. Equation (32) immediately shows that an instability must be present in the canonical ensemble, though the full story is a little simpler there. Explicitly, βC J I T = βC Ω I S as it should be, even though the instability can shift between the specific heat and the moment of inertia in the former case, while it always resides in the specific heat in the latter, the isentropic moment of inertia always being positive.
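These sign statements are easy to verify numerically. The sketch below builds the Kerr internal energy from the standard Christodoulou mass formula M² = S/(4π) + πJ²/S (in units G = c = 1 with S = A/4; a textbook relation, not one quoted in this paper) and scans C J and I T along a line of constant entropy:

```python
import sympy as sp

# Kerr internal energy from the Christodoulou mass formula (G = c = 1,
# S = A/4): M(S, J)**2 = S/(4*pi) + pi*J**2/S.
S, J = sp.symbols("S J", positive=True)
M = sp.sqrt(S / (4 * sp.pi) + sp.pi * J**2 / S)

T = sp.diff(M, S)                                  # Hawking temperature
C_J = sp.simplify(T / sp.diff(T, S))               # specific heat at fixed J
I_T = sp.simplify(1 / (sp.diff(M, J, 2)
                       - sp.diff(M, J, S)**2 / sp.diff(M, S, 2)))

# Scan along S = 1 (extremality at J = 1/(2*pi) ~ 0.159): C_J and I_T
# swap signs across the Davies point near J ~ 0.063, but their product
# stays negative, so the instability never disappears.
for j in (0.03, 0.05, 0.08, 0.12):
    cj = float(C_J.subs({S: 1, J: j}))
    it = float(I_T.subs({S: 1, J: j}))
    print(f"J = {j:.2f}: C_J = {cj:+.4f}, I_T = {it:+.4f}, C_J*I_T = {cj*it:+.5f}")
```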
D=5
In five dimensions the Hawking temperature vanishes when j 1 ²j 2 ² = 1, so we must restrict to j 1 ²j 2 ² ≤ 1, with the locus of extremal black holes being the hyperbolae j 1 ²j 2 ² = 1.
The specific heat at constant J and the isothermal moment of inertia tensor are easily determined from the general formulae in §2, with N = 2, but the explicit forms are not illuminating and we shall resort to a graphical representation. The specific heat is positive in the region of the j 1 -j 2 plane indicated in figure 1: it diverges on the boundary of the red inner region, vanishes on the outer hyperbolae (the latter being the T = 0 curve), and is positive in the yellow region enclosed by the curves. The eigenvalues of the isothermal moment of inertia tensor are plotted in figure 2. Both eigenvalues are positive for small j i , and one is always positive, but the other vanishes on the same curve that bounds the red region in figure 1 and is negative outside this region. The innermost surface on which the moment of inertia tensor develops a negative eigenvalue is termed the ultra-spinning surface in reference [18], and it was shown there that there is no ultra-spinning surface in the microcanonical ensemble for a singly spinning Myers-Perry black hole in D = 5. In contrast we see here that there is an ultra-spinning surface for I T in the grand canonical ensemble: the concept of an ultra-spinning surface depends on the ensemble used.
Thermodynamic instability can nevertheless be seen directly: I T has a negative eigenvalue when C J is positive, and when I T has two positive eigenvalues, C J < 0. Hence C J det I T is always negative for any black hole, and so these black holes are thermodynamically unstable for any choice of angular momenta with positive temperature.
D=6
In D = 6 the temperature again vanishes on hyperbolae in the j 1 -j 2 plane. The specific heat at constant J looks a little more complicated than in 5D, but only because some of the hyperbolae overlap. Figure 3 displays similar information to figure 1, but for D = 6; this figure is essentially the same as one in [10] and is reproduced here for comparison with figure 4.
General D ≥ 7
In §3.4.1-3.4.3 the stability properties of Myers-Perry black holes in the canonical ensemble were analysed in terms of C J and isothermal moment of inertia, associated to the canonical ensemble through (30). We focus in this section on the grand canonical ensemble, partly because the canonical ensemble has already been analysed (albeit only for D = 4, 5, 6) but primarily because it is algebraically somewhat simpler than the canonical ensemble. The general principles of §2.4 ensure that the stability properties are the same: since ∂ A ∂ B U and −∂ A ∂ B G are inverses of each other their signature is the same.
One necessary condition for thermodynamic stability is positivity of βC Ω ;
in particular βC Ω is negative for non-rotating black holes with all j i = 0. More generally we must examine the condition βC Ω > 0. In terms of suitable variables x i , (77) is a simple ratio of linear functions, and positivity of βC Ω requires the x's to lie between two hyperplanes in x-space, which never intersect for finite x i . However x i diverges when j i ² passes through 1, and this description pushes some subtleties around j i ² = 1 out to infinity. So we consider instead the condition (79). This ratio can only change sign either across the hypersurface where it has a zero, or across the hypersurface where it has a pole. Both these hypersurfaces are of the form C D,s = 0 with s = 2 or 3. If any j i ² = 1, for example if j 1 ² = 1, then at least one other j i ² must be one, and the remaining j i 's, N − 2 of them, are arbitrary. Indeed the hypersurfaces C D,2 and C D,3 intersect on a manifold of co-dimension two (which is actually a flat R^(N−2) in j-space). In D = 7, for example, the relevant hypersurfaces are C 7,3 and C 7,2 , while in D = 8 they are C 8,3 and C 8,2 . These various hypersurfaces are shown in figure 5.
We can determine whether or not βC Ω changes sign across these hypersurfaces by following it out along rays from the origin in specific directions. For example, in the direction j 1 = j, j 2 = · · · = j N = 0, one finds that βC Ω changes sign as each of C D,2 = 0 and C D,3 = 0 is crossed (in this specific direction each hypersurface has only one branch, and so is only crossed once). We note in passing that the hypersurface C D,3 , on which j² = (D − 3)/(D − 5), coincides with the surface on which T is minimised in the microcanonical ensemble, equation (53) with n = 1.
The determinant of the Hessian for Myers-Perry black holes in the grand canonical ensemble, (31), is derived in appendix E, equation (177), and is quoted in (88).
Thus the factor D − 2 + Σ − 1 in the numerator of C Ω , giving rise to a zero in the specific heat, is cancelled by a similar factor in the denominator of det(I S ). It is actually more convenient to examine I S −1 rather than I S , as it has the slightly simpler form (89) (equation (173) in appendix E).
Focusing first on the determinant, stability requires det I S −1 > 0, equation (90). Of course positivity, while necessary for stability, is not sufficient: (90) is satisfied when there is an even number of negative eigenvalues, but we do know that det I S −1 can only change sign when C D,2 = 0. To understand the eigenvalue structure in more detail consider first the two cases D = 7 and D = 8. The relevant surfaces are C 7,2 and C 8,2 , shown in figure 5. Each surface C D,2 consists of two branches, on which at least one eigenvalue of (89) must vanish, touching at the symmetric point j 1 ² = j 2 ² = j 3 ² = 1 where two eigenvalues vanish and the third is positive. These two branches divide the parameter space into three regions. All three eigenvalues are positive in the interior region, inside the inner surface that is visible through the holes in the outer surface, because they are positive at the origin where I S −1 is a positive multiple of the identity matrix. We can determine explicitly how many negative eigenvalues there are in the intermediate region between the two surfaces simply by checking the number at any one point in the region; there must be the same number at any other point in the region as none can change sign unless we cross one of the surfaces. Similarly we can find the number in the exterior region outside both surfaces.
For the region between the two surfaces we need merely set j 1 = j 2 = 0 and choose j 3 ² = j² large enough to ensure that we are outside the interior region. Then there is one negative eigenvalue if j² > (D − 2)/(D − 4), with j² = (D − 2)/(D − 4) marking the boundary of the interior region in the j 3 -direction.
For the region exterior to both the surfaces we can set j 1 ² = j 2 ² = j 3 ² = j², with j large enough to ensure that we are in the exterior region. Now (89) shows that the eigenvalues of (r h S/2π) I S −1 include two degenerate negative eigenvalues if j² > 1, and always one other positive eigenvalue, for D = 7 or 8. We have thus shown that, for every point in the interior region, I S −1 has three positive eigenvalues; every point in the intermediate region has two positive eigenvalues and one negative one, while every point in the exterior region has two negative eigenvalues and one positive one. Since βC Ω vanishes on the same surfaces, and is negative in the interior region since it is negative at the origin, we see that the canonical ensemble is never stable in 7 or 8 dimensions.
The above analysis is easily extended to D > 8. We only need determine the signs of the eigenvalues of (89) in the special directions j 1 = · · · = j n = j, j n+1 = · · · = j N = 0, and this gives the signs in each of the regions separated by the corresponding roots. The number of regions in any specific direction is determined by the number of roots with j² > 0, and the greatest number is when n = N: there are then N − 1 such roots and the different branches of C D,2 = 0 divide j-space into N regions. In these directions (89) reduces to a block combination of d × d identity matrices 1 d×d and the n × n matrix Q n×n whose entries are all one (we use the same notation as [18]). There are N − n eigenvalues +1, and the remaining eigenvectors V = (V 1 , . . . , V n , 0, . . . , 0)ᵗ and eigenvalues λ are determined by the n × n block. There are two possibilities (a small numerical check of the resulting eigenvalue structure follows the list):
1. Σ k V k = 0: then QV = 0, giving n − 1 degenerate eigenvalues; for 1 ≤ n < N this configuration returns a negative eigenvalue once j² is large enough to cross the corresponding branch of C D,2 = 0.
2. Σ k V k ≠ 0: this requires V 1 = · · · = V n , so that QV = nV ; the resulting eigenvalue is positive for all values of j².
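The eigenvalue pattern just described (an (n − 1)-fold degenerate eigenvalue, one shifted eigenvalue from the all-ones block, and unit eigenvalues from the identity block) is easy to confirm numerically; the coefficients below are arbitrary illustrative numbers, not the black-hole expressions:

```python
import numpy as np

# Eigenvalues of A = alpha*1_{n x n} + beta*Q_{n x n} embedded in an
# N x N identity block; alpha and beta are arbitrary illustrative numbers.
N, n = 6, 4
alpha, beta = -0.3, 0.7
A = np.eye(N)
A[:n, :n] = alpha * np.eye(n) + beta * np.ones((n, n))

vals = np.sort(np.linalg.eigvalsh(A))
# (n-1)-fold eigenvalue alpha (eigenvectors with sum V_k = 0), a single
# eigenvalue alpha + n*beta (V_1 = ... = V_n), and N - n unit eigenvalues.
expected = sorted([alpha] * (n - 1) + [alpha + n * beta] + [1.0] * (N - n))
print(vals, np.allclose(vals, expected))   # ... True
```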
The overall picture is then that there are N − 1 branches to the hypersurface C D,2 which divide j-space into N regions. All eigenvalues of I S −1 are positive at the origin and at every point inside the first branch. Every time a branch is crossed by a ray emanating from the origin, one of the positive eigenvalues of I S −1 changes sign and becomes negative until, in the outer region after all N − 1 branches have been crossed, there are N − 1 negative eigenvalues and one remaining positive one. The only region in which I S −1 , and hence I S , is a positive matrix is the innermost one. But we have already seen that βC Ω is negative in the innermost region, hence the canonical ensemble is always unstable for any choice of metric parameters in any D.
In addition to the positive mass Myers-Perry black holes in odd dimensions there are also negative mass Myers-Perry black holes [4]. However, as pointed out in [23], there is a subtlety with these space-times: geodesics are repelled from the would-be event horizon and do not pass through it, so in a sense there is no event horizon. Nevertheless one expects a non-zero Hawking temperature, determined by demanding regularity of the Euclidean time metric, so the entropy cannot be zero. The thermodynamics of these space-times is not analysed here, but would be an interesting future project.
Conclusions
We have compared the microcanonical, the canonical and the grand canonical ensembles in the thermodynamic description of asymptotically flat rotating black holes in arbitrary dimensions. These black holes are always thermodynamically unstable but the thermodynamic instability manifests itself differently in the different ensembles. There is however an elegant and simple relation between the specific heats and moment of inertia tensors in the canonical and the grand canonical ensembles, given by equation (3), C J det I T = C Ω det I S . The case of Myers-Perry black holes has been analysed in detail; all extrema of the temperature in the microcanonical ensemble have been found, classified and shown to correspond to inflection points of the entropy.
In the canonical ensemble it has been shown that, in D dimensions, the specific heat C J in equation (56) vanishes when T = 0 and changes sign on the hypersurface in angular momentum space given by (99) (where j i = 2πJ i /S), on which it diverges. In the determinant of the Hessian this singularity in C J is exactly cancelled by an equivalent zero in det(I T ). There are also singularities in det(I T ) when C D,3 in equation (83) vanishes.
In the grand canonical ensemble C Ω in equation (59) also vanishes when T = 0 and has divergences, this time on the hypersurface defined by C D,3 = 0 rather than that given by (99). In addition C Ω also has zeros on the hypersurface C D,2 = 0 in equation (82). In the determinant of the Hessian for the grand canonical ensemble (31) the zeros of C Ω are cancelled by corresponding poles in det(I S ) on C D,2 = 0. The locus of these singular points of det(I S ) corresponds to a branched hypersurface in angular momentum space which divides the space into N separate regions. Every time a branch of this hypersurface is crossed an eigenvalue of I S changes sign and the moment of inertia tensor has different signature in the N separate regions. Only the region surrounding the origin in angular momentum space gives a positive definite moment of inertia tensor and this region corresponds precisely to the region where C Ω is negative.
There is a curious relation between the hypersurface C D,3 = 0 on which both C Ω and det(I T ) diverge on the one hand and extremal T = 0 Myers-Perry black holes on the other: the algebraic equations defining these two hypersurfaces are related by analytic continuation (J i ) 2 → −(J i ) 2 , with the entropy held constant.
Our analysis has also shown that, in the microcanonical and the grand canonical ensembles, many of the thermodynamic properties of Myers-Perry black holes in D − 2 dimensions can be obtained from those of a black hole in D dimensions by letting one of the angular momenta in D dimensions tend to infinity, keeping the entropy constant.
The thermodynamic instabilities of Myers-Perry black holes thus have a very rich structure, beyond that of the ultra-spinning surface upon which the moment of inertia tensor develops its first negative eigenvalue.
An obvious direction for future work on this topic is to include a charge on the black hole and to introduce a cosmological constant to encompass the case of asymptotically anti-de Sitter rotating black holes. The latter should prove particularly interesting as the black holes will become thermodynamically stable when the magnitude of the cosmological constant is large enough and much could be learned by mapping out the boundary of the stability region.
A Temperature extrema and inflection points of the entropy
In this appendix we extend the study in [15] and [18] to find all isenthalpic (i.e. constant mass) extrema of T for Myers-Perry black holes, as the J i are varied in asymptotically flat space-times. At constant M equation (46) implies that with, which the expression for the mass, gives where we have defined (both t and ω i are invariant under j i → 1 j i ). Equations (100) and (102) together imply .
We now have all the information we need to calculate ∂T ∂J i M from we find where Ω 2 := k ω 2 k . For fixed M extrema of T occur for In particular any finite non-zero j i are all equal at an extremum. It is also possible that some of the j i might tend to infinity. Suppose m of the j i diverge as j i ≈ Λ → ∞. Then, at fixed finite mass, and hence which tends to zero for Λ → ∞ provided m < D−3 4 , which is possible for D ≥ 8.
To keep the discussion general we shall suppose n of the j i are finite and equal, m are infinite and N − n − m are zero. Up to permutations of the j i , extrema of T can only occur for configurations with The extrema require j to satisfy the second equation in (107), j = j * with At j * which gives the solution of (113) to be The temperature at these extrema is, from (105), Demanding T * ≥ 0 imposes the restriction More generally when the angular momenta are of the form (112), but not necessarily at j * , the temperature is and vanishes for which is only possible for m + n ≥ D−3 2 . To analyse the nature of the extrema we need the second derivative of T . A straightforward but tedious calculation gives where Sums like Σ crop up frequently in this analysis and it will prove convenient to define in terms of which We wish to determine the signs of the eigenvalues of (120) at j * . There are three cases to consider: The eigenvalues are all negative in these directions, corresponding to a maximum of T around J * .
• If both indices i and j are in the range [1, n], then the nature of the extremum is determined by (126), where Q ij is the n × n matrix whose entries are all unity and A * and B * are ratios of polynomials in j * . Since Q ij has n − 1 zero eigenvalues and one eigenvalue equal to n, (126) has n − 1 degenerate eigenvalues λ 1 = A * and one eigenvalue λ 2 = A * + nB * . Evaluating A * and B * shows that J * is in general a saddle point, with T minimised in the directions i = 1, . . . , n and maximised in the directions i = N − n − m, . . . , N − m. A necessary condition for stability is that the eigenvalues of H ij be positive [15]. Using (102), one finds (135). As observed in [18] the eigenvalues are all positive near J i = 0 and the first negative eigenvalue is encountered for n = 1 when j² = (D − 3)/(D − 5), which is precisely the inflection point of [6] at the first temperature minimum. This hypersurface j² = (D − 3)/(D − 5), on which H ij first develops a zero eigenvalue, is the ultra-spinning surface of [18]. For D = 5 the ultra-spinning surface in the microcanonical ensemble is not closed and j 1 can reach infinity when j 2 = 0. It is shown in §3.4.2 that the D = 5 ultra-spinning surface in the canonical ensemble is closed, see figure 2.
At temperature minima, where $j = j_*$, the eigenvalues (135) above can be evaluated explicitly. In particular, there is always one zero eigenvalue, corresponding to an inflection point in the entropy in the direction of the associated eigenvector.
B Specific heat at constant angular momentum
To calculate the heat capacity at constant $J$, $C_J = T\left(\frac{\partial S}{\partial T}\right)_J$, we first observe the variation of $T$ implied by (46). Next, combining (41), (44) and (46), the $j_i = \frac{a_i}{r_h} = \frac{2\pi J_i}{S}$ vary accordingly, while explicit use of (44) gives a second relation. Equations (141) and (142) together now give a third, and then (143) and (144) can be combined to give the specific heat in the canonical ensemble with fixed $J$. This can be expressed in terms of $M$ and the $j_i$.
C Specific heat at constant angular velocity
The specific heat at constant $\Omega$ is straightforward to determine, using techniques similar to those of §3.2. In terms of the $j_i$, the entropy (44) can be re-expressed, and the specific heat at constant angular velocity is defined as $C_\Omega = T\left(\frac{\partial S}{\partial T}\right)_\Omega$. From (149), it is straightforward to show how $S$ and $T$ vary with the $j_i$; combining these variations we immediately arrive at equation (59) in the text.
This generalises the D = 4 case derived in [11] to arbitrary D.
D Isothermal moment of inertia
To calculate the isothermal moment of inertia tensor, $(I_T)^{ij} = \left(\frac{\partial J_i}{\partial \Omega_j}\right)_T$, in asymptotically flat Myers-Perry space-times, our starting point is again the entropy (44), from which the variations of the $j_i$ follow. Using these in (46), re-written with the help of (41), we deduce (159). Similar manipulations applied to (47) produce the inverse relation, (160)–(161). Equations (159)–(161) are now easily combined to give the symmetric isothermal moment of inertia tensor, equation (62) in the text. The determinant of $I_T$ can be evaluated by observing that the components of the matrix take the form (165)–(166); the determinant of (165) has a compact expression because the off-diagonal entries factorise, giving (167). A little manipulation, using (166), then yields a closed form, with $t$ defined in (164). This, together with the expression for $C_J$ in (56), leads to the compact expression (169).
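For orientation, the check performed at the end of appendix E, that combining the isentropic result with $C_\Omega$ reproduces the same compact expression (169), presumably rests on the standard identity relating the two heat capacities through $I_T$, the rotating analogue of the familiar $C_p - C_V$ relation. A minimal derivation (our own sketch, using only the free energy differential $dF = -S\, dT + \Omega_i\, dJ_i$ and the definitions above):

$$C_\Omega - C_J = T \left(\frac{\partial S}{\partial J_i}\right)_T \left(\frac{\partial J_i}{\partial T}\right)_\Omega = -T \left(\frac{\partial \Omega_i}{\partial T}\right)_J \left(\frac{\partial J_i}{\partial T}\right)_\Omega = T \left(\frac{\partial \Omega_i}{\partial T}\right)_J (I_T)^{ij} \left(\frac{\partial \Omega_j}{\partial T}\right)_J,$$

where the second equality is the Maxwell relation following from $F(T, J)$ and the third uses $\left(\frac{\partial J_i}{\partial T}\right)_\Omega = -(I_T)^{ij} \left(\frac{\partial \Omega_j}{\partial T}\right)_J$.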
E Isentropic moment of inertia
To calculate the isentropic moment of inertia in asymptotically flat Myers-Perry space-times, again re-write (44); at constant $S$ one has $dJ_i\big|_S = \frac{S}{2\pi}\, dj_i\big|_S$, which then yields (173), as reported in [22]. Equation (173) is easily inverted to give the corresponding inverse tensor.
Similar manipulations to those of appendix D reveal the corresponding isentropic result; combining this with $C_\Omega$ in (59) leads to the same expression as (169), so (3) is indeed satisfied. | 2014-06-16T15:14:12.000Z | 2013-12-24T00:00:00.000 | {
"year": 2014,
"sha1": "bb8639e8d60e08a27f78a274205010489a30219b",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/1312.6810",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "bb8639e8d60e08a27f78a274205010489a30219b",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
229935439 | pes2o/s2orc | v3-fos-license | Exogenous phosphatidic acid reduces acetaminophen-induced liver injury in mice by activating hepatic interleukin-6 signaling through inter-organ crosstalk
We previously demonstrated that endogenous phosphatidic acid (PA) promotes liver regeneration after acetaminophen (APAP) hepatotoxicity. Here, we hypothesized that exogenous PA is also beneficial. To test that, we treated mice with a toxic APAP dose at 0 h, followed by PA or vehicle (Veh) post-treatment. We then collected blood and liver at 6, 24, and 52 h. Post-treatment with PA 2 h after APAP protected against liver injury at 6 h, and the combination of PA and N-acetyl-l-cysteine (NAC) reduced injury more than NAC alone. Interestingly, PA did not affect canonical mechanisms of APAP toxicity. Instead, transcriptomics revealed that PA activated interleukin-6 (IL-6) signaling in the liver. Consistent with that, serum IL-6 and hepatic signal transducer and activator of transcription 3 (Stat3) phosphorylation increased in PA-treated mice. Furthermore, PA failed to protect against APAP in IL-6-deficient animals. Interestingly, IL-6 expression increased 18-fold in adipose tissue after PA, indicating that adipose is a source of PA-induced circulating IL-6. Surprisingly, however, exogenous PA did not alter regeneration, despite the importance of endogenous PA in liver repair, possibly due to its short half-life. These data demonstrate that exogenous PA is also beneficial in APAP toxicity and reinforce the protective effects of IL-6 in this model.
Introduction
Acetaminophen (APAP) is a popular analgesic and antipyretic drug 1 , but overdose causes severe acute liver injury. In fact, it is currently the leading cause of acute liver failure (ALF) throughout much of the world 2 . Conversion of APAP to the reactive metabolite N-acetyl-p-benzoquinone imine (NAPQI) initiates the hepatotoxicity. NAPQI binds to free sulfhydryl groups on cysteine residues, depleting hepatic glutathione and damaging proteins 3–5 . The protein binding leads to mitochondrial dysfunction and oxidative stress 6,7 , which activates the c-Jun N-terminal kinases 1/2 (JNK) and other kinases 8–10 . Activated JNK then translocates from the cytosol to mitochondria, where it exacerbates the mitochondrial dysfunction by reducing mitochondrial respiration 9,11 . Eventually, the mitochondrial permeability transition occurs 12,13 and the mitochondrial damage causes release of endonucleases from mitochondria, which then cleave nuclear DNA 14 . The affected hepatocytes die by necrosis 15–17 .
Phosphatidic acid (PA) is a critically important lipid in all prokaryotic and eukaryotic cells. It is the simplest diacylated glycerophospholipid, having a bare phosphate head group. In cell and organelle membranes, the small size and negative charge of the head group likely promotes negative curvature that may be important for membrane fission 18 . It is also a major metabolic intermediate, serving as a key precursor for synthesis of all other phospholipids, as well as triglycerides 19 . Finally, it is a major lipid second messenger with roles in nutrient sensing and cell proliferation via mechanistic target of rapamycin (mTOR) signaling 19–21 .
We recently demonstrated that PA is beneficial after APAP-induced liver injury in mice through an entirely novel mechanism 22,23 . Briefly, we found that endogenous PA is elevated in the liver and plasma after APAP overdose in both mice and humans, and that it promotes cell proliferation and therefore liver regeneration by regulating glycogen synthase kinase 3b (GSK3b) 22,23 . However, we did not test the effect of administering exogenous PA on APAP-induced liver injury. In the present study, we hypothesized that exogenous PA is similarly beneficial in a mouse model of APAP overdose.
Animals
Age- and cage-matched male wild-type (WT) C57Bl/6J mice and Il-6 knockout mice (Il-6 KO; B6.129S2-Il6tm1Kopf/J) on the C57Bl/6J background between the ages of 8 and 12 weeks were obtained from The Jackson Laboratory (Bar Harbor, ME, USA). Female mice were not used because they are less susceptible to APAP hepatotoxicity 24 , which does not reflect the human phenotype 25 . The mice were housed in a temperature-controlled 12 h light/dark cycle room and allowed free access to food and water. The APAP and PA solutions were prepared fresh on the morning of each experiment. APAP was prepared by dissolving 15 mg/mL APAP (Sigma, St. Louis, MO, USA) in 1× phosphate-buffered saline (PBS) with gentle heating and intermittent vortexing. The PA solution was prepared by re-constituting purified egg PA extract (Avanti Polar Lipids, Alabaster, AL, USA) at 10 mg/mL in 10% DMSO in 1× PBS and warming to 80 °C for 20–30 min with intermittent vortexing to obtain a uniform hazy suspension, then cooling to approximately body temperature immediately before injection. To determine if PA affects liver injury, WT mice (n = 5–10 per group) were fasted overnight then injected (i.p.) with 250 mg/kg APAP at 0 h, followed by 10% dimethylsulfoxide (DMSO) vehicle (Veh) or 20 mg/kg PA (i.p.) at 2 h. Blood and liver tissue were collected at 4 h (for JNK activation) or 6 h (other endpoints). We chose the 20 mg/kg dose of PA because it is commonly recommended when taken as a dietary supplement in humans. To determine if the combination of N-acetyl-L-cysteine (NAC) and PA reduces injury compared to NAC alone, some mice were injected with APAP at 0 h followed by 300 mg/kg NAC (dissolved in 1× PBS) and either PA or Veh at 2 h (n = 7 per group). Blood was collected at 6 h. We chose the 300 mg/kg dose of NAC because it is approximately 2-fold greater than the typical loading dose in humans after APAP overdose. Using this high dose of NAC ensures that our results comparing NAC with APAP + NAC are conservative and robust. For transcriptomics, the original PA experiment was repeated at the 6 h time point with the addition of a Veh-only control group (n = 5 per group). To determine if PA protection depends upon IL-6, the experiment was repeated again at the 6 h time point using Il-6 KO mice (n = 5–6 per group) and WT mice matched for source, genetic background, age, diet, and environment, with a similar but higher dose of APAP (350 mg/kg). The change in APAP dose in the latter experiment was due to an adjustment made to our university animal use protocol during the course of the study and was unrelated to our data from these experiments. Finally, to test the role of Kupffer cells, the original PA experiment was repeated at the 6 h time point with WT mice (n = 10 per group) after 24 h i.v. (tail vein) pre-treatment with 0.2 mL of 17 mmol/L liposomal clodronate (Clodrosome, Brentwood, TN, USA). All study protocols were approved by the Institutional Animal Care and Use Committee of the University of Arkansas for Medical Sciences (Little Rock, AR, USA).
Subcellular fractionation
Right and caudate liver lobes were homogenized in ice-cold isolation buffer containing 220 mmol/L mannitol, 70 mmol/L sucrose, 2.5 mmol/L HEPES, 10 mmol/L EDTA, 1 mmol/L ethylene glycol tetra-acetic acid, and 0.1% bovine serum albumin (pH 7.4) using a Thermo Fisher Bead Mill (Thermo Fisher, Waltham, MA, USA). Subcellular fractions were obtained by differential centrifugation. Samples were centrifuged at 2500 × g for 10 min to pellet blood cells and debris. Supernatants were then centrifuged at 20,000 × g for 10 min to pellet mitochondria. The supernatant was retained as the cytosol fraction. Pellets containing mitochondria were then re-suspended in 100 µL of isolation buffer and freeze–thawed three times using liquid nitrogen to disrupt the mitochondrial membranes. Protein concentration was measured in both the mitochondrial and cytosol fractions using the bicinchoninic acid (BCA) assay, and the samples were used for Western blot as described below.
Clinical chemistry
Alanine aminotransferase (ALT) was measured in serum using a kit from Point Scientific Inc. (Canton, MI, USA) according to the manufacturer's instructions.
Histology
Liver tissue sections were fixed in 10% formalin. For hematoxylin & eosin (H&E) staining, fixed tissues were embedded in paraffin wax, and then 5 µm sections were mounted on glass slides and stained according to a standard protocol. Necrosis was quantified in the H&E-stained sections by two independent, fellowship-trained hepatobiliary pathologists who were both blinded to sample identity. Percent necrosis was then averaged for each animal. For oil red O staining, fixed tissues were embedded in optimal cutting temperature (OCT) compound and rapidly frozen by placing on a metal dish floating in liquid nitrogen. 8 µm sections were cut and mounted on positively-charged glass slides. The sections were allowed to dry for 30 min at room temperature, then treated with 60% isopropanol for 5 min, followed by freshly prepared oil red O solution in isopropanol for 10 min, and then 60% isopropanol for an additional 2 min. The sections were then rinsed with PBS, treated with Richard-Allan Gill 2 hematoxylin solution (Thermo Fisher) for 1 min, and rinsed again with PBS before cover-slipping. Digital images were taken using a Labomed Lx400 microscope with digital camera (Labo American Inc., Fremont, CA, USA).
Western blot
Liver tissues were homogenized in homogenizing buffer composed of 25 mmol/L HEPES buffer with 5 mmol/L EDTA, 0.1% CHAPS, and protease inhibitors (pH 7.4; Sigma). Protein concentration was measured using a BCA assay. The samples were then further diluted in homogenization buffer, mixed with reduced Laemmli buffer (Bioworld, Dublin, OH, USA), and boiled for 1 min. Equal amounts (60 µg protein) were added to each lane of a 4%–20% Tris-glycine gel. After electrophoresis, proteins were transferred to polyvinylidene fluoride (PVDF) membranes and blocked with 5% milk in Tris-buffered saline with 0.1% Tween 20. Primary monoclonal antibodies were purchased from Cell Signaling Technology. All secondary antibodies (No. 926-32211) were used at 1:10,000 dilution. Bands were visualized using the Odyssey Imaging System (LiCor Biosciences, Lincoln, NE, USA).
Glutathione measurement
Total glutathione (GSH + GSSG) and oxidized glutathione (GSSG) were measured using a modified Tietze assay, as we previously described in detail 26 .
APAP–protein adduct measurement
APAP–protein adducts were measured using high pressure liquid chromatography (HPLC) with electrochemical detection, as previously described 27,28 .
Transcriptomics
The Supporting Information contains all details concerning RNA sequencing sample prep, next generation sequencing, and bioinformatics analyses.
Statistics
Normality was assessed using the Shapiro–Wilk test. Normally distributed data were analyzed using Student's t-test for comparison of two groups or one-way ANOVA with post-hoc Student–Newman–Keuls test for comparison of three or more groups. Data that were not normally distributed were analyzed using a nonparametric Mann–Whitney U test for comparison of two groups, or one-way ANOVA on ranks with post-hoc Dunnett's test to compare three or more. All statistical tests were performed using SigmaPlot 12.5 software (Systat, San Jose, CA, USA).
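For readers who want to reproduce this decision tree, here is a minimal sketch in Python with scipy (an assumed re-implementation: the analyses above were run in SigmaPlot 12.5, and scipy ships neither the Student–Newman–Keuls nor the Dunnett post-hoc test, so the post-hoc step is omitted here):

```python
# Sketch of the statistical workflow described above (hypothetical code;
# the original analyses were run in SigmaPlot 12.5, not Python).
from scipy import stats

def compare_groups(groups, alpha=0.05):
    """groups: list of 1-D arrays of measurements, one per treatment group."""
    # Screen every group for normality with the Shapiro-Wilk test
    normal = all(stats.shapiro(g)[1] > alpha for g in groups)
    if len(groups) == 2:
        if normal:
            return stats.ttest_ind(groups[0], groups[1])  # Student's t-test
        return stats.mannwhitneyu(groups[0], groups[1],
                                  alternative="two-sided")  # Mann-Whitney U
    if normal:
        return stats.f_oneway(*groups)   # one-way ANOVA (post-hoc SNK omitted)
    return stats.kruskal(*groups)        # ANOVA on ranks (post-hoc omitted)
```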
Exogenous PA reduces liver injury at 6 h after APAP overdose
To determine the effect of exogenous PA treatment on APAP-induced liver injury, we treated mice with APAP at 0 h followed by PA or Veh control at 2 h. We then collected blood and liver tissue at 6 h (Fig. 1A). We observed a significant reduction in serum ALT values in the PA-treated mice at 6 h post-APAP (Fig. 1B). Two blinded, fellowship-trained hepatobiliary and GI pathologists independently evaluated histology slides and the results confirmed the reduction in injury (Fig. 1C and D).
NAC is the current standard-of-care treatment for APAP-induced liver injury in patients. To determine if the combination of NAC and PA can further reduce injury after APAP overdose compared to NAC alone, we treated mice with APAP followed by Veh, 300 mg/kg NAC and Veh, or 300 mg/kg NAC and 20 mg/kg PA. Post-treatment with NAC + Veh reduced serum ALT compared to APAP + Veh, and the combination of NAC and PA reduced it further (Fig. 1E). The latter result confirms that PA protects against APAP and indicates that it has potential to be useful as an adjunct treatment with NAC for APAP overdose.
Finally, to determine if the protection at 6 h is persistent or represents a delay in injury, we treated mice with APAP at 0 h followed by either Veh or PA at 2 h and measured serum ALT at 24 h. We found that the protection with PA was lost at 24 h (Supporting Information Fig. S1). It is possible that additional early treatments or continuous PA infusion would still provide protection at 24 h and beyond, but these data indicate that the specific PA treatment regimen used here delays severe injury but does not prevent it. Altogether, these data demonstrate that exogenous PA could be an effective adjunct treatment with NAC to reduce early APAP hepatotoxicity or to extend the treatment window.
Exogenous PA does not affect the canonical mechanisms of APAP-induced liver injury
Next, we sought to determine the mechanisms by which exogenous PA reduces early APAP hepatotoxicity. The initiating step in APAP-induced liver injury is formation of the reactive metabolite N-acetyl-p-benzoquinone imine (NAPQI), which depletes glutathione and binds to proteins, initiating the downstream oxidative stress that activates JNK. Importantly, we chose a 2 h post-treatment with PA to avoid any effect on APAP metabolism and bioactivation, because it is known that NAPQI formation and protein binding are complete by approximately 1.5 h 28 . Nevertheless, to confirm that the decrease in liver injury at 6 h was not due to an effect on NAPQI formation, we measured total glutathione (GSH + GSSG) and APAP–protein adducts in the liver. We did not detect a significant difference between the APAP + Veh and APAP + PA groups in either parameter (Fig. 2A and B).
To determine if PA protects by preventing the early mitochondrial dysfunction and oxidative stress after APAP overdose, we measured GSSG in the liver. There was no significant difference in either total GSSG or the percentage of glutathione in the form of GSSG (%GSSG) between the two groups (Fig. 2C and D). JNK is activated by reactive oxygen species (ROS) after APAP overdose and worsens the mitotoxicity, so to further test the effect of PA on oxidative stress and to determine if JNK activation was altered, we immunoblotted for phosphorylated and total JNK. Again, we could not detect a difference between the APAP + Veh and APAP + PA groups at either 6 h or even 4 h, which is closer to the peak of JNK activation after APAP (Fig. 2E). To determine if PA had an effect on mitochondrial damage downstream of JNK, and therefore mitochondrial rupture, we also immunoblotted for AIF and cytochrome c release into cytosolic fractions, and again no differences were detected (Fig. 2F). Because we previously found that endogenous PA can regulate GSK3b activity through Ser9 phosphorylation 23 and because active GSK3b is known to exacerbate APAP-induced liver injury 29 , we also measured GSK3b Ser9 phosphorylation, but once again observed no differences (Fig. 2G). Finally, as an additional indicator of mitochondrial function, we measured triglycerides in the liver by both oil red O staining and direct biochemical measurement. Triglyceride accumulation after APAP overdose is a direct effect of mitochondrial dysfunction with resulting loss of mitochondrial fatty acid oxidation 30 , so a change in triglycerides can be considered a secondary endpoint for mitochondrial damage. Consistent with previous studies 30–32 , we observed oil red O accumulation in the damaged hepatocytes within centrilobular regions (Fig. 3A) and liver triglycerides were elevated in the APAP + Veh group compared to Veh treatment alone (Fig. 3B). However, again, we saw no difference between the APAP + Veh and APAP + PA groups. Altogether, these data largely rule out an effect of PA on APAP bioactivation, oxidative stress, and other effects downstream of oxidative stress, including JNK activation and mitochondrial damage.
Exogenous PA protects through IL-6 signaling in the liver
To identify other mechanisms by which PA might reduce early APAP hepatotoxicity, we performed next generation RNA sequencing in liver tissue from mice treated with Veh, APAP + Veh, and APAP + PA. We found that 6192 genes were differentially expressed between the Veh and APAP + Veh groups. Consistent with the protein alkylation, oxidative stress, and inflammation known to occur in APAP hepatotoxicity, gene ontology (biological processes; GO:BP) analysis revealed that genes involved in protein refolding, cell responses to chemical stimulus, and Toll-like receptor signaling were increased by APAP, while various cell growth and cell signaling processes were decreased (Fig. 4A). Only 388 genes were differentially expressed between the APAP + Veh and APAP + PA groups. This was insufficient for complete GO analysis, but it is notable that the GO:BP term "acute inflammatory response" was over-represented in the APAP + PA group when using a log2 fold-change threshold of 1. Furthermore, hierarchical clustering analysis (Fig. 4B) and other measures (Supporting Information Figs. S2–S4) showed clear separation of the APAP + Veh and APAP + PA groups across the five biological replicates per group. Importantly, Upstream Analysis using Ingenuity Pathway Analysis (IPA) software revealed activation of signaling downstream of IL-6 and its target transcription factor signal transducer and activator of transcription 3 (Stat3) (Table 1). Recent studies have demonstrated that IL-6 is protective in APAP hepatotoxicity 33 , and it was previously demonstrated that treatment with exogenous PA at doses similar to those we used here rapidly increases serum IL-6 concentration 34 . Thus, to confirm that PA increased serum IL-6 in our experiment, we measured IL-6 protein in serum at 6 h post-APAP. Importantly, IL-6 was significantly elevated in the APAP + PA mice compared to the APAP + Veh animals (Fig. 4C). Finally, to confirm activation of Stat3, we immunoblotted for phospho-Tyr705 Stat3 (p-Stat3) and total Stat3 in liver tissue. Consistent with our other results, p-Stat3 was significantly increased by PA treatment (Fig. 5A and B). Together, these data indicate that PA may protect against APAP toxicity by activating IL-6 and Stat3 signaling.
To confirm that exogenous PA reduces early APAP-induced liver injury through IL-6, we compared the effect of exogenously administered PA on APAP hepatotoxicity in WT and Il-6 KO mice at 6 h post-APAP. Importantly, PA did not reduce liver injury in the KO mice, despite protecting the WT mice in the same experiment (Fig. 6A–C). Areas of necrosis in liver tissue were the same between the APAP + Veh and APAP + PA treated Il-6 KO mice (Fig. 6B and C), and serum ALT actually increased with PA treatment (Fig. 6A). In addition, ALT values were higher after APAP + Veh treatment in the Il-6 KO mice compared to the WT mice matched for genetic background, age, diet, and environment (Fig. 6A). The latter is consistent with the recently proposed protective role of IL-6 in early APAP toxicity 33 , although it should be noted that there was no difference in area of necrosis after APAP + Veh treatment in the two genotypes, and that the WT and KO mice were not littermate controls. Altogether, these data clearly demonstrate that IL-6 is necessary for the protection provided by exogenous PA in WT mice, and support previous work indicating that IL-6 is protective in APAP-induced liver injury overall.
Adipose tissue is a likely source of increased IL-6 after PA treatment
Multiple liver cell types express IL-6, but Kupffer cells (KCs) are the major producers. To determine if the increase in IL-6 caused by treatment with exogenous PA is due to increased expression of IL-6 in KCs or other liver cells, we measured Il-6 mRNA in liver tissue in Veh-only, PA-only, APAP + Veh, and APAP + PA groups. Consistent with earlier work, Il-6 expression increased in the liver after APAP overdose. However, we could not detect a significant difference in Il-6 expression between the APAP + Veh and APAP + PA groups (Fig. 7A). Because KCs account for only a small portion of cells in the liver, it is possible that total liver mRNA has poor sensitivity to detect changes specifically within KCs. Thus, to further test if KCs are the source of IL-6 after PA treatment, we pre-treated mice with liposomal clodronate to ablate hepatic macrophages. The following day, we administered APAP followed by either PA or Veh. Blood and liver tissue were collected at 6 h post-APAP. Surprisingly, serum ALT was still significantly reduced by PA (Fig. 7B), despite depletion of the liver macrophages (Fig. 7C). These data once again confirm protection with PA and indicate that the liver itself is probably not the major source of IL-6 after PA treatment.
To identify other possible sources of IL-6, we treated mice with PA or Veh and collected liver, kidney, lung, epididymal white adipose tissue (eWAT), and spleen 4 h later. The experiment was designed to replicate earlier work showing that exogenous PA treatment increases circulating IL-6 34 but without exploring the source. We chose these specific tissues because they have high basal IL-6 expression and produce IL-6 in other disease contexts. Interestingly, we observed an 18-fold increase in Il-6 mRNA in eWAT (Fig. 7D). We could not detect differences in any of the other tissues. To confirm these results at the protein level, we immunoblotted for IL-6 in both eWAT and liver tissue lysates. Consistent with the mRNA data, there was an increase in IL-6 protein in eWAT but not liver (Fig. 7E–G). These data are consistent with the idea that adipose tissue is a source of the increased systemic IL-6 after PA treatment, indicating inter-organ crosstalk between liver and fat in the protective mechanism of exogenous PA. However, the specific cell type responsible for the increased IL-6 (e.g., resident macrophages vs. adipocytes) is not clear from these data. Further studies are underway to determine that.
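Fold-changes like the 18-fold induction reported above are conventionally computed with the 2^(-ΔΔCt) method when expression is measured by RT-qPCR; the helper below is purely illustrative (the measurement platform and reference gene are our assumptions, not stated in this excerpt):

```python
# Hypothetical 2^-ddCt fold-change helper (assumes RT-qPCR data).
def fold_change(ct_target_trt, ct_ref_trt, ct_target_ctl, ct_ref_ctl):
    d_ct_trt = ct_target_trt - ct_ref_trt   # normalise to reference gene
    d_ct_ctl = ct_target_ctl - ct_ref_ctl
    return 2.0 ** -(d_ct_trt - d_ct_ctl)

# e.g. a ddCt of about -4.2 corresponds to an ~18-fold induction:
print(fold_change(22.0, 18.0, 26.17, 18.0))   # ~18
```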
Exogenous PA does not promote liver regeneration
Finally, because we previously demonstrated that endogenous PA promotes liver regeneration 22,23 and because IL-6 is a well-known driver of that process 35 , we wanted to determine if exogenous PA enhances regeneration and repair after APAP overdose. To test that, we treated mice with APAP at 0 h, followed by exogenous PA or Veh at 6, 24, and 48 h post-APAP (Fig. 8A). We selected these late post-treatment time points to avoid an effect on the early injury at 6 h, which could have decreased liver regeneration secondary to the reduced injury. We then collected blood and liver tissue at 24 and 52 h. Although serum ALT was significantly decreased at 52 h (Fig. 8B), which is consistent with the overall protective effects of exogenous PA, there was no apparent difference in area of necrosis (Fig. 8C) and no change in proliferating cell nuclear antigen (Pcna, Fig. 8D) between the treatment groups at either time point. To determine if PA simply failed to increase IL-6 levels at these later time points, we measured serum IL-6 and found that there were no statistically significant differences between the APAP + Veh and APAP + PA groups (Fig. 8E). There are a few possible interpretations of these data, but overall the results indicate that the exogenous PA treatment regimen that we used is ineffective for enhancing regeneration.
Discussion
Together with our earlier work, the results from this study reveal that endogenous and exogenous PA have different beneficial effects in APAP hepatotoxicity, involving different mechanisms of action. We previously demonstrated that endogenous PA accumulates in liver tissue and plasma after APAP overdose in both mice and humans 22 . Importantly, inhibition of the PA accumulation had no effect on the early injury in mice but did reduce regeneration and survival by de-regulating GSK3b activity through an effect on Ser9 phosphorylation 22,23 . In the present study, we found that exogenous PA reduces or delays the early injury by increasing systemic IL-6 levels but has no effect on GSK3b phosphorylation and minimal effect on liver regeneration. The latter may be because PA has a short half-life in serum, so more frequent treatments are needed to see an effect. It may also be because IL-6 expression and release are so high late after APAP overdose that further increases are not possible, though a short half-life would also explain why the 2 h post-treatment delayed liver injury but did not provide protection at 24 h. Overall, these data indicate that exogenous PA or PA derivatives have potential to one day be a useful adjunct with NAC to treat early APAP hepatotoxicity in patients, but targeting PA-mediated signaling to promote liver regeneration in late presenters may require a different approach. The latter might also indicate that PA must be incorporated into membranes near Wnt receptors in order to alter GSK3b signaling and regeneration, rather than acting on lipid receptors in the cell membrane, such as lysoPA receptors (LPARs).
Our results are consistent with earlier data demonstrating that systemic administration of exogenous PA dramatically increases circulating levels of IL-6 34 and add to those results by demonstrating that adipose tissue is a likely source. Although we could not determine which specific cell type is responsible for the increased IL-6 production in eWAT from our data, it is likely to be adipose tissue-resident macrophages. Earlier studies revealed that directly treating the macrophage cell line RAW264.7 with PA in vitro increased expression of IL-6 and other cytokines in those cells with a time course that closely resembles the in vivo induction 34 .
[Figure 7 caption: The source of IL-6 after PA is extrahepatic and likely includes white adipose tissue. In one experiment, mice were treated with 250 mg/kg APAP at 0 h, followed by Veh or 20 mg/kg PA at 2 h; where indicated, mice were pre-treated for 24 h with liposomal clodronate (LC), and blood and liver tissue were collected at 6 h. In a second experiment, mice were treated with 20 mg/kg PA or Veh and various tissues were collected.]
Our data also confirm the protective role of IL-6 in APAP hepatotoxicity. Masubuchi et al. 36 reported that Il-6 KO mice have worse injury after APAP overdose. More recently, Gao et al. 33 observed that administration of exogenous IL-6 is protective. Although the mechanism by which IL-6 reduces APAP hepatotoxicity remains elusive, there are a few possibilities. IL-6 induces expression of heat shock protein 70 (Hsp70) and other Hsps in liver tissue after APAP overdose 36 and Hsp70 KO worsens APAP toxicity 37 , so it is possible that PA ultimately protected through Hsps. In fact, Ni et al. 38 recently demonstrated that adducted proteins form toxic insoluble aggregates, and that inhibition of autophagy worsens APAP toxicity by reducing removal of those aggregates. Hsp70 has a central role in chaperone-mediated autophagy (CMA) 39 , so together these data may indicate that CMA is another critical autophagic process for removal of the adducted protein aggregates. However, we were unable to consistently detect an increase in either Hsp70 or Hsp40 in the APAP + PA group compared to the APAP + Veh group by immunoblotting (data not shown), so that seems unlikely. Another possibility is that IL-6 trans-signaling blocked the detrimental effects of IL-11. In a preprint, Dong et al. 40 recently reported that IL-11 may mediate APAP toxicity, that transgenic expression of HyperIL-6 (recombinant IL-6 with soluble IL-6 receptor) reduces APAP-induced liver injury, and that the protective effect of HyperIL-6 is lost in Il-11 KO mice. However, considerably more research is needed to support that hypothesis.
Interestingly, Bae et al. 41 recently demonstrated that exogenously administered lysoPA also protects against APAP hepatotoxicity. PA can be converted to lysoPA by phospholipases, so it is theoretically possible that lysoPA contributed to the protection we observed in our study. However, their data demonstrated that lysoPA protected by 1) preventing early glutathione depletion and increasing glutathione re-synthesis at 6 h post-APAP and by 2) altering JNK and GSK3b activation 41 . We could not detect any effect of exogenous PA on either glutathione or kinases in our experiments, so it is likely that PA protected through entirely different mechanisms in our study. Bae et al. 41 also used a 1-h pretreatment in most of their experiments, which has limited clinical relevance and makes it difficult to directly compare our results.
Initially, it was surprising to us that exogenous PA did not enhance liver regeneration after APAP overdose in our experiment despite multiple treatments. Our prior work demonstrated that endogenous PA is critical for normal liver regeneration 22,23 . Additionally, IL-6 is known to be very important in liver repair: Il-6-deficient animals have delayed regeneration after partial hepatectomy, APAP overdose, and CCl4 hepatotoxicity 42–45 . On the other hand, Bajt et al. 46 found that injection of recombinant IL-6 does not enhance regeneration after APAP overdose, and many treatments that do enhance regeneration do not increase IL-6. It may be the case, then, that basal IL-6 levels are sufficient to aid liver repair, such that reducing IL-6 can blunt regeneration but increasing it has no effect. In our case, however, exogenous PA failed to affect serum IL-6 at later time points (likely due to the short half-life of PA), so we cannot determine if an increase in IL-6 would have been beneficial later. More data are needed to understand the details of those effects.
Conclusions
Overall, we conclude that post-treatment with exogenous PA reduces APAP hepatotoxicity in mice by increasing systemic IL-6, which is protective. Because PA is readily available over-the-counter as a supplement due to its purported ergogenic effects 47 and because the combination of PA and NAC protected better than NAC alone in our experiments, exogenous PA or PA derivatives may one day be a useful adjunct with NAC for treatment of APAP overdose patients. However, more research is needed to test that possibility. In future studies, we will explore the effects of different doses, different acyl chain composition, and different PA formulations to optimize the protection. We will also test the effects of both endogenous and exogenous PA in other liver disease models. | 2020-12-24T09:02:13.462Z | 2020-12-23T00:00:00.000 | {
"year": 2021,
"sha1": "c77e7f910ffbf2643bc9e3efd80321e02f587e9d",
"oa_license": "CCBYNCND",
"oa_url": "https://doi.org/10.1016/j.apsb.2021.08.024",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "4e97c14ab381b72ffe943d50316eb5346e42aef4",
"s2fieldsofstudy": [
"Biology",
"Medicine"
],
"extfieldsofstudy": [
"Biology",
"Medicine",
"Chemistry"
]
} |
256040571 | pes2o/s2orc | v3-fos-license | Developing local RG: quantum RG and BFSS
In this paper we study various forms of RG and apply these to the BFSS model of N coincident D0-branes. Firstly, as a warm-up, we perform standard Wilsonian RG, investigating the conditions under which supersymmetry is preserved along the flow. Next, we develop a local RG scheme such that the cutoff is spacetime dependent, which could have further applications to studying QFT in curved spacetime. Finally, we test the conjecture put forward in [1] that the method of quantum RG could be the mechanism responsible for the gauge/gravity duality by applying it to the BFSS model, which has a known gravitational dual. Although not entirely conclusive, some questions are raised about the applicability of quantum RG as a description of the AdS/CFT correspondence.
Introduction
In its most precise form, the AdS/CFT correspondence is an equality of partition functions, where sources in the field theory side correspond to boundary conditions on the dynamical fields of the gravity side [2][3][4][5]. In the large N limit on the field theory side, and in the classical limit on the gravity side, we get, roughly,
$$Z_{\mathrm{CFT}}[J] \simeq e^{-S_{\mathrm{grav}}^{\mathrm{on\text{-}shell}}}\Big|_{\phi|_{\partial} = J}. \quad (1.1)$$
We can then use this to calculate correlation functions on both sides. However, this is not the whole story. If we try to evaluate the classical action as it stands, with boundary conditions precisely on the boundary of AdS, we would get infinity. As is standard in QFT calculations, the way to deal with this infinity is to do renormalisation, i.e. introducing counterterms to absorb the infinities. This procedure has been extensively developed, and is now a very standard technique under the name of Holographic Renormalisation [6][7][8][9][10].
There are many interesting peculiarities with this idea. Firstly, it seems that what would normally be the UV divergences in standard QFT are in fact IR divergences on the gravity side. Further, what plays the role of the renormalisation scale is in fact the radial direction in AdS spacetime. This is but one of the many hints that there is some deep connection between scale on the field theory side and the radial direction on the gravity side [6,11].
Nonetheless, despite its success, this also leaves many questions unanswered. The most immediate one is diffeomorphism invariance: what do we mean by the radial direction? That is surely not a gauge invariant statement. Secondly, it's now very well known that renormalisation in QFT is not about removing annoying infinities; it's about coarse graining, integrating out degrees of freedom we do not have access to, in order to get a description relevant at our desired scale [12]. Is there any way we can understand Holographic Renormalisation from a Wilsonian point of view?
As is to be expected, these questions have long been explored. It didn't take very long to understand that the would-be RG flow on the gravity side is given by the Hamilton-Jacobi formulation, where instead of time evolution we consider radial evolution [13][14][15]. There have also been many proposals on how to give a more diffeomorphism-invariant meaning to this radial direction [16][17][18][19][20], most of which involve interpreting different RG schemes on the QFT side as different coordinate systems in the bulk. What was in general poorly understood is which scheme corresponds to which coordinate system. More recently there is a proposal for the generic correspondence between smooth schemes on both sides [21], and another for the particular case of dimensional regularisation [22]. As is to be expected, the relation between the two is not at all simple.
The difficulty with all these ideas (and the major interest) lies in the fact that, in order to get a full understanding of this issue, we would need to perform some sort of RG on the field theory side, and then compare with some sort of radial evolution on the gravity side, which, essentially, requires proving the conjecture. Conversely, we could also go the other way around: instead of thinking that it's a shame we need to prove the conjecture in order to answer these questions, we can try to answer these questions as a means to try to prove the conjecture. The goal of this paper is to do precisely that, to test, in a simple case, whether one of these proposals holds or not.
The proposal to be analysed in particular is the Quantum Renormalisation Group (QRG) [1,23,24], which, briefly, consists of applying the following procedure to a QFT with matrix valued fields (this will be covered in more detail in section 4.1):

1. Turn on single trace operator deformations with sources.
2. Do an infinitesimal local RG transformation.
3. Add auxiliary dynamical fields to project onto the space of single trace operators.
4. Iterate.
In this way, from a d-dim QFT we generate a (d+1)-dim action where what were sources are now dynamical fields. The proposal in [1] is that the new action would be the holographic dual to that CFT, giving a concrete realisation of the AdS/CFT correspondence.
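Schematically (our paraphrase of the construction in [1]; the normalisation and measure are illustrative rather than exact), iterating the steps above converts the generating functional with boundary sources $J^{(0)}_a$ for single trace operators $O^a$ into a functional integral over scale-dependent sources and their conjugate momenta,

$$Z\big[J^{(0)}\big] = \int \prod_{z>0} \mathcal{D}J(z)\, \mathcal{D}P(z)\; \exp\left\{ -N^2 \int_0^\infty dz \left( i\, P_a\, \partial_z J^a - \mathcal{H}\big[J(z), P(z)\big] \right) \right\},$$

where the RG "time" $z$ plays the role of the emergent radial direction and the "bulk Hamiltonian" $\mathcal{H}$ is generated by a single coarse-graining step.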
Since the original paper, some follow-up work has been done, namely some hints for its application to the original AdS$_5$/CFT$_4$ case [35], a concrete calculation for the U(N) vector model [36], and an understanding of the conditions under which one can recover full (d + 1)-dim diffeomorphism invariance [37]. However, there has been no explicit calculation starting from a QFT with a known gravitational dual, performing QRG, and checking whether we end up with the same theory.
This is exactly what has been accomplished with this paper. The QFT chosen was the N = 16, one-dimensional super Yang-Mills theory with gauge group SU(N), more commonly known as the BFSS model after the authors of [38]. This theory not only has a known gravitational dual [39][40][41][42], but is also extremely simple given that it is one-dimensional, a fact which allows us to perform all calculations explicitly. In the end, after we perform QRG, the results seem to differ from the gravity predictions [40,43] (which have been matched by lattice simulations [44]). Even though QRG cannot be completely ruled out, some questions are raised as to what would be needed to make it work or prove it wrong.
We begin section 2 by performing standard (i.e. not quantum) RG on the BFSS model. This result by itself, as far as the authors are aware, is absent from the literature, mainly because there are no UV divergences; therefore, by itself, it is not very useful. However, it turns out to be a very useful playground to explore how one can break or preserve supersymmetry under an RG flow, since we can compute everything explicitly. In section 3, we address the first main concern: how to define a local version of RG. It turns out one can define this under certain restrictions, and we give a concrete example of how to achieve this. We have developed this formalism to apply to QRG; however, it may be interesting in its own right, e.g. if one wanted to perform RG in a curved background spacetime. Finally, in section 4, we put everything together and perform QRG on the BFSS model. We start by reviewing the QRG procedure in detail and the holographic duality in BFSS. Then we go to the main calculations, highlighting the disagreement with known results.
Renormalisation group flow of BFSS model
In this section we calculate the renormalisation group flow of the BFSS model in the case where the renormalisation scale is spacetime independent. We start with a brief review of the BFSS model, then we move on to the calculation using a hard momentum cutoff. Already here we find interesting ways to avoid breaking supersymmetry. After this prelude we discuss how to implement RG with a smooth cutoff in the sense of exact RG; we find that we always break supersymmetry in that case. Finally we give some remarks on (failed) attempts to circumvent the aforementioned supersymmetry breaking.
Overview of the model
The BFSS model is the maximally supersymmetric matrix quantum mechanics describing the dynamics of N D0-branes. Equivalently, it is the N = 16 super Yang-Mills theory in d = 1 dimensions with gauge group SU(N), which can be obtained by dimensional reduction of the N = 1 super Yang-Mills in d = 10 dimensions. It was originally introduced in [38] as a description of M-theory in the infinite momentum frame in the uncompactified limit; only later was its role in the gauge/gravity duality fully appreciated [39]. For a general review of this model see [45].
This theory has an SU(N) gauge field $A$, nine scalars $X^i$ ($i = 1, \ldots, 9$), and 16 fermions $\psi^\alpha$ ($\alpha = 1, \ldots, 16$). Both the scalars and the fermions are in the adjoint representation of the gauge group and are therefore represented by Hermitian, traceless, $N \times N$ matrices. The action for this model is (in Euclidean time, written here in the standard form; the precise coefficient conventions follow the usual 't Hooft normalisation)
$$S = \frac{N}{2\lambda} \int d\tau \, \mathrm{Tr}\left\{ \left(D_\tau X^i\right)^2 - \frac{1}{2}\left[X^i, X^j\right]^2 + \psi^T D_\tau \psi - \psi^T \gamma^i \left[X^i, \psi\right] \right\}, \quad (2.1)$$
where $\lambda = N g_{\mathrm{YM}}^2$ is the usual 't Hooft coupling. We are using the convention where the generators of the Lie algebra are Hermitian, and therefore they obey $[T^a, T^b] = i f^{abc}\, T^c$. Furthermore, we normalise $T$ as $\mathrm{Tr}\, T^a T^b = \delta^{ab}$. The covariant derivative in eq. (2.1) acts as $D_\tau = \partial_\tau + i[A, \cdot\,]$. Finally, the $\gamma^i$ are the nine-dimensional Dirac gamma matrices, which are real, symmetric matrices satisfying $\{\gamma^i, \gamma^j\} = 2\delta^{ij}$. As mentioned above, this theory is invariant under a supersymmetry transformation with 16 supercharges, whose precise form will not be relevant for the subsequent discussion. Note also that the gauge field is not dynamical, therefore we can completely fix the gauge with $A = 0$ without the need to introduce Faddeev–Popov ghosts. This is one of the many simplifying aspects of the theory. In the remainder of the manuscript we assume we are in such a gauge.
It will also prove useful to rescale the fields to new, tilded variables. We note that in the large-$N$ limit, $N \to \infty$, the original untilded variables are $O(N^0)$. Finally, in order to do the perturbative calculations presented in the subsequent sections, it is convenient to write the action in terms of the structure constants $f^{abc}$; a numerical illustration of the resulting index structure follows.
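To make the commutator-squared potential concrete, here is a small numerical sketch (our own illustration; the matrices, seed, and value of $N$ are arbitrary, and only the bosonic potential is shown):

```python
# Illustrative sketch (our own toy, arbitrary matrices and N): the bosonic
# commutator-squared potential -1/2 Tr[X^i, X^j]^2, summed over all i, j,
# is non-negative for Hermitian X, since [X^i, X^j] is anti-Hermitian.
import numpy as np

N, d = 4, 9                                   # small N for illustration; 9 scalars
rng = np.random.default_rng(0)

def herm_traceless(N):
    M = rng.normal(size=(N, N)) + 1j * rng.normal(size=(N, N))
    M = 0.5 * (M + M.conj().T)                # make Hermitian
    return M - (np.trace(M) / N) * np.eye(N)  # remove the trace part

X = [herm_traceless(N) for _ in range(d)]
comm = lambda A, B: A @ B - B @ A

V = -0.5 * sum(np.trace(comm(X[i], X[j]) @ comm(X[i], X[j])).real
               for i in range(d) for j in range(d))
print("commutator potential:", V)             # prints a positive number
```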
RG with a hard momentum cutoff
As a warm-up calculation, we start by computing the perturbative 1-loop RG flow of this model. Since this is a one-dimensional theory, there will be an infinite number of relevant interactions turned on by the RG flow, rendering our perturbative approximation useless. We will, nonetheless, proceed with the calculations and only consider diagrams with up to four external legs. This is completely artificial and unjustified; however, we will proceed with this calculation because there are still some interesting lessons to take from this analysis to do with supersymmetry. We will impose a hard momentum cutoff by demanding that our fields only have support for momenta $|p| < \Lambda_0$. Then, to lower the cutoff, we integrate over modes with support in momentum space $\Lambda < |p| < \Lambda_0$. The calculations themselves involve rather tedious index manipulations; for that reason we relegate the details to appendix A and only present the main results in the core text. The relevant diagrams at 1-loop order and up to four external fields are as follows (where we denote the high energy modes in blue).

Tadpole. This one is trivially zero by the index structure.
Scalar propagator. For the scalar mode we must have $|\omega| \in [\Lambda, \Lambda_0]$. For the fermionic mode, one might naively think that the region of integration is also $|\omega| \in [\Lambda, \Lambda_0]$, just as for the scalar. However, that would be wrong. In fact there is also a high energy mode with momentum $\omega - p$, so, since that mode only has support when its momentum is in the range $[\Lambda, \Lambda_0]$, we must also impose that $|\omega - p| \in [\Lambda, \Lambda_0]$. Usually, integrating over these intricate regions is prohibitively difficult; however, for one-dimensional integrals, they can be done analytically. If we do not integrate over this region, we get nonsensical answers: for instance, the answer would depend on which line of the loop we give momentum $\omega$ and which line we give momentum $\omega - p$. Let us define $I$ to be the region in which both $|\omega| \in [\Lambda, \Lambda_0]$ and $|\omega - p| \in [\Lambda, \Lambda_0]$, as in eq. (2.7), which brings eq. (2.6) to a directly integrable form. Expanding the result in powers of $p$, there is a linear term in $p$ which could be worrisome; however, in $d = 1$ this is a total derivative, so we shall drop it. Note that the would-be mass term cancels between the two diagrams and we are left with just a wavefunction renormalisation contribution (2.8).
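As a sanity check of this region prescription, here is a small numerical sketch (with a toy integrand of our own choosing, not the actual diagram): restricting both internal lines to the hard shell gives a different answer from the naive prescription that constrains only one line.

```python
# Toy check of the integration-region prescription: both internal momenta,
# w and w - p, must lie in the hard shell [Lambda, Lambda0].
from scipy import integrate

Lambda, Lambda0, p = 1.0, 2.0, 0.3

def shell(center):
    # intervals where |w - center| lies in [Lambda, Lambda0]
    return [(center - Lambda0, center - Lambda),
            (center + Lambda, center + Lambda0)]

def intersect(A, B):
    out = []
    for a0, a1 in A:
        for b0, b1 in B:
            lo, hi = max(a0, b0), min(a1, b1)
            if lo < hi:
                out.append((lo, hi))
    return out

region_I = intersect(shell(0.0), shell(p))   # both lines hard
naive = shell(0.0)                           # only |w| constrained

f = lambda w: 1.0 / (w * (w - p))            # toy stand-in integrand
total = lambda R: sum(integrate.quad(f, lo, hi)[0] for lo, hi in R)
print("region I:", total(region_I), "  naive:", total(naive))
```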
Fermion propagator. There is only one diagram that contributes, and once again we have to be careful about the integration region and integrate over $I$ as defined in eq. (2.7), giving (2.9). Expanding in powers of $p$ yields the fermionic wavefunction renormalisation.

Triangle diagram. This is also trivially zero by the index structure.
Cubic coupling. There is only one diagram that contributes. Since we just want the correction to the cubic coupling, we will set the external momenta to zero; this also means there are no subtleties with the region of integration. We then obtain the correction to the cubic coupling.
Quartic coupling. Now there are six diagrams that contribute at 1-loop order; they are all distinct and rather messy. However, setting the external momenta to zero allows us to add up all these diagrams to get something nice in the end. After the dust settles, the correction to the quartic coupling follows. Putting everything together, that is, taking the wavefunction renormalisation and classical scaling into account, we find the flow of the couplings to leading order in $\lambda$. Even though we have not generated anything as egregious as a mass term for either the fermions or the scalars, the contribution to the cubic and quartic couplings is not quite right: at the quantum level, with this regulator, $\lambda_{(4)} \neq \lambda_{(3)}^2$, which signals a breaking of supersymmetry.
By themselves, these results are not very surprising. In this theory, the supersymmetry algebra only closes on-shell, so a hard momentum cutoff will necessarily break supersymmetry (the next section will delve deeper into this issue). However, we have noticed a somewhat bizarre feature for which the interpretation is still not entirely clear (which is the main reason for including these calculations in the final manuscript). We can preserve supersymmetry at the 1-loop level if we prescribe the integration in a slightly different way. Instead of integrating with the physical constraint that all internal lines are high energy, we tried using the Feynman parameter method, which is usually used to combine propagators and make integrals more tractable (in our case we can do the calculation in both ways and compare the final answer). We then impose that the final integral is the one that sits in the range $[\Lambda, \Lambda_0]$. As we previously mentioned, this is physically rather dubious, but it corresponds to the standard practice in higher dimensions (see for instance [46]), and, surprisingly enough, it appears to preserve supersymmetry.
The only diagrams that change are the fermionic-loop contribution to the scalar propagator and the fermionic propagator. The fermionic-loop contribution to the scalar propagator now precisely cancels the contribution from the scalar loop, meaning there is no scalar wavefunction renormalisation with this regulator. Finally, the fermionic propagator is unchanged, giving exactly the same result as before.
Putting everything together, we obtain beta functions which now preserve supersymmetry at the quantum level. We have therefore found a regulator that indeed preserves supersymmetry, at least at the 1-loop level. However, the physical interpretation of this regulator is not at all clear, and it does not seem to be usable beyond perturbation theory. Nevertheless, it would be interesting to see if similar phenomena occur for other theories in higher dimensions. We will not pursue this further in this manuscript, leaving it to future work.
RG with smooth regulators
As we mentioned in the introduction, the last calculation was mostly a warm-up before doing full quantum RG. However, in order to have a local notion of scale we cannot impose a cutoff in Fourier space. Indeed, if the cutoff depends on spacetime, the Fourier transform is no longer invertible. 3 Therefore we need to use a smoother procedure. To that effect, we will use some basic exact RG technology to implement a smooth cutoff. We shall remain in momentum space in this section for convenience; in section 3.1 we address how to extend this to position space. We only need the most basic ideas of exact RG; nonetheless, we review them for completeness. We closely follow the derivation in the beginning of [47]; for some other reviews on the topic of exact RG, see [48][49][50][51][52].
Let us consider scalar field theory for illustration. The key idea is to introduce a function $K(x)$ such that $K(x) = 1$ for $x < 1$ and $K(x) \to 0$ rapidly for $x > 1$; see for example figure 1 for a function satisfying all these criteria. These requirements can be satisfied by a smooth function; however, no analytic function works. Nevertheless, we can soften the second requirement and only impose that $K(0) = 1$ and that $K(x)$ is suitably close to 1 for $x < 1$. Then we can find suitable analytic functions, e.g. $K(x) = e^{-x^2}$. In momentum space, this distinction is not necessary, as there is no issue with working with smooth but non-analytic functions. However, when we go to position space, we need to phrase these functions in terms of operators, and therefore we need them to be analytic in order to be able to define them. With that in mind we shall assume we are using an analytic $K$, and therefore Taylor expansions work.
3 This is quite easy to see. For example, take some function $f(x)$ and build its cutoff version through the Fourier transform, $f_\Lambda(x) = \int_{|k| \le \Lambda} \frac{dk}{2\pi}\, e^{ikx} \tilde f(k)$; one can easily check that this is invertible (for modes below the cutoff) by swapping the order of the $x$ and $k$ integrals. However, if we promote $\Lambda \to \Lambda(x)$ in the first step, then we can't swap the order of the two integrals and therefore we can't invert the transformation.
The regulated action (with a global cutoff) is obtained by dressing the kinetic term with the cutoff function, so that the regulated propagator is $G_\Lambda(p) = K(p^2/\Lambda^2)/p^2$. Then, by appropriately choosing $A$ and $B$ in the standard Gaussian-splitting identity (and neglecting the $\varphi$ integral, since it only contributes a field-independent constant), we can write the theory in terms of a low energy field with propagator $B = G_\Lambda$ and a high energy field with propagator $A = G_{\Lambda_0} - G_\Lambda$, which gives the required split into high and low energy modes, but now through a smooth regulator.
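A quick momentum-space sketch of this split, assuming the analytic choice $K(x) = e^{-x^2}$ mentioned above and a massless kinetic term $p^2$ (illustrative conventions, not fixed by the text):

```python
# The high-energy propagator A = G_{Lambda0} - G_{Lambda} is supported
# mainly in the shell between the two cutoffs.
import numpy as np

K = lambda x: np.exp(-x**2)                 # analytic cutoff function
G = lambda p, L: K(p**2 / L**2) / p**2      # regulated propagator

Lam0, Lam = 2.0, 1.0
p = np.linspace(0.05, 4.0, 400)

A = G(p, Lam0) - G(p, Lam)                  # high-energy propagator
# A vanishes for p << Lam (both K's -> 1) and for p >> Lam0 (both -> 0):
print("A peaks at p =", p[np.argmax(A)])
```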
The key point is that, when we are integrating over the high energy modes, the propagator can be approximated by its leading term in the infinitesimal change of cutoff. This means that, if we are only interested in the beta functions, we only need to consider diagrams with one high energy propagator. Working with a smoother cutoff implies we count propagators instead of loops. Even if we are not just interested in the beta function and we want the full RG, this is still a relevant phenomenon: the analyticity of $K$ means we can Taylor expand and compute the integrals order by order, and different orders will not mix. We must count propagators.
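Concretely, writing $\Lambda_0 = \Lambda + \delta\Lambda$ and expanding the split above to first order (a one-line check, assuming the regulated propagator $G_\Lambda(p) = K(p^2/\Lambda^2)/p^2$ used in the sketch above),

$$A(p) = G_{\Lambda_0}(p) - G_\Lambda(p) = \delta\Lambda\, \frac{\partial}{\partial \Lambda}\!\left[\frac{K(p^2/\Lambda^2)}{p^2}\right] + O(\delta\Lambda^2) = -\frac{2\, \delta\Lambda}{\Lambda^3}\, K'\!\left(\frac{p^2}{\Lambda^2}\right) + O(\delta\Lambda^2),$$

so each high energy propagator carries one power of $\delta\Lambda$, which is why diagrams are organised by the number of hard propagators rather than by the number of loops.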
This is manifestly at odds with supersymmetry. Now we cannot cancel the mass term for the scalars, since the two diagrams come at different orders in $\delta\Lambda$. To counter that, we could try lowering the fermionic and scalar cutoffs at different rates, so that each scalar propagator counts as two fermionic propagators, making both terms contributing to the scalar propagator appear at the same order and allowing the mass term to cancel. However, even in that case, supersymmetry is broken. The reason now is that the corrections to the other couplings come at higher orders in $\delta\Lambda$, so the only contribution to the beta function would be from the scalar wavefunction renormalisation, and there is one scalar in the cubic coupling but four scalars in the quartic coupling. We would not have $\lambda_{(3)}^2 = \lambda_{(4)}$ and supersymmetry would be broken.
By itself this is not a very surprising result; a similar phenomenon already happens for the much simpler four-dimensional N = 1 theory with one complex scalar and one Weyl fermion. In this case, however, one can preserve supersymmetry, even with a smooth regulator, by using the off-shell formalism. This is accomplished by using auxiliary fields that make the supersymmetry algebra close without using the equations of motion. This was our issue previously: by introducing a regulator in the style described above, we have changed the equations of motion, which were essential in preserving supersymmetry. Then, if we regulate all quadratic terms with the same function, including the auxiliary field, which now becomes dynamical and propagating, we do not break supersymmetry. This can happen because we no longer have the quartic scalar coupling; what we do have is a cubic coupling with two scalars and one auxiliary field. This means (using dotted lines for auxiliary fields) the quartic diagram is replaced by one with a cubic vertex and an auxiliary-field propagator, (2.23), which comes at the same order as the fermionic loop.
Knowing this result for the simpler theory, could we reproduce it with BFSS? The answer turns out to be no. Our first hurdle is the fact that no off-shell formulation with this many supercharges and finitely many fields is known. 4 We can try to ameliorate our situation by using the N = 1 superspace formulation of four-dimensional N = 4 SYM and dimensionally reducing it down to 1D. In this manner we would have 4 supercharges preserved off-shell. However, this is still not enough to prevent the formation of a mass term. This happens because we do not destroy every quartic coupling, just some of them, so part of the calculation that leads to the mass term carries through with no change.
Implementing a smooth cutoff, which we must do to make it local, means giving up explicit supersymmetry.
Local renormalisation group
In this section we take the first step in performing QRG: defining how to integrate out modes with a local regulator, i.e. integrating out modes at different rates at each point of spacetime.
4 We thank Nick Dorey for pointing that out to us.
To do that, we first repeat the derivation done in section 2.3, but now in position space. We shall see that it still holds, provided there are some restrictions on the kinetic operators we use. Then we take a particular example of a local Gaussian regulator in one dimension and prove that that regulator obeys all the necessary restrictions. This provides the first explicit realisation of a local cutoff scheme which could be used in practical calculations.
Smooth regulator in position space
Deriving (2.21a) is rather straightforward: as it stands, we just plug in the definitions (2.21b) and (2.21c) and do the resulting algebra. The reason for this simplicity is that, in momentum space, we are dealing with ordinary multiplication of functions. In position space, however, we would be dealing with operators, which do not obey many of the nice properties we take for granted when performing algebraic manipulations.
For simplicity, we shall resort to matrix multiplication notation, where spacetime integration is denoted with a dot product. In this notation, local operators become matrices by introducing a delta function, 5 and, as usual, the inverse of the operator will be its Green's function. For example, a local operator $\mathcal{O}_x$ becomes the "matrix" $\mathcal{O}(x, y) = \mathcal{O}_x\, \delta(x - y)$, so that $(\mathcal{O} \cdot f)(x) = \int dy\, \mathcal{O}(x, y)\, f(y) = \mathcal{O}_x f(x)$. It is important to note that, in general, these objects will not obey the same nice properties that matrices do. Namely, for a given "matrix" (i.e. function of two arguments), left and right inverses do not necessarily match, and the inverse of a diagonal object is not necessarily diagonal. Note, for instance, that (3.2) is diagonal, but (3.3) is not. In this case left and right inverses do match because both are symmetric. In the end these subtleties will not be all that relevant, but it is important to keep the full picture in mind.
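A discretised illustration of these statements (our own finite-difference toy; a mass term is included to make the operator invertible): the local operator is a banded, "almost diagonal" matrix, while its inverse, the Green's function, is dense, and symmetry guarantees left and right inverses coincide.

```python
import numpy as np

n, dx, m2 = 200, 0.05, 1.0
# local operator -d^2/dx^2 + m^2 (Dirichlet ends): a banded matrix
Op = (np.diag(np.full(n, 2.0 / dx**2 + m2))
      - np.diag(np.full(n - 1, 1.0 / dx**2), 1)
      - np.diag(np.full(n - 1, 1.0 / dx**2), -1))

Green = np.linalg.inv(Op)                     # dense, non-diagonal
print("off-diagonal Green entry:", Green[20, 120])
print("Green symmetric:", np.allclose(Green, Green.T))
```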
Let us start by deriving (2.21a) in position space. We take B⁻¹ = G_Λ⁻¹ to be the low energy kinetic operator and (A + B)⁻¹ = G_Λ₀⁻¹ to be the high energy kinetic operator. We make no assumption at this point as to whether they are local or global regulators. However, by construction they will both be symmetric, so, if the inverses exist, they will behave as expected.
If we can find those two inverses, B and A + B, we can define A = (A + B) − B = G_Λ₀ − G_Λ as the high energy propagator, which is the most useful quantity in practical calculations,
and, by construction, is also symmetric. Then, if A⁻¹ exists, it behaves just like a matrix inverse. Note that finding A⁻¹ can be incredibly hard, because it is the opposite of the usual question: we have the Green's function and we want to find the corresponding operator. However, even though our derivation only works if such an operator actually exists, we do not actually need it for any practical calculations, so it suffices to show that it exists.
We repeat the derivation of (2.21a) assuming all those inverses behave as expected and, in the next section, we present an explicit example and check whether these assumptions are valid. We will be careful to state exactly what conditions are needed, so that in future work it is clear whether any generalisation is possible.
Analogously to (2.21b) and (2.21c), we start by defining the position-space change of variables, so that the Jacobian is still unity. We therefore obtain (ignoring an overall, unimportant factor of 1/2) the desired expression. Note that if all operators and Green's functions are symmetric, which implies that left and right inverses match, then all conditions are satisfied. Also note that we are free to choose both the high energy and the low energy propagators, so the real crux lies in the properties of A and the existence of A⁻¹.
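For orientation, the Gaussian identity that such a split relies on can be written schematically (our shorthand; source terms and determinant factors are suppressed, and this is not a quotation of (2.21a)):

```latex
\[
  e^{-\frac{1}{2}\,\phi\cdot G_{\Lambda_0}^{-1}\cdot\phi}
  \;\propto\;
  \int \mathcal{D}\phi_h\,
  e^{-\frac{1}{2}\,(\phi-\phi_h)\cdot G_{\Lambda}^{-1}\cdot(\phi-\phi_h)
     \,-\,\frac{1}{2}\,\phi_h\cdot A^{-1}\cdot\phi_h},
  \qquad
  A = G_{\Lambda_0}-G_{\Lambda}.
\]
```

Integrating out φ_h adds the two covariances, G_Λ + A = G_Λ₀, so the low energy field propagates with G_Λ while the high energy fluctuation propagates with A; the manipulation only needs B = G_Λ and A to be symmetric and invertible, which is precisely the list of conditions above.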
An example: local Gaussian regulator
We shall restrict to Euclidean time and consider a Gaussian regulator. In the end we shall be most interested in the case d = 1, but the results of this section are valid for arbitrary d. With a usual, spacetime-independent cutoff we have the regulated kinetic operator (3.8a), which has the Green's function (3.8b),
such that the kinetic term looks like (3.8c). In what follows we need to give the cutoff spacetime dependence. If we naively promote Λ → Λ(x) directly in (3.8c), there will be ordering issues when expanding the exponential, which will make it hard to deal with. To help with that, we start with (3.8a) instead and promote Λ → Λ(x₁). In this way the derivatives actually commute with the cutoff, so there are no ordering issues. However, the resulting operator is then not symmetric (and only the symmetric part contributes to the action, because it is multiplied on both sides by the same field). Therefore, we take the symmetric part and define the local version as (3.9). Unfortunately, for arbitrary Λ(x) we do not know how to find the Green's function of (3.9). However, for our purposes (as will be shown in the following section), we only need to find the beta functions, i.e. the infinitesimal flow. Therefore we approximate, defining the original high energy cutoff Λ₀ to be constant and taking Λ(x) = Λ₀ e^(−α(x)dz), for α and dz positive and dz ≪ 1. We can then solve this perturbatively, expanding in powers of dz, giving us A = G_Λ₀ − G_Λ = −dz G⁽¹⁾. All we have to do now is find A and show that A⁻¹ exists. First we find A, i.e. the Green's function for (3.10), order by order in powers of dz. At 0th order, the equation is solved by construction. At 1st order, using the definition of G_Λ₀ as the Green's function for (3.8a) allows us to simplify the first term on the r.h.s.
Acting with G_Λ₀ on the left on both sides of this equation and once more using its defining property as the Green's function gives us (3.14). Everything is nice and symmetric, as expected, which means left and right inverses will match nicely, if they exist, that is. As mentioned above, we do not actually need an explicit expression for the inverse; we just need to know that it exists, to render our calculations consistent. It is instructive to take the Fourier transform of (3.14), using the explicit expression in (3.8b). After a straightforward calculation, using just the definition of the Fourier transform and some manipulation of delta functions, we arrive at (3.15). Because everything is nice and symmetric, left and right inverses match, and we can then use standard linear algebra results. In this language, an inverse exists if and only if the only function f annihilated by G⁽¹⁾ is f ≡ 0 (condition (3.16)), where, crucially, f cannot have any dependence on k₁. Imagine for a moment that in (3.15) there were no α̃; then this is clearly not true: we just need to pick f to be an odd function and the integral vanishes. This is also the case for a constant α̃; however, a constant α̃ corresponds to a delta function in position space, which we can clearly rule out as an allowed profile for α, since it would correspond to changing the scale at only one point. So let us restrict to the case where α̃ is not constant.
In this case, for a given α̃ and a given k₁, we could conceivably make the integral vanish for a non-zero f by judiciously choosing f, possibly relying on some non-trivial symmetry. However, because α̃ only depends on the combination k₁ + k₂, any such choice will inevitably depend on k₁. Unless α̃ is constant (which we have ruled out), just choosing a different k₁ shifts the profile of α̃ in an arbitrary fashion, and inevitably some of those shifts will ruin our choice of f. Given that f cannot depend on k₁ and the condition must be valid for all k₁, we conclude that (3.16) is true and, therefore, that G⁽¹⁾ is invertible, rendering our procedure consistent. We have successfully developed an RG scheme with a local change of scale.
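The invertibility argument lends itself to a quick numerical spot check. The sketch below is only an analogy: the Gaussian exponential factor of (3.8a) is replaced by a plain second-derivative operator for simplicity, and the profile α(x) is an arbitrary smooth, non-constant choice of ours:

```python
import numpy as np

n, dx = 160, 0.1
x = np.arange(n) * dx

# Second-derivative matrix (Dirichlet boundaries).
D2 = (np.diag(np.ones(n - 1), 1) + np.diag(np.ones(n - 1), -1)
      - 2.0 * np.eye(n)) / dx**2

# Promote the cutoff profile to depend on position, acting from the
# left, then keep only the symmetric part, mirroring the construction
# of (3.9).
alpha = 1.0 + 0.3 * np.sin(2 * np.pi * x / x[-1])   # smooth, non-constant
Op = np.diag(alpha) @ D2
Op_sym = 0.5 * (Op + Op.T)                           # symmetric part

# A symmetric matrix is invertible iff no eigenvalue vanishes; here the
# spectrum stays safely away from zero, so left and right inverses
# exist and coincide.
eigs = np.linalg.eigvalsh(Op_sym)
print(np.min(np.abs(eigs)))
assert np.min(np.abs(eigs)) > 1e-6
```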
Quantum renormalisation group
After developing a framework to perform local RG, we can move on to the main objective of this paper: testing quantum RG (QRG). We start with an overview of the procedure itself, in greater detail than was given in the introduction. We then move on to an overview of what is known (and relevant) about the holographic duality of the BFSS model, to understand what our starting point should be and what we expect to reproduce (or fail to reproduce) after performing QRG. Finally, we put everything together and do the actual computation.
Overview of QRG
The starting point for QRG [1,23,24] is a quantum field theory with dynamical fields Φ which are matrix valued. These could have any spin, but it is important that they are matrix valued. We write the partition function of this theory as a path integral over Φ. The algorithm of QRG is as follows:

1. Turn on single trace operator deformations. In general, we should turn on a complete basis of single trace operators; in practice, we will only be able to turn on a finite number of them. Let O_m be the operators and j^(0)m the corresponding sources; the partition function, deformed by the source term, is then Z[j^(0)].

2. Perform an infinitesimal local change of scale, i.e. if in the initial theory the cutoff is Λ₀, do an RG flow such that the new scale is Λ = e^(−α⁽¹⁾(x)dz) Λ₀, for dz ≪ 1. The new partition function is, to leading order in dz, of the form (4.3), where f[x; j^(0)] denotes a function that depends on j^(0)(x) and its derivatives at the point x. We have used the fact that we turned on a complete basis of operators to write all appearances of the fields in terms of the operators we have turned on. If we only turn on a finite number of them, the flow cannot be allowed to generate any new ones, or otherwise this is not a consistent algorithm. Note that, to leading order in dz, we do not generate more than double trace operators.
3. Introduce the new sources j^(1)m, together with the conjugate fields p_m that enforce their relation to j^(0)m (it is these fields that may eventually become dynamical).

4. Replace the operators by the functional derivatives δ/δj^(0)m in δS, noting that we must now be careful with the ordering in (4.3b); the order shown, where all the operators are on the right, is the correct one (4.5).

5. Integrate by parts with respect to j^(1)m in the δS term (4.6).

6. Now we can start with Z[j^(1)] and iterate this procedure.

Taking the dz → 0 limit, it is not hard to see that we have generated an action living in d + 1 dimensions for the new dynamical fields j_m(z, x) and p_m(z, x). It is important to note that if no double trace operators are generated, then this action is linear in p_m(z, x), and this field therefore remains just a Lagrange multiplier, not a dynamical field. In order to have non-trivial dynamics for these fields we must generate double trace operators. A schematic sketch of this loop is given below.
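The control flow of the algorithm (and only the control flow) can be condensed into a sketch; all functions and numbers below are placeholders of ours, not an implementation of the BFSS computation:

```python
# Schematic QRG loop: sources j are promoted, step by step, into a field
# living in one extra dimension z. The numbers are placeholders chosen
# so the sketch runs; they do not come from any actual calculation.

def coarse_grain_step(j, dz):
    # One infinitesimal, local shell integration (steps 1-2). Returns the
    # generated single-trace and double-trace couplings; setting the
    # double-trace part to zero mimics what is found below for BFSS.
    return -0.1 * j * dz, 0.0

def qrg_flow(j0, n_steps, dz):
    j, bulk_action = j0, []
    for _ in range(n_steps):
        single, double = coarse_grain_step(j, dz)
        # Steps 3-5: operators -> functional derivatives -> integration
        # by parts, introducing the conjugate momentum p_m. If double
        # vanishes, p_m stays a Lagrange multiplier: no bulk dynamics.
        bulk_action.append((j, single, double))
        j = j + single          # step 6: iterate with updated sources
    return bulk_action

layers = qrg_flow(j0=1.0, n_steps=100, dz=0.01)
print(len(layers), layers[-1])
```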
The conjecture is that Gauge/Gravity Duality is completely encapsulated in a procedure such as this one. As mentioned in the introduction, there has been some additional work on this conjecture: some hints towards its application to the original AdS₅/CFT₄ case [35], a concrete calculation for the U(N) vector model [36], and an understanding of the conditions under which one can recover full (d + 1)-dimensional diffeomorphism invariance [37]. This last one is the most relevant for our purposes, since it is here that the importance of having a spacetime dependent cutoff was fully appreciated as a means to recover diffeomorphism invariance.
Overview of the holographic dual to BFSS
As mentioned in section 2.1, the BFSS model describes the dynamics of N coincident D0-branes. This means it also has a dual gravitational description in terms of 10-dimensional type IIA supergravity [39]. In the decoupling limit,
the supergravity background solution corresponding to BFSS is given by the metric (4.9a) of [39], where dΩ² is the metric on a round, unit radius S⁸, α′ is related to the string length and g_s is the string coupling. We note in passing that, strictly speaking, this solution is singular at the origin. The standard way to deal with this is to put the system at finite temperature, which corresponds to having a black hole in the gravity picture. However, if we are far enough away from the origin, i.e. near the boundary, the effects of this temperature should be minimal; that is also the region where we have more control over our field theoretic description. Therefore, in this paper, we neglect finite temperature effects. Another point to make is that, as mentioned in [39], the curvature grows as we approach the boundary, more specifically α′R ∼ U³/(g²_YM N), and therefore we have less faith in our supergravity description there. Naively, this region does not intersect with the region where we have analytic control on the field theory side. However, in QRG we only need to do one infinitesimal step of coarse graining, and, as we have showcased in sections 2.3 and 3.2, we can do that exactly. This seems to solve all our problems, but there is an issue. The theory we want is the one that approaches the action (2.1) in the UV. When the coupling is strong, the correct action is not (2.1): it needs corrections that, by construction, will be very important. If we just take the action (2.1) and declare the coupling to be strong, we have a well defined theory and well defined calculations; it will simply not be the theory we are after. This is similar to how we can solve QCD in the strong coupling limit exactly using lattice methods, yet the answers we get are not physically relevant.⁶ The resolution to this issue comes from the realisation that, on the gravity side, we should insert the sources at the boundary, not deep in the bulk. Therefore, we should start with a field theory action in the Λ₀ → ∞ limit. Then we do the one infinitesimal coarse graining step required by QRG in this weak coupling limit. By the nature of QRG we can put all corrections due to this step into the new dynamical fields and start again with the original action. This means we can confidently do all the hard calculations in the regime where we have control over the theory, and then use the auxiliary-turned-dynamical fields to recover the important physics.
With those points in mind we carry on with our discussion. The solution presented above is not the full content of the gauge/gravity duality. As was mentioned in the introduction, the most general form of the correspondence is an equality between partition functions that allows us to calculate correlation functions on both sides (and hopefully match them) [2][3][4][5]. However, to do that, we need to find out which operators on the field theory side correspond to which modes on the gravity side.
This is precisely what was done in [40]. By decomposing the ten-dimensional modes in harmonics of the eight-dimensional sphere, they found a correspondence between certain supergravity modes and certain operators discussed in [53]. In addition to harmonic analysis, a very important tool is generalised conformal symmetry which, despite its importance, is not very pertinent to the main point of this paper, so we skip it; interested readers may consult the useful literature on the subject [9,[54][55][56][57][58]].

⁶ We thank David Tong for pointing this out.
We will not repeat the full dictionary here, except to point out that these modes are constructed such that, up to quadratic order in the supergravity action, they do not mix and have an effective two-dimensional action. Therefore, if we turn on the corresponding operator on the field theory side, even just that one, we should be capable of reproducing the correct 2-point function on the gravity side. This test has indeed been performed in [44], and matching between the two sides was found.
In particular, we shall turn on the operator T^(++)_(2,ij) of [53], which is dual to the supergravity mode s with ℓ = 2 of [40], where Y are the scalar SO(9) spherical harmonics (we have suppressed their internal indices) and h_μν, a_μ are the perturbations of, respectively, the metric and the gauge field around the background (4.9a). These modes have a definite 2-point function, as discussed in [40] and confirmed in [44], which we should be able to reproduce if QRG is valid. We shall assess in the following whether or not QRG holds. Finally, a quick note: for this simple case there is the possibility of recovering interactions, because fully consistent truncations down to 2 dimensions are known [41][42][43], and they agree with the tests performed in [44]. Even though we shall use the fact that such truncations exist to draw some conclusions, we shall not need their particular structure; we therefore refer the reader to the literature cited above.
QRG of BFSS
We now carry out the full QRG calculation for the action (2.1). We first add the source term for T^(++)_(2,ij), given by (4.10). In this case, because it is very important to keep track of the trace structure, we shall resort to fundamental indices I, J = 1, . . . , N and represent the fields by traceless Hermitian matrices. It is also simpler to use Wick's theorem instead of Feynman diagrams. We shall furthermore be agnostic about the regulator procedure used, noting only that it has to be local; it could be, for example, the one developed in section 3.2 (it is not too hard to generalise the results of that section to include fermions). We then split off all the index structure apart from the temporal dependence and write the decomposition for the high energy modes to be integrated out, using the rescaled variables defined in (2.3).
All we have to do is compute all connected correlation functions with just a single contraction, i.e. a single propagator. Up to that order, the only terms that contribute are those coming from the expectation value of a single operator or from the expectation value of the product of two operators. All such calculations proceed in exactly the same manner: expand the expectation value; pick all possible pairs of fields to be "+", i.e. high energy, summing over all possible choices, while the remaining fields become "−"; (anti-)commute the fields past each other (depending on whether they are scalars or fermions) until you have expressions of the form (4.15a) or (4.15b); and contract all indices, noting that δ^I_I = N. Therefore, we shall only present the full details for the first calculation, and for all the others we merely give the final answer. We note, however, that everything involving the quartic interaction is much more cumbersome than anything else, because we need to sum over the possible choices.
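The combinatorial step, choosing which pair of fields becomes the high energy "+" modes, is a plain enumeration over pairs; a toy sketch with invented field labels:

```python
from itertools import combinations

# Fields appearing in a product of operators, labelled by position,
# e.g. four scalars in a quartic term.
fields = ["X1", "X2", "X3", "X4"]

# Single-contraction terms: choose one pair to be the high-energy "+"
# modes (one propagator); the rest stay "-" (low energy background).
terms = [(pair, tuple(f for f in fields if f not in pair))
         for pair in combinations(fields, 2)]
for plus, minus in terms:
    print("contract:", plus, " background:", minus)
print(len(terms), "single-contraction terms")   # C(4, 2) = 6 choices
```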
Single operator
Cubic interaction:
Two operators
Cubic-cubic:

Cubic-quartic:

Cubic-source:

Quartic-quartic:

According to the supergravity predictions, this mode should have dynamics on its own, not just when coupled to other operators (and in the lattice simulations dynamics were observed without the need to turn on more operators). However, after we do this, we can take the limit where the original sources are all set to zero and then carry out the calculation anyway, possibly finding non-zero double trace operators which only turn on away from the boundary. Then, technically, we have only turned on that single mode initially; it just so happened to turn on other modes, which then gave it the necessary dynamics. This mechanism cannot be completely ruled out by our calculations, and it seems that our simplifying assumption, that we only need to turn on a finite set of sources and still get meaningful answers, is not justified; in practice, however, it is not possible (nor naively well defined) to turn on an infinite number of operators. This leads to many difficulties in proceeding to confirm or completely rule out QRG, which the authors leave as open problems. Firstly, it should not be surprising that we have turned on extra modes: this is not a consistent truncation, after all, and this mode interacts with others. Therefore, in order to correctly interpret the results, there should be some consistent way to truncate and neglect some operators so as to reproduce the approximation made on the supergravity side. However, neither the large N limit nor generalised conformal dimensions seem to do the trick, since all single trace operators scale equally in the large N limit, and in d = 1 the fields have negative dimensions, so having more fields will lower the dimension even further.
To deal with this, one could try to use a consistent truncation instead. However, some of the single trace operators we have generated above are not part of the consistent truncation, which is problematic unless they never become dynamical. So we still run into the issue of having to turn on an infinite number of operators, with the added fact that, if QRG is valid, we can only generate double trace operators for those exact operators we turned on initially; we may thus still need an infinite number of auxiliary non-dynamical fields. The extent to which having those fields will affect physical results is unclear.
Finally, we note that, even in the case when no source is turned on, we still generate some single trace operators. None of these modes may at any point become dynamical, because that would mean that the vacuum has non-trivial dynamics, which, once more, goes against the supergravity predictions. However, this is still not a full contradiction, since it may be that these new modes are never dynamical unless we turn on sources at the start.⁸ This leaves us with a very narrow window of possible success for QRG: it cannot generate any non-trivial dynamics when no source is turned on; it must generate non-trivial dynamics when any of the sources in [40] is turned on; and it cannot generate non-trivial dynamics away from the consistent truncations in [41][42][43] when only those modes are initially turned on. Perhaps some clever use of SO(9) symmetry could constrain which modes are turned on at each step and confirm or rule out QRG; however, the authors are currently unaware of any such method.
Discussion
There were three main steps in this paper: performing global RG on the BFSS model, developing a local RG scheme, and performing QRG on BFSS. The first two were part of the construction necessary to perform QRG, but they are also very important and interesting in their own right.
First of all, we performed standard Wilsonian RG on the BFSS model. This result was absent from the literature, due to the finiteness of BFSS, but it was a very useful warm-up calculation. Even more importantly, it highlighted under which conditions we were able to preserve supersymmetry along the flow. Namely, a hard cutoff breaks supersymmetry, but if we use Feynman parameters, as is usually done in higher dimensions, supersymmetry appears to be preserved. This is very surprising, and the interpretation is not yet clear, because the physical hard cutoff breaks supersymmetry while the Feynman parametrisation is a mere computational trick. Furthermore, we concluded that the use of a smooth regulator always breaks supersymmetry. Even the use of the superspace formalism does not help, because it does not preserve enough supersymmetry off-shell: it only preserves 4 supercharges out of the 16 total.
Secondly, we discussed under which conditions we can use a local regulator, and constructed an explicit example of one, a local Gaussian regulator. Constructing a local regulator is harder than constructing a global one, because of the subtleties of dealing with infinite dimensional objects, but we have shown that it is possible, so long as we make sure every operator is symmetric and has an inverse. This section is especially interesting because it could potentially be used for performing RG in curved spacetime.
Finally, we put all the pieces together and performed QRG on BFSS with a particular operator turned on, one which we know from independent studies has non-trivial dynamics on the gravity side, and found that it did not generate any double trace operators. Further considerations meant this did not completely rule out QRG, but it greatly limited the ways in which it could still work. So far, QRG appears to require turning on an infinite set of operators, and it is unclear whether this is possible in practice. Further studies are necessary to fully understand its role in understanding the AdS/CFT correspondence.
Initial Selection of Disc Brake Pads Material based on the Temperature Mode
A spatial computational model of a motor vehicle disc brake, based on the system of equations of heat dynamics of friction and wear (HDFW), was developed. The interdependence of the temperature-dependent coefficient of friction and coefficient of wear intensity, coupled through the contact temperature and the vehicle velocity, was taken into account. The solution of the system of equations of HDFW was obtained by the finite element method (FEM) for six different brake pad materials paired with a cast-iron disc during a single braking. The evolutions of the braking time, coefficient of friction, braking torque, vehicle velocity, mean temperature of the contact area of the pads with the disc, and wear of the friction surfaces were determined. The obtained calculation results were then evaluated in terms of the stabilization of the coefficient of friction (braking torque), as well as the minimization of the maximum temperature, wear, braking time and pad mass. As a result, recommendations were given for selecting the optimum brake pad material in combination with a cast-iron disc.
Introduction
The main function of a braking system is to reduce the velocity of a vehicle, stop it, or prevent its movement. It is therefore important to obtain a sufficiently high and stable braking torque, ensuring the stopping of the rotating parts (the disc and related components). For given construction dimensions of the brake and a given clamping force, the central role is played by the coefficient of friction, which is intrinsically connected with the appropriate selection of friction materials.
An attempt to systematize the problems of selection of materials for disc brake components was made in article [1]. The object of the analysis were materials used for motor vehicle brake discs. From the group of material selection criteria, the most important were the stability (irrespective of load) of the coefficients of friction and wear intensity, the change in sliding velocity and the temperature mode during braking. Among the additional requirements taken into account when selecting friction pair materials, material properties such as compression strength, density and specific heat capacity were listed. Furthermore, material resistance to cracking, manufacturability and the associated cost should be considered. An important aspect of the material selection process is also the possibility of reprocessing and recycling. The authors drew attention to the problem of the diversity of approaches to material selection methods [2][3][4]. One of the basic ways of selecting friction materials is to develop a diagram (chart) presenting the properties of materials according to the specific type of braking system. Another method is the digital logic (DL) method using Ashby's chart [5]. The selection of material for the brake disc according to that method proceeds in the following four stages [1] (a numerical sketch of stage (3) is given after this list):

(1) General material performance requirements. The basic requirements for the braking system are taken into account, namely obtaining the highest possible and stable coefficient of friction. The criteria include compression strength, coefficient of friction, wear resistance, specific heat capacity, material density and cost.

(2) Initial screening of the candidate materials. Based on the general requirements, materials are strictly selected for the braking system.

(3) Material selection using the digital logic method. First, the weighting factors of the criterion parameters from stage (1) should be found. Then, the performance index of each material is calculated. The material with the highest index is considered the most useful.

(4) Optimum material selection. The performance index, together with the total cost of the material selected in stage (3), is compared with the corresponding parameters of gray cast iron (GCI).
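To make stage (3) concrete, here is a minimal sketch of the digital logic scoring; the candidate names, property values and weighting factors are invented for illustration and are not the data of [1]:

```python
# Digital logic selection sketch: weighted, normalised performance index.
# Hypothetical candidates and scaled (0-100) property values.
candidates = {
    #                  friction  wear res.  c_p   strength
    "material A":      (70,      80,        60,   75),
    "material B":      (85,      60,        70,   65),
    "gray cast iron":  (75,      70,        55,   80),
}
weights = (0.35, 0.30, 0.15, 0.20)   # from pairwise (digital logic) comparisons

def performance_index(props, weights):
    return sum(w * p for w, p in zip(weights, props))

scores = {name: performance_index(p, weights) for name, p in candidates.items()}
best = max(scores, key=scores.get)
# Stage (4): the best candidate's index (and cost) is then compared
# against gray cast iron as the reference material.
print(scores, "->", best)
```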
The radial cracking process of worn, ventilated automotive brake discs made of gray cast iron was studied in article [6]. A number of images of the microstructure (crack top view and cross-section) and of the hardness across the crack at different distances from the contact surface, obtained with an optical microscope, a scanning electron microscope (SEM) and energy dispersive X-ray spectroscopy (EDS), were shown and discussed. It was stated that the main source of the straight radial cracks, propagating from the outer edge of the disc, was excessive wear. Both abrasive and adhesive wear mechanisms were identified in the specimen. Based on additional pin-on-disc wear tests carried out with gray cast iron, the coefficient of friction, the weight loss against sliding distance, the laser profilometry distribution at the end of the process and SEM images of the sample were presented and analyzed.
A thermo-structural analysis of conventional solid and grooved brake discs manufactured by 3D printing of maraging steel was carried out in article [7]. The numerical calculations of the temperature evolutions and von Mises equivalent stress fields in the discs were performed using the finite element method based software ANSYS. In addition, the effect of the area of the radial grooves (6, 9 and 18 grooves) on the changes in the heat flux and temperature of the brake disc was investigated. The existence of grooves cut in the disc surface by the Direct Metal Laser Sintering (DMLS) process led to both lower von Mises stress and lower temperature, a fact mainly attributed to more efficient heat dissipation.
The conversion of mechanical energy into heat results in an increase in temperature at the interface of the two sliding components of the braking system. As established on the basis of numerous experimental studies and calculations, the effect of temperature on the tribological characteristics, durability and reliability is undeniable [8]. Due to the dependence of the coefficient of friction on temperature, the thermal sensitivity of the thermophysical properties of materials, wear, and an array of physical phenomena accompanying the friction process, a loop of unstable cause-effect processes takes place. That effect, especially the instability of the coefficient of friction, adversely affects operating conditions and safety. The difficulty in developing universal guidelines for materials selection stems, among other things, from the variety of disc brake designs dictated by their intended use: the permissible contact pressures, velocities, contact surface and bulk volumetric temperatures achieved, as well as the level of wear and its mechanisms, differ between aircraft brakes, automotive brakes, and other devices and working machines. Materials are also constantly developed and modified on account of growing requirements for a high and stable braking torque while minimizing the mass of the friction pair. One should also keep in mind the technological features of the material, its resistance to weather conditions (e.g., the susceptibility of carbon materials to oxidation), its resistance to various types of liquids (water, oil and other substances), as well as ecological aspects. The requirements for friction materials include a stable and high coefficient of friction; low wear irrespective of working conditions; resistance to adhesive tacking; stability and uniformity of changes in the chemical and phase composition and other properties of the surface layer during operation; corrosion resistance; a high melting point; high thermal conductivity; a low thermal expansion coefficient (constant shape and dimensions irrespective of changes in temperature); high specific heat capacity; and the absence of vibration and squeal noise [9,10].
The integral quantity combining the parameters and factors mentioned above is the temperature of the braking system [11,12]. Its mode (regime) largely determines the friction and wear characteristics of the sliding brake components. Knowing the temperature field, one can perform a preliminary selection of the friction pair materials. It answers two questions: first, whether the friction material will work within its acceptable temperature range, and secondly, what the approximate wear of the working surfaces, i.e., the service life of the friction pair, is [13]. To answer these questions, it is necessary to have experimental data on the friction thermostability of the considered pairs of materials, namely the dependence of their coefficients of friction and wear intensity on temperature. These data are the basis for the calculation model of the maximum brake temperature using the finite element method and the system of equations of heat dynamics of friction and wear (HDFW) [14,15]. The solution of that system of equations allows for a comprehensive assessment of the working ability of a preselected friction pair at the stage of developing the brake structure, within the foreseeable range of operating parameters. The scheme of friction pair selection based on the solution (analytical, analytical-numerical or numerical) of the HDFW system of equations can be presented as follows [16]:

(1) estimation of the bulk volumetric temperature for the most severe brake operating conditions and selection of the class of materials (polymers, sintered powders, carbon composites, etc.) from which the friction materials may be chosen;

(2) calculation of the average temperature of the nominal contact area of the friction pair, to reduce the number of selected materials and to assess the brake structure for thermal strains and stresses as well as structural changes on the contact surfaces;

(3) determination of the maximum temperature and of the real changes of the coefficient of friction and wear during braking.
Comparison of different combinations of friction pair materials is carried out based on the parameters that largely affect the smoothness of braking. These are [17]:

(1) the mean value f_m of the coefficient of friction f;

(2) the stability f_s = f_m / f_max;

(3) the fluctuation f_f = f_min / f_max;

(4) the braking efficiency α_eff = f_s / t_s², where t_s is the braking time;

(5) the relative braking efficiency β_eff = α_eff / I_l,max, where I_l,max is the maximum value of the linear wear I_l.
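These five indexes are straightforward to evaluate from a sampled friction trace; a minimal sketch (the trace f(t) and the wear value below are synthetic, for illustration only):

```python
import numpy as np

# Smoothness-of-braking parameters from a sampled friction trace f(t).
t = np.linspace(0.0, 10.0, 501)         # braking time t_s = 10 s
f = 0.40 + 0.05 * np.sin(0.6 * t)       # hypothetical f(t)

f_m = f.mean()                          # mean coefficient of friction
f_s = f_m / f.max()                     # stability
f_f = f.min() / f.max()                 # fluctuation
a_eff = f_s / t[-1] ** 2                # braking efficiency f_s / t_s^2
I_l_max = 2.0e-6                        # hypothetical max linear wear
b_eff = a_eff / I_l_max                 # relative braking efficiency

print(f_m, f_s, f_f, a_eff, b_eff)
```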
These parameters allow the operation of a given pair of materials to be evaluated in terms of meeting all the friction and wear indexes, including the requirement of a stable braking torque and of smoothness of the braking process itself. Having such data, and taking into account the functional requirements of the machine, the designer can determine the optimum brake option with better justification and greater certainty. At the same time, the basic dilemma should be kept in mind: what is more desirable in the case under consideration, braking efficiency or economic aspects, i.e., reducing wear and accordingly increasing the operating time of the brake.
Numerical calculations, using the finite element method, of the axisymmetric temperature fields of the pads and the disc during a single braking were carried out in the article by Yevtushenko and Grzes [18]. In the braking simulation, 16 material configurations of the pad-disc system were examined, comprising four disc materials, the Al MMC (aluminum alloy series), FCD50 (iron alloy series), steel EI-696 and cast iron ChNMKh, and four pad materials, the cermets FMC-11, FMC-845 and MCV-50 and the titanium alloy VT-14. The main purpose of the study was a comparative analysis of the temperature evolutions of the friction surfaces for the brake contact model with constant and with temperature-dependent thermophysical properties of the materials. In each of the analyzed cases, the change in velocity was linear (constant deceleration), and thus the total friction power density did not change. Such assumptions allowed a direct investigation of the effect of the material properties on the temperature fields of the brake components. It was found that taking the thermal sensitivity of the materials into account has no significant effect on the maximum temperature values (difference below 3%). Individual pairs revealed larger differences, reaching around 16%. In the analysis of the obtained results, emphasis was placed on thermal effusivity, the parameter being the square root of the product of thermal conductivity, density and specific heat capacity. Its greater change during braking caused the largest differences in surface temperature between the constant and the thermally sensitive materials.
Axisymmetric (2D) and spatial (3D) computational models of disc brakes using FEM with a temperature-dependent coefficient of friction were developed in articles [19,20]. The temperatures and wear of the friction pairs, including pads made of FC-16L Retinax A or of the cermet FMC-11 sliding on the surface of a cast-iron disc, were studied during single (2D model) and multiple (3D model, 6 brake applications) braking. The same braking time, adopted in both models at constant and at thermally sensitive coefficients of friction, gave different values of the total work done, which made it difficult to estimate the influence of the input parameters on temperature and wear.
On the basis of a 2D numerical solution of the system of equations of HDFW, the influence of disc brake design features on the maximum temperature and the duration of the braking process was examined in article [21]. The dependence of the coefficient of friction on the mean temperature of the contact area of the pad with the disc was taken into account, based on the coupling of the initial value problem for the equation of motion with the boundary value heat conduction problem (the thermal problem of friction). That approach ensured that the same work was done during braking in each of the analyzed cases (five geometrical variants and four contact pressure values). The five geometrical models differed in the outer diameter of the disc and brake pads while maintaining a constant volume: increasing the diameter led to a reduction in the thickness of the brake components. It was established that increasing the equivalent radius of the friction path (the outer and inner diameters of the pads and disc) significantly shortens the braking time and distance, while the maximum temperature attained changes only slightly. The corresponding 3D thermal problem of friction based on the system of equations of HDFW was studied in article [22]. The calculations were conducted using a 3D contact model of a disc brake. Two friction pairs at six contact pressures were analyzed, assuming constant material properties.
The elastic-plastic effects in the process of Vickers indentation of deep drawing quality steel sheets were investigated using the finite element method in the paper [23]. The authors placed emphasis on the correlation between the anisotropy of the material, according to the Hill yield criterion, and the contact conditions. Nonlinear numerical calculations of stresses and strains were carried out on the basis of a three-dimensional contact model of a 3D rigid indenter and a deforming steel sheet with the real thickness. The indenter shape was ideal, without rounding. In order to ensure the accuracy of the computer simulation, a sensitivity analysis with different total numbers of mesh elements was performed. Distributions of the equivalent plastic strain along the rolling direction and of the equivalent stress under maximum displacement and after unloading were presented and analyzed. It was observed that the coefficient of friction affects the hardness of the material; however, the friction conditions affect the maximum force and the character of the load-displacement curves only slightly. Two-dimensional finite element (FE) indentation analysis and the experimental Digital Image Correlation (DIC) method were used to study strains in samples made of a ductile material, 99% tin [24]. The purpose of the research carried out in the present work was to develop a methodology for the selection of the pad material giving optimum friction with a given brake disc, taking into account the boundary value heat conduction problem and the initial value problem for the equation of motion. To find the temperature mode of the pad-disc pair, a coupled 3D FEM model was adapted from article [22]. The calculations were conducted for six brake pad materials associated with a cast-iron disc.
HDFW System of Equations
The results of experimental tests on the friction thermostability of the materials of the friction pair in question, i.e., the changes in the coefficient of friction f and in the intensity of thermomechanical mass wear I under the influence of the temperature T, were approximated by the functions (1)-(3), where T₀ is the initial temperature of the system, and the values of the coefficients f_i, I_i, i = 1, 2, . . . , 7 were found using the methodology from [19,25]. It was assumed that the pressure p is uniformly distributed over the nominal contact areas of each of the two brake pads with the single brake disc and increases exponentially in time t from zero to the nominal value p₀ according to relationship (4), where t_i is the rise time and t_s is the braking time. At the initial time moment t = 0, the vehicle of mass m, equipped with four wheels with the same dynamic radius R_w, moves at the initial velocity V₀. The change in the vehicle velocity V during braking is found from the solution to the initial value problem (5)-(8) for the equation of motion, where A_a is the nominal contact area of the single pad with the disc, R_p and r_p are the outer and inner radii of the brake pad, respectively, 2θ₀ is the cover angle of the pad, and T(r, θ, z, t) is the spatial transient temperature field in the cylindrical coordinate system (r, θ, z) (Figure 1). Here and further, parameters and values corresponding to the pad and the disc are marked with the subscripts "p" and "d", respectively.
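A sketch of this initial value problem can be integrated directly. Since formulas (4)-(5) are not reproduced above, the ramp p*(t) = 1 − exp(−t/t_i) and the form of the deceleration term below are our assumptions, chosen to be consistent with the limits quoted later (p* → 1 for t_i → 0); the dimensions are those given in the Numerical Analysis section below, while R_w, f₀ and p₀ are hypothetical values:

```python
import numpy as np

# Assumed model: m dV/dt = -(4 brakes * 2 pads) * f0 * p(t) * A_a * r_eq / R_w
m, V0, Rw = 1524.3, 27.78, 0.31          # kg, m/s; R_w assumed (not given)
p0, ti, f0 = 0.6e6, 0.5, 0.40            # Pa, s; p0 and f0 hypothetical
Rp, rp, theta0 = 0.1135, 0.0765, np.deg2rad(32.25)
Aa = theta0 * (Rp**2 - rp**2)            # nominal pad contact area (sector)
req = 2.0 * (Rp**3 - rp**3) / (3.0 * (Rp**2 - rp**2))  # equivalent radius

dt, t, V = 1e-3, 0.0, V0
while V > 0.0:
    p = p0 * (1.0 - np.exp(-t / ti))     # pressure ramp p(t) = p0 * p*(t)
    dVdt = -(4 * 2 * f0 * p * Aa * req) / (m * Rw)
    V += dVdt * dt
    t += dt
print("braking time t_s ~ %.2f s" % t)
```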
The solution to the initial value problem (5)-(8) takes the form (9)-(12).
The braking time t_s is determined from the stopping condition V(t_s) = 0, which, taking into account formulas (9)-(11), gives the functional equation (13). With a constant coefficient of friction f = f₀, f*(T) = 1, and an immediate (t_i → 0) rise of the pressure to its nominal value p = p₀, p*(t) = 1, we have F*(t) = 1, and from formulas (10) and (12) we find the braking time explicitly. The next assumptions concern the formulation of the thermal problem of friction for a single disc brake, consisting of two identical pads, each of thickness δ_p, pressed from both sides against the surfaces of a disc with internal radius r_d, external radius R_d = R_p and thickness 2δ_d. Due to the symmetry of such a system relative to the center plane z = 0 of the disc, to determine the temperature field it is enough to consider the friction system consisting of one pad sliding on the front surface of a disc of thickness δ_d (Figure 1). As a result of friction, heat is generated in the contact area of the pad with the disc, Γ = {r_p ≤ r ≤ R_p, |θ| ≤ θ₀, z = 0}, and the components heat up. The sum of the intensities of the heat fluxes directed in the contact area Γ from the friction surface along the normal, inward the pad and the disc, is equal to the friction power density (14) [26]. We neglect the thermal resistance of the contact area, assuming equal temperatures of the friction surfaces of the pad and the disc in that area. The surface of symmetry of the disc is adiabatic, and the free surfaces of the brake are cooled by convection with a constant heat transfer coefficient h, averaged over the braking process [27,28]. The brake discs are solid and do not include an area for mounting on the wheel hub [29]. Such a simplification is justified for short-term braking, when the temperature of the disc outside the friction path is insignificant.
With such assumptions, the transient temperature field T ≡ T(r, θ, z, t) can be found from the solution of the spatial boundary value heat conduction problem (15)-(29) (Figure 1), where q(r, t) is the friction power density (14), ∆ is the Laplace operator in the cylindrical coordinate system, Ω_p and Ω_d are the spatial regions occupied by the pad and the disc, respectively, and K_p,d, ρ_p,d, c_p,d are the thermal conductivities, densities and specific heat capacities of the pad and disc materials, respectively.
Having the temperature field T(r, θ, z, t), the evolutions of the bulk volumetric temperatures of the pad, T_p^V, and of the disc, T_d^V, are determined from formulas (32). The change in the thermomechanical wear of the friction surfaces during braking was calculated from formula (33) [30], where f and I are the coefficients of friction and of wear intensity (1)-(3), dependent on the temperature through the mean contact temperature T_m (7).
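A sketch of the wear accumulation follows; since formula (33) is not reproduced above, we read it here as wear proportional to the accumulated friction work (a plausible reading, given that both f and I enter), with synthetic placeholder histories:

```python
import numpy as np

# Assumed reading of (33):
#   I^w(t) = A_a * Int_0^t I[T_m] * f[T_m] * p(s) * V(s) ds
t  = np.linspace(0.0, 10.0, 1001)
p  = 0.6e6 * (1.0 - np.exp(-t / 0.5))        # Pa, hypothetical ramp
V  = np.clip(27.78 - 2.78 * t, 0.0, None)    # m/s, linear slowdown
f  = np.full_like(t, 0.40)                   # placeholder f[T_m(t)]
I  = np.full_like(t, 0.10e-9)                # kg per N*m (0.1 ug/(N m))
Aa = 3.96e-3                                 # m^2, consistent with the
                                             # pad dimensions quoted below

integrand = I * f * p * V * Aa               # kg/s
wear = np.cumsum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(t))
print("total mass wear ~ %.3g kg" % wear[-1])
```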
The above formulated problem of motion (8)-(12) and the thermal problem of friction (15)-(29) are coupled by the coefficient of friction f[T_m(t)], which means that the sliding velocity V and the temperature T are interdependent.
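Because of this coupling, V(t) is in general available only as numerical data from the FEM solution, so the stopping condition V(t_s) = 0 has to be solved numerically; a minimal sketch, with a synthetic velocity history standing in for the FEM output:

```python
import numpy as np
from scipy.optimize import brentq
from scipy.interpolate import interp1d

# Tabulated velocity history (synthetic stand-in for the coupled solution).
t_grid = np.linspace(0.0, 20.0, 2001)
V_grid = 27.78 - 1.5 * t_grid + 0.4 * (1.0 - np.exp(-t_grid / 0.5))

V = interp1d(t_grid, V_grid)
t_s = brentq(V, 1.0, 20.0)      # bracket chosen where V(t) changes sign
print("t_s = %.3f s" % t_s)
```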
It should also be noted that the influence of the microgeometry of the rubbing surfaces on the maximum temperature is related to the flash temperature, not only to the mean temperature of the contact surfaces. It has been established that the flash temperature reaches its highest value in the initial period of braking, and the mean surface temperature at about the midpoint of this period. The influence of the topography of the surface on the temperature mode of a disc brake has been investigated in articles [15,31]. It was shown that the decisive influence on the maximum temperature is exerted by the mean temperature of the contact region of the pad with the disc. Hence, in the present study, the dependencies of the coefficient of friction and of the wear rate only on the mean temperature of the contact region were used. There are also other approaches, which take the microgeometry of the rubbing surfaces into account. The roughness of these surfaces is usually simulated by introducing a thermal resistance into the boundary conditions and, as a result, by the appearance of a temperature jump on the friction surface. In the proposed computational model, the solution of the boundary value heat conduction problem was obtained under ideal (perfect) conditions of thermal contact, which are typical for fairly smooth working surfaces of the pad and the disc. A review of the research on that topic is presented in article [32].
Numerical Analysis
The numerical solution of the system of equations of HDFW (1)-(32) was obtained using the finite element method implemented in the COMSOL Multiphysics® software [33]. To create the mesh of the brake, 8520 higher-order finite elements (quadratic Lagrange hexahedral elements) were used, including 1320 in the region occupied by the pad, Ω_p, and 7200 elements in the region of the brake disc, Ω_d (Figure 1). The total number of degrees of freedom (DOF) of the model was 78,307.
A computer simulation of the frictional heating of the disc brake components during a single braking of a vehicle of mass m = 1524.3 kg, from the initial velocity V₀ = 100 km h⁻¹ (27.78 m s⁻¹) to standstill, was performed. The pad and disc dimensions were R_p = R_d = 113.5 mm, r_p = 76.5 mm, r_d = 66 mm, δ_d = 5.5 mm, δ_p = 10 mm and θ₀ = 32.25° [34]. The value of the heat transfer coefficient was assumed to be h = 60 W m⁻² K⁻¹, which is fully justified for short-term braking of a motor vehicle [28]. Initially, the disc and the pads were at the ambient temperature T₀ = 20 °C.
Calculations were made for the following variants of the pad friction material and the nominal contact pressure [35]: (1) 145-40; (2) 42-773; (3) 2-61; (4) cermet FMC-11; (5) cermet MCV-50; (6) FC-16L (Retinax A). The characteristics of the pad materials necessary to carry out the calculations are given in Table 1. For the material of the brake disc, ChNMKh gray cast iron (K_d = 52.17 W m⁻¹ K⁻¹, ρ_d = 7100 kg m⁻³, c_d = 444.6 J kg⁻¹ K⁻¹) was chosen. Among the selected brake pad materials, two groups can be distinguished, the first consisting of four materials marked with the numbers 1, 2, 3 and 6 (145-40, 42-773, 2-61 and FC-16L). They are characterized by low thermal conductivity (K_p = 0.39 ÷ 0.79 W m⁻¹ K⁻¹), the lowest density (ρ_p = 2300 ÷ 2500 kg m⁻³) and the highest specific heat capacity (c_p = 961 ÷ 1206 J kg⁻¹ K⁻¹). The materials 145-40, 42-773 and 2-61 are based on a combined binder formed from mixtures of the same name, differing in the proportions of ingredients such as asbestos, synthetic resin, rubber, barite, aluminum oxide, graphite, copper powder and brown shavings. Retinax FC-16L type A (variant 6) is a composite based on phenol-formaldehyde resins and reinforced with brass shavings.
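As an aside, the thermal effusivity emphasized in [18] (recalled in the introduction) is immediate to evaluate for the disc material from the values just quoted:

```python
from math import sqrt

# Thermal effusivity e = sqrt(K * rho * c) for the ChNMKh cast-iron disc,
# using the property values given above.
K_d, rho_d, c_d = 52.17, 7100.0, 444.6       # W/(m K), kg/m^3, J/(kg K)
e_d = sqrt(K_d * rho_d * c_d)
print("disc effusivity: %.0f W s^0.5 m^-2 K^-1" % e_d)   # ~1.3e4
```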
It is necessary to emphasize that modern friction materials are complex multi-component systems. Their composition and manufacturing technology are usually not disclosed. In the scientific literature, only data on the averaged (effective) thermophysical and mechanical properties of the materials as a whole, or of their individual components, are available. The methods for determining such averaged characteristics are known, but they are not the subject of investigation in the present study. The composite materials used in this thermal analysis are very different, and it would be difficult to take into account the inhomogeneity of the properties of each of them.
The results of the calculations shown in the figures below are marked, for each of the six abovementioned variants, with the numbers 1, 2, . . . , 6, respectively. In addition, the curves corresponding to the calculation variants 4, 5 and 6 at the low nominal pressure values (p₀ = 0.588 MPa, 0.49 MPa and 0.392 MPa, respectively) are denoted as 4a, 5a and 6a (solid lines). In contrast, the curves obtained for the same variants at the high nominal pressure (p₀ = 1.47 MPa) are denoted as 4b, 5b and 6b (dashed lines).
The values of the coefficients of friction f₀ and of wear intensity I₀ at the initial temperature T₀ = 20 °C, and the values of the coefficients f_i, I_i, i = 1, 2, . . . , 7, in formulas (2) and (3), approximating the experimental data, are given in Table 2 [31,35]. The experimental dependencies (1)-(3) of the coefficient of friction f and of the wear intensity I on the temperature T for all calculation variants are presented in Figures 2 and 3, respectively. The friction thermostability curves of the selected materials differ significantly (Figure 2). An increase in temperature causes a monotonic increase in the coefficient of friction in the case of the pads made of material 145-40 (variant 1), and its almost linear decrease for the pads made of materials 2-61 (variant 3) and FMC-11 (variant 4). The friction thermostability curves for pads made of 42-773 (variant 2) and MCV-50 (variant 5) have a local maximum and, for Retinax FC-16L (variant 6), a local minimum. At a given temperature, the coefficient of friction is greater at the lower contact pressure (variants 4, 5 and 6). According to formula (33), the value of the wear intensity factor I has a decisive impact on the material wear of a particular friction pair during braking. The experimental dependencies I(T) for the six pad materials considered are shown in Figure 3. Except for the pad made of Retinax FC-16L (variant 6), the values of I do not exceed 1.2 µg N⁻¹ m⁻¹ in the temperature range of 20 ÷ 900 °C. At temperatures not exceeding 400 °C, the friction pairs of variants 1, 2 and 3 are characterized by a very low (below 0.14 µg N⁻¹ m⁻¹) wear intensity. A rapid increase in wear intensity after reaching the temperature of 400 °C is shown by the friction pair with the pad made of FC-16L (variant 6). At a temperature of around 500 °C, the wear intensity shows a clear maximum for variants 1, 2, 3, 5a and 5b. At a given temperature, the wear intensity is higher at the higher contact pressure (variants 4, 5 and 6).
The evolution of the work done by each of the four braking systems of the motor vehicle, in which the equivalent radius r_eq and the friction power density q were obtained from formulas (8) and (14), respectively, is shown in Figure 4. It increases monotonically from zero to the nominal value. The reduction in the vehicle velocity from the initial value V₀ = 27.78 m s⁻¹ to zero at the moment of stopping is shown for all variants in Figure 5. Except for a short initial period, the reduction in velocity is linear, which was previously also established for thermally sensitive materials in article [31]. The longest braking (t_s = 32.88 s) occurred for the friction pair of variant 6, and the shortest (t_s = 5.38 s) for variant 5 at the low pressure p₀ = 0.49 MPa (Table 3). Evolutions of the mean temperature T_m (7) of the contact area of the pad with the disc are presented in Figure 6. Due to the same initial kinetic energy and, therefore, the same total work done (Figure 4), the maximum values T_m,max differ only slightly between the considered calculation variants. The difference between the highest, 187.4 °C (curve 6b), and the lowest, 160.6 °C (curve 4b), is about 14.3% (Table 3).
Table 3. Characteristics of the temperature mode and wear of the friction pairs.

Changes in the bulk volumetric temperatures of the pad, T_p^V, and of the disc, T_d^V, (32) with the braking time are shown in Figures 7 and 8, respectively. The maximum temperatures of the pads made of the materials with low thermal conductivity (variants 1, 2, 3 and 6) are low and range from 47 °C (curve 3) to 65 °C (curve 6a) (Figure 7). The thermal conductivities of the cermets FMC-11 (variant 4) and MCV-50 (variant 5) are much higher, which means that the maximum values of their bulk temperatures are higher and vary in the range from 134 °C (curve 5b) to 154 °C (curve 4a). The maximum values of the bulk volumetric disc temperature are equalized for all calculation variants and vary from 145 °C (variant 5a) to about 168 °C (variant 6b) (Figure 8). It should be noted that the results presented in Figures 7 and 8 are strongly related to the values of the heat partition coefficient, which in turn are determined by the thermophysical properties of the friction pair materials [36]. With a significantly lower thermal conductivity of the pad, more heat is directed from the friction surface into the disc, and hence the volumetric temperature of the disc is higher than that of the pad. Due to the similar thermophysical properties of the pad and disc materials (variants 4 and 5), their bulk temperatures differ only slightly. However, an insignificant difference between the mean temperature of the friction surface (Figure 6) and the bulk temperature of the pad (Figure 7) occurs only in the case of the ceramic metal pads (variants 4 and 5). This is explained by their thermal conductivity, greater by two orders of magnitude than that of the other four materials (variants 1-3 and 6). Consequently, the effective heating depth of the ceramic metal pads is also substantially greater than that of the rest of the materials (they heat up over their entire thickness). The high thermal conductivity of the cast-iron disc, together with its computational thickness of almost half that of the pad, is the reason for the small difference between the corresponding bulk and surface temperatures shown in Figures 6 and 8.
No. Pads Material [s]
pads (variants 4 and 5). This is explained by their greater (by two orders of magnitude) thermal conductivity compared with the other four materials (variants 1-3, 6). Consequently, the effective heating depth of the ceramic metal pads is also substantially greater than the rest of the materials (they heat up over the entire thickness). High thermal conductivity of the cast-iron disc and almost twice smaller computational thickness than the pad are the reasons for the small difference in the corresponding bulk and surface temperatures shown in Figure 6 and 8.
We note that evolution of the temperature of the friction surface of the disc has an oscillating nature [20,22,31]. This is caused by the motion of the contact region along the working surface of the disc due to its rotation. At each revolution of the disc, the temporal profile of the temperature of the specified point on the friction surface of the disc consists of two stages-increasing and decreasing. First, when the point of the rubbing path of the disc comes into contact with the pad, the temperature increases and reaches the maximum value, then after passing through the pad area, it decreases until another contact with the pad takes place. In the present analysis, the purpose was to find the change in the mean temperature of the contact region with respect to braking time defined by formula (7). That temperature was used in the construction of the computational model. Since it is associated with the stationary pad, there are no oscillations. The other reason is the averaging of temperature within the contact region, and therefore, we can see only the monotonic change in the mean (and volumetric) temperatures in Figure 6-8. The thermomechanical mass wear of the friction surfaces w I (33) increases monotonically from zero at the initial time moment to the maximum value at the stopping ( Figure 9). The lowest total wear in the process takes place for the friction pairs including pads made of materials 42-773 (curve conductivity compared with the other four materials (variants 1-3, 6). Consequently, the effective heating depth of the ceramic metal pads is also substantially greater than the rest of the materials (they heat up over the entire thickness). High thermal conductivity of the cast-iron disc and almost twice smaller computational thickness than the pad are the reasons for the small difference in the corresponding bulk and surface temperatures shown in Figure 6 and 8.
We note that evolution of the temperature of the friction surface of the disc has an oscillating nature [20,22,31]. This is caused by the motion of the contact region along the working surface of the disc due to its rotation. At each revolution of the disc, the temporal profile of the temperature of the specified point on the friction surface of the disc consists of two stages-increasing and decreasing. First, when the point of the rubbing path of the disc comes into contact with the pad, the temperature increases and reaches the maximum value, then after passing through the pad area, it decreases until another contact with the pad takes place. In the present analysis, the purpose was to find the change in the mean temperature of the contact region with respect to braking time defined by formula (7). That temperature was used in the construction of the computational model. Since it is associated with the stationary pad, there are no oscillations. The other reason is the averaging of temperature within the contact region, and therefore, we can see only the monotonic change in the mean (and volumetric) temperatures in Figure 6-8. The thermomechanical mass wear of the friction surfaces w I (33) increases monotonically from zero at the initial time moment to the maximum value at the stopping ( Figure 9). The lowest total wear in the process takes place for the friction pairs including pads made of materials 42-773 (curve However, an insignificant difference between the mean temperature of the friction surface ( Figure 6) and bulk temperature of the pad (Figure 7) takes place only in the case of ceramic metal pads (variants 4 and 5). This is explained by their greater (by two orders of magnitude) thermal conductivity compared with the other four materials (variants 1-3, 6). Consequently, the effective heating depth of the ceramic metal pads is also substantially greater than the rest of the materials (they heat up over the entire thickness). High thermal conductivity of the cast-iron disc and almost twice smaller computational thickness than the pad are the reasons for the small difference in the corresponding bulk and surface temperatures shown in Figures 6 and 8.
We note that evolution of the temperature of the friction surface of the disc has an oscillating nature [20,22,31]. This is caused by the motion of the contact region along the working surface of the disc due to its rotation. At each revolution of the disc, the temporal profile of the temperature of the specified point on the friction surface of the disc consists of two stages-increasing and decreasing. First, when the point of the rubbing path of the disc comes into contact with the pad, the temperature increases and reaches the maximum value, then after passing through the pad area, it decreases until another contact with the pad takes place. In the present analysis, the purpose was to find the change in the mean temperature of the contact region with respect to braking time defined by formula (7). That temperature was used in the construction of the computational model. Since it is associated with the stationary pad, there are no oscillations. The other reason is the averaging of temperature within the contact region, and therefore, we can see only the monotonic change in the mean (and volumetric) temperatures in Figures 6-8.
The thermomechanical mass wear of the friction surfaces I w (33) increases monotonically from zero at the initial time moment to the maximum value at the stopping (Figure 9). The lowest total wear in the process takes place for the friction pairs including pads made of materials 42-773 (curve 2, I w,max = 2.22 mg) and 2-61 (curve 3, I w,max = 1.51 mg), and the largest when using the pads made of Retinax FC-16L (curve 6b, I w,max = 85.56 mg) ( Table 3). Increasing the contact pressure (variants 4, 5, 6) causes significant (even several times) increase in wear. (Table 3). Increasing the contact pressure (variants 4, 5, 6) causes significant (even several times) increase in wear. Knowing the time profiles of the mean temperature m T (7) of the six analyzed calculation variants (Figure 6), the formulas for the coefficient of friction f ( Figure 10) were reproduced using formulas (1) and (2). Relatively low (below 200 °C) maximum values of m T , presented in Table 3, cause the initial values of 0 ff to be approximately maintained throughout the entire braking process. The coefficient of friction for the pad made of 145-40 (variant 1) was stable, a slight increase in f occurred for the friction pair from variant 2, and a decrease occurred for variant 3. The largest changes in the coefficient of friction during a single braking were observed in the results obtained from calculations according to materials denoted 4, 5 and 6. Knowing the time profiles of the mean temperature T m (7) of the six analyzed calculation variants (Figure 6), the formulas for the coefficient of friction f ( Figure 10) were reproduced using formulas (1) and (2). Relatively low (below 200 • C) maximum values of T m , presented in Table 3, cause the initial values of f = f 0 to be approximately maintained throughout the entire braking process. The coefficient of friction for the pad made of 145-40 (variant 1) was stable, a slight increase in f occurred for the friction pair from variant 2, and a decrease occurred for variant 3. The largest changes in the coefficient of friction during a single braking were observed in the results obtained from calculations according to materials denoted 4, 5 and 6. (Table 3). Increasing the contact pressure (variants 4, 5, 6) causes significant (even several times) increase in wear. Knowing the time profiles of the mean temperature m T (7) of the six analyzed calculation variants (Figure 6), the formulas for the coefficient of friction f ( Figure 10) were reproduced using formulas (1) and (2). Relatively low (below 200 °C) maximum values of m T , presented in Table 3, cause the initial values of 0 f f to be approximately maintained throughout the entire braking process. The coefficient of friction for the pad made of 145-40 (variant 1) was stable, a slight increase in f occurred for the friction pair from variant 2, and a decrease occurred for variant 3. The largest changes in the coefficient of friction during a single braking were observed in the results obtained from calculations according to materials denoted 4, 5 and 6. (6) and (8), respectively, are similar to the changes in the coefficient of friction presented in Figure 10 ( Figure 11). Just as the friction force, the braking torque, depends on The temperature field calculated numerically allows the conclusion that the temperature mode (regime) of the brake is light [36]. It occurs when the bulk volumetric temperature of the friction pair is about 100 • C and the average temperature of the friction surface does not exceed 200 • C. 
Materials for which the temperature of the beginning of destruction of at least one of the components is in the range 250-300 °C are suitable for operation in such conditions. Thus, the thermal stability of the separate components has a significant impact on the frictional stability of the material, determining the permissible temperature modes of its operation. It should also be noted that with light temperature modes, the wear of the disc is usually not taken into account, and the service life and reliability of the braking system are mainly determined by its temperature mode.
Based on the comparative analysis of the data contained in Tables 3 and 4, it is apparent that, among the pad materials belonging to the first of the abovementioned groups (variants 1, 2, 3 and 6), friction material 42-773 (variant 2) performs best. When that material is used for the pads in combination with the cast-iron brake disc, the braking time is the shortest, the maximum temperature of the friction surface differs only slightly from the values obtained for the other materials in that group, the mass wear is low, and the average value of the coefficient of friction f m, the stability of the coefficient of friction (f s) and its fluctuation (f f) are the highest (as required) during the braking process. The friction pair from variant 2 also has the highest braking rate in that group. Among the friction materials from the second group (variants 4 and 5), the optimum choice at both low and high contact pressure is the cermet MCV-50 (variant 5). It leads the absolute ranking of all six friction pairs, ahead of the cermet FMC-11 in second place. It should be noted that a significant disadvantage of ceramic metal friction materials is their high hardness and rigidity. Hence, as also shown by the results in Table 3, the wear of the pad material and its bulk volumetric temperature increase significantly and the noise characteristics deteriorate.
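To make the ranking criteria above concrete, the following sketch computes the three friction-characteristic parameters from a coefficient-of-friction time profile. The definitions used here (f m as the mean of f(t), f s = f m / f max, f f = f min / f max) are common conventions assumed for illustration only; the paper's exact formulas, the function name and the sample profiles are not taken from the source.

```python
# Sketch of friction-characteristic parameters for ranking pad materials,
# assuming common definitions (the paper's exact formulas are not
# reproduced in this excerpt): f_m = mean coefficient of friction over the
# braking, f_s = f_m / f_max (stability), f_f = f_min / f_max (fluctuation).
import numpy as np

def friction_parameters(f):
    """Return (f_m, f_s, f_f) for a friction-coefficient time profile f(t)."""
    f = np.asarray(f, dtype=float)
    f_m = f.mean()
    f_s = f_m / f.max()
    f_f = f.min() / f.max()
    return f_m, f_s, f_f

# Hypothetical f(t) profiles for two pad materials during one braking.
profiles = {
    "42-773": np.array([0.70, 0.71, 0.72, 0.72, 0.71]),  # stable
    "FC-16L": np.array([0.40, 0.37, 0.33, 0.30, 0.28]),  # fading
}
for name, f in profiles.items():
    f_m, f_s, f_f = friction_parameters(f)
    print(f"{name}: f_m={f_m:.3f}, f_s={f_s:.3f}, f_f={f_f:.3f}")
```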
It should be noted that the present study examines the so-called light temperature mode, when the temperature of the friction surface does not exceed 200 °C. This is due to the properties of the pad materials employed in the calculations. The first three (variants 1-3) can be used only with light frictional heating modes. The ceramic metal materials (variants 4 and 5), as well as Retinax (variant 6), can withstand much higher temperatures, such as the medium (about 450 °C) and heavy (above 750 °C) temperature modes. This research was limited to the light mode for all six materials because the main objective was to perform a comparative analysis of the characteristics of these materials operating under the same conditions (the same initial velocity, pressure and total work done). However, the proposed method of initial selection of the pad material remains unchanged in the other temperature modes. It should also be noted that there is little experimental data available on the temperature dependence of the coefficients of friction and the intensity of wear. Such data were found and used for six pad materials in combination with the same cast-iron disc.
Conclusions
The aim of the present study was to develop a methodology for the initial selection of brake pad materials based on temperature mode data. A numerical simulation of the frictional heating process of a disc brake during single braking was carried out. The basis for the calculations was the coupled spatial system of equations of HDFW, taking into account the thermal sensitivity of the coefficient of friction and the intensity of thermomechanical mass wear. The numerical solution of the developed system of equations of HDFW, using FEM, was obtained for six selected pad materials (six calculation variants) combined with the cast-iron brake disc while maintaining a constant work done during braking. As a result, the key parameters of the braking process were determined, such as the braking time and the time profiles of the mean temperature of the nominal contact area of the pad with the disc, their bulk volumetric temperatures, the thermomechanical mass wear, the coefficient of friction and the braking torque. This allowed the maximum values of the mean and bulk temperatures, as well as the mass wear, to be determined for the six selected pad materials. Four parameters were considered for selecting the pad material with the best friction characteristics, taking into account the change in the coefficient of friction: its average value, stability, fluctuation and braking efficiency. Based on the initial material selection, carried out by means of a comparative analysis of the values of these parameters for each of the six friction pairs, it was found that the best tribological characteristics are exhibited by the MCV-50 and FMC-11 ceramic metal materials and the material denoted 42-773. The final choice of the pad material should also be based on strength tests, technology, the production costs of the material and ecological aspects.
We proposed a general method for the initial selection of the pad material for a given disc material. The calculations were made for the light temperature mode of brake operation and were aimed only at illustrating this method. The presented technique is quite universal, and calculations based on it can also be carried out for medium and heavy temperature modes, including the use of more complicated mathematical models taking into account the topography of the friction surface, the temperature dependence of the material properties, the nonuniformity of the contact pressure distribution, etc.
"year": 2020,
"sha1": "b252709f5bbb6de3a953a003a8777ba34f79305a",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/1996-1944/13/4/822/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "4446ef391e1b0cfbef2deeeefbd870e5bc881bf5",
"s2fieldsofstudy": [
"Engineering"
],
"extfieldsofstudy": [
"Medicine",
"Materials Science"
]
} |
Projective mapping and descriptive analysis of commercial fish floss in Yogyakarta Region
This study aimed to determine the quality of commercial fish floss in the Yogyakarta region based on the similarities and dissimilarities in the dominant spices perceived by panelists. Projective mapping (PM), or napping, has attracted much attention in recent literature as a method for fast sensory profiling and measurement of consumer perception. Configurations from PM have been shown to provide similar product maps. Projective mapping was carried out on ten commercial fish floss products using 80 untrained panelists. Panelists were instructed to group samples according to their perception on a flat sheet of 60 × 60 cm2. The coordinates of the grouped samples were recorded. The study detected 12 spice flavors, and the samples were scattered across four groups of dominant spices. Fish floss AK, KF_T, and DE contained nutmeg, turmeric, tamarind, and ginger as the dominant flavors. The brands N_S, TS, and SM exhibited a dominant taste of fish flavor, candlenut, lemongrass, and a slight chili. N_L, SR, and KF_L had a dominant taste of bay leaves, coriander, cutcherry, and chili. The SF_S brand had a dominant flavor of galangal. The results of this study can be used as a reference for the development of the fish floss industry in the Yogyakarta area, both for new producers and for the development of new products from various fish raw materials.
Introduction
Fish floss is a fish product processed by steaming and frying. Various spices are added to produce a good taste and prolong the shelf life. Fish floss is generally marketed in various types of packaging to maintain product quality [1]. It is important to pay attention to the physical aspects of food when marketing new products in order to give a good impression of quality. This will have an impact on understanding consumers' desire to buy products on the market [2]. Trained panelists and quantitative descriptive analysis are usually chosen to determine the sensory characteristics of a product. In the food industry, the time needed to train panelists for a series of product tests is an issue that needs to be resolved, so a way is needed to obtain answers about products quickly from consumers without the need to train these panelists [3].
Projective mapping (PM), or napping, is a popular sensory-testing method used to determine the character of the product being tested. PM is preferred because the implementation procedure is simple yet effective for determining the overall differences between samples [4]. The basis of the PM method is to identify and characterize product samples according to their (dis)similar characteristics [5]. In general, PM panelists are asked to use the overall similarity of characteristics in the samples, in terms of sensory value, preference, or other food-related aspects, to place the position of the samples [6]. Projective mapping or flash profiling is attractive because it does not require panelist training [7], since achieving consensus among trained panelists on descriptive sensory testing can be difficult for some types of product [8].
Fish floss is one of the important fish products in the city of Yogyakarta. The products sold are made from a variety of fish raw materials, both freshwater and marine, with a variety of spices. Most fish floss is produced by home industries and packaged using plastic packaging, aluminum foil, or a combination of both, with local market segmentation. Mapping of fish floss products in the city of Yogyakarta using PM is expected to give an idea of the characteristics of fish floss products that consumers are interested in.
Sample and sample preparation
Ten fish floss products were selected as samples, taken from several retail stores in the city of Yogyakarta (table 1). All fish floss samples were products of small and medium enterprises, most of which come from Sleman Regency. Fish floss is a final product that can be consumed directly, so it was presented to the panelists as purchased. Fish floss samples (10 ± 2 g) were placed in small cups labeled with three-digit random numbers and prepared immediately before each test was carried out. Panelists tasted the fish floss samples that were served and were asked to record their assessment on the mapping sheet.
Panelists
A total of eighty untrained panelists aged 20-29 years (men, n = 32; women, n = 48) were recruited from local communities; all were familiar with and used to consuming fish floss products. This was done to ensure that the panelists were familiar with the character of fish floss and knew the main spices commonly used as seasonings. The panelists were people who were interested in and willing to participate in this test.
Procedure
Hedonic test.
Panelists were asked to taste the fish floss samples and express their preference for product appearance, aroma, taste, and texture. Hedonic test assessments used a scale of 1-5, with 1 = dislike very much; 2 = dislike; 3 = like slightly; 4 = like; and 5 = like very much.
Projective mapping.
Panelists were asked to taste each product sample and write down descriptions of the sensory characteristics that emerged. Panelists placed each product sample on a square sheet of paper measuring 60 × 60 cm according to their perception. Samples considered to have similar sensory properties were placed close to each other, and vice versa: if samples were considered different, they were placed apart from each other. Panelists were allowed to change the position of the samples to confirm their assessment. Once the arrangement was final and unchanged, the panelists were asked to write down the three-digit number of each sample on the sheet of paper. Panelists were then asked to group the samples that had sensory similarities. Once grouped, panelists conducted ultra-flash profiling [9], freely writing descriptions of the sensory properties of the groups that had been determined.
Statistical analysis
Hedonic data were analyzed using the Kruskal-Wallis and Mann-Whitney methods. PM data were processed using multiple factor analysis (MFA) and hierarchical cluster analysis (HCA) with SPSS (version 20, IBM, Chicago, IL, USA). MFA was used to project the data of each panelist and generate a perception map of the products and the sensory characteristics perceived by the panelists [10].
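For illustration, the sketch below shows how the non-parametric tests named above could be applied to hedonic scores. The authors used SPSS, so this Python version, the brand labels and the score values are purely hypothetical stand-ins.

```python
# Minimal sketch of the non-parametric tests used for the hedonic data,
# assuming hypothetical 1-5 hedonic scores for three fish floss brands.
from scipy.stats import kruskal, mannwhitneyu

# Hypothetical hedonic "taste" scores from a few panelists per brand.
scores = {
    "KF_L": [5, 4, 4, 5, 3, 4],
    "SR":   [4, 4, 3, 5, 4, 3],
    "TS":   [2, 3, 3, 2, 4, 2],
}

# Kruskal-Wallis: do the brands differ overall?
h_stat, p_kw = kruskal(*scores.values())
print(f"Kruskal-Wallis H = {h_stat:.2f}, p = {p_kw:.4f}")

# If the omnibus test is significant, pairwise Mann-Whitney U tests
# identify which brand pairs differ (P < 0.05, as in the paper).
if p_kw < 0.05:
    brands = list(scores)
    for i in range(len(brands)):
        for j in range(i + 1, len(brands)):
            u, p = mannwhitneyu(scores[brands[i]], scores[brands[j]])
            print(f"{brands[i]} vs {brands[j]}: U = {u:.1f}, p = {p:.4f}")
```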
Results and discussion
The implementation of projective mapping to determine the sensory characteristics of the fish floss products proceeded easily and quickly. PM, also known as 'napping', is a free-profiling method based on techniques in which panelists build their own vocabulary [11]. PM is also known as a method that can quickly provide a complex mapping of the products tested. Another advantage of this method is that it can potentially be used as a consumer research tool to gather relevant vocabulary and direct feedback from consumers [1]. The sensory characteristics of the 10 fish floss samples from the PM, along with the hedonic test scores, are presented in table 2. *) The score is the panelist's preference level for the product on a scale of 1-5, with 1 = dislike very much and 5 = like very much. Values followed by different letters indicate a significant difference (P < 0.05). NA = product sample was temporarily unavailable when the test was carried out.
The panelists detected the aromas of spices used as fish floss seasonings, including fish, candlenut, lemongrass, chili, coriander, galangal, ginger, cutcherry, bay leaves, tamarind, and turmeric. The aromas of these spices appeared quite strong because they are commonly used as the main ingredients in making fish floss. Some herbs, such as coriander, garlic, palm sugar, tamarind, galangal, bay leaves, and turmeric, are ingredients commonly used as fish floss spices [12]. The panelists' preferences for the samples showed significant differences in the aroma, taste, and texture of the fish floss, while the appearance of the fish floss did not show significant differences. The highest hedonic score was obtained by KF_L, which was not significantly different from SR and N_L. This most preferred product shows that consumers prefer freshwater fish floss to marine fish floss, as the floss of marine fish has a fishy aroma that consumers did not like.
The MFA biplot obtained from the statistical test (figure 1) illustrates the sensory space produced by the panelists. The variance explained was 39.85% (F1 = 21.77%; F2 = 18.08%). The plot results showed that the fish floss samples were almost evenly distributed, with the most ideal proportions shown by the SM product samples. The MFA results for the 10 fish floss samples and the sensory characteristics determined by the panelists (figure 2) showed a total variance of 39.85%, contributed by F1 (21.77%) and F2 (18.08%). This value is low because it is characteristic of a PM that uses untrained panelists. In a previous comparison of three descriptive tests, namely QDA, flash profiling (FP), and projective mapping, QDA gave the most significant variance, followed by FP and PM [1]. The PM results were the lowest because the characterization results were obtained directly from sample mapping on the sheets, whereas for FP, panelists were asked to generate terminology distinguishing the samples, and QDA had the highest score because its results were the consensus of trained panelists.
The MFA of the fish floss showed the relationship between the samples and the strongest attributes sensed by the panelists. Scents more often detected by panelists lie in the middle area of the MFA and, vice versa, those less detected lie toward the outside of the MFA. The ten fish floss product samples were divided into four groups of the most dominant sensory characteristics. Samples DE, AK, and KF_T were found on the positive F1 axis and the positive F2 axis (quadrant 1), owing to the character of ginger, tamarind, nutmeg, and turmeric. The KF_S sample was found on the negative F1 and positive F2 axis (quadrant 2), having a galangal aroma character. The N_L, SR, and KF_L samples were found on the negative F1 and F2 axes (quadrant 3), having the characteristics of coriander, cutcherry, chili, and bay leaves. Samples SM, TS, and N_S were found on the positive F1 axis and negative F2 axis (quadrant 4), with the characteristics of fish flavor, candlenut, lemongrass, and a slight chili. From the MFA plot, there were four groups of dominant aroma characters in the fish floss samples, with several overlapping plots. In the plots of samples DE, AK, and KF_T in quadrant 1, the scent characters of ginger and tamarind lay within the nutmeg and turmeric plots. This indicates that the aromas of tamarind and ginger were more specific to the samples than the aromas of nutmeg and turmeric; likewise, the lemongrass character in the plots of samples SM, TS, and N_S lay in quadrant 4. Meanwhile, samples N_L, SR, and KF_L, which were in quadrant 3, showed an interaction between the aromas of bay leaves, cutcherry, and chili, bound to each other through the coriander aroma. The KF_S sample plot had its own character, with a dominant galangal aroma, and was not related to the other scents. Each spice aroma recognized by the panelists is a type of spice that has a strong aroma and is often used in cooking. Spices with a strong smell made the aroma easily recognizable by the panelists. Consumers will easily detect differences and assess the characteristics of a product if they are familiar with it. Conversely, if consumers are not familiar with the product, they will have difficulty detecting differences. This condition will affect the placement of assessments in PM [13].
The dendrogram from the HCA (figure 3) illustrates the closeness of the relationships between the fish floss samples. Unlike the MFA plot results, which divided the samples into four groups, the HCA dendrogram divided the samples into three groups. These three groups were distinguished based on the similarity of the aromas detected by the panelists. Figure 3. Dendrogram of hierarchical cluster analysis of fish floss samples from projective mapping. These results indicate that the fish floss sold in the city of Yogyakarta has almost the same aroma character, and no sample had a truly dominant aroma. Although the tests conducted were able to categorize the existing samples, the PM results in this study were unable to show the interaction of panelists' preferences with the fish floss samples. To get a better picture of PM results in relation to panelists' preferences, further research is needed on the application of PM combined with consumer preference tests.
Conclusions
Projective mapping of the commercial fish floss available and sold in the city of Yogyakarta divided the fish floss into four groups of dominant aromas. The PM implementation was relatively easy and inexpensive for determining the sensory characteristics of the fish floss. However, research combining projective mapping with consumer preference tests is needed to obtain a broader picture of the application of PM for characterizing commercial fish floss.
"year": 2019,
"sha1": "212b5e2eb89f4ecb4f87798d51124c98712313be",
"oa_license": null,
"oa_url": "https://doi.org/10.1088/1755-1315/370/1/012069",
"oa_status": "GOLD",
"pdf_src": "IOP",
"pdf_hash": "51b67cfa6094a36c9bfd9f93e3799957912c6aad",
"s2fieldsofstudy": [
"Agricultural And Food Sciences"
],
"extfieldsofstudy": [
"Computer Science",
"Physics"
]
} |
A wind, temperature, H$_2$O and CO$_2$ scanning lidar mobile observatory for a 3D thermodynamic view of the atmosphere
A ground-based mobile 3D lidar observatory has been developed for simultaneous measurements of wind speed, temperature, water vapor and carbon dioxide absorption in the atmosphere. The present paper reports details of the instruments, assesses the current performances and gives some examples of measurements for different geophysical applications.
Introduction
The motivation of this work is to provide advanced observations of the main variables that characterize the land-atmosphere exchanges of momentum, temperature, water vapor and carbon dioxide. The lidar mobile observatory is a prototype that will hopefully help to pave the way to a future 3-D thermodynamic view of the atmosphere that will match current and future Navier-Stokes equation simulations at different scales (Large-Eddy Simulation (LES), Direct Numerical Simulation (DNS)) and will improve our understanding of the carbon cycle in the context of global warming [1]. The multiple goals include: (i) to address the representativeness of in situ measurements in heterogeneous landscapes, especially for surface fluxes; (ii) to assess the relevance of Monin-Obukhov similarity theory (MOST), which links gradients and fluxes close to the surface; (iii) to address the issue of dissimilarity of scalar transport, such as heat and water vapor or CO2, in inhomogeneous landscapes; (iv) to help find advanced model parametrizations of exchanges and transport processes between the land surface or the boundary layer and the free atmosphere, for both convective and stable planetary boundary layers. To do so, new observations are needed that, first, can provide a 3-D view of the atmosphere and, second, have turbulence-scale temporal and spatial resolutions in order to investigate flux-gradient relationships and estimate higher-order moments.
In this paper, we present the main characteristics of the two lidars of this observatory and assess their performance in terms of precision and bias.
Instrumental set-up
3-D lidar observatory
The mobile 3-D lidar observatory consists of two containers with scanning lidars that operate in the non-visible UV and NIR optical windows for eye-safety reasons (Fig. 1). The first lidar is a temperature and water vapor Raman lidar at 355 nm (TERA), and the second is a prototype DIAL and Doppler lidar at 2051 nm (COWI) for simultaneous wind speed and carbon dioxide (CO2) absorption measurements. The lidar observations are complemented by several in situ sensors, especially in the surface atmospheric layer. Two in situ flux stations with sonic anemometers and gas analyzers, one attached to the observatory and a second deployed a few kilometers away, are used in particular as references for turbulence-linked measurements and to assess surface property heterogeneities.
TERA: temperature and water vapor Raman lidar
The main characteristics of the TERA lidar are shown in Fig. 2. The lidar consists of a diode-pumped and seeded tripled Nd:YAG laser that provides 200 mJ pulses at 100 Hz at 354.8 nm (Merion-Lumibird SA) and a 50 cm diameter telescope with an associated scanning device (Fig. 2). The detection set-up includes several interference filters in cascade: two for the temperature rotational channels (RR1 at 354.15 nm (0.3 nm bandwidth) and RR2 at 353.3 nm (0.5 nm)) and one for H2O vibrational Raman detection at 407.7 nm (0.3 nm), similar to the system developed in [2]. An elastic channel is used to monitor the structure of the atmosphere and also to calibrate the 3-D axes of the scanning device with referenced hard targets. The detection and acquisition system uses standard devices with PMTs and LICEL TR40-12-bit systems for simultaneous analog and photon-counting detection. The power consumption, including the cooling of the laser, is lower than 5 kW. The scanning device was built in the lab and uses custom ellipsoidal gas fusion mirrors with an aluminum coating. TERA was specifically designed for high temporal and spatial resolution profiling of temperature and water vapor in the boundary layer for turbulence-linked measurements, but longer averaging scales (1 h, 200 m) enable measurements up to the stratosphere during the night.
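As background on how such a two-channel rotational Raman lidar yields temperature, the sketch below applies the widely used two-parameter calibration function T = b / (ln R - a), where R is the ratio of the two rotational Raman signals. The paper does not state TERA's actual calibration function, and the coefficients and signals here are hypothetical.

```python
# Sketch of a two-channel rotational Raman temperature retrieval, assuming
# the common two-parameter calibration R(T) = exp(a + b/T), i.e.
# T = b / (ln(R) - a). Coefficients and signals are hypothetical; in
# practice a and b are fitted against reference (e.g., radiosonde) profiles.
import numpy as np

a, b = 0.21, -450.0   # hypothetical calibration coefficients [-, K]

def temperature_from_ratio(rr1, rr2):
    """Temperature (K) from background-corrected RR channel signals."""
    ratio = np.asarray(rr1, dtype=float) / np.asarray(rr2, dtype=float)
    return b / (np.log(ratio) - a)

# Hypothetical background-corrected counts for three range gates.
rr1 = np.array([1.00e5, 7.8e4, 6.0e4])   # 354.15 nm channel
rr2 = np.array([4.05e5, 3.2e5, 2.5e5])   # 353.3 nm channel
print(temperature_from_ratio(rr1, rr2))  # ~280 K, decreasing with height
```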
COWI: CO2 and wind lidar, Doppler and DIAL
COWI lidar makes use of a Thulium fiber laser pumped dual wavelength seeded Ho:YLF MOPA emitter that provides 10 mJ pulses at 2 kHz [3]. The lidar has a coherent detection for wind speed measurement and a direct detection using a new HgCdTe APD and a 20 cm diameter telescope for differential absorption measurement of CO2 [4]. In the present configuration, two wavelengths are used to make DIAL measurements of CO2 using the R30 absorption line at 2050.97 nm but a third one may be added at 2050.53 nm to make simultaneous measurement of H2O. Spectral purity is measured to be larger than 99.96% and frequency stability (better than 150 kHz at 10s) is achieved using a lab-made frequency reference system that relies on a CO2-filled absorption cell, external frequency modulation and Pound-Drever-Hall technique. Total power supply of the lidar is lower than 2 kW.
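To illustrate how a two-wavelength DIAL measurement yields a CO2 concentration, the sketch below applies the standard DIAL equation to hypothetical on-line and off-line returns. The differential absorption cross-section, range-gate length and signal values are invented placeholders, not COWI's actual parameters.

```python
# Minimal sketch of the standard DIAL retrieval,
#   N = ln( P_off(R+dR)*P_on(R) / (P_off(R)*P_on(R+dR)) ) / (2*dsigma*dR),
# applied to hypothetical on-line/off-line range-resolved returns.
import numpy as np

dsigma = 2.0e-27   # hypothetical differential absorption cross-section [m^2]
dR = 75.0          # hypothetical range-gate length [m]

def dial_number_density(p_on, p_off):
    """CO2 number density per range gate [molecules m^-3]."""
    p_on, p_off = np.asarray(p_on, float), np.asarray(p_off, float)
    ratio = (p_off[1:] * p_on[:-1]) / (p_off[:-1] * p_on[1:])
    return np.log(ratio) / (2.0 * dsigma * dR)

# Hypothetical averaged backscatter powers at four range gates.
p_off = np.array([1.0000, 0.8000, 0.6400, 0.5100])  # reference wavelength
p_on = np.array([1.0000, 0.7976, 0.6362, 0.5054])   # absorbed more strongly
print(dial_number_density(p_on, p_off))  # ~1e22 m^-3 (~400 ppm near surface)
```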
3-D lidar sensing
The main objectives of 3-D lidar sensing are to provide tropospheric vertical profiles and, at the same time, to document the heterogeneity of the surface-layer properties that could explain these vertical profiles (Fig. 3). To this end, the lidars are operated sequentially in three different modes: (i) vertical, to obtain tropospheric characteristics, scalar profiles and moments, integral scales and fluxes using the eddy-covariance method; (ii) RHI (range-height indicator), a low-altitude vertical cross-section of the surface layer at a given azimuth angle, to apply Monin-Obukhov similarity theory and estimate surface flux heterogeneity; (iii) PPI (plan position indicator), a low-altitude horizontal cross-section of the surface layer, to measure scalar field heterogeneity.
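As a small practical aside, turning the (range, elevation, azimuth) samples of such RHI and PPI scans into Cartesian coordinates is a simple geometric transform. The sketch below assumes an east-north-up convention, which is a choice made here and not specified in the paper.

```python
# Sketch: converting lidar scan coordinates (range, elevation, azimuth)
# to Cartesian (x, y, z), as needed to grid PPI/RHI cross-sections.
# Angles in degrees; east-north-up convention assumed.
import numpy as np

def polar_to_enu(r, elev_deg, azim_deg):
    el, az = np.radians(elev_deg), np.radians(azim_deg)
    x = r * np.cos(el) * np.sin(az)   # east
    y = r * np.cos(el) * np.cos(az)   # north
    z = r * np.sin(el)                # up
    return x, y, z

print(polar_to_enu(1000.0, 5.0, 90.0))  # low-elevation gate toward the east
```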
Potential temperature and specific humidity gradients and fluxes
One of the main objectives of this lidar observatory is to provide a continuous thermodynamic view of the convective boundary layer in order to improve model parametrizations of vertical transport. Correlating radial wind speed and scalar profiles (temperature, H2O, CO2) requires co-located instruments, several altitude-referenced targets to calibrate the scanning device elevation and azimuth angles, and synchronized data acquisition systems. Fig. 4 shows an example of eddy-covariance sensible and latent heat fluxes calculated with a time and space resolution of 30 min and 50 m. Instrumental and sampling errors are indicated. Mean potential temperature and specific humidity profiles are also displayed. Statistical errors are lower than 0.5 K and 0.3 g/kg, respectively, with a time and space resolution of 2 min and 7.5 m in the first kilometer of the CBL. Comparison with radiosonde profiles shows a bias in the potential temperature profile for z < 0.3 km due to a different overlap function for the two lidar temperature channels.
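For readers unfamiliar with the eddy-covariance method mentioned here, the sketch below estimates a sensible heat flux from synchronized vertical-velocity and temperature series. The sampling rate, synthetic series and constants are hypothetical, and a real retrieval would also require despiking, detrending and error estimation.

```python
# Minimal eddy-covariance sketch: sensible heat flux H = rho * cp * <w'T'>,
# from synchronized vertical-velocity and temperature time series.
# Sample rate, series and constants are hypothetical placeholders.
import numpy as np

rng = np.random.default_rng(0)
n = 20 * 60 * 30                      # 30 min at a hypothetical 20 Hz
w = 0.4 * rng.standard_normal(n)      # vertical wind [m/s]
T = 293.0 + 0.3 * rng.standard_normal(n) + 0.2 * w  # correlated T [K]

rho, cp = 1.2, 1004.0   # air density [kg/m^3], specific heat [J/(kg K)]

w_prime = w - w.mean()                # fluctuations about the 30-min mean
T_prime = T - T.mean()
H = rho * cp * np.mean(w_prime * T_prime)   # sensible heat flux [W/m^2]
print(f"H = {H:.1f} W/m^2")
```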
Some insights into CO2 DIAL measurements
Simultaneous measurement of CO2 with H2O and temperature is of great interest to address the issue of scalar diffusivity differences and scalar dissimilarity in the boundary layer. Up to now, such measurements have been difficult to achieve given the required precision and accuracy (< 1%), especially when one wants to study turbulent exchanges. At least the CO2 field heterogeneity may be investigated with a useful geophysical precision, though with still limited time and space resolution [5]. Fig. 6 shows an example of CO2 diurnal cycle monitoring, both in the surface layer and in the boundary layer, using the scanning ability of the COWI lidar. Measurements were made using the coherent detection. Unfortunately, the direct detection that uses the HgCdTe APD prototype detector has not been successful at this point [4]. A new detector made in the framework of the ongoing HOLDON H2020 project [6] is expected to provide the necessary precision and space and time resolution for serious CO2 geophysical studies.
"year": 2022,
"sha1": "efa8c3ccfe86a0e04a4e36f5e938b6309d37dd9c",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "efa8c3ccfe86a0e04a4e36f5e938b6309d37dd9c",
"s2fieldsofstudy": [
"Environmental Science",
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
Down's syndrome with Systemic Lupus Erythematosus: Never turn a blind eye
Down's syndrome (DS, trisomy 21), with a prevalence of 1:8,000 live births, is considered the most common genetic chromosomal disorder, involving an extra full or partial copy of chromosome 21. In addition to the physical and mental developmental delays and disabilities that characterize this disorder, the vulnerability of DS patients to a variety of autoimmune diseases such as diabetes, thyroid disorders and celiac disease is also well established, suggesting an impaired immune response, especially of cell-mediated immunity. In the last three decades, a few cases of Systemic Lupus Erythematosus (SLE) have been identified and reported internationally. This case report adds to this rare association and also endorses the need to carefully evaluate and investigate DS individuals for the presence of connective tissue disorders, especially if there is already an existing autoimmune disorder such as diabetes, celiac disease or a thyroid disorder.
INTRODUCTION
Down's syndrome (DS) is the most common identified cause of intellectual disability, with an average prevalence of 1 per 1,000 registered births. [1] Patients with DS have an increased prevalence of autoimmune disorders affecting both endocrine and non-endocrine organs, including coeliac disease, diabetes mellitus, and hypo- and hyperthyroidism. An association of DS with connective tissue disorders has been hypothesized but remains rare.
CASE REPORT
We report a case of a 14-year-old female with DS who presented in the outpatient department with low-set ears, knock knees and typical facies; her milestones were delayed, but she was able to communicate in short simple sentences. Although the patient was phenotypically and genotypically DS, it was of the milder mosaic type, and there was no known family history of DS in the pedigree. Her past history was remarkable in that she had started experiencing gross tremors of the extremities 2 years earlier and was advised MRI brain and EEG in 2013, which were normal; however, in 2015, she was again advised an MRI brain, which showed subcentral hyperintense signal intensity in the parietal cortex, bilaterally near the high vertex, suggestive of small ischemic infarcts.
Three months prior to this visit, our patient experienced a rash on the chest, abdomen and right arm, which persisted, and a low-grade fever, which was ignored; she became anorexic and lethargic and complained frequently of joint pains. One month before presentation, she had pedal edema, which gradually increased, and developed a malar rash with recurrent episodes of high-grade fever and burning micturition. On examination, she was toxic, with a BP of 110/70 mmHg, pulse 104/min and temperature 100 °F, and had gross tremors, oral ulcers and a malar rash. Pedal edema was present, with dry skin. She had a pleomorphic non-blanching rash on her chest, abdomen and left upper arm; in addition, typical vasculitic lesions were present on the palms and soles.
Echocardiogram showed an ejection fraction of 60%. The patient fulfilled 6 of the 11 criteria for Systemic Lupus Erythematosus (SLE), so she was treated as a case of SLE flare with hypothyroidism. She was given prednisolone 40 mg, azathioprine 50 mg, hydroxychloroquine 200 mg, folic acid 5 mg, and an antibiotic to cover the UTI. She was also started on thyroxine 50 µg. The patient showed significant clinical improvement within one week, with a resolving rash; her infection subsided and she became afebrile. Her steroids were gradually weaned off after 3 months.
The patient has been on regular follow-up for a year, with two mild flares during the year, which were managed with a course of steroids; she is currently on azathioprine 50 mg and hydroxychloroquine 200 mg and running a stable course. Her serial thyroid profile is within the normal range, as are the baseline laboratory tests.
DISCUSSION
The prevalence of SLE in DS has been explored in the literature and found to be only occasional, with this report being the fifth in line in the last 36 years.
The first case was reported by Flanklin et al. [2] in 1985, when a 20-year-old female presented with fever, polyarthritis and rash along with hematological involvement; her ANA, dsDNA and LE cells were positive, but she was controlled on NSAIDs as the disease severity was low.
The second case was reported in 1994 by Bakkaloglu et al., [3] when an 8-year-old DS (mosaic) patient previously diagnosed with SLE in 1987 [2] presented with an acute flare; he had all the characteristic stigmata of DS with mild mental retardation. He presented with oral lesions, cutaneous findings, Coombs-positive hemolytic anemia, a positive ANA titer and high anti-DNA levels. Over the course of 5 years, he repeatedly had flares of SLE with involvement of the kidneys and hypocomplementemia (low C4 levels); it was postulated that a low C4 level may have resulted in the development of SLE at a young age. Later he had a resistant course of disease, with CT examination demonstrating bilateral calcifications of the basal ganglia, interpreted as secondary to cerebral vasculitis, which was controlled only when cyclophosphamide was started with steroids. Unfortunately, he could not survive the progressive disease, with pulmonary involvement, pancytopenia, and low C3 and C4 levels. Despite prompt treatment, the child died due to failure of the respiratory system.
Another case, reported in 1998 by Feingold & Schneller, [4] was a 30-year-old female with DS (trisomy 21) who presented with chest pain and pericardial effusion, followed by arthralgia and a photosensitive rash; further investigations revealed chronic persistent hepatitis with positive serology for ANA (1:320) and dsDNA (10.3 u/ml) and normal complement levels. The disease went into remission with oral steroids and NSAIDs, with discontinuation of therapy one year later.
The fourth case was reported in 1999 by Suwa et al., [5] when a 42-year-old Japanese female with DS (mosaic) presented with fever, rash, polyarthritis and pleuritis; she was found seropositive for ANA and LE cells and responded well to low-dose prednisone 20 mg.
Unlike the earlier case reports, our patient presented with arthralgias, a vasculitic rash, hematological and immunological manifestations and probable cerebral involvement early in the course, before the diagnosis was made. The disease, however, has been successfully controlled on medications (steroids, hydroxychloroquine and azathioprine), and she is on regular follow-up.
Among all five case reports, only one male presented, at a very early age, with severe disease activity and end-organ involvement of the CNS, renal, hematological and immunological systems, including low complement levels. Unfortunately, the child succumbed even though he was treated aggressively with steroids and pulse cyclophosphamide.
Abnormalities of cell-mediated, humoral, and phagocytic functions have been linked to patients with DS, [6] resulting in qualitative and quantitative defects in lymphocytes; these immunological aberrations may predispose DS patients to autoimmune disease. This was suggested previously by Ivarsson et al., [7] who also raised the possibility that additional genes on the chromosomes, together with precipitating environmental factors, can result in susceptibility to SLE.
The scarcity of such data may be deceptive, and probably many other cases of SLE and connective tissue disorders go undiagnosed because of their coexistence with other mimicking diseases such as hypothyroidism, infections, fibromyalgia, and psychiatric and mental disorders, only to be discovered later at a more advanced and sometimes irreversible stage. Clinicians often become biased because of the existing condition in DS, and the inability of the patient to correctly explain their complaints adds to this oversight. Another reason for missed diagnosis is the short life expectancy of patients with DS, whereas the peak age of appearance of most SLE features is in adulthood.
CONCLUSION
DS is associated with a predisposition to develop autoimmune disorders, which include diabetes, thyroid dysfunction and celiac disease. SLE is another severe systemic disease to be considered carefully, especially in females already having an autoimmune disorder. To date, there are only 4 case reports documented in this respect; unfortunately, in all 4 cases the patients were not diagnosed until later, with a flare, because of the cognitive defect.
Clinicians should be aware of the possibility of an autoimmune defect in females with DS, as they can present with SLE features at a young age. The question of whether the association of DS with SLE is coincidental or whether there is a predilection for autoimmune disorders in DS is still being investigated.
CONFLICTS OF INTEREST DISCLOSURE
The authors declare they have no conflicts of interest.
"year": 2020,
"sha1": "03a35d9b2a06fc0e79b9089ed3fa5591f04752d9",
"oa_license": null,
"oa_url": "http://www.sciedupress.com/journal/index.php/dcc/article/download/17405/11056",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "456042d7edc096d6ac6656403bc7922b44660a8d",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Microbial Communities and Organic Matter Composition in Surface and Subsurface Sediments of the Helgoland Mud Area, North Sea
The role of microorganisms in the cycling of sedimentary organic carbon is a crucial one. To better understand relationships between molecular composition of a potentially bioavailable fraction of organic matter and microbial populations, bacterial and archaeal communities were characterized using pyrosequencing-based 16S rRNA gene analysis in surface (top 30 cm) and subsurface/deeper sediments (30–530 cm) of the Helgoland mud area, North Sea. Fourier Transform Ion Cyclotron Resonance Mass Spectrometry (FT-ICR MS) was used to characterize a potentially bioavailable organic matter fraction (hot-water extractable organic matter, WE-OM). Algal polymer-associated microbial populations such as members of the Gammaproteobacteria, Bacteroidetes, and Verrucomicrobia were dominant in surface sediments while members of the Chloroflexi (Dehalococcoidales and candidate order GIF9) and Miscellaneous Crenarchaeota Groups (MCG), both of which are linked to degradation of more recalcitrant, aromatic compounds and detrital proteins, were dominant in subsurface sediments. Microbial populations dominant in subsurface sediments (Chloroflexi, members of MCG, and Thermoplasmata) showed strong correlations to total organic carbon (TOC) content. Changes of WE-OM with sediment depth reveal molecular transformations from oxygen-rich [high oxygen to carbon (O/C), low hydrogen to carbon (H/C) ratios] aromatic compounds and highly unsaturated compounds toward compounds with lower O/C and higher H/C ratios. The observed molecular changes were most pronounced in organic compounds containing only CHO atoms. Our data thus, highlights classes of sedimentary organic compounds that may serve as microbial energy sources in methanic marine subsurface environments.
INTRODUCTION
Marine sediments cover 70% of the Earth's surface. Organic matter is finely dispersed in these sediments in different concentrations, depending largely on the size of the organic matter source, water depth, and sedimentation rates (Hedges and Keil, 1995). Apart from organic matter produced in the marine system, e.g., algal and bacterial biomass rich in lipids and nitrogenous compounds, marine sediments also receive inputs of terrestrial organic matter, which is mainly derived from plant materials rich in cellulose and lignin (De Leeuw and Largeau, 1993). Regardless of source, extensive recycling of organic matter occurs in the water column (Hedges and Keil, 1995), and only about 1% of the organic carbon export reaches the seafloor on a global scale (Hedges and Keil, 1995). This detrital organic matter serves as a main energy source for microorganisms living in marine sediments (Jørgensen and Boetius, 2007).
In surface sediments, easily degradable organic matter is preferentially utilized by microorganisms (Cowie and Hedges, 1994; Wakeham et al., 1997), whereas less reactive organic matter accumulates and is buried in deeper sediments (Zonneveld et al., 2010). Consequently, microorganisms inhabiting deeper sediments have to meet their metabolic demands by relying on more recalcitrant organic matter, whose degradation requires longer time scales (Middelburg, 1989; Biddle et al., 2006). There are very few studies (e.g., Xie et al., 2013; Vigneron et al., 2014) on the nature of the organic matter mineralized by microorganisms in marine subsurface sediments. However, the consistency of the microorganisms dominating subsurface sediments across many environments may be due to special adaptations for the utilization of less reactive organic matter (Biddle et al., 2006; Inagaki et al., 2006). Dominant bacterial phyla are usually Chloroflexi and candidate division JS1 (Inagaki et al., 2006; Webster et al., 2007; Hamdan et al., 2011), while dominant Archaea are mostly members of the Miscellaneous Crenarchaeota Group (MCG) and Marine Benthic Group B (MBGB), otherwise referred to as the Deep Sea Archaeal Group (DSAG; Biddle et al., 2006; Inagaki et al., 2006; Teske and Sørensen, 2008; Kubo et al., 2012). How these important groups of microorganisms thrive, and what carbon sources they assimilate, is largely unknown.
Knowledge of the molecular composition of sedimentary organic matter is important to predict the contributions of different organic matter sources to the pool of total organic carbon (TOC; Meyers and Ishiwatari, 1993), each pool's relevance for shaping the functional diversity of microbial communities (Hunting et al., 2013) and associated energy limitations originating from substrate composition (Lever et al., 2015). However, it is a major challenge to molecularly characterize organic matter in sediments due to analytical limitations (Nebbioso and Piccolo, 2012). In the last decade, Fourier Transform Ion-Cyclotron Resonance Mass Spectrometry (FT-ICR MS) has successfully provided insights into the molecular composition of dissolved organic matter (DOM) in diverse environments (Kim et al., 2004;Koch et al., 2005;Dittmar and Koch, 2006;Hertkorn et al., 2006;Tremblay et al., 2007;Reemtsma et al., 2008;Schmidt et al., 2009Schmidt et al., , 2014Bhatia et al., 2010;D'Andrilli et al., 2010;Lechtenfeld et al., 2013;Roth et al., 2013;Kellerman et al., 2014;Seidel et al., 2014;Dubinenkov et al., 2015) due to its capacity to resolve thousands of individual components of complex organic matter based on accurate mass measurement. We applied FT-ICR MS to the water-extractable organic matter (WE-OM) fraction, which consists of free and adsorbed pore-water DOM as well as DOM that can be leached from particulate organic matter (Schmidt et al., 2014). Thus, WE-OM is representative of both pore-water DOM and its potential particulate precursor pool. This pool of organic matter may also provide utilizable carbon and nitrogen for microorganisms living in sediments and soils (Strosser, 2010;Guigue et al., 2015). However, the ubiquity, distribution, and potential relevance, as a substrate source, of individual groups of DOM molecules for microbes in marine sediments are not known.
The Helgoland mud area (German Bight of the North Sea) is one of the depocenters of fine-grained mud in the open North Sea. In periods before 1250 A.D., this area experienced higher sedimentation rates (up to 12-fold higher) and higher deposition of organic matter than nowadays (Hebbeln et al., 2003). With this work, we aim at a molecular characterization of WE-OM and prokaryotic communities in sediments from the Helgoland mud area and discuss potential links between the molecular composition of organic matter and the diversity of microbial populations in marine sediments.
Site and Sampling Description
Samples from surface sediments (up to 10 cm) and deeper sediments (up to 530 cm) from the Helgoland mud area (54° 5.00′ N, 7° 58′ E) were collected in 2012, 2013, and 2014 during cruises with the research vessels HEINCKE and UTHÖRN. Sampling sites, coordinates, and methods are described in detail by Oni et al. (2015). Microbial community analysis was performed on samples reported in the aforementioned study. For the sediment cores collected in 2012 (core UT2012, surface sediments, and core HE376-007, deeper sediments), TOC, total nitrogen (TN), and stable carbon and nitrogen isotope analyses were performed on samples from 0 to 5 cm, 5 to 10 cm, and each 25 cm section of the 500 cm sediment core described in Oni et al. (2015). The same parameters were measured on the sediment cores collected in 2013 (core HE406-8-003, deeper sediments). From sediment core HE421-004, only the 4-6 cm interval (surface sediments) was sampled, while sediment core HE406-8 was sampled in 25 cm sections at 100 cm intervals [i.e., 30-55 cm (close to the sulfate-methane transition depth, SMT (75 cm, Oni et al., 2015), termed "SMT area" hereafter), 130-155, 230-255, 330-355, and 430-455 cm (methanic zone)]. Samples from cores HE421-004 and HE406-8 were used for studying the molecular composition of organic matter by aqueous Soxhlet extraction and subsequent FT-ICR MS analysis of the extracts.
Organic Matter Analysis
Total Organic Carbon, Total Nitrogen, and Stable Carbon and Nitrogen Isotopes
To quantify the contents of TOC, TN, and their respective stable isotopes, approximately 3 g of wet sediment from each section was decalcified by treatment with 10% HCl. Afterwards, samples were washed with ultrapure water and freeze-dried, followed by grinding in a mortar. 10-30 mg of each sample was weighed into tin capsules and analyzed on a Thermo Scientific Flash 2000 elemental analyzer connected to a Thermo Delta V Plus IRMS. All values are mean values of duplicate measurements. Stable isotopic compositions (δ13C and δ15N) are reported relative to the Vienna Pee Dee Belemnite (V-PDB) standard and atmospheric N, respectively. High-resolution TOC contents were determined using a carbon-sulfur determinator (ELTRA CS 2000). About 50 mg of dried and ground sediment was weighed into ceramic crucibles. Two to three drops of ethanol were added to avoid strong bubbling. Subsequently, the sediment was decalcified with 12.5% HCl p.a. and dried on a heating plate at 250 °C. After about 2 h, the dry sediment was covered with a mixture of steel and tungsten splinters to ensure homogeneous burning of the sample. The analytical precision was better than 1%.
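As a reminder of the delta notation behind the reported δ13C and δ15N values, the sketch below evaluates δ = (R_sample / R_standard - 1) × 1000 (in permil). The sample ratio used is a made-up, marine-organic-matter-like example, not a measured value from this study.

```python
# Sketch of the delta notation used for the isotope values reported here:
# delta = (R_sample / R_standard - 1) * 1000 (permil), with R = 13C/12C
# against V-PDB, or 15N/14N against atmospheric N2. Sample ratio is invented.
R_VPDB = 0.0112372          # 13C/12C of the V-PDB standard

def delta13C(r_sample, r_standard=R_VPDB):
    return (r_sample / r_standard - 1.0) * 1000.0

print(f"{delta13C(0.0109850):.1f} permil")  # ~-22, a marine-OM-like value
```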
Soxhlet Extraction
A detailed description of extraction procedures and post-extraction steps has been provided in Schmidt et al. (2014). In brief, about 25 g of wet sediment was weighed into precombusted glass fiber thimbles (30×100 mm, Whatman). Prior to use, thimbles were extracted in ultrapure water for 48 h to remove potential contaminants. A procedural blank containing a thimble and deionized water was run to check for contamination. The thimbles were placed in the Soxhlet extraction unit and WE-OM was extracted from the sediment samples with 200 ml of distilled, deionized water for 24 h. Soxhlet extracts were filtered first with 0.7 µm (GF/F, Whatman) and then 0.2 µm (cellulose acetate, Sartorius) microbiologically sterile filters before storing extracts at 4°C until further use.
DOM Extraction
Soxhlet extracts were acidified to pH 2 with HCl (suprapur, Merck) before concentrating the DOM by solid phase extraction (SPE) using Bond Elut-PPL cartridges (500 mg, 3 ml syringe; Agilent Technologies, Germany) as described by Dittmar et al. (2008). After the extracts were adsorbed to the cartridges, salts were removed by rinsing the cartridges with 6 ml ultrapure water (pH 2). Extracts were eluted with 1 ml of methanol (LiChrosolv, Merck) and stored at −20°C in the dark until FT-ICR MS analyses.
Dissolved Organic Carbon and Total Dissolved Nitrogen
DOC and total dissolved nitrogen (TDN) concentrations were analyzed in Soxhlet extracts and SPE extracts. First, methanol was removed from aliquots of SPE extracts under a stream of nitrogen and afterwards DOM was re-dissolved in 6 ml ultrapure water. Measurements were performed by high-temperature catalytic oxidation (at 680°C) using a Shimadzu TOC/TN analyzer equipped with infrared and chemiluminescence detectors (oxygen flow: 0.6 l min−1). Prior to direct injection onto the catalyst, samples were acidified with 0.12 ml HCl (2 M) in the autosampler and purged with oxygen to remove inorganic carbon. Final DOC and TDN concentrations were average values of triplicate measurements.
FT-ICR MS
DOM extracts were analyzed on a Bruker SolariX XR FT-ICR mass spectrometer (Bruker Daltonik GmbH, Bremen, Germany) equipped with a 12 T refrigerated actively shielded superconducting magnet (Bruker Biospin, Wissembourg, France), a dual ionization source (ESI and MALDI, Apollo II electrospray source, Bruker Daltonik GmbH, Bremen, Germany) and a dynamically harmonized analyzer cell (ParaCell™, Bruker Daltonik GmbH, Bremen, Germany). Prior to measurement, the extracts were diluted with a methanol:water (1:1, v/v) mixture to the same SPE concentration for all samples (750 nmol DOC/mL). Samples were ionized using electrospray ionization in negative ionization mode at an infusion flow rate of 5 µl min−1. The ion accumulation time was set to 0.05 s and 200 scans were added to one mass spectrum. Mass spectra were acquired with 4 MW data points, resulting in a resolving power of 480,000 at m/z 400. Mass spectra were calibrated externally with arginine clusters and recalibrated internally with compounds that were repeatedly identified in marine pore-water DOM samples (cf. Schmidt et al., 2014). The root mean square error of the internal calibration was below 0.095 ppm, resulting in very reliable molecular formula assignment. Molecular formulas were calculated under consideration of the following elements: 1H(0-90), 12C(0-60), 13C(0-1), 16O(0-35), 14N(0-4), 32S(0-2), 34S(0-1), and 31P(0-2), in an m/z range of 180-600, using custom-developed software written in C++.
Formulas were restricted to integer double bond equivalent (DBE) values and a molecular element ratio of O/C ≤ 1.2. Formulas within a mass tolerance of ±0.5 ppm were considered valid. Multiple formulas were filtered with the homologous series/building block approach and isotope check (Koch et al., 2007). Molecular formulae containing 13C or 34S were excluded from the final dataset, which was limited to peaks with S/N > 7, corresponding to a relative peak intensity of 0.4%. Relative peak intensities were calculated from the total peak intensity (ΣInt_allPeaks) in the spectra according to the following equation:

RelInt_i (%) = (Int_i / ΣInt_allPeaks) × 100.

In order to reduce the complexity of data characteristically obtained from FT-ICR MS analyses, molecular formulae were first grouped into categories based on their elemental composition: (1) molecular formulae containing C, H, and O atoms (CHO), (2) molecular formulae consisting of C, H, O, and one or two N atoms (CHO-N1−2), (3) molecular formulae consisting of C, H, O, and three or four N atoms (CHO-N3−4), (4) molecular formulae containing N and P (CHNOP), (5) molecular formulae containing S (CHOS), (6) molecular formulae containing N and S (CHNOS), (7) molecular formulae containing P and S (CHOPS), as well as (8) those containing P only (CHOP). In addition, molecular formulae in the different categories were divided into five groups based on the modified aromaticity index (AImod; Koch and Dittmar, 2006), H/C and O/C ratios (e.g., Šantl-Temkiv et al., 2013; Seidel et al., 2014), hereafter referred to as groups 1-5: (group 1) polycyclic aromates (PCAs; AImod ≥ 0.67), (group 2) highly aromatic compounds, including polyphenols and PCA compounds with aliphatic chains (0.67 > AImod > 0.50), (group 3) highly unsaturated compounds, including humic compounds and carboxyl-rich alicyclic molecules (CRAM; Hertkorn et al., 2006) (AImod ≤ 0.5 and H/C < 1.5), (group 4) unsaturated aliphatic compounds (2.0 > H/C ≥ 1.5), and (group 5) saturated aliphatic compounds, which may include carbohydrate-like compounds, saturated fatty and sulfonic acids (H/C ≥ 2.0). Raw data sheets used for molecular assignments are provided as Supplementary Material (Data Sheet S4). ESI negative FT-ICR mass spectra covering all mass ranges in WE-OM and detailed mass spectra at nominal mass 385 Da (as an example) are also provided in Figures S1, S2, respectively.
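The grouping rules above are simple threshold tests on elemental counts, so they can be made concrete in a few lines of code. The sketch below is a minimal illustration (ours, not part of the original assignment pipeline), assuming the modified aromaticity index of Koch and Dittmar (2006), AImod = (1 + C − 0.5O − S − 0.5H)/(C − 0.5O − S − N − P); the function names and the example formula are hypothetical.

```python
def ai_mod(C, H, O, N=0, S=0, P=0):
    """Modified aromaticity index (Koch & Dittmar, 2006)."""
    denom = C - 0.5 * O - S - N - P
    if denom <= 0:
        return 0.0  # AImod treated as zero when the denominator vanishes
    return max(0.0, (1 + C - 0.5 * O - S - 0.5 * H) / denom)

def compound_group(C, H, O, N=0, S=0, P=0):
    """Assign a molecular formula to groups 1-5 as defined in the text."""
    ai, hc = ai_mod(C, H, O, N, S, P), H / C
    if ai >= 0.67:
        return 1  # polycyclic aromates (PCAs)
    if ai > 0.50:
        return 2  # highly aromatic compounds
    if hc < 1.5:
        return 3  # highly unsaturated compounds (incl. CRAM)
    if hc < 2.0:
        return 4  # unsaturated aliphatic compounds
    return 5      # saturated aliphatic compounds

# Example: C18H22O10 gives AImod ~ 0.23 and H/C ~ 1.22 -> group 3
print(compound_group(C=18, H=22, O=10))
```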
Microbial Community Analyses
Pyrosequencing and Sequence Analyses

DNA samples extracted as described in Oni et al. (2015), from depths 0 to 5 and 5 to 10 (surface sediments), 30 to 55 (SMT area), 180 to 205, 230 to 255, 305 to 330, 355 to 380, and 480 to 505 cm (methanic zone), were selected for 454 FLX pyrosequencing at Molecular and Research Testing Laboratory (Lubbock, Texas, USA). The same primer pairs for bacterial and archaeal 16S rRNA gene amplification as reported in Oni et al. (2015) were used. Downstream processing of sequence raw data files (SFF files) was done as reported earlier (Oni et al., 2015). Rarefaction curves (observed species based on a 97% OTU cut-off) and microbial diversity and richness indices (Shannon and Chao 1; Hughes et al., 2001; Spellerberg and Fedor, 2003) were calculated for each sample analyzed using QIIME version 1.7.0. Species diversity and richness indices along the depth profile for bacteria and archaea were calculated after normalizing the number of sequences to that of the sample with the lowest number of sequence reads. Weighted Paired Group Method of Averaging (WPGMA) cluster diagrams were generated for bacterial and archaeal OTUs.
Statistical Analyses
To investigate the strength of relationships between TOC and TN or between TOC and microbial populations with depth, Spearman correlations were calculated using the PAleontological STatistics software version 2.17c (PAST; Hammer et al., 2001).
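As a hedged illustration of this step, the same rank correlation can be computed with standard scientific Python tools instead of PAST; the depth profiles below are invented placeholders, not measured values.

```python
from scipy.stats import spearmanr

# Hypothetical depth profiles (illustrative values only)
toc = [1.9, 1.7, 1.6, 1.5, 1.4, 1.3]          # TOC (%) with depth
taxon = [0.12, 0.10, 0.11, 0.08, 0.07, 0.05]  # relative abundance of a taxon

rho, p = spearmanr(toc, taxon)
print(f"Spearman rho = {rho:.3f}, p = {p:.3f}")
```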
Water-extractable Organic Matter Analysis
WE-OM fraction in the surface and deeper sediments ranged between 1.2 and 2.4% (Table 1) with the highest portion found for the sample from 130 to 155 cm and the lowest portion for the sample from 230 to 255 cm. FT-ICR MS analysis resolved thousands of molecular formulae per sample ( Table 2). The sample from the surface sediment (4-6 cm) contained a lower number of formulae compared to samples from deeper sediments (30-455 cm). In deeper sediments, numbers of molecular formulae were higher in the samples from the methanic zone (below 130 cm) compared to the sulfate methane transition zone (SMT area; 30-55 cm). Intensity weighted averages of molecular masses (m/z wa ) were higher in deeper sediments than in surface sediments. Weighted average Double Bond Equivalent (DBE wa ) values, which denote the sum of rings and double bonds in the molecular compounds, as well as O/C wa and C/N wa ratios, were generally lower in surface sediments. Conversely, H/C wa ratio was higher in the surface sediment compared to the deeper sediments. With respect to relative intensities of peaks, total signal intensities of CHO and N-bearing compounds were highest in all samples. CHO and CHO-N 1−2 compounds were more enriched in deeper sediments whereas CHO-N 3−4 and CHNOP compound groups were most abundant in the surface sediments (Figure 2). Relative signal intensities of CHOS compounds showed no clear trend from surface sediments down to deeper sediments (Figure 2). PCAs (group 1) showed the highest relative abundance in the sample from the SMT area (4.1%), followed by sample from the surface sediment (3.8%). Their abundance decreased at 130-355 cm and below, where it ranged between 2.5 and 2.7%. In the deepest sample from 430 to 455 cm PCA compounds showed a slight increase to 3.4% (Figure 3). Highly aromatic compounds (group 2) were comparatively more abundant in all samples and showed similar trends in deeper sediments as group 1 (Figure 3). The decrease in the percentage relative intensities of PCA and highly aromatic compounds below the SMT area appeared to be most pronounced in the CHO compounds (Figures 3, 4B). Highly unsaturated compounds (group 3) were the most abundant molecular formulae group in all samples (Figure 3). In the surface sediment they constitute 47% of all peak intensities while their relative abundance increased in deeper sediments, from 58% in the SMT area to 61-64% in the methanic zone. Unsaturated aliphatic compounds (group 4) were highest in the surface sediment (40%) whereas their relative intensities decreased in the deeper sediment to approximately half (19.3-20.3%) of their total intensities in the surface sediment. The relative abundances of CHO-N 3−4 in surface sediments, were most abundant in group 4 and group 1 (Figure 3). Finally, saturated aliphatic compounds (group 5) were most abundant in the surface sediment (3.4%) in relation to samples from deeper sediments (1.4-2.2%). CHO-N 3−4 formulae made up a small portion (∼0.3-1%) of the compounds in group 5 (Figure 3).
Microbial Community Structure and Composition
The bacterial and archaeal community structure clearly differed between the surface and deeper sediments as displayed in Figures 4A,B. Specifically, deeper sediments showed a separation between bacterial populations in the SMT area and the methanic zone ( Figure 4A). However, there was no separation of archaea between the SMT area and the methanic zone ( Figure 4B). Bacterial and archaeal diversities (Shannon index) were higher in surface compared to deeper sediments ( Figure 5). Overall, no clear differences in bacterial species richness were observed between surface and deeper sediments (Figure 5). However, archaea species richness was approximately 4-9 times higher in surface than in deeper sediments (Figure 5). Estimates of the number of bacterial and archaeal OTUs detected (based on 97% sequence similarity cut-off) are shown in rarefaction curves (Figures 6A,B).
Up-to-family-level relative abundance information on bacterial and archaeal populations at each sampled depth are given in Data Sheets S1, S2, respectively.
Organic Matter-linked Microbial Populations in Deeper Sediments
Multiple sediment samples retrieved from the gravity core (HE376-007-5) made it possible to match the depth-wise distribution of bacterial and archaeal populations detected in deeper sediments to the TOC content at the depths from which samples were chosen for microbial molecular analysis. Microbial populations belonging to Chloroflexi (ρ = 0.928, p = 0.01; mainly Dehalococcoidales, candidate order GIF9), Thermoplasmata (ρ = 0.812, p = 0.07), and a candidate order of the MCG (pGrfC26; ρ = 0.899, p = 0.03) showed strong correlations to TOC (Figure 8, Data Sheet S3).
DISCUSSION
We characterized the molecular composition of the WE-OM pool of bulk organic matter in the surface and deeper sediments. In addition, the prokaryotic community composition of the Helgoland mud area was studied. Our findings, as discussed below, reveal important differences in the molecular composition of WE-OM and organic matter bioavailability, which may play a role in determining microbial populations dominating in surface and deeper sediments.

[Figure 3 | Depth-wise relative abundance distribution of intensities of compound groups classified based on modified aromaticity index (AImod), H/C and O/C ratios. At each depth, compound groups are further divided based on heteroatoms (N, S, and P). Surface sediment samples (4-6 cm) are obtained from core HE421-004. Deeper sediment samples (30-455 cm) are obtained from core HE406-008.]
Sources and Bioavailability of Organic Matter in Surface Sediments
The relative 13C enrichment of organic matter (δ13C of TOC is −23.1 to −23.4‰) in surface sediments is indicative of higher contributions of marine-derived organic matter such as algal materials (Dauwe and Middelburg, 1998; Holtvoeth, 2004; Sangiorgi et al., 2005). Algal organic matter consists of a higher portion of aliphatic and N-rich molecules (Sun et al., 1997). It has previously been shown that near-surface pore-water DOM from open marine sites with a predominance of algal material contains more molecular formulae with N and elevated H/C ratios (Schmidt et al., 2009). In line with this were the low C/N_wa and high H/C_wa ratios (Table 2) and the higher abundances of saturated and unsaturated aliphatic compounds (groups 4 and 5, Figure 3) in WE-OM from the surface sediment. In the van Krevelen diagram (Figure 9A), the difference in the CHO formulae between surface and deeper sediment is illustrated by elevated relative intensities of aliphatic compounds with low O/C ratios in the surficial WE-OM (orange to red color). Besides a change in the main organic matter source, differences in the reactivity of different organic matter types could also contribute to the molecular variations between WE-OM in the surface and deeper sediment. Saturated aliphatic compounds (group 5), which might contain fatty acids and carbohydrates, are considered easily biodegradable components of marine organic matter and are quickly lost during early diagenesis (Freese et al., 2008). The higher biodegradability of saturated and unsaturated aliphatic compounds might contribute to their lower abundances in the deeper sediment compared to the sample from the surface sediments (Figure 3).
Sources and Bioavailability of Organic Matter in Deeper Sediments
The 13C depletion of TOC in sediments from the SMT area and below is consistent with an elevated proportion of terrestrial organic matter in the deeper sediments (Figure 1). TOC showed only minor variations in δ13C in deeper sediments, which is suggestive of similar sources, attributable to the high flux of terrestrial organic matter which was deposited during periods of heavy storms and disintegration of parts of the Helgoland Island (Hebbeln et al., 2003). Terrestrial organic matter consists of a high portion of complex O-rich structures, e.g., lignin, tannin and cellulose. This is reflected in the higher abundance of O-rich aromatic and highly unsaturated compounds in the deeper sediment compared to the surface sediment (Figure 9A). Terrestrial organic matter is known to show greater recalcitrance in marine sediments compared to algal-derived organic matter (Andersen and Kristensen, 1992; Meyers and Ishiwatari, 1993; Meyers, 1994; Rontani et al., 2012). One reason for this could be pre-aging of terrestrial organic matter en route to the marine system or its higher susceptibility to encapsulation by accompanying minerals (Mayer, 1994; Keil, 2011; Lalonde et al., 2012; Riedel et al., 2013; Barber et al., 2014). In general, selective degradation strongly modifies the characteristics of residual organic matter in sediments (Meyers, 1994; Zonneveld et al., 2010). As microbes preferentially degrade the easily-utilizable portion of bulk organic matter, the more recalcitrant fractions selectively accumulate in deeper sediments (Cowie and Hedges, 1994; Wakeham et al., 1997). The generally higher abundances of CHO as well as CHO-N1−2 in the deeper sediments (Figure 2) suggest that a larger portion of the compounds represented by these formulae is relatively refractory. With respect to the molecular structures, highly unsaturated compounds (group 3) are likely to harbor a larger proportion of recalcitrant compounds as they are more abundant in deeper sediments (∼58-64%; Figure 3). Changes in the abundance of different organic matter groups within the deeper sediments could be related to organic matter degradation. The percentage relative intensities of PCA and highly aromatic formulae (mostly CHO compounds) show a slightly decreasing trend below the SMT (Figure 3). This could be a result of slow degradation of these formula groups by microorganisms in the methanic zone. Similarly, the higher abundance of CHO-N3−4 formulae in the SMT and surface sediment relative to deeper sediments suggests that the N-rich compounds are preferentially degraded and therefore less abundant in the deeper sediments. This is consistent with reports of preferential degradation of N-rich organic matter in marine sediments (Cowie and Hedges, 1991; Freudenthal et al., 2001; Sinkko et al., 2013; Barber et al., 2014; Schmidt et al., 2014).
Microbial Populations and Organic Matter Degradation in Surface Sediments
As surface sediments contained higher proportions of labile algal-derived aliphatic organic matter, bacterial groups belonging to Gammaproteobacteria, Alphaproteobacteria, and Bacteroidetes, often prominently detected during initial degradation of algal-derived organic matter in marine waters and sediments (Gutierrez et al., 2011; Teeling et al., 2012; Landa et al., 2014; Miyatake et al., 2014; Ruff et al., 2014), appeared to be more dominant therein. Flavobacteriaceae, the dominant members of the Bacteroidetes in surface sediments of our study site (Data Sheet S1, Tables 1, 2), have been consistently enriched in plankton-amended microcosm incubations as well as in natural phytoplankton blooms (Kirchman, 2002; Abell and Bowman, 2005; Bauer et al., 2006; Teeling et al., 2012). A recent study in an Arctic fjord (Smeerenburgfjord, Svalbard) has suggested a role in polysaccharide hydrolysis for members of the Verrucomicrobia phylum (Cardman et al., 2014). The occurrence of Cyanobacteria, Acidobacteria, and some members of the Chloroflexi (candidate class Ellin 6529; Data Sheet S1, Tables 1, 2) mainly in the surface sediments (Figure 7A) suggests that they may be better adapted to fresh organic matter. Dominant Deltaproteobacteria in surface sediments, namely Desulfobulbaceae, Desulfuromonadaceae, and Desulfobacteraceae (Data Sheet S1, Tables 1, 2), include various sulfate-, sulfur-, and metal-reducing bacteria that may specialize in the oxidation of low-molecular-weight organic compounds fermentatively produced from upstream degradation of the heavier organic molecules (Lovley et al., 1993, 1995; Muyzer and Stams, 2008). Ammonia resulting from organic matter degradation is a potential substrate for the dominant Thaumarchaeota (mainly Cenarchaeaceae), which include known ammonia-oxidizing archaea such as Nitrosopumilus maritimus (Könneke et al., 2005) and Candidatus Nitrosopumilus koreensis (Park et al., 2010). Candidate division Parvarchaea also constitute a dominant archaeal group in surface sediment. However, no ecological role can be predicted for this candidate phylum due to a lack of cultured members.
Microbial Populations and Organic Matter Degradation in Deeper Sediments
The recalcitrant nature of organic matter in subsurface sediments may have selected for specific microbial populations capable of its utilization, resulting in lower bacterial and archaeal diversity compared to surface sediments (Figure 5). The diversities of Bacteria and Archaea in deeper sediments were mostly covered by the number of sequences analyzed in our study as respective rarefaction curves from deeper samples were already approaching plateau (Figures 6A,B). WE-OM from deeper sediments showed higher abundances of highly unsaturated compounds compared to the surface sediment ( Figure 9A). These compounds may include CRAMs (Hertkorn et al., 2006) and some plant-derived materials rich in lignin/lignocellulosic molecules (Sleighter and Hatcher, 2008). Microbial populations dominant in deep sediments of our study site (Chloroflexi, candidate division JS1, MCG, and Thermoplasmata, Figures 7A,B), are consistent with those regularly found in marine subsurface sediments (Parkes et al., 2005;Biddle et al., 2006Biddle et al., , 2008Inagaki et al., 2006;Webster et al., 2007;Durbin and Teske, 2012;Schippers et al., 2012) and most of these microbial groups have been linked to heterotrophic metabolism (Biddle et al., 2006;Webster et al., 2007;Lloyd et al., 2013). In addition, the strong covariance of Chloroflexi (mainly Dehalococcoidales, ρ = 0.81 and candidate order GIF 9, ρ = 0.75), MCG archaea (mainly candidate order pGfrC26, ρ = 0.89), and Thermoplasmata (ρ = 0.81) to the depth profile of TOC in sediment core HE376-007-5 (Figure 8), suggests that these organisms are important for organic matter degradation in the deeper sediments of our study site as well. As organic matter source and input were relatively constant in the deeper sediments (> 30-530 cm, Figure 1), the observed shift of molecular signatures (mostly among CHO compounds) from high O/C and low to intermediate H/C ratios toward lower O/C and higher H/C ratios with increasing depth in the methanic zone ( Figure 9B), are possibly a signature of selective organic matter degradation. Similar shifts have been observed in DOM degradation experiments (Kalbitz et al., 2003;Kim et al., 2006) and in subsurface sediments of peatlands where organic matter is considerably reactive (Tfaily et al., 2013(Tfaily et al., , 2014(Tfaily et al., , 2015, but not in marine subsurface sediments so far. A likely explanation is a microbial utilization of these O-rich highly unsaturated and aromatic compounds via potential reactions such as reduction or decarboxylation ( Figure 9B). This offers an interesting new perspective to the range of organic matter potentially available for microbes in deep subseafloor as complex molecules such as for example, CRAM-like, lignin-like and tannin-like structures, as well as condensed aromatic molecules, have previously not been considered to be an important energy source for subsurface microbes. In line with our finding here, a role in fermentation of plant polymer building blocks (such as pyrogallol) has recently been predicted for a member of the candidate order GIF9 (Hug et al., 2013). In addition, members of the Dehalococcoidia are also known to be involved in the reductive degradation of substituted aromatic hydrocarbons (Alfreider et al., 2002;Fennell et al., 2004;Wasmund et al., 2014;Pöritz et al., 2015). Candidate order pGrfC26 are sub-grouped into the MCG-A or class 6 MCG (Meng et al., 2014) and are similar to Rice Cluster IV (Großkopf et al., 1998). 
These groups of MCG have been largely enriched in lignocellulose-amended cultures (Peacock et al., 2013) and may also have a role in the degradation of lignin monomers such as protocatechuate (Meng et al., 2014). The functional potential of organisms such as members of Chloroflexi and MCG in the degradation of aromatic compounds may have contributed to the molecular changes in the CHO fractions of, at least, PCA and aromatic formulae (groups 1 and 2) in the deeper sediments of our study site. Potential for degradation of aromatic compounds was found in other Chloroflexi- and MCG-dominated subsurface sediments, e.g., in the Sonora Margin, Guaymas Basin, where genes responsible for degradation of aromatic hydrocarbons such as ethylbenzene and ethylphenol increased in proportion with depth (Vigneron et al., 2014).
The presence/higher abundances of candidate lineages such as OP1, OP8, WS3, and LCP-89 and Planctomycetes (mostly Phycisphaerae), Elusimicrobia (formerly Termite Group I), Spirochaetes, and Actinobacteria in deeper sediments in relation to the surface sediments suggest that they are better suited to the conditions or more important therein. Firmicutes in our site, mostly belonging to the Bacillales and Clostridiales (Tables 1-8 of Data Sheet S1), appear less selective as they are equally abundant in the surface and deeper sediments.
Methanogenesis and AOM
Methanogenesis is the terminal step of organic matter degradation (Schink, 1997). The presence of methanogenic populations belonging to Methanosarcinaceae (harbors utilizers of methylated C1 compounds, hydrogen, and acetate), Methanosaetaceae (acetoclastic methanogenesis), Methanomicrobiales (hydrogenotrophic methanogenesis), and Methanocellales (hydrogenotrophic methanogenesis) suggests the potential for all three major pathways of methanogenesis at our site (Figure 7B). Methylotrophic methanogenesis has also been reported in members of the Thermoplasmata (Dridi et al., 2012; Paul et al., 2012; Iino et al., 2013; Poulsen et al., 2013). Thermoplasmata detected in this study all belong to the candidate order E2, a member (Candidatus Methanogranum caenicola) of which has recently been reported to reduce methanol to methane using hydrogen as an electron donor (Iino et al., 2013). Although in lower concentrations compared to surface sediments, methanol has been detected in pore waters of subsurface sediments of the Black Sea (Zhuang et al., 2014) and its source has been attributed to degradation of terrestrially derived macromolecules such as lignin and pectin (Donnelly and Dagley, 1980; Schink and Zeikus, 1980). This may explain the strong covariance of Thermoplasmata with TOC (ρ = 0.812, p = 0.07) in the deep sediment samples studied here. If the ability to utilize methylated C1 compounds is widespread among members of the candidate order E2, such a methanogenic pathway may be very important in subsurface sediments, as Thermoplasmata account for up to 17% of total archaeal populations in deeper sediments based on our sequencing method (Figures 7B, 8). However, analysis of mcrA genes and incubation studies on these sediment samples will be necessary to verify this hypothesis.
Potential for anaerobic oxidation of methane is reflected by the abundances of ANME populations (ANME-1 and ANME-2c). The highest combined abundance of ANME populations (∼30% of archaeal populations) and the highest presence of Deltaproteobacteria (mostly Desulfobacteraceae) found in the SMT area are consistent with the distinctiveness of this zone as the active site for AOM coupled to sulfate reduction (Boetius et al., 2000). Nevertheless, potential for AOM in the Helgoland mud area may extend deeper into the methanic zone where iron reduction is occurring (Oni et al., 2015) suggesting the possibility of AOM coupled to iron reduction (Beal et al., 2009). ANME-1 were detected in all samples taken below 30-55 cm depth (4-8% of archaeal populations) in analogy to previous observations (Lloyd et al., 2011) and ANME-2c were also found in high proportion at 230-255 cm (16% of archaeal population).
CONCLUSIONS
Our study suggests that the amount and composition of organic matter may influence the distribution of microbial populations in surface and deeper sediments of the Helgoland mud area (e.g., as seen in Figure 8). While nitrogen-rich, aliphatic organic compounds of presumed algal origin are mostly available for microorganisms in surface sediments, the subsurface sediments are dominated by aromatic and unsaturated phenolic compounds that presumably originate from terrestrial sources. Microorganisms dominating deeper sediments of our study site are consistent with those commonly found in other marine subsurface sediments. These dominant bacterial and archaeal populations are strongly correlated to the TOC content, suggesting involvement in the degradation of organic matter in deeper sediments of our study site. Consistently, we observed molecular transformations in the water-extractable (potentially microbially-available) portion of bulk organic matter in subsurface sediments (particularly within the methanic zone), showing a shift from a higher abundance of O-rich molecules in the shallower subsurface (higher O/C ratio) toward a higher abundance of more reduced compounds (with higher H/C and lower O/C ratios). The assemblage of formulae corresponds to PCA, aromatic and highly unsaturated molecules that may include lignins, tannins and CRAM equivalents (groups 1-3), and is consistent with recent findings that O-rich compounds are also preferentially depleted in highly-reactive peatland subsurface sediments (Tfaily et al., 2015). We therefore conclude that organic matter with such oxygen-rich phenolic and aromatic compounds may be an important energy source for microorganisms inhabiting marine subsurface environments characterized by high depositional rates, such as the Helgoland mud area. The findings presented here thus shed more light on our understanding of molecular transformations of WE-OM in marine sediments and could accelerate ongoing efforts to culture microorganisms or enrich active microbial consortia in marine subsurface sediments. In the future, detailed analyses of functional genes linked to the degradation of algal polymers, aromatic and phenolic compounds in marine sediments will be necessary to confirm microbial involvement in the observed depth-wise molecular transformations in organic matter composition.
ACKNOWLEDGMENTS
This study was supported by the Research Center/Cluster of Excellence "The Ocean in the Earth System" (MARUM) funded by the Deutsche Forschungsgemeinschaft (DFG), by the University of Bremen, and by the European Research Council under the European Union's Seventh Framework Programme-"Ideas" Specific Programme, ERC grant agreement No. 247153 (project DARCLIFE) to KUH. The authors thank the captain, crew, and scientists of R/V HEINCKE expeditions HE376, HE406, and HE421. Dr. Carolina Reyes is thanked for her help with sectioning of sediment core HE376-007-5. We also thank the captain and crew of RV UTHÖRN for their help during UT-2012 sampling. We are grateful to Boris Koch for DOC measurements and Jenny Wendt for help on carbon and nitrogen measurements. We thank Benjamin Löffler and Gerhard Kuhn for carrying out and support with the high-resolution TOC measurements. We acknowledge additional funding by the Max Planck Society and the Helmholtz Association (Alfred Wegener Institute Helmholtz Centre for Polar and Marine Research) in the framework of the research programs PACES I and PACES II. We thank the reviewers for their constructive comments.
SUPPLEMENTARY MATERIAL
The Supplementary Material for this article can be found online at: http://journal.frontiersin.org/article/10.3389/fmicb.2015.01290 Figure S1 | ESI negative FT-ICR mass spectra of WE-OM extracted from the sediment cores of the Helgoland mud area. The largest peaks are contaminants (listed in the surfactant database: http://www.terrabase-inc.com//Surfactants.htm) and were removed from the final data set. Figure S2 | FT-ICR mass spectra at mass 385 Da for WE-OM with increasing sediment depth from top to bottom. Symbols refer to different compound groups and homologous series. Homologous series are defined as the functional relationship between molecular formulae that differ by a specific mass difference equivalent to a chemical building block [in this case CH4 replaced by O (0.036 Da)]. | 2016-06-17T06:54:19.222Z | 2015-11-25T00:00:00.000 |
"year": 2015,
"sha1": "531cff34651c916e4df7e3e514f0cb24e7c7fca2",
"oa_license": "CCBY",
"oa_url": "https://www.frontiersin.org/articles/10.3389/fmicb.2015.01290/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "531cff34651c916e4df7e3e514f0cb24e7c7fca2",
"s2fieldsofstudy": [
"Environmental Science"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
17654037 | pes2o/s2orc | v3-fos-license | M-curves of degree 9 with three nests
The first part of Hilbert's sixteenth problem deals with the classification of the isotopy types realizable by real plane algebraic curves of a given degree $m$. For $m = 9$, the classification of the $M$-curves is still wide open. Let $C_9$ be an $M$-curve of degree 9 and $O$ be a non-empty oval of $C_9$. If $O$ contains in its interior $\alpha$ ovals that are all empty, we say that $O$ together with these $\alpha$ ovals forms a nest. The present paper deals with the $M$-curves with three nests. Let $\alpha_i, i = 1, 2, 3$ be the numbers of empty ovals in each nest. We prove that at least one of the $\alpha_i$ is odd. This is a step towards a conjecture of A. Korchagin, claiming that at least two of the $\alpha_i$ should be odd.
Real and complex schemes
The first part of Hilbert's sixteenth problem deals with the classification of the isotopy types realizable by real plane algebraic curves of given degree. Let A be a real algebraic non-singular plane curve of degree m. Its complex part CA ⊂ CP 2 is a Riemann surface of genus g = (m − 1)(m − 2)/2; its real part RA ⊂ RP 2 is a collection of L ≤ g + 1 circles embedded in RP 2 . If L = g + 1, we say that A is an M-curve. A circle embedded in RP 2 is called an oval or a pseudo-line depending on whether it realizes the class 0 or 1 of H 1 (RP 2 ). If m is even, the L components of RA are ovals; if m is odd, RA contains exactly one pseudo-line, which will be denoted by J . An oval separates RP 2 into a Möbius band and a disc. The latter is called the interior of the oval. An oval of RA is empty if its interior contains no other oval. One calls exterior oval an oval that is surrounded by no other oval. Two ovals form an injective pair if one of them lies in the interior of the other one. Let us call the isotopy type of RA ⊂ RP 2 the real scheme of A; it will be described with the following notation due to Viro. The symbol J stands for a curve consisting of one single pseudo-line; n stands for a curve consisting of n empty ovals. If X is the symbol for a curve without pseudo-line, 1 X is the curve obtained by adding a new oval containing all of the others in its interior. Finally, a curve which is the union of 2 disjoint curves A and B , having the property that none of the ovals of one curve is contained in an oval of the other curve, is denoted by A ∐ B . The classification of the real schemes which are realizable by M-curves of a given degree in RP 2 is part of Hilbert's sixteenth problem. This classification is complete up to degree 7. For m ≥ 8, one restricts the study to the case of the M-curves. The classification is almost complete for m = 8, and still wide open for m = 9. A systematic study of the case m = 9 has been done, the main contribution being due to A. Korchagin. See e.g. [11], [8], [9], [10], [13] for the constructions, and [6], [7], [11], [1], [3], [4], [14], [15] for the restrictions.
Let us briefly recall some facts about complex orientations. The complex conjugation conj of CP 2 acts on CA with RA as fixed points sets. Thus, CA\ RA is connected, or splits in 2 homeomorphic halves which are exchanged by conj. In the latter case, we say that A is dividing. Let us now consider a dividing curve A of degree m, and assume that CA is oriented canonically. We choose a half CA + of CA \ RA. The orientation of CA + induces an orientation on its boundary RA. This orientation, which is defined up to complete reversion, is called complex orientation of A. One can provide all the injective pairs of RA with a sign as follows: such a pair is positive if and only if the orientations of its 2 ovals induce an orientation of the annulus that they bound in RP 2 . Let Π + and Π − be the numbers of positive and negative injective pairs of A. If A has odd degree, each oval of RA can be provided with a sign: given an oval O of RA, consider the Möbius band M obtained by cutting away the interior of O from RP 2 . The classes [O] and [2J ] of H 1 (M) either coincide or are opposite. In the first case, we say that O is negative; otherwise O is positive. Let Λ + and Λ − be respectively the numbers of positive and negative ovals of RA. The complex scheme of A is obtained by enriching the real scheme with the complex orientation: let e.g. A have real scheme J ∐ 1 α ∐ β . The complex scheme of A is encoded by J ∐ 1 ǫ α + ∐ α − ∐ β + ∐ β − where ǫ ∈ {+, −} is the sign of the non-empty oval; α + , α − are the numbers of positive and negative ovals among the α; β + , β − are the numbers of positive and negative ovals among the β (remember that all signs are defined with respect to the orientation of J ).
Rokhlin-Mishachev formula: If m = 2k + 1, then

Λ + − Λ − + 2(Π + − Π − ) = l − k(k + 1),

where l denotes the number of ovals of RA.

Fiedler theorem: Let L t = {L t , t ∈ [0, 1]} be a pencil of real lines based in a point P of RP 2 . Consider two lines L t 1 and L t 2 of L t , which are tangent to RA at two points P 1 and P 2 , such that P 1 and P 2 are related by a pair of conjugated imaginary arcs in CA ∩ (∪L t ).
Orient L t 1 coherently to RA in P 1 , and transport this orientation through L t to L t 2 . Then this orientation of L t 2 is compatible to that of RA in P 2 .
Results
The main result of the present paper is the following:

Theorem: Let C 9 be an M-curve of degree 9 with three nests, and let α i , i = 1, 2, 3 be the numbers of empty ovals in the nests. Then at least one of the α i is odd.

This Theorem represents a step towards a conjecture from [8].
We prove also a few results on complex orientations and rigid isotopy for the curves C 9 with some α i odd.
First properties

2.1 Descriptive lemmas and definitions
Let C 9 be an M-curve of degree 9. Given an empty oval X of C 9 , we will often have to consider one point chosen in the interior of X. For simplicity, we shall call this point also X. In the following, it will be clear from the context whether we speak of the oval or of the point X. We denote the pencil of lines based in X by F X . Let [XY ] and [XY ] ′ be the two segments of line determined by X and Y , cutting J respectively an even and an odd number of times. We say that [XY ] is the principal segment determined by X, Y . Let X, Y, Z be three ovals of C 9 . The corresponding three points X, Y and Z determine 4 triangles of RP 2 . We will call principal triangle, and denote by XY Z, the triangle whose sides are the principal segments [XY ], [Y Z], [ZX] (Figure 1).

[Figure 1: jump]
Lemma 1 If C 9 has a jump, then D is exterior.
Proof Assume i = 1. Let A, B, C be 3 ovals in Int(O 1 ). By Bezout's theorem with the conic through A, B, C, A 2 , A 3 , the lines AB, AC, BC must all cut the same segment [ Proof This follows immediately from Lemma 3.
Definition 5 Let C 9 have a jump in O 3 , and A be any empty oval of Proof Let C 2 be the conic through A 1 , A 2 , B, C, E.
1. Let E ∈ T 3 . One has a priori C 2 = A 1 EA 2 CB or BA 1 CA 2 E. Applying Bezout's theorem with C 9 , one gets: C 2 = A 1 EA 2 CB, and the arc CB of C 2 lies inside of O 3 . Thus, D / ∈ C 2 , and E ∈ A 1 A 2 CDB. The conic A 1 A 2 CDB cuts C 9 at 20 points. Contradiction.
By symmetry, we can suppose that E ∈ T 1 . One has a priori Thus D ∈ C 2 and E ∈ A 1 A 2 CDB. The conic A 1 A 2 CDB cuts C 9 at 20 points. Contradiction.
Complex orientations
Let a ± i be the numbers of positive and negative interior ovals of the nest O i . Let A be any empty oval of O j ∪ O k . Consider the pencil of lines F A , sweeping out O i . By Fiedler's theorem, the empty ovals met by this pencil have alternating orientations. It follows from Lemmas 1, 2 and 4 that the ordering of the ovals in the chain is independent from the choice of A. There is at most one jump in The equality occurs if and only if O i has a jump with repartition l 1 , l 2 , l 3 , with each l n , n = 1, 2, 3 odd. Let us call principal ovals the ovals O 1 , O 2 , O 3 , A 1 , A 2 , A 3 . Let us call base ovals the empty principal ovals A 1 , A 2 , A 3 . Denote by ǫ n , n = 1, 2, 3, 4, 5, 6, ǫ n ∈ ±1 the respective contributions of these 6 ovals to Λ + − Λ − . Let λ 0 , λ 1 , λ 2 , λ 3 , λ 4 , λ 5 , λ 6 be the contributions to Λ + − Λ − brought respectively by the non-principal ovals of the zones T 0 , Q 1 , Q 2 , Q 3 , T 1 , T 2 , T 3 .
Lemma 8 One has:
Proof Apply Fiedler's Theorem to the pencils of lines F A 1 : Lemma 9 If α i is odd, then the oval O i is non-separating.
Proof Let O 1 be separating. By lemma 6, O 1 has no jump. Consider the Fiedler chain formed by the empty ovals in Int(O 1 ). Let A 1 and A ′ 1 be the 2 extreme ovals of this chain, such that A ′ 1 ∈ T 0 and A 1 ∈ T ′ 1 . Take as base ovals first the triple (A 1 , A 2 , A 3 ) and then the triple (A ′ 1 , A 2 , A 3 ). For either case, we write the contributions of the triangles, the quadrangles and the base ovals to Λ + − Λ − . One has: ǫ ′ i = ǫ i for i = 1, 2, 3, 5, 6; λ ′ i = λ i for i = 1, 2, 3, 5, 6.
1. α 1 even, A 1 positive: Write the fourth identity in Lemma 8 for either choice of the base ovals. Subtracting the one identity from the other, one gets: The two cases where α 1 is odd yield a contradiction.
Inequalities
Let C 9 be, as in the previous section, an M-curve of degree 9 with real scheme J ∐ 1 α 1 ∐ 1 α 2 ∐ 1 α 3 ∐ β . Let us perform a Cremona transformation cr : (x 0 : x 1 : x 2 ) → (x 1 x 2 : x 0 x 2 : x 0 x 1 ). We shall denote the respective images of the lines ( For the other points, we use the same notation as before cr. The curve C 9 is mapped onto a curve C 18 of degree 18 with 3 singular points. We shall call main part of C 18 the piece formed by the images of J and the principal ovals. See Figure 4, where cr(A i ) and cr(O i ) stand for the images of the ovals A i and O i . An oval A of C 18 will be said to be interior, exterior, positive or negative if its preimage is.
. The ovals of Int(O) and their preimages will be called O-inner ovals; the ovals of Ext(O) and their preimages will be called O-outer ovals.
k , P ′ j in this ordering. Each arc joining two consecutive points cuts cr(O k ), cr(A k ), cr(O j ) and cr(A j ).
If any of the following conditions is verified, then
We leave the proof to the reader.
Lemma 12 Let C 2 be a conic passing through 5 ovals B 1 , . . . B 5 of C 18 , and having at least 4 intersection points with O. Then one of the three base lines, say A 1 A 2 is non-maximal with respect to C 2 , and the points A 1 , A 2 lie outside of C 2 .
Proof If the three base lines are maximal with respect to C 2 , then C 2 cuts the images of the principal ovals at 24 points, O at 4 points, and the union ∪B i , i = 1, . . . , 5 at 10 points. Contradiction. A base line, say A 1 A 2 is non-maximal; Lemma 10 implies that A 1 , A 2 lie outside of C 2 .
Proof Lemma 12 implies that a base line, say A 1 A 2 , is non-maximal for C 2 ; by Lemma 11 (2), the points A 1 , A 2 lie on the same arc s of O \(O ∩ C 2 ), that is exterior to C 2 . Bezout's theorem applied to C 18 with the lines B l B m implies that the endpoints of s are either on two consecutive arcs of C 2 , or on the same arc of C 2 . In the first case, one can assume that the consecutive arcs are B 1 B 2 , B 2 B 3 . Assume that B 5 B 1 cuts O and consider the conics Both of them cut O at 6 points ( Figure 5). One of the base points A j , j = 2, 1 lies in the interior of the conic C 2 (A i ), i = 1, 2. The preimage of C 2 (A i ) is a rational cubic C 3 (A i ) passing through A 1 , A 2 , A 3 , B 3 , B 4 , B 5 , B 1 , with double point at A i ( Figure 6). This cubic cuts: each of the ovals A i , O i at 4 points, each of the other base ovals A k , O k , A j , O j at 2 points, the set {B 3 , B 4 , B 5 , B 1 } at 8 points, and J at 5 points. Hence in total 29 intersection points with C 9 . Contradiction. In the second case, one can assume that the endpoints of s are on B 1 B 2 , B 2 B 3 , or B 5 B 1 . With similar arguments as above, one gets again a contradiction, letting C 2 (A i ), i = 1, 2 be respectively:
Lemma 14
The curve C 18 cannot contain a configuration of 6 ovals Figure 6: If C 2 is one of these 3 conics, let 2 base points lie on the same exterior arc of O \ (O ∩ C 2 ). By Lemma 13, these points lie in the interior of one of the other 2 conics (Figure 8). Contradiction.
In the proofs of the next two propositions, we consider conics passing through some empty ovals of C 18 . Several times, we find a conic that is maximal with respect to the 3 base lines. The maximality follows always from Lemma 11 (1) To each conic determined by 5 given points, we associate the pentagon having these points as vertices. Choose a line at infinity L that does not cut any of the pentagons interior to the 3 conics. The points B 1 , D 3 , B 2 , D 1 , B 3 , D 2 lie in convex position in the affine plane RP 2 \ L. The hexagon H = B 1 D 3 B 2 D 1 B 3 D 2 gives rise to a natural cyclic ordering of the 6 lines supporting its edges. Let Z k , k ∈ {1, . . . , 6} be the 6 triangles that are supported by triples of consecutive lines, such that Z k and H intersect along an edge. The base lines are distributed in the 3 zones: All of the remaining ovals of C 18 lie in ∪Z k . There is a natural cyclic ordering of the empty ovals of C 18 given by the pencils of lines F B i , i = 1, 2, 3 sweeping out the 6 triangles, and the pencils F D i , i = 1, 2, 3 sweeping out the 4 triangles that do not have D i as vertex. The cyclic chain of ovals splits into 6 successive groups, that are alternatively inside and outside of T 0 . By Fiedler's theorem: λ 0 + λ 1 + λ 2 + λ 3 − λ 4 − λ 5 − λ 6 = 0. Combining this with the fourth identity in Lemma 8, one gets: 2λ 0 = − ǫ i . Thus, |λ 0 | ≤ 3 and if λ 0 = ±3, then ǫ i = ∓6. If |λ 0 | = 3, then C 9 has at most one non-separating oval. Moreover, if O 3 is non-separating, α 3 = 2 and |a + 3 − a − 3 | = 2. By Lemma 9, if O i is separating, then α i is even.
Proof Perform the cremona transformation cr, and denote by B i , i = 1, . . . , n the exterior ovals of C 18 lying in cr(T l ). Using conics, we prove the following facts. Let B i , B j , B k be 3 such ovals, the triangle B i B j B k whose vertices do not cut the base lines is empty. Thus, the ovals of cr(T l ) lie in convex position in this zone. Each of the maximal pencils of lines F B i gives rise to a cyclic ordering of all other ovals of C 18 . Consider 2 ovals B i , B j that are consecutive for some pencil F B k . Then, B i B j are consecutive for any other pencil F D based in another empty oval D. Thus, one may speak of a Fiedler chain of ovals in cr(T l ), without refering to a base point. Assume that |λ l+3 | ≥ 3, so there are at least 3 distinct Fiedler chains of ovals in cr(T l ). Let 3 ovals B 1 , B 2 , B 3 and 2 of the other ovals. By Bezout's theorem with C 18 , these conics are: Choose a line at infinity L that does not cut any of the 3 interior pentagons. The points B 1 , D 3 , B 2 , D 1 , B 3 , D 2 lie in convex position in the affine plane RP 2 \ L. The hexagon H = B 1 D 3 B 2 D 1 B 3 D 2 gives rise to a natural cyclic ordering of the 6 lines supporting its edges. Let Z k , k ∈ {1, . . . , 6} be the 6 triangles that are supported by triples of consecutive lines, such that Z k and H intersect along an edge. All of the remaining ovals of C 18 lie in ∪Z k . There is a natural cyclic ordering of the empty ovals of C 18 given by the pencils of lines F B i , i = 1, 2, 3 sweeping out the 6 triangles, and the pencils F D i , i = 1, 2, 3 sweeping out the 4 triangles that do not have D i as vertex. By Lemma 14, one of the D i , say is crossing, and by the pencil of conics F A 1 A 2 A 3 C : Proof Let O 3 be crossing, let E ∈ T 0 ∪ T 1 ∪ T 2 and H be an oval met after E by the pencil of conics Figure 9: Figure 10: first that E ∈ T 0 ∪ T 1 . Perform the cremona transformation cr(A 1 , A 2 , A 3 ), and consider the pencil of conics F A 1 ECD . The possible positions for the double lines of this pencil are shown in Figure 10. The preimage of F A 1 ECD is the pencil of rational cubics F A 1 A 1 A 2 A 3 ECD . This pencil has five singular cubics, whose images in F A 1 ECD are the three double lines and the two conics through A 2 respectively A 3 . If E ∈ T 0 , there is only one possible sequence of singular cubics for F A 1 A 1 A 2 A 3 ECD (Figure 11). If E ∈ T 1 , there are four possible sequences of singular cubics for F A 1 A 1 A 2 A 3 ECD (see Figures 12,13,14,15). Let H be one of the remaining ovals of C 18 . Let E ∈ T 0 , H is swept out in the portion F A 1 ECD : Moreover, if H is met after E by the pencil of lines F C : A 2 → A 3 → A 1 , then H lies inside of O = cr(J ). Let E ∈ T 1 , H is swept out by F A 1 ECD in the portion: In all cases, if H is met after E by the pencil of lines F C : Notice that there are four possibilities for the pencil of rational cubics F A 2 A 2 A 1 CDA 3 , which are deduced from the pencils F A 1 A 1 A 2 A 3 ECD , E ∈ T 1 by an axial symmetry switching (A 1 , A 3 ) with (A 2 , C). The result follows immediately. Figure 12: Figure 13: E ∈ T 1 , case 2 Figure 14: Figure 15: Figure 16: Let O 3 be non-crossing and F ∈ T 3 . Let H be an oval met before F by the pencil of conics We shall prove that H must be also in T 3 . Perform the cremona transformation cr, and consider the pencil of conics F A 3 F CD . The double lines of this pencil are shown in Figure 10. The preimage of F A 3 F CD is the pencil of rational cubics F A 1 A 2 A 3 A 3 F CD . 
This pencil has five singular cubics, whose images in F A 3 F CD are the three double lines and the two conics through A 1 respectively A 2 . There is only one possible sequence of singular cubics for (Figure 16). Let H be one of the remaining ovals of C 18 , H is swept out by Lemma 16 Let C 9 be an M -curve of degree 9 with a jump. One of the three possibilities hereafter arises: Proof It follows immediately from Lemma 15 and the fact that Π + − Π − ≤ 4. First complex orientation formula (Orevkov) Let C m be such that the nests N i , i ∈ {1, 2, 3, 4} have respective depths d, d, d, d − 1, and p l , l ∈ {1, 2, 3, 4} lies in the principal triangle determined by the other three points p i , p j , p k , then: Second complex orientation formula (Orevkov) Let C m be such that: 5.2 Proof of the conjecture for the case α 1 , α 2 , α 3 even Let C 9 be an M -curve of degree 9 with real scheme J ∐ 1 α 1 ∐ 1 α 2 ∐ 1 α 3 ∐β . The complex scheme of C 9 is determined by the complex schemes of the 3 nests O i = 1 α i , i ∈ {1, 2, 3}. The complex scheme S i of a nest O i is encoded as follows: We replace the standard encoding by a simpler one, writing: where the letters u and d stand respectively for up and down. We call complex typeS i of a nest O i the complex scheme of O i , improved with the information whether O i is up, down or non-separating. If S i = ν i , then there are 3 possibilities forS i : (ν i , u), (ν i , d) and (ν i , n) (n stands for non-separating). In the other cases, the oval O i is non-separating, and we use the same notation as for the complex scheme. Let us call complex type of C 9 , and denote byS the triple (S 1 ,S 2 ,S 3 ).
Proof The first formula applies making N i , i = 1, 2, 3 and N 4 = B.
Let O i be separating. Remember that by Lemma 9, α i must be even. Again, choose base ovals A 1 , A 2 , A 3 , and let A 4 be a fourth oval, interior to O i , lying in T i . Let N l = (A l , O l ), l = 1, 2, 3 and N 4 = (A 4 , O i ). Let: F i =Π ′ i +Π 4 − (Q 2 i − 2Q i + P 2 4 − P 4 + ν(O i )), and G l = P 2 l − P l − Π l . It is easily seen that G l depends only on S i , and F i depends only onS i .
Lemma 18 Let C 9 have three nests and separating O i . Then, F i = G j +G k Proof The second formula applies with the nests N l , l = 1, 2, 3, 4. In Figure 17, 18 we computed the terms appearing in Lemmas 17, 18. | 2010-09-14T08:21:41.000Z | 2008-06-27T00:00:00.000 | {
"year": 2008,
"sha1": "d581fdae241a8d44d203e0339e3b3b0d2696b548",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "d581fdae241a8d44d203e0339e3b3b0d2696b548",
"s2fieldsofstudy": [
"Mathematics"
],
"extfieldsofstudy": [
"Mathematics"
]
} |
2215564 | pes2o/s2orc | v3-fos-license | Near-thermal equilibrium with Tsallis distributions in heavy ion collisions
Hadron yields in high energy heavy ion collisions have been fitted and reproduced by thermal models using standard statistical distributions. These models give insight into the freeze-out conditions at varying beam energies. In this paper we investigate changes to this analysis when the statistical distributions are replaced by Tsallis distributions for hadrons. We investigate the appearance of a near-thermal equilibrium state at SPS and RHIC energies. We obtain better fits, with smaller chi^2, for the same hadron data as used earlier in the thermal fits at SPS energies, but not at RHIC energies. This result indicates that at RHIC energies the final state is very well described by a single freeze-out temperature with very little room for fluctuations.
Introduction
After many years of investigating hadron-hadron and heavy ion collisions, the study of hadron production remains an active and important field of research. The lack of detailed knowledge of the microscopic mechanisms has led to the use of many different models, often from completely opposite directions. Thermal models, based on statistical weights for produced hadrons [1,2,3,4], are very successful in describing particle yields at different beam energies [5,6,7,8], especially in heavy ion collisions. These models assume the formation of a system which is in thermal and chemical equilibrium in the hadronic phase and is characterised by a set of thermodynamic variables for the hadronic phase. The deconfined period of the time evolution dominated by quarks and gluons remains hidden: full equilibration generally washes out and destroys large amounts of information about the early deconfined phase. The success of statistical models implies the loss of such information, at least for certain properties, during hadronization. It is a basic question as to which ones survive the hadronization and behave as messengers from the early (quark dominated) stages, especially if these are strongly interacting stages.
In the case of full thermal and chemical equilibrium, relativistic statistical distributions can be used, leading to exponential spectra for the transverse momentum distribution of hadrons. On the other hand, experimental data at SPS and RHIC energies display non-exponential behaviour at high p T . One explanation of this deviation is connected to the power-like hadron spectra obtained from perturbative QCD descriptions: the hadron yield from quark and gluon fragmentation overwhelms the thermal (exponential) hadron production. However, this overlap is not trivial. One can assume the appearance of near-thermal hadron distributions, which are similar to the thermal distribution at lower p T but have a non-exponential tail at higher p T . A stationary state of a strongly interacting hadron gas in a finite volume (or of strongly interacting quark matter, which will hadronize into hadron matter) can be characterized by such a distribution. Tsallis distributions satisfy such criteria [10,11]. In the next Section we will review the Tsallis distribution and emphasize the properties most relevant to particle yields.
Relation between the Boltzmann and Tsallis distributions
Neglecting quantum statistics, the entropy of a particle of species i is given by [9]

s_i^B = - \int \frac{d^3p}{(2\pi)^3} \left[ n_i^B \ln n_i^B - n_i^B \right], (1)

where the mean occupation numbers, n_i^B, are given by

n_i^B = g_i \exp\left( - \frac{E_i - \mu_i}{T} \right), (2)

with g_i being the degeneracy factor of particle i. The total number of particles of species i is given by an integral over phase space of eq. (2):

N_i^B = g_i V \int \frac{d^3p}{(2\pi)^3} \exp\left( - \frac{E_i - \mu_i}{T} \right). (3)

The transition to the Tsallis distribution makes use of the following substitutions [10]

\ln x \rightarrow \ln_q x \equiv \frac{x^{1-q} - 1}{1-q}, (4)

\exp(x) \rightarrow \exp_q(x) \equiv \left[ 1 + (1-q)x \right]^{1/(1-q)}, (5)

which leads to the standard result [10,11]

n_i^T = g_i \left[ 1 + (q-1) \frac{E_i - \mu_i}{T} \right]^{-1/(q-1)}, (6)

which is usually referred to as the Tsallis distribution [10,11]. As these number densities are not normalized, we do not use the normalized q-probabilities which have been proposed in Ref. [11]. In the limit where q → 1 this becomes the Boltzmann distribution:

n_i^B = g_i \exp\left( - \frac{E_i - \mu_i}{T} \right). (7)

The particle number is now given by

N_i^T = g_i V \int \frac{d^3p}{(2\pi)^3} \left[ 1 + (q-1) \frac{E_i - \mu_i}{T} \right]^{-q/(q-1)}. (8)

Note that q = 1.5 is the maximum value that still leads to a convergent integral in eq. (8). A derivation of the Tsallis distribution, based on the Boltzmann equation, has been given in Ref. [12]. A comparison between the two distributions is shown in Fig. (1), where it can be seen that, at fixed values of T and µ, the Tsallis distribution is always larger than the Boltzmann one if q > 1. Taking into account the large p T results for particle production we will only consider this possibility in this paper. As a consequence, in order to keep the particle yields the same, the Tsallis distribution always leads to smaller values of T for the same set of particle yields. The dependence on the chemical potential is also illustrated on the right of Fig. 1 for a fixed temperature T and a fixed energy E. As one can see, the Tsallis distribution in this case increases with increasing q. The Tsallis distribution for quantum statistics has been considered in Refs. [13,14,15,16].
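To make the comparison of Fig. 1 concrete, the momentum integrals for the two distributions can be evaluated numerically. The sketch below is our illustration (not the paper's fit code); it assumes the −q/(q−1) power of eq. (8), and the pion-like parameter values are chosen arbitrarily.

```python
import numpy as np
from scipy import integrate

def number_density(T, mu, m, g=1.0, q=1.0):
    """Particle number density in natural units (T, mu, m in GeV -> GeV^3).
    q = 1 uses the Boltzmann weight of eq. (3); q > 1 uses the Tsallis
    weight with the -q/(q-1) power of eq. (8)."""
    def integrand(p):
        E = np.sqrt(p * p + m * m)
        if q == 1.0:
            w = np.exp(-(E - mu) / T)
        else:
            w = (1.0 + (q - 1.0) * (E - mu) / T) ** (-q / (q - 1.0))
        return g * p * p * w / (2.0 * np.pi ** 2)
    n, _ = integrate.quad(integrand, 0.0, 50.0)  # tail decays fast for q < 1.5
    return n

# Pion-like particle at fixed T: the q > 1 yield exceeds the q = 1 yield,
# so fitting the same measured yield with q > 1 forces a lower temperature.
print(number_density(T=0.160, mu=0.0, m=0.140, q=1.0))
print(number_density(T=0.160, mu=0.0, m=0.140, q=1.07))
```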
Relation between the Tsallis parameter q and temperature fluctuations
The parameter q plays a central role in the Tsallis distribution, and a physical interpretation is needed to appreciate its significance. To this end we follow the analysis of Ref. [17] and write the Tsallis distribution as a superposition of Boltzmann distributions,

$$n^T(E) = \int_0^\infty dT_B\; f(T_B)\, \exp\!\left(-\frac{E}{T_B}\right),$$

where the detailed form of the function f is given in [17]. The parameter T_B is the standard temperature as it appears in the Boltzmann distribution. It is straightforward to show [17] that the average value of 1/T_B is given by the Tsallis temperature,

$$\left\langle \frac{1}{T_B} \right\rangle = \frac{1}{T},$$

while the fluctuation in the temperature is given by the deviation of the Tsallis parameter q from unity,

$$\frac{\big\langle 1/T_B^2 \big\rangle - \big\langle 1/T_B \big\rangle^2}{\big\langle 1/T_B \big\rangle^2} = q - 1,$$

which becomes zero in the Boltzmann limit. This leads to the interpretation of the Tsallis distribution as a superposition of Boltzmann distributions with different temperatures. The average value of these (Boltzmann) temperatures is the temperature T appearing in the Tsallis distribution; this is the interpretation of the Tsallis temperature that we will follow. The other parameter in the Tsallis distribution, q, describes the spread around the average value of the (Boltzmann) temperature T. For q = 1 we have an exact Boltzmann distribution; for values of q which deviate from 1, we have a corresponding spread of temperatures. For example, q = 1.07 corresponds to a relative spread of about √0.07 ≈ 26% in the inverse temperature. From this point of view the Tsallis distribution describes a distribution of (Boltzmann) temperatures: a deviation from q = 1 means that a spread of temperatures is needed instead of a single value.
Thermal Fit Details
In order to identify the energy dependence of the deviation from ideal gas behaviour, thermal fits were performed on yields measured at the CERN SPS in central Pb-Pb collisions at 40, 80 and 158 AGeV (using the same data as analyzed in [8]) and on yields measured at RHIC in central Au-Au collisions at √s_NN = 130 GeV (using the same data as analyzed in [19]) and at √s_NN = 200 GeV. In the CERN SPS fits, the thermal parameters T, µ_B, γ_s and R were fitted to the data, while µ_Q and µ_S were fixed by the initial baryon-to-charge ratio and the strangeness content of the colliding system, respectively.
In the case of the RHIC analysis we again fitted T, µ_B, µ_S and γ_s to the data. The use of mid-rapidity data here allowed the constraints on µ_S and µ_Q, typical in analyses of 4π data, to be relaxed. Instead, µ_Q was set to zero, as justified by the observed π+/π− ratio. The following expression was used to calculate the primordial particle yields:

$$N_i = g_i\,\gamma_s^{|S_i|}\, V \int \frac{d^3p}{(2\pi)^3}\left[1+(q-1)\,\frac{E_i-\mu_i}{T}\right]^{-\frac{q}{q-1}},$$

where |S_i| is the number of valence strange quarks and antiquarks in species i. The value γ_s = 1 corresponds to complete strangeness equilibration. All calculations were done using the THERMUS package [18].
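As a self-contained illustration of how γ_s enters the yields in the expression above, the sketch below computes an illustrative K+/π+ ratio; all numerical values are placeholders, not the THERMUS fit results:

```python
import numpy as np
from scipy.integrate import quad

def primordial_yield(m, T, mu, q, g, S, gamma_s):
    """N_i/V = gamma_s^|S_i| * g_i * int d^3p/(2pi)^3 [1+(q-1)(E-mu)/T]^(-q/(q-1))."""
    def integrand(p):
        E = np.sqrt(p**2 + m**2)
        return p**2 * (1.0 + (q - 1.0) * (E - mu) / T) ** (-q / (q - 1.0))
    val, _ = quad(integrand, 0.0, np.inf, limit=200)
    return gamma_s**S * g * val / (2.0 * np.pi**2)

T, mu, q, gamma_s = 0.160, 0.0, 1.07, 0.8  # illustrative parameters only
k_plus = primordial_yield(0.494, T, mu, q, g=1, S=1, gamma_s=gamma_s)  # K+: one valence s-quark
pi_plus = primordial_yield(0.140, T, mu, q, g=1, S=0, gamma_s=gamma_s)  # pi+: no valence s-quarks
print("illustrative K+/pi+ ratio:", k_plus / pi_plus)
```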
Results and Conclusions
The most surprising result of our analysis is shown in Fig. 2: the quality of the fits, as measured by χ²/d.o.f., at first improves as the Tsallis parameter q increases, reaching a minimum around q ≈ 1.07 for the SPS beam energy of 158 AGeV. This behaviour is repeated at the other SPS energies, with minima at slightly different values of q: about 1.08 at 80 AGeV and about 1.05 at 40 AGeV. This behaviour is not seen at RHIC energies: there, changes in the Tsallis parameter q have only a negligible effect on the χ² values, which of course still leaves open the possibility of q values larger than 1 [20]. For the SPS data, however, the effect is substantial and changes the interpretation considerably. One possible interpretation is that at SPS energies fluctuations in the freeze-out temperature are substantial.
Recently [21], a coalescence model with a Tsallis distribution for quarks was used to fit the transverse momentum spectra measured at RHIC. That fit does not include decays from resonances and therefore cannot be compared directly to ours, since decays can substantially modify the transverse momenta; moreover, the emerging hadrons are not in a Tsallis-type equilibrium gas, which is an assumption of the present analysis. The authors obtained values of the Tsallis parameter q which are remarkably similar for all particle species considered, q ≈ 1.2, a value which cannot be excluded by our analysis. The freeze-out temperature T decreases, as expected, with increasing values of q. This can be understood from the fact that the Tsallis distribution is always larger than the Boltzmann one (as long as q > 1); hence, in order to match the same particle yields, T has to be adjusted to lower values. This is seen at all energies in Fig. 3, and the drop in T turns out to be numerically quite drastic. The decrease in particle numbers has to be compensated by increases in the other thermodynamic variables; the (modest) increase in the baryon chemical potential is shown on the right-hand side of Fig. 3.
The strangeness non-equilibrium factor γ_s is shown in Fig. 4. It is interesting to note that the Tsallis distribution leads to values of γ_s much closer to chemical equilibrium than the corresponding Boltzmann (q = 1) description: in all cases considered, γ_s is very close to the equilibrium value of 1. Clearly, the use of the Tsallis distribution in relativistic heavy ion collisions calls for a re-evaluation of the understanding gained from previous analyses [5,7,8].
Comparative survey of visual object classifiers
Classification of visual object classes represents one of the most elaborated areas of interest in computer vision. It is always challenging to find one specific detector, descriptor or classifier that provides the expected object classification result. Consequently, it is critical to compare the different detection, descriptor and classifier methods available and choose one, or a combination of two or three, to get an optimal result. In this paper, we present a comparative survey of different feature descriptors and classifiers. Among feature descriptors, SIFT (sparse and dense) and the HUE SIFT colour-SIFT combination descriptor are covered; among classification techniques, the support vector classifier, k-nearest neighbour, AdaBoost and Fisher classifiers are covered in a comparative, practical implementation survey.
INTRODUCTION
Image classification is a very popular application in the image processing field. Image classification stands for identifying object(s) in a given image and assigning them to a collection of objects with similar appearance, called classes [1]. Even though the objects belonging to one class share the same type of attributes, they can also be notoriously different visually, regarding color, size (scale), texture, design or gender (for persons). For humans, observing that a given object is present in an image is straightforward and obvious, regardless of any of these variations, thanks to acquired knowledge. Providing this knowledge to artificial systems and making them acquire human-like reasoning is an extremely demanding task. Additionally, problems like occlusion, scaling, pose change, clutter and many others often occur.
Object classification methodology [2], [6] consists of extracting feature descriptors [4], creating local vocabularies [5] (bags of words) from the positive training images using k-means clustering, training classifiers, extracting features from test images, creating histograms of the test features based on the vocabularies, and computing the confidence scores that yield the true positive and false positive rates for the Receiver Operating Characteristic (ROC) curve [3].
It is always difficult to say that one descriptor or classifier is good and another is bad without a practical experimental approach. This is due to the fact that some of them work well for some classes while others work well for other classes. Consequently, a comparative survey of the different object classification components, based on practical implementation, is a key step in selecting the appropriate combination for future challenges.
In this paper, a comparative survey of different descriptors and classifiers is presented. Three feature descriptors and four classifiers are implemented and analysed by comparing their results on the same training and test datasets.
BACKGROUND
The general block diagram of object class classification is divided into a training block and a testing block, as shown in Fig. 1. In the training stage, a dictionary, the so-called "bag of words", composed of clustered feature descriptors of positive training images from each class, is first created. Then, to train the classifiers, the feature descriptors of all positive and negative training images are extracted and converted into histograms over the dictionary words. Finally, a classifier is trained on these histograms so that it can distinguish positive from negative examples.
During the testing stage, a set of test images is classified using the previously trained classifier, producing an answer to the question of whether the object exists in the image or not, along with a degree of certainty. The certainty is a crucial measure because it is the basis for computing the ROC curve used to evaluate the results. The details of the descriptors, classifiers and other parameters are given below.
FEATURE DESCRIPTORS
Feature descriptors play an important role in many computer vision problems, such as image matching and object recognition [7]. Information about an object on its own is not sufficient to determine whether it is present in an image, due to variations in colour, scale, viewpoint, orientation and appearance. That is why descriptors of the most salient points are extracted: they contain sufficient information for making the identification. In this paper, the following three types of feature descriptors are implemented and compared.
SIFT (vl_sift)
Prior to extracting the most salient points, a detector needs to be used. The Difference of Gaussians (DoG) [8] is the detector used for obtaining these features, followed by the Scale Invariant Feature Transform (SIFT) [9] for their description. The main reason for choosing SIFT is, as its name states, its invariance to changes in scale, translation, rotation and local geometric distortion, and furthermore its robustness to noise and illumination changes. The majority of the points that SIFT extracts as salient points lie in high-contrast areas, such as object edges. When using the SIFT descriptor, feature vectors with a constant length of 128 elements are obtained. The original SIFT descriptor is also known as sparse SIFT. The SIFT used in this implementation is vl_sift [10] from the VLFeat library, since vl_sift is faster than the original SIFT implementation by David Lowe [11] with only a small deviation in performance.
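As a minimal illustration of this step, the sketch below extracts sparse SIFT descriptors with Python/OpenCV; the paper itself uses the MATLAB vl_sift binding, and the file name here is a placeholder:

```python
import cv2

# Sparse SIFT: DoG keypoint detection followed by 128-element descriptors.
img = cv2.imread("example.jpg", cv2.IMREAD_GRAYSCALE)  # placeholder image path
sift = cv2.SIFT_create()
keypoints, descriptors = sift.detectAndCompute(img, None)
print(descriptors.shape)  # (num_keypoints, 128): one vector per salient point
```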
HUE SIFT (huesift)
One very effective and well-known extension of the SIFT descriptor is the so-called HUE SIFT [12]. As the name specifies, it incorporates color information into the SIFT feature extraction process. One of its most distinguishing properties is the ability to map skin color shades for person classification. HUE SIFT is scale-invariant and shift-invariant [13] (at least the SIFT component) and, similarly to the hue histogram, is built by weighting each hue sample by its saturation. Contrary to the original SIFT descriptor, the extracted feature vectors are longer (165 elements) because of the incorporated color information. The detailed flow of this descriptor is given in Figure 2. (Figure 2: Steps for creating a dictionary using HUE SIFT.)
Dense SIFT (vl_dsift)
Dense SIFT [14] computes descriptors for densely sampled key points with the same size and orientation. These key points are sampled so that the centres of the spatial bins fall at integer coordinates within the image boundaries. With this approach, the number of features remains the same for images of the same size, which is one of the most characteristic differences between dense and sparse SIFT. With sparse SIFT, images of the same size can yield different numbers of descriptors, meaning one image may have more features, causing them to be weighted more; this problem is avoided with dense SIFT. The features are uniformly distributed over the image, with the advantage of avoiding unknown positions, but on the other hand weaker features can appear. In this implementation, vl_dsift [15] from the VLFeat library is used to extract the dense features. Even though the default dense SIFT extracts features at every pixel, in our implementation we set the pixel step to 10 due to memory constraints.
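A corresponding sketch of the subsampled dense SIFT (again with OpenCV standing in for vl_dsift; the image path is a placeholder) places key points of fixed size and orientation on a regular grid with a 10-pixel step:

```python
import cv2

img = cv2.imread("example.jpg", cv2.IMREAD_GRAYSCALE)  # placeholder image path
step = 10  # pixel step used in the paper to limit memory use
keypoints = [
    cv2.KeyPoint(float(x), float(y), float(step))  # fixed size, default orientation
    for y in range(step, img.shape[0] - step, step)
    for x in range(step, img.shape[1] - step, step)
]
sift = cv2.SIFT_create()
_, descriptors = sift.compute(img, keypoints)  # describe the grid points, no detection
print(descriptors.shape)  # (num_grid_points, 128)
```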
BAG-OF-WORDS
The bag of words [16], often referred to as the dictionary, is built from the feature descriptors of positive training images from each class, clustered with k-means into a matrix of size (descriptor length) × (number of clusters). The length of the vocabulary depends on the number of clusters used to create the dictionary. At the same time, the strength of the dictionary depends on the number of images per class used to build it: the more images per class are used, the more distinctive the vocabulary becomes. There are several approaches to creating the dictionary, differing in how the positive training images are selected and how many images per class are considered. In some cases, a separate dictionary is prepared per class by clustering only the features of the training images of that class; this approach is very slow and time-consuming. Another approach is to create a single dictionary for all classes; the positive training images can then be selected either manually or randomly. In this paper, a dictionary is constructed per object class for a given number of words and a given number of images. To observe the effect of different numbers of words and images per class, three cluster sizes (50, 100, 200) and three numbers of images per class (5, 15, 40) are used. Combining these, 9 different dictionaries are created, and the different types of classifiers are tested using all of them.
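A minimal sketch of this dictionary-building step (scikit-learn's k-means standing in for vl_kmeans; the descriptor arrays are assumed to come from the extraction step above):

```python
import numpy as np
from sklearn.cluster import KMeans

def build_vocabulary(descriptor_list, num_words=50, seed=0):
    """Cluster the stacked descriptors of the positive training images
    into `num_words` visual words (the bag-of-words dictionary).

    descriptor_list: list of (N_i x 128) arrays, one per training image.
    Returns a (num_words x 128) matrix of cluster centres.
    """
    all_descriptors = np.vstack(descriptor_list)
    kmeans = KMeans(n_clusters=num_words, n_init=10, random_state=seed)
    kmeans.fit(all_descriptors)
    return kmeans.cluster_centers_
```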
HISTOGRAM
The histogram [17] of a training image's feature descriptors is created by mapping the descriptors of each positive and negative training image onto the dictionary created above. The length of the histogram equals the number of clusters: since the dictionary has dimension 128×N, where N is the number of clusters, the histogram has length N. When a 128×K descriptor matrix is extracted from a training image, each of its K columns is mapped to the nearest dictionary word, and the corresponding histogram bin is incremented by 1.
In this comparative implementation, the MATLAB classification function knnclassify() [18], which classifies data using the nearest-neighbour method, is used for this mapping. The function takes the dictionary, the descriptors and the cluster indices as input and returns, for each descriptor, the index of the nearest word. The scores for each index are then accumulated into the histogram of a given training or test image, as sketched below. In order to have uniformity between the histograms of training and test images, each histogram is always normalized.
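A sketch of this mapping step (nearest-word assignment playing the role of MATLAB's knnclassify; `vocabulary` is the dictionary from the previous sketch):

```python
import numpy as np
from scipy.spatial.distance import cdist

def bow_histogram(descriptors, vocabulary):
    """Map each descriptor to its nearest visual word and return the
    normalized bag-of-words histogram of length len(vocabulary)."""
    distances = cdist(descriptors, vocabulary)  # (K x N) distance matrix
    words = np.argmin(distances, axis=1)        # nearest word per descriptor
    hist = np.bincount(words, minlength=len(vocabulary)).astype(float)
    return hist / hist.sum()                    # normalize for uniformity
```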
CLASSIFIERS
The main task of the classifiers [19] is to separate the histograms created from the training images into positive and negative classes. The classifiers take as input the histograms of a given class and the ground-truth positive/negative labels of the training images. They then learn a mapping of histograms to positive and negative groups, which is used as a confidence measure during the testing stage. We now present the classifiers used in our implementation.
SVC (svc)
The support vector classifier [20] is one of the most well-known classifiers. It is a supervised method used for regression and classification. The principle is the following: a model is built and used to predict whether a new example falls into a certain category. As input, the support vector classifier receives a set of training examples, each of which carries a category label (+1, −1).
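A minimal training sketch (scikit-learn's SVC standing in for PRTools' svc; the histograms and labels are assumed to come from the earlier steps):

```python
from sklearn.svm import SVC

def train_svc(histograms, labels):
    """Fit a linear support vector classifier on bag-of-words histograms.

    histograms: (num_images x num_words) array; labels: +1 / -1 per image.
    The fitted model's decision_function supplies the confidence scores
    used later for the ROC curve.
    """
    return SVC(kernel="linear").fit(histograms, labels)
```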
K-Nearest neighbour
The k-nearest neighbours algorithm (k-NN) [21] is a non-parametric method used for classification. It is a widely known method for classifying objects based on the closest training examples in the feature space. Its popularity stems from its simple structure, perhaps the simplest in the family of classifier learning algorithms. Object classification is based on a majority vote of the neighbours: an object is assigned to the most common class among its n nearest neighbours. In the simplest case, n = 1, the object is simply assigned the class of its nearest neighbour.
Fisher
The Fisher discriminant classifier [22] is a classification technique based on the well-known Fisher linear discriminant used for dimensionality reduction. The idea is to project the two classes into an intermediate linear space in which the misclassification error is minimized, thereby reducing the dimensionality.
The idea of dimensionality reduction is to discard unimportant dimensions in such a way that the retained dimensions represent the entire system robustly; in classification, however, discarding dimensions can also lead to misclassification.
AdaBoost
AdaBoost [23] is a classifier learning algorithm which, given a weak classifier only slightly better than random, boosts its performance to the maximum possible extent. Like most learning algorithms it is sensitive to noisy data and outliers, but it is less susceptible to overfitting than many other learning algorithms. The workflow is iterative: over a series of rounds, a new weak classifier is called in each round, and a distribution of weights over the training examples is updated. These weights indicate the importance of the examples in the dataset. In each round, the weights of incorrectly classified examples are increased while the weights of correctly classified examples are decreased, forcing the classifier to focus on the wrong classifications.
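An analogous sketch for AdaBoost (scikit-learn's AdaBoostClassifier standing in for the PRTools version; its default weak learner is a depth-1 decision stump):

```python
from sklearn.ensemble import AdaBoostClassifier

def train_adaboost(histograms, labels, rounds=50):
    """Fit AdaBoost on bag-of-words histograms: each round adds a weak
    classifier and upweights the examples the ensemble got wrong."""
    return AdaBoostClassifier(n_estimators=rounds).fit(histograms, labels)
```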
ROC CURVE
The Receiver Operating Characteristic (ROC) curve [3] is the evaluation measure used for comparing the classification results. It plots the true positive rate (sensitivity) against the false positive rate (1 − specificity). In addition, it yields the area under the curve (AUC): the bigger the area, the better the classification system.
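A sketch of the evaluation step, assuming a fitted classifier with a decision_function (as in the SVC and AdaBoost sketches above):

```python
from sklearn.metrics import roc_curve, auc

def evaluate(classifier, test_histograms, test_labels):
    """Compute the ROC curve and its AUC from classifier confidences;
    a larger AUC indicates a better classification system."""
    scores = classifier.decision_function(test_histograms)
    fpr, tpr, _ = roc_curve(test_labels, scores)
    return fpr, tpr, auc(fpr, tpr)
```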
IMPLEMENTATION
The detailed flow of the implementation is as follows. In the feature extraction step, three different feature descriptors are used. The first two are the sparse and dense SIFT from the VLFeat library, vl_sift and vl_dsift. The third is the huesift color-SIFT combination descriptor developed by Koen van de Sande, Intelligent Systems Lab Amsterdam, University of Amsterdam; the color descriptor software is an executable ready to extract different types of color descriptors. For clustering during the creation of the dictionary, vl_kmeans is used.
To map the training feature descriptors onto the dictionary for creating the histograms, the well-known MATLAB nearest-neighbour mapping function knnclassify() is used. To train the classifiers, four different classifiers from the PRTools library [24] are used: SVC, Fisher, KNNC and AdaBoost.
To compute the confidence when classifying the test images, different code is used for the different classifiers, because the mapping outputs of the four classifiers mentioned above differ.
RESULTS AND DISCUSSION
The final outputs of the classification experiments are the ROC curves and the areas under them. Even though the ROC curve gives a general picture of the true positive versus false positive rates, the area under the curve (AUC) was used as the measure of classifier performance. The outputs of the classifiers are values that indicate the degree to which a test image belongs to the class in question. Representative ROC curves and AUC values for the different descriptors and classifiers are given below.
vl_sift Descriptor
The results of classification using the vl_sift descriptor with the different classifiers are given below. For each classifier, different dictionary combinations are presented. The mean results of the four classifiers are almost the same, with a deviation of less than 0.02. For the first two classifiers, SVC and k-NN, there is a slight improvement when the number of images and clusters for the dictionary increases. On the other hand, for AdaBoost and Fisher, there is a slight reduction in performance when the cluster number increases, although this is not a general rule for all image classes. A special observation for the k-nearest neighbour classifier is that its performance consistently increases with the number of clusters, whereas for the other classifiers there is no such uniformity across all classes.
On the other hand, the performance of the Fisher classifier decreases when the number of clusters increases, but increases when the number of images per class increases.
When the number of images per class used for the dictionary is increased, some classes give better results while others score lower than with a small number of images. This may be due to problems with the newly added images: even though they are positive images, they may have noise characteristics that affect the dictionary of the given class. So increasing the images per class or the number of clusters is not always a guarantee of a better result; it depends on the profile of the images added.
The AdaBoost classifier is the worst for the bicycle class: with the other three classifiers the classification result for the bike is greater than 0.81, but AdaBoost gives 0.793. For the cow class, k-NN is the best classifier, reaching 0.879 for a cluster number of 100 and 20 images per class.
In terms of speed, the Fisher classifier is the fastest of the four, followed by k-NN, with AdaBoost being the slowest. This is just a comparison of the classifiers themselves; the overall speed of classification depends on the number of training images and clusters. We observe that when the number of images per class and the number of clusters increase, the time needed for classification also increases.

HUE SIFT (huesift) Descriptor

This descriptor adds the color attribute to the normal SIFT descriptor. In most cases, to get better results with this descriptor, the classifiers should be trained with all possible colors of the objects in the class. For example, cows come in different colors, including black, red, gray, white and combinations of these; if red cows are missed during training, the classifier will be weak at classifying red cows as cows. In the case of cows this might not be a serious problem, as the background is usually green.
This descriptor is an extension of the SIFT descriptor that aims to improve classification performance for some classes by adding the hue color attribute. Compared with the normal SIFT above, this descriptor slightly improves the performance for the "car" and "cow" classes. As mentioned earlier, to see the advantage of huesift over normal SIFT, the number of training images should be large enough to cover all possible colors of the objects in the class.
Here, the results using the SVC and AdaBoost classifiers are presented. For both classifiers, there is a slight improvement in the overall classification result when the number of images per class increases.
Dense SIFT (vl_dsift)
Results of classification using the dense SIFT descriptor (vl_dsift) are given below. Since the dense descriptor of a single image is a large matrix, we could not build the full matrix of positive training descriptors for the dictionary due to memory limits. As a result, instead of applying the full dense SIFT, we subsampled the vl_dsift descriptors with a 10-pixel step. The following results therefore correspond to sampled dense SIFT with a step of 10 pixels between consecutive key points; they cannot be generalized as full dense SIFT performance, but they give a good indication. The results in Tables 7 and 8 show that even the sampled version of dense SIFT outperforms normal SIFT, especially for the person and dog classes. The maximum classification result for the person class is obtained with the dense SIFT descriptor.
In addition, the performance of this descriptor increases with the number of clusters. Since the number of descriptors is so large, a larger cluster size gives a more distinctive representation of the descriptors by separating them into different groups.
In general, if the full dense SIFT were used, the results would be even better, but there is a trade-off between performance and the cost in computation time and memory. Sample ROC curves for the person and dog classes are given in Figs. 9 and 10.
Comparison between descriptors
A comparison between the three descriptors using two different classifiers, with a cluster number of 50 and 5 images per class, is presented below.
Classifier: AdaBoost. The comparison table shows that, overall, huesift has some advantage over the normal SIFT. In some cases the AdaBoost classifier gives better results with the normal SIFT than with the dense SIFT, but this is not the general case.
Generally speaking, huesift and dense SIFT are better than the normal SIFT, but both have a computational cost: the huesift descriptor has length 165 and the dense descriptor considers many more points per image, so both need more memory and computation time.
CONCLUSION
In this paper, a comparative survey of visual object classification based on practical implementation has been presented. Three feature descriptors and four classifiers were implemented and tested in different descriptor-classifier combinations.
A detailed analysis of the comparative survey, based on the classification results for different classes, is also provided. This paper helps establish the general pros and cons of the different descriptors and classifiers. In addition, it introduces a detailed flow for the practical implementation of visual object classification.
Carrier Modulation via Tunnel Oxide Passivating at Buried Perovskite Interface for Stable Carbon-Based Solar Cells
Carbon-based perovskite solar cells (C-PSCs) have the impressive characteristics of good stability and potential commercialization. The insulating layers play crucial roles in charge modulation at the buried perovskite interface in mesoporous C-PSCs. In this work, the effects of three different tunnel oxide layers on the performance of air-processed C-PSCs are scrutinized to unveil the passivating quality. Devices with ZrO2-passivated TiO2 electron contacts exhibit higher power conversion efficiencies (PCEs) than their Al2O3 and SiO2 counterparts. The porous feature and robust chemical properties of ZrO2 ensure the high quality of the perovskite absorber, thus ensuring the high repeatability of our devices. An efficiency level of 14.96% puts our device among the state-of-the-art hole-conductor-free C-PSCs, and our unencapsulated device maintains 88.9% of its initial performance after 11,520 h (480 days) of ambient storage. These results demonstrate that the function of tunnel oxides at the perovskite/electron contact interface is important to manipulate the charge transfer dynamics that critically affect the performance and stability of C-PSCs.
Introduction
Organic lead halide perovskite solar cells (PSCs) have emerged as a competitor of silicon photovoltaics regarding their high performance and commercial prospects. During the last few years, the power conversion efficiency (PCE) of PSCs improved from 3.8% to a recently certified 26.1% [1,2] as a result of relying on perovskite films with impressive properties, such as a high absorption coefficient, excellent ambipolar charge transport [3-5], long carrier diffusion lengths and a tunable bandgap [6-10]. Noble metals and organic hole-transporting materials (HTMs) are being widely employed for the preparation of state-of-the-art PSCs. However, their presence leads to an expensive manufacturing process and poor stability [11-14]. These issues could be overcome by the application of carbon counter electrodes (CEs) in hole-conductor-free PSCs. However, the removal of hole collection layers would sacrifice cell efficiency [15-19]. Currently, the PCE of hole-conductor-free C-PSCs is still lower than 20%, considerably lagging behind regular PSCs with fully functional layers [20,21].
Electron-collecting contacts play vital roles in determining the performance of common PSCs [22-27], especially that of carbon-based, hole-transporting-layer-free PSCs (C-PSCs) [28-30]. Electron contacts simultaneously affect the charge transfer dynamics and influence the growth kinetics of the perovskite absorber [31]. Despite the fast progress and superior stability of C-PSCs [32], more in-depth research efforts are still required to improve their performance. Since there are no HTMs in C-PSCs, charge manipulation and film growth modulation are even more important than in PSCs with fully functional layers [33]. Energy-band "spike" strategies that reduce interface recombination are highly desired in inorganic photovoltaics [34,35]. SiO2 and Al2O3 have been introduced in silicon photovoltaics as electron tunneling paths, forming tunnel oxide passivated contacts [36]. Similarly, insulating layers acting as energy band uplifters at the perovskite/TiO2 electron transport layer (ETL) interface are often employed in mesoscopic PSCs [37-39]. Han and coworkers reported that modifying the ETL surface with an insulating material reduces charge recombination and improves the open-circuit voltage (Voc) of PSCs [37]. Xu and coworkers found that introducing a thick (about 100 nm) Al2O3 insulator layer can reduce nonradiative recombination in PSCs [40]. Kamat and coworkers reported that hole accumulation can indirectly promote halide ion segregation in HTM-free PSCs with TiO2 ETLs, while insulating ZrO2 substrates suppress phase segregation due to a more balanced charge transport [41].
In addition, the surface morphology of the underlying scaffold has a strong impact on the perovskite layer, which is paramount in determining the final efficiency of PSCs. Zhu and coworkers investigated the compositional and optoelectronic properties of the buried perovskite interface [42]; they found that the bottom surfaces of perovskite films show severe compositional inhomogeneity and sub-microscale imperfections, causing major energy-loss pathways that hinder device performance. They suggest that the underlying scaffolds play vital roles in eliminating detrimental defects on the perovskite bottom surfaces. Therefore, surface topography tailoring should also be carefully considered in the optimization of C-PSCs. Regarding charge transfer dynamics and perovskite film crystallization kinetics, the application criteria of Al2O3 [37], ZrO2 [18,38] and SiO2 [39] urgently need to be clarified.
In this work, we investigated the influence of tunnel oxide passivating (TOP) layers on the perovskite film quality and charge transport properties of mesoscopic C-PSCs. TOP layers have several advantages in C-PSCs: first, they uplift the band bending at the perovskite/ETL interface by passivating the TiO2 surface with a discontinuous coating; second, they reduce shunting risks in the case of pinholes in the perovskite film; and third, they modify the ETL with a porous topology more favorable for solution infiltration of the perovskite precursor, leading to a higher absorber film quality and better interconnection with the ETL network. We selected dielectric materials commonly employed in PSCs as tunnel oxides, including Al2O3, SiO2 and ZrO2. In particular, ZrO2 has a relatively higher dielectric constant than TiO2, which may provide sufficient passivation of the TiO2 surface. Moreover, ZrO2 nanoparticle-coated scaffolds maintain high uniformity and a porous structure, facilitating perovskite crystallization and charge collection. Electrical impedance spectroscopy demonstrated that ZrO2 TOP-based C-PSCs show the best charge transfer properties, with the highest efficiency of 14.96%. The efficient passivation by the tunnel oxide layer enables the high repeatability of our devices. Our HTM-free C-PSCs were fabricated under ambient conditions with a humidity of about 50%, further emphasizing a robust air-processing route compatible with high-yield manufacturing. Our C-PSCs present excellent long-term stability, maintaining 88.9% of their original efficiency after 11,520 h (480 days) of ambient storage without encapsulation.
Fabrication of C-PSCs
The process for fabricating the C-PSCs involved several steps. First, the FTO glass was patterned by etching with Zn powder and 2 M HCl diluted in ethanol. The surface of the glass was then cleaned using acetone, deionized water, acetone and ethanol alternately, and dried in clean air. A solution of 0.15 M titanium diisopropoxide bis(acetylacetonate) in 1-butanol was spin-coated on the cleaned FTO glass at 3000 rpm for 30 s and dried at 125 °C for 20 min to form a compact TiO2 layer. The mesoporous TiO2 layer was deposited over the compact TiO2 layer by spin-coating a homemade TiO2 P25 paste at 3000 rpm for 20 s. The deposited layers were then sintered in air at 500 °C for 30 min. After cooling to room temperature, the films were treated with a 0.05 M aqueous solution of TiCl4 at 70 °C for 30 min, rinsed with deionized water and ethanol, and dried in air. The insulating layers were prepared by spin-coating ZrO2, Al2O3 or SiO2 paste over the TiCl4-treated TiO2 film and annealing at 500 °C for 30 min. The perovskite film was deposited on the mesoporous TiO2 film using a two-step sequential method under ambient conditions with high humidity (~50%). In the first step, the PbI2 precursor solution was spin-coated at 4000 rpm for 20 s, and the wet PbI2 film was then treated with ethanol and annealed at 100 °C for 8 min. In the second step, the film was immersed in an isopropanol solution of MAI (7 mg mL−1) for 5 min and dried with nitrogen gas. The as-prepared MAPbI3 perovskite film was further heated at 100 °C for 10 min. Finally, the carbon paste was coated over the perovskite film using a doctor-blade method and annealed at 100 °C for 40 min. The resulting solar cells have the configuration FTO/c-TiO2/meso-TiO2/TOP layer/MAPbI3/carbon.
Measurements and Characterization
A Bruker D8 Focus diffractometer with Cu Kα radiation (λ = 0.15406 nm) at 40 kV and 40 mA was employed for structural analysis. The surface and cross-sectional morphology were observed with a field-emission scanning electron microscope (SEM, Zeiss SIGMA, Jena, Germany). The absorption spectra of films deposited on FTO were collected using a UV-vis spectrophotometer (Lambda 650S, PerkinElmer, Shelton, CT, USA) over the wavelength range 300-800 nm at room temperature. The J-V curves of the solar cells were measured using a CHI660C electrochemical workstation (Shanghai, Chenhua) coupled with a solar simulator (Newport, 91192) under 100 mW cm−2 illumination (AM 1.5G) with a scan rate of 0.05 V s−1. The illuminated area of each C-PSC was confined to 0.1 cm2 using a metal mask. The films were characterized under ambient air conditions, at a temperature of around 25 °C and a relative humidity of 50%. Time-resolved photoluminescence (TRPL) was measured using a time-correlated single photon counting (TCSPC) module, excited with a 532 nm pulsed laser. The external quantum efficiency (EQE) was measured using an instrument equipped with a 300 W xenon lamp (Newport 66984), with monochromatic light ranging from 300 to 800 nm. Electrochemical impedance spectra (EIS) were recorded under one-sun illumination over frequencies spanning from 1 MHz to 1 Hz at open-circuit voltage bias. During the long-term stability test, the devices were stored under room light without further protection. As aging progressed, the devices exhibited gradually increasing performance for hundreds of hours before beginning to decline.
Results and Discussion
The C-PSCs possess a straightforward device architecture comprising FTO/TiO2/TOP layer/perovskite/carbon, as depicted in Figure 1a. Here, FTO refers to fluorine-doped tin oxide, and the perovskite layer corresponds to CH3NH3PbI3 (MAPbI3). The entire fabrication process was conducted in ambient air, with the perovskite layer deposited by the two-step sequential method and the carbon electrode doctor-blade coated onto the perovskite film using the homemade carbon paste [43]. Our previous research indicates that high-temperature-annealed TiO2 films exhibit numerous surface defects, which are responsible for the inferior performance of C-PSCs [44]. We previously used TiCl4 post-treatment and an external SiO2 coating to passivate the ETL surfaces, and the PCEs of the corresponding C-PSCs improved owing to the elimination of interface defects [38,44]. However, the criteria for selecting surface passivating layers remain unclear. Drawing inspiration from the optimization of silicon (Si) solar cells, we deliberately selected SiO2, Al2O3 and ZrO2 as the TOP layers, taking into account their surface charge states, dielectric constants, film topologies and interface electric fields. Efficient C-PSCs require optimal TOP layers; thus, we initially optimized the concentrations of the SiO2, ZrO2 and Al2O3 pastes by diluting the original pastes with ethanol. Figure 1b shows the typical energy level diagram of a C-PSC employing the ZrO2 TOP layer. The thin ZrO2 TOP layer has a deep valence band maximum (VBM), with its conduction band position lying above that of TiO2 [37]. Photo-generated electrons in the perovskite absorber can transfer from the conduction band (CB) of MAPbI3 to TiO2 either by tunneling or through the voids of the discontinuous insulating layer coated on the thick mesoporous TiO2 scaffold. Consequently, electrons accumulate at the TiO2 interface due to this insulating oxide, ultimately elevating the Fermi level and increasing the Voc of the solar cell. Simultaneously, owing to the blocking effect of the insulating layer, it becomes difficult for electrons in the conduction band of TiO2 to recombine with holes. A thin layer of insulating material can therefore reduce interfacial recombination and facilitate carrier transport [40].
The J-V characteristics of C-PSCs prepared with various concentrations of the ZrO2 and Al2O3 pastes are shown in Figure 2a,b, respectively. As the concentration of ZrO2 or Al2O3 increases, the photovoltaic parameters, including Voc, Jsc, FF and PCE, first increase and then decrease. This can be ascribed to the thickness of the tunnel oxide layers, which is tuned by the concentration of the pastes. For the non-treated C-PSCs, severe charge recombination occurs at the buried perovskite interface due to the presence of defects, leading to inferior performance. However, if the concentration of ZrO2 or Al2O3 is too high, the layer becomes thick enough to suppress electron transport, ultimately reducing the performance of the final devices. The optimal weight ratios of ethanol to the ZrO2 and Al2O3 pastes are found to be 1:1 and 2:1, respectively. The SiO2 TOP layer used here is the same as that reported in our previous work [38].
Figure 3a,b shows the cross-sectional SEM images of the full device and of the perovskite film grown on the TiO2/ZrO2 layer, respectively. The thicknesses of the FTO, TiO2/ZrO2 and MAPbI3 layers are about 380 nm, 460 nm and 530 nm, respectively. The images reveal that the perovskite material infiltrates well into the pores and that the carbon electrode adheres tightly to the perovskite film.

The surface morphology and structure of the scaffold layers play a crucial role in the performance of mesoscopic PSCs. Factors such as surface roughness, pore size and hydrophilicity have a significant impact on the infiltration of perovskite materials, crystallization quality and carrier transport in the device [37]. Figure 4a-d shows the surface morphology of the various scaffold layers (left column) and the perovskite grown on them (right column). The TiO2 film shows a relatively uniform surface, with some nanoparticle agglomeration. In contrast, the TiO2/ZrO2 film exhibits a homogeneous morphology with well-dispersed ZrO2 nanoparticles on top, which facilitates the infiltration and growth of the perovskite material, thereby promoting the transport of photo-generated carriers. However, the pores in the TiO2/Al2O3 scaffold are very small, which hinders the infiltration of the precursor solution and limits the growth and crystallization of MAPbI3 in the pores. The TiO2/SiO2 film shows serious agglomeration, resulting in a rough surface with an exposed TiO2 layer; this weakens the role of the insulating layer as a separator between the carbon electrode and the TiO2 layer, leading to a higher risk of shunting. All perovskite films grown on the different scaffold layers show nanocube-like structures, indicating that the addition of the insulating oxides has little effect on their surface morphology. Therefore, the improvement in the PSCs' performance is caused not only by the morphology of the perovskite but also by the modulation of the carriers by the TOP layer, which will be discussed later.
Figure 4e shows the XRD patterns of the TiO2, TiO2/ZrO2, TiO2/Al2O3 and TiO2/SiO2 films. Except for the TiO2/ZrO2 film, which shows a ZrO2 tetragonal phase at 2θ ~ 29.2° [45], the XRD patterns of the other films are identical to that of the TiO2 film, without new peaks belonging to SiO2 or Al2O3. This means that the SiO2 or Al2O3 present in the TiO2 film is in an amorphous rather than a crystalline phase, which can be attributed to the higher sintering temperatures required to form those phases [46,47]. The XRD patterns of perovskite films coated on the different metal oxide films reveal similar features, indicating that the introduction of the insulating layers does not affect the crystallization of the perovskite inside. The diffraction peak observed at around 12.7° corresponds to PbI2, resulting from the presence of excess lead iodide in the perovskite, which can contribute to an increase in the Voc of PSCs [48].
In addition, we studied the UV-Vis absorption spectra of the ETLs and of MAPbI3 perovskite films grown over the different insulating layers, as shown in Figure 5a,b, respectively. The absorption spectra of the scaffolds with the various insulating layers exhibit negligible differences. The absorption of the perovskite films slightly increases after the addition of the insulating layers, which may be caused by the increased thickness of the scaffold resulting from the insulating layers, allowing more perovskite to be loaded.

N2 adsorption-desorption isotherms were recorded for the powders of TiO2, ZrO2, Al2O3 and SiO2, as shown in Figure S1a-d. The insets show the corresponding pore-size distribution curves obtained by the Barrett-Joyner-Halenda (BJH) method. The isotherms of all samples are classic type IV isotherms with an H3 hysteresis loop, indicating the existence of mesopores (2-50 nm) originating from the aggregated nanoparticles, consistent with the SEM observations. The TiO2 and Al2O3 exhibit relatively narrow pore-size distributions: the pore diameter of TiO2 ranges from 62 nm to 88 nm, while Al2O3 shows the smallest pore size of ~20 nm. The smaller pores may lead to poor penetration of PbI2 into the mesoporous scaffold and hinder the growth of the perovskite. In contrast, ZrO2 and SiO2 exhibit wide pore-size distributions: the ZrO2 is mainly composed of macropores on the order of 100-165 nm, and the pore size of SiO2 is distributed between 50 and 120 nm. The relatively larger pore size of ZrO2 can therefore accommodate more perovskite in the scaffold layer, which in turn facilitates better light harvesting and higher electron collection efficiency.
We prepared a large number of C-PSCs to study the effects of the different insulating layers on photovoltaic performance. Figure 6a-d shows the statistics of the photovoltaic parameters, including Voc, short-circuit current density (Jsc), FF and PCE, and the corresponding average values are summarized in Table 1. Each parameter was calculated from 40 devices. The C-PSC without an insulating layer shows an average Voc of 0.959 V, a Jsc of 20.29 mA cm−2 and an FF of 65.25%, yielding an average PCE of 12.71%. The average PCE values of C-PSCs that employ ZrO2, Al2O3 and SiO2 as insulating layers increase to 13.84%, 12.89% and 13.42%, respectively. The increase in PCE is mainly due to the enhancement of Voc and FF, which can be attributed to the inhibition of carrier recombination by the insulating layers, as discussed above. As mentioned previously, after TOP layer coating the absorption of the perovskite film slightly increases; therefore, more light energy can be collected and the Jsc of the solar cell increases accordingly. The improvement in average PCE for PSCs employing ZrO2 as the insulating layer is noticeably higher than that of PSCs using Al2O3 and SiO2. This could be due to the uniform and porous morphology of ZrO2 promoting the effective permeation of PbI2 into the mesoporous TiO2 scaffold layer, thereby facilitating charge transport. As shown in Figure S2, our devices exhibit good reproducibility with a small deviation in PCE. Figure 6e shows the J-V curves of the best-performing devices with the different scaffold layers, with the corresponding photovoltaic parameters listed in the inset. The device with a ZrO2 insulating layer exhibits excellent performance, with a Voc of 0.995 V, a Jsc of 21.21 mA cm−2 and an FF of 70.91%, yielding a PCE of 14.96%. Figure 6f shows the incident photon-to-electron conversion efficiency (IPCE) spectrum of a C-PSC prepared on the TiO2/ZrO2 film, together with the integrated Jsc. The resulting integrated Jsc value is 20.92 mA cm−2, only ~1.4% lower than that of the champion cell (21.21 mA cm−2) in Figure 6e.
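For readers who want to reproduce such a cross-check, the integration of an EQE spectrum against the AM 1.5G photon flux can be sketched as follows (the input arrays are placeholders, not our measured data):

```python
import numpy as np

def integrated_jsc(wavelength_nm, eqe, photon_flux):
    """J_sc = q * integral of EQE(lambda) * Phi(lambda) d(lambda).

    wavelength_nm: wavelengths in nm; eqe: fractional EQE (0-1);
    photon_flux: AM 1.5G spectral photon flux in photons m^-2 s^-1 nm^-1.
    Returns J_sc in mA cm^-2 for comparison with the measured value.
    """
    q = 1.602176634e-19                       # elementary charge, C
    y = q * eqe * photon_flux                 # A m^-2 nm^-1
    # trapezoidal rule over the wavelength grid
    j_si = np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(wavelength_nm))  # A m^-2
    return j_si * 0.1                         # 1 A m^-2 = 0.1 mA cm^-2
```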
The perovskite films deposited on the surface of the insulating layers, as shown in Figure 7a, have a steady-state PL intensity lower than that deposited on the TiO2 surface, suggesting that the introduction of the TOP layer increases the transport and extraction efficiency of the carriers.

To further investigate the kinetics of charge transport and recombination in perovskite solar cells, we measured the electrical impedance spectroscopy (EIS). The Nyquist plots of our C-PSCs, employing different insulating layers, are shown in Figure 7c. The semicircles in the high- and low-frequency regions can be assigned to the charge transport resistance (Rct) and recombination resistance (Rrec), respectively [43]. The corresponding impedance parameters are listed in Table 2. Rct slightly decreases with the addition of ZrO2 or SiO2, indicating that the presence of ZrO2 or SiO2 has a slight promoting effect on charge transport. However, after Al2O3 treatment, Rct increases from 44.5 to 50.4 Ω because the dense Al2O3 hinders charge transport. On the other hand, Rrec increases markedly with the incorporation of insulating layers, indicating effectively suppressed charge recombination, which confirms our earlier expectation that insulating layers can prevent direct contact between carbon and TiO2 [38]. Overall, the ZrO2-based C-PSC has the smallest Rct and the largest Rrec, indicating faster carrier transport and slower recombination, which well explains the significant improvement in the Voc and FF of the corresponding devices. The buried interface quality is greatly improved by carrier modulation, with strongly suppressed nonradiative recombination.

To further confirm the reliability of our fabricated C-PSCs, the steady-state efficiency of a C-PSC fabricated on TiO2/ZrO2 was measured in ambient air under a constant bias of 0.8 V near the maximum power point. As shown in Figure 8a, our device presents a stable current density of 17.55 mA cm−2 under continuous illumination for 400 s, and the corresponding PCE is 14.04%. In comparison, the original TiO2-based device produces only 12.83% steady-state PCE under the same test conditions, with a current density of 16.04 mA cm−2 (Figure 8b).

Since the stability of PSCs is one of the most critical concerns for the future commercialization of the devices, we also recorded the stability of C-PSCs prepared on TiO2/ZrO2 to verify their long-term endurance in ambient air at a temperature of 25 °C and a humidity of 50 RH%. As shown in Figure 9, the Voc slightly increases during the stability test, while Jsc and FF first increase and then show a decreasing trend. The PCE increases from an initial 13.96% to a maximum of 15.24%, and finally drops to 12.41% after storage for 11,520 h (480 days), demonstrating the outstanding stability of the C-PSCs, which is among the first class of state-of-the-art devices [18,32]. The better performance during storage may be ascribed to the better contact attained between the perovskite layer and the carbon CE [43]. We further investigated the thermal stability of our unencapsulated devices by placing them on a heating plate at 85 °C in an environment with a humidity of 50%. Figure S3 shows the variation in PCE with heating time. The PCE initially improved slightly; however, it dropped to 81% of its initial value after 120 h of continuous heating. This may be caused by decomposition of the perovskite material in the unencapsulated devices, triggered by the high-humidity environment.
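As an illustration of how such two-semicircle Nyquist data can be reduced to Rct and Rrec, the minimal Python sketch below fits a simple Rs + (Rct||C1) + (Rrec||C2) equivalent circuit to a synthetic spectrum. The circuit choice, capacitances and starting values are illustrative assumptions, not the fitting procedure actually used to produce Table 2; only Rct = 44.5 Ω is taken from the text.

```python
import numpy as np
from scipy.optimize import least_squares

def z_model(params, freq):
    """Impedance of Rs + (Rct || C1) + (Rrec || C2), a common
    two-semicircle equivalent circuit for Nyquist plots."""
    rs, rct, c1, rrec, c2 = params
    w = 2.0 * np.pi * freq
    z_hf = rct / (1.0 + 1j * w * rct * c1)    # high-frequency arc
    z_lf = rrec / (1.0 + 1j * w * rrec * c2)  # low-frequency arc
    return rs + z_hf + z_lf

def residuals(params, freq, z_meas):
    z = z_model(params, freq)
    return np.concatenate([z.real - z_meas.real, z.imag - z_meas.imag])

# synthetic spectrum; Rct = 44.5 ohm matches the text, the other
# values are illustrative assumptions
freq = np.logspace(6, 0, 60)                  # 1 MHz down to 1 Hz
z_meas = z_model([10.0, 44.5, 1e-7, 800.0, 1e-5], freq)

fit = least_squares(residuals, x0=[5.0, 30.0, 1e-8, 500.0, 1e-6],
                    args=(freq, z_meas), bounds=(0.0, np.inf))
rs, rct, c1, rrec, c2 = fit.x
print(f"Rs = {rs:.1f} ohm, Rct = {rct:.1f} ohm, Rrec = {rrec:.1f} ohm")
```

Fitting the real and imaginary parts jointly, as here, is a common way to stabilize such two-arc fits.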
Conclusions
In summary, ZrO2, Al2O3 and SiO2 were successfully used as insulating TOP layers for air-processed, highly efficient and stable C-PSCs. These common insulating materials can effectively separate the TiO2 ETL from the carbon electrode, thus efficiently inhibiting carrier recombination caused by shunting. The main reason for the variation in performance improvement among the C-PSCs lies in the morphology of the insulating layers, which affects the infiltration and growth of the perovskite material. We achieved the best C-PSC performance, with a PCE of 14.96%, using TiO2/ZrO2 as the scaffold layer, indicating that ZrO2 is the most suitable insulating layer for this C-PSC system. Moreover, our C-PSCs show outstanding long-term stability, maintaining 88.9% of their initial efficiency after 11,520 h of storage in ambient air. This work points toward high-performance carbon-based HTM-free perovskite solar cells via the optimization of insulating materials. The high efficiency and stability of our TOP-layer-passivated C-PSCs offer a step towards the future commercialization of this low-cost photovoltaic technology.
EIS measurements were performed under illumination over a frequency range from 1 MHz to 1 Hz at open-circuit voltage bias. During the long-term stability test, the devices were stored under room light without further protection; as aging progressed, they exhibited gradually increased performance for hundreds of hours before beginning to decline.
Figure 1. (a) Schematic illustration of the C-PSC structure; (b) energy-level diagram of a C-PSC with ZrO2 as a TOP layer.

The variation in photovoltaic parameters of C-PSCs employing different concentrations of ZrO2 and Al2O3 pastes is shown in Figure 2a,b, respectively. As the concentration of ZrO2 or Al2O3 increases, the photovoltaic parameters, including Voc, Jsc, FF and PCE, first increase and then decrease. This can be ascribed to the thickness of the tunnel oxide layers, which is tuned by the concentration of the pastes. For the non-treated C-PSCs, severe charge recombination occurs at the buried perovskite interface due to the presence of defects, leading to inferior performance. However, if the concentration of ZrO2 or Al2O3 is too high, the layer becomes thick enough to suppress electron transport, ultimately reducing the performance of the final devices.
Figure 2. Dependence of Voc, Jsc, FF and PCE on the concentration of (a) ZrO2 paste and (b) Al2O3 paste.
Figure 3a,b shows the cross-sectional SEM images of the full device and of the perovskite film grown on the TiO2/ZrO2 layer, respectively. The thicknesses of the FTO, TiO2/ZrO2 and MAPbI3 layers are about 380 nm, 460 nm and 530 nm, respectively. The images reveal that the perovskite material infiltrates well into the pores and that the carbon electrode adheres tightly to the perovskite film.
Figure 3. (a) Cross-sectional SEM image of the PSC device; (b) cross-sectional SEM image of perovskite grown on the TiO2/ZrO2 layer.
Figure 4e shows the XRD patterns of TiO2, TiO2/ZrO2, TiO2/Al2O3 and TiO2/SiO2 films. Except for the TiO2/ZrO2 film, which shows a ZrO2 tetragonal phase at 2θ ~ 29.2° [45], the XRD patterns of the remaining films are identical to that of the TiO2 film, without new peaks belonging to SiO2 or Al2O3. This means that the SiO2 or Al2O3 present in the TiO2 film is in the amorphous phase rather than the crystalline phase, which can be attributed to the higher sintering temperatures required to form those phases [46,47]. The XRD patterns of perovskite films coated on the different metal oxide films reveal similar features, indicating that the introduction of insulating layers does not affect the crystallization of the perovskite inside. The diffraction peak observed at around 12.7° corresponds to PbI2, resulting from the presence of excess lead iodide in the perovskite, which can contribute to the increase in the Voc of PSCs [48]. In addition, we studied the UV-Vis absorption spectra of the ETLs and of MAPbI3 perovskite films grown over the different insulating layers, as shown in Figure 5a,b, respectively. The absorption spectra of the scaffolds with various insulating layers exhibit negligible differences. The absorption of the perovskite films is slightly increased after the addition of the insulating layers, which may be caused by the increased thickness of the scaffold layers resulting from the introduction of the insulating layers, so that more perovskite can be loaded.
Figure 6e shows the J-V curves of the best-performing devices with different scaffold layers, with the corresponding photovoltaic parameters listed in the inset. The device with a ZrO2 insulating layer exhibits excellent performance, with a Voc of 0.995 V, a Jsc of 21.21 mA cm−2 and an FF of 70.91%, yielding a PCE of 14.96%. Figure 6f shows the incident photon-to-electron conversion efficiency (IPCE) spectrum of a C-PSC prepared on TiO2/ZrO2 film, together with the integrated Jsc calculated from the IPCE spectrum. The resulting integrated Jsc value is 20.92 mA cm−2, which is only ~1.4% lower than that of the champion cell (21.21 mA cm−2) in Figure 6e. The perovskite films deposited on the surface of the insulating layers, as shown in Figure 7a, have a steady-state PL intensity lower than that deposited on the TiO2 surface, suggesting that the introduction of the TOP layer increases the transport and extraction efficiency of the carriers. The strongest PL quenching occurred for the perovskite film deposited on TiO2/ZrO2, indicating that ZrO2 has better charge modulation capabilities. We further performed a time-resolved photoluminescence (TRPL) test of the perovskite films. The TRPL data are fitted by a biexponential decay model, and the corresponding lifetime values are listed in the inset of Figure 7b. The average carrier lifetime of the TiO2/perovskite film is 110.5 ns. After the introduction of the ZrO2 insulating layer on TiO2, the average carrier lifetime reduces to 82.5 ns, indicating improved charge transport. However, for the TiO2/Al2O3/perovskite and TiO2/SiO2/perovskite films, the carrier lifetime increases to 125.6 and 108.3 ns, respectively.
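The biexponential fitting mentioned above can be sketched in Python as follows; the trace, amplitudes and noise level below are synthetic, and the intensity-weighted average-lifetime formula is one common convention rather than necessarily the one used for the values quoted here.

```python
import numpy as np
from scipy.optimize import curve_fit

def biexp(t, a1, tau1, a2, tau2):
    """Biexponential decay model used to fit TRPL traces."""
    return a1 * np.exp(-t / tau1) + a2 * np.exp(-t / tau2)

def avg_lifetime(a1, tau1, a2, tau2):
    """Intensity-weighted average lifetime (one common convention)."""
    return (a1 * tau1**2 + a2 * tau2**2) / (a1 * tau1 + a2 * tau2)

rng = np.random.default_rng(0)
t = np.linspace(0.0, 600.0, 400)   # time, ns
# synthetic trace; amplitudes and lifetimes are illustrative only
y = biexp(t, 0.7, 25.0, 0.3, 140.0) + rng.normal(0, 0.002, t.size)

popt, _ = curve_fit(biexp, t, y, p0=[0.5, 10.0, 0.5, 100.0])
print("fitted taus: %.1f ns, %.1f ns" % (popt[1], popt[3]))
print("tau_avg = %.1f ns" % avg_lifetime(*popt))
```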
Figure 7. (a) Steady-state photoluminescence (PL) spectra and (b) time-resolved photoluminescence (TRPL) spectra of MAPbI3 films deposited on various scaffolds; (c) EIS spectra and their fitting curves of C-PSCs based on various insulating layers.
Figure 8. Steady-state photocurrent and PCE output as a function of time, held at a bias of 0.80 V under one-sun (100 mW cm−2) illumination, for the devices based on (a) TiO2/ZrO2 and (b) pristine TiO2.
Figure 9. Photovoltaic parameters versus storage time for a TiO2/ZrO2/MAPbI3/carbon solar cell stored under dry air with a humidity of 50% at room temperature without encapsulation.
Table 1. Average photovoltaic parameters of a total of 160 C-PSCs prepared with different scaffold layers. The error values represent the standard deviations.
Table 2. Impedance values of PSCs with different insulating layers. | 2023-09-28T15:11:37.905Z | 2023-09-26T00:00:00.000 | {
"year": 2023,
"sha1": "3d0c9ba2717f89f750474c9b7f28500430cf2784",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2079-4991/13/19/2640/pdf?version=1695706765",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "ef9415dd1b8407db4455a3548466e69efa3c05fe",
"s2fieldsofstudy": [
"Materials Science"
],
"extfieldsofstudy": []
} |
252857678 | pes2o/s2orc | v3-fos-license | Oral Self-Mutilation in Lesch–Nyhan Patients: A Cross-Sectional Study
Lesch–Nyhan syndrome (LNS) is a rare genetic condition resulting from an inherited disorder of purine metabolism. It is characterized by the lack of one enzyme, hypoxanthine-guanine phosphoribosyltransferase (HGPRT), which is responsible for purine salvage. The main manifestations of this syndrome are hyperuricaemia, reduction in cognitive abilities, self-aggressive behavior, choreoathetosis, spasticity, and retarded development. The aim of the study was to investigate the means of treatment and efficacy of prevention of oral self-injury behavior (SIB) in patients with LNS. Information regarding the type and treatment of oral SIB in 19 Italian LNS patients (mean age 23.3 years) was gathered via a structured telephone interview of their parents. A total of 84% of the patients showed some form of self-injury behavior; the first form to manifest itself was finger biting (37%), followed by lip biting (26%), and then tongue biting (21%). Furthermore, 74% of cases featured oral SIB, and tooth extraction was found to be the most frequent form of treatment practiced (71%). This study has revealed the great difficulty parents and carers face in managing forms of oral SIB; dental extraction was the most common choice, despite its invasive nature and far-reaching consequences for the psychosocial status of the patients.
Introduction
Lesch-Nyhan syndrome (LNS) [1] is a rare genetic pathology, whose incidence has been reported to range from 1:1,000,000 to 1:380,000 [2], although the number of known cases would seem to suggest that the incidence is lower [3]. Worldwide distribution of the disease is unknown, but it appears to be uniformly represented in terms of geographical location and ethnic origin. As it involves a defect in the X chromosome, the vast majority of affected patients are males.
LNS is caused by a genetic dysfunction in purine metabolism and is characterized by a lack of hypoxanthine-guanine phosphoribosyltransferase (HGPRT), an enzyme found in all tissues, especially in the brain, which is responsible for the purine salvage that catalyzes the reaction in which hypoxanthine and guanine are converted to their respective nucleotides [1,4,5].
Clinical manifestations of the disease are hyperuricaemia, reduction in cognitive ability, self-mutilation, choreoathetosis, spasticity, and retarded development [1,2,[6][7][8][9][10]. Self-harm, one of the peculiarities of this condition, induces the patients to consciously injure themselves in response to an unwanted but uncontrollable impulse, the product of the disease itself. These patients seek to cause themselves pain in a variety of ways, from the simplest to the most unusual and unpredictable; examples of this self-injury behavior include finger, lip, tongue, and cheek biting; banging the head, arms, or legs against obstacles; inserting fingers or other extremities into dangerous places; or placing their whole person in harm's way [1,8,11,12].
Patients are aware that their behavior against themselves, and others, is 'wrong', but are generally unable to control it; occasionally, they are able to warn their carers of an impending attack, but more often than not they fail, leaving them with a strong sense of disappointment and sorrow.
Like other forms of self-injury behavior (SIB), oral self-mutilation is seen more frequently in periods of emotional stress, when the patient is uneasy or unwell. The most frequently observed are lesions to the lips, cheek, tongue, and gums, as well as bites to other parts of the body, and self-extraction of teeth.
Various types of treatment have been proposed for oral self-injury, and these can be classed as pharmacological [13][14][15], orthodontic [16][17][18][19][20][21][22][23], or extractive, i.e., the extraction of teeth [24,25]. Pharmaceutical intervention is generally aimed at correcting the dopaminergic deficiency in the striatum, which is responsible for the self-injury behavior seen in LNS patients. Numerous orthodontic treatments have been suggested, aimed mainly at protecting the areas most affected by oral self-harm or covering the teeth in order to attenuate the effects of mastication during episodes. The most effective solution in these cases is the extraction of the permanent or deciduous teeth, although this can obviously have a great impact on the psychosocial outlook of the patient.
In this context, the objective of this cross-sectional study was to evaluate the incidence of oral SIB in these patients and to assess the frequency of the different treatments used to contrast it.
Materials and Methods
This was a cross-sectional study on an Italian population of LNS patients contacted through an organization of families affected by the disease (Lesch-Nyhan Group). Data collection was conducted by the Orthodontics and Pediatric Dentistry Department of the University of Genoa. The period of recruitment was between September 2017 and June 2018. The parents of 19 patients were contacted and asked to complete a structured interview reported in Table 1. The primary objective was to evaluate incidences of SIB and secondarily report the treatment options to manage this condition. The questions asked investigated the current and past clinical histories, the odontostomatological situation, and the occurrence of SIB. At the moment of interview, two of the patients were already deceased but were included in the study thanks to the information kindly provided by their parents.
Results
The age of the patients ranged from 5 to 46 years, with a mean age of roughly 23.3 years (±9.4); diagnosis of the condition had been made at a mean age of 4.7 years (±3.0).
Self-Injury Behavior
According to the parents, 16 of the 19 patients included in the sample displayed some form of self-injury behavior (84%). The first form of self-mutilation to manifest itself was found to be finger biting, 7/19 (37%), followed by lip biting, 5/19 (26%), and tongue biting, 4/19 (21%). A total of 14 of these 16 patients displayed oral SIB (74% of the whole sample) (Figure 1).
The three patients (16%) showing no signs of self-mutilation were the youngest, respectively 5, 8, and 9 years of age, the age range during which this kind of behavior generally begins to manifest itself.
A total of 42% (8/19) of all patients included in the sample displayed grinding behavior. Moreover, 71% (10/14) of the patients with oral SIB had consequently been treated by means of extraction, while orthodontic appliances were employed in 29% of cases (4/14).
Treatment for Self-Injury
The majority of patients had been subjected to extraction (71%); of the ten patients with extracted teeth in our sample, extraction had been performed for apparently therapeutic reasons in only six cases; two other patients had self-extracted, one had lost their teeth due to the lack of dental treatment, and another due to combined trauma and lack of treatment. The remaining patients had been treated with orthodontic appliances, for example an acrylic maxillary device designed and constructed with an occlusal plate raising the bite, or a soft resin mouth guard (29%). Other types of treatment had been used with limited success, including filing the teeth to make them less likely to cut the oral and perioral tissues (14%); painkilling drugs (7%) for palliative rather than symptomatic relief; and obstructive methods such as placing objects between the teeth to prevent biting lesions (36%) (Figure 2).
Discussion
The syndrome was diagnosed at a mean age of 4.7 years (±3.0), often upon the birth of a second child affected by the same syndrome. Only five patients were diagnosed in their first year of life, eight patients received their diagnosis between one and six years of age, and the remainder were provided with a precise diagnosis after seven years of age.
Three pairs of siblings suffering from the same condition were among the 19 patients recruited. This appears to be a fairly frequent occurrence due to the lack of a precise diagnosis in the firstborn; indeed, it is often only the birth of the second afflicted child that reveals the existence of the pathology.
Considering lifetime self-injury behavior, the above-mentioned results are similar to those published by Anderson et al., who reported permanent physical damage, with a compulsive character, in 90% of patients [26].
The first form of self-injury to manifest itself in the group was finger biting (37%), followed by lip biting (26%), and finally tongue biting (21%). Other, less frequent forms of self-harm were general biting and throwing oneself backwards, as well as banging the head (5% for each).
These kinds of lesions are those typically observed in LNS patients and have been reported in other studies [27].
Of these 16 self-harming patients, 14 (74% of the whole sample) displayed oral self-injury behavior. However, the two patients without oral self-mutilation did display tooth grinding behavior in times of great emotional stress, during fever, or when ill. In fact, it is worth noting that the majority of patients displayed, or had displayed, more than one form of self-injury behavior at the same time, or had developed various forms over the years.
In accordance with Anderson, who reported permanent damage in over 45% of those patients, the majority of patients in the present study (86%) also showed permanent lesions following oral self-injury behavior. The most prevalent of these were of the upper lip; six patients (50%) had bitten their lower lip completely off, while the others displayed at least one scar.
Biting scars were also found inside the cheeks, on the buccal mucosa, in seven patients and, less frequently, presumably due to its great regenerative capacity, on the tongue. This notwithstanding, as their parents reported, two patients had completely bitten off a portion of their tongue: one the tip, using the incisors, and the other two lateral portions, the anterior using the canines and the posterior using the molars. Two patients also had resection scars on the fingers and missing fingernails, and one patient had a scar accompanied by missing tissue at the left nostril.
Bruxism and tooth grinding were very common in these patients. As previously mentioned, two patients who did not display other forms of self-injury ground their teeth at night, and at times of emotional stress. In addition to these two patients, grinding was reported in five other patients who displayed oral SIB, and in one who displayed a non-oral pattern of self-harm. Thus, 8 out of the 19 LNS patients considered presented grinding (42%).
Various solutions have been proposed to prevent oral self-injury. These involve positioning orthodontic or other devices between the teeth, pharmacological therapy, and tooth extraction. However, the management of these cases is difficult [28,29], and parents must be involved in the clinician's decision-making process [30].
Some of the prevalent non-orthodontic means of preventing biting injuries were reported to be dummies (pacifiers); a roll of gauze, buccal shields, or other cloth or rubber objects placed between the teeth; and a plaster to stick the lower lip to the chin, thereby distancing it from the teeth.
The literature contains a variety of case reports proposing the use of various appliances for limiting oral SIB; for instance, a soft mouth guard fabricated to prevent the destruction of the perioral soft tissues, combined with psychiatric pharmacologic therapy, has been shown to give satisfactory results [16][17][18][19][20][21][22][23]. On the whole, these have been successful at protecting the oral tissues and other parts of the body. Extraction of the teeth, on the other hand, is the most invasive of the treatment options available and leads to significant oral disability, even though it improves the patients' lesions. Despite this, it was the most commonly observed solution in our study, probably because it offers an immediate solution to the patient's problem. Indeed, the telephone interviews conducted in this study revealed that 6 out of 19 patients were not regularly (once a year) examined by a dentist (31%). Of the remaining 13 carers interviewed, 5 would not or could not respond to this question, suggesting that this percentage may, in fact, be far higher.
Interestingly, two pairs of parents who had consented to this procedure being performed on their children, despite having been informed about the tragic consequences, now regret their choice.
In our sample, the extractions had been performed at different ages; several had been carried out in childhood, with the entire set of deciduous teeth being extracted, while the majority had been performed in permanent dentition, either extracting all the teeth in one sitting under general anesthesia, or extracting the teeth, or parts of teeth, progressively on the basis of the lesions provoked.
Following the extraction of all his deciduous teeth, one patient stopped biting himself and showed no further oral problems, thereby obviating the need for extraction of the permanent teeth; this could be an interesting approach to investigate further, in order to confirm whether it can effectively reduce or eliminate SIB in the permanent dentition.
Conclusions
• Self-injury behavior in Lesch-Nyhan patients represents a severe management problem that both parents and oral health carers have to face. The reported incidence of oral SIB is 74%, and it led to permanent damage in 86% of cases. Among the different therapeutic options, including oral appliances and drug administration, dental extraction is the most frequently chosen therapy (71%), despite its invasive approach; • This cross-sectional study shows how extraction therapy is extremely distressing for patients and suggests that the use of the orthodontic appliances reported in the literature should be preferred wherever possible, in order to improve the patients' quality of life.
Funding: This research received no external funding.
Institutional Review Board Statement:
The study was conducted in accordance with the Declaration of Helsinki, and approved by the Institutional Review Board of University of Genoa, Genoa, Italy (121/July 2021). | 2022-10-13T15:08:57.537Z | 2022-10-01T00:00:00.000 | {
"year": 2022,
"sha1": "9f4573381a736770503abea4a8d2d4d8f2edcb39",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2077-0383/11/20/5981/pdf?version=1665482906",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "554c71238b393ebc37250b2b9d62093ec51c1d9d",
"s2fieldsofstudy": [
"Medicine",
"Psychology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
236782215 | pes2o/s2orc | v3-fos-license | Gas Diffusion Model and Its Application based on CFD Theory
In order to reduce the loss and impact of unexpected leakage of hazardous chemicals, this paper makes use of computational fluid dynamics theory and applies the commercial software FLOVENT to simulate, in both steady and transient states, the diffusion scope and concentration change of NH3 and H2 in six scenarios with different wind directions and leak locations; furthermore, it takes the simulated results as the basis to analyze the diffusion length and impact scope of the characteristic concentrations, both of which serve as evidence for occupant evacuation.
Introduction
With rapid economic development, enterprises are using more and more hazardous chemicals every day. As many enterprises manage the daily storage, transportation and usage of hazardous chemicals poorly, leaks, which can be caused by various factors, repeatedly lead to fires, explosions and other accidents. In particular, there are always large quantities of hazardous chemicals piled at storage sites, where accidents often cause greater negative impacts on society and the environment. Therefore, enterprises should enhance their daily control and management as well as take immediate measures to handle leaks and other accidents in order to minimize losses and impacts. To this end, it is necessary to conduct quantitative simulations of leaked hazardous chemicals and then draw up a proper emergency disposal scheme according to the simulated results, so as to reduce accident losses to a minimum.
Since computer-aided engineering (CAE) tools have developed rapidly throughout the world in recent years, computational fluid dynamics (CFD) has become one of the principal analytical techniques [1][2][3][4]. This paper takes the simulated results as the basis to analyze the diffusion length and impact scope of the characteristic concentrations, both of which serve as evidence for occupant evacuation.
Establishment of Analytical Model
Firstly, fluid mechanics simulation software is applied to simulate the maximum scope of influence when the concentration of the leaked gas in the 3D space of the factory reaches LC50, IDLH, ERPG-2 and TWA, respectively. The simulated results can be taken as evidence for the enterprise to delimit the death zone, restricted zone, evacuation zone and polluted zone for disaster prevention and emergency rescue. Moreover, the concentration changes with respect to time in the simulated results can be taken as evidence for personnel evacuation (WANG Xue-qi, et al, 2013; Qiao Lin, 2012; HE Xiuying, 2007) [5][6][7].
Analysis process
In general, the analysis process of CFD is displayed in Fig. 1, and the six simulated events are listed in Table 1. These six events are simulated in both steady and transient states. The steady-state simulation is performed to determine the maximum scope of influence of the leaked gas once it reaches the steady state, while the transient simulation is performed to work out the rescue time and evacuation distance from the changes in concentration with respect to time. The transient state is simulated for 10 min in total, with the result being output every 20 s in the first 3 min and every 30 s in the later 7 min; if all the gas leaks out within 10 min, the flow velocity will be 77.5 m/s, and the gas leaks downwind. Before the transient simulation is performed, a pre-run of 1 h is set to stabilize the wind field so that the model sits in a proper wind field. In the setting of the computational domain, the model is simulated under conditions of 1 atm and 20 °C; in order to capture the realistic influence of the surface boundary layer, the height of the computational domain is assumed to be 200 m and the direction of gravity is vertical to the ground (-Z), so that the two boundaries can be calculated, the wind speed set, and a uniform wind direction simulated.
In the meantime, the medium is assumed to be air at 20 °C. The flow is assumed to be incompressible and Newtonian, with a fixed viscosity coefficient; thermal radiation is not considered, while the buoyancy effect is taken into account.
Analysis Grid
In order to capture the surface boundary-layer effect, the mesh is refined close to the ground. To make the simulation of the flow field effective, part of the field area is locally refined. The grid contains 816,715 cells in total.
Steady State Model
In this study, air is the research object and the steady-state flow field is the computing mode, so the applied governing equations are expressed in tensor form as follows (Yoshihide Tominaga, et al, 2008) [11].

Equation of continuity:

$$\frac{\partial \bar{u}_i}{\partial x_i} = 0$$

Equation of momentum:

$$\bar{u}_j \frac{\partial \bar{u}_i}{\partial x_j} = -\frac{1}{\rho}\frac{\partial \bar{p}}{\partial x_i} + \frac{\partial}{\partial x_j}\left(\nu \frac{\partial \bar{u}_i}{\partial x_j} - \overline{u_i' u_j'}\right)$$

The speed and pressure are the sums of their respective time-average terms and disturbing terms:

$$u_i = \bar{u}_i + u_i', \qquad p = \bar{p} + p'$$
In the flow field, the time-average property of any variable $\phi$ is defined as follows:

$$\bar{\phi} = \frac{1}{\Delta t}\int_{t}^{t+\Delta t} \phi \, dt$$

The time average of the product of the disturbing terms, $-\rho \overline{u_i' u_j'}$, is called the Reynolds stress term, which is an unknown term and is solved in the turbulence model.
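The decomposition above can be illustrated numerically. The following minimal Python sketch splits a synthetic velocity signal, with a mean wind of 2.14 m/s as in Event One and an assumed fluctuation level, into its time-average and disturbing terms and evaluates one Reynolds normal-stress component.

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0.0, 10.0, 2001)              # time, s
u = 2.14 + 0.3 * rng.standard_normal(t.size)  # mean wind + fluctuation

u_mean = u.mean()        # time-average term, u-bar
u_prime = u - u_mean     # disturbing (fluctuating) term, u'

# one Reynolds (normal) stress component, <u'u'>
uu = np.mean(u_prime * u_prime)
print(f"u_mean = {u_mean:.3f} m/s, <u'u'> = {uu:.4f} m^2/s^2")
```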
Turbulence Model
In this study, the standard $k$-$\varepsilon$ turbulence model is adopted, in which the Reynolds stress is modeled with the Boussinesq approximation (Sandra C K, et al, 2008) [12]:

$$-\overline{u_i' u_j'} = \nu_t\left(\frac{\partial \bar{u}_i}{\partial x_j} + \frac{\partial \bar{u}_j}{\partial x_i}\right) - \frac{2}{3}k\delta_{ij}$$

where the unknown term is the eddy viscosity $\nu_t$, which is set as

$$\nu_t = C_\mu \frac{k^2}{\varepsilon}$$

Here $k$ and $\varepsilon$ are obtained from their respective transport equations, with the standard model constants $C_\mu = 0.09$, $C_{1\varepsilon} = 1.44$, $C_{2\varepsilon} = 1.92$, $\sigma_k = 1.0$ and $\sigma_\varepsilon = 1.3$.
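As a small worked example of the closure, the sketch below evaluates the eddy viscosity for illustrative near-ground values of $k$ and $\varepsilon$; the input values are assumptions, while $C_\mu = 0.09$ is the standard model constant.

```python
# Eddy viscosity of the standard k-epsilon model: nu_t = C_mu * k^2 / eps.
# C_mu = 0.09 is the standard constant; k and eps below are illustrative.
C_MU = 0.09

def eddy_viscosity(k: float, eps: float) -> float:
    """nu_t [m^2/s] from turbulence kinetic energy k [m^2/s^2]
    and dissipation rate eps [m^2/s^3]."""
    return C_MU * k**2 / eps

k, eps = 0.05, 0.01
print(f"nu_t = {eddy_viscosity(k, eps):.4f} m^2/s")  # prints 0.0225 m^2/s
```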
Results analysis
In this study, the software FLOVENT is applied to simulate, in steady and transient states, the diffusion scope and concentration change of NH3 and H2 in six scenarios with different wind directions and leak locations; furthermore, the simulated results are taken as the basis to analyze the diffusion length and impact scope of the characteristic concentrations, both of which serve as evidence for occupant evacuation [13][14][15][16]. For NH3, we observe its scope of influence when its index concentration is 4837 ppm (LC50), 300 ppm (IDLH), 150 ppm (ERPG-2) and 50 ppm (TWA), respectively. For H2, we observe its scope of influence when its index concentration is 8000 ppm (1/5 LEL).
Here, ERPG-2 is used to calculate the radius of the scope of influence, the 50% lethal concentration (LC50) to calculate the radius of the death zone, and the time-weighted average (TWA) to calculate the radius of the pollution zone.
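A minimal sketch of how a simulated concentration value can be mapped onto these planning zones is given below, using the NH3 index concentrations quoted above; the sample concentrations are arbitrary placeholders, and the mapping itself is an illustration rather than part of the FLOVENT workflow.

```python
# Map a simulated NH3 concentration (ppm) onto the planning zones named
# in the text; thresholds are the quoted NH3 index concentrations.
THRESHOLDS = [                        # (zone, lower bound in ppm)
    ("death zone (LC50)", 4837.0),
    ("restricted zone (IDLH)", 300.0),
    ("evacuation zone (ERPG-2)", 150.0),
    ("polluted zone (TWA)", 50.0),
]

def classify(ppm: float) -> str:
    """Return the most severe zone whose threshold is reached."""
    for zone, bound in THRESHOLDS:
        if ppm >= bound:
            return zone
    return "outside planning zones"

for c in (10.0, 80.0, 200.0, 6000.0):   # arbitrary sample values
    print(f"{c:7.1f} ppm -> {classify(c)}")
```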
Transient state simulation
Event One: wind flows from the northeast at a speed of 2.14 m/s; NH3 is leaked; and the accident happens at Site One. After an hour of pre-run, the wind field is almost steady.
Simulation of steady state
Table 4 lists the maximum influence ranges of all index concentrations from the steady-state simulation; Figure 2 shows the corresponding distribution of the index concentrations. Compared with the transient-state simulation of the maximum influence range and concentration distribution 10 min after the leak, the steady-state simulation gives a smaller maximum influence range for concentrations between 20 ppm and 4837 ppm, but a larger distribution range. With the leak site as the center and the maximum influence range of each index concentration as the radius, the maximum influence range of all index concentrations in the steady state can be drawn (see Figure 2). This diagram can be taken as evidence for personnel evacuation after an NH3 leak, to plan the evacuation route and site (LI Yue, et al, 2013; Tang Jing-yin, 2012; Chen Cheng, et al, 2013) [17][18][19].
Conclusion
Quite different from large-scale atmospheric diffusion, the diffusion of a leaked hazardous gas is affected not only by the storage status, storage conditions and leak features but also by the wind speed, wind direction and terrain, so the law of leakage diffusion cannot be described exactly with a single model. In this paper, only the hazardous gas diffusion model is preliminarily analyzed, but the research results can help enterprises conduct gas diffusion simulations using the 3D flow field and model, and the simulation results can be taken as evidence for emergency rescue, so that appropriate measures can be carried out to minimize losses when accidents happen. | 2021-08-03T20:03:54.126Z | 2021-01-01T00:00:00.000 | {
"year": 2021,
"sha1": "0e464d452081b0e9dce25ce2117de3537f1768a6",
"oa_license": null,
"oa_url": "https://doi.org/10.1088/1742-6596/1985/1/012078",
"oa_status": "GOLD",
"pdf_src": "IOP",
"pdf_hash": "0e464d452081b0e9dce25ce2117de3537f1768a6",
"s2fieldsofstudy": [
"Engineering"
],
"extfieldsofstudy": [
"Physics"
]
} |
11907526 | pes2o/s2orc | v3-fos-license | Fibroblast-derived CXCL12/SDF-1α promotes CXCL6 secretion and co-operatively enhances metastatic potential through the PI3K/Akt/mTOR pathway in colon cancer
AIM To investigate the underlying mechanism by which CXCL12 and CXCL6 influence the metastatic potential of colon cancer, and the interplay between colon cancer and stromal cells. METHODS Western blotting was used to detect the expression of CXCL12 and CXCL6 in colon cancer cells and stromal cells. The co-operative effects of CXCL12 and CXCL6 on proliferation and invasion of colon cancer cells and human umbilical vein endothelial cells (HUVECs) were determined by enzyme-linked immunosorbent assay, and proliferation and invasion assays. The angiogenesis of HUVECs through interaction with cancer cells and stromal cells was examined by angiogenesis assay. We eventually investigated the activation of PI3K/Akt/mTOR signaling by CXCL12 involved in the metastatic process of colon cancer. RESULTS CXCL12 was expressed in DLD-1 cancer cells and fibroblasts. The secretion of CXCL6 by colon cancer cells and HUVECs was significantly promoted by fibroblast-derived CXCL12. CXCL6 and CXCL12 significantly enhanced HUVEC proliferation and migration (P < 0.01). CXCL6 and CXCL12 enhanced angiogenesis by HUVECs when cultured with fibroblast cells and colon cancer cells (P < 0.01). CXCL12 also enhanced the invasion of colon cancer cells. Stromal cell-derived CXCL12 promoted the secretion of CXCL6 and co-operatively promoted metastasis of colon carcinoma through activation of the PI3K/Akt/mTOR pathway. CONCLUSION Fibroblast-derived CXCL12 enhanced the CXCL6 secretion of colon cancer cells, and both CXCL12 and CXCL6 co-operatively regulated metastasis via the PI3K/Akt/mTOR signaling pathway. Blocking this pathway may be a potential anti-metastatic therapeutic strategy for patients with colon cancer.
INTRODUCTION
Colon cancer is the fourth most frequently diagnosed cancer in the United States. In 2015, an estimated 93090 new cases of colon cancer occurred in the United States. During that same year, it was estimated that 49700 patients died from colon and rectal cancers [1] . The poor prognosis of colon cancer is attributable to its tendency of metastases. However, the precise mechanisms that determine the directional proliferation and invasion of cancer cells into specific organs remain to be established [2,3] . Therefore, exploring the fundamental mechanism of invasion, proliferation, metastasis and tumor biological behaviors at the level of cellular or molecular microenvironments is needed in clinical diagnosis and therapy.
Chemokines (chemotactic cytokines) form a complex family of small, secreted proteins that play an important role in innate and adaptive immunity, homeostatic processes, angiogenesis and tumorigenesis [4,5] . Based upon the position of conserved cysteine residues, chemokines are classified into four subfamilies (C, CC, CXC, CX3C) [6] . CXC chemokines have been proven to modulate tumor behaviors, especially in regulation of angiogenesis, activation of a tumor-specific immune response and stimulation of tumor cell proliferation in an autocrine or paracrine fashion [7] . However, updated research has shed new light on this subfamily of cytokines, indicating that its members have multifaceted roles in the microenvironment that consists of the tumor cells themselves and/or stromal cells, including infiltrating leukocytes, endothelial cells (ECs) and fibroblasts.
The functions of CXC chemokines in the tumor microenvironment depend considerably on the chemokine type and tumor and stromal cells' characteristics. In addition, there are cases in which chemokines have been implicated as having tumor-inhibiting gene activities, and there are many more examples of CXC chemokines with tumor-promoting roles [8][9][10][11] . Two of the most famous members are the stromal cellderived factor-1 (SDF-1/CXCL12/IL12) and chemokine ligand 6 (CXCL6). Numerous studies have shown that their activities would increase the establishment of tumorigenesis, invasion, proliferation and metastases. Recent analysis has shown that CXCL12 supports the survival or growth of a variety of normal or malignant cell types, including hematopoietic progenitors, germ cells, leukemia B cells and breast carcinoma cells [12][13][14][15] .
Other studies have shown that the CXCL12/CXCR4 and related axis are involved in tumor metastasis to sites which are characterized by high production of CXCL12, such as liver, lung and bone marrow [16,17] . Activation of the CXCL12/CXCR4 signaling axis leads to chemotaxis, cell survival, and/or proliferation; however, the downstream signaling cascades are tissue-specific and not well characterized in EC [18] . CXCL6, a small cytokine belonging to the CXC chemokine family, is also known as granulocyte chemotactic protein 2. As its former name suggests, CXCL6 is a chemoattractant for neutrophilic granulocytes [13][14] . It elicits its chemotactic effects by interacting with the chemokine receptors CXCR1 and CXCR2. This tumor progression may occur as a function of the regulation of angiogenesis, cell motility, immune cell infiltration, cell growth and survival in the microenvironment, and modulation of local anti-tumor immune responses [19] . As evidenced by various experiments, CXCL6 is over expressed in colorectal, breast, lung and thyroid cancers. Actions of tumor cells in the microenvironment were also regulated by complicated molecular mechanisms [20][21][22][23] . Different chemokines played their specific roles. Both the angiogenesis-promoting effect of CXCL6 and chemotactic effect of CXCL12 play important roles in tumorigenesis and metastasis [24,25] . However, the molecular mechanisms of the active signaling pathway by which CXCL12 and CXCL6 co-operatively regulate metastasis of colon cancer remain to be clarified.
The purpose of this study was to investigate the cooperative promotion of metastatic potential and the underlying mechanism of CXCL12 and CXCL6 in order to better understand the interaction between colon cancer cells and stromal cells. Furthermore, our study provided data to demonstrate that phosphatidylinositol 3-kinase (PI3K)/Akt/mTOR signaling pathway plays an important role in CXCL12 simulation and that this process is involved in the development and metastasis of colon cancer. Understanding the biologic mechanisms responsible for regulation of chemokines may enable better molecular targeted therapies to treat patients with metastatic colon cancer.
Western blot analysis
Cells were cultured in media supplemented with 1% FBS for 1 d. After the indicated treatments, the cells were lysed in lysis buffer.
Enzyme-linked immunosorbent assay
All cancer cell lines and fibroblasts were separately seeded at a density of 3 × 10^5 cells/mL into 12-well plates containing medium with 10% FBS and allowed to adhere overnight. The medium was exchanged, and the cells were cultured for an additional 48 h. The medium was collected and microcentrifuged at 1500 rpm for 5 min to remove particles, and the supernatants were frozen at -80 ℃ until the enzyme-linked immunosorbent assay (ELISA) was performed. The concentration of CXCL6 was measured by ELISA kit (R&D Systems) according to the manufacturer's instructions. In order to further investigate the synergistic effect of the tumor-stromal interaction, we tested the effect of fibroblast-derived CXCL12 on cancer cell CXCL6 production using a double-chamber method in 24-well plates. Fibroblasts were seeded at a density of 1 × 10^5 cells/well into 24-well plates and allowed to adhere overnight. The medium was exchanged with or without CXCL12 Ab, and the fibroblasts were then co-cultured with 5 × 10^4 HT-29, WiDr, CaCo-2, DLD-1 or fibroblast cells that had been placed into inserts with 0.45-µm pores (Kurabo Co.). The co-culture systems were incubated for an additional 48 h, and the CXCL6 concentration was subsequently measured as described above. Each condition was assessed using 5 independent samples.
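ELISA readouts are typically converted to concentrations through a standard curve; the following minimal Python sketch fits a four-parameter logistic curve to a hypothetical CXCL6 standard series and inverts it for an unknown sample. The standard concentrations, optical densities and curve model are illustrative assumptions, not the R&D Systems kit's documented procedure.

```python
import numpy as np
from scipy.optimize import curve_fit

def four_pl(x, a, b, c, d):
    """4-parameter logistic curve commonly used for ELISA standards."""
    return d + (a - d) / (1.0 + (x / c) ** b)

# hypothetical CXCL6 standard series (pg/mL vs optical density)
std_conc = np.array([7.8, 15.6, 31.25, 62.5, 125, 250, 500, 1000])
std_od = np.array([0.08, 0.15, 0.28, 0.50, 0.85, 1.30, 1.80, 2.20])

popt, _ = curve_fit(four_pl, std_conc, std_od,
                    p0=[0.05, 1.0, 200.0, 2.5], maxfev=10000)

def od_to_conc(od, a, b, c, d):
    """Invert the fitted 4PL curve to estimate sample concentration."""
    return c * ((a - d) / (od - d) - 1.0) ** (1.0 / b)

print(f"sample at OD 0.60 ~ {od_to_conc(0.60, *popt):.0f} pg/mL")
```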
Proliferation assay
To confirm the effect of chemokines on HUVEC proliferation, we performed the proliferation assay according to the manufacturer's instructions, seeding HUVECs at a density of 5 × 10^3 cells/100 µL. To investigate the influence of cell-derived CXCL12 on tubular formation by HUVECs, the colon cancer cells (DLD-1, which secretes CXCL12, or CaCo-2 and HT-29, which do not), HUVECs, and fibroblasts were co-cultured using a double-chamber method in 24-well plates. DLD-1, CaCo-2 or HT-29 cells (5 × 10^4 cells) were seeded into transwell chambers, consisting of polycarbonate membranes with 0.45-µm pores, and allowed to adhere overnight. The transwell chambers were then placed in the HUVEC/fibroblast co-culture system with or without 10 ng/mL of CXCL12 or CXCL12 Ab, and the medium was exchanged on the sixth day. All cells were cultured for a total of 11 d. HUVEC tubular formation was assessed as described above. This assay allowed us to evaluate angiogenesis quantitatively and examine tumor-stromal interactions through soluble cytokines.
Statistical analysis
Data are presented as mean ± SD. Differences in the mean of two groups were analyzed by an unpaired t-test. Multiple group comparisons were performed by one-way ANOVA with a post hoc test for subsequent individual group comparisons. P < 0.05 was considered statistically significant. Mean values and SD were calculated for experiments performed in triplicate (or more).
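A minimal sketch of the statistical workflow described above, on synthetic placeholder data, is given below. The figure legends elsewhere in the paper mention the SNK post hoc test; Tukey's HSD is used here as a readily available stand-in, and scipy.stats.tukey_hsd requires SciPy ≥ 1.11.

```python
# Unpaired t-test for two groups; one-way ANOVA plus a post hoc test
# for multiple groups. All data below are synthetic placeholders.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
g_ctrl = rng.normal(1.0, 0.2, 5)    # e.g. untreated wells
g_cxcl6 = rng.normal(1.5, 0.2, 5)   # e.g. CXCL6-treated wells
g_cxcl12 = rng.normal(1.8, 0.2, 5)  # e.g. CXCL12-treated wells

t_stat, p_two = stats.ttest_ind(g_ctrl, g_cxcl6)   # unpaired t-test
print(f"t-test: t = {t_stat:.2f}, P = {p_two:.4f}")

f_stat, p_anova = stats.f_oneway(g_ctrl, g_cxcl6, g_cxcl12)
print(f"one-way ANOVA: F = {f_stat:.2f}, P = {p_anova:.4f}")

print(stats.tukey_hsd(g_ctrl, g_cxcl6, g_cxcl12))  # pairwise post hoc
```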
Expression of CXCL12, CXCL6 and CXCR4 proteins in colon cancer cell lines and stromal cells
Western blotting results revealed that CXCL12 protein was only expressed in fibroblasts and DLD-1, but not in HT29, WiDr, CaCo-2, Colo320 and HUVECs. CXCR4 and CXCL6 were expressed in all colon cancer cell lines, fibroblasts and HUVECs ( Figure 1).
Effect of CXCL12 on the secreted level of CXCL6 from colon cancer cell lines and HUVECs
The secreted CXCL6 level was measured by ELISA assay in colon cancer cell lines and stromal cells.
Invasion assay
The effects of CXCL12, CXCL6 and co-culture with fibroblasts or colon cancer cells (DLD-1) on the invasive capability of HUVECs were determined using Matrigel-coated invasion chambers (Becton Dickinson, Bedford, MA, United States) according to the manufacturer's instructions. This system is separated by a PET membrane coated with Matrigel Matrix such that only invasive cells can migrate through the membrane to the reverse side. HUVECs (5 × 10^4 cells/mL) were suspended in medium containing 2% FBS and seeded into the Matrigel pre-coated transwell chambers consisting of polycarbonate membranes with 8-µm pores, and fibroblasts or DLD-1 cells were seeded at a density of 2 × 10^5 cells/well into the inner chambers of 24-well plates; the transwell chambers were then placed into 24-well plates, to which we added basal medium only or basal medium containing gradient concentrations of CXCL6 (0 ng/mL, 0.1 ng/mL, 1 ng/mL, 10 ng/mL, 10 ng/mL + 10 µg/mL CXCL6 Ab) or CXCL12 (0 ng/mL, 0.1 ng/mL, 1 ng/mL, 10 ng/mL, 10 ng/mL + 10 µg/mL CXCL12 Ab). After incubating cancer cells for 24 h and HUVECs for 16 h, the upper surface of the transwell chambers was wiped with a cotton swab and the invading cells were fixed and stained with Diff-Quick stain. The number of invading cells was counted in five random microscopic fields of the lower filter surface under a microscope at 200 × magnification.
Each condition was assessed in triplicate.
Angiogenesis assay
To investigate the influence of CXCL6 on tubule formation by HUVECs, HUVECs and fibroblasts were co-cultured in basal medium using an angiogenesis kit (Kurabo Co.) according to the manufacturer's protocol. First, HUVECs and fibroblasts were co-cultured in 24-well plates with basal medium. The media were exchanged every 2 d, with co-incubation continuing for a total of 11 d. The co-culture system was stained with anti-CD31 Ab. The areas of angiogenesis were measured quantitatively over ten different microscopic fields for each well using an image analyzer (Kurabo Co.).
Angiogenic activity during co-culture with colon cancer cells in vitro
Co-culture with fibroblasts enhanced the CXCL6 secretion level in the HUVEC culture supernatants as well (P < 0.01), because fibroblasts secrete CXCL12 protein. Furthermore, the enhanced CXCL6 production elicited by co-culture with fibroblast cells or by recombinant CXCL12 was significantly inhibited in the presence of CXCL12 Ab (P < 0.01).
HUVEC proliferation following treatment with CXCL6, CXCL12 and fibroblast cell-cultured supernatants
To create stromal cell supernatants, fibroblast cells were seeded to a final number of 5 × 10^6 cells/5 mL into 100-mm dishes containing medium with 10% FBS, and were cultured overnight. Cells were then cultured in medium containing 2% FBS for 48 h. The culture media were collected and microfuged at 1500 rpm for 5 min to remove any particles, and the supernatants were used in proliferation assays. Recombinant CXCL6 elicited enhanced proliferation of HUVECs in a dose-dependent manner, and co-culture with fibroblasts caused significantly enhanced HUVEC proliferation (P < 0.05, P < 0.01; Figure 3A). Recombinant CXCL12 also promoted the proliferation of HUVECs in a concentration-dependent manner (P < 0.05, P < 0.01; Figure 3B).
CXCL6 and CXCL12 promotion of colon cancer cell and HUVEC invasiveness
The invasion assay was used to investigate whether CXCL12 and CXCL6 influence the invasiveness of colon cancer cell lines. The invasive capacity of HT-29 cells was promoted by stimulation with recombinant CXCL6 (Figure 4A) and CXCL12 (Figure 4B) in a concentration-dependent manner (P < 0.05, P < 0.01), and 10 ng/mL of CXCL6 and CXCL12 significantly promoted cancer cell invasion (P < 0.01). Interestingly, CXCL6 (Figure 4C) and CXCL12 (Figure 4D) also significantly enhanced the invasion of HUVECs in a dose-dependent manner (P < 0.05, P < 0.01). However, the invasive behavior of HUVECs upon CXCL6 stimulation was more pronounced than upon CXCL12 stimulation. The enhancement of the invasive ability of HUVECs by CXCL6 and CXCL12 stimulation was blocked by pre-incubating HUVECs with neutralizing anti-CXCL6 and anti-CXCL12 Ab, respectively (P < 0.05, P < 0.01; Figure 4C).
Effect of co-culturing with fibroblasts and DLD-1 cells on HUVEC invasiveness
To investigate the interaction between colon cancer and stromal cell-derived CXCL12 in the tumor microenvironment, we next examined the role of cell-derived CXCL12 in HUVEC invasiveness using the Matrigel double-chamber invasion assay.
The invasive capability of HUVECs was enhanced by co-cultivation with fibroblasts (P < 0.01; Figure 4E) and DLD-1 cells (P < 0.01; Figure 4F); meanwhile, the enhancement of HUVEC invasive behavior was inhibited by neutralizing anti-CXCL12 Ab (P < 0.01), and the addition of recombinant CXCL6 significantly enhanced HUVEC invasiveness in the co-cultivation with fibroblasts system as well (P < 0.01; Figure 4E). At the same time, co-cultivation with CaCo-2 cells did not significantly increase the invasion of HUVECs.
CXCL6 and CXCL12 enhancement of tube formation
To further determine the role of CXCL12 and CXCL6 in the living cell microenvironment, we focused on the interaction between tumor cells and stromal cells by characterizing angiogenic activity in co-cultured fibroblasts and vascular ECs, and the effect of CXCL6 and CXCL12 in this system. Initially, we measured the influence of CXCL6 and CXCL12 on tube formation by HUVECs. HUVEC tube formation was significantly enhanced in a dose-dependent manner following treatment with CXCL6 (P < 0.01; Figure 5A) and CXCL12 (P < 0.01; Figure 5B). The enhanced angiogenesis of HUVECs was inhibited by the addition of neutralizing anti-CXCL6 and anti-CXCL12 Ab (P < 0.01).
Effect of colon cancer cells with or without CXCL12 on tube formation by HUVECs
In order to explore how the different CXCL12 secretion of colon cancer cells influences tube formation by HUVECs, we cultured three cell lines using the double-chamber method to determine the interaction among them. Tubular formation was significantly enhanced by co-culture with DLD-1 cells compared with the control (HUVECs and fibroblasts only) or with co-culture with HT-29 and CaCo-2 cells, respectively (P < 0.01; Figure 5C). Moreover, CXCL12 and CXCL6 significantly promoted tubular formation in the co-culture systems with HT-29 and CaCo-2 cells (P < 0.01). In contrast, the enhanced tubular formation by HUVECs was significantly inhibited by the addition of anti-CXCL12 Ab in co-culture with DLD-1 cells (P < 0.01).
Activation of the PI3K/Akt/mTOR signaling pathway after CXCL12 stimulation in colon cancer and stromal cells
We used the colon cancer cell line HT-29 and stromal cell HUVECs to examine activation of the PI3K/Akt/mTOR signaling pathway, a downstream target of CXCL12. Stimulation with 10 ng/mL of CXCL12 increased Akt (Figure 6A), PI3K (Figure 6B) and mTOR (Figure 6C) phosphorylation in a time-dependent manner in HT-29 cells and HUVECs. To determine the role of mTOR, we investigated the effect of CXCL12 and/or PI3K/Akt kinase inhibitors on the activation of mTOR in colon cancer cells and HUVECs. HT-29 cells and HUVECs were pretreated for 60 min with PI3K/Akt inhibitors and then stimulated overnight with CXCL12 (100 ng/mL). The extracted proteins were separated by SDS-PAGE and transferred to membranes, and the membranes were probed with Ab directed against phospho-mTOR and total mTOR. We found that the CXCL12-mediated increase in phospho-mTOR was inhibited by 50 µmol/L PI3K inhibitor (LY294002) and 50 µmol/L Akt kinase inhibitor. These data indicate that CXCL12 regulates PI3K/Akt/mTOR signaling pathway activity and suggest that this pathway could participate in the regulation of the metastatic behavior of colon cancer cells (Figure 6D).
DISCUSSION
Many tumors produce chemokines, which may explain the presence of the tumor-associated microenvironment. However, the role of these chemokines in tumor biology is still unclear. Chemokines form a complex family of small, secreted proteins that play important roles in innate and adaptive immunity, homeostatic processes, angiogenesis and tumorigenesis [4] . Recent exploration of the tumor microenvironment has become the crux of research aimed at explaining tumor behaviors, especially those involving metastasis of solid tumors as in colon, stomach, liver, lung and breast cancers.
The tumor microenvironment consists of tumor, stromal, immune and inflammatory cells, all of which produce cytokines, growth factors and adhesion molecules [26,27] , and the abnormal expression of cytokines has been shown to have great effect on tumor behaviors, such as tumor progression and metastasis [28,29] . The CXC chemokine family of cytokines, which are founded in the microenvironment, represent a significant difference between tumors and normal tissues [30] . The tumor microenvironment contains secreted chemokines representing distinctive profiles, the components of each having specific target cells. The chemokine CXCL12, through its receptor CXCR4, positively regulates angiogenesis by promoting EC migration and tube formation. However, the relevant downstream signaling pathways in EC have not been defined.
Our previous studies elucidated that IL-1α is one of the most important inflammatory cytokines involved in the metastatic process of colon cancer. IL-1α contributed to the regulation of tumor growth, progression, and liver metastasis in primary gastric carcinoma and pancreatic cancer. Pancreatic cancer cell-derived IL-1α increases fibroblast-derived hepatocyte growth factor (HGF) secretion in a paracrine manner, and this enhanced HGF expression promotes invasion, proliferation and angiogenesis of cancer cells. In the living microenvironment of the tumor, the chemokines act as couriers or guides promoting tumor development and the metastatic process [31][32][33]. As a structural component of tumor tissue, fibroblasts have been shown to be deeply involved in tumor proliferation and mitogenic processes. Fibroblasts produce certain cytokines that influence neighboring cells, including malignant cells [4]. The precise role of chemokines in neovascularization during inflammation or tumor growth is not yet fully understood. We investigated here whether cancer cell- and stromal cell-derived CXCL12 influences colon cancer CXCL6 secretion, thereby co-regulating the metastatic potential of colon cancer. Our results revealed that CXCL12 was expressed in DLD-1 cells and fibroblasts, while CXCL6 and CXCR4 were expressed in all cell lines. The most salient observations of our study were that the CXCL6 levels secreted by colon cancer cells and HUVECs were significantly promoted by cancer cell (DLD-1)- and stromal cell (fibroblast)-derived CXCL12 in the co-culture system, and that the enhanced CXCL6 production could be significantly inhibited by CXCL12 Ab. Similar results were reported for other effects through the up-regulation of MMP-9, providing a possible mechanism mediating the effect of CXCL6 on metastasis [34]. In our study, CXCL6 and CXCL12 not only co-operatively enhanced proliferation and invasion of HUVECs, but also promoted the invasion of colon cancer cells. Similarly, CXCL6 has been reported to be up-regulated in colon cancer, and it plays key roles in the induction and maintenance of gut inflammation, enhancing the development and growth of colitis-associated colorectal cancer [35].
To further investigate the interaction between CXC chemokines and the living microenvironment of cancer cells, we focused on the interplay between tumor cells and stromal cells by characterizing angiogenic activity in co-cultured fibroblasts and vascular ECs, and the effect of CXCL6 and CXCL12 in this system. HUVEC tube formation was significantly enhanced by CXCL6. We also explored how differences in CXCL12 secretion among colon cancer cell lines influence tube formation by HUVECs. Tubular formation was significantly enhanced by co-culture with DLD-1 cells, as compared with the other colon cancer cell lines, and this was related to the CXCL12 they produced. In contrast, the enhanced tubular formation by HUVECs was significantly inhibited by the addition of anti-CXCL12 Ab in co-culture with DLD-1 cells (Figure 5). [Figure 5: Effect of granulocyte chemotactic protein-2 (CXCL6), stromal cell-derived factor-1 (CXCL12) and co-culture with colon cancer cells (DLD-1, HT-29 or CaCo-2) on angiogenesis in the HUVEC/fibroblast co-culture system; after 7 d of co-culture with or without CXCL6, CXCL12 or the corresponding antibodies, tube formation was visualized by anti-CD31 staining and quantified with an image analyzer; one-way ANOVA followed by the SNK test, bP < 0.01 vs control.] CXCL12 appears to be the initial factor secreted by fibroblasts, and the target colon cancer cells enhance their secretion of CXCL6 after CXCL12 binds its receptor CXCR4. The proliferation and invasion of colon cancer cells and HUVECs are then activated and enhanced through a series of complicated biochemical reactions. Breakthrough insights into the tumor microenvironment have made great contributions to clinical treatment: many anti-carcinoma chemotherapeutics are based on such mechanisms, and the newly targeted cancer therapies and gene therapies are no exception, as proof of effects on critical pathways in proliferation or differentiation is sought.
Chemokines are chemo-attractant cytokines that regulate the activity of leukocytes and other cell types, including tumor and stromal cells [36]. mTOR is an atypical intracellular serine/threonine protein kinase regulated by PI3K. Activation of the mTOR pathway has been identified in several human malignancies, making it an attractive target for anticancer therapy [37].
Our results showed that the CXCL12-enhanced secretion of CXCL6 and the co-regulation of invasion, proliferation and angiogenesis were dependent on activation of PI3K/Akt/mTOR signaling, and that the upregulation of PI3K/Akt/mTOR survival signaling by both factors was decreased by selective inhibitors of PI3K and Akt. All these results suggest that CXCL12 and the enhancement of CXCL6 expression serve to co-operatively promote metastatic potential in colon cancer cells. CXCL12-induced activation of this signaling pathway could be inhibited by a PI3K/Akt inhibitor, consistent with the inhibition of the metastatic capabilities of colon cancer cells. This cascade may be a key pathway by which colon cancer cells metastasize.
Crosstalk between CXCR4, CXCL12 and PI3K/mTOR has been previously described in peritoneally disseminated gastric cancer and pancreatic cancer. These solid tumors indicate an interconnection between CXCL12 and mTOR signaling, and interfering with mTOR signaling abolished chemotaxis towards CXCL12 [38]. mTOR enhances cell growth and proliferation by promoting ribosomal protein S6 kinase (p70S6K) and inhibiting the eIF4E-binding protein 4E-BP1, and can even enhance the secretion of vascular endothelial growth factor and angiogenesis by promoting expression of the transcription factor hypoxia-inducible factor 1 and its downstream target genes. Under a range of exterior and interior stimuli, cancer cell proliferation and invasion can be induced, and apoptosis avoided, through initiation of the PI3K/Akt/mTOR pathway [39].
In conclusion, this is the first report on the concomitant involvement of CXCL12 and CXCL6, both transducing signals through the mTOR pathway and affecting the progression and spread of human colon cancer cells, ultimately suggesting that targeting CXCR4 and mTOR may improve therapeutic efficacy and prevent resistance to mTOR-targeting agents. Our work should encourage further investigation into more potent angiogenesis-modulating agents to improve the effectiveness of colon cancer therapies.
Background
Colon cancer is the fourth most frequently diagnosed cancer worldwide. Its poor prognosis is attributable to its tendency to metastasize; however, the precise mechanisms of metastasis are still unknown. The aim of this study was to investigate the mechanism by which CXCL12 and CXCL6 influence the metastatic potential of colon cancer, the interplay between colon cancer cells and stromal cells, and the involvement of CXCL12/CXCL6/PI3K/Akt/mTOR signaling in the metastatic process.
Research frontiers
The functions of CXC chemokines in the tumor microenvironment depend considerably on the chemokine type and on the characteristics of the tumor and stromal cells. Both the angiogenesis-promoting effect of CXCL6 and the chemotactic effect of CXCL12 play important roles in tumorigenesis and metastasis. However, the molecular mechanisms and signaling pathways by which CXCL12 and CXCL6 co-operatively regulate metastasis of colon cancer remain to be clarified.
Innovations and breakthroughs
This research provides the first demonstration that fibroblast-derived CXCL12 enhances CXCL6 secretion by colon cancer cells. CXCL6 and CXCL12 not only co-operatively enhanced the proliferation and invasion of HUVECs, but also promoted the invasion of colon cancer cells via the PI3K/Akt/mTOR signaling pathway. Blocking this pathway may be a potential anti-metastatic therapeutic target for patients with colon cancer. This work might encourage further investigation into more potent angiogenesis-modulating agents to improve the effectiveness of colon cancer therapies.
Applications
The concomitant involvement of CXCL12 and CXCL6 transduces signals through the mTOR pathway, affecting the progression and spread of human colon cancer cells. The authors suggest that targeting CXCR4 and mTOR may improve therapeutic efficacy and prevent resistance to mTOR-targeting agents. The authors' work should encourage further investigation into more potent angiogenesis-modulating agents to improve the effectiveness of colon cancer therapies.
Terminology
The CXC chemokine family of cytokines, found in the microenvironment, shows distinctly different profiles between tumors and normal tissues. The tumor microenvironment contains secreted chemokines with distinctive profiles, the components of each having specific target cells. The chemokine CXCL12, through its receptor CXCR4, positively regulates angiogenesis by promoting endothelial cell (EC) migration and tube formation.
Peer-review
The results of this study on the relationship between CXCL6 and CXCL12 in colorectal cancer and ECs seem to be of interest to many readers, and the experiment is well planned. However, before publication, several issues have to be considered. | 2018-04-03T00:26:33.784Z | 2017-07-28T00:00:00.000 | {
"year": 2017,
"sha1": "d1fff5c4254e5e1bc4e7ff44bb72d56bee49b536",
"oa_license": "CCBYNC",
"oa_url": "https://doi.org/10.3748/wjg.v23.i28.5167",
"oa_status": "HYBRID",
"pdf_src": "PubMedCentral",
"pdf_hash": "d1fff5c4254e5e1bc4e7ff44bb72d56bee49b536",
"s2fieldsofstudy": [
"Medicine",
"Biology",
"Chemistry"
],
"extfieldsofstudy": [
"Chemistry",
"Medicine"
]
} |
246086519 | pes2o/s2orc | v3-fos-license | A phase I/IIa trial of atorvastatin in Japanese patients with acute Kawasaki disease with coronary artery aneurysm: Study protocol of a multicenter, single-arm, open-label trial
Background Kawasaki disease (KD) is a systemic vasculitis complicated by coronary artery abnormalities (CAAs). Intravenous immunoglobulin reduces the occurrence of CAAs, but a significant number of KD patients with CAAs still exists. Thus, new approaches to prevent and attenuate CAAs are warranted. Atorvastatin has been shown to promote endothelial cell homeostasis and suppress vascular inflammation and has attracted enthusiasm as a potential new candidate treatment for KD. In the United States, a phase I/IIa dose-escalation study of atorvastatin in KD patients with CAAs demonstrated the safety and pharmacokinetic data of atorvastatin. However, due to the uncertainty in applying these results to other populations, we aim to examine the tolerability and generate pharmacokinetics data in Japanese KD patients. Methods This is a multicenter, single-arm, open-label, phase I/IIa study of atorvastatin in acute KD patients with CAAs in Japan. A minimum of 9 and a maximum of 18 KD patients (2 years–17 years old) will be recruited for a 3 + 3 dose-escalation study of a 6-week course of atorvastatin (0.125–0.5 mg/kg/day). The primary outcome will be the safety of atorvastatin. The secondary outcomes will be the pharmacokinetics of atorvastatin, the activity of atorvastatin and echocardiographic assessment of CAAs. The activity of atorvastatin will include assessment of C-reactive protein or high-sensitivity C-reactive protein and white blood cell levels. Discussion This study will provide evidence of the safety, tolerability, and pharmacokinetics of atorvastatin in Japanese KD patients and may lead to a new standard therapy for acute-phase KD associated with CAA complications. Trial registration Japan Registry of Clinical Trials (JRCTs031180057). Registered December 19, 2018, https://jrct.niph.go.jp/en-latest-detail/jRCTs031180057.
Introduction
Kawasaki disease (KD) is a systemic vasculitis complicated by coronary artery abnormalities (CAAs) [1]. The standard initial treatment is intravenous immunoglobulin (IVIG) plus aspirin (ASA). IVIG-resistant KD patients are treated with methylprednisolone pulse, prednisolone, or infliximab added to IVIG. These approaches have reduced the occurrence of CAAs [2,3]. However, 8.9% of KD patients in Japan still experienced CAAs [4]. Therefore, new approaches to prevent or attenuate CAAs are warranted.
Statins are a class of drugs that lower blood cholesterol levels by blocking hydroxy-methylglutaryl-coenzyme A reductase (HMG-CoA reductase). Studies have shown that statins have pleiotropic anti-inflammatory, antioxidant, anti-coagulant and thrombolytic effects, promoting endothelial cell homeostasis and suppressing vascular inflammation [5,6]. Atorvastatin (Lipitor) suppresses matrix metalloproteinase 9 (MMP-9) activity and upregulates regulatory T cells [7,8]. Other statins inhibit tumor necrosis factor (TNF)-alpha production, MMP-9 secretion and transforming growth factor (TGF)-beta-induced myofibroblast trans-differentiation [9,10]. These mechanisms contribute to restoring cardiovascular homeostasis. Among the statins, atorvastatin is thought to have the strongest anti-inflammatory effect [11,12].
The anti-inflammatory effects of statins on KD vasculitis have been examined using an animal model of KD. Atorvastatin inhibits lymphocyte proliferation in response to superantigen stimulation and the production of interleukin (IL)-2, TNF-α and MMP-9, which improves coronary outcomes [13-17]. These preclinical studies suggest atorvastatin may be a reasonable candidate as a new treatment for KD.
In clinical settings, Niedra et al. [18] reported a Canadian case series of 20 patients with giant CAAs in which atorvastatin was safe. In the US, Tremoulet et al. [19] conducted a phase I/IIa dose-escalation study of atorvastatin (0.125-0.75 mg/kg/day) in KD patients with CAAs. This study indicated that up to 0.75 mg/kg/day of atorvastatin was safe. The study also showed differences in pharmacokinetic (PK) characteristics between KD patients and adults; the Cmax and the areas under the curve (AUC) for atorvastatin and its metabolite, ortho-hydroxyatorvastatin, increased depending on the weight-based dose of atorvastatin and were higher in the study patients than in adults, suggesting a slower metabolic process in children.
While promising, it is still uncertain whether the results of the US study can be applied to Asian patients. The AUC of atorvastatin is influenced by genetic variation in cytochrome P450 3A4 (CYP3A4) and polymorphisms of SLCO1B1, a transporter gene [20]. According to the drug information for Lipitor, the maximum dose for adults in Japan is lower than that in the US. Therefore, careful evaluation of the safety and PK of atorvastatin is needed in Japanese KD children.
The incidence rate of KD in Japan is the highest in the world and there is urgency in identifying new treatment approaches in Japan, such as the use of atorvastatin, but there is no available data on its safety and PK in Japanese KD patients. We planned a phase I/IIa dose-escalation study of atorvastatin in the treatment of acute KD patients with CAAs in Japan with the objective of generating high-quality evidence related to the tolerability and PK of this promising drug.
Trial design
This is a multicenter, single-arm, open-label, phase I/IIa dose-escalation study of atorvastatin for patients with acute KD with CAAs in Japan. Fourteen hospitals in Japan are registered.
Inclusion criteria
This is a phase I/IIa study using a 3 + 3 dose-escalation design. Depending on the dose level at which the maximum tolerated dose (MTD) is reached, a minimum of 9 or a maximum of 18 patients will be enrolled. We recruited participants from May 1, 2019 to April 30, 2022. Patients who fulfill all criteria described below are eligible for the study.
1. Patients aged 2-17 years diagnosed with classic KD within 20 days after fever onset. (Classic KD: presenting at least five of the following six principal symptoms: (i) fever persisting ≥5 days; (ii) bilateral conjunctival congestion; (iii) changes in lips and oral cavity; (iv) polymorphous exanthema; (v) changes in peripheral extremities; and (vi) acute non-purulent cervical lymphadenopathy [21].)
2. Patients with a left anterior descending coronary artery (LAD) or right coronary artery (RCA) Z-score ≥ 2.5 or an aneurysm (≥1.5 x the adjacent segment) of one of the coronary arteries on echocardiographic evaluation.
3. Patients whose parent or legal guardian will provide written informed consent, with patients aged 7 years and above also providing informed assent or consent themselves.
Exclusion criteria
Patients who fall under any of the following categories will be excluded from the study.
1. Use of statins, fibrates, or niacin within 90 days prior to enrollment.
2. History of any severe chronic disease (e.g. congenital heart disease, autoimmune disease, chromosomal disorder or neurodegenerative disease), except for bronchial asthma, atopic dermatitis, autism spectrum disorder or a controlled acute disease.
3. Creatine phosphokinase (CK) ≥ 500 IU/L at the screening laboratory test.
Intervention
Based on guidelines for the medical treatment of acute KD [22], all patients will receive IVIG (2 g/kg/day) with ASA (30-50 mg/kg/day). Prednisolone or methylprednisolone pulse therapy can be added to the standard therapy in patients with a Kobayashi risk score of five points or higher, which represents a high risk of non-response to the initial IVIG treatment. Patients who have a coronary artery Z-score ≥ 2.5 or an aneurysm (≥1.5 x the adjacent segment) within the first 20 days after fever onset will receive atorvastatin once a day orally for 6 weeks (Fig. 1). For patients who are unable to swallow tablets, the tablets will be crushed. Adherence to atorvastatin will be confirmed at the 2- and 6-week visits.
This study uses a 3 + 3 dose-escalation design in which a minimum of 3 patients will be enrolled into each cohort at 3 dose levels (Step 1: 0.125 mg/kg/day, Step 2: 0.25 mg/kg/day, Step 3: 0.5 mg/kg/day) (Table 1). A phase I/IIa trial of atorvastatin in patients with acute KD with CAAs in the US identified 0.75 mg/kg/day of atorvastatin as the MTD [19]. In Japan, the maximum dose of atorvastatin in adults is 40 mg/day, which is less than the maximum adult dose in the US (80 mg/day). Based on this information, we set the maximum dose for this study at 0.5 mg/kg/day (maximum of 40 mg/day).
Dose escalation depends on the number of patients with a dose-limiting toxicity (DLT) at a given dose level. Three patients will be given the first dose. If none of the 3 patients shows a DLT after 6 weeks of therapy, the next 3 patients will be enrolled at the next higher dose; if 1 of the 3 patients has a DLT, an additional 3 patients will be enrolled at that dose level, and further dose escalation will depend on the number of DLTs among the 6 patients in that cohort. If 2 or fewer of the 6 patients have a DLT, the next 3 patients will be given the next higher dose. However, if 3 or more patients in this cohort have a DLT, we will consider this dose level as surpassing the MTD and stop the dose escalation, signaling the end of the trial; likewise, if 2 of 3 patients in a cohort have a DLT at any dose level, dose escalation will cease and the trial will be ended.
The MTD is defined as the highest dose of atorvastatin examined at which no more than 2 of the 6 patients in the same cohort experience a DLT during the 6 weeks of treatment. If dose escalation reaches the highest dose level set for this study (0.5 mg/kg/day) without patients experiencing a DLT, this dose will be considered the MTD.
DLT will be defined as any of the following at the 2- or 6-week time point:
• ALT or AST elevated by 20% or more compared to the entry level and above 76 IU/dl.
• CK elevation >10 times the upper limit of normal (Table 2) or symptoms of muscle pain due to myositis [23].
• A decrease in total cholesterol (TC) of 10% or more compared to the entry level and below 99 mg/dl.
All patients will be monitored for DLT occurrence for up to 6 weeks from the time of enrollment. A patient who experiences a DLT will discontinue atorvastatin immediately and will be monitored for symptoms and abnormal measurements. Patients will be followed until resolution of the DLT or for 6 weeks, whichever is later.
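For readers who want the escalation rules above in executable form, the following is a minimal sketch of the 3 + 3 decision logic in Python. It assumes DLT counts are tallied per cohort as described in the protocol text; the function name and return labels are illustrative, not part of the protocol.

```python
DOSE_LEVELS = [0.125, 0.25, 0.5]  # mg/kg/day (Steps 1-3); 0.5 is the study ceiling

def next_action(dlt_count: int, cohort_size: int) -> str:
    """Decide the next step after observing DLTs in the current cohort."""
    if cohort_size == 3:
        if dlt_count == 0:
            return "escalate"   # enroll 3 patients at the next higher dose
        if dlt_count == 1:
            return "expand"     # enroll 3 more patients at the same dose
        return "stop"           # 2+/3 DLTs: dose exceeds the MTD, trial ends
    if cohort_size == 6:
        if dlt_count <= 2:
            return "escalate"   # <= 2/6 DLTs: dose is tolerated
        return "stop"           # >= 3/6 DLTs: dose exceeds the MTD, trial ends
    raise ValueError("cohort size must be 3 or 6 in a 3 + 3 design")

# Example: 1 DLT among the first 3 patients at 0.25 mg/kg/day
print(next_action(dlt_count=1, cohort_size=3))  # -> "expand"
```

Note that "escalate" at the highest level (0.5 mg/kg/day) means that dose is declared the MTD, per the definition above.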
Outcomes
The primary outcome is safety of atorvastatin in Japanese KD patients with CAA.
The secondary outcomes are as follows:
1. Pharmacokinetics of atorvastatin
2. Activity of atorvastatin
   a. Biomarkers and measures of inflammation: levels of C-reactive protein (CRP) or high-sensitivity CRP (hsCRP) and white blood cells (WBC)
3. Echocardiographic assessment (Z-score) of CAAs (LAD, proximal part of RCA)
Data collection and management
The following data will be collected before the first administration of atorvastatin.
• Demographic data: patient's age at KD onset, sex, family history, past history, ethnicity
• Physical data: height, weight, body temperature
• Clinical data: physical findings confirming the KD case definition, the date of diagnosis, start of treatment and study entry
• Laboratory data: complete blood count, CRP or hsCRP, total protein, albumin, total bilirubin, AST, ALT, lactate dehydrogenase, gamma-glutamyl transpeptidase, CK, TC, low-density lipoprotein, high-density lipoprotein, triglyceride, blood urea nitrogen, creatinine, sodium, potassium, chlorine
• Echocardiogram: internal lumen diameters and Z-scores of the proximal RCA and proximal LAD. The Z-score curve is derived from the lambda-mu-sigma (LMS) method [24], and the Z-score will be calculated with Z Score Calculator Version 4.0 (see the sketch after this list).
• Concomitant therapy
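As referenced in the Echocardiogram item above, the coronary Z-scores are derived with the lambda-mu-sigma (LMS) method. A minimal sketch of Cole's LMS transformation is shown below; the reference coefficients are hypothetical placeholders, not the values embedded in Z Score Calculator Version 4.0.

```python
import math

def lms_z_score(x: float, L: float, M: float, S: float) -> float:
    """Cole's LMS transformation: Z = ((x/M)**L - 1) / (L*S),
    or ln(x/M)/S in the limiting case L == 0."""
    if L == 0:
        return math.log(x / M) / S
    return ((x / M) ** L - 1.0) / (L * S)

# Hypothetical reference coefficients for the proximal RCA at a given
# body surface area -- illustrative only, not published LMS values.
L_ref, M_ref, S_ref = 0.5, 2.2, 0.15
diameter_mm = 3.4  # measured internal lumen diameter
print(round(lms_z_score(diameter_mm, L_ref, M_ref, S_ref), 2))  # ~3.24, i.e. >= 2.5
```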
The laboratory tests and the echocardiogram will be performed within 24 h from, and 2 and 6 weeks after, the first administration of atorvastatin. Time-series PK specimen collection (1, 4, and 12 h or 2, 6, and 24 h) will be performed at the first dose, and only the trough level will be measured at 2 weeks and 6 weeks. We will also measure the plasma concentration of the brain-specific cholesterol metabolite 24(S)-hydroxy-cholesterol (24-OHC) at 2 weeks and 6 weeks because atorvastatin treatment may have an effect on the brain (Tables 3-5).
• Laboratory data: complete blood count, CRP or hsCRP, total protein, albumin, total bilirubin, AST, ALT, lactate dehydrogenase, gamma-glutamyl transpeptidase, CK, TC, low-density lipoprotein, high-density lipoprotein, triglyceride, blood urea nitrogen, creatinine, sodium, potassium, chlorine.
• Echocardiogram: internal lumen diameters and Z-scores of the proximal RCA and proximal LAD.
• PK assessment: blood samples are drawn according to two schedules (Schedule 1 and Schedule 2) (Table 4). Data centers will alternately assign participants to Schedule 1 and Schedule 2 in the order of registration. Samples are collected at 2, 6, and 24 h for Schedule 1 and at 1, 4, and 12 h for Schedule 2 after the first dose. The blood sample for the trough level is collected right before taking atorvastatin at 2 and 6 weeks after enrollment for both schedules (Table 5).
• 24-OHC: measured at 2 weeks and 6 weeks.
• Patient adherence
• Adverse events
• Concomitant therapy
For the laboratory test, a total of 2 ml of whole blood will be drawn and equally divided into two tubes, with and without anticoagulant (EDTA), for plasma and serum segregation, respectively. For plasma segregation, the EDTA-treated tube will be centrifuged for about 15 min at 1000-2000×g and the supernatant will be immediately transferred to a clean tube using a pipette. For serum segregation, we will leave the tube at room temperature for about 15 min to allow the blood to clot; the clot will then be removed by centrifuging at 1000-2000×g for 10 min. For the PK study, about 1 ml of whole blood will be collected in a heparin sodium-treated tube, and the same procedure as the plasma segregation for the laboratory test will be performed. These procedures will allow us to collect 0.5 ml of plasma or serum. We will store the samples at −80 °C for 5 years after this study is completed or discontinued. Atorvastatin and ortho-hydroxy atorvastatin blood concentration measurements will be performed by LC/MS at Q2 Solutions in the United States (the same assay as in the atorvastatin study performed in the US [19]). 24-OHC will be measured by enzyme-linked immunosorbent assay (ELISA) (the same assay as in the US study [19]).
[Table 3: Schedule of data collection and monitoring. (O): clinical information or samples will be recorded if they can be accessed. *PK study will be performed according to Schedule 1 or Schedule 2; the data center will determine the dose level and PK schedule at enrollment. a: Laboratory test and sample preservation will be allowed within 2 days prior to enrollment.]
Statistical methods
The Full Analysis Set (FAS) consists of all patients enrolled in this trial except for the following:
• Patients who will not be treated with the protocol treatment
• Patients whose data will not be collected after the protocol treatment starts
• Patients who are designated to be ineligible after enrollment
The Per Protocol Set (PPS) consists of the population in the FAS without serious protocol violations. The safety analysis set consists of patients who will be treated with the protocol treatment at least once.
Regarding discrete variables of patient's demographics (e.g. sex, past history), we will calculate proportions for each category. Regarding continuous variables (e.g. age, height, weight), we will provide descriptive statistics (e.g. mean, standard deviation, median, interquartile range, maximum, minimum).
Population-based PK analyses will be attempted despite limitations in sample size. NONMEM 7.3 (Icon, Dublin, Ireland) or Phoenix NLME (Certara USA, Inc) will be used to perform non-linear mixed effects modeling. We will generate individual patient parameter estimates for volume of distribution (Vd) and clearance (CL) using the maximum a posteriori Bayesian analysis for each patient's data applying the final population model and the POSTHOC subroutine.
Covariates for Vd and CL will be assessed to the extent possible, with age, weight, ALT, and CRP included. The uncertainty in the final model will be evaluated using a bootstrap analysis of 1000 virtual patients to calculate the 95% confidence intervals for the population estimates [25]. The model will be considered reliable if the parameter estimates fall within the 95% confidence intervals. If a reliable model cannot be obtained because of small patient numbers, the PK profile will simply be described by plotting the observed concentrations for each dosing and PK sampling group.
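A minimal sketch of the percentile-bootstrap idea in Python is shown below, assuming per-patient clearance estimates have already been exported from NONMEM or Phoenix NLME. Resampling the estimates directly, as done here, is a simplification for illustration; the protocol describes the fuller procedure of re-fitting the population model to 1000 bootstrap datasets.

```python
import numpy as np

rng = np.random.default_rng(42)
cl_estimates = np.array([3.1, 2.7, 4.0, 3.5, 2.9, 3.8])  # L/h, hypothetical

# Resample patients with replacement 1000 times and collect the mean CL
boot_means = np.array([
    rng.choice(cl_estimates, size=cl_estimates.size, replace=True).mean()
    for _ in range(1000)
])
ci_low, ci_high = np.percentile(boot_means, [2.5, 97.5])
print(f"population CL: {cl_estimates.mean():.2f} L/h "
      f"(95% CI {ci_low:.2f}-{ci_high:.2f})")
```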
Criteria for discontinuing or modifying allocated interventions
The intervention will be discontinued when:
1. The patient or his/her parent or legal guardian asks for withdrawal from the study.
2. Exacerbation or recurrence of the primary disease makes the intervention difficult to continue.
3. The investigators judge it necessary to discontinue the intervention because of an adverse event(s).
4. The patient dies during the intervention.
5. Exacerbation of a chronic disease or complications makes the intervention difficult to continue.
6. The patient is found to be ineligible for the study (e.g. misdiagnosis) after enrollment.
7. Attending the hospital is difficult because of the patient's relocation.
8. The Efficacy and Safety Monitoring Committee orders discontinuation of the intervention because of adverse events.
9. The investigators judge that the intervention should be discontinued for any other reason.
Adverse event reporting and harms
As for safety evaluations, all adverse events will be monitored and evaluated. Adverse events refer to any unfavorable or unintended change in a sign (e.g. an abnormal laboratory finding), symptom or disease temporally associated with the study treatment, whether or not it is considered related to the study product. All adverse events will be graded according to NCI-CTCAE (the National Cancer Institute Common Terminology Criteria for Adverse Events) version 4.0. All serious adverse events (SAEs) must be reported to all investigators and discussed. When adverse events occur, patients will receive appropriate treatment, and the cost will be covered by the National Health Insurance. For patients experiencing ongoing unresolved adverse events at 6 weeks after enrollment, the observation period will be extended until treatment is completed.
Discussion
This study will provide evidence of the safety, tolerability, and PK characteristics of atorvastatin in Japanese KD patients with CAAs. Furthermore, this study may provide helpful insight about its effectiveness for the prevention and attenuation of CAAs.
This study may have limited statistical power to determine the effectiveness of atorvastatin. However, we will pursue thorough sample size considerations for the subsequent phase III study based on the results of this evaluation. If the phase III study demonstrates the efficacy of atorvastatin, we will be able to develop a new standard therapy for acute-phase KD complicated by CAAs. This treatment can be expected to reduce the number of KD patients with CAAs, leading to improved quality of life for patients and their families and a reduction in medical expenses.
Ethics approval and consent to participate
The study will be conducted according to the principles of the World Medical Association (WMA) Declaration of Helsinki and the Clinical Trials Act (Act No. 16 of April 14, 2017). This study was approved by the Certified Review Board. All patients will receive adequate information about the nature, purpose, possible risks and benefits of the study, and alternative therapeutic choices using an informed consent protocol approved by the Certified Review Board. In this study, the patient's parent or legal guardian will sign a consent form. A patient aged 16 years and above will also sign the consent form. Patients aged 7-15 years will be provided information about the study and will sign an assent form. If a patient aged 7-12 years gives assent orally, the patient's parent or legal guardian will be allowed to indicate the patient's assent on the consent form.
Consent for publication
Results of the trial will belong to the National Center for Child Health and Development in Japan and will be submitted to a scientific journal as a report after the final analysis. The principal investigator will decide the first author of the report and co-authors following the guidelines of the International Committee of Medical Journal Editors Uniform Requirements for Manuscripts Submitted to Biomedical Journals. All authors will read and approve the final manuscript.
Competing interests
The authors declare no conflicts of interest.
Trial status
This protocol is version 1.5 (amended on April 14, 2020). Enrolment commenced as of May 01, 2019 and will end by April 30, 2022. Total study period will be May 01, 2019 to April 30, 2022.
Funding
This study was supported by a grant from the National Center for Child Health and Development in Japan (30-6). This funding source had no role in the study design, writing of the article, or the decision to submit for publication.
Declaration of interests
The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.
Data availability
No data was used for the research described in the article. | 2022-01-22T16:38:28.465Z | 2022-01-01T00:00:00.000 | {
"year": 2022,
"sha1": "877312b7a970d66cdbb9bb84cfd9878222a63924",
"oa_license": "CCBYNCND",
"oa_url": "https://doi.org/10.1016/j.conctc.2022.100892",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "337d28eeb91aab5765b8d961de3639b18ceec215",
"s2fieldsofstudy": [
"Medicine",
"Biology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
231774921 | pes2o/s2orc | v3-fos-license | Association between physical function and health-related quality of life in survivors of hematological malignancies undergoing hematopoietic stem cell transplantation
Objective: The association between physical function and the health-related quality of life (HRQOL) remains unclear in survivors of hematological malignancies undergoing hematopoietic stem cell transplantation (HSCT). The purpose of this study is to clarify the association between physical function and HRQOL in survivors of hematological malignancies undergoing HSCT. Methods: The present cross-sectional multicenter study included 32 survivors of hematological malignancies who underwent HSCT. Patient characteristics, physical function (based on handgrip strength, isometric knee extension strength, 6-minute walk test [6MWT], and chair stand test), HRQOL (assessed with the 36-Item Short-Form Health Survey [SF-36] questionnaire), depression, fatigue, and physical activity level were assessed. Results: A significant association was observed between physical function (chair stand test and 6MWT) and the physical functioning (PF) subscales of the SF-36 questionnaire. The PF, mental health, and social functioning (SF) subscales of the SF-36 questionnaire were significantly associated with depression and fatigue. Multiple logistic regression analysis showed that the physical component summary was significantly associated with depression and affective fatigue, and the PF score was significantly associated with the chair stand test and depression. The mental component summary, physical role functioning, vitality, and SF scores were also significantly associated with depression. Conclusion: Physical function, depression, and fatigue were significantly associated with the HRQOL in survivors of hematological malignancies undergoing HSCT.
Introduction
Hematopoietic stem cell transplantation (HSCT) is a well-established standard treatment for patients with a variety of hematological malignancies and is associated with good clinical outcomes, with longer post-transplant life expectancy being observed over the years [1]. Notably, improved overall survival is not the only determinant of successful medical outcomes following HSCT; health-related quality of life (HRQOL) is therefore considered one of the useful indicators of successful treatment [2]. Previous studies have reported that age at transplantation, sex, marital status, primary diagnosis, infection, graft-versus-host disease (GVHD), and having a sibling donor are among the factors associated with the HRQOL in patients undergoing HSCT [3-5].
Physical activity levels are markedly reduced in patients undergoing HSCT owing to the conditioning regimen, including total body irradiation and high-dose chemotherapy, immunosuppressive therapy for GVHD, transplant-related toxicities such as infections and GVHD, and prolonged bed rest in a bioclean room. Therefore, deconditioning is commonly observed in patients undergoing HSCT [6-8], and physical function (represented by muscle strength and aerobic capacity) is decreased after HSCT [8-10]. Deconditioning that occurs during treatment limits patients' leisure and occupational activities; resumption of daily activities after hospital discharge therefore becomes difficult, which negatively affects patients' HRQOL [11]. Previous studies report that complete recovery of physical function took up to one year in 40% of patients undergoing allogeneic HSCT (allo-HSCT), and that stamina loss prevented 32% of survivors from returning to work during the first 2 years after allo-HSCT [12]. Therefore, physical function is an important factor associated with the HRQOL in patients who undergo HSCT, although the association between physical function and HRQOL has remained unclear to date. We therefore investigated the association between physical function and HRQOL in survivors of hematological malignancies undergoing HSCT.
Participants
This cross-sectional multicenter study included 32 survivors of hematological malignancies who underwent HSCT and presented for outpatient medical follow-up after discharge at Kobe University Hospital and Kakogawa Central City Hospital in Japan between June and November 2014. This study was performed in accordance with the ethical standards established by the 1964 Declaration of Helsinki and later amendments and was approved by the Ethics Committee of Kobe University Graduate School of Health Sciences (approval number: 298-1). Written informed consent was obtained from all participants.
Patient characteristics
The following data were obtained from the medical records: age, sex, primary diagnosis, graft type (autologous/allogeneic), donor type (bone marrow/peripheral blood/umbilical cord blood), conditioning regimen (myeloablative/non-myeloablative), and interval between transplantation and study enrollment.
Physical Function
Handgrip strength, isometric knee extension strength, the 6-minute walk test (6MWT), and the chair stand test were evaluated as variables of physical function.
Handgrip strength was measured using a standard adjustable-handle dynamometer (Grip-D, Takei Scientific Instruments Co. Ltd., Niigata, Japan) in accordance with the method previously described by Mathiowetz, et al. [13]. The grip dynamometer was set to the second grip position. The test was performed twice on each hand and the highest value was selected for analysis.
Isometric knee extension strength was measured using a hand-held dynamometer (microFET2 Ⓡ ; Nihon Medix, Chiba, Japan) based on the method described by Andrews, et al. [14]. The test was performed with the patient seated and the knee flexed to approximately 90°. The dynamometer was applied proximal to the malleoli. The maximum force observed during 10 s of effort was recorded. The test was performed twice on each leg, and the highest value was selected for analysis.
Exercise capacity was evaluated using 6MWT based on the American Thoracic Society (ATS) guidelines [15]. Patients were instructed to walk along a 10 m corridor for 6 min at their own pace. They were encouraged to cover as much distance as was possible; however, they were permitted to stop for rest and resume as soon as they felt able. The test was aborted in patients who experienced symptoms of dyspnea or leg pain. The distance covered in 6 min was recorded.
The chair stand test was performed based on the short physical performance battery (SPPB), and the time required for 5-repetition sit-to-stand was recorded [16].
Health-related quality of life (HRQOL)
The HRQOL was evaluated using the Japanese version of the Medical Outcomes Study 36-Item Short-Form Health Survey (SF-36) [17]. The SF-36 is a self-administered questionnaire that evaluates general health-related QOL and assesses physical and mental health components across 8 domains: physical functioning (PF), physical role functioning (RP), bodily pain (BP), general health (GH), vitality (VT), social functioning (SF), emotional role functioning (RE), and mental health (MH). HRQOL is a multidimensional construct; the SF-36 assesses the key components that constitute the HRQOL on a scale of 0-100, with higher scores indicating better HRQOL. The scores of all 8 domains are combined to calculate two more comprehensive indicators of physical and mental health: the physical component summary (PCS) and the mental component summary (MCS) scores. PCS and MCS are converted into norm-based scores (NBS) applicable to the general Japanese population [17]. A score of 50 points indicates the national standard value of the NBS, and higher scores indicate a better HRQOL.
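For orientation, the sketch below shows the standard linear T-score transformation that underlies norm-based scoring (population mean mapped to 50, one standard deviation to 10 points). The norm values used are hypothetical; the licensed SF-36 manual provides the actual Japanese norms, and the published PCS/MCS algorithms additionally apply factor weights across all 8 domains rather than this single-domain simplification.

```python
def norm_based_score(raw_0_100: float, norm_mean: float, norm_sd: float) -> float:
    """Linear T-score: 50 = national average, 10 points = one SD."""
    return 50.0 + 10.0 * (raw_0_100 - norm_mean) / norm_sd

# Hypothetical Japanese norm values for the PF domain (illustrative only)
pf_raw = 75.0
print(round(norm_based_score(pf_raw, norm_mean=88.0, norm_sd=15.0), 1))  # 41.3
```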
Depression
Depression was assessed with the Self-Rating Depression Scale (SDS) [18]. SDS is a 20-item self-report questionnaire that is widely used as a screening tool covering affective, psychological, and somatic symptoms associated with depression. Each item is scored on a Likert scale with scores ranging from 1-4. The total score is obtained by calculating the sum of individual item scores and ranges from 20-80. Most patients with depression score between 50 and 69 points, and scores of >70 indicate severe depression.
Fatigue
The Cancer Fatigue Scale (CFS) was used to assess fatigue [19]. The CFS is a 15-item self-rating scale for assessing fatigue in cancer patients. The scale consists of 3 subscales (evaluating the physical, affective, and cognitive aspects of fatigue) and assesses the multidimensional nature of fatigue. Patients are instructed to circle a number that describes their present state on a scale of 1 (not at all) to 5 (very much). The response range for each subscale score is 0-28 for the physical subscale and 0-16 for each of the affective and cognitive subscales. The total fatigue score is calculated as the sum of these individual scores. The maximum total score is 60, and higher scores indicate more severe fatigue.
Physical activity level
Physical activity level (PA) was evaluated with the Japanese version of the International Physical Activity Questionnaire (IPAQ) (long version) [20]. The total PA was expressed in terms of metabolic equivalent of task-minutes (MET-min)/day, based on the time spent (in min) on vigorous-intensity and moderate-intensity PA, as well as hiking, per the IPAQ. Moderate intensity was defined as 4 METs, vigorous intensity as 8 METs, and hiking as equivalent to 3.3 METs. The MET-min was calculated by multiplying METs by minutes of participation in PA of moderate and vigorous intensity, as well as hiking. The total PA expressed as MET-min/week was calculated as the sum of these scores, and this value was used for analysis.
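A minimal sketch of the MET-min/week computation described above; the diary values are hypothetical, and the MET assignments follow the definitions in the text.

```python
MET_VALUES = {"vigorous": 8.0, "moderate": 4.0, "hiking": 3.3}

def ipaq_met_min_week(activity: dict) -> float:
    """Total PA in MET-min/week: sum over intensities of
    MET value * minutes/day * days/week (IPAQ scoring convention)."""
    return sum(
        MET_VALUES[kind] * minutes * days
        for kind, (minutes, days) in activity.items()
    )

# Hypothetical weekly diary: (minutes per day, days per week)
week = {"vigorous": (20, 2), "moderate": (30, 3), "hiking": (40, 5)}
print(ipaq_met_min_week(week))  # 8*20*2 + 4*30*3 + 3.3*40*5 = 1340.0
```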
Statistical analysis
The associations between patient characteristics and each outcome measure were compared by Student's t-test for normally distributed variables and the Mann-Whitney U test for non-normally distributed variables. Multiple comparisons were performed using one-way analysis of variance for normally distributed variables and the Kruskal-Wallis test for non-normally distributed variables. The association between HRQOL and each outcome measure was assessed using the Pearson product-moment correlation coefficient for normally distributed variables and Spearman's rank correlation coefficient for non-normally distributed variables. Multiple logistic regression analysis was performed to adjust for all possible confounders, which were selected based on a p value <.05 in the above tests.
All statistical analyses were performed with the JMP software, version 8.0.1 (SAS Institute Japan, Tokyo, Japan). A p value < .05 was considered statistically significant.
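A minimal sketch of the normality-driven test selection described above, using SciPy. Screening normality with the Shapiro-Wilk test at alpha = 0.05 is an assumption for illustration; the paper does not state how normality was assessed.

```python
from scipy import stats

def compare_groups(a, b, alpha: float = 0.05):
    """Student's t-test if both samples look normally distributed,
    otherwise the Mann-Whitney U test."""
    normal = (stats.shapiro(a).pvalue > alpha and
              stats.shapiro(b).pvalue > alpha)
    if normal:
        return "t-test", stats.ttest_ind(a, b).pvalue
    return "Mann-Whitney U", stats.mannwhitneyu(a, b).pvalue

# Hypothetical handgrip-strength samples (kg) from two patient groups
grip_a = [22.1, 25.4, 19.8, 28.0, 24.3, 21.5]
grip_b = [26.7, 30.2, 24.9, 29.1, 27.5, 31.0]
print(compare_groups(grip_a, grip_b))
```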
Results
Patient characteristics of the 32 participants in this study are shown in Table 1. The correlation coefficients between the HRQOL and each outcome measure are shown in Table 3. Regarding the association between physical function and HRQOL, we observed significant associations between the chair stand test and PCS, between the 6MWT/chair stand test and PF, and between the 6MWT/chair stand test and BP. With regard to the association between HRQOL and depression/fatigue, the MCS, GH, VT, RE, and MH subscale scores were significantly associated with depression and with physical, affective, cognitive, and total fatigue. The PCS, PF, and SF scores were significantly associated with depression and with physical, affective, and total fatigue. PA was not significantly associated with the HRQOL.
Results of multiple logistic regression analysis are shown in Table 4. The PCS score was significantly associated with depression and affective fatigue, and the PF score was significantly associated with the chair stand test and depression. The MCS, RP, VT, and SF scores were significantly associated with depression.
Discussion
In the present study, we investigated the association between physical function and the HRQOL in survivors of hematological malignancies undergoing HSCT.
With regard to physical function, Morishita, et al. [21] reported that the mean handgrip strength was 21.0 kg, the mean knee extension strength was 204.0 N, and the mean 6MWT result was 425.4 m at discharge in patients undergoing HSCT. In contrast, previous studies that investigated healthy volunteers in their 50s reported that the mean hand grip strength was 45 kg for men and 28 kg for women, and the mean knee extension strength was 507 N for men and 442 N for women [22,23]. Our study showed that physical function in survivors of hematological malignancies undergoing HSCT was similar to or higher than that observed at discharge in patients undergoing HSCT but was remarkably lower than that observed in healthy volunteers.
Regarding the HRQOL, the PCS, PF, RP, GH, SF, and RE scores (nearly 50% of the subscales evaluated) were lower than the Japanese NBS. Kisch, et al. [3] reported that emotional well-being improved 100 days after HSCT, whereas all other dimensions, including the overall HRQOL (assessed with the Functional Assessment of Cancer Therapy-Bone Marrow Transplantation tool), showed deterioration. Moreover, physical and social/family well-being scores decreased at the 12-month follow-up, whereas the emotional well-being scores showed improvement. Mitchell, et al. [24] reported that, compared to the MH scores, the PCS and other scores were significantly lower than the NBS in patients undergoing HSCT who developed chronic GVHD and survived >100 days. These reports indicate that the HRQOL in survivors of hematological malignancies undergoing HSCT remained low for prolonged periods after transplantation and that the physical dimension deteriorated more than the emotional dimension of HRQOL. Our results concur with those of the aforementioned studies; the HRQOL in survivors of hematological malignancies undergoing HSCT remained low, and PCS scores were lower but MCS scores were higher than the NBS.
The mental aspect of the HRQOL appeared to improve after discharge, possibly because patients undergoing HSCT felt that a radical cure had been achieved, although this study could not clarify the underlying reason, which is a limitation.
With regard to the factors associated with HRQOL, the PCS score was significantly associated with depression and affective fatigue, the PF score was significantly associated with the chair stand test and depression, and the MCS, RP, VT, and SF scores were significantly associated with depression. The only physical function parameter that was associated with the HRQOL was the chair stand test. Notably, depression and fatigue rather than physical function were associated with the HRQOL in survivors of hematological malignancies undergoing HSCT.
Morishita, et al. [9] reported that diminished handgrip strength and knee extension strength were not associated with PF and SF scores in patients undergoing HSCT. Our study showed that in addition to physical function, depression and fatigue were associated with HRQOL.
The National Comprehensive Cancer Network (NCCN) guidelines recommend aerobic exercise and resistance training to reduce fatigue in cancer patients [25]. Moreover, several systematic reviews and meta-analyses have reported that physical exercise improved HRQOL and reduced fatigue, anxiety, and depression in patients undergoing HSCT [26-28]. Exercise improves physical function and HRQOL and reduces fatigue and depression; therefore, active rehabilitation after discharge is useful in survivors of hematological malignancies undergoing HSCT. Early introduction of a rehabilitation program, including physical exercise and vocational counseling, is important to facilitate early return to work in survivors undergoing HSCT. A study by De Boer, et al. [29] reported that rehabilitation interventions, such as vocational counseling combined with patient education and biofeedback-assisted behavioral training or physical exercise, achieved higher return-to-work rates in cancer patients.
The limitations of this study are as follows: (a) This cross-sectional study included a small number of patients; therefore, we could not definitively establish causality between physical function and the HRQOL. (b) Transplantation-induced symptoms and patients' living environment are known to be associated with HRQOL; however, these variables were not evaluated in this study. Therefore, further studies that consider these points are warranted. | 2021-01-31T07:23:53.092Z | 2020-01-01T00:00:00.000 | {
"year": 2020,
"sha1": "3a3fc65d52f5e3975e22f5a63e4b5e7d5b132522",
"oa_license": "CCBY",
"oa_url": "https://www.oatext.com/pdf/TiT-14-289.pdf",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "3a3fc65d52f5e3975e22f5a63e4b5e7d5b132522",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
55998057 | pes2o/s2orc | v3-fos-license | The Effect of Event Sponsorship on Customer ’ s Brand Awarness and Purchase Intention — A Case Study of Toyota Vietnam
The purpose of this research was to investigate how event sponsorship directly affects customers' purchase intention and indirectly affects it through the mediation of brand awareness. A quantitative approach was applied, with structured questionnaires delivered directly to sports fans of the V-League football championship in Binh Duong province, Vietnam. The empirical results showed that attitude and attention toward the sponsorship indirectly affected purchase intention, while the fit of the sponsored event and event involvement affected customers' purchase intention both directly and indirectly. In addition, brand recognition played a meaningful mediating role in the relationship between event sponsorship and customers' purchase intention.
Introduction
Sponsorship is defined as "an investment, in cash or in kind, in an activity, in return for access to the exploitable commercial potential associated with this activity" (Meenaghan, 1991, p. 36). Companies seek to achieve marketing goals through sponsorship; these include increasing sales, generating and raising awareness, reaching new target markets, and enhancing corporate image (Shank, 1999). Sports are a prime target for sponsorship (McCarville, Flood, & Froats, 1998). Moreover, sports attract public interest and are the subject of involvement, commitment, and emotional connection (McDonald, 1991). Additionally, sports can convey highly effective images and appeal to almost any target group (Ferrand & Pages, 1996). For these reasons, marketers have increased sports sponsorship as a considerable modification of traditional marketing communications. In 2015, Toyota became the main sponsor of the V. League and the Toyota Mekong Club Championship in order to contribute to the development of Viet Nam's football; its spending on sponsorship reached VND 30 billion that year. In 2016, Toyota's investment in sponsorship grew from VND 30 billion to VND 40 billion, an increase of roughly 33 percent over this time period. The V. League 2016 kicked off on 20 February nationwide with 14 clubs competing in 182 matches through 26 rounds, with the winner receiving VND 3 billion (USD 133,000) and the runners-up pocketing half that amount.
Toyota has spent a huge amount of money on sports sponsorship; to be precise, it used more than VND 40 billion to organize the VPF tournament and prizes for winners, which is a huge marketing investment in the V-League. Moreover, as commercial sponsorship is a form of marketing communication, it is necessary to explore how it actually influences consumers, their perceptions and behaviors toward commercial sponsorship, and the sponsor's products, rather than simply measuring sales or the reach of the brand. In essence, the effectiveness of Toyota's sports sponsorship program in terms of customers' responses needs to be investigated from a variety of perspectives in the Vietnamese market because Toyota's expenditures on sponsorship are increasing. Therefore, this research explores which factors linking sponsor and event have a strong effect on purchase intention, in order to help advertisers devise an effective marketing strategy for sponsors' products in the Vietnamese market. The main purpose of this study is to identify the factors that positively affect brand awareness and thereby improve the likelihood of purchase of Toyota's products in Binh Duong province. In more detail, the objective of the research is to answer the following question: what is the effect of event sponsorship on customers' brand awareness and purchase intention? Specifically, this study was designed to examine the influence of several factors on purchase of a sponsor's products in terms of attitude toward the sponsor, attention to the sponsor's promotion, sponsor-event fit, event involvement, and the mediating role of brand awareness.
Purchase Intention
Purchase intention can be defined as "if a certain company supported a particular event then it would improve the chances that a consumer would buy the sponsor's products" (Speed & Thompson, 2000). It is the likelihood that customers intend to purchase or consume the sponsor's product in the future as an effect of the sponsorship. Smith (2008) went deeper into measures of purchase, focusing on the direct effect of sponsorship on sales. A positive attitude toward the product has also been found to be a primary antecedent of purchase intention. Whitlark, Geurts, and Swenson (1993) discovered that "75% of the respondents who indicated that they would be likely to buy a sponsor's products actually did purchase the products within three to six months". Additionally, Pitts (2004) showed that "a staggering 92% of respondents at Gay Games IV who said that they were more likely to purchase the sponsors' products because of the companies' support for the event". Yong Jae et al. (2008) also illustrate that the future purchase intention of consumers can be examined in sponsorship. This directly serves the purpose of the sponsoring company (Howard & Crompton, 1995, p. 363), because purchase intention signals real worth and an advantage in influencing future sales through sponsorship.

Brand Awareness

Aaker (1996) defined brand awareness as "the ability of a consumer to recognize and recall a brand in different situations". Brand awareness consists of brand recall and brand recognition. In the sponsorship context, brand recognition examines whether a consumer can recognize a particular sponsor appearing in a football match, given the brand name as a cue. Brand recall concerns the ability of a consumer to retrieve a particular sponsor from memory given only the product category as a cue, or even by solely asking for any sponsor that comes to mind (Keller, 1993). Further, Hoeffler and Keller (2002) showed that brand awareness can be differentiated into depth and width. Depth explains "how to make consumers recall or identify a brand easily" and width means that "when consumers purchase a product, a brand name will come to their minds at once". If a product possesses brand depth and width at the same time, consumers will consider that particular brand at the time of purchase, which results in higher brand awareness for that product. Additionally, the brand name is the main component of brand awareness (Davis, Golicic, & Marquardt, 2008). As a result, brand awareness influences purchase decisions by way of brand association, and when a product possesses a favorable brand image, it supports marketing activities (Keller, 1993). Brand awareness has a great effect on purchase intention owing to the tendency of customers to purchase familiar and well-known products (Keller, 1993; Macdonald & Sharp, 2000). Brand awareness can assist consumers in recognizing a brand within a product category and making a purchase decision (Percy & Rossiter, 1992). In addition, brand awareness has a great impact on choice even when a brand has previously been regarded merely as part of a product category (Hoyer & Brown, 1990). Furthermore, brand awareness acts as a crucial element in consumer purchase intention: specific brands accumulate in consumers' minds and affect their purchase decisions, and when consumers recognize a familiar brand, this leads to higher purchase intention. In other words, a product with higher brand awareness obtains higher market share and is evaluated as having better quality.
Attitude toward the Sponsor
Prior sponsorship research pointed out that attitude toward the sponsor is one of the most appropriate factors for examining the effectiveness of sponsorship (Javalgi et al., 1994; Stipp & Schiavone, 1996). One of the main goals of sponsors is for customers to hold positive attitudes toward the sponsor through the relationship between the sponsor and the sponsored event (Cornwell & Maignan, 1998). In addition, Speed and Thompson (2000) examined the influence on a person's attitude toward a company that financially assisted a specific event. Their research showed that sponsors who possess a positive image receive a more approving response to their sponsorships than those who do not.
Attention to Sponsor's Promotion
Recent research on advertising response has used cognitive psychology to suggest that the mere-exposure effect can lead to higher evaluation of a product when advertising responses are automatic and unconscious, and that "the consumers' attitude toward different attributes of a promotion play a major role in shaping their response to that promotion" (Grunert, 1996). Speed and Thompson (2000) indicated that "customers believe sponsorship of a particular event by a particular sponsor will affect their attention to the sponsor and remember the sponsor's promotion". This means that attention to the sponsor's promotion is built up in customers' minds through the sponsored event. Watching a sponsored event can sharpen customers' attention to the sponsorship, so that they notice the sponsor's name at the sports event and attend to the sponsor's advertisements.
Sponsor-Event Fit
Prior research attaches great importance to the fit between the sponsor and the sponsored event in the endorsement literature (Crimmins & Horn, 1996; Meenaghan & Shipley, 1999; Speed & Thompson, 2000). Speed and Thompson (2000, p. 230) defined fit as "the degree to which the pairing of an event and sponsor is perceived as well matched or a good fit, without any restriction on the basis used to establish fit". A fit between sponsor and event can be established along several dimensions: "sponsor product relevance to the object", "functional similarities" (i.e., the sponsor's product and the object are both high quality), and "image/symbolic similarities" (Gwinner, 1997; Rifon et al., 2004; Speed & Thompson, 2000). Moreover, Martensen et al. (2007) found that fit distinguishes between positive and negative emotions toward a sponsored event: higher fit may lead to positive emotions, whereas lower fit may contribute to negative emotions. In addition, Becker-Olsen and Simmons (2002) observed that "good fit between sponsor and object resulted in higher attitudes toward the sponsorship and sponsor. The explanation for a bad-fitting sponsorship created positive effects similar to those for native good fit, with even higher sponsor recall". Their study points out that a good fit is a key condition for value transfer between event and sponsor and for building brand awareness.
Event Involvement
Event involvement is defined as "a kind of genuine excitement caused by a strong and solid interest in a specific activity (in our case the sponsored sports event) which results from the importance of this activity for an individual" (Lardinoit & Derbaix, 2001). Within the context of sponsorship, Meenaghan (2001) introduced the idea of fan involvement and of being involved in sponsorships. In focus-group research, he found that people showed positive emotions regarding the investments of sponsors because of increased event or fan involvement in the specific sponsored activity. Additionally, favorably involved fans had knowledge of the sponsor's investments. Hansen and Scotwin (1995) demonstrated that "sports fans have significantly higher recall than those not involved for one sponsor". In another study, Pham (1992) concluded that when consumers are highly involved in a sponsored event, their knowledge strongly increases interest in and motivation toward the sponsorship and brings about benefits for the sponsor (Crimmins & Horn, 1996). It is probable that consumers highly connected with a sponsored event strongly endorse the sponsor. Therefore, respondents who perceive the event to be attractive and interesting are likely to hold a favorable image of the sponsor (D'Astous & Bitz, 1995), and that research concluded that "sponsors can increase the response to their sponsorship if they select events that are well liked by their target market".
In order to confirm the direct and indirect effects on customer purchase intention, this study hypothesized that:
H1: Factors of event sponsorship positively affect brand recognition.
H2: Factors of event sponsorship positively affect brand recall.
H3: Factors of event sponsorship, brand recall, and brand recognition positively affect customer's purchase intention.
H4: Brand recall mediates the relationships between factors of event sponsorship and customer's purchase intention.
H5: Brand recognition mediates the relationships between factors of event sponsorship and customer's purchase intention.
Questionnaire Design and Data Collection
The research used Likert scaling to measure customers' brand awareness and purchase intention through a series of short statements in the surveys, with a five-point range of responses from 1 (strongly disagree) to 5 (strongly agree). Attitude toward the sponsor, attention to the sponsor's promotion, sponsor-event fit, and event involvement were adapted from Speed and Thompson (2000). The brand recognition scale was adapted from Keller (1993), Aaker (1991), and Yoo and Donthu (2001); the brand recall scale from Keller (1993), Aaker (1991), and Rajh (2002); and purchase intention from Boulding, Kalra, Staelin, and Zeithaml (1993), Ajzen and Fishbein (1980), Spears and Singh (2004), and Pelsmacker (1998).
For the purpose of this research, there were two important aspects of the sponsor's sports event: the effect of event sponsorship on customers' brand awareness and the influence of both on purchase intention. The study collected primary data through a questionnaire translated into an appropriate Vietnamese version. In the Binh Duong Province context, surveys were delivered directly to respondents who are customers of Toyota and fans of the V-League football championship. Surveys were collected from 264 respondents, among which 37 questionnaires were rejected, leaving 227 usable questionnaires to analyze in this research. Factor analysis and multiple regression were conducted using SPSS software (version 20).
Factor Analysis and Reliability
Principal component analysis with the varimax rotation method was used to extract meaningful factors from the proposed model. The four independent variables resulted in three factors, as shown in Table 1. Attitude toward the sponsor and attention to the sponsor's promotion were grouped into one factor, which was therefore named "attitude and attention towards the sponsorship", including three items of attitude and three items of attention.
The second factor, fit of the sponsored event, retained only four items for the following step because one item of the fit variable was eliminated. The final factor was event involvement, consisting of four items.
The total variance explained by all factors was 73.7 percent, and the eigenvalue of each factor was above 1. The KMO was .922 and Bartlett's test of sphericity reached significance (p = .000), so this result is considered satisfactory. Cronbach's alpha values for the three factors were all greater than .70 (Nunnally, 1978), showing high reliability. Thus, the three factors showed sufficient consistency and reliability for further analysis. Similarly, Table 2 shows the factor analysis of the group of dependent variables. The mediators brand recognition and brand recall and the dependent variable purchase intention formed three valid factors, with a total variance explained of 73.22 percent, and the eigenvalue of each factor was above 1. The KMO value was .892 and Bartlett's test of sphericity was significant (p = .000). Cronbach's alpha values for the three factors were all greater than 0.7 (Nunnally, 1978). Therefore, the three scales were reliable and retained for subsequent analysis.
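The extraction and reliability workflow above (varimax-rotated factors, KMO, Bartlett's test, Cronbach's alpha) can be reproduced on any item-level survey data. The sketch below is illustrative only: the item names and the synthetic responses are assumptions rather than the study's actual data, and it assumes the `factor_analyzer` Python package is available (its default extraction method is minres rather than strict PCA).

```python
import numpy as np
import pandas as pd
from factor_analyzer import FactorAnalyzer
from factor_analyzer.factor_analyzer import calculate_kmo, calculate_bartlett_sphericity

def cronbach_alpha(items: pd.DataFrame) -> float:
    """Cronbach's alpha: k/(k-1) * (1 - sum of item variances / variance of the sum)."""
    k = items.shape[1]
    item_var = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_var / total_var)

# Hypothetical 5-point Likert responses (227 respondents x 11 items).
rng = np.random.default_rng(0)
data = pd.DataFrame(rng.integers(1, 6, size=(227, 11)),
                    columns=[f"item{i}" for i in range(1, 12)])

kmo_per_item, kmo_total = calculate_kmo(data)    # sampling adequacy
chi2, p = calculate_bartlett_sphericity(data)    # sphericity test

fa = FactorAnalyzer(n_factors=3, rotation="varimax")
fa.fit(data)
loadings = pd.DataFrame(fa.loadings_, index=data.columns)
eigenvalues, _ = fa.get_eigenvalues()            # keep factors with eigenvalue > 1

print(f"KMO = {kmo_total:.3f}, Bartlett p = {p:.3f}")
print(loadings.round(2))
print(f"alpha(items 1-3) = {cronbach_alpha(data.iloc[:, :3]):.2f}")
```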
Characteristics of Respondents
In total, 264 questionnaires were received, but the usable sample size was 227; 71.4% of respondents were male and 28.6% were female. This distribution is representative of those following football in the V-League. Football has traditionally been regarded as a "male" sport, but lately women have begun to show an interest in it. As shown in Table 3, there is a greater percentage of married than single respondents in the sample. As far as the age distribution is concerned, there are five categories of different sizes: 4.4% were older than 55 years, 11.5% were between 45 and 55 years old, and 9.3% were between 18 and 25 years old. The majority of respondents were between 25 and 35 years old (42.7%), and 32.2 percent were between 35 and 45 years old. Almost all respondents had work experience of at least one year. Besides, they earned approximately VND 15 million.
Relationship between Independent Variables, BRARECA, BRARECO and Purchase Intention
Table 4 shows the results for the Pearson correlation coefficient, a statistical measure of the strength and direction of the association between the factors of event sponsorship, brand recall, brand recognition, and customers' purchase intention. The findings indicated positive correlations between the five independent variables (ATINATE, FITOSEVE, EVINVOL, BRARECA, and BRARECO) and the dependent variable (PURINTE) at p < .001. All variables were positively correlated with PURINTE. Among these relationships, the strongest was the correlation between FITOSEVE and PURINTE (r = .666, p < .001), while the weakest was the correlation between BRARECA and PURINTE (r = .431, p < .001). In other words, high levels of the factors of event sponsorship, brand recall, and brand recognition were associated with a high level of consumers' purchase intention.
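A correlation table of the kind reported in Table 4 can be computed directly from the construct scores. The snippet below is a minimal sketch with simulated scores; the column names follow the study's abbreviations, but the values are placeholders, not the study's data.

```python
import numpy as np
import pandas as pd
from scipy.stats import pearsonr

# Hypothetical composite scores per construct (227 respondents); simulated values.
rng = np.random.default_rng(0)
cols = ["ATINATE", "FITOSEVE", "EVINVOL", "BRARECA", "BRARECO", "PURINTE"]
df = pd.DataFrame(rng.normal(3, 0.7, size=(227, len(cols))), columns=cols)

corr = df.corr(method="pearson")                 # full correlation matrix
r, p = pearsonr(df["FITOSEVE"], df["PURINTE"])   # single pair with its p-value
print(corr.round(3))
print(f"r(FITOSEVE, PURINTE) = {r:.3f}, p = {p:.4g}")
```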
Direct Effects on the Mediator Brand Recognition and Brand Recall
Brand recognition was significantly affected by three factors: attitude and attention towards the sponsorship (B = .261), fit of the sponsored events (B = .301), and event involvement (B = .163). This implies that the participants are strong supporters of the V-League and feel favorable emotions toward Toyota. Additionally, there is a logical connection between the V-League and Toyota, so the sponsorship makes the audience more likely to remember Toyota's promotion.
Brand recall was strongly influenced by two factors: attitude and attention towards the sponsorship (B = .256) and event involvement (B = .192). Conversely, the relationship between fit and brand recall showed no significant contribution. This suggests that as attitude and attention towards the sponsorship and event involvement increase, brand recall is likely to be greater.
Direct Effects on Customer Purchase Intention
Among the three factors, attitude and attention towards the sponsorship and event involvement (B = .330 and B = .268, respectively) had direct effects on customer purchase intention. This illustrates that the target audience holds favorable and positive attitudes toward the relationship between sponsor and sponsored event and pays attention to Toyota's advertising during V-League football matches. Conversely, fit of the sponsored events did not exert a significant influence on customer purchase intention.
Moreover, brand recognition had a strong positive effect on purchase intention (B = .327). However, brand recall showed no significant effect on purchase intention. This implies that the audience is able to recognize Toyota as a sponsor through the appearance of Toyota's brand name in the V-League logo and Toyota's banner advertising.
Indirect Effects on Customer Purchase Intention
The outcomes of the regression analysis revealed that three independent variables affected PURINTE through BRARECO. The indirect effect of an independent variable on the dependent variable through the mediator is the product of the effect of the independent variable on the mediator and the effect of the mediator on the dependent variable (Preacher & Hayes, 2008). Brand recognition was significantly affected by three factors: attitude and attention towards the sponsorship (B = .412), fit of the sponsored events (B = .301), and event involvement (B = .226). In turn, brand recognition (B = .327) significantly predicted purchase intention. Consequently, through the intervening variable of brand recognition, FITOSEVE, ATINATE, and EVINVOL indirectly affected PURINTE by .098, .086, and .053, respectively.
Significant of the Indirect Effect
According to Preacher and Hayes (2008), the bootstrapping method was used to test the significance of the indirect effects, or mediation. If zero falls within the interval between the lower (LLCI) and upper (ULCI) boundaries, there is no mediation or indirect effect at the 95 percent confidence level. On the other hand, if zero does not fall between the LLCI and the ULCI, the mediation or indirect effect is significant with 95 percent confidence. As can be seen in Table 5, the indirect effects of ATINATE, FITOSEVE, and EVINVOL on PURINTE through the mediation of BRARECO were estimated to lie between .0396 (LL) and .1387 (UL); .0486 (LL) and .1553 (UL); and .0203 (LL) and .0933 (UL), respectively, with 95 percent confidence. Because none of these intervals contains zero at the 95 percent confidence level, we can conclude that the indirect effects of ATINATE, FITOSEVE, and EVINVOL on PURINTE were significantly different from zero at p < .05 (two-tailed), confirming the mediation of BRARECO in this research.
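The Preacher and Hayes procedure referenced above can be sketched in a few lines: resample the respondents with replacement, re-estimate the a-path (mediator on predictor) and b-path (outcome on mediator, controlling for the predictor), and take percentile bounds of the product a*b. The example below uses simulated data, so the coefficients are placeholders, not the study's estimates.

```python
import numpy as np

def bootstrap_indirect(x, m, y, n_boot=5000, seed=0):
    """Percentile bootstrap CI for the indirect effect a*b (Preacher & Hayes style)."""
    rng = np.random.default_rng(seed)
    n = len(x)
    est = []
    for _ in range(n_boot):
        idx = rng.integers(0, n, n)
        xb, mb, yb = x[idx], m[idx], y[idx]
        a = np.polyfit(xb, mb, 1)[0]                   # slope of m ~ x
        X = np.column_stack([np.ones(n), mb, xb])
        b = np.linalg.lstsq(X, yb, rcond=None)[0][1]   # coefficient of m in y ~ m + x
        est.append(a * b)
    lo, hi = np.percentile(est, [2.5, 97.5])
    return float(np.mean(est)), float(lo), float(hi)

# Hypothetical standardized scores: sponsorship factor -> brand recognition -> purchase intention.
rng = np.random.default_rng(1)
x = rng.normal(size=227)
m = 0.40 * x + rng.normal(size=227)              # mediator
y = 0.33 * m + 0.20 * x + rng.normal(size=227)   # outcome
ab, llci, ulci = bootstrap_indirect(x, m, y)
print(f"indirect effect = {ab:.3f}, 95% CI [{llci:.3f}, {ulci:.3f}]")  # CI excluding 0 => mediation
```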
Total Causal Effect of the Customer Purchase Intention
As can be seen, the output of Table 5 summarizes the direct, indirect, and total causal effects on customer purchase intention.
Discussion and Conclusion
The purpose of this research was to empirically examine the link between brand awareness, specifically brand recognition, and purchase intention in sports sponsorship in Vietnam. The findings showed that, in line with prior empirical research (Chi et al., 2009; Shahbaz et al., 2010), brand recognition has a significantly positive relationship with purchase intention. On the other hand, brand recall has no significant impact on purchase intention. Brand recognition is the main component in the sponsorship context because the sponsor's brand directly enters the customer's purchase decision. In other words, if customers are not aware of the specific sponsor when they search for the sponsor's product, it is very likely that they will choose another brand. Thus, brand managers should guide their brand marketing communication strategy to maintain customer recognition and find new solutions to improve customers' brand recall relative to competitors.
The findings of this research also confirmed the significantly positive relationship between brand recognition and the three important variables of attitude and attention towards the sponsorship, fit of the sponsored events, and event involvement. Attitude and attention towards the sponsorship was found to have the strongest direct effect on brand recognition. In addition, fit of the sponsored event is of major importance in affecting the effectiveness of a sponsorship (Erik & Hans, 2011), and higher fit between the image of the sponsor and the sponsored event contributes to favorable emotions. This shows that Toyota's marketers successfully invested an enormous amount of money in becoming the primary sponsor of the V-League football championship, because the respondents hold favorable attitudes toward the sponsor in the relationship between sponsor and sponsored event (Cornwell & Maignan, 1998). Besides, Toyota successfully displayed its logo and brand name so that they immediately come into the view of audiences, who pay attention, enjoy following coverage of the V-League, and recognize the sponsor's brand in V-League football.
Furthermore, the study verifies that brand recognition acts as a mediator between purchase intention and the three factors of attitude and attention towards the sponsorship, fit of the sponsored events, and event involvement.
The research also finds that if consumers can recognize a particular sponsor appearing in a football match, given the brand name as a cue (Keller, 1993), when they want to buy a sponsor's product, then Toyota holds higher brand recognition through its sponsorship of that sporting event. When the sponsor's brand name is easily recognized at a famous event, it can gain consumers' preference, create positive emotions toward the sponsor, draw attention to other advertising or campaigns, and increase purchase intention.
Finally, the outcomes of this study revealed that, in line with the study of Speed and Thompson (2000), attitude and attention towards the sponsorship and event involvement have a direct influence on customer purchase intention. Event involvement is positively associated with purchase intention because most respondents are sports fans who are highly involved in the sponsored event, and their knowledge has a strong effect on increasing interest resulting from the investment of the sponsor (Crimmins & Horn, 1996). The results of this study also indicated a direct link between attitude toward the sponsor's product and customers' purchase intention. This is consistent with research by MacKenzie, Lutz, and Belch (1986) and Madrigal (2001) in the sponsorship context. When consumers have a favorable attitude toward and belief in a sponsor, they show the strongest inclination to know and regard that sponsor's product. This leads to behavioral intentions, in the form of consumer purchase decisions, arising from the effect of a positive individual attitude.
Recommendations for Future Research
This study concentrates on respondents in Binh Duong Province. The small sample size limits the generalization of the results to the whole of Vietnam. Given this limitation, future studies should investigate a larger sample of the population and extend to other major cities and provinces, such as Hanoi, Hue, Danang, and Can Tho. These places could have potential respondents who both watch V-League football matches and wish to buy sponsors' products. It is suggested that future research expand the participants surveyed to general consumers. Moreover, the outcomes of this research are believed to help extend the understanding of how sponsorship affects consumers' purchase intention, in order to improve marketing position and increase business turnover.
The findings show that three factors, attitude and attention towards the sponsorship, fit of the sponsored events, and event involvement, have both direct and indirect effects on brand awareness, improving the likelihood of purchase of Toyota's products. Thus, further research should build on the results of this study to discover other variables influencing brand awareness and purchase intention in the relationship between Toyota and V-League football, because this research may not cover the effects of all factors of sponsorship on customers' purchase intention through the mediation of brand recognition. Besides, future research could draw on additional sources of information and review more of the sports sponsorship literature to increase the value of the results.
Figure 1. Path coefficients of hypothesis testing. Note: all coefficients in the model are significant at the 95% confidence level.
Table 1. Summary of independent variables with reliability coefficients.
Table 2. Summary of dependent variables with reliability coefficients.
Table 3. Summary of demographic information of respondents (N = 227).
Table 4. Correlations between variables. * Significant at p < .001.
Table 5. Direct, indirect, and total causal effects. Table 5 is divided into two groups, direct and indirect effects on PURINTE through BRARECO. The total effect of FITOSEVE had the strongest positive impact on PURINTE (B = .428). Other positive effects on PURINTE are ATINATE (B = .086), EVINVOL (B = .321), and BRARECO (B = .327). In sum, the total effect of the independent variables on consumers' purchase intention is 1.162, comprising 79.6% direct effects from FITOSEVE, EVINVOL, and BRARECO and 20.4% indirect effects from ATINATE, FITOSEVE, and EVINVOL. | 2018-12-12T07:05:12.612Z | 2017-01-21T00:00:00.000 | {
"year": 2017,
"sha1": "1b46e755d1bca98f6e9c6bb091fa14c91f194024",
"oa_license": "CCBY",
"oa_url": "https://www.ccsenet.org/journal/index.php/res/article/download/65921/35658",
"oa_status": "HYBRID",
"pdf_src": "Anansi",
"pdf_hash": "1b46e755d1bca98f6e9c6bb091fa14c91f194024",
"s2fieldsofstudy": [
"Business"
],
"extfieldsofstudy": [
"Business"
]
} |
8338583 | pes2o/s2orc | v3-fos-license | Projected BCS Wave Functions for Low Dimensional Frustrated Spin Systems
Twenty-five years after the first proposal, the question whether the ground state of a frustrated spin-half system is well described by a spin-liquid Resonating Valence Bond (RVB) wave function is still controversial. A physically transparent representation of a RVB state can be obtained in fermionic representation with a standard BCS-type pairing wave function, working in the subspace with fixed number of electrons and no double occupancies. In this work, we show that, using this variational wave function with a careful parameterization of the pairing function, it is possible to obtain an extremely accurate ansatz for the ground state of the Heisenberg antiferromagnet with next-nearest-neighbor interactions (the J_1−J_2 model) in the regime of strong frustration. Indeed, in the spin-half realization of this model, it is known that the combined effect of frustration and zero-point motion interferes with the mechanism of spontaneously broken symmetry, giving rise to a non-magnetic phase of purely quantum-mechanical nature (J_2/J_1 ≃ 0.5). This wave function is proposed to represent the generic spin-half RVB ground state in spin liquids.
netic order even if the ground state is very close to a broken-symmetry state, with a gapless excitation spectrum and a power-law decay of spin-spin correlations. This is also the case for any array consisting of an odd number of chains (odd-leg ladder systems). The ground state of two chains, or in general of any even-leg ladder system, is non-magnetic too. However, in contrast to the previous cases, here the correlation length is finite and the spectrum is gapped. [2] Such a gap is known to decrease exponentially with the number of legs, [3] leading to a gapless spectrum in the two-dimensional limit, where the ground state of the Heisenberg model has genuine long-range antiferromagnetic order. [4] Competing interactions may in principle allow the stabilization of a non-magnetic ground state even in truly two-dimensional systems. One of the simplest examples of these frustrated systems, which has also been recently realized experimentally, [5] is the so-called J_1−J_2 model [6,7]

H = J_1 Σ_⟨i,j⟩ Ŝ_i · Ŝ_j + J_2 Σ_⟨⟨i,j⟩⟩ Ŝ_i · Ŝ_j ,   (2)

where the sums run over nearest-neighbor and next-nearest-neighbor pairs, respectively, and the antiferromagnetic alignment between neighboring spins (due to J_1 > 0) is hindered by a next-nearest-neighbor antiferromagnetic coupling (J_2 > 0).
Classically, the minimum-energy configuration of the 2D J_1−J_2 model has the conventional Néel order with magnetic wave vector Q = (π, π) for J_2/J_1 < 0.5. For J_2/J_1 > 0.5 the minimum-energy configuration is instead the so-called collinear state, with the spins ferromagnetically aligned in one direction and antiferromagnetically in the other, corresponding to magnetic wave vectors Q = (π, 0) or Q = (0, π). [8] Exactly at J_2/J_1 = 0.5, any classical state having zero total spin on each elementary square plaquette is a minimum of the total energy. These states include both the Néel and the collinear states, but also many others with no long-range order, so that the occurrence of a non-magnetic ground state in the quantum case, for a small spin value, is likely around this value of the J_2/J_1 ratio. Indeed, at present there is a general consensus on the fact that the combined effect of frustration and zero-point motion leads to the disappearance of the long-range antiferromagnetic order, marked by the opening of a finite spin gap, for ∼0.4 < J_2/J_1 < ∼0.55. [9,10] The nature of this non-magnetic ground state is one of the most interesting puzzles of the physics of frustrated spin systems.

Fig. 2. Variational estimate of the magnetic structure factor for the spin-half Heisenberg chain and two-leg ladder (filled circles); empty dots are the numerically exact results obtained with the Green's function Monte Carlo method. [22]
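The classical degeneracy argument can be checked with elementary arithmetic. The sketch below (an illustration, not part of the original paper) evaluates the classical energy per site of the Néel and collinear states on the square lattice and locates their crossing at J_2/J_1 = 0.5.

```python
import numpy as np

S = 0.5  # classical spin length

def e_neel(j1, j2, s=S):
    # 4 antiparallel n.n. bonds and 4 parallel n.n.n. bonds per site (each bond shared by 2 sites)
    return -2 * j1 * s**2 + 2 * j2 * s**2

def e_collinear(j1, j2, s=S):
    # n.n. bonds cancel (2 parallel + 2 antiparallel); all 4 n.n.n. bonds antiparallel
    return -2 * j2 * s**2

j2_over_j1 = np.linspace(0, 1, 101)
e1 = e_neel(1.0, j2_over_j1)
e2 = e_collinear(1.0, j2_over_j1)
crossing = j2_over_j1[np.argmin(np.abs(e1 - e2))]
print(f"Neel and collinear energies cross at J2/J1 = {crossing:.2f}")  # -> 0.50
```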
In particular, an open question is whether the ground state of the J_1−J_2 Heisenberg model is a homogeneous spin liquid, i.e., a state with all the symmetries of the Hamiltonian, as originally suggested by Figueirido et al. [11]. The other possibility is a ground state which is still SU(2) invariant, but nonetheless breaks some crystal symmetries, dimerizing in some special pattern (see below). [12,13,14,15,16,10] A simple picture of a non-magnetic ground state can be given in terms of the so-called Resonating Valence Bond (RVB) states. [17] These are linear superpositions of valence bond states in which each spin forms a singlet bond with another spin on the opposite sublattice (say A and B), [18]

|Ψ_RVB⟩ = Σ_α Π_{m=1}^{N/2} h(r_m) |α⟩ ,   (3)

where |α⟩ = (i_1 j_1)(i_2 j_2)···(i_{N/2} j_{N/2}) denotes a valence-bond configuration, N is the number of sites of the lattice, r_m is the distance between the spins forming the m-th singlet bond (i_m j_m), and h(r_m) is a bond weight factor. These states form in general an (overcomplete) basis of the S = 0 subspace, so that any singlet wave function can be represented in terms of them. However, they represent a non-magnetic state whenever the short-ranged bonds dominate the superposition (3). More precisely, it has been numerically shown by Liang, Doucot and Anderson [18] that the RVB state (3) has no long-range antiferromagnetic order for bonds that decay as rapidly as h(r) ∼ r^(−p), with p ≥ 5. Such bonds can be either homogeneously distributed on the lattice, with short-range correlations among each other (spin liquid) [fig. 1 (a)], or they can break some symmetries of the Hamiltonian, with the dimers frozen in some special patterns [fig. 1 (b)], as originally predicted for the J_1−J_2 model in the regime of strong frustration. [12,13,14,15,16,10] In a seminal paper, [19] Anderson proposed that a physically transparent description of a RVB state can be obtained in fermionic representation by starting from a BCS-type pairing wave function of the form

|BCS⟩ = exp( Σ_{i,j} f_{i,j} c†_{i↑} c†_{j↓} ) |0⟩ .   (4)

This wave function is the ground state of the well-known BCS Hamiltonian

H_BCS = Σ_{k,σ} ε_k c†_{kσ} c_{kσ} + Σ_k Δ_k ( c†_{k↑} c†_{−k↓} + h.c. ) .   (5)

The non-trivial character of this wave function emerges when we restrict to the subspace of fixed number of electrons (equal to the number of sites) and enforce Gutzwiller projection onto the subspace with no double occupancies: singlet pairs do not overlap in real space, and this wave function can be described by a superposition of valence bond states of the form (3). [19,20,21] This projected-BCS (p-BCS) wave function turns out to be an almost exact representation of several low-dimensional spin systems with non-magnetic ground states. For instance, it provides an excellent variational ansatz for the ground state of the Heisenberg chain and of the two-leg ladder, giving a very accurate estimate of the ground-state energy (fig. 2) and reproducing almost exactly the antiferromagnetic correlations. In the first case the spin structure factor S(q) = ⟨Ŝ_q · Ŝ_−q⟩ shows a cusp at q = π, while for two-leg ladders it has a broad maximum at q = (π, π). These features are remarkably well reproduced by the p-BCS variational wave function (fig. 2), which generates robust antiferromagnetic correlations at short distances with a very simple parameterization of the gap function: Δ_k = Δ_1 cos k + Δ_2 cos 3k for the chain and Δ_k = Δ_x cos k_x + Δ_y cos k_y for the ladder. [22] In two dimensions this wave function has already been studied for the pure Heisenberg model by several authors [20,21] for Δ_k ∝ (cos k_x − cos k_y).
In this case it provides a reasonable prediction for the ground-state energy, but it fails to reproduce correctly the long-range antiferromagnetic order of the ground state. Here we show that this type of RVB state represents an extremely accurate variational ansatz for the J_1−J_2 model in the non-magnetic phase when the gap function Δ_k is carefully parameterized. In particular, a definite symmetry is guaranteed for the p-BCS state provided Δ_k transforms according to a one-dimensional representation of the spatial symmetry group. A careful analysis [23,24] shows that the odd component of the gap function, satisfying Δ_k = −Δ_{k+(π,π)}, may have spatial symmetries different from those of the even component, satisfying Δ_k = Δ_{k+(π,π)}. Indeed, the best variational energy is obtained when the former has d_{x²−y²} symmetry, whereas the latter either vanishes or has d_{xy} symmetry. In order to determine the best variational wave function of this form, we have used a recently developed quantum Monte Carlo technique [25] that allows one to optimize a large number of variational parameters with modest computational effort.
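The quoted parity and rotation properties of the two gap components can be verified numerically. The following sketch builds the d_{x²−y²} and d_{xy} basis functions on a 6 × 6 momentum grid and checks their behavior under k → k + (π, π) and under a 90-degree rotation; the lattice size is arbitrary, and these are bare basis functions, not the optimized variational gap.

```python
import numpy as np

L = 6
k = 2 * np.pi * np.arange(L) / L            # allowed momenta on an L x L lattice
KX, KY = np.meshgrid(k, k, indexing="ij")

d_x2y2 = np.cos(KX) - np.cos(KY)            # d_{x^2-y^2} basis function
d_xy = np.sin(KX) * np.sin(KY)              # d_{xy} basis function

def shift_pi(f):
    """f evaluated at k + (pi, pi): a shift of L/2 grid points in each direction."""
    return np.roll(np.roll(f, L // 2, axis=0), L // 2, axis=1)

def rot90(f):
    """f evaluated at the 90-degree-rotated momentum (kx, ky) -> (ky, -kx)."""
    out = np.empty_like(f)
    for i in range(L):
        for j in range(L):
            out[i, j] = f[j, (-i) % L]
    return out

print(np.allclose(shift_pi(d_x2y2), -d_x2y2))   # True: odd under k -> k + (pi, pi)
print(np.allclose(shift_pi(d_xy), d_xy))        # True: even under k -> k + (pi, pi)
print(np.allclose(rot90(d_x2y2), -d_x2y2),      # both change sign under a 90-degree
      np.allclose(rot90(d_xy), -d_xy))          # rotation, as d-waves must
```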
The remarkable accuracy of the p-BCS wave function in describing the ground state of the 2D J_1−J_2 model in the regime of strong frustration can be shown by calculating the variational energy and the overlap with the exact ground state, |ψ_0⟩, for the largest square cluster, N = 6 × 6, where the solution can be determined numerically by exact diagonalization. As shown in fig. 3, the accuracy of the p-BCS wave function rapidly increases with increasing frustration ratio J_2/J_1, whereas conventional Néel-ordered spin-wave wave functions [26,7] quickly become less and less accurate. Entering the regime of strong frustration, J_2/J_1 ∼ 0.45 ± 0.05, where a gapped non-magnetic ground state is expected, the p-BCS wave function becomes impressively accurate, with a relative accuracy on the ground-state energy of order ∼4 × 10⁻³ and an overlap with the exact ground state of ∼99%, both improved by more than an order of magnitude with respect to the J_2 = 0 case. This implies that the ground state in the strongly frustrated regime is almost exactly reproduced by a RVB wave function.
Interestingly, the transition to the regime of strong frustration is marked by the stabilization, at the variational level, of a non-zero d_{xy} component of the gap function. This allows the phases of the actual ground-state configurations to be reproduced correctly, as illustrated in fig. 4. A measure of the accuracy of a variational wave function |ψ_V⟩ in reproducing the phases of the ground state can be given in terms of the average sign

⟨S⟩ = Σ_x |⟨x|ψ_V⟩|² Sgn[ ⟨x|ψ_V⟩ ⟨x|ψ_0⟩ ] ,

where the sum runs over real-space spin configurations |x⟩. In the unfrustrated case it is well known that these phases are determined by the so-called Marshall sign rule [27]: on each real-space configuration |x⟩, the sign of the ground-state wave function is determined only by the number of down spins on one of the two sublattices. This feature, rigorously valid for J_2 = 0, turns out to be a very robust property for weak frustration (J_2/J_1 <∼ 0.3). [28] However, it is clearly violated when frustration plays an important role. It can be shown [23,24] that the Marshall sign (i.e., ⟨S⟩ = 1 for J_2/J_1 = 0) is obtained using the p-BCS wave function with only the d_{x²−y²} component, so that this wave function provides an almost exact representation of the ground-state phases for weak frustration. However, for J_2/J_1 >∼ 0.4, the phases of the wave function are considerably affected by the strong frustration, and only when a sizable d_{xy} component is stabilized at the variational level can this property be correctly reproduced.
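As a concrete illustration of the Marshall rule mentioned above, the sign attached to an Ising configuration at J_2 = 0 is (−1) raised to the number of down spins on one sublattice. The snippet below evaluates this sign for a random configuration; the lattice size and configuration are hypothetical.

```python
import numpy as np

def marshall_sign(config):
    """Sign prescribed by the Marshall rule on a square lattice:
    (-1) ** (number of down spins on sublattice A).
    `config` is an LxL array of +1/-1; sublattice A = sites with even (x + y)."""
    x, y = np.indices(config.shape)
    on_A = (x + y) % 2 == 0
    n_down_A = np.sum((config == -1) & on_A)
    return (-1) ** n_down_A

rng = np.random.default_rng(0)
config = rng.choice([1, -1], size=(6, 6))
print(marshall_sign(config))  # +1 or -1
```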
An even more remarkable effect associated with the d_{xy} component of the gap function is the change induced in the antiferromagnetic correlations. As shown in fig. 5, the finite-size magnetic structure factor of the d_{x²−y²} p-BCS state is sharply peaked around the antiferromagnetic wave vector Q = (π, π), giving rise to a logarithmic divergence in the thermodynamic limit. Such a divergence is instead washed out in the presence of the d_{xy} component of the gap function, leading to a state with weaker short-range antiferromagnetic correlations, which is of course more suitable for describing the spin-gapped strongly frustrated phase.
Of course, accuracy in the energy does not necessarily guarantee a corresponding accuracy in correlation functions. However, as shown in fig. 6, the comparison of the magnetic structure factor with the exact result gives a clear indication that correlation functions obtained by the variational approach are essentially exact. Furthermore, using the stochastically implemented Lanczos technique and the variance extrapolation method, [25] we have verified that the accuracy of the p-BCS wave function is preserved with increasing lattice size. [23] In order to investigate the existence of long-range dimer-like correlations, as in the columnar or the plaquette valence bond state, we have calculated dimer-dimer correlation functions of the form ⟨(Ŝ_i · Ŝ_{i+τ})(Ŝ_{i+r} · Ŝ_{i+r+τ})⟩, with τ a nearest-neighbor vector. In the presence of some broken spatial symmetry, the latter should converge to a finite value at large distance. This is clearly ruled out by our results, shown in fig. 7, providing a very robust confirmation of the liquid character of the ground state for J_2/J_1 ≃ 0.5, which is correctly described by our variational approach.
A totally symmetric spin-liquid solution, as proposed for this model in ref. [11], was actually rather unexpected after the work of Read and Sachdev, [14] which provided arguments in favor of spontaneous dimerization. This conclusion was supported by series expansion [16,10] and by quantum Monte Carlo studies, including the one performed by two of us. [9] It is clear, however, that it is very hard to reproduce a fully symmetric spin-liquid ground state with any technique, numerical or analytical, based on reference states that explicitly break some lattice symmetry.
In conclusion, the spin-liquid RVB ground state, originally proposed to explain high-temperature superconductivity, is indeed a very robust property of strongly frustrated low-dimensional spin systems. Given its success in reproducing the non-magnetic ground states of other low-dimensional spin systems, such as the 1D chain and the two-leg ladder, [22] we expect that the p-BCS RVB wave function represents the generic variational state for a spin-half spin liquid, once the pairing function f_{i,j} is exhaustively parameterized according to the symmetries of the Hamiltonian. Work is in progress along this line of research. [24,29] * * * We thank C. Lhuillier, F. Mila, and D.J. Scalapino for stimulating discussions. This work has been partially supported by MURST (COFIN01). L.C. was supported by NSF grant DMR-9817242. | 2018-05-08T18:31:39.430Z | 2002-08-19T00:00:00.000 | {
"year": 2002,
"sha1": "36b4df8a8b4d4f564d09eb64068c6163141284d2",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "36b4df8a8b4d4f564d09eb64068c6163141284d2",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
133782294 | pes2o/s2orc | v3-fos-license | The Change of Space Use of Shared Space from Landed to High-Rise Settlement
Indonesia is entering an era of urban settlement transformation from horizontal landed and low-rise settlements to vertical high-rise settlements. As a result, residents of landed settlements who are moved to high-rise settlements encounter a vertical living culture that differs from their previous one. This study aims to examine the use of shared space in landed and high-rise settlements. The research method used is a qualitative descriptive study. The research sites are the landed settlement of Kampung Pulo and the high-rise settlement of Jatinegara Barat; Kampung Pulo is where the Jatinegara Barat residents lived before they were moved. The results show the following changes in the use of shared space from landed to high-rise settlement: (1) landed settlements accommodate a greater diversity of communal activities than high-rise settlements; (2) in landed settlements there is a territorial transition space that accommodates the needs for interaction, home-based business activities, and play, whereas in high-rise settlements there is no transition space, so the needs for interaction and play are met in public space, while trading activities mostly take place within the private area; (3) in landed settlements, the shared spaces used for communal activities are more multifunctional than in high-rise settlements.
Introduction
Indonesia is entering an era of urban settlement transformation from horizontal landed and low-rise settlements to vertical high-rise settlements. As a result, residents of landed settlements who are moved to high-rise settlements encounter a vertical living culture different from their previous one. After being transferred to the new settlement, some residents from Kampung Pulo have limited space to perform their daily activities. Consequently, activities such as selling, visiting, playing, and gathering are carried out in spaces that are not intended for them. This study examines the change in the use of shared space from landed to high-rise settlement.
Literature Review
Space for Communal Activity
Shared space is a spatial function that has always existed in Indonesian society. The existence of shared space is a symbol of community, especially in a settlement whose residents have good relationships with one another, marked by communality. Shared space accommodates the various communal activities of the community (both positive and negative) arising from economic, social, and cultural needs.
Kampung Pulo
Kampung Pulo is a very high-density settlement, like other slum areas in Indonesia. With an area of 8 ha, almost the entire site is utilized for building development. Residents of Kampung Pulo who live on the riverbank and in the middle of the settlement are almost all affected by the annual flood, so most houses in this area are two stories high.
Jatinegara Barat flat
The Jatinegara Barat flats are located in Jatinegara Barat, about 300 meters from Kampung Melayu Terminal, and are intended to accommodate residents of Kampung Pulo relocated from the Ciliwung riverbank since June 2015. The Jatinegara Barat flats consist of two 16-floor towers with a housing capacity of 520 units.
Qualitative Methods
Research with qualitative methods aims to reveal events, facts, circumstances, phenomena, and variables as they occur while the research takes place.
Observation
Observation is a method of collecting data through direct observation in the field, with the purpose of obtaining concrete data at the research location.
Interview
The interview technique, according to Nazir (1988), is the process of obtaining data for research purposes through face-to-face question and answer between researchers and respondents.
Research Methods
This research uses a qualitative method with a descriptive approach. Research with qualitative methods aims to reveal the events, facts, circumstances, phenomena, and variables that occur while the research takes place, by presenting what actually happened.
Primary Data
Primary data were collected through field observations, interviews with informants, and documentation in the field.
Secondary Data
Secondary data were collected through the study of literature, records, and previously published research related to the topic of this study.
Research Focus
The objects of this study are a landed settlement and high-rise flats. Based on variables used in similar research, the variables that match the observation of space used for shared activities are: the physical condition of the space, the activities that occur, the actors in those activities, the frequency of activities, and the attributes of the space.
Research Object
The research object is located in the landed settlement of Kampung Pulo and the Jatinegara Barat high-rise flats. Frequent flooding has resulted in residents living in substandard housing. Floods harm not only the people of Kampung Pulo in direct contact with the Ciliwung river but also those living far from the riverbank. As a result, residents who live on the riverbank and those in the village share similar habits, activities, and culture.
For this reason, the research took place in the part of the landed settlement of Kampung Pulo that did not undergo river normalization. The observations started from several alleys (blue circles) located on the banks of the Ciliwung river and at the edge of Jatinegara Barat street. There are two types of alleys in Kampung Pulo: the main alley (green) cuts through the middle of the Kampung Pulo area, measures 1.6-2 meters wide, and allows people and two-wheeled vehicles such as motorcycles and bicycles to pass each other; the branch alleys (yellow) measure 1-1.6 meters and can only be passed single file by a bicycle or pedestrians. Node A is located in the middle of Kampung Pulo; at this point buying and selling take place. In an alley measuring 1.8 meters, with concrete-covered drains on the left and right, merchandise is placed on both sides of the alley, directly above the covered drains. When no trading takes place, some trading tables are left outside the stalls while others are stored back inside the house. At node C, activities such as conversation with neighbors take place; some residents sit on chairs, while others simply interact in front of their houses without chairs. Setbacks created by houses positioned at different distances from the alley are used as storage space for merchandise and as parking for two-wheeled vehicles.
Interview Result
Interviews and questionnaires were administered to 17 respondents from among the Kampung Pulo residents: 9 mothers, 5 fathers, 1 teenager, and 2 children. The interviews aimed to ask the residents questions related to the profile of Kampung Pulo residents, the points where people tend to gather, and the public space facilities available in Kampung Pulo, while the questionnaires served to verify the literature review data on the joint activities conducted by residents of Kampung Pulo. The results of the interviews show that some residents of Kampung Pulo have been residents of DKI since birth, and that the points of joint activity are the alleys, the guard post, the front of houses, and residents' houses.
Observation Summary
Use of shared space for social activities
Some concrete seats are used as places for casual conversation between neighbors. Residents choose to chat casually on the terrace because of the limited size of their homes, so entertaining guests and chatting comfortably with neighbors take place on the terrace.
Use of shared space for transactional activities
There are several types of selling in Kampung Pulo; the most common is selling on the terrace, and the goods are predominantly displayed on the porch or in front of the house. Selling inside the house can only be done by households with rooms large enough to store the merchandise; these typically take the form of grocery stalls for everyday needs.
Use of shared space for kids play
Children's play takes place on house terraces, in alleys, on vacant lots and fields, at game rentals, and on the odong-odong rides of traveling traders. Limited space and land lead children to look for any area they can use simply to run around or ride bicycles.
Ground Floor
Fig. 9. Space Use at Ground Floor.
Shared activities on the 1st floor are scattered across several points; most residents choose to sit on the steps at the entrance. The groups consist of fathers, mothers, and other adults; some mothers accompany their young children while talking with other parents, and children play in the garden behind the towers while their mothers keep chatting.
Second floor
Fig. 10. Space Use at Second Floor.
Shared activities take place around several selling stalls, but because there are not many kiosks, the joint activity is not very intense; meanwhile, many children simply run around or play badminton together in the shared area.
Typical floor
Fig. 11. Space Use at Typical Floor.
On the typical floors of the Jatinegara towers, which are designated as residential floors, the public areas of the corridor and lift lobby are not used by many residents most of the time. In everyday life, the lobby in front of the elevator, which serves as a shared public space, remains unused. The space is used for the monthly social gatherings of mothers or community residents, and sometimes children also cycle and run in the elevator vestibule. A children's playroom and facilities are actually provided on the 1st floor, but because it is located far away, children choose to play near their residential units, where parental supervision is much easier.
Different rules apply on the typical floors. While the 1st and 2nd floors are intended for public activities, the 14 typical floors prohibit public activities such as trading. The following is an overview of the rules that apply on typical floors:
Interview Results
The Jatinegara Barat flats consist of two towers, tower A and tower B. From both towers, a sample of respondents was interviewed about the profile of the residents, the points where people tend to gather in the tower area, and the public space facilities provided in the flats, as well as additional questions about neighborhood relationships, the market of buyers served by the traders, and playground locations. There were 27 respondents from tower A and 19 from tower B, in addition to 4 respondents who sell at the north door on the 1st floor. Besides the residents and sellers, one of the managers of the Jatinegara Barat flats, Mr. Sarkim Sukarya, Head of the Housing Sub-Division, was also interviewed regarding the profile of the towers and the shared space facilities provided.
The results of the interviews at the Jatinegara Barat flats show, among other things, that most residents work in the informal sector; that the shared facilities in the towers are on the 1st and 2nd floors; that neighborhood relationships have weakened compared to when residents lived in Kampung Pulo; that many residents who work as traders choose to sell in their residential units or at the north door on the 1st floor rather than on the 2nd floor provided by the tower management as a public selling space; and that children mostly play in the corridors of the typical floors around their residential units.
Observation summary
Use of shared space for social activities
Gathering among neighbors is more prevalent on the 1st floor, in the public space near the entrance stairs, where groups of youths, fathers, and even mothers sit together. Elsewhere, the space in front of the PAUD room and the children's playground behind the towers are also used as chatting spots by mothers waiting for their children to play. The selling area at the north door is also quite crowded with buyers from the offices around the towers. On the 2nd floor, communication between neighbors rarely occurs because few stalls sell there; some people pass by to buy what they need but do not linger, and a few children can be seen playing in the 2nd-floor public area. Similarly, on the typical floors few people gather; some children play together or ride bicycles in front of the elevator, but not many fathers and mothers are seen clustering together.
Use of shared space for transactional activities
There are three selling areas available to Jatinegara Barat residents: in residential units, on the 2nd floor, and at the north door. The details are as follows:
Use of shared space for kids play
The Jatinegara Barat flats provide a children's playground on the 1st floor on the west side of the tower, but unfortunately most of its users come from the lower floors, while children who live upstairs are more likely to play in the elevator lobby and corridor of each floor. In the Jatinegara Barat flats, selling is dominated by the type of trade conducted inside the residential unit; this apparently relates to the enforcement of existing regulations, which pressures occupants who want to trade to expand into their dwellings in order to avoid enforcement by the tower management.
Conclusion
Change of space use for shared space from landed to high-rise settlement: 1) Landed settlements accommodate a greater diversity of shared activities than high-rise settlements; in Kampung Pulo, various kinds of shared activities occur in many common spaces, whereas in the Jatinegara Barat flats this variety occurs only on the 1st floor. 2) In landed housing there is a transition area, such as the terrace, that accommodates the needs of interaction, trade, and play. In high-rise housing no such transition space is found because of space limitations, so the need for interaction is met only in public space, while trading needs are mostly met in private areas. 3) Spaces used for shared activities in landed settlements are more multifunctional; in high-rise settlements each space serves only one activity function.
Fig. 13. Type of Trading Area at Residential Unit in Jatinegara Barat Flats.
Table 1. Flats Residents Working in the Informal Sector. | 2018-12-11T01:42:08.251Z | 2018-01-01T00:00:00.000 | {
"year": 2018,
"sha1": "5c2f021ea555972a306fa41f6e7ae9eeb9cd9f8c",
"oa_license": "CCBY",
"oa_url": "https://www.shs-conferences.org/articles/shsconf/pdf/2018/02/shsconf_eduarchsia2018_07002.pdf",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "5c2f021ea555972a306fa41f6e7ae9eeb9cd9f8c",
"s2fieldsofstudy": [
"History",
"Economics"
],
"extfieldsofstudy": [
"Geography"
]
} |
257077787 | pes2o/s2orc | v3-fos-license | A cycle-consistent adversarial network for brain PET partial volume correction without prior anatomical information
Purpose Partial volume effect (PVE) is a consequence of the limited spatial resolution of PET scanners. PVE can cause the intensity values of a particular voxel to be underestimated or overestimated due to the effect of surrounding tracer uptake. We propose a novel partial volume correction (PVC) technique to overcome the adverse effects of PVE on PET images. Methods Two hundred and twelve clinical brain PET scans, including 50 18F-Fluorodeoxyglucose (18F-FDG), 50 18F-Flortaucipir, 36 18F-Flutemetamol, and 76 18F-FluoroDOPA, and their corresponding T1-weighted MR images were enrolled in this study. The Iterative Yang technique was used for PVC as a reference or surrogate of the ground truth for evaluation. A cycle-consistent adversarial network (CycleGAN) was trained to directly map non-PVC PET images to PVC PET images. Quantitative analysis using various metrics, including structural similarity index (SSIM), root mean squared error (RMSE), and peak signal-to-noise ratio (PSNR), was performed. Furthermore, voxel-wise and region-wise-based correlations of activity concentration between the predicted and reference images were evaluated through joint histogram and Bland and Altman analysis. In addition, radiomic analysis was performed by calculating 20 radiomic features within 83 brain regions. Finally, a voxel-wise two-sample t-test was used to compare the predicted PVC PET images with reference PVC images for each radiotracer. Results The Bland and Altman analysis showed the largest and smallest variance for 18F-FDG (95% CI: − 0.29, + 0.33 SUV, mean = 0.02 SUV) and 18F-Flutemetamol (95% CI: − 0.26, + 0.24 SUV, mean = − 0.01 SUV), respectively. The PSNR was lowest (29.64 ± 1.13 dB) for 18F-FDG and highest (36.01 ± 3.26 dB) for 18F-Flutemetamol. The smallest and largest SSIM were achieved for 18F-FDG (0.93 ± 0.01) and 18F-Flutemetamol (0.97 ± 0.01), respectively. The average relative error for the kurtosis radiomic feature was 3.32%, 9.39%, 4.17%, and 4.55%, while it was 4.74%, 8.80%, 7.27%, and 6.81% for NGLDM_contrast feature for 18F-Flutemetamol, 18F-FluoroDOPA, 18F-FDG, and 18F-Flortaucipir, respectively. Conclusion An end-to-end CycleGAN PVC method was developed and evaluated. Our model generates PVC images from the original non-PVC PET images without requiring additional anatomical information, such as MRI or CT. Our model eliminates the need for accurate registration or segmentation or PET scanner system response characterization. In addition, no assumptions regarding anatomical structure size, homogeneity, boundary, or background level are required. Supplementary Information The online version contains supplementary material available at 10.1007/s00259-023-06152-0.
Introduction
Over the recent decades, positron emission tomography (PET) imaging, among other molecular imaging modalities, has gained importance in preclinical, clinical, and research fields. PET is widely used in the assessment of oncology patients, cardiac pathologies, and various neurological disorders, including Alzheimer's Disease (AD), Parkinson's Disease (PD), and epilepsy. PET provides functional information useful in the assessment of a variety of metabolic processes, such as tissue metabolism, protein accumulation, and neurotransmission pathways [1,2]. Accurate and reliable quantification is a major strength of molecular PET imaging as it allows us to accurately assess molecular pathways and various diseases in their earliest phases. For instance, accurate localization and/or quantification of tracer uptake in malignant lesions is the basis for pre-and post-treatment evaluations in neurooncology. In addition, accurate delineation of tumor contours is crucial in monitoring treatment response and radiation therapy planning.
The limited spatial resolution and low signal-to-noise ratio are the main drawbacks of PET imaging, making accurate quantitative analysis a challenging task in clinical practice. The partial volume effect (PVE) results from the poor spatial resolution of PET scanners, typically in the range of 3.5 to 6 mm full-width-half-maximum (FWHM). As a result of PVE, the intensity of a particular voxel is affected not only by the tracer concentration of the tissue in which the voxel is located but also by the surrounding tissues/organs. In addition, the physical size and shape of the volume of interest (VOI) and its contrast relative to surrounding regions affect PVE. Therefore, correction for PVE is mandatory for reliable quantitative measurements of physiological parameters and image-derived metrics, such as the standardized uptake value (SUV) or tumor-to-background ratio (TBR) for specific VOIs. This is particularly relevant when the pathology itself affects the volume of the target regions, as is the case in neurodegenerative diseases which are typically associated with atrophy.
Partial volume correction (PVC) techniques can overcome the adverse effects of PVE on PET images. Studies have shown that PVC improves diagnostic accuracy and SUV quantification [3], estimation of tracer uptake in plaque in large vessels or in an atrophied gray matter [4], and measurement of ventricular mass [5], in addition to improving overall image quality for 18 F-Flortaucipir and amyloid PET tracers [6,7]. Moreover, PVC PET images allow for the quantification of different physiologic processes in the brain, including cerebral blood flow, glucose metabolism, neuroreceptor binding, and tumor metabolism [8]. Applying PVC methods also proved to improve the statistical power in cross-sectional [9] and longitudinal [6] analyses in quantitative amyloid imaging. PVC can also eliminate confounding results in studies of aging [10] or atrophy effects in the brain [11,12]. For instance, PVC prevents the underestimation of physiologic measurements due to the loss of cerebral volume resulting from healthy aging processes. A number of studies demonstrated that PVC improves clinical classification performance in AD [13] and PD [14] research. It can be concluded that PVC is necessary to ensure that measurements are truly quantitative for different regions within the brain. To this end, a number of PVC techniques have been developed and implemented with varying degrees of success [15][16][17].
Most popular PVC methods for brain PET imaging, such as Meltzer's method [15], Müller-Gärtner (MG) [16], or the geometric transfer matrix (GTM) method [17], typically require other imaging modalities, such as CT or MRI as a priori anatomical information. This dependence gives rise to a key drawback, namely the need for accurate co-registration of PET to CT or MR images. This dependency means that misregistration or inaccurate segmentation contributes to errors in PVC. Other methods use the PET scanner's point spread function (PSF). The downside of these methods is that they require an accurate estimate of the spatially varying PSF, which might be difficult to measure [17]. Other methods require dedicated reconstruction software, which is readily not available for all PET/CT or PET/ MRI systems. The mentioned downfalls of the current PVC methods highlight an unmet need for an end-to-end method to produce high-resolution PET images without the need for additional anatomical images and prior knowledge of PET scanner characteristics, tumor and VOI size, shape, or background level. Lu et al. assessed the impact of Müller-Gärtner (MG) and iterative Yang (IY) PVC on 11 C-UCB-J brain PET images for finding synaptic vesicle glycoprotein 2A (SV2A), which has been suggested as an indicator of synaptic density in Alzheimer's disease (AD) [18]. Onoue et al. compared CT and MRI-based PVC in brain 18 F-FDG PET and discussed the advantages of PVC using CT images [19]. An error propagation analysis was also performed for seven PVC methods by Oyama et al., where they showed around 30% bias in small and thin regions in AD patients with and without PVC [20].
Recently, machine learning (ML), especially deep learning (DL) as a subset of ML, has been increasingly used in various applications of PET imaging [21][22][23]. With advances in both DL algorithms and computational power, a paradigm shift favoring DL-based PVC approaches might be very promising toward the development of accurate and robust methods.
This work proposes a novel anatomical imaging-free DL-assisted PVC algorithm and evaluates its performance using clinical brain studies acquired with four PET neuroimaging radiotracers. The method is an end-to-end PVC pipeline that takes a low-resolution brain PET image as input and generates a high-quality PVC image, without requiring anatomical imaging or a priori knowledge of the PSF, VOI size, shape, or background level.
PET/CT and MRI data acquisition
Patients undergoing a brain PET/CT/MRI scan collected between April 2017 and February 2020 at Geneva University Hospital were enrolled in this study. The study protocol was approved by the institution's ethics committee, and all patients gave written informed consent. The dataset of two hundred and twelve patients was acquired following injection of one of four PET neuroimaging radiotracers (50 18 F-FDG, 50 18 F-Flortaucipir, 36 18 F-Flutemetamol, and 76 18 F-Fluoro-DOPA). The corresponding CT and T1-weighted MR images were also used in this study. A combination of healthy patients and patients diagnosed with different pathologies, such as neurodegenerative disease, cannabis use disorder, and internet gaming disorder, was considered for training the model to increase the generalizability of our method. The corresponding demographic details are summarized in Table 1.
Attenuation- and scatter-corrected PET images as well as T1-weighted MR images were acquired on the Biograph mCT scanner and the 3 T MAGNETOM Skyra scanner (Siemens Healthcare, Erlangen, Germany), respectively. The PET scanning protocol for the different radiotracers, including injected activities, scan durations, and delay times between injection and PET scanning, is summarized in Table 1. The MRI data acquisition protocol was similar for the various radiotracers. The PET/CT/MRI scanning protocol details are summarized in Supplementary Table 1.
Data processing and image registration
After cropping, the PET and MR images were co-registered to the standard brain template defined in the Montreal Neurological Institute (MNI, McGill University) standard stereotactic space [24] using the 3D Slicer software [25]. An affine registration method with 12 degrees of freedom was employed for all images [26]. Because PET and CT images acquired on the PET/CT scanner were already registered, PET images were registered to the MNI template, and the resulting registration matrix was applied to the CT images. Subsequently, the T1-weighted MRI was registered to the CT images. All images were visually assessed to ensure accurate registration between PET, CT, and MR images.
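As an illustration of this step, the hedged sketch below performs a 12-degrees-of-freedom affine registration of a PET volume to an MNI template with SimpleITK (rather than 3D Slicer) and reuses the resulting transform for the co-acquired CT; the file names, similarity metric, and optimizer settings are illustrative assumptions, not the study's actual configuration.

import SimpleITK as sitk

# Fixed image: MNI template; moving image: PET in native space (placeholder paths).
fixed = sitk.ReadImage("mni_template.nii.gz", sitk.sitkFloat32)
moving = sitk.ReadImage("pet_native.nii.gz", sitk.sitkFloat32)

# Initialize a 3D affine transform (12 degrees of freedom) at the image centers.
initial = sitk.CenteredTransformInitializer(
    fixed, moving, sitk.AffineTransform(3),
    sitk.CenteredTransformInitializerFilter.GEOMETRY)

reg = sitk.ImageRegistrationMethod()
reg.SetMetricAsMattesMutualInformation(numberOfHistogramBins=32)
reg.SetOptimizerAsRegularStepGradientDescent(
    learningRate=1.0, minStep=1e-4, numberOfIterations=200)
reg.SetInterpolator(sitk.sitkLinear)
reg.SetInitialTransform(initial, inPlace=False)
affine = reg.Execute(fixed, moving)

# Resample PET into MNI space and apply the same matrix to the co-registered CT.
ct = sitk.ReadImage("ct_native.nii.gz", sitk.sitkFloat32)
sitk.WriteImage(sitk.Resample(moving, fixed, affine, sitk.sitkLinear, 0.0), "pet_mni.nii.gz")
sitk.WriteImage(sitk.Resample(ct, fixed, affine, sitk.sitkLinear, 0.0), "ct_mni.nii.gz")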
Data augmentation
Since the number of cases differed between radiotracers, the effect of dataset size on model performance was minimized using a previously developed augmentation method based on the Laplacian blending (LB) technique, referred to as Robust-Deep [27], to increase the dataset size to a fixed number of 100 per radiotracer. The Robust-Deep technique increases the number of brain images by combining images of two different cases through a predefined mask to create a semi-realistic image, which can significantly enhance the robustness of deep learning models.
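To make the blending operation concrete, the following sketch implements a generic Laplacian-pyramid blend of two images through a mask, shown in 2D for brevity; it only illustrates the principle underlying Robust-Deep [27], and the pyramid depth and smoothing values are arbitrary choices for this sketch, not the published settings.

import numpy as np
from scipy.ndimage import gaussian_filter, zoom

def gaussian_pyramid(img, levels, sigma=1.0):
    pyr = [img]
    for _ in range(levels - 1):
        pyr.append(gaussian_filter(pyr[-1], sigma)[::2, ::2])  # smooth, downsample
    return pyr

def laplacian_pyramid(img, levels):
    gp = gaussian_pyramid(img, levels)
    lp = []
    for i in range(levels - 1):
        up = zoom(gp[i + 1], np.array(gp[i].shape) / np.array(gp[i + 1].shape), order=1)
        lp.append(gp[i] - up)            # band-pass detail at this level
    lp.append(gp[-1])                    # coarsest residual
    return lp

def laplacian_blend(img_a, img_b, mask, levels=4):
    la = laplacian_pyramid(img_a, levels)
    lb = laplacian_pyramid(img_b, levels)
    gm = gaussian_pyramid(mask.astype(float), levels)   # smoothed mask per level
    blended = [m * a + (1 - m) * b for a, b, m in zip(la, lb, gm)]
    out = blended[-1]
    for lev in reversed(blended[:-1]):   # collapse the pyramid back to full size
        out = zoom(out, np.array(lev.shape) / np.array(out.shape), order=1) + lev
    return out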
Partial volume correction
The Iterative Yang (IY) technique [4] was selected from the PET-PVC toolbox [28] for PVC. Unlike region-based PVC methods, where the corrections are only valid for voxels within a selected region and provide regional mean values (e.g., GTM, MGM), the IY method applies a voxel-by-voxel correction to the whole image. The PVC image $f^{itr}_{PVC}(x)$ is estimated as the product of the uncorrected PET image $f(x)$ and the ratio of the artificial PET image $f^{itr}_{a}(x)$ to a blurred/smoothed version of this image (achieved by convolving $f^{itr}_{a}(x)$ with the PSF of the PET scanner):

$$f^{itr}_{PVC}(x) = f(x)\,\frac{f^{itr}_{a}(x)}{\bigl(f^{itr}_{a} \otimes PSF\bigr)(x)},$$

where the artificial PET image $f^{itr}_{a}(x)$ is renewed at each iteration by multiplying the average value $A_{j,f^{itr}_{PVC}}$ of the current PVC image in the $j$-th region by the anatomical probability $P_{j}(x)$ of the $j$-th region at location $x$, which is extracted from the MR images:

$$f^{itr}_{a}(x) = \sum_{j} A_{j,f^{itr}_{PVC}}\,P_{j}(x).$$

We initially considered the first PVC PET image as equal to the uncorrected PET image:

$$f^{0}_{PVC}(x) = f(x).$$

Ten iterations were used for PVC in this work. The FWHM of the 3D Gaussian convolution kernel was set to 3.0 × 3.0 × 3.0 mm.
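A compact numerical rendering of this update is sketched below with NumPy/SciPy; the probability maps and voxel size are placeholders, and only the 10 iterations and the 3.0 mm FWHM follow the text above.

import numpy as np
from scipy.ndimage import gaussian_filter

def iterative_yang(pet, probs, voxel_mm=1.0, fwhm_mm=3.0, iterations=10, eps=1e-8):
    # pet: uncorrected image; probs: (J, *image_shape) region probabilities P_j(x).
    sigma = fwhm_mm / (2.355 * voxel_mm)      # convert FWHM to Gaussian sigma (voxels)
    f_pvc = pet.copy()                        # initialization: f^0_PVC = f
    for _ in range(iterations):
        # Artificial piecewise image: sum_j <f_PVC>_j * P_j(x)
        means = [(f_pvc * p).sum() / (p.sum() + eps) for p in probs]
        f_art = np.sum([m * p for m, p in zip(means, probs)], axis=0)
        # Voxel-wise correction ratio: f_art / (f_art convolved with the PSF)
        f_pvc = pet * f_art / (gaussian_filter(f_art, sigma) + eps)
    return f_pvc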
Network architecture
A Cycle-Consistent Generative Adversarial Network (CycleGAN), which learns a function to translate non-PVC PET images to PVC PET images (Fig. 1), was used in this work. The model consists of two GANs comprising four main networks, two generators and two discriminators, as described in detail in Supplementary Table 2. Model training and evaluation were performed on an NVIDIA 2080Ti GPU with 11 GB memory running under the Windows 10 operating system. We trained four different models with five-fold cross-validation, one for each radiotracer.
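The sketch below shows how the two generator/discriminator pairs and the cycle-consistency term typically fit together in such a model, assuming PyTorch and a least-squares adversarial formulation; the architectures of Supplementary Table 2 and the actual loss weights are not reproduced here, so the weight `lam` and the loss form are assumptions.

import torch
import torch.nn.functional as F

def cyclegan_losses(G_ab, G_ba, D_b, D_a, a, b, lam=10.0):
    # a: batch of non-PVC PET images, b: batch of reference PVC PET images.
    fake_b, fake_a = G_ab(a), G_ba(b)
    # Least-squares adversarial terms for the two generators.
    loss_gan = (F.mse_loss(D_b(fake_b), torch.ones_like(D_b(fake_b))) +
                F.mse_loss(D_a(fake_a), torch.ones_like(D_a(fake_a))))
    # Cycle consistency: a -> b -> a and b -> a -> b should reconstruct the input.
    loss_cyc = F.l1_loss(G_ba(fake_b), a) + F.l1_loss(G_ab(fake_a), b)
    return loss_gan + lam * loss_cyc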
Visual and quantitative evaluation for the test dataset
All images, namely original PVC and DL-predicted PVC images, were visually inspected to assess overall image quality and the presence of potential alterations and artifacts in tracer distribution.
Quantitative analysis was performed by calculating well-established metrics, such as the structural similarity index metric (SSIM), root mean squared error (RMSE), and peak signal-to-noise ratio (PSNR), reflecting the geometric similarity between the DL-predicted and ground truth images, the level of error/noise, and the strength of the signal relative to noise, respectively. Voxel-wise and region-wise activity concentration correlations between the DL-predicted and reference PET images were evaluated through joint histogram and Bland and Altman analyses. For the region-wise analysis, 20 radiomic features from 83 brain regions were extracted by registering the reference and predicted images to the Hammers N30R83 brain atlas [29].
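For reference, these three metrics can be computed as in the sketch below, assuming scikit-image and co-registered 3D SUV volumes; the choice of data range is an assumption for this sketch rather than the study's exact implementation.

import numpy as np
from skimage.metrics import structural_similarity, peak_signal_noise_ratio

def image_metrics(ref, pred):
    # ref, pred: co-registered 3D SUV arrays of identical shape.
    rng = ref.max() - ref.min()
    ssim = structural_similarity(ref, pred, data_range=rng)
    psnr = peak_signal_noise_ratio(ref, pred, data_range=rng)
    rmse = float(np.sqrt(np.mean((ref - pred) ** 2)))
    return {"SSIM": ssim, "PSNR_dB": psnr, "RMSE_SUV": rmse}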
Radiomics analysis
The image biomarker standardization initiative (IBSI) [30] compliant LIFEx software [31] was used for the extraction of the radiomic features. The list of the extracted radiomic features and their related categories is presented in Table 2. The relative bias between radiomic features extracted from the reference and DL-predicted PVC PET images was calculated over all radiotracers.
Voxel-based statistical analysis
All T1-weighted, original reference PVC, and DL-predicted PVC images for all PET neuroimaging tracers were pre-processed using FSL (FMRIB Software Library v6.0.1, Analysis Group, FMRIB, Oxford, UK). In each step, we initially pre-processed the T1-weighted images and then applied the transformation matrices to the original and DL-predicted PVC images. Therefore, the original reference PVC and DL-predicted PVC PET images were identically pre-processed for each patient.
First, brain tissue was extracted from the T1-weighted images using the BET function implemented within FSL (Brain Extraction Tool, FSL). Subsequently, the skull-stripped T1-weighted images were used as a mask to extract brain tissue from both the original reference PVC and DL-predicted PVC PET images for each patient. Afterward, T1-weighted images were registered to MNI standard space using the FLIRT function (FMRIB's Linear Image Registration Tool, FSL). Then, the original reference PVC and DL-predicted PVC PET images of each patient were registered to MNI space via FLIRT using the same transformation matrix employed for registering the T1-weighted image of that subject. We applied a linear image registration method that does not change the voxel values, without smoothing, to minimize the effect of pre-processing on the results. In each step, the outcome of the pre-processing procedures was manually checked for potential errors, and appropriate corrections were performed when needed. After these pre-processing steps, the mass univariate methodology of Statistical Parametric Mapping (SPM12; Wellcome Centre for Human Neuroimaging, UCL, UK) was used to perform a voxel-wise two-sample t-test comparing the DL-predicted PVC with the reference PVC PET images for each tracer dataset [32]. This analysis identifies voxel clusters with statistically significant differences in the DL-predicted PVC images compared to the reference PVC PET images. Statistical significance was determined at a voxel-wise threshold of p < 0.05 (family-wise error corrected), with no additional cluster-extent threshold applied.
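This pipeline can be scripted directly against the FSL command-line tools, as in the hedged sketch below; the paths are placeholders, and the options shown (default BET and FLIRT settings) may differ from those actually used in the study.

import subprocess

def run(cmd):
    subprocess.run(cmd, check=True)

run(["bet", "t1.nii.gz", "t1_brain.nii.gz", "-m"])          # skull-strip, save mask
run(["flirt", "-in", "t1_brain.nii.gz", "-ref", "mni152.nii.gz",
     "-omat", "t1_to_mni.mat", "-out", "t1_mni.nii.gz"])    # T1 -> MNI, save matrix
for pet in ["pet_pvc.nii.gz", "pet_dl_pvc.nii.gz"]:
    run(["flirt", "-in", pet, "-ref", "mni152.nii.gz",      # reuse the T1 matrix
         "-applyxfm", "-init", "t1_to_mni.mat",
         "-out", pet.replace(".nii.gz", "_mni.nii.gz")])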
Results
All DL-predicted PVC PET images were considered visually adequate and comparable to the corresponding original PVC PET images, as exemplified in Figs. 2 and 3. In particular, Fig. 2 illustrates three different transaxial slices of MRI, non-PVC PET, reference MRI-based PVC PET, and DL-predicted PVC PET images, as well as the corresponding bias maps, for four different patients/radiotracers. The effectiveness of our model in highlighting and enhancing the contours of the anatomical information in the DL-predicted PVC PET images is observable. It is worth noting that the DL-predicted PVC PET images are synthesized from PET images only, as opposed to the reference PVC PET, which is generated from both MR and PET images. Figure 3 presents four abnormal cases depicting artifacts and loss of anatomical information in MR images, likely because of patient motion and the presence of metallic objects, such as a dental crown or a ventriculoperitoneal shunt, or post-operative changes causing artifacts in MR images. The reference PVC PET images generated from such MR images therefore inherit these artifacts, whereas the DL-predicted PVC images, synthesized from PET alone, do not. The scatter and Bland and Altman plots for 83 brain regions over the test dataset for each radiotracer are illustrated in Fig. 4.

Fig. 4. Bland-Altman plots (right panel) and scatter plots (left panel) of SUV mean differences in the 83 brain regions for the various tracers. In the Bland-Altman plots, the black solid and dashed lines denote the mean and 95% confidence interval (CI) of the SUV differences, respectively. In the scatter plots, the black solid and dashed lines denote the linear regression line and the identity line, respectively.

For all radiotracers, the scatter plots show high correlations between SUVs calculated on DL-based PVC PET images and those on reference MRI-based PVC PET images, with a correlation coefficient (R 2 ) larger than 0.98 and RMSE smaller than 0.15 SUV. The Bland and Altman plots show that the largest variance in terms of mean error and confidence interval (CI) was observed for 18 F-FDG (95% CI: − 0.29, + 0.33 SUV, mean = 0.02 SUV), whereas the smallest variance was obtained for 18 F-Flutemetamol (95% CI: − 0.26, + 0.24 SUV, mean = − 0.01 SUV). Table 3 summarizes the quantitative evaluation metrics, including SSIM, PSNR, and RMSE, for the different radiotracers. The PSNR varies from 29.64 ± 1.13 dB for 18 F-FDG to 36.01 ± 3.26 dB for 18 F-Flutemetamol. The smallest SSIM was achieved for 18 F-FDG (0.93 ± 0.01), whereas the largest SSIM was obtained for 18 F-Flutemetamol (0.97 ± 0.01). 3D-rendered views of the voxel-wise statistical analysis of reference and DL-predicted PVC PET images for each PET tracer are shown in Fig. 5. The red and green regions represent voxels with statistically significant overestimation and underestimation of tracer uptake, respectively. In Fig. 6, clusters presenting statistically significant differences between the DL-predicted and reference PVC PET images are depicted. By comparing the DL-based images with the original images, we classified errors into two categories, namely overestimation and underestimation: the former describes DL-predicted PVC PET voxels with a significantly higher value than the reference PVC PET voxels, while the latter describes voxels with a significantly lower value than the reference (Table 4). The joint voxel-wise histogram analysis between reference and DL-predicted PVC PET images is depicted in Supplementary Fig. 1. The results are in good agreement with the region-wise scatter plots.
Figure 7 shows the relative error heat maps for the 20 radiomic features and 83 regions for the different radiotracers. For a more concise presentation of the heat map, we report the average of the left and right regions. The complete heat map for the 83 regions is depicted in Supplementary Figs. 2 and 3 to highlight abnormal cases where the left and right regions have significantly different errors. The maximum underestimation and overestimation errors for each radiotracer can be appreciated from the corresponding color bar. The largest underestimation and overestimation are around 10% for 18 F-FluoroDOPA. With this radiotracer, the SUV was mostly underestimated in the DL-predicted PVC PET images for all radiomic features, except the gray-level zone length matrix low gray-level zone emphasis. The average relative error for the kurtosis radiomic feature was 3.32%, 9.39%, 4.17%, and 4.55%, whereas it was 4.74%, 8.80%, 7.27%, and 6.81% for the NGLDM_contrast feature for 18 F-Flutemetamol, 18 F-FluoroDOPA, 18 F-FDG, and 18 F-Flortaucipir, respectively. The average relative error of HISTO_energy_Uniformity, a feature depicting the strength of the signal, was 2.81%, 5.93%, 4.30%, and 3.93% for 18 F-Flutemetamol, 18 F-FluoroDOPA, 18 F-FDG, and 18 F-Flortaucipir, respectively.
Discussion
There is a growing interest in applying PVC for PET image interpretation and for quantifying various physiological parameters of interest in clinical and research settings. A variety of PVC algorithms have been developed; however, they are not yet widely applied in the clinical setting. One possible explanation is that most available algorithms rely on certain assumptions that introduce uncertainty in the computation and ensuing quantification, and require additional anatomical images, such as CT and MRI. Moreover, such additional imaging modalities are not always available: the radiation dose burden of CT and the acquisition time and cost of MRI considerably limit the clinical adoption of these techniques.
Two of the most popular PVC algorithms, namely MG and GTM, rely on anatomical/structural information provided by other imaging modalities, such as CT or MRI. Anatomically based methods assume perfect registration and segmentation of the multimodal images prior to the application of PVC. In previous studies, the deleterious effect of co-registration errors [33] and segmentation errors [34,35] on PVC implementation has been investigated and reported, specifically in the context of brain imaging [17,36-38]. Quarantelli et al. [37] showed that, of all possible sources of error, misregistration errors had the most substantial impact on the accuracy of PVC in brain PET imaging.
An alternative to these strategies is iterative deconvolution methods [39,40], which do not require anatomical information or assumptions regarding surrounding structures, tumor size, homogeneity, or background. One drawback of deconvolution-based methods is that they can amplify the high-frequency content of images, thus resulting in increased image noise [41]. As a result, ideal/perfect PVC algorithms remain difficult to achieve [11]. In addition, similar to other PVC methods, deconvolution-based methods still need to incorporate the scanner's PSF in the reconstruction process [42][43][44]. As mentioned earlier, accurate characterization of the scanner's response function can be challenging as it is spatially variable, object-dependent, and affected by reconstruction parameters [7]. It has been shown that any PSF mismatch might be critical [28,45].

Table 4. Voxel-based statistical analysis between DL-predicted and original MRI-guided PVC for the different PET tracers.

The models using 18 F-FDG and 18 F-FluoroDOPA images for predicting the corresponding PVC images had fewer voxels with statistically significant differences, yielding better performance. Conversely, models using 18 F-Flutemetamol and 18 F-Flortaucipir images had more voxels with statistically significant differences, demonstrating worse prediction compared to 18 F-FDG or 18 F-FluoroDOPA. Here, "voxel number" represents the extent of a difference with statistical significance, and "T-values" represent the degree of a difference with statistical significance. Radiomic features analysis evaluates the consistency and robustness of existing patterns in DL-predicted and reference PVC PET images. Considering the relatively poor spatial resolution of clinical PET systems and the importance of PVE in brain PET, conventional radiomic features, such as SUV max , SUV mean , and total lesion glycolysis (TLG), are expected to be significantly impacted by PVC. Furthermore, high-order features, such as GLZLM, which represent small regions/patterns with low gray levels, are essential to evaluate the impact of PVC since PVE can lead to higher bias in small structures. Although our results highlight the importance of radiomic features for the assessment of PVC methods, separate studies are necessary to further understand the relevance of radiomics analysis.
Other assumptions include homogeneity of tracer distribution in a region or tissue component or homogeneous VOI [46,47]. However, since the VOIs can be very heterogeneous in practice, the homogeneity assumption can introduce uncertainty and bias in parameter estimates [48]. In most voxel-based methods, the correction is valid only for voxels within the target region and requires initial information about the mean or relative mean values in various regions [46]. Region-based methods [42,49] require manual VOI definition, which suffers from inter-and intra-observer variability. This might potentially lead to different VOI definitions for the same target [50,51], where the difference in delineation can go up to 15 mm in diameter [52,53]. In addition, some PVC algorithms require dedicated reconstruction software [42,54] or extensive parametrization [7,40,55].
Research and development efforts are still being spent to tackle the limitations of currently available PVC algorithms. To encourage the clinical community to adopt PVC methods as part of standard processing procedures, more robust and straightforward methods must be developed and made available. It is essential to develop techniques that can be easily integrated, take as few assumptions as possible, and require as little parameter setting as possible.
Similar to other application fields, especially computer vision, DL can help tackle different problems encountered in PET imaging [56][57][58][59]. However, to the best of our knowledge, no DL-based method has been proposed to address the PVE problem in brain PET to date. Application in other body regions, e.g., in clinical oncology, is very sparse, with only a few studies so far [60]. We proposed a method consisting of an end-to-end DL-based pipeline that generates PVC PET images without the need for an additional anatomical imaging modality. In addition, it does not depend on any of the aforementioned underlying assumptions and eliminates the need for prior information, such as VOI size, homogeneity, or regional mean values. We trained and evaluated our proposed model in 83 brain regions defined on a template for various PET neuroimaging radiotracers. The evaluation demonstrated excellent quantitative and qualitative performance. In addition, our method is not affected by the limitations or artifacts present in other imaging modalities, or by the registration and segmentation inaccuracies common in alternative methods. One limitation of the current study is that the data were not multi-institutional and were instead collected from a single site. As a consequence, the images were also acquired on the same PET and MRI scanner models. This might affect the generalizability of the model, which needs to be addressed in future studies through the use of a more diverse dataset from multiple institutions to further enhance its robustness. Using images acquired on different PET scanners with different acquisition and reconstruction protocols might improve the robustness and reproducibility of the model, thus leading to better performance. In addition, owing to the differing sizes of the datasets for each radiotracer, data augmentation was required. Though this was beneficial in reducing the effect of sample size and increasing the robustness of the model, it may introduce some additional bias. Eliminating the need for additional imaging modalities might be particularly useful in cases where these other modalities are not available, or are available but were acquired under other conditions (e.g., post-operatively) or with an important time delay, or harbor artifacts that could then be transferred to the PET images, as exemplified in Fig. 3. We hope that such end-to-end approaches will facilitate the implementation of PVC in the routine clinical setting owing to their ease of implementation on different systems. Another limitation of the current study is the absence of an ideal ground truth for the assessment of the proposed PVC technique. The MRI-based PVC method used in this work as a surrogate ground truth does not reflect ideal PVC PET images. Despite the advantages of simulations, where the ground truth is available for evaluation [60], no simulations/phantoms are capable of perfectly mimicking clinical scenarios. Our model performed better when fed with PET images in MNI space. The normalization to MNI space can be automated through simple scripting to transfer the images from native space to standard space, enabling the user to feed the model with images in native space directly.
PVC has been shown to improve diagnostic accuracy in conditions associated with atrophy and in small brain regions [61]. An added clinical value is also expected in the evaluation of small focal abnormalities, namely the localization of epileptic foci or in the detection of small malignant lesions [62]. Our results demonstrated that the proposed approach provides quantitative accuracy equivalent to alternative approaches without the need for anatomical images.
Conclusion
This work presents an end-to-end anatomical imaging-free DL-based PVC algorithm to correct for PVE in brain PET imaging. The technique is efficient because it eliminates the need for accurate registration, segmentation, or characterization of the PET scanner response function. In addition, no assumptions regarding VOI size, homogeneity, boundary, or background level are required. The proposed approach fits most situations encountered in the clinical setting, provided sufficient training data are available. Moreover, it is relatively insensitive to minor errors that may affect intersubject comparisons and is thus more robust. Given the post-reconstruction nature of the technique, it can be used on existing clinical PET scanners to improve PET's quantitative accuracy. The qualitative and quantitative performance of the proposed method demonstrated its potential in clinical brain PET studies using various neuroimaging molecular imaging probes. The achieved performance and robustness make the proposed approach a good candidate for the incorporation of PVC into routine clinical practice.
Acknowledgements This work was supported by the Swiss National Science Foundation under Grants No. SNSF 320030_176052, 185028, 188355, 169876, and 31003A_179373, the Louis-Jeantet Foundation with contributions of the Clinical Research Center, University Hospital and Faculty of Medicine, University of Geneva, the Velux Foundation, and the Schmidheiny Foundation. VG received research/teaching support through her institution from Siemens Healthineers, GE Healthcare, Roche, Merck, Cerveau Technologies, and Life Molecular Imaging. Avid radiopharmaceuticals provided access to the 18 F-Flortaucipir radiotracer but were not involved in data analysis or interpretation.
Funding Open access funding provided by University of Geneva.
Data availability Data used in this work are not available owing to privacy/ethical restrictions.
Declarations
Ethics approval and consent to participate All procedures performed in studies involving human participants were in accordance with the ethical standards of the institutional and/or national research committee and with the 1964 Helsinki declaration and its later amendments or comparable ethical standards. Informed consent was obtained from all individual participants included in the study.
Conflict of interest The authors declare no competing interests.
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/. | 2023-02-23T06:18:20.569Z | 2023-02-20T00:00:00.000 | {
"year": 2023,
"sha1": "7d577079fefbe3d95fed56d21955cea49fd96ee3",
"oa_license": "CCBY",
"oa_url": "https://link.springer.com/content/pdf/10.1007/s00259-023-06152-0.pdf",
"oa_status": "HYBRID",
"pdf_src": "Springer",
"pdf_hash": "19ba36bd3e36195c8a42d0db8ba22a393a77a304",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
224916696 | pes2o/s2orc | v3-fos-license | Bacteriocidal Properties of Bacillus Subtilis Nanoparticles Against Selected Human Pathogens
The use of biologically synthesized nanoparticles has been an area of research interest in recent times. Due to the high rate of bacterial resistance to antibiotics, there is a need to search for more potent alternatives to ineffective antibiotics. This study aims to evaluate the antibacterial effects of silver nanoparticles synthesized by Bacillus subtilis against Pseudomonas aeruginosa and Staphylococcus aureus. Silver nanoparticles were obtained by dissolving 0.842 g of silver nitrate (AgNO3) in 100 ml of B. subtilis culture in Mueller Hinton broth. The antibacterial susceptibility testing of the nanoparticles formed was carried out using standard methods, and a comparative antibacterial test was also carried out using standard antibiotics. The multiple antibiotic resistance indices were also determined. The zones of inhibition were 29 and 12 mm against Staphylococcus aureus and Pseudomonas aeruginosa, respectively, after 8 hrs of nanoparticle synthesis. The antibiotic susceptibility test using standard antibiotics revealed that S. aureus was sensitive only to erythromycin and ofloxacin, with zones of inhibition of 15 mm and 9 mm, respectively, while P. aeruginosa was sensitive only to ofloxacin. The multiple antibiotic resistance index (MARi) was 0.9 for P. aeruginosa and 0.82 for S. aureus. The results indicated that B. subtilis nanoparticles presented better antibacterial properties than standard antibiotics and can be explored as a candidate for drug production to fight bacterial resistance to antibiotics.
I. INTRODUCTION
Multidrug resistance among pathogens has become a global problem for the treatment and cure of bacterial infections [1]. There is a need to explore new and alternative avenues for antimicrobials that are less susceptible to microbial resistance. Nanoparticles (NPs) are defined as particulate matter with at least one dimension that is less than 100 nm [2]. NPs are very effective as antimicrobials and offer better therapy than conventional antibiotics, because nanoparticles act in direct contact with the cell wall without necessarily penetrating it, contrary to the mode of action of most antibiotics [3]-[5]. There is also a need to consider the biological synthesis of nanoparticles because it is eco-friendly compared with other routes of nanoparticle synthesis [6].
Three NPs have been found useful in antibacterial therapy: silver, gold, and copper NPs [7]. Silver nanoparticles (AgNPs) are known to perforate the cell wall by increasing its permeability and inactivating the respiratory chain [8].
Nanoparticles have also been shown to have antimicrobial effects on fungi, viruses, parasites, and protozoa [9], and they exhibit synergistic tendencies with conventional antibiotics, with greater retention time within the host system compared with other drugs [9].
Nanoparticles synthesized by biological processes, especially by microorganisms, have higher catalytic reactivity and greater specific surface area, and can interact with other microorganisms [10]. The main interest is the production of nanoparticles using a biological method; utilizing a biological source gives an easy approach, easy multiplication and increase of biomass, and size uniformity [11].
Silver salts such as silver nitrate (AgNO3) are effective at providing a large quantity of silver ions all at once, because silver binds to thiol groups. One of the antimicrobial mechanisms of Ag+ is efficient binding to sulfur-containing compounds on the bacterial surface, rupturing the cell wall and causing cell death [12]. The downside of this is that thiol-containing compounds such as proteins with cysteine residues can absorb silver ions and neutralize their antibacterial activity by preventing the silver ions from attacking DNA [13].
Some authors have reported the biosynthesis of gold, silver, gold-silver alloy, selenium, tellurium, platinum, palladium, silica, titanium, zirconia, quantum dots, magnetite and uranite, nanoparticles by bacteria, actinomycetes, fungi, yeasts and viruses [14]. Nanoparticles are biosynthesized when the microorganisms target ions from their environment and then turn the metal ions into elemental metal through enzymes generated by the cell activities [15].
The aim of this research is to synthesize nanoparticles from bacteria using silver nitrate and to evaluate them as antibacterials against selected bacteria.

II. MATERIALS AND METHODS
A. Preparation and standardization of Bacteria
The bacteria used included clinical strains of Bacillus subtilis, Pseudomonas aeruginosa, and Staphylococcus aureus, which were collected from the Achievers University Microbiology Laboratory; confirmatory tests were carried out to ascertain the viability of the test organisms using biochemical methods and Gram staining techniques. The bacterial cultures were standardized to the McFarland standard using BaSO4 according to the procedures described by Cheesebrough (2000).
B. Inocula preparation
Bacillus subtilis, Pseudomonas aeruginosa, and Staphylococcus aureus were suspended in Mueller Hinton broth and incubated for 4 hours to obtain a concentration corresponding to the McFarland standard (0.5 × 10^8 CFU/ml). The inocula were standardized against the prepared barium sulphate standard.
C. Synthesis of Nanoparticles
A modified method of [28] was used in the synthesis of nanoparticles from Bacillus subtilis. A 0.842 g portion of AgNO3 (Kermel, Nigeria) was added to a standardized broth culture of Bacillus subtilis in a test tube and shaken thoroughly using a magnetic shaker (Gallenkamp 23/5861E). The test tube was wrapped with foil paper to prevent oxidation of silver [29] and kept under sterile conditions.
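As a rough check on the recipe, the AgNO3 concentration implied by these quantities can be computed as below; the molar mass is a standard value, and the result is only an estimate for this sketch, not a reported figure from the study.

mass_g, molar_mass_g_per_mol, volume_l = 0.842, 169.87, 0.100  # AgNO3 in 100 ml
molarity = mass_g / molar_mass_g_per_mol / volume_l
print(f"AgNO3 concentration ~ {molarity * 1000:.1f} mM")        # ~49.6 mM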
F. Multiple Antibiotic Resistance Index (MAR):
The multiple antibiotic resistance (MAR) index for the bacterial strains used was determined according to the procedure described by Krumperman [32]. The indices were determined by dividing the number of antibiotics to which the organism was resistant (a) by the number of antibiotics tested (b). Resistance to three or more antibiotics is taken as multiple antibiotic resistance, and a MAR index greater than 0.2 indicates a high level of resistance.
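The computation itself is a simple ratio, as in the sketch below; the counts shown are illustrative, not the study's raw data.

def mar_index(resistant: int, tested: int) -> float:
    # MAR = a / b, with a resistant antibiotics out of b tested.
    return resistant / tested

print(mar_index(9, 11))   # ~0.82, e.g. resistance to 9 of 11 antibiotics tested
print(mar_index(10, 11))  # ~0.91, a very high level of multidrug resistance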
G. Statistical Analysis
The statistical analysis was carried out using SPSS (Statistical Package for the Social Sciences) version 20 to test significance at the 5% confidence level. The antibiotic susceptibility of the test bacteria was compared with the antimicrobial activity of the B. subtilis silver nanoparticles against both Staphylococcus aureus and Pseudomonas aeruginosa using a T-test.
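An equivalent of this independent-samples T-test can be sketched with SciPy as below; the zone-of-inhibition values are placeholders rather than the study's measurements, and the use of Welch's variant is an assumption.

from scipy import stats

nanoparticle_zones = [29, 12]     # illustrative zone-of-inhibition values (mm)
antibiotic_zones = [15, 9, 4]     # illustrative zone-of-inhibition values (mm)
t, p = stats.ttest_ind(nanoparticle_zones, antibiotic_zones, equal_var=False)
print(f"t = {t:.2f}, p = {p:.3f}; significant at the 5% level if p < 0.05")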
III. RESULTS AND DISCUSSION
A. Physical observation of culture during synthesis
At inoculation, the culture of B. subtilis presented a yellow colouration. When AgNO3 was added, there was a colour change from yellow to brownish yellow, and after 2 hrs of incubation there was a colour change to brown, which was consistent throughout the 8 hrs of incubation, evidencing the synthesis of nanoparticles and suggesting that nanoparticle formation started after about 2 hrs of synthesis.
B. Microscopic observation of B. subtilis after synthesis of nanoparticles.
There was no damage to the morphological features of the cells of B. subtilis after synthesis when viewed under the microscope. The Gram status also remained positive.
C. Physical appearance of nanoparticles on agar plate
The plates presented a silver-coated surface, which could be attributed to the silver nanoparticles. Figures 1a and 1b show the agar plates of B. subtilis nanoparticles against Pseudomonas aeruginosa and Staphylococcus aureus after 8 hrs of synthesis and incubation at 37 °C for 24 hours. The change of the reaction mixture from pale yellow to brown colour indicated the production of silver nanoparticles (Ag+ reduced to Ag0) [33]. The appearance of the brown colour in the AgNO3-treated flask can be attributed to surface plasmon resonance (SPR), suggestive of the formation of AgNPs [34].
A brownish colouration was observed around the zones of inhibition of the synthesized silver nanoparticles for both test organisms; however, S. aureus had a shiny silver colouration within the zones of inhibition between the 7th and 8th hour of synthesis after incubation at 37 °C for 24 hours.

Fig. 1 a, b. Zones of inhibition of silver nanoparticles synthesized from B. subtilis against P. aeruginosa and S. aureus, respectively; a silver coating attributable to the AgNO3-derived nanoparticles is visible around the zones of inhibition.
D. Antibiotic Susceptibility
The comparative antibiotic susceptibility test using standard antibiotic discs showed that ofloxacin and ciprofloxacin were effective against Staphylococcus aureus and Pseudomonas aeruginosa, with zones of inhibition ranging from 4 mm to 15 mm, while the Bacillus subtilis silver nanoparticles (BsAgNp) had zones of inhibition of 12 and 29 mm against P. aeruginosa and S. aureus, respectively, as presented in Table 1. According to the antibiotic susceptibility interpretation chart of [35], values less than 9 mm for antibiotics are considered resistant, so the test organisms can be said to be resistant to most of the standard antibiotics used. The problem of antibiotic resistance of microorganisms has posed a serious and immense global challenge. Most of the currently available antimicrobials, which are synthetic, are inefficient, and some elicit adverse side effects [36]-[37]. The use of antibiotics over the years has led to numerous hazards to public health, where some infections do not respond to any existing drugs [38]. This is evident from the level of resistance recorded as shown above.
E. Multiple antibiotic resistance index (MARi)
The multiple antibiotic resistance indices of Pseudomonas aeruginosa and Staphylococcus aureus are presented in Table 2. P. aeruginosa has a MAR index of 0.9, and a similarly high value was recorded for S. aureus. Bacteria having a MAR index of ≥0.2 indicate a high level of multidrug antibiotic resistance. The results thus presented confirm the high level of resistance of these two organisms to standard antibiotics.
F. Antibacterial activity of B. subtilis nanoparticles against test bacteria
Antimicrobial assessment of the synthesized nanoparticles was carried out on the test bacteria hourly during synthesis and the values recorded. Interestingly, the nanoparticles synthesized by Bacillus subtilis showed appreciable antibacterial action compared with the antibiotics used. The activity of the B. subtilis nanoparticles is presented in Table 3. The activity is time dependent, as more efficacy was observed with increasing time of synthesis, with the highest activity recorded at the 8th hour of synthesis.
Table 3. Zones of inhibition (mm) of B. subtilis silver nanoparticles against the test bacteria during synthesis.

Time (hrs)              S. aureus     P. aeruginosa
At inoculation (0 hr)   5.5 ± 0.7     3.5 ± 0.7
1                       9 ± 0         5 ± 0
2                       11.5 ± 0.7    6 ± 0
3                       12.5 ± 0.7    7 ± 0
4                       16 ± 0        7.5 ± 0.7
5                       17.5 ± 0.7    8.5 ± 0.7
6                       20 ± 0        10 ± 0
7                       23 ± 0        11 ± 0
8                       29 ± 0        12 ± 0

Comparatively, the activity of the nanoparticles synthesized from B. subtilis was significantly better (P < 0.05) than that of the standard antibiotics. The level of bacterial resistance to antibiotics is evident in the results presented in Table 1. The high level of resistance is not surprising, given the notoriety of P. aeruginosa, which is known to cause infection in patients with compromised immune systems and is usually involved in nosocomial infections [39]-[41]. Reports have also shown this organism to be resistant to a number of antibiotics [42]. On the other hand, S. aureus showed better susceptibility to the nanoparticles than P. aeruginosa (Table 1). S. aureus is also known to have acquired resistance to multiple antibiotics [43], [44]. Various microorganisms have developed drug resistance over many generations as a result of genetic mutation, misuse or overuse of antibiotics, wrong prescription, and other related factors. However, several authors have described Ag ions and Ag-based compounds as having strong antimicrobial effects [45], which can be explored as alternatives to ineffective antibiotics.
The obvious activity of the silver nanoparticles suggests that they could be used as effective antibacterials against S. aureus and P. aeruginosa, which are culprits in a number of diseases such as chronic wound infections, respiratory infections, and other staphylococcal infections. Silver ions in particular have been shown to exert strong inhibitory and antibacterial effects and to possess broad-spectrum antimicrobial properties [46], [47]. The authors of [49] and [6] reported that silver nanoparticles have potent antibacterial activities against S. aureus and E. coli, and [50] showed that silver bio-nanoparticles from bacteria have inhibitory and bactericidal effects against methicillin-resistant Staphylococcus aureus (MRSA). The mechanism of the inhibitory action of silver ions on microorganisms is not completely known. It is believed that DNA loses its replication ability and cellular proteins become inactivated upon Ag+ treatment. It has also been recorded that Ag+ likely binds to functional groups of proteins, resulting in protein denaturation in microbes [51].
The first evidence of bacteria synthesizing silver nanoparticles was established using the Pseudomonas stutzeri AG259 strain isolated from a silver mine. Some microorganisms can survive high metal ion concentrations and grow under those conditions owing to their resistance to the metal. The mechanisms involved in this resistance include efflux systems, alteration of solubility and toxicity via reduction or oxidation, biosorption, bioaccumulation, extracellular complex formation or precipitation of metals, and the lack of specific metal transport systems [52].
The application of antibacterial nanotechnology is fast gaining importance in the prevention of the devastating consequences of antibiotic resistance. The basic properties of nanomaterials make them good candidates as antimicrobials [53]. Nanotechnology today offers a promising future as an alternative to antibiotics in the control of bacterial infections, as a result of prolonged antimicrobial activity coupled with low toxicity, compared with antimicrobial agents that display short-term activity, high toxicity, and side effects.
IV. CONCLUSION
There is a strong need for the development of new and novel antimicrobial agents that are less toxic and less expensive, with little or no side effects, and the exploration of nanotechnology is an option to be closely invested in. Sources such as bacteria and plant materials can be looked into as sources of nanoparticles for combating resistant bacteria. | 2020-10-19T18:11:54.905Z | 2020-09-17T00:00:00.000 | {
"year": 2020,
"sha1": "2cd1c4852902bbb3f493b3d7e4bf355ce320733d",
"oa_license": "CCBYNC",
"oa_url": "https://ejbio.org/index.php/ejbio/article/download/65/29",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "256ea62df695a5f681789a53b2ce151b5b43f742",
"s2fieldsofstudy": [
"Medicine",
"Biology",
"Materials Science"
],
"extfieldsofstudy": [
"Chemistry"
]
} |
218951419 | pes2o/s2orc | v3-fos-license | Mindset development by applying U theory and religious concept in educational system: Thailand as a case
Dynamic change and the fast growth of technology create an unpredictable environment that forces people to live with tension. People must learn to adapt mentally to changing circumstances, and mindset is crucial in the workplace and in a chaotic society. In this article, the author reviews the religious (Buddhist) concept and the U theory from secondary data. The purpose is to propose developing people's mindset by aligning the two concepts within the educational system in Thailand. Both concepts aim to cultivate the deepest levels of the human mind, helping people to develop consciousness and an open mind, open heart, and open will. As a result, people can live with others harmoniously and peacefully, see things as they are, and become aware of thoughts and tempers that might lead to intolerable situations.
Introduction
Presently, modern globalization has become more complex, as it evolves by economic means, using monetary mechanisms to open up free trade, investment, and funding. Developed countries have better capability and knowledge for business competitiveness; moreover, they have strong economies, which translate into purchasing power in the perfect market [1]. Furthermore, the advancement of technology has made the globe flat, where people freely get in touch both face to face and remotely, while robotics and artificial intelligence (AI) are replacing repetitive workforces.
As a result, various types of firms adapt their business models by reducing their workforce or shutting down branches. People, as well, need to re-skill, up-skill, and practice life-long learning to create their own value where AI cannot replace them. Those who fail to improve themselves may face difficulties living in this fast-changing environment. People react to problems and tension differently; some might express themselves aggressively, while others might suffer from depression.
Thus, in Thailand the government has implemented a policy to develop human capital to be ready for the unforeseen changes of the future. However, no one knows the right skill set necessary for the future; what matters is the mindset of people who are willing to be life-long learners and know how to adapt themselves to the dynamics of change. This is consistent with [2], which suggested that the workforce should be educated to be able to learn and grow, focusing on developing attitudes and behaviour, since the mind leads to action, whether positive or negative. A person who lacks consciousness or education might behave objectionably; for example, teenagers replicate risky, threatening, or violent behaviour from social media. Therefore, the author aims to highlight the U theory and the religious concept (in this article, Buddhism), which instruct people from the deepest level of the mind, spending a moment to feel and notice their own mental state, so that people can control their actions and conduct themselves properly.
Theoretical analysis
In the disruptive paradigm of change, a high quality of education alone is not enough for a harmonious society. A good-quality population shall have good health, knowledge, skills, and, specifically, morals and ethics [3,4,5]. Nevertheless, human action depends on the values, perceptions, attitudes, and personality people hold. Values are the beliefs, attitudes, and norms which lead to action and practice [6].
In Buddhism, the development of a human being is called sikkhā, or education. Prior to administering education, one must understand human behaviour. Buddhism divides human behaviour, or actions, called karma, into three types: physical actions, verbal actions, and thought. Of these three kinds of action, thought is the most influential in what a human becomes and does. Therefore, education in Buddhism highlights training the human mind to be neutral toward whatever is experienced through the five senses: seeing, hearing, smelling, touching, and tasting.
These five senses might generate favourable or unfavourable feelings; for example, when one experiences something amicable, one is happy, whereas if one experiences something rough, one becomes agitated. Human moods are also contagious: it is worth noticing that if you stay close to someone in a bad mood, you will also feel unhappy. Furthermore, whatever actions one commits, the results reflect back on oneself in some form. On this ground, education in Buddhism stresses profoundly the mind, which leads to improvement in human behaviour, intellect, and mind [7,8].
On the grounds that humans are social beings, [1] proposed that the whole world can live peacefully together, as a concept of a new normal of imagination. He considers that imagination comes before knowledge: one can imagine without boundary, as one wishes, and imagination is certainly a power to construct knowledge. All human beings and all forms of life are living in the same world, or 'the same oneness'. If something is demolished, others are affected and ruined, and, correspondingly, humans lose harmony. In fact, humans have a natural spirituality for survival: when we sacrifice for or help other people, endorphins of happiness are released. These endorphins generate spiritual happiness, which makes humans healthier. If a person loses spiritual happiness, that person lacks self-actualization and will search for it. The social system is diverse, but it must be an integrity of diversity, considered along three dimensions: firstly, expanding spiritual relatedness from the community to society and the world.
Secondly, building the fundamental social structure like the body's cells, which are connected and work systematically. And thirdly, constructing the communication system as a whole, i.e., all people knowing the same facts. In the new paradigm of the human being, it is a must to create a cognition of the holistic view of the individual as part of the whole. Everyone has to understand the reality of the environment: that everything is dynamically inter-connected. This is a new consciousness, since each individual at a different life stage has a distinctive consciousness of mind. A harmonious mind will free us from defilements and stimulate universal love and understanding toward others and all beings.
Theory U, proposed by [9], explores and disseminates a hidden dimension of the social processes that humans confront every day. The theory is presented as a new science to be performed with a mind of enlightenment. The purpose of Theory U is to help leaders, individuals, and groups handle fluctuating situations through an inner source of innovation. The U penetrates deeply into cognitive spaces for innovation and change. The theory clarifies the journey of innovation shining out from observing, reflecting, planning, and acting. One shall have freedom of mind to stress the senses, observe, and perceive any queries that arise as an initiative. It starts from downloading information and seeing while holding judgement; after that, sensing the whole situation and observing until it fades away; then it goes to presencing, which means one can connect to the deepest source within; and later crystallizing the vision that emerges.
The last two steps are prototyping, to explore the future by enacting the new thought, and performing the newness in practice. Theory U was also influenced by [10], who noted that creativity is important for health, happiness, and success in all aspects of life. Creativity is inside everyone, but it is covered by the voice of judgement. To let creativity come out, one needs suspension, with patience and a willingness not to impose a conceptual framework on what one sees, but rather to observe without forming conclusions. Even though group dynamics coercively shape norms and ways of thinking and seeing, people have to learn how to take time, assess, and stop assuming. There are two types of knowing: 'analytic knowing' and 'primary knowing'.
Analytic knowing is a cognitive science composed of independent objects and states of the human mind. The vast majority of humans live in analytic knowing, segregating 'I' and 'it'. In contrast, primary knowing is an interconnected wholeness: no isolation, direct, present, spontaneous, timeless, and larger than the 'self'. The scholars therefore present the U model as starting from sensing, then presencing, and then realizing, as shown in figure 1.
Fig. 1. The U model: sensing, presencing, and realizing.
In addition, [10] explains that the individual must have an open mind, an open heart, and an open will. The open mind is the ability to access intellectual intelligence (IQ). The open heart is the capability to reach emotional intelligence (EQ), which means the competence to empathize with others in different contexts. And the open will is the ability to reach an authentic aim, or spiritual intelligence (SQ). For this reason, the author sees that institutions can adapt their learning and teaching for Thai students, notably at the level of the deepest mind and consciousness.
Conclusion and recommendations
In conclusion, education in Buddhism cultivates people profoundly, from the inside out, at the conscious level of mindfulness, to behave morally and ethically. It makes people aware of their own actions. The essence is to be indifferent to any kind of defilement or agitation: just observe the feeling, and soon it will fade away. A conscious mind also brings insightful knowledge. Such new knowledge may emerge for solving problems, which is quite necessary for a knowledge-based economy in a disruptive environment.
At this time, most institutions practice knowledge management, searching for best practices to use in learning, teaching, research, and innovation. Therefore, the concept of development from the deeper level of the mind should be considered for teaching and learning, as it would be beneficial for people to understand others and to feel the wholeness. The U theory complements this by instructing people to avoid judging from experience without reflection. In the same notion, these two concepts guide people to use their analytical thinking, logic, and problem-solving skills effectively. Therefore, in this chaotic and disruptive era, consciousness and mindset development should be cultivated in the young generation throughout the country via the educational system. | 2020-05-07T09:11:38.240Z | 2020-01-01T00:00:00.000 | {
"year": 2020,
"sha1": "8594d660c60c688fd648aba21e904b6221230b80",
"oa_license": "CCBY",
"oa_url": "https://www.e3s-conferences.org/articles/e3sconf/pdf/2020/24/e3sconf_tpacee2020_12002.pdf",
"oa_status": "GOLD",
"pdf_src": "Adhoc",
"pdf_hash": "ea8ae643d99f5a099c1578cd17a3acf44474d19b",
"s2fieldsofstudy": [
"Education",
"Psychology"
],
"extfieldsofstudy": [
"Sociology"
]
} |
204743642 | pes2o/s2orc | v3-fos-license | A variety of clustering in $^{18}$O
We investigate excited states of $^{18}$O by using the antisymmetrized molecular dynamics. It is found that five different types of cluster states exist which we call ${}^{14}{\rm C}+\alpha$, higher-nodal ${}^{14}{\rm C}+\alpha$, two molecular states, and $4\alpha$ linear-chain. The calculated $\alpha$-decay widths are compared with the observed data. The higher-nodal ${}^{14}{\rm C}+\alpha$ cluster states reasonably agree with resonances reported by the recent experiments. We predict that the $\alpha$-particle emission is dominant for the ${}^{14}{\rm C}+\alpha$ cluster states while the molecular states prefer the $^{6}$He emission.
Although many experimental and theoretical studies have been performed, there are two problems to be solved. First, the 12 C + α + 2n molecular states suggested by von Oertzen et al.
have not yet been investigated theoretically. In the case of 12 Be [42], it is shown that the α + α + 4n configuration appears at a lower excitation energy than various two-body cluster states such as 8 He + α, 6 He + 6 He, and 7 He + 5 He. The former is interpreted as a molecular orbital structure analogous to the covalent bond, where the four valence neutrons locate around both clusters simultaneously, whereas the latter is interpreted as an ionic configuration, where the four valence neutrons are trapped in either of the α-cores. If the 14 C + α state is interpreted as the "ionic" state, the molecular orbital configuration 12 C + α + 2n should also be expected. Second, there is a lack of quantitative calculations to compare with the observations. As mentioned above, observables such as B(E1) and α-decay widths have already been measured. Therefore, we need to calculate the B(E1) and α-decay widths quantitatively to establish the clustering in 18 O. In addition, the inversion doublet is essential to prove the asymmetric cluster structure, but the assignment of the negative-parity 14 C + α bands is controversial between the experiments.
In this paper, we investigate excited states of 18 O by using the AMD. For quantitative comparison of the excitation energies, we improve the effective nucleon-nucleon interaction and wave functions compared with the previous AMD calculation [26]. To search for the molecular states, we calculate higher excited states. As a result, we suggest that five different cluster configurations exist. Moreover, we compare the energy spectra and α-decay widths with the experiments. The calculated higher-nodal 14 C + α cluster states reasonably agree with resonances reported by the recent experiments. Also, it is found that the 14 C + α cluster states decay by α-particle emission, while the 12 C + α + 2n molecular states decay by 6 He emission.
II. THEORETICAL FRAMEWORK
In this work, we use the microscopic A-body Hamiltonian written as

$$\hat{H} = \sum_{i=1}^{A} \hat{t}_i - \hat{t}_{\rm c.m.} + \sum_{i<j} \hat{v}_N(ij) + \sum_{i<j} \hat{v}_C(ij),$$

where $\hat{t}_i$, $\hat{t}_{\rm c.m.}$, $\hat{v}_N$, and $\hat{v}_C$ are the kinetic energy per nucleon, the kinetic energy of the center-of-mass, the nucleon-nucleon interaction, and the Coulomb interaction, respectively. The Gogny D1S [43] is used as the effective nucleon-nucleon interaction. The AMD wave function $\Phi_{\rm AMD}$ is represented by a Slater determinant of single-particle wave packets,

$$\Phi_{\rm AMD} = \mathcal{A}\{\varphi_1, \varphi_2, \ldots, \varphi_A\}.$$

Here, $\varphi_i$ is the single-particle wave packet, which is a direct product of a deformed Gaussian spatial part [44] and spin ($\chi_i$) and isospin ($\xi_i$) parts,

$$\varphi_i(\mathbf{r}) = \exp\Bigl\{-\sum_{\sigma=x,y,z} \nu_\sigma (r_\sigma - Z_{i\sigma})^2\Bigr\}\, \chi_i\, \xi_i, \qquad \chi_i = a_i \chi_\uparrow + b_i \chi_\downarrow.$$

The centroids of the Gaussian wave packets $Z_i$, the direction of the nucleon spin $a_i$, $b_i$, and the width parameters of the deformed Gaussian $\nu_\sigma$ are variables determined by the variational calculation, which we perform with a constraint potential on the quadrupole deformation parameter β. After the variational calculation, the eigenstate of the total angular momentum is projected out, and we perform the generator coordinate method (GCM) by employing the quadrupole deformation parameter β as the generator coordinate: the projected wave functions $\Phi^{J\pi}_{MKi}$ are superposed as

$$\Psi^{J\pi}_{M\alpha} = \sum_{Ki} g^{J\pi}_{Ki\alpha}\, \Phi^{J\pi}_{MKi},$$

where the coefficients $g^{J\pi}_{Ki\alpha}$ and the eigenenergies $E^{J\pi}_{\alpha}$ are obtained by solving the Hill-Wheeler equation [45]. To discuss the dominant configuration in $\Psi^{J\pi}_{Mn}$, we calculate the overlap $|\langle \Phi^{J\pi}_{MKi} | \Psi^{J\pi}_{Mn}\rangle|^2$ between $\Psi^{J\pi}_{Mn}$ and the basis wave functions. Using the GCM wave functions, we estimate the α-decay width from the reduced width amplitude (RWA), which we calculate with the Laplace expansion method given in Ref. [46]. The reduced width $\gamma_l^2(a)$ is given by the square of the RWA $y_l(a)$,

$$\gamma_l^2(a) = \frac{\hbar^2}{2\mu a} \bigl[a\, y_l(a)\bigr]^2,$$

and the partial α-decay width is the product of the reduced width and the penetration factor $P_l(a)$,

$$\Gamma_l = 2 P_l(a)\, \gamma_l^2(a), \qquad P_l(a) = \frac{ka}{F_l^2(ka) + G_l^2(ka)},$$

where $F_l$ and $G_l$ are the regular and irregular Coulomb wave functions. Here, the channel radius $a$ is chosen as 5.2 fm, the same as that used in Refs. [33,34]. The wave number $k$ is determined by the decay Q-value $E_Q$ and the reduced mass $\mu$ as $k = \sqrt{2\mu E_Q}/\hbar$. The dimensionless α-reduced width is defined by the ratio of the reduced width to the Wigner limit $\gamma_W^2$,

$$\theta_l^2(a) = \frac{\gamma_l^2(a)}{\gamma_W^2}, \qquad \gamma_W^2 = \frac{3\hbar^2}{2\mu a^2},$$

and the spectroscopic factor $S$ is defined by the integral of the RWA,

$$S = \int_0^{\infty} dr\, r^2\, |y_l(r)|^2.$$

To investigate the valence-neutron properties, we calculate the single-particle orbits of the intrinsic wave function.
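As a numerical aside, the penetration factor and the partial width defined above can be evaluated with mpmath's Coulomb wave functions, as in the hedged Python sketch below; the physical constants are standard values, while the reduced width and Q-value arguments are placeholders that would come from the computed RWA, not values from this work.

import math
import mpmath

HBARC = 197.327            # MeV fm
ALPHA_FS = 1.0 / 137.036   # fine-structure constant
AMU = 931.494              # MeV/c^2

def alpha_width(gamma2_MeV, E_Q, l, a=5.2, Z1=2, Z2=6, A1=4, A2=14):
    """Partial alpha-decay width Gamma = 2 * P_l(a) * gamma_l^2(a), in MeV."""
    mu = A1 * A2 / (A1 + A2) * AMU                 # reduced mass (MeV/c^2)
    k = math.sqrt(2.0 * mu * E_Q) / HBARC          # wave number (fm^-1)
    eta = Z1 * Z2 * ALPHA_FS * mu / (HBARC * k)    # Sommerfeld parameter
    F = mpmath.coulombf(l, eta, k * a)             # regular Coulomb function F_l
    G = mpmath.coulombg(l, eta, k * a)             # irregular Coulomb function G_l
    P = k * a / (F**2 + G**2)                      # penetration factor P_l(a)
    return 2.0 * gamma2_MeV * float(P)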
The procedure for calculating the single-particle orbits follows our previous work [19]. Using the single-particle orbit $\varphi_s$, we discuss the amount of the positive-parity component and the angular momenta in the intrinsic frame,

$$p_+ = \Bigl\langle \varphi_s \Bigm| \frac{1+\hat{P}_x}{2} \Bigm| \varphi_s \Bigr\rangle, \tag{12}$$

$$j(j+1) = \langle\varphi_s|\hat{j}^2|\varphi_s\rangle,\qquad |j_z| = \bigl|\langle\varphi_s|\hat{j}_z|\varphi_s\rangle\bigr|, \tag{13}$$

$$l(l+1) = \langle\varphi_s|\hat{l}^2|\varphi_s\rangle,\qquad |l_z| = \bigl|\langle\varphi_s|\hat{l}_z|\varphi_s\rangle\bigr|, \tag{14}$$

where $\hat{P}_x$ denotes the parity operator.

III. RESULTS

Figure 2 shows the energy curves for the $J^\pi = 0^+$ states obtained by the β-constraint variational calculation. On these curves, six different structures appear, whose density distributions are illustrated in Fig. 3. There are also other local energy minima with different structures above these energy curves; however, they do not have a prominent cluster structure and hence are not shown in the figure. We focus on and discuss these six structures by referring to their density distributions and the properties of the valence-neutron orbits listed in Table I.
A. Energy surface and density distribution
The lowest energy configuration, shown by circles, has its minimum at E = −139.6 MeV and β = 0.2. As seen in Fig. 3(a), this configuration has no pronounced clustering, and it becomes the most dominant component of the ground band. The properties of the valence neutrons in Table I(a) show that the two valence neutrons approximately occupy the (d 5/2 )^2 orbit, that is, j ≈ 5/2 and l ≈ 2.
In the β = 0.6 ∼ 0.8 region, another configuration, shown by blue triangles, becomes the local energy minimum. Figure 3(b) shows that it has spatially separated α and 14 C clusters. In our calculation, the positive-parity 14 C + α cluster configuration is slightly different from that of the previous AMD work in Ref. [26]: the valence neutrons occupy the p-orbit perpendicular to the symmetry axis, while they occupied the orbit parallel to the symmetry axis in the previous work.
In the β = 0.8 ∼ 1.0 region, two different configurations, shown by red triangles and green squares, are almost degenerate. The red triangles show the well-developed α and 14 C clustering illustrated in Fig. 3(c). On the other hand, the configuration shown by the green squares has the symmetric configuration illustrated in Fig. 3(d), in which the two valence neutrons appear to be distributed over the whole nucleus. We consider that this configuration corresponds to the 12 C + α + 2n molecular states suggested by von Oertzen et al. [32], although our calculation does not show clear 12 C and α cores. The properties of the valence neutrons in Table I(d) show that the valence neutrons occupy the d-orbit, which behaves like a molecular orbit.
In the β = 1.0 ∼ 1.2 region, another molecular configuration, denoted by yellow diamonds, becomes the yrast configuration. Figure 3(e) displays well-developed α clustering but a different orbit of the valence neutrons, which lies parallel to the symmetry axis. From the single-particle properties in Table I(e), it is found that the valence neutrons occupy the σ-orbit, that is, |j z | ≈ 1/2 and |l z | ≈ 0. This configuration is quite similar to the σ-orbit predicted for 22 Ne [38].
In the extremely deformed region β > 1.2, an exotic clustering is realized, which is denoted by cross symbols. Figure 3(f) shows the linear alignment of 4α particles. In addition, the two valence neutrons occupy the π-orbit (|j z | ≈ 3/2 and |l z | ≈ 1), as seen from Table I(f). Thus, we show that the π-bond linear-chain configuration exists in the 4α system, similar to 14 C [14,19,22].

Table I. Valence-neutron properties for the configurations shown in Fig. 3: each column shows the single-particle energy ε in MeV, the amount of the positive-parity component p + , and the angular momenta (see Eqs. (12)-(14)). The corresponding properties of the negative-parity configurations are listed in Table II.
The energy minimum of the negative-parity 1 − state is located at β = 0.40 with the binding energy −134.57 MeV, shown by circles, and has the density distribution described in Fig. 5(a). From Table II, it is found that the valence neutrons occupy the (d 5/2 )^2 orbit, the same as in the ground state. In this configuration, the most weakly bound proton is excited into the d 5/2 orbit (i.e., the 1p1h configuration π(p 1/2 )^{-1}(d 5/2 )^1), so that negative parity is attained.
In the β = 0.8 ∼ 1.0 region, two different configurations, shown by blue triangles and green squares, appear, similarly to the positive-parity case. The blue triangles show the pronounced 14 C + α clustering illustrated in Fig. 5(b). This configuration is almost the same as that of the positive parity; therefore, it is the counterpart of the inversion doublet. On the other hand, the configuration shown by the green squares is the molecular configuration, but slightly different from that of the positive parity. The most weakly bound neutron, shown in the lower panel of Fig. 5(c), is the same as that of the positive-parity molecular state, namely, in the d-orbit. The other valence neutron, however, has different properties, p + = 0.01, |j z | = 0.54, and |l z | = 1.00 in Table II(c). These properties correspond to the π-orbit.
In the β = 1.0 ∼ 1.2 region, the steep curve shown by yellow diamonds becomes the lowest energy configuration. Figure 5(d) shows a configuration similar to the 12 C + α + 2n molecular states with the σ-orbit. Indeed, the most weakly bound neutron occupies the σ-orbit, because |j z | ≈ 1/2 and |l z | ≈ 0. However, the other valence neutron does not show a clear molecular orbit because of the parity mixing (p + = 0.51).
Similarly to the positive parity, the 4α linear-chain configuration appears at β > 1.2 with a rather high excitation energy. From Fig. 5(e), it seems that the two valence neutrons occupy the π-orbit. However, their properties show that parity mixing occurs even in this configuration. Note that these orbits are located around the left 3α, showing the 14 C + α correlation.

B. Energy spectra

Figure 6 shows the positive-parity spectrum up to the J π = 6 + states obtained by the GCM calculation. We classified the obtained states into six bands and other non-cluster states based on the configurations discussed in the previous section. This classification is based on their overlap with the basis wave functions defined by Eq. (7). Table III lists the member states of these bands and compares them with the observed data.
The member states of the ground band, shown by circles in Fig. 6, are dominantly composed of the basis wave function shown in Fig. 3(a). In fact, the ground state has the largest overlap with this basis, which amounts to 0.98. The calculated binding energy is −139.97 MeV, which agrees nicely with the observed value (−139.81 MeV). Due to the improvement of the wave functions and the effective interaction, the moment of inertia of the ground band is smaller than that of the previous AMD framework [26]; as a result, the excitation energies of the 2 + 1 and 4 + 1 states are also reasonably improved.
The 14 C + α cluster configuration generates a rotational band, denoted by blue triangles, at 5.44 MeV near the 14 C + α threshold. The bandhead state 0 + 2 has the largest overlap with the basis wave function shown in Fig. 3(b), which amounts to 0.94. The excitation energies of this band are closer to the observed ones than in the previous AMD framework, although they are still overestimated. The band shown by red triangles is composed of the basis wave function in Fig. 3(c). The bandhead state 0 + 4 has the largest overlap with this basis, which amounts to 0.71. To clarify the difference between the 0 + 2 , 0 + 3 , and 0 + 4 states, their reduced width amplitudes are shown in Fig. 7. In the 14 C + α channel (left panel), the 0 + 2 state has four nodes (n = 4), while the 0 + 4 state has five nodes (n = 5). Therefore, we conclude that the 0 + 4 state is the higher-nodal 14 C + α state, which corresponds to the 0 + 4 state in the OCM calculation [27] and the 0 + state observed in Ref. [33]. In addition, candidates for the 2 + and 4 + states, which have rather large α-decay widths, are also observed and listed in Table III.
The 12 C + α + 2n molecular configurations, denoted by green squares and yellow diamonds, each generate two rotational bands. The former generates the K π = 0 + band built on the 0 + 3 state at 11.34 MeV and the K π = 2 + band built on the 2 + 6 state at 13.78 MeV. The bandhead 0 + 3 state has the largest overlap with the wave function shown in Fig. 3(d), which amounts to 0.97. The σ-orbit configuration generates the K π = 0 + band built on the 0 + 6 state at 16.07 MeV and the K π = 2 + band built on the 2 + 11 state at 21.39 MeV. The bandhead 0 + 6 state has the largest overlap with the wave function shown in Fig. 3(e), which amounts to 0.87. We consider that the green-squares band corresponds to the band suggested by von Oertzen et al. [32], although our calculation overestimates the excitation energies. Note that the molecular states 12 C + α + 2n exist above the two-body atomic (or "ionic") states 14 C + α in the case of the 4α system, while the two-body atomic states 4 He + 8 He exist above the molecular states α + α + 4n in the case of the 2α system [42]. This inversion has its origin in the difference between the α − n and 12 C − n interactions and is very analogous to the electronegativity in a molecule: 14 C has a much larger two-neutron separation energy than 6 He and 8 He, so the 12 C cluster attracts the two valence neutrons more strongly than the α-cluster does. This effect suggests an extended threshold rule for neutron-rich nuclei to understand the nature of their clustering.
In the rather high-energy region, the 4α linear-chain configuration generates a single rotational band (cross symbols) built on the bandhead state 0 + 9 , which has the largest overlap with the basis shown in Fig. 3(f), amounting to 0.72. Although the linear chain of 3α in carbon isotopes has long been investigated, there are only a few works on 4α in oxygen isotopes [48,49]. We suggest its existence for the first time, and its observation would be considerably fascinating.
In the negative-parity spectrum in Fig. 8, we discuss the five bands that correspond to the configurations seen in the energy curves. Detailed properties and the comparison with observations are listed in Table IV. The 14 C + α configuration generates a rotational band built on the 1 − 4 state located at 11.41 MeV. The bandhead state has the largest overlap with the basis wave function shown in Fig. 5(b), which amounts to 0.67. The negative-parity band of 14 C + α is 6 MeV higher than the positive-parity band, and together they constitute the inversion doublet. The calculated bandhead of the negative-parity band is closer to the 1 − state in Ref. [33] than to those in Refs. [29-31]. As discussed later, the calculated α-decay width also supports that the 1 − state observed in Ref. [33] is a 14 C + α cluster state.
The molecular states denoted by green squares generate two rotational bands: the K π = 1 − band built on the 1 − 5 state located at 12.49 MeV and the K π = 2 − band built on the 2 − 4 state located at 10.43 MeV. The bandhead state 2 − 4 has the largest overlap with the basis wave function shown in Fig. 5(c), which amounts to 0.87. These member states agree reasonably with the excitation energies suggested by von Oertzen [32], although all spin-parities are experimentally tentative. The other 12 C + α + 2n states, denoted by yellow diamonds, form a K π = 1 − band built on the 1 − 10 state and a K π = 0 − band built on the 1 − 14 state. As the angular momentum increases, these bands are fragmented into several states due to the mixing of K quantum numbers. The member states have large overlaps with the basis wave function shown in Fig. 5(d), amounting to, for example, 0.56 in the case of the 1 − 10 state. Above E x = 30 MeV, the 4α linear-chain band appears, and the bandhead state 1 − 16 has the largest overlap with the basis shown in Fig. 5(e), which amounts to 0.89. In the case of 18 O, the linear-chain band is not fragmented and forms a single band, which is different from the negative-parity linear chain of 14 C in our previous work [19].
C. Decay widths
We calculate the α-decay widths and compare them with the experimental data. The calculated widths for each band are listed in Tables III and IV. The channel radius a is 5.2 fm, the same as that used in Refs. [33,34]. The present calculation shows that the 14 C + α band has large dimensionless reduced α-widths, which reasonably agrees with the observation for the 6 + 1 state. In addition, the higher-nodal 14 C + α band has large partial α-decay widths (e.g., Γ α = 184 for the 0 + 4 state and Γ α = 308 for the 4 + 3 state). Compared with the experiments, the 0 + state at E x = 9.9(1) MeV with a rather large α-decay width (Γ α = 3200(800)) is considered to be the higher-nodal 14 C + α cluster state, although the excitation energy and width do not fully agree. The underestimation of the α-decay widths is explained as follows. As shown in Fig. 7, the RWA of the 0 + 2 state has its surface peak around 4.5 fm, and hence the channel radius a = 5.2 fm is appropriate. On the other hand, that of the 0 + 4 state has its surface peak around 7.0 fm; in our calculation, therefore, the channel radius a = 5.2 fm is not valid for the 0 + 4 state. Using a = 7.5 fm, the α-decay width and dimensionless width for the 0 + 4 state are Γ α = 1717 and θ 2 α = 0.68, which is comparable with the observation. In the same manner as for the 0 + states, we consider that the reported higher-lying 2 + and 4 + resonances correspond to the higher-nodal 14 C + α band.
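The penetration factor entering these widths is straightforward to evaluate once Coulomb wave functions are available. The following sketch is our own illustration with made-up input values (not the numbers of this work), using mpmath's Coulomb functions to turn a reduced width at the channel radius into a partial width Γ = 2 P_l(a) γ²(a):

```python
import mpmath as mp

HBARC = 197.327   # MeV fm
E2 = 1.43996      # e^2 = alpha * hbar c, MeV fm
AMU = 931.494     # MeV

def partial_width(E_q, l, a, gamma2, z1=2, z2=6, a1=4.0, a2=14.0):
    """Gamma = 2 P_l(a) gamma^2(a) for a two-body channel; energies in
    MeV, lengths in fm.  Defaults describe the 14C + alpha channel."""
    mu = AMU * a1 * a2 / (a1 + a2)               # reduced mass, MeV/c^2
    k = mp.sqrt(2 * mu * E_q) / HBARC            # wave number, fm^-1
    eta = z1 * z2 * E2 * mu / (HBARC**2 * k)     # Sommerfeld parameter
    rho = k * a
    F = mp.coulombf(l, eta, rho)                 # regular Coulomb function
    G = mp.coulombg(l, eta, rho)                 # irregular Coulomb function
    return 2 * rho / (F**2 + G**2) * gamma2      # 2 * P_l(a) * gamma^2

# Made-up inputs: a 0+ resonance 2 MeV above threshold, gamma^2 = 0.1 MeV,
# evaluated at the channel radius a = 5.2 fm used in the text.
print(float(partial_width(E_q=2.0, l=0, a=5.2, gamma2=0.1)))
```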
In the negative parity, the reported resonances are close to the member states of the 14 C + α cluster band. In particular, the calculated 1 − 4 state at 11.41 MeV reasonably agrees with the 1 − state at 9.76 MeV observed in Ref. [33]. This implies that the bandhead of the negative-parity 14 C + α band is the 1 − state at 9.76 MeV in Ref. [33], and not those at 4.45 and 8.04 MeV in Refs. [29,31,32]. In order to establish the negative-parity band, we need to compare the B(E1) with the experiment [29]; this will be reported in a future article.
Finally, we mention the decay patterns of the molecular states in 18 O. Figure 9 shows the spectroscopic factors defined by Eq. (11) for the 0 + states. The 14 C + α cluster states 0 + 2 and 0 + 4 show large spectroscopic factors with respect to the α-decay (black bars). In contrast, the 12 C + α + 2n molecular state 0 + 3 shows the largest spectroscopic factor with respect to the 6 He-decay (white bar), although it is rather smaller than the S α of the 0 + 2 and 0 + 4 states. In addition, the 0 + 3 state has a negligibly small S α . Therefore, α-particle emission is dominant for the 14 C + α cluster states, whereas the 12 C + α + 2n molecular states prefer 6 He emission. This feature is consistent with the 6p4h configuration suggested by von Oertzen [32]. From the right panel of Fig. 7, the peak of the 6 He-decay RWA appears at 4.5 fm. Using this channel radius, the characteristic decay patterns can serve as a signature of the molecular states, if observed.
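For completeness, both the spectroscopic factor and the dimensionless reduced width follow directly from a tabulated RWA. Below is a minimal sketch (our own, with a toy surface-peaked Gaussian standing in for the RWA; the identity θ²(a) = a³y(a)²/3 follows from the definitions of γ²(a) and the Wigner limit given in Sec. II):

```python
import numpy as np

def spectroscopic_factor(r, y):
    """S = int |r y(r)|^2 dr for an RWA y(r) tabulated on the grid r (fm)."""
    f = (r * y) ** 2
    return float(np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(r)))  # trapezoid rule

def theta2(a, y_a):
    """theta^2(a) = gamma^2(a) / gamma_W^2 = a^3 y(a)^2 / 3."""
    return a**3 * y_a**2 / 3.0

# Toy RWA: surface peak near 4.5 fm, roughly mimicking the 0+2 state.
r = np.linspace(0.0, 15.0, 600)
y = 0.05 * np.exp(-((r - 4.5) / 1.5) ** 2)
print(spectroscopic_factor(r, y), theta2(5.2, np.interp(5.2, r, y)))
```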
IV. SUMMARY
We presented various types of clustering in 18 O based on the AMD calculation. It is found that five cluster configurations appear on the energy curves for the 0 + states: the 14 C + α, higher-nodal 14 C + α, 12 C + α + 2n, 12 C + α + 2n (σ-orbit), and 4α linear-chain states. The calculated excitation energies of the ground band and the 14 C + α cluster states are closer to the observations than those of the previous AMD calculation due to the improvement in the effective interaction and the wave functions.
We clarify the existence of the molecular states 12 C + α + 2n suggested by von Oertzen et al. The calculated states are reasonably close to the excited states reported by experiment, although further experimental and theoretical studies are needed. For future observations, we also focus on the decay patterns of the obtained cluster states. The 14 C + α cluster states 0 + 2 and 0 + 4 dominantly decay by α-emission. On the other hand, the α-decay is strongly suppressed for the 12 C + α + 2n molecular states; instead, the 6 He-decay becomes dominant, although its spectroscopic factor is smaller. This characteristic decay pattern can be a signature of the molecular states.
In contrast to the Be isotopes, the molecular states 12 C + α + 2n exist above the two-body ionic states 14 C + α in the case of 18 O. This inversion provides information on the core − n interaction and affects the establishment of the extended threshold rule for neutron-rich nuclei.
The higher-nodal 14 C + α cluster band is a candidate for the reported resonances with rather large α-decay widths in Refs. [33,34]. In addition, we support the 1 − state located at E x = 9.76 MeV in Ref. [33] as the bandhead of the negative-parity 14 C + α cluster band, which had been controversial. In order to establish the negative-parity band, more observables, such as the B(E1), need to be compared in future works. (Tables III and IV compare the calculated bands with Cunsolo et al. [28], Gai et al. [29], Curtis et al. [31], von Oertzen et al. [32], Avila et al. [33], Yang et al. [34], the previous AMD calculation [26], and the OCM calculation [27].)

Fig. 7. (color online) Reduced width amplitude as a function of the distance r for the 0 + 2 (blue), 0 + 3 (green), and 0 + 4 (red) states in the 14 C + α (left) and 12 C + 6 He (right) channels. It is assumed that the spins of 4,6 He and 12,14 C, and the relative angular momentum between them, are zero. | 2019-10-17T09:41:59.000Z | 2019-10-17T00:00:00.000 | {
"year": 2019,
"sha1": "adcb743dbbfdf4228ffb1ba552bb359bd687ecc6",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/1910.07789",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "de8f916fead7c44780aad88276a9c843e689c444",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
251271382 | pes2o/s2orc | v3-fos-license | Healthy lifestyle and life expectancy at age 30 in the Chinese population: an observational study
Background The improvement of life expectancy is one of the aims of the "Healthy China 2030" blueprint. We aimed to investigate the extent to which healthy lifestyles are associated with life expectancy in Chinese adults. Methods We used the prospective China Kadoorie Biobank (CKB) study (n=487,209) to examine the relative risk of mortality associated with individual and combined lifestyle factors (never smoking or quitting not for illness, no excessive alcohol use, being physically active, healthy eating habits, and healthy body shape). We estimated the national prevalence of lifestyle factors using data from the China Nutrition and Health Surveillance (2015) and derived mortality rates from the Global Burden of Diseases, Injuries, and Risk Factors Study (2015). All three data sources were combined to estimate the life expectancy of individuals at age 30 following different levels of lifestyle factors by using the life table method. The cause-specific decomposition of the life expectancy differences was analyzed using Arriaga's method. Findings There were 42,496 deaths documented over a median follow-up of 11.1 (interquartile range: 10.2-12.1) years in CKB. The adjusted hazard ratios (95% confidence intervals [CIs]) of participants adopting five versus 0-1 low-risk factors were 0.38 (0.34, 0.43) for all-cause mortality, 0.37 (0.30, 0.46) for cardiovascular disease (CVD) mortality, 0.47 (0.39, 0.56) for cancer mortality, and 0.30 (0.14, 0.64) for chronic respiratory disease (CRD) mortality. The life expectancy (95%CI) at age 30 for individuals with 0-1 low-risk factors was on average 41.7 (41.5, 42.0) years for men and 47.3 (46.6, 48.0) years for women. When individuals adopted all five low-risk factors, the life expectancy was 50.5 (48.5, 52.4) years for men and 55.4 (53.5, 57.4) years for women, with increases (95%CI) of 8.8 (6.8, 10.7) years (men) and 8.1 (6.5, 9.9) years (women), respectively. The estimated extended life expectancy for men and women was attributable to reduced death from CVD (2.4 years [27% of the total extended life expectancy] for men and 3.6 years [46%] for women), cancer (2.5 years [29%] and 0.9 years [11%]), and CRD (0.6 years [7%] and 1.3 years [16%]). Interpretation Our findings suggest that increasing the adoption of these five healthy lifestyle factors through public health interventions could be associated with substantial gains in life expectancy in the Chinese population.
Introduction
Traditional lifestyle-related risk factors, including smoking, excess drinking, physical inactivity, poor dietary habits, and obesity, have been associated with an increased risk of death, especially from chronic diseases. 1,2 The widespread prevalence of these risk factors has caused a great burden of disease (such as cardiovascular disease, cancer, and chronic respiratory diseases) worldwide, 3 and China is no exception. 4 Life expectancy, as an absolute quantitative measure, is more intuitive than indicators such as relative risk and absolute lifetime risk and has become a common metric for establishing public health priorities.
Previous studies that have assessed the relationship between lifestyle and life expectancy were mainly done in North American and European populations, and these studies suggested that healthier lifestyles were associated with an increase in life expectancy of between 7·4 years and 18·5 years. [5][6][7] Most of these studies were based on specific cohort populations; the results of such a study design only reflect the mortality of specific cohort populations over a follow-up period and caution should be maintained in generalising these results to national populations.
There are non-negligible differences between Chinese and European and American populations in economic and social development and determinants of health, such as genetics, lifestyle, and hazardous environmental exposures. However, only a few studies have evaluated the effect of individual lifestyle factors, such as smoking and alcohol intake, on the life expectancy of the Chinese population. [8][9][10] The effect of combined lifestyle behaviours on Chinese life expectancy remains unclear, and the evidence gaps need to be filled.
The blueprint of Healthy China 2030 set out the goal of increasing the average life expectancy of Chinese people at birth from 76·3 years in 2015 to 79 years in 2030. We aimed to evaluate the potential effects of individual and combined low-risk lifestyle factors on the life expectancy at age 30 years in the Chinese population.
Study design and participants
We combined three sources of data: (1) the China Kadoorie Biobank (CKB) study, for the association between lifestyle factors and mortality; (2) the China Nutrition and Health Surveillance (CNHS, 2015), for the national prevalence of lifestyle factors; and (3) the Global Burden of Disease (GBD) study (2015), for the mortality rates of the Chinese population. The CKB study is a nationwide population-based prospective cohort of more than 500 000 adults. Details of the study design have been previously described. 11 Briefly, 512 725 participants aged 30-79 years were recruited during 2004-08 from five urban and five rural areas geographically spread across China. The baseline survey and anthropometric measurements were undertaken by trained study staff. All participants signed informed consent forms. Ethical approval was obtained from the Ethics Review Committee of the Chinese Center for Disease Control and Prevention (CDC, Beijing, China) and the Oxford Tropical Research Ethics Committee, University of Oxford (Oxford, UK). In the present study, participants with coronary heart disease, stroke, or cancer at baseline were excluded, in addition to those with missing values for body-mass index (BMI). For the analysis of chronic respiratory diseases, participants with chronic obstructive pulmonary disease (COPD) or asthma at baseline were excluded.
The CNHS (2015-17) was the latest cross-sectional survey with nationally and provincially representative samples from 302 survey sites in 31 provincial-level administrative divisions in mainland China. In this round of surveillance, the survey on adult chronic diseases and nutritional status was done in 2015. Participants were sampled using a stratified multistage cluster sampling design, with details published previously. 12 In the present study, data from adults aged 30-84 years from the CNHS 2015 were used to estimate the sex-specific and age-specific (every 5 years) prevalence of lifestyle-related factors.
Research in context
Evidence before this study

We searched PubMed, EMBASE, and Google Scholar for articles published from the inception of each database to Oct 31, 2021, using a combination of terms: ("life expectancy" OR "life span" OR "life time" OR "life years" OR "longevity") AND ("lifestyle" OR "smoking" OR "tobacco use" OR "alcohol" OR "physical activity" OR "diet" OR "BMI" OR "overweight" OR "obesity"). No restrictions were applied to study type or language. Relevant studies were also found by checking the reference lists of identified articles. Available studies that assessed the relationship between lifestyle and life expectancy were mainly done in high-income countries and based on specific cohort populations, limiting the generalisability of the results to other countries, where the factors that influence health might differ. The potential impact of healthy lifestyles on life expectancy at a population level in China remains unclear.
Added value of this study
The estimated life expectancy at age 30 years for individuals with five low-risk lifestyle factors was on average 8·8 years longer in men and 8·1 years longer in women than those with 0-1 low-risk factors. About two thirds of the extended life expectancy associated with adopting all five low-risk factors could be explained by the reduced death from cardiovascular disease, cancer, and chronic respiratory disease. To the best of our knowledge, this is the first study to quantify the association between combined lifestyle factors and life expectancy in China. The use of a large prospective cohort study of more than 500 000 Chinese people and a nationally representative survey of risk factors improved the representativeness of the findings for the national population.
Implications of all the available evidence
Our findings suggest that fostering a healthy lifestyle through population-wide public health interventions could be associated with substantial gains in life expectancy in the Chinese population. The findings of the study could encourage the government to commit to promoting a healthy lifestyle, in order to achieve the goal of increasing the average life expectancy, as outlined in the blueprint of Healthy China 2030. Further investigations are also needed to explore the effect of other factors on life expectancy, such as environmental hazards.
The Ethics Review Committee of the China CDC approved the survey. All participants had completed written informed consent forms.
Procedures
Baseline lifestyle-related factors and covariate information in CKB were assessed by interviewer-administered laptop-based questionnaires and physical measurements (body weight, height, and hip and waist circumference). The data entry system had built-in functions to minimise missing items and logic errors. Details have been described in the appendix (p 2).
The data in CNHS were collected by face-to-face interviews with trained staff using well-designed questionnaires (appendix pp 3-7) and taking physical measurements. Questions about smoking status were the same as those in the CKB questionnaire, except that only the cigarettes were considered to calculate the daily smoking amount, whereas in the CKB study, hand-rolled cigarettes, pipes or water pipes, and cigars were also considered, and these data were converted to the equivalent numbers of cigarettes smoked per day. A food frequency questionnaire was used to collect the frequency and amount of various foods and alcoholic drinks consumed in the past 12 months. Physical activity was investigated with an adapted version of the International Physical Activity Questionnaire-long form, and the total amount of physical activity was calculated in a similar way to the CKB study (appendix p 8). Body weight, height, and waist circumference were measured by trained staff using well calibrated instruments.
The all-cause and cause-specific mortality rates, including from cardiovascular disease, cancer, and chronic respiratory disease (including COPD and asthma), of the Chinese population by sex and 5-year age groups (30-94 years) in 2015 were derived from the GBD study.
Five modifiable lifestyle factors that could define a low-risk lifestyle were included in this study based on previous studies and the Dietary Guidelines for Chinese Residents: smoking, alcohol intake, physical activity, dietary habits, and body shape (a reflection of the balance between energy intake and energy expenditure). 1,5,7,13 Not smoking or quitting smoking for reasons other than illness was defined as low risk. Former smokers who had stopped smoking due to illness were excluded from the low-risk group to avoid biasing the death risk upward. The low-risk group for alcohol intake included non-regular drinkers and daily light-to-moderate drinkers (<30 g of pure alcohol per day in men and <15 g in women). 1,5 Former drinkers were also excluded from the low-risk group to address the potential sick-quitter phenomenon (ie, cessation of alcohol consumption might result from disease onset and changes in health conditions). 14 However, such exclusion did not apply to the CNHS because its questionnaire did not ask about previous drinking habits. The low-risk group for physical activity included those who engaged in an age-specific (<50, 50-59, and ≥60 years) and sex-specific median or higher level of physical activity. 1 For dietary habits, we created a simple diet score by considering the following criteria: eating fresh vegetables daily, eating fresh fruits daily, eating red meat 1-6 days per week, eating legumes ≥4 days per week, and eating fish ≥1 day per week. For each criterion met, one point was scored; otherwise, 0. Thus, the diet score ranged from 0 to 5, with a score of 4 to 5 classified as the low-risk group. 1 Both general and central adiposity indicators were considered for body shape, with a BMI of 18·5-27·9 kg/m² and a waist circumference of <90 cm for men and <85 cm for women defined as low risk, 15 which emphasises the prevention of extremely high or low weight and abdominal obesity.
A simple low-risk lifestyle score was derived according to the number of low-risk lifestyle factors, ranging from 0 to 5, with higher scores indicating a healthier lifestyle. The vital status of each participant in CKB was identified through the National Disease Surveillance Points system, supplemented with the annual active follow-up. The underlying causes of death were coded using the 10th revision of the International Classification of Diseases.
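To make the scoring above concrete, the sketch below shows one way the diet score and the combined low-risk count could be computed for a single participant. It is our own illustration, not the study's analysis code: all field names are hypothetical, alcohol intake is assumed to be available in grams of pure alcohol per day, and the age- and sex-specific activity median is assumed to be supplied externally.

```python
def diet_score(p):
    """Diet score (0-5): one point per criterion met; 4-5 counts as low risk."""
    return sum([
        p["veg_days_per_week"] >= 7,            # fresh vegetables daily
        p["fruit_days_per_week"] >= 7,          # fresh fruits daily
        1 <= p["red_meat_days_per_week"] <= 6,  # red meat 1-6 days per week
        p["legume_days_per_week"] >= 4,
        p["fish_days_per_week"] >= 1,
    ])

def low_risk_count(p, sex):
    """Number of low-risk lifestyle factors (0-5) for one participant."""
    alcohol_limit = 30.0 if sex == "m" else 15.0   # g pure alcohol / day
    waist_limit = 90.0 if sex == "m" else 85.0     # cm
    return sum([
        p["never_smoker_or_quit_not_for_illness"],
        (not p["former_drinker"]) and p["alcohol_g_per_day"] < alcohol_limit,
        p["total_activity_met_h"] >= p["age_sex_median_met_h"],
        diet_score(p) >= 4,
        18.5 <= p["bmi"] < 28.0 and p["waist_cm"] < waist_limit,
    ])
```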
Statistical analysis
In the analysis of CKB, person-years were counted from baseline until death, loss to follow-up, or Dec 31, 2017, whichever occurred first. Cox proportional hazards regression with an age timescale was used to calculate the hazard ratio (HR) and 95% CI for the relative risk of mortality outcomes with each lifestyle factor and the number of combined lifestyle factors. The Cox model was stratified jointly by the ten study areas and age at baseline in 5-year intervals. For cause-specific mortality, we applied a regression model based on the proportional sub-distribution hazard proposed by Fine and Gray. 16 Assuming that the observed association is causal, we calculated the population-attributable risk percent (PAR%), which estimates the percentage of mortality that would have been prevented if all participants had been in the low-risk group. In these analyses, we coded low-risk lifestyle factors as a binary variable and compared participants with all five low-risk factors with all others, following a method advocated by Wacholder and colleagues. 17 The statistical methods used for estimating years of life gained or lost associated with lifestyle factors are detailed in the appendix (pp 9-10). Due to the sex differences in life expectancy, we did all analyses for men and women separately. We used period life tables to calculate the life expectancy, applying 1-year age bands for age 30 up to 94 years, with the final age group encompassing those aged 95 years and older. The cumulative survival from age 30 years onwards was estimated for participants following different levels of low-risk lifestyle factors by applying sex-specific HRs for all-cause and cause-specific mortality from the CKB to the detailed mortality component from the GBD, combined with the prevalence of low-risk lifestyle factors from the CNHS (2015). (For more on CKB questionnaires see https://www.ckbiobank.org/; for more on the Global Burden of Disease 2015 study see http://ghdx.healthdata.org/gbdresults-tool.)
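As an illustration of the life-table step, the sketch below builds a period life table from age-specific mortality rates scaled by a group's hazard ratio and returns the life expectancy at age 30. It is our own simplified stand-in for the analysis described above: it uses 5-year bands instead of the study's 1-year bands, applies a single all-cause HR, and omits the prevalence-based combination of groups that keeps the population average consistent with the GBD rates.

```python
import numpy as np

def life_expectancy_at_30(mx, m_open, hr, width=5.0):
    """Period life expectancy at age 30 for one lifestyle group.

    mx     : mortality rates for the closed 5-year bands 30-34, ..., 90-94
             (deaths per person-year); m_open : rate in the open 95+ band.
    hr     : all-cause hazard ratio applied to every band.
    """
    m = np.asarray(mx, dtype=float) * hr
    q = width * m / (1.0 + 0.5 * width * m)           # band death probability
    l = np.concatenate([[1.0], np.cumprod(1.0 - q)])  # survivors at band starts
    L = width * (l[:-1] + l[1:]) / 2.0                # person-years per band
    T = L.sum() + l[-1] / (m_open * hr)               # open band: e = 1/m
    return T / l[0]

# Toy rates rising roughly exponentially with age.
ages = np.arange(30, 95, 5)
mx = 0.001 * np.exp(0.08 * (ages - 30))
print(life_expectancy_at_30(mx, m_open=0.35, hr=1.0))   # reference group
print(life_expectancy_at_30(mx, m_open=0.35, hr=0.38))  # lower-mortality group
```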
By applying Arriaga's decomposition method, 18 we estimated the cause-specific contributions to the life expectancy difference between participants adopting all five and 0-1 low-risk lifestyle factors to determine which cause-specific mortality differences were major contributors to the total change in life expectancy (appendix p 11).
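Arriaga's decomposition itself reduces to a few lines once the two groups' life tables are available. The sketch below is again our own simplified illustration: it computes each age band's contribution to the life-expectancy gap (direct plus indirect/interaction terms, with the open-ended band handled separately), and then apportions each band's contribution to causes by the cause-specific share of the all-cause mortality-rate difference, which is the usual way cause-specific contributions are obtained.

```python
import numpy as np

def arriaga(l1, L1, T1, l2, L2, T2):
    """Age-band contributions to e2(30) - e1(30) (Arriaga, 1984).

    Per group: l = survivors at band start (l[0] at age 30), L = person-
    years lived in the band, T = person-years above the band start.
    Bands 0..n-2 are closed; band n-1 is open-ended.
    """
    n = len(L1)
    delta = np.zeros(n)
    for x in range(n - 1):
        direct = l1[x] / l1[0] * (L2[x] / l2[x] - L1[x] / l1[x])
        other = T2[x + 1] / l1[0] * (l1[x] / l2[x] - l1[x + 1] / l2[x + 1])
        delta[x] = direct + other          # direct + indirect/interaction
    delta[-1] = l1[-1] / l1[0] * (T2[-1] / l2[-1] - T1[-1] / l1[-1])
    return delta                           # sums to the total gap

def by_cause(delta, m1, m2):
    """Split band contributions across causes in proportion to each cause's
    share of the all-cause rate difference; m1, m2: (bands, causes)."""
    diff = m2 - m1
    return delta[:, None] * diff / diff.sum(axis=1, keepdims=True)
```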
In the sensitivity analysis, we excluded CKB participants who died within the first 2 years of follow-up to minimise potential reverse causality. We also applied sex-specific and age-at-risk-specific HRs for all-cause mortality to the life expectancy analysis to account for a potential non-linear increase of the death hazard at older ages, in which participant age at risk was determined by splitting the follow-up time every 10 years. 19 The age-at-risk groups were 30-49, 50-59, 60-69, 70-79, and 80 years and older for men, and 30-69 and 70 years and older for women, considering that few deaths occurred before the age of 70 years or after 80 years among women adopting 0-1 low-risk lifestyle factors (the reference group) in the CKB study. Considering the lag time between exposure and mortality outcome, we substituted the mortality data with the most recent data from 2019 (4-year lag). 6,20 Subgroup analyses were done by the factors of residence (urban and rural), education level (no education and primary school, and middle school and higher), smoking status (men: never, former, and current; women: never and ever), body shape (underweight, neither general nor abdominal obesity, and either or both), and baseline disease status (neither hypertension nor diabetes, and either or both).
Considering the gradients in death risk according to different levels of each lifestyle factor, we further created an expanded low-risk lifestyle score. We graded the categories of each lifestyle factor from 1 (least healthy) to 5 (most healthy) according to the CKB findings of the association between lifestyle factors and all-cause mortality. The points across all five lifestyle factors were totalled, with the overall score ranging from 5 to 25.
All statistical analyses, unless otherwise stated, were done with Stata (version 15.0). The competing-risk analysis, calculation of PAR%, and computation of prevalence of lifestyle factors were done with SAS (version 9.4). The confidence interval for life expectancy was estimated using @RISK 8.1 (Palisade Corp, Ithaca, NY). 21 Graphs were plotted using R (version 4.0.3).
Role of the funding source
The funders had no role in the study design, data collection, data analysis, data interpretation, or writing of the report.
Results
In the present study, 15 472 CKB participants with coronary heart disease, 8884 with stroke, and 2578 with cancer at baseline were excluded, in addition to two people with missing values for BMI. After these exclusions, a total of 487 209 participants were included in the primary analysis. Reasons for exclusion were not mutually exclusive, with 1406 participants meeting multiple exclusion criteria. For the analysis of chronic respiratory diseases, 37 057 participants with COPD and 2528 with asthma at baseline were excluded, and 451 233 participants were included in the final analysis. Baseline COPD was ascertained on the basis of a self-reported clinical diagnosis of chronic bronchitis or emphysema and an on-site pulmonary function test. 22 Other medical histories relied on self-reported clinical diagnoses. In the present study, we used up to 171 127 adults aged 30-84 years from the CNHS 2015 to estimate the sex-specific and age-specific (every 5 years) prevalence of lifestyle-related factors (appendix p 13).
During a median follow-up of 11·1 years (IQR 10·2-12·1; 5·3 million person-years), the CKB study documented 42 496 deaths, including 16 257 deaths from cardiovascular disease, 14 069 deaths from cancer, and 3332 deaths from chronic respiratory disease. After excluding participants with prevalent COPD and asthma at baseline, 1449 deaths from chronic respiratory disease occurred among the remaining 451 233 participants. In the multivariable-adjusted model, smoking was associated with an increased risk of all-cause mortality, and being physically active and following healthy dietary habits were associated with a reduced risk of all-cause mortality (tables 1 and 2). Results by sex are presented in the appendix (pp 16-19). When low-risk lifestyle factors were considered jointly, compared with participants with 0-1 low-risk lifestyle factors, the adjusted HR of participants who had five low-risk lifestyle factors was 0·38 (95% CI 0·34-0·43) for all-cause mortality, 0·37 (0·30-0·46) for cardiovascular disease mortality, 0·47 (0·39-0·56) for cancer mortality, and 0·30 (0·14-0·64) for chronic respiratory disease mortality. The PAR% of not adopting all five low-risk lifestyle factors was 37·8% (30·7 to 44·5) for all-cause mortality, 42·8% (30·5 to 53·6) for cardiovascular disease mortality, 29·9% (17·8 to 41·2) for cancer mortality, and 36·3% (-16·3 to 72·9) for chronic respiratory disease mortality (figure 1). The exclusion of deaths that occurred during the first 2 years of the follow-up did not substantially alter the results (appendix pp 20-23). Fine-Gray regression models yielded slightly attenuated risk estimates for cause-specific mortality (appendix pp 24-25). All five low-risk lifestyle factors, including non-smoking, moderate alcohol intake, physical activity, healthy dietary habits, and absence of underweight or obesity, were associated with longer life expectancy (figure 2).

(Table 1 notes: data from 487 209 participants. Multivariable models were adjusted for sex, education, marital status, hip circumference, family history of heart attack and stroke [all-cause and cardiovascular analyses], and family history of cancer [all-cause and cancer analyses]; all five lifestyle factors were included simultaneously in the same model. Participants who stopped smoking for reasons other than illness were classified as former smokers; those who stopped due to illness as current smokers. The less-than-daily drinking group included never-regular drinkers and current weekly drinkers; participants who used to drink at least weekly but drank less than weekly at baseline were classified as former drinkers. Physical activity level was categorised by age-specific [<50 years, 50-59 years, and ≥60 years] and sex-specific quintiles of total physical activity. The diet score gave one point each for eating fresh vegetables daily, fresh fruits daily, red meat 1-6 days per week, legumes ≥4 days per week, and fish ≥1 day per week. HR=hazard ratio. BMI=body-mass index.)
In subgroup analysis stratified by residence, education level, smoking status, obesity status, or disease status at baseline, we observed a consistent relationship between the increasing number of low-risk lifestyle factors and the gained life expectancy at age 30 years across subpopulations (appendix pp 31-35). In the analysis using an expanded low-risk score, the average life expectancy at age 30 years for individuals with a score of at least 23 was 13·5 years longer for men and 12·1 years longer for women than for those with a score of 8 or less (appendix pp 36-37). (Table 2 notes: as for table 1, but with data from 451 233 participants after the exclusion of baseline COPD and asthma.)
Discussion
Our results suggest that adherence to each of the five low-risk lifestyle factors, namely never smoking or quitting for reasons other than illness, no excessive alcohol use, being physically active, healthy eating habits, and a BMI between 18·5 and 27·9 kg/m² without abdominal obesity, was associated with longer life expectancy for Chinese adults. The estimated life expectancy at age 30 years for individuals with all five low-risk factors was on average 8·8 years longer in men and 8·1 years longer in women than for those with 0-1 low-risk factors. The estimated improved life expectancy for men and women was mostly attributable to reduced death from cardiovascular disease, cancer, and chronic respiratory disease.
To the best of our knowledge, this is the first study to quantify the association between combined lifestyle factors and life expectancy in China. In 2015, the average life expectancy at age 30 years for Chinese adults was 45·5 years for men and 51·3 years for women. 23 In the present study, the estimated life expectancy at age 30 for individuals with 0-1 low-risk lifestyle factors was 41·7 years for men and 47·3 years for women. However, adopting all five low-risk lifestyle factors was associated with an improved life expectancy at age 30, reaching 50·5 years for men and 55·4 years for women. The Singapore Chinese Health Study, which had a median of 20·6 years of follow-up data, showed that the differences in life expectancy when comparing individuals with 4-5 low-risk lifestyle factors with those with zero low-risk lifestyle factors at age 50 years were 6·6 years for men and 8·1 years for women. 24 In the present study, the corresponding estimates of gained life-years at 50 years were 7·7 years for men and 7·6 years for women, similar to the estimates from the aforementioned study, but with a smaller sex difference.

(Figure 1 notes: HRs [95% CIs] for mortality by number of low-risk lifestyle factors, and PAR% of not adopting all five. Multivariable models were adjusted for sex, education, marital status, hip circumference, and family histories of heart attack, stroke, and cancer. Low-risk lifestyle factors were defined as: never smoking or having stopped for reasons other than illness; less than daily drinking or drinking <30 g [men] and <15 g [women] of pure alcohol per day [former drinkers excluded]; engaging in an age-specific [<50 years, 50-59 years, and ≥60 years] and sex-specific median or higher level of physical activity; scoring 4-5 for all food groups; having a BMI between 18·5 and 27·9 kg/m² and a waist circumference <90 cm [men] and <85 cm [women]. HR=hazard ratio. PAR%=population attributable risk percent.)
Our findings were consistent with previous studies in high-income countries; life expectancy increased with increasing numbers of low-risk lifestyle factors. Adherence to a healthy lifestyle has been associated with a 17·9-year increase in life expectancy at age 20 for Canadians, 6 12·2 years (for men) and 14 years (for women) at age 50 years for Americans, 5 and 18·5 years (for men) and 15·7 years (for women) at age 40 years for the EPIC-Heidelberg cohort population from Germany. 7 By contrast, the estimates of gained life-years in our study were lower than those of the three aforementioned studies. This inconsistency might be explained by the differences between populations in the definitions and components of a healthy lifestyle and their prevalence. 5 Additionally, in developing countries, potential environmental hazards in the home, work, and broader outdoor environment, such as ambient and household air pollution and chemical contamination of food and water, could increase the burden of diseases. 25 Therefore, the relative impact of a healthy lifestyle alone on life expectancy might be slightly diminished in developing countries.
In the cause-specific decomposition analysis of the life expectancy differences, we observed that, compared with individuals with 0-1 low-risk lifestyle factors, about two thirds of the increased life expectancy from adopting all five low-risk factors could be explained by the reduced death from cardiovascular disease, cancer, and chronic respiratory disease, all representing the leading causes of death in the Chinese population. A larger proportion (72%) of the gained life expectancy between individuals with all five low-risk lifestyle factors and those with 0-1 low-risk lifestyle factors in women was attributable to the reduced death from cardiovascular disease, cancer, and chronic respiratory disease than in men (64%). Additionally, the major contributors to the life expectancy difference were cardiovascular disease and other causes among women, and cancer and other causes among men. This difference might be related to the sex differences in the relative risks of lifestyle risk factors for various outcomes, disease burden patterns, and prevalence of lifestyle risk factors.
The lifestyle-related factors included in this study and the definitions of their low-risk groups were generally consistent with previous studies, except for the physical activity and obesity indicators. Many studies in high-income countries specifically focused on leisure-time physical activity. However, most of the physical activity in the current population was occupational and household. 26 We defined the low-risk group according to total physical activity, and being physically active was associated with an increase in life expectancy at age 30 of more than 4 years. We suggest that this alternative definition of physical activity is valid in the Chinese population. Regarding adiposity measures, in contrast to previous studies that only included BMI, 5,7,24 this study used both BMI and waist circumference. A recent meta-analysis of 72 prospective studies suggests that measures of central adiposity could be used with BMI as an auxiliary indicator to determine the risk of premature death. 27 This study has several strengths. First, the nature of the CKB study in terms of its large sample size, long-term follow-up, and high number of documented deaths enables us to obtain more precise sex-specific effect estimates for all-cause and cause-specific mortality than do smaller studies. The inclusion of a geographically spread population living in urban and rural areas, with different socio-demographic characteristics, and the loss-to-follow-up rate of less than 1%, make the effect estimates broadly applicable. Second, we used a nationally representative survey to estimate the prevalence of lifestyle factors, improving the representativeness of the findings for the Chinese population. Third, existing studies mainly investigated the impact of lifestyle factors on life expectancy at middle and old age, such as life expectancy at 50 years. 5,24 The present study expands on previous findings and supports the benefits of starting a healthy lifestyle early at a young age.
Several limitations also merit discussion when interpreting the results. First, the lifestyle behaviours were self-reported in CKB and CNHS, most likely biasing the estimated associations towards the null, and might have provided overestimates of the prevalence of low-risk lifestyle factors. Second, we only used information on lifestyle factors at one timepoint at baseline in the CKB, without considering their potential changes during the follow-up. However, one of our previous studies using resurvey data from a subset of the CKB population showed that the lifestyle of most participants remained relatively stable over long periods. 28 Third, we dichotomised each lifestyle factor and counted the number of low-risk lifestyle factors, ignoring the difference in the magnitude of association between various lifestyle factors and death. However, two previous studies compared analyses using weighted lifestyle scores with non-weighted scores, and no significant differences were observed. 24,29 Fourth, the definitions of low-risk lifestyle factors might not be entirely consistent between the CKB and CNHS due to subtle differences in the questionnaires. Nevertheless, slight changes in the prevalence of lifestyle factors would not substantially affect the results of our study under different simulation scenarios. Other limitations include the observational nature of the study precluding causal inference and the CKB cohort not being fully representative of the Chinese population.

(Figure 3 notes: life expectancy gained from adopting five versus 0-1 low-risk lifestyle factors, attributable to reduced death from cardiovascular disease, cancer, chronic respiratory disease, and other causes. Low-risk lifestyle factors were defined as: never smoking or having stopped for reasons other than illness; less than daily drinking or drinking <30 g [men] and <15 g [women] of pure alcohol per day [former drinkers excluded]; engaging in an age-specific [<50 years, 50-59 years, and ≥60 years] and sex-specific median or higher level of physical activity; scoring 4-5 for all food groups; having a BMI between 18·5 and 27·9 kg/m² and a waist circumference <90 cm [men] and <85 cm [women]. BMI=body-mass index.)
This study of the Chinese population shows that adopting a low-risk lifestyle, compared with not adopting one, was associated with a higher life expectancy at age 30 years of 8·8 years in men and 8·1 years in women, mostly accounted for by reduced deaths from cardiovascular disease, cancer, and chronic respiratory disease. Assuming that the observed associations are causal, there is still much room for improvement in the life expectancy of the Chinese population through population-wide healthy lifestyle interventions. For example, a recent study from Hong Kong has shown the possibility of realising this vision, emphasising the crucial role of tobacco control in improving life expectancy. 30 Public health interventions that improve the adoption of healthy lifestyles should be one of the priorities for implementing the Healthy China 2030 agenda.
Contributors
QS and DY are joint first authors. JL and LL conceived and designed the study, and contributed to the interpretation of the results and critical revision of the manuscript for valuable intellectual content. LL, ZC, and JC, as members of the CKB steering committee, designed and supervised the conduct of the study, obtained funding, and together with CY, YG, PP, LY, YC, HD, XY, SS, and YW, acquired the CKB data. DY, LZ, and WZ designed and supervised the conduct of the CNHS. QS, DY, and JF accessed, verified, and analysed the data. QS drafted the manuscript. The corresponding authors attest that all listed authors meet authorship criteria and that no others meeting the criteria have been omitted. JL, LL, and LZ are the guarantors. All authors had access to the data, have read and approved the final manuscript, and accept responsibility for the decision to submit for publication.
Declaration of interests
We declare no competing interests.
Data sharing
Details of how to access China Kadoorie Biobank data and details of the data release schedule are available from: https://www.ckbiobank.org/ site/Data+Access. The CNHS data will be available from the corresponding authors on request. | 2022-08-03T15:15:03.545Z | 2022-08-01T00:00:00.000 | {
"year": 2022,
"sha1": "b64de27513d2abbf79cf6d4dc8d6e4aeb8b5fe21",
"oa_license": "CCBY",
"oa_url": "http://www.thelancet.com/article/S2468266722001104/pdf",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "f2191a39d85f7c03ff571096171f72d91e934bdb",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
125881681 | pes2o/s2orc | v3-fos-license | Intensity-based readout of resonant-waveguide grating biosensors: systems and nanostructures
Resonant waveguide gratings (RWG) – also called photonic crystal slabs (PCS) – have been established as reliable optical transducers for label-free biochemical assays as well as for cell-based assays. Current readout systems are based on mechanical scanning and spectrometric measurements, with system sizes suitable for laboratory equipment. Here, we review recent progress in compact intensity-based readout systems for point-of-care (POC) applications. We briefly introduce PCSs as sensitive optical transducers and introduce different approaches for intensity-based readout systems. Photometric measurements have been realized with a simple combination of a light source and a photodetector. Recently, a 96-channel, intensity-based readout system for both biochemical interaction analyses and cellular assays was presented, employing the intensity change of a near cut-off mode. As an alternative for multiparametric detection, a camera system for imaging detection has been implemented. A portable, camera-based system of size 13 cm x 4.9 cm x 3.5 cm with six detection areas on an RWG surface area of 11 mm x 7 mm has been demonstrated for the parallel detection of six protein binding kinetics. The signal-to-noise ratio of this system corresponds to a limit of detection of 168 pM (24 ng/ml). To further improve the signal-to-noise ratio, advanced nanostructure designs are investigated for RWGs. Here, results on multiperiodic and deterministic aperiodic nanostructures are presented. These advanced nanostructures allow for the design of the number and wavelengths of the RWG resonances. In the context of intensity-based readout systems they are particularly interesting for the realization of multi-LED systems. These recent trends suggest that compact point-of-care systems employing disposable test chips with RWG functional areas may reach the market in the near future.
Introduction
In the last years, the detection of proteins as one type of biomarker has become more and more important in the diagnostics of slowly progressing diseases as well as for acute cases [1,2]. The detection of multiple proteins is often needed for an unambiguous diagnosis. For the detection of multiple protein levels in blood, typically a blood sample is analyzed in a central laboratory using, e.g., ELISA (enzyme-linked immunosorbent assay) tests. Currently, there is a drive towards decentralized point-of-care (POC) analysis systems that allow for the detection of the relevant protein levels in small sample volumes to ensure regular and fast diagnosis. Several systems for POC biomarker detection are already on the market, including, e.g., pregnancy tests, glucose tests, the Roche TROP T® sensitive test (Roche Deutschland Holding GmbH) for troponin detection, or the iron self-test STADA Diagnostik Eisen (STADApharm GmbH). Multiple-protein measurements from a single sample still pose a challenge, and a variety of systems are currently under investigation. Biosensors may be classified by the technology of their three constitutive parts: the selective and specific biological recognition component (receptor), the signal transducer, and the data evaluation. The biochemical receptor is directly connected to the transducer and enables the specific and unambiguous binding of the protein target molecules. The transducer transforms this binding into a detectable signal. This transformation can be performed, e.g., mechanically [3], electrochemically [4], optically [5], or with surface acoustic wave sensors [6]. Optical biosensors offer the advantage of a physical separation of the transducer with the biological recognition component, which is in contact with the sample, from the optical detection hardware with the electronics and the data evaluation. A non-contact, free-space optical link allows for the combination of one-way test chips with a separate detection system. Figure 1 (a, b) depicts a scheme using out-of-plane illumination of a one-way chip for the parallel detection of multiple proteins.

Figure 1. (a) Point-of-care (POC) sensor concept with one-way microfluidic detection chip and camera-based readout system for multiple protein detection [reproduced from [30] with permission (will be obtained after acceptance of manuscript)]. (b) Microfluidic chip prototype with filter unit and detection field. (c, d) 384-well biosensor microplate (Epic® system, Corning Inc.) and schematic of nanostructured waveguide with cell showing mass redistribution upon stimulation [reproduced from [13] with permission (will be obtained after acceptance of manuscript)].
Today, most optical approaches use label-based detection schemes such as fluorescence assays [7]. For point-of-care systems, label-free detection is of particular interest, as additional reagents and preparation steps are not required [8]. Label-free optical transducers studied in the past range from waveguide interferometry [9] and surface plasmon resonances [10] to microring resonators [11] and photonic crystal sensors [12]. Nanostructured surfaces such as resonant waveguide gratings allow for coupling of the transducer with out-of-plane illumination without additional coupling structures. Figure 1c shows a photograph of a commercially available 384-well biosensor microplate with a nanostructured well bottom (Epic® system, Corning Inc.). Currently, a spectrometer-based readout scheme with broadband illumination is utilized in commercial systems [13]. Imaging angle-based readout has been demonstrated using a laser as the light source [14]. These label-free systems may be employed for biochemical assays as well as for cell-based assays. In cell-based assays, the mass change at the surface of the nanostructured waveguide is detected, which may be linked to a change in the cell number or size or a dynamic mass redistribution (DMR) in cells upon a stimulus (see Figure 1d).
An overview of emerging photonic crystal biosensors is found in references [15] and [16]. In this review we discuss the intensity-based readout of resonant waveguide grating sensors, also called photonic crystal sensors. For an intensity-based readout, no spectrometer or angle-scanning mechanics are needed. Therefore, this detection scheme offers high potential for the miniaturization of the complete optical system, not just the detection volume. On the other hand, an intensity-based system is more susceptible to background effects. For cell-based assays, the question of optical scattering losses due to cells is of particular importance. In biochemical assays, the separation of signal and background drift needs to be considered. Section 2 explains the principles of guided-mode resonances for optical biosensing. In section 3, intensity-based system designs are presented and the state-of-the-art performance is discussed. Subsequently, in section 4, new developments on multiperiodic and deterministic aperiodic nanostructures for resonant waveguide grating sensors are introduced. A summary and an outlook are given in section 5.
Photonic Crystal Slabs (PCS) as Sensitive Optical Transducers
Resonant waveguide gratings (RWG) with one- or two-dimensionally periodic nanostructures as depicted schematically in Figure 2a are also called photonic crystal slabs (PCS). Here, double-grating nanostructures are considered with gratings on both sides of the waveguide. These model samples are obtained by a nanoimprint-lithography process on the substrate and subsequent waveguide layer deposition by sputtering. The sub-wavelength nanostructure serves as a grating coupler, coupling out-of-plane incident light to the waveguide layer and light from quasi-guided modes to radiation modes. Due to the leakage of the guided light to radiation, the waveguide modes are called quasi-guided modes. For the waveguide layer to be able to guide light, its refractive index (n2) must be higher than that of the substrate (n3) and that of the surrounding medium (n1), also referred to as the cladding or analyte region. Following this condition, the waveguide is often called the high-index layer. At certain wavelengths, the light reflected from the nanostructured waveguide layer interferes constructively in reflection, and reflection peaks are observed at these guided-mode resonances (GMR). Simultaneously, dips appear in the transmission spectrum. Due to interference effects of different propagation paths, the guided-mode resonances have a Fano-type line shape [17]. Allowing for transverse-electric modes (TE case, electric field component in the plane of the waveguide) and transverse-magnetic modes (TM case, magnetic field vector in the plane of the waveguide), PCSs have distinct resonances for the two polarizations, which may be addressed by the polarization of the incident light. We use a transmission confocal microscope setup to characterize samples. To excite either transverse-electric (TE) or transverse-magnetic (TM) guided-mode resonances, a polarization filter is used, and the polarization is changed by turning either the sample or the filter by 90°. The transmitted light is collected and coupled to a spectrograph (Shamrock 500i, Andor) with a CCD camera (Andor). The recorded transmission spectra are normalized to the recorded spectrum of the broadband halogen illumination source. Two typical transmission spectra of a PCS with a period of 370 nm and a titanium dioxide high-index layer are depicted in Figure 2b, showing the TE0 guided-mode resonance and the TM0 guided-mode resonance. Distinct minima are visible at 540 nm for the TM case and 585 nm for the TE case. Following Bragg theory, the resonance wavelength is a function of the grating period Λ, the effective refractive index of the guided mode, and the angle of incidence Θ [18]:

$\lambda_{\mathrm{res}} = \Lambda \, (n_{\mathrm{eff}} \pm \sin\Theta)$ (1)

The grating allows for light coupling into forward- and backward-propagating modes, corresponding to the ±sin(Θ) term. At normal incidence, the resonance position becomes a mere function of the grating period and the effective refractive index of the quasi-guided mode. Here, the forward- and backward-propagating modes interfere destructively and open up the Λ/2 band gap known from one-dimensional photonic crystals [19]. Note that the resonance splitting observed in Figure 2b is caused by the irradiation cone of ±2° in our setup. For perfectly normal incidence only a single dip is observed, corresponding to one of the band edges, with the other band edge being dark. For higher angles, two dips are observed following equation (1). The illumination cone of our microscope setup is limited by a pinhole in front of the halogen illumination lamp.
Thus, the spectrum is averaged over the irradiation cone.
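As a quick illustration of equation (1), the following Python snippet evaluates the two first-order resonance branches over the setup's ±2° irradiation cone; the effective index is a hypothetical value inferred from the measured 585-nm TE dip, not a value reported in the text.

```python
import numpy as np

def gmr_wavelengths(period_nm, n_eff, theta_deg):
    """First-order guided-mode resonance wavelengths, lambda = Lambda * (n_eff +/- sin(theta))."""
    s = np.sin(np.radians(theta_deg))
    return period_nm * (n_eff + s), period_nm * (n_eff - s)

# Example: 370-nm period; n_eff chosen so the TE resonance sits near 585 nm at normal incidence.
n_eff_te = 585.0 / 370.0          # ~1.58, inferred from the measured dip position (assumption)
for theta in (0.0, 1.0, 2.0):     # the setup's +/- 2 deg irradiation cone
    lam_fwd, lam_bwd = gmr_wavelengths(370.0, n_eff_te, theta)
    print(f"theta = {theta:3.1f} deg -> dips at {lam_bwd:.1f} nm and {lam_fwd:.1f} nm")
```

At normal incidence both branches coincide; already at 2° the two dips separate by about 26 nm, which is why the finite illumination cone produces the observed resonance splitting.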
As depicted in Figure 2a, the quasi-guided mode has evanescent parts in the cladding and substrate regions, and thus the effective refractive index is a function of the geometric and material properties of the structure and the surrounding media. Changes in the refractive index of the analyte region will lead to a change in the resonance wavelength Δλres and a frequency change Δωres, respectively. Δλres and Δωres depend on the overlap of the electric field with the refractive index change as described by J. Yang, H. Giessen, and P. Lalanne [20]:

$\Delta\tilde{\omega}_{\mathrm{res}} \approx -\frac{\tilde{\omega}_{\mathrm{res}}}{2} \int_{V_p} \Delta\varepsilon \; \tilde{\mathbf{E}}' \cdot \tilde{\mathbf{E}} \, \mathrm{d}V$ (2)

In equation (2), Vp is the volume experiencing a refractive index perturbation, Δε is the change in dielectric constant due to the perturbation, $\tilde{\mathbf{E}}$ is the unperturbed electric field vector of the quasi-normalized mode, and $\tilde{\mathbf{E}}'$ is the approximate perturbed normalized electric field vector. For a high sensitivity, a large electric field overlap with the volume of changed refractive index is necessary. The sensitivity S is defined as the quotient of the resonance shift and the change of refractive index in the analyte region:

$S = \Delta\lambda_{\mathrm{res}} / \Delta n$ (3)

The energy stored in a resonator is linked to the quality factor of the resonance, which is defined by [21]:

$Q = \omega_{\mathrm{res}} \, \frac{W_{\mathrm{stored}}}{P_{\mathrm{loss}}} = \frac{\lambda_{\mathrm{res}}}{\Delta\lambda_{\mathrm{FWHM}}}$ (4)

If target biomarkers bind to the specific receptor on the sensor surface, the refractive index is locally changed, and the binding thus leads to a shift in the resonance position. A distinction is made between bulk sensitivity and surface sensitivity. Bulk sensitivity considers a change in refractive index for the whole analyte region. Surface sensitivity assumes a refractive index change in a thin layer close to the structure surface [22-24].
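The following Python sketch illustrates how S (equation (3)) and Q (equation (4)) can be extracted from transmission spectra; the spectra here are synthetic Lorentzian dips with an assumed dip position and an assumed 80 nm/RIU bulk shift, standing in for measured Fano resonances.

```python
import numpy as np

lam = np.linspace(560.0, 610.0, 2001)  # wavelength axis in nm

def lorentzian_dip(lam, lam0, fwhm, depth=0.6):
    """Synthetic transmission spectrum with a Lorentzian dip (stand-in for a Fano resonance)."""
    return 1.0 - depth / (1.0 + ((lam - lam0) / (fwhm / 2.0)) ** 2)

# Hypothetical dip positions for two analyte indices (values for illustration only).
n1, n2 = 1.333, 1.343
t1 = lorentzian_dip(lam, 585.0, 3.6)        # Q ~ 160 -> FWHM ~ 585/160 nm
t2 = lorentzian_dip(lam, 585.0 + 0.8, 3.6)  # assumed 80 nm/RIU bulk sensitivity

def dip_position(lam, t):
    return lam[np.argmin(t)]

S = (dip_position(lam, t2) - dip_position(lam, t1)) / (n2 - n1)   # eq. (3)

# Quality factor from the linewidth, Q = lam_res / FWHM (eq. (4)).
half_level = (t1.min() + 1.0) / 2.0
inside = lam[t1 < half_level]
Q = dip_position(lam, t1) / (inside[-1] - inside[0])
print(f"S = {S:.0f} nm/RIU, Q = {Q:.0f}")
```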
We simulate the performance of different photonic crystal slabs using the simulation tool FDTD Solutions (finite-difference time-domain, Lumerical Solutions, Inc.) and COMSOL Multiphysics® (finite element method, FEM). A comparative study of three simulation methods for nanostructured dielectric waveguides has recently been published, marking FDTD, FEM, and RCWA (rigorous coupled-wave analysis) as suitable simulation tools for this purpose [25]. Figure 3 shows examples of the electric field distribution on resonance for TE and TM polarization. From Figure 3, a higher sensitivity is expected for TM polarization, as the field fill factor in the analyte is higher. To consider effects of the penetration depth in some more detail, Figure 4a shows the calculated mode profiles in a 100-nm thick waveguide layer for wavelengths ranging from 400 nm to 700 nm. In this simulation we consider the waveguide without the nanostructure for simplicity. An evanescent-field penetration depth of about 40 nm to 90 nm into the air analyte region on the left is observed for this geometry. The modes of shorter wavelengths have a tighter confinement to the waveguide. Due to the confinement of the mode to the waveguide, photonic crystal slabs are only sensitive to refractive index changes close to the surface. This is highly advantageous for the suppression of background effects in the sample volume. On the other hand, for cell experiments such as the one depicted in Figure 1d, a larger penetration depth may be needed. Thus, designs are of interest that push the electric field into the analyte, such as the reverse-symmetry waveguides suggested by R. Horvath et al. [22]. Next, the influence of the nanostructure depth on the resonance spectra is discussed. Figure 4b plots the simulated TE transmission spectra for two different structure depths of 20 nm and 60 nm. The Fano-type line shape is clearly visible. For shallow gratings, a higher quality factor is observed. Deeper gratings cause larger scattering losses. As scattering losses are the dominant loss factor in these dielectric gratings, the stored energy in the resonator is reduced correspondingly, the quality factor is smaller, and the resonance linewidth is larger, following equation (4). While the sensitivity is a mere function of the field distribution, a higher Q has no impact on the resonance shift, as seen in Figure 4b. This is in contrast to PCSs that are used for fluorescence enhancement, which is a function of the field intensity [26]. For intensity-based measurement setups, an intermediate quality factor of around Q ~ 160 has proven advantageous for combining a significant intensity change with a good signal-to-noise ratio.
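As a rough illustration of the wavelength dependence of the penetration depth, the snippet below evaluates the standard 1/e evanescent decay length d_p = λ/(2π √(n_eff² − n₁²)); the effective indices are hypothetical values chosen only to reproduce the tens-of-nanometers scale quoted above, not the simulated mode data of Figure 4a.

```python
import numpy as np

def penetration_depth(lam_nm, n_eff, n_clad):
    """1/e evanescent decay length of a guided mode into the cladding."""
    return lam_nm / (2.0 * np.pi * np.sqrt(n_eff**2 - n_clad**2))

# Hypothetical effective indices; shorter wavelengths are more confined (larger n_eff).
for lam_nm, n_eff in [(450.0, 1.75), (585.0, 1.58), (700.0, 1.52)]:
    d = penetration_depth(lam_nm, n_eff, n_clad=1.0)  # air cladding, n1 = 1.0
    print(f"{lam_nm:.0f} nm: d_p ~ {d:.0f} nm")
```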
Single-channel intensity-based PCS system
In the simplest case, an intensity-based, single-channel PCS readout system requires only a light-emitting diode (LED) and an optical detector. As discussed in section 2, a refractive index change at the surface causes a wavelength shift of the guided-mode resonance. Placing the guided-mode resonance on one of the edges of the LED emission spectrum, this wavelength shift is converted to a change in the transmitted intensity. The signal-to-noise ratio (SNR) may be improved significantly by the integration of two crossed polarization filters [27]. As shown in Figure 5, these crossed polarization filters are placed before and after the PCS. Background light not interacting with the PCS and not experiencing any other change in polarization direction is blocked by the second polarizer. Light coupled into the quasi-guided mode in the PCS by the grating structure has a new polarization direction determined by the grating. Thus, light transmitted through the PCS at resonance can pass the second polarizer. Nazirizadeh et al. [27] realized a compact intensity-based readout system capable of detecting 2.5 nM of the protein streptavidin (66 kDa) with a biotinylated surface. The signal-to-noise ratio corresponds to a limit of detection of 280 pM (18.4 ng/ml). (Figure 5 reproduced from [27] with permission.)
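A minimal numerical sketch of this edge-filter transduction: an assumed Gaussian LED spectrum is multiplied by a synthetic PCS transmission spectrum whose dip sits on the LED's falling edge, and the integrated detector signal is evaluated for a few resonance shifts. All spectral parameters are illustrative assumptions.

```python
import numpy as np

lam = np.linspace(550.0, 620.0, 2001)                 # wavelength axis in nm
led = np.exp(-0.5 * ((lam - 575.0) / 8.0) ** 2)       # hypothetical Gaussian LED spectrum

def transmission(lam, lam0, fwhm=3.6, depth=0.6):
    """Synthetic PCS transmission with a resonance dip at lam0."""
    return 1.0 - depth / (1.0 + ((lam - lam0) / (fwhm / 2.0)) ** 2)

def detector_signal(shift_nm):
    """Photodetector signal: LED spectrum times PCS transmission, summed over wavelength."""
    return np.sum(led * transmission(lam, 585.0 + shift_nm)) * (lam[1] - lam[0])

base = detector_signal(0.0)
for shift in (0.0, 0.1, 0.5, 1.0):   # resonance shift caused by molecular binding
    rel = (detector_signal(shift) - base) / base * 100.0
    print(f"shift {shift:.1f} nm -> relative intensity change {rel:+.2f} %")
```

Because the dip sits on the LED edge, a sub-nanometer resonance shift already translates into a measurable relative intensity change at the photodetector.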
96-channel intensity-based PCS system
Based on the single-channel technology discussed in section 3.1, Nazirizadeh et al. demonstrated in 2016 a 96-channel intensity-based reader for microtiter plates that has the same footprint as the microtiter plate [28]. The microtiter plates have a nanostructured bottom such as the one shown in Figure 1c. The readout is performed using 96 optical channels, each consisting of an LED, a circular polarizing filter, and a photodiode, as depicted in Figure 6. This intensity-based system allows for both biochemical interaction analyses and cellular assays. The system does not use a spectrometer or moving mechanical parts, allowing for the compact footprint. An important issue regarding intensity-based measurements of cellular assays is the changing cell scattering [28,29]. In a cell adhesion and spreading experiment, the scattering first increases as the number of cells on the surface increases. As a continuous layer of cells is formed, the scattering decreases again. Mass redistribution in cells may also change the scattering properties for the quasi-guided mode in the PCS. While the wavelength response is not influenced by scattering, the changed scattering causes a changed resonance peak width as well as a changed intensity signal. Therefore, the intensity signal in general is ambiguous in cell experiments. Nazirizadeh et al. demonstrated that choosing a guided-mode resonance near cutoff allows for an unambiguous response [28].
Portable, intensity-based imaging readout system
Towards an even more compact system with a high channel number, Jahns et al. realized an imaging camera-based reader system [30]. This imaging system is able to detect multiple specific proteins in parallel for non-ambiguous diagnostics with single-shot measurements. The system has an overall size of 13 cm x 4.9 cm x 3.5 cm. The intensity values within the binding positions M1-M6 are background corrected with an in-house developed algorithm and plotted against time (Figure 8b) [30]. The association of the protein causes a corresponding intensity reduction showing typical binding kinetics. The nominally identical detection sites exhibit a different intensity response. We attribute this behavior to the experimental variation in the functionalization process. An automatic spotter should be used to guarantee identical spot sizes and receptor densities for the different sites. Regarding the resolution, it is feasible to place 20 x 20 = 400 receptor positions with 12 nl drops, an approximate spot diameter of 300 μm, and a pitch of 500 μm on a 1 cm² large sensor surface. For further miniaturization and high-Q samples, the mode propagation in the waveguide needs to be considered [31,32]. The imaging approach does not require scanning and thus allows reliable and fast, time-resolved measurements. The demonstrated system limit of detection of 168 pM (24 ng/ml) is already suitable to identify several relevant biomarker concentrations [30]. For a broader application range, the signal-to-noise ratio needs to be further enhanced to be able to detect concentrations in the range of pg/ml.
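A minimal sketch of such an ROI-based evaluation with common-mode drift removal (dividing each binding-site trace by a non-functionalized reference region); the frame stack, ROI positions, and the division-based correction are assumptions for illustration, not the in-house algorithm of [30].

```python
import numpy as np

# Hypothetical camera image stack: (n_frames, height, width), one frame per time point.
rng = np.random.default_rng(0)
frames = rng.normal(1000.0, 5.0, size=(100, 64, 96))

# Assumed pixel regions: six binding sites M1-M6 plus a non-functionalized reference ROI.
site_rois = [(slice(20, 44), slice(10 + 12 * i, 18 + 12 * i)) for i in range(6)]
ref_roi = (slice(20, 44), slice(84, 94))

def roi_mean(frames, roi):
    """Mean intensity inside one region of interest, per frame."""
    rows, cols = roi
    return frames[:, rows, cols].mean(axis=(1, 2))

ref = roi_mean(frames, ref_roi)
curves = [roi_mean(frames, roi) / ref for roi in site_rois]  # drift-corrected kinetics M1-M6
```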
Multiperiodic and Aperiodic Nanostructures
This section reviews advanced nanostructures for resonant-waveguide-grating biosensors beyond simple monoperiodic gratings. We present two different concepts of nanostructure designs: compound multiperiodic gratings and deterministic aperiodic nanostructures. Both have recently been suggested for refractive index biosensors [33-35].
Compound gratings are obtained by a superposition of multiple monoperiodic gratings, e.g., by performing a logical disjunction operation. Each period adds peaks to the spectrum. Thus, compound multiperiodic gratings allow the design of the number and wavelengths of guided-mode resonances (GMRs) in the transmission and reflection spectrum. By tuning the duty cycle, the relative intensities of the resonance peaks may be tailored [36]. Figure 9 shows the measured transmission spectra of two different multiperiodic PCSs under transverse-electric excitation. The gratings consist of two and three grating periods (2-compound: 250 nm and 300 nm; 3-compound: 250 nm, 300 nm, and 350 nm), each having a distinct resonance dip in the spectrum. The fabrication and experimental characterization are described in detail in [25,30]. The additional peaks in the spectrum are promising for multi-LED intensity-based readout systems, allowing for further improvements in the signal-to-noise ratio. The second design concept of deterministic aperiodic nanostructures (DANS) employs the idea of introducing disorder into the grating layer. Disordered dielectric media have been shown to feature high field concentrations due to localization effects [37], and high-quality resonances have been reported. Deterministic aperiodic sequences are generated by mathematical substitution rules. While being non-periodic, they feature self-similarity at different orders and thus allow for guided-mode resonances when employed as a diffraction layer within the waveguide. We investigated three different aperiodic nanostructures based on the Rudin-Shapiro, the Thue-Morse, and the Fibonacci sequence. These three examples are chosen for their different degrees of disorder. The Rudin-Shapiro nanostructure, having a continuous spatial Fourier spectrum, has the highest disorder, while Thue-Morse and Fibonacci nanostructures have less disorder, featuring singular-continuous and pure-point, Bragg-like spatial Fourier spectra [38,39]. The three different aperiodic sequence substitution rules are given in table 1; a sketch of how such sequences can be generated is shown below.
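Since table 1 is not reproduced here, the following sketch uses the standard substitution rules for the three sequences (Fibonacci: A→AB, B→A; Thue-Morse: A→AB, B→BA; Rudin-Shapiro via the usual four-letter morphism); these are assumed to match the rules used in the experiments.

```python
def substitute(seq, rules, n_iter):
    """Apply a substitution rule n_iter times to grow a deterministic aperiodic sequence."""
    for _ in range(n_iter):
        seq = "".join(rules[c] for c in seq)
    return seq

fibonacci = substitute("A", {"A": "AB", "B": "A"}, 10)
thue_morse = substitute("A", {"A": "AB", "B": "BA"}, 8)

# Rudin-Shapiro via the standard four-letter morphism, then projected onto {A, B}.
rs4 = substitute("a", {"a": "ab", "b": "ac", "c": "db", "d": "dc"}, 8)
rudin_shapiro = "".join("A" if c in "ab" else "B" for c in rs4)

print(len(fibonacci), len(thue_morse), len(rudin_shapiro))
```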
To create a nanostructured grating from the binary sequences, the letters of the calculated sequences are translated into 50-nm wide ridges for an A and 50-nm wide grooves for a B, as depicted in Figure 10 [35]. Figure 11 shows images of the three different grating structures taken by a scanning electron microscope (SEM). For analyzing the sensitivity of aperiodic nanostructured waveguides, we consider experimental results for step-wise changes of the analyte refractive index. The nanostructured waveguides are placed into a fluid cell, and water-glycerol mixtures with changing composition are injected into the fluid cell. The refractive index is varied from 1.33 (water) to 1.389 by mixing water with glycerol in 5% steps. For each refractive index step and each nanostructure under investigation, 100 spectra are recorded within 50 s. After each step, the fluid cell is flushed with distilled water and another 100 reference spectra are recorded. The recorded spectra are normalized to the excitation spectrum of the setup, and dip positions in the spectrum are obtained by a parabolic fit (see the sketch after the figure caption below). The mean resonance position of the 100 recorded spectra is calculated. Figure 13 shows the time sequences of the resonance positions during the experiments. Particularly interesting is the different noise behavior of the different resonances. The noise is a combined effect of the signal-to-noise ratio of the hardware and the performance of the resonance tracking algorithm. Figure 13: Measured relative resonance positions for different resonances of three deterministic aperiodic nanostructured waveguides. The time sequence of measured spectra is plotted (100 per refractive index step). The average resonance wavelength of the first 100 spectra is subtracted for each dip, and the curves are offset by 1.5 nm for better visibility. The bulk refractive index in the analyte region is changed in the following steps: water → water/5% glycerol → water → water/10% glycerol → water → water/15% glycerol → water → water/20% glycerol.
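A minimal sketch of the parabolic dip tracking and the averaging over 100 spectra; the Lorentzian test spectrum, noise level, and fit window are illustrative assumptions.

```python
import numpy as np

def dip_position_parabolic(lam, t, window=5):
    """Sub-sample dip position from a parabolic fit around the spectrum minimum."""
    i = np.argmin(t)
    sl = slice(max(i - window, 0), min(i + window + 1, len(lam)))
    a, b, c = np.polyfit(lam[sl], t[sl], 2)   # t ~ a*lam^2 + b*lam + c
    return -b / (2.0 * a)                      # vertex of the fitted parabola

# Hypothetical noisy resonance dip to illustrate the averaging over 100 spectra.
rng = np.random.default_rng(1)
lam = np.linspace(580.0, 590.0, 501)
positions = []
for _ in range(100):
    t = 1.0 - 0.6 / (1.0 + ((lam - 585.3) / 1.8) ** 2) + rng.normal(0.0, 0.01, lam.size)
    positions.append(dip_position_parabolic(lam, t))
print(f"mean dip: {np.mean(positions):.3f} nm, std: {np.std(positions) * 1000:.1f} pm")
```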
The change in resonance position with refractive index change is plotted in Figure 14. It is observed that different resonances show different sensitivities. Table 2 summarizes the calculated sensitivity, the standard deviation, and the limit of detection for each resonance under investigation. The limit of detection is calculated as the quotient of the standard deviation and the sensitivity S:

$\mathrm{LOD} = \sigma / S$ (5)

In general, the resonances at higher wavelengths show higher sensitivities, which can be attributed to the more extended evanescent field (Figure 4a). This effect also explains the highest sensitivities of the Rudin-Shapiro sequence. Figure 15 plots the measured sensitivities over the absolute resonance position. A linear correlation is obtained from the measured data. Both the compound-grating nanostructures and the deterministic aperiodic nanostructures open new opportunities for designing the spectral characteristics. This allows for new multi-LED intensity-based detection systems. The observed bulk sensitivity is similar to the one observed in monoperiodic gratings [35]. On the other hand, exciting new opportunities open up with highly localized fields of high field intensities [37]. To take advantage of these localized modes for highly sensitive biosensors, schemes for localized functionalization need to be implemented, i.e., receptor sites for target molecules should only be placed in high-field areas. This localized functionalization will improve the performance of traditional spectrometer-based readout systems as well as of intensity-based readout systems.
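Equation (5) and the linear sensitivity fit of Figure 15 can be sketched as follows; the calibration points and the assumed dip-position noise are placeholder values, not the measured data of table 2.

```python
import numpy as np

# Hypothetical calibration data: refractive index steps vs. measured resonance shift.
n_steps = np.array([1.330, 1.339, 1.348, 1.357, 1.366])      # water-glycerol mixtures
shift_nm = np.array([0.00, 0.62, 1.25, 1.90, 2.51])          # illustrative dip shifts

S = np.polyfit(n_steps, shift_nm, 1)[0]       # sensitivity in nm/RIU (slope of linear fit)
sigma_nm = 0.003                               # assumed std. dev. of the tracked dip position
lod = sigma_nm / S                             # eq. (5): limit of detection in RIU
print(f"S = {S:.1f} nm/RIU, LOD = {lod:.1e} RIU")
```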
Summary and Outlook
We reviewed progress in intensity-based reader systems for resonant waveguide grating (RWG) biosensors. The transmission and reflection spectrum of an RWG are linked to the refractive index distribution in the analyte region above the RWG. Changes of the refractive index caused by biomolecular binding or by a change of the cell number or cellular mass distribution in a cellular assay cause a shift of the guided-mode resonance (GMR) in the spectrum. Here, the biosensor is only sensitive to changes of refractive index within the penetration depth of quasi-guided modes.
In an intensity-based reader, the spectral change is converted to a change in transmitted or reflected intensity by positioning the GMR on an intensity edge of the spectral response. This measurement principle requires no mechanical scanning and no spectrometer. Single-channel readers, 96-channel microtiter-plate readers, as well as an imaging camera-based reader have been discussed. The imaging readout system has a size of 13 cm x 4.9 cm x 3.5 cm and a signal-to-noise ratio corresponding to a limit of detection of 168 pM (24 ng/ml). These results show the potential of intensity-based reader systems for point-of-care applications.
Future development towards a high channel count test chip with imaging readout needs to address two aspects in particular: further improvement of the signal-to-noise ratio and the practical realization of the disposable test chip. For improving the signal-to-noise ratio, an in-depth analysis of the noise sources should be carried out. Here, results obtained for microresonator-based sensors form a basis for the investigation [40,41]. The noise sources in RWG biosensors should be compared to these results, and intrinsic noise sources need to be identified to determine the intrinsic noise floor. We reviewed recent research on advanced nanostructure designs for improved signal detection. It was shown that compound grating nanostructures and deterministic aperiodic nanostructures allow for the design of the spectral features. These investigations show that, for example, a three-LED intensity-based reader could be designed. Here, the LEDs should have non-overlapping spectra, and one resonance should sit on either the rising or the falling edge of each LED. With this concept, new opportunities for self-calibration arise. The local functionalization with receptor sites corresponding to the locally-enhanced electric field in deterministic aperiodic nanostructures is another promising route toward systems with higher sensitivity.
For the disposable test chip, the separate design and combination of a microfluidic chip with a nanostructured and biofunctionalized functional chip appears most promising. Here, the microfluidic chip needs to be designed for sample introduction and filtering as well as for fluid delivery to the functional chip. This microfluidic chip may be fabricated cost-effectively by injection molding. The nanostructuring of the functional chip is feasible by nanoimprint lithography. The subsequent biofunctionalization needs to be performed under well-controlled conditions to prevent degradation. The microfluidic chip and the functional chip may be combined by adhesives, with tightness still being an unresolved issue. For a 1 cm² sensor surface of the functional chip, 400 separate receptor positions are feasible.
The second large application area opening up for intensity-based readout of RWG biosensors is compact reader systems for cellular assays. Intensity-based readout of cellular assays in a nanostructured microtiter plate has been demonstrated successfully. Due to the small system size, each microtiter plate may potentially be monitored with its own intensity-based reader underneath. Furthermore, intensity-based systems are small enough to be integrated into incubators. Thus, the systems have the potential for continuous monitoring of cellular assays and high-throughput analysis.
"year": 2017,
"sha1": "f47385748c7b081292a9419ce1458153bd5c29c8",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.1016/j.photonics.2017.07.003",
"oa_status": "HYBRID",
"pdf_src": "Adhoc",
"pdf_hash": "3328c846ed584adb9d73ca65c159147b5b51f2d7",
"s2fieldsofstudy": [
"Engineering",
"Physics",
"Materials Science"
],
"extfieldsofstudy": [
"Materials Science"
]
} |
Disulfidptosis-related classification patterns and tumor microenvironment characterization in skin cutaneous melanoma
Aim: To identify distinct disulfidptosis molecular subtypes and develop a novel prognostic signature. Methods/materials: We integrated multiple SKCM transcriptomic datasets from The Cancer Genome Atlas (TCGA) database and the Gene Expression Omnibus (GEO) into this study. The consensus clustering algorithm was applied to categorize SKCM patients into different disulfidptosis-related gene (DRG) subtypes. Results: Three distinct DRG subtypes were identified, which correlated with different clinical outcomes and signaling pathways. Then, a disulfidptosis-related signature and nomogram were constructed, which could accurately predict the individual overall survival (OS) of patients with SKCM. The high-risk group was less sensitive to immunotherapy than the low-risk group. Conclusion: The signature can assist healthcare professionals in making more accurate and individualized treatment choices for patients with SKCM.
Skin cutaneous melanoma (SKCM) is a malignant tumor originating from melanocytes and is the third most common skin malignancy [1]. The total number of new cases of melanoma worldwide in 2020 is estimated to be 325,000, with 57,000 deaths [2]. SKCM has become one of the major problems endangering human health. It is a complex disorder characterized by a high mutational load, extensive genetic heterogeneity, and complex tumor microenvironment (TME) interactions that place it among the most aggressive types of cancer [3,4]. The highly invasive and metastatic nature of SKCM is responsible for a 5-year overall survival (OS) of only 23% in CM patients [2]. Despite having similar tumor grading and identical pathological staging, the survival outcomes of SKCM patients can vary significantly due to distinct genetic characteristics [5]. Hence, to improve melanoma outcomes and prognostication, it is imperative to explore more precise and specific targets as well as molecular profiles. SKCM is regarded as a highly aggressive form of cancer. Based on the 2020 worldwide cancer data, SKCM ranks 19th in prevalence among the most frequent types of cancer [6]. The number of new cases has surged to 324,635, resulting in 57,043 deaths [6]. In early-stage melanoma, prompt surgery yields a positive outcome, with survival rates reaching 95% after 10 years, compared with less than 20% for metastatic melanoma. Immunotherapy is the primary approach for managing advanced melanoma [7]. In the past few years, the field of CM therapy has experienced remarkable advancements and entered a fresh era with the development of CM immunotherapy. Immunotherapy, in contrast to traditional chemotherapy, can induce an unparalleled and enduring response in individuals suffering from advanced cancer. Nevertheless, this response occurs only in a comparatively limited group of individuals, and the effect differs significantly among patients with SKCM. These clinical obstacles motivate researchers to discover novel methods for predicting which patients possess inherent resistance to targeted therapy and immunotherapy. Doing so can provide improved guidance for the clinical management of patients and encourage the rational utilization of clinical resources.
Recently, a group of scientists identified a novel programmed cell death (PCD) mode induced by the aberrant accumulation of intracellular disulfides and defined it as disulfidptosis [8]. They identified several genes remarkably associated with disulfidptosis via a genome-wide CRISPR-Cas9 screen, including SLC7A11 and its chaperone SLC3A2, and various components of the mitochondrial oxidative phosphorylation system. Under glucose starvation, high expression of SLC7A11 in kidney cancer cells accelerates nicotinamide adenine dinucleotide phosphate (NADPH) depletion in the cytoplasm, intracellular disulfide accumulation, and ultimately disulfidptosis. Of note, the pharmacological blockade of glucose uptake by GLUT inhibitors was proven to exert cancer-killing effects by promoting disulfidptosis in SLC7A11-high tumor cells, highlighting the therapeutic utility of disulfidptosis-induction strategies in cancer treatment [8]. Disulfidptosis is correlated with prognosis, the tumor microenvironment (TME), immune evasion, and therapeutic outcomes in other forms of cancer, including colon cancer [9,10], bladder cancer [11], and renal clear cell carcinoma [12]. Li et al. [10] indicated that a lack of effective immune infiltration was observed in the high-risk score group, indicating an immune-exclusion TME phenotype with poorer survival. The low-risk score group was also linked to increased anti-tumor immune cell infiltration and an elevated activation status of anti-tumor immunity. Despite its recent emergence, the association of disulfidptosis with prognosis and the TME in SKCM remains unclear.
The genomic characteristics of disulfidptosis-related genes (DRGs) specific to SKCM were thoroughly investigated in this study. Unsupervised consensus clustering identified three distinct disulfidptosis expression patterns based on DRGs. We clarified the variations in prognosis, clinical traits, and immune characteristics among the three clusters. Furthermore, we investigated the predictive function of these DRGs among the three clusters, conducted functional analysis on genes that were differentially expressed among the clusters, and developed a prognostic model. This signature helps measure disulfidptosis-related features, where a high-risk score indicates a poor prognosis and a lower tumor mutation burden (TMB) in patients with SKCM. Afterward, we examined scores for evaluating the tumor microenvironment (TME), associations with TMB, and disparities in chemotherapy sensitivity within the two risk groups. The findings indicate that DRGs have a significant impact on SKCM, aiding in the assessment of patient prognosis and the response to chemotherapy and immunotherapy. Additionally, these genes have the potential to serve as collaborative targets for enhancing the effectiveness of SKCM treatment.
Patients & data collection
We examined melanoma datasets (TCGA-SKCM and GSE65904) from a pair of databases, The Cancer Genome Atlas (http://portal.gdc.cancer.gov/) and GEO (https://www.ncbi.nlm.nih.gov/geo/). The normal human skin transcriptome data stored in the Genotype-Tissue Expression (GTEx) database were downloaded from the UCSC Xena database (http://xena.ucsc.edu/) and used as a control (TPM format). Samples without substantial clinicopathological or survival data were eliminated from subsequent analysis. Finally, the TCGA-SKCM dataset (468 samples) and the GSE65904 dataset (210 samples) were selected as the training and validation sets, respectively.
Unsupervised clustering analysis of DRGs
After excluding DRGs that were not expressed in certain samples, we selected a total of 9 DRGs for model construction, which were obtained from previous studies [8]. Consensus clustering, a widely used technique for classifying cancer subtypes, is an unsupervised clustering approach. By utilizing various omics data sets, it can classify samples into distinct subcategories, enabling the identification of novel disease subtypes or facilitating comparative analysis among different subtypes. To differentiate molecular subtypes, the "ConsensusClusterPlus" R package was employed to perform consensus clustering according to the expression levels of the 9 DRGs. To further examine the distinctions among subcategories and investigate the distribution of the various subtypes, we utilized the t-distributed stochastic neighbor embedding (tSNE) technique and employed the R packages "Rtsne" and "umap" to assess the effect of the classification. Moreover, we examined the level of immune cell infiltration among the various subcategories. The different subtypes were analyzed using the R package "heatmap" to examine the expression levels of DRGs, age, gender, tumor location, stage, Breslow depth, ulceration, and tumor status. Finally, the analysis of the enriched pathways between the subtypes was performed using the "GSVA" R package and visualized as heatmaps.
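The clustering itself was performed with the ConsensusClusterPlus R package; as a minimal illustration of the underlying idea (repeated clustering of subsampled data aggregated into a co-assignment matrix), a Python sketch is given below. The use of k-means, the subsampling fraction, and the toy data are assumptions, not the package's exact procedure.

```python
import numpy as np
from sklearn.cluster import KMeans

def consensus_matrix(X, k, n_reps=100, frac=0.8, seed=0):
    """Consensus clustering: repeated k-means on subsampled data; entry (i, j) is the
    fraction of co-sampled runs in which samples i and j land in the same cluster."""
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    together = np.zeros((n, n))
    sampled = np.zeros((n, n))
    for _ in range(n_reps):
        idx = rng.choice(n, size=int(frac * n), replace=False)
        km = KMeans(n_clusters=k, n_init=10, random_state=int(rng.integers(1 << 31)))
        labels = km.fit_predict(X[idx])
        same = (labels[:, None] == labels[None, :]).astype(float)
        together[np.ix_(idx, idx)] += same
        sampled[np.ix_(idx, idx)] += 1.0
    return together / np.maximum(sampled, 1.0)

# Toy example: 150 samples x 9 DRG expression values.
X = np.random.default_rng(1).normal(size=(150, 9))
M = consensus_matrix(X, k=3)
```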
Establishment & validation of DRG-related signature
To identify the genes differentially expressed between subtypes, the R package "limma" was utilized with the criteria of FDR <0.05 and |logFC| >0.585. Following the acquisition of the differentially expressed genes among the subtypes, we selected the common genes for further analysis. The samples in the TCGA database were used as the training set, and the samples in the GSE65904 dataset were used as the testing set. To identify genes associated with prognosis (p < 0.05), univariate Cox regression analyses were applied to the intersecting genes in the training cohort. The "glmnet" R package was utilized to conduct Least Absolute Shrinkage and Selection Operator (LASSO) Cox regression analysis on the prognosis-associated differentially expressed genes in the training set. The risk score was calculated as the sum, over the signature genes, of each gene's expression level multiplied by its LASSO regression coefficient. The samples were categorized into high- and low-risk groups based on the median DRG-score. Afterward, we examined the AUC in the training set and the validation set.
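A minimal Python sketch of this risk-score construction and median split; the coefficients and the expression matrix below are placeholders for illustration, not the fitted LASSO coefficients of the signature.

```python
import numpy as np
import pandas as pd

# Hypothetical LASSO-Cox coefficients for the eight signature genes (illustrative values).
coef = pd.Series({"MGAT5B": 0.21, "SEMA6A": 0.15, "OCA2": 0.09, "SRGN": -0.18,
                  "GPR143": 0.12, "FGL2": -0.22, "PTPRC": -0.31, "CDH3": 0.24})

def drg_score(expr: pd.DataFrame) -> pd.Series:
    """Risk score = sum over genes of (expression x LASSO coefficient), per sample."""
    return expr[coef.index].mul(coef, axis=1).sum(axis=1)

# Toy expression matrix (samples x genes) and median split into risk groups.
expr = pd.DataFrame(np.random.default_rng(2).normal(size=(468, 8)), columns=coef.index)
score = drg_score(expr)
group = np.where(score > score.median(), "high", "low")
```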
Clinical significance of the DRG-related signature

Univariate and multivariate Cox regression analyses were conducted to validate the independence of the DRG-score as a prognostic predictor, considering both the risk score and clinicopathological variables. The findings were displayed in a forest plot. Next, we performed a subgroup analysis to investigate whether the DRG-score maintains its predictive accuracy in subgroups defined by various clinical variables.
Development & verification of a nomogram scoring model
Based on the independent prognostic value of the DRG-score, a prognostic nomogram was generated from the clinical features and the DRG-score with the assistance of the "rms" package in R. Each variable in the nomogram is assigned a specific score, and the overall score is calculated by summing the scores of all variables for each sample. ROC curves were used to evaluate the nomogram's performance in predicting survival rates at 1, 3, and 5 years. Calibration plots of the nomogram were utilized to compare the predicted 1-, 3-, and 5-year survival with the observed outcomes.
Analysis of tumor microenvironment
Immune cell infiltration analysis was conducted using the ssGSEA algorithm. The Wilcoxon signed-rank test was used to analyze the distinct composition of immune infiltrating cells in the high- and low-risk groups. We examined the relationship between immune cells and 8 crucial genes by conducting a correlation analysis. Simultaneously, we examined the association between the two predictive risk categories and the TME. A box plot was generated using the "estimate" package to compare the TME scores between the different DRG-score groups.
Anticipating the outcome of immunotherapy
The Tumor Immune Dysfunction and Exclusion (TIDE) method was utilized to determine the anticipated responses to immune checkpoint inhibitors by comparing the variations in TIDE scores for each sample between the groups. For each sample in the TCIA database, an immunophenoscore (IPS) was created, which is a notable indicator of the response to anti-PD-1 and anti-CTLA-4 therapy. Subsequently, the IPS was compared across the risk groups in TCGA-SKCM to investigate the correlation between the risk score and IPS. A box plot was generated using the "reshape2" and "ggplot2" packages to compare the expression levels of immune checkpoints between the high-risk and low-risk groups. Additionally, the correlation between immune cells and the signature genes was analyzed.
Evaluation of drug responsiveness
Patients were divided into two subgroups based on their DRG-score, and drug sensitivity was predicted for different medications. Drug prediction was performed using the "pRRophetic" R package. The Wilcoxon test was employed to investigate the variation in IC50 values between the risk categories. The analysis was visualized using the R package "ggplot2".
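A minimal sketch of the two-group IC50 comparison; for two independent groups the unpaired rank test (Mann-Whitney U, the two-sample analogue of the Wilcoxon test) is used here, and the IC50 values are simulated placeholders.

```python
import numpy as np
from scipy.stats import mannwhitneyu

# Hypothetical predicted log(IC50) values for one drug in the two risk groups.
rng = np.random.default_rng(3)
ic50_low = rng.normal(2.0, 0.5, size=230)    # low-risk group
ic50_high = rng.normal(2.4, 0.5, size=238)   # high-risk group

stat, p = mannwhitneyu(ic50_low, ic50_high, alternative="two-sided")
print(f"rank-sum test p = {p:.2e}")
```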
The relationship between the DRG-score & TMB

Somatic mutation data for SKCM were obtained from the TCGA database for the analysis of gene mutations. The gene mutations in the risk subgroups were analyzed using the "maftools" R package. Subsequently, the relationship between the DRG-score and the tumor mutation burden (TMB) was analyzed. Furthermore, we performed survival analysis among the TMB subcategories to investigate the influence of TMB status on the prognosis of patients with SKCM. Afterward, we combined TMB and the DRG-score to conduct survival analysis on SKCM patients.
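A minimal sketch of a TMB calculation and of correlating it with the risk score; the exome size (conventions vary), mutation counts, and risk scores are simulated assumptions.

```python
import numpy as np
from scipy.stats import spearmanr

EXOME_SIZE_MB = 38.0   # assumed size of the captured coding region; conventions vary

def tmb(n_nonsyn):
    """Tumor mutation burden: non-synonymous mutations per megabase."""
    return np.asarray(n_nonsyn) / EXOME_SIZE_MB

# Toy data: risk scores and mutation counts with a built-in negative association.
rng = np.random.default_rng(5)
risk = rng.normal(size=300)
counts = rng.poisson(400 * np.exp(-0.3 * risk))
rho, p = spearmanr(risk, tmb(counts))
print(f"Spearman rho = {rho:.2f}, p = {p:.1e}")
```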
Detection of differentially expressed genes & analysis of functional enrichment
To detect differentially expressed genes (DEGs) between the risk subgroups, we employed the "limma" package, applying the following criteria: |Fold Change| >1.5 and false discovery rate (FDR) <0.05. Gene Ontology (GO) and Kyoto Encyclopedia of Genes and Genomes (KEGG) enrichment analyses were conducted using "clusterProfiler" with a significance threshold of FDR <0.05.
Statistical analysis
The data analysis was conducted using R software (version 4.3.0) and R Bioconductor packages. Group differences were evaluated with non-parametric or parametric methods as appropriate, using the Wilcoxon test, Kruskal-Wallis test, t-test, or one-way ANOVA. The ROC curve was used to confirm the accuracy of the model. Survival curves were estimated with the Kaplan-Meier method, and the log-rank test was employed to evaluate differences between the groups. All statistical tests were two-sided, and p < 0.05 was considered significant.
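The analyses above are performed in R; as a minimal illustration of the Kaplan-Meier/log-rank workflow, a Python sketch using the lifelines package is given below, with simulated survival times and event indicators.

```python
import numpy as np
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

rng = np.random.default_rng(4)
# Toy survival data: time in months and event indicator for two risk groups.
t_low, e_low = rng.exponential(80, 200), rng.integers(0, 2, 200)
t_high, e_high = rng.exponential(40, 200), rng.integers(0, 2, 200)

kmf = KaplanMeierFitter()
kmf.fit(t_low, event_observed=e_low, label="low risk")
ax = kmf.plot_survival_function()
kmf.fit(t_high, event_observed=e_high, label="high risk")
kmf.plot_survival_function(ax=ax)

res = logrank_test(t_low, t_high, event_observed_A=e_low, event_observed_B=e_high)
print(f"log-rank p = {res.p_value:.3g}")
```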
Development of DRG molecular subtypes of melanoma
We employed the unsupervised clustering method to detect distinct regulatory patterns of SKCM by analyzing the expression levels of the DRGs. After analyzing 468 samples, the ideal number of clusters was determined to be 3. At k = 3, within-group differences were smallest, while between-group separation was largest. As a result, we classified melanoma patients into three distinct subcategories, labeled A, B, and C (Figure 1A). The relative change in the area under the CDF curve indicated a stable distribution of melanoma patients when dividing them into 3 subtypes (Figure 1B & C). The tSNE, UMAP, and PCA analyses indicated that the A, B, and C subtypes can be differentiated from one another. Figure 1D-F demonstrates that our DRG-based subtype classification exhibits good discriminating power.
Comparative examination of the three DRG subtypes
The Kaplan-Meier analysis showed that the C subtype had the best prognosis, followed by the B subtype, while the A subtype had the worst survival outcome (Figure 2A). In a heat map, we display the expression levels of the DRGs and clinical characteristics, including age, gender, tumor location, stage, Breslow depth, ulceration, and tumor status, for the A, B, and C subtypes. The A subtype exhibited the highest level of DRG expression, followed by the B subtype, with the C subtype showing the lowest level (Figure 2B). Next, the disparities in biological behavior among the subtypes were examined through gene set variation analysis (GSVA) (Figure 2C-E). Upon comparing the A and B subtypes, the B subtype exhibited considerably higher enrichment in apoptosis, the NOD-like receptor signaling pathway, and protein export. Compared with the C subtype, the A subtype exhibited considerably greater enrichment in aminoacyl-tRNA biosynthesis, base excision repair, and RNA polymerase, while pathways such as the T cell receptor signaling pathway, natural killer cell-mediated cytotoxicity, and the chemokine signaling pathway showed slightly higher enrichment in the C subtype. When comparing the B and C subtypes, the C subtype exhibited a higher enrichment level in the intestinal immune network for IgA production, antigen processing and presentation, and primary immunodeficiency, whereas the B subtype showed a higher level of enrichment in the chemokine signaling pathway, the pentose phosphate pathway, and the citric acid cycle. Furthermore, we examined the three SKCM subtypes with respect to 23 infiltrating immune cell categories (Figure 2F). The results indicated that most of the infiltrating immune cells exhibited notable variations among the three DRG subtypes, except CD56dim NK cells and monocytes. In terms of immune cell infiltration, the C subtype exhibited the highest level while the A subtype had the lowest level (Figure 2F).
Establishment & validation of the DRG-related signature

DEGs associated with the DRG subtypes were utilized to create the DRG-related signature. We examined the genes that were differentially expressed among subtypes A, B, and C. Between the A and B subtypes, 504 genes were differentially expressed; between the A and C subtypes, 1626 genes; and between the B and C subtypes, 463 genes. By intersecting the sets of differentially expressed genes from the three comparisons, we identified a total of 37 genes shared across all three (Figure 3A). Next, we conducted a univariate Cox analysis to assess the survival importance of these genes, resulting in the identification of 26 genes meeting the criterion of p < 0.05 (Figure 3B). Afterward, we conducted LASSO analysis on these 26 genes to construct the signature (Figure 3C & D); the distribution of patients across the three DRG clusters and the two DRG-score groups is shown in Figure 3E. To ascertain the overall distribution of melanoma samples in the low- and high-risk subpopulations, PCA and t-SNE analyses were performed; the patients in the two subgroups could be effectively distinguished (Figure 3F & G). The survival analysis demonstrated that the samples in the two subgroups have distinct survival statuses, with lower OS rates in the high-risk subgroup (p < 0.05; Figure 3H). For 1-, 3-, and 5-year survival, the training set's time-dependent ROC areas are 0.723, 0.728, and 0.701, respectively (Figure 3I). Afterward, the predictive model was applied to the validation set. The PCA and t-SNE analyses demonstrated a distinct separation between the two risk groups (Figure 4A & B). Figure 4C demonstrates that, in the validation set, the high-risk group exhibited a significantly worse prognosis than the low-risk group. Figure 4D displays the time-dependent ROC areas for 1-, 3-, and 5-year survival in the validation set, which are 0.719, 0.753, and 0.717, respectively. To further confirm the accuracy of our signature, we compared its C-index, RMS, and AUC values with those of eight published risk models. Compared with the C-index and RMS of the eight published risk models, our signature has clear advantages (Supplementary Figure 1A & B). Furthermore, the AUC values of the signature were assessed in comparison to those of the previously published signatures; our signature's ROC-curve AUC values were superior to the published signatures in terms of the highest values (Supplementary Figure 1C-J).
Clinical significance of the DRG-related signature
The prognosis of SKCM was significantly associated with age, the DRG-score, and stage, as shown by univariate Cox regression analysis (Figure 5A). Further multivariate Cox regression analysis demonstrated that the DRG-score remains an independent prognostic indicator even after accounting for other clinical traits (Figure 5B). Furthermore, to investigate the predictive importance of the DRG-score in patients with SKCM, the patients were categorized into various subgroups according to clinical factors. In general, the survival of high-risk patients was worse than that of low-risk patients (Supplementary Figure 2).
Creating a nomogram for forecasting survival
To predict the 1-, 3-, and 5-year OS rates in patients with SKCM, a nomogram was created that combines the DRG-score with clinicopathological characteristics, considering the limited clinical usefulness of the DRG-score alone in predicting OS (Figure 5C). Our analysis of the nomogram model showed high accuracy for OS at 1, 3, and 5 years (Figure 5D). The proposed nomogram showed a performance similar to an ideal model based on the calibration plots (Figure 5E).
Evaluation of TME in different risk subpopulations
Based on the ssGSEA algorithm, the TCGA gene expression matrix for SKCM was used to calculate the enrichment scores of 16 immune cell types and the activity of 13 immune-related pathways. Afterward, we investigated the makeup of immune cells and immune-related pathways in the risk subcategories. The findings indicated that individuals in the high-risk category exhibited notably low levels of immune cell infiltration (Figure 6A). Similarly, all the immune-related pathways were less active in the high-risk group than in the low-risk group (Figure 6B). Furthermore, there were significant variations in the tumor immune microenvironment between the low- and high-risk subpopulations. Patients classified as high-risk displayed reduced ImmuneScore, StromalScore, and ESTIMATEScore levels but higher TumorPurity levels in comparison to low-risk patients (Figure 6C-F). Additionally, we examined the correlation between immune cell enrichment and the eight genes in the prognostic signature. Figure 6G shows that the majority of immune cells exhibited a strong correlation with the chosen genes.
Assessment of the immunotherapeutic reaction
To assess the response to immunotherapy, we conducted a tumor immune dysfunction and exclusion (TIDE) analysis. By utilizing two primary tumor immune evasion mechanisms, TIDE can forecast the response to immunotherapy. These mechanisms include the induction of T cell dysfunction and the exclusion of T cell infiltration in tumors with low CTL levels. A greater TIDE score indicated a greater likelihood of immune evasion, suggesting that patients were less likely to benefit from ICI treatment. Our findings indicate that patients in the non-responder group had a comparatively higher risk score than the responder group, suggesting that ICI therapy may be more advantageous for DRG-low patients (Figure 7A). We used the TCIA repository to produce an IPS for every SKCM specimen. Notably, patients classified as low-risk exhibited higher IPS for anti-CTLA-4, anti-PD-1, and anti-(CTLA-4 plus PD-1) therapy compared with high-risk patients, implying more favorable immunotherapy results among individuals with a lower risk score (Figure 7B-D). Furthermore, the low-risk group exhibited increased expression of significant immune checkpoint biomarkers (PD-1, PD-L1, and CTLA-4) in contrast to the high-risk group (Figure 7E). The results suggest that individuals with a low-risk profile may experience greater advantages from immunotherapy.
Identification of potential medications for the treatment of melanoma
In the training cohort, beyond the evaluation of ICI therapy, we attempted to establish connections between the risk categories and the efficacy of chemotherapy in the treatment of SKCM. We demonstrated that a lower half-maximal inhibitory concentration (IC50) of chemotherapeutics such as 5-Fluorouracil, Cisplatin, Dasatinib, Gemcitabine, and Ribociclib (p < 0.05) was linked to low risk, while low IC50 values for Tamoxifen, Dihydrorotenone, and Lapatinib (p < 0.05) were associated with high risk. Hence, the DRG-score can serve as a possible indicator of chemo-responsiveness (Supplementary Figure 3).
The relationship between the DRG-score & TMB
To gain a deeper understanding of the immunological characteristics of the risk subcategories, we examined the genetic mutations. In the high (Figure 8A) and low DRG-score groups (Figure 8B), we identified the 20 genes exhibiting the highest mutation rates. The findings demonstrated that missense mutations and multi-hit mutations were the predominant mutation types. The mutation frequencies of TTN, MUC16, DNAH5, BRAF, and PCLO were not only higher than 40% in both groups but also the most prevalent mutations in both groups. Furthermore, we examined the correlation between the risk score and TMB. The level of TMB was notably higher in the low-risk subgroup than in the high-risk subgroup (Figure 8C).
To investigate the influence of TMB status on the prognosis of patients with SKCM, we additionally performed survival analysis among the TMB subcategories. Patients with a high TMB exhibited a more favorable prognosis compared with patients with a low TMB (Figure 8D). Afterward, we combined TMB and the DRG-score to conduct survival analysis on patients with SKCM, and the DRG-score nullified the prognostic advantage of the high-TMB category (Figure 8E).
Detection of DEGs & analysis of functional enrichment
To better understand the physiological functions and pathways linked to the risk score, GO and KEGG enrichment analyses were performed on the DEGs between the two DRG-score groups. A total of 1448 differentially expressed genes (DEGs) were identified using the criteria of |Fold Change| >1.5 and FDR <0.05 (Figure 9A & B). The GO terms include Biological Process (BP) categories such as the production of molecular mediators of the immune response, Cellular Component (CC) categories such as the T cell receptor complex, and Molecular Function (MF) categories such as immune receptor activity (Figure 9C). Furthermore, the KEGG results revealed that the DEGs were predominantly enriched in pathways related to the immune system, such as the T cell receptor signaling pathway, natural killer cell-mediated cytotoxicity, and antigen processing and presentation (Figure 9D). Notably, both the enriched GO terms and KEGG pathways were linked to immunity.
Discussion
Melanoma, one of the most lethal forms of skin cancer, is distinguished by its significant invasiveness and mortality rates [13,14]. The treatment approaches for advanced melanoma have undergone significant transformations over the last ten years as a result of the implementation of targeted therapy and immunotherapy [15,16]. Nevertheless, even though targeted therapy and immune checkpoint treatments exhibit considerable potential, the rapid emergence of resistance continues to pose a largely unconquered obstacle [17-19]. Therefore, it is imperative to investigate novel approaches for treatment, and disulfidptosis may emerge as a promising area of future research. Extensive research indicates that addressing disulfidptosis could emerge as a novel approach for treating SKCM. The regulation of the redox state is closely linked to the metabolism of disulfides, serving as a significant mechanism in the occurrence and development of tumors [20]. For instance, certain cancerous cells enhance their ability to survive and endure by modifying the redox conditions within their cells [21]. Furthermore, certain research has indicated that disulfides could hold promise in the field of cancer therapy.
For instance, the anti-cancer properties of certain drugs such as cisplatin and paclitaxel are manifested through their interaction with intracellular disulfides [22,23]. Moreover, extensive studies have demonstrated that overexpression of SLC7A11 in cancer suppresses ferroptosis and thus plays a crucial role in promoting tumor growth [24-26].
The discovery of disulfidptosis subverts this traditional thinking; that is, SLC7A11 also plays an essential role in promoting disulfidptosis. Because SLC7A11 not only inhibits the occurrence of ferroptosis but also promotes disulfidptosis, a treatment that promotes ferroptosis by interfering with SLC7A11 may inhibit the occurrence of disulfidptosis. Hence, maintaining a balance between ferroptosis and disulfidptosis could emerge as a novel approach for enhancing the treatment response rate and survival of individuals with SKCM. At the same time, preclinical findings suggest that metabolic therapy using glucose transporter (GLUT) inhibitors can trigger disulfidptosis and inhibit cancer growth [8]. Future studies are required to explore the anticancer effects of disulfidptosis-inducing agents in combination with other therapies, such as immunotherapy. SKCM exhibits significant heterogeneity, leading to diverse clinical outcomes and varying responses to treatment. To tackle this problem, we explored the possibility of disulfidptosis impeding the advancement of SKCM and devised an innovative disulfidptosis-associated indicator to facilitate risk categorization and personalized treatment prognosis. The use of consensus clustering algorithms enables efficient analysis and identification of patient clusters with diverse characteristics in extensive datasets. Hence, we employed this unsupervised algorithm to detect three separate molecular subcategories (A, B, and C) by analyzing the expression levels of 9 DRGs. Compared with subtypes A and B, subtype C showed the most favorable survival outcomes with higher levels of immune infiltration. Additionally, we employed GSVA enrichment analysis to explore the differences in biological behavior among these three subtypes. Enrichment of immunity-related pathways was observed in subtype C compared with the A and B subtypes. Afterward, we acquired a total of 37 genes that were differentially expressed and shared across all three subtype comparisons. After acquiring 26 genes associated with prognosis, the LASSO algorithm was employed to identify eight essential genes (MGAT5B, SEMA6A, OCA2, SRGN, GPR143, FGL2, PTPRC, and CDH3) for model development and validation. Our risk assessment model can differentiate between high-risk and low-risk groups. The model demonstrated good accuracy in predicting the prognosis of melanoma patients, as evidenced by the Kaplan-Meier analysis, ROC curve, nomogram, and calibration plot. Afterward, we examined scores for evaluating the TME, associations with TMB, and disparities in immunotherapy and chemotherapy sensitivity within the two DRG-score groups. The findings indicate that DRGs have a significant impact on SKCM, aiding in the assessment of patient prognosis and the response to chemotherapy and immunotherapy. Additionally, these genes have the potential to serve as collaborative targets for enhancing the effectiveness of SKCM treatment.
Although SKCM is commonly regarded as a malignancy with limited treatment options, patient outcomes have significantly improved thanks to innovative therapies targeting vulnerable genes and immune checkpoints, an improvement attributable to enhanced biological knowledge and groundbreaking advancements. In recent years, ICI treatment has led to a notable rise in the 5-year survival rate for melanoma patients, from less than 5% to approximately 30% [27,28]. The goal of ICIs is to address the malfunctioning immune system and stimulate CD8-positive T cells to eliminate cancerous cells [29]. Current treatments have transformed the standard of care for patients with SKCM; however, the limited response rate and the unavoidable development of treatment resistance might hinder further advancements in treatment results. Our research may provide additional insights into the management of melanoma. To assess the response to ICIs, TIDE and IPS scores were calculated. The TIDE score is linked to two distinct methods of immune evasion: the impairment of cytotoxic T lymphocytes (CTLs) that infiltrate the tumor and the exclusion of CTLs. TIDE scores capture the correlation between the likelihood of immune evasion by tumors and the effectiveness of ICI therapy. Based on our analysis, SKCM patients with reduced DRG-scores exhibited decreased TIDE scores and a favorable response to anti-PD-1 and anti-CTLA-4 treatment. Additionally, we examined the expression of several significant immune-checkpoint-related genes in the high-risk and low-risk patient groups. The results showed that individuals with low-risk status exhibited increased levels of immune checkpoint gene expression, suggesting that patients with low risk scores could potentially gain advantages from ICI treatment. Therefore, we concluded that the levels of immune checkpoint gene expression could serve as a reliable indicator for evaluating the impact of immunotherapy in patients with SKCM. Moreover, a significant biomarker in immune checkpoint inhibitor therapy is a high tumor mutation burden (TMB-H), which helps identify tumor patients who could potentially benefit from the therapy. The basic premise is that an increased number of mutant proteins can produce antigenic peptides that boost immunogenicity. Our research demonstrated that low-risk patients displayed a high TMB and a favorable prognosis. These findings showcase the precision of our predictive model in evaluating patient risk from an alternative standpoint.
Gene mutations are abundant during the formation and progression of tumor tissue. Altered genetic material can produce tumor antigens, which can be identified by the immune system as foreign, thereby stimulating a response from immune cells [30]. The treatment of tumors greatly benefits from immunotherapy, as it exploits the ability of immune cells to identify and eradicate cancer cells [31,32]. Nonetheless, tumors efficiently inhibit immune responses (immune evasion) through the activation of negative regulatory pathways related to immune balance or by acquiring characteristics that enable them to actively avoid recognition [33,34]. Highly effective immunotherapy drugs have been evaluated in preclinical studies and phase I-III clinical trials and approved for aggressive, refractory, advanced, and metastatic melanoma [35]. Clinical trials are currently evaluating the efficacy of nivolumab and pembrolizumab, which are anti-PD-1 monoclonal antibodies, along with ipilimumab, an anti-CTLA-4 antibody, for the treatment of melanoma [36]. Research has indicated that frequently employed immune checkpoint inhibitors (ICIs) can improve both progression-free survival and overall survival in individuals with melanoma [37,38]. Our study revealed that the high-risk group exhibited a notably reduced level of immune cell infiltration compared with the low-risk group, as indicated by the risk score model. Notably, the high-risk group had a considerably shorter survival time than the low-risk group. Previous investigations on melanoma observed that the high immune score group exhibited a notably longer survival time than the low immune score group [39,40]. In this study, we put forward a hypothesis: does the level of immune cell infiltration in melanoma correlate positively with patients' survival time? Validating this question will require additional clinical data or experiments.
Figure 1. Using unsupervised clustering analysis to cluster SKCM based on differentially expressed genes. (A) The heatmap of the consensus matrix delineates three clusters (k = 3) and their corresponding correlation regions. (B) CDF curves for k = 2-10. (C) The relative change in area under the CDF curve for k = 2-10. (D-F) PCA, tSNE, and UMAP identified three DRG clusters. CDF: cumulative distribution function; PCA: principal component analysis; tSNE: t-distributed stochastic neighbor embedding; UMAP: uniform manifold approximation and projection.
Figure 2. Differentially expressed gene-based molecular clusters with distinct clinical features and TIME landscapes. (A) Survival analysis was conducted on the three DRG clusters. (B) Expression profiles of DRGs and clinicopathological characteristics between clusters. (C-E) The heat map of GSVA revealed the variations in pathways among the three clusters. (F) Abundances of infiltrating immune cells among the three clusters.
Figure 3. Development of the differentially expressed gene-related signature in the training set. (A) The Venn diagram displays the overlap of differentially expressed DRGs. (B) The univariate Cox regression analysis of 26 genes is represented in the forest plot. (C, D) The prognostic genes were analyzed using LASSO-Cox regression and the partial likelihood deviance. (E) The distribution of patients in the three DRG clusters and two DRG-score groups. (F, G) Principal component analysis (PCA) and t-distributed stochastic neighbor embedding (tSNE) reveal a clear distinction in transcriptomes between the two subcategories. (H) Analysis of overall survival (OS) in the TCGA cohort using the Kaplan-Meier method. (I) Receiver operating characteristic curves for predicting overall survival at 1, 3, and 5 years.
Figure 4. Validation of the differentially expressed gene-related signature in the GSE65904 set. (A, B) Principal component analysis (PCA) and t-distributed stochastic neighbor embedding (tSNE) plots of the GSE65904 set. (C) Analysis of overall survival (OS) in the GSE65904 set using the Kaplan-Meier method. (D) Receiver operating characteristic curves for predicting overall survival at 1, 3, and 5 years.
Figure 5. Developing a nomogram to forecast the overall survival (OS) of individuals diagnosed with SKCM in the TCGA dataset. (A & B) Univariate and multivariate Cox regression analyses were conducted to validate the independence of the DRG-score as a prognostic predictor. (C) A nomogram utilizing the DRG-score along with age and tumor stage. (D) Time-dependent receiver operating characteristic (ROC) curves of the nomogram for the prediction of 1-, 3-, and 5-year OS. (E) Calibration plots are used for internally validating the nomogram.
Figure 6. Evaluation of the TME in different risk subpopulations. (A & B) Comparison of the enrichment levels of immune-related cells and pathways in the low- and high-risk groups. (C-F) Box plots illustrate the variance in TME scores between the two DRG-score groups. (G) The relationship between the model genes and immune-related cells.
Figure 7. Assessment of responsiveness to immunotherapy using the differentially expressed gene score. (A) Comparison of tumor immune dysfunction and exclusion (TIDE) between the two DRG-score groups. (B-D) Comparison of the immunophenoscore (IPS) between the two DRG-score groups, categorized by anti-PD-1, anti-CTLA-4, or anti-(CTLA-4 plus PD-1). (E) Comparison of immune checkpoint gene expression between the two DRG-score groups.
Figure 8. Correlation between the differentially expressed gene score and tumor mutational burden. (A & B) Waterfall diagrams illustrating the somatic mutations of tumors in the high-risk and low-risk groups. (C) The tumor mutational burden varies among the subgroups with different DRG scores. (D) Survival analysis of the groups categorized by tumor mutational burden. (E) Survival analysis of the groups categorized by both tumor mutational burden and DRG score.
Figure 9. Analysis of functional enrichment for differentially expressed genes. (A & B) Heatmap and volcano plot of differentially expressed genes (DEGs). (C) GO enrichment analysis of BP, CC, and MF terms revealed the potential roles of the DEGs. (D) Possible pathways were identified through Kyoto Encyclopedia of Genes and Genomes (KEGG) pathway enrichment analysis.
The training set consisted of 454 patients who were stratified into high and low DRG-score groups based on the median risk score calculated from the score formula. The distribution of patients in the three DRG clusters and two DRG-score groups is shown in Figure 3E. The expression levels of the eight model genes in the training set were used to calculate the DRG-score with the following formula: Risk score = expMGAT5B × 0.00068 + expSEMA6A × 0.05199 + expOCA2 × 0.03466 + expSRGN × (-0.12988) + expGPR143 × 0.03095 + expFGL2 × (-0.01660) + expPTPRC × (-0.03695) + expCDH3 × 0.01336 (a minimal computational sketch of this formula follows the metadata block below). | 2024-01-17T05:07:18.839Z | 2023-06-01T00:00:00.000 | {
"year": 2023,
"sha1": "76b1068c243e4fa6caec3f4689ba733866478755",
"oa_license": "CCBYNCND",
"oa_url": null,
"oa_status": null,
"pdf_src": "PubMedCentral",
"pdf_hash": "76b1068c243e4fa6caec3f4689ba733866478755",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
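As flagged above, here is a minimal evaluation of the published DRG-score formula; the coefficients are taken from the text, while the sample expression values are hypothetical placeholders.

```python
# Direct evaluation of the published DRG-score formula (coefficients from the text).
# The expression values in `sample` are hypothetical placeholders.
COEFFS = {
    "MGAT5B": 0.00068, "SEMA6A": 0.05199, "OCA2": 0.03466, "SRGN": -0.12988,
    "GPR143": 0.03095, "FGL2": -0.01660, "PTPRC": -0.03695, "CDH3": 0.01336,
}

def drg_score(expr: dict) -> float:
    """Risk score = sum of (expression * coefficient) over the eight model genes."""
    return sum(expr[g] * w for g, w in COEFFS.items())

sample = {"MGAT5B": 2.1, "SEMA6A": 5.3, "OCA2": 0.4, "SRGN": 7.8,
          "GPR143": 1.2, "FGL2": 6.0, "PTPRC": 9.1, "CDH3": 3.3}
print(drg_score(sample))  # patients above the cohort median are "high-risk"
```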
240238085 | pes2o/s2orc | v3-fos-license | The Race of CAR Therapies: CAR-NK Cells for Fighting B-Cell Hematological Cancers
Simple Summary Over the last few years, CAR-T cells have arisen as one of the most promising immunotherapies against relapsed or refractory hematological cancers. Despite their good results in clinical trials, there are some limitations to overcome, such as undesirable side-effects or the restraints of an autologous treatment. Therefore, CAR-NK cells have emerged as a good alternative for these kinds of treatments. This review discusses the advantages of CAR-NK cells compared to CAR-T cells, as well as the different sources and strategies used to obtain these CAR-NK cells. Abstract Acute lymphoblastic leukemia (ALL) and chronic lymphocytic leukemia (CLL) are the most common leukemias in children and elderly people, respectively. Standard therapies, such as chemotherapy, achieve a five-year survival rate in only 40% of adult ALL patients, and therefore new alternatives are needed, such as immunotherapy targeting specific receptors of malignant cells. Among all the options, CAR (chimeric antigen receptor)-based therapy has arisen as a new opportunity for refractory or relapsed hematological cancer patients. CARs were designed to be used along with T lymphocytes, creating CAR-T cells, and they are presenting such encouraging results that they are already in use as drugs. Nonetheless, their side-effects, and the fact that it is not possible to infuse an allogenic CAR-T product without causing graft-versus-host disease, have prompted the use of a different cell source to solve these problems, such as Natural Killer (NK) cells. Although CAR-based treatment is a high-speed race led by CAR-T cells, CAR-NK cells are slowly (but surely) consolidating their position; their demonstrated efficacy and the lack of undesirable side-effects are opening a new door for CAR-based treatments. CAR-NKs are now in the field to stay.
Background
B cell leukemias, such as acute lymphoblastic leukemia (ALL) and chronic lymphocytic leukemia (CLL), are hematological cancers affecting young and elderly patients [1]. In a patient with ALL, large numbers of stem cells become B-lymphoblasts very quickly; these leukemic cells are functionally compromised and unable to fight infection correctly. Furthermore, leukemia cell numbers increase in both blood and bone marrow to very high levels (100 or even 1000 times more than normal); as a consequence, healthy white blood cells, red blood cells and platelets are reduced, leading to anemia, infection and easy bleeding [2]. The current protocol for the diagnosis and classification of ALL involves the study of its morphology, immunophenotype, cytochemistry, cytogenetics, and molecular genetics [3]. In particular, the immunophenotype is vital for the initial diagnostic work-up of ALL. In B-lineage ALL, the CD19 surface marker is expressed in virtually all cases, along with total or partial expression of other surface markers such as CD22, CD20 and CD33. The median content of blast cells in bone marrow is around 82% [4]. ALL is the most prevalent cancer in children, accounting for 20-25% of all cases [5], but this disease has become highly curable, with five-year survival rates above 90% [6]. In contrast, the outcome in adults is quite different, with a five-year survival rate of 40% with conventional treatments. Prognosis worsens with age, which may be due to more unfavorable cytogenetic/genetic anomalies and patient comorbidities [7].
In adults, CLL is one of the most frequent types of leukemia, and it usually evolves moderately slowly. It is a low-grade lymphoproliferative neoplasm with ≥5 × 10^9/L clonal B-cells in the peripheral blood, which usually express CD19, CD20, dim CD23, and CD5 [8]. It generally manifests during or after middle age and is exceptional in children. In CLL, a large number of hematopoietic stem cells turn into anomalous lymphocytes and do not differentiate into healthy white blood cells; as a consequence, the lymphocytes are not able to fight infection appropriately. Moreover, CLL patients do not normally show any symptoms and are often diagnosed during a routine blood test [9]. The five-year survival rate for CLL patients is 79.2%, and the disease is still incurable in many patients [10]. Furthermore, a major complication of CLL is transformation into a fast-growing type of non-Hodgkin lymphoma (NHL) called diffuse large B-cell lymphoma (DLBCL). This is known as Richter's transformation, and it happens in 2% to 10% of CLL cases. Rarely, CLL can also transform into ALL, and when this occurs, the treatment administered to ALL patients is likely to be used in CLL patients as well [11].
Standard Therapies for ALL
Over the last four decades, advances in ALL treatment have principally been made in the children and adolescent populations, with less success reported in adults [12,13]. ALL patients are mainly treated with chemotherapy, and treatment is usually split into three phases: induction, consolidation and maintenance. The aim of induction chemotherapy is to bring the leukemia to the point where it lacks detectable tumor cells (complete remission). There are several chemotherapy drug combinations for the induction phase, which generally include dexamethasone, vincristine or prednisone, an anthracycline drug such as daunorubicin or doxorubicin (Adriamycin), and L-asparaginase. Next, the aim of the consolidation phase is to eliminate any leukemia cells that may still be in the blood or bone marrow but are undetectable in tests. This reduces the risk of leukemia relapse, and it often includes another short course of high-dose chemotherapy, using methotrexate (MTX) and cytarabine (ARA-C). Lastly, after consolidation, in most cases the patient receives a maintenance chemotherapy regime of methotrexate and 6-mercaptopurine (6-MP), with the aim of keeping the ALL in remission. Occasionally, this may be given along with other drugs such as prednisone and vincristine. However, 10-20% of ALL patients are refractory, which means that the leukemia cells do not disappear with the first treatment. In that case, different or higher doses of chemotherapy drugs may be used, such as clofarabine, even though the outcome is poorer [14-17]. Other drugs such as blinatumomab (Blincyto) [18], rituximab or inotuzumab ozogamicin (Besponsa) may be a choice for ALL patients [19]. A hematopoietic stem cell transplant (HSCT) may be tried when the ALL is put into remission. Minimal residual disease is a key variable for the HSCT outcome [20]: if the ALL enters remission with the first treatment but then relapses, bone marrow and blood will be affected. The chosen treatment depends on the time elapsed from the initial treatment until the leukemia returns. In the event of a long disease-free interval, a similar treatment may be tried for a second remission; however, if the relapse appears earlier, new drugs in more aggressive chemotherapy combinations may be used [21,22].
There are two principal types of HSCT: allogenic stem cell transplants, in which the stem cells belong to a donor (bone marrow, peripheral blood or umbilical cord blood (CB)), which is the preferred transplant for ALL patients; and autologous stem cell transplants, in which the patient receives their own blood cells.
Standard Therapies for CLL
Most CLL patients have no symptoms at diagnosis, and all patients should be classified based on their risk. Patients classified as low- and intermediate-risk (~75% of patients) should be checked for disease progression every six months to a year, while those in the high and very high-risk groups (~25% of patients) should be checked every 3-6 months [23]. For the initiation of treatment against CLL, patients must present any of the following: (a) progressive marrow failure, (b) progressive lymphocytosis, (c) massive nodes with or without progressive or symptomatic lymphadenopathy, or (d) autoimmune complications of CLL [24]. Patients may also present clinical symptoms such as fever, asthenia or night sweats. Treatment choices for CLL differ widely, determined by the patient's age, their risk group classification, and the symptoms that led to treatment. Many patients live with CLL for a long time, but the disease is usually extremely hard to cure, and it has not been proved that early treatment helps prolong life. Patients without significant comorbidity are treated with rituximab, fludarabine and cyclophosphamide; patients with comorbidity are treated with obinutuzumab-chlorambucil or rituximab-chlorambucil. Finally, patients who present deletions in 17p or alterations of p53 are treated with ibrutinib, idelalisib, ofatumumab or venetoclax [25].
Relapsed or refractory ALL or CLL patients after two or more lines of treatment may need alternative therapies to treat their disease. New therapies, especially immunotherapies, are emerging to treat these and other hematological cancers. Chimeric antigen receptor (CAR) based therapy is positioning itself as one of the most promising therapies for getting rid of the malignant cells. ALL and CLL biological and clinical milestones are summarized in Figure 1.
Immunotherapies against ALL and CLL
Immunotherapy is a type of therapy that uses substances to stimulate or suppress the immune system to help the body fight cancer, infection, and other diseases [26,27]. The 2018 Nobel Prize in Physiology or Medicine was awarded to Prof. James P. Allison and Prof. Tasuku Honjo for their achievements in cancer immunotherapy by inhibition of negative immune regulation. Several types of immunotherapies are used to treat cancer, and these treatments can either help the immune system attack the cancer directly or stimulate the immune system in a more general way. Indeed, prior to its designation as the Science Breakthrough of the Year in 2013, cancer immunotherapy was already active in the treatment of hematologic malignancies [28].
In ALL, complete remission (CR) rates after chemotherapy are low, ranging between 25 and 45% [29], and most of these patients die, which leaves a lot of room for improvement. Generally, four different immunotherapies have been established to date: conjugated monoclonal antibodies, naked monoclonal antibodies (mAbs), bispecific T cell engagers (BiTEs), and chimeric antigen receptor (CAR) T cell therapy [30]. These therapies can target different antigens present on the surface of B cells. About 30 to 50% of late pre-B cell lymphoblasts express CD20, which is linked to a higher relapse rate and lower survival [31,32]. Several drugs that target this receptor are used in the clinic, such as rituximab, which has been integrated into chemotherapy programs [33]; ofatumumab, which induces higher levels of complement-dependent cytotoxicity (CDC) and antibody-dependent cellular cytotoxicity (ADCC) compared to rituximab [34]; and obinutuzumab, which is designed to intensify ADCC compared to ofatumumab and rituximab [35]. Although the Food and Drug Administration (FDA) approved obinutuzumab for first-line treatment of CLL, its role in pre-B ALL remains to be studied.
CD19 is expressed in 90% of pre-B and mature ALL lymphoblasts, providing an interesting target for immunotherapy. A BiTE construct named blinatumomab binds CD19-positive B cells and CD3-positive cytotoxic T cells. The cytotoxic T cells are activated upon binding and induce cell death via direct tumor lysis [39]. In August 2017 the FDA approved one of the most promising cellular therapy-based treatments for relapsed B-cell ALL: tisagenlecleucel (Kymriah, from Novartis). A short time later, axicabtagene ciloleucel (Yescarta, from Kite-Gilead) was approved for relapsed or refractory large B cell lymphomas. In 2020, a third treatment, brexucabtagene autoleucel (TECARTUS, Kite-Pharma), was approved by the FDA. These treatments are chimeric antigen receptor (CAR) T-cell based therapies [40]; the first two were approved in Europe by the European Medicines Agency (EMA) in June 2018, and the third, TECARTUS, was approved in December 2020.
State of the Art of CD19-CAR-T Therapies. From Bench to Current Clinical Trials Results
In the 1980s, Israeli researchers expressed chimeric TCR genes consisting of the TCR constant domains united to the variable domain of an antibody, which laid the groundwork for CAR-T treatments [42]; in 1989, Gideon Gross and Zelig Eshhar developed the first CAR-T cells at the Weizmann Institute, Israel. Some years later, Prof. Carl H. June from the University of Pennsylvania tested genetically modified CAR-Ts in humans for the treatment of cancer, and thanks to his work the first FDA-approved gene therapy, tisagenlecleucel (Novartis), was developed and commercialized. A CAR is a chimeric receptor construct consisting of an extracellular single-chain variable fragment (scFv) derived from an antibody [43] or a full-length antibody [44]; a hinge fragment, usually CD8α-derived, which acts as a "spacer" between the extracellular and intracellular parts and enhances responses initiated by the TCR [45]; a transmembrane domain; and a CD3ζ chain, or FcRγ receptor, containing an intracellular tyrosine-based activation motif. This was the structure of the first-generation CARs (1G) [46]. T cell activation can be mediated by TCR ligation of the host antigen, but two signals are needed to activate T cells fully. One signal comes through the TCR, whereas the second is triggered by recognition of CD86 or CD80 on the surface of antigen presenting cells (APCs), costimulating CD28. Consequently, during infection or inflammation, APCs upregulate CD86 and CD80 and both signals, TCR and CD28, are activated, so that T cells perform target killing with long-term persistence [47,48]. Researchers accordingly designed CARs reflecting this two-signal model of T-cell activation, combining a CD28 co-stimulatory domain with CD3ζ ITAM domains [49]. These constructs constitute second-generation CARs (2G). Furthermore, it has also been reported that other co-stimulatory domains, such as 4-1BB, support comparable in vivo improvements in CAR-T cell persistence and function [50]. Nonetheless, CAR-T cell properties can change depending on these domains: CD28-based CARs have direct antitumor efficacy, while 4-1BB-based CARs have long persistence [51]. As a result, third-generation (3G) CARs have been developed to include two co-stimulatory domains, the 4-1BB and CD28 intracellular domains [52]. CARs of 2G or 3G containing the 4-1BB domain have been reported to have greater in vivo expansion and anti-tumor activity compared to CD28 2G CARs [53]. Due to the vast heterogeneity of cancer cells in solid tumors, a fourth generation of CARs (4G), known as TRUCKs ("T cells redirected for antigen-unrestricted cytokine-initiated killing"), was developed, in which cytokines are used to armor the CARs. These CARs contain an additional modification consisting of an inducible or constitutive expression cassette for a transgenic protein, for example a cytokine, which is released by the CAR-T cell to modulate the T-cell response; as a consequence, improved T cell properties and recruitment of additional immune cells can be achieved [54]. The generational anatomy described here is summarized in the sketch below.
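A compact, purely illustrative way to keep the four generations straight; the field names and the exact domain lists are a simplification of the text above, not a formal ontology.

```python
# Illustrative summary of the CAR generations described above (a simplification,
# not a formal ontology; domain lists follow the text).
from dataclasses import dataclass, field
from typing import List

@dataclass
class CARDesign:
    generation: str
    antigen_binding: str = "scFv"       # extracellular antigen-binding fragment
    hinge: str = "CD8a-derived spacer"  # links scFv to the transmembrane domain
    costimulatory: List[str] = field(default_factory=list)
    activation: str = "CD3-zeta"        # ITAM-bearing activation domain
    extras: str = ""                    # e.g., cytokine cassette in TRUCKs

GENERATIONS = [
    CARDesign("1G"),
    CARDesign("2G", costimulatory=["CD28 or 4-1BB"]),
    CARDesign("3G", costimulatory=["CD28", "4-1BB"]),
    CARDesign("4G (TRUCK)", costimulatory=["CD28 or 4-1BB"],
              extras="inducible/constitutive transgenic cytokine cassette"),
]

for car in GENERATIONS:
    costim = " + ".join(car.costimulatory) or "none"
    print(f"{car.generation}: {car.antigen_binding} | {costim} | {car.activation}"
          + (f" | {car.extras}" if car.extras else ""))
```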
As CARs seem to be a newer and more effective way to treat cancers in relapse or refractoriness, especially hematological cancers, various clinical trials are ongoing. Most of them use T cells as the vehicle for the CARs, with more than 1,000 clinical trials worldwide, and CD19 is the most commonly used CAR antigen.
The Role of NK Cells in the Immune System
Natural Killer cells, or NK cells, belong to the innate immune system, providing rapid responses against viral infections and tumors. Usually, detection of antigens presented by the major histocompatibility complex (MHC) on the surface of infected cells triggers cytokine release by immune cells, causing lysis or apoptosis. NK cells, in contrast, have a unique ability to recognize stressed cells lacking antibodies and MHC, accelerating the immune reaction. They owe their name "natural killers" to the initial perception that they do not need prior activation to kill cells lacking "self" MHC class I antigens. As malignant cells frequently lose MHC class I expression, T cells cannot destroy them, so NK cells play a key role [55].
Although NK cells and their functions have been studied for decades, these cells were first described in 1975 as lymphocytes larger than B cells and T cells, containing distinctive cytoplasmic granules. NK cells were characterized as cells showing a co-stimulation-independent, spontaneous cytotoxic capacity, differentiating them functionally from B cells and T cells [56,57].
In the immune system, NK cells are the third major lymphocyte subset. These large granular cells constitute approximately 10-15% of lymphocytes in the blood [58]. NK cells are able to kill tumor cells and infected cells "naturally", i.e., in a spontaneous manner that needs no prior activation and is not limited by the expression of MHC molecules [57,59]. NK cells are usually defined within the lymphocyte population by a lack of CD3 and expression of CD56, a neural cell adhesion molecule (NCAM) [60].
During their development, NK cells progressively express several surface markers in an orderly way and are classified into sequential developmental stages, beginning with stage 1 (CD34+, CD45RA+). NK cell functions involve recognition of potential target cells through initial binding interactions between activating and inhibitory receptors and their ligands on the target, and integration of the signals transmitted by these receptors, which determines whether the NK cell detaches and moves on or stays and responds. Therefore, the activating and inhibitory receptors are crucial for the regulation of NK cell function. NK cells express clonally distributed inhibitory receptors named killer cell immunoglobulin-like receptors (KIRs), which recognize allotypic determinants (KIR ligands) shared by particular groups of HLA class I alleles. The regulatory mechanism mediated by these receptors is thought to protect normal cells from autologous NK cell attack, while rendering cells whose class I expression is compromised (e.g., by tumor transformation or viral infection) susceptible to NK-mediated killing [62]. The absence of HLA molecules on the membrane is not enough to trigger a response in NK cells; a larger number of activating signals is needed. Several families of activating receptors are found in NK cells: the C-type lectin family, mainly represented by the NKG2C and NKG2D receptors, which interact with DAP10 and DAP12 [63,64]; the NCR family, practically exclusive to NK cells, mainly formed by NKp30 (CD337), NKp44 (CD336) and NKp46 (CD335) [65]; and the SLAM family, which includes 2B4 (CD244) [66]. The balance between inhibitory and activating signals determines the killing outcome of NK cells, as caricatured in the toy model below.
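The "balance of signals" logic above can be caricatured with a toy threshold model. This is explicitly not a quantitative immunology model: the receptor weights and the threshold are invented for illustration only.

```python
# Toy caricature of NK-cell signal integration: the net balance of activating
# and inhibitory receptor engagement decides kill vs. tolerate.
# All weights and the threshold are invented for illustration only.
ACTIVATING = {"NKG2D": 1.0, "NKp30": 0.8, "NKp46": 0.8, "2B4": 0.5}
INHIBITORY = {"KIR:HLA-I": -1.5, "NKG2A:HLA-E": -1.0}

def nk_decision(engaged: set, threshold: float = 1.0) -> str:
    signal = sum(ACTIVATING.get(r, 0.0) + INHIBITORY.get(r, 0.0) for r in engaged)
    return "kill" if signal >= threshold else "tolerate"

# Healthy cell: HLA class I engages inhibitory KIRs, so inhibition dominates.
print(nk_decision({"NKG2D", "KIR:HLA-I"}))        # tolerate
# Tumor cell with downregulated HLA-I ("missing self") plus stress ligands.
print(nk_decision({"NKG2D", "NKp30", "NKp46"}))   # kill
```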
The role of NK cells during ALL or CLL may influence the prognosis of the disease. An increase in the number of NK cells is associated with a better prognosis in ALL and CLL [67]. The presence of NK cells in bone marrow is associated with a better prognosis and higher chances of a good response to chemotherapy in ALL patients [68]. Besides, a strong NK cell phenotype at the time of ALL diagnosis seems to be related to control of the disease after chemotherapy [69]. In CLL, the role of NK cells remains controversial: while some studies described CLL patients as presenting defects in NK cell cytotoxicity [70], other groups demonstrated that NK cell functions are not affected in CLL patients [71]. Thus, the role of NK cells is still uncertain.
Advantages and Disadvantages of CAR-NK and CAR-T Therapies
Despite the good results of CAR-T cells, there are some expected side-effects. On the one hand, the main and most serious side-effect of CAR-T cell therapy is cytokine-release syndrome and its dangerous form, the "cytokine storm", in which T cells are massively activated, triggering a cascade of pro-inflammatory cytokines that cause flushing, fever and dyspnea. Although an acute cytokine storm can potentially be lethal, the anti-interleukin-6 receptor antibody tocilizumab has proved to be an effective treatment [72]. Immune effector cell associated neurotoxicity syndrome (ICANS) has also emerged as a serious side-effect after CAR-T cell therapy. On the other hand, as these patients are usually heavily medicated, it is not always viable to expand and manufacture their own autologous modified T cells from lymphocytes, due to the scarce lymphocyte count or the poor state of the cells. Hence, the manufacture of off-the-shelf allogenic CAR-T cells from healthy donors' lymphocytes is promising in many respects, although some concerns keep them from use in clinical trials [73]. Allogenic T cells express the human leucocyte antigen (HLA), which can produce a mismatch between donor and recipient, leading to severe, even lethal, graft-versus-host disease (GvHD) [74]. This points to new cell sources: less alloreactive T cell subsets such as CD45RA-negative lymphocytes [75] and Natural Killer (NK) cells are good candidates because they suppress GvHD by inhibiting activated, alloreactive T cells without causing GvHD themselves [76,77]. NK lymphocytes constitute an attractive source for CAR-based treatments, owing to their innate ability to kill malignant or infected cells without prior activation or HLA restriction [78]. Moreover, due to the shorter lifespan of NK cells, B cell depletion could be less severe for the patient [79]. However, T cell expansion occurs differentially across subpopulations and polyclonal populations [80]; this can lead to poor expansion and persistence, which is directly correlated with patient relapse [81]. Another point to address is that, given a good initial cell product, T cells are easier to expand and less resistant to genetic engineering, thanks to the use of a CD3/CD28 activation kit [82]. Although NK cells appear to be harder to expand and transfect, some groups have achieved impressive fold-expansion numbers by co-culturing them with activation beads or modified feeder cells [83,84], along with good transduction rates [85]. The mechanism of action of NK cells differs from that of T cells. On one hand, NK cells interact with target cells through activating and inhibitory receptors, and the outcome is determined by the accumulated signal strength; if activated, they release cytotoxic granules, such as perforin and granzyme, and secrete a variety of cytokines [86]. T cells, however, are activated through antigen presenting cells (APCs). This triggers a signaling cascade from the TCR complex, which transforms the T cell from a resting state to a state of activation and proliferation [87]. T cells also need co-stimulation from APCs and cytokines in order to attack tumor cells [88] (Figure 2).
Finally, T cells and NK cells are differentially activated. Although some signaling domains are conserved between these two types of lymphocytes, such as CD3ζ and 4-1BB, other molecules typically present in T cells, such as CD8α and CD28, are absent in NK cells [89]. NK cells can operate through several adapter domains for downstream signaling, such as the CD3ζ, DAP10, DAP12, and FcRγ chains. Both CD3ζ and FcRγ chains signal via CD16, NKp30, and NKp46; DAP10 mediates signaling through NKG2D, whereas DAP12 signals through activating KIRs, NKG2C, and NKp44 [90,91]. These differences between CAR-T and CAR-NK therapy are shown in Table 2. Table 2. Most noteworthy differences between CAR-T based treatments and CAR-NK based treatments.
Designing NK Cells Specific CAR Constructs
As mentioned before, NK cells can be activated through CD3ζ, resulting in ADCC mediated by the CD16 receptor. Thus, the vast majority of CAR constructs used for engineering NK cells contain this signaling component [92], classically present since 1G CARs. Although traditional 2G CARs designed for T cells, that is, CARs containing CD3ζ and CD28 or 4-1BB domains, are functional in NK cells [85], new approaches need to be explored for NK cells (Figure 3). Taking into account the other main signaling pathways that activate NK cells, new CAR constructs have been designed in which signaling domains derived from 2B4, NKG2D, DAP10 or DAP12 have shown promising results [93-96]. A CAR construct integrating 2B4 in the endodomain significantly enhanced all aspects of NK-cell activation, exerting a powerful costimulatory effect in NK cells [96]. Regarding NKG2D, coupling this ectodomain with the DAP10 and CD3ζ cytoplasmic signaling endodomains increased NKG2D expression on CAR-NK cells and enhanced cytotoxicity against tumor cell lines [97]. Nevertheless, including the DAP10 cytoplasmic endodomain seemed to be unnecessary, since CAR-NK cells without DAP10 outperformed those with the DAP10 domain [98]. When designing a new CAR, the interaction between the transmembrane domain and the endodomain must be taken into consideration to guarantee functionality of the construct. Accordingly, the design of a perfect CAR for NK cells remains a challenge in which a deep understanding of NK cell and CAR signaling is crucial to enhance potency and performance in vivo [99].
NK Cells from Several Cell Sources: Adult Peripheral Blood, Umbilical Cord Blood, Hematopoietic Progenitors from Cord Blood and Human-Induced Pluripotent Stem Cells
NK cells can be obtained from different cell sources, as shown in Figure 4a. Firstly, NK cells can be isolated from adult blood (AB) or cord blood (CB) PBMCs by negative selection using magnetic cell isolation. NK cells can be cultured with different cytokines, such as IL-2 and/or IL-15, to ensure their survival, proliferation and higher cytotoxicity [100]. Fully mature and functional NK cells can be obtained from these cell sources, although their numbers are limited and hard to expand. Secondly, among all the potential uses of human induced pluripotent stem cells (hiPSCs), the generation of hematopoietic stem cells is one of the most widely studied, and several protocols have been proposed for obtaining CD34+ cells in vitro [101-103]. This first requires CD34+ hematopoietic precursors to be obtained through differentiation protocols from human embryonic stem cells (hESCs) and hiPSCs. Several stromal cell lines are used in co-culture systems with hESCs/hiPSCs; OP9 cells, for instance, are the most popular [104]. In the first data reported, up to 20% CD34+ cells were obtained by co-culturing hESCs with OP9 cells. These in vitro generated hematopoietic stem cells can then be used to obtain different cells of the hematopoietic lineage, such as T cells [105], platelets [106], red blood cells [107] and NK cells. HiPSCs could become a new source for immunotherapies involving NK cells, as several groups have developed methods for producing clinical-scale NK cells from hiPSCs [108,109]. Moreover, as mentioned in the clinical trials section, a hiPSC-derived NK cell pharmacological product (FT500) was developed in 2019 for treating several solid tumors. Not only was this product created, but hiPSC-derived NK cells have also been used for CAR-based treatments (FT596).
Finally, hiPSCs constitute a source for NK cells, as do CD34+ cells from umbilical cord blood, since these cells have therapeutic uses beyond hematopoietic stem cell transplantation [110,111]. They have been taken into consideration because they can provide large numbers of NK cells [112,113], as already extensively described [114]. In general, all the NK cell sources mentioned above can provide fully mature and functional NK cells suitable for immunotherapy, but the highest proliferation rates described correspond to NK cells derived from CD34+ cells and hiPSCs, in comparison with peripheral and umbilical cord blood sources.
State of the Art of NK Cell Therapies and CD19-CAR-NK Therapies with the Recent Clinical Trial Data in Refractory B Malignancies Patients
There are several clinical trials taking place to treat different types of cancer with NK cells: combinations of cryosurgery and NK-based immunotherapy for advanced kidney cancer (NCT02843607), NK cell-based immunotherapy as maintenance therapy for small-cell lung cancer (NCT03410368), and NK cells along with IL-2 following chemotherapy to treat advanced melanoma or kidney cancer (NCT00328861). hiPSC-derived NK cells, named FT500, are also being used in combination with immune checkpoint inhibitors (ICIs) in a clinical trial to treat subjects with advanced solid tumors (NCT03841110). In April 2019, Fate Therapeutics announced that the first patient treated with FT500 had successfully completed an initial safety appraisal. The patient received three once-weekly doses of FT500, and the treatment cycle was well tolerated, with no dose-limiting toxicities or severe adverse events reported during the initial 28-day observation period. Not only are NK cells able to treat solid tumors, but they also play a key role in immunotherapies against hematological cancers like acute myeloid leukemia (AML), through high doses of these cells [115] or the infusion of NK cells after chemotherapy along with IL-2 (NCT02763475). Nonetheless, expanded and stimulated NK cells or high-dose NK cell therapy are not the only options when treating patients.
Although T cells have typically been used in CAR technology-based therapy, with more than 400 clinical trials ongoing and three commercial products (Kymriah, Yescarta and TECARTUS), NK cells are also emerging as one of the new promises in this field [116], as shown in Figure 4b. Due to their low transduction rates, poor in vivo expansion and short life span, NK cells were not initially considered for this kind of therapy. Nevertheless, newer protocols that enhance viral transduction efficiency and achieve prosperous expansion of these cells have made a space for NK cells in the CAR therapy field [84]. Furthermore, allogenic NK cells have a major advantage over allogenic T cells: they could be used as a "universal" product, since they lack a TCR and therefore do not cause GvHD [76]. The interest in using allogenic NK cells for this kind of therapy is increasing, and there are already 13 clinical trials using CD19 CAR-NK cells (NCT03056339, NCT03690310, NCT00995137, NCT01974479, NCT04639739, NCT02892695, NCT04887012, NCT04796675, NCT03824964, NCT05020678, NCT03579927, NCT04796688, NCT02134262), as well as numerous preclinical studies using NK cells from different sources as vehicles. Firstly, NK-92 is an established NK cell line derived from a non-Hodgkin's lymphoma patient [117]. NK-92 cells, which display the features of activated human NK cells, are applied in clinical practice for allogenic adoptive cellular immunotherapy. Due to the absent expression of most currently known KIRs, NK-92 cells target and kill a wider range of tumor cells, with enhanced cytotoxicity in vitro and in vivo [118]. NK-92 cells are irradiated before infusion, avoiding the induction of NK-derived leukemia. Although the NK-92 cell line has been proved safe for clinical use [119], these cells are not as consistent as expected at killing lymphoid blasts by themselves [120]. Taking this into account, the use of a CAR along with these cells could successfully kill NK-resistant lymphoblastic leukemia cells. Several preclinical studies confirm that NK-92 cell lines are a good source for CAR-based therapy, as they possess consistent cytotoxic activity, a good expansion rate and a low tumorigenicity risk when irradiated and transfused into patients [121-123].
Secondly, allogenic primary NK cells from adult peripheral blood (AB) or umbilical cord blood (CB) could represent feasible, safe, off-the-shelf CAR-cell products to treat various malignancies such as hematological cancers. AB CD19-CAR-NK cells not only successfully kill CD19-expressing target cells, but also retain the function and expression of their native activating receptors, preserving their activity [83]. However, AB NK cells vary more from donor to donor in number, and they expand and activate less in vitro than CB NK cells [85,124]. Some studies show higher antitumor activity of CB cells compared with other NK cell sources, which justifies the use of CB-derived immunotherapy [125]. Moreover, CB units stored in blood banks could be used for this purpose. CB CAR-NK cells have shown great performance against their target cells, and more flexibility for expansion [84]. These CB CAR-NK cells are currently being used in a clinical trial at MD Anderson Cancer Center targeting CD19 cells, with great results. CAR-NK cells from cord blood were administered to 11 patients with relapsed or refractory CD19-positive cancers (NHL or CLL). An anti-CD19-CD28-CD3ζ CAR was used for transduction, and the retroviral vector included an IL-15 gene and a suicide switch. Seven of the eleven patients achieved a complete remission, exhibiting a significantly higher early expansion of CAR-NK cells compared to the non-responders. Despite the high response rate, no side-effects associated with the treatment were reported, even in KIR-ligand mismatch cases (5/11), and there was no interleukin-6 increase, which proved the safety of the treatment. The published observation of circulating CAR-NK cells by flow cytometry was limited to the first three weeks [126].
Finally, in the last few years, hiPSCs have arisen as one of the most promising cell sources for personalized medicine. HiPSC-NK cells can be manufactured from standardized cells, resulting in a homogeneous, clinical-scale NK cell population [78]. These standardized NK cells would be excellent candidates for CAR-based treatment, as they show a phenotype and anti-tumor activity similar to AB NK cells [127]. As a matter of fact, an iPSC-derived, universal, off-the-shelf CAR-NK cell immunotherapy for B-cell hematological cancer has been manufactured as a product called FT596. This product is being used in a phase I clinical trial (NCT04245722) along with anti-CD20 monoclonal antibodies. FT596 was administered to 20 patients to assess safety and efficacy in the first, second and third dose cohorts. Of the 14 patients who received a second and third dose, 10 achieved an objective response [128].
Nowadays, the clinical indications of CAR-NK cell therapy in B cell hematological cancer remain an open question. Various CAR-T cell therapy clinical trials have confirmed safety and efficacy as a therapeutic procedure for relapsed or refractory B-ALL patients when used as a bridging approach before or after HSCT. Therefore, CAR-NK cells may also serve as a bridging strategy, through which patients can reach a low pre-infusion minimal residual disease (MRD) status before administration of allo-HSCT. The aim of the NCT02892695 clinical trial is to determine the safety and best dose of CD19 CAR-NK cells used as a bridge therapy in patients who intend to undergo HSCT [129].
Other potential uses of CAR-NK cellular therapy include patients who lack sufficient T cell numbers for autologous CAR-T cell product manufacture. Patients with aggressive ALL or CLL would benefit from allogeneic CAR-NK cells, permitting early treatment, either alone or later in combination with CAR-T cells. CAR-NK therapy could also serve CD19+ patients who have relapsed after CAR-T therapy, or patients who develop high toxicity (ICANS and/or cytokine-release syndrome) after CAR-T infusion.
Challenges and Future Perspectives
Due to the boom in immunotherapy for hematological cancers, great strides have been made in this field, especially with the arrival of CAR-T treatments. However, despite the successful results obtained with autologous CAR-T treatment, there are still some worrying aspects. Undesirable side-effects, such as cytokine storms, ICANS or GvHD caused by allogenic CAR-T cells, are driving the CAR field towards new alternatives. The lack of side-effects or GvHD in allogenic treatments has put NK cells in the spotlight.
For the last few years, CAR-NK cells have proved to be an optimal product for the treatment of hematological malignancies in vitro. Recently, the report from one of the few ongoing clinical trials proved not only their efficacy but also their safety when using allogenic CAR-NK cells. Hence, CAR-NK cells can serve as an off-the-shelf product to treat refractory hematological malignancies, without the patient-specific product required by CAR-T treatments. Moreover, the possibility of obtaining in vitro generated NK cells for this purpose could lead to massive production of universal allogenic CAR-NK cells, which could reduce costs and widen access to this treatment. Combinations of CAR-NK products with other immunotherapies, and even with CAR-T cells, are becoming a promising option.
Therefore, the benefits of CAR-NK cells over CAR-T cells augur promising applications in cellular immunotherapy against hematological malignancies, as an alternative or combination cell drug. Nevertheless, there are still challenges to be addressed. For instance, because of the heterogeneity of NK cells, with their various functional features, the selection of appropriate NK cell subsets (e.g., killer, naïve or memory cell subsets) to specifically expand and arm CAR-NK cells has yet to be explored. NK cells are known for their transduction difficulty; although several protocols with retrovirus and lentivirus have been successfully developed, there is still room for new techniques to improve NK cell transduction, such as mRNA electroporation or CRISPR/Cas9 technology. As CAR-NK cells are intended to be an "off-the-shelf" product, cryopreservation is also a crucial step. To avoid losing NK cells in the thawing process and to re-establish their function, it is essential to identify the appropriate cryopreservation media and protocol for CAR-NK cell therapy success. Due to their short persistence in vivo, the NK cell cytolytic effect could also be restricted, although they are probably unable to trigger cytokine storms or on-target/off-tumor effects; continuous cytokine support or several infusions of CAR-NK cells may be needed. Additionally, various NK cell sources have been studied (peripheral blood, cord blood or hiPSCs), and in the future the most appropriate one could be used for refractory malignancies. Moreover, the best configuration of CARs to boost the activation, proliferation, cytolytic activity and cytokine secretion of NK cells has not yet been found; perhaps the future of these CAR constructs for NK cells lies in exploring NK cell activating domains such as NKG2D, DAP10, DAP12 and 2B4 in order to improve their performance. Future directions of CAR-NK cell therapy could also include administration in combination with other therapies, such as lymphodepletion, radiation, immune checkpoint blockade or even CAR-T cells. Notwithstanding, as a result of the recent advances, quick developments and future challenges for improvement, CAR-NK cell-based immunotherapy constitutes an encouraging scenario for cancer treatment. Because tumor-associated antigen recognition does not require matching patients' HLA molecules, it is possible to manufacture off-the-shelf NK cell banks rather than manufacturing individualized CAR-NK cells. In conclusion, as more evidence from clinical trials is procured in the coming years, CAR-NK cell therapies could provide meaningful progress in tumor immunotherapy, and CAR-NK therapies in combination with other immunotherapies, or even with CAR-T cells, may pave a new way for CAR-NK cell-based immunotherapy in the future.
Conclusions
CAR-NK cells have excellent potential as an innovative, "off-the-shelf" cellular immunotherapy against hematological cancer that could be quick, accessible and safe for clinical practice. With the growing safety and encouraging work reported in preclinical studies and clinical trials, together with progressive achievements addressing the remaining challenges, it is envisioned that CAR-NK cell therapy will continue to advance and contribute to significant improvements in the survival of relapsed or refractory hematological cancer patients. Funding: This work was supported by the Health Department of the Basque Government (Grant 2020111058), the Economic Development and Infrastructures Department of the Basque Government (KK-2020/00068), Projects "PI18/01299" and "PI21/01187", funded by Instituto de Salud Carlos III and co-funded by the European Union (ERDF) "A way to make Europe", "ICI21/00095", funded by Instituto de Salud Carlos III and co-funded by the European Union (NextGenerationEU), and the Inocente Inocente Foundation (FII18-003-CPS). LH was supported by the Jesus Gangoiti Barrera Foundation, the Asociación Española contra el Cáncer (AECC) and the Fundación Mutua Madrileña. | 2021-10-31T15:19:47.498Z | 2021-10-28T00:00:00.000 | {
"year": 2021,
"sha1": "a95d5b9acc613cd432a291168a151d474b427732",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2072-6694/13/21/5418/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "214b2647096317194a57c63c204982d81a3deebd",
"s2fieldsofstudy": [
"Biology",
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
53410202 | pes2o/s2orc | v3-fos-license | Estimates of heritabilities and genetic correlations in Polled Hereford cattle selected for feed conversion
Performance records of 1459 Polled Hereford cattle were analyzed to estimate heritabilities and genetic correlations of beef cattle traits from birth to maturity. Estimates of heritability (h2) for birth weight (BWT), weaning weight (WWT), yearling weight (YWT), scrotal circumference (SC), yearling height (YHT), mature height (MHT), and mature weight (MWT) were moderate to high, with the exception of WWT (h2 = .14), and ranged from .38 to .72. The traits associated with feed conversion, daily feed intake (INT), average daily gain (ADG), and feed conversion (CONV), had heritabilities of .24, .25, and .14, respectively. Genetic correlations (rg) between the growth traits (BWT, WWT, YWT, YHT, MHT, MWT, and SC) were positive and ranged from .20 to .88. The rg = .99 between milk production (MILK) and maternal weaning weight (MWW) indicates that the traits are essentially the same and supports the method by which many breed associations calculate and report expected progeny differences (EPDs) for milk production. The rg = .42 between ADG and INT, rg = .27 between INT and CONV, and rg = -.82 between ADG and CONV suggest that faster gaining cattle have greater feed intakes and are more efficient.
Introduction
Feed costs represent a significant economic input for beef producers. To attain greater efficiency in production systems, beef producers should consider including feed conversion in selection programs. Reported heritabilities suggest that selection for more efficient cattle can be effective. However, one of the major stumbling blocks in selecting for feed conversion is the difficulty with which it is measured. It requires measurement of individual animal feed intakes and weight gains, a process that is expensive and not feasible for most beef producers. Therefore, beef producers need to identify traits that have favorable genetic associations with feed conversion, are easily and cost-effectively measured, and can be incorporated readily into a selection program. Our purpose was to estimate the heritabilities and genetic correlations of beef cattle traits from birth to maturity and provide producers with an indirect means of improving feed conversion.
Experimental Procedures
The data set examined in this study contained the performance records of 1459 Polled Hereford cattle born from the spring of 1967 through the spring of 1979. These data were the result of a project conducted at Kansas State University in which animals were selected on the basis of improved feed conversion. This herd was assembled in 1967 using animals donated by breeders from several states. The original animals (42 females and 5 males) represented 34 herds from Colorado, Illinois, Kansas, Missouri, Oklahoma, and Pennsylvania. From 1967 to 1971, animals in the herd were mated randomly to increase the size of the herd and to provide a foundation herd from which the selection and control herds would be established. Beginning with the 1971 breeding season, cows were assigned randomly to either the selection or control herds. Once these herds were established, they were closed, and no other genetic material was introduced. Each year in the selection herd, the two bulls exhibiting the best feed conversion (feed/gain) were selected as herd sires and used for 2 consecutive years. In the control herd, the first bull born to the oldest herd sire was selected to replace his sire. These bulls were used in the control herd for approximately 6 years.
Cows in both the selection and control herds were maintained on native Kansas tallgrass prairie throughout the year and were supplemented in the winter. Cows were bred to calve in March and April. Breeding was primarily by natural service during a 60- to 70-day breeding season. Progeny were weaned in the fall at approximately 200 days of age. Following a 3- to 4-week weaning period, bull calves were placed on an individual 140-day postweaning performance test, which allowed for selection for feed conversion. The ration consisted of 25% prairie hay, 15% dehydrated alfalfa, 43% corn, 12.5% soybean meal, 4% molasses, and .5% salt. Heifers were group fed and not selected on the basis of improved feed conversion. In both the selection and control herds, cows were culled if they: (1) were not pregnant at the end of the breeding season, (2) had severe structural problems, or (3) were horned. Birth weight (BWT), weaning weight (WWT), yearling weight (YWT), yearling height (YHT), daily feed intake (INT), average daily gain (ADG), feed conversion (CONV), scrotal circumference (SC), scanned ribeye area (REA), scanned backfat thickness (FAT), mature height (MHT), mature weight (MWT), and milk production (MILK) records were available for analysis. The numbers of observations, means, and standard deviations are presented in Table 1.
A multiple-trait, derivative-free, restricted maximum likelihood procedure (MTDFREML) was used to analyze the data generated in this study. A full animal model was used to calculate the genetic and phenotypic (co)variances. The fixed effects used in the model included age of dam (2, 3, 4, 5-10, and >10 years) and contemporary group (sex and year of birth). For the analyses of MHT and MWT, age of cow was the only fixed effect included in the model. Year of milking and age at milking were the fixed effects used in the analyses of milk production. The ages at which various measurements were recorded were used as covariates for the respective traits. Average weight maintained over the 140-day test period was used as a covariate in the analyses of INT and CONV. Maternal and permanent environmental effects were included as random effects in the analyses of BWT and WWT. A schematic of this model structure is sketched below.
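The animal model described above has the standard mixed-model form y = Xβ + Zu + e. The sketch below only assembles the design matrices; the REML machinery of MTDFREML itself is not reproduced, and the records shown are hypothetical.

```python
# Schematic of the animal model y = X*beta + Z*u + e described above.
# X carries fixed effects (contemporary group, age of dam) plus covariates;
# Z maps records to animals' additive genetic effects. MTDFREML's REML search
# for the variance components is not reproduced here; records are hypothetical.
import numpy as np
import pandas as pd

records = pd.DataFrame({
    "animal":        [1, 2, 3, 4],
    "contemp_group": ["M1971", "F1971", "M1972", "F1972"],  # sex x birth year
    "age_of_dam":    ["2", "3", "5-10", "2"],
    "age_days":      [205, 198, 210, 201],            # age at measurement
    "wwt_kg":        [190.0, 172.0, 201.0, 168.0],    # weaning weight
})

X = pd.get_dummies(records[["contemp_group", "age_of_dam"]], drop_first=True)
X.insert(0, "intercept", 1.0)
X["age_days"] = records["age_days"]          # linear covariate
Z = np.eye(len(records))                     # one record per animal here
y = records["wwt_kg"].to_numpy()
# REML (as implemented in MTDFREML) would now estimate sigma_a^2 and sigma_e^2,
# with the pedigree relationship matrix A giving cov(u) = A * sigma_a^2.
```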
Results and Discussion
Heritabilities (h2) provide an indication of the amount of genetic change that can be made through selection. The heritabilities estimated for the traits in this study would generally be considered moderate to high (Table 2). The traits associated with weight, BWT, WWT, YWT, and MWT, had heritabilities of .38, .14, .39, and .47, respectively, generally within the range of reported estimates. The maternal heritabilities for BWT and WWT were .14 and .18, respectively. Traits related to structure usually have high heritabilities, and the same held true for our study: yearling height had a heritability of .52, and MHT had a heritability of .72. In this study, the traits associated with feed conversion (feed/gain) included ADG, INT, and CONV, which had heritabilities of .25, .24, and .14, respectively. Ultrasound technology allows for the measurement of various beef cattle traits without slaughter. Scanned backfat thickness (h2 = .25) and REA (h2 = .19) were moderately heritable. Scrotal circumference was found to be highly heritable, with an estimate of .48. The heritability estimate for MILK in this study was .19.
Genetic correlations (rg) measure the strength of the relationship between the breeding values of two traits and provide an estimate of how traits will react in a selection program; the defining formulas are sketched below. The genetic correlations estimated for the traits in this study are presented in Table 2. The genetic correlations between growth traits (BWT, WWT, YWT, MWT, YHT, and MHT) were strong and positive, ranging from .33 to .88. The strength of these correlations was expected, because many of the same genes are involved in the expression of the growth traits and because of the part-whole relationship that many of the traits share. The genetic correlations between traits associated with feed conversion (INT, ADG and CONV) and other traits in the study were of various magnitudes and signs. The rg = .42 between ADG and INT, rg = .27 between INT and CONV, and rg = -.82 between ADG and CONV suggest that faster gaining cattle have greater feed intakes and are more efficient. Average daily gain on test had negative associations with BWT (rg = -.01) and WWT (rg = -.22) and positive associations with YWT (rg = .49) and MWT (rg = .72). This indicates that animals with poor preweaning performance had greater average daily gains during the postweaning test period, and that animals with greater postweaning gains were heavier when yearling and mature weights were measured. Larger framed animals had greater postweaning average daily gains, as evidenced by the genetic associations between ADG and YHT (rg = .65) and between ADG and MHT (rg = .97). Negative genetic correlations were found between INT and BWT (rg = -.35) and between INT and WWT (rg = -.61), suggesting that animals with poor preweaning performance had greater feed intakes during the postweaning performance test period. The positive genetic association between INT and YWT (rg = .59) indicates that animals with greater feed intakes during the postweaning period had heavier weights at the end of the test. The genetic associations between SC and many of the growth traits (BWT, WWT, YWT, YHT, and ADG) were positive. Scrotal circumference had a positive association (rg = .25) with MHT and a negative association (rg = -.11) with MWT. This suggests that animals with larger scrotal circumferences reached maturity sooner and had lighter mature weights. The genetic correlations between REA and other growth traits (BWT, WWT, YWT, YHT, ADG, and SC) ranged from .18 to .70, suggesting that faster growing cattle have the propensity for larger REA. The genetic association between MILK and maternal WWT was very strong (rg = .99), suggesting that these traits are essentially the same. Milk expected progeny differences (EPDs), published by many breed associations, are calculated as maternal weaning weight; the strong correlation between MILK and MWW lends support to this method of estimating an animal's genotype for milk production.
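The two statistics reported throughout reduce to simple ratios of variance components; the components below are invented round numbers, chosen only so that the first call reproduces one of the reported values.

```python
# Definitions behind Table 2, with invented round-number variance components.
import math

def heritability(var_additive: float, var_phenotypic: float) -> float:
    """h2: fraction of phenotypic variance explained by additive genetics."""
    return var_additive / var_phenotypic

def genetic_correlation(cov_a12: float, var_a1: float, var_a2: float) -> float:
    """rg: correlation between the additive breeding values of two traits."""
    return cov_a12 / math.sqrt(var_a1 * var_a2)

print(heritability(9.5, 25.0))              # 0.38, matching the BWT estimate
print(genetic_correlation(6.0, 9.5, 12.0))  # ~0.56, purely illustrative
```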
Table 2. Heritabilities and Genetic Correlations of Traits Analyzed. Traits: BWT, MBW, WWT, MWW, YWT, YHT, INT, ADG, CONV, SC, REA, FAT, MHT, MW, MILK.
"year": 1999,
"sha1": "2994250ad2a1f86aab723709f275d6a89e3d9e12",
"oa_license": "CCBY",
"oa_url": "http://newprairiepress.org/cgi/viewcontent.cgi?article=1855&context=kaesrr",
"oa_status": "HYBRID",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "519cf69fdb957af2ccf4db75e7995b94c4754232",
"s2fieldsofstudy": [
"Agricultural and Food Sciences"
],
"extfieldsofstudy": [
"Biology"
]
} |
The Role of Intra-Operative Duplex Ultrasonography Following Translabyrinthine Approach for Vestibular Schwannoma
Objective Sigmoid sinus (SS) stenosis is a complication of the translabyrinthine approach. Velocity changes in the SS measured by intra-operative doppler ultrasound may help in identifying patients at risk for sinus occlusion. Patients SS velocity was measured using doppler ultrasound prior to opening the dura and again prior to placement of the abdominal fat graft. Intervention Data collected included: patient age, surgical side, sinus dominance, tumor volume, intra-operative doppler ultrasound measurements, post-operative venous sinus imaging, anticoagulation, and morbidities and mortalities. Main Outcome Measure SS patency and velocity. Results Eight patients were included in the analysis (ages 22 to 69 years). Four had left-sided and four had right-sided craniotomies. Sigmoid sinuses were either right-side dominant or co-dominant. The mean velocity ± standard deviation (SD) prior to dura opening and abdominal fat packing was 23.2 ± 11.3 and 25.5 ± 13.9 cm/s, respectively, p = 0.575. Post-operative Magnetic Resonance Venography (MRV) imaging showed four sigmoid sinus occlusions; seven patients showed sigmoid sinus stenosis, and one internal jugular vein occlusion. One patient had post-operative Computed Tomography Venography (CTV) only. Of the four patients with MRV occlusions, CTVs were performed with three showing occlusion and all four showing stenosis. One patient with internal jugular vein occlusion on MRV received warfarin anticoagulation. There was one cerebrospinal fluid leak requiring ear closure, one small cerebellar infarct, and one facial nerve palsy (House-Brackmann Grade 3). Conclusion SS velocity changes before and after tumor resection were not predictive of sinus occlusion. We hypothesize that sinus occlusion may be caused by factors other than thrombosis, such as external compression of the sinus secondary to abdominal fat grafting.
INTRODUCTION
Sigmoid sinus (SS) stenosis and/or occlusion is a known complication of the translabyrinthine approach (TLC) for vestibular schwannomas. The etiology of stenosis/occlusion may be presumed to be thrombosis of the sinus. This may be recognized during surgery, such as when the sinus is packed due to injury during exposure, or unrecognized, such as thrombosis occurring secondary to thermal injury during drilling or during prolonged exposure to heat from the operative microscope. The risk of propagation of clot from a thrombosed sinus may in turn lead to therapeutic anticoagulation. There have been a number of retrospective analyses looking at sinus thrombosis diagnosis and treatment strategies, although none have yielded a consensus (1)(2)(3)(4)(5).
The goal of the present study was to determine whether sinus occlusion could be recognized intra-operatively with the use of duplex ultrasonography of the exposed sigmoid sinus. To our knowledge, there are no studies that have evaluated intra-operative flow of the SS during a TLC using doppler ultrasonography. This technique may yield an etiology or potential intervention for SS thrombosis or compression. We describe the technique of using intra-operative doppler ultrasound as a method for assessing velocity changes equating to narrowing of the SS. We believe that this may aid in changing operative technique as well as directing post-operative management, including obtaining venous sinus imaging or determining the need for medical therapy.
METHODS
This study was approved by the institutional review board at St. Vincent's Hospital (IRB# SV-19-026). The data from all consecutive patients undergoing translabyrinthine approach during the study period were prospectively gathered and analyzed.
All patients underwent a standard translabyrinthine approach for resection of vestibular schwannomas. In brief, this procedure was performed as follows: a semicircular incision was made with separate scalp and T-shaped muscular flaps. Mastoidectomy, labyrinthectomy, and drilling of the internal auditory canal (IAC) were performed with round cutting and diamond burrs of various sizes. The bone posterior to the SS was drilled in order to completely skeletonize the SS and allow for an optimal operative corridor, including intermittent compression of the sinus during dissection. If present, any remaining layer of bone overlying the SS following translabyrinthine exposure (aka "Bill's Island") was removed prior to dural opening. The dura was then opened, retracted posteriorly, and tumor resection proceeded in the usual fashion. An abdominal fat graft was harvested and woven into the dural opening. The site was then covered with a titanium mesh cranioplasty followed by layered closure of the muscle, dermis, and skin.
Ultrasonographic evaluation of the SS including velocity measurement was performed at two timepoints during surgery: before opening the dura and after tumor resection, prior to placement of the abdominal fat graft. A GE Logiq ultrasound (General Electric, Boston, MA) in M-mode with the hockey stick probe ( Figure 1A,B) was used for measurement of the SS velocity along with 2D imaging. The venous sinus ideal waveform was identified and a still picture taken. The velocity (cm/s) of blood flow was measured as the peak amplitude (Figure 2A,B). Three measurements were taken from the inferior, middle, and superior segments of the SS at each time point and used for statistical analysis. Statistical analysis was performed using Wilcoxon signed rank test (IBM SPSS Statistics 24).
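As an illustration of the paired analysis just described, the snippet below runs a Wilcoxon signed-rank test on eight velocity pairs; the numbers are placeholder values, not the study's measurements.

```python
# Paired comparison of sigmoid sinus velocities (cm/s) measured before
# dural opening and again before fat-graft placement, using the Wilcoxon
# signed-rank test named in the methods. Values below are placeholders.
from scipy.stats import wilcoxon

pre_dura = [23.1, 15.4, 40.2, 18.7, 9.9, 30.5, 25.0, 22.8]
pre_fat = [26.0, 14.8, 43.1, 20.2, 11.5, 28.9, 27.3, 31.0]

stat, p = wilcoxon(pre_dura, pre_fat)
print(f"Wilcoxon statistic = {stat:.1f}, p = {p:.3f}")
```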
RESULTS
Eight patients undergoing a translabyrinthine approach for vestibular schwannoma removal underwent intra-operative doppler ultrasound measurements and had associated post-operative SS imaging (Table 1). Ages ranged from 22 to 69 years. There were four left-sided and four right-sided craniotomies. Sigmoid sinuses were either right-side dominant or co-dominant.
The mean velocity ± SD prior to dura opening and abdominal fat packing was 23.2 ± 11.3 and 25.5 ± 13.9 cm/s, respectively (p = 0.575). The post-operative MRV was done at a mean of 1.5 months (range, 0.3-5 months). Post-operative MRV imaging showed four patients with sigmoid sinus occlusions. All eight patients had sigmoid sinus compression and one had an internal jugular vein occlusion. One patient had only a post-operative CTV due to the inability to get an MRI. In the four patients with MRV occlusions, confirmatory contrast-enhanced CTVs were obtained with no discrepant findings. The one patient with internal jugular vein occlusion on MRV received warfarin anticoagulation.
MRI T1 showing the fat packing in patients with no SS stenosis (A) and stenosis (B) in the early postoperative period.
Complications included one CSF leak requiring ear closure, one small cerebellar infarct, and one facial nerve palsy (House-Brackmann grade 3). None of the patients had symptomatic complications except the one with facial nerve palsy, which improved gradually.
DISCUSSION
Sigmoid sinus (SS) stenosis and/or occlusion is a known complication of surgery for vestibular schwannomas (1). Post-operative non-invasive imaging (either MRV or CTV) has been used to assess sigmoid sinus patency when compromise is suspected (4,6), such as in patients complaining of symptoms of elevated intracranial pressure, including headache and CSF leak. Recent studies have shown that the incidence of post-operative stenosis of the lateral sinus following translabyrinthine surgery may be higher than previously thought (2). The clinical significance of asymptomatic post-operative stenosis of the sigmoid sinus following lateral skull base surgery, however, is not established, especially with regard to whether or not prophylactic anti-coagulation is required in the presence of an asymptomatic stenosis (1, 2). In our opinion, the need for anti-coagulation also depends on the presumptive etiology of the stenosis: stenosis, or even occlusion, due to external compression of the sinus can be treated more expectantly, as there is presumably little or no risk of clot propagation within the sinus, whereas thrombosis caused by damaged endothelium may result in a higher risk of morbidity.
Vascular complications including arterial and venous infarcts, venous sinus occlusion, and hematomas are known to occur after skull base surgery and can be a major source of morbidity and mortality. The identification and etiology of SS occlusion and compression are necessary to help mitigate associated morbidity. Post-operative imaging has classically been used to assess sigmoid sinus patency when compromise is suspected. Unfortunately, this does not allow for real-time monitoring and potential intervention. We conducted intra-operative measurements of sigmoid sinus velocity before and after opening the dura in order to evaluate whether intra-operative thrombosis of the sinus could be reliably detected, and whether thrombosis was the etiology of post-operative stenosis. By measuring the velocity of the sigmoid sinus at multiple time points throughout the surgical case, we may better understand the possible cause of sinus occlusion or compression. Etiology has been linked to multiple intra-operative techniques such as thermal injury from the operating microscope and high-speed drills, electrocautery, and fat graft compression. Although avoiding these may be difficult, there are maneuvers that may diminish risk, such as water-cooling the drill bit, reducing the intensity of the microscope light, limiting electrocautery on the sinus, and avoiding overpacking the approach site with abdominal fat.
We did not see any differences between the pre-dural-opening and the pre-closure SS velocities (Figure 2A,B). However, the study was not powered to detect a difference between the two time points, only to establish the feasibility of using doppler ultrasound to measure velocities. In line with this goal, our measured velocities of the sigmoid sinus aligned with a prior study that measured the SS velocity with endovascular techniques, which found a median normal SS velocity range of 10-67 cm/s and an SS velocity range of 49-182 cm/s in sinus stenosis (7). Interestingly, there were no major changes in the velocity between our two measured time points in spite of the post-operative imaging demonstrating significant stenosis and/or occlusion of the sinus. Had the sinus stenosis been caused by intra-operative thrombosis, we would expect to see increases in the pre-closure measurements, indicating a narrowing of the SS. This was not observed. Rather, our data support the conclusion that stenosis of the sinus occurs after our last ultrasound measurement; therefore, it is only appreciated on the post-operative imaging (Figure 2C,D). We believe the most likely explanation of our results is that stenosis of the sinus followed compression of the sinus from excessive packing of the fat graft and subsequent mesh cranioplasty. Importantly, after placement of the mesh cranioplasty the ability to use doppler ultrasound is limited (Figure 3A,B).
Limitations: Our study is limited by (i) its small sample size, (ii) its retrospective nature, and (iii) being conducted at a single center. However, we believe the data are sufficient to conclude that SS velocity is not predictive of SS stenosis on non-invasive post-operative imaging. Further studies with larger numbers of patients are necessary to assess the sensitivity of doppler ultrasound in the diagnosis of intra-operative sinus thrombosis, and/or in predicting post-operative stenosis. Finally, clinicians deciding whether therapeutic anti-coagulation is indicated for post-operative SS stenosis or occlusion may need to take into consideration that causes other than sinus thrombosis, such as abdominal fat grafting and titanium mesh cranioplasty, may contribute to post-operative stenosis seen on non-invasive post-operative vascular imaging.
CONCLUSION
Intra-operative duplex ultrasound is a novel technique for assessing SS patency during translabyrinthine approach. Using intra-operative duplex ultrasound may provide immediate information regarding thrombosis of the SS which may in turn assist in intra-and post-operative management.
DATA AVAILABILITY STATEMENT
The original contributions presented in the study are included in the article/supplementary material, further inquiries can be directed to the corresponding author/s.
ETHICS STATEMENT
The studies involving human participants were reviewed and approved by House Clinic IRB. Written informed consent for participation was not required for this study in accordance with the national legislation and the institutional requirements.
"year": 2022,
"sha1": "6bd5258ec4ae287287c855e4ce04c6314d4ae8bb",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Frontier",
"pdf_hash": "6bd5258ec4ae287287c855e4ce04c6314d4ae8bb",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Evaluating a LARC Expansion Program in 14 Sub-Saharan African Countries: A Service Delivery Model for Meeting FP2020 Goals
Objectives In many sub-Saharan African countries, the use of long-acting reversible contraceptives (LARCs) is low while unmet need for family planning (FP) remains high. We evaluated the effectiveness of a LARC access expansion initiative in reaching young, less educated, poor, and rural women. Methods Starting in 2008, Marie Stopes International (MSI) has implemented a cross-country expansion intervention to increase access to LARCs through static clinics, mobile outreach units, and social franchising of private sector providers. We analyzed routine service statistics for 2008–2014 and 2014 client exit interview data. Indicators of effectiveness were the number of LARCs provided and the percentages of LARC clients who had not used a modern contraceptive in the last 3 months (“adopters”); switched from a short-term contraceptive to a LARC (“switchers”); were aged <25; lived in extreme poverty; had not completed primary school; lived in rural areas; and reported satisfaction with their overall experience at the facility/site. Results Our annual LARC service distribution increased 1037 % (from 149,881 to over 1.7 million) over 2008–2014. Of 3816 LARC clients interviewed, 46 % were adopters and 46 % switchers; 37 % were aged 15–24, 42 % had not completed primary education, and 56 % lived in a rural location. Satisfaction with services received was rated 4.46 out of 5. Conclusions The effectiveness of the LARC expansion in these 14 sub-Saharan African FP programs demonstrates vast untapped potential for wider use of LARC methods, and suggests that this service delivery model is a plausible way to support FP 2020 goals of reaching those with an unmet need for FP.
Introduction
Over two hundred million women and girls in developing countries lack access to contraceptives; the Family Planning 2020 (FP2020) initiative aims to reach 120 million new users by 2020 [3,20]. In sub-Saharan Africa, the region with the greatest unmet need for contraceptives, provision over the past 30 years has relied predominantly on short-term methods (condoms, pills and injectables) [26]. Despite being cost-effective and highly efficacious [22], long-acting reversible contraceptive (LARC) availability and use is low while unmet need for family planning (FP) remains high, particularly in rural and remote regions of sub-Saharan Africa [2,21].
Population-based surveys indicate that the average unmet need for modern methods of contraception is 34 % in West Africa and 31 % in East and Southern Africa (settings with high total fertility rates) [23,26], and only one in four women in Africa uses a modern method of contraception [18]. Unintended pregnancies are common; 44 % of pregnancies in Eastern Africa in 2012 were unintended, 55 % in Southern and 26 % in West Africa [19]. Access to FP services is particularly low among rural, less educated, and poorer women [23,25,26]. Availability, perceived costs, lack of provider skills, and misperceptions about modern contraceptives and their risks and benefits are barriers to uptake [8,23,27].
Intrauterine devices (IUDs) and implants give effective long-term protection against pregnancy, with between 0.05 and 0.8 % of women experiencing pregnancy failure during the first year of use [7]. LARC use is associated with high user satisfaction and convenience, and LARCs have low discontinuation rates compared to short-term methods [9]. The expansion of access to LARCs as part of the full range of FP methods, is a critical component of FP service delivery programs that aim to address high rates of unintended pregnancy and curb high unmet need for FP. Previous studies have shown success in expanding access to IUDs in sub-Saharan Africa, Latin America and Asia, with younger and less educated women reached through demand generation approaches and complementary service delivery mechanisms [1,11]. Introduction of both implants and IUDs is critical for reaching goals set out by the FP2020 initiative to increase contraceptive prevalence, reduce unmet need and expand method mix. However, neither implementation experiences nor evaluations of program effectiveness in reaching those with unmet need for FP have been widely documented. In this paper, we describe the effectiveness of Marie Stopes International's (MSI's) integrated service delivery intervention designed to expand access to and knowledge of LARCs among women living in 14 sub-Saharan Africa countries.
We defined LARC users as women who chose IUD or implant services when presenting at an MSI service delivery site in the study countries.
MSI's LARC Expansion Intervention
This multi-country expansion intervention employed an integrated service delivery approach by expanding a network of providers in urban, peri-urban, and rural settings through four service delivery channels, coupled with a wide range of demand creation activities. Approaches are described below and in Table 1. Further detail on the approach and experiences of implementation can be found in a previously published article [10].
Service Delivery Approaches
LARCs were delivered through MSI's three main service delivery channels: static clinics, mobile outreach units, and social franchising of private sector providers. One program utilized a fourth mechanism of service delivery (Marie Stopes Ladies), where nurses and midwives provide services in rural and peri-urban communities as described in Table 1. In all channels, providers counselled on and offered short term methods (including condoms), LARCs and, where available, permanent methods, to ensure that women could choose the method that best suited their lifestyle, their fertility intentions and their contraceptive preferences. Comprehensive integrated family planning counselling was provided to all clients. This included client-centred two-way communication and active listening; assessing life style preferences and circumstances; assessing knowledge and information gaps; provision of information about the client's chosen method including the benefits, risks, complications, and associated side effects; method of use or procedural information and alternative options available; use of models, leaflets, flip charts and examples of contraceptive methods; counselling on dual protection and STI/HIV risk assessment; provision of referrals for needed sexual and reproductive health services; and information on what to do or where to go in case of problems.
Task Sharing Initiative
The sharing of tasks or clinical procedures such as contraceptive counseling, and insertion or removal of implants and IUDs between different cadres of non-physician providers was implemented across service delivery channels wherever both legal and feasible.
Demand Creation Approaches
The service delivery models integrate a wide range of demand creation approaches to ensure women are aware of the contraceptive choices available to them and able to access MSI LARC services. Such activities promote the entire range of contraceptives offered, but where knowledge of a particular method(s) is low, or where certain misperceptions (such as incorrect beliefs about the health impacts of contraception or fears of infertility when using reversible methods) are a barrier to uptake, the programs aim to address these information gaps. Misperceptions of contraception were also addressed during contraceptive counselling at service delivery sites. Strong relationships with communities were developed by working with religious leaders, use of community health workers and peer educators, and partnering with local health authorities and local government clinics [13][14][15]. Demand-side financing approaches such as vouchers (in Ethiopia, Kenya, Madagascar, Sierra Leone and Uganda) further increased accessibility of LARCs and a full range of other contraceptive services [5].
Ensuring Quality
MSI maintains the quality of services provided across the 14 programs through upholding minimum standards for service delivery: a range of clinical guidelines and standard operating procedures that all programs must adhere to. All service providers receive training on FP methods, which includes counseling and choice of methods, and receive supportive supervision. All outlets (clinic, mobile outreach unit, social franchisee, Marie Stopes ladies) receive an internal audit at least annually, and each year randomly selected outlets receive a quality technical assurance (QTA) visit conducted by a medical advisor external to the program. Where necessary, QTA visits are followed up with additional training, supportive supervision and further visits. Some programs undertake mystery client studies (also known as simulated patient studies) to better understand provision practices from the client perspective. In addition to the above, social franchisees are required to meet minimum standards before being signed up, and must maintain those standards. These activities are rolled out with the objective of ensuring the maximum possible clinical effectiveness, client safety and a positive client experience.

Table 1 (excerpt). Demand generation approaches: use of local media (e.g., radio spots); education and awareness raising through community health workers and satisfied clients; roadshows; paper-based flyers and posters.
Study Methods and Procedures
This evaluation used two sources of data: routinely collected health management information system (HMIS) data and client exit interviews. Routine service data from 2008 to 2014 for the 14 country programs were collected and analyzed. Service data were collected via a paper-based or electronic HMIS. Where possible, data were collected at the client level; otherwise, service-level or aggregate-level data were collected for each facility or team providing services. Data were collated monthly and reported to a country-level central support office in aggregate form for quality checks. Any inconsistencies in data reporting were resolved by the country-level support office. Subsequently, data were sent to MSI's central support office for further quality checks, and data discrepancies were resolved between the two levels of support offices. Client exit interviews were conducted between April and December 2014, with the majority conducted between September and December 2014. When a country had 40 or fewer facilities or service delivery sites in the service delivery channel, all were included in the sample. When there were more than 40 facilities/sites, it was not considered feasible to visit all facilities/sites, so a systematic sample of sites was taken. A skip pattern was determined based on the number of facilities/sites in the service delivery channel, and the skip pattern was used to select sites from a list which had been ordered by the average daily number of clients, to ensure that facilities/sites of a variety of sizes were selected. At the facility/site, a skip pattern was used to systematically select clients to be asked for an interview, for a set number of days per facility/site. The skip pattern was determined based on the average number of client visits per day at that site, to ensure that clients were selected across different times of day. More interviews were conducted at facilities/sites with a higher client flow due to the use of a standardized skip pattern and the set number of days per facility/site. The minimum sample size was determined for each service delivery channel in each country to ensure a representative sample of clients for the period of data collection, providing indicators with 95 % confidence intervals of not more than ±10 %. The sample size was calculated for a hypothetical key indicator with coverage of 50 % for a conservative sample size estimate. In service delivery channels where not all facilities/sites were included, the sample size was doubled to account for the design effect of clustering by facility/site. The minimum sample size was also increased by 10 % to allow for non-response, giving a minimum sample size of 107 in service delivery channels where all facilities/sites were sampled, and 214 in channels where a selection of sites was sampled.
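As a check on the stated minimums, the short computation below reproduces the sample sizes of 107 and 214 from the usual normal-approximation formula with a 50 % proportion and ±10 % precision at 95 % confidence; rounding each intermediate step up is our assumption, since the text does not spell out the rounding scheme.

```python
# Reproducing the stated minimum exit-interview sample sizes (107 and 214):
# 95% confidence (z = 1.96), +/-10% precision, conservative proportion 50%.
import math

z, p, d = 1.96, 0.50, 0.10
n_base = math.ceil(z**2 * p * (1 - p) / d**2)   # 97

n_all_sites = math.ceil(n_base * 1.10)          # +10% non-response -> 107
n_sampled_sites = math.ceil(n_base * 2 * 1.10)  # x2 design effect, +10% -> 214

print(n_base, n_all_sites, n_sampled_sites)     # 97 107 214
```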
Clients were interviewed after receiving a service from MSI using a standardized questionnaire, and were interviewed one-on-one by trained research assistants. Clients were asked about their socio-demographic characteristics, contraceptive behaviours, and satisfaction with any services received. All clients provided informed consent prior to the interview. The average response rate was 96 % across 8 countries that collected response rate information, and ranged from 79 to 99 %. Response rates were not adequately reported from 6 of the programs. Data were entered into standard data entry forms using Epi Info, and went through two rounds of data cleaning (in country and at head office) to assess and ensure data quality. MSI's independent ethics review committee approved the client exit interview protocol. Ethical clearance was not sought for the collection of routine HMIS data as only aggregate data were available from the global system used to extract data (e.g. number of implants provided in country X in year X), and no clientlevel information is available in this system.
The percentage of clients who were ''adopters'' (women who had not used a modern method of family planning in the last 3 months) and ''switchers'' (women who switched from a short-term contraceptive method to a LARC) were used as proxy indicators of the program's effectiveness in addressing unmet need for long-acting contraception. ''First-time users'' is the commonly used metric by which programs have documented their success in reaching additional women with FP services; however, this measure underestimates the extent to which a program is reaching women with unmet need. The ''adopter'' metric was developed to better capture this group; as well as first-time users, it includes women who might have used contraception previously but are currently at risk of unintended pregnancy [17].
Research staff familiar with the local contexts in each country classified the location where clients received services as rural, urban or peri-urban, except in Zimbabwe, where location was not recorded. The poverty status of clients was assessed using the Progress Out of Poverty Index (PPI) in all countries except Zimbabwe and Madagascar, where the multidimensional poverty index (MPI) was used [6,16]. Clients were considered to be living in extreme poverty if living on less than purchasing power parity-adjusted $1.25 a day (according to the PPI) or if multidimensionally poor (according to the MPI). As part of the client exit interview survey, clients were asked to use a Likert scale to rate various aspects of service delivery, including waiting times, length of time with a health care provider, quality of advice and information, and friendliness and respect from staff. Scores of 1 (indicating very bad) to 5 (indicating very good) were given to each aspect. There were low levels of missing data across almost all variables, ranging from 0.24 % missing marital status to 1.13 % missing age, and indicators were calculated using only available data. Missing data were higher for the PPI indicator (3.57 %), as only respondents who answered all ten PPI questions should be included in the overall score [6].
To allow comparison to national figures, the proportions of our participants who were aged less than 25 years, had not completed primary education, and lived in extreme poverty were compared to regional data from Demographic Health Surveys (DHS) and the World Bank PovCalNet website.
Service statistics data from HMIS were stored in Microsoft Excel 2011. All exit interview data were analyzed in STATA version 13 (College Station, Texas, USA). Exit interview data were summarized using descriptive analyses. Differences in socio-demographic and reproductive characteristics between IUD and implant users were assessed using Chi-squared tests and t-tests. Data were adjusted during analysis for the sampling design effect.
All the exit interview results presented are weighted by client flow; that is, the number of LARC clients that came through each channel and each country program in 2014.
Findings

Service Provision of LARC in 14 Countries
Between January and December 2014, 1,703,576 LARC services were delivered (1,334,566 implants and 369,010 IUDs) through the 14 sub-Saharan Africa programs. From 2008 to 2014, uptake of IUDs increased by 429 % and uptake of implants by 1567 % ( Table 2). The number of LARC services delivered in 2014 was over ten times that in 2008 (an increase of 1037 %).
MSI programs in East and Southern Africa drove most of the total growth in LARC services expansion, with 1,303,541 services delivered in 2014. Nevertheless, the West African programs included in this evaluation, which are fewer in number, had high growth between 2008 and 2014 (LARC services grew 6439 % over the period) but made up a smaller share of total LARC services. In 2014, 24 % of all our LARC services were delivered by the six included West African programs (Burkina Faso, Ghana, Mali, Nigeria, Senegal, and Sierra Leone), while the remaining 76 % of services were delivered by programs in East and Southern Africa.
Socio-Demographic Characteristics of LARC Clients
In total, 3816 LARC users (3214 implant users and 602 IUD users) participated in exit interviews after receiving services from static clinics, mobile outreach units, social franchises and Marie Stopes ladies in the 14 national programs. Thirty-seven per cent were aged 15-24, 42 % had not completed primary education, 84 % were married or living with a partner, and 56 % lived in rural locations. We observed some notable differences between IUD and implant users. IUD users were older (28 % aged over 35 years compared to 15 % among implant users, P < 0.01) and more educated (24 % had completed secondary education or above compared to 16 % of implant users, P = 0.03). IUD users were also more likely to live in an urban location (41 % compared to 28 % of implant clients, P < 0.01). The two groups of users had similar marital status and parity (Table 3).
Comparison to Demographic Health Survey Datasets
Over a third (37 %, 95 % CI 34-39) of interviewed LARC users were aged 15-24 years in 2014, ranging from 14 % in Nigeria to 52 % in Malawi, compared to a cross-country DHS average proportion of LARC users of 13 % (range: 2 % in Zambia to 27 % in Sierra Leone). Of MSI's LARC users, 42 % did not have complete primary education (95 % CI 40-45), ranging from 0 % in Uganda to 84 % in Burkina Faso. This compares to a cross-country DHS average proportion of 45 % of LARC users (range: 5 % in Zimbabwe to 89 % in Ethiopia). World Bank poverty estimates could not be confined to LARC users, so we compared the poverty status of MSI clients to that of the entire national population. The proportion of poor within the MSI client population, 38 % (data not shown), was lower than the average of 49 % in the 13 countries with comparison data available (excluding Zimbabwe). However, 44 % of LARC users served through outreach (the service delivery channel that expands access to the most underserved populations) were poor, which is similar to the overall population average.
Adoption of Contraception and Switching from Short-Term Methods
About half (46 %) of LARC users were adopters (women who had not used FP in the previous 3 months): 49 % at mobile outreach; 45 % in static clinics; 33 % in social franchises. About half (46 %) switched from a short-term method: 44 % at mobile outreach; 45 % in static clinics; 54 % in social franchisees ( Table 4). The remaining 8 % of clients were continuing users of long-acting contraception. Table 5 contains data on reach of underserved groups by service delivery channel, excluding Marie Stopes Ladies, a channel only available for one country (Madagascar). Statistically significant differences were observed for variables related to poverty, education, and urban/rural location. Clients receiving a LARC method at outreach sites were more likely to be living in extreme poverty than at static clinics or social franchise clinics. Fifty-four per cent of social franchise clients had not completed primary education, compared to 44 % of outreach clients and 20 % of static clinic clients. Outreach is a predominantly rural service delivery channel, with 71 % of clients visiting sites in a rural location. Overall, similar proportions of LARC users were aged under 25 at all three service delivery channels, though static clinics and social franchise clinics had higher proportions of under-25 s than outreach sites in West Africa.
Acceptability of LARC/Services Among Women
High levels of client satisfaction were reported for LARC clients across all service delivery channels in the 14 country programs. Over 99 % of LARC users indicated that they would use MSI services in the future and 99 % indicated that they would recommend MSI to a friend. The mean score for overall satisfaction with services received was 4.46 out of 5.
Discussion
This paper describes the effectiveness of a LARC expansion intervention in 14 sub-Saharan African countries and demonstrates the untapped potential for wider use of LARCs across the region. Between 2008 and 2014, there was a 1037 % increase in the use of MSI LARC services across 14 countries in sub-Saharan Africa; from 149,881 services in 2008 to 1,703,576 in 2014. This LARC expansion initiative has been successful in expanding access to contraception to a diverse group of women with high unmet need for FP, through MSI's three main service delivery channels (static clinics, outreach, and social franchise clinics), plus Marie Stopes Ladies in Madagascar. The majority of LARC provision was in East and Southern Africa, while West African provision was lower, possibly due to the more recent initiation of these programmes, lower levels of donor funding available, relatively smaller population sizes, and social desirability of many children [12]. Implant uptake was higher than IUD uptake, suggesting higher acceptability of the method which may be due to ease of insertion, method of insertion, length of efficacy, or preference for hormonal methods, but more research is needed to fully understand the variations in uptake.
Complementary service delivery channels enabled programs to expand access to LARCs to individuals across socio-demographic divides (Table 5) while serving those who previously had an unmet need for long-acting methods and increasing demand for LARCs. The service delivery sites attained high levels of client satisfaction. The results presented here demonstrate that mobile outreach services are reaching underserved populations in rural areas with quality services, and support the use of outreach services to overcome some of the structural barriers to effective and high quality service delivery which often exist in rural locations, such as restricted mobility, poor-quality health facilities, lack of trained personnel, and inadequate information about contraceptive choices [24,26]. Furthermore, 46 % of LARC users had not used a modern method of contraception over the previous 3 months, and 46 % switched from a short-term to a long-term contraceptive method at the time of visit, demonstrating that this initiative was able to expand the available method mix and address unmet need in this population. This initiative also reached a high proportion of younger women, especially in comparison with country-level data on method mix from the DHS; this is particularly noteworthy, as this group usually faces greater levels of unmet need. MSI's comprehensive service delivery model utilizing static clinics, mobile outreach units, and social franchises, and innovative approaches including alternative finance mechanisms (vouchers), demand generation in advance of outreach visits, and task sharing strategies in mobile outreach units, increased demand for and uptake of LARCs, especially among underserved populations. Programmatic evidence from initiatives in other countries has demonstrated similar success in increasing availability of LARC services. In 2003, PSI Nepal's Sun Quality Health Network created a social franchise with private clinic partners across the country [4]. Through social franchised stationary and mobile clinics and health fairs to improve counseling for long-acting permanent methods, this initiative provided 2000 IUDs and 6000 sterilizations over a span of 3 years. Similarly, early public-private partnerships in Ghana and Tanzania resulted in significant increases in the number of IUDs and female sterilization services provided [4]. These examples show that improving access to contraceptive choices requires utilization of robust service delivery models that incorporate innovative delivery mechanisms, public-private partnerships, voucher schemes and task sharing, in order to reach those most in need of FP services.
Limitations
As this was not an experimental study, we cannot separate the effects of various program expansion components on the results observed. We also cannot explore the cost-effectiveness of this initiative in this analysis, though this has been explored for expanding IUD provision in Africa in a previous publication [11].
Data for this study were extracted from exit interviews, which represent a cross-sectional sample of our clients, and therefore may not be representative of our entire client population across the year if there are seasonal differences in family planning uptake and clientele. Exit interview samples were calculated for MSI's entire client base, and LARC users were not specifically sampled. Courtesy bias and the validity and reliability of the Likert scale are limitations associated with satisfaction data and the instrument used to collect this information. Satisfaction scores are generally high in exit interviews, and follow-up studies may be needed to better assess client experience. Classification of facilities and sites as urban or rural was completed by research staff in each country and was not standardized between countries, reducing our ability to make cross-country comparisons. Lastly, our analysis utilized poverty proportions for the overall population from the World Bank and compared these to the proportion of poor in our study population (LARC users). This comparison of country-representative samples in World Bank data to all MSI LARC users is not a like-with-like comparison and is a limitation of this analysis; further research is required to accurately determine effectiveness in reaching poorer populations.
Recommendations
In order to increase women's contraceptive choices in sub-Saharan Africa, and to expand access to quality LARC services, we recommend that FP organisations: (1) use complementary service delivery channels to reach populations in varying locations and with different socio-economic profiles; (2) engage with private providers to build capacity to provide high-quality, client-focused LARC services; (3) expand availability of LARC services in rural areas, through the use of outreach services; (4) increase public awareness about the benefits of LARCs by employing method-specific marketing and interpersonal communications campaigns with a mixed-media approach; (5) ensure that task-sharing of clinical tasks and procedures follows WHO guidelines where legal and feasible; and (6) safeguard the quality of family planning service provision using minimum standards, provider training, monitoring and supportive supervision.
Family planning organisations must also address ethical issues by creating mechanisms that ensure a full choice of methods are available and encouraged and ensuring removal and re-insertion services continue to be available when methods expire or women decide to change method or choose to have children.
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
"year": 2016,
"sha1": "b615b06a652b1abbb681ee06bc8aaed4d0435b2c",
"oa_license": "CCBY",
"oa_url": "https://link.springer.com/content/pdf/10.1007/s10995-016-2014-0.pdf",
"oa_status": "HYBRID",
"pdf_src": "PubMedCentral",
"pdf_hash": "fd3a628e918340ebcaec78a5e41b2d3ca9c1927d",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
A VARIATIONAL GAMMA CORRECTION MODEL FOR IMAGE CONTRAST ENHANCEMENT
Image contrast enhancement plays an important role in computer vision and pattern recognition by improving image quality. The main aim of this paper is to propose and develop a variational model for contrast enhancement of color images based on local gamma correction. The proposed variational model contains an energy functional to determine a local gamma function such that the gamma values can be set according to the local information of the input image. A spatial regularization of the gamma function is incorporated into the functional so that the contrast in an image can be modified by using the information of each pixel and its neighboring pixels. Another regularization term is also employed to preserve the ordering of pixel values. Theoretically, the existence and uniqueness of the minimizer of the proposed model are established. A fast algorithm can be developed to solve the resulting minimization model. Experimental results on benchmark images are presented to show that the performance of the proposed model is better than that of the other testing methods.
1. Introduction. Contrast enhancement plays an important role in computer vision, pattern recognition, and image processing based on the improvement of visual quality. In many applications, we often encounter low contrast digital images, for example, images with uneven illumination, a large area of shadow or background, or a fully black target. Such low contrast images are obtained because of many different factors, including illumination and imaging angles. In order to restore the realistic scene, it is necessary to perform image processing procedures to enhance image contrast.
In general, there are three kinds of methods for enhancing dimmed images: spectral methods, histogram methods, and spatial methods, see [21] for an overview. Spectral methods are based on wavelet processing. For example, Tang et al. [31] proposed an image enhancement technology by using a multi-scale contrast measure in the wavelet domain. In [13], wavelet transform was proposed to perform the image enhancement in the wavelet domain with a non-linear operator applied to the wavelet coefficients.
Histogram methods transform the grayscale input image to an output image with a specified histogram. Global histogram equalization (GHE) [11] is one of the most popular histogram methods for efficient implementation. In [28], the histogram of the input image was calculated locally according to both the mean pixel values of the local regions and the local cumulative distribution functions. In [6], an automatic transformation technique based on gamma correction was proposed, and the transformation function was formulated by using the probability distribution function (corresponding to histogram) of the input image. In [34], Wang and Ng proposed a variational approach containing an energy functional to determine a local transformation such that the histogram can be redistributed locally. In [33], Wang et al. proposed a variational model containing an energy functional to adjust the pixel values of an input image directly so that the resulting histogram was redistributed to be uniform. Arici et al. [1] proposed a variational framework by making a trade-off between the histogram of the input image and the uniform one. Chen et al. [4] proposed to divide the histogram specification into sub-histogram specifications recursively. Variational approach based on the minimization of a fully smoothed l 1 -TV functional and its fast version were proposed in [3,18,19]. A two step algorithm for color image enhancement by using a hue and range preserving color adjustment was proposed by Nikolova et al. in [16,17].
Most spatial methods are based on Human Visual System (HVS). In [20], a total variation model for Retinex was proposed for image enhancement. In [2], Beghdadi and Negrate gave a new contrast measure based on the edge detection operators and the visual perception criterion. In [22], the average local contrast measure was increased within a variational framework which preserved the hue of the original image by coupling the channels. In [23], Polesel et al. introduced an adaptive filter which is used to control the enhancement in different regions. In [5], Cheng and Xu proposed a direct fuzzy contrast enhancement method which aimed to use the maximum fuzzy entropy principle to map an input image into the fuzzy domain, and then enhanced the input image. Rizzi et al. [25] proposed Automatic Color Equalization (ACE) based on a perceptual hypothesis, and Provenzi et al. [24] further proposed to work in the wavelet domain in order to reduce the computation time.
Gamma correction is another important contrast enhancement method with a varying adaptive parameter γ in the model. Many gamma correction methods and related generalizations have been proposed and studied in the literature, such as the linear gamma correction method [35] and nonlinear gamma correction methods [6,30]. In [35], the idea of the proposed gamma correction is to decrease the pixel values in the low grayscale and to increase the pixel values in the high grayscale, while keeping the pixel values in the middle range of grayscale. In order to handle different illumination effects, an adaptive gamma correction method that modifies the gamma values by using two nonlinear functions was given in [30]. However, linear or nonlinear functions used to correct the illumination may be uniform across different regions and patterns of an input image. Instead of using a fixed value, the gamma function can be adjusted by statistical information extracted from input images. In [12], an adaptive gamma correction method was proposed to use the cumulative distribution function to slightly modify the associated statistical histogram. All the above gamma correction models are global, and they may not be able to restore the local details of low contrast input images. Therefore, it is necessary to set different gamma values for different regions in practical image contrast enhancement.
In this paper, we propose and develop a variational model for contrast enhancement based on local gamma correction. Different from other local adaptive gamma correction methods [29,10], the proposed model is of variational type. In [29], Shi et al. used a three-level thresholding algorithm to segment the image into three gray levels (dark, medium tone, and bright). The local gamma correction method was applied to these three levels respectively, and then the input image was linearly stretched. In [10], the most appropriate gamma value was chosen by computing the k-nearest neighbors of the feature vectors of each region. The main contribution of this paper is to propose a variational approach containing an energy functional to determine a local gamma function such that the gamma values can be set automatically according to local information of the input image. A spatial regularization of the gamma function is incorporated into the functional so that the contrast in the input image can be modified by using the information of each pixel and its neighboring pixels. In particular, H^1-norm regularization is employed in the regularization procedure. Another H^1-norm regularization is also employed to preserve the ordering of pixel values. Theoretically, the existence and uniqueness of the minimizer of the proposed model are established. Experimental results on benchmark images are reported to show that the performance of the proposed model is better than that of the other testing methods (GC, GRC-AGC [7], LRC-AGC, and GHE [11]).
The outline of this paper is as follows. In Section 2, we present related work on gamma correction. In Section 3, we give the proposed variational model and the theoretical results. In Section 4, we develop the algorithm to solve the resulting model. In Section 5, numerical examples are shown to demonstrate the effectiveness of the proposed model and the proposed algorithm. Finally, some concluding remarks are given in Section 6.
2. Related work.

2.1. Global raised cosine function. The principle of gamma correction is that a transformation function is applied to an input image such that the contrast of the output image is enhanced. In general, the transformation function is formulated as follows:

$$T(r) = r_{\max}\left(\frac{r}{r_{\max}}\right)^{\gamma},$$

where $r_{\max}$ is the maximal brightness of the input image (for example, it is equal to 255 for an 8-bit image), and γ is generally a fixed value for contrast adjustment [14]. If γ is less than one, the enhanced image becomes brighter. If γ is greater than one, the enhanced image becomes darker. The main issue of this approach is that the value of γ is fixed, and it is independent of pixel locations and of the statistics of the grayscale values of a set of neighborhood pixels. Therefore, the approach generates an enhancement stereotypical for all kinds of grayscale values, image patterns and regions of input images. In [12], Huang et al. proposed a new adaptive gamma correction model by using a weighting distribution (AGCWD). The AGCWD model assigns several different values of γ to their corresponding grayscale pixel values. Note that the pixels in bright regions tend to become saturated, which leads to poor contrast in the bright regions. Saha et al. [7] proposed a novel raised cosine function based adaptive gamma correction (RC-AGC) for efficient global contrast enhancement. The values of γ are determined by the raised cosine function:

$$\gamma(r) = \frac{1}{2}\Big(1 + \cos\big(\pi \cdot \mathrm{cdf}_w(r)\big)\Big), \quad \mathrm{cdf}_w(r) = \frac{\sum_{k \le r} \mathrm{pdf}_w(k)}{\sum_{k} \mathrm{pdf}_w(k)}, \quad \mathrm{pdf}_w(r) = \frac{\mathrm{pdf}(r) - \mathrm{pdf}_{\min}}{\mathrm{pdf}_{\max} - \mathrm{pdf}_{\min}}, \tag{1}$$

where $\mathrm{pdf}(r)$ is the probability distribution function of the input image, and $\mathrm{pdf}_{\min}$ and $\mathrm{pdf}_{\max}$ represent the minimum and the maximum probability density values respectively. It has been shown in [7] that the performance of the RC-AGC method is better than that of the AGC method.
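A minimal sketch of the global scheme is given below. The mapping T(r) = r_max (r / r_max)^γ follows the transformation above; deriving γ from ½(1 + cos(π · cdf_w(r))) mirrors the LRC function defined in Section 3 and is our reading of the RC-AGC rule, so the exact normalization may differ from [7].

```python
# Global raised-cosine adaptive gamma correction: one gamma per gray level,
# computed from a min-max-normalized pdf and its cumulative sum (see the
# lead-in: this normalization is an assumption, not necessarily [7]'s).
import numpy as np

def rc_agc_global(img: np.ndarray, r_max: int = 255) -> np.ndarray:
    """img: 2-D uint8 grayscale (or V-channel) array."""
    hist = np.bincount(img.ravel(), minlength=r_max + 1).astype(float)
    pdf = hist / hist.sum()
    pdf_w = (pdf - pdf.min()) / (pdf.max() - pdf.min() + 1e-12)
    cdf_w = np.cumsum(pdf_w) / (pdf_w.sum() + 1e-12)
    gamma = 0.5 * (1.0 + np.cos(np.pi * cdf_w))      # gamma value per level
    levels = np.arange(r_max + 1) / r_max
    lut = r_max * levels ** gamma                    # T(r) = r_max*(r/r_max)^gamma
    return lut[img].astype(np.uint8)
```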
2.2. Local raised cosine function. We note that the above discussed gamma correction models are global, and the values of γ are set independently of the positions of the pixels of the input image. Therefore, local image details may not be preserved. One idea is to incorporate the position information in the raised cosine function. The values of γ are determined as follows:

$$\gamma(x,r) = \frac{1}{2}\Big(1 + \cos\big(\pi \cdot \mathrm{cdf}_w(x,r)\big)\Big). \tag{2}$$

Here, at each pixel location x, we compute a local probability density function $\mathrm{pdf}(x,r)$ from the histogram of a window centered at x, take $\mathrm{pdf}_w(x,r) = (\mathrm{pdf}(x,r) - \mathrm{pdf}_{\min})/(\mathrm{pdf}_{\max} - \mathrm{pdf}_{\min})$, and let $\mathrm{cdf}_w(x,\cdot)$ be the normalized cumulative sum of $\mathrm{pdf}_w(x,\cdot)$, where $\mathrm{pdf}_{\min}$ and $\mathrm{pdf}_{\max}$ represent the minimum and the maximum local probability density values respectively.
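The local variant can be sketched by recomputing the weighted cdf from a window histogram at each pixel, as below. This is a direct, unoptimized reading of (2); the border handling is our choice, and the 21 × 21 window matches the size used later in the experiments.

```python
# Per-pixel gamma via the local raised-cosine rule (2): the weighted cdf is
# recomputed from the histogram of a window centered at each pixel. Plain
# loops keep the sketch readable; a practical version would vectorize.
import numpy as np

def lrc_gamma(img: np.ndarray, half: int = 10, r_max: int = 255) -> np.ndarray:
    H, W = img.shape
    gamma = np.empty((H, W))
    for i in range(H):
        for j in range(W):
            win = img[max(0, i - half):i + half + 1,
                      max(0, j - half):j + half + 1]
            pdf = np.bincount(win.ravel(), minlength=r_max + 1).astype(float)
            pdf /= pdf.sum()
            pdf_w = (pdf - pdf.min()) / (pdf.max() - pdf.min() + 1e-12)
            cdf_w = np.cumsum(pdf_w) / (pdf_w.sum() + 1e-12)
            gamma[i, j] = 0.5 * (1.0 + np.cos(np.pi * cdf_w[img[i, j]]))
    return gamma

# LRC-AGC output: r_max * (img / r_max) ** gamma, with img cast to float.
```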
As an example, we show the difference between the global raised cosine function based adaptive gamma correction (GRC-AGC) using (1) and the local raised cosine function based adaptive gamma correction (LRC-AGC) using (2). As shown in Figure 1, GRC-AGC cannot preserve the image details very well. For example, the regions containing the girl's dress and the curtain cannot be enhanced properly. On the other hand, LRC-AGC can keep more image details. For comparison, we refer to the zoomed-in regions in Figure 1.
It is interesting to note that the local raised cosine functions work individually. In this paper, we propose a collaborative model among all local raised cosine functions so that the resulting image can be further enhanced via a variational approach.
3. The proposed model. In this section, we propose a novel variational model to determine a local gamma function such that the gamma values can be set according to the local information of the input image. In the following discussion, f(r, x) represents the objective local transformation function, with r referring to the intensity variable and x referring to the location variable, and Ω is the image domain. In order to minimize the differences among the local gamma functions at nearby pixel locations, the spatial regularization of the local gamma function is incorporated into the objective functional for the enhancing process. In particular, we consider the H^1-norm regularization $\int |\nabla f|^2$ in the model, where ∇ denotes the gradient operator of the objective function f with respect to the horizontal and vertical directions of an image. Moreover, we incorporate another intensity regularization term so that the ordering of intensity of the output image can be similar to that of the input image, i.e., similar intensity values are preserved after the enhancement transformation. The proposed variational model is given as follows:

$$\min_{f\in\Sigma} J(f) = \int_\Lambda (f - f_0)^2\,dx\,dr + \alpha_1\int_\Lambda |\nabla f|^2\,dx\,dr + \alpha_2\int_\Lambda (f_r)^2\,dx\,dr, \tag{3}$$

where $f_0 = \frac{1}{2}(1 + \cos(\pi\cdot\mathrm{cdf}_w(x,r)))$ is the LRC function, $f_r$ is the first derivative of f with respect to r, and α_1, α_2 are two positive parameters to balance the three terms in the model.

Let us study the existence and the uniqueness of the minimizer of J(f). Firstly, the proposed model can be reformulated as the minimization of J(f) over an admissible set: for any (r, x) ∈ Λ ≡ (0, 1) × Ω, the energy functional is well defined in the following admissible set:

$$\Sigma = \big\{ f \in W^{1,2}[(0,1); L^2(\Omega)] \cap L^2[(0,1); W^{1,2}(\Omega)] : 0 \le f \le 1 \big\}.$$

Here $W^{1,2}[(0,1); L^2(\Omega)]$ denotes the space of functions in $W^{1,2}$ where each function is of $(0,1) \to L^2(\Omega)$. Similarly, $L^2[(0,1); W^{1,2}(\Omega)]$ denotes the space of functions in $L^2$ where each function is of $(0,1) \to W^{1,2}(\Omega)$.

Theorem 3.1. The minimization problem (3) admits a unique minimizer $f^* \in \Sigma$.

Proof. First, if we set f to be a constant, the energy is finite, which implies that problem (3) is set correctly. Noting that 0 is a lower bound of J(f) implies that inf J(f) exists. Suppose $\{f^n\}$ is a minimizing sequence of J(f); then there exists a constant M > 0 such that $J(f^n) \le M$. The above inequality reads as

$$\int_\Lambda (f^n - f_0)^2\,dx\,dr + \alpha_1\int_\Lambda |\nabla f^n|^2\,dx\,dr + \alpha_2\int_\Lambda (f^n_r)^2\,dx\,dr \le M;$$

therefore, we have $\int_\Lambda (f^n_r)^2\,dx\,dr \le M/\alpha_2$. The above inequality guarantees that the sequence $\{f^n\}$ is uniformly bounded in $W^{1,2}[(0,1); L^2(\Omega)]$; thus, up to a subsequence, there exists $f^* \in W^{1,2}[(0,1); L^2(\Omega)]$ such that $f^n \rightharpoonup f^*$ weakly in $W^{1,2}[(0,1); L^2(\Omega)]$. Since $W^{1,2}[(0,1); L^2(\Omega)]$ is compactly embedded in $L^2[(0,1); L^2(\Omega)]$ (see [9] for details), we get

$$f^n \to f^* \quad \text{in } L^2[(0,1); L^2(\Omega)]. \tag{4}$$

As a consequence of the lower semicontinuity of the $W^{1,2}$-norm, we have

$$\int_\Lambda (f^*_r)^2\,dx\,dr \le \liminf_{n\to\infty}\int_\Lambda (f^n_r)^2\,dx\,dr. \tag{5}$$

By using the inequality $\int_\Lambda |\nabla f^n|^2\,dx\,dr \le M/\alpha_1$, we know that $\int_\Omega |\nabla f^n|^2\,dx$ is uniformly bounded for almost every r ∈ (0, 1); combining this with the facts $f^n \in L^\infty((0,1)\times\Omega)$ and $0 \le f^n \le 1$, we can derive that $\{f^n\}$ is uniformly bounded in $W^{1,2}(\Omega)$ for almost every r ∈ (0, 1). By using the same compactness property, up to a subsequence also denoted by $\{f^n\}$, there exists a function $f^r_* \in W^{1,2}(\Omega)$ such that, for fixed r, $f^n(r,\cdot) \rightharpoonup f^r_*$ weakly in $W^{1,2}(\Omega)$. Combining the above convergence results with (4), we can easily deduce that $f^r_* = f^*(r, x)$ for almost every r ∈ (0, 1). By using the lower semicontinuity of the $W^{1,2}$-norm, for almost every r ∈ (0, 1),

$$\int_\Omega |\nabla f^*|^2\,dx \le \liminf_{n\to\infty}\int_\Omega |\nabla f^n|^2\,dx. \tag{6}$$

Then we can easily get $\int_\Lambda |\nabla f^*|^2\,dx\,dr \le \liminf_{n\to\infty}\int_\Lambda |\nabla f^n|^2\,dx\,dr$ and $f^* \in L^2[(0,1); W^{1,2}(\Omega)]$. Meanwhile, because of the convergence result (4), we have

$$\int_\Lambda (f^* - f_0)^2\,dx\,dr = \lim_{n\to\infty}\int_\Lambda (f^n - f_0)^2\,dx\,dr. \tag{7}$$

Combining (5), (6), and (7), we obtain $J(f^*) \le \liminf_{n\to\infty} J(f^n) = \inf_{f\in\Sigma} J(f)$. Meanwhile, by using the convergence results, we have $f^* \in \Sigma$.
Noting that the proposed functional $J(f)$ is strictly convex in $(f, \nabla f, f_r)$, the minimizer is unique. This completes the proof.
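As a concrete illustration of the data term, the sketch below evaluates the LRC function $f_0(r,x)$ at $r$ equal to each pixel's own intensity, i.e. it applies the local raised cosine transform directly. This is an illustrative reimplementation from the definitions above, not the authors' code; the clipped boundary windows are our assumption.

```python
import numpy as np

def lrc_transform(img, window=21):
    """Evaluate f0(r, x) = 0.5 * (1 + cos(pi * cdf_w(x, r))) at r = img[x].

    img: 2-D array with intensities scaled to [0, 1]. cdf_w(x, r) is the
    empirical CDF of the window of size `window` centred at x, evaluated at
    intensity r. Boundary pixels use clipped windows (a sketch assumption).
    """
    h = window // 2
    H, W = img.shape
    out = np.empty_like(img)
    for i in range(H):
        for j in range(W):
            patch = img[max(i - h, 0):i + h + 1, max(j - h, 0):j + h + 1]
            # empirical CDF of the local window at the centre intensity
            cdf = np.count_nonzero(patch <= img[i, j]) / patch.size
            out[i, j] = 0.5 * (1.0 + np.cos(np.pi * cdf))  # formula as written in Section 3
    return out
```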
4. The proposed algorithm. In this section, we develop an efficient algorithm to solve (3). Let us first introduce the indicator function of the interval $[0,1]$,
$$\iota(z) = \begin{cases} 0, & 0 \le z \le 1, \\ +\infty, & \text{otherwise}, \end{cases}$$
which is used to enforce the projection of the objective function between $0$ and $1$ as required in (3). Then we can rewrite problem (3) as follows:
$$\min_{f,u,v,w,z} \int_\Lambda (u - f_0)^2 \,dx\,dr + \alpha_1 \int_\Lambda |\nabla v|^2 \,dx\,dr + \alpha_2 \int_\Lambda w^2 \,dx\,dr + \int_\Lambda \iota(z) \,dx\,dr$$
subject to $u = f$, $v = f$, $w = f_r$, and $z = f$. For this constrained optimization problem, we employ the alternating direction method of multipliers (ADMM) [8]. Attaching the Lagrangian multipliers $\lambda_1, \lambda_2, \lambda_3, \lambda_4$ to the linear constraints, the augmented Lagrangian function is given as
$$\begin{aligned} L(f,u,v,w,z;\lambda_1,\dots,\lambda_4) ={}& \int_\Lambda (u - f_0)^2 + \alpha_1 |\nabla v|^2 + \alpha_2 w^2 + \iota(z) \,dx\,dr \\ &+ \langle \lambda_1, u - f \rangle + \beta \|u - f\|^2 + \langle \lambda_2, v - f \rangle + \beta \|v - f\|^2 \\ &+ \langle \lambda_3, w - f_r \rangle + \beta \|w - f_r\|^2 + \langle \lambda_4, z - f \rangle + \beta \|z - f\|^2, \end{aligned}$$
where the scalar product $\langle \cdot, \cdot \rangle$ is the corresponding inner product and $\beta > 0$ is a penalty parameter. The ADMM iterations are described in the following algorithm: (i) choose $f^0, \lambda_1^0, \lambda_2^0, \lambda_3^0, \lambda_4^0$ as the initial input data; (ii) at the $k$-th iteration:
• given $f^k, \lambda_1^k, \lambda_2^k, \lambda_3^k, \lambda_4^k$, compute $u^{k+1}, v^{k+1}, w^{k+1}, z^{k+1}$ by solving
$$(u^{k+1}, v^{k+1}, w^{k+1}, z^{k+1}) = \operatorname*{argmin}_{u,v,w,z} L(f^k, u, v, w, z; \lambda_1^k, \lambda_2^k, \lambda_3^k, \lambda_4^k); \tag{8}$$
• given $u^{k+1}, v^{k+1}, w^{k+1}, z^{k+1}$, calculate $f^{k+1}$ by solving
$$f^{k+1} = \operatorname*{argmin}_f L(f, u^{k+1}, v^{k+1}, w^{k+1}, z^{k+1}; \lambda_1^k, \lambda_2^k, \lambda_3^k, \lambda_4^k). \tag{9}$$
• update $\lambda_1^{k+1}, \lambda_2^{k+1}, \lambda_3^{k+1}, \lambda_4^{k+1}$ by
$$\lambda_1^{k+1} = \lambda_1^k + 2\beta(u^{k+1} - f^{k+1}), \quad \lambda_2^{k+1} = \lambda_2^k + 2\beta(v^{k+1} - f^{k+1}),$$
$$\lambda_3^{k+1} = \lambda_3^k + 2\beta(w^{k+1} - f_r^{k+1}), \quad \lambda_4^{k+1} = \lambda_4^k + 2\beta(z^{k+1} - f^{k+1}). \tag{10}$$
We rewrite the subproblem in (8) as
$$\min_{u,v,w,z} L_1(u,v,w,z), \tag{11}$$
where $L_1$ collects the terms of $L$ that depend on $(u,v,w,z)$. Note that the variables in $L_1(u,v,w,z)$ are separated, so the minimization problem (11) can be solved separately. Firstly, $u^{k+1}$ and $w^{k+1}$ have closed-form solutions obtained from the corresponding Euler-Lagrange equations:
$$u^{k+1} = \frac{2 f_0 + 2\beta f^k - \lambda_1^k}{2 + 2\beta}, \qquad w^{k+1} = \frac{2\beta f_r^k - \lambda_3^k}{2\alpha_2 + 2\beta}.$$
The Euler-Lagrange equation of the subproblem corresponding to $v^{k+1}$ is
$$-2\alpha_1 \Delta v + 2\beta v = 2\beta f^k - \lambda_2^k,$$
and it can be solved by the Fast Fourier Transform if a periodic boundary condition is considered. Finally, the following projection gives $z^{k+1}$:
$$z^{k+1} = \max\left(\min\left(f^k - \frac{\lambda_4^k}{2\beta},\, 1\right),\, 0\right).$$
For the subproblem in (9), we rewrite it as $\min_f L_2(f)$, where $L_2$ collects the terms of $L$ that depend on $f$. The corresponding Euler-Lagrange equation is
$$6\beta f - 2\beta f_{rr} = 2\beta\big(u^{k+1} + v^{k+1} + z^{k+1}\big) + \lambda_1^k + \lambda_2^k + \lambda_4^k - \partial_r\big(2\beta w^{k+1} + \lambda_3^k\big);$$
this equation can also be solved by the Fast Fourier Transform if a periodic boundary condition is considered. We note that the standard convergence result for ADMM applies to the proposed algorithm; see [8] for details. We summarize it as the following theorem:

Theorem 4.1. Let $f^0, \lambda_1^0, \lambda_2^0, \lambda_3^0, \lambda_4^0$ be arbitrary and let $\beta > 0$. Then the sequence $\{u^k, v^k, w^k, z^k, \lambda_1^k, \lambda_2^k, \lambda_3^k, \lambda_4^k\}$ generated by (8)-(10) converges to $(u^*, v^*, w^*, z^*, \lambda_1^*, \lambda_2^*, \lambda_3^*, \lambda_4^*)$, which is a saddle point of $L$ (i.e., the unique solution of problem (3)).
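For concreteness, here is a minimal numerical sketch of the ADMM loop above for a discretized form of (3), with $f$ sampled on intensity levels $r$ and periodic boundary conditions. The closed-form $u$-, $w$-, $z$-updates and the FFT-based $v$- and $f$-solves follow the subproblem structure just described, but the discretization, the forward-difference $D_r$, and the stopping rule are our assumptions, not the authors' implementation.

```python
import numpy as np
from numpy.fft import fftn, ifftn, fft, ifft

def admm_enhance(f0, alpha1=1000.0, alpha2=1000.0, beta=1.0, iters=200, tol=1e-3):
    """ADMM sketch for min_f |f-f0|^2 + a1*|grad f|^2 + a2*|f_r|^2, 0 <= f <= 1,
    with splitting u = f, v = f, w = D_r f, z = f (all periodic BC).
    f0: array of shape (Nr, H, W) sampling the LRC function on Nr intensity levels."""
    Nr, H, W = f0.shape
    f = f0.copy()
    l1, l2, l3, l4 = (np.zeros_like(f0) for _ in range(4))  # multipliers
    # eigenvalues of the 2-D Laplacian (per r-slice) and of D_r^T D_r under the DFT
    ky, kx = np.meshgrid(np.arange(H), np.arange(W), indexing="ij")
    lap = 4 - 2 * np.cos(2 * np.pi * ky / H) - 2 * np.cos(2 * np.pi * kx / W)
    dr2 = 2 - 2 * np.cos(2 * np.pi * np.arange(Nr) / Nr)
    Dr = lambda g: np.roll(g, -1, axis=0) - g   # forward difference in r
    DrT = lambda g: np.roll(g, 1, axis=0) - g   # its adjoint under periodic BC

    for _ in range(iters):
        f_prev, fr = f, Dr(f)
        # separable closed-form updates (Euler-Lagrange of each subproblem)
        u = (2 * f0 + 2 * beta * f - l1) / (2 + 2 * beta)
        w = (2 * beta * fr - l3) / (2 * alpha2 + 2 * beta)
        z = np.clip(f - l4 / (2 * beta), 0.0, 1.0)
        # v-solve: (2*beta*I - 2*alpha1*Lap) v = 2*beta*f - l2, slice-wise 2-D FFT
        rhs = fftn(2 * beta * f - l2, axes=(1, 2))
        v = np.real(ifftn(rhs / (2 * beta + 2 * alpha1 * lap)[None, :, :], axes=(1, 2)))
        # f-solve: (6*beta + 2*beta*D_r^T D_r) f = 2*beta*(u+v+z) + l1+l2+l4 + D_r^T(2*beta*w + l3)
        rhs = 2 * beta * (u + v + z) + l1 + l2 + l4 + DrT(2 * beta * w + l3)
        f = np.real(ifft(fft(rhs, axis=0) / (6 * beta + 2 * beta * dr2)[:, None, None], axis=0))
        # dual ascent on the four linear constraints
        fr = Dr(f)
        l1 += 2 * beta * (u - f)
        l2 += 2 * beta * (v - f)
        l3 += 2 * beta * (w - fr)
        l4 += 2 * beta * (z - f)
        if np.linalg.norm(f - f_prev) / max(np.linalg.norm(f_prev), 1e-12) < tol:
            break
    return np.clip(f, 0.0, 1.0)
```

Each subproblem is solved exactly per iteration, so the per-iteration cost is dominated by the FFTs, one set per r-slice for $v$ and one along the r-axis for $f$.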
5. Experimental results.
In this section, we present experimental results to illustrate the effectiveness of the proposed model and the proposed algorithm. For color images, we make use of the HSV color space, i.e., we enhance the V-channel separately and keep the H-channel and the S-channel unchanged. In the following experiments, the stopping criterion of the algorithm is set to $\epsilon = 10^{-3}$, and the window size of the local transformation function is set to $21 \times 21$. When the ADMM method is employed, we set $\beta = 1$ and $\lambda_1^0 = \lambda_2^0 = \lambda_3^0 = \lambda_4^0 = 0$ as the initial parameters for the iterations. In the numerical tests, we use ALC [15,2], DE [27], SSIM [32], and PSNR to compare the performance of the different methods:
• the average local contrast
$$\mathrm{ALC} = \frac{1}{N} \sum_i \frac{|r_i - E_i|}{r_i + E_i},$$
where $r_i$ is the gray-level value at pixel $i$, $N$ is the number of pixels, and $E_i$ is the mean edge gray level, defined in a neighborhood $N_i$ of size $s \times s$ pixels centered at pixel $i$. Practically, we choose $s = 3$ and
$$E_i = \frac{\sum_{k \in N_i} S_k r_k}{\sum_{k \in N_i} S_k},$$
where $S_k$ is the edge value computed by the Sobel operators [11,26]. The higher the ALC value, the better the contrast of the image;
• the discrete entropy
$$\mathrm{DE} = -\sum_k p(I(k)) \log_2 p(I(k)),$$
where $p(I(k))$ is the probability of pixel intensity $I(k)$, estimated from the normalized histogram. The higher the value of the discrete entropy, the better the enhancement is in terms of providing image details;
• the structural similarity index
$$\mathrm{SSIM}(X,Y) = \frac{(2\mu_X \mu_Y + c_1)(2\sigma_{XY} + c_2)}{(\mu_X^2 + \mu_Y^2 + c_1)(\sigma_X^2 + \sigma_Y^2 + c_2)},$$
where $\mu_X, \mu_Y$ and $\sigma_X^2, \sigma_Y^2$ are the means and the variances of $X$ and $Y$ respectively, $\sigma_{XY}$ is the covariance of $X$ and $Y$, and $c_1, c_2$ are small stabilizing constants.
5.1. The parameters $\alpha_1$ and $\alpha_2$. In the first experiment, we test the effect of the two parameters $\alpha_1$ and $\alpha_2$. We consider the low-contrast input image in Figure 2, and we also display in Figure 2 the results enhanced with several pairs $(\alpha_1, \alpha_2)$. The enhanced results correspond to $(\alpha_1, \alpha_2) = (1000, 10)$, $(1000, 100)$, $(1000, 1000)$, $(1000, 10000)$, $(10, 1000)$, $(100, 1000)$, $(1000, 1000)$, $(10000, 1000)$. We observe from the results in Figure 2 that the transformed result has many artifacts when $\alpha_1$ is small, while the result tends to be visually better as $\alpha_1$ gets larger because of the role of the spatial regularization term. We also find that as $\alpha_2$ increases, the intensity consistency of the enhanced results becomes stronger, and thus the details are restored; see especially the carpet in the dark region. We show the ALC values and the DE values in Table 1. We see from the table that ALC and DE decrease when $\alpha_1$ or $\alpha_2$ increases, which reflects the role of the two regularization terms.
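As a reference implementation of the two no-reference measures, the sketch below follows the definitions above. The Michelson-type aggregation of the local contrast is our reading of [2,15] and should be checked against those references; the normalization of the image to [0, 1] is also an assumption of this sketch.

```python
import numpy as np
from scipy.ndimage import sobel, uniform_filter

def alc(img, eps=1e-8):
    """Average local contrast: C_i = |r_i - E_i| / (r_i + E_i), where E_i is the
    Sobel-weighted mean gray level over a 3x3 neighbourhood centred at pixel i.
    img: 2-D float array scaled to [0, 1]."""
    gx = sobel(img, axis=0, mode="reflect")
    gy = sobel(img, axis=1, mode="reflect")
    S = np.hypot(gx, gy)                        # edge value S_k at each pixel
    num = uniform_filter(S * img, size=3)       # 3x3 means; the common 1/9 factor
    den = uniform_filter(S, size=3)             # cancels in the ratio below
    E = num / (den + eps)
    C = np.abs(img - E) / (img + E + eps)
    return float(C.mean())

def discrete_entropy(img, n_levels=256):
    """DE = -sum_k p(k) log2 p(k), with p from the normalized histogram."""
    hist, _ = np.histogram(img, bins=n_levels, range=(0.0, 1.0))
    p = hist / hist.sum()
    p = p[p > 0]                                # 0*log(0) terms contribute nothing
    return float(-(p * np.log2(p)).sum())
```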
5.2. Preservation of the details.
In the second experiment, we test the detail-preserving effect. We compare the proposed model with GC, GRC-AGC, LRC-AGC, and GHE. The experiment is based on 4 low-contrast color images. The enhanced results corresponding to GC with γ = 1/2.2, GC with γ = 1/5, GRC-AGC, LRC-AGC, GHE, and the proposed model are displayed in Figures 3-6, together with the corresponding zoomed parts. We see from the results that the GC model usually produces an over-enhancement effect and causes a serious loss of details. For the LRC-AGC model, there are plenty of artifacts in the enhanced results, which makes the details visually unpleasant. Note that GRC-AGC and GHE are global models that do not consider local information, so their enhanced results do not preserve texture and details. As we can see from the enhanced results, the proposed model provides a very good detail-preserving effect in contrast enhancement; see especially the yellow flowers in the first row, the dress of the girl in the third row, the stairs in the fifth row, and the gray background in the seventh row.
We report the ALC and DE values of the different methods in Table 2; the best numbers are in bold face. We see that the proposed model gives the best DE values (excluding LRC-AGC, which generates many unexpected artifacts), corresponding to more details on the four testing images. Meanwhile, the ALC values of the results enhanced by the proposed model are quite competitive. Therefore, the proposed model provides reasonable ALC and DE values by balancing visual quality and detail preservation. These results show that the proposed model can recover the details and enhance the contrast very well. Figures 7-8 display the enhanced results obtained by using GC with γ = 1/2.2, GC with γ = 1/5, GRC-AGC, LRC-AGC, GHE, and the proposed model. We can see from the results that LRC-AGC usually generates over-enhanced contrast results, and the visual quality of the images enhanced by the proposed model is competitive with the other testing methods.
In Table 3, we report the PSNR and SSIM values of the different methods applied to the low-contrast images in Figures 7-8. When we compare the two measures between the enhanced results and the ground-truth images, the proposed model is always better than the other testing methods. Again, these results show that the proposed method can recover the details and enhance the contrast suitably.
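To make the two fidelity measures concrete, the following sketch computes PSNR and the global (single-window) SSIM form given above. The values of the stabilising constants c1 and c2 follow the common convention and are our assumption, as the text omits them.

```python
import numpy as np

def psnr(x, y, data_range=1.0):
    """Peak signal-to-noise ratio between an enhanced image x and ground truth y."""
    mse = np.mean((x - y) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(data_range ** 2 / mse)

def global_ssim(x, y, data_range=1.0):
    """Single-window SSIM from the means, variances, and covariance of the images."""
    c1, c2 = (0.01 * data_range) ** 2, (0.03 * data_range) ** 2  # conventional constants
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cxy = np.mean((x - mx) * (y - my))  # covariance sigma_XY
    return ((2 * mx * my + c1) * (2 * cxy + c2)) / ((mx**2 + my**2 + c1) * (vx + vy + c2))
```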
Figure 5. First row (from left to right): the input low-contrast image; the enhanced results by GC with γ = 1/2.2; by GC with γ = 1/5; by GRC-AGC. Second row: the enhanced results by LRC-AGC; by GHE; by the proposed model. The corresponding zoomed parts are displayed in the last two rows.
6. Concluding remarks. Gamma correction is an important contrast-enhancement method achieved by employing an adaptive, varying parameter γ. In this paper, we propose and develop a variational model for contrast enhancement of color images based on local gamma correction. Different from other local adaptive gamma correction methods, the proposed model is variational: the contribution of this paper is a variational approach with an energy functional that determines a local gamma function whose values are set according to the local information of the input images. A spatial regularization of the gamma function is incorporated into the functional so that the contrast in an image can be modified using the information of each pixel and its neighboring pixels; in particular, $H^1$-norm regularization is employed in the regularization procedure. Another $H^1$-norm regularization term is also considered to satisfy the need for keeping the order of gray values. Theoretically, the existence and uniqueness of the minimizer of the proposed model are established. Experimental results are reported to show that the performance of the proposed model is competitive with the other compared methods for several image types.

Figure 6. First row (from left to right): the input low-contrast image; the enhanced results by GC with γ = 1/2.2; by GC with γ = 1/5; by GRC-AGC. Second row: the enhanced results by LRC-AGC; by GHE; by the proposed model. The corresponding zoomed parts are displayed in the last two rows. | 2019-04-12T13:11:52.555Z | 2019-03-19T00:00:00.000 | {
"year": 2019,
"sha1": "babdf92241851da6e33970dd8f0ef3f0e94f17c0",
"oa_license": "CCBY",
"oa_url": "https://www.aimsciences.org/article/exportPdf?id=9300110c-f748-4946-9f5c-534961ed1712",
"oa_status": "HYBRID",
"pdf_src": "Adhoc",
"pdf_hash": "6bb634b69ce339e59e3a8aa62625f2bdc4685e49",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": [
"Mathematics"
]
} |
139695955 | pes2o/s2orc | v3-fos-license | The research of the influence of polymeric compounds on the effectiveness of intumescent coatings for the fire protection of construction structures
This article is devoted to the investigation of the influence of polymeric binders on the flame-retardant characteristics of intumescent compositions. It has been found that the most preferable polymeric binders in intumescent-type fire-retardant compositions are those whose thermal destruction yields graphite-like crystalline structures. Chloroparaffin improves the swelling of intumescent coatings based on acrylic binders by lowering the temperature of their thermal degradation, which occurs under the influence of the dehydrating agent, hydrogen chloride, which catalyzes the processes of carbonization and graphitization in the compounds.
Introduction
The range of polymeric binders used in intumescent compounds designed for the fire protection of building structures is rather narrow, despite the huge variety of film formers on the world market. The most preferred are [1,2]: homopolymers of vinyl acetate; copolymers of vinyl acetate, ethylene, and vinyl chloride; copolymers of vinyl acetate and a vinyl ether of one or more long-chain or branched-chain carboxylic acids; copolymers of vinyl acetate and dibutyl maleate; copolymers of vinyl acetate and acrylic acid esters; copolymers of styrene and acrylic acid esters; copolymers of acrylic acid esters; and copolymers of vinyltoluene and acrylic acid esters. These groups of film-forming agents have been selected experimentally. In the literature, there are individual studies on the effect of certain binders on the flame-retardant characteristics of intumescent compositions. For example, R. McNair and T. Stapler [3] came to the conclusion that, as applied to flame-retardant paints, the binder plays an important role in the development of the foaming process. Polymer binders can inhibit foaming, so their content should be kept at a minimum level.
In addition to aqueous dispersions of polymers and synthetic latexes, considerable interest is shown in film formers based on organic solutions of polymers.
Coatings formed on their basis have a number of important features: high adhesion to the substrate; resistance to UV rays and to weak solutions of alkalis and acids; greater operational durability compared to coatings prepared from aqueous solutions of polymers; high moisture and weather resistance; shorter drying time; and the possibility of application at low temperatures. The most promising is the use of solutions of various acrylic copolymers (for example, of methyl methacrylate and butyl methacrylate) as film formers, which allow obtaining coatings with high strength characteristics, good adhesion to the surface to be protected, and weather resistance.
We investigated the influence of film formers on the flame-retardant characteristics of intumescent coatings; in particular, we determined the swelling coefficient and the time to reach the limiting state of the samples under temperature conditions corresponding to a cellulosic fire [4].
Results
The determination of the flame-retardant effectiveness of paints for metal structures is carried out with the help of standardized complex thermophysical tests that are as close as possible to real fire conditions [5]; these methods are labor-intensive and expensive. Their use is advisable at the stage of material certification, so researchers in the chemical laboratory use semi-quantitative comparative methods for assessing the characteristics of intumescent compositions, which they develop and substantiate themselves in accordance with the scientific and practical tasks being solved. Judging by the publications, the most common approach to assessing the characteristics of intumescent coatings is the determination of the expansion coefficient and the adhesion-strength parameters of the foam coke [6]. The swelling coefficient of intumescent coatings can be considered a function of flame-retardant efficiency; our studies [7] show that it correlates with the decomposition temperature of pentaerythritol during the thermolytic synthesis of foam coke. This indicator should be considered in conjunction with other methods for assessing the fire-retardant properties of intumescent coatings, since it carries no information on whether the thickness of the foam layer, other things being equal, provides the necessary time of resistance to unfavorable fire factors. It is well known that epoxy intumescent coatings swell more poorly than, for example, compositions based on vinyl acetate copolymers, due to which they are applied in thicker layers (about 4 mm); even so, they "do not catch up" with water-dispersion compositions in terms of foam-coke multiplicity, yet they cannot be blamed for inefficiency. They are most widely used in the protection of oil and gas facilities, and they are recommended for protection against hydrocarbon fire. There is, of course, an ongoing search for ways to increase the degree of swelling of epoxy compositions; for example, there are descriptions of hybrid epoxy-vinyl materials, where the epoxy part is responsible for good climatic stability, and the vinyl part for reducing the destruction temperature and, as a consequence, increasing the coke multiplicity.
The expansion ratio was determined as the ratio of the thickness of the intumesced carbonized layer to the thickness of the original coating layer. To determine the time of resistance to heating (the time of onset of the limiting state), the specialists of FNPP GEFEST LLC assembled a laboratory installation on the basis of a test site, shown in Figure 1. The tests of the samples were carried out in the "standard fire" mode, in which the average temperature of the furnace, measured by the installed thermocouples, was controlled and regulated according to the dependence [9]:
T = T0 + 345 lg(8t + 1),
where T0 is the initial furnace temperature and t is the time, min. Steel plates measuring 200 × 200 × 4 mm coated with a flame-retardant composition were used as the samples. The fire-retardant compounds were applied to a clean, degreased surface primed with GF-021. The thickness of the dry layer was 1 mm, not counting the primer layer (0.3 mm). Before the tests, control measurements of the actual coating thicknesses were made at no fewer than nine points, and the average of all measurements was taken as the result. The temperature on the surface of the test plates was measured with thermoelectric converters (TECs), three of which were fixed by caulking into the unheated surface of the samples. The unheated surface of the prototype was insulated with a Rockwool mineral wool board 100 mm thick. The temperature of the metal of the test sample was determined as the arithmetic average of the TEC readings at the specified locations. The tests were carried out until the onset of the limiting state of the prototype, which was taken as the moment when the steel of the test samples reached 500 °C (the average temperature over the three TECs). While we did not establish any direct correlation between standardized full-scale tests and the comparative laboratory methods we used for determining the given indices of a fire-retardant coating, in our opinion these methods can nevertheless characterize the change in the fire-protective properties of an intumescent material. It is obvious, though, that a compact electrical laboratory installation lacks, for example, such important fire factors as the impact of turbulent flows of hot gases; consequently, the time of onset of the limiting state of the samples in full-scale tests will be shorter than the results obtained by us. We tested compositions of identical formulation based on a ternary intumescent mixture consisting of ammonium polyphosphate, melamine, and pentaerythritol; only the type of film former was changed. The content of 15% solutions of the various binder polymers in the composition (see Table 1) was 20% by weight with respect to the rest of the ingredients. The results of our study of the fire-retardant characteristics of intumescent compositions based on various types of binders are presented in Table 1. The tests showed (see Table 1) that unplasticized acrylic film-forming agents are inferior to the copolymer of vinyl acetate with vinyl chloride. Presumably, acrylates, due to their higher temperatures of thermal destruction, suppress the intumescence process. The situation changes with the introduction of chloroparaffin (CP-470) in the amount of 10% (by weight) into the composition. The coke multiplicity during thermolysis of the material containing CP-470 increases, which, it seems to us, gives some researchers grounds to classify chloroparaffins as porophores.
In fact, however, it only lowers the destruction temperature of the acrylic binder through the dehydrating effect of the hydrogen chloride formed and directs the process toward the formation of graphite-like crystalline phases, which, according to X-ray diffraction analysis of the foam, we observe (Figure 2) as a diffraction peak in the region of 2θ ≈ 22°, located approximately at the angular position of the corresponding three-dimensional reflection of graphite. The peak obtained does not have a clear maximum, which indicates the absence of three-dimensional ordering in the carbonizate samples. One explanation for the asymmetry of the line profile is the presence of amorphous carbon in the analyzed samples.
X-ray diffraction analysis of the carbonized residue of an intumescent composition consisting of ammonium polyphosphate, melamine, pentaerythritol, and a solution (in xylene) of the unplasticized acrylic resin Degalan 64/12 showed that no crystalline phase is detected (Figure 3). In addition, the chloroparaffin has a stabilizing effect on ammonium phosphates, preventing their premature (before the formation of resin) decomposition with the release of foaming gases [9].
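Returning to the heating regime of the tests described above, the furnace set-points can be tabulated directly from the temperature-time dependence. The sketch below assumes the ISO 834 form T = T0 + 345·lg(8t + 1), which matches the "standard fire" regime described; the identification of the lost displayed equation with this standard curve is our assumption.

```python
import math

def standard_fire_temperature(t_min, T0=20.0):
    """Standard temperature-time curve T = T0 + 345*lg(8t + 1)
    (assumed ISO 834 form; T0 in degrees C, t in minutes)."""
    return T0 + 345.0 * math.log10(8.0 * t_min + 1.0)

# example furnace set-points over the first hour of a test
for t in (5, 10, 15, 30, 45, 60):
    print(f"t = {t:2d} min -> T = {standard_fire_temperature(t):6.1f} C")
```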
Conclusion
The copolymer of vinyl acetate with vinyl chloride showed satisfactory results in the fire-retardant efficiency tests (Table 1). Let us consider the possible contribution of the vinyl chloride monomer to the thermolysis of its copolymer with vinyl acetate. At temperatures around 150 °C, the polymer begins to decompose with the liberation of hydrogen chloride, which, acting as a Lewis acid, shifts the course of thermolysis towards carbonization. In the process of carbonization, an increase in conjugated systems is observed. Via the Diels-Alder reaction, structuring takes place with the formation of aromatic planes, which contribute to an increase in the thermal stability of the fireproof material. | 2019-04-30T13:04:44.506Z | 2017-10-01T00:00:00.000 | {
"year": 2017,
"sha1": "11650868820600ea5acfcf124178d83ea9fafd59",
"oa_license": null,
"oa_url": "https://doi.org/10.1088/1755-1315/90/1/012206",
"oa_status": "GOLD",
"pdf_src": "IOP",
"pdf_hash": "836fc779c14cc0f375d1bff5d56928fc7376a5a9",
"s2fieldsofstudy": [
"Engineering",
"Materials Science"
],
"extfieldsofstudy": [
"Physics",
"Materials Science"
]
} |
54975290 | pes2o/s2orc | v3-fos-license | Infrastructure Joint Venture Projects in Malaysia : A Preliminary Study
As in many developed countries, the function of infrastructure is to connect each region of Malaysia holistically; infrastructure comprises investment network projects such as transportation, water and sewerage, power, communication, and irrigation systems. Hence, billions of the government's income allocations are reserved for infrastructure development. Towards successful infrastructure development, a joint venture approach has been promoted since 2016 under one of the government thrusts in the Construction Industry Transformation Plan, which encourages internationalisation among contractors. However, there is a scarcity of information on the actual practice of infrastructure joint venture projects in Malaysia. Therefore, this study attempts to explore the real application of the joint venture in Malaysian infrastructure projects. Using a questionnaire survey, a set of survey questions was distributed to the targeted respondents. The survey contained three sections: respondent details, organisation background, and project capital in infrastructure joint venture projects. The results were recorded and analysed using SPSS software. The contractors stated that they have implemented the joint venture practice mostly with clients, with the usual construction period of the infrastructure projects exceeding 5 years. Other than that, the study indicates that there are problems in joint venture projects from the perspective of project capital, and that railway infrastructure should be highlighted in future studies due to its high significance in terms of cost and technical issues.
Introduction
The performance of construction joint ventures has been one of the most critical issues in the development of the industry. Many completed projects were reported to cost considerably more than the actual contract sum due to low quality. In Malaysia, the construction sector contributes significantly to economic growth and to enhancing the quality of life [1,2,4]. The construction sector has been playing a critical role in the aggregate economy of the nation in terms of its contribution to revenue generation, which drives the gross domestic product and the economic development of Malaysia. The construction industry in Malaysia is dominated by four pioneer fields: industrial, residential, commercial, and infrastructure construction. This research focuses on infrastructure development, as its construction cost is quite large, giving it economic significance in Malaysia's economic transformation history. Fig. 1 shows the infrastructure division in the construction industry. Railways, highways, seaports, bridges, airports, and dams are examples of buildable infrastructure elements. Infrastructure implementation is usually via joint venture, partnership, privatisation, or outsourcing [16,18]. Through JV, the imperative measures usually discussed in IJVP are the collaboration pattern, time frame, infrastructure types, procurement selection, benefits from the collaboration, sharing elements, and financial determination [3,4,6,20,21].
Landscape of Malaysian Infrastructure Joint Venture Projects
IJVP typically involve private concessionaires in the delivery of public infrastructure; worldwide, this involvement is apparent in reducing the budgetary burden on the government, specifically due to the major downturn in the global economy [16,19]. Their involvement varies from concessions and privatization to partnerships [13,14,18]. Looking at the most current type of project delivery approach procured around the globe, known as Public Private Partnerships (PPP), although opportunities are widely open for private concessionaires to partake in the delivery of public infrastructure projects with numerous government incentives, their responses are still minimal [9-11]. Clients appoint Project Delivery Partners (PDP) or a Special Purpose Vehicle to realise the project, from the monetary acquisition and procurement path to the tendering process and the claims from the awarded contractors. A huge responsibility is borne by the PDP to ensure the success of the project [17,19,20]. Meanwhile, in certain projects the IJVP occurs with the establishment of a new company, for example an "AB JV", marking the new legal organisation that will deliver the IJVP. The demand for IJVP is also driven by the transfer of foreign technology to Malaysia, for example in tunnelling, which requires expertise such as the application of the New Austrian Tunnelling Method (NATM), among many other outstanding technologies in various infrastructure projects [5-7]. Joint venture implementation is not new in Malaysia: foreign direct investment into Malaysia was above average levels at 67 per cent by the year 2015, which indicates sustainable implementation [12]; however, there is relatively little exposure in the academic literature to uncover these matters [1,15]. Most of the knowledge is gained from Western references via various research engines; however, that research was conducted in Western countries, and the legal and geographical factors may differ from IJVP implementation in Malaysia. Other than that, there is a scarcity of information on the actual practice of infrastructure joint venture projects in Malaysia. This scarcity can be seen in the research trend analysis for the past decades in Table 1. The issues of project capital and financing have been explored by the majority of foreign researchers and only a few local ones; less attention has been given to financial matters in Malaysian IJVP compared to risk, key performance indicators, critical success factors, and knowledge management. Therefore, these circumstances suit the purpose of this study, which attempts to explore the real application of the joint venture in Malaysian infrastructure projects.
Methodology
This research used a quantitative methodology; the instrument was a questionnaire survey. The respondent sample was selected purposively. The list of contractors was derived from the Construction Industry Development Board (CIDB); 35 infrastructure contractors were then selected from the higher classes of the contractor registration, with organisations established for more than 30 years. This selection was made because a longer period of company establishment supports the validity of responses based on experience in infrastructure development. The questionnaire survey is divided into three sections: Section A covers the respondent details, Section B the overview of the IJVP, and Section C the project capital in infrastructure joint venture projects. Within 6 months, the responses were recorded, keyed into the Statistical Package for the Social Sciences (SPSS) software, and analysed via frequency analysis. The findings of the research are documented in the sections below.
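For illustration, the frequency analysis step can be reproduced outside SPSS; the following is a minimal sketch in Python/pandas with hypothetical column names and responses, not the actual questionnaire data.

```python
import pandas as pd

# Hypothetical illustration of the frequency analysis performed in SPSS:
# each row is one respondent, each column one questionnaire item.
responses = pd.DataFrame({
    "profession": ["Quantity Surveyor", "Engineer", "Project Manager",
                   "Quantity Surveyor", "Architect", "Engineer"],
    "jv_sharing_ratio": ["50:50", "30:70", "50:50", "60:40", "50:50", "30:70"],
})

for col in responses.columns:
    # percentage frequency of each response category, as reported in Section A
    freq = responses[col].value_counts(normalize=True).mul(100).round(2)
    print(f"\n{col} (per cent of respondents):\n{freq.to_string()}")
```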
Section A: The Respondent Details
The outcome of this section can be viewed in Figs. 1 to 6 below. For professional background, 40 per cent of the respondents are quantity surveyors, followed by 27 per cent engineers, 23 per cent project managers, and 10 per cent architects. As for working experience, 34 per cent have 10 to 15 years of experience, 33 per cent have 5 to 10 years, and 23 per cent have less than 5 years. The results show that the respondents are professional practitioners qualified to answer the questions; hence the findings of this research come from reliable sources. The questions also asked about the preferable sharing ratio in their companies. The results indicated that 46 per cent agreed on a JV sharing ratio of 50%:50%, followed by 27 per cent each for ratios of 30%:70% and 60%:40%. The respondents also had more experience implementing JV with local partners, at 66.67 per cent, than with foreign partners, at 63.33 per cent. Meanwhile, the respondents' collaboration partners' countries are highest among Asian partners at 96.67 per cent, followed by the Middle East at 33.33 per cent, Europe at 25.42 per cent, Oceania at 8.47 per cent, and America and Africa at 0.00 per cent each. This suggests that JV can be a continental approach for Asian countries, as collaboration terms and geographical factors are more readily acceptable there than for other continents. For types of JV partners, the respondents agreed that the preferable JV collaborations are with the client or owner at 76.67 per cent, followed by suppliers at 70.00 per cent, manufacturers at 70.00 per cent, subcontractors at 63.33 per cent, specialists at 56.67 per cent, and consultants at 16.67 per cent. The results indicated that the flow of the joint venture may be smoother and more practical with owners and clients as collaboration partners than with consultants. Section A thus identifies the characteristics of IJVP practitioners, which can guide future researchers in profiling respondents in this study area.
Section B: The Overview of the Infrastructure Joint Venture Projects
The questionnaire then probed deeper, with enquiries on the overview of the infrastructure joint venture projects asked in the survey sessions. The questions comprise the types of JV implemented, the typical IJVP construction period, the IJVP construction cost, the classification of the IJVP, the IJVP procurement selection, benefits from the collaboration, and the shared IJVP elements. The outcomes are presented in Figures 7 to 13 below. The results indicated that the typical type of JV implemented is the incorporated joint venture at 83.33 per cent, compared to the unincorporated joint venture at 63.33 per cent. The numbers show that the respondents prefer to implement JV by establishing a new company rather than appointing a project delivery partner as part of the JV structure.
For the IJVP construction period, 63.33 per cent of responses recorded more than 5 years, followed by 33.33 per cent for 3 to 5 years, 26.67 per cent for 1 to 3 years, and 0.00 per cent for less than a year. The results indicated that the IJVP construction period takes longer than a normal construction period due to the complexity of the projects. For the IJVP construction cost, the highest poll is for MYR 50 Billion to MYR 100 Billion at 66.67 per cent, followed by 33.33 per cent for MYR 10 Billion to MYR 50 Billion, 26.67 per cent for below MYR 10 Billion, and 13.33 per cent for more than MYR 100 Billion. The results show that IJVP require large financial funds: the scale of the infrastructure development, materials, manpower, and newly adopted technology make the projects demand large financial resources. For the classification of the IJVP, the respondents agreed that IJVP are usually held for highway projects at 66.67 per cent, followed by
Section C: The Project Capital in Infrastructure Joint Venture Project
This section explores the financial side of the IJVP. The financial issue at the top of practitioners' worries is the project capital, as it is the start-up instrument for commencing the project. Hence, questions on the barriers to project capital acquisition, the benefits of comprehensible project capital acquisition, and the attributes of project capital improvement were asked. The results from the questionnaire survey are as follows.
Barriers in Project Capital Acquisition
Fig. 14 shows the respondents' responses on the barriers to project capital acquisition in IJVP. The highest responses are on the lack of financial aid at 83.30 per cent, followed by difficulty in controlling the project capital at 70.00 per cent, limited understanding of the project capital contractual terms and conditions at 33.33 per cent, and the accuracy of project capital expenditure at 30.00 per cent. The results show that sources of project capital are lacking in IJVP. Although the government provides huge funding aid to support the projects, it is still insufficient, as a normal infrastructure project would cost around MYR 50 Billion to MYR 100 Billion. Other than that, difficulty in controlling the project capital can damage the project's financial flows; better control tools or instruments are needed to manage the flow of project capital at the initial, recurring, and completion stages of the project. It is advisable for project practitioners to exercise control via a Code of Accounts Structure, Definitive Cost Estimate, Engineering Labour Report, Field Labour Report, and Project Cost Summary. These tools are imperative for estimating the outflow and inflow of project capital in IJVP. The terms and conditions of the project capital must be interpreted by legal practitioners to avoid breach of contract during the utilisation phase: if the project capital takes the form of a bank loan, the practitioner is required to repay the loan within the payback period at a certain interest rate, and failure to comply with the contract leads the IJVP practitioner into financial fatality. The accuracy of project capital expenditure is also among the problems that jeopardise project capital performance; hence, better project cost planning and frameworks are required to sustain project capital expenditure in the future.
Benefits of Comprehensible Project Capital Acquisition
Fig. 15 shows the results on the benefits of comprehensible project capital acquisition in IJVP. The results indicated that the benefits are: structural fit for the partners at 73.33 per cent, followed by financial stability at 66.67 per cent, avoidance of disputes at 40.00 per cent, and demonstration of mutual interest at 36.67 per cent. The respondents agreed that structural fit is the foremost benefit of comprehensible project capital acquisition, as the foundation of an IJVP is to share monetary matters and distribute risk accordingly, and a measure of a successful joint venture project is that it fits the partners well. Meanwhile, the strength of the project capital provides financial stability in a project: contingency costs can be covered when the organisation maintains financial stability.
Besides that, disputes can be handled wisely in the presence of stable project capital, as common disputes arise from financial crises within the organisation. Mutual interest scored the least in this research, as the respondents perceived that other matters in the IJVP can represent the mutual interest among the collaboration partners.
Attributes to Project Capital Improvement
Fig. 16 shows the results on the attributes of project capital improvement in IJVP. The respondents agreed that the attributes are: better tools to supervise the flow of the project capital at 74.00 per cent, followed by concentration on the project capital contractual form at 67.00 per cent, an adequate funding base at 40.00 per cent, good networking skills at 37.00 per cent, and better construction cost planning at 21.00 per cent. The results indicated that tools to supervise the flow of project capital are required in the IJVP industry; hence, improvement of the project capital should focus on control tools. Besides that, an understandable contractual form is needed to improve project capital performance and stability; the project capital can thus be improved through deeper understanding among practitioners handling IJVP projects. An adequate funding base is likewise preferred by the respondents: since sources of project capital are scarce, IJVP practitioners should network, target investors, and investigate investors' backgrounds to improve the project capital. Apart from that, IJVP practitioners may need to perform background checks on investors, as terms and conditions may differ from one lender to another. Some IJVP may also prepare better construction cost planning and development proposals to attract multiple types of investors towards successful funding of the IJVP. From the project capital acquisition process, the research found that many strategies are needed to improve the project capital; as the stability of the project capital is uncertain, there is an art to leveraging it.
Conclusions
In conclusion, IJVP implementation is mostly between contractors and clients, with the usual construction period of the infrastructure projects exceeding 5 years, high project costs, and a procurement path that is complex due to project needs. Collaboration is made mostly among local practitioners, and for foreign IJVP the collaboration is mostly with Asian countries; the typical infrastructure developments are highways rather than railways. Collaboration is undertaken for legal reasons, with a high level of risk-sharing and mitigation awareness, as the concern of the collaboration is on monetary perspectives. Other than that, the study indicates that there are problems in joint venture projects from the perspective of project capital, and railway infrastructure should be highlighted in future studies due to its high significance in terms of cost and technical issues. Hence, the research provides ample justification of the current practice of infrastructure joint venture projects in Malaysia. This research delivers a general picture of joint ventures in Malaysian infrastructure projects to the body of knowledge as well as to construction practitioners.
Table 1. Research trends on IJVP for the past decades.
.33 per cent, dam, airport and seaports at 16.67 per cent, and railway at 13.33 per cent. The results show that IJVP commonly focus on highway rather than railway infrastructure; therefore, railroad infrastructure has been given fewer highlights in the Malaysian joint venture framework. For the procurement path, the respondents agreed that Design and Build has been the typical procurement path for IJVP at 83.33 per cent, followed by Engineering, Procurement and Management Contract at 33.33 per cent, Traditional Method at 16.67 per cent, Turnkey at 10.00 per cent, and Emerging Cost Contract at 6.67 per cent. The results show that IJVP practitioners perceive infrastructure construction via the Design and Build approach as more feasible than the Emerging Cost Contract.
.00 per cent, property at 76.67 per cent, manpower at 73.33 per cent, services at 66.67 per cent, technology and monetary at 40.00 per cent respectively, managerial style at 33.33 per cent, and culture at 30.00 per cent. The results indicated that the element the IJVP practitioners most prefer to share is risk, which they perceive as imperative, rather than culture. This aligns with the research by the local author Adnan (2008) on construction risk management in IJVP in Malaysia. | 2018-12-15T12:12:34.459Z | 2018-03-19T00:00:00.000 | {
"year": 2018,
"sha1": "b06d502bb103413a84bef11b0977ccb9a72a7031",
"oa_license": "CCBY",
"oa_url": "https://www.e3s-conferences.org/articles/e3sconf/pdf/2018/09/e3sconf_cenviron2018_01020.pdf",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "b06d502bb103413a84bef11b0977ccb9a72a7031",
"s2fieldsofstudy": [
"Engineering",
"Business",
"Economics"
],
"extfieldsofstudy": [
"Business"
]
} |
233447406 | pes2o/s2orc | v3-fos-license | Plant-Derived Natural Compounds for the Treatment of Amyotrophic Lateral Sclerosis: An Update
Background Amyotrophic lateral sclerosis (ALS) is a motor neuron disease (MND) that typically causes death within 3-5 years after diagnosis. Despite the substantial scientific knowledge accrued over more than a century, truly effective therapeutic strategies remain distant. Various conventional drugs are being used but have several adverse effects. Objective/Aim The current study aims to thoroughly review plant-derived compounds with well-defined anti-ALS activities and their structure-activity relationships. Moreover, the review also focuses on the complex genetics, clinical trials, and the use of natural products that might decrypt future and novel therapeutics in ALS. Methods The data for the compilation of this review were collected from PubMed, Scopus, Google Scholar, and Science Direct. Results The results showed that phytochemicals such as ginkgolides, protopanaxatriol, genistein, epigallocatechin gallate, resveratrol, madecassoside, and others possess anti-ALS activity through various mechanisms. Conclusion These plant-derived compounds may be considered as supplements to conventional ALS therapy. Moreover, further preclinical and clinical studies are required to understand the structure-activity relationships, metabolism, absorption, and mechanisms of plant-derived natural agents.
INTRODUCTION
Amyotrophic lateral sclerosis (ALS), also termed Lou Gehrig's disease, is an idiopathic, fatal, progressive neurodegenerative disease initiated by motor neuron dysfunction in the spinal cord and brain, which over weeks or months progresses to paralysis and finally death [1,2]. There is no treatment available to cure this destructive disease. The majority of deaths in ALS patients occur due to respiratory failure within 3 to 5 years from the onset of signs and symptoms [3,4]. The incidence of ALS in Western European countries is 2-3 per 100,000 individuals per year, with a prevalence of 4.6 per 100,000 [5-7]. ALS is more common in men than in women, affecting 1.2-1.5 men for every woman [8]. Evidence indicates that the incidence and prevalence are lower in populations of mixed ancestral origin than in Europeans, with differences in age of onset in genetically heterogeneous populations [9]. Unlike Alzheimer's disease, the occurrence of ALS peaks between the ages of 50 and 75 years and decreases thereafter [7]. However, claims of lower incidence among non-Caucasian populations, such as American Indians and Eskimos, remain controversial, though most epidemiological studies agree on a modest male/female predominance of 1.2-1.5/1 [10-12].
The etiology of ALS is highly multifactorial [1,13]. It is associated with multiple cellular pathologies, including but not limited to oxidative stress, loss of neurotrophic factors, glutamate-induced excitotoxicity, inflammation, insufficient protein quality control, accumulation and misfolding of proteins, and mitochondrial dysfunction [14,15]. The clinical manifestations of sporadic amyotrophic lateral sclerosis (sALS) and familial amyotrophic lateral sclerosis (fALS) are very similar; the median age of onset of sALS is around 60 years, and the age of onset of fALS is about ten years earlier. In juvenile ALS (JALS) families, mutations in the ALS2 and SETX genes have been reported [16,17].
Deficiencies in a few of these pathways occur as secondary phenomena. To identify the primary pathophysiological processes underlying ALS, genetics would be the rational starting point. ALS shows genetic predisposition: about 5-15% of patients diagnosed with ALS have a family history of the disease, and a single genetic defect is believed to cause these cases [18,19]. Most people, however, lack a family history of ALS; in such cases, it is accepted that both genetic and environmental risk factors contribute to the development of the disease [20]. Several genetic risk factors involved in sporadic ALS have been recognised, but the exploration of environmental risk factors has been less successful. Many genetic and molecular pathways are most likely responsible for the development and progression of neurodegenerative changes in ALS.
Several pathological pathways have been suggested, yet no validated target has been established for researchers designing new molecules to modify the disease. To date, various molecules targeting the above-mentioned pathways have failed in clinical trials; thus, attempts in this field have not yet yielded success in new drug development [15]. Because multiple pathways are involved, several of them will need to be targeted to successfully develop a medicine that changes the course of motor neuron degeneration. Despite numerous preclinical and clinical studies, the exact pathways of ALS pathogenesis and progression are still not fully known; the development of successful, targeted therapy is therefore challenging and remains a major problem faced by scientists. Over the past two decades, the only FDA-approved drug was riluzole, an anti-glutamatergic agent that acts by blocking glutamatergic neurotransmission in the CNS. However, riluzole's efficacy is questionable, with no effect on disease symptoms and a nominal therapeutic benefit of about 2-3 months of increased survival in ALS patients [21,22]. After 22 years, another drug, edaravone, a free-radical-scavenging agent, was approved by the FDA in May 2017; it was found to be effective in slowing ALS progression, but its mechanistic pathway in ALS is not yet fully known [23,24].
Data Sources and Search Strategy
Databases including Scopus, Science Direct, PubMed, Google Scholar, and Web of Science were used to collect the literature for the compilation of the present review, by searching terms including plant-derived bioactive compounds against excitatory amino acid toxicity, neuroinflammation, calcium cytotoxicity, and oxidative stress in amyotrophic lateral sclerosis, and traditional herbal medicines.
PLANT-DERIVED NATURAL COMPOUNDS FOR THE TREATMENT OF AMYOTROPHIC LATERAL SCLEROSIS
Although drug design and discovery rely heavily on synthetic chemistry, the contribution of natural products cannot be ignored [25-30]. The WHO list of essential drugs consists of 252 drugs, of which 11% are of plant origin [31]. So, there is a real chance of finding a natural molecule with the desired anti-ALS activity. Phytochemicals, including flavonoids, alkaloids, terpenes, and saponins from plant sources, may instill the positive change that researchers are looking for, as they possess unique chemical diversity; some of them cannot be synthesized by currently known methods [30,32]. As a result, these natural compounds remain untapped as novel drug molecules for ALS treatment. Different scientific reports have focused on the validation of phytoconstituents isolated from various medicinal plants. Scientific investigations claiming various phytochemicals as ameliorative agents in ALS are limited; however, some key findings have demonstrated that flavonoids, alkaloids, terpenes, and saponins isolated from various medicinal plants exhibit anti-ALS activity. In this review, we discuss the potential of various phytochemicals of plant origin for the treatment of ALS. This review will try to explain the mechanism of action of selected molecules (Fig. 1), and the in vivo and in vitro activities of these phytoconstituents will also be covered.
Phytochemicals Acting against Oxidative Stress
Oxidative stress plays a major role in the process of neurodegeneration and is one of the most common pathways of all neurodegenerative diseases [33-36]. The death of neurons occurs mainly due to increased generation of reactive oxygen species (ROS) and malfunctioning of the antioxidative system [37]. Herbal medicines play a prospective role in the regulation of oxidative stress by improving the antioxidant activity of various enzymatic and non-enzymatic systems, decreasing ROS levels, and maintaining the expression and regulation of various genes involved in ALS [38,39]. Madecassoside, a triterpenoid saponin isolated from Centella asiatica, has been reported, in the transgenic SOD1-G93A mouse model of ALS, to safeguard motor neurons from degeneration and to increase the survival time of the mice. In another study, madecassoside was shown to reduce malondialdehyde levels and enhance SOD activity in the brain. In the ALS mouse model, madecassoside protects neurons from free-radical-induced apoptosis by increasing antioxidant activity. It has also been reported that madecassoside ameliorates LPS-mediated neurotoxicity in rats by upregulating the Nrf2-HO pathway [40-44]. Ampelopsin, a flavonoid isolated from Ampelopsis grossedentata, exhibits prominent antioxidant activity. Ampelopsin has been reported to show neuroprotective effects against H2O2-induced apoptosis in PC12 cells by suppressing ROS generation, upregulating the expression of HO-1 protein, and hampering the expression of caspase-3. Moreover, in PC12 cells, the extracellular signal-regulated kinase 1/2 (ERK1/2)- and Akt-dependent signalling pathways play a role in the upregulation of the HO-1 protein. These studies suggest that ampelopsin could be a strong candidate in the management of various neurodegenerative diseases, including ALS [45-48]. Epigallocatechin gallate (EGCG), the main constituent of green tea, is a water-soluble polyphenolic compound. It was reported to have strong antioxidant activity, acting as a radical scavenger in various neurodegenerative diseases, including ALS. The antioxidant activity of EGCG against ALS was further evaluated in transgenic SOD1 mice, where it slowed the onset of symptoms and prolonged the lifespan. Moreover, upregulation of the anti-apoptotic gene Bcl-2 was also detected with EGCG, suggesting that the antioxidant activity of EGCG in ALS is associated with the upregulation of the Bcl-2 gene [49-56]. Picroside-II, an iridoid glycoside isolated from Picrorhiza rhizome, is widely found in Tibet as well as in India. In PC12 cells, picroside-II was reported to strengthen nerve growth factor (NGF)-mediated neurite outgrowth besides acting synergistically against oxidative stress; owing to this synergistic effect, it is used in the management of various nervous disorders, including ALS. The neuroprotective activity of picroside-II against oxidative stress was also evaluated in various models, including an in vitro model of glutamate-treated PC12 cells and an in vivo model of AlCl3-induced toxicity in male mice. Picroside-II also enhances SOD levels in the brain of mice, resulting in suppression of ROS generation, indicating that picroside-II protects the brain from neuronal injury caused by oxidative stress [57-60].
Morroniside, an iridoid glycoside isolated from Cornus officinalis, is reported to have strong neuroprotective activity against oxidative stress. It was reported that in SH-SY5Y cells exposed to H2O2-mediated cytotoxicity, morroniside elevates cellular GSH levels and reduces lactate dehydrogenase (LDH) release, besides maintaining the mitochondrial membrane potential (MMP) and cell stability. Moreover, it modulates intracellular SOD activity and suppresses ROS generation. In addition, upregulation of Bcl-2 genes was also reported, confirming the anti-apoptotic and anti-oxidative activity of this compound [61-65]. Astragaloside IV, a saponin isolated from Radix astragali, is commonly used for ALS treatment in China and is reported to have strong antioxidant activity in various in vitro and in vivo studies. Astragaloside IV also showed a protective role against H2O2-mediated oxidative stress in PC12 cells; it improves the viability of PC12 cells, activates HO-1, and suppresses intracellular ROS production as well as apoptotic cell death [66-70]. Diallyl trisulfide (DATS), an active monomer of allicin isolated from Allium bulbs (Liliaceae), has been reported to exhibit diverse pharmacological activity owing to its capability to cross the blood-brain barrier (BBB). It was reported that DATS acts as an inducer of phase II enzymes, resulting in the amelioration of oxidative stress, and safeguards the activity of various antioxidant enzymes, thereby playing an important role in ALS. Diallyl trisulfide acts via multiple pathways in ALS, including activating heme oxygenase-1 (HO-1), downregulating the expression of glial fibrillary acidic protein, and activating various antioxidant enzymes [71-76]. The various plant-derived phytochemicals (Fig. 2), along with their diverse mechanistic insights against oxidative stress, are shown in Table 1.
Suppressing the production of ROS, upregulating the expression of HO-1 protein and hampering the expression of caspase-3 [45,47]
Epigallocatechin gallate (EGCG) (3)
In-vivo Transgenic mice SOD1-G93A EGCG given in doses of 1.5, 2.9, 5.8 µg/g body weight after 60 days of age suggest that it significantly delays the disease onset by 1.4weeks and prolongs the survival time by 1.8 weeks.
In-vitro PC12 cells
In-vivo mice AlCl3-induced toxicity Neuroprotective action of picroside-II has been observed in glutamate-treated PC12 cells and improved SOD activity in the brain of mice Enhances the SOD levels in the brain of mice, suppression of ROS generation [58] Morroniside ( Activation of HO-1 suppresses the intracellular production of ROS [66] Diallyl trisulfide (DATS) In-vivo
Phytochemicals Acting against Neuroinflammation
A strong correlation exists between inflammation and various CNS disorders, particularly ALS. Microglial cells in the CNS play an essential role in ALS pathogenesis owing to their primary role in the release of various pro-inflammatory factors, including TNF-α, iNOS, and COX-2. One of the targets for ALS therefore involves decreasing the activation of microglial cells, which in turn inhibits neuroinflammation [77][78][79][80]. Celastrol, a triterpenoid pigment isolated from Tripterygium wilfordii, inhibits cancer cell proliferation and various inflammation-related auto-immune diseases. In the transgenic SOD1-G93A mouse model of ALS, celastrol suppresses TNF-α and iNOS expression and decreases the expression of CD40 and glial fibrillary acidic protein in the lumbar spinal cord sections of mice, resulting in delayed onset of disease and improvement in motor function. Moreover, at the molecular level, celastrol inhibits LPS-mediated activation of the mitogen-activated protein kinase/ERK1/2 signalling pathway and NF-kB, which play a vital role in cell damage and stress. Celastrol thus suppresses the activation of microglial cells, which further decreases the generation of pro-inflammatory cytokines [81][82][83][84][85][86]. Resveratrol, mainly isolated from Veratrum nigrum and Rhizoma polygoni, is a polyhydroxylated diphenylethylene (a stilbene-type polyphenol) with intense antioxidant activity due to its various hydroxyl groups. Studies have also revealed that resveratrol inhibits the LPS-instigated release of pro-inflammatory cytokines in mouse N-9 microglial cells and rat cortical microglial cells, besides inhibiting the degradation of IkBα and the expression of iNOS in N-9 microglial cells, disclosing the role of resveratrol in the amelioration of various neurodegenerative diseases, including ALS [87][88][89][90][91][92][93][94]. Curcumin, a polyphenolic monomer isolated from Curcuma longa, is known for its neuroprotective and anti-inflammatory activity. In LPS-stimulated microglial cells, curcumin suppresses the release of nitric oxide and the expression of iNOS. Moreover, curcumin also upregulates the expression of Nrf-2 and HO-1, exhibiting strong neuroprotective activity during inflammatory stress. The neuroprotective role of curcumin in ALS has also been attributed to downregulation of the NF-kB signalling pathway, which suppresses pro-inflammatory cytokines including IL-6, IL-1, and TNF-α [39,[95][96][97][98][99][100]. Isorhynchophylline (IRN), isolated from Uncaria rhynchophylla, exhibits strong neuroprotective activity due to its ability to inhibit the release of cytokines such as IL-6, IL-1, and TNF-α in LPS-stimulated microglial cells. Moreover, IRN also reduces the synthesis of inflammatory mediators and the expression of iNOS mRNA, which plays an essential role in various neurodegenerative diseases, including ALS [101][102][103]. Obovatol, a neolignan isolated from Magnolia officinalis leaves, has had its neuroprotective activity examined in various models of LPS-mediated neuroinflammation. It has been reported that obovatol suppresses the release of NO and iNOS in microglial cells by inhibiting the mitogen-activated protein kinase and NF-kB signalling pathways; in addition, one of the primary molecular targets of obovatol in microglia is peroxiredoxin 2 (Prx2), which plays an essential role in various neuroinflammation signalling pathways [104][105][106].
Paeonol, isolated from the bark of Paeonia suffruticosa, acts as a neuroprotective agent by inhibiting microglia-mediated inflammation as well as oxidative stress. In LPS-induced inflammation in cortical neurons, paeonol downregulates the expression of COX-2 and iNOS, which results in reduced production of ROS and NO. Moreover, LPS-induced phosphorylation of ERK is also suppressed by paeonol, which results in increased cell viability [107][108][109]. Wogonin, isolated from Scutellaria root, acts as a neuroprotective agent by inhibiting the production of NO, TNF-α, and IL-6. Furthermore, wogonin also shows neuroprotective activity in LPS-induced microglial injury by suppressing various mediators of inflammation [110][111][112]. The various plant-derived phytochemicals (Fig. 3), along with their diverse mechanistic insights against neuroinflammation, are shown in Table 2.

Table 2. Phytochemicals acting against neuroinflammation and their mechanistic insights in ALS.

| Phytochemical | Model | Observations | Mechanism | Refs. |
|---|---|---|---|---|
| Celastrol | Transgenic mouse model of ALS | Administered at 30 days of age; reduction in body weight, improvement in motor function, and delayed onset of ALS | Suppresses TNF-α and iNOS expression; downregulates CD40 expression | - |
| Celastrol (with arimoclomol) | SH-SY5Y neuronal cell model | Increased induction of heat shock proteins (HSPs) after co-application of celastrol and arimoclomol | Activation of HSF1 | - |
| Wogonin (14) | SH-SY5Y cells | Aβ changes observed in the cell line after treatment with wogonin | GSK3β inhibition via mediation of the mTOR signalling pathway | [110,124,125] |
| Wogonin (14) | LPS-stimulated microglial cells | Treated cells monitored for changes in TNF, NO, and IL-6 | Inhibits NO, TNF-α, and IL-6 production | - |

Phytochemicals Acting against Calcium Cytotoxicity

One of the prime factors involved in ALS is calcium toxicity. When calcium channels open up, a massive influx of calcium via NMDA receptors accumulates within the neuron, resulting in nerve cell damage and even cell death [82,113,114]. Nowadays, the focus has shifted to herbal medicines to find phytochemicals that can be beneficial in treating ALS [126][127][128]. Paeoniflorin, isolated from Paeoniae radix, has an essential role as a neuroprotective agent in ALS management by inhibiting the influx of calcium into the cytoplasm in PC12 cell-injury models. Moreover, it also inhibits the excess intracellular calcium generated by glutamate and suppresses apoptosis in PC12 cells. Further, paeoniflorin shows its neuroprotective effect in PC12 cells by suppressing NMDA-induced neurotoxicity [129][130][131][132][133][134]. Ligustrazine, isolated from Rhizoma chuanxiong, is known for its neuroprotective activity of blocking calcium channels. It has been reported that in SH-SY5Y cells, ligustrazine blocks L-type calcium channels, which play a vital role in the development of neurotoxicity in ALS [135,136]. Gastrodin, isolated from Gastrodia elata, can cross the BBB and exert its effect on the CNS. In SH-SY5Y cells, gastrodin has been reported to limit calcium entry by acting on voltage-gated calcium channels, inhibiting the degeneration of neurons due to calcium toxicity [137][138][139]. Muscone, the principal active component of natural musk, exhibits neuroprotective activity in glutamate-stimulated PC12 cells by reducing the intracellular accumulation of calcium [140]. The various plant-derived phytochemicals (Fig. 4), along with their diverse mechanistic insights against calcium cytotoxicity, are shown in Table 3.
Phytochemicals Acting against Excitatory Amino Acid Toxicity
The primary excitatory neurotransmitter in the CNS is glutamate. Various metabolic enzymes and transporters maintain the optimum level of glutamate; failure of their function leads to excessive accumulation of glutamate in the CNS, resulting in various nervous disorders, including ALS [141][142][143]. Different phytochemicals are involved in maintaining the optimum level of glutamate in the CNS. β-Asarone, isolated from Acorus tatarinowii, acts as a neuroprotective agent due to its ability to cross the BBB. It has been reported that in ALS, β-asarone suppresses NMDA- or glutamate-induced excitotoxicity. Moreover, in PC12 cells, β-asarone increases the survival rate of cells and reduces LDH leakage, the apoptosis ratio, and intracellular calcium accumulation [144][145][146][147]. Huperzine-A (Hup A), a novel alkaloid isolated from Huperzia serrata, is commonly used in the treatment of Alzheimer's disease due to its ability to block glutamate-mediated neurotransmission. Hup A also inhibits glutamate toxicity by blocking NMDA receptors. In patients with ALS, Hup A acts as a neuroprotective agent by preventing damage to motor neurons [56,[148][149][150]. Catalpol, isolated from Rehmannia glutinosa, acts as a neuroprotective agent in various neurological diseases, including ALS, by suppressing glutamate excitotoxicity. Moreover, it also increases cell viability and protects neurons from damage mediated via NMDA receptors [151][152][153]. Selaginellin, isolated from Selaginella pulvinata, exhibits neuroprotective activity in PC12 cells by suppressing glutamate toxicity; it also decreases ROS generation and the expression of the klotho gene [154][155][156]. Ferulic acid, a phenolic acid monomer mainly present in Chinese herbs including angelica and Szechwan lovage, crosses the BBB with ease. It shows its neuroprotective activity by preventing damage to neurons due to glutamate excitotoxicity and apoptosis in cortical neurons. Furthermore, it also protects PC12 cells in vitro from hypoxia, free radicals, and excitatory amino acids [157][158][159][160]. Cryptotanshinone, isolated from Salvia miltiorrhiza, suppresses glutamate toxicity by activating the phosphoinositide 3-kinase (PI3K) signalling pathway and inhibiting the downregulation of Bcl-2, an anti-apoptotic protein; the PI3K/Akt pathway plays an important role in controlling the pathogenesis of ALS [161,162]. The various plant-derived phytochemicals (Fig. 5), along with their diverse mechanistic insights against excitatory amino acid toxicity, are shown in Table 4.
Table 4. Phytochemicals acting against excitatory amino acid toxicity and their mechanistic insights in ALS.

| Phytochemical | Study type | Model | Observations | Mechanism | Refs. |
|---|---|---|---|---|---|
| β-Asarone | In vitro | Cultured rat cortical cells | Anti-excitotoxicity effect of isolated α- and β-asarone compared with commercially available asarone | Downregulation of NSE; improved levels of DA, L-DOPA, DOPAC, and HVA in the striatum | - |
| Huperzine-A (20) | In vitro | NSC34 cells and rat spinal cord organotypic culture | Effects of huperzine A noted after exposure to inducers such as staurosporine, hydrogen peroxide, CCCP, and THA | Inhibits glutamate toxicity by blocking NMDA receptors | [149] |
| Catalpol (21) | In vitro | PC12 cells | - | - | - |
| Selaginellin | In vitro | PC12 cells | Glutamate-induced excitotoxicity in PC12 cells exposed to selaginellin administration | Decreases ROS generation and expression of the klotho gene | [155,164] |
| Ferulic acid (23) | In vitro | PC12 cells | Protective effects against hypoxia and excitotoxicity monitored | Prevents damage to neurons due to glutamate excitotoxicity and apoptosis in cortical neurons | [44,157,165] |
| Ferulic acid (23) | In vivo | Male Sprague Dawley rats | Protective effects against hypoxia-induced cerebral injury | Inactivation of the TLR and MyD88 pathways | - |
| Cryptotanshinone | In vitro | Rat cortical neurons | Glutamate used to induce neurotoxicity in the cell line | - | - |

CONCLUSION

Currently, very few FDA-approved drugs are available on the market for the treatment of ALS. Various attempts have been made to develop an efficient therapeutic agent against ALS; the majority of candidate drugs have passed preclinical animal studies, but the results in human clinical trials have not been promising. Herbal medicines, on the other hand, act as an alternative and complementary medicinal approach for ALS treatment. Phytochemicals from plant sources, including flavonoids, alkaloids, terpenes, and saponins, may bring the positive change that researchers are looking for, as they possess unique chemical diversity, and some of them cannot be synthesized by currently known methods. As a result, these natural compounds remain untapped as novel drug molecules for ALS treatment. Different scientific reports have focused on the validation of phytoconstituents isolated from various medicinal plants. The phytochemicals isolated from herbal medicines act via multiple pathways in ALS, serving as antioxidant, anti-inflammatory, and anti-apoptotic agents. The demand for natural products in the treatment of ALS has increased because of their safety and efficacy compared with conventional drugs as an alternative treatment measure. This review explains that natural products could be used as a new approach to relieving the intensity of various ALS symptoms. In addition, the review notes that natural antioxidant compounds with multiple targets, links, or pathways can be used in the modern pharmacology of ALS. However, all these data underline the importance of testing the tolerability and efficacy of natural products for ameliorating symptoms or disease progression in ALS in the context of controlled clinical trials.
CONSENT FOR PUBLICATION
Not applicable.
FUNDING
None.
CONFLICT OF INTEREST
The authors declare no conflict of interest, financial or otherwise. | 2021-04-30T06:16:43.048Z | 2021-04-28T00:00:00.000 | {
"year": 2022,
"sha1": "32d185f8895de78f30fa8a435626327323ce4981",
"oa_license": "CCBYNC",
"oa_url": null,
"oa_status": null,
"pdf_src": "PubMedCentral",
"pdf_hash": "97b29cf036c3e091e85cecc2dbc36c7790d794eb",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
225801294 | pes2o/s2orc | v3-fos-license | Characterizing Hydrological Drought and Water Scarcity Changes in the Future: A Case Study in the Jinghe River Basin of China
The assessment of future climate changes on drought and water scarcity is extremely important for water resources management. A modeling system is developed to study the potential status of hydrological drought and water scarcity in the future, and this modeling system is applied to the Jinghe River Basin (JRB) of China. Driven by high-resolution climate projections from the Regional Climate Modeling System (RegCM), the Variable Infiltration Capacity model is employed to produce future streamflow projections (2020–2099) under two Representative Concentration Pathway (RCP) scenarios. The copula-based method is applied to identify the correlation between drought variables (i.e., duration and severity), and to further quantify their joint risks. Based on a variety of hypothetical water use scenarios in the future, the water scarcity conditions including extreme cases are estimated through the Water Exploitation Index Plus (WEI+) indicator. The results indicate that the joint risks of drought variables at different return periods would decrease. In detail, the severity of future drought events would become less serious under different RCP scenarios when compared with that in the historical period. However, considering the increase in water consumption in the future, the water scarcity in JRB may not be alleviated in the future, and thus drought assessment alone may underestimate the severity of future water shortage. The results obtained from the modeling system can help policy makers to develop reasonable future water-saving planning schemes, as well as drought mitigation measures.
Introduction
Drought and water scarcity (WS) are unavoidable research topics for water resources management in water-stressed areas [1][2][3]. Drought is a natural disaster that affects a large number of people and causes huge economic losses around the world [4,5]. WS is usually caused by the combined effects of natural factors and excessive human water use, and has already become a vital obstacle to socio-economic development in many parts of the world [6]. In the context of global climate change, many traditionally arid regions may experience more severe drought events in the future, as has already been observed in the past few years [7][8][9]. Moreover, due to population growth and improving living standards, water usage is growing significantly, which will result in massive WS in numerous regions in the future [2,10,11]. Consequently, it is desirable to explore the severity of drought and further reveal the associated WS under climate change in order to manage water resources reasonably and meet the needs of sustainable development [6,11].
Drought can be classified into meteorological, agricultural, hydrological, and socio-economic types [12]. Among the four types of drought, hydrological drought (HD) can characterize the reduction of water availability.
Runoff Simulation and Drought Identification
In this study, the climate projections (2020-2099) are employed to drive the Variable Infiltration Capacity (VIC) model [40,41] to obtain future runoff for the two RCP scenarios. The VIC hydrological model is run at a 0.22-degree resolution (24-h time step), and the runoff and base flow of the simulated grid cells are then processed by a river routing model [42]. Detailed information on the VIC model can be found in Zhang et al. [43]. The VIC model is calibrated for the period 1981-1990 and validated for the period 1998-2005 before being used to generate future runoff. The performance of the VIC model is assessed by the Nash-Sutcliffe efficiency coefficient (NSE, Equation (1)) [44,45] and the coefficient of determination (R², Equation (2)).
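For reference, the standard definitions of these two goodness-of-fit statistics, computed from the observed and simulated discharges Q_obs,t and Q_sim,t over n time steps, are:

$$\mathrm{NSE} = 1-\frac{\sum_{t=1}^{n}\left(Q_{obs,t}-Q_{sim,t}\right)^{2}}{\sum_{t=1}^{n}\left(Q_{obs,t}-\overline{Q}_{obs}\right)^{2}} \tag{1}$$

$$R^{2}=\left[\frac{\sum_{t=1}^{n}\left(Q_{obs,t}-\overline{Q}_{obs}\right)\left(Q_{sim,t}-\overline{Q}_{sim}\right)}{\sqrt{\sum_{t=1}^{n}\left(Q_{obs,t}-\overline{Q}_{obs}\right)^{2}\sum_{t=1}^{n}\left(Q_{sim,t}-\overline{Q}_{sim}\right)^{2}}}\right]^{2} \tag{2}$$

An NSE of 1 indicates a perfect match between simulated and observed discharges, and values above roughly 0.5 are conventionally regarded as satisfactory for monthly runoff simulation.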
It is worth noting that the simulated annual runoff volumes are needed in the subsequent calculation of WEI+. Although NSE and R² reflect the fluctuations of the monthly runoff time series well, they cannot reflect the degree of deviation of the runoff volumes. Therefore, the deviation of runoff volumes (DV, [46]) indicator is also calculated. The closer the DV value is to 1, the smaller the volumetric deviation:

$$\mathrm{DV} = \frac{\sum_{t=1}^{n} V_{sim,t}}{\sum_{t=1}^{n} V_{obs,t}} \tag{3}$$

where V_obs,t and V_sim,t represent the observed and simulated annual runoff volumes in year t, respectively, and n is the number of data years.
The SDI is used to identify hydrological drought events and their corresponding characteristics (duration and severity). The SDI can be calculated from monthly streamflows:

$$V_{i,j} = \sum_{l=j-m+1}^{j} Q_{i,l} \tag{4}$$

where m is the time scale (which can be 1, 3, 6, 12, etc.); Q_{i,j} is the runoff in month j of year i; and V_{i,j} denotes the cumulative streamflow over the chosen time scale. The SDI values can be obtained by normalizing the V_{i,j} values [13]:

$$\mathrm{SDI}_{i,j} = \frac{V_{i,j}-\overline{V}}{s_{V}} \tag{5}$$

where $\overline{V}$ and $s_V$ represent the average and standard deviation of V_{i,j}, respectively. The thresholds of HD according to SDI values are shown in Table 1 [13]. In this study, the duration and severity of HD events are derived from monthly SDI values using the threshold approach [47][48][49]. The duration of an HD event is the time period from the beginning to the end of the event, and the severity is the cumulative deviation of the SDI below the threshold. More details of the HD identification process and the meanings of HD duration and severity are presented in Figure S1 and its explanations in the Supplementary Materials. In reality, the duration and severity of HD events are correlated with each other. An individual variable (duration or severity) can hardly provide a comprehensive description of an HD event and may thus lead to misestimation of the HD risk. Therefore, the correlation between HD variables should be carefully taken into account in HD risk analysis. Developed by Sklar [50], copulas can be used to construct multivariate distributions in a flexible way, allowing free choice of the marginal distributions and their dependence structures [51,52]. A two-dimensional copula function can be expressed as:

$$F(x, y) = C\left(F_{X}(x), F_{Y}(y)\right) \tag{6}$$

where (F_X, F_Y) represent the cumulative distribution functions (CDFs) of a two-dimensional random vector (X, Y).
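To make the identification step concrete, the following minimal Python sketch standardizes a monthly streamflow series (the 1-month SDI) and extracts (duration, severity) pairs with a run-below-threshold scan. The threshold of -1 and the synthetic gamma-distributed flows are illustrative assumptions, not values taken from this study:

```python
import numpy as np

def sdi_monthly(q):
    """SDI at the 1-month time scale: standardize the monthly streamflow series."""
    return (q - q.mean()) / q.std(ddof=1)

def drought_events(sdi, threshold=-1.0):
    """Return (duration, severity) for each run of months with SDI below `threshold`.

    Duration is the length of the run in months; severity is the cumulative
    deviation of the SDI below the threshold, following the definition above.
    """
    events, duration, severity = [], 0, 0.0
    for value in sdi:
        if value < threshold:
            duration += 1
            severity += threshold - value
        elif duration > 0:
            events.append((duration, severity))
            duration, severity = 0, 0.0
    if duration > 0:  # close an event that runs to the end of the series
        events.append((duration, severity))
    return events

rng = np.random.default_rng(42)
flows = rng.gamma(shape=2.0, scale=50.0, size=600)  # 50 years of synthetic monthly runoff
events = drought_events(sdi_monthly(flows))
durations, severities = map(np.array, zip(*events))
print(f"{len(events)} events, mean duration {durations.mean():.1f} months")
```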
Marginal distributions are fitted for the variables before establishing the bivariate distribution in Equation (6). The gamma, lognormal (LN), Weibull, generalized extreme value (GEV), Pearson type III (P-III), and log-Pearson type III (LP-III) distributions are employed as candidate marginal distributions for the HD variables. Goodness-of-fit tests for the marginal distributions are implemented using the Akaike information criterion (AIC) [53], the root mean square error (RMSE), and the Kolmogorov-Smirnov (KS) statistic test [54].
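This selection step can be sketched in a few lines of Python with SciPy's built-in distributions; the P-III and LP-III candidates are omitted here for brevity, and `best_marginal` is an illustrative helper name, not a function from this study:

```python
import numpy as np
from scipy import stats

CANDIDATES = {
    "gamma": stats.gamma,
    "lognormal": stats.lognorm,
    "weibull": stats.weibull_min,
    "gev": stats.genextreme,
}

def best_marginal(sample):
    """Fit each candidate by maximum likelihood and pick the lowest-AIC model."""
    best = None
    for name, dist in CANDIDATES.items():
        params = dist.fit(sample)
        log_lik = np.sum(dist.logpdf(sample, *params))
        aic = 2 * len(params) - 2 * log_lik
        ks_p = stats.kstest(sample, dist.cdf, args=params).pvalue
        if best is None or aic < best["aic"]:
            best = {"name": name, "aic": aic, "ks_p": ks_p, "params": params}
    return best

# e.g. best_marginal(durations) on the drought durations extracted above
```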
Because of their simplicity and good accuracy, Archimedean copulas have attracted wide attention in drought risk analyses [15,55]. Three Archimedean copulas (Frank, Clayton, and Gumbel-Hougaard) are chosen as candidate models to derive the bivariate distribution of the duration and severity of HD events. The parameters of the chosen copulas are obtained through the method-of-moments-like (MOM) estimator [56]. The Cramér-von Mises (CvM) statistic test [57], AIC, and RMSE are used for goodness-of-fit tests of the copulas.
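For the two Archimedean families with closed-form tau-theta relations, the MOM step amounts to inverting Kendall's tau; a minimal sketch follows (the Frank copula needs numerical inversion of its tau-theta relation and is left out):

```python
import numpy as np

def theta_from_tau(tau, family):
    """Method-of-moments copula parameter from Kendall's tau."""
    if family == "gumbel":
        return 1.0 / (1.0 - tau)        # Gumbel-Hougaard: tau = 1 - 1/theta
    if family == "clayton":
        return 2.0 * tau / (1.0 - tau)  # Clayton: tau = theta / (theta + 2)
    raise ValueError("Frank has no closed form; solve tau(theta) numerically")

def gumbel_cdf(u, v, theta):
    """Gumbel-Hougaard copula: C(u, v) = exp(-[(-ln u)^theta + (-ln v)^theta]^(1/theta))."""
    s = (-np.log(u)) ** theta + (-np.log(v)) ** theta
    return np.exp(-s ** (1.0 / theta))

# from scipy.stats import kendalltau
# tau, _ = kendalltau(durations, severities)   # tau was 0.60-0.67 in this study
# theta = theta_from_tau(tau, "gumbel")
```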
Drought Risk Analysis
In this study, different kinds of return periods are adopted to quantify the risk of drought events. The return period is the mean interval time between two consecutive drought events. The univariate return periods of the two HD variables can be derived by estimating the exceedance probability from the fitted marginal distributions:

$$T_{D} = \frac{E(L)}{1-F_{D}(d)} \tag{7}$$

$$T_{S} = \frac{E(L)}{1-F_{S}(s)} \tag{8}$$

where T_D/T_S denotes the return period of duration (D)/severity (S) larger than d/s; E(L) represents the average interarrival time of HD events; and F_D(d) and F_S(s) are the CDFs of duration and severity, respectively. Two types of joint return periods (JRPs) are considered in this study to quantitatively evaluate the joint risk of the two variables:

$$T_{DS}^{AND} = \frac{E(L)}{1-F_{D}(d)-F_{S}(s)+F_{DS}(d, s)} \tag{9}$$

$$T_{DS}^{OR} = \frac{E(L)}{1-F_{DS}(d, s)} \tag{10}$$

where T_DS^AND denotes the JRP when D and S respectively exceed the specific values d and s; T_DS^OR denotes the JRP when D or S is higher than the corresponding threshold (i.e., D ≥ d or S ≥ s); and F_DS(d, s) is the joint CDF of D and S.
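Equations (7)-(10) translate directly into code once the marginals and copula have been fitted; in the sketch below, the function name and all numeric values in the usage comments are hypothetical:

```python
def return_periods(d, s, F_D, F_S, copula, interarrival):
    """Univariate and joint return periods per Equations (7)-(10).

    F_D, F_S : fitted marginal CDFs of duration and severity;
    copula(u, v) : fitted copula CDF, giving F_DS;
    interarrival : mean time between drought events, E(L), in years.
    """
    u, v = F_D(d), F_S(s)
    f_ds = copula(u, v)                            # F_DS(d, s)
    t_d = interarrival / (1.0 - u)                 # Eq. (7)
    t_s = interarrival / (1.0 - v)                 # Eq. (8)
    t_and = interarrival / (1.0 - u - v + f_ds)    # Eq. (9): D >= d AND S >= s
    t_or = interarrival / (1.0 - f_ds)             # Eq. (10): D >= d OR S >= s
    return t_d, t_s, t_and, t_or

# Hypothetical wiring of the earlier pieces:
# fit_d = best_marginal(durations); F_D = CANDIDATES[fit_d["name"]](*fit_d["params"]).cdf
# fit_s = best_marginal(severities); F_S = CANDIDATES[fit_s["name"]](*fit_s["params"]).cdf
# return_periods(5.5, 2.8, F_D, F_S, lambda u, v: gumbel_cdf(u, v, theta), interarrival=1.2)
```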
Water Exploitation Index Plus (WEI+)
The WS indicator used in this study is the WEI+. It is applied to analyze the level of pressure exerted by human activities on natural water resources. WEI+ is calculated as the ratio of water withdrawal minus return water (i.e., net water consumption, in m³) to renewable water resources (in m³) [26,27]:

WEI+ = (Abstractions - Returns) / RWR (11)

where Abstractions refers to the total water resources extracted from the watershed (including surface and ground water); Returns indicates the amount of water resources that have been exploited and re-entered into the hydrological cycle; and RWR is the amount of renewable water resources. For basins influenced by human activities, two methods exist for calculating renewable water [26]:

Method 1: RWR = ExIn + P - Eta - ΔS_nat (12)

Method 2: RWR = Outflow + (Abstractions - Returns) - ΔS_art (13)

where ExIn (external inflow) indicates the amount of surface water and groundwater inflow from outside the basin; P refers to the total precipitation; Eta denotes the total evapotranspiration; ΔS_nat refers to the change of water storage in the natural environment; Outflow refers to the actual water resources flowing out of the drainage basin; and ΔS_art denotes changes in water storage in artificially regulated lakes or artificial reservoirs. Thresholds of water stress according to the WEI+ are shown in Table 1 [27,58].
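A small numerical illustration of Equations (11) and (13) follows; the RWR figure is back-calculated from the 2009-2017 averages reported later in the paper and is therefore only approximate:

```python
def wei_plus(abstractions, returns, rwr):
    """WEI+ = net water consumption / renewable water resources, Equation (11)."""
    return (abstractions - returns) / rwr

def rwr_method2(outflow, abstractions, returns, delta_s_art=0.0):
    """Renewable water resources from basin outflow, Equation (13)."""
    return outflow + (abstractions - returns) - delta_s_art

net_consumption = 1093.0          # million m^3, 2009-2017 average net consumption
rwr = net_consumption / 0.62      # implied by the reported average WEI+ of 0.62
print(round(wei_plus(net_consumption, 0.0, rwr), 2))  # -> 0.62
```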
Hydrological Model Verification and Hydrological Drought Identification
The NSE and R² calculated from monthly flow discharges are 0.80 and 0.86, respectively, for the calibration period 1981-1990, and 0.78 and 0.82, respectively, for the validation period 1998-2005. These values indicate that the runoff simulations and observations are in good agreement. Figure 2 shows the comparison between the monthly runoff simulated from the climate model projections and the gridded observations for 1987-2004 (with NSE and R² of 0.71 and 0.78, respectively). The simulated and observed annual runoff volumes produce a DV value of 1.04, which is very close to 1. All these results indicate that the simulated runoff series driven by the outputs of RegCM perform well in capturing the runoff variations in the observations, and thus the future runoff simulations can be used for the HD- and WS-related calculations.

The monthly streamflow series of observations from 1960 to 2009 and projections from 2020 to 2099 are used to generate the SDI values for both the historical period and the two future scenarios. The SDI variations, with the corresponding monthly precipitation series, under the historical and projected future scenarios are depicted in Figure 3, which helps in understanding the runoff deficit under different precipitation conditions. The average annual precipitation under the RCP4.5 and RCP8.5 scenarios would increase by 108.1 mm and 135.9 mm, respectively, compared with that in the historical period. The SDIs over the historical period range from −3.46 to 3.44, and the SDIs under the RCP4.5 and RCP8.5 scenarios have ranges of [−1.75, 5.45] and [−1.46, 4.06], respectively. The mean values of the SDIs for RCP4.5 and RCP8.5 are, respectively, 0.29 and 0.33 higher than those in the historical period. We can preliminarily conclude that hydrological droughts would be relieved under the projected future climate change scenarios.

HD events with their corresponding deficit characteristics (duration and severity) are identified from the monthly SDI series. Table 2 provides statistical information about the characteristics of HD occurrence, persistence, and severity under the historical period and the two projected future scenarios. The mean interval time of HD events would increase by 0.96 and 2.08 months under RCP4.5 and RCP8.5, respectively. The average HD duration and severity for the projected future scenarios would decrease slightly compared with those in the historical period. The maximum severity of the historical HD events is 7.97, and it would be reduced to 4.98 for RCP4.5 and 3.67 for RCP8.5. The Kendall correlation coefficient (τ) [19] between drought duration and severity under the historical and future scenarios ranges from 0.60 to 0.67, which indicates a strong correlation between the two variables.
HD events with their corresponding deficit characteristics (duration and severity) are identified from monthly SDI series. Table 2 provides the statistical information about the characteristics of HD occurrence, persistence and severity under history and two projected future scenarios. Mean interval time of HD events would be increased by 0.96 and 2.08 months in RCP4.5 and RCP8.5, respectively. Average HD duration and severity for projected future scenarios would slightly decrease when compared with those in the historical period. The maximum value of severity for historical HD events is 7.97 and it would be respectively reduced to 4.98 for RCP4.5 and 3.67 for RCP8.5. The Kendall correlation coefficient (τ) [19] between drought duration and severity under historical and future scenarios ranges from 0.60 to 0.67, which indicates that there is a strong correlation between the two variables. The monthly streamflow series of observations from 1960 to 2009 and projections from 2020 to 2099 are used to generate the SDI values for both history and two future scenarios. The SDI variations with corresponding monthly precipitation series under history and projected future scenarios are depicted in Figure 3, which helps understand the runoff deficit under different precipitation conditions. The average annual precipitation under RCP4.5 and RCP8.5 scenarios would increase by 108.1 mm and 135.9 mm respectively compared with that in the historical period. The SDIs over the historical period changes from −3.46 to 3.44, and SDIs under scenarios of RCP4.5 and RCP8.5 have ranges of [−1.75, 5.45] and [1.46, 4.06], respectively. The mean values of SDIs for RCP4.5 and RCP8.5 are, respectively, 0.29 and 0.33 higher than those in the historical period. We can preliminarily conclude that hydrological droughts would be relieved under projected future climate change scenarios.
HD events with their corresponding deficit characteristics (duration and severity) are identified from monthly SDI series. Table 2 provides the statistical information about the characteristics of HD occurrence, persistence and severity under history and two projected future scenarios. Mean interval time of HD events would be increased by 0.96 and 2.08 months in RCP4.5 and RCP8.5, respectively. Average HD duration and severity for projected future scenarios would slightly decrease when compared with those in the historical period. The maximum value of severity for historical HD events is 7.97 and it would be respectively reduced to 4.98 for RCP4.5 and 3.67 for RCP8.5. The Kendall correlation coefficient (τ) [19] between drought duration and severity under historical and future scenarios ranges from 0.60 to 0.67, which indicates that there is a strong correlation between the two variables.
Univariate and Bivariate Distributions
The univariate distributions for the HD variables are fitted based on the identified HD records in both the historical and future periods. Table 3 shows the AIC, RMSE, and KS test results for the best-fitted distributions of HD duration and severity. The gamma and LN distributions are employed for HD duration and severity, respectively, in the historical period. The GEV distribution is suitable for both HD duration and severity under the RCP4.5 scenario. The HD duration and severity under the RCP8.5 scenario are fitted with the GEV and Weibull distributions, respectively. The p-values of the KS test results for all fitted distributions are greater than 0.05, which indicates that these distributions can effectively describe the probability characteristics of the HD variables. Figure 4 illustrates the comparison of empirical and theoretical CDFs for HD duration and severity under the historical period and projected future scenarios. It shows that the theoretical CDFs from the best-fitted distributions of the HD variables are very close to the empirical distributions. The dependence structure between HD duration and severity is then characterized by the copula functions. Table 4 shows the statistical test results for the best-fitted copulas. The Clayton, Gumbel, and Frank copulas are, respectively, the best choices for modeling the joint distributions of HD duration and severity under history and the two future scenarios. Figure 5 displays the bivariate CDFs based on the fitted copulas for HD duration and severity under history and the two future scenarios. Table 4. Goodness-of-fit tests of best-fitted copulas for HD duration and severity under historical periods and projected future scenarios.
Return Periods of Hydrological Drought Events
Six return period levels from 3 to 100 years (in Figure 6) for HD duration and severity are estimated according to Equations (7) and (8). Figure 6 shows that the values of the two variables at all return periods would decrease significantly in both future scenarios, especially under RCP8.5. Under the RCP4.5 scenario, the HD durations at the 3-year and 100-year return periods would be reduced by 0.28 and 1.54 months, respectively, compared with those in the historical stage, and they would decrease by 0.4 and 2.2 months, respectively, under the RCP8.5 scenario. The drought severities at the same return periods would also decrease, by 0.15 and 1.19 under the RCP4.5 scenario and by 0.22 and 2.07 under the RCP8.5 scenario, respectively. These results imply that HD events in the JRB would be significantly reduced under the projected future scenarios. Because different combinations of HD variables can lead to the same JRP, the JRPs of the two variables in the historical period and future scenarios, calculated by Equations (9) and (10), are shown as contour plots in Figure 7. The JRPs at six levels (3, 5, 10, 20, 50, and 100 years) for the historical period and projected future scenarios are quite different. According to the contour plots, once certain values of the HD variables are given (Table 5), the corresponding JRPs are much larger in the future scenarios than in the historical period. For example, when s is 2.8 and d is 5.5 in Figure 7a, the T^AND for the historical period and the projected RCP4.5 scenario are 10 and 20 years, respectively. This means that, for the same JRP, future HD events would be less serious than those in historical situations. Table 5. Joint return periods for hydrological drought events at given duration and severity in Figure 6.
Water Scarcity in the Jinghe River Basin
The Jinghe River flows across the three provinces of Ningxia, Gansu, and Shaanxi. The annual renewable water resources produced by the JRB and the changes in water availability in major artificial reservoirs (for 2000-2017, in m³) are obtained from the Water Resources Bulletins of the three provinces. The annual net water consumption can be evaluated according to Equation (13), and the annual WEI+ of the watershed can then be obtained from Equation (11). The WEI+ values of the JRB from 2000 to 2017 are shown in Table 6. The average annual net water consumption in 2000-2008 is 881 million m³ and the average WEI+ is 0.59. The average annual net water consumption in 2009-2017 is 1093 million m³, and the average WEI+ is 0.62. Water consumption increased significantly over these 9 years, while the increase in WEI+ is not visible, which may be due to precipitation increases during these years in the JRB.

With continuing socio-economic development, water consumption is expected to increase continuously in the future [2]. To investigate possible WS conditions in the future and explore the ultimate carrying capacity of water resources in the JRB, five future water consumption scenarios are set: the future water consumption is equivalent to the average water consumption of the past nine years (2009-2017), denoted as S0; and the water consumption at the end of the century is 10%, 20%, 30%, or 40% higher than that in the past 9 years, with a constant rate of increase every ten years, denoted as S1, S2, S3, and S4, respectively. The fluctuation of water reserves in the artificial reservoirs is set to zero since it has very little impact on the total water resources. The WEI+ values for the five water consumption scenarios under the RCP4.5 and RCP8.5 climate change projections are then calculated. The frequencies of WEI+ located in different intervals for each scenario are shown in Table 7. Table 7 shows that the WEI+ would be most frequently located in the interval 0.7 to 0.8 under the five water consumption scenarios, with an average of 26.4 occurrences under RCP4.5 and 16.6 under RCP8.5. The values of WEI+ from S0 to S4 under the RCP4.5 scenario are mainly concentrated (73.5% of cases) in the interval [0.6, 0.9], while the values of WEI+ are more dispersedly distributed between 0.4 and 0.9 under RCP8.5, indicating that the water pressure would vary significantly under this scenario. For the extreme scenario S4, a WEI+ greater than 1 would be observed on six occasions under both climate change scenarios, which means that non-renewable water would have to be used, or water resources imported from outside, to meet the demand in some years. The 10-year average WEI+ is applied to reflect the water pressure fluctuations in different time periods under the different future scenarios (Table 8). The values of WEI+ at the end of the century under the S3 and S4 scenarios for RCP4.5 and the S4 scenario for RCP8.5 are all greater than 0.8, which indicates that serious WS would occur at the end of the century. The WEI+ averages of S0-S4 in each time period show that the most severe WS under the RCP4.5 scenario occurs from 2059 to 2069, with a corresponding mean WEI+ of 0.81. According to the classification of water resources pressure shown in Table 1, severe WS occurs when WEI+ exceeds 0.4, and some researchers believe that the freshwater ecosystems of a river basin cannot be healthily maintained in this state [26,27].
However, some studies argue that the utilization rate of water resources can be greatly improved, and thus a threshold of 0.6 would be a more appropriate value [27]. Although it requires further in-depth investigation to set the threshold, according to the comprehensive results of this study, the JRB will still be in a state of extreme WS for a long time under the projected future scenarios, even with a significant increase in precipitation.
Conclusions and Discussions
The modeling framework presented in this study allows us to estimate the univariate and joint risks of hydrological drought variables and also provides quantitative analyses of water scarcity under climate change. The WEI+ index was innovatively applied to evaluate water scarcity in an arid basin of China (the JRB) under climate change. A lower frequency of HD events, with lower mean duration and severity, was identified under the RCP8.5 scenario than under the RCP4.5 scenario. HD duration and severity under both RCP scenarios are projected to decrease remarkably compared with those in the past. The copula-based bivariate HD risk assessment model also reveals that climate change under the projected future scenarios would alleviate the drought situation in the JRB. However, the WS assessment results show that the shortage of water resources in the JRB would still be severe in the future.
Drought indicators can flexibly evaluate the degree to which water resources deviate from normal at different time scales, such as the monthly SDI estimated in this study. The developed modeling framework has important implications for HD risk management. However, drought assessment can only reflect the relative changes of water resources in a natural state, without taking into account future changes in water consumption. Therefore, simply assessing the severity of future droughts cannot accurately reflect the severity of water shortages, nor can decision-makers derive sound water resources management policies from it alone. Considering the increase in human water consumption in the future, the degree of water shortage may not be alleviated by the increase in precipitation and water availability. This makes it necessary to introduce quantitative assessment methods for future water scarcity evaluation.
Although setting thresholds for the WEI+ is still debatable, the authors suggest that the WEI+ thresholds for a basin should be set based on its environmental protection objectives and economic development goals. Moreover, an obvious disadvantage of the WEI+ indicator is its lack of consideration of water quality, which is closely related to the quality of the "return water" (water exploited and re-entered into the hydrological cycle). The WEI+ may be improved, or different WS assessment methods may be used, to comprehensively assess water stress and water quality in the future. However, a river basin often covers several provinces, and the conditions of water use are complex, which requires the relevant departments to collaborate more closely on basic data statistics. The results of this study are of great significance for policy makers and local stakeholders in making prospective long-term plans. Based on the assessment of future drought and water shortage status, appropriate plans can be made to support sustainable socio-economic development. This research is repeatable and can be applied to other basins in traditionally arid or semi-arid areas. In addition, due to population growth and accelerating urbanization, water shortages in large cities and their surrounding areas in some developed countries are becoming increasingly severe, and some water-rich countries have also experienced WS events in recent years. The presented modelling system is also applicable in such areas. | 2020-06-11T09:04:17.573Z | 2020-06-04T00:00:00.000 | {
"year": 2020,
"sha1": "95a957bf28db460e8425072d35ff8632d087a21a",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2073-4441/12/6/1605/pdf",
"oa_status": "GOLD",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "d874c2103307af28b1db3b21ac609aef9ed7fdc1",
"s2fieldsofstudy": [
"Environmental Science",
"Engineering"
],
"extfieldsofstudy": [
"Environmental Science"
]
} |
153987254 | pes2o/s2orc | v3-fos-license | English Grammatical Problems of Chinese Undergraduate Students
Grammar teaching and learning is necessary in foreign language teaching. However, its function and method have been argued over for decades. In the average teaching process in China, teachers divide grammar teaching into four stages: a) presentation; b) isolation and explanation; c) practice; and d) testing. Problems exist in grammar teaching in China now, including the inconsistency between the goal of teaching and real classroom teaching, neglect by teachers and learners, inappropriate textbooks, and negative learning attitudes. Based on a study of 10 English teachers and 30 undergraduate students in mainland China, this paper reports some conclusions and implications for grammar teaching in the classroom. First, understanding students' attitudes is a key factor in teaching. Second, grammatical rules should be presented and explained implicitly in certain contexts. Third, students' involvement needs to be increased. Finally, more real communicative activities are effective in class.
Introduction
Marianne Celce-Murcia (1985) maintained that noticeable and persuasive evidence shows that teaching without grammar leads to the production of clumsy and inappropriate foreign language, which means that grammar teaching is essential for language teaching. However, teachers have argued for several decades about the function and the method of grammar teaching in foreign language teaching (FLT) in China. Grammar teaching and learning is necessary in FLT. As Ur (1988a) puts it, "You cannot use words unless you know how they should be put together." For the past decades, English grammar teaching in China has been dominated by the traditional grammar-translation approach. Marianne (1979a) summarized it as follows: 1) Classes are taught in the mother tongue, with little active use of the target language; 2) Long, elaborate explanations of the intricacies of grammar are given; 3) Grammar provides the rules for putting words together, and instruction often focuses on the form and inflection of words; 4) Often the only drills are exercises in translating disconnected sentences from the target language into the mother tongue; 5) Little or no attention is given to pronunciation. This holds true in Chinese grammar teaching. As a result, the traditional method produced unsatisfactory teaching results and students lacked the ability to speak and understand English.
In the average teaching process in China, teachers divide grammar teaching into four stages: a) presentation; b) isolation and explanation; c) practice; and d) test. a) Presentation. The aim of the presentation is to get the learners to perceive the structure, its form and meaning, in both speech and writing, and to take it into short-term memory (Ur, 1988b). Teachers often read aloud the dialogue or the short story in the textbook, and then the students are asked to read, repeat, or retell it. Teachers will also ask students to make sentences with the pattern drills they have learnt. b) Isolation and explanation. According to Ur, the objective is that the learners should understand the various aspects of the structure. At this stage, teachers focus on the grammatical items: the form, the meaning, the function, and the rules. c) Practice. The aim is "to cause the learners to absorb the structure thoroughly; or, to put it another way, to transfer what they know from short-term to long-term memory" (Ur, 1988b). At this stage, teachers design a series of exercises for classroom practice or home assignments, which can help the learners absorb the grammar rules completely. d) Test. The main objective of tests within a taught course is to provide feedback, without which neither teacher nor learner would be able to progress very far. We have to know where we are in order to know where to go next (Ur, 1988c). A test is a good way to check whether the students have mastered the grammar rules they have been learning. It is also an evaluation of teachers' work.
Introduction of Communicative Language Teaching Method
In the 1980s, the communicative language teaching method (CLT) was introduced into China, which marked a great transformation in grammar teaching. Furthermore, the New Curriculum Standard was introduced in 2001, requiring more attention to the use of grammar rules in certain situations and stressing the function of sentences, not just the patterns. Students' ability to speak and listen has improved. However, some teachers and schools believe it is useless to teach grammar; they simply avoid explaining the rules in their teaching, leading to a decline in students' abilities to read and write.
In the eyes of Sandra J. S. (2002), the center of CLT is the understanding of language learning as both an educational and a political issue. This is interpreted to mean that language teaching is inextricably linked with language policy. Language learning goals and teaching strategies should vary according to the specific contexts. Therefore, program design and implementation depend on negotiation between policy makers, linguists, researchers, and teachers.
The Current Situation
Several problems exist in grammar teaching in China now.
First of all, the goals of grammar teaching under CLT cannot be realized in the classroom. The goal is to enable students to communicate in the target language (Diane, 2000a). But in real classroom teaching, the goal becomes helping students get high marks.
Second, after the rise of CLT, grammar teaching was ignored by some linguists. Some instructors maintained that it was not necessary to teach grammar, so many teachers abandoned teaching grammar. As a result, students have made more rapid progress in speaking and listening than before, but their written English still lacks accuracy.
Third, the current textbook is not appropriate. In any language teaching-learning situation, success depends on giving proper consideration to both the human elements and the non-human elements such as the textbook, the syllabus… (Marianne, 1979b). With the introduction of the New Curriculum Standard, most textbooks have been changed to meet the needs of CLT. They focus on communicative ability, while in real classroom teaching, grammar is still the focus.
Lastly, students hold negative attitudes towards grammar learning. Many students feel grammar teaching has little effect on their practical ability to use English, especially in listening and speaking. They find the presentation and explanation of grammar rules in class dull and demotivating.
What, then, is the current situation? Do students regard grammar as something they only need to review before their exams, so they can get higher marks?
The authors of this article undertook a small study to find out the attitudes of students and teachers towards grammar, their practices in grammar learning and teaching, and their attitudes towards the grammatical knowledge in textbooks.
Method
The participants of this research included students and teachers, all from one technical university in China. The student participants are freshmen majoring in engineering, and the teacher participants are those who teach the freshmen. The primary instruments were questionnaires and a writing task. The questionnaires comprised one for students and one for English teachers, and were written in Chinese to avoid any misunderstanding by the subjects.
The questionnaires were distributed to 10 English teachers and 30 students. One of the English teachers explained the purpose of the research and guided the students to give true responses to the questions. The students spent no more than 10 minutes completing the questions and 15 minutes finishing the writing assignment.
All 10 teachers and 30 students handed back the questionnaire. All the teachers' questionnaires were valid, but only 26 of the students' questionnaires were, because four students selected all the choices or made the same choice for every question.
Results from the questionnaire
As a whole, most of the students (56%) and teachers (71%) thought that grammar plays an important role in the mastery of English.
From their responses, 41.6% of students and most of the teachers (78%) agreed that the amount of grammar in the textbook was suitable. We also learnt that 38.4% of students thought there was too much grammar in the textbook. Few students (18%) and teachers (6%) thought the grammatical knowledge in the textbook was not enough.
Most of the students (89%) and about 90% of the teachers agreed that teachers emphasize grammar teaching in class. This tells us that most teachers emphasize grammar teaching in practice.
Interestingly, most students thought their teacher often used deductive ways to explain grammar rules, while most teachers thought they often applied inductive ways to teach grammar. They held different ideas about the same question.
Most students thought their teacher corrected the errors in their oral English immediately, while more than half of the teachers said they corrected them afterwards. This tells us that the teachers' evaluation of their teaching practice does not agree with their real practice in class.
Most teachers said they collected all kinds of errors in students' compositions and corrected them in front of the whole class, while most students reported that teachers only made a mark where there were mistakes.
At the same time, 79% of the students thought their weakness in learning grammar was that they could neither memorize the grammatical rules nor apply them in a correct way. As for the teacher participants, half of them thought students failed both to memorize and to apply the rules, and half thought their students only failed to apply them correctly.
Analysis of the written task
As EFL teachers, we encounter many errors, particularly in verb forms, when marking students' written work, because in the Chinese language verb forms do not change to indicate tense or person; these are indicated by other words or by the context. Therefore, students often feel confused when writing in English and tend to transfer Chinese grammatical patterns into their English writing.
We collected 26 compositions from freshmen who had learned all the basic grammar rules in middle school and should have mastered a vocabulary of about 3,000 words. They were asked to write a short paragraph, for which they were given an outline. When their work was collected, a number of problems came to light. They are shown in Table 1, and examples of the errors are given in Appendix A.
As may be seen from the table, the most frequent error was the incorrect use of modal verbs. Other frequent errors included misuse of tense, confusion of verbs and nouns, and mismatched or missing prepositions. On the other hand, there were few errors involving confusion of the active and passive voice or misuse of infinitive clauses.
Conclusion
Correctly understanding students' attitudes towards grammar is a key factor in teaching. With the development of the communicative approach, some teachers think that teaching grammar is old-fashioned and that present English teaching should aim at developing students' speaking and listening ability. These Chinese teachers have not captured the real meaning of the communicative approach. CLT does not exclude grammar teaching; instead, CLT aims broadly to apply theoretical perspectives of the Communicative Approach by making communicative competence the goal of language teaching and by acknowledging the interdependence of language and communication (Diane, 2000b).
Grammatical rules help students to understand and use the target language better if they are presented and explained implicitly in certain contexts. That is to say, the students are first presented with the target grammar items in context. Under the instruction of teachers, the students can deduce the grammar usage from the context and then apply the rules to real situations.
In addition, teachers should increase and encourage students' involvement. This could be done, for example, through classroom discussion, speech contests, establishing an English corner, forming an English club, etc. Common classroom activities such as problem-solving tasks will encourage students to talk and discuss the problem to find a solution. Discussions and debates can take learners one step further: they can provoke spontaneous, fluent language use when learners exchange opinions. Role-play can be a process-oriented group or pair technique, which is effective for practicing doing things in the target language. All these suggested activities provide an appropriate platform for the students to communicate in English. In such an environment, students may have low anxiety, good self-confidence, and high motivation. In short, the teacher should make the lesson amusing, stimulating, and challenging, so that students are fully engaged during the whole lesson. At the same time, we need to bear in mind that the discourse learnt by students cannot be separated from comprehension and expression. Students should therefore be encouraged to focus on the whole material rather than on isolated forms.
Table 1. Categorized mistakes in students' compositions | 2017-09-07T20:21:57.989Z | 2010-05-18T00:00:00.000 | {
"year": 2010,
"sha1": "d61782189219b6ed42674a86f844450968f22398",
"oa_license": "CCBY",
"oa_url": "https://ccsenet.org/journal/index.php/elt/article/download/6238/5010",
"oa_status": "HYBRID",
"pdf_src": "Anansi",
"pdf_hash": "d61782189219b6ed42674a86f844450968f22398",
"s2fieldsofstudy": [
"Linguistics",
"Education"
],
"extfieldsofstudy": [
"Psychology"
]
} |
1450604 | pes2o/s2orc | v3-fos-license | Protective effect of the omega-3 polyunsaturated fatty acids: Eicosapentaenoic acid/Docosahexaenoic acid 1:1 ratio on cardiovascular disease risk markers in rats
Background High consumption of fish carries a lower risk of cardiovascular disease as a consequence of dietary omega-3 long chain polyunsaturated fatty acid (n-3 PUFA; especially EPA and DHA) content. A controversy exists about the component/s responsible of these beneficial effects and, in consequence, which is the best proportion between both fatty acids. We sought to determine, in healthy Wistar rats, the proportions of EPA and DHA that would induce beneficial effects on biomarkers of oxidative stress, and cardiovascular disease risk. Methods Female Wistar rats were fed for 13 weeks with 5 different dietary supplements of oils; 3 derived from fish (EPA/DHA ratios of 1:1, 2:1, 1:2) plus soybean and linseed as controls. The activities of major antioxidant enzymes (SOD, CAT, GPX, and GR) were determined in erythrocytes and liver, and the ORAC test was used to determine the antioxidant capacity in plasma. Also measured were: C reactive protein (CRP), endothelial dysfunction (sVCAM and sICAM), prothrombotic activity (PAI-1), lipid profile (triglycerides, cholesterol, HDLc, LDLc, Apo-A1, and Apo-B100), glycated haemoglobin and lipid peroxidation (LDL-ox and MDA values). Results After three months of nutritional intervention, we observed statistically significant differences in the ApoB100/ApoA1 ratio, glycated haemoglobin, VCAM-1, SOD and GPx in erythrocytes, ORAC values and LDL-ox. Supplementation with fish oil derived omega-3 PUFA increased VCAM-1, LDL-ox and plasma antioxidant capacity (ORAC). Conversely, the ApoB100/ApoA1 ratio and percentage glycated haemoglobin decreased. Conclusions Our results showed that a diet of a 1:1 ratio of EPA/DHA improved many of the oxidative stress parameters (SOD and GPx in erythrocytes), plasma antioxidant capacity (ORAC) and cardiovascular risk factors (glycated haemoglobin) relative to the other diets.
Fish is the major food source of DHA and EPA, which are carried in the circulation as triglycerides and, especially, phospholipids [1]. Several experimental studies show that the n-3 PUFA perform several functions in relation to membrane structure and function, tissue metabolism, and gene regulation [2]. These fatty acids play important roles in reducing hypertriglyceridaemia [3,4], low density lipoprotein cholesterol (LDLc), and very low density lipoprotein cholesterol (VLDLc), and in increasing high density lipoprotein cholesterol (HDLc) concentrations [5], as well as various components of these molecules, e.g. ApoA1 and ApoB100 of HDL and LDL/VLDL respectively. EPA and DHA also improve hypertension [6], insulin sensitivity and glycaemia [7]. Oxidative stress and vascular endothelial dysfunction also play a critical role in the pathogenesis of CVD. Increased oxidative stress underlies the pathophysiology of hypertension and atherosclerosis by directly affecting the cells of the vascular wall [8]. Higher levels of soluble intercellular adhesion molecule-1 (sICAM-1) and soluble vascular cell adhesion molecule-1 (sVCAM-1) have been associated with an increased risk of ischaemic disease and peripheral artery disease mediated, in part, by C reactive protein (CRP) [9]. Type 1 plasminogen activator inhibitor (PAI-1) is also related to metabolic syndrome, obesity, and CVD [5].
The amount of n-3 PUFA necessary to provide health benefits is unknown [10], as are the proportions of EPA and DHA that provide the greatest benefit. The majority of the clinical studies carried out to date use fish-oil-derived dietary supplements, but with a higher EPA/DHA ratio than that commonly found in the fish themselves [11,12]; such supplements may therefore not reproduce in vivo the effects of the ratio contained in fish. EPA and DHA derived from fish oils have demonstrable cardiovascular disease benefits in observational studies and experimental trials, which have mainly investigated their effects in combination. As such, little is known of the potentially different effects of EPA and DHA, especially regarding which has the better protective effect on CVD [2].
Hence, our present study seeks to determine the proportion of EPA/DHA that is best able to achieve a protective effect of the n-3 PUFA on CVD risk factors. Three dietary interventions with an optimal n-3/n-6 ratio and different EPA/DHA ratios (1:1, 2:1, 1:2) were evaluated in a healthy animal model. Soybean and linseed oils were used as control diets. Soybean oil is a rich source of linoleic acid (LA, 18:2 n-6), while linseed oil has an elevated content of alpha-linolenic acid (ALA, 18:3 n-3) [13].
Parameters of oxidative stress, inflammation, endothelial dysfunction, prothrombotic state, protein glycation, lipid peroxidation, and lipid profile were determined as risk factors or biomarkers indicative of CVD risk.
Antioxidant status and oxidative stress
The biomarkers of antioxidant status and oxidative stress are summarised in Table 1.
The concentrations of antioxidant enzymes in erythrocytes indicated an activation of these enzymes in fish-oil diets.
SOD values were higher in 1:1 EPA/DHA diet followed by 2:1 EPA/DHA, compared to the other 3 diets. GPx values were also higher in 1:1 and 2:1 EPA/DHA. There was a trend, albeit not statistically significant, towards higher CAT and lower GR values in fish-oil diets compared to the control diets.
These results indicated that fish-oil diets, especially 1:1 and 2:1 EPA/DHA, yielded better antioxidant enzyme values than did the soybean and linseed oils.
The two control diets (soybean and linseed) had no significant differences between them with respect to the values of erythrocyte antioxidant enzymes.
Finally, the plasma antioxidant capacity (ORAC) was significantly higher in the 1:1 diet than in the other diets. This result is in agreement with the high SOD and GPx values found in this supplemented group.
Lipid peroxidation
The mean LDL-ox values indicated higher oxidation in fish-oil diets than controls. LDL-ox of diets with 2:1 and 1:2 EPA/DHA ratios were significantly higher, compared to soybean and linseed diets. The EPA/DHA (1:1) group did not show significant differences with respect to control groups (Table 1).
Mean values of MDA in the liver were not significantly different between groups.
Lipid profile
TG, CHOL, LDLc, HDLc, LDLc/HDLc, Apo A1 and Apo B100 were not statistically significantly different between supplemented groups (Table 2). TG, LDLc and HDLc values were within the reference ranges observed in other studies [14,15], whereas CHOL concentrations were increased in all the groups compared to that observed by Levy et al. [14].
The linseed-oil diet group had significantly higher values of the ApoB100/ApoA1 ratio, compared to the 1:1, 2:1 and the soybean-oil diets.
Glycaemia control and insulin resistance
The post-intervention glucose concentrations were within the laboratory reference range in all the groups at <14 mM [16,17], but the glucose decreases obtained at the end of the experiment were greater in 1:1 and 2:1 EPA/DHA with respect to the other diets ( Table 2).
All EPA/DHA diets showed significantly lower values of glycated haemoglobin, relative to the linseed and soybean oil diets. Glycated haemoglobin concentrations were not significantly different among the 3 EPA/DHA diets ( Table 2).
The initial concentrations of insulin were within reference range values (3.5 -4.4 ng/mL) described by other authors [17,18] in all groups (results not shown). The insulin increases observed at the end of the experiment were not significantly different relative to the baseline values, and without differences between groups ( Table 2).
Following nutritional intervention, the Wistar rats had HOMA values within laboratory reference ranges (<14) described by other authors [19]. The linseed group had significantly lower values compared to the EPA/DHA 1:1 and soybean supplemented groups (Table 2).
Cardiovascular disease risk biomarkers
The linseed diet decreased sVCAM significantly with respect to the fish-oil diets, while the 2:1 diet had significantly lower values than the 1:2 diet (Table 3). The soybean group had lower values of PAI-1 than the 1:2 EPA/DHA group. No statistically significant differences were observed between the two control diets with respect to these biomarkers (Table 3).
No significant differences were observed in CRP and in sICAM concentrations.
Discussion
Apart from the n-3 PUFA, fish oils contain amino acids, vitamins, selenium and other minerals which contribute to the cardiovascular benefit. The majority of studies with purified EPA or DHA have demonstrated the bioactivity and effectiveness of these fatty acids; the implication being that the substantial CVD benefit of fish oil consumption is related to the n-3 PUFA content [2]. In our study, the supplemented dose of fish oils (extrapolated to animals) provides approximately double the amount of EPA and DHA of the European Union's recommendation for the maintenance of normal blood concentrations of triglycerides in adults [20].
However, all diets had a similar fat and energy content and, hence, the observed differences can be attributed to the different ratios of EPA and DHA.
Moreover, the reason for using a weekly administration of the oils was that, in a previous test, daily feeding proved very stressful for the animals. Accordingly, a weekly dose was chosen, as already described in the article by Méndez et al. [13]. (In Tables 1-3, the data are expressed as mean ± SD; superscripts a, b, c and d denote differences with respect to the EPA/DHA 1:1, EPA/DHA 2:1, EPA/DHA 1:2 and soybean oil supplementations, respectively.)
The influence of the dietary interventions on the levels and composition of plasmatic FFA was also evaluated in a recently reported study [13] in which we demonstrated that fish oil supplementation did not change the total amount of plasmatic FFA, but altered the profile of the individual FFA. Animals fed fish oils exhibited significantly higher levels of EPA (20:5 n-3) and DHA (22:6 n-3) compared to those fed soybean oil. Supplementation with linseed oil provided similar levels of EPA to those observed in the FFA fraction of animals supplemented with fish oil; however, the amount of DHA in the FFA fraction was intermediate between that of animals supplemented with fish oils and that of animals supplemented with soybean oil. Animals fed linseed oil showed the highest amount of free ALA (18:3 n-3), in agreement with the elevated content of ALA in the linseed oil [13].
Oxidative stress
Fatty acids with a greater degree of unsaturation in their molecular structure are more easily oxidised. As a consequence, diets rich in fish-oils are predisposed to causing increased oxidative damage in humans and animals. However, other studies have shown that diets supplemented with fish oils do not increase cellular oxidative damage, but may even exert an antioxidant effect [21,22].
Our results indicate that the improvement in the activity of SOD and GPx may explain the higher plasma antioxidant capacity (ORAC) in the EPA/DHA 1:1 supplemented group. In another study, an n-3 PUFA diet corrected the decreased ORAC values in diabetic rats, probably due to increased erythrocyte antioxidant enzymes SOD and GPx [11]. Our results corroborate this hypothesis.
Further, these results concur with those recently reported [13] in which we demonstrated that fish-oils, especially EPA:DHA 1:1, averted protein carbonylation in plasma and liver. All these findings favour diets rich in EPA and DHA with respect to antioxidant enzymes; the proportion of 1:1 having a higher relevance than others.
Lipid peroxidation
To date, fish-oil supplements have not been unequivocally shown to prevent oxidation of LDL particles [23]. In our study, plasma oxidised LDL was higher in the groups supplemented with fish-oil n-3 PUFA. However, EPA/DHA 1:1 showed lower values than the other two diets with n-3 PUFA, and significant differences existed only among the groups supplemented with 2:1 and 1:2 diets, relative to soybean and linseed.
LDL-ox was lower in the linseed oil group, i.e. ALA was more effective, in this case, than the administration of EPA and DHA. This is contrary to what occurs in antioxidant protection and protein oxidation, and may be due to a dual effect of ALA: on the one hand, it has fewer unsaturated bonds than EPA and DHA and is hence less susceptible to oxidative attack; on the other hand, although the rate of conversion from ALA to EPA and DHA in the organism is low, small quantities of these PUFA can contribute to an increase in the levels of antioxidants that protect LDL.
There were no differences observed in liver MDA among the different groups, despite differences in the consumption of n-3 PUFA. As such, although the diets rich in fish oils do not exert a clear protection against plasma LDL oxidation, liver lipoperoxidation was not affected in these groups relative to the soybean and linseed oil groups.
Few studies have evaluated MDA levels in the liver following fish-oil administration, and the results have been contradictory. While some studies concluded that the administration of fish oil for 2 months decreased MDA in rats with partial hepatectomy [24], other studies demonstrated, following 2 months of nutritional intervention in rats with experimental non-alcoholic fatty liver disease, that fish-oil-derived n-3 PUFA increased liver MDA concentrations and promoted severe fatty liver [25]. These differences may be due to the different proportions of fatty acids, and of other molecules with antioxidant capacity such as vitamins C and E, contained in the oils, which also affect the redox status of the organism.
Lipid profile
None of the supplementations increased the body weight or the amount of abdominal adipose tissue of the animals. We need to bear in mind that the intake of total fat was the same in all groups of animals. There were trends, albeit not statistically significant, towards a decrease in triglycerides, total cholesterol, LDLc and ApoB100 in the group supplemented with the EPA/DHA 1:1 diet, relative to the other groups. Decreases in these factors would support a CVD-protective role for the EPA/DHA 1:1 diet.
The Apo B100/Apo A1 ratio was significantly reduced by 1:1, 2:1 and soybean diets, compared to supplementation with linseed oil. Soybean and fish-oil PUFA reduce the risk of cardiovascular disease, essentially by improving the lipid profile, as has been shown by other studies [3,5].
Glycaemia control and insulin resistance
Fish-oils are reported to be especially efficient in improving glycated haemoglobin concentrations in the circulation. Our study further highlighted that the 1:1 ratio of EPA/DHA induced the most beneficial improvement in this factor.
There is a dearth of physiological data regarding the effect of n-3 PUFA on glucose homeostasis and insulin resistance in healthy rats. Glycated haemoglobin decreased on average by 33% in the groups receiving fish-oil supplements compared to those receiving soybean and linseed diets at the end of the nutritional intervention. This result concurs with those observed by our group on the protective effect of supplementation with EPA/DHA on the carbonylation of proteins [13]. Diets enriched with fish oil decreased protein carbonylation, especially the diet with 1:1 ratio of EPA/DHA. The significant decreases in HbA1c and non-increase in circulating glucose values observed in the study, indicate a beneficial effect of marine-fish-derived long chain n-3 PUFA in healthy animals. These changes would contribute towards the prevention of some diseases such as metabolic syndrome and obesity [5,7].
In our study, all the groups had values of insulin and HOMA index within the reference range [18,19,26]; the linseed diet group showing the lowest values in these parameters. As such, the n-3 PUFA did not provoke increases in plasma insulin in healthy rats.
Conclusions
Our results demonstrate a positive protective effect of fish-oil supplementation in vivo. Specifically, these beneficial protective effects depend on the different proportions of EPA and DHA; the 1:1 proportion of EPA:DHA being the most beneficial since it improved antioxidant status, oxidised LDL, Apo-B100/Apo-A1 ratio, and glycated haemoglobin.
Animals
This study was conducted in compliance with the norms of the Ethics Committee for Animal Research at the Centro Superior de Investigaciones Científicas, Spain.
Female Wistar rats (n = 35; 13 weeks of age) were purchased from Janvier (Le Genest St-Isle, France) and acclimatised for 8 days prior to the initiation of the study. The animal room was maintained at a temperature of 22 ± 2ºC and 50-60% relative humidity with a 12 h light/dark photoperiod. The animals were fed a standard pellet diet (Panlab A04, Barcelona, Spain) and had free access to bottled water and food, except for a fasting period before sacrifice.
Animals were randomly assigned to five groups (7 rats each).
Supplementation
Three groups had dietary supplements of fish oils containing different proportions of EPA/DHA (1:1, 2:1 and 1:2). The 4th group was fed soybean oil and the 5th linseed oil. Oils differing in EPA:DHA ratio were obtained by mixing appropriate quantities of the commercial fish oils. Soybean oil was obtained from unrefined organic soy oil (first cold pressing), and linseed oil from unrefined organic flax oil (first cold pressing). All diets had a similar fat and energy content [13].
Feeding the selected oil involved weekly oral doses of 0.8 mL/kg body weight for 13 weeks, administered by gavage.
Because of the high predisposition of fish oils to peroxidation, we established quality control to ensure that the oils did not oxidise during the nutritional intervention [13].
At the end of the study, the animals were anaesthetised with ketamine/xylazine (80/10 mg/kg, respectively) by intra-peritoneal injection. The animals were sacrificed by cardiac puncture and exsanguinated. Plasma, serum and erythrocytes were stored at -80ºC until required for batched analyses. Livers were removed, washed in phosphate buffered saline, weighed, immediately frozen in liquid nitrogen and stored at -80ºC until required for processing.
Oxidative stress Antioxidant enzymes in erythrocytes
Superoxide dismutase (SOD) [27], catalase (CAT) [28], glutathione peroxidase (GPx) [29] and glutathione reductase (GR) [30] activities were determined in erythrocytes using standard spectrophotometric methods. The units of measurement for SOD, GPx and GR are expressed as U/g Hb, and as mmol/min/g Hb for CAT.
Plasma antioxidant capacity
Plasma antioxidant capacity was measured as the oxygen radical absorbance capacity (ORAC method) [31]. The assay measures the oxidative degradation of fluorescein after being mixed with free radical generators such as azo-initiator compounds. The units of measurement are μmol trolox-equivalent/mL plasma (μmol TE/mL).
Lipid peroxidation
Lipid peroxidation was calculated by measuring MDA concentrations using high performance liquid chromatography with fluorescence detection (HPLC-FL). The results are expressed as micrograms of malondialdehyde per gram of liver [32].
Oxidised LDL
Plasma oxidised LDL (LDL-ox) was measured using ELISA kits (CUSABIO BIOTECH, China) according to the manufacturer's instructions. The units of measurement are expressed as ng/mL.
Cardiovascular disease risk factors Lipid profile
The lipid profiles consisting of total plasma triglycerides (TG), cholesterol (CHOL), HDL cholesterol (HDLc) and LDL cholesterol (LDLc) were measured by spectrophotometric methods (SPINREACT kits, Spain). ApoA1 and ApoB100 were measured using ELISA kits (CUSABIO BIOTECH, China). The units of measurement are expressed as mg/dL for TG, CHOL, HDLc, LDLc and as mg/mL for APO A1 and APO B100.
Glycaemia control and insulin resistance Glucose
Animals were fasted for 24 hours and a blood sample taken from the saphenous vein. The capillary blood was spotted on glucose strips and read in the Ascensia Elite XL glucometer (Bayer Consumer Care AG, Basel, Switzerland). The results are expressed in mmol/L.
Glycated haemoglobin
Glycated haemoglobin was measured using a spectrophotometric kit method (SPINREACT, Spain). The units of measurement are expressed as percentage of total haemoglobin.
Insulin
Insulin was measured using an ELISA kit (Millipore Corporation, Billerica, MA, USA). The results are expressed in ng/mL.
HOMA index
The HOMA index (Homeostatic Model Assessment) is an estimate of insulin resistance and is calculated as: fasting insulin (μU/mL) × fasting glucose (mmol/L) / 22.5.
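As a quick check of the formula above, a minimal Python sketch (ours, not the authors' code) is shown below; note that the study reports insulin in ng/mL, so an assay-dependent unit conversion to μU/mL would be needed before applying the formula.

```python
def homa_index(fasting_insulin_uU_per_mL: float,
               fasting_glucose_mmol_per_L: float) -> float:
    """HOMA index as defined in the text:
    fasting insulin (uU/mL) * fasting glucose (mmol/L) / 22.5.

    The study measures insulin in ng/mL; converting to uU/mL first is
    assay-dependent and left to the caller.
    """
    return fasting_insulin_uU_per_mL * fasting_glucose_mmol_per_L / 22.5

# Example: fasting insulin 10 uU/mL and fasting glucose 5.5 mmol/L
print(round(homa_index(10.0, 5.5), 2))  # -> 2.44
```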
Biomarkers of cardiovascular disease risk
Plasma CRP, sICAM, sVCAM and PAI-1 were measured by ELISA kits (CUSABIO BIOTECH, China). The results are expressed as μg/mL for CRP, sVCAM and PAI-1 and in ng/mL for sICAM.
Statistical analysis
Results were expressed as means and standard deviations for each group of dietary supplementation. The data were analysed for differences between groups using one-way analysis of variance (ANOVA). SPSS IBM 19 for Windows was used throughout. When significant differences were found, the means were compared using the Scheffé post-hoc test. Statistical significance was set at p < 0.05. | 2017-06-16T14:40:37.064Z | 2013-10-01T00:00:00.000 | {
"year": 2013,
"sha1": "a79214056cdd22e36476ddc16c74a0d184cf7b45",
"oa_license": "CCBY",
"oa_url": "https://lipidworld.biomedcentral.com/track/pdf/10.1186/1476-511X-12-140",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "a79214056cdd22e36476ddc16c74a0d184cf7b45",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Chemistry",
"Medicine"
]
} |
10395358 | pes2o/s2orc | v3-fos-license | Mass casualty modelling: a spatial tool to support triage decision making
Background During a mass casualty incident, evacuation of patients to the appropriate health care facility is critical to survival. Despite this, no existing system provides the evidence required to make informed evacuation decisions from the scene of the incident. To mitigate this absence and enable more informed decision making, a web-based spatial decision support system (SDSS) was developed. This system supports decision making by providing data regarding hospital proximity, capacity, and treatment specializations to decision makers at the scene of the incident. Methods This web-based SDSS utilizes pre-calculated driving times to estimate the actual driving time to each hospital within the inclusive trauma system of the large metropolitan region within which it is situated. In calculating and displaying its results, the model incorporates both road network and hospital data (e.g. capacity, treatment specialties, etc.), and produces results in a matter of seconds, as is required in an MCI situation. In addition, its application interface allows the user to map the incident location and assists in the execution of triage decisions. Results Upon running the model, driving time from the MCI location to the surrounding hospitals is quickly displayed alongside information regarding hospital capacity and capability, thereby assisting the user in the decision-making process. Conclusions The use of SDSS in the prioritization of MCI evacuation decision making is potentially valuable in cases of mass casualty. The key to this model is the utilization of pre-calculated driving times from each hospital in the region to each point on the road network. The incorporation of real-time traffic and hospital capacity data would further improve this model.
Introduction
On July 7th, 2005, a series of terrorist attacks shook the London transit system [1]. Four bombs exploded almost simultaneously in a coordinated attack that left the city in a state of chaos [2]. Based on the sheer number of casualties, the incident has been described as the largest mass casualty incident in the United Kingdom since World War Two. Altogether, 775 people were injured in the attack, of which 56 died and 55 were critically injured. Casualties were divided amongst six hospitals within the city, based on hospital proximity, capacity and capability [2].
The following paper describes a spatial decision support system (SDSS) intended to help determine where best to evacuate patients during a mass casualty incident (MCI) of this type.
Mass casualty incidents are those that, by the sheer number and severity of casualties, overwhelm the health care capacity within a given community [3][4][5]. This definition emphasizes the crucial role played by triage and trauma centers in maximizing capacity during a mass casualty incident [6]. A concept that originated on the battlefield, triage, meaning 'to sort' in French, is one of the critical factors in the effective management of mass casualty incidents and refers to the process of prioritizing medical care based on the medical condition of the patient [7][8][9].
Intended to simplify and support evidence-based decisions concerning the evacuation of critically injured patients from an MCI location, this SDSS provides the information required by emergency service personnel at the MCI location to make decisions in what is typically a highly stressful and often chaotic situation. In addition to providing, within a matter of seconds, critical information describing hospital driving time/proximity, trauma level and bed capacity, the model is also useful within a planning context. For example, the model can be used to examine proposed locations for large-scale events, conferences, etc. in relation to health care facilities, or to help determine where to position a mobile health facility in relation to the event.
Spatial models have been used within emergency medical services (EMS) for some time. Location allocation models, for example, are used to position facilities so as to optimize services to customers. In EMS, such models are focused on the optimization of ambulance locations in order to maximize coverage [10][11][12][13]. These models have evolved from the simple static models first developed 30 years ago to incorporate dynamic circumstantial changes. For example, such models can determine how best to fill the gap in coverage that is created when an ambulance within a particular geographical catchment is dispatched. In recent years, there have been a handful of attempts to optimize ambulance response times using models that incorporate dynamic traffic changes [14][15][16]. Advances in computer technologies that support decision making have made this process easier.
Combining geographic information systems (GIS) with decision support systems (DSS), Spatial Decision Support Systems (SDSS) were first introduced in the mid-1980s [17,18]. Decision support systems consist of distinct data management, model and interface components. Spatial Decision Support Systems add the visualization of spatial attributes, while Geographic Information Systems enable spatial data to be stored, manipulated and displayed. SDSS provide the ability to solve and simplify complex spatially-oriented problems [19][20][21][22]. In recent years a new kind of SDSS has emerged; one that relies on the web as a platform for interaction with the user. Made possible by increases in the speed of data transfer between client and server computers, web-based SDSS enable greater information sharing and heightened use by non-experts [23]. Web-based SDSS also allow for the building of customized GIS applications that can be used with a remote server. These applications are platform independent and therefore more widely accessible. They are also purpose-built, with tailored commands and functions making the application simpler to operate and understand than a full-blown desktop application [24][25][26]. To date, no known modelling of MCI evacuation priorities has been undertaken and no emergency service models have been created to aid in evacuation prioritization. While there have been a few attempts to model optimal EMS routing to the scene of an incident, there has been only one known attempt to model the return [15,16]. Drawing inspiration from the EMS models described above, the SDSS proposed within this paper also incorporates the use of GIS in the calculation of road network driving times.
Data
Two sets of data were used in constructing this model: road network data and hospital location data. The road data for metro Vancouver, obtained through GIS Innovations [27], is highly suitable for calculating travel time as it incorporates both speed limits and travel impedances (e.g. stop signs, traffic lights) which, in turn, allow for accurate travel time calculation. The data also provides the ability to control travel and impedance times. This is important, as travel times for an ambulance will differ from those of a regular vehicle. The fact that this data enables control of such variables heightens the accuracy of the results. The road network dataset used in this study excluded back roads and logging roads in order to focus on the more populated sections of the study area. Excluding these smaller roads also helped to reduce the database size.
The second set of data utilized in this study is comprised of the locations of participating hospitals within the metro Vancouver region. In addition to geocoded hospital locations, the hospital dataset also attaches attributes describing the hospital's capacity to receive patients in the case of a mass casualty incident and the type of treatment a given hospital is able to provide (Table 1). For trauma services, the range of services includes ICU, neurosurgery, orthopedics and plastic surgery. The hospitals are represented as a set of GIS point features and are geocoded as close to the main emergency room access as possible. As large hospitals can span several street blocks, geocoding the ER location rather than the hospital centroid can produce more accurate driving time results.

Table 1. Trauma center designation in Canada [28] (partial). Level of Care 1: central role in the provincial trauma system, providing the majority of tertiary/quaternary major trauma care in the system, together with academic leadership, teaching and a research program.

In order to obtain results in a more immediate fashion, this model utilized pre-calculated driving times from each location on the road network to each hospital in the study area. Before pre-calculating the driving times, the data first had to be discretized to a length which would minimize the effect on actual driving time calculation. By restricting the length of the discretized road segments to a maximum of 200 m, it was determined that accurate driving times could be achieved without negatively affecting either the results or the size of the road dataset. The same road data used for the driving time calculation was also used to create the road segments. Close examination of the GIS Innovations [27] data indicated that the road segments within the data varied drastically in length, with segments both much smaller and much larger than 200 m. After several experiments, it was found that leaving all road segments below 200 m unchanged and subdividing all road segments larger than 200 m to the 200 m maximum worked most effectively. The 200 m street segments provided accurate driving times while also keeping the size of the database manageable. The resulting dataset contains road segments of varying lengths, with no segment larger than the 200 m maximum.
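To make the discretization step concrete, the sketch below shows one way segments longer than 200 m could be subdivided. It is our illustration (using shapely and invented coordinates), not the authors' implementation, and assumes a projected coordinate system with metre units.

```python
import math
from shapely.geometry import LineString
from shapely.ops import substring

MAX_LEN = 200.0  # metres; assumes the road data is in a metric projected CRS

def discretize(segment: LineString, max_len: float = MAX_LEN):
    """Leave segments <= max_len unchanged; split longer segments into
    equal pieces no longer than max_len, mirroring the preprocessing
    described in the text."""
    if segment.length <= max_len:
        return [segment]
    n_pieces = math.ceil(segment.length / max_len)
    step = segment.length / n_pieces
    return [substring(segment, i * step, (i + 1) * step)
            for i in range(n_pieces)]

road = LineString([(0, 0), (650, 0)])        # a hypothetical 650 m segment
pieces = discretize(road)
print([round(p.length, 1) for p in pieces])  # -> [162.5, 162.5, 162.5, 162.5]
```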
In order to calculate driving time from each road segment to each hospital, each road segment was converted into a centroid. The ODMatrix function within ESRI ArcGIS network analyst was then used to calculate driving time to each hospital. The ODMatrix function calculates the shortest driving time from each point of origin to each destination on the road network, producing a 'drivingTime' table which contains a unique ID for each centroid plus the driving time in minutes to each hospital [29]. In order to attain greater accuracy, an impedance time value was obtained from experienced paramedics and assigned to both stop signs (5 seconds) and traffic lights (10 seconds). The table also contains a hospital unique ID for each destination hospital. Once this table was created, the centroid ID was reassigned to its road segment so that the user could click on the road segment and retrieve its unique ID (Figure 1). The road data set consisted of a road segment shapefile within which each segment was related to the driving time table through a one-to-many relationship.
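The OD precomputation itself was done with ESRI ArcGIS Network Analyst; the following sketch merely illustrates the underlying idea (shortest-path times over edge travel times plus the stated 5 s / 10 s impedances) on a toy graph using networkx. The node names, speeds and network are invented for illustration.

```python
import networkx as nx

def travel_seconds(length_m, speed_kmh, control=None):
    """Edge traversal time: free-flow time plus the impedance values
    reported in the text (5 s per stop sign, 10 s per traffic light),
    here approximated by attaching the impedance to the incoming edge."""
    t = length_m / (speed_kmh / 3.6)
    return t + {"stop": 5.0, "light": 10.0}.get(control, 0.0)

# Toy network: centroids of 200 m road segments plus one hospital node.
G = nx.DiGraph()
edges = [("seg1", "seg2", 200, 50, None),
         ("seg2", "seg3", 200, 50, "stop"),
         ("seg3", "hospital_A", 150, 60, "light")]
for u, v, length, speed, ctrl in edges:
    G.add_edge(u, v, time=travel_seconds(length, speed, ctrl))

# One Dijkstra pass per hospital, run on the reversed graph, yields that
# hospital's column of the pre-calculated 'drivingTime' table.
times = nx.single_source_dijkstra_path_length(
    G.reverse(copy=False), "hospital_A", weight="time")
print({seg: round(t / 60.0, 2) for seg, t in times.items()})  # minutes
```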
The final step in the data preparation was to create the hospital data list. This was a relatively simple task, as all the information was readily available, the locations were known and only a relatively small number of hospitals were involved in the study. As part of the data preparation, each hospital was given a unique ID corresponding to the driving time table with a many-to-one relationship.
Model Construction
The construction of the model was divided into two distinct parts: creation of the mapping interface (the SDSS) and creation of a mechanism to analyze and process the data (model). The mapping interface was designed to allow the user to zoom to a location and to click on a road segment and insert a location into the map. In order to facilitate this, the 200 m segmented road data was first uploaded into ArcGIS server. A block of code was then written to allow users to click on a road segment, insert an MCI location and retrieve the unique ID of the road segment. Once retrieved, the unique ID is used to obtain the driving time to each hospital from the pre calculated driving time table. This portion of the model was constructed using ArcGIS server API, as it provides a rich set of functionalities and tools to interact with the road data and allow developers to build complex web-based mapping applications.
The second aspect of constructing the model involved creating a mechanism to join the unique ID from each road segment to the pre-calculated driving time table, establishing a database relationship between the driving time table and the hospital table, and analyzing and visualizing the resulting data (Figure 2). For this purpose, VB.NET [30] was utilized as the server-side scripting language while JavaScript was used as the client-side scripting language. VB.NET enables database interaction and provides a set of decision-making tools for the analysis and visualization of results using tables and graphs. More specifically, VB.NET is used to compile the data and display the results based on the user's input. The entire model, including mapping and analysis, was built in Visual Web Developer (VWD) 2008 express edition [31].
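The authors implemented this join in VB.NET; purely as an illustration of the one-to-many/many-to-one relationships described above, a minimal SQL sketch (via Python's sqlite3, with an invented schema rather than the authors' actual one) might look like this:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE hospital (hospital_id INTEGER PRIMARY KEY, name TEXT,
                       trauma_level INTEGER, capacity INTEGER);
CREATE TABLE driving_time (segment_id INTEGER, hospital_id INTEGER,
                           minutes REAL);
INSERT INTO hospital VALUES (1, 'Vancouver General', 1, 10),
                            (2, 'Royal Columbian', 1, 8);
INSERT INTO driving_time VALUES (42, 1, 7.5), (42, 2, 12.0);
""")

def hospitals_for_segment(segment_id: int):
    """Join the clicked road segment's pre-calculated driving times to
    the hospital attributes, ordered by proximity, as the results page
    does."""
    return con.execute("""
        SELECT h.name, h.trauma_level, h.capacity, d.minutes
        FROM driving_time d JOIN hospital h USING (hospital_id)
        WHERE d.segment_id = ? ORDER BY d.minutes""",
        (segment_id,)).fetchall()

print(hospitals_for_segment(42))
```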
Results
The database becomes active when the user enters the web site and a connection to the hospital data table is established as the page loads. Once this takes place, the user can modify the default hospital capacity and determine which hospitals should be included in the analysis. The user then needs to insert the MCI location into a high resolution map ( Figure 3) and enter additional information like the incident reference location. After an MCI location is inserted into the map, the model is ready to be executed. Upon running the model, a new results page opens listing each hospital, its associated attributes and its driving time from the MCI location. The results page provides a visual representation of the analysis, using both tables and graphs.
In order to test the model, a simulated MCI was created within the study area, using casualty counts from the 2005 London bombings. Using the King's Cross counts, where 10 critically injured patients were evacuated, an incident location was inserted at Broadway sky train station, one of Vancouver's busiest train stations. Figure 4 shows the results page produced by the simulation. Driving times to each of the hospitals in the study area are shown along with hospital capacity and trauma level. The results indicate that patients should be distributed between Vancouver General Hospital and Royal Columbian Hospital. In addition to driving times to trauma hospitals, the proximity to the nearest non-trauma hospital (depicted as trauma level 9) is also important, as it provides an option in cases where the trauma hospitals become overloaded. Figure 5 and Table 2 illustrate differences between the model's driving times and actual ambulance driving times collected from two ambulance stations within the metro Vancouver area. The driving times that were collected were for critically injured patients only. One ambulance station was located within an urban setting while the other was located in suburban Metro Vancouver. After filtering the data to show only trips occurring between 7 pm and 7 am, and 12 to 3 pm, the 132 ambulance trips showed larger variability in the ambulance driving time compared with the model driving time. The graph shows that the model underestimates and overestimates driving time in both long and short ambulance trips. There are several reasons why this may have occurred. First, the model driving times were rounded to the minute in order to be comparable to ambulance driving times (ambulance results were logged in minutes). Second, ambulance driving time records were taken from the ambulance paper log and there is no way to track at which point in the ambulance trip the start and end times were entered on the paper sheet. Both of these issues may drastically affect the results, particularly when the trip time is short. These unavoidable inconsistencies may partially explain the scatter in the graph in Figure 5. Table 2 shows nine incidents where ambulance trips started and ended in exactly the same location. In this case, patients were being transferred from a non-trauma hospital to a major trauma hospital. The model time calculation was 13 minutes, while most of the actual ambulance driving times ranged from 8 to 13 minutes, with one trip an outlier at 27 minutes. These results illustrate the variability between trips from and to the same locations, in contrast to the single, deterministic time produced by our model.
Conclusion
The response to an MCI must be both swift and precise if it is to be effective. As a result, dynamic decision making is of critical importance [32]. To be useful in this context, MCI modelling must produce results within an extremely short time frame. Although the proposed model provides the basic information required for evidence-based decision making, improvements can still be made, particularly in regard to the provision of real-time hospital capacity and traffic data. Real-time hospital capacity can be obtained by creating a utility that will enable hospitals to update capacity in the hospital database as soon as a mass casualty is declared. The model can then connect to the hospital database to retrieve the capacity. In addition, the model allows updates in hospital capacity as patients are evacuated from the scene of the incident to a given hospital. Unfortunately, incorporating real-time traffic data is more complicated, as to do so would significantly extend the time required for computer data processing [16,33]. Although the model described in this study was able to avoid significant processing delays by utilizing pre-calculated driving times from each location on the road network to each hospital in the study area, the use of pre-calculated driving times also introduces some limitations. It does not, for example, allow travel impedances, such as street closures resulting from the MCI itself, bridge closures, or construction, to be input into the calculation. Table 2, which compares model travel times with actual ambulance travel times, highlights the need to implement travel time calculations in real time while also incorporating real-time traffic data. Out of the nine identical ambulance trips that were recorded, one trip clearly took much longer than the others. While the reason for this particular delay is unknown, real-time traffic data and driving time calculation could help account for such delays. During an MCI, decisions regarding the evacuation of patients are based on an evaluation of injury type and severity, in relation to hospital proximity and capacity. The web-based model proposed within this study is intended to provide evidence-based hospital and driving time information in a timely manner to assist in the on-site management of MCI incidents. | 2019-01-23T18:42:21.444Z | 2011-06-10T00:00:00.000 | {
"year": 2011,
"sha1": "4fe8f20ce84532b7d8d4e94bc7d8cf3e4eb07333",
"oa_license": "CCBY",
"oa_url": "https://ij-healthgeographics.biomedcentral.com/track/pdf/10.1186/1476-072X-10-40",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "2375bebceaccca56e3e0fae4c1abb7ad8c701deb",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": []
} |
219978045 | pes2o/s2orc | v3-fos-license | Prevalence and associated risk factors of self-medication among patients attending El-Mahsama family practice center, Ismailia, Egypt
Background: Self-medication is defined as taking medications without a physician's prescription. It is a worldwide public health problem, especially in countries with limited resources. Although self-medication can reduce waiting time and save money, it may carry some potential risks, e.g., antibiotic resistance or inappropriate management with subsequent complications. A limited number of self-medication studies have been conducted in Egypt. Objectives: To determine the prevalence of self-medication practices and to identify the factors associated with self-medication. Methods: A cross-sectional study was conducted on 160 patients. The sample was randomly selected from those who attended the El-Mahsama family practice center, Ismailia, Egypt, from November 2018 to February 2019. The center is affiliated to the Suez Canal University and provides preventive and curative services to a rural community. The participants were interviewed using a semi-structured questionnaire including a sociodemographic scale and self-medication knowledge and behavior. Data was analyzed using descriptive and analytic statistical methods. Results: The frequency of self-medication among the study sample reached 96%. More than half of the participants (53.6%) reported that the first reason for self-medication was cost saving. Meanwhile, the most prevalent conditions prompting self-medication were headaches (17%), aches and pain other than headache (39.2%), and fevers (11.8%). The most frequent self-administered drugs were analgesics (59.5%) and antibiotics (23.5%). Conclusion: The prevalence of self-medication is high among all socioeconomic levels of patients attending the El-Mahsama family practice center, which serves a rural community in Ismailia governorate, Egypt.
Introduction
Self-medication is defined as the use of medicines to treat self-diagnosed disorders without any medical consultation (Sarahroodi et al. 2012). It may involve over-the-counter (OTC) medications, prescription-only medicines (POM), or the use of complementary and alternative medicine (Torres et al. 2019). Methods of self-medication may include buying drugs by reusing a previous prescription, taking medicines on the advice of relatives or others, or consuming leftover medicines already available at home (Helal and Abou-ElWafa 2017). Self-medication is a global phenomenon and a potential contributor to human pathogen resistance to antibiotics (Bennadi 2013). However, its patterns vary between countries depending on various features, e.g., socioeconomic factors, medical knowledge, satisfaction and people's perception of disease, ready access to drugs, the increased potential to manage certain illnesses through self-care, and greater availability of medicinal products (Abay and Amelo 2010; Klemenc-Ketiš et al. 2011).
Although self-medication can reduce the load on medical services and save cost, it is far from being a completely safe practice (WHO 2005). Potential risks may include incorrect self-diagnosis, delays in seeking medical advice when needed, severe adverse reactions, dangerous drug interactions especially for older people with multi-morbidity, incorrect manner of administration, incorrect dosage, incorrect choice of therapy, masking of a severe disease, and development of microbial resistance (Ruiz 2010).
Antibiotics resistance, one of the biggest threats to global health, may result from self-medication of antibiotics (Rather et al. 2017). The acceleration of antibiotic resistance and the decline in the development of new antibiotics to combat the problem have created significant public health challenges to health policymakers, health care workers, and the population around the world (Gebeyehu et al. 2015). In many developing countries including Egypt, antibiotics are unregulated and available over the counter without a prescription (Ventola 2015).
Despite the potential risks of self-medication, and to the best of our knowledge, few epidemiological data is available about the prevalence of self-medication among patients in our community.
Methods
A cross-sectional study was conducted on 160 participants, who were selected by simple random sampling from the daily registry of patients who attended the El-Mahsama family practice center, Ismailia, Egypt, from November 2018 to February 2019. The sample size was determined by using the one-proportion equation: n = Z²α/2 × P(1 − P) / E², where n = sample size, Zα/2 = 1.96 (the critical value that divides the central 95% of the Z distribution from the 5% in the tails), P = 88.2% (the prevalence of the outcome variable) (Kasim and Hassan 2018), and E = 0.05, the margin of error (width of confidence interval) (Charan and Biswas 2013). The El-Mahsama family practice center was selected as the study setting because it is a primary health care center providing preventive and curative services to a rural community with different socioeconomic levels and different cultural backgrounds. Both genders above 18 years, who live in El-Mahsama village, were included, while medical staff and drug addicts were excluded from the study. Participants were interviewed using a semi-structured questionnaire, which contains two sections: (1) socioeconomic status and (2) self-medication assessment. The socioeconomic status section contained 7 domains, with a total score of 84: education and cultural domain (score = 30: highest level of education and access to health information), occupation domain (score = 10: occupation of husband and wife), family domain (score = 10: residence, number of family members, number of earning family members, and education of children), family possessions domain (score = 12: refrigerator, television, washing machine, mobile/phone, radio, car, agricultural land, non-agricultural land for housing, shop or animal shed, computer, internet, another house other than the one lived in, animals/poultry), economic domain (score = 5: income sufficiency, governmental support, and taxes), home sanitation domain (score = 12: services, e.g., water/electricity/sewage, owned/rented, number of rooms, crowding index), and healthcare domain (score = 5: private/health insurance/free governmental/traditional healers). The total socioeconomic level was classified into very low, low, middle, and high levels depending on the quartiles of the calculated score (El-Gilany et al. 2012). The self-medication assessment section covers the following items for the last 12 months (Ilhan et al. 2009): frequency of self-use of medications, diseases for which medicines were self-prescribed, drugs commonly used, sources of information, reasons for not consulting a doctor, and patients' opinion about self-medication practice. Pilot testing of the tools was carried out on 20 persons out of the study sample to assess the understandability and feasibility of the questionnaire. A Cronbach's alpha of 0.66, with a moderate agreement (kappa = 0.76) and a strong positive significant correlation (r = 0.93) between the socioeconomic levels and scores of both scales, indicated the acceptable reliability and validity of the study tool (El-Gilany et al. 2012). The validity of the self-medication assessment tool was pre-determined by three experienced professors of family and community medicine.
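For reference, the sample-size formula above can be evaluated as follows; this is a minimal sketch of ours, and with P = 0.882 and E = 0.05 it reproduces the reported n = 160.

```python
from math import ceil
from scipy.stats import norm

def one_proportion_sample_size(p: float, e: float,
                               confidence: float = 0.95) -> int:
    """n = Z_{alpha/2}^2 * P * (1 - P) / E^2, the one-proportion formula
    given in the text, rounded up to the next whole participant."""
    z = norm.ppf(1 - (1 - confidence) / 2)  # 1.96 for 95% confidence
    return ceil(z**2 * p * (1 - p) / e**2)

print(one_proportion_sample_size(0.882, 0.05))  # -> 160
```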
Data management
Data was analyzed using the Statistical Package for the Social Sciences (SPSS), version 22 (IBM Corp., Chicago, IL, USA). Descriptive data was presented as numbers and percentages. Fisher's exact test and Pearson's chi-squared test were used for statistical analysis of categorical variables. For all tests, a probability value of less than 0.05 was considered statistically significant.
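As an illustration of the tests mentioned above, a minimal Python sketch using scipy is shown below; the contingency counts are invented for demonstration and are not the study's data.

```python
import numpy as np
from scipy.stats import chi2_contingency, fisher_exact

# Hypothetical socioeconomic level x self-medication (yes/no) counts.
table = np.array([[38, 2],
                  [41, 1],
                  [40, 2],
                  [35, 1]])

# Pearson's chi-squared test on the full R x C table.
chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2={chi2:.2f}, p={p:.3f}, dof={dof}")

# Fisher's exact test applies to 2x2 tables (e.g. after collapsing levels).
odds, p2 = fisher_exact(table[:2])
print(f"Fisher p={p2:.3f}")
```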
Results
The participants' mean age was 37.3 ± 12.2 years, and the majority (89.4%) of them came from rural areas. About half of the families (48.8%) had less than five members. Less than half of them (42.5%) depend on free governmental health services as a usual source of health care. The skilled manual workers/farmers represent
Figure 1 shows that 96% of the participants have used medications without any medical consultation or supervision, mainly to alleviate pain and fever. As shown in Table 1, about half of them attributed this to saving money; however, two thirds of them (62.7%) perceive self-medication as an unacceptable practice. Furthermore, information sources include recommendation by community pharmacists (23.5%), previous doctors' prescriptions (20.9%), or the patient's own experience (18.3%). The most frequent self-administered drugs were analgesics (59.5%) and antibiotics (23.5%). Table 2 shows no statistically significant association between the socioeconomic status of the participants and the usage of self-medications (P = 0.56).
Discussion
The present study revealed a very high prevalence of self-medication (96%), compared to the previously reported prevalence in other Egyptian studies. A recent systematic review included some Egyptian studies conducted in different cities from 1995 to 2014 (Kasim and Hassan 2018). The review demonstrated that the prevalence of medication abuse among the Egyptian population before the twenty-first century ranged from 21.1 to 72%. However, according to more recent studies conducted in Egypt, the reported prevalence has significantly increased, ranging from 81.1 to 86.4% (Sallam et al. 2009; El-Nimr et al. 2015). As for other countries, the reported prevalence varied widely, from 45.4% in China (Lei et al. 2018), 42.5% in Jordan (Yousef et al. 2008), 53.5% in Mexico (Balbuena et al. 2009), 65.1% in Brazil (Bertoldi et al. 2014), and 75% in Chile (Fuentes Albarran and Villa Zapata 2008), to 79.9% among university students in Serbia (Lukovic et al. 2014). The variations in the reported prevalence can be explained by differences in populations, sample sizes, and study designs. But more importantly, the recall period in these studies was only a few weeks or months, whereas in our study, self-medication use was assessed over the whole past year. This long recall period could explain the very high prevalence reported in our study.
In the present study, the most commonly used drugs were analgesics and antibiotics. This is consistent with the findings of previous studies which reported that analgesics and anti-inflammatories were highly used in self-medication (Jerez-Roig et al. 2014;Domingues et al. 2017). In fact, Domingues et al. (2017) explained this by the strong association between self-medication and the presence of minor diseases and conditions. Similarly, El-Nimr et al. (2015) reported that the most commonly used drugs were analgesics, followed by cough and common cold preparations and vitamins and minerals. They also reported that over half of the participants used antibiotics without a prescription. The same findings were reported in the systematic review by Kasim and Hassan (2018).
The socioeconomic score of the participants was not significantly associated with the frequency of self-use of medications, the diseases for which medicines were self-prescribed, the drugs commonly used, the sources of information, the reasons for not consulting a doctor, or the patients' opinion about self-medication practice. Additionally, self-medication was not associated with participants' economic and household characteristics, in terms of their occupation, the number of earning members of the family, their income, crowding index, and whether they use governmental support. However, and contrary to our findings, Chang and Trivedi (2003) suggested that economic factors such as family size, income, and availability of health insurance may influence self-medication practice. Also, a large Mexican study found that those who practiced self-medication usually had lower income (Pagan et al. 2006). Meanwhile, medical insurance has been proposed as a key determinant of self-medication (Hoai and Dang 2017). Some have even suggested that broadening health insurance to cover over-the-counter drugs may lead to a significant reduction in self-medication practice (Lei et al. 2018). With medical insurance come some obstacles that hinder the full utilization of such insurance and therefore direct some insured patients towards self-medication. For example, patients have to wait quite a long time to see a doctor. Also, some drugs might be unavailable at times, and thus patients quite often end up buying the medications themselves. Moreover, regular, non-emergency health services are only available during usual working hours/days, which forces employees to ask for sick leave, which may not be easily granted (Yousef et al. 2008).
Limitations of the present study
The present study had some limitations. First, recall bias may have affected our analysis; although the long recall period adds strength to our study, it might also have caused confusion for the participants. Second, this was a regional study conducted at a single health unit. Therefore, our results cannot be generalized and do not represent self-medication practice in Egypt as a whole. Third, since this was a cross-sectional study, each variable was measured only once and exposure and outcome were assessed simultaneously, so any observed associations should be interpreted cautiously before a causal relationship can be established.
Conclusion
Frequency of self-medication among the study sample has reached 96%. There was no statistically significant association between the socioeconomic status of the participants and usage of self-medications. | 2020-06-18T09:09:26.862Z | 2020-06-12T00:00:00.000 | {
"year": 2020,
"sha1": "30a5510030660cc72b349aefb6fa9244d6e1af20",
"oa_license": "CCBY",
"oa_url": "https://bnrc.springeropen.com/track/pdf/10.1186/s42269-020-00351-7",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "40d0b53767cd6c17b80e3cafe751be4bcf46bc71",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
257632054 | pes2o/s2orc | v3-fos-license | Facial Affective Analysis based on MAE and Multi-modal Information for 5th ABAW Competition
Human affective behavior analysis focuses on analyzing human expressions or other behaviors, which helps improve the understanding of human psychology. The CVPR 2023 Competition on Affective Behavior Analysis in-the-wild (ABAW) is dedicated to providing the high-quality and large-scale Aff-wild2 and Hume-Reaction datasets for the recognition of commonly used emotion representations, such as Action Units (AU), basic expression categories, Valence-Arousal (VA), and Emotional Reaction Intensity (ERI). The competition strives to improve the accuracy and applicability of affective analysis research in real-world scenarios. In this paper, we introduce our submission to CVPR 2023: ABAW5. Our approach involves several key components. First, we utilize the visual information from an MAE model that has been pre-trained on a large-scale face image dataset in a self-supervised manner. Next, we fine-tune the MAE encoder on the ABAW challenges using single frames from the Aff-wild2 dataset. Additionally, we leverage the multi-modal and temporal information from the videos and implement a transformer-based framework to fuse the multi-modal features. To further enhance model generalization, we introduce a novel two-branch collaborative training strategy that randomly interpolates the logits space. Our approach is supported by extensive quantitative experiments and ablation studies conducted on the Aff-Wild2 dataset and Hume-Reaction dataset, which demonstrate the effectiveness of our proposed method.
Introduction
In recent years, there has been growing interest in research on human affective behavior analysis due to its potential to provide a more accurate understanding of human emotions, which can be applied to design friendlier human-computer interaction. Commonly used human expression representations include Action Units (AU), basic expression categories, Valence-Arousal (VA) and Emotional Reaction Intensity (ERI). Specifically, AUs were first proposed by Paul Ekman and Wallace Friesen in the 1970s [4]. They depict local, regional facial movements and serve as the smallest units for describing an expression. Basic expression categories divide expressions into a limited number of groups according to emotion category, e.g., happiness, sadness, etc. VA consists of two continuous values, Valence (V) and Arousal (A), which range over [-1, 1]. They can be used to describe the human emotional state: V represents the degree of positivity or negativity of an emotion, while A describes the level of intensity or activation of an emotion. ERI typically comprises a sequence of values representing multiple emotional dimensions that reflect the intensity of an individual's emotional response to a specific stimulus.
The fifth Competition on Affective Behavior Analysis in-the-wild (ABAW5) [11] is organized to focus on handling the obstacles in human affective behavior analysis. It makes great efforts to construct the large-scale multi-modal video datasets Aff-Wild [9,12,25] and Aff-Wild2 [7,8,10,13-15]. Aff-Wild2 contains 598 videos, and most of them have three kinds of frame-wise annotated labels: AU, basic expression categories and VA. Three ABAW5 challenges target the detection of these three kinds of expression representations. In addition, ABAW5 provides the Hume-Reaction dataset, which consists of about 75 hours of video recordings captured via webcam in the subjects' homes. Each video in it has been self-annotated by the subjects themselves for the ERI intensity of 7 emotional experiences.
In this paper, we introduce our submission to ABAW5. First, we train a Masked Autoencoder (MAE) [5,18] on our private large-scale face dataset in a self-supervised manner. Then, we use the MAE encoder as our vision feature extractor to capture the visual features of faces. Given the extensive quantity of faces in the dataset, the features extracted by the MAE encoder generalize well. We also fine-tune the MAE encoder on the specific tasks of AU detection, basic expression recognition (EXPR), and VA estimation on single frames. After that, to further exploit temporal and multi-modal information, we divide the videos into several short clips and perform clip-wise training on the downstream tasks. Specifically, we use the fine-tuned MAE encoder to extract visual features from each frame and employ pre-trained audio models (Hubert [21], Wav2vec2.0 [1], VGGish [6]) to capture acoustic features. The concatenation of the visual and acoustic features is fed into a Transformer structure to acquire temporal information for the downstream tasks. Moreover, we design a dual-branch structure that contains a Basic Learning Branch (BLB) and a Collaboration Learning Branch (CLB). BLB and CLB have the same structure and share feature extractors. By randomly interpolating in the logits space of BLB and CLB, the model enriches the feature space by implicitly creating potential samples, which further enhances model generalization.
MAE Pre-train
Different from the traditional MAE, our MAE is pre-trained on a facial image dataset to focus on learning facial vision features. We construct a large-scale facial image dataset that gathers images from existing facial image datasets, e.g., AffectNet [19], CASIA-WebFace [24], CelebA [16], and IMDB-WIKI [20]. We then pre-train the MAE model on this dataset in a self-supervised manner. Specifically, our MAE consists of a ViT-Base encoder and a ViT decoder based on the Vision Transformer (ViT) structure [3]. The MAE pre-training follows a masking-then-reconstruction procedure: images are first divided into a series of 16 × 16 patches, 75% of which are randomly masked; the visible patches are fed to the MAE encoder, and the MAE decoder reconstructs the complete image (see Fig. 2). The pre-training loss is the pixel-wise L2 loss, which makes the reconstructed images close to the target images.
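For concreteness, the masking step can be sketched as follows; this is a minimal illustration of the 75% random masking and pixel-wise L2 objective described above, not the authors' implementation, and the tensor shapes are assumptions.

import torch

def random_masking(patches, mask_ratio=0.75):
    """Keep a random 25% of patches; patches: (B, N, D) flattened pixel patches."""
    B, N, D = patches.shape
    n_keep = int(N * (1 - mask_ratio))
    ids_shuffle = torch.rand(B, N).argsort(dim=1)      # random permutation per image
    ids_keep = ids_shuffle[:, :n_keep]
    visible = torch.gather(patches, 1, ids_keep.unsqueeze(-1).expand(-1, -1, D))
    return visible, ids_keep

def reconstruction_loss(pred, target):
    """Pixel-wise L2 loss between reconstructed and target patches."""
    return ((pred - target) ** 2).mean()

# e.g., a 224x224 image gives a 14x14 grid of 16x16x3 patches: N = 196, D = 768
imgs = torch.randn(4, 196, 768)
visible, ids_keep = random_masking(imgs)               # visible: (4, 49, 768)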
Once self-supervised learning is complete, we remove the MAE decoder and attach a fully connected layer to the MAE encoder. This allows us to fine-tune on the downstream tasks of AU detection, expression recognition, and VA estimation on the Aff-wild2 dataset. It is important to note that this stage is based on frame-wise training, without taking temporal or other modal information into account. The corresponding loss functions for the three tasks are

$\mathcal{L}_{AU} = -\frac{1}{N_{au}}\sum_{j} W_{au_j}\left[\, y_j \log \hat{y}_j + (1-y_j)\log(1-\hat{y}_j)\,\right]$,   (1)

$\mathcal{L}_{EXPR} = -\sum_{j} W_{exp_j}\, z_j \log \hat{z}_j$,   (2)

$\mathcal{L}_{VA} = \left(1 - CCC(v, \hat{v})\right) + \left(1 - CCC(a, \hat{a})\right)$, with $CCC(X, \hat{X}) = \frac{2\,\rho_{X\hat{X}}\,\delta_X \delta_{\hat{X}}}{\delta_X^2 + \delta_{\hat{X}}^2 + (\mu_X - \mu_{\hat{X}})^2}$,   (3)

where $\hat{y}$, $\hat{z}$, $\hat{v}$, and $\hat{a}$ denote the model's predictions for AU, expression category, Valence, and Arousal, respectively, and the symbols without hats refer to the ground truth. $\delta_X$ and $\delta_{\hat{X}}$ indicate the standard deviations of $X$ and $\hat{X}$, $\mu_X$ and $\mu_{\hat{X}}$ are the corresponding means, and $\rho_{X\hat{X}}$ is the correlation coefficient. For the AU and EXPR tasks, we utilize weighted cross-entropy as the loss function; the weights for the different categories, $W_{au_j}$ and $W_{exp_j}$, are inversely proportional to the class counts in the training set.
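A minimal PyTorch sketch of losses (1)-(3) as written above; tensor shapes and the numerical epsilon are assumptions.

import torch
import torch.nn as nn
import torch.nn.functional as F

def weighted_bce(logits, y, w):
    """Eq. (1): per-AU binary cross-entropy, weighted by inverse class frequency.
    logits, y: (B, n_au); w: (n_au,)."""
    bce = F.binary_cross_entropy_with_logits(logits, y, reduction="none")
    return (bce * w).mean()

def ccc_loss(pred, target, eps=1e-8):
    """Eq. (3): 1 - CCC for one continuous dimension (valence or arousal)."""
    mp, mt = pred.mean(), target.mean()
    vp, vt = pred.var(unbiased=False), target.var(unbiased=False)
    cov = ((pred - mp) * (target - mt)).mean()
    return 1 - 2 * cov / (vp + vt + (mp - mt) ** 2 + eps)

# Eq. (2): expr_criterion = nn.CrossEntropyLoss(weight=w_expr)
# L_VA = ccc_loss(v_pred, v_true) + ccc_loss(a_pred, a_true)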
Temporal and Multi-modal Feature Extraction
To further exploit the temporal and multi-modal features for the AU, EXPR, and VA tasks, we design a sequence-based model that incorporates the audio features. Concretely, we first divide the videos into several short clips, each containing the same number of frames K. We construct the Basic Learning Branch (BLB) to perform sequence-wise training, as shown in Fig. 1. Given a video clip C_i and the corresponding audio clip A_i, we use the fine-tuned MAE encoder and existing pre-trained audio embedding models (e.g., Hubert [21], Wav2vec2.0 [1], VGGish [6]) to extract the vision and acoustic features F_i^vis and F_i^aud for each frame separately. We then concatenate F_i^vis and F_i^aud and feed them into a Transformer [22] encoder structure to exploit the temporal correlations between them. The Transformer encoder comprises four encoder layers with a dropout ratio of 0.3. The output of the Transformer encoder is then passed to a fully connected layer that resizes the final output to fit the various tasks. In the BLB training process, we flatten the sequence output of a clip and use the same loss functions as in equations (1), (2), (3).
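The BLB fusion step can be sketched as follows; only the four encoder layers and the 0.3 dropout come from the text, while the feature dimensions (768 for the ViT-Base MAE encoder, 1024 for the audio model), the head count, and the output size are assumptions.

import torch
import torch.nn as nn

class FusionModel(nn.Module):
    """Concatenate per-frame visual and audio features, model time with a Transformer."""
    def __init__(self, d_vis=768, d_aud=1024, d_out=12, n_layers=4, n_heads=8, dropout=0.3):
        super().__init__()
        d = d_vis + d_aud
        layer = nn.TransformerEncoderLayer(d_model=d, nhead=n_heads,
                                           dropout=dropout, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=n_layers)
        self.head = nn.Linear(d, d_out)        # output size tailored to the task

    def forward(self, f_vis, f_aud):           # (B, K, d_vis), (B, K, d_aud)
        x = torch.cat([f_vis, f_aud], dim=-1)  # per-frame concatenation
        return self.head(self.encoder(x))      # (B, K, d_out), one prediction per frame

# model = FusionModel(); out = model(torch.randn(2, 100, 768), torch.randn(2, 100, 1024))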
Dual Branch structure
To further enhance model generalization, we propose a two-branch structure for collaboration training. After the BLB training, we fix its parameters and share the multi-modal feature extraction module (MAE encoder and audio feature extractor) with the Collaboration Learning Branch (CLB). CLB has the same structure as BLB but a different training data distribution D_CLB. Specifically, we first select the hard training samples, i.e., those that are hard to fit: for example, we pick samples that still display a large sequence loss even after training is complete. We add these hard samples to D_CLB so that the branch focuses on learning them. To preserve the original training data distribution D_init, we augment D_CLB with random samples from D_init until D_CLB contains the same number of samples as D_init. After building D_CLB, we commence collaboration training with our proposed Dual-branch Collaboration Learning (DCL). Given a sample (C_i, A_i) from D_init and a sample (C_j, A_j) from D_CLB, the corresponding logit outputs of BLB and CLB are h_i^BLB and h_j^CLB, respectively. We then perform random linear interpolation in the logits space; the interpolated logit can be denoted as

$h = \alpha \cdot h_i^{BLB} \oplus (1 - \alpha) \cdot h_j^{CLB}$,

where ⊕ denotes the element-wise sum and α is randomly sampled from a Beta distribution controlled by the hyper-parameters τ1 and τ2. Denoting the final output after the logit h as ô, the loss function of DCL is

$\mathcal{L}_{DCL} = \alpha \cdot L(\hat{o}, y_i) + (1 - \alpha) \cdot L(\hat{o}, y_j)$,

where y_i and y_j denote the corresponding task labels of the two samples and L(·,·) represents the corresponding loss function among (1), (2), (3). By performing this linear interpolation, we effectively augment the logits space and thereby construct a greater number of potential unseen samples, which enriches the feature space and enhances model generalization.
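A minimal sketch of one DCL loss evaluation; the mixup-style weighting of the two labels by α and (1 − α) is our reading of the loss, and τ1 = τ2 = 2 follows the Beta(2, 2) used in the experiments.

import torch

def dcl_loss(h_blb, h_clb, y_i, y_j, criterion, tau1=2.0, tau2=2.0):
    """Interpolate BLB/CLB logits with alpha ~ Beta(tau1, tau2) and weight the
    task loss by the same coefficients (mixup-style label weighting)."""
    alpha = torch.distributions.Beta(tau1, tau2).sample().item()
    h = alpha * h_blb + (1.0 - alpha) * h_clb    # element-wise sum of scaled logits
    return alpha * criterion(h, y_i) + (1.0 - alpha) * criterion(h, y_j)

# e.g., criterion = torch.nn.CrossEntropyLoss() for the EXPR task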
Experimental Setting
We processed all videos in the Aff-wild2 dataset into frames with OpenCV and employed the OpenFace [2] detector to crop all facial images to a 224 × 224 scale. We pre-train the MAE on the large-scale face image dataset for 800 epochs with the AdamW [17] optimizer, with a batch size of 4096 and a learning rate of 0.0024. Our training process is implemented in PyTorch and runs on 8 NVIDIA A30 GPUs. For single-BLB training, we set the clip length to 100, the batch size to 32, and the learning rate to 0.0001; BLB training takes around 20 epochs with the AdamW optimizer. In DCL training, we set α to follow a Beta(2, 2) distribution; the other experimental settings are the same as in single-BLB training.
Besides, we also utilize some training tricks. Concretely, we adjust the learning rate according to a cosine annealing policy. Also, to obtain robust training, we take advantage of an exponential moving average (EMA) of the model weights. In addition, we leverage model soups [23] to further enhance the performance on the validation set.

Table 1. Average AU F1 of the official and 5-fold validation set.
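The model-soup step amounts to averaging checkpoint weights; a uniform-soup sketch, where the checkpoint paths are hypothetical placeholders.

import torch

def uniform_soup(ckpt_paths):
    """Average the state dicts of equally-shaped fine-tuned models (uniform model soup)."""
    soup = None
    for p in ckpt_paths:
        sd = torch.load(p, map_location="cpu")
        if soup is None:
            soup = {k: v.clone().float() for k, v in sd.items()}
        else:
            for k in soup:
                soup[k] += sd[k].float()
    return {k: v / len(ckpt_paths) for k, v in soup.items()}

# model.load_state_dict(uniform_soup(["fold1.pt", "fold2.pt"]))  # hypothetical files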
Metric
For AU detection and expression classification, we calculate the F1-score (F1) for each class and evaluate the prediction results by the average F1 over classes. For VA estimation, we calculate the Concordance Correlation Coefficient (CCC) for valence and arousal respectively,

$CCC(X, \hat{X}) = \frac{2\,{\rm Cov}(X, \hat{X})}{\delta_X^2 + \delta_{\hat{X}}^2 + (\mu_X - \mu_{\hat{X}})^2}$,   (4)

where Cov(·,·) represents the covariance. In the case of ERI estimation, we utilize Pearson's Correlation Coefficient (PCC) for each class,

$PCC(X, \hat{X}) = \frac{{\rm Cov}(X, \hat{X})}{\delta_X\, \delta_{\hat{X}}}$,

averaged over the 7 emotional reaction classes.
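Both metrics are direct to compute; a NumPy sketch assuming 1-D arrays of predictions and labels.

import numpy as np

def ccc(x, y):
    """Concordance correlation coefficient, eq. (4)."""
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()                  # population variance (ddof=0)
    cov = ((x - mx) * (y - my)).mean()
    return 2 * cov / (vx + vy + (mx - my) ** 2)

def pcc(x, y):
    """Pearson's correlation coefficient."""
    return np.corrcoef(x, y)[0, 1]

# e.g., ccc(valence_pred, valence_true); mean of per-class pcc for the ERI challenge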
Results on validation set

AU Challenge
We show the experimental results of the different stages of our framework on the official validation set in Tab. 1. We evaluate the model by the average F1 metric. To enhance model generalization, we also perform 5-fold cross-validation according to a random video split of the existing labeled data. The final prediction on the test set comes from the ensemble of these models.
EXPR Challenge
We show the experimental results of the different stages of our framework on the official validation set in Tab. 4. We evaluate the model by the average F1 metric. To enhance model generalization, we also perform 5-fold cross-validation according to a random video split of the existing labeled data. The final prediction on the test set comes from the ensemble of these models.
VA Estimation
We show the experimental results of the different stages of our framework on the official validation set in Tab. 1. We evaluate the model by the CCC of valence and arousal in eq. (4). To enhance model generalization, we also perform 5-fold cross-validation according to a random video split of the existing labeled data. The final prediction on the test set comes from the ensemble of these models. In the VA task, we find that the improvement from our CLB is slight; this may be caused by the differences between classification and regression tasks.
ERI Estimation
We show the experimental results of the different stages of our framework on the official validation set in Tab. 4. We evaluate the model by the Pearson's Correlation Coefficient (PCC) metric. To enhance model generalization, we also perform 5-fold cross-validation according to a random video split of the existing labeled data. The final prediction on the test set comes from the ensemble of these models.

Table 5. PCC on the official and 5-fold validation set.
Method  Official  fold1   fold2   fold3   fold4   fold5
Ours    0.4120    0.4229  0.4199  0.4229  0.4266  0.4049 | 2023-03-21T08:02:05.906Z | 2023-01-01T00:00:00.000 | {
"year": 2023,
"sha1": "18544e6ef9bf97bbba475b2a5386ef905f11c5a7",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "18544e6ef9bf97bbba475b2a5386ef905f11c5a7",
"s2fieldsofstudy": [
"Computer Science",
"Psychology"
],
"extfieldsofstudy": []
} |
227335079 | pes2o/s2orc | v3-fos-license | Clear correlation between monopoles and the chiral condensate in SU(3) QCD
We study spontaneous chiral-symmetry breaking in SU(3) QCD in terms of the dual superconductor picture for quark confinement in the maximally Abelian (MA) gauge, using lattice QCD Monte Carlo simulations on a $24^4$ lattice at $\beta=6.0$, i.e., a spacing of $a \simeq$ 0.1 fm, at the quenched level. First, in the MA gauge, we observe dominant roles of the Abelian part and monopoles for the chiral condensate in the chiral limit, using two different methods: i) the Banks-Casher relation with the Dirac eigenvalue density and ii) finite quark-mass calculations with the quark propagator and its chiral extrapolation. Second, the local correlation between the chiral condensate and monopoles is investigated. We find that the chiral condensate locally takes a quite large value near monopoles. Thus, the color-magnetic monopole topologically appearing in the MA gauge is responsible for chiral symmetry breaking in QCD, that is, the dynamical origin of the matter mass in our Universe.
INTRODUCTION
Since quantum chromodynamics (QCD) was established as the fundamental theory of the strong interaction in the 1970s, understanding its nonperturbative properties has been one of the most difficult central problems in theoretical physics for about half a century. In particular, QCD exhibits two outstanding nonperturbative phenomena in its low-energy region, quark confinement and spontaneous chiral-symmetry breaking. Many physicists have tried to clarify these phenomena and their relation directly from QCD, but this remains an unsolved and important issue in particle physics.
Chiral symmetry breaking in QCD is categorized as spontaneous symmetry breaking, which appears widely in various fields of physics, and is an important phenomenon related to dynamical quark-mass generation [1,2]. Indeed, apart from dark matter, about 99% of the matter mass of our Universe originates from chiral symmetry breaking, because the Higgs-origin mass amounts only to the small masses of the u and d current quarks, electrons, and neutrinos. The order parameter of chiral symmetry breaking is the chiral condensate $\langle \bar{q}q \rangle$, and it is directly related to the low-lying Dirac modes via the Banks-Casher relation [3].
In contrast, color confinement is a fairly unique phenomenon peculiar to QCD, and quark confinement is characterized by the linear inter-quark potential. As for the confinement mechanism, the dual superconductor picture based on color-magnetic monopole condensation was proposed by Nambu, 't Hooft, and Mandelstam as a typical plausible physical scenario [4-6]. In lattice QCD, by taking the maximally Abelian (MA) gauge [7], this dual superconductor scenario has been investigated in terms of Abelian dominance, i.e., the dominant role of the Abelian sector [8-11], and the relevant role of monopoles [12-14].
The relation between confinement and chiral symmetry breaking has not yet been clarified directly from QCD. While a strong correlation between confinement and chiral symmetry breaking has been suggested by the near coincidence of the deconfinement and chiral-restoration temperatures [15], a lattice QCD analysis based on the Dirac-mode expansion indicates some independence of these phenomena [16].
Their correlation has also been suggested in terms of color-magnetic monopoles, which topologically appear in QCD in the Abelian gauge [17]. In the dual Ginzburg-Landau theory, the monopole condensate is responsible for chiral symmetry breaking as well as quark confinement [18]. Also in SU(2) lattice QCD, Miyamura and Woloshyn showed Abelian dominance [19,20] and monopole dominance [19,21] for chiral symmetry breaking. In fact, by removing the monopoles from the QCD vacuum, confinement and chiral symmetry breaking are simultaneously lost. In SU(3) lattice QCD with an $8^3 \times 4$ lattice, Thurner et al. showed a local correlation among monopoles, instantons, and the chiral condensate [22]. These studies indicate an important role of the monopoles in both phenomena, and thus these two phenomena might be related via the monopole. However, most of the pioneering lattice works were done in SU(2) lattice QCD or on a small lattice [19-22].
In this Letter, we investigate the correlation between chiral symmetry breaking and the color-magnetic monopoles appearing in the MA gauge in SU(3) lattice QCD with a large-volume fine lattice at the quenched level. Using two different methods, we evaluate the chiral condensate in Abelianized QCD and in the monopole system extracted from lattice QCD. We also investigate the correlation between the local chiral-condensate value and the monopole location. We perform SU(3) lattice QCD simulations at the quenched level with the standard plaquette action [15]. On four-dimensional Euclidean lattices, the gauge variable is described by the SU(3) link variable $U_\mu(s) \equiv e^{iagA_\mu(s)} \in \mathrm{SU}(3)$, with the gluon field $A_\mu(s) \in \mathfrak{su}(3)$, the QCD gauge coupling $g$, and the lattice spacing $a$. In this work, we use a lattice of size $24^4$ at $\beta \equiv 6/g^2 = 6.0$, i.e., $a \simeq 0.1$ fm, and take the lattice unit $a = 1$ hereafter. Using the pseudo-heat-bath algorithm, we generate 100 gauge configurations, taken every 500 sweeps after a thermalization of 5,000 sweeps. The jackknife method is used for the error estimate.
Using the Cartan subalgebra $\vec{H} \equiv (T_3, T_8)$ of SU(3), the MA gauge fixing is defined so as to maximize

$R_{\rm MA}[U] \equiv \sum_{s}\sum_{\mu} {\rm tr}\left( U_\mu^\dagger(s)\, \vec{H}\, U_\mu(s)\, \vec{H} \right)$   (1)

under the SU(3) gauge transformation; this gauge fixing thus suppresses all the off-diagonal fluctuations of the SU(3) field $U_\mu(s)$. In the MA gauge, the SU(3) gauge group is partially fixed, leaving its maximal torus subgroup U(1)$_3\times$U(1)$_8$, and QCD is reduced to an Abelian gauge theory like the non-Abelian Higgs theory. From the SU(3) field $U_\mu^{\rm MA}(s) \in {\rm SU}(3)$ in the MA gauge, the Abelian field is defined as

$u_\mu(s) = e^{i\vec{\theta}_\mu(s)\cdot\vec{H}} = {\rm diag}\left( e^{i\theta^1_\mu(s)}, e^{i\theta^2_\mu(s)}, e^{i\theta^3_\mu(s)} \right) \in {\rm U(1)}^2$   (2)

with the constraint $\sum_{i=1}^3 \theta^i_\mu(s) = 0$ (mod $2\pi$), by maximizing the overlap so that the distance between $u_\mu(s)$ and $U_\mu^{\rm MA}(s)$ becomes the smallest in the SU(3) manifold.
The Abelian projection is defined by the replacement of the SU(3) link variable $U_\mu(s)$ by the Abelian link variable $u_\mu(s)$ in an operator, i.e., $O[U_\mu(s)] \to O[u_\mu(s)]$. If $\langle O[u_\mu(s)] \rangle \simeq \langle O[U_\mu(s)] \rangle$ holds, this is called "Abelian dominance" for the operator $O$.
MONOPOLES IN QCD
Now, let us consider the Abelian plaquette variable

$u_{\mu\nu}(s) \equiv u_\mu(s)\, u_\nu(s+\hat{\mu})\, u^\dagger_\mu(s+\hat{\nu})\, u^\dagger_\nu(s) \in {\rm U(1)}^2$.

The Abelian field strength $\theta^i_{\mu\nu}(s)$ ($i = 1, 2, 3$) is the principal value of the exponent in $u_{\mu\nu}(s)$, and is defined through the decomposition

$\partial_\mu \theta^i_\nu(s) - \partial_\nu \theta^i_\mu(s) = \theta^i_{\mu\nu}(s) + 2\pi n^i_{\mu\nu}(s)$, $\quad \theta^i_{\mu\nu}(s) \in (-\pi, \pi]$, $\; n^i_{\mu\nu}(s) \in \mathbb{Z}$,

with the forward derivative $\partial_\mu$. Here, $\theta^i_{\mu\nu}(s)$ is U(1)$^2$ gauge invariant and corresponds to the regular continuum Abelian field strength as $a \to 0$, while $n^i_{\mu\nu}(s)$ corresponds to the singular gauge-variant Dirac string [23].
The electric current $j^i_\mu$ and the monopole current $k^i_\mu$ are defined from the Abelian field strength $\theta^i_{\mu\nu}$ as

$j^i_\nu(s) \equiv \partial_\mu \theta^i_{\mu\nu}(s)$, $\quad k^i_\nu(s) \equiv \frac{1}{2}\,\varepsilon_{\nu\alpha\beta\gamma}\, \partial_\alpha n^i_{\beta\gamma}(s)$,

where $\partial_\mu$ here denotes the backward derivative. Both electric and monopole currents are U(1)$^2$ gauge invariant, according to the U(1)$^2$ gauge invariance of $\theta^i_{\mu\nu}(s)$. In the lattice formalism, $k^i_\mu(s)$ is located on the dual lattice $L^4_{\rm dual}$ of $s_\alpha + 1/2$, flowing in the $\mu$ direction [14]. Hereafter, we omit the color index $i$ as appropriate.
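As an illustration of this construction, the decomposition of the plaquette angle and the resulting monopole current can be sketched in NumPy as follows; the array layout, the boundary convention of the principal value, and the omission of the dual-lattice half-step offset are simplifying assumptions.

import numpy as np
from itertools import permutations

def perm_sign(p):
    """Sign of a permutation of (0, 1, 2, 3), from its inversion count."""
    s = 1
    for i in range(4):
        for j in range(i + 1, 4):
            if p[i] > p[j]:
                s = -s
    return s

def monopole_current(theta):
    """k_nu(s) from Abelian link angles theta[mu, x, y, z, t] on a periodic lattice.
    Decomposes each plaquette angle into principal value + 2*pi*n, then takes the
    dual curl of the Dirac-string counter n."""
    d = lambda f, mu: np.roll(f, -1, axis=mu) - f          # forward derivative
    n = np.zeros((4, 4) + theta.shape[1:])
    for mu in range(4):
        for nu in range(4):
            full = d(theta[nu], mu) - d(theta[mu], nu)
            bar = (full + np.pi) % (2 * np.pi) - np.pi     # principal value
            n[mu, nu] = np.rint((full - bar) / (2 * np.pi))
    k = np.zeros(theta.shape)
    for p in permutations(range(4)):                       # eps_{nu a b c}
        nu, a, b, c = p
        k[nu] += 0.5 * perm_sign(p) * d(n[b, c], a)
    return k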
Abelian-projected QCD thus includes both the electric current $j_\mu$ and the monopole current $k_\mu$, and can be approximately decomposed into the "photon part," which includes only $j_\mu$, and the "monopole part," which includes only $k_\mu$, as follows.
First, we consider the photon part $\theta^{\rm Ph}_\mu$, whose field strength carries the electric current but no Dirac string,

$\partial_\mu \theta^{\rm Ph}_{\mu\nu}(s) = j_\nu(s)$, $\quad n^{\rm Ph}_{\mu\nu}(s) = 0$.

Since the photon field strength has no Dirac string, one can set $\theta^{\rm Ph}_{\mu\nu} = (\partial \wedge \theta^{\rm Ph})_{\mu\nu}$, and then $\partial_\mu (\partial \wedge \theta^{\rm Ph})_{\mu\nu} = j_\nu$. In the Landau gauge $\partial_\mu \theta^{\rm Ph}_\mu = 0$, the photon part $\theta^{\rm Ph}_\nu$ can be derived from the electric current $j_\nu$ as

$\theta^{\rm Ph}_\nu(s) = (\partial^2)^{-1} j_\nu(s)$.

Therefore, we here define the photon part $\theta^{\rm Ph}_\nu$ by using the inverse d'Alembertian on the lattice [14]. The monopole part is defined as the remainder, $\theta^{\rm Mo}_\mu(s) \equiv \theta_\mu(s) - \theta^{\rm Ph}_\mu(s)$. In this way, in Abelian-projected QCD, the contributions from the electric current $j_\mu$ and the magnetic current $k_\mu$ are well separated into the photon part $\theta^{\rm Ph}_\mu$ and the monopole part $\theta^{\rm Mo}_\mu$, respectively. In Table I, we show the monopole density $\rho_M$ and the electric-current density $\rho_E$, defined as

$\rho_M \equiv \frac{1}{3}\sum_{i=1}^{3} \frac{1}{4V} \sum_{s}\sum_{\mu} |k^i_\mu(s)|$, $\quad \rho_E \equiv \frac{1}{3}\sum_{i=1}^{3} \frac{1}{4V} \sum_{s}\sum_{\mu} |j^i_\mu(s)|$,

for Abelian-projected QCD and for the monopole and photon parts, respectively. Using the monopole and photon link variables, the monopole and photon projections are defined by the replacement of the link variable by the monopole and photon link variables, respectively. The dominant role of the monopole part is called "monopole dominance," and monopole dominance has been observed for quark confinement in lattice QCD [12].
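The inverse d'Alembertian step can be sketched via FFT on a periodic lattice, with the zero mode removed by hand; this is a schematic illustration for one component, not the authors' code.

import numpy as np

def photon_part(j):
    """Solve laplacian(theta_ph) = j on a periodic L^4 lattice via FFT.
    j: real array (L, L, L, L); returns theta_ph with the zero mode set to zero."""
    L = j.shape[0]
    k = 2 * np.pi * np.fft.fftfreq(L)
    kx, ky, kz, kt = np.meshgrid(k, k, k, k, indexing="ij")
    # lattice Laplacian eigenvalues: sum_mu 2*(cos k_mu - 1)
    lap = 2 * ((np.cos(kx) - 1) + (np.cos(ky) - 1) + (np.cos(kz) - 1) + (np.cos(kt) - 1))
    jhat = np.fft.fftn(j)
    lap[0, 0, 0, 0] = 1.0            # avoid division by zero at the zero mode
    that = jhat / lap
    that[0, 0, 0, 0] = 0.0           # remove the zero mode
    return np.real(np.fft.ifftn(that))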
CHIRAL CONDENSATE
First, we study Abelian dominance and monopole dominance for the chiral condensate in the chiral limit, using the Kogut-Susskind (KS) fermion [15] for quarks in SU(3) lattice QCD.
Mathematically, the chiral condensate $\langle \bar{q}q \rangle$ in the chiral limit is directly related to the low-lying Dirac eigenvalue density $\rho(0)$ through the Banks-Casher relation [3],

$\langle \bar{q}q \rangle = -\pi \rho(0)$.

The Dirac eigenvalue density $\rho(\lambda)$ is defined as

$\rho(\lambda) \equiv \frac{1}{V} \left\langle \sum_n \delta(\lambda - \lambda_n) \right\rangle$,

with the space-time volume $V$. For the KS fermion, the Dirac operator $\gamma_\mu D_\mu$ becomes $\eta_\mu D_\mu$ with the staggered phase $\eta_\mu(s) \equiv (-1)^{s_1+\cdots+s_{\mu-1}}$, and the Dirac eigenvalue $\lambda_n$ is obtained from

$\eta_\mu D_\mu\, \chi_n(s) = i\lambda_n\, \chi_n(s)$.

Here, the quark field $q_\alpha(x)$ is described by a spinless Grassmann variable $\chi(x)$, and the chiral condensate per flavor is given as $\langle \bar{q}q \rangle = \langle \bar\chi\chi \rangle/4$ in the continuum limit. Figure 1 shows the Dirac eigenvalue densities $\rho(\lambda)$ for SU(3) QCD, Abelian-projected QCD, and the monopole and photon sectors, extracted from lattice QCD in the MA gauge. We find that the low-lying Dirac eigenvalue density $\rho(0)$ in Abelian-projected QCD takes almost the same value as in SU(3) QCD, which means Abelian dominance for the chiral condensate in the chiral limit. For the photon sector, we find no eigenvalues below 0.13 in 10 configurations and conclude that $\rho(0)$ in the photon sector is exactly zero. On the other hand, $\rho(0)$ in the monopole part is close to that in SU(3) QCD, which means monopole dominance for the chiral condensate in the chiral limit.
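A schematic estimate of ρ(0) and the Banks-Casher condensate from a set of low-lying eigenvalues; the bin width ε is an arbitrary assumption, and this is a finite-volume estimate only.

import numpy as np

def rho0(eigs_per_cfg, volume, eps=0.02):
    """rho(0) estimated as the mean over configurations of N(0 <= lam < eps)/(eps*V)."""
    counts = [np.sum((np.asarray(e) >= 0) & (np.asarray(e) < eps)) for e in eigs_per_cfg]
    return float(np.mean(counts)) / (eps * volume)

def condensate(eigs_per_cfg, volume, eps=0.02):
    """Banks-Casher: <qbar q> = -pi * rho(0) in the chiral limit."""
    return -np.pi * rho0(eigs_per_cfg, volume, eps)

# e.g., condensate([lam_cfg1, lam_cfg2], volume=24**4)  # lam_cfg*: hypothetical arrays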
Next, we calculate the chiral condensate in a different way using the quark propagator. Here, we adopt the KS fermion with the bare quark mass m, and consider the chiral extrapolation of m → 0.
For the gauge-field ensemble $U = \{U_\mu(s)\}$, the Euclidean KS fermion propagator is given by the inverse matrix

$\langle \chi(x)\, \bar\chi(y) \rangle_U = \left( \eta_\mu D_\mu[U] + m \right)^{-1}_{x,y}$

for SU(3) QCD, Abelian-projected QCD, and the monopole and photon sectors, respectively. Here, we use 100 gauge configurations and calculate the local chiral condensate at $2^4$ distant space-time points $x$ for each gauge configuration. In fact, we perform 1,600 calculations of $\langle \bar\chi(x)\chi(x) \rangle_U$ for each sector at each quark mass $m$. Here, we consider the net chiral condensate by subtracting the contribution from the trivial vacuum $U = 1$,

$\langle \bar\chi\chi(x) \rangle^{\rm net}_U \equiv \langle \bar\chi\chi(x) \rangle_U - \langle \bar\chi\chi(x) \rangle_{U=1}$,

where the subtraction term is exactly zero in the chiral limit $m = 0$. We eventually take its average over the space-time points $x$ and the gauge ensembles $U_1, U_2, \ldots, U_N$. Figure 2 shows the chiral condensates plotted against the bare quark mass $m$ in the lattice unit, for the SU(3), Abelian, monopole, and photon sectors, extracted from lattice QCD in the MA gauge. For each sector, the $m$-dependence of the chiral condensate seems to be linear in this region, so that we evaluate the chiral condensate in the chiral limit using a linear chiral extrapolation. Provided that the linear chiral extrapolation is valid, Abelian dominance and monopole dominance for the chiral condensate are realized in the chiral limit, whereas the photon part has almost no chiral condensate in the chiral limit.
These results are consistent with the above-mentioned conclusions using the Dirac eigenvalue density ρ(λ) and the Banks-Casher relation. In Table II, we summarize the chiral condensate values in the chiral limit evaluated from the two different methods for SU(3), Abelian, monopole and photon sectors.
In the presence of bare quark masses of m = 0.01-0.02, however, there appears a significant deviation of the chiral condensate between the SU(3) and Abelian sectors, which quantitatively differs from SU(2) QCD, where Abelian dominance is observed at m = 0.05-0.3 [20]. In particular, compared with SU(3) QCD, the bare-quark-mass dependence of the chiral condensate is fairly reduced in Abelian-projected QCD, and also in the monopole part. As an interesting possibility, the net chiral condensate in the Abelian/monopole sector is controlled by a quark-mass-independent object. This might be understood if monopoles are directly responsible for chiral symmetry breaking, because monopoles have no bare-quark-mass dependence in the quenched approximation. Then, in the next section, we examine the correlation between the chiral condensate and monopoles in a more direct manner.
LOCAL CORRELATION
Second, we study the local correlation between the chiral condensate and monopoles by investigating the local chiral condensate around monopoles in Abelian-projected QCD at each gauge configuration. Note that, at each lattice configuration, the monopoles topologically appear as local objects, so that they might locally influence the chiral condensate around them, although translational invariance is recovered by the gauge-ensemble average.
For the visual demonstration, we show in Fig. 3 the local chiral condensate $\langle \bar\chi\chi(x) \rangle_u$ and the monopole locations at all three-dimensional spatial points at the time slice t = 12 in the 1st Abelian configuration. The bare quark mass is taken as m = 0.02. Here, we show all the monopoles located at t = 11.5 and 12.5 on the dual lattice $L^4_{\rm dual}$ of $s_\alpha + 1/2$. The value of the local chiral condensate $|\langle \bar\chi\chi(x) \rangle_u|$ is visualized with a color gradation. (The same dark color is used for $|\langle \bar\chi\chi(x) \rangle_u| > 0.20$, and no color is used for small $|\langle \bar\chi\chi(x) \rangle_u| < 0.04$.) It is clearly observed that the local chiral condensate in a configuration has a large fluctuation and takes quite large values in the vicinity of the monopoles.
Finally, we calculate the correlation function between the local chiral condensate $\langle \bar\chi\chi(x) \rangle_u$ and the local monopole density

$\rho_L(s) \equiv \frac{1}{|P(s)|} \sum_{s' \in P(s)} \sum_{\mu} |k_\mu(s')|$,

where $P(s)$ denotes the dual lattice sites in the vicinity of $s$, i.e., $P(s) = \{ s' \in L^4_{\rm dual} \,:\, |s' - s| = \frac{1}{2} \}$, with the dual lattice $L^4_{\rm dual}$ of $s_\alpha + 1/2$. For this calculation, we use the lattice data of the local chiral condensate and the monopole current for 100 gauge configurations, which were used to obtain Fig. 2 in the previous section. Figure 4 shows the correlation function $C(x-y)$ between the local chiral condensate $\langle \bar\chi\chi(x) \rangle_u$ and the local monopole density $\rho_L(y)$, as a function of $|x-y|$, for bare quark masses of m = 0.02, 0.015, and 0.01. Here, the correlation function $C(x-y)$ is normalized to unity at $|x-y| = 0$ for each m.
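A sketch of such a radial correlator for two fields on a periodic 4D lattice; the brute-force binning by exact Euclidean distance and the neglect of the half-site offset between the lattice and the dual lattice are simplifying assumptions.

import numpy as np
from itertools import product

def radial_correlation(a, b, r_max=5):
    """C(r): lattice average of a(x) b(x+d), binned by r = |d| on periodic 4D
    arrays of equal shape, normalized so that C(0) = 1. Brute force over shifts."""
    sums, counts = {}, {}
    rng = range(-r_max, r_max + 1)
    for d in product(rng, repeat=4):
        r = np.sqrt(sum(c * c for c in d))
        if r > r_max:
            continue
        shifted = np.roll(b, d, axis=(0, 1, 2, 3))
        key = round(r, 6)
        sums[key] = sums.get(key, 0.0) + float((a * shifted).mean())
        counts[key] = counts.get(key, 0) + 1
    rs = sorted(sums)
    c = np.array([sums[r] / counts[r] for r in rs])
    return np.array(rs), c / c[0]              # normalize to unity at r = 0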
Within the error bar, the correlation function C(x − y) seems to be a single-valued function of |x − y|, and no significant m-dependence of the correlation function is found in this bare-quark-mass region. It is likely that the correlation function C(x − y) monotonically decreases with the distance r ≡ |x − y| and almost vanishes for large r such as r > 5 (≃ 0.5 fm); thus, a strong correlation between the local chiral condensate and the monopole density is quantitatively clarified.
From these lattice QCD results, we conclude that there exists a clear, direct local correlation between monopoles and the chiral condensate.
Here, let us consider the physical origin of the correlation between chiral symmetry breaking and monopoles in terms of the magnetic catalysis. In Abelian gauge theories, chiral symmetry breaking is generally enhanced in the presence of a strong magnetic field, which is called the magnetic catalysis [24-26]. In the MA gauge, infrared QCD resembles an Abelian gauge theory with monopoles, which accompany a strong color-magnetic field around them. Therefore, as an interesting possibility, the strong magnetic field around the monopoles enhances chiral symmetry breaking also in this Abelian gauge theory.
SUMMARY AND CONCLUSION
We have studied spontaneous chiral-symmetry breaking in SU(3) QCD in terms of the dual superconductor picture for quark confinement in the MA gauge, using lattice QCD Monte Carlo simulations with a large volume of $24^4$.
First, we have found Abelian dominance and monopole dominance for the chiral condensate in the chiral limit, using two different methods: i) the Banks-Casher relation with the Dirac eigenvalue spectral density and ii) finite quark-mass calculations with the quark propagator and its chiral extrapolation. We have also found that the bare-quark-mass dependence of the chiral condensate is fairly reduced in Abelian-projected QCD and in the monopole part.
Second, we have investigated the local correlation between the chiral condensate and color-magnetic monopoles, and have found that the chiral condensate takes a quite large value near the monopoles in Abelian-projected QCD.
Thus, the color-magnetic monopoles topologically appearing in the MA gauge significantly contribute to not only quark confinement but also chiral symmetry breaking in SU(3) QCD, that is, the dynamical origin of the matter mass in our Universe.
H.S. is supported in part by the Grants-in-Aid for Scientific Research [19K03869] from Japan Society for the Promotion of Science. Most of numerical calculations have been performed on NEC SX-ACE and OCTOPUS at Osaka University. We have used PETSc and SLEPc to solve linear equations and eigenvalue problems for the Dirac operator, respectively [27][28][29][30]. | 2020-12-08T02:00:43.658Z | 2020-12-07T00:00:00.000 | {
"year": 2021,
"sha1": "0ce535f41aee5148c321e93d461501fd07471e24",
"oa_license": "CCBY",
"oa_url": "http://link.aps.org/pdf/10.1103/PhysRevD.103.054505",
"oa_status": "HYBRID",
"pdf_src": "Arxiv",
"pdf_hash": "79f911a79ba4fbf9787a510f579f4faeb98afe70",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
55121388 | pes2o/s2orc | v3-fos-license | Seed quality and water use characteristics of maize landraces compared with selected commercial hybrids
Understanding the seed quality and water use characteristics of maize (Zea mays L.) landraces will improve food security among the subsistence farmers who still cultivate them. The objective of this study was to evaluate the seed quality and water use characteristics of two maize landraces (GQ1 and GQ2) compared with two commercial hybrids (SC701 and PAN53). Seed quality was determined by the standard germination, electrical conductivity, and tetrazolium tests. A controlled environment study was conducted in which the landraces were compared with the hybrids across three water treatments (30% ETc, 50% ETc, and 80% ETc). Although landrace GQ2 performed at par with the hybrids, overall, the seed quality tests showed that hybrids had superior seed quality to landraces. This was also confirmed by highly significant emergence results (P < 0.001) from pot trials, where SC701 and PAN53 had higher emergence (100% and 94.44%, respectively) compared with GQ2 (86.11%) and GQ1 (61.11%). Subjecting landraces and hybrids to water stress (50% and 30% ETc) resulted in shorter plants with fewer leaves and earlier tasselling compared with non-stressed plants (80% ETc). Plant height for the 30% ETc water treatment was 156.1 cm compared with 175.8 cm for the 80% ETc water treatment, while plants under the 30% ETc water treatment tasseled at 105.4 d compared with 129.5 d for the 80% ETc water treatment. The GQ2 landrace continued to perform similar to, and often better than, the hybrid varieties, especially under stress conditions. Yield was poor under controlled conditions. The performance of the GQ2 landrace in both the seed quality tests and under controlled conditions shows that landraces remain an important germplasm resource.
INTRODUCTION
Maize (Zea mays L.) is the staple food in Southern Africa (Mugo et al., 2002). Resource-poor farmers in South Africa still cultivate maize landraces; these are known for their adaptability to harsh environmental conditions and still produce reasonable yields (Zeven, 1998). This indicates their importance (Mabhaudhi, 2009), particularly in rural communities, and their potential ability to contribute to food security. However, the landraces that resource-poor farmers are familiar with tend to produce relatively low yields despite their adaptability to low-input farming systems (Manzanilla et al., 2011; Mabhaudhi and Modi, 2013). Low yield could be the result of poor quality seed from farmers' prior harvests (Manzanilla et al., 2011). This, coupled with the occurrence of drought, particularly in Sub-Saharan Africa, is a major concern. There is a need for strategies that will encourage sustainable agricultural production and also identify possible crops for future crop improvement. Thus, attention has now been gravitating towards studying traditional and underutilized crops (Mabhaudhi and Modi, 2013).
Seed testing can be used as a means of providing information about seed quality parameters, such as physiological, physical, phytosanitary, and genetic quality (FAO, 2010). Physiological parameters of seed quality are related to viability and vigor. Influences on viability have been well documented over the years (Scharpf, 1970). The term viability refers to the ability of a seed to germinate under ideal conditions (Bradbeer, 1988). According to Linington et al. (1996), a germination test is the most useful method to determine the viability of a seed sample. The International Seed Testing Association (ISTA, 1985) defines germination of a seed lot in a laboratory as the emergence and development of the seedling to a stage where the aspect of its essential structures indicates whether or not it is able to develop further into a satisfactory plant under favorable conditions. Seed quality testing under field conditions is often problematic due to the inability to reliably replicate conditions. The use of laboratory testing allows external factors to be controlled to provide the most uniform, rapid, and complete germination (Kurdikeri et al., 1996). Chemical tests have also been used to determine viability. These tests detect chemical reactions that usually, but not always, occur in living systems (Scharpf, 1970). The tetrazolium test is one of these and is the most widely applied biochemical method to examine seed viability.
Although seed quality testing is an important initiative, a better understanding of the effects of drought on landraces is important for improving agricultural systems and their management (Chaves et al., 2003), and thus for improving food security. The availability of water during the different stages of crop growth influences the crop's ability to survive (Misra, 1991; 1995). The early stages of plant development (seed germination, seedling emergence, and establishment) are key processes that are sensitive to water availability (Misra et al., 2002; Hadas, 2004). Factors such as water availability in the growing medium and the duration of wetting influence germination. Studies looking at the response of these early stages to water stress have shown that water stress reduces seed germination (Willenborg et al., 2004) and early seedling growth (Mabhaudhi and Modi, 2010). Water stress has also been observed to cause marked decreases in germination rate and seedling vigor (Mabhaudhi and Modi, 2010; Khodarahmpour, 2011).
The aim of the study was to evaluate seed quality and water use characteristics of two landraces compared with two commercial hybrid varieties under controlled environment conditions.
Plant material
Seeds of maize landraces ('GQ1' and 'GQ2') were sourced from local farmers in the Eastern Cape Province, South Africa, in 2013. Two commercial hybrids ('SC701' and 'PAN53') were used as control varieties to compare with the landraces. The 'SC701' hybrid is a popular variety among local farmers, who grow it for green mealies. This type of mealies is usually harvested during the early dough stage, approximately 3 wk after flowering. It is popular for its late maturity and fairly good tolerance to drought. The 'PAN53' hybrid is a fairly new medium-maturity variety.
Seed quality tests
All experiments were laid out in a randomized complete block design at the University of KwaZulu-Natal's (UKZN) seed technology laboratory in Pietermaritzburg (29°37' S; 30°23' E; 669 m a.s.l.), South Africa. The number of replicates per experiment, as well as the number of seeds used per replicate, varied for each experiment.
For the standard germination test, four replicates of eight seeds from each genotype were germinated in petri dishes. These were lined with double sheets of moistened Whatman filter paper and closed to minimize moisture loss. They were incubated in a germination chamber at alternating temperatures of 20 °C/30 °C (16/8 h) and a 16:8 h photoperiod for 8 d (AOSA, 1992). The filter paper was rewetted daily with deionized water to maintain adequate moisture levels. Daily germination counts were taken based on radicle protrusion of 2 mm or more. On day 8, the final germination percentage was calculated according to AOSA (1992) guidelines. This was followed by measuring root and shoot lengths, root:shoot ratio, and seedling fresh mass. In addition, the following indices were calculated. The germination velocity index (GVI) was calculated based on Maguire's formula (Maguire, 1962):

GVI = G1/N1 + G2/N2 + … + Gn/Nn [1]

where GVI is the germination velocity index, G1, G2, …, Gn are the numbers of germinated seeds at the first, second, …, last count, and N1, N2, …, Nn are the numbers of days from sowing at the first, second, …, last count.
Mean germination time (MGT) was calculated according to the formula by Ellis and Roberts (1981):

MGT = Σ(nD) / Σn [2]

where MGT is the mean germination time, n is the number of seeds that germinated on day D, and D is the number of days counted from the start of germination.
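As a worked illustration of Eqs. [1] and [2], both indices follow directly from the daily counts of newly germinated seeds; the example counts below are hypothetical, not data from this study.

def gvi(daily_counts):
    """Maguire's germination velocity index: sum over counts of (new germinations)/(day)."""
    return sum(g / day for day, g in enumerate(daily_counts, start=1))

def mgt(daily_counts):
    """Ellis and Roberts mean germination time: sum(n*D) / sum(n)."""
    total = sum(daily_counts)
    return sum(g * day for day, g in enumerate(daily_counts, start=1)) / total

# daily_counts[i] = seeds newly germinated on day i+1, e.g. [0, 2, 3, 1, 1, 0, 1, 0]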
Electrical conductivity was measured with the CM100 Model Single Cell Analyzer (Reid & Associates, South Africa). Only 20 seeds per genotype were used due to limited quantities of maize landrace seeds. Seeds from each genotype were individually weighed and placed into wells filled with 2 mL distilled water. Electrolyte leakage for each variety was then measured over 24 h.
Seed viability was determined by the tetrazolium (TZ) test. Four replicates of 20 seeds each were used for the TZ test. The seeds were preconditioned for 18 h by directly soaking them in water. A single-edge razor blade was then used to bisect each seed longitudinally through the midsection of the embryonic axis. Seeds were placed in petri dishes and soaked in a 1% TZ solution. Petri dishes were placed in a dark cupboard at room temperature for 6 h. The number of stained seeds was recorded.
Controlled environment experiment
A pot trial was conducted in a growth tunnel at the Controlled Environment Facility (CEF) at UKZN, South Africa. The environment in the growth tunnels is not fully controlled. However, temperatures (~18/33 °C day/night) and relative humidity (60% to 80%) in the tunnels are designed to resemble those of a warm subtropical climate (Modi, 2007). Temperature, relative humidity, and light in the tunnels were monitored with a HOBOnode logger (Onset Computer Corporation, Bourne, Massachusetts, USA).
The experimental layout was a randomized complete block design (RCBD) with two factors: water stress (three levels: 30% [terminal], 50% [moderate], and 80% [control] of the crop water requirement [ETc]) and variety (four levels: 'GQ1', 'GQ2', 'SC701', and 'PAN53'), replicated four times. Forty-eight (48) 20-L pots were filled with 15 kg of soil whose field capacity had previously been determined gravimetrically. Three seeds were planted per pot at a depth of 25 mm. Excess seedlings were thinned soon after emergence to leave one plant per pot. Pots were connected to an online drip (2 L h-1) irrigation system. Water applied to the 80%, 50%, and 30% ETc treatments totaled 456.96, 285.6, and 171.36 mm during the study.
The amount of irrigation water was based on ETc, calculated using the monthly average reference evapotranspiration (ET0) and a crop coefficient (Kc) as described by Allen et al. (1998):

ETc = ET0 × Kc [3]

where ETc is the crop's water requirement, ET0 is the reference evapotranspiration, and Kc is the crop factor.
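Eq. [3] and the treatment fractions translate directly into a daily irrigation depth; the ET0 and Kc values in the example are placeholders, not values from this study.

def irrigation_mm(et0, kc, fraction):
    """Daily irrigation depth: fraction of ETc = ET0 * Kc (all in mm/day)."""
    return fraction * kc * et0

# e.g., irrigation_mm(et0=4.0, kc=1.2, fraction=0.5)  # 50% ETc treatment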
Fertilizer application was based on a soil analysis report of the soil used in this study. An organic fertilizer (30 g N kg-1, 15 g P kg-1, and 15 g K kg-1; Gromor, Cato Ridge, KwaZulu-Natal, South Africa) was applied at a rate of 80 g per pot. Fertilizer was applied in the early stages of plant growth. Weeding of the pots was conducted weekly. Diseases and pests were monitored weekly.
Data collected in the pot trial included seedling emergence, stomatal conductance, chlorophyll content index, soil water content, plant height, and leaf number. Emergence was recorded from the onset of the trial (May 2013) up to 14 d after planting. The crop was deemed established 28 d after planting, when all the seedlings had formed their first true leaves. Thereafter, weekly data were collected for stomatal conductance, chlorophyll content index, soil water content, plant height, and leaf number. Stomatal conductance (SC) was measured with a steady-state leaf porometer (Model SC-1, Decagon Devices, Pullman, Washington, USA). Stomatal conductance readings were taken from the abaxial surface of the second youngest fully expanded and fully exposed leaf. The chlorophyll content index (CCI) was measured with a portable chlorophyll content meter (CCM-200; Opti-Sciences, Hudson, New Hampshire, USA) on the adaxial surface of the second youngest fully expanded and fully exposed leaf of each plant. For both SC and CCI, leaves with visible signs of damage or disease were excluded; measurements were taken at midday (12:00-14:00 h) and during periods when the soil was drying. Soil water content was monitored with a Theta Probe (ML-2x) connected to an HH2 handheld moisture meter (Delta-T Devices, Cambridge, UK). Data collection for growth parameters ceased 22 wk after planting, when 100% of the population had reached the tasseling stage. Data collection at harvest (November 2013) included total biomass, ear prolificacy, ear size characteristics, number of kernel rows per ear, number of kernels per row, and harvest index.
Statistical analysis
All data were subjected to ANOVA with the GenStat statistical system (Version 14, VSN International, Hemel Hempstead, UK). Means of significantly different variables were separated by Duncan's multiple range test in GenStat at the 0.05 probability level.
A strong, significant positive correlation was observed between the following variables: fresh mass and dry mass (r = 0.98; P = 0.02), MGT and dry mass (r = 0.99; P = 0.01), and root length and GVI (r = 0.98; P = 0.02).
Although not significant, strong positive correlations were also observed between the following variables: MGT and fresh mass (r = 0.93), root length and percentage weight increase during imbibition (r = 0.90), MGT and EC (r = 0.82), percentage mass increase during imbibition and final germination percentage (r = 0.80), shoot length and final germination percentage (r = 0.79), and shoot length and root length (r = 0.74).
A strong, significant negative correlation was observed between GVI and EC (r = -0.99; P = 0.01). Although not significant, strong negative correlations also existed between the following variables: root length and EC (r = -0.95), shoot length and R:S ratio (r = -0.84), percentage weight increase during imbibition and EC (r = -0.79), and MGT and GVI (r = -0.76).
Controlled environment experiment
The trend for soil water content (SWC) showed that SWC was higher at 80% ETc than at 50% ETc and 30% ETc (Figure 2). This trend was observable throughout the growth period of the crop. Results of seedling emergence showed no differences among varieties because all the treatments were established under optimum conditions. Daily emergence showed highly significant differences (P < 0.001) among varieties. The hybrids (SC701 and PAN53) showed faster and more uniform emergence than both 'GQ1' and 'GQ2' (Figure 3). The 'SC701' hybrid had the highest final emergence (100%), followed by 'PAN53' (94.44%). Final emergence of the landraces was low, with 86.11% and 61.11% for 'GQ2' and 'GQ1', respectively. No significant differences were observed for stomatal conductance. Results of SC showed no significant interaction (P > 0.05) between water regimes and varieties. No significant differences (P > 0.05) were recorded between water regimes or among varieties (Figure 4). However, stomatal conductance tended to fluctuate throughout crop growth. This was a recurring trend across water regimes. Based on mean values of varieties across water regimes and time, landraces had higher SC than hybrids. The 'GQ2' landrace had the highest SC (58.1 mmol m-2 s-1), followed by 'GQ1' (54.6 mmol m-2 s-1). The PAN53 and SC701 hybrids had lower values (53.6 and 52.2 mmol m-2 s-1, respectively). Mean values of water regimes across varieties showed that the 80% ETc water regime had the highest SC (55.3 mmol m-2 s-1). Interestingly, the 30% ETc water regime had slightly higher SC (54.3 mmol m-2 s-1) than the 50% ETc water regime (54.2 mmol m-2 s-1).
Results obtained for the chlorophyll content index showed a highly significant interaction (P < 0.001) between water regimes and varieties. There were significant differences (P < 0.05) between water regimes. Chlorophyll content index (CCI) was higher at 30% ETc and 80% ETc, and these values differed by a small margin (0.05). Highly significant differences were observed among varieties (P < 0.001). The 'PAN53' hybrid had the highest CCI (10.52), followed by both 'GQ2' and 'GQ1' with 10.09 and 10.05, respectively. The 'SC701' hybrid had the lowest CCI, with a value of 8.53. Although fluctuations were evident throughout the growth period, the general trend showed a decrease in CCI for all treatments (30%, 50%, and 80% ETc) as plant growth progressed toward maturity (Figure 5).
The interaction between water regimes and varieties was not significant (P > 0.05) for plant height. There were also no significant differences (P > 0.05) between water regimes. There were, however, significant differences (P < 0.05) among varieties (Table 2). At 30% ETc, 'SC701' had the tallest plants, followed by 'GQ2', 'GQ1', and 'PAN53'. At 50% ETc, 'PAN53' had the tallest plants, followed by 'SC701', 'GQ2', and 'GQ1'. Under optimum conditions (80% ETc), 'SC701' had the tallest plants while 'GQ1' had the shortest plants. As expected, mean values of varieties across water regimes showed that the 80% ETc water regime had the tallest plants. This was followed by the 50% ETc water regime (164.4 cm) and the 30% ETc water regime (156.1 cm). Mean values for water regimes across varieties showed a trend where the SC701 and PAN53 hybrids dominated the GQ1 and GQ2 landraces. The 'SC701' hybrid had the greatest plant height (192 cm), followed by 'PAN53' (168.2 cm). Plant height for 'GQ2' and 'GQ1' was 159.8 cm and 141.7 cm, respectively.
With regard to leaf number, there was no significant interaction (P > 0.05) between water regimes and varieties. No significant differences (P > 0.05) were observed between water treatments or among varieties (Table 2). Both 'SC701' and 'GQ1' increased leaf number with decreasing water availability, while 'PAN53' and 'GQ2' remained consistent for leaf number. Although there were no significant differences, separation of means revealed differences between 'GQ1', with the lowest number of leaves (8.75), and 'SC701', with the highest number (10.25). Mean values for varieties recorded across water regimes showed that hybrids had more leaves than landraces (mean values were 9.58 and 9.50 for hybrids and landraces, respectively). In terms of water regimes across varieties, the 30% ETc water treatment had the most leaves (9.81). Although differences were minor, the 80% ETc water treatment had the fewest leaves (9.31).
Results for total biomass showed no significant interaction (P > 0.05) between water regimes and varieties (Table 3). There were significant differences (P < 0.05) between water regimes but not among varieties. A trend could be observed for biomass where 80% ETc > 50% ETc > 30% ETc. Based on mean values of varieties across water regimes, landraces had higher total biomass than hybrids.
Ear prolificacy showed no significant interaction (P > 0.05) between water regimes and varieties (Table 3). There were no differences between water regimes or among varieties. Separation of means also confirmed that there were no differences. However, mean values of varieties across water regimes showed that landraces had significantly higher ear prolificacy compared with 'SC701' and 'PAN53' (1.25 and 0.92, respectively). This trend was similar to the one observed for total biomass.
For ear size characteristics (ear mass per plant and ear length), there were no significant interactions (P > 0.05) between water regimes and varieties (Table 3). The general trend observed using mean values of water regimes across varieties was 80% ETc > 50% ETc > 30% ETc. The GQ2 and GQ1 landraces performed better than the 'PAN53' and 'SC701' hybrids for ear mass. However, this trend did not hold for ear length: the 'SC701' hybrid had the longest ears (78.9 mm), while 'PAN53' had the shortest (64.4 mm).
DISCUSSION
Good seed quality is important in any cropping system because it plays an important role in the early growth stages of agricultural crops (Goggi et al., 2008). Good quality seed will enable better field performance in terms of germination, rapid emergence, and vigorous seedling growth (Santos, 2010; Mabhaudhi and Modi, 2010; 2011). The observed differences among varieties in final germination were contrary to most findings in the literature, which report landrace seed lots as having inferior quality compared with hybrid seeds. Similar results were reported by Mabhaudhi and Modi (2010), where landraces performed at par with hybrids in terms of final germination percentage. However, it cannot be concluded that the planting potential of landraces is always equal to that of hybrids, given the low performance of 'GQ1'.
Results from the TZ test show inconsistencies with the SG test. According to Naderidarbaghshahi and Bahari (2012), there is a discrepancy between the TZ test and the SG test. It has also been noted by the Department of Agriculture, Food and the Marine (2013) that the TZ test is not suitable for carry-over seed, which is likely the case with landraces. It should also be remembered that the literature points out that hybrids have better seed quality than landraces. Mabhaudhi (2009) considered hybrid vigor to be limited to hybrid seed. Given the results for MGT, where hybrids had a lower MGT, the above statement is in line with research by Mavi et al. (2010), who suggest that MGT is a critical component in determining the emergence performance of a seed lot. The significant differences occurring between varieties could be the result of poor seed quality, particularly for 'GQ1'. It can be suggested that the genetic characteristics of 'GQ1' contribute to poor seed germination and, ultimately, emergence.
Closure of stomata has been identified as an early response to water stress (Khoshvaghti et al., 2013) through reduced transpiration rates. This ultimately leads to reduced net photosynthetic CO2 fixation (Ritchie et al., 1992). Results showed no significant interactions between varieties and water treatments, thereby supporting evidence presented by Chaves (1991) and Cornic and Massacci (1996), who suggest that closure of stomata is more common under field conditions than under controlled environment conditions such as those in the present study.
In the early stages of vegetative growth, plants are more susceptible to water stress than at the middle stages. Thus, water stress during the vegetative stages of crop growth can lead to reduced plant growth (Cominelli et al., 2008). Although differences were not statistically significant, the results show plant height to be lower under imposed water stress. With increased water application, plant height increases. Research by Dunford and Vazquez (2005) supports these findings: plants that received more water accumulated more plant material and, therefore, increased in height. Similar results were attained by Khoshvaghti et al. (2013) and Pandey et al. (2000). These results were found for 'SC701' (a late-maturing hybrid), which had taller plants. This suggests that genotypes with long growth periods are, on average, taller than other genotypes (Bert et al., 2003). Although landraces are late maturing (Mabhaudhi, 2009), it must be remembered that hybrid vigor gives hybrids a growth advantage due to their genetic potential. The shorter plant height of landraces supports reduced water loss through transpiration, thus potentially improving water use efficiency in those cultivars.
The reproductive phase of maize plant growth is considered to be sensitive to water stress (Çakir, 2004), which causes a marked reduction in yield (Bolaños and Edmeades, 1996). The general trend showing increases in yield and yield components (ear length, ear mass, kernel rows per ear, kernels per row, and harvest index) with increased water application supports the idea that plant growth is particularly sensitive to water stress. Reductions in yield and yield components are likely associated with lower evapotranspiration and radiation interception (Stone et al., 2001). Schussler and Westgate (1991) indicated that reduced photosynthetic activity leads to poor seed set in plants grown in pots. This possibly explains the low number of kernels per row and the reduced number of kernels per ear. It must be remembered that water stress may develop rapidly in pots, as opposed to crops grown under field conditions (Otegui et al., 1995). The fact that the performance of landraces is at par with hybrids under such tunnel conditions indicates that they could potentially be a viable source to improve food security, provided that the yields are similar.
CONCLUSIONS
From the results obtained, it cannot be concluded that the planting potential of landraces is always equal to that of hybrids, because of the low performance of 'GQ1'. Results from the controlled environment experiment showed that landraces can perform at par with hybrids. Under severe water stress, the prolificacy of landraces may compensate for yield. This supports the idea that landraces are drought tolerant. Therefore, landraces should be considered for production in marginal areas because of their potential to contribute to food security. Data obtained from this study will contribute to developing parameters for the preliminary parameterization of AquaCrop for maize landraces.
Figure 1. Daily germination percentages of landraces (GQ1 and GQ2) and hybrids (SC701 and PAN53) measured in a standard germination test.
Table 1. Performance of landraces (GQ1 and GQ2) and hybrids (SC701 and PAN53) in a standard germination test.
Values with the same letter in a column are similar according to LSD (P = 0.05). Means were sorted in descending order. GVI: germination velocity index; MGT: mean germination time; LSD: least significant difference; SED: standard error of the difference.
Table 2. Plant growth and photosynthetic parameters of landraces (GQ1 and GQ2) compared with hybrids (SC701 and PAN53).
Values with the same letter in a column are similar according to LSD (P = 0.05). Means were sorted in descending order. LSD: least significant difference; SED: standard error of the difference.
Table 3. Yield components of landraces (GQ1 and GQ2) and commercial hybrids (SC701 and PAN53) subjected to three water treatments (30% FC, 50% FC, and 80% FC).
Values with the same letter in a column are similar according to LSD (P = 0.05). Means were sorted in descending order. LSD: least significant difference; SED: standard error of the difference. | 2018-12-13T20:43:39.153Z | 2015-03-01T00:00:00.000 | {
"year": 2015,
"sha1": "d4e13b25fa993b91a81d40dea134e09f80aedff2",
"oa_license": "CCBYNC",
"oa_url": "http://www.scielo.cl/pdf/chiljar/v75n1/at02.pdf",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "d4e13b25fa993b91a81d40dea134e09f80aedff2",
"s2fieldsofstudy": [
"Agricultural and Food Sciences"
],
"extfieldsofstudy": [
"Biology"
]
} |
147591 | pes2o/s2orc | v3-fos-license | Functional analysis of fatty acid binding protein 7 and its effect on fatty acid of renal cell carcinoma cell lines
Background Renal cell carcinomas (RCCs) overexpress fatty acid binding protein 7 (FABP7). We chose to study the TUHR14TKB cell line because it expresses higher levels of FABP7 than other cell lines derived from renal carcinomas (OS-RC-2, 786-O, 769-P, Caki-1, and ACHN). Methods FABP7 expression was detected using western blotting and real-time PCR. Cell proliferation was determined using an MTS assay and directly by counting cells. The cell cycle was assayed using flow cytometry. Cell migration was assayed using wound-healing assays. An FABP7 expression vector was used to transfect RCC cell lines. Results The levels of FABP7 expressed by TUHR14TKB cells and their doubling times decreased during passage. High-passage TUHR14TKB cells comprised fewer G0/G1-phase and more S-phase cells than low-passage cells. Cell proliferation differed among subclones isolated from cultures of low-passage TUHR14TKB cells. Both the proliferation and the migration of TUHR14TKB cells decreased when FABP7 was overexpressed. High concentrations of docosatetraenoic acid and eicosapentaenoic acid accumulated in TUHR14TKB cells that overexpressed FABP7, and docosatetraenoic acid enhanced cell proliferation. Conclusions The TUHR14TKB cell line represents a heterogeneous population that does not express FABP7 when rapidly proliferating. The differences in FABP7 function between RCC cell lines suggest that FABP7 affects cell proliferation depending on cell phenotype. Electronic supplementary material The online version of this article (doi:10.1186/s12885-017-3184-x) contains supplementary material, which is available to authorized users.
Background
Kidney cancer is the 15th most common malignancy worldwide. In 2008, approximately 271,000 new cases were diagnosed, and 116,000 patients died from this disease [1]. These rates are approximately twice as high in men as in women [1]. Renal cell carcinomas (RCCs) represent 91.6% of kidney cancers [2]. The identification of molecular markers in body fluids, which can be used for screening, diagnosis, follow-up, and monitoring drug-based therapy of patients with RCC, is one of the most important challenges of cancer research [3]. In a search for candidate markers of RCC, we identified the gene (FABP7) encoding fatty acid binding protein 7 [4].
To better understand the role of FABP7 in RCC and to attempt to resolve the conflicting findings summarized above, the present study aimed to analyze the effects of FABP7 on the phenotypes of RCC cell lines, with particular focus on the composition of the fatty acids accumulating in cell lines that overexpress FABP7.
Cell culture
The 786-O cell line (CRL-1932) was purchased from the American Type Culture Collection (Manassas, VA, USA). The TUHR14TKB cell line (RCB1383) was provided by RIKEN (Tsukuba, Ibaraki, Japan). Short tandem-repeat typing was performed to confirm the identity of high-passage TUHR14TKB cells, and the data were verified using the RIKEN short tandem-repeat database [26]. All cell lines were grown in RPMI 1640 medium supplemented with 10% (v/v) or 1% fetal bovine serum (FBS) (Nichirei Biosciences Inc., Tokyo, Japan). Cells were cultured at 37°C in a humidified atmosphere containing 5% CO2. Docosatetraenoic acid or EPA (100 mM each) was dissolved in ethanol, and a 1:2000 dilution of each fatty acid was added to the culture medium.
Cell cloning
Clones were isolated from low-passage cultures of TUHR14TKB cells by plating the cells at limiting dilution in 96-well plates. The cells were serially diluted to 128 to 4 viable cells/mL, and 50 μL was added to each well of a 96-well plate. After incubation at 37°C in a humidified atmosphere containing 5% CO2, single colonies in the wells were expanded.
Real-time PCR analysis
Real-time PCR assays were performed using a modified version of the method described by Takaoka et al. [27]. Cells were cultured in 10-cm dishes. Total RNA was isolated from cultured cell lines using the RNeasy Mini Kit (QIAGEN, Hilden, Germany) according to the manufacturer's instructions. Two micrograms of RNA was reverse transcribed using SuperScript® III Reverse Transcriptase primed by 500 ng of Oligo(dT)12-18 primer according to the manufacturer's protocol. Real-time PCR analysis of FABP7 expression was performed using an Applied Biosystems StepOnePlus (Thermo Fisher Scientific). The final PCR reaction mix (20 μL) included 2 μL of each specific primer (5 μM), 1 μL of first-strand cDNA, and 10 μL of SYBR® Green PCR Master Mix. Plasmids that encode FABP7 and TATA box binding protein (TBP) were synthesized as described previously [27], and standard curves for each gene were generated using seven serial dilutions of plasmid templates (0.1 nM to 0.1 fM). TBP was used as an internal control. Takaoka et al. [27] and Jung et al. [28] reported the sequences of the primers used to amplify FABP7 and TBP, respectively.
Flow cytometry
Cells were plated in 10-cm culture dishes at a density of 2 × 10⁶ cells per plate and incubated for two days at 37°C in an atmosphere containing 5% CO2. After incubation, the cells were harvested with trypsin/EDTA, washed once with PBS, and then resuspended to 1 × 10⁶ cells/0.2 mL in PBS containing 0.25% Triton X-100 for 5 min at room temperature. Cellular DNA in each cell suspension was stained using 0.6 mL of 50 mg/L propidium iodide for 10 min at room temperature. Cell-cycle analysis was performed using an EPICS-XL flow cytometer (Beckman-Coulter, Brea, CA, USA).
Cell proliferation assay
Cells were plated in 96-well cell culture plates at 400 cells per well (786-O transfectants) or 2000 cells per well (low- and high-passage TUHR14TKB cells and TUHR14TKB transfectants) in 100 μL of culture medium. The plates were incubated at 37°C in an atmosphere containing 5% CO2. The cells were analyzed using a CellTiter 96® AQueous One Solution Cell Proliferation Assay Kit (Promega, Madison, WI, USA) according to the manufacturer's instructions. Absorbance (490 nm) was measured one, two, and three days after cell plating. Doubling times were determined from four replicate samples per point.
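For illustration, a doubling time could be derived from such absorbance readings as in the following minimal sketch, assuming exponential growth and hypothetical "times" and "od" values (the paper does not specify its calculation):

import numpy as np

def doubling_time(times_h, absorbance):
    # Assumes absorbance is proportional to cell number and growth is
    # exponential over the sampled interval: A(t) = A0 * 2**(t / Td)
    slope, _ = np.polyfit(times_h, np.log2(absorbance), 1)  # log2(A) vs t
    return 1.0 / slope  # hours per doubling

# Hypothetical MTS readings (490 nm) at days 1-3, replicates averaged
times = np.array([24.0, 48.0, 72.0])
od = np.array([0.21, 0.39, 0.74])
print(f"doubling time ~= {doubling_time(times, od):.1f} h")  # ~26 h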
Cell counts
TUHR14TKB transfectants were plated in 24-well cell culture plates (10,000 cells per well) in 500 μL of RPMI 1640 medium containing 10% FBS with 5 mg/L blasticidin S HCl, 0.3 g/L G418, and 1 mg/L doxycycline hyclate. The plates were incubated at 37°C in an atmosphere containing 5% CO2. Cells on the plate were fixed with 4% paraformaldehyde and stained with 0.1% crystal violet one, two, and three days after plating. The numbers of cells were counted in five random fields using a light microscope (×100).
Wound-healing assay
Cells (1 × 10⁶) were seeded in 24-well plates. After incubation overnight (786-O TR transfectant) or for one day (TUHR-TR transfectant), an artificial wound was created (0 h) using a 200-μL tip to introduce a gap in the confluent cell monolayer, and the culture medium was changed. Images were acquired at 0 h and at 6 h (786-O TR transfectant) or 16 h (TUHR-TR transfectant). The wounded areas were measured before and after healing.
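As an illustration, percent wound closure could be computed from the measured areas as follows (a minimal sketch; the values are hypothetical, not from the study):

def percent_closure(area_0h, area_t):
    # Percent of the initial wound area closed by time t
    return 100.0 * (area_0h - area_t) / area_0h

# Hypothetical wound areas in pixels at 0 h and 16 h
print(f"closure = {percent_closure(52000, 31000):.1f}%")  # -> 40.4%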
Data analysis
Cell proliferation and migration data were analyzed using Student's t-test. Statistical significance was defined as p < 0.05.
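A minimal sketch of such a two-group comparison in Python (SciPy), using hypothetical replicate values rather than the study's data:

from scipy import stats

# Hypothetical readings for control vs. FABP7-overexpressing cells
control = [0.74, 0.69, 0.78, 0.71]
fabp7 = [0.52, 0.49, 0.57, 0.55]

t_stat, p_value = stats.ttest_ind(control, fabp7)  # two-sided Student's t-test
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
if p_value < 0.05:
    print("difference is statistically significant at p < 0.05")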
Analyses of FABP7 expression and proliferation of TUHR14TKB cells during passage in culture
High levels of FABP7 were detected during passages 6-8 of TUHR14TKB cells, but not during passages 16-18 (Fig. 1a). The levels of FABP7 expressed by TUHR14TKB cells decreased by approximately fourfold between the two cell passages (Fig. 1b). In contrast, the doubling time of low-passage cells was approximately twice that of high-passage cells (Fig. 2a). The doubling times differed among cells that were isolated from individual colonies of low-passage TUHR14TKB cells (Fig. 2a). Further, the percentage of S-phase cells in high-passage TUHR14TKB cells increased and was accompanied by a decrease in the percentage of G0/G1-phase cells compared with low-passage TUHR14TKB cells (Fig. 2b).

Fig. 1 Expression of FABP7 during subculture of TUHR14TKB cells. The zero passage (0) was started when the cells were received from RIKEN. TUHR14TKB cells were cultured in RPMI 1640 medium containing 10% FBS for one to two weeks, harvested when they reached confluence, and assayed for FABP7 expression. (a) Western blot analysis of FABP7 expression. (b) Real-time PCR analysis of FABP7 expression.

Fig. 2 Proliferation of TUHR14TKB cells during passage. (a) The doubling times of TUHR14TKB cells and its subclones, determined by MTS assay. Low and high passages are defined as TUHR14TKB cells at passages 7-9 and 19-23, respectively. "Subclones 1, 2, and 3" represent subclones from low-passage TUHR14TKB cells. The assay was repeated three to five times, and the data represent the average value and standard deviation (error bars). (b) The stages of the cell cycle of low- and high-passage TUHR14TKB cells and the subclones, determined using flow cytometry.
Functional analysis of FABP7 in RCC cells
We transfected FABP7 low-expressing TUHR14TKB and 786-O cells with an FABP7 expression vector (Fig. 3a and b and Additional file 1: Figure S1a and S1b). In the presence of 10% FBS, the doubling time of TUHR14TKB cells that overexpressed FABP7 was significantly longer than that of cells transfected with the control vector (Fig. 4a and b). Although TUHR14TKB cells transfected with the control vector were able to proliferate, the cells that overexpressed FABP7 were unable to proliferate in the presence of 1% FBS (Additional file 2: Figure S2). Further, the percentage of TUHR14TKB-FABP7 cells in G2/M increased compared with that of TUHR14TKB-lacZ cells (Fig. 4c), indicating that FABP7 induced the arrest of TUHR14TKB cells in G2. In contrast, overexpression of FABP7 stimulated the proliferation of the 786-O cell line cultured in medium containing 1% FBS (Additional file 1: Figure S1c).
Wound-healing assays revealed that TUHR14TKB cells that overexpressed FABP7 migrated significantly slower than TUHR14TKB cells transfected with the control vector (Fig. 3c), although overexpression of FABP7 did not affect the migration of 786-O cells (Additional file 1: Figure S1d).
Effects of fatty acids on TUHR14TKB cells expressing FABP7
Although FABP7 binds to fatty acids, it does not catalyze de novo fatty acid synthesis, suggesting that FABP7 expression leads to the accumulation of fatty acid in cells. Docosatetraenoic acid and EPA accumulated in TUHR14TKB cells that expressed FABP7 (Fig. 5a). In contrast, other fatty acids did not accumulate in TUHR14TKB cells that expressed FABP7 (Additional file 3: Table S1). Therefore, we tested the effects of docosatetraenoic acid or EPA on the proliferation of TUHR14TKB cells. The addition of docosatetraenoic acid significantly stimulated the proliferation of TUHR14TKB cells that expressed β-galactosidase (Fig. 5b).
Discussion
Human RCCs overexpress FABP7 [4, 6-14], indicating that FABP7 might affect the progression of RCC. Therefore, we studied FABP7 function using RCC cell lines. In the present study, we show that the levels of FABP7 dramatically decreased during passage of the RCC cell line TUHR14TKB. Further, FABP7 overexpression differentially affected the proliferation of the RCC cell lines analyzed here: overexpression of FABP7 decreased the proliferation of TUHR14TKB cells, whereas it increased the proliferation of 786-O cells.
FABP7 transcripts are expressed in 18 of 30 clear cell-type RCC lesions but in only 4 of 19 RCC cell lines [6]. These results are consistent with our previous findings that FABP7 is expressed in one (TUHR14TKB) of six RCC cell lines [27]. We show here that the levels of FABP7 decreased during the passage of TUHR14TKB cells (Fig. 1). Further, TUHR14TKB cells proliferated faster during continued passage (Fig. 2a), suggesting that continued passage selected for cells that did not express FABP7 and therefore proliferated at an increased rate. Moreover, the doubling times of subclones of TUHR14TKB cells differed significantly (Fig. 2a), which is consistent with the loss of FABP7 expression during attempts to establish cell lines from primary RCC tumor tissue. In addition, glioblastoma neurospheres express FABP7 at higher levels than adherent cells derived from the same tumor [21]. Therefore, conditions that favor the formation of spheres may provide a selective advantage for primary RCC cells that express FABP7.
Overexpression of FABP7 inhibited the proliferation of TUHR14TKB cells ( Fig. 4a and b), which is consistent with findings that FABP7 (referred to formerly in the studies cited here as the protein encoded by mammary-derived growth inhibitor-related gene) inhibits the proliferation of breast cancer cell lines [15,16]. Further, high tumor-grade (G3 + G4) RCCs express significantly lower levels of FABP7 mRNA than low-grade (G1 + G2) RCCs [10], and FABP7 is highly expressed in primary melanomas compared with metastatic melanomas [29,30]. In contrast, knockdown of FABP7 expression inhibits the proliferation of melanoma cells [17,18], an RCC cell line [19], a breast cancer cell line [20], and glioblastoma cells [21]. Further, we show here that FABP7 overexpression did not affect proliferation of the 786-O cell line (Additional file 1: Figure S1c and [14]), and down-regulation of FABP7 expression by FABP7-specific siRNAs does not affect the proliferation of certain melanoma cells [17]. Interestingly, FABP7 overexpression stimulated the proliferation of 786-O cells in medium containing 1% FBS (Additional file 1: Figure S1c and [14]). The present and previous studies demonstrate that the effect of FABP7 on cell proliferation varies among cell lines and with cell culture conditions. These findings may be explained by the interaction of FABP7 with molecule(s) that inhibit or enhance cell proliferation. Cancer is a multistage disease, which develops through a succession of mutations [31,32]. Thus, FABP7 and other molecule(s) may control cell proliferation through a similar mechanism. Another explanation for the inconsistencies among studies of FABP7 function may be that FABP7 modulates signaling networks that influence cell proliferation.
Down-regulation of FABP7 expression by siRNAs significantly reduces the migration of melanoma cell lines [17,18], an RCC cell line [19], breast cancer cells [20], and malignant glioma cells [21-23]. Further, overexpression of FABP7 enhances the migration of glioma cells [24]. In contrast, FABP7 overexpression inhibited the migration of TUHR14TKB cells, as revealed using a wound-healing assay (Fig. 3c). Thus, the effect of FABP7 on wound healing may be related to its effect on proliferation.
Docosatetraenoic acid and EPA accumulated in TUHR14TKB cells that expressed FABP7 (Fig. 5a and Additional file 3: Table S1). Ligand-binding studies conducted in vitro show that ω-3 EPA is the preferred ligand of FABP7 [33]. Further, the addition of docosatetraenoic acid significantly increased cell growth (Fig. 5b), suggesting that the inhibition of TUHR14TKB cell proliferation by FABP7 does not act through FABP7-mediated accumulation of docosatetraenoic acid.
Conclusions
Our data lead us to conclude that the TUHR14TKB cell line comprises a heterogeneous population and that cells that do not express FABP7 grow faster and are therefore selected during passage in culture. Further, our finding that FABP7 inhibited the proliferation of TUHR14TKB cells but stimulated the proliferation of 786-O cells cultured in medium with 1% FBS indicates that FABP7 function depends on cell type and culture conditions.
"year": 2017,
"sha1": "6fef0fdce97854c3eee09b9fb2592314eeb0d308",
"oa_license": "CCBY",
"oa_url": "https://bmccancer.biomedcentral.com/track/pdf/10.1186/s12885-017-3184-x",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "6fef0fdce97854c3eee09b9fb2592314eeb0d308",
"s2fieldsofstudy": [
"Biology",
"Medicine"
],
"extfieldsofstudy": [
"Medicine",
"Biology"
]
} |
Integration and publication of heterogeneous text-mined relationships on the Semantic Web
Background Advances in Natural Language Processing (NLP) techniques enable the extraction of fine-grained relationships mentioned in biomedical text. The variability and the complexity of natural language in expressing similar relationships causes the extracted relationships to be highly heterogeneous, which makes the construction of knowledge bases difficult and poses a challenge in using these for data mining or question answering. Results We report on the semi-automatic construction of the PHARE relationship ontology (the PHArmacogenomic RElationships Ontology) consisting of 200 curated relations from over 40,000 heterogeneous relationships extracted via text-mining. These heterogeneous relations are then mapped to the PHARE ontology using synonyms, entity descriptions and hierarchies of entities and roles. Once mapped, relationships can be normalized and compared using the structure of the ontology to identify relationships that have similar semantics but different syntax. We compare and contrast the manual procedure with a fully automated approach using WordNet to quantify the degree of integration enabled by iterative curation and refinement of the PHARE ontology. The result of such integration is a repository of normalized biomedical relationships, named PHARE-KB, which can be queried using Semantic Web technologies such as SPARQL and can be visualized in the form of a biological network. Conclusions The PHARE ontology serves as a common semantic framework to integrate more than 40,000 relationships pertinent to pharmacogenomics. The PHARE ontology forms the foundation of a knowledge base named PHARE-KB. Once populated with relationships, PHARE-KB (i) can be visualized in the form of a biological network to guide human tasks such as database curation and (ii) can be queried programmatically to guide bioinformatics applications such as the prediction of molecular interactions. PHARE is available at http://purl.bioontology.org/ontology/PHARE.
Background
A large amount of biomedical knowledge is in the form of text embedded in published articles, clinical files, or biomedical public databases. In order to construct computable knowledge bases from these sources, there is a great interest in capturing and formalizing this knowledge. The capture of relationships between biological entities is of particular interest since such relationships represent elementary and reusable knowledge units, often called "nano-publications" [1].
Our work is motivated by the need for automated approaches capturing and formalizing knowledge extracted from the literature via manual or computational approaches. Consider, for example, that five curators at the Pharmacogenomics Knowledge Base (PharmGKB) manually browse the pharmacogenomics (PGx) literature to curate relationships relevant for storage in the PharmGKB [2]. The result of this curation process is a high-quality database queried by clinicians and bioinformaticians. Nevertheless, this manual curation process is not sustainable considering the growth of the scientific literature in this domain [3]. Automatic approaches using Natural Language Processing (NLP) are therefore increasingly utilized [4].
The simplest methods to capture relationships rely on co-occurrence of two entities to derive a relation between them. For example, in the sentence "Our study shows that warfarin inhibits the expression of VKORC1" a drug, warfarin, and a gene, VKORC1, can be recognized using simple lexicons. The co-occurrence of these two entities in one or more sentences is used to derive a relation of the form (warfarin, VKORC1).
One key limitation of the co-occurrence-based approach is the identification of false positive connections. For example, the sentence "Warfarin inhibits the expression of VKORC1 while sulfamethoxazole inhibits the expression of CYP2C9" would provide co-occurrence counts towards four relationships, including the relationships (warfarin, VKORC1) and (warfarin, CYP2C9), only one of which is true. A second limitation is the coarse granularity of the identified relationships. Considering the previous example, the mentioned relationship links warfarin and the expression of VKORC1, and not VKORC1 per se. We consider this distinction important since VKORC1 and expression of VKORC1 refer to a gene and a phenotype, respectively, two very distinct entities. Despite these limitations, co-occurrence is successfully used to generate networks including protein-protein interaction networks, gene-disease networks, and regulatory gene expression networks [5,6]. Most of these networks are hard to compute on since their representation format does not support queries with typed relationships, and the semantics associated with the nodes and edges differ in every network.
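To make the co-occurrence approach concrete, here is a minimal Python sketch of lexicon-based co-occurrence counting over sentences; the tiny lexicons are illustrative stand-ins, not the PharmGKB synonym sets used later in this paper:

from itertools import product

# Illustrative lexicons; the paper uses PharmGKB synonym sets instead
DRUGS = {"warfarin", "sulfamethoxazole"}
GENES = {"vkorc1", "cyp2c9"}

def cooccurrences(sentence):
    # Return all (drug, gene) pairs mentioned in one sentence
    tokens = {t.strip(".,;").lower() for t in sentence.split()}
    return [(d, g) for d, g in product(DRUGS & tokens, GENES & tokens)]

s = ("Warfarin inhibits the expression of VKORC1 while "
     "sulfamethoxazole inhibits the expression of CYP2C9")
print(cooccurrences(s))
# Yields four pairs, including the false positives (warfarin, cyp2c9)
# and (sulfamethoxazole, vkorc1) discussed above.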
Other NLP approaches can identify typed relationships and recognize entities that can be either the whole or a part of a subject and an object [7-9]. For example, processing the previous sentence can identify the relationship inhibits(warfarin, the expression of VKORC1), which can also be represented as inhibits(warfarin, VKORC1 expression). Figure 1 shows three levels of granularity commonly encountered in text-mined relationships. Fine-grained relationships can be identified via syntactic parsing of sentences, which generates structures such as Parse Trees or Dependency Graphs (DG) [10]. In previous work, we presented a method based on syntactic parsing and DG exploration to extract fine-grained PGx relationships [11]. Given the variation in natural language, it is difficult to normalize the fine-grained and typed relationships extracted by this method. In this paper, we report on the construction of a relationship ontology and describe its use for integrating and publishing text-mined relationships on the Semantic Web. The relationships captured as instances of the PHARE ontology can be queried using Semantic Web technologies such as SPARQL and can be visualized in the form of a biological network. Semantics associated with relationships declared in PHARE-KB allow the text-extracted relationships to be consumed both by humans (for example, to guide curation) and by machines (for example, to guide computational prediction of molecular interactions).
Methods
In previous work, we described the extraction of over 40,000 raw relationships in the domain of pharmacogenomics from MEDLINE abstracts [11]. In the following sections, we briefly summarize this extraction process and then describe how we use the PHARE ontology we have created to normalize and integrate these relationships.
Relationships and PGx relationships
We define a relationship as a binary relation R (a, b), where a, and b are subjects and objects related by a relationship of type R. In PGx relationships a and b can be instances of a gene (e.g., VKORC1 gene), drug (e.g., warfarin), or phenotype (e.g., clotting disorder). We note that a and b can also be entities that are related to genes (e.g., VKORC1 expression), drugs (e.g., warfarin dose) or phenotypes (e.g., clotting disorder treatment). R is a type of relation described by words such as "inhibits", "transports", or "treats" and their synonyms.
The three key entities in PGx (genes, drugs, and phenotypes) can be either direct targets for relation extraction or indicators of latent PGx knowledge, as they modify other entities to create a second set of entities necessary to precisely describe PGx relationships. We refer to these modified entities as composite entities, in contrast with the key entities. These composite entities can be any biomedical entity, such as a gene variation, drug effect, or disease treatment. For example, the gene entity VKORC1 (a key entity) is used as a modifier of expression in "warfarin inhibits the expression of VKORC1." Specifically, composite entities are composed of a sequence of terms that can be read left to right, where each term progressively specializes the term to its right. The last word is named the head entity. Figure 2 shows the components of relationships.

Figure 1. Coarse to fine-grained relationships identified in the sentence "Our study shows that warfarin inhibits the expression of VKORC1". Relationships are mainly of three forms: (1) non-typed relationships composed of two atomic entities; (2) typed relationships between atomic entities; (3) typed relationships between atomic or composite entities.
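A minimal sketch of how such relationships could be represented as a data structure (the class and field names are illustrative, not taken from the authors' code):

from dataclasses import dataclass, field

@dataclass
class Entity:
    # A key entity (e.g., the drug warfarin) or a composite entity built
    # from a head term and its modifiers (e.g., 'VKORC1 expression')
    head: str                                      # e.g. "expression"
    modifiers: list = field(default_factory=list)  # e.g. ["VKORC1"]

@dataclass
class Relationship:
    # A binary relation R(a, b) with a typed predicate
    rel_type: str   # e.g. "inhibits"
    subject: Entity
    obj: Entity

r = Relationship(
    rel_type="inhibits",
    subject=Entity(head="warfarin"),
    obj=Entity(head="expression", modifiers=["VKORC1"]),
)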
Identification of a sentence with PGx relationships
Given the definition of PGx relationships, a sentence that potentially contains a PGx relationship would mention a gene and a drug, a gene and a phenotype, or a drug and a phenotype. We used a Lucene index created on individual sentences of MEDLINE abstracts published before 2009 (17,396,436 abstracts and 87,806,828 sentences), processed by Xu et al., to identify those sentences that might contain a PGx relationship [12,13]. To select only sentences that potentially mention a PGx relationship, we queried the index with pairs of key PGx entities (only gene-drug and gene-phenotype pairs) for sentences that are indexed with both terms in the query. The PharmGKB lexicon provides the sets of synonyms used to build such queries for the key entities. Overall, for this study we used 41 genes highlighted by PharmGKB as key, well-characterized pharmacogenomic genes [14], as well as 3,007 drugs and 4,202 phenotypes. Future work will expand the relationship extraction to all genes.
Extraction of heterogeneous raw relationships
Sentences returned by the index are parsed using the Stanford Parser to build Dependency Graphs (DGs) [15]. DGs are rooted, directed, and labelled graphs, where nodes are words and edges are dependency relations between words (e.g., noun modifier, nominal subject). The extraction of raw relationships of the form R(a,b) relies on the exploration of the syntactic structure provided by DGs, where:
- a and b are nodes or chains of nodes in a DG, depending on whether they are a single key entity (an instance of gene, drug, or phenotype) or a composite entity;
- R is a node in the DG that connects a and b, and indicates the nature of their relationship.
We have developed an algorithm to explore the DG and extract raw relationships from the raw text. The extraction of raw relationships is constrained by a set of rules defined using the different types of dependencies that associate nodes in a DG. This step resulted in the extraction of over 40,000 raw relationships discussed in [11]. These relationships are highly heterogeneous and contain multiple equivalent ways to express a single fact. The details of the DG exploration algorithm appear in Table 1 of [11].
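For illustration, the following sketch extracts a typed relation by walking a dependency parse; it uses spaCy as a stand-in for the Stanford Parser used in the paper, and the single traversal rule shown is a simplified assumption, not the authors' rule set:

import spacy  # stand-in parser; requires: python -m spacy download en_core_web_sm

nlp = spacy.load("en_core_web_sm")

def extract_relation(sentence):
    # Naive rule: use the root verb as R, its nominal subject as a,
    # and its direct object (with its subtree of modifiers) as b
    doc = nlp(sentence)
    root = [t for t in doc if t.dep_ == "ROOT"][0]
    subj = next((t for t in root.children if t.dep_ == "nsubj"), None)
    obj = next((t for t in root.children if t.dep_ in ("dobj", "obj")), None)
    if subj is None or obj is None:
        return None
    obj_span = " ".join(t.text for t in obj.subtree)  # captures composite entities
    return (root.lemma_, subj.text, obj_span)

print(extract_relation("Warfarin inhibits the expression of VKORC1"))
# expected with a standard English model:
# ('inhibit', 'Warfarin', 'the expression of VKORC1')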
Building the PHArmacogenomic RElationship ontology
In order to create a smaller, normalized set of relationships, we first identified the 200 most frequent relationship types from the ~40,000 raw relationships. In the next step, we manually merged similar relationships and organized them hierarchically. Groups of similar relationships are used to define roles in the PHARE ontology. For example, Figure 3 shows how inhibit, repress, and antagonize are merged to define the role inhibits. Role labels are declared using the rdfs:label annotation property. The first label of each role is used as its preferred name. Please note that the symbol $$ is a simple separator symbol that enables us to distinguish the passive voice from the simple past during the next normalization step.

Figure 2. Components of relationships. A relationship has three components: relationship type, subject (here limited to a key entity), and object (here a composite entity which uses a key entity as a modifier).
In a similar manner we identified the 200 most frequent terms modified by key entities (e.g., expression for gene names or sensitivity for drug names). Then five PGx experts, including 3 co-authors and 2 PharmGKB curators, manually merged similar ones and organized them hierarchically in the entity hierarchy. Figure 4 shows how variant, polymorphism, and mutation are merged to define the entity Variant.
The entity hierarchy is defined with the subsumption relation (noted ⊑, or subClassOf in OWL). Existential quantification is used to define sets of composite entities that are only modified by certain concepts. For example, the set of entities that are modified by drugs is defined with the existential quantifier (∃) and the role modified: ∃ modified.Drug (or modified someValuesFrom Drug in Manchester OWL syntax); see Figure 4 for examples. This definition is associated through a subsumption relation to entities that can be modified by drugs, such as DrugSensitivity. This pattern is used to distinguish what is specialized (or modified) by drugs from what is specialized by other modifiers (e.g., disease names). For example, warfarin, which we know to be a drug, enables us to distinguish warfarin sensitivity from cancer sensitivity and to classify warfarin sensitivity as a kind of drug sensitivity rather than disease sensitivity (represented by the DiseaseSensitivity concept).
Inverse roles are explicitly defined using the inverse constructor (⁻¹, or inverseOf in OWL). As shown in the example in Figure 5, the roles inhibits and isInhibitedBy are inverses of one another. Class declarations are used to list all key entities of the domain of interest and the entity type they belong to. In our case, where gene-drug relationships are studied, known drugs and genes must be defined in the ontology as instances of the entity types Drug and Gene.
Building of WN-PHARE ontology using WordNet
Figure 5. Integration of heterogeneous relationships. Four raw relationships are normalized to two expressions using the PHARE ontology. The first two (s1 and s2) mention the same relationship with different words and sentence structures and are consequently integrated (e.g., 'drug dose' and 'drug requirement' are declared synonyms). s3 illustrates the utility of being able to distinguish between concepts modified by Gene and by Drug to disambiguate two different occurrences of "level": one specialized by a gene name, the other by a drug name. Given the ontology, 'gene level' is a reference to gene expression, whereas 'drug level' refers to drug dose. s3 and s4 illustrate the utility of role inverses in the ontology, which enable the integration of relationships extracted from s3 and s4 by swapping the subject and object of s3. The last two raw relationships are inverses that express the same relationship.

In order to quantify the utility of manual review and editing of the raw relationships in building PHARE, we built a second ontology, named WN-PHARE, in a purely automated manner using the lexical resource WordNet [16]. In this case, all relationship types (not just the 200 most frequent ones) are computationally merged into groups according to WordNet synsets. The resulting groups are directly used to define roles without any manual review. Similarly, all terms that modify gene, drug, or phenotype names are merged into groups used to define composite entities.
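A sketch of this automated merging, using NLTK's WordNet interface (an assumption; the paper does not name its WordNet toolkit):

from collections import defaultdict

from nltk.corpus import wordnet as wn  # requires: nltk.download('wordnet')

def merge_by_synset(relation_verbs):
    # Group relation verbs that share at least one WordNet synset
    groups = defaultdict(set)
    for verb in relation_verbs:
        for synset in wn.synsets(verb, pos=wn.VERB):
            groups[synset.name()].add(verb)
    return {k: v for k, v in groups.items() if len(v) > 1}

print(merge_by_synset(["inhibit", "repress", "suppress", "transport"]))
# e.g., a group containing {'inhibit', 'suppress'}, depending on the
# WordNet version; 'transport' stays ungrouped.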
Normalization and integration of heterogeneous relationships
The algorithm to normalize typed relationships between composite entities consists of four steps. The first three steps normalize the subject entity, the object entity, and the relationship type. The last step assembles the three normalized pieces into a normalized relationship of the kind shown in Figure 1.
Normalization of composite entities (steps 1 and 2)
This step, described in Table 1, takes as input a raw composite (or atomic) entity and the PHARE ontology and returns a normalized entity. The first word of the entity is recognized as the key entity. Each following word that composes the entity is then considered, from left to right, as something further specialized by the previous words. The ontology is searched for an entity label that matches the processed word (named read_word in the Table 1 algorithm). This algorithm is applied successively to the subject entity and the object entity of a relationship (Figure 6).
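A simplified sketch of this left-to-right normalization, using toy lookup tables in place of the ontology search (the dictionaries below are illustrative assumptions, not PHARE content); the example mirrors the Figure 6 walkthrough:

# Toy lookup tables standing in for the PHARE ontology's labels/synonyms
PREFERRED_KEY = {"coumadin": "warfarin"}      # key-entity preferred names
ENTITY_LABELS = {"requirements": "DrugDose",  # entity-type labels
                 "differences": "Variation",
                 "expression": "Expression"}

def normalize_entity(raw_words):
    # Normalize a raw composite entity read left to right: the first word
    # is the key entity; each following read_word maps to an entity type
    key = PREFERRED_KEY.get(raw_words[0], raw_words[0])
    normalized = [key]
    for read_word in raw_words[1:]:
        normalized.append(ENTITY_LABELS.get(read_word, read_word))
    return normalized

# "coumadin requirements differences" -> ['warfarin', 'DrugDose', 'Variation']
print(normalize_entity(["coumadin", "requirements", "differences"]))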
Normalization of relationship types (step 3)
The next step is to normalize the relationship type. The ontology is searched for role labels that match the raw relationship. When a match is found, the preferred name of the corresponding role is used to normalize the relationship type. Note that during this step the normalization process distinguishes between the passive voice of the present tense, such as "A is inhibited by B", and the active voice of the simple past tense, "B inhibited A". The Dependency Graphs of these two sentences are different because "inhibited" in the passive voice sentence is related through an aux dependency to "is" (aux standing for auxiliary). This difference is used during the relationship extraction to extract either is $$inhibited(A, B) or inhibited(A, B).

Figure 6. Normalization of a composite entity. Starting with the text "differences in coumadin requirements", NLP tools generate the raw entity "coumadin requirements differences", on which we can apply the normalization algorithm (described in Table 1) using the PHARE ontology. The first step ensures that the preferred name warfarin is used instead of coumadin. The second step maps "requirements" to the entity type DrugDose, and the final step maps "differences" to the entity type Variation. The axiom noted with a * is added to the ontology during the normalization as a result of the inference that a variation in drug dose was found.
Assembly of normalized pieces (step 4)
The final step is to group together the normalized composite entities and relationship type to produce normalized relationships. For each relationship, this step relies on the simple assembly of the normalized type, subject, and object. In addition, if the role used to normalize the type has inverses or is symmetric, then this step also creates the appropriate additional relationships. For each inverse role in the ontology, an inverse relationship is created with the preferred name of the inverse, where the normalized subject and object are swapped. If the role is symmetric, one additional relationship is created with the same normalized relationship type but with subject and object swapped. Figure 5 illustrates the integration process that applies such relationship normalization to four heterogeneous sentences.
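A minimal sketch of this assembly step, with inverse and symmetric roles handled as described (the role tables are illustrative; inhibits/isInhibitedBy and the symmetric isAssociatedWith are taken from examples in this paper):

# Illustrative role metadata standing in for the PHARE role hierarchy
INVERSES = {"inhibits": "isInhibitedBy"}
SYMMETRIC = {"isAssociatedWith"}

def assemble(rel_type, subject, obj):
    # Emit the normalized triple plus any inverse/symmetric variants
    triples = [(rel_type, subject, obj)]
    if rel_type in INVERSES:
        triples.append((INVERSES[rel_type], obj, subject))
    if rel_type in SYMMETRIC:
        triples.append((rel_type, obj, subject))
    return triples

print(assemble("inhibits", "warfarin", "VKORC1 expression"))
# -> [('inhibits', 'warfarin', 'VKORC1 expression'),
#     ('isInhibitedBy', 'VKORC1 expression', 'warfarin')]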
Applying the normalization on raw relationships produces a set of relationships represented as PHARE entities and roles. Consequently normalized relationships can be directly added to PHARE as instances to create a knowledge base.
Refinement of PHARE by repeating the normalization step
Raw relationships have been normalized twice using PHARE to iteratively refine the ontology. After the first iteration of the normalization, from the pool of un-normalized relationships we manually identify terms and roles that are either frequent or of PGx interest. Such terms (or roles) are then used to extend the set of synonyms of an entity already defined in the ontology, or used to create a new entity in the ontology.
Visualizing gene-disease networks
The PHARE ontology
The PHArmacogenomic RElationship ontology (or PHARE) contains 229 entity classes and 76 roles of interest in the PGx domain. PHARE is encoded in OWL-DL and is constructed semi-automatically by (i) listing terms derived from relationships extracted automatically from text; and (ii) the manual organization of the relationship terms by domain experts. Figures 2 and 3 illustrate how the extracted terms are organized in these hierarchies. The PHARE ontology is available online at http://purl.bioontology.org/ontology/PHARE.
The PHARE-Knowledge Base (PHARE-KB)
The ontology-driven integration process described in the Methods section takes as input a set of relationships extracted from MEDLINE abstracts and outputs a set of normalized relationships of the form Role(subject, object), represented using entity types and roles defined in PHARE. Therefore, normalized relationships can be used to instantiate roles defined in PHARE without additional processing. We performed such instantiation and obtained the PHARE-Knowledge Base (or PHARE-KB), which contains 28,676 role instantiations encoded as RDF triples from over 41,000 raw relationships. If we consider the instantiation of role inverses (e.g., isInhibitedBy(a,b) ≡ inhibits(b,a)), the number of role instantiations rises to 46,526. Note that some roles in PHARE have no inverse or are symmetric (e.g., isAssociatedWith).
Almost 77% of role instantiations use roles initially encoded in PHARE, and 23% necessitate the creation of new roles in PHARE. In other words, PHARE roles are sufficiently detailed to capture 77% of the relationships we extracted from text analysis. New roles correspond to types of relationships that are not frequent enough in our corpus and consequently have not yet been manually reviewed and defined in PHARE. These roles, which are added solely to instantiate the 23% of un-normalized relationships, are associated with only one label and thus do not yet contribute to the integration of relationships.
The 28,676 role instances link roughly 16,000 individuals in the KB, including 285 genes, 1,083 drugs, and 990 diseases. To facilitate overlap comparisons of PHARE-KB with other data sources, individuals that are of type gene, drug, or disease are associated with their Entrez Gene, DrugBank, and MeSH identifiers, respectively.
Individuals in the PHARE-KB can be classified using reasoning. Classification allows us to make implicit knowledge units explicit. For example, classification infers that

Phenotype(VKORC1 expression)

i.e., VKORC1 expression is a phenotype, on the basis of the following two axioms:

Expression(VKORC1 expression)
Expression ⊑ Phenotype

i.e., VKORC1 expression is a gene expression, and gene expression is a phenotype. Every relationship available in the PHARE-KB (in the form of an RDF triple) is associated with its provenance using the property rdfs:comment. For example, the triple isAssociatedWith(UCHL1, parkinson disease) is associated with the following string: "[14522054, Neuronal ubiquitin C-terminal hydrolase (UCH-L1) has been linked to Parkinson's disease (PD), the progression of certain nonneuronal tumors, and neuropathic pain]", where 14522054 is the PMID (PubMed ID) of the article and the text is the sentence from which the triple was created.

Figure 7. Sub-network of genes (or associated entities) strongly related to Alzheimer's Disease (AD) according to PHARE-KB. Linked entities are connected by more than 5 sentences in MEDLINE abstracts. The relationships shown on the edges are the two most frequent types of relations mentioned in these sentences. Some relationship types are false, such as "hearing".
Evaluation and comparison
To evaluate the impact of the manual review and curation in the construction of the PHARE ontology, we constructed an alternate relationship ontology, named WN-PHARE, in a fully automated manner using WordNet as described in the Methods section. Table 2 compares the structure and the effectiveness of PHARE and WN-PHARE in integrating heterogeneous text-mined relationships. These features are measured for the task of integrating a subset of relationships extracted for Parkinson's Disease (PD). This subset contains 2,827 PD relationships extracted from 2,124 distinct MEDLINE abstracts. Logic criteria (e.g., satisfiability) of the ontologies are not included in the comparison since both ontologies are consistent and coherent.
We find that the roles represented in PHARE cover the set of extracted relationships incompletely, but they normalize more relationships than the roles defined in WN-PHARE. Thus, the manually reviewed ontology results in a better identification of similar relationships that are phrased differently in natural language, but it captures a smaller fraction of the total relationships extracted from text. Table 3 provides additional evaluation with the numbers of similar relationships (same subject, predicate, and object) identified first before normalization, second after normalization using PHARE, and third after normalization using WN-PHARE.
SPARQL query point
In order to publish the PHARE-KB for use on the Semantic Web, we set up a SPARQL endpoint, which is available at http://sparql.bioontology.org/webui/. Examples of queries are provided as additional file 1.
The KB is classified, and inferred triples are materialized before loading into the triple store underlying the SPARQL endpoint. As a consequence, queries return asserted as well as inferred facts.
Table 2. Comparison of PHARE (built semi-automatically with added manual review and curation) and WN-PHARE (built in a fully automated manner). The Reduction column quantifies the ability of each ontology to normalize text-mined relationships; Reduction is the ratio of the number of normalized relationships to the initial number of raw relationships. The Coverage column quantifies the fraction of raw relationships that are normalized using roles and entity types encoded in the ontology.

An example of a query for entities related to the uchl1 gene is shown below:

SELECT $y $z
FROM <http://www.stanford.edu/~coulet/phare.owl>
WHERE { <http://www.stanford.edu/~coulet/phare.owl#uchl1> $y $z }

This query returns the RDF triple isAssociatedWith(UCHL1, parkinson disease) mentioned previously. Queries can also return sets of RDF triples that are used to build a sub-network related to a specific disease, as shown in Figure 7.
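Such a query could also be issued programmatically, for example with the SPARQLWrapper library (a sketch; the endpoint URL is the one named above and may no longer be live):

from SPARQLWrapper import SPARQLWrapper, JSON

# Endpoint named in this paper; it may no longer be available
sparql = SPARQLWrapper("http://sparql.bioontology.org/webui/")
sparql.setQuery("""
    SELECT ?y ?z
    FROM <http://www.stanford.edu/~coulet/phare.owl>
    WHERE { <http://www.stanford.edu/~coulet/phare.owl#uchl1> ?y ?z }
""")
sparql.setReturnFormat(JSON)

results = sparql.query().convert()
for binding in results["results"]["bindings"]:
    print(binding["y"]["value"], binding["z"]["value"])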
Disease related gene networks

Figures 7 and 8 show gene-disease sub-networks related to AD and PD, respectively. For display purposes, these have been reduced by selecting only those nodes that are asserted to be related in more than 5 different sentences. Since the type of relationship differs across sentences, only the two most frequent relationships are displayed as labels on the edges. Each network was obtained using a SPARQL query to select triples where the disease (AD or PD) is either subject or object. The resulting set of triples is then filtered to keep the frequent relationships. Such filtering enables us to remove both false positives and irrelevant triples such as phare:alzheimer=disease rdf:type phare:Disease. Note that in RDF we use the symbol '=' as a simple separator to replace spaces in compound nouns.
Discussion
Our work is motivated by the need for automated approaches capturing and formalizing knowledge extracted from the literature and the need for publishing such knowledge on the Semantic Web. Recent advances in Natural Language Processing (NLP) techniques enable the extraction of fine-grained relationships mentioned in biomedical text [4]. The variability and the complexity of natural language in expressing similar or simple relationships causes the extracted relationships to be highly heterogeneous. We show that the use of a relationship ontology can normalize and integrate the heterogeneous relationships extracted from text and serve as a common semantic framework to integrate text-mining-derived facts into a knowledge base. However, the manual construction of a relationship ontology is a slow and expensive process [18]. We have devised a method to construct such an ontology using the text-extracted heterogeneous relationships as a starting point. Although we only report on our experiments in the pharmacogenomics domain, we note that the approach described here can be applied to relationship extraction in other domains.
Linked data cloud and text-mined relationships
Our results in publishing RDF triples extracted from text align closely with the objectives of the Linking Open Data community project [19] and those of efforts such as the Concept Web Alliance [20]. The goal of projects such as Linked Open Data is to publish various data sets as RDF on the Web and to declare links between data items from different data sources.
Currently, the relationships we extract do not integrate easily with content in the Linked Data Cloud for two main reasons: the lack of unique resource identifiers and the lack of an agreed-upon relation ontology. Despite community efforts to create unique resource identifiers for the life sciences, there is currently no clear consensus [21,22]. In addition, composite entities, such as VKORC1 expression, that participate in relationships are too complex to reference using a single identifier. Moreover, the absence of an expressive and comprehensive relation ontology led us to develop our own in a bootstrapped manner from example instances of text-mined relationships. PHARE is designed for the purpose of representing PGx relationships, and we anticipate that sharing it with the community will provide a much-needed example set for the development of a proper, formal biomedical relation ontology. PHARE is particularly suited to seed that activity because it is built from the most frequent relationships that are used in the scientific literature. One challenge is thus to propose consistent mappings between relationship types arising from the literature, such as those suggested by PHARE, and relationship types arising from functional annotations, such as "suppresses gene" or "enhances gene" suggested by TAIR relations or the Gene Ontology [23].
Limitations of our approach
Adequately representing provenance information at the sentence level is a challenge. Currently, we utilize the rdfs:comment property to store provenance for each extracted fact in PHARE-KB. In the future, we plan to evaluate the Annotation Ontology developed by Ciccarese et al. [24] for its utility in representing provenance at the sentence level, particularly in workflows where both automated and manual approaches are used simultaneously.
Another limitation is the incoherence of gene name identifiers across data sources. Our gene identifiers are based on PharmGKB gene names, which are not entirely consistent with the HUGO Gene Nomenclature [25], making cross-referencing with other sources time consuming. In a similar vein, the recall of extracted relations may improve with advanced Named Entity Recognition techniques, such as disambiguation, rather than the current PharmGKB-derived dictionary-based approach.
The efficacy of the relationship normalization and integration might vary depending on the source of the text such as full articles, clinical reports, clinical files or drug labels. However, because PHARE has been designed using MEDLINE abstracts, it may capture relationships mentioned in diverse sources.
Conclusions
We have described the construction of an ontology of relationships in the PGx domain and its use to integrate heterogeneous relationships extracted by text-mining. The synonyms, entity descriptions, and the hierarchies of entities and roles represented in the ontology are used to map text-derived relationships to the ontology. Once mapped, relationships can be normalized and compared using the semantics defined in the ontology to identify relationships that have similar semantics but different syntax. We compare and contrast a fully automated and a manually edited version of the PHARE ontology to quantify the degree of integration enabled by manual inspection, curation, and refinement of the PHARE ontology. PHARE has been successfully used in a pipeline for the integration of pharmacogenomic relationships extracted from MEDLINE abstracts [11]. The result of the integration is compiled into a knowledge base named PHARE-KB, which can now be queried using Semantic Web technologies such as SPARQL and can be visualized in the form of a biological network. PHARE-KB can also be queried programmatically, for example, to guide the computational prediction of molecular interactions [26].
"year": 2011,
"sha1": "829aed36e5194e90100b1017e391eee9a9784d1b",
"oa_license": "CCBY",
"oa_url": "https://jbiomedsem.biomedcentral.com/track/pdf/10.1186/2041-1480-2-S2-S10",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "829aed36e5194e90100b1017e391eee9a9784d1b",
"s2fieldsofstudy": [
"Computer Science",
"Medicine"
],
"extfieldsofstudy": [
"Computer Science",
"Medicine"
]
} |
Prediction of Early Visual Outcome of Small-Incision Lenticule Extraction (SMILE) Based on Deep Learning
Introduction Deep learning (DL) has been widely used to analyze clinical images. The objective of this project was to create DL models to predict the early postoperative visual acuity after small-incision lenticule extraction (SMILE) surgery. Methods We enrolled three independent patient cohorts (a retrospective cohort and two prospective SMILE cohorts) who underwent the SMILE refractive correction procedure at two different refractive surgery centers from July to September 2022. The medical records and surgical videos were collected for further analysis. Based on the uncorrected visual acuity (UCVA) at 24 h postsurgery, the eyes were divided into two groups: those showing good recovery and those showing poor recovery. We then trained a DL model (Resnet50) to predict the early postoperative visual acuity of patients in the retrospective cohort from their surgical videos and subsequently validated the model's performance in the two prospective cohorts. Finally, Gradient-weighted Class Activation Mapping (Grad-CAM) was performed for interpretation of the model. Results Among the 318 eyes (159 patients) enrolled in the study, 10,176 good-quality femtosecond laser scanning images were obtained from the surgical videos. We observed that the developed DL model achieved a high accuracy of 96% for image prediction. The area under the curve (AUC) values of the DL model in the retrospective cohort were 0.962 and 0.998 in the training and validation datasets, respectively. The AUC values in the two prospective cohorts were 0.959 and 0.936. At the video level, the trained machine learning (ML) model (XGBoost) also accurately distinguished patients with good or poor recovery. The AUC values of the ML model were 0.998 and 0.889 in the retrospective cohort (training and test datasets, respectively) and 1.000 and 0.984 in the two prospective cohorts. We also trained a DL model that can accurately distinguish suction loss (100%), black spots (85%), and opaque bubble layer (96%). The Grad-CAM heatmaps indicated that our models can recognize the area of scanning and precisely identify intraoperative complications. Conclusions Our findings suggest that artificial intelligence (DL and ML models) can accurately predict the early postoperative visual acuity and intraoperative complications of SMILE surgery using only surgical videos or images, which may be of great importance for the application of artificial intelligence in refractive surgery. Supplementary Information The online version contains supplementary material available at 10.1007/s40123-023-00680-6.
INTRODUCTION
Refractive error, the most prevalent cause of correctable vision impairment, is expected to affect over 6 billion individuals by 2050, and laser correction of refractive error will become the most popular elective surgery performed worldwide [1]. Small-incision lenticule extraction (SMILE) is a novel corneal refractive surgery that has recently been developed to correct vision in patients with myopia and myopic astigmatism, becoming the first choice of treatment for an increasing number of patients and ophthalmologists. As such, SMILE has become the most mainstream refractive surgery, with the advantages of minimal invasion, a flapless technique, and preservation of intact corneal morphology, and it provides faster healing of corneal nerve fibers, better biomechanical strength, and a lower incidence of dry eye [2]. Currently, over 2 million myopic patients have undergone SMILE surgery worldwide [3]. Although numerous previous clinical studies have proven its safety, effectiveness, predictability, and stability in correcting refractive errors, a number of intraoperative and postoperative complications with differing clinical outcomes have been reported [4]. Our group has observed in the clinical setting that about 5% of SMILE patients experience poor visual recovery (< 20/25) on the first postoperative day. This is similar to the findings of Chansue et al. [5] and Ganesh et al. [6], who reported that about 90%-95% of patients achieved an uncorrected distance visual acuity of 20/20 on the first day after SMILE. However, postoperative visual acuity recovery remains an important indicator for the evaluation of laser vision correction, as important as the advantages of the surgery itself, and it is also closely associated with patient satisfaction.
The SMILE procedure comprises three main steps: (1) femtosecond laser lenticule construction; (2) lenticule separation; and (3) lenticule extraction. Intraoperative complications may occur at each step. The lenticule separation and extraction steps mainly depend on the surgeon's operative experience and surgical skills. Consequently, there is a potential for various intraoperative complications, such as corneal cap perforation or incisional tears, lenticule dissection difficulties, lenticule remnants, bleeding, and partial centering, when the surgeon is in the initial phase of the surgical learning curve [7,8]. With improvements in surgical skill and the popularization of the SMILE surgical technique, such intraoperative complications related to lenticule separation and extraction can be largely avoided [9]. However, lenticule construction is completely dependent on femtosecond laser scanning. Femtosecond laser-related complications, such as suction loss, black spots, and an opaque bubble layer, inevitably affect the quality of intraoperative femtosecond laser scanning. Consequently, poor quality of femtosecond laser scanning directly determines the level of difficulty of lenticule separation and extraction, as well as delaying postoperative visual recovery [10].
Recent breakthroughs in artificial intelligence (AI), particularly deep learning (DL), have shown considerable promise for diagnosing a number of prevalent diseases using clinical images [11]. For example, machine learning and DL have been widely applied to image processing for pathomics, radiomics, and genomics [12]. With the application of DL and the availability of massive numbers of clinical images, there is a new opportunity to evaluate established techniques for predicting patient diagnosis and prognosis. Therefore, in this study, we used a DL model to identify SMILE scanning images and predict the early postoperative visual acuity through supervised learning.
The study was carried out in accordance with the principles of the 1964 Helsinki Declaration and its later amendments and was approved by the Ethics Committee of West China Hospital. Before enrolling in the study, each subject provided written informed consent.
Surgical Procedure
Oxybuprocaine hydrochloride eye drops were used for topical anesthesia. The eye was sterilized and docked. For the SMILE procedure, a 500-kHz VisuMax femtosecond laser system (Carl Zeiss Meditec, Jena, Germany) with an energy of 130 nJ was employed. The following scans were performed on the lenticule: spiral-in for the posterior plane, border, spiral-out for the anterior plane, and side cutting. The diameters of the optical zone and corneal cap were 6.0-6.5 mm and 7.0-7.5 mm, respectively. The corneal cap thickness was 120-130 μm. The eye was undocked after the suction was released. The lenticule was first separated at the anterior surface, then split at the posterior surface with a blunt spatula. Next, the lenticule was extracted from the corneal stroma via a small incision. The postoperative care included antibiotic and topical steroid eye drops (0.1% tobramycin dexamethasone [Alcon China Ophthalmic Product Co., Ltd., Beijing, China]; 0.5% levofloxacin [Santen Pharmaceutical, Ikoma, Nara, Japan]), which were prescribed 4 times a day for 1 week. Also, artificial tears (0.1% sodium hyaluronate) with no preservatives were applied 6 times per day.
Scanning Image Acquisition and Processing
Videos of SMILE procedures were acquired from the VisuMax laser and then divided into a 'good visual outcome' category (UCVA ≤ 0.1) and a 'poor visual outcome' category (UCVA > 0.2) based on the postoperative UCVA at 24 h. The SMILE scanning images were then extracted from the videos (1 image per 0.5 s). Taking into account the large number of images acquired from SMILE videos (usually 80-160 images per video), we selected the images obtained from the end of the posterior plane scanning to side cutting for subsequent analysis. The number of scanning images per video generally ranged from 30 to 34. The eyes of the retrospective SMILE cohort were randomly divided into training and validation datasets at a 7:3 ratio. The training dataset was used for model building and hyperparameter tuning, while the validation dataset was used to evaluate generalization performance. Both data augmentation and normalization were employed for the training images, whereas only normalization was applied to the validation images. In our investigation, we used random affine modification and horizontal patch flipping to augment the data. After Z-score normalization on the RGB channels, the augmented images were center-cropped to 224 × 224 pixels. This simplified procedure is shown in Fig. 1.
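A sketch of such a preprocessing pipeline in PyTorch/torchvision follows; the affine parameters and normalization statistics are illustrative assumptions, since the paper does not report them:

from torchvision import transforms

# Illustrative statistics; the paper does not report its mean/std values
MEAN, STD = [0.485, 0.456, 0.406], [0.229, 0.224, 0.225]

train_tf = transforms.Compose([
    transforms.RandomAffine(degrees=10, translate=(0.05, 0.05)),  # assumed params
    transforms.RandomHorizontalFlip(),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(MEAN, STD),  # Z-score normalization per RGB channel
])

val_tf = transforms.Compose([  # validation: normalization only, no augmentation
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(MEAN, STD),
])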
DL: Feature Extraction and Screening
The videos for the retrospective SMILE cohort were first cropped into scanning images, which were used to train a DL Resnet50 model. We used a batch size of 32 and default weight initialization. The default optimizer was SGD with a learning rate of 10^-2 and L2 regularization of 10^-5. We trained the Resnet50 model for 50 epochs, until the validation loss failed to improve. During image prediction, the Resnet50 model was used to compute each scanning image's probability of carrying the video label. Because each video consisted of numerous scanning images, we aggregated the image-level probabilities into a probability map of the video, from which features were calculated based on the image likelihood histogram. We then conducted principal component analysis (PCA) to compress the likelihood histogram into 24 DL features. Pearson correlation analysis was first used to eliminate redundant DL features: if the coefficient between two features was > 0.9, one of the two features was deleted. After that, LASSO-penalized feature selection was used to identify the most significant features. Then, seven classic machine learning classifiers (Decision-Tree, Extra-Tree, KNN, LightGBM, Random-Forest, SVM, and XGBoost) combined with tenfold cross-validation were applied to train models for predicting each video's classification [13][14][15]. Two independent prospective SMILE cohorts were processed in the same way to establish the accuracy and robustness of the model in clinical application.
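The screening pipeline described above could look roughly like the following scikit-learn sketch; the histogram dimensionality and the placeholder data are assumptions, and only one of the seven classifiers (XGBoost) is shown.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LassoCV
from sklearn.model_selection import cross_val_score
from xgboost import XGBClassifier

rng = np.random.default_rng(0)
hist = rng.random((216, 50))     # per-video likelihood histograms (placeholder)
y = rng.integers(0, 2, 216)      # video labels: 0 = good, 1 = poor outcome

feats = PCA(n_components=24).fit_transform(hist)   # compress into 24 DL features

# Pearson redundancy filter: drop one of any pair with |r| > 0.9
corr = np.corrcoef(feats, rowvar=False)
keep = [i for i in range(corr.shape[0])
        if not any(abs(corr[i, j]) > 0.9 for j in range(i))]
feats = feats[:, keep]

# LASSO-penalized selection of the most significant features
mask = LassoCV(cv=10).fit(feats, y).coef_ != 0
selected = feats[:, mask] if mask.any() else feats

# One of the seven classifiers (XGBoost), tenfold cross-validated
auc = cross_val_score(XGBClassifier(eval_metric="logloss"),
                      selected, y, cv=10, scoring="roc_auc")
print(auc.mean())
```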
Statistical Analysis
All statistical analyses were performed using R (v 4.0.3) or Python (v 3.8.0) with installed packages. PyTorch (v 1.10.1) in Python was used to implement all DL frameworks. The machine learning algorithms were run using Python's "sklearn" package. Receiver operating characteristic (ROC) curves and area under the curve (AUC) values were generated using the "pROC" package in R. Visual acuity was analyzed in logarithm of the minimum angle of resolution (logMAR) units. Continuous variables were described using the mean ± standard deviation (SD) or median with interquartile range (IQR), and categorical variables were described using frequencies. Correlations were evaluated using Pearson coefficients. The Wilcoxon test was used to compare two groups, while the Kruskal-Wallis test was used to compare more than two groups. The Chi-square test was performed to evaluate associations between cohorts and clinicopathological traits.
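For illustration, the group comparisons described above can be reproduced in Python with scipy; the arrays below are placeholders, not study data.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
good = rng.normal(0.0, 0.1, 100)   # placeholder logMAR values, good-outcome eyes
poor = rng.normal(0.1, 0.1, 40)    # placeholder logMAR values, poor-outcome eyes

print(stats.ranksums(good, poor))              # Wilcoxon rank-sum: two groups
print(stats.kruskal(good, poor, poor + 0.05))  # Kruskal-Wallis: >2 groups

chi2, p, dof, expected = stats.chi2_contingency([[30, 10], [20, 40]])
print(chi2, p)                                 # chi-square for categorical traits
```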
Study Cohorts
A total of 216 eyes from 113 patients (84 male and 137 female) met our selection criteria and were included in the retrospective SMILE cohort. The median age of the subjects was 27.5 (IQR 22-32) years. The preoperative sphere was -5.00 D (IQR -6.00 to -4.00 D) and the cylinder was -0.50 D (IQR -1.00 to -0.25 D). Preoperative corneal thickness was 539 µm (IQR 519-554 µm), and keratometry 1 (K1) power was 42.85 D (IQR 42.00-43.58 D). The two prospective SMILE cohorts consisted of 48 eyes from 24 individuals (19 male and 29 female) and 54 eyes from 27 individuals (30 male and 24 female), respectively. The baseline characteristics of all patients according to cohort are given in Table 1.
Performance of Scanning Image Classifier
The scanning image classifier was constructed on the training dataset and validated on the validation dataset of the retrospective SMILE cohort (training:validation ratio 7:3) (Table 2). Construction of this image classifier consisted of two steps: image prediction and video prediction. To summarize, each SMILE procedure video was first limited to the span from the completed posterior plane scanning to side cutting, after which the video was cropped into scanning images that were fed into a DL model (Resnet50) to predict postoperative visual acuity status at the image level. Second, a histogram of scanning image probabilities was used to merge the many image-level predictions into a probability matrix for the video. To unify the scanning features of the SMILE video, we performed PCA to compress the probability matrix into 24 DL features. Finally, based on the DL features, we used various machine learning methods to predict the patient's postoperative visual acuity.
The performance of the scanning image classifier was evaluated using the validation dataset of the retrospective SMILE cohort. We found that, with an increasing number of training iterations, the training accuracy converged near 90% within the first 4000 iterations (Fig. 2a). The confusion matrix illustrated that the Resnet50 model achieved a high accuracy of 96%. We discovered that the LASSO model had the lowest mean squared error (MSE) when the penalization lambda was 0.039 (Fig. 3a). There were six DL features with coefficients greater than zero based on the lambda criterion (Fig. 3b). Thus, the LASSO-penalized model revealed six DL features, whose relative weights are shown in Fig. 3c. The six DL features were then transferred to seven machine learning models and evaluated using tenfold cross-validation. Based on the AUC distribution of these seven machine learning models, the SVM, XGBoost, and LightGBM methods had the highest AUC values (Fig. 3d). The accuracy distribution further indicated that XGBoost achieved the best accuracy in the training and test datasets among the seven machine learning methods (Fig. 3e). Therefore, we selected the XGBoost model for video prediction. The predicted probabilities for samples in the training and test datasets are shown in Fig. 4a and Fig. 4c. The AUC value of the XGBoost model for the test dataset was 0.889 (95% CI 0.667-0.889) (Fig. 4d). The AUC values (1.000 and 0.984) in the two prospective SMILE cohorts (Fig. 4e, f) suggest that our XGBoost model also performed very well in predicting outcomes in the prospective cohorts.
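A hedged sketch of the video-level aggregation step is given below: image-level probabilities from the Resnet50 are pooled into a likelihood histogram that serves as the video feature. The bin count is an assumption.

```python
import numpy as np

def video_histogram(image_probs, n_bins=20):
    """Aggregate the per-image 'poor outcome' probabilities of one video
    into a normalized likelihood histogram (the video-level feature)."""
    hist, _ = np.histogram(image_probs, bins=n_bins, range=(0.0, 1.0))
    return hist / max(hist.sum(), 1)

# e.g. ~30-34 scanning images per video, each with a model probability
probs = np.random.rand(32)
features = video_histogram(probs)
```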
Interpretation of Scanning Images
Gradient-weighted Class Activation Mapping (Grad-CAM) can calculate the key regions underlying the model's predictions on the SMILE scanning images and was therefore used to visualize the heatmap of the model's final convolutional layer, overlaid on the original images [16]. The red section leading inward to the blue section is active, indicating that the model paid special attention to this region. We observed that our model can precisely concentrate on the scanned areas (Fig. 5). Regarding intraoperative complications, the model mainly focused on the edge of the femtosecond laser scanning for the opaque bubble layer (OBL), whereas for black spots (BS) our model paid particular attention to the central area of the femtosecond laser scanning (Fig. 6). Furthermore, we used Image-J software to calculate the percentage of BS area within the femtosecond laser scanning. The BS regions of interest are visualized in Fig. 6b. The Wilcoxon test determined that the proportion of BS area in the poor visual group was larger than that in the good visual group (Fig. 6b).
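The following is an illustrative, self-contained Grad-CAM implementation of the kind used for these heatmaps, hooking the final convolutional block of a ResNet50; it is a sketch, not the authors' exact code.

```python
import torch
import torch.nn.functional as F
from torchvision.models import resnet50

model = resnet50(num_classes=2).eval()   # 2 classes: good vs. poor outcome
acts = {}

def save_activation(module, inp, out):
    out.retain_grad()        # keep gradients of this non-leaf activation
    acts["a"] = out

model.layer4[-1].register_forward_hook(save_activation)  # last conv block

x = torch.randn(1, 3, 224, 224)          # placeholder scanning image
model(x)[0, 1].backward()                # backprop the 'poor outcome' logit

a = acts["a"]                                    # (1, 2048, 7, 7) feature maps
w = a.grad.mean(dim=(2, 3), keepdim=True)        # channel importance weights
cam = F.relu((w * a).sum(dim=1, keepdim=True))   # weighted activation map
cam = F.interpolate(cam, size=(224, 224), mode="bilinear", align_corners=False)
cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)  # 0-1 for overlay
```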
Distinguishing SMILE Intraoperative Complications
The intraoperative complications that occurred during SMILE surgery included suction loss (LS), OBL, and BS. We first collected intraoperative images associated with LS (n = 120), OBL (n = 90), BS (n = 150), and normal scanning (Norm: n = 150). Subsequently, we randomly split these images into training and test sets at a 7:3 ratio and trained a Resnet50 model for 50 epochs. Details on the assessment parameters (accuracy, AUC, sensitivity, specificity, positive predictive value, negative predictive value, precision, recall) are listed in ESM File 3. Grad-CAM was applied to visualize the heatmaps, which were superimposed on images from the four categories (Fig. 6c). The confusion matrix heatmap illustrated that our model was able to accurately distinguish LS (100%), OBL (96%), BS (85%), and Norm (97%) (Fig. 6c).
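For reference, the listed per-class assessment parameters can all be derived from the multi-class confusion matrix, as in the sketch below (counts are placeholders, not the study's data).

```python
import numpy as np

def per_class_metrics(cm):
    """cm[i, j] = count of true class i predicted as class j."""
    out = {}
    total = cm.sum()
    for k in range(cm.shape[0]):
        tp = cm[k, k]
        fn = cm[k].sum() - tp
        fp = cm[:, k].sum() - tp
        tn = total - tp - fn - fp
        out[k] = dict(
            sensitivity=tp / (tp + fn),   # recall
            specificity=tn / (tn + fp),
            ppv=tp / (tp + fp),           # precision
            npv=tn / (tn + fn),
            accuracy=(tp + tn) / total,
        )
    return out

cm = np.array([[36, 0, 0, 0],     # LS
               [0, 26, 1, 0],     # OBL
               [1, 2, 38, 4],     # BS
               [0, 0, 1, 44]])    # Norm
print(per_class_metrics(cm))
```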
DISCUSSION
Following the encouraging outcomes of several prospective trials on SMILE surgery and recent publications revealing that the visual and refractive corrections achieved with SMILE are comparable to those achieved with femtosecond laser-assisted in situ keratomileusis (FS-LASIK), SMILE refractive surgery has grown in popularity [6,17]. SMILE surgery also achieves better biomechanical strength and stability than FS-LASIK [18,19]. However, a small proportion of patients who have undergone SMILE surgery experience delayed postoperative recovery of visual acuity. This delay is particularly striking in patients whose two eyes had similar refractive errors and underwent the same SMILE procedure, yet whose postoperative visual acuity recovered at completely different rates. To our knowledge, the present study is the first to develop a DL model for predicting postoperative visual acuity after SMILE. The results of this work reveal that the developed DL model can classify images and videos with high accuracy. It can be readily applied in clinical practice, allowing any surgeon with a femtosecond laser scanning image or video to obtain an estimated prognosis for each patient. The formation of the intrastromal lenticule is essential for the safety and predictability of the SMILE procedure [20]. Because the lenticule is created exclusively by the femtosecond laser, femtosecond laser-related intraoperative problems, such as BS and OBL, are unavoidable. BS are defined as scattered small black dots in the stroma where photodisruption following the femtosecond laser is complete; in contrast, black areas or black islands appear as patchy or strip-like formations. Due to blockage by debris at the interface, such as foreign bodies, meibomian secretions, and conjunctival mucus, black areas or black islands generally form at both the anterior and posterior lenticule surfaces where photodisruption is incomplete [21][22][23]. In our work, we observed that BS occurred in almost all SMILE videos, mainly in the posterior lenticule. However, the incidence of black areas in our study eyes was 2.7% (ESM File 1). The incidence of BS determined in our study differs greatly from that reported in earlier studies, but the incidence of black areas is consistent with earlier reports, ranging from 0.33% to 11% [9,24,25]. Therefore, we believe that the terms 'black spots,' 'black areas,' and 'black islands' may not have been used consistently in previous publications [26,27].
Using the DL model, we can easily assess the quality of femtosecond laser images and predict postoperative visual acuity at 24 h. To visualize the attention mechanisms of the DL model, we used the Grad-CAM heatmap to calculate the key regions of the SMILE scanning images. The heatmaps indicated that our model pays special attention to the region of femtosecond laser scanning (Fig. 5) and, in particular, to BS within the active region (blue color). Therefore, we calculated the percentage of area occupied by black spots using Image-J software and found that the size of BS ranged from 20 to 100 pixels. Moreover, the area percentage of BS in the poor visual acuity group (3.49 ± 1.08%) was larger than that in the good visual acuity group (1.23 ± 0.68%). None of the BS in our study were associated with difficulties in intrastromal lenticule separation and extraction, nor did they impair visual outcomes. Hence, we consider that BS are tissue bridges between laser spots. This interpretation was indirectly confirmed by Lin et al., who evaluated black areas at four low laser energy levels for SMILE surgery and found that the lowest energy used produced the largest area of BS [24]. The surface quality of scanning may be mainly determined by two factors: (1) the spacing of the laser spots and the energy delivered per pulse; and (2) the tissue reaction to photodisruption [28,29]. The surface becomes smoother when the laser spot spacing and pulse energy are appropriately balanced. We also found that the poor scanning group had a higher incidence of OBL (12.6%) than the good scanning group (8.0%). The OBL all occurred at the periphery. Although OBL at the edge of the lenticule does not influence the final visual outcome, its presence may make it difficult to separate and extract the lenticule, cause transient corneal edema, and delay early recovery of postoperative visual acuity [30][31][32][33]. The Grad-CAM heatmap suggested that our DL model can accurately recognize the position of OBL and make correct predictions (Fig. 6a). The good performance of our model in distinguishing OBL and BS led us to ask whether a Resnet50 model could be trained to identify SMILE intraoperative complications. The results shown here indicate that the Resnet50 DL model also has high accuracy in distinguishing intraoperative complications. Although SMILE shows promising performance for the correction of myopia and myopic astigmatism, intraoperative problems are unavoidable. Our DL model can therefore promptly and correctly alert the surgeon to appropriate management strategies for intraoperative complications during the operation. The Grad-CAM heatmap also revealed various features to watch for regarding these intraoperative complications, which may be of great importance for the application of artificial intelligence in refractive surgery.
The main limitations of our study are the small sample size and short follow-up period. In addition, this study utilized only SMILE videos and did not draw on other sources of information, such as clinical records and physical examinations. As a result, we were unable to perform a multi-dimensional evaluation, which limits the accuracy of the results in real-world settings. Future research should therefore focus on expanding the cohort size, prolonging the follow-up duration, and including multimodal data to increase the accuracy of DL models.
CONCLUSIONS
Overall, we created a DL model for predicting early visual outcomes based on SMILE scanning images. Using these images, it is now feasible to distinguish intraoperative complications of the SMILE procedure more readily than previously reported.
Data Availability. The datasets used in the current study are available from the corresponding author on reasonable request.
Open Access. This article is licensed under a Creative Commons Attribution-Non-Commercial 4.0 International License, which permits any non-commercial use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http:// creativecommons.org/licenses/by-nc/4.0/. | 2023-02-25T06:16:25.017Z | 2023-02-24T00:00:00.000 | {
"year": 2023,
"sha1": "a2309bb608855aacfece87d82eaa5fe597dbdc7d",
"oa_license": "CCBYNC",
"oa_url": "https://link.springer.com/content/pdf/10.1007/s40123-023-00680-6.pdf",
"oa_status": "GOLD",
"pdf_src": "Springer",
"pdf_hash": "3f7784e8a0274cc898513742d25da24751933e12",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
259498378 | pes2o/s2orc | v3-fos-license | Intracranial Pressure Monitoring for Acute Brain Injured Patients: When, How, What Should We Monitor
While there is no level I recommendation for intracranial pressure (ICP) monitoring, it is typically indicated for patients with severe traumatic brain injury (TBI) with a Glasgow Coma Scale (GCS) score of 3–8 (class II). Even for moderate TBI patients with GCS 9–12, ICP monitoring should be considered when there is a risk of increased ICP. The impact of ICP monitoring on patient outcomes is still not well established, but recent studies have reported a reduction of early mortality (class III) in TBI patients. There is no standard protocol for the application of ICP monitoring. In cases where cerebrospinal fluid drainage is required, an external ventricular drain is commonly used. In other cases, parenchymal ICP monitoring devices are generally employed. Subdural and non-invasive devices are not suitable for ICP monitoring. The mean value of ICP is the parameter recommended for observation in many guidelines. In TBI, values above 22 mmHg are associated with increased mortality. However, recent studies have proposed various parameters, including cumulative time with ICP above 20 mmHg (pressure-time dose), the pressure reactivity index, ICP waveform characteristics (pulse amplitude of ICP, mean ICP wave amplitude), and the compensatory reserve of the brain (reserve-amplitude-pressure), which are useful in predicting patient outcomes and guiding treatment. Further research is required to validate these parameters against simple ICP monitoring.
INTRODUCTION
Based on recent research and expert consensus, measuring and regulating intracranial pressure (ICP) is understood to be a critical process for minimizing secondary brain injury and a key component of neurocritical care monitoring. While it is commonly accepted to evaluate ICP and monitor the response to treatment much as blood pressure is monitored, clear indications for ICP monitoring are suggested at the guideline level only for conditions such as traumatic brain injury (TBI). For other severe acute brain injury conditions, there may be conflicting recommendations or no recommendation for ICP monitoring at all. This is due to the lack of higher-level evidence demonstrating that ICP monitoring leads to significant improvement in outcomes, reflecting uncertainties regarding its utility.
Furthermore, there are several issues regarding the application of ICP monitoring, such as which device is suitable for implementation (e.g., external ventricular drain [EVD] vs. intraparenchymal vs. other types), the appropriate location of the sensor in cases of intraparenchymal monitoring (IPM), the threshold for ICP, the parameters to be observed (whether the mean value of ICP or other values), and the definition of normal ranges. These issues require further discussion and research in the field.
It is important to note that the field of ICP monitoring is still evolving, and the current understanding and recommendations may continue to change as new evidence and consensus emerge. In this review, the indications, methods, and major indicators of ICP monitoring are summarized and discussed.
INDICATION AND CLINICAL EFFECTIVENESS OF ICP MONITORING

TBI
According to the Brain Trauma Foundation third edition guidelines published in 2007, ICP monitoring was recommended for all salvageable patients with a Glasgow Coma Scale (GCS) score between 3 and 8 who exhibited abnormal computed tomography (CT) findings, including hematoma, contusion, swelling, herniation, and compressed basal cisterns (TABLE 1). Even in the absence of these abnormalities, ICP monitoring was recommended for patients with two or more of the following features: age of 40 years or older, unilateral or bilateral abnormal posturing (decerebrate or decorticate), or systolic blood pressure less than 90 mmHg. 5) However, the Brain Trauma Foundation fourth edition in 2016 removed these recommendations and simply provided the evidence that ICP monitoring reduces in-hospital and 2-week post-injury mortality, evaluated as level IIB. 1) This level of evidence for ICP monitoring was based on the BEST:TRIP trial (A Trial of Intracranial-Pressure Monitoring in Traumatic Brain Injury) 8) and 4 cohort studies. 3,15,18,45) The BEST:TRIP trial, the only randomized clinical trial (RCT) among them, reported no difference in mortality or Glasgow Outcome Scale (GOS)-Extended at 6 months between the pressure-monitoring group and the imaging-clinical examination group. 8) However, these results were inconsistent with the remaining good-quality cohort studies; 3,15,18,45) moreover, when the specific patient population and medical conditions of the South American region where the trial was conducted are taken into account, the applicability of the results is limited. It was therefore considered that subsequent RCTs could potentially reverse the results of the BEST:TRIP trial.
While there is still controversy regarding whether ICP monitoring itself improves outcomes in TBI patients, there is consensus on the need to manage ICP appropriately in these patients. The International Multidisciplinary Consensus Conference on Multimodality Monitoring in Neurocritical Care in 2014 strongly recommended ICP monitoring and protocol-based treatment when there is a perceived clinical or radiological risk of ICP elevation. 26) In addition, the World Society of Emergency Surgery conference in 2019 strongly recommended ICP monitoring in cases of intracranial hypertension, regardless of the need for surgical intervention. 34) The Seattle International Severe Traumatic Brain Injury Consensus Conference in 2019 presented maintaining a cerebral perfusion pressure (CPP) of at least 60 mmHg as part of basic treatment. 22) In fact, a survey of 66 centers included in the 2017 CENTER-TBI registry revealed that 58 institutions (91%) performed ICP monitoring in cases where GCS was 8 or below and abnormal CT findings were present. 9) The BEST:TRIP trial 8) divided patients into groups based solely on the presence of ICP monitoring, and the imaging-clinical examination group received more active treatments such as hypertonic saline, mannitol, and hyperventilation than the pressure-monitoring group. This indicates that it was ICP control itself, rather than monitoring alone, that should be considered when interpreting the trial results. The four cohort studies 3,15,18,45) that formed the basis for ICP monitoring in the Brain Trauma Foundation guidelines comprised three retrospective studies with 10,628, 2,347, and 1,304 participants and one prospective observational study with 216 participants. Collectively, these studies reported that ICP monitoring itself significantly reduced in-hospital and 2-week mortality.
Recently, with the availability of continuous ICP measurement using high-resolution ICP monitoring, a parameter known as the pressure-time dose (PTD) has been used to measure the burden of increased ICP (IICP). Higher PTD values have been reported to be associated with worse functional outcomes and survival. 48) Additionally, analyzing the trend and waveform of ICP and applying them to treatment is expected to yield further significant results.
In conclusion, despite the negative outcome of RCTs on ICP monitoring in severe TBI patients, there is a consensus based on the limitations of the studies and the results of good-quality cohort studies that ICP monitoring is necessary. Furthermore, it is important to analyze the impact of ICP-guided treatment on patient outcomes and to conduct research using high-resolution ICP monitoring.
Spontaneous subarachnoid hemorrhage (SAH)
In cases of SAH, elevated ICP occurs frequently, particularly in patients with a higher Hunt and Hess grade or World Federation of Neurosurgical Societies grade: 54%-81% of patients experienced episodes of ICP exceeding 20 mmHg. While there is consensus on the need for ICP monitoring in severe TBI, the consensus for SAH is not as strong. In 2014, the Neurocritical Care Society conducted a survey on ICP monitoring in non-TBI patients, and it was agreed that ICP monitoring should be considered in SAH patients at risk of elevated ICP, especially in cases with a high likelihood of hydrocephalus, intraventricular hemorrhage (IVH), or poor-grade SAH. 33) Poor-grade SAH patients who are deeply sedated or have severe initial brain injury with decreased consciousness may benefit from early detection of hydrocephalus or delayed cerebral ischemia by ICP monitoring as part of multimodal monitoring (TABLE 1). 12) However, the evidence regarding the impact of ICP monitoring on outcomes and mortality in SAH patients is limited, and further research is required.
Spontaneous intracerebral hemorrhage (ICH)
Intracranial hypertension (IICP) is prevalent in ICH patients. A meta-analysis showed that 67% of patients experienced IICP events with ICP exceeding 20 mmHg, which is closely linked to mortality. The 2022 American Heart Association/American Stroke Association guidelines recommend ICP monitoring in ICH patients with a GCS score of 8 or less. 21) Consensus on when ICP monitoring is necessary in ICH patients is lacking, but it is suggested that it should be considered in cases of obstructive hydrocephalus and concomitant IVH, in addition to serving the purpose of cerebrospinal fluid (CSF) drainage. Studies on clinical effectiveness have shown mixed results. Some studies showed no significant differences in mortality or functional outcomes between ICP monitoring and non-ICP monitoring groups, but noted higher infection rates and increased use of aggressive treatments in the ICP monitoring group. 7,19,38) The MISTIE trial reported higher rates of poor functional outcomes and higher mortality in the ICP monitoring group. A recent study reported better functional outcomes and lower mortality with ICP monitoring, particularly in patients with GCS scores of 9-12 (TABLE 1). 38) In conclusion, while IICP occurs in ICH patients, consensus on the necessity and indications for ICP monitoring is still lacking. Further research, particularly regarding long-term outcomes, is needed.
TYPE OF ICP MONITORING
The Brain Trauma Foundation's 4th edition guidelines for the management of severe TBI discussed the necessity and indications for ICP monitoring, but made no specific recommendation regarding the type of monitoring device. The guidelines acknowledge that the choice of monitoring device should be based on the clinician's experience and judgment; the decision on which specific monitor to use is left to the discretion of the treating physician, considering factors such as the patient's individual characteristics, clinical presentation, and available resources. This highlights the importance of clinical expertise and personalized decision-making in determining the appropriate monitoring approach for TBI patients.

IVM

Lundberg 30) introduced the earliest form of ICP monitoring, intraventricular monitoring (IVM), which remains the gold standard to the present day. The reference point for the transducer is the foramen of Monro, which closely corresponds to the external auditory meatus, making it clinically convenient to use as a reference. Insertion is commonly performed through the right Kocher's point, but the specific approach may vary based on clinical judgment considering the brain pathology. This is a cost-effective type of monitoring, and it measures true ICP as a global CSF pressure. It allows recalibration from external sources even after initial insertion. One advantage is the ability to control ICP through therapeutic CSF drainage, which can affect patient outcomes. Additionally, it facilitates drainage of IVH and enables the administration of therapeutic agents. However, compared to other types of ICP monitors, this method carries a higher risk of complications such as bleeding and infection. Infection rates ranging from 0.7% to 2.5% have been reported in meta-analyses, and some studies have reported rates as high as 27% in specific cases. 23,29,39,47) Bleeding is also a major complication, but bleeding with significant impact on morbidity and mortality is uncommon, ranging from 0.9% to 1.2%. 36) Other drawbacks of this method include misplacement, twisting, obstruction due to clots or protein, and the impact of transducer position on accuracy. Considering that ICP measurement with this technique is performed within the ventricle, factors related to ventricular compliance should also be taken into account. Therefore, accurate measurement may be challenging in pediatric patients or cases of SAH, and difficulties may arise during the procedure when severe brain edema leads to ventricular collapse.
IPM
IPM is currently used worldwide, taking the various characteristics of brain injuries into consideration. The probe is typically inserted into the white matter of the non-dominant frontal hemisphere and provides a local ICP measurement. However, as significant pressure differences between the ipsilateral and contralateral sides can be present, the overall CSF pressure can be over- or underestimated by IPM. 40) Accuracy is the biggest drawback of IPM: it does not reflect the overall CSF pressure precisely, and zero drift is a possible issue in situations where recalibration is not available.
Studies reported that IPM devices such as the Camino or Codman showed zero drift of less than 0.8 mmHg over 24 hours, but a drift of approximately 0.6±0.99 mmHg was observed with longer use of 5 days. 10,35) There are various types of IPM, including fiber optic (Camino), strain gauge microtransducer (Codman), pneumatic strain gauge (Spiegelberg), and the Neurovent-P ICP monitor. Fiber optic devices operate by sending light to a small displaceable mirror and measuring the distortion of the mirror caused by changes in ICP. Compared to other IPM types, they are relatively expensive but have a lower risk of infection and hemorrhage; however, malfunction or failure of the fiber optic component remains possible. 35) Another type is the strain gauge microtransducer, which is composed of two semiconductor strain gauges attached to a thin diaphragm at the tip of the catheter. This method provides relatively accurate measurements and allows CSF drainage when connected to an EVD. The small size of the catheter makes it suitable for pediatric patients or for various anatomical sites in the brain. 24) A further type is based on pneumatic strain gauge technology, employing a balloon-tipped catheter system; it is cost-effective and accurate, and also allows simultaneous CSF drainage through the monitor tip. The Neurovent-P ICP monitor measures ICP via an electronic chip surrounded by a thin silicone membrane at the catheter tip. This method measures ICP, brain tissue oxygen partial pressure, and temperature simultaneously, but clinical data are still lacking.
IVM vs. IPM
IVM has a higher procedural difficulty, a relatively higher risk of infection, and measurement uncertainty caused by ventricle shape or compliance compared to IPM. One of the significant advantages of IVM is the ability to perform CSF drainage. According to Liu et al., 27) IVM shows lower mortality, more favorable 6-month GOS, and less refractory intracranial hypertension compared to IPM, suggesting a benefit from its CSF drainage capability. Therefore, IVM is more commonly used in conditions such as SAH or ICH, which frequently require therapeutic CSF drainage, than in TBI. A report by Robba et al. 39) based on statistical analysis of 146 intensive care units in 42 countries showed that IPM was more commonly used for TBI (73%), while IVM was frequently used for SAH and ICH cases (54%) (TABLE 2).
Other invasive monitoring type
Various attempts have been made to minimize brain tissue damage by inserting catheters into other locations, such as the subdural or epidural space, to measure pressure. However, most of these approaches have low accuracy. Lumbar drains, inserted via a catheter in the lumbar region, have also been used for ICP monitoring; however, there are issues with their accuracy and safety.
Non-invasive monitor
As invasive monitoring, including IVM and IPM, carries risks of bleeding and infection, non-invasive methods have attracted attention. Several studies have reported different types of non-invasive methods based on transcranial Doppler (TCD) sonography, near-infrared spectroscopy, tympanic membrane displacement (TMD), and optic nerve sheath diameter (ONSD) measurements. 16,17,32,41,42,46,49) However, no method has yet proven its utility in terms of accuracy and practicality.
TCD
Developed by Klingelhofer in 1987, TCD measures blood flow velocity in the middle cerebral artery, indirectly assessing brain compliance; ICP is estimated from secondary parameters such as peak systolic velocity, mean flow velocity, end-diastolic velocity, and the pulsatility index. 41) However, the accuracy of ICP calculation by TCD is limited, with errors of up to 10 mmHg compared to invasive ICP measurements. Furthermore, TCD cannot predict intracranial hypertension in all cases, which limits its clinical usefulness. 49)
ONSD
Since increased ICP is transmitted to the optic nerve through the CSF pressure of the subarachnoid space, measuring the ONSD is an indirect method of estimating ICP. It allows real-time assessment of intracranial compliance. Reported data demonstrate that detection of intracranial hypertension by ONSD has 90% sensitivity and 85% specificity. 17)
TMD
Based on the principle that ICP is transmitted to cochlear fluid pressure, which affects stapedial excursion, TMD allows the detection of transient changes in ICP when measured continuously. However, accurate measurement of the absolute ICP value remains challenging, and limitations include the requirement for a normal stapedial reflex, middle ear pressure, and cochlear aqueduct. 32,42)
Pupillometry
Pupillometry enables quantitative measurement of changes in the pupillary light reflex. High ICP has been found to be related to the pupillary constriction velocity, and a 10% change in pupil size has been linked to intracranial hypertension. However, continuous ICP monitoring is challenging, and application is difficult when measuring the patient's pupils is not feasible, for example due to ocular trauma. 16,46)
PARAMETERS OF ICP
As mentioned previously, numerous studies have reported that ICP exceeding a certain value leads to worse patient outcomes. 25,43) Thus, many treatment efforts have been directed at lowering ICP below this threshold value. However, there is controversy as to whether solely measuring the mean value of ICP and striving to keep it below a single value is comprehensive enough. Considering this, it is worthwhile to explore the different variables that can be obtained through ICP monitoring beyond the mean value (TABLE 3).
Pressure reactivity index (PRx)
The PRx is a physiological parameter used in the management of TBI to assess cerebrovascular reactivity. It quantifies the brain's ability to regulate cerebral blood flow in response to changes in ICP by analyzing the correlation between the ICP and arterial blood pressure (ABP) waveforms. A positive PRx indicates impaired cerebrovascular reactivity, while a negative value suggests intact autoregulation. 13) PRx monitoring provides real-time information about cerebral autoregulation and helps guide treatment decisions. Elevated PRx values indicate dysfunctional autoregulation and reflect a poor prognosis, while negative or low values suggest intact autoregulation and are related to better outcomes.
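As an illustration, PRx is commonly computed as a moving Pearson correlation between 10-second averages of ICP and ABP over roughly 5-minute windows; the sketch below follows that convention, though exact window settings vary between centers.

```python
import numpy as np

def prx(icp, abp, fs=100, avg_sec=10, window=30):
    """icp, abp: 1-D arrays sampled at fs Hz. Returns one PRx per window step."""
    n = int(fs * avg_sec)
    m = min(len(icp), len(abp)) // n
    icp_avg = icp[: m * n].reshape(m, n).mean(axis=1)  # 10-second means
    abp_avg = abp[: m * n].reshape(m, n).mean(axis=1)
    return np.array([
        np.corrcoef(icp_avg[i : i + window], abp_avg[i : i + window])[0, 1]
        for i in range(m - window + 1)
    ])
```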
Studies have shown PRx to be a potential prognostic marker and demonstrated its association with functional outcomes related to TBI severity. 28,44) Steiner et al. 44) evaluated PRx in TBI patients and reported that high values were associated with poor clinical outcomes and increased mortality. Liu et al. 28) investigated the association between PRx and cerebral blood flow and found worse functional outcomes with higher PRx and impaired pressure reactivity. In summary, PRx is a valuable tool for TBI management, providing insights into cerebrovascular reactivity and helping optimize cerebral perfusion. Clinicians can use PRx to optimize CPP and prevent secondary brain injury.
PTD
The PTD is a concept employed in neurocritical care to quantify the cumulative exposure of the brain to elevated ICP over a specific duration. It is determined by assessing the duration and intensity of ICP exceeding a defined threshold, typically set at 20 mmHg. The PTD provides a comprehensive measure of the brain's capacity to withstand increased ICP by considering both the pressure level and the duration of exposure. A study conducted by Vik et al. 48) in 2008 demonstrated a stronger correlation between the cumulative dose of ICP, calculated based on the duration for which ICP surpasses 20 mmHg, the Marshall CT score, and clinical outcome. The study proposed that the area under the ICP curve serves as a more valuable tool in managing TBI; this discovery led to the development of the PTD concept. Subsequent research by Åkerlund et al. 2) utilizing the CENTER-TBI dataset found a correlation between PTD and patient mortality. Similar findings have indicated that higher PTD values are associated with increased mortality and unfavorable outcomes, not only in TBI patients but also in other populations with acute brain injuries. 31) These findings suggest the potential applicability of PTD in various acute brain injury populations.

TABLE 3. Parameters of ICP
- PTD: the duration and intensity of ICP exceeding 20 mmHg; higher PTD values are associated with increased mortality and unfavorable outcomes.
- AMP/MWA: AMP is the pulse amplitude, and MWA is the average AMP over a 6-second time window; AMP has shown a statistically significant association with cerebral autoregulation.
- RAP index: RAP is the correlation between mean ICP and the amplitude of the ICP waveform.
- wICP: compensatory reserve-weighted ICP, wICP = (1−RAP)×ICP; wICP may be a more effective predictor of outcomes.
ICP: intracranial pressure, TBI: traumatic brain injury, PRx: pressure reactivity index, CPP: cerebral perfusion pressure, PTD: pressure time dose, AMP: pulse amplitude of intracranial pressure, MWA: mean intracranial pressure wave amplitude, RAP: reserve-amplitude-pressure, wICP: weighted intracranial pressure.
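A minimal computation of PTD consistent with this definition is sketched below; the units (mmHg·h) and the toy data are assumptions for illustration.

```python
import numpy as np

def pressure_time_dose(icp, dt_sec, threshold=20.0):
    """icp: 1-D array of ICP samples (mmHg); dt_sec: sampling interval (s)."""
    excess = np.clip(icp - threshold, 0.0, None)   # only the burden above threshold
    return excess.sum() * dt_sec / 3600.0          # integrate over time -> mmHg * h

icp = np.array([18, 22, 25, 30, 24, 19], dtype=float)  # toy minute-by-minute data
print(pressure_time_dose(icp, dt_sec=60))
```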
Pulse amplitude of ICP (AMP)/mean ICP wave amplitude (MWA)
AMP and MWA both involve measuring the pulse amplitude from the ICP waveform, but they take different approaches: AMP is derived from the amplitude of the ICP waveform itself, while MWA measures the pulse amplitude over fixed time windows of the ICP waveform. Previous studies have shown a strong correlation between AMP and MWA values (p<0.001), indicating that they can be examined together. 20) According to a study by Radolovich et al., 37) AMP has a statistically significant association with cerebral autoregulation in TBI patients, suggesting a beneficial role in the treatment of TBI patients. Additionally, Eide et al. 14) reported that SAH patients treated based on both mean ICP and MWA values showed significantly better functional outcomes after 12 months compared to those treated based on mean ICP values alone.
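A simplified sketch of MWA is shown below: the average peak-to-trough amplitude within consecutive 6-second windows. Real implementations detect individual cardiac pulses; the per-window max-minus-min used here is a simplification.

```python
import numpy as np

def mwa(icp, fs=100, window_sec=6):
    """Mean ICP wave amplitude over consecutive 6-second windows (simplified)."""
    n = int(fs * window_sec)
    m = len(icp) // n
    windows = icp[: m * n].reshape(m, n)
    return (windows.max(axis=1) - windows.min(axis=1)).mean()
```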
Correlation coefficient (R) between AMP (A) and mean pressure (P) (RAP) index
The RAP index is a coefficient that reflects the correlation between mean ICP and the amplitude of the ICP waveform over a short period of time. An index close to 0 indicates a state in which ICP can increase while effective pressure-volume compensation is maintained. On the other hand, a RAP value approaching +1 suggests that minimal volume changes generate significant pressure changes. As ICP continues to increase, the amplitude (AMP) decreases, and in such cases the RAP index approaches a negative value close to −1. 11) It is important to note that the RAP index is a relatively new concept, and further research is required to validate its clinical utility and establish its significance in predicting patient outcomes or guiding management decisions.
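The sketch below computes the RAP index from windowed mean ICP and pulse amplitude values, together with the compensatory reserve-weighted ICP defined in the next paragraph as wICP = (1−RAP)×ICP; the windowing convention is an assumption.

```python
import numpy as np

def rap(mean_icp, amp):
    """mean_icp, amp: arrays of windowed mean ICP and pulse amplitude (AMP)."""
    return np.corrcoef(mean_icp, amp)[0, 1]

def wicp(mean_icp, amp):
    """Compensatory reserve-weighted ICP: wICP = (1 - RAP) * ICP."""
    return (1.0 - rap(mean_icp, amp)) * np.mean(mean_icp)
```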
Weighted ICP (wICP)
The concept of compensatory reserve-weighted ICP or wICP consider the negative correlation between ICP and volume. It is defined as wICP=(1−RAP)×ICP, where RAP is the reserveamplitude-pressure index. The Czosnyka group conducted a study on TBI patients within a single institution, comparing the measurement of ICP and wICP and the predictive value for patient mortality. Although statistically significant was not observed, wICP predicted mortality better than ICP alone. 6) In large-scale studies using databases, wICP has been reported to predict patient survival or mortality and better reflect patient prognosis significantly compared to ICP. These findings suggest that wICP is a more effective predictor for outcomes and provide better insights into patient prognosis than ICP alone. 50) It is important to note that further research and validation studies are needed to fully establish the clinical utility and significance of wICP to predict patient outcomes and guide the management decisions. | 2023-07-11T05:05:50.218Z | 2023-06-01T00:00:00.000 | {
"year": 2023,
"sha1": "ebb90be55ce1e255a76ef1b8dd7354147ef21b23",
"oa_license": "CCBYNC",
"oa_url": "https://doi.org/10.13004/kjnt.2023.19.e32",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "ebb90be55ce1e255a76ef1b8dd7354147ef21b23",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
34717215 | pes2o/s2orc | v3-fos-license | Crk associates with ERM proteins and promotes cell motility toward hyaluronic acid
Cell migration is a well organized process regulated by the extracellular matrix-mediated cytoskeletal reorganization. The signaling adaptor protein Crk has been shown to regulate cell motility, but its precise role is still under investigation. Herein, we report that Crk associates with ERM family proteins (including ezrin, radixin, and moesin), activates RhoA, and promotes cell motility toward hyaluronic acid. The binding of Crk with ERMs was demonstrated both by transient and stable protein expression systems in 293T cells and 3Y1 cells, and it was shown that v-Crk translocated the phosphorylated form of ERMs to microvilli in 3Y1 cells by immunofluorescence and immunoelectron microscopy. This v-Crk-dependent formation of microvilli was suppressed by inhibitors of Rho-associated kinase, and the activity of RhoA was elevated by coexpression of c-Crk-II and ERMs in 3Y1 cells. In concert with the activation of RhoA by Crk, Crk was found to associate with Rho-GDI, which has been shown to bind to ERMs. Furthermore, upon hyaluronic acid treatment, coexpression of c-Crk-II and ERMs enhanced cell motility, whereas the sole expression of c-Crk-II or either of the ERMs decreased the motility of 3Y1 cells. These results suggest that Crk may be involved in regulation of cell motility by a hyaluronic acid-dependent mechanism through an association with ERMs.
INTRODUCTION
The extracellular matrix (ECM) plays an important role in various cellular responses (1)(2)(3). ECM drives the spatio-temporal reorganization of the cytoskeleton, which is involved in physiological cell migration, tumor cell invasion, and metastasis. Multiple cell surface molecules have been shown to participate in this ECM-dependent signalling mechanism.
One of the major molecules is CD44, a transmembrane receptor for hyaluronic acid (4,5), which associates with the actin cytoskeleton through the ERM family proteins (ERMs) including ezrin, radixin, and moesin. The cleavage of CD44 at the extracellular domain by membrane-associated metalloproteinases plays a crucial role in efficient cell detachment during cell migration (6,7). The binding of CD44 and ERMs is controlled by the threonine-phosphorylation of ERMs through Rho-associated kinase (ROCK), and also by N-terminal phospholipid modification of ERMs (8).
The signalling adaptor protein Crk, which is composed of an SH2 and two SH3 domains, is considered to be involved in cytoskeletal regulation. Crk has been shown to interact with components of focal adhesions, such as p130 Cas and paxillin (9,10), which are tyrosine-phosphorylated mainly upon integrin stimulation. Crk transmits signals to downstream effectors through the Crk-SH3 binding proteins C3G and Dock180, which exert guanine-nucleotide exchange factor (GEF) activity on Rap-1/R-Ras and Rac, respectively (11)(12)(13). Thus, Crk may regulate cytoskeletal movement through these GEFs and small GTPases. In fact, studies of C3G knockout mice have suggested the regulation of cell adhesion by C3G through Rap-1 (14). Phagocytosis, membrane ruffling, and lamellipodia formation have been shown to be regulated by a Dock180-ELMO-Rac-dependent mechanism (15,16).
Besides the identification of the activation of Rap-1 or Rac, we and others have previously reported Crk-dependent activation of RhoA.

For the Rac assay, cells were lysed with lysis buffer composed of 1% NP40, 25 mM HEPES pH 7.4, 150 mM NaCl, 10% glycerol, 1 mM EDTA, 10 mM MgCl2, 1 µg/ml aprotinin, and 1 mM PMSF. Lysates were clarified at 12,000 rpm at 4°C for 1 min, and the supernatant was incubated with 10 µg of purified GST-PAK2-RBD and glutathione beads at 4°C for 1 h. In both the Rho and Rac assays, the beads were washed three times with each lysis buffer and subjected to SDS-PAGE on a 12% gel. Precipitated RhoA or Rac1 was detected by immunoblotting using anti-RhoA or anti-Rac1 Ab.
Immunoelectron microscopy-Analysis was performed by the pre-embedding method with double immunostaining. v-Crk-induced 3Y1 cells were fixed with 0.1% glutaraldehyde in 0.1 M cacodylate buffer (CB) for 5 min on ice and first incubated with a mixture of primary rat mAbs against pERM and a mouse mAb against v-Crk for 3 days at 4°C. After washing with PBS, the specimens were incubated with 10 nm gold-labeled anti-mouse Ig Ab for 1 h, followed by incubation for 1 h with biotin-labeled anti-rat Ab, which was further reacted with peroxidase-labeled streptavidin. After re-fixation for 5 min, the enzyme reaction was visualized using diaminobenzidine (DAB) as substrate. Cells were re-fixed with 2% OsO4 in 0.1 M phosphate buffer for 50 min and then embedded in epon. Cells in the epon block were sectioned at 1-µm thickness and stained with 1% toluidine blue for confirmation.

To examine the mechanism of Crk-mediated cytoskeletal movement, the association of Crk and ERMs was examined, because we had reported the Crk-dependent activation of RhoA and the cleavage of CD44 (19), and ERMs are known to bind to CD44, regulating the actin cytoskeleton. First, we found that anti-Crk antibody coprecipitated transiently expressed ERMs with endogenous Crk in human embryonal kidney 293T cells.
Association of Crk and pERMs and induction of microvilli formation-As ERM
proteins were known to be regulated by phosphorylation, the binding of Crk to the phosphorylated form of ERMs (pERMs) was examined using a v-Crk-inducible 3Y1 cell line (clone 21-2-1) (19). In the presence of v-Crk, the association of Crk with pERMs was detectable in the cytoplasmic fraction of 3Y1 cells (Fig. 3A).
We then analyzed the subcellular localization of pERMs in a v-Crk-inducible 3Y1 cell line. pERMs were observed diffusely in the cytoplasm and partially at the edge of the cytoplasm of 3Y1 cells without v-Crk (Fig. 3B-a). However, with v-Crk induction, pERMs were demonstrated to translocate to cellular microvilli, and co-localization of v-Crk and pERMs was shown by a merged image (Fig. 3B, b-d). In addition, co-localization of v-Crk and pERMs was also detected as dotted patterns in the cytoplasm (Fig. 3B, b-d, arrowheads).
To confirm the involvement of ROCK, which is known to phosphorylate ERMs, in v-Crk-dependent microvilli formation, we utilized the ROCK inhibitor Y27632 and found that this reagent inhibited the localization of pERMs to microvilli, while the remaining co-localization of v-Crk and pERMs was still detectable in the cytoplasm. To confirm the immunofluorescence study, we performed immunoelectron microscopy using the double staining method. pERMs were visualized using diaminobenzidine, in which they are recognized as electron-dense black deposits by transmission electron microscopy (TEM), and the presence of anti-v-Crk Ab was demonstrated by a 10 nm gold particle-labeled secondary antibody. pERM labeled with DAB was recognized in the cytoplasm of v-Crk-expressing 3Y1 cells by light microscopy (Fig. 4a).
Immunoelectron microscopy demonstrated that v-Crk was colocalized with pERM at the microvilli, the cytoplasmic edge, and filamentous structures in the cytoplasm of the cells (Fig. 4b-g). In these experiments, motility was recovered to the level of wild-type 3Y1 cells but was not significantly enhanced beyond that of wild-type 3Y1 cells (data not shown).
The cleavage of CD44 in 3Y1 cells expressing both Crk and ERMs
To confirm the involvement of CD44 cleavage in v-Crk-regulated cell motility, we examined the effect of PI-3 kinase inhibitors, because PI-3 kinase is known to up-regulate CD44 cleavage (23,24). A wound healing assay demonstrated that PI-3 kinase inhibitors such as LY294002 and wortmannin tended to suppress the motility of 3Y1 cell lines; however, this suppressive effect was most prominent in 3Y1 cells stably expressing Crk and ezrin (Fig. 5C). PI-3 kinase inhibitors did not affect the motility of cells expressing Crk and moesin (Fig. 5C).
DISCUSSION
The signalling adaptor protein Crk was originally identified as an avian sarcoma virus-encoded oncoprotein, v-Crk (25). Since human c-Crk-II, the homologue of v-Crk, was isolated, the identification of Crk targets has suggested that Crk links tyrosine-phosphorylated proteins to guanine-nucleotide exchange factors for small GTPases and regulates cytoskeletal reorganization. In particular, under fibronectin stimulation, the integrin-provoked signal has been shown to be mediated by Crk and transmitted to the downstream effector Dock180, leading to Rac activation. However, the mechanism of Crk-mediated cell migration and tumor cell invasion has remained under investigation.
In this study, we have found a novel interaction between Crk and the ERM family of proteins that is involved in the activation of Rho and the hyaluronic acid-CD44-dependent regulation of cell motility (Fig. 6). According to our previous results, v-Crk activated RhoA in fibroblasts, and coexpression of Crk and ERMs enhanced the activity of RhoA in 293T cells. As no known Rho-GEF had been found to bind to Crk, the mechanism of Crk-dependent activation of Rho was the missing link. Rho-GDI has been shown to bind to the N-terminal FERM domain of ERMs (21), and these data led us to hypothesize that, upon ECM stimulation, Crk binds to a negative regulator of RhoA such as Rho-GDI, inactivates Rho-GDI, and leads to the activation of RhoA, as shown in Fig. 6. Thus, we examined the association of Rho-GDI and Crk. In this study, the association of force-expressed Crk and Rho-GDI was observed in 293T cells, but we failed to show an association of endogenous Crk and Rho-GDI (data not shown). In 293T cells, we did not examine inhibition of the function of the negative regulator Rho-GDI, because the simple expression of Rho-GDI did not significantly suppress the activity of RhoA measured by pull-down assay. Furthermore, we also tested the effect of Crk on another negative regulator of RhoA, Rho-GAP; however, we did not observe significant activation of RhoA by the double expression of Crk and Rho-GAP (data not shown). Establishment of a cell line deficient in a negative regulator of RhoA may reveal the Crk-dependent activation mechanism of Rho in future studies.
As it is known that ERMs are phosphorylated in the cytoplasm and translocated to the membrane, and that the phosphorylated form of ERMs (pERMs) links CD44 to the actin cytoskeleton, we analyzed the localization of pERMs in v-Crk-inducible fibroblasts. In 3Y1 cells, v-Crk translocated pERMs and induced microvilli formation through a ROCK-dependent signalling mechanism. Although we expected v-Crk-induced phosphorylation of ERMs, we failed to demonstrate such increased phosphorylation of ERMs by Crk in our system (Fig. 3C). We speculated that the relatively high levels of pERMs in the cytoplasm of wild-type 3Y1 cells may mask further phosphorylation of ERMs.
In this study, we showed that the association of Crk and ERMs was involved in the hyaluronic acid-CD44 signalling mechanism to promote cell motility. Considering the mechanism of Crk-dependent enhancement of CD44 cleavage, Crk may also regulate the transcriptional levels of matrix-metalloproteinases (MMPs). As Crk is also known to activate PI-3 kinase (26) | 2018-04-03T02:44:01.074Z | 2004-11-05T00:00:00.000 | {
"year": 2004,
"sha1": "b3c56312124412c0b5047356840f21c653abfdd1",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.1074/jbc.m401476200",
"oa_status": "HYBRID",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "b847cd6f9a547f659ad03bdaa2813abef817676c",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Medicine",
"Biology"
]
} |
247239473 | pes2o/s2orc | v3-fos-license | Associations between multiple indicators of discrimination and allostatic load among middle-aged adults
The objective of this paper is to examine associations between multiple measures of discrimination (i.e., everyday, lifetime, and appraised burden) and components of allostatic load (AL). We drew on pooled cross-sectional data from the Biomarker Project of the Midlife in the United States study (n = 2118). Ages ranged from 25 to 84 years and included mostly Black (n = 389) and white (n = 1598) adults. Quasi-Poisson models were fit to estimate prevalence ratios for each discrimination measure and high-risk quartiles across seven physiological systems (i.e., sympathetic and parasympathetic nervous system; HPA axis; inflammation; cardiovascular; metabolic glucose; and metabolic lipids) and overall AL scores. In fully adjusted models, everyday discrimination was associated with elevated lipids (aPR: 1.07; 95% CI 1.01, 1.13). Lifetime experiences of discrimination were associated with lower sympathetic nervous system (aPR: 0.82; 95% CI: 0.69, 0.98) and greater cardiovascular risk scores (aPR: 1.17; 95% CI: 1.02, 1.34) among those reporting three or more experiences, as well as increased inflammation (aPR: 1.13; 95% CI: 1.02, 1.25; aPR: 1.28; 95% CI: 1.14, 1.43), metabolic glucose (aPR: 1.35; 95% CI: 1.19, 1.54; aPR: 1.45; 95% CI: 1.24, 1.68), and metabolic lipids (aPR: 1.13; 95% CI: 1.03, 1.24; aPR: 1.28; 95% CI: 1.15, 1.43) scores for those reporting one to two and three or more experiences. Appraised burden yielded nuanced associations with metabolic glucose and parasympathetic nervous system scores. Everyday and lifetime measures were also associated with higher overall AL, though burden of discrimination was only associated with AL among those reporting "a little" burden. While AL summary scores provide insight into the cumulative impacts of discrimination on health, there appear to be distinct physiologic pathways through which varying forms of discrimination contribute to AL and, ultimately, to poorer health. These unique pathways may be useful in identifying potential points of intervention to mitigate the impacts of discrimination on health inequities.
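As a hedged illustration of the quasi-Poisson prevalence-ratio models described here, the statsmodels sketch below fits a Poisson GLM with Pearson-based dispersion; variable names and data are placeholders, not the actual MIDUS variables.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
df = pd.DataFrame({                          # toy stand-in data
    "high_risk": rng.integers(0, 2, 500),    # high-risk AL quartile (0/1)
    "everyday_disc": rng.random(500),        # everyday discrimination score
    "age": rng.integers(25, 85, 500),
})

fit = smf.glm("high_risk ~ everyday_disc + age", data=df,
              family=sm.families.Poisson()).fit(scale="X2")  # quasi-Poisson dispersion
print(np.exp(fit.params))   # exponentiated coefficients ~ prevalence ratios
```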
Introduction
Discrimination is defined as differential treatment directed towards marginalized and stigmatized groups by social institutions and individuals (Williams et al., 2019a, 2019b). An additional component of discrimination includes "patterns of dominance and oppression, viewed as expressions of a struggle for power and privilege" (Krieger, 2001; Marshall, 1994). Different groups have been marginalized and discriminated against based on factors such as race/ethnicity, gender, disability status, and sexual orientation (Krieger, 2012, 2020; Williams and Mohammed, 2009). There is growing interest in understanding the mechanisms through which discrimination becomes embodied to affect health, including inflammatory pathways or stress responses (Cuevas et al., 2020; Goosby et al., 2018; Lockwood et al., 2018; Ong et al., 2017a; Priest, 2021). Efforts in this area are particularly salient given a growing body of research that documents associations between discrimination and several adverse health outcomes (Williams et al., 2019b; Williams and Mohammed, 2009; Pascoe and Smart Richman, 2009; Paradies et al., 2015).
The increased exposure to stressors such as discrimination as a result of social marginalization and devaluation is a critical component of the concept of allostatic load (AL). AL suggests that individuals from marginalized social groups frequently encounter sources of social stress which overwhelm available coping resources and physiological responses, resulting in adverse health outcomes (McEwen, 2000; Seeman et al., 2001; McEwen and Stellar, 1993). Introduced by McEwen & Stellar, AL is hypothesized as the cumulative "wear and tear" on multiple physiologic systems induced by exposure to chronic stress (McEwen and Stellar, 1993). Increased allostatic load has been associated with several adverse mental and physical health outcomes, including cardiovascular disease, poorer cognitive functioning, and mortality (Goosby et al., 2018; McEwen, 2000; Seeman et al., 2001). Indicators of multisystem physiological dysregulation, such as AL, have been employed to better understand the embodiment of discrimination. To date, studies have reported that experiences of discrimination are associated with higher AL, even after accounting for traditional risk factors (e.g., health behaviors) and sociodemographic covariates (Ong et al., 2017b; Brody et al., 2014; Vadiveloo and Mattei, 2017). Akin to other psychosocial stressors, discrimination is posited to increase allostatic load through changes to psychosocial and behavioral factors that additionally overwhelm biophysiological responses to stress (Williams et al., 2019b; Vadiveloo and Mattei, 2017; Cuevas et al., 2019). However, in understanding how discrimination becomes embodied, there remains a gap in understanding whether the strength of associations between discrimination and allostatic load is driven by specific subscales and whether these associations vary by measurement of discrimination.
In the stress literature, distinctions are drawn between acute stressors such as life events (e.g., divorce, death of a loved one, job loss), chronic, on-going stressors (e.g., a traffic-heavy commute or problems at work or in relationships), and appraisals (Cohen et al., 1995). These distinctions are mirrored in discrimination research, where major lifetime events of discrimination are referred to as acute, defined experiences (e.g., unfairly fired or not hired for a job), compared to chronic or recurrent stressors such as everyday differential treatment (e.g., being treated with less respect or courtesy) (Williams et al., 2016; Lazarus, 1990). Measures of appraisals vary, though recent research has used measures of self-reported burden of discrimination, without relying on the attribution to a specific encounter. Although major lifetime events provide observable and more defined events to measure and provide context to the accumulated impact of discrimination over the lifecourse (Cohen et al., 1995; Williams et al., 2016), they may raise issues around statistical power given their infrequent occurrence. By contrast, chronic, everyday discrimination captures exposure to ongoing and relatively frequent events. While issues pertaining to measurement and assessment remain (Williams et al., 2016), these experiences may be no less "toxic" in their effects. For example, someone might not have (yet) experienced a major discrimination event in their life, but still be exposed to daily inequitable treatment. This persistent exposure to negative experiences and differential treatment through everyday experiences has been a strong predictor of the onset and progression of health outcomes (Lewis et al., 2006; Kershaw et al., 2016; Williams et al., 2003). Additionally, the inclusion of appraisals of burden of discrimination in recent literature provides evidence that the additional consideration of burden and stress from discrimination is beneficial to understanding discrimination as a contributor to adverse health outcomes (Sims et al., 2012; Pantesco et al., 2018). This evidence suggests that the inclusion of appraisals of burden may also be useful in understanding the implications of discrimination on wellbeing. Illustrating the scope and range in impacts of measures of discrimination is of particular importance since all stressors may not equally contribute to or share plausible associations with allostatic load measures (Cohen et al., 1995; Lazarus, 1990; Rodriquez et al., 2019).
The measurement of allostatic load, as originally presented by Seeman et al., included 10 biomarkers across different physiologic systems (i.e., DHEA, epinephrine, cortisol, norepinephrine, cholesterol, systolic and diastolic blood pressure, glycosylated hemoglobin, BMI, and waist-hip ratio) (Seeman et al., 2001). However, the operationalization of AL has since been expanded, given that additional biomarkers have been found to contribute to AL and, when added, act as better predictors of outcomes such as mortality and physical functioning (Guidi et al., 2021). Allostatic load can capture: the stress response via sympathetic and parasympathetic nervous system and hypothalamic-pituitary-adrenal (HPA) axis activity, inflammation via several markers of inflammation (e.g., C-reactive protein), metabolic glucose and lipid profiles, and indicators of cardiovascular health. Given that no gold standard exists, variations in the number of biomarkers used to assess each component and in which subscales are represented in the measure have been documented (Beckie, 2012). Additionally, statistical concerns regarding the use of a summary score of highly correlated measures (e.g., BMI, waist-hip ratio) have been raised (Rodriquez et al., 2019). As previous researchers have identified, including in a composite score measures that may not be relevant to the exposure-outcome association increases measurement error (Rodriquez et al., 2019; Beckie, 2012) and does not present plausible biological pathways through which embodiment occurs for specific outcomes (Rodriquez et al., 2019). It is plausible, for example, that the associations between discrimination and mental health could be mediated through one component of AL, such as inflammation or HPA axis measures (Berger and Sarnyai, 2015), but not through dysregulation in lipid metabolism. In fact, recent work has found evidence that specific subscales have stronger associations with mental health outcomes (Carbone, 2021). Focus on the specific indicators used to create AL summary scores, including individual biomarkers used to compose AL measures, can facilitate an enhanced understanding of the physiological pathways underpinning the embodiment of experiences of discrimination and guide future research.
The present analysis sought to understand patterning in associations between multiple measures of general experiences of discrimination, without attribution, AL subscales, and overall AL scores. Our objectives included, first, examining associations between everyday, lifetime, and burden of discrimination and allostatic load subscales and overall allostatic load scores. Given findings from previous literature that has documented differences in relationships with allostatic load by race (Rodriquez et al., 2019; Guidi et al., 2021; Beckie, 2012), we also evaluated whether the above associations were modified by race. Last, we sought to understand whether the above associations between discrimination and allostatic load were modified by other included measures of discrimination (e.g., lifetime*everyday, everyday*burden). We hypothesized that, individually, each measure of discrimination would be associated with increased high-risk scores across the seven physiologic indicators of AL. Additionally, we posited that effect modification would exist between each measure of discrimination, as well as between the individual measures and race.
Methods
We use pooled cross-sectional data from the Biomarker Substudies of the Midlife in the United States (MIDUS) Study for our analysis. MIDUS is a longitudinal study of a national probability sample of telephone households in the 48 contiguous states. Approximately 7000 noninstitutionalized U.S. residents aged 25 to 74 at the time of interview were included (Brim et al., 2004; Dienberg Love et al., 2010). MIDUS I data include extensive measurement of sociodemographic and psychosocial factors (e.g., discrimination). Additional detail regarding the sampling and data collection strategies of the MIDUS study is described elsewhere (Dienberg Love et al., 2010).
In follow-up interviews of the initial MIDUS I wave (MIDUS II; data collected from 2004-09), MIDUS investigators also added African American participants from Milwaukee, WI (n = 592) in an effort to increase the racial diversity of the sample (Dienberg Love et al., 2010). Data collection in MIDUS II and the Milwaukee wave included the measures captured in the initial assessment; however, it also added cognitive, biomarker, and neuro-physiological assessments on a subsample of respondents. In 2011-14, MIDUS investigators recruited and collected data on a Refresher sample of approximately 3500 adults to replenish the original MIDUS I wave. Data similar to those collected in MIDUS II, including psychosocial factors, were collected in this cohort. A subsample of the Refresher wave was also selected for cognitive, biomarker, and neuro-physiological assessments.
For the present analyses, multiple measures of discrimination, including lifetime and appraised burden of discrimination, are examined in relation to AL, using pooled cross-sectional data from the Biomarker Substudy of the MIDUS II (n = 1255) and the MIDUS Refresher (n = 863) waves. All participants were eligible for inclusion. This analysis of publicly available, de-identified data was exempt from IRB review.
Measures
Experiences of discrimination. Experiences of discrimination were captured via self-administered questionnaires across three levels: (1) lifetime (Williams et al., 2008), (2) everyday (Williams et al., 1997), and (3) burden of discrimination. Items included in the lifetime and everyday measures are outlined in Supplemental Table 1. Lifetime experiences of discrimination were captured using the Major Experiences of Discrimination scale (Williams et al., 2008), which captures how many times over the lifecourse respondents were discriminated against as a result of their "race, ethnicity, gender, age, religion, physical appearance, sexual orientation or other characteristics" in 11 areas (e.g., being discouraged from seeking higher education, being denied a scholarship). Similar to previous MIDUS studies examining discrimination, responses for lifetime experiences of discrimination were coded as none (i.e., a response of 0 to all 11 items), 1-2 instances (i.e., a response greater than 0 to any 1-2 of the 11 items), and 3 or more (Friedman et al., 2009). Internal reliability was acceptable within this sample (α = 0.77).
Responses to experiences of everyday discrimination were captured using the Everyday Discrimination Scale (Williams et al., 1997). This scale measures the frequency of routine experiences of differential treatment across 9 areas, including items related to being treated with less respect than others. For each item, respondents reported the frequency of occurrence in their day-to-day life: 1 = often, 2 = sometimes, 3 = rarely, or 4 = never. Responses were reverse coded, such that 0 = never, 1 = rarely, 2 = sometimes, 3 = often, and averaged. The mean of the frequency responses across the 9 items was used as the everyday discrimination score (Sims et al., 2012). This measure exhibited excellent reliability within the present sample (α = 0.95).
To capture appraisals of the burden of discrimination, participants responded to two survey items: "Overall, how much has discrimination interfered with you having a full and productive life?" and "Overall, how much harder has your life been because of discrimination?" These items were developed as part of the MacArthur Foundation Research Network on Successful Midlife Development and were included in the MIDUS studies (Pantesco et al., 2018; MacArthur Foundation Research Network, 2021; Midlife in the United States, 2021). Potential responses included a lot, some, a little, and not at all. An overall measure of burden of discrimination was created using responses to both questions after testing whether the items were independent of each other (χ² = 3255.3, p < 0.001). This measure was coded as none (i.e., "not at all" to both burden questions), a little (i.e., reporting "a little" to both questions or reporting "a little" to one question and "not at all" to the other), some (i.e., "some" to both questions or "a lot" or "some" to one question and "a little" or "not at all" to the other), or high (i.e., "a lot" to both questions). Individuals who reported no everyday or lifetime discrimination and did not respond to these questions were coded as "no discrimination reported" and grouped in the "none" category. The measures together exhibited good internal reliability (α = 0.91).
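For concreteness, the recoding described above can be sketched in R as follows; the item names (ed1-ed9, life1-life11, burden1, burden2) and the data frame d are hypothetical placeholders rather than actual MIDUS variable names:

ed_items   <- paste0("ed", 1:9)      # everyday items (hypothetical names)
life_items <- paste0("life", 1:11)   # lifetime domains (hypothetical names)

# Everyday: reverse-code 1 = often ... 4 = never to 0 = never ... 3 = often,
# then average across the nine items
d$everyday <- rowMeans(4 - d[ed_items], na.rm = TRUE)

# Lifetime: count endorsed domains, then categorize as none / 1-2 / 3 or more
life_count <- rowSums(d[life_items] > 0, na.rm = TRUE)
d$lifetime <- cut(life_count, breaks = c(-Inf, 0, 2, Inf),
                  labels = c("none", "1-2", "3 or more"))

# Appraised burden: combine the two items following the scheme above
low <- c("not at all", "a little")
d$burden <- ifelse(d$burden1 == "not at all" & d$burden2 == "not at all", "none",
            ifelse(d$burden1 == "a lot" & d$burden2 == "a lot", "high",
            ifelse(d$burden1 %in% low & d$burden2 %in% low, "a little", "some")))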
An overall AL score was computed as the sum of the seven subscales (i.e., SNS, PNS, HPA, inflammation, cardiovascular, glucose, and lipids), using a high-risk quartile defined for each subscale. Subscale scores were computed for respondents with at least half of the measured biomarkers for each subscale. Risk scores ranging from 0 to 1 were created for each subscale, indicating the proportion of system indicators that fell into high-risk quartile ranges based on the sample distribution (Gruenewald et al., 2012). The overall AL score (range: 0-7) is the sum of the averaged subscale scores and was calculated when at least 6 of the 7 subscale scores were present. Additional insight into variable creation for AL is available in supplemental materials from an analysis by Gruenewald and colleagues (Gruenewald et al., 2012). The primary outcomes of interest are average high-risk levels across each AL subscale; however, the secondary outcome includes the overall AL score. Supplemental analyses include an outcome-wide analysis of each of the 24 available biomarkers.
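A minimal R sketch of this construction follows, assuming a numeric matrix biomarkers, a logical vector high_if_low marking markers whose low end is the risky one (e.g., HDL, HF-HRV), and hypothetical column-index vectors such as sns_cols for each subscale; the exact biomarker panel follows Gruenewald and colleagues and is not reproduced here:

# Flag the high-risk quartile for each biomarker using sample-based cutpoints
risk_flags <- sapply(seq_len(ncol(biomarkers)), function(j) {
  x <- biomarkers[, j]
  if (high_if_low[j]) x <= quantile(x, 0.25, na.rm = TRUE)
  else x >= quantile(x, 0.75, na.rm = TRUE)
})

# Subscale score: proportion of that system's markers in the high-risk
# quartile, computed only when at least half of the markers are observed
subscale_score <- function(flags) {
  if (mean(!is.na(flags)) < 0.5) return(NA_real_)
  mean(flags, na.rm = TRUE)
}
sns <- apply(risk_flags[, sns_cols, drop = FALSE], 1, subscale_score)
# ... repeated for pns, hpa, inflam, cardio, glucose, and lipids

# Overall AL (range 0-7): sum of the seven subscale scores, requiring at
# least six of the seven to be present
subs <- cbind(sns, pns, hpa, inflam, cardio, glucose, lipids)
al <- ifelse(rowSums(!is.na(subs)) >= 6, rowSums(subs, na.rm = TRUE), NA_real_)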
Statistical analysis
Descriptive statistics of the overall sample (i.e., means and percentages) were calculated. Models were fit using quasi-Poisson regression given that risk scores follow a binomial distribution and allostatic load is a sum of these values ranging from 0 to 7. Modified Poisson models have been identified as an approach to estimate prevalence ratios (or relative risk, in prospective studies) (Zou, 2004) and provide an estimate that is easier to interpret compared to logistic regression (prevalence odds ratio) (Barros and Hirakata, 2003). Baseline models assessed independent associations of lifetime discrimination, everyday discrimination, and appraised burden with each AL subscale and with overall AL scores. Two multivariable regression models were run, beginning with a model adjusted for age, sex, and race. The second multivariable model additionally accounted for a fuller set of covariates and potential mediators of the association (i.e., educational attainment, income, employment status, wave of data collection, cigarette use, and alcohol consumption). Socioeconomic variables (educational attainment, income, and employment status) are viewed as potentially confounding the association between experiences of discrimination and markers of AL. On the other hand, health behaviors such as smoking and drinking are viewed as potential mediators of the association between experiences of discrimination and AL.
All analyses were conducted using R (R Core Team, 2013). Effect modification on the multiplicative scale was assessed using interaction terms. In fully adjusted models, interaction terms between race and each measure of discrimination were assessed for significance. We then examined interactions between the included measures of discrimination (e.g., everyday*lifetime) in fully adjusted models.
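To illustrate the modeling approach (variable names are hypothetical; lipids stands in for one subscale risk score), a quasi-Poisson GLM with a log link yields coefficients that exponentiate to prevalence ratios:

# Model 2: demographic adjustment only
m2 <- glm(lipids ~ everyday + age + sex + race,
          family = quasipoisson(link = "log"), data = d)

# Model 3: adds socioeconomic covariates and behavioral mediators
m3 <- update(m2, . ~ . + education + income + employment + wave +
                  smoking + alcohol)

# Adjusted prevalence ratios with Wald 95% confidence intervals
est <- coef(summary(m3))
cbind(aPR = exp(est[, "Estimate"]),
      lo  = exp(est[, "Estimate"] - 1.96 * est[, "Std. Error"]),
      hi  = exp(est[, "Estimate"] + 1.96 * est[, "Std. Error"]))

# Multiplicative effect modification via an interaction term
m_int <- update(m3, . ~ . + everyday:race)
summary(m_int)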
Imputation of all missing data was conducted using the built-in multivariate imputation by chained equations technique available in the "mice" package in R (van Buuren et al., 2015). Imputations were conducted over 5 iterations, using proportional odds models for ordinal categorical variables (i.e., income, educational attainment, lifetime discrimination, burden of discrimination) and polytomous logistic regression for nominal categorical variables (i.e., race). Numeric variables were imputed using predictive mean matching (i.e., allostatic load risk scores, individual biomarkers, everyday discrimination, count of lifetime experiences of discrimination). Variables with the greatest level of missing data were burden of discrimination (11.5%) and parasympathetic nervous system subscale scores (14.3%). Remaining variables had less than 10% missingness, with exact levels summarized in Table 1.
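A sketch of this step using the mice package follows; the method assignments mirror the description above, while the variable names and whether "5 iterations" refers to the number of imputed datasets (m) or chain iterations (maxit) are our assumptions:

library(mice)

meth <- make.method(dat)                                        # defaults by type
meth[c("income", "education", "lifetime", "burden")] <- "polr"  # ordinal
meth["race"] <- "polyreg"                                       # nominal
meth[c("everyday", "life_count", "al")] <- "pmm"                # numeric

imp <- mice(dat, m = 5, maxit = 5, method = meth, seed = 1)

# Fit the quasi-Poisson model in each completed dataset and pool estimates
fits <- with(imp, glm(lipids ~ everyday + age + sex + race,
                      family = quasipoisson(link = "log")))
summary(pool(fits))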
Sensitivity analyses. Sensitivity analyses were conducted to assess the robustness of findings to how measures of discrimination were coded, particularly given that there is no "gold standard" regarding operationalizing the included discrimination measures (Williams et al., 2016). Everyday discrimination was assessed categorically, where respondents reporting 0 experiences were coded as none, the top quartile of experiences were coded as high, and remaining non-zero responses were coded as some. This may capture a threshold effect of everyday discrimination. Lifetime discrimination was assessed as a count of experiences. Additionally, each burden appraisal was assessed individually to capture whether each item had unique associations with the AL subscales and overall AL scores.
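The categorical recoding of everyday discrimination might look as follows; whether the top quartile was defined on the full or the non-zero distribution is our assumption:

q75 <- quantile(d$everyday, 0.75, na.rm = TRUE)  # top-quartile cutoff (assumed)
d$everyday_cat <- ifelse(d$everyday == 0, "none",
                  ifelse(d$everyday >= q75, "high", "some"))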
E-values were calculated to assess the robustness of the associations to potential unmeasured confounding (VanderWeele and Ding, 2017). The E-value is defined as "the minimum strength of association on the risk ratio scale that an unmeasured confounder would need to have with both the treatment and outcome to fully explain away a specific treatment-outcome association, conditional on the measured covariates" (VanderWeele and Ding, 2017) with larger values indicating considerable unmeasured confounding would be necessary to explain away the observed outcome.
E-values were calculated for statistically significant findings from fully adjusted models using E-value = RR + √(RR × (RR − 1)), where RR denotes relative risk values greater than 1 (VanderWeele and Ding, 2017). Prevalence ratios were used in place of relative risks to calculate E-values.
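The formula translates directly into a small R helper; inverting protective ratios (RR < 1) before applying it follows VanderWeele and Ding's recommendation and is our assumption about how the protective SNS estimate was handled:

e_value <- function(rr) {
  rr <- ifelse(rr < 1, 1 / rr, rr)  # invert protective ratios first
  rr + sqrt(rr * (rr - 1))
}

e_value(1.07)  # everyday discrimination and metabolic lipids -> ~1.34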
Supplemental analyses
To illustrate the potential differences in associations between measures of discrimination and measures used to define the AL scores, we employed an outcome-wide analysis (VanderWeele, 2017) assessing associations between everyday, lifetime, and burden of discrimination with each of the 24 biomarkers used to create AL indicators. Utilizing an outcome-wide approach provides additional insight into the potentially different roles that each distinct measure of discrimination plays with the array of biomarkers used to compile AL measures. Outcome-wide analytic approaches have been proposed to evaluate associations between the same exposure and multiple outcomes where the relationship with each outcome may differ, offering additional guidance and specificity to public health recommendations (VanderWeele, 2017). Bonferroni correction was used to correct for multiple testing (p = [0.05/(3*24)] = 0.0007). The conditional distribution of most biomarkers was skewed. As such, outcomes were log transformed, excluding systolic blood pressure, pulse, pulse pressure, HDL and LDL, and robust standard errors were used in linear regression models. Results from sensitivity and supplemental analyses are available in the Online Supplement.
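A sketch of one such outcome-wide regression follows, using the sandwich and lmtest packages for robust standard errors (the specific heteroskedasticity-consistent estimator is our assumption), with CRP standing in for one of the log-transformed biomarkers:

library(sandwich)
library(lmtest)

fit <- lm(log(crp) ~ everyday + age + sex + race + education + income +
            employment + wave + smoking + alcohol, data = d)
coeftest(fit, vcov = vcovHC(fit, type = "HC3"))  # robust (sandwich) SEs

alpha <- 0.05 / (3 * 24)  # Bonferroni threshold, approximately 0.0007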
Results
Overall sample descriptive statistics are provided in Table 1. The mean age of the overall sample was 53 years. Most participants were white (75.4%) and female (54.9%). Nearly 27% of respondents reported no everyday discrimination. Experiencing lifetime discrimination in three or more areas was reported by 18% of respondents and 32% of respondents appraised discrimination as contributing "a lot" of burden to living a full and productive life and in making life harder.
Interactions among lifetime, everyday, and burden of discrimination were not statistically significant for any of the outcomes (p > 0.05); nor was race an effect modifier of discrimination measures in fully adjusted models. Findings are reported for each measure. We place emphasis on findings from model 3, even though it adjusts for potential mediators between discrimination and physiologic markers.
Everyday discrimination
Among the full sample, increases in average frequency of everyday discrimination were associated with increased prevalence of high-risk scores on the metabolic lipids subscale in models accounting for race, age, and sex (Table 2). Associations between everyday discrimination and metabolic lipids risk scores remained after accounting for potential mediators (i.e., health behaviors, SES measures). Increased everyday discrimination was also associated with higher AL scores.
Lifetime discrimination
Compared to those who reported no lifetime discrimination, experiencing one to two and three or more experiences was associated with high-risk scores in the inflammation, metabolic glucose, and metabolic lipid subscales in partially adjusted models (Table 3). These associations remained after further adjustment. Reporting three or more experiences of lifetime discrimination was also associated with SNS and cardiovascular risk scores; the association with SNS risk scores was inverse (protective). Reporting both one to two and three or more experiences of lifetime discrimination was associated with higher AL scores. No associations were observed for PNS and HPA risk scores.
Burden of discrimination
Associations between the appraisals of burden of discrimination and AL are presented in Table 4. Respondents categorized as experiencing "a little" burden of discrimination had a greater prevalence of high metabolic glucose risk scores compared to those who reported "none" in fully adjusted models. Elevated PNS risk scores were more prevalent among those reporting "some" burden of discrimination than among those reporting none. Associations between appraised burden and overall AL scores were significant for those reporting "a little" burden in the fully adjusted model, though not for other groups.
Sensitivity analyses
When coded categorically, there appeared to be a threshold effect of experiencing high everyday discrimination compared to respondents reporting no experiences in fully adjusted models (Supplemental Table 5). Associations between count of experiences of lifetime discrimination and AL were directionally similar to associations observed when experiences were assessed categorically, though estimates were smaller (Supplemental Table 6). Increases in experiences of lifetime discrimination remained associated with increased inflammation, metabolic glucose, and metabolic lipid subscales and overall AL scores. Assessing burden of discrimination questions independently also revealed unique associations with AL subscales and overall AL scores. Appraisals of life being harder because of discrimination were associated with the prevalence of higher PNS and metabolic lipid subscale risk scores (Supplemental Table 7). Appraisals of discrimination as having interfered in having a full and productive life were associated with increased PNS, inflammation, metabolic glucose, and metabolic lipid subscale risk scores, as well as overall AL scores (Supplemental Table 8).
The robustness of the observed associations between measures of discrimination and AL subscale risk and overall scores to unmeasured confounding is presented using E-values in Supplemental Table 9. We report these values for model 3. These reflect the minimum association an unmeasured confounder would have to have with both the measure of discrimination and the AL outcome, beyond measured covariates, to explain away the observed associations from fully adjusted models (VanderWeele and Ding, 2017).
Supplemental analyses
Post hoc analyses used an outcome-wide approach to understand differences in patterning of associations between measures of discrimination and AL subscales. Findings from these fully adjusted analyses are presented in supplemental tables (Supplemental Tables 10-12). Associations meeting the adjusted p-value level of significance to account for multiple testing (p < 0.0007) are highlighted in red.
Table 3. Prevalence ratios of associations between lifetime discrimination and high-risk allostatic load scores.

After accounting for multiple testing, we found that everyday discrimination was associated with lower HDL levels. Additionally, reporting 3 or more experiences of lifetime discrimination was associated with elevated IL-6, CRP, insulin resistance (HOMA-IR), BMI, WHR, and triglycerides, while reporting one to two experiences of lifetime discrimination was associated with greater glucose and HOMA-IR. None of the associations involving the burden of discrimination measures met the adjusted p-value criterion.
Discussion
In this cross-sectional analysis of MIDUS participants, we observed that associations between discrimination and AL subscales varied by the discrimination measure used. These findings suggest that there may be distinct mechanisms through which chronic and acute major lifetime experiences or appraisals of discrimination contribute to AL and, ultimately, poorer mental and physical health outcomes. Though we did not examine these unique relationships in reference to specific health endpoints, these findings could be extended to examine whether these patterns between discrimination measures and allostatic load subscales or indicators are more predictive of certain physical and mental health outcomes. For example, in a latent class analysis of allostatic load and mental health, Carbone (2021) found that individuals meeting the criteria for depression measures were more likely to be clustered in metabolic and inflammatory dysregulation and parasympathetic dysregulation categories as compared to the baseline cluster (e.g., low overall dysregulation, except for metabolic lipids) (Carbone, 2021). Our results, in combination with these recent findings, suggest directions for future exploration of embodiment and the processes through which embodiment occurs.
We extend the findings of previous literature that documented associations between discrimination and AL using individual measures or composite scores of multiple measures of discrimination (Ong et al., 2017b;Brody et al., 2014;Vadiveloo and Mattei, 2017;Cuevas et al., 2019;Currie et al., 2020;Van Dyke et al., 2020) by also noting that associations between discrimination, individual physiologic markers, and AL subscale risk scores vary by type of discrimination assessed and how measures are operationalized. We highlight the main findings of this analysis in a summary table below.
A summary of associations between measures of discrimination and allostatic load subscales and overall allostatic load (AL) scores from fully adjusted models.

We did not find evidence that the included measures of discrimination were associated with HPA axis scores. Associations were observed for everyday discrimination and metabolic lipid risk scores and overall AL score. By contrast, lifetime experiences of discrimination were primarily associated with intermediate- and long-term physiological indicators, such as inflammation, cardiovascular, and metabolic glucose and lipids measures. We also observed greater lifetime discrimination to be protective of SNS risk scores. However, this relationship does not align with previous assessments of discrimination (both general experiences and racial discrimination) and SNS indicators, where null or positive associations have been observed (Ong et al., 2017b; Brody et al., 2014). Increased reports of lifetime discrimination were associated with higher risk scores for inflammatory markers (specifically, CRP and IL-6), metabolic glucose (i.e., glucose and HOMA-IR), and metabolic lipid (i.e., BMI, WHR, and triglyceride levels) subscales. Appraised burden of discrimination was associated with PNS, metabolic glucose, and metabolic lipids risk scores and overall AL scores. The variations in associations by measure contribute to evidence of the criterion validity of each measure, where lifetime and appraised burden of discrimination may capture the enduring impact of major events or burden of discrimination on long-term health outcomes, while everyday discrimination captures the implications of broader, day-to-day exposures of stress.
Health behaviors (i.e., smoking and alcohol use) were included as potential mediators of associations between discrimination and AL outcomes. Some findings were robust to these adjustments, including the outcome-wide assessments where measures of discrimination remained associated with several individual physiological markers. However, it is important to note that the included health behaviors and covariates do not represent the totality of variables that may mediate the effects of discrimination on indicators of AL and AL scores. Additionally, given the cross-sectional nature of our analyses, we cannot rule out the possibility of reverse causation, whereby components of AL may affect experiences and appraisals of discrimination.
Additionally, we observed no multiplicative interactions with AL outcomes, either between measures of discrimination or between individual measures and race. Previous work has found larger (though not statistically different) within-group associations between pervasive discrimination and AL among African American respondents compared to whites when relative threshold categorization was used (e.g., high/low) (Van Dyke et al., 2020). Our findings may reflect 1) small samples of African American participants and participants who identified with "Other" racial groups or 2) that interactions between measures or race may occur on the additive scale.
In sensitivity analyses, we found that the operationalization of discrimination measures affected some of the observed associations for the everyday and burden of discrimination measures. In understanding how stressors result in adverse health outcomes, these findings suggest that there may be a threshold effect of everyday experiences of discrimination, whereby such experiences exceed the individual, collective, and structural resources available to mitigate the negative impacts of everyday differential treatment (Van Dyke et al., 2020). For example, we found that persons high in everyday discrimination had an increased prevalence of high PNS, cardiovascular, and metabolic lipids risk scores and overall AL scores. These findings, and results from the outcome-wide analysis, underscore the importance of further consideration and theoretical guidance in how we operationalize discrimination measures when evaluating discrimination as a stressor and/or contributor to health inequities.
Discrimination and allostatic load scores
Our findings of positive associations between discrimination measures and overall AL scores are supported by previous work. Three cross-sectional studies have found discrimination to be associated with AL in samples of Indigenous Canadian, Puerto Rican, and African American adults (Ong et al., 2017b; Cuevas et al., 2019; Currie et al., 2020). The analysis by Cuevas et al. yielded nuanced findings, however, with results indicating inverse associations between general experiences of everyday discrimination and AL scores, while lifetime discrimination was associated with greater AL in a sample of Puerto Rican adults in the Boston metro area (Cuevas et al., 2019). This difference in results may reflect differences in biomarkers used to compose the AL measure. Longitudinal work assessing the frequency of racist events among African American adolescents by Brody et al., the frequency of everyday discrimination among middle-aged women by Upchurch and colleagues, and weight discrimination among adults in the MIDUS sample by Vadiveloo and Mattei also provides evidence of the persistent effects of discrimination on increased AL (Brody et al., 2014; Vadiveloo and Mattei, 2017; Upchurch et al., 2015). Pervasive discrimination, operationalized as the sum of tertiles across general experiences of everyday, lifetime, and workplace discrimination, was also associated with AL scores in a recent analysis using MIDUS data (Van Dyke et al., 2020). When assessing the components of pervasive discrimination independently, Van Dyke et al. observed associations between lifetime and everyday discrimination with AL scores, though not workplace discrimination.
Discrimination and subscale-specific findings
Subscale-specific findings provide empirical justification for the distinct associations between measures or types of discrimination and AL subscale components. Studies in this area that have examined the relationship between measures of discrimination and specific indicators have yielded similar findings for most associations observed. Work by Wagner and colleagues, which examined the physiological implications of lifetime exposure to racial discrimination using the Schedule of Racist Events scale, observed no associations between discrimination and plasma norepinephrine levels (Wagner et al., 2015). Null associations between discrimination and epinephrine levels may reflect differences between plasma and urinary assessments of epinephrine and norepinephrine and the timing of sample draws (Lundberg, 2003). Depending on how the stressor is conceptualized to impact SNS activity, plasma hormonal measures of SNS may reflect acute responses to stress but require more invasive collection methods (i.e., venipuncture) that may themselves influence levels, while urinary measures provide an opportunity to assess SNS activity over a longer period of time (Lundberg, 2003; Lundberg and Fink, 2000). In contrast to the present analysis, however, prior research found associations between discrimination and markers of PNS activity (Ong et al., 2017b; Wagner et al., 2015; Hill et al., 2017). Hill et al. found greater ethnic discrimination (summary score up to 17 using the Perceived Ethnic Discrimination Questionnaire-Community Version) to be associated with decreased HF-HRV, one of the indicators used to calculate the parasympathetic nervous system risk score (Hill et al., 2017).
Our results indicated no associations between measures of discrimination and HPA axis risk scores. Research has found inconsistent associations between discrimination and indicators of HPA axis dysregulation (i.e., cortisol, DHEAs) (Busse et al., 2017), with some studies reporting null findings (Ratner et al., 2013) and others finding positive (Zeiders et al., 2012;Huynh et al., 2017) or indirect associations (Lee et al., 2018). Previous findings suggest that associations between discrimination and HPA axis risk scores and indicators are sensitive to the timing of discrimination (i.e., acute, chronic) and may yield elevated changes to HPA axis activity or blunted responses (Busse et al., 2017).
Most studies examining associations between discrimination and inflammation markers have found increased experiences to be associated with greater inflammation, with CRP and IL-6 being the most frequently assessed biomarkers (Cuevas et al., 2020). Similar to previous findings (Cuevas et al., 2020; Stepanikova et al., 2017), we observed that lifetime discrimination was associated with increased inflammatory risk scores, with specific associations for IL-6 and CRP. We found no associations between everyday discrimination and overall inflammation risk scores. Our findings are similar to an analysis of everyday and lifetime discrimination with inflammation markers by Stepanikova and colleagues, though their results differ slightly. The authors found lifetime discrimination to be associated with fibrinogen, E-selectin, and IL-6, but not with CRP, and no associations between everyday discrimination and the above inflammation biomarkers (Stepanikova et al., 2017). Work by Van Dyke et al. found pervasive discrimination to be associated with inflammation, metabolic glucose, and metabolic lipid subscales (Van Dyke et al., 2020). These results are similar to our findings regarding lifetime discrimination, though in our analysis everyday discrimination was also associated with metabolic lipid risk scores and appraised burden with metabolic glucose risk scores.
Last, our findings regarding associations between discrimination and cardiovascular risk scores were consistently null across everyday and burden measures, though not lifetime discrimination. Our results are consistent with studies that have observed null associations between discrimination and some cardiovascular outcomes (Dolezsar et al., 2014). However, the literature in this area remains mixed, with varying associations seen by operationalization of discrimination (e.g., implicit biases, internalized, interpersonal, institutional, domain-specific), gender, and type of outcome used to assess cardiovascular risk (Lewis et al., 2014).
Calculated E-values for the observed findings (range: 1.28 to 2.19; the lowest possible E-value is 1) suggest that our findings may be robust to unmeasured confounding (Chen and VanderWeele, 2018). For example, an unmeasured confounder would have to have a prevalence ratio of 1.34 with everyday discrimination and prevalence of high-risk metabolic lipids scores to explain away the observed association beyond the included covariates. Potential factors that may be confounders include negative affect and neuroticism, though studies that have included these measures in assessing the effects of discrimination on health outcomes found associations to persist even after accounting for these factors (Van Dyke et al., 2020;Smart Richman et al., 2010;Huebner et al., 2005). However, these factors are not exhaustive and do not negate the possibility of unmeasured confounding impacting our results.
This analysis is not without its limitations. First, given the cross-sectional design, the temporality of the associations between experiences and appraisals of discrimination and AL markers is uncertain. However, the advantage of using biomarkers as the outcome is that reverse causality (i.e., values of biomarkers affecting self-reports of discrimination) as well as common-source bias seem less likely. While we may capture major forms of institutional discrimination through items available in the lifetime discrimination measure, we only capture experiences of discrimination that people are able to recognize and are willing to report. This does not speak to forms of structural racism or other forms of oppression that exist and result in material, opportunity, and political deprivation whether or not an individual was aware of such experiences and reported them as discriminatory or harmful (Krieger, 2011, 2012; Williams and Mohammed, 2009; Bailey et al., 2017). There is a growing body of evidence that social factors such as structural racism, through interlinked and mutually reinforcing practices, policies, and patterns, directly and indirectly affect health and wellbeing (Bailey et al., 2017). Though we include appraised burden of discrimination, which provides some insight into the potential impacts and perceptions of discrimination as a barrier or hindrance without reliance on the report of or reaction to a specific experience, it still relies on self-report. Additionally, we focus on general reports of discrimination, without consideration of the attribution of experiences (e.g., race, gender). While there is substantial evidence documenting that the material, social, and economic impacts of discrimination differ by race, for example, with Black and other marginalized racial groups carrying a disproportionate burden (Williams et al., 2019a, 2019b; Williams and Mohammed, 2009; Bailey et al., 2017), there has not been clarity on whether assigning an attribution, or primary reason, for experiences of discrimination to race more negatively impacts health (Lewis et al., 2006; Kessler et al., 1999; Roberts et al., 2008; Guyll et al., 2001). However, studies have found differential health impacts of discrimination reported among Black adults compared to white adults (Guyll et al., 2001; Troxel et al., 2003). The role of attribution of experiences of discrimination should continue to be explored in future work.
Also, while the MIDUS I wave is a nationally representative sample of US adults, the proposed analysis is a subsample of MIDUS participants, including the longitudinal (MIDUS II) and the Milwaukee waves. The overall percentage of Black respondents in the national MIDUS study was small and, as a result, most of the Black population in the MIDUS study was recruited in the Milwaukee wave. As such, the Milwaukee wave is less representative; however, the data provide insight into the experiences of Black Americans living in highly segregated cities. Researchers found that participants in the MIDUS II Biomarker Substudy were similar to participants in the full sample, except that they had higher levels of educational attainment, were less likely to smoke, and were more likely to use alternative therapies (Dienberg Love et al., 2010). Additionally, the small sample sizes of Black (n = 386) and "Other" (n = 128) individuals limit the power to capture interactions between race and discrimination measures; however, future work should assess whether interactions between multiple measures and race occur on the additive rather than the multiplicative scale (Bauer, 2014). Additionally, future research should employ other considerations for modeling multiple experiences of marginalization and inequitable treatment, such as latent class analysis (Bauer, 2014).
Our analysis also has several strengths in that it adds to the literature examining the health impacts of discrimination by using and comparing the effects of multiple measures of discrimination on AL subscales, individual biomarkers, and overall scores. These findings extend the existing body of literature by showing that associations between multiple measures of discrimination and AL vary by measure used (i.e., everyday, lifetime, appraisals of burden) and subscale. These unique pathways may be useful in identifying potential points of intervention, though efforts should include rectifying harms from all forms of discrimination through institutional (e.g., policy) and cultural interventions (e.g., changes to norms) (Williams et al., 2019a; Bailey et al., 2017). For example, knowing where experiences of discrimination occur via use of the Major Experiences of Discrimination scale provides opportunities for accountability (Krieger, 2012). Discrimination occurring in housing, policing, lending, or education can be intervened upon through legal action (Krieger, 2012; Massey, 2015; Nielsen and Nelson, 2005) as well as through organizing and building political support (Bailey et al., 2017). To further the understanding of pathways that drive associations between individual measures of discrimination and AL subscales, we employed an outcome-wide analysis to capture specific indicators that shed light on the biological pathways through which multiple forms of discrimination may uniquely impact health. We also add to assessments of discrimination by capturing the appraisal of experiences of discrimination as a barrier to living a full life and making life harder, outside of reference to a specific event/experience. Last, we also identified that associations between measures of discrimination and AL outcomes can vary based on how discrimination is operationalized in the analysis.
These findings provide points of focus for future research, specifically around the pathways through which discrimination adversely impacts health and the importance and theoretical implications of how discrimination is operationalized. While most stressful events do not affect health (Williams and Mohammed, 2009), identifying salient experiences of discrimination that are likely to have implications for population health remains important for current and future work, in an effort to understand and intervene upon discriminatory processes that unfairly disadvantage some and unfairly advantage others (Jones, 2014). | 2022-03-06T16:06:17.007Z | 2022-03-01T00:00:00.000 | {
"year": 2022,
"sha1": "6b91edef02d29466d8e71e1c0ba2328aa01c435d",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.1016/j.socscimed.2022.114866",
"oa_status": "HYBRID",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "5d6f66ea74578d727084ead8d7e7d0e4905e1fe4",
"s2fieldsofstudy": [
"Medicine",
"Psychology"
],
"extfieldsofstudy": [
"Medicine"
]
} |