Telemedicine for the pediatric preoperative assessment during the COVID-19 pandemic: Evaluating patient and provider satisfaction

The COVID-19 pandemic has presented unprecedented challenges in delivering healthcare to surgical patients. To avoid delays in patient care while still minimizing COVID-19 infection risk to patients and providers, anesthesiology preoperative clinics were presented with the opportunity to implement telemedicine to assess patients' risks prior to surgery. This study explores patient and provider satisfaction with video-based telemedicine preoperative clinic visits during the COVID-19 pandemic via a patient and provider satisfaction survey. A vast majority (>93%) of patients expressed overall satisfaction with telemedicine visits. Similarly, >85% of providers agreed with the benefits of and expressed overall satisfaction with the preoperative telemedicine visits. Overall, patient and provider study participants had positive feedback in response to anesthesia preoperative telemedicine visits. Future studies could assess the preference for telemedicine over in-person visits once fears of COVID-19 spread have been mitigated, as well as outcomes comparing telemedicine and in-person visits.

Introduction

The COVID-19 pandemic presented unprecedented challenges in delivering healthcare to patients around the world, resulting in delays in non-urgent surgeries and elective healthcare visits that had immeasurable medical and economic impacts on patients and healthcare systems. Most significantly, delays in diagnostic evaluation and treatment can result in more advanced disease and worse outcomes for patients. 1 The World Health Organization recognized and warned against the effects of overwhelmed healthcare systems and stressed the importance of maintaining equitable access to essential health services. 2 Telemedicine was rapidly adopted as a way to deliver healthcare during the pandemic while reducing exposure to potentially ill patients, preserving personal protective equipment, and minimizing the number of patients in waiting rooms at healthcare facilities. 3 The Centers for Disease Control and Prevention (CDC) and various professional medical societies, including the American Society of Anesthesiologists (ASA), provided guidance to healthcare professionals about implementing telemedicine in their own practices. 4,5 The ASA supported continued investment in telemedicine and recommended its use for components of the preoperative patient evaluation. 6 The U.S. Centers for Medicare & Medicaid Services (CMS) also dramatically expanded access to telemedicine services, setting Medicare reimbursement for telemedicine at rates equivalent to in-person visits. 5,7

The role of the anesthesiology preoperative clinic is to identify and optimize patients' underlying medical conditions and to provide the care coordination, patient education, and counseling that maximize patient safety and outcomes. 8,9 Anesthesiology preoperative clinics were therefore presented with the opportunity to implement telemedicine to assess patients' risks prior to surgery. Unique to the pediatric setting, healthcare workers are challenged with not only evaluating their patients but also reassuring, connecting with, and guiding these children's families and guardians. As telemedicine evolves and becomes a potentially permanent option in patient care, it is important to include it in measures to preserve and improve patient satisfaction across all healthcare delivery modalities.
Patient and provider satisfaction should play a significant role in the endurance of telemedicine after the COVID-19 pandemic is no longer driving the practice. Studies of patient satisfaction with telemedicine services during the pandemic have found high rates of satisfaction among adult patients, 3,10,11 and we predicted the same would hold in this study. To our knowledge, this is the first descriptive observational study of patient and provider satisfaction with telemedicine use in pediatric preoperative clinics since the start of the COVID-19 pandemic. This study explores patient and provider satisfaction with video-based telemedicine visits during the COVID-19 pandemic, utilizing a telemedicine satisfaction survey of pediatric patients and their parents or guardians who presented to an Anesthesiology Preoperative Evaluation Clinic, and of the physicians and advanced practice providers who conducted the visits.

Intervention: telemedicine protocol

Children's National Hospital is a nationally ranked, freestanding, 323-bed pediatric acute care children's hospital located in Washington, D.C. The Anesthesiology Preoperative Care Clinic (POCC) sees an average of 900 patients a year, either in person or via telemedicine, in preparation for surgery or procedures under anesthesia. The POCC sees patients with complex medical conditions or multiple comorbidities, those who have had problems with anesthesia in the past or have severe anxiety or behavioral concerns, and any patient or guardian who desires education and counseling prior to surgery. Upon receiving a consultation request from the surgeon, a POCC provider reviews the patient and determines whether the patient is low risk and can be evaluated by chart review alone or warrants a POCC visit. Patients ineligible for telemedicine are those with unstable comorbidities, such as uncontrolled asthma or a decompensated cardiac condition, those with syndromes or conditions associated with potentially difficult intubation, those with morbid obesity, and patients with recent respiratory illness requiring auscultation for anesthesia clearance. Patients who required routine preoperative laboratory studies were given the option of attending an in-person POCC visit, which would allow all necessary studies to be obtained, or having the visit via telemedicine and presenting to our laboratory facilities for blood work at a different time. Similarly, patients without access to or knowledge of the technology required to participate in telemedicine were offered an in-person visit.

The clinic's video-based telemedicine visits were conducted via Zoom for Healthcare (Zoom Video Communications, Inc., San Jose, California), a Health Insurance Portability and Accountability Act-compliant platform with data in motion encrypted at the application layer using the Advanced Encryption Standard. Physicians and advanced practice providers in the anesthesiology department received training to conduct virtual telemedicine visits. Patients and their parents/guardians were assessed for capability for a video-based telemedicine visit and then given instructions to prepare for their visit. The telemedicine visit included verbal or electronic consent; confirmation of the upcoming procedure and the goal of the visit; review of medical history, anesthesia history, and medications; and a brief physical exam. Preoperative instructions and the anesthesia and postoperative plan were discussed. The anesthesiologist conducted the telemedicine visit in a private office.
At the conclusion of the telemedicine preoperative clinic visit, patients or their parents/guardians were given the opportunity to complete an anonymous survey about their experience. A separate survey was created to assess overall provider satisfaction with telemedicine encounters at the end of the study timeframe.

Study design

A chart review was conducted of telemedicine outpatient encounters in the Anesthesiology Preoperative Care Clinic at Children's National Hospital from September 1 to December 15, 2020. Only patients who were seen via video-based encounter were included; patients who received telephone consults or in-person visits were excluded. Following the telemedicine visit, the patient or their parent or guardian was contacted by a research assistant to voluntarily complete the anonymous patient satisfaction survey. Verbal consent to participate was obtained. A structured survey was created to assess patient and caregiver satisfaction with video-based telemedicine visits. The device used and the education level of the respondents were obtained by querying respondents during the survey. The survey used modifications of the telemedicine usability questionnaire (TUQ) and included questions regarding interaction quality, ease of use, privacy concerns, comparison to in-person visits, and overall satisfaction. The TUQ is a validated survey tool used to measure the quality of computer-based user interfaces and telemedicine interactions and services. 12 The survey allowed responses on a Likert scale ranging from 1 (strongly disagree) to 5 (strongly agree). The survey permitted comparison of responses across different devices, measured the quality of the telemedicine interaction, and assessed patient satisfaction with the encounter compared to conventional in-person visits. A similar Likert scale survey was created to assess provider satisfaction with telemedicine consultation. This project was reviewed by the Children's National Institutional Review Board (IRB) and was determined to be a Quality Improvement Initiative. As such, it was exempt from further IRB review and not under the direct oversight of the IRB.

Data collection

If the patient was over the age of 18 and able to make medical decisions, they participated in the survey themselves. However, since the majority of the patients were underage, the parent or guardian present at the visit was contacted by phone following the telemedicine visit. Non-English-speaking respondents were contacted with an interpreter. Phone calls were made using the hospital line or via the hospital operator, to allow the hospital name to display on the recipient's caller ID. The first attempt at contact was made on the same day as the telemedicine visit. For unanswered calls, two additional attempts were made the following day at different times. After three attempts, the patient was deemed unable to be contacted. Demographic data and survey responses were collected anonymously. Study data were collected and managed using REDCap electronic data capture tools hosted at Children's National Hospital. All providers who conducted video-based telemedicine visits were given an email link to anonymously complete a satisfaction survey regarding their overall views of telemedicine for the pre-anesthesia evaluation.

Data analysis

Patient demographics were presented descriptively using means with standard deviations (SD) for continuous data and frequencies with percentages for categorical data. The Likert scale responses from the patient and provider satisfaction surveys were presented as frequencies with percentages of the corresponding responses. Mean and median scores, standard deviations, and ranges for each item were calculated by assigning points to each response as follows: 1 = strongly disagree, 2 = disagree, 3 = neither agree nor disagree, 4 = agree, and 5 = strongly agree. Statistical analyses were performed using R statistical software, version 4.0.0 (R Core Team, 2020).
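Because the analyses were performed in R, the following minimal base-R sketch illustrates the scoring and summary approach just described. The item, responses, and all values shown are hypothetical examples, not the study's actual code or data.

    # Minimal base-R sketch of the Likert scoring described above.
    # The survey item and responses below are hypothetical examples.
    likert_levels <- c("strongly disagree", "disagree",
                       "neither agree nor disagree", "agree", "strongly agree")

    # Hypothetical responses to one survey item
    responses <- c("agree", "strongly agree", "strongly agree",
                   "neither agree nor disagree", "agree")

    # Assign points 1-5 by matching each response to its position on the scale
    scores <- match(responses, likert_levels)

    # Frequencies with percentages of each response category
    freq <- table(factor(responses, levels = likert_levels))
    pct <- round(100 * freq / length(responses), 1)
    print(rbind(frequency = freq, percent = pct))

    # Mean, median, standard deviation, and range for the item
    print(c(mean = mean(scores), median = median(scores), sd = sd(scores),
            min = min(scores), max = max(scores)))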
Results

Between September 1 and December 15, 2020, a total of 325 patients received clinic-based preoperative consultations, of which 204 encounters were conducted using a video-based telemedicine platform. Survey responses were obtained from 101 of the 204 encounters, for a response rate of 49.5%. Patient demographic characteristics are presented in Table 1. The average age of participating patients was 9 years. The majority of study participants preferred English as their primary language (88%). Most participants conducted the telemedicine visit using a smartphone (52%), followed by a laptop (37%).

The majority of the patient study participants (>93%) either agreed or strongly agreed with statements regarding the benefits of the preoperative visit, that their concerns were addressed, that video clarity was acceptable, that they were able to talk easily and understand recommendations, that patient privacy was maintained, that they saved time traveling, their overall satisfaction with the visit, and their willingness to participate in telemedicine again. Fewer participants (84% agreed or strongly agreed) felt the technology was easy to use. The lowest mean score (4.3) was for the opinion that telemedicine visits are as effective as in-person visits; still, a clear majority responded positively, with 85% agreeing or strongly agreeing that the telemedicine visit was as effective as an in-person visit. The results of the study participant satisfaction survey can be seen in Table 2.

Eighteen of 21 providers (15 anesthesiologists and 3 nurse practitioners) completed a satisfaction survey regarding their overall experience with the telemedicine anesthesia preoperative visits, for a response rate of 85%. Similar to patients and caregivers, providers who conducted video-based telemedicine preoperative clinic visits were overwhelmingly satisfied. The majority of providers (86%) either agreed or strongly agreed with statements regarding the benefits of and overall satisfaction with the preoperative visits, the ability to communicate with and hear patients and their parents/guardians during visits, the ability to obtain necessary information, and the ease of technology use. Fewer providers were confident that patients' privacy was protected (83%), and again the lowest scores were associated with the opinion that telemedicine visits are as good as in-person visits (mean score of 3.7, with 61.1% of providers agreeing or strongly agreeing). Despite this, 88.2% of providers agreed or strongly agreed that they were ultimately "very satisfied" with the telemedicine preoperative anesthesia visit. The results of the provider satisfaction survey can be seen in Table 3.

Discussion

Preoperative clinics provide a structured environment for anesthesiologists to perform assessments prior to surgery. The goals of the preoperative evaluation are to optimize patients' medical conditions, create rapport with the patient and family, and coordinate care with specialists and surgeons prior to surgery.
The expectations and goals of the surgery are reviewed, potential risk factors and complications are discussed, and factors that might influence the surgery and postoperative recovery are identified. Preoperative clinics should be accessible to the patient and the care team. Many practice groups, including anesthesiology preoperative clinics, have increasingly relied on telemedicine visits during the COVID-19 pandemic to maintain patients' access to care. Telemedicine has allowed healthcare practices to mitigate the spread of COVID-19 and other illnesses while also preserving personal protective equipment. 13 Apart from limiting possible exposure, telemedicine provides several potential benefits, including increased access to healthcare, particularly for underserved and rural areas; reduced travel time to and from the physician's office; reduced wait times during appointments; and reduced costs associated with attending an in-person appointment. 7,14 Telemedicine visits have reduced patients' total clinical encounter time, with a reported 72% reduction in duration and no significant change in quality of care. 15 This can lead to reduced costs for patients as well as increased efficiency for healthcare providers. While there has been a steep learning curve for both patients and healthcare practitioners in implementing telemedicine, the levels of patient and provider satisfaction with these platforms suggest telemedicine could become a permanent tool for healthcare delivery. 3,13

We found telemedicine to be a valuable tool for conducting an anesthetic preoperative evaluation, with high patient and provider satisfaction. Anesthesiologists in the preoperative clinic are able to obtain a comprehensive history, review patient-specific risk factors, discuss the risks of anesthesia and surgery, develop a preoperative plan, recommend labs and tests to be completed prior to surgery, prepare the patient and family for the day of surgery, and answer anesthesia-related questions over video-based visits. While only a limited physical examination can be carried out during a telemedicine visit, Prasad et al. (2020) showed that the airway examination, which is critical for an anesthesiology evaluation, can be performed thoroughly via a virtual visit with appropriate equipment and patient guidance. 10 If it is medically appropriate to delay a formal physical exam until the morning of surgery, a pre-anesthesia clearance visit conducted via telemedicine is an alternative that should be considered in the pediatric population.

The lack of a full physical examination raises concerns regarding the appropriateness of telemedicine consultation for medically complex patients. There is a paucity of studies identifying which medical conditions should exclude patients from telemedicine consultation. 16 We permitted telemedicine visits for medically complex patients. 17 Similarly, we had zero same-day surgical cancellations due to an incomplete preoperative evaluation, suggesting the integration of telemedicine did not negatively impact the quality of our preoperative assessment.

Telemedicine visits during the COVID-19 pandemic have been shown to have high patient satisfaction in the adult population. 10,11 Patient satisfaction is an important measure of high-quality, value-based care and has been shown to reduce 30-day readmission rates and postoperative complications. 18 In this study, patients and parents/guardians reported high satisfaction with the telemedicine visits, with mean scores of 4.5-4.6.
Along with high satisfaction with the preoperative anesthesia telemedicine visits, patients or parents/guardians indicated that they would use the telemedicine services again (mean score 4.9). Patients also appeared to be satisfied with the interface of Zoom itself, despite reported difficulties setting it up. Prior to each video-based telemedicine visit, each parent or guardian was contacted, their capability for a video-based telemedicine visit was assessed, and log-in instructions were provided. The parent or guardian was emailed the Zoom link, along with the name of the clinician and the time of the appointment. Although patients, parents, and guardians indicated high satisfaction with how the technology enabled them to talk to, see, and understand providers during the video-based visit, they shared that the technology was difficult to set up, citing the timing of receiving and finding the email link, confirming an internet connection, and comfort with the technology as barriers. With this knowledge, we have implemented changes in our telemedicine scheduling process, including making the email confirmation easier to search for, sending email reminders two days prior to the appointment, and calling each scheduled patient the day prior to their appointment to ensure they have the Zoom link and an adequate internet connection and are able to log in. With these changes in place, there have been fewer delays in patients logging into the virtual visit, our no-show and cancellation rate has decreased, and parents/guardians have expressed increased satisfaction with the log-in process.

Table 2. Telemedicine patient satisfaction Likert scale score data (N = 101).
Table 3. Likert scale score data for providers (N = 18).

A majority of respondents also agreed that telemedicine visits saved them time traveling to the clinic. Our telemedicine visits are scheduled at one-hour intervals, and we have found that even with delays due to technical issues, the telemedicine visit rarely requires an hour to complete. In contrast, our in-person visits are scheduled in 30-60 minute intervals depending on patient complexity, and while the visit itself may take 30-60 minutes, there is additional time and money spent when accounting for travel time and costs, especially in a metropolitan area with significant travel times due to traffic, as is the case in Washington, D.C. 17

A previous study indicated that the patient-provider relationship during a telemedicine visit is comparable to that of in-person visits: patients were equally satisfied with the physician's ability to build rapport, share a mental model with the patient, and advocate for patient-centered communication. 19 Consistent with that study, the patients we surveyed expressed high satisfaction with their interaction with the healthcare provider during the telemedicine visit. However, a unique aspect of pre-anesthesia evaluation visits is that most patients do not have an established relationship with the provider; therefore, the satisfaction expressed by these study participants pertains to the telemedicine visit itself, without any previous provider-patient relationship biasing satisfaction scores. Notably, the feasibility of telemedicine increases access to care and diminishes many obstacles for patients who may require multiple pre-anesthetic evaluations over time, thereby improving those patient-provider relationships and patient outcomes.
The importance of face-to-face interaction has been stressed by healthcare providers expressing concern about the barriers to telemedicine. 20 Although patients and parents/guardians shared that the anesthesia evaluation was beneficial to the patient's care, the lowest mean scores were seen when the telemedicine visit was compared to an in-person visit. Some caretakers felt that the telemedicine visits were not as effective as in-person visits, and anecdotal reports indicate that caretakers expressed concern about the lack of a physical examination. Similarly, adult patients have indicated that telemedicine visits are not the same as in-person visits due to the lack of a physical examination and the lack of "human touch". 21 We must effectively communicate and demonstrate the value of telemedicine to enhance confidence in the encounter. Reassurance is necessary that many elements of the physical examination pertinent to anesthesia clearance can be completed effectively via telemedicine with patient and parental collaboration. The clinician should use a high-resolution camera with ample lighting and maintain eye contact. Every individual present should be acknowledged and introduced. An overview of the visit and its expectations should be stated, as well as possible tasks that may require patient or parental assistance. After a comprehensive and detailed clinical assessment and treatment plan, allowing time for questions and providing the clinician's contact information assures patients that they are receiving the same quality of care as in an in-person encounter.

Verbal or electronic consent was obtained from patients and guardians prior to or during the telemedicine visit, in accordance with guidance from legal counsel at Children's National Hospital. The consent informed participants that a breach of the technology's security protocols could allow medical information to be shared, cautioned that telemedicine is not an exact science, and stated that no guarantees could be made regarding the outcomes and results of the examinations and treatments. Despite these cautions, most patients and guardians indicated they felt confident that the patient's privacy was respected and that the quality of care was not compromised by telemedicine. Many patients and parents/guardians, especially those of children with medical complexity, shared that they were already familiar with telemedicine visits with other specialists, which made them more comfortable with each visit. When the survey started on September 1, 2020, 497 (37%) of the 1,336 Children's National Hospital ambulatory visits were telemedicine visits. Similarly, on December 15, 2020, 35% of all Children's National Hospital ambulatory visits were telemedicine encounters, showing that more than one-third of patients utilized telemedicine services consistently during the study period.

Unlike many patient satisfaction studies involving telemedicine services, this study also surveyed physicians and nurse practitioners, since provider satisfaction is a critical factor in determining whether telemedicine will continue to be utilized. 22 Previous studies have examined various determinants of provider satisfaction, including effective organization, reliable technology, sufficient financing, institutional support, and acceptance from providers and patients. 23 In our study, providers rated telemedicine preoperative visits favorably in terms of physician-patient communication and the technological interface.
Anecdotally, providers reported that telemedicine is extremely efficient because it eliminates the time needed for travel, registration, and check-in at an in-person visit; that it was easier to communicate with patients without the PPE required for in-person visits; that patients were more easily accessible, with visits allowing participation from additional guardians/caretakers who may not have been able to attend in person; and that they felt safer overall with telemedicine services because of the decreased risk of exposure to COVID-19. Providers also noticed that patients and their families were more at ease, as they were able to conduct the visit in the comfort of their home, and the visit was perceived to be a more pleasant and joyful experience. As with patient opinions about telemedicine, providers' lowest mean score concerned telemedicine visits being as effective as in-person visits, possibly indicating that providers and patients still find that the most thorough or complete evaluation is in person. Anecdotally, providers cited the inability to auscultate as the major concern.

Limitations

The COVID-19 pandemic expedited the use of telemedicine services, and although the study started a few months after the implementation of telemedicine in our clinic, the sample size was small (101 survey responses from 204 encounters). The results of the survey may have been influenced by respondents' relief at reducing their exposure to COVID-19 through telemedicine visits instead of in-person clinic visits. The survey did not include questions regarding patient or provider concerns about the pandemic and their individual health, or their thoughts on the telemedicine visits in relation to COVID-19. Recall bias may have affected the survey results depending on whether the parent/guardian was reachable immediately after the telemedicine encounter or the next day. There is also a selection bias: patients included in this study had access to technology, knowledge of the technology, and the literacy level needed to understand the email link that was sent to them. A comparative study between telemedicine encounters and in-person visits could be more effective in determining patient and provider satisfaction with telemedicine services.

Conclusion

The COVID-19 pandemic pushed providers to re-examine access to care and incorporate telemedicine into their practice. Patient satisfaction metrics are an important component of healthcare quality and play a significant role in the long-term acceptance and success of a telemedicine program. Telemedicine is a valuable tool for conducting an anesthetic preoperative evaluation, with high patient and provider satisfaction. To our knowledge, this is the first study addressing patient and provider satisfaction with telemedicine in a pediatric anesthesiology preoperative clinic during the COVID-19 pandemic, and it serves as an excellent pilot study preceding a randomized comparison of telemedicine and in-person visits to assess outcomes.
Findings from the Medicaid Competition Demonstrations: a guide for states

The Medicaid Competition Demonstrations were initiated in 1983-84 in six States (California, Florida, Minnesota, Missouri, New Jersey, and New York). State experiences in implementing the demonstrations are presented in this article. Although problems of enrolling Medicaid recipients in prepaid plans or with primary care case managers under these demonstrations proved challenging to States, lessons were learned in three key areas: program design and administration, health plan and provider relations, and beneficiary acceptance. Therefore, States considering similar programs in the future could benefit from these findings.

Introduction

In 1982, the Health Care Financing Administration (HCFA) approved demonstration projects in six States to experiment with alternative methods of organizing and financing the delivery of care to Medicaid recipients. The demonstrations were developed in response to concerns that the Medicaid program was not fully meeting its goals of access to mainstream medicine, continuity of care, and cost containment. On the one hand, there was evidence of "doctor shopping" and of high self-referral rates contributing to excessive utilization, a problem that may have been exacerbated by the most common mode of cost control, i.e., fee constraints. On the other hand, patients lacked access to primary care physicians in some locations because of low Medicaid payment rates. As a result, they often received inadequate or inappropriate (e.g., emergency room) care.

The demonstrations were intended to test a number of concepts that State and Federal officials hoped would contain costs while promoting greater continuity of care and improving or maintaining access to care. The goal was to change the incentives facing both providers and consumers under Medicaid so that program goals could be met more effectively. The demonstrations incorporated a variety of innovations to the traditional Medicaid program structure:
• Capitation as a mechanism of provider payment.
• Case management by a primary care physician "gatekeeper."
• Limitations on provider choice as a means of promoting efficiency and, it was hoped, competition among providers for patients.

The purpose of this article is to share the lessons learned in the demonstrations with other States that are considering making similar changes in their Medicaid programs. We first provide an overview and comparison of the demonstration sites; then we offer findings from the demonstration experience in three areas: program design and administration, health plan and provider relations, and beneficiary acceptance.

Overview of demonstration sites

Highlights of the demonstrations are provided below and summarized in Table 1. References to the series of detailed case studies available for each demonstration are cited at the end of the article. Particularly for the completed demonstrations, an indicator of State satisfaction with the projects is their current status. Three of the demonstrations have been converted to ongoing State programs (Santa Barbara, California; Missouri; and New Jersey), one has been extended (Minnesota), and two have ended (Monterey, California; and Monroe County, New York). 1

California demonstration programs

There were two programs in California: the Monterey County Health Initiative and the Santa Barbara County Health Initiative.
In both, the county established a new authority that accepted capitation payments from the State and, in turn, contracted with physicians, clinics, hospitals, and other providers. Despite this organizational similarity, the programs differed in two critical areas. The first relates to the method of provider payment: Monterey paid a fee for service plus a case-management fee for primary care physicians, while Santa Barbara capitated primary care physicians for their services and paid for referral services on a fee-for-service basis. Second, physician attitudes differed: although there was resistance to managed care in both counties, a few well-respected physicians served on an advisory board in Santa Barbara and helped smooth the way for utilization review and other cost-containment efforts. Physician opposition to the capitation of case managers, for example, was overcome in Santa Barbara but not in Monterey.

The Monterey program was terminated in March 1985 because of cost overruns and administrative difficulties; the Santa Barbara program, however, has become an ongoing State initiative. California passed State legislation and obtained a 1915(b) waiver 2 from HCFA allowing the Santa Barbara program to continue operating. The Monterey program ran into trouble because physicians were paid on a fee-for-service basis at rates that exceeded Medi-Cal's, without any incentives or administrative controls to alter medical practice patterns until the second year of operation. In addition, the resulting cost overruns were not recognized for several months because the management information system was inadequate; by the time the problems were identified and controls were developed, the demonstration could not be resuscitated. An important lesson is that counties accepting financial risk for the delivery of services must behave like health maintenance organizations (HMOs) with respect to controlling and tracking utilization.

1 This excludes Florida's program for the frail elderly, which has not been followed as part of this evaluation.
2 Some Federal requirements for Medicaid programs can be temporarily suspended by the State acquiring a HCFA program waiver. 1915(b) waivers can be given to suspend requirements that Medicaid programs be uniform statewide, to limit recipients' freedom of choice, to change comparability of services, or to suspend upper payment limits that require that capitation payments not exceed the fee-for-service costs of comparable recipients.

Minnesota Prepaid Competition Demonstration Project

Minnesota elected to experiment with prepayment in three counties: one urban (Hennepin, which includes the city of Minneapolis), one suburban (Dakota), and one rural (Itasca). The approach in Hennepin and Dakota Counties, which are geographically contiguous and served by many of the same HMOs, differed from that in Itasca County. The demonstration in Hennepin and Dakota required Medicaid recipients, including most aged, blind, and disabled persons, to enroll in prepaid health plans. In Itasca, the county itself was treated like a health plan; it received capitation payments from the State and, in turn, paid providers. The demonstration in Minnesota (Hennepin, Dakota, and Itasca) was delayed in starting and will end in June 1991.
Missouri Managed Health Care Project

In Jackson County (Kansas City), Aid to Families with Dependent Children (AFDC) cash recipients were required to enroll either in one of five prepaid health plans (two formed by community health centers, an individual practice association (IPA) type HMO, and two hospital-based plans) or with individual physicians who were paid a fee for service and a case-management fee. By far, the most popular plans were the two sponsored by the teaching hospitals. The program also proved a useful introduction to prepayment for two community health centers, one of which hopes to obtain prepaid contracts in the future. Missouri also obtained the Federal waivers necessary to operate an ongoing program, and four of the five prepaid plans that participated in the demonstration have continued serving Medicaid recipients on a prepaid basis. To do so, each met Federal requirements for participation (i.e., became a federally qualified HMO with no more than 25 percent Medicare and Medicaid enrollment or qualified as a public HMO or federally funded community health center).

New Jersey Medicaid Personal Physician Plan

The New Jersey program originally was intended to be statewide, but, by the end of the demonstration period, it was implemented in only 11 of 20 counties. New Jersey's was the only demonstration project in which recipient participation was voluntary, and it had difficulty attracting enrollment. The State contracted with individual physicians, clinics, and community health centers and paid them a capitation to provide primary care services. The program was not successful in attaining its goal of attracting new physicians to the Medicaid program, and most of the demonstration enrollees were served by a small number of traditional Medicaid providers. New Jersey's demonstration has been converted to an ongoing program, operating as Garden State Health Plan, a State-certified HMO. Federal legislation was needed to establish this health plan because the State itself acts as the HMO.

The Monroe County (New York) MediCap Program

In Monroe County, New York State capitated a county-level intermediary known as MediCap, Inc., which in turn capitated prepaid health plans. Although the State had intended MediCap, Inc., to contract with several health plans, most declined to bid, and a single HMO (Rochester Health Network, known as RHN) provided care to demonstration participants throughout most of the project. However, RHN (a network model with multiple providers, including community health centers, individual physicians, and hospitals) allowed participants considerable choice of provider. The New York (Monroe County) demonstration ended in August 1987 (despite having received an extension from HCFA to operate through April 1988) when RHN withdrew because it was unable to reach agreement with the State on an appropriate capitation rate. As a result of declining utilization in the fee-for-service system, the State wanted to pay only a small increase for the third year, but RHN said its contracting providers faced considerably higher costs. (Providers were still unhappy with the State for having reduced payment rates between the first and second years.) Some Monroe County Medicaid recipients continue to be enrolled on a voluntary basis in another local HMO.

Florida Alternative Health Plan

Florida originally planned to implement four modules, but only two of them ever became operational.
Module A, which involved Medicaid enrollment in prepaid plans, failed largely because the established HMOs found the Medicaid capitation rate too low, particularly when compared with the alternatives (Medicare capitation rates, for example, were much more generous). In addition, the plans objected to what they viewed as arbitrary enrollment caps established by the State. Module B, case management for recipients who consistently overutilize or underutilize medical care, was partially implemented but quickly converted from the demonstration to an ongoing State program. After a lengthy development period, Module C, prepaid health plans for frail elderly patients, eventually became operational at one site only. Module D, medical care vouchers offered by private insurers, failed to attract insurer interest and thus could not be implemented. Because its only operational demonstration had little in common with the other States, Florida was excluded from the Research Triangle Institute (RTI) evaluation and is addressed only selectively herein.

Shared goals and characteristics

Despite their differences, the demonstrations shared some fundamental concepts: capitation, primary care case management, and limited provider and health plan choice. These concepts were employed to create competition among providers and health plans in order to promote cost-effective health care delivery. All of the programs used capitation as a method of payment, at least at one level. In some cases, States capitated health plans either directly (Minnesota and Missouri) or through an intermediary (New York). In other cases, States treated counties like health plans and paid them a capitation (Monterey and Santa Barbara, California; and Itasca County, Minnesota). The counties at risk in turn contracted with providers, paying them on either a capitation (Santa Barbara) or a fee-for-service (Itasca and Monterey) basis. Finally, New Jersey capitated primary care physicians directly.

Most of the demonstrations employed case management; in some, it was explicitly a requirement of physician and hospital participation. Demonstrations that contracted directly with individual physicians required them to assume case management responsibilities, i.e., to provide all primary care and to authorize specialty and inpatient referrals. In Monterey and Missouri, physicians received a case-management fee ($1.50 per patient per month in Missouri) for this service. In demonstrations that contracted with health plans, the method of case management was typically left up to the individual health plan (Minnesota and New York).
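To make these payment mechanics concrete, the following minimal R sketch illustrates the two arrangements just described: per-member-per-month capitation, and fee-for-service case management with a monthly per-patient fee. All enrollment figures and dollar rates are hypothetical except the $1.50 case-management fee cited above; the upper-payment-limit check reflects the 1915(b) waiver requirement described in footnote 2.

    # Illustrative R sketch of the two payment arrangements described above.
    # All figures are hypothetical except the $1.50 case-management fee.

    # Capitation: the plan (or county at risk) is paid a fixed rate per
    # member per month, regardless of services actually used.
    enrollees <- 2000      # hypothetical plan enrollment
    cap_rate  <- 85.00     # hypothetical capitation rate ($ per member per month)
    ffs_cost  <- 92.50     # hypothetical fee-for-service cost of comparable recipients

    monthly_capitation <- enrollees * cap_rate

    # Under a 1915(b) waiver, capitation payments may not exceed the
    # fee-for-service costs of comparable recipients (see footnote 2).
    stopifnot(cap_rate <= ffs_cost)

    # Fee-for-service case management (as in Missouri): usual fee-for-service
    # payments plus a $1.50 per-patient-per-month case-management fee.
    panel_size <- 150      # hypothetical case-manager panel
    monthly_case_management <- panel_size * 1.50

    cat("Monthly capitation to plan:", monthly_capitation, "\n")
    cat("Monthly case-management fees to physician:", monthly_case_management, "\n")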
Although dubbed the "competition" demonstrations, evidence of competition among providers and/or plans was mixed at best. By and large, the demonstrations did not change the fact that States are more often in a position of recruiting health plans and providers than of selecting among competitors. In Hennepin and Dakota Counties, where the health plans were more eager for Medicaid business than in any other site, the State received nine applications, seven of which were accepted. Although other sites, such as New York, had trouble recruiting adequate numbers of participating plans, there was not noticeably more competition among participating plans for enrollees in Minnesota than in New York. This primarily reflected the plans' concerns about the inability to make money by serving the Medicaid population, resulting in a reluctance to enroll large numbers of recipients. About the only evidence of competition (and this was not extensive) occurred among traditional Medicaid providers that had formed health plans specifically to serve this population. Two plans in Minnesota and four in Missouri fell into this category, and each of those plans focused more attention on the demonstrations because they viewed Medicaid as a major line of business.

Program design and administration

Developing and implementing the Medicaid demonstrations required a substantial administrative commitment from the respective States and/or counties. Minnesota and Missouri created State-level offices to manage the demonstration and other prepayment initiatives; New Jersey relied on existing Medicaid staff. In California (Santa Barbara and Monterey Counties) and New York (Monroe County), most administrative functions were performed by county-based authorities. When a State decides to implement prepayment and/or primary care case management, it faces the following administrative issues:
• Legal requirements, such as the need to enact State legislation and/or obtain waivers from HCFA.
• Program design choices relating to the populations and services the program will cover as well as the type of payment and delivery system to be established.
• Problems associated with managing enrollment and disenrollment.
• The management of data flow, particularly current enrollment information and utilization data (pseudoclaims 3), to and from participating plans and providers.
• The monitoring of participating plans with regard to financial solvency and quality assurance.
• Staffing needs.

Waivers and legislation

States that establish Medicaid prepayment programs modeled on the demonstrations will generally need to apply to HCFA for waivers from certain provisions of Title XIX, Grants to States for Medical Assistance Programs. In addition, some States require approval by the State legislature (e.g., California, Minnesota, Missouri, and New York). HCFA granted section 1115 research and demonstration waivers to the demonstration sites, but requests to replicate these demonstrations are likely to be denied in the future because HCFA reserves this authority to test bona fide innovations. However, States can develop similar programs through the regular program waiver mechanism. In general, States need to apply for section 1915(b) Medicaid waivers to limit recipients' freedom of choice, offer a different Medicaid program in one area of the State, change the comparability of services among recipient categories (e.g., by adding a service), and/or make payments (such as capitation or case-management fees) that exceed the fee-for-service costs for comparable recipients. In addition, the county-at-risk model employed by Santa Barbara, Monterey, and Itasca Counties is no longer feasible under current law. Thus, in planning a new program, States should seek guidance at an early stage as to the organizational options that are acceptable and the circumstances under which waivers are likely to be granted.

Key program design considerations

States will face a number of decisions about program design early in the planning process. Among these are:
• Which Medicaid populations will be included?
Missouri served AFDC (cash) recipients only; Minnesota and Santa Barbara and Monterey Counties covered the Supplemental Security Income aged, blind, and disabled. Minnesota and California included the AFDC medically needy, and New York covered county Home Relief recipients in addition to AFDC (cash) recipients. New Jersey's program was open to any noninstitutionalized Medicaid eligible.
• What kind of payment and delivery system will be established? The demonstrations represent a mixture of competitive prepaid health plans (Hennepin and Dakota Counties in Minnesota, Missouri, and New York); fee-for-service partially capitated primary care case management (Missouri and New Jersey); and counties at risk (Santa Barbara and Monterey Counties in California, and Itasca County in Minnesota).
• What services will be included in the demonstration? The programs included most Medicaid-covered services. However, some States excluded items such as prescription drugs or dental care and reimbursed them on a fee-for-service basis. Minnesota largely excluded the room and board cost of institutional care (skilled nursing facility and intermediate care facility for the mentally retarded) from the capitation payments to health plans. Santa Barbara excluded certain services, including obstetrics, from the capitation payments to physicians.

The question of whether to operate a voluntary or a mandatory program, a critical policy decision, is addressed later in the Enrollment section. In many cases, the need for adaptations in programs to achieve political accommodations became evident early on. As with the development of any new program, public or private, extensive negotiation and compromise shaped the outcome. Two common sources of political friction were intergovernmental conflicts, notably between a powerful county and the State, and conflicts with the provider community. States need to strike an appropriate balance between necessary compromise and "giving away the store" to special interests, as arguably happened in Monterey County, leading to the demise of the demonstration there. In Monterey, it was the physicians' opposition to capitation that led to a change in the method of provider payment to fee for service. In other sites, compromise led to more satisfactory outcomes.

As an example of county influence, in Minnesota, one of about six States where Medicaid is jointly administered by the State and the counties, Hennepin County was reluctant to participate in the demonstration. Among other concerns, it feared an adverse financial impact on its medical center, which is a major Medicaid provider. The county thus established several conditions for its participation in the demonstration, most of which the State ultimately accepted. Two of the more visible outcomes of this negotiation were the enrollment of only 35 percent of the target Medicaid eligibles from Hennepin County in the demonstration, leaving the remainder under fee-for-service Medicaid, and the use of an outside broker (a nonprofit organization specializing in marketing public programs) to conduct consumer education and enrollment, because the county, which is responsible for eligibility determination, did not want its workers to assume this responsibility. (A similar concern for the county hospital in Monterey County resulted in agreement to payment rates that were widely criticized as too generous.)
In Missouri, primary care physicians were included as case managers and paid a fee for service only after a number of them opposed the State's plan to enroll Medicaid recipients exclusively in prepaid plans. (These physicians relied on Medicaid patients as a major source of practice revenue and were for the most part not affiliated with prepaid plans.) As a result, the Missouri Managed Health Care Project allows recipients to sign up with 1 of some 50 participating primary care physicians in lieu of enrolling in a prepaid plan (approximately 15 percent of the population exercised this option). Furthermore, the powerful retail druggists' lobby was successful in excluding prescription drugs from the program. Not surprisingly, these political conflicts and negotiations extended the planning process and delayed startup; the Minnesota program, for example, did not begin enrolling recipients until November 1985, 16 months after the demonstration was slated to begin.

Enrollment

Enrollment requires both policy decisions and ongoing administrative attention. Three key decisions are: whether enrollment in the demonstration is mandatory or voluntary, how to educate recipients about their choices, and how recipients who fail to make a choice are assigned to a plan or provider in mandatory programs. States also face ongoing administrative issues with respect to tracking enrollment and disenrollment; these are described later in the Data management section.

A basic policy decision facing States was whether to establish a mandatory or a voluntary program. All but New Jersey chose to make enrollment mandatory, principally to increase the number of participants. Cost savings was the prime motivation because, historically, voluntary programs have had low participation. In addition, significant enrollment is necessary for program viability (e.g., to attract providers, justify administrative costs, and permit meaningful evaluation). Some of the States already had voluntary HMO enrollment for Medicaid recipients (New York and Minnesota) in which participation was low, and they sought more dramatic results. In New York, an exception to mandatory enrollment was made for patients who had an ongoing relationship of a year's duration or longer with a provider not included in the demonstration.

In both voluntary and mandatory programs, recipients must be informed of the health plan options. In a voluntary program, the emphasis is on the merits of prepayment relative to fee for service (e.g., one-stop shopping), whereas in a mandatory program, the focus is on the differences among the health plan options. New Jersey, the only voluntary program, initially contracted with a consulting firm to market the program, then later allowed physicians serving as case managers to educate and enroll recipients in their offices. This boosted enrollment but raised questions about biased selection as well as the potential for misleading messages and a lack of uniformity in the information provided to recipients. Most of the mandatory programs handled consumer education and enrollment at the time eligibility was determined or recertified. The advantage of conducting education and enrollment in this manner is that recipients are generally receptive and motivated to make a choice. Minnesota initially contracted consumer education and enrollment to an outside broker who contacted recipients after they had established their eligibility and were using the fee-for-service system.
This process resulted in a high percentage of recipients failing to choose a health plan and, consequently, needing to be assigned by the broker. In Missouri and Santa Barbara, California, recipients were informed of their health plan options in a group presentation at the county social services office at the time they established eligibility, and most recipients indicated their preferences at that time. Only a small percentage refused to attend the presentations and were assigned to one of the options.

Under the demonstrations, States either prohibited direct marketing to consumers or constrained it. Except for New Jersey, States did not allow plans or providers to market their services directly. (New Jersey required providers to attend a training session and abide by certain rules when educating and enrolling patients in their own offices.) In general, plan brochures and other literature were reviewed and distributed by the States or intermediaries. The restrictions on marketing to Medicaid recipients proved not to be a significant constraint, because most health plans and other providers were not interested in actively marketing to this population. Instead, the plans were content to gain experience serving small numbers of recipients. Even for traditional Medicaid providers participating in the demonstrations, such as community health centers and public hospitals, marketing was not a key issue. Some passively attracted their clientele in adequate numbers without it, and others, who might have liked greater enrollment, lacked the resources to pay for marketing. Despite this, it is likely that some health plans, at least in a modest way, would take advantage of the ability to market their services directly to patients via advertising campaigns (mailed brochures, telephone marketing, etc.) if this were permitted.

Medicaid recipients who should have made a health plan or provider choice but did not were customarily assigned to a participating plan or provider. Recipients were randomly assigned in Minnesota, Missouri, and Santa Barbara. (In Santa Barbara, patient age was considered in order to avoid, for example, assigning pediatric patients to an inappropriate case manager.) In New York, assignment took into consideration such factors as prior provider use and geographic location. (The relationship of assignment to the potential for biased selection is discussed later in this article.)

Data management

State information systems should generally be adapted to meet the demands of a prepayment program. Under the demonstrations, two critical areas of data management were enrollment and utilization. Current enrollment data are always important in a capitated system. Utilization data (e.g., records of the volume and type of services delivered) may be collected as well, but their value to the State has been questioned. (Despite good intentions, most demonstration States had other pressing priorities and did not marshal the resources necessary to analyze the data.)

Under prepayment, participating providers and plans need prompt notification of enrollments and disenrollments. They should also receive a statement itemizing the capitation payment for each enrollee, and the two sets of data should match. This information is important because providers and plans are typically at risk for the cost of health services and, therefore, need to know whether they will be paid for services they provide or authorize. (There is also the possibility of denial of service for those who are entitled if a health plan does not have a record of new enrollees.)
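As a concrete illustration of the match between these two data sets, the following minimal R sketch flags enrollees missing from the capitation payment statement and payments with no corresponding roster entry. The recipient IDs and rates are entirely hypothetical; this is not any State's actual system.

    # Illustrative R sketch of reconciling a plan's enrollment roster with
    # the State's itemized capitation statement. All data are hypothetical.
    roster <- data.frame(recipient_id = c("A01", "A02", "A03", "A04"))
    payment_statement <- data.frame(recipient_id = c("A01", "A02", "A05"),
                                    capitation = c(85.00, 85.00, 85.00))

    # Enrollees the plan is at risk for but with no itemized payment
    enrolled_not_paid <- setdiff(roster$recipient_id,
                                 payment_statement$recipient_id)

    # Itemized payments for recipients not on the plan's roster
    paid_not_enrolled <- setdiff(payment_statement$recipient_id,
                                 roster$recipient_id)

    cat("Enrolled but not paid:", enrolled_not_paid, "\n")
    cat("Paid but not enrolled:", paid_not_enrolled, "\n")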
Several problems made it difficult for States to provide timely enrollment data to plans: lack of priority for State computer time (i.e., queueing problems), programming errors, poorly trained staff and/or inadequate management information system (MIS) staff time, lack of necessary hardware and software, and computer failures. In general, State systems for processing eligibility data were different from those used to maintain health plan enrollment, and it was difficult to merge the two. Missouri modified its MIS for the demonstration, but most States struggled along with their existing systems.

The second area of data flow relates to tracking services utilization through pseudo or dummy claims. (HMOs that record only utilization and not costs commonly refer to these as "encounter data.") As part of the evaluation effort, HCFA required all demonstrations to compile pseudoclaims, even though those claims were no longer the basis of payment (because the capitation rates were fixed and not related to services rendered). The data were analyzed by the HCFA-contracted evaluation team headed by the Research Triangle Institute. 4 In addition, most States intended to use these data to compare the performance of the demonstrations with fee for service. Other States had reinsurance provisions that limited the financial liability of a plan once expenses for an individual enrollee reached a preset level, and the pseudoclaims served to determine when that level had been reached. However, it is not clear that the States actually used much of the data.

If States wish to collect utilization data, they should plan carefully for uniform reporting and efficient transfer of the information to State databases. Under the demonstrations, contracting health plans and providers had some incentive to submit pseudoclaims in accordance with State wishes because of the reinsurance provisions described earlier. Nonetheless, problems abounded. States reported inconsistencies and a lack of comparability both over time and across providers and plans. Plans, for their part, said States were unclear in their specifications. Standardized reporting is difficult because health plans and providers have different definitions of an encounter. Also, although some health plans (particularly staff and group models) may have computerized encounter data, plans that capitate medical groups do not themselves record encounters, and the chances of noncomparable data increase when they are reported by individual providers. In addition, the States did not keep up with the input and tabulation of these data, creating a formidable backlog. Some observers question whether the States will eventually allocate the necessary resources to sort and analyze these records, which were submitted at considerable time and expense by some providers and health plans. Missouri, for example, had planned to base its payment rates in subsequent years on pseudoclaims data from the early years of the demonstration. This never occurred, because the State decided that its existing methodology was adequate and because it had concerns about possible inconsistencies in the pseudoclaims data.

Quality assurance and program monitoring

States and/or counties administering the demonstrations sought to monitor quality of care and financial solvency.
Quality assurance and program monitoring

States and/or counties administering the demonstrations sought to monitor quality of care and financial solvency. Most programs devoted relatively few staff to either activity, reflecting mainly the press of routine administrative responsibilities and the lack of any obvious problems or threats to the demonstrations stemming from insolvency (except in Monterey) or poor quality care. States also recognized that prepaid health plans had their own quality assurance systems, although the majority of those delivering care under the demonstration were not prepaid health plans, except in Minnesota and New York.

States approached quality of care monitoring in different ways. In New York, consumer satisfaction surveys were conducted pre- and post-demonstration. MediCap, Inc., the agency administering the project, reported improvements in consumer satisfaction levels under managed care. (However, advocates claimed that enrollees were disgruntled with restricted access to providers under the system and were overall less satisfied than before.) MediCap delegated responsibility for ongoing quality assurance efforts to Rochester Health Network, the health plan serving demonstration enrollees. Minnesota hired a quality assurance coordinator in 1986 and charged her with developing methods of assessing quality beyond the existing grievance process. Minnesota also plans to evaluate questionnaires completed by recipients who switched health plans during open enrollment to assess reasons for dissatisfaction and to compare results across plans. In Missouri, the quality assurance department was headed by a registered nurse who reviewed medical records. Protocols for selecting and reviewing records were developed, and nurses conducted onsite audits. Both Missouri and Santa Barbara conducted consumer satisfaction surveys and analyzed recipient requests to change plans or providers to determine if quality of care was at issue. The Santa Barbara program was especially aggressive about soliciting consumer feedback on the demonstration and following up any reported problems.

The States struggled with quality assurance but ultimately did little, in part because they were unsure how to proceed. Because consumers have difficulty assessing the technical quality of care and because of beneficial "sentinel" effects from monitoring quality, it is useful for States to be more proactive in this area. States could, for example, target particular diagnoses for review. They could also check ambulatory records for routine immunizations, proper followup of abnormal laboratory test results, or other indicators of clinical quality.

Staffing

States should expect their staffing needs to change under a prepaid Medicaid program. In addition to understanding the traditional Medicaid program, staff need experience dealing with managed care issues, including ratesetting, marketing and consumer education, MIS, and quality assurance. Acquiring such staff requires perseverance because of civil service hiring procedures and because the talents involved are in demand by the private sector.

The use of outside consultants to supplement State staff created problems in some cases. Such consultants can provide valuable services, as in Minnesota and Missouri, where they were used during the development phase for assistance in both planning and ratesetting. It is important that State staff have the technical ability to work in a collegial fashion with consultants and supervise their efforts. The experiences in New Jersey and California illustrate the problems of using outside consultants.
Santa Barbara and Monterey initially hired the same consulting firm to develop and run a management information system. Santa Barbara eventually assumed the responsibility in-house after encountering problems. In Monterey, the contractor promised more than it could deliver. Although a system for paying claims was up and running within about 3 months, it took a year before timely utilization reports were available to providers. The cost overruns that accumulated during this time were largely responsible for the demise of the Monterey demonstration. New Jersey relied heavily on a consulting firm to conduct many operational aspects of the program and lacked the technical staff to supervise the firm. In both New Jersey and Santa Barbara, contracting helped alleviate problems in the short run but detracted from the building of necessary internal capacity.

Health plan and provider issues

Most of the demonstrations succeeded in soliciting the participation of health plans and providers. The major exception was Florida, where a combination of factors (mainly low payment rates but also the exclusion of health plans from the planning process and restrictions on market share) discouraged plans from bidding. However, neither New Jersey nor New York obtained as many participating plans (New York) or providers (New Jersey) as desired. It is noteworthy that, in general, most of the providers and plans had participated in Medicaid prior to the demonstrations. Features of Medicaid prepayment that particularly relate to providers and plans include:
• The planning process and provider recruitment.
• Ratesetting and the availability of reinsurance.
• The potential for biased selection.
• Service delivery issues.

Planning process and provider recruitment

States that involved health plans and providers (as well as consumer advocates and other interested parties) in the process of planning for the demonstration took longer to become operational but may have benefited in the long run. Minnesota is a case in point; after a planning process in which the State consulted extensively with interested parties, the program was implemented with widespread community acceptance and understanding of its intent. (The downside to a thorough planning process is delay, and some observers felt that "tinkering" in Minnesota and New York unduly delayed startup.) In contrast, Monterey became operational quickly, before necessary systems (e.g., for monitoring utilization) were in place; in order to defuse the concerns of physicians and recruit case managers, the program acquiesced to high payment rates. In retrospect, more careful planning and a strategy for recruiting providers that did not place the demonstration in financial jeopardy might have paid off.

States also benefit from the goodwill and understanding on the part of health plans and providers that can result from giving them an opportunity to be consulted. (Problems and delays affecting health plans and providers are inevitable during the startup phase, and the existence of good working relations can reduce frictions.) Finally, by consulting with the health plans or providers, the State gains a better understanding of their objectives and concerns. The lack of such understanding was one of the problems that doomed the Florida effort almost from the outset.

Health plans and providers participated in the demonstrations for many reasons, but chief among them were the goals of expanding or maintaining market share and gaining experience with Medicaid prepayment.
However, other variables also came into play. Some health plans participated simply because their physicians wanted them to (e.g., Prevention Plus in Missouri), often reflecting a concern that they might otherwise lose patients. Sometimes the plans' payment levels to physicians were higher than those in the Medicaid fee-for-service program. In other cases, the State's willingness to guarantee eligibility for 6 months for demonstration enrollees convinced health plans that it was worth the administrative burden. Individual physicians were sometimes attracted to prepaid programs instead of fee for service by the significant reduction in paperwork, although in actuality the paperwork was not always lessened (physicians in Santa Barbara, for example, complained about the paperwork associated with treatment and referral authorizations).

Ratesetting

States must balance the need for payment rates generous enough to attract plans and providers against the goal of containing costs. Similarly, a balance must be struck between requiring contractors (e.g., health plans) to bear some risk for the cost of health care for recipients and asking them, particularly those with little prepayment experience, to assume more risk than they can afford. Reinsurance, often provided by the State, reduces the degree of exposure.

The major factor determining providers' willingness to serve Medicaid patients on a prepaid basis is the perceived adequacy of the payment rate. In the demonstrations, the starting point for the development of rates was the fee-for-service experience. Following the Medicare program's precedent, the States' objective was to pay providers, plans, or counties at risk a percent of estimated fee-for-service costs (typically 90 or 95 percent) because this was believed to ensure savings. Importantly, by establishing a capitation rate in this manner, States set the stage for holding down future costs to a percentage increase. In some cases, the intent was to use fee-for-service experience only at the start of the program and constrain future year increases.

States based rates on the fee-for-service experience by rating category for the Medicaid population in some prior year or years. The number of categories (rate cells) used varied from 2 in Missouri (AFDC adult and AFDC child) to more than 70 in Minnesota (reflecting age, sex, eligibility category, residence in an institution, etc.). These data were then actuarially adjusted for any program differences between fee-for-service Medicaid and the demonstration (e.g., changes in covered services, eligibility, and geographic area). Where applicable, the rates were reduced to reflect State-provided reinsurance protections. Finally, the rates were trended forward to the appropriate year. (A stylized calculation illustrating this sequence follows below.)

Predictably, rates were a common source of friction between health plans or other providers and States. Complaints included:
• Inadequate documentation of ratesetting methodology.
• Trend factors viewed as unfairly low.
• Too many or too few rate cells and lack of homogeneity within rate cells.
• Inappropriate geographic base (e.g., use of statewide averages for a locality or, conversely, reliance on too small a sample of local recipients instead of statewide data).

Disagreement over rates that are derived from the fee-for-service experience is likely to increase over time as the base year recedes, practice patterns change, adjustments are made to reflect benefit changes, and so on.
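As promised above, here is a stylized per-member-per-month (PMPM) calculation following the steps described for one rate cell: fee-for-service base, actuarial adjustment, reduction for State-provided reinsurance, trending forward, and a 90-95 percent discount. The ordering of steps and all numbers are illustrative assumptions, not any State's actual methodology:

```python
def capitation_rate(ffs_base_pmpm, discount=0.95, actuarial_adj=1.0,
                    reinsurance_offset_pmpm=0.0, trend=0.05, years_forward=1):
    """Stylized PMPM capitation rate for one rate cell: start from the
    fee-for-service base, adjust actuarially for program differences,
    deduct the value of stop-loss protection, trend to the payment year,
    and apply the savings discount (typically 90 or 95 percent)."""
    rate = ffs_base_pmpm * actuarial_adj
    rate -= reinsurance_offset_pmpm           # value of reinsurance coverage
    rate *= (1.0 + trend) ** years_forward    # trend forward
    return rate * discount                    # e.g. 95 percent of FFS

# Example: a hypothetical AFDC-child cell with a $60 PMPM FFS base.
print(round(capitation_rate(60.0, discount=0.95, trend=0.04), 2))  # 59.28
```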
Demonstration health plans and providers were dismayed when, in some cases, the second and third year rates declined as a result of reduced expenditures in the fee-for-service program (New York and Missouri). Despite a clear rationale for the reductions, providers and/or plans saw their own costs rising and viewed this phenomenon as evidence of the inadequacy of the ratesetting process. The New York demonstration ended in the wake of a dispute over rates. After the first year, the State reduced capitation rates by 11 percent; after the second year, it proposed a modest increase (about 4 percent), but providers affiliated with the major participating health plan felt they were losing money and were unwilling to continue without a substantial payment increase. A separate problem arose in Minnesota, where the capitation levels rose by the same fixed percent each year as did fees in the regular program. The plans felt that they were disadvantaged because their total revenues were capped, whereas fee-for-service provider revenues could (and did) rise as a result of increases in volume and intensity of services. Despite complaints from a number of plans and providers who reported losing money, many were satisfied with the rates or, at least, lacked grounds for challenging them. Furthermore, many recognized that the States were developing capitation rates for the first time and were trying to be fair.

Reinsurance

Reinsurance (or stop-loss) protection limits the risk that plans face, either on a per enrollee basis or in the aggregate. This protection was a major concern of smaller and less experienced health plans, and the State's willingness to offer such coverage was in some cases a determining factor in their participation. In Missouri, for example, community health centers viewed this protection as vital to ensure their participation as prepaid health plans. Missouri originally considered a simple catastrophic reinsurance arrangement whereby the State agreed to pay annual per recipient costs above $20,000 of Medicaid-allowable expenses. However, the hospitals and community health centers that formed plans to participate in the demonstration had no experience accepting financial risk and wanted more protection. The State then developed special risk pools to accommodate their concerns, specifically, a pool to fund (on a fee-for-service basis) high-risk deliveries and neonatal intensive care for patients hospitalized more than 9 days, a pool for adverse selection (eventually abandoned because a method for measuring adverse selection was lacking), and another to fund any additional births after the plan had absorbed the costs for a certain number of deliveries. The plans agreed to have a small amount deducted from their monthly capitation in return. In other cases, for example New York, large plans accustomed to bearing risk preferred to handle reinsurance themselves, ostensibly at lower cost. Although States sometimes permitted individual providers and plans to decide whether to purchase reinsurance, more often than not a minimum level of coverage was required, and its cost was automatically deducted from the payment rate. Santa Barbara experienced more catastrophic claims than expected, and the State responded by increasing the per enrollee reinsurance limit from $15,000 to $25,000, leaving the county authority with less protection. Minnesota had both individual and aggregate reinsurance provisions.
For individual patients with over $15,000 (AFDC) or $30,000 (aged, blind, disabled) in cumulative annual hospital expenses based on Medicaid allowable charges, the State paid 80 percent of the cost. The aggregate reinsurance provision applied only during the first 2 years of the demonstration. For the AFDC population (capitated at 90 percent of fee for service), the State agreed to pay 50 percent of first-year losses based on actual costs between 90 and 110 percent of fee for service. Above 110 percent, the plans were fully at risk. In the second year, the State shared one-half of the loss between 90 and 100 percent (rather than between 90 and 110 percent). Similar provisions applied for the aged, blind, and disabled population. Capitation rates were reduced to account for the per enrollee reinsurance protection but not for the aggregate risk-sharing provisions. (A stylized sketch of these two provisions appears below, after the discussion of biased selection.)

Potential for biased selection

Biased selection refers to the systematic enrollment into a prepaid system of individuals who are healthier (favorable selection) or sicker (adverse selection) than average. The result is underpayment or overpayment by a rate structure that assumes average risk. The consequences of biased selection differ, depending on whether enrollment is voluntary or mandatory. In voluntary programs, the State may be concerned that the plans or providers at risk obtain a disproportionately healthy enrollment, leaving the sicker and higher utilizing enrollees in the fee-for-service system. For example, in New Jersey, a voluntary demonstration, it is possible for healthier Medicaid recipients to join while sicker recipients stay in the fee-for-service system (or vice versa). In mandatory programs, the concern is for biased selection across providers or plans.

Providers and plans in some demonstrations claim to be victims of adverse selection. Such assertions have not been fully documented, although in analyses conducted in Missouri as part of the RTI evaluation, no evidence was found of adverse selection at that site. Adverse selection could be more likely in situations where the participating providers are hospital-based and/or specialize in the treatment of such chronic conditions as arthritis and diabetes, because these providers may attract chronically ill patients. In addition, health plans with particularly broad provider panels (such as many Blue Cross and Blue Shield plans) could be adversely selected against, because patients with multiple physician relationships are believed to be more likely to choose the plan that allows them to maintain these relationships. The consumer education process, if not unbiased, could also steer recipients to certain plans, a concern that some plans expressed. Finally, the assignment of recipients who do not elect a provider offers another opportunity for biased enrollment. The assignment process can be random, as in Minnesota and Missouri, or subjectively determined, as in New York. If social service workers are responsible for assigning recipients to health plans, they may make biased assignments, relying on their own impressions of the participating providers.
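Returning to the Minnesota reinsurance provisions described above, the individual stop-loss and the aggregate risk corridor can be expressed as simple formulas. The sketch below is an interpretation (in particular, it reads "80 percent of the cost" as applying to expenses above the threshold), not the State's actual payment logic:

```python
def individual_stop_loss(annual_hospital_cost, threshold, state_share=0.80):
    """Per enrollee reinsurance: the State pays a share of Medicaid-allowable
    hospital expenses above the threshold ($15,000 AFDC; $30,000 aged,
    blind, disabled)."""
    excess = max(0.0, annual_hospital_cost - threshold)
    return state_share * excess

def aggregate_risk_share(actual_cost, ffs_equivalent, upper=1.10):
    """Aggregate provision for the AFDC population (capitated at 90 percent
    of fee for service): the State absorbs half of losses between 90 and
    `upper` (110 percent in year one, 100 in year two) percent of fee for
    service; above that, the plan is fully at risk."""
    corridor_lo = 0.90 * ffs_equivalent
    corridor_hi = upper * ffs_equivalent
    loss_in_corridor = max(0.0, min(actual_cost, corridor_hi) - corridor_lo)
    return 0.50 * loss_in_corridor

print(individual_stop_loss(40_000, threshold=15_000))   # 20000.0
print(aggregate_risk_share(1_050_000, 1_000_000))       # 75000.0
```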
Service delivery

It was hoped that the demonstrations would contribute to the state of the art in delivering services to the populations covered under Medicaid. This expectation was, for the most part, not realized, perhaps because participating plans and providers did not perceive it worth their while to invest in developing methods for improving care to these populations for a 3-year demonstration at the Medicaid payment rates. In addition, both States and plans or providers were hesitant about enrolling the aged and disabled populations, who generally use services much more intensively than AFDC recipients.

Few of the plans adapted their delivery systems to meet the special needs of, for example, patients with chronic conditions. Established HMOs generally have limited experience with patients who suffer from mental and physical handicaps, mental retardation, and chronic mental conditions. These patients may need more continuous and intensive medical supervision than prepaid plans typically provide. As discussed more fully in the next section, the demonstrations appeared not to result in innovations in this area. By the same token, some public teaching hospitals that traditionally served the Medicaid population were also slow to adapt. In these hospitals, the teaching curriculum typically takes precedence; thus, prepaid enrollees are likely to use physician specialty care and ancillary services at a higher rate than enrollees in private plans, making cost control difficult. Further, the hospitals have often been slow to introduce effective utilization review and patient-management procedures (e.g., prior authorization). As a result, it is business as usual for many plans operated by teaching hospitals.

State Medicaid programs have developed systems and policies over the years that are not necessarily compatible with prepaid care. Examples of problems were:
• Court-ordered treatment, often for mental health services or substance abuse cases, which Medicaid traditionally reimburses. Prepaid plans do not customarily pay for any nonemergency care that they have not specifically authorized, but the State will not pay for services included in the capitation. Thus, the party responsible for the cost of court-ordered care may be in dispute.
• Overlapping responsibilities for the case management of special populations, such as the mentally retarded. For example, in Minnesota, county workers traditionally performed this service and, in some cases, were unwilling to cooperate with the health plans.
• Managing nonmedical benefits under a medical case management model, such as the problem in New York of day treatment for children with emotional, physical, or developmental handicaps. Such treatment is covered by Medicaid in New York, and its cost was part of the capitation rate. However, the treatment is expensive (estimated at $9,000 per year), and the referral decision is, by law, made by the day treatment program with input from other parties, sometimes including the child's health plan. This creates a dilemma: Although the primary care physician is not equipped for some decisions, removing the decision from the plan is of questionable fairness if the plan is at risk.
• Emergency room use and the difficulty faced by providers and plans attempting to change patients' reliance on the emergency room for nonurgent care. In New York, for example, primary care providers complained that hospital emergency room staff were reluctant to adapt their procedures in order to identify demonstration enrollees and contact case managers for authorization, even in nonurgent situations.
In light of New York law requiring hospitals to ensure that any person who comes to the emergency department is seen by a physician, the health plans felt they could not deny payment altogether even if they denied authorization for treatment. Instead, they agreed to pay hospitals a triage fee ($20-$40) for any demonstration enrollee who was seen in the emergency room in this situation.

Particularly difficult for States, health plans, and providers has been the question of whether and how to serve the disabled and chronically ill under the demonstration. Although States would like to include these populations in the demonstrations, they tend to have established provider relationships, require specialized services, and otherwise pose challenges to case management. Also, health plans and providers, even if they have the necessary expertise, may hesitate to accept risk for such potentially high-cost populations. Only California, Minnesota, and New York intended to serve Medicaid disabled eligibles under the demonstration; all three encountered difficulties. New York never succeeded in developing a protocol for enrolling the disabled that all parties (the State, MediCap, the health plans and providers, and advocacy groups) would endorse and, as a result, they were never enrolled. California capitated the Santa Barbara Health Authority for the disabled, as for other eligibility categories. However, on a case-by-case basis, the Health Authority designated as "special class" those individuals who were deemed difficult to case manage; instead, it reimbursed primary care case managers on a fee-for-service basis (rather than by capitation) for their care. Special class recipients included patients with acquired immunodeficiency syndrome (AIDS), the long-term institutionalized, spend-down cases, renal dialysis patients, and others for whom primary care case managers were unwilling to bear risk. In Minnesota, the State required participating health plans to serve either the aged or the disabled and blind in addition to AFDC recipients. Four of the seven plans elected to serve the disabled, about 60 percent of whom joined Blue Cross and Blue Shield (BC/BS). BC/BS had the largest network of participating physicians and other providers, which allowed beneficiaries to maintain many of their existing provider relationships. However, after 2 years, BC/BS withdrew from the demonstration because of financial losses. Because this affected a large fraction of the disabled, the State decided to return the disabled to fee-for-service Medicaid rather than ask patients to elect a new health plan for the remaining year of the demonstration.

Beneficiary and advocate issues

Medicaid beneficiaries and their advocates expressed a number of concerns about the demonstration projects relating to access and quality of health care. In this section, program design and service delivery issues are addressed, particularly for vulnerable populations, and the results of information on consumer responses collected by the States to date are relayed.

Program design

Features of the demonstrations to which some health and welfare advocates objected included mandatory participation (a feature of all sites except New Jersey), random assignment to health plans or providers for recipients who do not make an election (Minnesota, Missouri, Santa Barbara, and Monterey), and, in mandatory programs, restrictions on switching providers or case managers.
The opposition to mandatory participation stems from the belief that Medicaid recipients should have freedom of choice in selecting providers and from the view that prepaid health plans or case managers are not appropriate for at least some segments of the Medicaid population (particularly patients with complex medical and/or social problems). The degree of opposition to mandatory enrollment in the demonstrations varied. In Minnesota, where more than one-half the population of the Minneapolis-St. Paul metropolitan area belongs to prepaid plans, there was little questioning of the mandatory nature of the program. In New York, by contrast, where HMOs had little market share in 1982 (although Monroe County was ahead of the rest of the State), there was considerable debate by the State legislature. In Missouri, where there was little familiarity with prepaid health care in 1982, mandatory enrollment was accepted after a minimum of debate.

A second area of concern is random assignment to health plans for recipients who fail to choose a plan in mandatory enrollment programs. In Minnesota, the proportion of demonstration enrollees randomly assigned to health plans averaged more than 30 percent. From a recipient standpoint, random assignment is problematic because of its implications for access to care, both geographically and in terms of access to the most appropriate providers for individual patient needs. However, in the view of one Medicaid administrator, there is a limit on how much protection should be afforded a recipient who fails to make a choice. Random assignment to health plans was adopted in Minnesota and Missouri as the means of enrolling recipients who failed to choose a health plan. In Minnesota, recipients had the right to change plans within 60 days of being assigned as well as annually thereafter during open enrollment. Despite such safeguards, some advocates (and providers) have suggested that, instead, recipients should be assigned to health plans or case managers based on the providers they have used in the past, as ascertained from Medicaid claims data. Others would rely on geographic proximity to the recipient's residence. Finally, medical history and patient age have been proposed as a basis for assignment by matching beneficiary needs with provider expertise. In New York, where assignment is not random, a combination of geographic proximity and prior use was employed by the administrative agent, MediCap, Inc. In Minnesota, it was believed that methods of nonrandom assignment applied systematically would be unduly burdensome for State staff and risked generating biased selection among health plans. Missouri considered assignment based on prior use, but in an analysis of patient records, it was found that many recipients had previously used multiple primary care providers, and it was often unclear which providers should be selected for the case management function.

Finally, the fact that recipients in mandatory programs were "locked in" to a particular health plan or case manager was problematic in the eyes of some participants and observers. They argued that many Medicaid recipients, especially those who were assigned, might not seek care during the 30- or 60-day period after enrollment, when they could still exercise their option to switch. Thus, by the time they realized they wanted to switch, they would be locked in and unable to change plans or providers for some fixed period of time (between 6 months and a year).
On the other hand, in Santa Barbara, where the restrictions on changing case managers were minimal, case management was impeded by frequent switching among providers. As noted earlier, "doctor-shopping" was one of the practices the demonstrations were designed to discourage.

Service delivery

Special concerns about the demonstration projects were raised with respect to vulnerable populations such as the aged, the physically disabled, and, especially, the chronically mentally ill and the mentally retarded. Because most of these individuals are aged or disabled rather than AFDC recipients, these issues arose less frequently in AFDC-only programs. In some States, political alliances developed between recipient advocacy groups (e.g., for the mentally ill) and the provider associations most threatened by the new initiative (e.g., private psychologists). Concerns raised in Minnesota, where the program included residents of nursing homes and intermediate care facilities for the mentally retarded as well as the ambulatory aged and disabled living in the community, included the following:
• Health plans participating in the demonstration have had little experience with the mentally retarded. Providers who lack the expertise and/or willingness to deal with the mentally retarded may not take the extra time needed to explain issues to patients, leading to confusion, fear, and noncompliance.
• Ready access to care for the chronically mentally ill (e.g., those with schizophrenia, manic depression, and personality disorders) is important because of the need for ongoing medication. Patients are often not motivated to stay on medication, and, if access to the physician or the drugstore is inconvenient, the risk of noncompliance is greater.
• Many health plans treat principally employed persons and their families and have little experience with the physically disabled. Providers unaccustomed to treating the handicapped may lack the attitudinal awareness and sensitivity needed to work with this group.

In Santa Barbara, many concerns have been raised about the physically disabled. Advocates believe health care services for the disabled are compromised under the program in three main respects:
• Access to care, including the inability to self-refer for specialty services and the problem of limited numbers of primary care physicians accepting new Medi-Cal patients. The latter problem forces disabled clients into the county clinics, which may be difficult to reach, are too large and complex, and fail to provide continuity of care. These factors are believed to cause delays in recipient care-seeking.
• Appropriateness of case management by primary care physicians, because persons with severe or multiple disabilities often require routine treatment by specialists.
• Authorization of durable medical equipment, which advocates say requires so much time that patients have had to do without necessary equipment while authorization requests are being processed. In addition, there have been disagreements about the interpretation of medical necessity criteria with respect to equipment for the disabled (e.g., whether an electric wheelchair is necessary to maintain independent function).

Consumer satisfaction and grievances

Overall, consumer reactions to the demonstrations appeared to range from neutral to positive, although only limited data are available and the definition and reporting of grievances varied widely. In Minnesota, fewer than 20 grievances were filed in 1986, 7 of which went to a formal hearing.
In Santa Barbara, by contrast, grievances numbered in the hundreds. This reflects differences in the grievance process (including how aggressively grievances were sought) more than real differences in patient satisfaction; Santa Barbara devoted more effort to assessing and reporting grievances than did any other site. Most of the demonstrations actively solicited beneficiary feedback. Missouri surveyed participants annually, and New York conducted pre- and postenrollment surveys. Minnesota developed a common form used by the State, counties, and all the health plans to take telephone complaints; the results will be tabulated and published. Some of the results of the States' own efforts are not yet available. The Missouri survey results, which have been tabulated, suggested that the majority of patients were satisfied with their care. In 1986, 30 percent of respondents said they were more satisfied with care under the demonstration than previously, 56 percent reported no change, and 12 percent were less satisfied. The New York MediCap data also indicate that satisfaction with the Medicaid program in Monroe County was at least as high after prepayment was initiated as before. Santa Barbara has conducted the most extensive assessment of grievances. In 1985, the most recent year for which detailed data are available, 546 grievances were filed, a 13-percent increase over 1984. The average monthly enrolled population was 20,400 in 1985, implying a grievance rate of about 3 percent. (In 1986, however, the number of grievances dropped to 295 when it was made easier for patients to change case managers.) A breakdown of 1985 grievances by subject is provided in Table 2.

Conclusions

The fact that most of the States initiating demonstrations under the HCFA solicitation have sought to convert them to permanent programs testifies to their success in the view of State officials. Furthermore, even some of the problems and failures encountered by certain demonstrations represented learning experiences that had identifiable impact on State policies. At the same time, some of the limitations of the approaches attempted must be recognized.

Although the demonstrations were labeled "procompetitive," the extent of competition that resulted was limited. First, several of the programs entailed placing counties (e.g., Itasca) or county authorities (e.g., Santa Barbara and Monterey) at risk rather than having plans compete against one another. Under this arrangement, competition could still occur among individual providers, such as primary care physicians, but the evidence that this happened is lacking. This is not to represent a judgment on the merits of the approach; rather, we would simply observe that any resulting cost-containment effects were the result of mechanisms such as physician risk-sharing arrangements or administrative controls that have little to do with competition per se. Perhaps most important is that, even where multiple prepaid health plans within a given community in theory competed for enrollees, the extent of competition was limited. Minnesota, New York, and Missouri adopted such programs with the intent of replicating some of the competitive dynamics that surround private enrollees. HMOs and other prepaid health plans compete heavily for private enrollees on the basis of price and scope of services (as well as on factors such as quality and beneficiary convenience, e.g., provider location and choice, waiting times for appointments, and attractiveness of the office).
However, most private enrollees face premium differentials when they select from among various health plans, whereas Medicaid recipients do not. Furthermore, because Medicaid benefits are comprehensive in many States, plans have little opportunity to improve benefits. Competition can occur, but primarily on the basis of service, such as convenient access to providers and short waiting times. Nonetheless, if plans are reluctant to participate in the program in the first place, even this form of competition may not be apparent.

In addition, the opportunities for cost savings may be limited. HMOs, case management, and other approaches to managed care achieve their savings primarily by constraining utilization, whether by instituting strong utilization review programs or placing financial incentives on providers. However, plans contracting with Medicaid may be at a disadvantage in two respects. First, the rates they pay providers are commonly above those paid by Medicaid fee-for-service programs. Indeed, one of the reasons that providers in some instances encourage the plans with which they are affiliated to participate is to benefit from payment levels that exceed those of the regular Medicaid program. Second, managed care, regardless of who undertakes it, entails some administrative costs that the regular Medicaid program does not incur.

States to date have been most comfortable entering into prepaid arrangements for some population groups and for certain services. The States were most comfortable covering AFDC cash recipients, although there were some notable exceptions. New York covered county Home Relief recipients in addition to AFDC. Minnesota and California served the medically needy and the aged and disabled in addition to AFDC cash beneficiaries. However, both States limited the risk borne by health plans and providers. In Minnesota, the plans were primarily at risk for physician and ancillary services for institutionalized patients, with most routine room and board costs excluded from the capitation and paid directly by the State. In Monterey and Santa Barbara, providers were reimbursed for the care of so-called "special class" recipients, many of whom were disabled, on a fee-for-service basis. Recipients were designated special class on a case-by-case basis; among those included were the long-term institutionalized, AIDS patients, spend-down cases, and renal dialysis patients.

Finally, startup times should also not be underestimated, for both political and technical reasons. Initial enrollment in New York and Minnesota occurred a full 2 years after the start of the demonstrations, although several of the other demonstrations became operational in less than a year. This raises the question of whether a 3-year demonstration is too short in light of both the startup and the wind-down times.
Low frequency dispersive estimates for the wave equation in higher dimensions

We prove dispersive estimates at low frequency in dimensions n ≥ 4 for the wave equation for a very large class of real-valued potentials, provided the zero is neither an eigenvalue nor a resonance. This class includes potentials V ∈ L^∞(R^n) satisfying V(x) = O(⟨x⟩^{-(n+1)/2-ε}), ε > 0.

Introduction and statement of results

High frequency dispersive estimates with a loss of (n − 3)/2 derivatives have recently been proved in [9] for the wave equation with a real-valued potential V ∈ L^∞(R^n), n ≥ 4, satisfying

(1.1)  |V(x)| ≤ C⟨x⟩^{-δ}

with constants C > 0, δ > (n + 1)/2. The problem of proving dispersive estimates at low frequency, however, was left open. The purpose of the present paper is to address this problem. Such low frequency dispersive estimates for the Schrödinger group have recently been proved in [7] for a large class of real-valued potentials (not necessarily in L^∞), and in particular for potentials satisfying (1.1) with δ > (n + 2)/2.

Denote by G_0 and G the self-adjoint realizations of the operators −Δ and −Δ + V on L^2(R^n), respectively. It is well known that, under the condition (1.1), the absolutely continuous spectra of the operators G_0 and G coincide with the interval [0, +∞), and that G has no embedded strictly positive eigenvalues nor strictly positive resonances. However, G may in general have a finite number of non-positive eigenvalues, and the zero may be a resonance. We will say that the zero is a regular point for G if it is neither an eigenvalue nor a resonance, in the sense that the operator 1 − VΔ^{-1} is invertible on L^1 with a bounded inverse denoted by T. Let P_ac denote the spectral projection onto the absolutely continuous spectrum of G. Given any a > 0, set χ_a(σ) = χ_1(σ/a), where χ_1 ∈ C^∞(R), χ_1(σ) = 0 for σ ≤ 1, χ_1(σ) = 1 for σ ≥ 2. Set η_a = χ(1 − χ_a), where χ denotes the characteristic function of the interval [0, +∞). Clearly, η_a(G) + χ_a(G) = P_ac.

As in the case of the Schrödinger group (see [7]), the dispersive estimates for the low frequency part e^{it√G}η_a(G), a > 0 small, turn out to be easier to prove when n ≥ 4, and this can be done for a larger class of potentials. In the present paper we will do so for potentials satisfying condition (1.2). Clearly, (1.2) is fulfilled for potentials satisfying (1.1). Our main result is the following.

Theorem 1.1. Let n ≥ 4, let V satisfy (1.2) and assume that the zero is a regular point for G. Then there exists a constant a_0 > 0 so that for every 0 < a ≤ a_0, 0 < ε ≪ 1 and all t, the estimates (1.3) and (1.4) hold. Moreover, for every 2 ≤ p < +∞, we have the estimate (1.5).

Remark 1. Note that our proof of the above estimates works out in the case n = 3, too, for potentials satisfying (1.2) as well as the condition V ∈ L^{3/2-ε} with some 0 < ε ≪ 1. In this case, however, a similar result has already been proved by D'Ancona and Pierfelice [5]. In fact, in [5] the whole range of frequencies has been treated for a very large subset of Kato potentials.

Combining Theorem 1.1 with the estimates of [9], we obtain the following.

Corollary 1.2. Let n ≥ 4, let V satisfy (1.1) and assume that the zero is a regular point for G. Then, for every 2 ≤ p < +∞, 0 < ε ≪ 1 and t ≠ 0, we have the corresponding estimates.

Note that when n = 2 and n = 3, similar dispersive estimates (without loss of derivatives) for the high frequency part e^{it√G}χ_a(G) are proved in [2] for potentials satisfying (1.1) (see also [3], [5]).
For higher dimensions, Beals [1] proved optimal (without loss of derivatives) dispersive estimates for potentials belonging to the Schwartz class. It seems that, to avoid the loss of derivatives in dimensions n ≥ 4, one needs to impose some regularity condition on the potential. A similar phenomenon also occurs in the case of the Schrödinger equation (see [4]). Note that dispersive estimates without loss of derivatives for the Schrödinger group e^{itG} in dimensions n ≥ 4 are proved in [6] under the regularity condition V̂ ∈ L^1. This result has recently been extended in [7] to potentials V satisfying (1.1) with δ > n − 1 as well as V̂ ∈ L^1.

To prove Theorem 1.1 we adapt the approach of [7] to the wave equation. It consists of proving uniform L^1 → L^∞ dispersive estimates for the operators e^{it√G}ψ(h^2 G), h ≥ h_0. To do so, we use Duhamel's formula for the wave equation (which in our case takes the form (2.12)). It turns out that when n ≥ 4 one can absorb the remaining terms by taking the parameter h big enough, so one no longer needs to work in weighted L^2 spaces (as in [9]). This allows one to cover a larger class of potentials, not necessarily in L^∞.

Proof of Theorem 1.1

Let ψ ∈ C_0^∞((0, +∞)). The following proposition is proved in [7], and that is why we omit the proof.

Proposition 2.1. Under the assumptions of Theorem 1.1, there exist positive constants C, β and h_0 so that the following estimates hold, where the operator T is bounded by assumption.

We will first show that Theorem 1.1 follows from the estimates (2.9)-(2.11) below. By interpolation between (2.5) and the trivial bound, we get, for every 2 ≤ p ≤ +∞, with 1/p + 1/p′ = 1 and α = 1 − 2/p, the estimate (2.7). Now, writing the low frequency part accordingly and using (2.7), we get, for 2 < p ≤ +∞, the estimate (2.8), provided a is taken small enough. The estimate (1.5) follows from (2.8) and the fact that it holds for G_0 (see [8]). Clearly, (1.3) follows from (2.8) with p = +∞ and the estimate (A.1) in the appendix. In the same way we get an analogous bound which, together with the estimate (A.2) in the appendix, implies (1.4).

We will make use of the fact that the kernel K_h of the operator e^{it√G_0}ψ(h^2 G_0) is expressed in terms of the Bessel function J_ν of order ν = (n − 2)/2. It is shown in [9] (Section 2) that K_h satisfies the estimates (3.1)-(3.3) (for all σ, t > 0). Clearly, (2.9) follows from (3.3) with s = (n − 1)/2. It is not hard to see that (2.10) follows from (1.2) and the following Lemma 3.1, whose conclusion is the bound (3.4).

Proof. In view of (3.1), it suffices to show (3.4) with h = 1. When 0 < σ ≤ 1, this follows from (3.2). Let now σ ≥ 1. We will use the fact that the function J_ν can be decomposed as J_ν(z) = e^{iz} b_ν^+(z) + e^{-iz} b_ν^-(z), where b_ν^±(z) are symbols of order (n − 3)/2 for z ≥ 1. Then, we can decompose the function K_1 as K_1^+ + K_1^-, where K_1^± are defined by replacing in the definition of K_1 the function J_ν(σλ) by e^{±iσλ} b_ν^±(σλ). Integrating by parts, we get (3.5) for every integer m ≥ 0. By (3.5), we obtain (3.6), which clearly implies (3.4) in this case. □

To prove (2.11) we will use the formula (3.7), where ϕ_h(λ) = ϕ_1(hλ), ϕ_1(λ) = λψ(λ^2), and the resolvents R_±(λ) = (G − λ^2 ± i0)^{-1} satisfy the corresponding resolvent identity. Here R_0^±(λ) denote the outgoing and incoming free resolvents, with kernels given in terms of the Hankel functions H_ν^± of order ν = (n − 2)/2 by an explicit formula valid for all z > 0. It follows easily from these bounds and (1.2) that (3.10) holds. Since the operator 1 − VΔ^{-1} is invertible by assumption, with a bounded inverse denoted by T, it follows from (3.10) that there exists a constant λ_0 > 0 so that the operator in question is invertible for 0 < λ ≤ λ_0. By (3.7) and (3.11), we obtain (3.12). The bound (3.14) is proved in the next lemma.

Proof. In view of (3.13), it suffices to prove (3.14) with h = 1. Consider first the case 0 < σ ≤ 1. Using an elementary pointwise inequality, we obtain the desired bound. Let now σ ≥ 1.
We have a decomposition with constants c_± and with K_1^± as in the proof of Lemma 3.1. Hence, in this case, (3.14) (with h = 1) follows from (3.6) (with s = 0). □

By (3.12), (3.14) and (1.2), we have (3.15). Thus, (2.11) follows from (3.15) and the following lemma. Proof. Using the corresponding identity, one obtains (3.16), provided h is taken big enough. □
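For orientation, the interpolation step invoked above ("interpolation between (2.5) and the trivial bound", with α = 1 − 2/p) is the standard Riesz-Thorin argument. The following display is an illustration of that step, not a reproduction of the paper's lost formulas:

```latex
% Riesz--Thorin between the endpoints L^1 \to L^\infty and L^2 \to L^2:
% if \|A\|_{L^1\to L^\infty} \le M_\infty and \|A\|_{L^2\to L^2} \le M_2,
% then for every 2 \le p \le +\infty, with 1/p + 1/p' = 1,
\[
  \|A\|_{L^{p'}\to L^{p}} \le M_\infty^{\,\alpha}\, M_2^{\,1-\alpha},
  \qquad \alpha = 1 - \frac{2}{p}.
\]
% Taking A = e^{it\sqrt{G}}\eta_a(G), the L^2 \to L^2 bound is trivial by
% the spectral theorem, so the time decay is inherited, with its exponent
% multiplied by \alpha, from the L^1 \to L^\infty estimate.
```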
Climate change impacts and adaptation to permafrost change in High Mountain Asia: a comprehensive review

Changing climatic conditions in High Mountain Asia (HMA), especially regional warming and changing precipitation patterns, have led to notable effects on mountain permafrost. Comprehensive knowledge of mountain permafrost in HMA is mostly limited to the mountains of the Qinghai-Tibetan Plateau, with a strong cluster of research activity related to critical infrastructure providing a basis for related climate adaptation measures. Insights related to the extent and changing characteristics of permafrost in the Hindu Kush Himalaya (HKH) are much more limited. This study provides the first comprehensive review of peer-reviewed journal articles focused on hydrological, ecological, and geomorphic impacts associated with thawing permafrost in HMA, as well as those examining adaptations to changes in mountain permafrost. Studies reveal a clear warming trend across the region, likely resulting in increased landslide activity, effects on streamflow, soil saturation and subsequent vegetation change. Adaptation strategies have been documented only around infrastructure megaprojects as well as animal herding in China. While available research provides important insight that can inform planning in the region, we also identify a need for further research in the areas of hazards related to changing permafrost as well as its effect on ecosystems and, subsequently, livelihoods. We suggest that future planning of infrastructure in HMA can rely on extrapolation of already existing knowledge within the region to reduce risks associated with warming permafrost. We highlight key research gaps as well as specific areas where insights are limited. These are areas where additional support from governments and funders is urgently needed to enhance regional collaboration to sufficiently understand and effectively respond to permafrost change in the HKH region.

Introduction

Evidence from multiple studies indicates that mountain permafrost is degrading because of a warming climate (IPCC 2019). Due to rising air temperatures and increased insolation, the extent of frozen ground will continue to shrink, and alpine permafrost will continue warming, affecting mountain livelihoods, regional economies, and alpine ecosystems (Huss et al 2017). Consequently, climate change is expected to have significant impacts on regional hydrology, geomorphology, and ecology across the high mountains of Asia. Unlike glaciers, which can be easily detected, permafrost is a subsurface phenomenon, and hence observation and analysis are often difficult. This aggravates the problem of understanding its distribution and the changes occurring, especially in the case of discontinuous permafrost, and has led to relatively insufficient knowledge about permafrost dynamics in the region. Although permafrost in the different mountain ranges of China, especially the Qinghai-Tibetan Plateau (QTP), is relatively well investigated (Ran et al 2012, Zhang et al 2021), permafrost in the mountains of the Hindu Kush Himalaya (HKH; Gruber et al 2017) and Central Asia (Barandun et al 2020) needs further consideration to understand the degree and extent of climate change impacts.
Climate change is expected to cause increasing active layer depths due to permafrost thaw in the HKH (Wester et al 2019). While observations are limited, changes in the continuously warming permafrost regime are expected to generate several hydrologic, geomorphic, and ecological impacts in the region (Gruber et al 2017, Wester et al 2019). Managing water resources is going to be a major challenge for regions in Central Asia where populations living downstream depend heavily upon meltwater from glaciers and permafrost, which nourishes perennial rivers even during the dry seasons (Huss et al 2017). On the QTP, effects of a warming climate on the dynamics of low flows are expected in the high elevation regions but remain mostly uninvestigated (Wang et al 2019).

The negative impacts are diverse and wide-ranging and have received considerable attention in international climate policy (Huggel et al 2019). Changes associated with climate, including thawing permafrost, affect mountain livelihoods, and adaptation in mountain regions is crucial to alleviate the considerable socio-ecological impacts brought on by climate change (McDowell et al 2019). Nevertheless, insufficient knowledge about present and future impacts and corresponding appropriate adaptation measures will severely restrict the capacity to assess, prepare for, and mitigate the adverse effects of climate change.

It is expected that future changes in permafrost will be more rapid than what has been witnessed in the recent past (Gruber et al 2017). To anticipate these changes and to identify potential strategies for adaptation, measurement programmes are being carried out in various regions of the globe to evaluate and record the distribution, present condition, and potential future variations of mountain permafrost. These measurement activities have been increasing on the Tibetan Plateau and in Central Asia (Haeberli et al 2011) yet are lacking elsewhere in High Mountain Asia (HMA).

This comprehensive review attempts to contribute towards a deeper understanding of the present status of knowledge regarding climate change impacts and adaptation associated with warming permafrost in the high mountains of Asia. The specific questions we set out to answer in the study are:
• What are the major impacts and adaptation issues associated with climate-driven changes in mountain permafrost in HMA, how are they manifesting across the vast region, and what gaps in impact documentation and adaptation measures can be identified?
• Are definite trends of change visible and, if so, how homogeneous are they across the region?

Considering our knowledge and identified observation gaps, we aim to propose future research protocols as well as baseline strategies for regional stakeholders to strengthen understanding of changing permafrost and its implications as well as potential adaptation for the future.
Comprehensive review

The study is based on a comprehensive review of peer-reviewed articles obtained from the Web of Science database (Web of Science 2015) and Google Scholar, spanning the years from 1970 to 2021. Specific details on the review process are provided in the supplementary material (section S1). We consider the geographic definition of HMA as used in Bolch et al (2019). The initial literature search resulted in a total of 1505 studies, of which 1308 were removed after initial screening, resulting in a total number of 197 to be reviewed in detail (see section S1). The screening of literature resulted in publications that included studies with original data, those that discuss permafrost in the region but do not present new data, and global reviews that include the region. An overview of all studies included in this review is accessible at https://github.com/fidelsteiner/HMAPermafrost.

Information classification

Documented climate change impacts were broadly classified under three categories: hydrological, ecological, and geomorphological impacts. After full text review, the respective key themes were identified in each publication (a publication could be assigned more than one theme; a schematic sketch of this tagging step follows below). Geomorphological impacts crucially include the thermal state of permafrost as well as subsequent impacts related to slope instability and related hazards. Existing cases and possible consequences of land surface subsidence and slope failures on infrastructure were identified.

For hydrological impacts, hydrological responses to permafrost thaw were analysed. This included both quantitative knowledge (e.g. discharge or surface runoff) as well as qualitative information (e.g. sedimentation, presence or absence of solutes).

To understand ecological responses, variations in frozen soil depths, soil moisture, soil organic carbon, dissolved organic carbon, and release of greenhouse gases due to permafrost degradation were analysed.

The classification process is detailed in the supplementary material (S2).
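The screening and theme-tagging workflow described in this section can be sketched schematically as follows. The keyword lists and record fields are invented for illustration; the actual review assigned themes by full-text reading:

```python
# Hypothetical sketch of the screening and theme-tagging workflow; the
# real review classified papers by full-text reading, so the keyword
# lists below are illustrative only.
THEMES = {
    "hydrological": ["discharge", "runoff", "baseflow", "streamflow"],
    "ecological": ["soil moisture", "vegetation", "organic carbon"],
    "geomorphological": ["active layer", "thaw slump", "rock glacier",
                         "borehole", "slope"],
}

def tag_themes(abstract: str) -> set[str]:
    """Assign one or more impact categories to a study (a paper may
    receive several, as in the review)."""
    text = abstract.lower()
    return {theme for theme, keys in THEMES.items()
            if any(k in text for k in keys)}

def screen(records: list[dict]) -> list[dict]:
    """Keep only studies that passed screening and tag them; mirrors the
    1505 -> 197 reduction only in spirit."""
    kept = [r for r in records if r.get("relevant")]
    for r in kept:
        r["themes"] = tag_themes(r.get("abstract", ""))
    return kept
```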
Comprehensive review

From the 197 studies that were closely reviewed, 24 were reviews that included non-original data on permafrost in the region, and a further 42 only referred to permafrost in the conclusions, with the focus of the investigation being on another topic that had only some relation to permafrost. Data from the remaining 131 studies were investigated to understand regional patterns and trends (figure 1).

Studies on permafrost were sparse until 2005 (∼1 per year), rose more steeply than the global average of papers on science and technology until 2019 to nearly 40 per year, but have since stagnated (figures S1 and S2). Of all 197 studies we consider here for review, 93 were focused on the QTP, of which 30 covered the whole plateau, 23 focused on the mountainous Qilian Shan to the north, and 40 looked at some local field site (figure S3). Of the latter, 16 focused specifically on permafrost change affecting the Qinghai Tibet Economic Corridor (QTEC, figure 1(B)). 39 studies focused on the HKH, and 18 covered Central Asian mountain ranges, predominantly the Tien Shan (12, figure S3). Studies are visibly clustered in both elevation and latitude around the centre, due to the focus of research around the QTEC, with further smaller clusters across the mountain ranges of the HKH, Qilian Shan and Tien Shan (figure 2(B)).

More than 90% of all reviewed articles mention climate change impacts, but only about 40% refer to climate change adaptation actions (figure S7(A)), predominantly related to infrastructure development and changing herding practices. Less than a quarter of reviewed publications indicate measures taken to avoid risks due to changing permafrost conditions (figure S7(B)), with most of these studies focusing on the QTEC. The academic sector is identified as the actor most engaged in adaptation action (figure S7(D)), followed by government entities, providing some evidence of a potential disconnect between those able to identify actions and those with a capacity to implement them.

Permafrost presence and impacts of climate change

Evidence from different field studies shows that the warming climate has continuously increased the active layer depths of mountain permafrost (Jin et al 2000, Zhao et al 2010, Liu et al 2017, Yin et al 2017) and shifted the lower limit to higher elevations (Fukui et al 2007, Li et al 2008). These transformations are affecting the soil moisture content, ground thermal regime, cryopedogenic mechanisms, active-layer detachment slides and different mass movement processes (Bockheim 2015, Yuan et al 2020). Possibilities of widespread mass wasting activity (figure 3) as a result of weakened mountain slopes are anticipated to increase due to changes in climate conditions (Fort et al 2009, Huggel 2009, Kalvoda and Emmer 2021).

Below we recapitulate the state of knowledge on permafrost and climate change from a hydrological, geomorphological and ecological angle, and identify impacts of permafrost change.

Geomorphological impacts

Climate change largely governs processes occurring in the active layer of permafrost regions in China, which has subsequent effects upon engineering practices and geohydrological properties (Jin et al 2000, Hu et al 2015). Numerous studies have investigated ground temperatures at a variety of depths, including in boreholes (Zhao et al 2010). Wang and French (1994) found a 0.2 °C to 0.3 °C increase over 15 years (∼0.013-0.02 °C yr⁻¹) at 20 m depth starting in the late 1970s next to the QTEC. Between 2003 and 2015 somewhat lower rates were found in the same region, at 0.011 °C yr⁻¹, with rates of 0.014 °C yr⁻¹ at 10 m and 0.009 °C yr⁻¹ at 30 m (Zhangqiong et al 2020). Permafrost depths were estimated between 0.8 and 2.1 m during this time. Between 1996 and 2006, Wu and Zhang (2008) found an increase of 0.043 °C yr⁻¹ at 6 m, while mean annual air temperatures increased at 0.1 °C yr⁻¹. Data between 2014 and 2016 in the same area at 15 m depth align with all previously measured rates, with rates up to 0.02 °C yr⁻¹ (Yin et al 2017). Within a 60 m borehole further north in the Tien Shan, rates of increase varied between 0.042 °C yr⁻¹ at 1 m depth and 0.018 °C yr⁻¹ at 58.5 m between 1992 and 2011 (Liu et al 2017), matching much older measurements further west at 20 m (Severskiy 2018). These data suggest regionally different gradients with depth between the QTP and the Tien Shan (figure S6). There are no observations available for other regions in HMA.
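As a back-of-envelope illustration of the regionally different gradients noted above, the published warming rates can be fitted against depth. The values below are taken from the studies cited in this paragraph, but the linear fit itself is purely illustrative:

```python
import numpy as np

# Published warming rates (degC per year) at depth, from the studies cited
# above; fitting a line through them is only a back-of-envelope check of
# the "different gradients with depth" remark.
qtp = {10.0: 0.014, 30.0: 0.009}          # QTEC borehole (Zhangqiong et al 2020)
tien_shan = {1.0: 0.042, 58.5: 0.018}     # 60 m borehole (Liu et al 2017)

def rate_gradient(profile: dict) -> float:
    """Least-squares slope of warming rate versus depth (degC yr-1 per m)."""
    z = np.array(list(profile.keys()))
    r = np.array(list(profile.values()))
    slope, _ = np.polyfit(z, r, 1)
    return slope

print(f"QTP:       {rate_gradient(qtp):+.5f} degC/yr per m")    # -0.00025
print(f"Tien Shan: {rate_gradient(tien_shan):+.5f} degC/yr per m")
```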
Few sites have been instrumented with sensors in the shallow soil layer (10–20 cm), including along the QTEC with measured and modelled data (Yin et al 2018) and measurements in the Central (Steiner et al 2021) and Western Himalaya (Wani et al 2020). Surface and thermal offsets are comparable between the QTEC site (3.5 °C and −0.3 °C, respectively; Luo et al 2018a) and Ladakh (−1.1 °C to 3.9 °C and −0.9 °C to 0 °C; Wani et al 2020). Temperature data eventually allowed estimates of active layer thickness (ALT), which on the QTP reached 1.7 m by 2011 at 3500 m, increasing by 2 cm yr⁻¹ since 1992 (Liu et al 2017), and 3.4 m at 4628 m between 2014 and 2016, with a rate of change of 21 cm yr⁻¹ (Yin et al 2017). At different boreholes across the plateau, ALT ranged from 1.05 to 3.22 m in 2007 and 2008, decreasing with elevation (Zhao et al 2010). In the Tien Shan the ALT was already up to 4 m in the 1970s, reaching up to 5.2 m by 2009 at 3300 m (Gorbunov et al 2004, Zhao et al 2010). In the Western Himalaya, ALT between 2016 and 2017 was found to be between 0.1 and 4.2 m at elevations between 4700 and 5600 m (Wani et al 2020).

There are fewer data on other geomorphological variables across HMA. Sorg et al (2015) show synchronous activity of rock glaciers at the decadal scale in Central Asia. Li et al (2013) show the formation of patterned ground in line with an expansion of soil moisture, most prevalent in the first 25 cm of soil. Zhong et al (2021) show deformation of thermokarst landforms around 3600 m between April 2016 and June 2018, finding 3.4 m of vertical change and 10.7 m of wall retreat. On the plateau, thaw slumps were also found to increase at ever larger rates, both below 4000 m a.s.l. (Mu et al 2020) and between 4400 and 5300 m a.s.l. (Luo et al 2019).

In the Tien Shan, rising temperatures and soil moisture content in permafrost soils have caused upfreezing, a phenomenon whereby freezing leads to the movement of deposited materials towards the soil surface, and the development of sorted circles (Li et al 2013).

An inventory of rock glaciers compiled for the Karakoram region, Tien Shan and Altai shows that rock glacier bodies can partially cover, obstruct, narrow or reroute several segments of mountain rivers (Blöthe et al 2019). These rock glaciers are a potential outcome of permafrost in non-equilibrium conditions. Increased warming is predicted to boost the degree of activity of rock glaciers and support their faster movement (Hartl et al 2023), increasing the potential for such events; however, investigations in this direction in HMA are lacking.

Hydrological impacts

Two separate studies suggest that permafrost thaw contributes more than one third of total discharge in headwater catchments on the northern fringe of the QTP; precipitation makes up half of discharge in a lower catchment with limited ice and snow melt (3367 m a.s.l.), but only 6% in a catchment at 4500 m a.s.l., where glacier melt accounts for more than 50% (Yang et al 2016, Zongxing et al 2016). For other mountainous parts of HMA, such partitioning is not available.
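As a simple illustration of the discharge partitioning reported for the high-elevation catchment above, the Python sketch below checks that the quoted source fractions are internally consistent (summing to roughly one, with the remainder attributable to other stores or uncertainty). The fractions are approximate readings of the numbers in the text, not data from the original studies.

```python
# Back-of-envelope check of the partitioning quoted for the ~4500 m
# a.s.l. catchment. Fractions of total discharge attributed to each
# source should sum to ~1; values are approximate readings of the text.
sources = {
    "glacier_melt": 0.50,     # "more than 50%"
    "precipitation": 0.06,    # "only 6%"
    "permafrost_thaw": 0.33,  # "more than one third"
}

for name, frac in sources.items():
    print(f"{name}: {frac:.0%}")

# Any remainder would be other stores (snowmelt, groundwater) or
# uncertainty in the published estimates.
residual = 1.0 - sum(sources.values())
print(f"unattributed: {residual:.0%}")
```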
A significant contribution of glaciers and thawing permafrost to the discharge of mountain rivers on the QTP indicates that the volume and seasonal variability of river discharge will be considerably affected by future warming in the region (Yang et al 2016). For example, in northwest China, statistical analyses of climate variables and river runoff in a mountain permafrost catchment showed a significant rise in winter air temperatures and a subsequent increase in monthly flows during winter months (Liu et al 2007). This was also found for long-term data between 1960 and 2014, due to an increase in baseflow as precipitation increased and the maximum thickness of the seasonally frozen ground decreased (Qin et al 2016). An increase in the contribution from groundwater during freezing days was also found in the far southeastern part of the plateau (Luo et al 2020). However, an increase in ground temperatures has also resulted in a decrease in winter runoff elsewhere (Gao et al 2016), as well as an increase in spring and a decrease in summer (Tian et al 2016). Local field studies have also investigated the link between thawing permafrost and the resulting decrease in soil moisture and desiccation, providing an important link between cryosphere change and its direct impact on ecology (Wang and Wu 2013, Sun et al 2019).

Regional observations of climate conditions in Central Asia point to continued warming leading to retreat of glaciers and permafrost (Shahgedanova et al 2018). As a result, future mountain river discharge is expected to experience an initial increase followed by a reduction.

Ecological impacts

Changing permafrost has varied impacts on ecosystems on the QTP, with swamps and cold meadows suffering the most negative impacts between the Kunlun Pass and the Tanggula Shan (Wang et al 2006). Similar reductions were also observed for mountain vegetation cover in the Yangtze and Yellow Rivers region of the Tibetan Plateau (Yang et al 2006). One explanation could be an increase in evapotranspiration from rapid decreases in frozen soil, negatively affecting vegetation growth (Wang et al 2020). A general decrease in soil moisture due to permafrost thaw has already been observed in a number of studies (Wang and Wu 2013, Xu et al 2018, Sun et al 2019). In the Qilian Shan, Qin et al (2016) found an increase in leaf area index and an earlier start of the growing season as permafrost thaws. A similar increase in vegetation cover was also found further inland on the plateau, resulting in a 65% increase of organic carbon in the topsoil between 1986 and 2000 (Genxu et al 2008). There is some field evidence on the QTP for dissolved organic carbon increasing in relation to thawing permafrost where snow melt does not play a significant role (Zhu et al 2020).
Natural hazards

As the active layer changes, shallow slope movement has been shown to increase (Dini et al 2019, Daout et al 2020), while deeper and slower warming of permafrost is expected to result in increasingly large bedrock failures (e.g. Shugar et al 2021). If these movements occur close to infrastructure or adjacent to lakes, they can pose hazards for downstream areas. High-elevation lakes formed by natural dams are likely to increase in number under the influence of climate change (Zheng et al 2021), and the potential threat from surrounding unstable slopes has been indicated at the regional scale (Furian et al 2021). The existing strength and potential disintegration of natural dams under rising temperatures are uncertain (Korup and Tweed 2007), and dam stability in light of potentially thawing permafrost and ice cores remains a concern. Due to regional warming, the lower limit of mountain permafrost has been gradually shifting to higher elevations in China (Li et al 2008), although documentation of the lower limit is so far lacking for most other regions in HMA. In addition, ground temperatures are rising, depths of thawing permafrost are increasing, permafrost islands are appearing, and taliks are expanding (Jin et al 2000). These ongoing changes clearly indicate that the stability of infrastructure is at great risk as the extent of permafrost cover on the QTP decreases and permafrost areas shift upward (Li et al 2003). However, studies investigating permafrost-related threats to infrastructure and livelihoods beyond the QTEC remain lacking in HMA.

Adaptation

Climate and related cryospheric change, together with other anthropogenic pressures, is expected to further impact regional geomorphology, hydrology, ecology and economy in future, calling for appropriate adaptation responses. This has been recognized in the western Himalaya, where studies highlight that permafrost has strong scientific as well as societal significance (Ali et al 2018), and along the QTEC, where monitoring has been in line with concerns about the sustainability of both road and rail infrastructure. For other infrastructure spanning HMA, such dedicated appraisal of permafrost does not exist. We argue that future adaptation should be built on dedicated monitoring and awareness of linkages between permafrost change and hydrology, ecosystems and natural hazards, which we review below. Furthermore, although not a focus of this review, we call attention to the importance of recognizing and addressing underlying socioeconomic factors that can constrain adaptation across the region (e.g. McDowell et al 2020).

Permafrost monitoring

To date, intermittent observations, irregular monitoring, and insufficient field-based observations have been a major constraint to effective understanding of the interrelationships among ecological, hydrological and geomorphological attributes in permafrost environments of HMA (Luo et al 2018b). Permafrost monitoring across the HKH in Nepal, India, and Pakistan has received some attention in recent years, including mapping of rock glaciers (Schmid et al 2015, Stumm et al 2020) and ground temperature measurements (Steiner et al 2021). Similar efforts were made in Ladakh (Wani et al 2020), and in September 2015 an initial workshop was held in Gilgit, Pakistan.
Adaptation in the hydrological sector

To minimize the risks from slope failures and GLOFs associated with warming permafrost, well-defined frameworks for hazard and risk assessment have been proposed (e.g. GAPHAZ 2017, NDMA 2020), including specifically for thermokarst lakes in Central Asia (Falatkova et al 2019) and the hydropower sector (Li et al 2022). Such frameworks recommend monitoring of mountain lakes that are susceptible to outbursts, evaluation of the magnitude and impact of potential hazard events, and implementation of proper risk-reduction measures, including early warning systems and related land-use zoning (Ives et al 2010, NDMA 2020).

In China, several research initiatives exploring different aspects of geocryology as well as hydrogeology have been carried out since the early 1960s, including the development of regional maps. Such studies have focused on processes and interactions related to permafrost and groundwater and have contributed to the knowledge of impacts of climate change on cold-region hydrology (Cheng and Jin 2013). These investigations provide important baseline information, from the past to the present, for developing national plans and policies to adapt to changing climate conditions and understand subsequent impacts upon the hydrological cycle in cold and arid regions.

Ecosystem protection and adaptation

In China, there has been a gradual shift in the focus of national grassland policy from sustainable socioeconomic development towards conservation (Fang et al 2011). In this context, one adaptation option is to focus on compensation mechanisms as land for cultivation is lost, as well as the provision of vocational training and livestock replacement to provide a broader scale of potential employment in affected regions. Wang et al (2020) suggest that changed grazing patterns, following directives by the government, resulted in positive effects for ecosystems from 2000 onwards, compared with negative contributions in the two decades prior; however, these actions were not able to offset the degradation of vegetation caused by a decrease in the depth of frozen soil over the same period.
Reducing risk of natural hazards

Appropriate applications of hazard and risk management principles in glacial and permafrost mountain areas need to be planned in such a way that probable future climate conditions are taken into account, along with changing socio-economic conditions (GAPHAZ 2017). Research into the effect of permafrost change on infrastructure, as well as the spatiotemporal effects of infrastructure such as roads on thermal regimes in permafrost areas, has developed from early studies monitoring ground temperatures (Wang and French 1995) to the use of state-of-the-art remote sensing techniques (Zhang et al 2019a). This assists in designing operational frameworks for planning construction programmes and selecting appropriate locations to avoid potential threats to the environment from future operations. The national project 'Research on a series of technologies for highway constructing in the permafrost regions' was launched by the Chinese Ministry of Communication in 2002 to cope with the difficulties related to climate-induced warming of permafrost. The project was designed to investigate aspects of construction, management, and protection of engineered structures, especially roads, in high-altitude permafrost regions (Wang et al 2009). Field-based investigations and mathematical simulations of an unusual natural permafrost location within a scree slope in northern China have led to the conclusion that the thermal conductivity of peat is essential to preserving permafrost even under warm climate conditions (Niu et al 2016). This discovery has helped ensure the effectiveness of using crushed rock layers as slope cover in recent construction of the Qinghai-Tibet Railway.

In China, to ensure safe operation of engineering corridors in permafrost areas, appropriate management practices are followed, which include quantitative as well as qualitative analysis (Luo et al 2017). These management practices include continuous monitoring through surveying using high-precision GPS, borehole drilling, microwave remote sensing, terrestrial photogrammetry, electrical resistivity tomography and terrestrial laser scanning.
Impacts of changing permafrost

Water from thawing permafrost has been found to contribute one third of the total discharge of a single catchment, but such data are lacking elsewhere. With increasing temperatures, an increased contribution of permafrost to winter baseflow has been recorded at a limited number of sites, an effect documented elsewhere especially for discontinuous permafrost (Walvoord and Kurylyk 2016), although there is some evidence to the contrary. This follows the generally heterogeneous response of streamflow to changing permafrost also observed in the Arctic (Walvoord and Kurylyk 2016). The formation of ice layers as groundwater discharges in winter (Woo 2012) has been seen at different sites in HMA but so far only documented in Ladakh (Brombierstäudl et al 2021). As in the Arctic (Hayashi 2013), the relevance of a changing hydrology for ecosystems and nutrient transport is well documented on the QTP, resulting in concerns over desiccation of soils and impacts on herding and plant species. What is so far largely lacking is a detailed description of the subsurface processes affecting hydrology, either through observations or modelling. As snow cover changes across HMA, with heterogeneous trends but expected significant impacts on hydrology (Kraaijenbrink et al 2021), we also expect impacts on permafrost (Zhang 2005), but in-depth analysis is so far lacking in the region. It is projected that even though discharge in river basins with thawing permafrost may gradually increase, river runoff will eventually decrease and become largely dependent upon precipitation distribution and seasonality. Similarly, an increase in winter base flows and surface runoff could be considered beneficial, as it ensures greater-than-anticipated discharge in mountain rivers that could be utilized for increasing hydropower capacity and strengthening irrigation management systems for cultivated lands downstream. However, these benefits are anticipated to be only temporary.

Geomorphic impacts originating from fragile mountain slopes, such as outburst floods, landslides, debris flows, rock falls, ice and rock avalanches, ground subsidence and other mass wasting events, can have catastrophic consequences that extend far downstream. Studies have indicated a potential link between large-scale, highly mobile mass movements and changes in permafrost in the upstream catchment; however, such links remain largely speculative owing to a lack of in situ observations (Maharjan et al 2021, Shugar et al 2021, Sattar et al 2023). A slow-onset hazard related partially to thawing permafrost is the continued rise of sediment loads in river discharge, posing challenges especially to hydropower infrastructure (Li et al 2022). More recently, the potential costs to mitigate risks associated with permafrost thaw on the QTP have been quantified at $6.31 billion by 2090, with ample potential for reduction if warming is limited (Ran et al 2022a).
Continuous monitoring and assessment of biophysical and social systems affected by permafrost change is challenging. Understanding the impacts of global climate change on permafrost in HMA requires knowledge about changes in each climate variable that controls permafrost processes, many of which have been discussed in the literature on HMA (figure 4). However, a major challenge lies in the integration of different methods to monitor and understand changes in components and processes (Bugmann et al 2007). While numerous individual aspects of permafrost have been investigated in HMA in recent years, including combinations of multiple datasets and multi-disciplinary approaches to aid adaptation, as along the QTEC, this integrated approach is still lacking across the region.

Research gaps and recommendations

There have been a number of calls for more long-term monitoring of permafrost globally (Haeberli and Gruber 2008) and regionally: for the QTP in 2000 (Jin et al 2000), the Tien Shan seven years later (Marchenko et al 2007) and, another decade later, for the HKH (Gruber et al 2017). A large amount of evidence followed, with research studies increasing markedly from around 2015 (figure S1), especially for the relatively flat QTP (figures 1 and 2). For the more mountainous HKH to the south, this has so far not translated into concerted efforts for long-term monitoring, and the sustainability of efforts established in the Tien Shan and Altay during the Soviet Union era is unclear. However, the large number of studies in the equally mountainous Qilian Shan (figure 2) provides some basic understanding of potential impacts of changing permafrost outside the plateau.

The steadily increasing number of scientific publications in recent years is an indicator of the increasing scientific attention being given to climate change impacts and adaptation related to mountain permafrost in the region. A majority of these publications identify that the current status of research is insufficient and recommend additional research efforts. Nevertheless, a growing number of scientific investigations confirm the presence of significant climate change impacts and point to the urgent need for appropriate adaptation action in the region. Hence, an incomplete scientific knowledge base should not prevent moving forward with adaptation action.
Studies investigating future projections of permafrost and its consequences in HMA are relatively rare, limited to the Tibetan Plateau, and vary in range and across different regions. There is some consensus that projected areal extents will be reduced by approximately 40% after mid-century (Guo et al 2012, Ni et al 2021), with projections to the end of the century varying between a 40% and an 80% reduction. ALT is projected to increase by between 5 and 30 cm, and eventually by more than 30 cm, irrespective of concentration pathways, with an increase from south to north (Zhao and Wu 2019). A recent study suggests that soil water content needs to be accounted for in ALT change projections to avoid overestimation, but nevertheless projects an average ALT on the QTP of 4 m by the end of the century (Ji et al 2022). This near-certain increase in ALT is expected to eventually lead to a decrease in surface runoff (Guo et al 2012), suggesting that peak flow associated with permafrost thaw is also imminent, with potentially drastic consequences for ecosystems as alpine meadows dry out (Zhao and Wu 2019). It is expected that, in the long run, although the richness of certain species may increase, the diversity of species will gradually decrease (Jin et al 2021). Consequences are dire for infrastructure, with hazards projected for one third of the settlement area along the QTEC by mid-century (Guo and Sun 2015), a value similar to projected risks for circumpolar infrastructure (Hjort et al 2022).

We show that field-based investigation and monitoring is the most applied method for mountain permafrost assessment, while modelling studies are rare. However, permafrost across the region covers extensive areas, and therefore field-based investigations offer only limited knowledge. Here, remote sensing methods and numerical simulations are more feasible procedures for extensive regional analysis. A preliminary mapping of rock glaciers (Schmid et al 2015) has been a benchmark and paved the way for multiple recent studies on rock glaciers (Jones et al 2018, Haq and Baral 2019, Pandey 2019) and initial discussions of the impact of permafrost change on infrastructure (Streletskiy et al 2012), and has led to quantifications in HMA even considering future projections (Guo and Sun 2015, Hjort et al 2022). Although permafrost maps, especially for the plateau, have existed for a long time (Ran et al 2012), more recent studies have improved their granularity (Gruber et al 2017, Obu 2021, Ran et al 2022b), while still leaving room for improvement, especially in mountainous regions. Such benchmark studies are central to generating interest in the scientific community and guide the way for future research; regional studies that link ecology and hazards to permafrost should be attempted to provide groundwork for these relatively underrepresented topics.
Extensive areal coverage makes permafrost a regional phenomenon, and therefore transboundary rather than localized national initiatives seem essential for effective adaptation, particularly concerning hydrological impacts. Building on documented investigations and well-built infrastructure in response to climate change-related impacts on mountain permafrost (especially on the QTP) could assist in developing regional programmes to address the issue (see Schmid et al (2015) and Allen et al (2016) as examples). Such collaboration can extend beyond the region, to build on experience with permafrost science, impacts and adaptation in Europe and North America, and provide fertile ground for future permafrost research programmes that are not confined to specific locations but address most mountain regions globally.

Mountain water resources are highly likely to be impacted by changes in climate conditions. Water management under such conditions should comprise well-defined policies for water allowance, reservoir installation, irrigation management, responses to low and high flows, and prevention of unnecessary loss of distributed water through different mechanisms (Beniston and Stoffel 2014). Ecological impacts such as growth in the richness and diversity of herb, shrub and tree communities are valuable for ecosystem services. Their role and interconnectedness with changing permafrost need to be appreciated in order to better anticipate potential cascading risks that are currently difficult to project (Ehlers et al 2022).

Based on the available knowledge on permafrost, local policies regarding water, ecosystems, livelihoods, and hazards should include the role and impact of permafrost in future.

Conclusions

In this study we have reviewed the state of knowledge on permafrost, its change and existing or envisioned adaptation measures across HMA. Insights from a total of 197 studies show increasing interest in the recent past, from five studies per year around 2010 to more than 20 only 10 years later. While initial studies focused mainly on ground temperatures, recent research has become more diverse, investigating impacts on hydrology, surface displacement and ecology using field data as well as remote sensing and modelling approaches. Direct information on permafrost presence relying on field data remains, however, limited to the central QTP, with only three field monitoring sites in the HKH and three borehole locations in the Tien Shan.

The majority of documented impacts include a continuing increase of ground temperatures across all monitored sites, from which an increase in ALT follows, as well as indications of a decrease in winter runoff and drying of soils as permafrost thaws. These observed changes are projected to continue unabated into the future. The documented impact on hydrology is confined to a few catchments in the northwestern QTP, and responses vary with elevation and the contributions of other water sources. More studies across HMA are required to assess the impact of permafrost on hydrology and potentially develop estimates of peak water, as has been done for glacier and snow melt before. Additionally, there is evidence that permafrost change has led to slope movements and increased mass wasting. What is less clear is how such changes are affecting existing infrastructure or infrastructure under development, and how compound drivers, e.g.
an increase in liquid precipitation at altitude or the impact of road construction, further exacerbate or otherwise influence these hazards.

While global maps of permafrost extent exist, an accurate understanding of the lower permafrost limit as well as high-resolution (<1 km) maps remain absent. Present as well as anticipated future direct impacts of permafrost change on livelihoods, and potential migration due to changes in vegetation or evolving hazards, remain largely unstudied. This includes anticipated risks for major transboundary infrastructure, such as the two main road (and possibly future road) corridors linking China and Nepal and China and Pakistan (the Karakoram Highway), as well as the Pamir Highway linking China, Kyrgyzstan, Tajikistan, and Afghanistan. Considering the lack of knowledge on permafrost extent, its interrelation with ecosystems and the cascading nature of expected changes, we also foresee unanticipated risks that are difficult to prepare for, calling for adaptation strategies that are flexible enough to account for unknown future developments. The direct effect of permafrost change on sediment loads in rivers, and the subsequent risks for hydropower and other infrastructure adjacent to waterways, remains largely underappreciated. Finally, local and Indigenous knowledge regarding permafrost distribution, impacts of warming and related adaptation does not surface in any of the peer-reviewed literature to date. In the Arctic, the need to appreciate and integrate such knowledge into conventional research practice has been emphasized very recently with respect to permafrost (Ulturgasheva 2022, Gruber et al 2023), as well as at a global level for climate science in general (Miner et al 2023). For HMA, in a climate or cryosphere context this has been highlighted with respect to hazards (Emmer et al 2022, Acharya et al 2023) but needs to be documented and integrated in future studies on permafrost.
The increasing volume of literature focusing on multi-disciplinary research is already a positive indication that awareness of the wide-ranging impacts of permafrost degradation is advancing, although there has been relatively less focus on adaptation options in mountain regions. There remain few holistic approaches beyond the central QTP that would allow strong recommendations for adaptation. We argue this is due to a combination of factors, namely a lack of continuous monitoring, difficulty in identifying clear trends (as has been done for glacier ice) and a lack of communication of the (potential) effects of changing permafrost in the region. We therefore recommend that successful practices of permafrost monitoring, already in place for decades in a few locations, be replicated elsewhere, and that knowledge on the topic be exchanged in regional forums. Modelling and field studies investigating the high mountain water balance need to add the permafrost component in order to evaluate its role. Including the effects of enhanced nutrient cycling and lateral carbon flux in future projections will inform the risks of climate-induced permafrost thaw for hydrologic, geomorphologic, ecological, and infrastructure systems. Finally, following the successful focus of research and subsequent adaptation recommendations for critical infrastructure along the QTEC, other infrastructure developments across the region should learn from this process and integrate monitoring and modelling approaches into their planning. This could also include an estimate of the economic costs of potential non-action to adapt and a quantification of any other associated risks. Due to the transboundary nature of the observed and anticipated impacts, an increase in scientific collaboration, especially focused on less represented mountain regions, remains necessary. This needs to be accompanied by active participation of local authorities and young researchers, thus contributing towards capacity building on a regional scale.

Figure 1. (A) Overview of studies with original data from specific field sites across HMA (n = 95). Permafrost map for the Northern Hemisphere (Obu et al 2019) and Chinese railways (https://download.geofabrik.de/asia/china.html) shown in the background. (B) Focus on the QTEC area in the central Tibetan Plateau, showing a cluster of studies.

Figure 2. (A) Conceptual north-south cross section across HMA, with the main interests of published studies across elevation ranges described in the centre. (B) Relative total land area and total area covered by permafrost for each elevation band. (C) Location of the studies across the north-south transect.

Figure 3. Examples of surface changes attributed to a change in permafrost, ranging from patterned ground (A), (B) and thaw slumps (D), (E) to landslides (F), (G) and debris flows (H), (I). Permafrost landscape near the China-Nepal border, Humla, Nepal (A); Hidden Valley, Mustang, Nepal (B). Tsho Rolpa lake situated below steep permafrost head walls, Nepal (C). Thaw slumps on the trekking route to Yala Glacier in Langtang, Nepal (D). Yaks grazing near a thaw slump near the Limi Lapcha road in Humla, Nepal (E). Permafrost slope failures near Kyangjing Village in Langtang, Nepal (F). Landslide due to floods near Melamchi, Nepal (G). Settlement destroyed (H) and bridge damaged (I) due to a flood event in Melamchi, Nepal.

Figure 4.
Schematic representation of the climate system, permafrost processes, impacts of permafrost change on biophysical and social systems, and adaptation measures to minimize the impacts of permafrost change in HMA. Continuous monitoring and assessment of biophysical and social systems is necessary to address the impacts, which are bidirectional, as changes in the biophysical system affect the social system and vice versa.

References

Acharya A, Steiner J F, Walizada K M, Ali S, Zakir Z H, Caiserman A and Watanabe T 2023 Review article: snow and ice avalanches in High Mountain Asia-scientific, local and indigenous knowledge Nat. Hazards Earth Syst. Sci. 23 2569-92

Ali S N, Quamar M F, Phartiyal B and Sharma A 2018 Need for permafrost researches in Indian Himalaya J. Clim. Change 4 33-36

Allen S K, Fiddes J, Linsbauer A, Randhawa S S, Saklani B and Salzmann N 2016 Permafrost studies in Kullu district, Himachal Pradesh Curr. Sci. 111 550

Barandun M, Fiddes J, Scherler M, Mathys T, Saks T, Petrakov D and Hoelzle M 2020 The state and future of the cryosphere in Central Asia Water Secur. 11 100072

Beniston M and Stoffel M 2014 Assessing the impacts of climatic change on mountain water resources Sci. Total Environ. 493 1129-37

Blöthe J H, Rosenwinkel S, Höser T and Korup O 2019 Rock-glacier dams in high Asia Earth Surf. Process. Landf. 44 808-24

Bockheim J G 2015 Global distribution of cryosols with mountain permafrost: an overview Permafr. Periglac. Process. 26 1-12

Bolch T et al 2019 Status and change of the cryosphere in the extended Hindu Kush Himalaya region The Hindu Kush Himalaya Assessment pp 209-55

Brombierstäudl D, Schmidt S and Nüsser M 2021 Distribution and relevance of aufeis (icing) in the Upper Indus Basin Sci. Total Environ. 780 146604

Bugmann H et al 2007 Modeling the biophysical impacts of global change in mountain biosphere reserves Mt. Res. Dev. 27 66-77

Cheng G and Jin H 2013 Permafrost and groundwater on the Qinghai-Tibet Plateau and in northeast China Hydrogeol. J. 21 5-23

Daout S, Dini B, Haeberli W, Doin M P and Parsons B 2020 Ice loss in the Northeastern Tibetan Plateau permafrost as seen by 16 yr of ESA SAR missions Earth Planet. Sci. Lett. 545 116404

Dini B, Daout S, Manconi A and Loew S 2019 Classification of slope processes based on multitemporal DInSAR analyses in the Himalaya of NW Bhutan Remote Sens. Environ. 233 111408

Ehlers T A et al 2022 Past, present, and future geo-biosphere interactions on the Tibetan Plateau and implications for permafrost Earth Sci. Rev. 234 104197

Emmer A et al 2022 Progress and challenges in glacial lake outburst flood research (2017-2021): a research community perspective Nat. Hazards Earth Syst. Sci. 22 3041-61

Falatkova K, Šobr M, Neureiter A, Schöner W, Janský B, Häusler H, Engel Z and Bene V 2019 Development of proglacial lakes and evaluation of related outburst susceptibility at the Adygine ice-debris complex, northern Tien Shan Earth Surf. Dyn. 7 301-20
Fang Y, Qin D and Ding Y 2011 Frozen soil change and adaptation of animal husbandry: a case of the source regions of Yangtze and Yellow Rivers Environ. Sci. Policy 14 555-68

Fort M, Cossart E, Deline P, Dzikowski M, Nicoud G, Ravanel L, Schoeneich P and Wassmer P 2009 Geomorphic impacts of large and rapid mass movements: a review Geomorphol. Relief Process. Environ. 15 47-64

Fukui K, Fujii Y, Ageta Y and Asahi K 2007 Changes in the lower limit of mountain permafrost between 1973 and 2004 in the Khumbu Himal, the Nepal Himalayas Glob. Planet. Change 55 251-6

Furian W, Loibl D and Schneider C 2021 Future glacial lakes in High Mountain Asia: an inventory and assessment of hazard potential from surrounding slopes J. Glaciol. 67 653-70

Gao T, Zhang T, Cao L, Kang S and Sillanpää M 2016 Reduced winter runoff in a mountainous permafrost region in the northern Tibetan Plateau Cold Reg. Sci. Technol. 126 36-43

GAPHAZ 2017 Assessment of glacier and permafrost hazards in mountain regions: technical guidance document Standing Group on Glacier and Permafrost Hazards in Mountains (GAPHAZ) of the International Association of Cryospheric Sciences (IACS) and the International Permafrost Association (IPA) 72

Genxu W, Yuanshou L, Yibo W and Qingbo W 2008 Effects of permafrost thawing on vegetation and soil carbon pool
Oxidised Albumin Levels in Plasma and Skeletal Muscle as Biomarkers of Disease Progression and Treatment Efficacy in Dystrophic mdx Mice

Redox modifications to the plasma protein albumin have the potential to be used as biomarkers of disease progression and treatment efficacy in pathologies associated with inflammation and oxidative stress. One such pathology is Duchenne muscular dystrophy (DMD), a fatal childhood disease characterised by severe muscle wasting. We have previously shown in the mdx mouse model of DMD that plasma albumin thiol oxidation is increased; therefore, the first aim of this paper was to establish that albumin thiol oxidation in plasma reflects levels within mdx muscle tissue. We therefore developed a method to measure tissue albumin thiol oxidation. We show that albumin thiol oxidation was increased in both mdx muscle and plasma, with levels correlated with measures of dystropathology. In dystrophic muscle, albumin content was associated with areas of myonecrosis. The second aim was to test the ability of plasma thiol oxidation to track acute changes in dystropathology: we therefore subjected mdx mice to a single treadmill exercise session (known to increase myonecrosis) and took serial blood samples. This acute exercise caused a transient increase in total plasma albumin oxidation and in measures of dystropathology. Together, these data support the use of plasma albumin thiol oxidation as a biomarker to track active myonecrosis in DMD.

Introduction

Albumin is the most abundant plasma protein and is present in almost all mammalian tissues, with approximately 60% of total body albumin in the extravascular compartment of muscle, skin, and adipose tissue [1,2]. Apart from its roles in osmosis, signalling, and transport, albumin is considered an important antioxidant due to the free thiol group of cysteine 34 (Cys34), which allows it to serve as a trap for reactive oxygen and nitrogen species [3]. Cys34 exists mostly in a reduced state in human plasma but is susceptible to direct oxidation by oxidants or indirect oxidation via thiol/disulfide (SH/SS) exchange reactions [4-6]. Albumin has three isoforms according to the redox state of the free cysteine residue at position 34: mercaptalbumin (reduced albumin), non-mercaptalbumin-1 (reversibly oxidised albumin), and non-mercaptalbumin-2 (irreversibly oxidised albumin) [7]. It should be noted that in mice only, we have observed an additional cysteine residue on albumin that is susceptible to redox modifications, whereas in humans, rats, dogs, cows, horses, and sheep we have observed only one (consistent with previous research). Oxidative modifications of serum albumin Cys34 have previously been investigated in exercise [8] and in various disease states, such as organ failure, kidney diseases, and diabetes mellitus [4,9-14], where increased percentages of reversibly and irreversibly oxidised plasma albumin Cys34 are reported.
We have investigated the use of plasma albumin thiol oxidation as a blood biomarker in animal models of Duchenne muscular dystrophy (DMD), a fatal X chromosome-linked disease with an incidence of 1 in 5000 male births (reviewed in [15-17]). DMD occurs due to mutations in the dystrophin gene that result in dysfunctional or missing dystrophin protein in skeletal muscle [18]. An absence of functional dystrophin leads to a severe loss of muscle mass over time, due to increased susceptibility of myofibres to sarcolemmal damage resulting in myofibre necrosis (myonecrosis) (reviewed in [19,20]). While the precise mechanisms of myonecrosis and progressive dystropathology remain unclear, oxidative stress caused by excessive generation of oxidants has long been widely implicated (reviewed in [21-25]). We recently investigated the relationship between myonecrosis and oxidative stress in dystrophic muscle and showed that protein oxidation mainly colocalised with areas of myonecrosis and associated immune cell infiltration [26]. We also showed that in the mdx mouse model for DMD [27], plasma albumin thiol oxidation is increased at various stages of the disease and correlates with plasma creatine kinase (CK) levels, a marker of dystropathology [28].

Our research into this potential blood biomarker for DMD hypothesised that the increased albumin thiol oxidation in mdx plasma is due to increased oxidative stress associated with myonecrosis within the large mass of dystrophic muscle. The primary aim of the present study was to test this hypothesis by examining the relationship between plasma and muscle albumin thiol oxidation using mdx mice. To facilitate this, we modified the plasma thiol oxidation technique developed in our laboratory [28] to be suitable for tissue analysis of albumin oxidation by immunoblot. We then measured albumin thiol oxidation in plasma and muscle from mdx and normal control C57 (WT) mice at two ages: 23 days (when myonecrosis, as a measure of dystropathology, is most severe) and 12 weeks (when myonecrosis is less active). We also included a group of 23-day-old mdx mice treated with taurine, since we and others have shown that taurine is an effective therapeutic intervention for this muscular dystrophy, including reduced myonecrosis [29-37].

We compared the levels of albumin thiol oxidation with albumin protein content and measures of dystropathology such as myofibre necrosis, muscle inflammation (myeloperoxidase activity), and membrane leakiness, as measured by plasma CK [38]. An additional aim was to further test the use of plasma albumin thiol oxidation as a biomarker of myonecrosis and dystropathology in response to an acute intervention known to modify the amount of myonecrosis. Since we could not take serial blood samples from the 23-day-old mice subjected to taurine treatment (such juvenile mice are too small), we measured albumin thiol oxidation in plasma from tail-vein samples of 12-week-old adult mdx mice subjected to a single treadmill exercise session, known to increase dystropathology [32,39]. We also tracked plasma CK and grip strength in these mice, as established biomarkers of muscle pathology.
This research developed a simple new method to measure albumin thiol oxidation in tissues. This was used as a marker of oxidative stress in dystrophic muscle, providing further evidence of a strong relationship between plasma and muscle albumin thiol oxidation (and other measures of dystropathology). This study confirms that plasma albumin thiol oxidation is a reliable biomarker that can be used to track levels of myonecrosis in mdx mice.

Materials and Methods

All chemicals and reagents were purchased from Sigma-Aldrich, St Louis, MO, USA, unless otherwise stated.

Animal Procedures

Experiments were carried out on dystrophic mdx (C57Bl/10ScSn-mdx/mdx) and normal wildtype control (C57Bl/10ScSn) mice (the parental strain for mdx) from the Animal Resource Centre, Murdoch, Western Australia. Mice were maintained at the University of Western Australia under standard conditions, with free access to food and drinking water. All experiments were conducted in strict accordance with the guidelines of the National Health and Medical Research Council Code of practice for the care and use of animals for scientific purposes (2004) and the Animal Welfare Act of Western Australia (2002), and were approved by the Animal Ethics Committee at the University of Western Australia (ethics number 2020ET000034). Young mice were sampled at 23 days of age and adult mice at 12 weeks.

Taurine Treatment

Juvenile mdx mice were given taurine from 15 days of postnatal age (prior to weaning and the acute onset of myonecrosis that occurs by 21 days) in soft chow containing 4% taurine. Untreated mdx and WT mice received soft chow without taurine. Each group included pups (n = 8), with approximately equal numbers of male and female mdx pups, and all males for the WT group. Mice were sampled at 23 days of age, after 7 days of taurine treatment.

Treadmill Exercise

Adult mdx mice aged 12 weeks (n = 7-8) underwent a single 30 min exercise session on a horizontal rodent treadmill (Columbus Instruments, Columbus, OH, USA), using an established protocol [39]. In brief, the protocol involved a settling (stationary) period of 2 min, an acclimatisation period of gentle walking for 2 min (2 m/min), a warm-up period of 8 min (8 m/min) and the main exercise session of 30 min at a pace of 12 m/min. Prior to the exercise session, and at 1 h and 24 h after the exercise bout, blood samples were taken from the tail vein using 22-gauge needles and heparinised capillary tubes. We have previously shown that at these post-exercise times, plasma CK and muscle protein thiol oxidation are increased [28,39]. Blood was centrifuged and plasma stored at −80 °C until biochemical analysis. Just before the time of tail vein sampling, the grip strength of mice was measured using a Chatillon Digital Force Gauge (DFE-002). Mice were placed on the front of the triangle bar (attached to a force transducer) and pulled gently until they released. Each mouse underwent 5 consecutive grip-strength trials; the grip strength value for each mouse was recorded as the average of the three trials with the highest force. Average grip strength was normalised to body weight. Mice were sampled 24 h after the exercise protocol.
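As a worked example of the grip-strength scoring described above (mean of the three strongest of five pulls, normalised to body weight), a minimal Python sketch follows; the force values and body weight are illustrative only, not experimental data.

```python
# Grip-strength scoring as described above: five consecutive trials per
# mouse, mean of the three strongest pulls, normalised to body weight.
def grip_strength_score(trials_n, body_weight_g):
    """Mean of the three highest forces (N), per gram body weight."""
    top_three = sorted(trials_n, reverse=True)[:3]
    return (sum(top_three) / 3) / body_weight_g

# Illustrative values for one mouse.
trials = [1.02, 1.18, 0.95, 1.25, 1.10]  # force in newtons, 5 trials
print(f"{grip_strength_score(trials, body_weight_g=28.5):.4f} N/g")
```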
At termination of the experiment, while mice were under terminal anaesthesia (2% v/v Attane isoflurane, Bomac, Hallam, VIC, Australia), whole blood was collected via cardiac puncture and immediately centrifuged, and plasma was removed and stored at −80 °C until biochemical analysis. Mice were then killed by cervical dislocation, and quadriceps muscles were dissected and either frozen in pre-cooled isopentane for histological analysis or snap frozen in liquid nitrogen for biochemical analysis.

Histology

Frozen muscles were cut in transverse sections (8 µm) through the mid-region on a Leica CM3050S cryostat (Leica, Wetzlar, Germany) and stained with Haematoxylin and Eosin (H&E). For morphological analysis, non-overlapping tiled images of transverse muscle sections were acquired with a Nikon Eclipse Ti microscope equipped with a CoolSNAP-HQ2 camera, using Nikon NIS-Elements software (version 4.0, Melville, NY, USA).

Muscle morphology was traced manually by the researcher using ImageJ software (version 1.53, National Institutes of Health, Bethesda, MD, USA). The area occupied by necrotic myofibres (myofibres with fragmented sarcoplasm and/or areas of inflammatory cells) was measured as a percentage (by area) of the whole muscle section. All section analyses were performed 'blind'.

Serial sections (as above) were also stained for albumin by immunohistochemistry. Briefly, sections were air-dried and incubated in 2% paraformaldehyde in phosphate-buffered saline (PBS), pH 7.2. After washing with Tris-buffered saline with Tween-20 (TBST), samples were incubated with 3% hydrogen peroxide. Sections were washed again and incubated overnight at 4 °C with antibodies to albumin (A0433, Sigma-Aldrich) diluted 1:20,000 in TBST. After washing, sections were incubated with horseradish peroxidase-conjugated goat anti-rabbit secondary antibodies (Thermo Fisher Scientific, Waltham, MA, USA) diluted 1:1000 in 5% skim milk in TBST for one hour at room temperature. Samples were washed and incubated with 3,3′-diaminobenzidine (DAB) solution for 15 min. Sections were finally washed with tap water, counter-stained with haematoxylin, dehydrated, cleared, and mounted for microscopy. Digital images were acquired as above.

Plasma CK

Plasma CK activity reflects the leak of CK from myofibres into the blood and is a classic systemic measure of damage and necrosis of dystrophic muscles [40]. CK levels were measured using a CK-NAC kit (CK110, Randox Laboratories, Crumlin, UK) and analysed kinetically using a BioTek PowerWave XS spectrophotometer with BioTek software (Gen 5, Agilent, Santa Clara, CA, USA).
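The necrotic-area measure described in the Histology subsection reduces to simple area arithmetic once regions have been traced in ImageJ. A minimal Python sketch follows; the pixel areas are hypothetical values standing in for exported ROI measurements, not data from this study.

```python
# Percentage of muscle section occupied by necrotic myofibres, as
# quantified manually in ImageJ. Areas below are placeholder pixel
# counts standing in for hypothetical exported ROI measurements.
necrotic_areas_px = [12500, 8300, 4100]   # traced necrotic regions
whole_section_px = 410000                 # whole muscle cross-section

percent_necrosis = 100 * sum(necrotic_areas_px) / whole_section_px
print(f"necrotic area: {percent_necrosis:.1f}% of section")
```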
Muscle Inflammation

Myeloperoxidase (MPO) is an enzyme secreted by neutrophils (inflammatory cells that appear very rapidly after tissue damage), and MPO activity is a useful biomarker of neutrophils in tissues [41,42]. MPO catalyses the production of hypochlorous acid (HOCl) from hydrogen peroxide and chloride [43], and HOCl reacts with 2-[6-(4-aminophenoxy)-3-oxo-3H-xanthen-9-yl]benzoic acid (APF) to form the highly fluorescent compound fluorescein, which is measured in this method, as previously described [29]. Briefly, frozen muscle was crushed under liquid nitrogen and homogenised in 0.5% hexadecyltrimethylammonium bromide in PBS. Samples were centrifuged and supernatants were diluted in PBS. Human MPO was used as the standard for the assay (Cayman Chemical, Ann Arbor, MI, USA). Aliquots of each experimental sample or MPO standard were pipetted into a 384-well plate, before APF working solution (20 µM APF and 20 µM hydrogen peroxide in PBS) was added. The plate was incubated at room temperature (protected from light) for 30 min, with fluorescence measured every minute using excitation at 485 nm and emission at 515-530 nm. The rate of change of fluorescence for each sample was compared with that of the standards, and results were expressed per mg of protein, quantified using the DC protein assay (Bio-Rad, Hercules, CA, USA).

Muscle Total Protein Thiol Oxidation

Total muscle protein thiol oxidation was measured using the 2-tag technique as described previously [29]. In brief, frozen muscle was crushed under liquid nitrogen before protein was extracted with 20% trichloroacetic acid (TCA)/acetone. Protein was solubilised in SDS buffer and protein thiols were labelled with the fluorescent dye BODIPY FL N-(2-aminoethyl)maleimide (FLM, Invitrogen, Waltham, MA, USA). Following removal of the unbound dye using cysteine, protein was re-solubilised in SDS buffer and oxidised thiols were reduced with tris(2-carboxyethyl)phosphine (TCEP), before the newly exposed reduced thiols were labelled with a second fluorescent dye, Texas Red C2-maleimide (Texas Red, Invitrogen). The sample was washed in 100% TCA, followed by acetone, and resuspended in SDS buffer. Samples were read using a fluorescent plate reader (Fluostar Optima, Offenburg, Germany) with wavelengths set at excitation 485 nm and emission 520 nm for FLM, and excitation 595 nm and emission 610 nm for Texas Red. A standard curve for each dye was generated using ovalbumin, and the results were expressed per mg of protein, quantified using the DC protein assay (Bio-Rad).
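One common readout of a 2-tag assay is the oxidised fraction of protein thiols, computed after converting each dye's fluorescence to a thiol amount via its ovalbumin standard curve. The Python sketch below illustrates this calculation; the standard-curve parameters and fluorescence readings are invented for illustration, and the exact readout reported in [29] may differ.

```python
# Convert raw fluorescence to a thiol amount using an ovalbumin standard
# curve (linear: signal = slope * thiols + intercept). Slope/intercept
# values below are invented placeholders.
def thiols_from_fluorescence(signal, slope, intercept):
    return (signal - intercept) / slope

# FLM labels reduced thiols; Texas Red labels previously oxidised thiols
# (exposed after TCEP reduction). Readings are illustrative only.
flm_thiols = thiols_from_fluorescence(5200, slope=410.0, intercept=150.0)
txr_thiols = thiols_from_fluorescence(2100, slope=380.0, intercept=120.0)

# Oxidised fraction = oxidised / (reduced + oxidised)
percent_oxidised = 100 * txr_thiols / (flm_thiols + txr_thiols)
print(f"protein thiol oxidation: {percent_oxidised:.1f}%")
```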
Muscle Albumin Thiol Oxidation Method Development

It is important to note that while all previous research has indicated that albumin has a total of 35 cysteine residues, with 34 forming 17 intramolecular disulfide bridges and the remaining residue (Cys34) being free and redox-active [7], we have consistently detected two thiols in mouse albumin that are susceptible to oxidation (confirmed by the UniProt sequence). We have not observed this in any other species. We therefore adapted our method to measure oxidised albumin specifically in mouse plasma and modified it further for use in muscle tissue. These methods are detailed below and illustrated in Figure 1. Malpeg undergoes PEGylation reactions with thiol groups on cysteine side chains and has a large molecular weight (5 kD is used in our method); it therefore causes a molecular weight shift that is observable on SDS-PAGE gels and immunoblots. This allows the separation of oxidised and reduced albumin (since oxidised cysteine cannot bind malpeg). We were therefore able to establish the oxidation state of albumin in both mdx plasma and muscle. As illustrated in Figure 1, most albumin in both WT and mdx muscle has one thiol oxidised; therefore, for skeletal muscle we present only the data for the percentage of albumin with both cysteine side chains oxidised (expressed as fully oxidised albumin).

Figure 1. Plasma and muscle tissue extracts are treated with malpeg, which binds to reduced albumin, causing a molecular weight shift that is detectable via immunoblot using antibodies to albumin. Since mouse albumin has two cysteine residues susceptible to redox modifications, albumin can be detected in three states, represented as three distinct bands on the immunoblot membrane. (A) represents albumin with two thiols in the reduced (R) state (with two malpeg molecules bound), with a 10 kD band shift observed. (B) represents albumin with only one reduced thiol, and
therefore one oxidised (Ox) thiol. This results in only one malpeg molecule bound and a 5 kD shift observed. (C) represents albumin with both thiols oxidised (with no malpeg bound), where no shift is observed. In the Results text, the extent of thiol oxidation (Ox albumin) is referred to as 'the sum of fully and partially oxidised albumin' (B + C) and 'fully oxidised albumin' (C). Note that in other species, only one cysteine residue (Cys34) is susceptible to redox modifications, and therefore only one shift is observed, when Cys34 is in the reduced state. Also of note, in mice we do not know which thiol group (Cys34 or the other) is more susceptible to oxidation, and therefore which one is highly oxidised in muscle.

In contrast, for plasma albumin, a significant amount of albumin has two reduced thiols in both WT and mdx mice. Therefore, for plasma, we present both the sum of the percentages of albumin with one and two thiols oxidised (which we have termed the 'sum of fully and partially oxidised albumin') and the percentage with two thiols oxidised (fully oxidised).

Muscle Albumin Thiol Oxidation Method

Analysis of muscle albumin thiol oxidation (non-mercaptalbumin-1 and -2) was performed using our method for the analysis of plasma samples, with some modifications [28]. Although plasma albumin thiol oxidation could be measured using electrophoresis (see below), that method was not sufficiently sensitive for muscle, and therefore an immunoblot with increased sensitivity was used. Frozen muscle samples were crushed under liquid nitrogen, and aliquots of approximately 10 mg were homogenised in PBS containing 1 mM methoxypolyethylene glycol maleimide (malpeg, 5000 g/mol, JenKem Technology), previously diluted in 40 mM imidazole, pH 7.4. In order to remove excess malpeg (which interferes with electrophoresis), protein was extracted using a chloroform-methanol protein precipitation method. Extracted protein was resuspended in loading buffer containing 94 mM Tris pH 7, 2% SDS, 0.015% bromophenol blue, 15% glycerol, and 350 mM dithiothreitol (DTT), and was heated for 15 min at 80 °C. Samples were resolved in 12% acrylamide and 1% SDS gels containing 1% (v/v) 2,2,2-trichloroethanol for fluorescent stain-free imaging [44]. Gels were imaged using the Stain-Free imaging program on the ChemiDoc MP Imaging System (Bio-Rad), and loading was checked by measuring the albumin signal. Proteins were transferred to nitrocellulose membranes using the Trans-Blot Turbo Transfer System (Bio-Rad). Immunoblotting was performed with antibodies to albumin (A0433, Sigma-Aldrich) diluted 1:60,000 in Tris-buffered saline with Tween-20 (TBST). Horseradish peroxidase-conjugated goat anti-rabbit secondary antibodies (Thermo Fisher Scientific) were diluted 1:10,000 in 5% skim milk in TBST. The ChemiDoc MP Imaging System (Bio-Rad) was used to capture chemiluminescence signals. ImageJ software (version 1.53) was used to quantify the resultant images [45]. Total albumin content was standardised to total protein (as measured by stain-free imaging).
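Quantification of the malpeg shift assay reduces to densitometry of the three albumin bands shown in Figure 1. The Python sketch below illustrates how the two reported metrics (fully oxidised albumin, and the sum of partially and fully oxidised albumin) could be derived from band intensities; the intensity values are placeholders, not experimental data.

```python
# Band-based quantification for the malpeg shift assay illustrated in
# Figure 1: three albumin bands (two, one, or zero malpeg bound) are
# densitometered (e.g. in ImageJ); intensities below are placeholders.
bands = {
    "reduced_2malpeg": 1800.0,   # band A: both thiols reduced
    "partial_1malpeg": 9400.0,   # band B: one thiol oxidised
    "fully_oxidised": 3200.0,    # band C: both thiols oxidised
}

total = sum(bands.values())
pct_fully_ox = 100 * bands["fully_oxidised"] / total
pct_partial_plus_full = (
    100 * (bands["partial_1malpeg"] + bands["fully_oxidised"]) / total
)
print(f"fully oxidised albumin: {pct_fully_ox:.1f}%")
print(f"partially + fully oxidised albumin: {pct_partial_plus_full:.1f}%")
```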
Sample analysis for irreversible albumin oxidation (non-mercaptalbumin-2) was also performed by reducing malpeg-labelled samples (prepared as above) with L-cysteine hydrochloride monohydrate, added to a final concentration of 3 mM, followed by incubation at room temperature for 60 min. Malpeg was then added to a final concentration of 3.5 mM, followed by incubation at room temperature for 15 min. Excess malpeg was removed and immunoblotting was performed as above. We observed that in both WT and mdx muscle, the majority of albumin had one cysteine residue undergoing irreversible oxidation, while in neither WT nor mdx muscle did both cysteine residues undergo irreversible oxidation. It was therefore determined that, in muscle, measurement of irreversible albumin cysteine oxidation was not a useful measure of dystropathology.

Plasma Albumin Thiol Oxidation

Plasma albumin thiol oxidation was measured using our established method [28], adapted for capillary electrophoresis (CE). In brief, plasma samples were incubated with 30 mM malpeg for 15 min. Malpeg-conjugated plasma was split into two aliquots: (1) aliquot 1 was diluted with CE buffer (0.01% DMSO, 45 mM phosphate pH 8.0 with 2.5% w/v SDS), and (2) aliquot 2 was reduced for irreversible albumin oxidation analysis. Reduction of reversibly oxidised thiols in aliquot 2 was performed by incubation with 7.5 mM cysteine for 30 min. Malpeg was then added to a final concentration of 15 mM, followed by incubation for 15 min. Reduced samples were diluted with CE buffer.

Capillary electrophoresis was performed using an Agilent 7100 CE system. The capillary was a 50-micron bare fused-silica capillary (Polymicro Technologies, Tucson, AZ, USA) with an effective length of 32.5 cm. The background electrolyte was composed of 45 mM phosphate pH 8.0 with 2.5% SDS. Diluted samples were injected hydrodynamically (10 mbar for 10 s), with electrophoretic separation performed by applying +15 kV for 15 min. Detection was by absorbance at a wavelength of 214 nm. Area-under-the-curve analysis was performed using Agilent ChemStation software (version B.04.03).

Statistics

Significant differences between groups were determined using GraphPad Prism software (version 9.4.1). Data were analysed using two-way ANOVA with post hoc testing. All data are presented as mean ± standard error of the mean (SEM). Significance was set at p < 0.05. Pearson's correlation was used to assess the relationship between plasma and muscle albumin thiol oxidation and measures of dystropathology.

Results

Characterisation of Dystropathology

To examine the effect of changes in dystropathology on albumin oxidation in the muscle and blood of mdx mice, we tested young mice at the time of peak active myonecrosis (23 days old) and young adults in whom there is less active myonecrosis (12 weeks). Myofibre necrosis was 16-fold higher in quadriceps of 23-day-old mdx muscle versus WT and 20-fold higher in 12-week-old mdx muscle versus WT (Figure 2A). Myofibre necrosis was 1.6-fold higher in quadriceps of 23-day-old compared with 12-week-old mdx mice (Figure 2A).
Characterisation of Dystropathology

To examine the effect of changes in dystropathology on albumin oxidation in the muscle and blood of mdx mice, we tested young mice at the time of peak active myonecrosis (23 days old) and young adults, where there is less active myonecrosis (12 weeks old). Myofibre necrosis was 16-fold higher in quadriceps of 23-day-old mdx muscle versus WT and 20-fold higher in 12-week-old mdx muscle versus WT (Figure 2A). Myofibre necrosis was 1.6-fold higher in quadriceps of 23-day-old compared with 12-week-old mdx mice (Figure 2A). Another classic marker of dystropathology, plasma CK content (a measure of myofibre membrane leakiness), was 12-fold higher in 23-day-old mdx versus WT mice and 24-fold higher in 12-week-old mdx versus WT mice (Figure 2B), with no significant difference between young and adult mdx plasma CK levels (Figure 2B). The characterisation of dystropathology by increased inflammation and oxidative stress was evident in mdx quadriceps muscles at both ages, compared with WT mice, as shown by increased MPO activity, a marker of neutrophil presence, and protein thiol oxidation, a marker of oxidative stress. The MPO activity in quadriceps muscles was 5.5-6-fold higher in mdx compared with WT mice (Figure 2C), with no significant impact of age. Similarly, total protein thiol oxidation in quadriceps was 1.4-1.5-fold higher in mdx compared with WT mice, with no impact of age (Figure 2D). Short-term treatment of juvenile mdx mice with taurine (from 15-23 days), compared with untreated control mdx mice, decreased myonecrosis by 38% (Figure 2A), plasma CK content by 60% (Figure 2B), muscle MPO activity by 45%, and muscle protein thiol oxidation by 25% (Figure 2C,D). This therapeutic intervention further demonstrates a strong association between these four different measurements of dystropathology in muscle and blood. These combined data in young and adult mdx mice form the background for the following in-depth analysis of protein thiol oxidation of albumin in dystrophic plasma and muscle. Taken together,
these data show that 23-day-old and 12-week-old mdx mice were suitable models to examine the effects of dystropathology on albumin oxidation.

Albumin Thiol Oxidation in mdx Plasma

Two forms of oxidised albumin were measured, as summarised in Figure 1: partially oxidised albumin (one thiol oxidised) and fully oxidised albumin (two thiols oxidised). In plasma, the sum of partially and fully oxidised albumin was 1.2-fold higher in both young and adult mdx mice, compared with WT, with no impact of age in either strain (Figure 3A). Fully oxidised albumin was 1.5-fold higher in the plasma of young (23-day-old) mdx mice compared with WT (Figure 3B). However, for adult (12-week-old) mice, there was no difference between strains. Levels of fully oxidised albumin were 50% lower in both adult strains, compared with young mice of the same strain (Figure 3B). These data indicate that oxidation of plasma albumin was particularly pronounced during periods of active myonecrosis (23-day-old young mdx mice). Taurine treatment of juvenile mdx mice for 7 days significantly reduced the sum of partially and fully oxidised albumin (Figure 3A) and fully oxidised albumin (Figure 3B) by 15% and 33%, respectively, in 23-day-old mice.

Levels of Albumin in Dystrophic and Normal Muscles

Dystrophy causes extensive tissue disturbances involving the vasculature, myonecrosis, and inflammation, which have the potential to affect albumin levels within the muscle and its exposure to oxidants. We therefore measured the levels of albumin in skeletal muscle. In mdx muscles, albumin content was higher (about 2-fold) at both ages, compared with WT (Figure 4).
Albumin content was approximately 3-fold higher in quadriceps of 23-day-old compared with 12-week-old WT and mdx mice (Figure 4). Taurine treatment prevented the increase in albumin content seen at 23 days in untreated mdx muscle (Figure 4), reflecting the prevention of myonecrosis, with albumin levels being similar to those of normal WT muscle.

Since albumin content was about 2-fold higher in (untreated) mdx quadriceps muscle at both ages (compared with WT mice), the location of albumin within the muscles was investigated by immunostaining using albumin antibodies. Albumin in WT muscles was present in the interstitium outside the myofibres (as expected), since albumin does not enter intact myofibres (Figure 5B,D). In mdx muscle, albumin staining was increased in areas of myonecrosis: in the interstitium where immune cells were present and inflammation was evident, and within myofibres that were undergoing necrosis (Figure 5F,H,J). This albumin staining is observed in hypercontracted myofibres (indicating an early stage of necrosis), myofibres that are fully degenerated, and some newly regenerating myofibres (Figure 5F,H,J).

Levels of Albumin Thiol Oxidation in mdx Muscle

We examined whether albumin in muscle was oxidised, given that dystrophic muscle is characterised by increased oxidative stress (Figure 2D). Fully oxidised albumin was 1.6-fold higher in 23-day-old mdx muscle versus WT and 4-fold higher in 12-week-old mdx muscle versus WT (Figure 6).
Fully oxidised albumin was 65% lower in 12-week-old WT muscle compared to 23-day-old WT muscle, but there was no difference in fully oxidised albumin between 23-day-old and 12-week-old mdx muscle (Figure 6). Taurine treatment decreased fully oxidised albumin in 23-day-old mdx muscle by 40% (Figure 6).

We compared the extent of albumin oxidation in plasma and muscle tissue by comparing the amount of fully oxidised albumin in plasma and muscle (Figures 3B and 6). In 23-day-old and 12-week-old WT mice, fully oxidised albumin was 3.3-fold and 2.5-fold higher, respectively, in muscle tissue relative to plasma. This difference was also observed in mdx mice, where fully oxidised albumin was 3.4-fold, 3.5-fold, and 5-fold higher in muscle than plasma in 23-day-old mice, 23-day-old mice treated with taurine, and 12-week-old mdx mice, respectively. Together, these data indicate that in mice, fully oxidised albumin is consistently higher in muscle than in plasma.

To assess whether changes in muscle albumin thiol oxidation are associated with increased necrosis and inflammation, we compared thiol oxidation with other measures of damage and inflammation (in both strains). All indices (muscle albumin content, total muscle protein thiol oxidation, myofibre necrosis, muscle inflammation, and plasma CK content) correlated significantly with both plasma and muscle albumin thiol oxidation; plasma and muscle albumin thiol oxidation also correlated with each other (Table 1).
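As an aside, the Pearson correlations summarised in Table 1 are straightforward to reproduce in Python. A minimal sketch; the arrays below are simulated stand-ins, not the study's per-mouse measurements:

```python
# Hypothetical sketch of the correlation analysis behind Table 1,
# using scipy instead of GraphPad Prism (N = 37 mice in the study).
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(1)
muscle_ox = rng.uniform(10, 60, size=37)            # % fully oxidised albumin, muscle
plasma_ox = 0.3 * muscle_ox + rng.normal(0, 2, 37)  # correlated plasma values

r, p = pearsonr(plasma_ox, muscle_ox)
print(f"Pearson r = {r:.2f}, p = {p:.3g}")          # significant if p < 0.05
```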
Table 1. Correlation between plasma and muscle fully oxidised albumin (see definition in Figure 1) and measures of dystropathology. Data are shown for 23-day-old and 12-week-old mdx and WT control mice. N = 37. r = Pearson correlation coefficient. * represents a significant correlation (p < 0.05).

Tracking Plasma Albumin Thiol Oxidation

We have previously proposed that plasma albumin thiol oxidation could be used to track acute changes in dystropathology (such as after the initiation of a drug treatment) [46]. Therefore, we tested changes in albumin thiol oxidation (and other measures of dystropathology, including plasma CK and muscle strength) in individual 12-week-old mdx mice that underwent a treadmill exercise session, known to exacerbate dystropathology. Tail vein blood samples were taken before, 1 h after, and 24 h after a single 30 min treadmill exercise session. These time points were chosen as we have previously shown that plasma CK and muscle protein thiol oxidation are increased at these times [28,39]. CK and grip strength were used as established biomarkers of muscle pathology to track the effects of exercise.
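The tracking arithmetic itself is simple: each post-exercise value is expressed relative to the same animal's pre-exercise baseline. A minimal sketch; the mouse IDs and values below are hypothetical, not the data shown in Figure 7:

```python
# Hypothetical sketch: per-mouse fold change over the pre-exercise baseline
# for serial samples (pre, 1 h post, 24 h post).
import pandas as pd

serial = pd.DataFrame({
    "mouse":      ["m1", "m1", "m1", "m2", "m2", "m2"],
    "time":       ["pre", "1h", "24h"] * 2,
    "ox_albumin": [20.0, 26.5, 20.8, 22.0, 28.1, 21.5],  # % oxidised albumin
})
baseline = serial[serial.time == "pre"].set_index("mouse")["ox_albumin"]
serial["fold_over_baseline"] = serial["ox_albumin"] / serial["mouse"].map(baseline)
print(serial)
```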
Since differences in irreversible oxidation levels of plasma albumin were observed when comparing mdx and WT mice, data are presented as reversible, irreversible, and both combined. Reversibly oxidised plasma albumin was 1.2-fold higher in mdx versus WT plasma (Figure 7C). One hour post-exercise, reversibly oxidised albumin in mdx plasma was 1.3-fold higher than baseline (pre-exercise; Figure 7C); by 24 h post-exercise, this had returned to baseline (Figure 7C). Irreversibly oxidised albumin in plasma was the same in (unexercised) mdx and WT mice (Figure 7D). At 1 h post-exercise, irreversibly oxidised albumin was 1.2-fold higher than baseline (pre-exercise; Figure 7D) and remained high at 24 h (i.e., it did not return to baseline levels, in contrast with reversibly oxidised plasma albumin). When the reversible and irreversible data were combined, combined oxidised albumin was 1.2-fold higher in (unexercised) mdx plasma versus WT (Figure 7E). At 1 h post-exercise, combined mdx oxidised albumin was 1.3-fold higher than baseline (pre-exercise; Figure 7E), and by 24 h post-exercise it had returned to baseline (Figure 7E).

Discussion

The new data presented here reinforce the relationship between plasma and muscle albumin thiol oxidation, as well as measures of ongoing dystropathology in mdx mice. The key results are the following: (i) a new method is described to measure albumin thiol oxidation in tissues utilising immunoblot. The use of this new method shows (ii) that levels of albumin thiol oxidation in dystrophic mdx plasma and muscle are significantly higher than in normal controls at two ages, 23 days and 12 weeks, and that these data correlate with measures of dystropathology, including global protein thiol oxidation, myofibre necrosis, muscle inflammation, and plasma CK release. We also demonstrate (iii) that plasma albumin thiol oxidation measured serially in mdx mice can track acute changes in dystropathology induced experimentally by an acute bout of exercise in adult mice. The significance of these combined new observations is discussed below.

Oxidative stress has been implicated in the pathology of DMD [47], so the accurate measurement of oxidation in dystrophic tissue can provide insight into the molecular mechanisms causing pathology. We have previously developed a method (2-tag) to measure global protein thiol oxidation in tissue, using two different maleimide tags that measure reduced and oxidised protein thiols [48]. This method has been used to successfully measure oxidative stress in skeletal muscle in the mdx mouse model of DMD [29]. We have found this to be an accurate and sensitive measure of oxidative stress; however, the method is challenging and requires training. We have also previously developed a sensitive method for plasma albumin thiol oxidation, which can be performed with immunoblot [28]. We therefore adapted this method to measure albumin thiol oxidation in tissue utilising immunoblot, making it a relatively easy-to-perform and readily accessible method for use as a sensitive marker of oxidative stress in muscle.
Of note, previous research suggests that albumin has a total of 35 cysteine residues, with 34 forming 17 intramolecular disulfide bridges, the remaining residue (Cys34) being free and redox-active [7]. However, we consistently detected two thiols in mouse albumin susceptible to oxidation, but not in any other species we have investigated, including humans, rats, dogs, cows, horses, and sheep. Interestingly, and unlike in plasma, we found substantial irreversible thiol oxidation (non-mercaptalbumin-2) on one cysteine residue only in muscle, even in normal WT mice. It is not known whether this irreversible oxidation occurs on this additional cysteine or on Cys34. Since we have not been able to find any other studies that have observed this additional thiol group on mouse albumin, nor any studies that have observed this extensive irreversible oxidation of albumin in muscle or other tissue, we are not able to further interpret the biological significance of these observations. Additional mass spectrometry analysis of mouse albumin, from both plasma and within tissue, is required to better understand the type of irreversible oxidation. The analysis of the extent of irreversible albumin thiol oxidation in muscle from other species (where this additional Cys residue is not observed) may also give some indication of the significance of this new result in mice.

An interesting observation was that the age of the mice (mdx and normal WT) had an impact on levels of both plasma and muscle albumin thiol oxidation, with levels of fully oxidised albumin initially high in juveniles at 23 days and then decreasing by 12 weeks (in adults). We have previously observed this in both mdx and WT plasma, with albumin thiol oxidation decreasing after 23 days and increasing again by 18 months [28]. Increased oxidation of albumin in aging has been reported previously [49], but the high levels of albumin oxidation in juvenile mice appear to be a novel observation, as we were not able to locate any literature reporting this occurrence. Further work is required to establish the biological significance of the higher level of albumin oxidation during early post-natal growth and whether this is a consequence of increased generation of oxidants or decreased antioxidant activity.

While albumin is abundant in plasma, most body albumin is in the extravascular compartment of tissues such as muscle, skin, and adipose tissue [1,2]. Studies show substantial transcapillary transit of albumin, with return to the plasma compartment via the lymphatic system [2,50]. With exercise, there is a substantial increase in albumin content, and albumin transit, in muscle [51]. We therefore hypothesised that changes in the extent of plasma albumin thiol oxidation are a consequence of changes in the oxidative state of albumin in tissues, specifically skeletal muscles, which comprise about 40% of body mass. A novel observation was that even in normal WT muscles, the amount of oxidised albumin is considerably (approximately 4-fold) higher than for plasma, suggesting that the movement of albumin through tissue is a significant source of oxidation. This may be particularly true for muscle, which produces significant oxidants due to its contractile activity and high oxygen consumption [52].
We found that in mdx muscle, albumin content was increased (relative to normal muscle), and this is consistent with reports that inflammation increases albumin content, and albumin transit, in tissues [53]. In mdx muscles, albumin was high in the interstitium (especially around areas of inflammatory cell invasion) and also within myofibres. Many studies show that albumin (bound to Evans blue dye) accumulates in dystrophic myofibres after exercise-induced damage [54,55]. Since protein oxidation has been visually confirmed to be mainly colocalised to areas of myofibre necrosis and associated immune cell infiltration in mdx muscles [26], the high levels of albumin localised in these areas of dystrophic muscle damage are likely to become oxidised within this cellular environment [26]. The present study also demonstrated that the oxidation of albumin is prevented by taurine administration, an intervention widely reported to reduce dystropathology, including decreased myonecrosis and inflammation [29-37]. Interestingly, the present study also showed that taurine decreased the amount of albumin entering the muscle, supporting the hypothesis that the increased albumin thiol oxidation in mdx muscle (and therefore plasma) is a consequence of increased albumin migrating through the muscle tissue.

To further examine whether plasma albumin thiol oxidation could be used as a biomarker to track acute changes in dystropathology in individual mdx mice, we subjected adult (12-week-old) mdx mice to a single treadmill exercise session, which is known to increase myonecrosis and plasma CK release and to decrease grip strength [39]. Our findings revealed two patterns of plasma albumin oxidation: both reversible and irreversible albumin thiol oxidation were rapidly elevated at 1 h after exercise, with reversible, but not irreversible, plasma albumin thiol oxidation returning to baseline by 24 h after exercise. This difference may reflect irreversible oxidation of albumin being a permanent modification to the protein, whereas reversible oxidation can be reversed by thiol/disulfide exchange [11,56,57]. Thus, the exercise caused acute changes in pathology associated with a very rapid increase in oxidative stress within the muscles, which oxidised the local albumin in the muscle tissue; this albumin then returned to the circulating plasma, causing the change in reversible albumin oxidation. This concept is supported by the pattern of plasma CK release, which was similar (with a rapid increase within one hour) to that of total plasma albumin thiol oxidation. As changes in CK reflect increased permeability of the muscle plasma membrane (sarcolemma) caused by dystropathology, the changes in reversible thiol oxidation likely also reflect acute changes in dystropathology. In contrast, the sustained decrease in grip strength (loss of function) indicates ongoing damage, which may be related to a sustained increase in oxidants having direct adverse effects on myosin and other contractile proteins [58] and causing the additional irreversible oxidation of albumin.
Conclusions

We show that albumin thiol oxidation is elevated in the plasma of mdx mice, that this reflects increased albumin oxidation in mdx muscles, and that plasma albumin thiol oxidation is acutely responsive to changes in dystropathology. These combined observations strongly support the measurement of changes in plasma albumin oxidation as a promising biomarker to track acute changes in dystropathology. There is a need for molecular biomarkers to track the severity of dystropathology in animal models and DMD, particularly for blood biomarkers to help rapidly evaluate the potential efficacy of treatments in clinical trials for DMD.

Figure 1. Diagram of the immunoblot method to quantify albumin thiol oxidation in plasma and muscle. Plasma and muscle tissue extracts are treated with malpeg, which binds to reduced albumin, causing a molecular weight shift that is detectable via immunoblot using antibodies to albumin. Since mouse albumin has two cysteine residues susceptible to redox modifications, albumin can be detected in three states, represented as three distinct bands on the immunoblot membrane. (A) represents albumin with two thiols in the reduced (R) state (two malpeg molecules bound), giving a 10 kD band shift. (B) represents albumin with only one reduced thiol, and therefore one oxidised (Ox) thiol; only one malpeg molecule binds and a 5 kD shift is observed. (C) represents albumin with both thiols oxidised (no malpeg bound); no shift is observed. In the Results text, the extent of thiol oxidation (Ox albumin) is referred to as 'the sum of fully and partially oxidised albumin' (B + C) and 'fully oxidised albumin' (C). Note that in other species, only one cysteine residue (Cys34) is susceptible to redox modifications, and therefore only one shift is observed when Cys34 is in the reduced state.

Figure 3. Levels of oxidised plasma albumin. Both the sum of fully and partially oxidised plasma albumin (A) and fully oxidised plasma albumin (B) in 23-day-old and 12-week-old WT and untreated mdx mice, and taurine-treated mdx mice aged 23 days. * = significantly (p < 0.05) different to WT of same age. ^ = significantly (p < 0.05) different to untreated mdx (effect of taurine treatment). # = significantly (p < 0.05) different to same strain at 23 days. Bars represent mean ± SEM; n = 7-8 mice per group.

Figure 4. Levels of albumin protein in muscles. Albumin content in quadriceps muscles of 23-day-old and 12-week-old WT and untreated mdx mice and taurine-treated mdx mice aged 23 days. * = significantly (p < 0.05) different to WT of same age. ^ = significantly (p < 0.05) different to untreated mdx (effect of taurine treatment). # = significantly (p < 0.05) different to same strain at 23 days. Bars represent mean ± SEM; n = 7-8 mice per group.

Figure 5. Location of albumin in muscles, visualised by immunostaining. Serial frozen sections of 23-day-old WT (A-D) and mdx (E-J) quadriceps muscle were stained with H&E (A,C,E,G,I,K) and with antibodies to albumin (B,D,F,H,J). Section (L) is a negative control (no primary antibody). In mdx muscle, albumin staining is seen in areas of immune cell recruitment (arrow) and within necrotic myofibres, including hypercontracted (asterisk), degenerated (hash), and newly formed (dollar) myofibres. Images were taken at 10× magnification; scale bar = 200 μm.
Figure 6. Fully oxidised albumin in quadriceps muscle. Fully oxidised albumin in 23-day-old and 12-week-old WT and untreated mdx mice and taurine-treated mdx mice aged 23 days. * = significantly (p < 0.05) different to WT of same age. ^ = significantly (p < 0.05) different to untreated mdx (effect of taurine treatment). # = significantly (p < 0.05) different to same strain at 23 days. Bars represent mean ± SEM; n = 7-8 mice per group.

Figure 7. Plasma albumin thiol oxidation and measures of dystropathology in serial samples of treadmill-exercised mdx mice. Graphs show data from unexercised WT mice, and from mdx mice pre-exercise and at 1 h and 24 h after a single 30 min treadmill exercise session, for plasma CK (A), grip strength (B), plasma reversibly oxidised albumin (C), plasma irreversibly oxidised albumin (D), and the combination of reversibly and irreversibly oxidised albumin (E). * = significantly (p < 0.05) different to WT of same age. ^ = significantly (p < 0.05) different to pre-exercise mdx values. # = significantly (p < 0.05) different to 1 h post-exercise mdx values. Bars represent mean ± SEM; n = 6-8 mice per group.
2024-06-17T15:40:59.077Z
2024-06-01T00:00:00.000
{ "year": 2024, "sha1": "8f072af556e8c292210a87341257e2c0bb7831d2", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/2076-3921/13/6/720/pdf?version=1718273974", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "eef87aef4eac08684f67a1da5431491896548263", "s2fieldsofstudy": [ "Medicine", "Biology" ], "extfieldsofstudy": [] }
253070162
pes2o/s2orc
v3-fos-license
Special Issue on Sciences and Innovations in Heat Pump/Refrigeration: Volume II

Heat pumps and refrigeration are key technologies to realize carbon neutrality, and active research is being conducted around the world [...]

Introduction

Heat pumps and refrigeration are key technologies to realize carbon neutrality, and active research is being conducted around the world. With this background, "Sciences in Heat Pump and Refrigeration" was published as a Special Issue in Applied Sciences. This Special Issue is a continuation of the previous Special Issue "Sciences in Heat Pump and Refrigeration", which closed in December 2019, and it was intended to attract publications related to heat pumps and refrigeration. As heat pumps and refrigeration are technologies used in a variety of applications (air conditioning, food preservation, hot water and steam generation, drying, cryogenic storage, etc.), the span of the related research areas is very broad and includes both basic science and advanced engineering. Based on the papers submitted to the previous Special Issue on the same topic, five important issues related to heat pumps and refrigeration were identified: low-global-warming-potential refrigerants; absorption/adsorption heat pumps and refrigeration; desiccant air conditioning; heat and mass transfer enhancement for innovative heat exchangers; and the application of AI for air conditioning. In this editorial, the 10 papers submitted to this Special Issue [1-10] are categorized into the above 5 topics and summarized.

Low-Global-Warming-Potential Refrigerants

As a response to climate change, research on low-GWP refrigerants is very important, but unfortunately, no study in this Special Issue directly used low-GWP refrigerants. Instead, Yun and Chang [5] present techniques for diagnosing leaks of refrigerants that contribute to global warming. Most existing refrigerant leak prediction models were based on steady-state conditions. In their paper, the development of a refrigerant charge prediction model using dynamic experimental data is presented. In the proposed dynamic model, the refrigerant charge was estimated to within 2.54% by introducing the condensation temperature and subcooling. This paper is expected to contribute greatly to the control of refrigeration systems. In the future, it is expected that papers on low-GWP refrigerants and systems will be published more actively.

Absorption/Adsorption Heat Pump and Refrigeration and Low-Grade Thermal Energy Utilization

Absorption and adsorption are the mainstreams of thermally driven heat pumps and refrigeration, which can enhance the utilization of low-grade thermal energy. The important factors for improving the performance of these technologies are materials, such as the absorbent, adsorbent, and refrigerant. In this context, Rahmawati et al. [3] investigated activated carbon production from bagasse. The produced activated carbon is expected to be used in many applications, including adsorption heat pumps. The study revealed adsorption characteristics from both physical and chemical points of view. An adsorption heat pump with a new adsorbent-refrigerant pair, activated carbon-R1234yf, was investigated by Seo et al. [6]. The adsorption isotherm data provided in the study are very useful for designing adsorption heat pumps and refrigeration.
Although the system performance of the pair was not comparable with that of conventional adsorption heat pumps using water as a refrigerant, the study expanded the scope of adsorption heat pump applications. Raza et al. [1] investigated an evaporative cooling system, which is also capable of utilizing low-grade thermal energy. The study focused on the application of evaporative cooling to the poultry house sector from the viewpoint of enhancing thermal comfort, and the applicability of several system options was discussed.

Desiccant Air Conditioning

Desiccant air conditioning can mitigate global warming by using water as a refrigerant and can contribute to solving the energy crisis by using heat instead of electricity as the main energy source. In this Special Issue, two papers related to desiccant air conditioning were published. Since desiccant air conditioning includes dehumidification and ventilation functions, it provides thermal comfort characteristics different from those of conventional air conditioners. Ahn and Choi [2] presented thermal comfort in a residential space equipped with desiccant air conditioning. Three thermal comfort indexes were evaluated by measuring the local temperature, globe temperature, and humidity in the cooled space and combining these with the air speed obtained by simulation. The changes in thermal comfort in three cooled spaces were compared according to the supply angle of the supply air. Desiccant cooling is often configured as a hybrid system, depending on the outdoor conditions. In this case, it is necessary to analyze the performance according to the respective contributions of desiccant cooling and the heat pump. In this Special Issue, Kim and Ahn [7] simulated a hybrid desiccant cooling system powered by gas engine cogeneration using TRNSYS. They presented the performance of the hybrid system according to the desiccant capacity.

Heat and Mass Transfer Enhancement for Innovative Heat Exchangers

Heat exchangers are key components of heat pump/refrigeration and air conditioning systems, and efforts to enhance heat transfer are ongoing in this research field. Attempts to increase heat flux in subcooled flow boiling using high-porosity sintered fiber were reported by Otomo [4] and by Galicia et al. [10]. They showed that the heat flux was enhanced by 56% and the wall superheat was reduced by 12 K for a surface with attached high-porosity sintered fiber, compared with a bare surface. The studies visualized bubble formation and flow patterns, and the mechanisms of heat flux enhancement and wall superheat reduction were also clarified.

Application of AI for Air Conditioning

Recently, artificial intelligence technology has been applied in many fields, and the heat pump/refrigeration field is no exception. In this Special Issue, two related papers were published. AI technology can be used very effectively to process image information. Garniwa [8] published a study in this Special Issue on estimating solar radiation from satellite images. Solar radiation information is very important in determining heating and cooling loads in air conditioning. They compared four models, confirmed that the Hammer model performed best, and introduced a long short-term memory (LSTM) model to increase the prediction accuracy. As a result, it was shown that the LSTM model can increase the prediction accuracy by up to 11.2%. Deep learning is the most representative example of artificial intelligence technology. Rajagukguk [9] predicted cloud cover from sky camera images using deep learning.
Their study showed that deep learning could predict cloud cover in sky images and was useful for predicting solar radiation on partially cloudy days, when solar radiation is highly variable.
2022-10-23T15:14:57.235Z
2022-10-21T00:00:00.000
{ "year": 2022, "sha1": "806695dddd20b17afc7a8a58d9f569b18ee6078c", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/2076-3417/12/20/10630/pdf?version=1666324654", "oa_status": "GOLD", "pdf_src": "ScienceParsePlus", "pdf_hash": "b4cc1b67fa3a35eca2882306f598290f823f80d9", "s2fieldsofstudy": [ "Engineering", "Environmental Science" ], "extfieldsofstudy": [] }
257336999
pes2o/s2orc
v3-fos-license
A review of Gabonese gorillas and their pathogens: Diversity, transfer and One Health approach to avoid future outbreaks?

In Africa, great apes, including gorillas, are the reservoir of several infectious agents, some of which have zoonotic potential. However, scientific reports summarizing data on the pathogens harbored by some primate species still need to be made available to the scientific community and to conservation and public health actors. In the case of Gabon, despite its outstanding biodiversity, particularly in great apes, and its history of outbreaks involving wildlife, there is a lack of reports on pathogens found in some ape species living in the vicinity of humans. Thus, it is becoming urgent to synthesize the available data on pathogens (parasites, bacteria, and viruses) identified in gorillas living in different ecosystems of Gabon in order to assess the risks for the human population. Therefore, this review article presents the diversity of pathogens identified in gorillas in Gabon, their impact on primates' health, the cases of transfer between gorillas and humans, and the interest of a One Health approach for prevention and for a better understanding of the ecology of gorilla infections in Gabon.

Introduction

Besides habitat loss, climate change, non-native species invasion, and overexploitation, pathogens or infectious diseases (ID) are also recognized as determinant factors that sometimes regulate animal population density and act as drivers of species extinction (Chapman et al., 2005; Smith et al., 2009). However, it must be recognized that the role of pathogens alone in extinction is subject to debate or controversy, because the role of ID in population declines was often considered secondary to other factors (Cunningham et al., 2017). Nevertheless, since 1999 many reports have been published on disease-driven species extinction (Cunningham et al., 2017), such as the decline of the tree snail P. turgida due to a microsporidian infection in Polynesia (Cunningham and Daszak, 1998), the decline of one-third of Hawaiian honeycreepers, and the slime-mould-induced decline of eelgrass (Zostera marina) beds in the USA, which led to the extinction of the eelgrass limpet (Lottia alveus) (Thorne and Williams, 1988; Carlton et al., 1991; Juliano, 1998; Daszak and Cunningham, 1999).

Until recently, the main threats to the African ape population were poaching, habitat loss, and human encroachment. However, as in other mammals, ID (i.e., macro- and microparasites) have emerged as a threat of the same magnitude. A diverse array of virulent pathogens threatens wild great ape populations, including the Ebola virus (Walsh et al., 2003; Bermejo et al., 2006; Leendertz et al., 2006), anthrax (Leendertz et al., 2006), simian immunodeficiency virus (SIV) (Keele et al., 2009), and a variety of human respiratory viruses (Köndgen et al., 2008; Kaur et al., 2008). For instance, the Ebola virus caused an 80% decline in the gorilla and chimpanzee populations on the borders of Gabon and the Republic of Congo between 2001 and 2003. However, Gabon, belonging to the Congo Basin, one of the most important reservoirs of biological diversity, is still home to some of the richest wildlife and plant communities in Africa, with 20% of them being endemic to the country (Maslin, 2008). Moreover, 40% of the world's gorillas are thought to live in Gabon (Morgan, 2007).
In Moukalaba-Doudou National Park, one of the 13 national parks established in 2002, the abundance of lowland gorillas is estimated at 6.99 gorillas/km² (Takenoshita and Yamagiwa, 2008); these gorillas are referred to in this review as Gabonese gorillas. For more than a decade, these gorillas have been the subject of intense research activities that have led to habituation and ecotourism projects. In Moukalaba-Doudou and Loango National Parks, two gorilla habituation projects are being conducted, and gorilla tourism is gradually being introduced (Ando et al., 2008; Boesch et al., 2009; Terada et al., 2021). In addition, Gabonese gorillas are present in a primatology center and in sanctuaries (Ngoubangoye et al., 2019; Boundenga et al., 2021). All these activities promote and increase contact between humans (researchers, local populations, tourists) and gorillas, with a high potential for pathogen exchange. Thus, the objective of this review is to summarize the current knowledge regarding the diversity of pathogens (enzootic and non-enzootic) known to infect Gabonese gorillas and how, through a One Health approach, we can mitigate the threats that these gorillas' pathogens pose to conservation and public health in Gabon.

Diversity of infectious agents identified in Gabonese gorillas

It is estimated that nearly 60% of infectious diseases of animal origin affect humans (Jones et al., 2008). Indeed, African NHPs, particularly lowland gorillas, are known to harbor a wide diversity of pathogens (Liovat et al., 2009), and cases of transfer are not uncommon (Apetrei et al., 2004; Devaux et al., 2019; Locarnini et al., 2021). In the case of Gabon, some gorilla populations living in different ecosystems were found to harbor pathogens, including parasites, bacteria, and viruses. However, the question is whether the fact that gorillas harbor a wide variety of infectious agents constitutes a risk to human and animal health. We believe that the exchange of viruses might be possible in Gabon settings because some of the viruses isolated are zoonotic; moreover, they were found at variable prevalences: 1.9% for Merkel cell polyomavirus (Madinda et al., 2016), 30% for HBV (Makuwa et al., 2003), and 48% for HAdV (Hoppe et al., 2015). Therefore, any human activity that favors contact with gorillas infected with one of these viruses would pose a potential risk of exposure to infectious agents (Figure 3). Nevertheless, this remains to be demonstrated by further studies.

What are the impacts of these infections?

The carriage of these pathogens is not without consequence for the health of the great apes, their population density, and the humans living nearby. The successive Ebola outbreaks between 2001 and 2003 that occurred in the border region of Gabon and the Republic of Congo decimated approximately 80% of the great ape populations (Huijbregts et al., 2003; Walsh et al., 2003; Leroy and Misson, 2004). For the specific case of Gabon, researchers reported that they discovered, or were informed of, 64 animal carcasses (gorillas, chimpanzees, and duikers) over 8 months in the epidemic zone, the Zadié region in Gabon (3000 km²). These authors note that between November and December 2001, at the peak of the epidemic, 36 gorilla carcasses were found in the epidemic area, covering 3000 km². This is likely an underestimate of the severity of the disease, and many more gorillas probably died than were identified.
Because the decomposition of a gorilla carcass in the tropical forest takes about a month, and most of the carcasses were found within a 2-hour walk of villages, hundreds if not thousands of gorillas possibly died in these epidemics. Viruses are not the only agents causing the death of great apes or deleterious effects on their health. Indeed, Nagel et al. (2013) reported the death, at the CIRMF primatology center, of a gorilla with a large necrotizing wound. Post-mortem analysis revealed septicemia due to Staphylococcus aureus (Nagel et al., 2013). Molecular analyses revealed that chimpanzees housed immediately adjacent to the infected gorilla were infected with the t148 type of S. aureus, known to be virulent (Li et al., 2019). Although mortality of gorillas following Oesophagostomum spp. infestation has not yet been reported, we recently observed, at the Primatology Center of CIRMF, the death of several chimpanzees following infection with Oesophagostomum spp. between 2015 and 2019. Although not all infections with pathogens lead to death, they can nevertheless have severe consequences for the health of primates, as was observed during the follow-up of an orphaned youngster in Lékédi Park (Herbert et al., 2015). Thus, all these infections of apes by infectious agents, in the wild or in captivity, are not without consequence and could impact human health if cohabitation with humans favors transfer.

List of the different pathogens identified in Gabonese gorillas (including parasites, viruses, and bacteria). The colors indicate the pathogen group (green for parasites, red for viruses, and blue for bacteria).

Are there any transfers, and why?

Cases of potential transmission of pathogens between gorillas and humans, and vice versa, have been reported (Mouinga-Ondémé et al., 2012; Nagel et al., 2013; Prugnolle et al., 2013); examples of pathogen transfer between gorillas and humans are shown in Figure 3. In the case of simian foamy viruses (SFVs), transmission occurred through gorilla bites. Indeed, among 78 samples from humans screened for SFV, mostly hunters who had been bitten or scratched by NHPs (gorillas), 19 were SFV seropositive, and one hunter was confirmed by PCR to be infected with a gorilla SFV (Mouinga-Ondémé et al., 2012). Regarding Plasmodium vivax-like parasites, Prugnolle et al. reported the infection of a tourist who stayed in a forest environment where this parasite circulates (Prugnolle et al., 2013). Thus, we believe that this tourist was indeed infected by the bite of a mosquito with zooanthropophilic feeding behavior (Paupy et al., 2013). All of the above demonstrates that Gabonese gorillas are a reservoir for a wide range of infectious agents with zoonotic potential, whose transmission is favored by increasing contact (Bittar et al., 2014). However, the question is whether the existence of such contagious potential constitutes a risk to animal and human health and could even, in the long run, hinder the conservation efforts for this species (Mouinga-Ondémé et al., 2012; Nagel et al., 2013; Prugnolle et al., 2013). Indeed, infections of human populations with some of these gorilla pathogens have been documented in Gabon. Recently, studies have revealed infection with spumaviruses (foamy viruses) in gorillas and in hunters who had been bitten by gorillas.
The infection of hunters is believed to result from frequent contact with these animals' blood or body fluids (Calattini et al., 2004; Mouinga-Ondémé et al., 2012). The other emblematic example of virus transmission between gorillas and the human population is infection by the Ebola virus. Indeed, as one of the most virulent infectious agents, the Ebola virus has been responsible for several human epidemics in Gabon due to the direct handling of gorilla and chimpanzee carcasses (Georges-Courbot et al., 1997; Rouquet et al., 2005). During a study of enteroviruses, Mombo et al. (2015) isolated a serotype causing paralysis in great apes (Mombo et al., 2015). Thus, all cases of transmission of infectious agents between great apes, especially gorillas, and humans result from the handling of dead animals or from permanent cohabitation between these two host groups, as described elsewhere (Mekibib and Ariën, 2016).

Variation of Plasmodium spp. prevalence in Gabon. This picture shows the variation in prevalence within the various gorilla populations studied.

Concerning gastrointestinal parasites and bacteria, although cases of transfer between gorillas and humans in Gabon have not been demonstrated, several studies report cases of infection of captive and wild gorillas with helminths and protozoa [Sch. mansoni (Červená et al., 2016), Necator americanus (Sirima et al., 2021), Cryptosporidium spp. (van Zijll Langhout et al., 2010)] and bacteria [S. aureus (Nagel et al., 2013), Chlamydia-related bacteria (Klöckner et al., 2016), E. coli (Mbehang Nguema et al., 2021), K. pneumoniae (Mbehang Nguema et al., 2021; Shojaei et al., 2022)] known to infect humans. However, for some bacteria, like S. aureus, the direction of transfer has not been identified, i.e., we do not know whether humans or gorillas transmitted the pathogen to the other. Furthermore, it is not unreasonable to believe that the exchange of Plasmodium species between gorillas and humans in Gabon settings may become frequent, insofar as the vectors responsible for the transmission of simian parasites in gorillas include secondary vectors of human malaria in urban and rural areas (Paupy et al., 2013; Makanga et al., 2016; Longo-Pendy et al., 2022). For instance, for P. vivax-like parasites, the vector species identified are Anopheles moucheti, Anopheles vinckei, and Anopheles marshallii (Prugnolle et al., 2013; Makanga et al., 2016). This demonstrates the need for more multidisciplinary and longitudinal studies on the real impact of the increase in contact between gorillas and human populations via ecotourism, habituation, and mining activities, in order to better understand the role of great apes, particularly gorillas, in the transmission or circulation of pathogens in the Gabonese ecosystem.

How to reconcile conservation and public health in this context of cohabitation?

Wildlife still represents a source of an array of high-impact pathogens that affect human health, with more than 72% of emerging human infectious diseases having a wildlife origin (Jones et al., 2008). In the epidemiology of most described zoonoses, wild animals act as primary reservoirs for transmitting zoonotic agents to humans and domestic animals (Taylor et al., 2001). Zoonoses with a wildlife reservoir are typically caused by various bacteria, viruses, and parasites, whereas fungi are unimportant (Biase et al., 2022). Abundant literature documents the spillover of pathogens from wildlife to humans.

Figure 3. Some examples of the transfer of pathogens between gorillas and humans and vice versa. In some of the cases illustrated, gorillas have been clearly identified as the source of the pathogens found in humans; in other cases, the parasites have been found in both gorillas and humans and the direction of transfer has not yet been identified. These exchanges are the result of human actions on the environment.
In this review, we have established that viruses, bacteria, and parasites are capable of zoonotic spread from Gabonese gorillas to humans. It is therefore to be feared that, with Gabonese government policies aimed at promoting ecotourism, together with research and mining activities that accentuate contact between wildlife (particularly gorillas) and humans, the exchange of pathogens will become more frequent. This provides ample justification for implementing a One Health approach, especially since gorilla habituation projects are being conducted in some of Gabon's thirteen national parks (Hernández Tienda et al., 2022), such as Loango National Park (Oelze et al., 2014; Hernández Tienda et al., 2022) and Moukalaba-Doudou National Park (Ando et al., 2008). The One Health approach must prevent and control any emergence, re-emergence, or spread/dissemination of the zoonotic pathogens harbored by Gabonese gorillas. To this end, a long-term monitoring system for the health of Gabonese gorillas under habituation must be put in place to achieve what Leendertz et al. (2006) have proposed (Sacks et al., 2018): baseline data on the pathogens of gorillas in Gabon settings. This approach should make it possible to build up a biobank of Gabonese gorilla pathogens, to monitor any possible exchange with susceptible animals or humans in direct or indirect contact with these gorillas, and to understand the environmental factors that lead to pathogen transfer. In the Gabonese context, with previous Ebola outbreaks having affected human populations, there is an urgent need to implement what Zimmerman et al. (2022) have called a "Great Ape Health Watch", which consists of standardizing surveillance across sites and geographic scales, monitoring primate health in real time, and generating early warnings of disease outbreaks (Zimmerman et al., 2022). In addition, the local population must be educated on the characteristics, ecology, and history of gorilla pathogens and the threats they pose to wildlife and to the human population in case of spillover (Kuisma et al., 2019). People should know how to act when finding gorilla carcasses (or indeed any animal carcass) to avoid exposure, and how to inform local research institutions (CIRMF, IRET/CENAREST) able to conduct investigations to confirm the cause of these deaths.

Figure 4. Illustration of the One Health approach. This image illustrates how the increase in human-animal contact, particularly with gorillas at the interface, favors the emergence of zoonotic diseases. It shows how, in a country such as Gabon where human-gorilla contact results from human activities, the success of any public health prevention strategy requires the collaboration and cooperation of human, animal, and environmental health partners.

However, for adequate surveillance and effective implementation of a One Health approach (Figure 4), it is necessary to establish multi-sectoral teams that include all sectors involved in public health surveillance (environmental, research, health, and agricultural services).
Conclusion

In conclusion, Gabonese gorillas are a reservoir for a wide range of pathogens, some of which are zoonotic, with deleterious effects on their health and on that of populations living in their vicinity, as cases of exchange have been documented. These pathogens threaten the biodiversity conservation efforts undertaken by the Gabonese authorities in creating 13 national parks to promote ecotourism. There is an urgent need for a real strategy based on a One Health approach to prevent and control any emergence, re-emergence, and transmission of pathogens between Gabonese gorillas and the local population.

Author contributions

All authors contributed to the article and approved the submitted version.

Conflict of interest

The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

Publisher's note

All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.
FDA Approvals of Biologics in 2022

The year 2022 witnessed the control of the COVID-19 pandemic in most countries through social and hygiene measures and also vaccination campaigns. It also saw a decrease in total approvals by the U.S. Food and Drug Administration (FDA). Nevertheless, there was no fall in the Biologics class, which was boosted through the authorization of 15 novel molecules, thus maintaining the figures achieved in previous years. Indeed, the decrease in approvals was only for the category of small molecules. Monoclonal antibodies (mAbs) continued to be the drug class with the most approvals, and cancer remained the most targeted disease, followed by autoimmune conditions, as in previous years. Interestingly, the FDA gave the green light to a remarkable number of bispecific Biologics (four), the highest number in recent years. Indeed, 2022 was another year without the approval of an antimicrobial Biologic, although important advancements were made in targeting new diseases, which are discussed herein. In this work, we only analyze the Biologics authorized in 2022. Furthermore, we also consider the orphan drugs authorized. We not only apply a quantitative analysis to this year's harvest, but also compare the efficacy of the Biologics with those authorized in previous years. On the basis of their chemical structure, the Biologics addressed fall into the following classes: monoclonal antibodies; antibody-drug conjugates; and proteins/enzymes.

Introduction

In 2022 (also referred to as "this year" herein), after two years of fighting the COVID-19 pandemic, the world finally saw a decrease in infections, hospitalizations, and deaths caused by SARS-CoV-2. This turnaround was attributed to a variety of safety measures adopted by governments and to vaccination campaigns. However, there was also a decrease in the total number of drugs approved by the U.S. Food and Drug Administration (FDA) in 2022 (37 vs. an average of approximately 50 in recent years). This fall could be linked to the fact that most pharmaceutical companies focused their resources on tackling the COVID-19 pandemic to the detriment of their normal activities. In terms of total drug approvals (New Chemical Entities (NCEs) and Biologics) between 2015 and 2022, the lowest numbers were registered in 2016 (22) and 2022 (37) [1,2]. On the other hand, regarding Biologics, 2022 had the highest percentage of approvals (40%), thereby indicating that the presence of this class is fully stabilized in the pharmaceutical industry. In recent years, Biologics have accounted for less than 30% of the total drugs approved (Table 1). We highlight new drugs and their efficacy, mechanisms, and targets, and compare them to the Biologics already on the market. We also undertake a quantitative and qualitative analysis of the approvals and discuss the Biologics market. Of note, we did not include biosimilars in the analysis. We also reviewed ongoing clinical trials for all the Biologics mentioned herein, in which they are being tested for diseases other than their approved primary targets, exposing the potential of these drugs to treat other diseases in the long term.

Analysis

Several factors, among them industry failures and/or political issues, can explain a drop in the number of authorizations. For example, 2016 registered a decrease in submissions and also an increase in the rejection rate of drugs by the FDA. Of note, 2016 was also an election year in the United States, with Donald Trump taking office in January 2017 [11,12].
The following election year in that country was 2020, with Joe Biden taking over in January 2021, and there was a completely different scenario in terms of the total number of drug approvals, as that year registered the second highest figure (53) in the period 2015-2022. Importantly, the World Health Organization (WHO) declared the COVID-19 pandemic in 2020. In response, the FDA, along with other organizations, devoted efforts to research and accelerated approval and review processes, resulting in many Emergency Use Authorizations issued for vaccines and other drugs. This scenario also affected the authorization process of specific drugs such as Veklury TM (remdesivir, 2020), for example, indicated to treat COVID-19 [5,13]. Although 2016 registered the lowest number of drug authorizations in that period, it witnessed the approval of two key antibacterial mAbs, namely Anthim TM (obiltoxaximab), indicated to treat inhalational anthrax (Bacillus anthracis), and Zinplava TM (bezlotoxumab), indicated to reduce recurrent infections of Clostridium difficile [9]. Obiltoxaximab and bezlotoxumab were the only antibacterial mAbs approved in the period 2015-2022, and this fact gains relevance in the context of increasing antibacterial/antimicrobial resistance [12,14,15]. As seen in Figure 1, in 2022, the number of approvals of the three classes of Biologics hardly varied from the figures of previous years. Similarly, cancer continued to be the therapeutic indication receiving the most mAb approvals, with six Biologics, followed by autoimmune conditions, with four. The other conditions and diseases targeted by the Biologics authorized this year included eschar removal from thermal burns, aesthetic purposes, chemotherapy-induced neutropenia (CIN), eye disorders, and acid sphingomyelinase deficiency (ASMD). Fifteen Biologics were approved in 2022, and mAbs continued to account for the majority of FDA approvals among them. The number of mAb authorizations in 2022 was slightly higher than in 2021 (nine vs. eight, respectively), and the same applied to proteins and enzymes, while there was one less ADC approval than in 2021. The authorization of a new mAb for Alzheimer's Disease (AD) was expected in 2022, but it was finally approved in January 2023.

Orphan Drugs

All drugs must go through the pertinent development processes and the subsequent approval and licensing process. However, the submission of a request seeking Orphan Drug Status is a completely different process and can be started by sending the required information by regular mail to the Office of Orphan Products Development at the FDA, emailing this information to the correct FDA address, or submitting it through the CDER NextGen portal [16]. As their name indicates, Orphan Drugs are intended to treat orphan diseases (rare conditions and diseases). These pose important and specific challenges in the development process, such as difficulties in clinical trials as a result of small patient populations, problems in the recruitment process, and a lack of knowledge of the disease. Furthermore, the concept of a rare disease may vary from country to country. Despite these challenges, the pharmaceutical industry as a whole increasingly addresses the urgency of developing more treatment options for these kinds of diseases. In this context, the incentives provided by the FDA also drive greater resource allocation to these diseases. Annual growth in the development of Orphan Drugs is now expected [17,18].
Interestingly, as seen in Table 2, 46% (seven) of all approvals in 2022 received Orphan Drug Status from the FDA. This is a considerable number given the difficulty faced by the pharmaceutical sector. The number of drugs awarded this status is almost half that of the new Biologics approved each year. Of note, enzymes are emerging as key approaches to tackle rare diseases and conditions. In this context, all the enzymes approved from 2015 to 2022 received Orphan Drug Status.

Cancer

Of the 15 Biologics approved in 2022, six were indicated for the treatment of a diversity of cancers (Table 3). Comparatively, four Biologics indicated for cancer were approved in 2019, eight in 2020, and six in 2021 [1]. Since 2019, there has clearly been growth in the cancer segment of the Biologics market. The first-in-class Biologic, namely the bispecific fusion protein Kimmtrak TM (tebentafusp), which is intravenously administered, is the first drug to date specifically for the treatment of metastatic uveal melanoma in HLA-A-positive patients, leading the immune system directly to the cancer cell [19]. One arm of tebentafusp (the anti-CD3 effector) binds to T lymphocytes, later dragging the T cell to the cancer cell. This immune cell must bind to glycoprotein 100 (gp100), which may be inside the tumor cell and therefore needs to be presented on the tumor cell surface through the human leukocyte antigen (HLA). The other arm of tebentafusp (the T-cell receptor arm) then targets gp100, binding to the melanoma cell and activating the T cell, which then kills the melanoma cell. Other Biologics have previously been approved for the treatment of unresectable or metastatic melanomas, namely Yervoy TM (ipilimumab) in 2011, Keytruda TM (pembrolizumab) and Opdivo TM (nivolumab) in 2014 [28], Tecentriq TM (atezolizumab) in 2016 [1], and Opdualag TM (relatlimab and nivolumab) in 2022. However, tebentafusp is the first Biologic indicated for metastatic uveal melanoma. This type of cancer is very different from other melanomas, as it shows distinct patterns, a poor prognosis, and a high likelihood of metastasis [29]. These characteristics thus make Kimmtrak TM an important breakthrough. Overall Survival (OS) was the main measure found in the literature for Kimmtrak TM, with a median OS of 21.7 months vs. 16 months for the control group [19,29]. For Opdualag TM, which is intravenously administered, the main measure found was Progression-Free Survival (PFS) vs. nivolumab alone, with a PFS of 10.1 months for Opdualag TM vs. 4.6 months for nivolumab [21]. As shown in Ref. [30], the combination of mAbs such as Opdualag TM (nivolumab and relatlimab), which was approved this year, offers the interesting advantage of simultaneously targeting multiple pathways. Opdualag TM provides a first-in-class mechanism of action by carrying two fully human mAbs, the first one targeting LAG-3 receptors and the second one PD-1 receptors, thereby increasing T-cell activation [21]. Bispecific mAbs can also target more than one pathway. Moreover, three distinct mAbs can be combined, as is the case of Inmazeb TM, in which all the mAbs target the glycoprotein (GP) of Zaire ebolavirus but in distinct ways.
In this regard, between 2015 and 2022, only three combinations of mAbs received approval, namely the aforementioned Phesgo TM (pertuzumab, trastuzumab, and hyaluronidase) to treat early or metastatic breast cancer and Inmazeb TM (atoltivimab, maftivimab, and odesivimab) for Ebola virus disease, both approved in 2020 [1], and Opdualag TM, which received authorization this year [21]. Of note, the last fusion protein approved by the FDA was in 2018, with tagraxofusp, indicated for the treatment of blastic plasmacytoid dendritic cell neoplasm [1]. While the two cancer drugs tagraxofusp and tebentafusp received Orphan Drug Status, tebentafusp is the first bispecific fusion protein to get the green light to date. Lunsumio TM (mosunetuzumab), a humanized bispecific mAb, received accelerated approval from the FDA this year. Indicated to treat a type of non-Hodgkin's lymphoma (relapsed or refractory follicular lymphoma (FL)), it presented an Objective Response Rate (ORR) of 80% in clinical trials, with 60% of patients presenting a Complete Response (CR) [27]. Patients affected by FL have very few treatment options when it comes to Biologics. The other treatment option for this condition is rituximab, which was approved in 1997 and was the first mAb for cancer patients; its therapeutic indications include FL. In comparison with the new bispecific mosunetuzumab, rituximab has an ORR of around 50% and a CR of 6% [31]. In 2017, we saw the approval of a reformulated Rituxan Hycela TM (rituximab and hyaluronidase), which is subcutaneously administered [32]. In this regard, no other mAb indicated for FL had been authorized since Rituxan Hycela TM; it has taken five years for a new mAb for this disease to come onto the market. Imjudo TM (tremelimumab), intravenously administered, was approved for cancer this year, indicated for unresectable hepatocellular carcinoma (uHCC) [23]. It is a mAb that works by blocking CTLA-4, thus stopping the interaction of ligands with the cytotoxic T-lymphocyte-associated antigen 4. The previous cancer Biologic indicated for uHCC to get the green light was Tecentriq TM (atezolizumab), first approved in 2016 [1]. This Biologic is also a mAb but, in contrast to tremelimumab, it acts by blocking PD-L1 [1]. For uHCC, both drugs are indicated for use in combination with other mAbs, namely atezolizumab + bevacizumab and tremelimumab + durvalumab. Belantamab mafodotin (approved in 2020) binds to the B-cell maturation antigen (BCMA) and therefore has a similar mechanism of action to that of the novel teclistamab. However, the latter is the first bispecific mAb to treat multiple myeloma (MM); it binds to BCMA and also to CD3 receptors [24]. In clinical trials, teclistamab showed a good ORR, with 40% of the patients presenting a CR [37-39]. Intravenously administered, Elahere TM (mirvetuximab soravtansine) was the antibody-drug conjugate (ADC) of 2022 to be approved (via the fast-track process) by the FDA [40]. This ADC is an FRα-directed (folate receptor alpha) chimeric mAb that targets epithelial ovarian cancer, which has high expression of FRα. When internalized, Elahere TM releases its small molecule (DM4), a microtubule inhibitor, after cleavage of its disulfide linker, unleashing apoptotic cell death. The anti-tubulin agent DM4 is an analog of maytansine, which, before 2022, was last found almost one decade ago in another ADC, Kadcyla TM (trastuzumab emtansine) [30].
DM4 is genotoxic, poses a risk to pregnant women, and is a CYP3A4 substrate; patients treated with DM4 must be closely monitored [25]. From 2015 to 2021, nine ADCs were approved by the FDA [1], and Elahere TM is the tenth of this class. Regarding efficacy, in a single-arm trial, Elahere TM demonstrated an ORR of 31.7% and a Duration of Response (DOR) of 6.9 months, but further research is still ongoing [40,41].

Ongoing Clinical Trials for the New Biologics for Cancer

There are trials ongoing for tebentafusp (phase 1b/2) to test this Biologic in metastatic cutaneous melanoma, but in combination with other Biologics (durvalumab and/or tremelimumab), and also tebentafusp alone in advanced non-uveal melanoma, with no results posted yet [42]. Regarding trials for nivolumab and relatlimab to potentially treat diseases other than their primary target, there is a trial ongoing to test the combination in metastatic or unresectable chordoma [43], a phase 2 trial to test it in advanced microsatellite stable (MSS) colorectal cancer [44], and a phase 1/2 trial to test its effectiveness in liver cancer [45]. Interestingly, there is also a phase 2 trial ongoing to test nivolumab and relatlimab in metastatic uveal melanoma (MUM) [46], the indication for which Kimmtrak TM (tebentafusp), mentioned earlier in this paper, is the first specific treatment to date. A combination of mAbs such as Opdualag TM carries great potential for repurposing and exploiting new targets/diseases; unfortunately, no results have been posted yet for the ongoing trials mentioned. Tremelimumab is being tested for bladder cancer, with a study completion date in 2026 [47]. It is currently only approved for adult patients, but ongoing studies were found testing tremelimumab in combination with durvalumab in pediatric patients with solid tumors and hematological malignancies [48], as well as a phase 1 study with a completion date in 2024 for metastatic melanoma [49]. Mirvetuximab soravtansine is being tested as a first-line treatment for triple-negative breast cancer [50], and in combination with pembrolizumab as a new option for endometrial cancer in a phase 2 study expected to be completed in 2025 [51]. There are trials with mosunetuzumab for four other conditions: tumor reduction with mosunetuzumab in combination with polatuzumab vedotin for refractory, relapsed, or aggressive non-Hodgkin lymphoma in a phase 2 study [52]; a phase 1 study testing mosunetuzumab to treat B-cell lymphoma after replacement of the patient's stem cells by autologous stem cell transplant [53]; and a phase 1 study assessing the efficacy of mosunetuzumab in relapsed or refractory chronic lymphocytic leukemia (CLL) [54]. These three studies all have completion dates expected in 2027. There is also a phase 1 study testing mosunetuzumab for systemic lupus erythematosus, with a completion date expected in 2024 [55]. None of these studies have posted results yet. Regarding teclistamab, no ongoing trials were found for a disease other than multiple myeloma.

Autoimmune Conditions

The second type of disease most targeted by the Biologics approved in 2022 is autoimmune conditions (Table 4). Spevigo TM (spesolimab) was approved this year to treat generalized pustular psoriasis (GPP), a rare autoinflammatory condition that can strike both children and adults and affects Asians more than other population groups [60].
To date, there is no standard treatment specifically for GPP, therapeutic strategies being limited to the use of synthetic drugs and Biologics previously authorized for moderate and severe plaque psoriasis, which have a poor outcome in GPP. While other Biologics indicated to treat plaque psoriasis or psoriatic arthritis target distinct interleukin receptors (e.g., IL-17R), spesolimab brings a new mechanism of action by binding to IL-36R (interleukin-36 receptor), thereby preventing IL-36 from binding to IL-36R [20], since GPP seems to have a singular mechanism in its pathogenesis involving IL-36R [60]. Although further research is needed on this subject, current studies support the efficacy of anti-IL-36R therapy in GPP [60-62]. Briumvi TM (ublituximab) (2022) is intravenously administered and is indicated to treat relapsing forms of multiple sclerosis. Its mechanism of action is like that of Ocrevus TM (ocrelizumab), the previous mAb for multiple sclerosis approved by the FDA, in 2017. These two drugs bind to CD20 on B cells, both pre-B cells and mature B cells, thereby unleashing cell lysis [59,63]. Prior to ocrelizumab, the FDA had only approved Zinbryta™ (daclizumab) (2016) [1], which acts by binding to a subunit of the IL-2 receptor, namely CD25. Between 2015 and 2022, only these three mAbs received the green light for this condition. Ublituximab is a chimeric mAb, and from 2015 onwards we have seen a very low number of chimeric mAb approvals by the FDA, namely Unituxin TM (dinutuximab) in 2015; Anthim TM (obiltoxaximab) in 2016; Rituxan Hycela TM (rituximab and hyaluronidase) in 2017; and Sarclisa TM (isatuximab) and Margenza TM (margetuximab), both in 2020. None of them are for multiple sclerosis [1], making ublituximab the first chimeric mAb for this disease. In trials, ublituximab was demonstrated to be superior to an orally administered medication (teriflunomide) in the two endpoints evaluated. In the primary endpoint in trial I, the Annualized Relapse Rate (ARR) reported was 0.08 for ublituximab vs. 0.19 for teriflunomide, and in the same endpoint in trial II, it was 0.09 for ublituximab vs. 0.18 for teriflunomide. In the secondary endpoint in trial I, the average number of gadolinium-enhancing lesions was 0.02 for ublituximab vs. 0.49 for teriflunomide, and in trial II it was 0.01 for ublituximab vs. 0.25 for teriflunomide, demonstrating lower relapse rates and fewer lesions on magnetic resonance imaging [64]. Tzield TM (teplizumab) binds to its target, CD3, and patients with Stage 2 type 1 diabetes (T1D) can benefit from a delay in the onset of Stage 3. This is a first-in-class and unique treatment that can deactivate certain immune cells involved in T1D. The efficacy of teplizumab in delaying the onset of Stage 3 T1D has been demonstrated in trials, in which the primary measure was time from randomization to the diagnosis of Stage 3 T1D: 45% of the 44 patients receiving teplizumab progressed to a diagnosis of Stage 3 T1D, and diagnosis occurred later than in the placebo group [57,58,65]. Teplizumab is intravenously administered, and it is one of the few Biologics authorized in 2022 for both adult and pediatric patients. Another important advancement in autoimmune diseases this year is the first-in-class Enjaymo TM (sutimlimab), which is intravenously administered.
It is indicated to decrease the need for red blood cell (RBC) transfusion in cold agglutinin disease (CAD), a rare condition characterized by the destruction of RBCs in cold temperatures. Sutimlimab also brings a new mechanism of action: by binding to complement protein component 1, it inhibits the complement pathway [56,66]. In a clinical trial, more than half of the patients responded positively to sutimlimab, with increased hemoglobin, no RBC transfusion required after five weeks of treatment, and reported decreases in fatigue [67,68]. From 2015 to 2021, the FDA approved 13 Biologics for autoimmune conditions; as such, this is the second disease category to receive the most authorizations after cancer [1]. This year, we have seen four new Biologics added to this category.

Ongoing Clinical Trials for the New Biologics for Autoimmune Conditions

Boehringer Ingelheim is conducting studies to test spesolimab in other conditions. There are trials ongoing to test the efficacy of spesolimab for palmoplantar pustulosis (PPP): a phase IIa study produced results supporting its efficacy vs. placebo, and trials for PPP are still ongoing [69,70]. A phase 2 study expected to be completed in 2024 is testing the efficacy of spesolimab in hidradenitis suppurativa (HS) [71], and other studies are testing it in ulcerative colitis (UC) [72], in improving the narrowing of the small bowel in Crohn's disease patients [73], and in atopic dermatitis (AD) and other conditions whose mechanisms are similar to those that can cause HS, UC, or AD [74,75]. Ublituximab is being tested in combination with umbralisib for progressive CLL in a phase 2 trial, and a phase 1/2 study is testing tazemetostat in combination with umbralisib and ublituximab to treat relapsed or refractory follicular lymphoma [76,77]; no results have been posted yet for either study. Regarding teplizumab and sutimlimab, no ongoing trials were found for diseases other than the primary authorized ones described in the Prescribing Information.

Aesthetic

Daxxify TM (daxibotulinumtoxin A) was the Biologic for aesthetic purposes approved by the FDA in 2022 (Table 5), and it is administered by intramuscular injection. The literature describes it as an advancement after decades of hegemony of Botox TM (onabotulinumtoxin A) in the treatment of glabellar lines. Daxibotulinumtoxin A shows promising results and greater internalization of the neurotoxin, and clinical trials have demonstrated significant differences in response rate and also a longer period of effect for this Biologic. Patients in these trials also showed a better response to this Biologic than to the placebo [78-80]. The mean duration of the effects of daxibotulinumtoxin A in clinical trials is around 24 weeks, while for onabotulinumtoxin A it is around 19 weeks [79]. The IGA-FWS (Investigator Global Assessment-Facial Wrinkle Severity) and the Global Aesthetic Improvement Scale (GAIS) were used to assess the results. Participants in the trial using 40 U of daxibotulinumtoxin A obtained a 1- to 2-point improvement in glabellar lines, on both scales, over those using 20 U of onabotulinumtoxin A [79,82]. Before 2022, Jeuveau TM (prabotulinumtoxin A) was the last Biologic authorized for aesthetic purposes (2019) [1]. Prabotulinumtoxin A and onabotulinumtoxin A presented similar outcomes in a 3-month study evaluating their effect on crow's feet. The main measures of efficacy for prabotulinumtoxin A vs.
onabotulinumtoxin A were mean onset of action (3.81 days for prabotulinumtoxin A vs. 3.47 days for onabotulinumtoxin A) and time to peak effect (9.58 days for prabotulinumtoxin A vs. 11.11 days for onabotulinumtoxin A). The secondary measure was duration of action (11.11 weeks for prabotulinumtoxin A vs. 11.22 weeks for onabotulinumtoxin A) [83]. The literature still lacks data comparing the novel daxibotulinumtoxin A with prabotulinumtoxin A for the treatment of glabellar lines.

Eye Disorders

There have been two important drug advancements for eye disorders in less than three years. Back in 2019, the single-chain fragment variable (scFv) Beovu TM (brolucizumab), which inhibits three isoforms of VEGF-A, received the green light from the FDA to treat neovascular (wet) age-related macular degeneration (nAMD) [84]. Two years later, in January 2022, Vabysmo TM (faricimab) (Table 6) was also approved for eye disorders, namely nAMD and diabetic macular edema (DME). In the context of eye disorders, there is also ranibizumab, which was first approved in 2006 for nAMD, DME, and macular edema following retinal vein occlusion (RVO), and Eylea TM (aflibercept), authorized in 2011 for nAMD. In clinical trials whose main measure was the change in Best-Corrected Visual Acuity (BCVA), brolucizumab outperformed aflibercept in secondary endpoints and was non-inferior in primary endpoints, and it showed a greater reduction in retinal thickness when compared to ranibizumab [85]. In contrast to brolucizumab, faricimab is a bispecific humanized antibody [3,35]. The small size of the immunoglobulin fragment found in brolucizumab and its drug delivery features are important characteristics for Biologics, as is the ability of bispecific mAbs such as faricimab to inhibit two pathways at once, enhancing the fight against many diseases. Faricimab exerts anti-vascular endothelial growth factor-A (VEGF-A) and anti-angiopoietin-2 (Ang-2) activity [86]. In clinical trials, faricimab demonstrated similar outcomes in the same measure (BCVA) and in anatomic improvement when compared to brolucizumab. However, further research is required [87].

Ongoing Clinical Trials for Faricimab

Regarding the potential of faricimab to treat other conditions, this year (2023), a phase 2 trial has begun to test faricimab in non-proliferative diabetic retinopathy, but no results have been posted yet [88]. This year, two phase 3 trials are also expected to be completed testing faricimab in macular edema due to hemiretinal vein occlusion, retinal vein occlusion, and central retinal vein occlusion [89,90]; no results have been posted yet for those studies either.

Enzymes and Proteins

Three out of the fifteen Biologics to get the green light in 2022 fall into the class of proteins and enzymes, as shown in Table 7. We discuss the efficacy results of these new Biologics compared to the ones approved in previous years. Xenpozyme TM (olipudase alfa) was the first enzyme to be approved by the FDA in 2022. It is a replacement therapy indicated to treat a rare disease named acid sphingomyelinase deficiency (ASMD), also known as Niemann-Pick disease [91,92]. The deficiency of acid sphingomyelinase (ASM) leads to the accumulation of sphingomyelin and other lipids, which can cause involvement of the central nervous system (CNS), hepatosplenomegaly, and/or lung impairment. There are two types of ASMD, type A and type B.
The former causes hepatosplenomegaly and CNS impairment, while type B leads to hepatosplenomegaly and liver and lung impairment, and may not present CNS disruption [95,96]. In clinical trials, olipudase alfa demonstrated improved clinical symptoms, including enhanced platelet counts, a reduction in liver and spleen volume, and greater lung diffusing capacity, and it also cleared sphingomyelin from tissues [97,98]. Given the difficulty in managing neutropenia in some cancer treatments, Rolvedon TM (eflapegrastim), which was approved this year, is an important innovation. Biologics for chemotherapy-induced neutropenia (CIN) started in 1991 with filgrastim, followed by pegfilgrastim in 2002. However, since then, the industry has struggled to develop a new Biologic other than biosimilars for CIN. Eflapegrastim has the addition of an Fc fragment of a human IgG4 [93], which extends its half-life and increases its absorption by the bone marrow. In clinical trials, eflapegrastim demonstrated non-inferior efficacy in reducing neutropenia compared to pegfilgrastim at a reduced dose of G-CSF (Granulocyte Colony-Stimulating Factor) (3.6 mg vs. 6.0 mg, respectively, administered in all four cycles). Furthermore, the safety profiles of these two drugs are similar [99-101]. In 2022, NexoBrid TM (anacaulase) was the only topically administered Biologic approved, with a distinct therapeutic indication: eschar removal in adults with full- or partial-thickness thermal burns. However, it still has significant limitations for the treatment of electrical and chemical burns, or burns to the face and genitalia [94]. Eschar removal is a procedure that helps to better manage the wound and wound closure, and when eschar removal occurs in the first hours, it can reduce bacterial growth and days of hospitalization [102]. Of note, no other Biologic for this indication has been approved by the FDA in recent years.

Ongoing Clinical Trials for Eflapegrastim

Spectrum Pharmaceuticals Inc. is testing eflapegrastim in other settings: in pediatric participants with solid tumors or lymphoma treated with myelosuppressive chemotherapy [103], and in a study comparing the effect of eflapegrastim on the duration of neutropenia in patients with early-stage breast cancer [104]; further trials remain to be carried out. No studies were found for anacaulase or olipudase alfa for diseases other than the primary ones described in the Prescribing Information.

Discussion

The quantitative aspect of total drug approvals by the FDA in 2022 could cause some concern, as it ranks as the year with the second-lowest number [30]. However, as seen in 2016, factors such as fewer submissions and/or an increase in rejections by the FDA [11,12] may have played a role. Possible political influences should also be considered, including the end of the pandemic in many countries, as well as the slow process required to obtain final approval or delays in these processes, as seen in 2016 [12]. In the same way, a few drugs may have had an expected approval deadline in 2022 but, due to delays, may instead be approved in 2023. This decrease in authorizations should not be interpreted as a greater failure on the part of the pharmaceutical industry, as authorization has been granted to key drugs and there has been a trend to devote resources to an increasing number of rare diseases and conditions.
It should be noted that the decrease in approvals in 2022 applied only to the small molecules category, not to Biologics. In this regard, 15 Biologics were approved in 2022. Within the period 2015-2022, this year is among those with the highest number of biopharmaceuticals to receive the green light [1,30]. Special mention is given to four Biologics: the first bispecific fusion protein approved to date, namely Kimmtrak TM (tebentafusp), and the antibodies Vabysmo TM (faricimab), Tecvayli TM (teclistamab), and Lunsumio TM (mosunetuzumab), two of which are Orphan Drugs. This harvest makes 2022 the year with the highest number of bispecific biologic products to receive authorization. Of note, Vabysmo TM is produced by the giant Genentech, which manufactures two of the Biologics approved in 2022. This company has received authorization for a Biologic almost every year between 2015 and 2022; indeed, in 2017, three of its Biologics received the green light. Although cancer continues to be the disease most targeted by Biologics, only six out of the fifteen drugs authorized in 2022 are indicated for cancer. This may indicate a trend towards targeting other diseases. The harvest of 2022 included an ADC, Elahere TM, which, like all the other ADCs approved to date, is indicated for cancer. However, it carries a distinct payload from previous ADCs. As in previous years, in 2022, autoimmune conditions continued to rank second after cancer, with four out of the fifteen Biologics authorized for this indication. For aesthetic purposes, 2022 saw the approval of Daxxify TM (daxibotulinumtoxin A), the effects of which proved to have a longer duration than those of the widely used onabotulinumtoxin A. Of note, the last botulinum toxin approved before it, prabotulinumtoxin A, was authorized in 2019. It could be speculated that there is an emerging trend to provide alternatives to Botox TM. Daxxify TM falls into the natural product section, along with NexoBrid TM (anacaulase), a mixture of enzymes from pineapple [30,79,80,94]. Regarding the efficacy of the other Biologics approved in 2022, data so far show that they were all either superior or non-inferior to previous Biologics. Each year brings approvals for new targets, and 2022 was no different. In this regard, important advancements were made, such as Kimmtrak TM (tebentafusp): in addition to being the first bispecific fusion protein, it is the first Biologic indicated for the treatment of unresectable or metastatic uveal melanoma. Furthermore, Spevigo TM (spesolimab) is the first mAb to specifically treat generalized pustular psoriasis flares [19,20]. Both drugs have been granted Orphan Drug Status by the FDA. In the context of increasing concern regarding antibiotic resistance worldwide, the pharmaceutical industry appears to be struggling to develop antibacterial Biologics. This is reflected by the fact that only two such products have been authorized since 2015 [1]. The complexity of bacteria and the presence of polymicrobial infections may hinder the development of such Biologics. For example, while mAbs are highly selective drugs directed at one specific target, they may not be effective against the extremely high number of possible targets on bacterial surfaces that appear to be involved in infection. Such features make the development of new drugs even harder, and greater research efforts will be needed [15].
The combination of mAbs or bispecific mAbs emerges as a potentially relevant approach when addressing multiple targets in bacteria.

Conclusions

Notable advancements in the Biologics market were witnessed between 2015 and 2021, and 2022 was no different. Despite the smaller total number of approvals than in previous years, the number of Biologics did not vary from previous years; the decrease applied only to NCEs. Some of the drugs authorized in 2022 are aimed at diseases and conditions without a specific standard treatment and have novel mechanisms of action. This finding reflects the continued efforts to tackle challenges and provide patients with diseases other than cancer with more treatment options. Given all the ongoing trials found for the Biologics presented herein, a great effort is clearly underway to repurpose these Biologics and find other therapeutic indications for them, but it is still too early to find results posted on clinicaltrials.gov. The FDA granted Orphan Drug Status to seven out of the fifteen Biologics approved in 2022. This figure emphasizes the tendency of the pharmaceutical industry to embrace the important fight against rare diseases and conditions. Given all the Orphan Drug Statuses granted, plus the fact that cancer remains the main targeted disease, followed by autoimmune conditions, we wonder whether there is a tendency for the Biologics market to concentrate on these areas and set aside the other diseases and conditions that also need support, for example, the antimicrobial resistance mentioned above.
Successful Management of a Huge Pulmonary Hydatid Cyst with Lung-Preserving Surgery

The lung is the second most commonly involved organ in humans in hydatid disease. Management of large pulmonary hydatid cysts is a great challenge for thoracic surgeons. Lung resections should be considered the last choice for huge pulmonary hydatid cysts when lung expansion is not optimal after cyst removal. Here, we present a case of a huge lung hydatid cyst involving the entire right lower lobe that was successfully managed by lung-preserving surgery, in which the postoperative course showed gradual resolution of the involved lobe during a one-year follow-up.

Introduction

After the liver, the lung is considered to be the second most commonly infected organ of the body in hydatid disease [1-3]. Signs and symptoms may vary depending on the size and location of the cyst, ranging from asymptomatic to severe dyspnea, cough, chest tightness, and pain [4,5]. Even though hospitalization due to human cystic hydatidosis has decreased, some parts of the world, such as the Middle East, are still considered endemic areas for the disease [6,7]. Here, we present the successful management of a huge pulmonary hydatid cyst with lung-preserving surgery.

Case Presentation

A 28-year-old man with no past medical or family history presented with dyspnea during strenuous activities. The radiologic finding was in favor of a large cyst (20 × 18 × 14 cm) in the right hemithorax (Figures 1(a) and 1(b)). He denied any drug or substance abuse. On physical examination, there were decreased breath sounds in the right hemithorax, and laboratory data were unremarkable. A computed tomography scan showed no evidence of a hydatid cyst, except the one mentioned above, in other locations or organs. On the day of surgery, fiberoptic bronchoscopy was performed before placement of a double-lumen endotracheal tube, which showed only narrowing of the orifice of the right lower lobe bronchus from external compression. The patient underwent right posterolateral thoracotomy through the 6th intercostal space, in which the cyst contents, including 2500 cc of clear fluid and the laminated membrane of the hydatid cyst, were removed. We entered a large cystic cavity in the right lower lobe, the walls of which were densely adherent to the mediastinum, chest wall, and diaphragm. Since lobectomy was technically hazardous, further dissection was abandoned and multiple large bronchopleural fistulas were individually suture ligated with Vicryl 3-0 stitches. The lung was ventilated, and significant air leaks were closed, although the lower lobe was unable to expand. After inserting a 28F chest tube in the cavity, the chest wall was closed. The operation time was 128 minutes, and total blood loss was 280 milliliters. A postoperative chest X-ray showed an empty cyst cavity with no evidence of expansion of the lung in the lower hemithorax (Figure 2(a)). The postoperative course was uneventful, and the chest tube was removed on the 4th postoperative day. The patient was discharged on the 5th postoperative day with a chest X-ray showing no improvement in lung expansion (Figure 2(b)) and medical therapy consisting of oral albendazole 400 mg twice daily for three months, with one week of drug cessation after each month, and oral levofloxacin 750 mg daily for 10 days.
Follow-up visits at the clinic were conducted 2 weeks, 2 months, 6 months, and one year after the operation, with serial chest X-rays showing gradual obliteration of the remaining cavity and resolution of the right lower lobe (Figures 2(c) and 2(d)). The patient had significant improvement of symptoms during this period.

Discussion

It is reported in the literature that chest pain, cough, and dyspnea are the most common clinical symptoms of patients presenting with a pulmonary hydatid cyst; however, our patient developed only dyspnea despite the huge size of the cyst [4,8]. Kuzucu et al. reported that patients with a hydatid cyst greater than 10 cm may present with productive cough and dyspnea more frequently than those with smaller pulmonary hydatid cysts [8]. In terms of treatment, surgery provides the best option for pulmonary hydatid cysts. The most common procedure for the management of lung hydatid cysts is the Barrett/Posadas technique (cystotomy and closure of bronchopleural fistulas with or without capitonnage) [9]. Although it is well accepted in the literature that parenchymal resections should be reserved as last-resort options, sometimes segmentectomy and even lobectomy may be inevitable [10,11]. There is no generally accepted size defining the diameter of a cyst as "huge", although in most studies cysts more than 10 centimeters in diameter were regarded as "giant" or "huge" [8]. In cases with a huge hydatid cyst (more than 10 cm in diameter), postoperative complications are more frequent. Huge hydatid cysts have presented with more prolonged air leakage and atelectasis; however, our case did not present such complications after surgery [8]. The lobectomy rate in pulmonary hydatid surgery is reported to range from 0.5% to 45% in the literature. In a study by Karaoglanoglu et al., this rate was 13% in giant cysts [12]. Indications for parenchymal resection are giant cysts occupying the entire lobe, multiple cysts, and an unexpandable lobe after excision of the cyst. It is believed that parenchyma-preserving procedures should be preferred in most cases because the lung parenchyma that has been compressed by the cyst is healthy and would expand postoperatively [12]. The decision for parenchymal resection is made during the operation with an evaluation of lung expansion after excision of the cyst [13]. The significance of our case lies in encountering a huge cyst in the right lower lobe that was not amenable to lobectomy due to technical aspects. The cyst walls were densely adherent to the chest wall, diaphragm, and mediastinum, and dissection of the fissures was hazardous. Even after meticulous airtight closure of the bronchial openings, the lower lobe did not expand. The chest tube was removed early in the postoperative course, owing to the absence of air leakage, to prevent further contamination of the cavity. As the cyst cavity collapsed gradually during the follow-up period, the lower lobe expanded well. The learning point of this case is the importance of secure closure of bronchial openings in the surgical management of pulmonary hydatid cysts to prevent space infection, which may complicate the postoperative course. Even if the remaining lobe does not expand adequately during the operation, gradual expansion in the long term can be expected provided there is no air leakage.

Conclusion

Thoracic surgeons should be aware of unexpected difficulties in operating on huge pulmonary hydatid cysts.
Parenchyma-preserving surgery is advised and is considered fundamental in the surgical management of lung hydatidosis, and radical surgery can be avoided even in cases with a large hydatid cyst. Secure airtight closure of bronchial openings is invaluable in attaining such excellent results.

Consent

Informed written consent to write and publish this case as a report, with accompanying radiological images, was obtained from the patient.
Genome-wide association studies of antidepressant class response and treatment-resistant depression

The "antidepressant efficacy" survey (AES) was deployed to > 50,000 23andMe, Inc. research participants to investigate the genetic basis of treatment-resistant depression (TRD) and non-treatment-resistant depression (NTRD). Genome-wide association studies (GWAS) were performed, including TRD vs. NTRD, selective serotonin reuptake inhibitor (SSRI) responders vs. non-responders, serotonin-norepinephrine reuptake inhibitor (SNRI) responders vs. non-responders, and norepinephrine-dopamine reuptake inhibitor responders vs. non-responders. Only the SSRI association reached the genome-wide significance threshold (p < 5 × 10−8): one genomic region in RNF219-AS1 (SNP rs4884091, p = 2.42 × 10−8, OR = 1.21); this association was also observed in the meta-analysis (13,130 responders vs. 6,610 non-responders) of the AES and an earlier "antidepressant efficacy and side effects" survey (AESES) cohort. Meta-analysis for the SNRI response phenotype derived from AES and AESES (4,030 responders vs. 3,049 non-responders) identified another genomic region (lead SNP rs4955665, p = 1.62 × 10−9, OR = 1.25) in an intronic region of MECOM passing the genome-wide significance threshold. Meta-analysis for the TRD phenotype (31,068 NTRD vs. 5,714 TRD) identified one additional genomic region (lead SNP rs150245813, p = 8.07 × 10−9, OR = 0.80) in 10p11.1 passing the genome-wide significance threshold. A stronger association for rs150245813 was observed in the current study (p = 7.35 × 10−7, OR = 0.79) than in the previous study (p = 1.40 × 10−3, OR = 0.81), and for rs4955665, a stronger association was observed in the previous study (p = 1.21 × 10−6, OR = 1.27) than in the current study (p = 2.64 × 10−4, OR = 1.21). In total, three novel loci associated with SSRI or SNRI response (responders vs. non-responders) and with NTRD vs. TRD were identified; gene-level association and gene set enrichment analyses implicate enrichment of genes involved in immune processes.

Introduction

A wide variety of antidepressants are available for major depressive disorder (MDD), and response to treatment varies in time to onset of benefit, overall efficacy, and duration of effect. Approximately 30% of individuals with MDD do not achieve full remission despite treatment with multiple agents at an adequate dose and duration and are considered to have treatment-resistant depression (TRD) [1]. Genetic variability may contribute to differences in drug-specific response, class-specific response, or TRD. Genome-wide association studies (GWAS) have been employed as an approach to identify novel genetic variants that may contribute to variation in antidepressant response. Several antidepressant efficacy GWAS have been conducted using samples from the Munich Antidepressant Response Signature (MARS) project (a naturalistic prospective study, n = 339) [2], the Genome-Based Therapeutic Drugs for Depression (GENDEP) project (n = 394 on escitalopram and n = 312 on nortriptyline) [3], the Sequenced Treatment Alternatives to Relieve Depression (STAR*D) study (n = 1,491 on citalopram) [4], the Mayo Clinic Pharmacogenomic Research Network Antidepressant Medication Pharmacogenomic Study (PGRN-AMPS) (n = 529 individuals on selective serotonin reuptake inhibitors [SSRIs]) [5], and the Janssen-23andMe antidepressant efficacy GWAS study [6].
No genome-wide significant associations were found in the analysis of individual-level data from the Novel Methods Leading to New Medications in Depression and Schizophrenia (NEWMEDS) consortium, which consisted of 1,790 individuals of European ancestry with MDD, nor in the meta-analysis of the NEWMEDS and STAR*D studies (n = 2,897) [7]. In the antidepressant efficacy GWAS meta-analysis performed on three studies with data from individuals of Northern European descent (STAR*D, GENDEP, and MARS; n = 2,256), no variants passing the genome-wide significance threshold for association with antidepressant response were identified in the primary outcome assessment of percentage improvement on clinician-rated depression scales and remission rates after 12 weeks of treatment [8]. Recently, Fabbri et al. re-analyzed GENDEP and STAR*D samples by adding the exome array rare variant content and using the Haplotype Reference Consortium (HRC) panel for imputation, and identified rs116692768 (p = 1.80 × 10−8; integrin subunit alpha 9, ITGA9) and rs76191705 (p = 2.59 × 10−8; neurexin 3, NRXN3) as significantly associated with symptom improvement during citalopram/escitalopram treatment [9]. Only the association between rs116692768 and symptom improvement was replicated in PGRN-AMPS (p = 0.047), and neither polymorphism was replicated in NEWMEDS [9]. Lastly, polygenic risk scores estimating MDD risk also did not predict antidepressant treatment response [10]. Antidepressant response information obtained from self-reported questionnaires could offer an alternative approach, allowing studies with much larger sample sizes. In the current study, treatment outcome data based on an antidepressant efficacy survey (AES) [11] deployed to 23andMe's research participants were utilized in GWAS. The primary aim of this study was to identify novel genetic variants specifically associated with response to classes of antidepressant therapy, to improve our understanding of a potential genetic basis of antidepressant treatment response, and to differentiate TRD from non-TRD (NTRD). Furthermore, a similar GWAS using phenotype data derived from the "antidepressant efficacy and side effects" survey (AESES) was reported by Li et al. [6]. From the AESES survey, which was also deployed to 23andMe research participants and collected responses on specific drugs, class- or drug-specific antidepressant treatment response phenotypes could be derived, including SSRI response, norepinephrine-dopamine reuptake inhibitor (NDRI) response [6], citalopram/escitalopram response, SNRI response, and TRD vs. NTRD. An SNRI response analysis had not previously been conducted using data from AESES; we have now included this analysis in the current study. Overlapping phenotypes from AES and AESES were also meta-analyzed to increase the study power.

Methods

Cohorts

"Antidepressant Efficacy" survey (AES) cohort [11]

Saliva samples for genetic testing from approximately 56,000 research participants from 23andMe were collected under a protocol approved by Ethical and Independent Review Services [12], a private institutional review board (IRB). Informed consent was obtained. Participants answered the AES and the "Your Profile and Health History" survey online between August 2015 and January 2017.
"Antidepressant Efficacy and Side Effects" survey (AESES) cohort

Approximately 48,000 23andMe research participants (including the overlap with participants who took the AES) provided saliva samples and informed consent for genetic testing under the same IRB-approved protocol and answered the AESES and the "Your Profile and Health History" survey online between June 2013 and June 2015. The GWAS using data from the AESES has been previously reported [6].

Sample genotyping and SNP data imputation

DNA extraction and genotyping were performed as described previously [6,13]. Briefly, samples were genotyped on platform variants (V1 and V2) of the Illumina HumanHap550+ BeadChip (Illumina Inc., San Diego, CA), which included ~25,000 custom single nucleotide polymorphisms (SNPs) selected by 23andMe, with a total of ~560,000 SNPs. A custom content platform (V3) based on the Illumina OmniExpress+ BeadChip was used to improve the overlap, with a total of ~950,000 SNPs. A fully custom array platform (V4) was also used, which included a subset of SNPs with additional coverage of lower-frequency coding variation, totaling ~570,000 SNPs. Samples that failed to reach a 98.5% call rate were reanalyzed. Prior to imputation of genotype data against the September 2013 release of the 1000 Genomes [14] Phase 1 reference haplotypes, we excluded SNPs with Hardy-Weinberg equilibrium p < 10−20, call rate < 95%, or large allele frequency discrepancies compared to the European 1000 Genomes reference data [15]. Additional details on the imputation procedure are provided in Supplementary Text S1.

Data and phenotypic analysis groups

The AES taken by 23andMe participants was designed by Janssen in collaboration with Dr. Ronald Kessler, Harvard University. The survey asked respondents about their use of antidepressants and antipsychotics over the last 5 years and the perceived qualitative effect of the treatment on the current depressive episode overall. If a study participant also used non-pharmacotherapy options, the survey attempted to tease out the contribution of pharmacotherapy (see Supplementary Fig. S1A for example questions). The list of drugs included the SSRIs citalopram, escitalopram, fluoxetine, paroxetine, and sertraline; the SNRIs duloxetine, venlafaxine, desvenlafaxine, and levomilnacipran; the NDRI bupropion; the serotonin antagonist and reuptake inhibitor trazodone; atypical antipsychotics (quetiapine, olanzapine, and aripiprazole); the serotonin modulators vortioxetine and vilazodone; and Symbyax® (a combination of olanzapine and fluoxetine). Using phenotype data collected from the AES [11] and genotype data from 23andMe participants, genome-wide association analyses were performed on 4 groups of phenotypes: (a) NTRD (n = 17,214) vs. TRD (n = 3,168), (b) SSRI responders (n = 8,491) vs. non-responders (n = 4,046), (c) SNRI responders (n = 2,055) vs. non-responders (n = 1,950), and (d) NDRI responders (n = 1,616) vs. non-responders (n = 2,068). All participants included in these analyses self-reported taking antidepressants for depression. In the AES, a participant was classified as having TRD if (1) he or she took at least two antidepressants for ≥ 5-6 weeks; and (2) the overall treatment effect was not "helpful or very helpful", or medication did not help despite the overall treatment effect being "helpful or very helpful".
A survey participant was classified as NTRD if (1) he or she received only antidepressant pharmacotherapy and the treatment effect was helpful or very helpful; or (2) he or she also received non-pharmacotherapy but stated that the overall treatment effect was helpful or very helpful and that medication was the main reason the treatment was helpful, or that medication was important but not the main reason the treatment was helpful. In both cases, the participant took ≤ 2 antidepressant medications for more than 3-4 weeks. A schematic flow diagram of both the TRD/NTRD and the class-specific responder/non-responder phenotype classifications based on the AES questionnaire is provided in Supplementary Fig. S1B. Since the AES did not ask questions on response to each individual antidepressant, only participants responding to mono-pharmacotherapy were considered for the class-specific responder analyses. Using phenotype data collected from 23andMe surveys (AESES and "Your Profile and Health History") and genotype data from 23andMe's research participants, genome-wide association analyses were performed on one additional phenotype that was not previously analyzed [6]: SNRI responders (n = 2,547) vs. non-responders (n = 1,567). Responder status was defined in accordance with the previous report [6], described in Supplementary Text S2, and depicted in Supplementary Fig. S1C. For each of the four AES phenotype groups, responder vs. non-responder analyses were performed both with and without AESES overlapping participants included. In addition, the responder subgroups (e.g., the resistant/non-responder groups and the non-resistant/responder groups) were also compared to healthy controls (n ≈ 54,000) who self-reported being free of any of the following conditions based on the survey data captured from the "Your Profile and Health History" survey: attention-deficit/hyperactivity disorder, anxiety, schizophrenia, depression, bipolar disorder, OCD, autism, PTSD, and insomnia, as a way to confirm whether the study population was similar to clinically ascertained cohorts.

Genome-wide association analysis

The overall analysis flow is depicted in Supplementary Fig. 1D. Specifically, genome-wide analysis was restricted to a set of unrelated individuals who had > 97% European ancestry, as determined through an analysis of local ancestry. Standard quality control on directly genotyped markers excluded (1) SNPs that were only genotyped on the V1 and/or V2 platforms, due to small sample size, and SNPs on chrM or chrY; (2) SNPs that failed a test for parent-offspring transmission using trio data; (3) SNPs with Hardy-Weinberg p < 10−20 in Europeans; (4) SNPs with a call rate of < 90%; and (5) SNPs with a genotyping batch effect. Imputed markers were excluded if the overall r2 < 0.5, or r2 < 0.3 in any imputation batch, or if they showed a significant imputation batch effect. For case-control comparisons, association test results were computed by logistic regression assuming additive allelic effects, using custom scripts implemented by 23andMe in the C++ programming language, which were also used to compute association test results in previous publications [6,13,16-21]. For tests using imputed data, the imputed dosages rather than best-guess genotypes were used. Covariates for age, gender, genotype platform, and the top five principal components to account for residual population structure were included. The association test p-value reported was computed using a likelihood ratio test. A p-value threshold of 5 × 10−8 was considered genome-wide significant [22].
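For illustration, the following is a minimal sketch of this kind of per-variant test, written in Python rather than the 23andMe C++ implementation described above: a logistic regression on imputed allele dosage with additive coding, adjusted for covariates, with the p-value from a likelihood ratio test. The function name and the exact layout of the covariate matrix are our own assumptions for the example, not details from the study.

```python
import numpy as np
import statsmodels.api as sm
from scipy.stats import chi2

def lrt_association(case_status, dosage, covariates):
    """Sketch of one variant's association test.

    case_status: 0/1 array (e.g., non-responder/responder)
    dosage:      imputed allele dosage in [0, 2], additive coding
    covariates:  2-D array (age, gender, platform dummies, top 5 PCs)
    """
    # Null model: covariates only; full model: covariates + dosage term.
    X_null = sm.add_constant(covariates)
    X_full = np.column_stack([X_null, dosage])
    ll_null = sm.Logit(case_status, X_null).fit(disp=0).llf
    full = sm.Logit(case_status, X_full).fit(disp=0)
    # Likelihood ratio statistic, chi-squared with 1 df (the dosage term).
    lr_stat = 2.0 * (full.llf - ll_null)
    p_value = chi2.sf(lr_stat, df=1)
    odds_ratio = np.exp(full.params[-1])  # per-allele OR
    return p_value, odds_ratio

# A variant would be declared genome-wide significant at p < 5e-8.
```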
No additional multiple testing correction was applied for the consideration of multiple phenotype groups. Additional details on the method can be found in Supplementary Text S1.

Meta-analysis
For phenotypes overlapping between a similar analysis based on the AESES, conducted previously 6 or reported herein, and the AES study 11 reported herein, a meta-analysis was performed. The overlapping participants who responded to both surveys were removed, and only the non-overlapping participants were included in the 'Antidepressant Efficacy' cohort for the meta-analysis. Dosage association statistics were used in the meta-analysis using PLINK 23 (version 1.07), and the fixed-effects model p-value is reported. The conventional genome-wide significance threshold of 5 × 10^-8 was used to declare study-wide significance. A list of variants with an unadjusted p-value < 5 × 10^-4 is also reported. In addition, meta-analyses using the AES GWAS summary statistics before removing overlapping participants and using METACARPA 24 (a method accounting for sample overlap) were also applied, and p_wald, p_corrected, and p_stouffer were reported. Some of the Manhattan, Q-Q, and circos plots were generated using FUMA 25 , while regional plots were generated using LocusZoom v1.2 26 .

Genetic heritability estimates
Psychiatric Genomics Consortium (PGC) disease susceptibility summary association statistics for MDD, bipolar disorder, and schizophrenia [27][28][29][30] were downloaded from the PGC website (http://www.med.unc.edu/pgc/downloads) and included with the summary statistics from this study as reference datasets for genetic heritability estimates. The phenotypic variance explained by variants (both genotyped and imputed, mostly SNPs) (h^2) for each of the phenotype groups was estimated using association statistics as implemented in LD Score regression 31 . We additionally calculated the h^2 for the response phenotype using genome-wide complex trait analysis (GCTA) 25 (using pruned genotyped SNPs only), because of the computationally intensive genetic relationship matrix (GRM) calculation step.

Multi-marker analysis of genomic annotation (MAGMA) gene, gene-set, and cell type analysis
In addition to single-marker-based GWAS, gene and gene-set analyses were computed using MAGMA 32 based on GWAS summary statistics. SNPs were mapped to 18,927 protein coding genes. Genome-wide significance was defined at p = 0.05/18,927 = 2.64 × 10^-6. MAGMA gene-set analysis was performed for curated gene sets and GO terms obtained from the Molecular Signatures Database 33 (MSigDB) (a total of 10,894 gene sets). Lastly, MAGMA gene-property analysis was performed to test the cell type specificity of each phenotype using GWAS summary statistics. All MAGMA analyses were performed using FUMA 34 .

Annotation of variants
The implication of a causal gene for a genetic association (e.g., linking a variant to a gene) is in general not straightforward unless the variant itself causes a deleterious functional consequence. Variant-to-gene mappings (position-based, expression quantitative trait loci [eQTL]-based, or chromatin interaction-based) were generated using FUMA. eQTL-based and chromatin interaction-based mapping were used to aid the interpretation of the variants identified. FUMA advocates taking position-based, eQTL, and 3-D chromatin interaction evidence as ways to link variants to genes 34 . The Open Target Platform 35 also leverages protein quantitative trait locus (pQTL) data, distance to transcription start site (TSS), etc.
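As an illustration of the simplest of these strategies, position-based mapping assigns a variant to every gene whose boundaries lie within a fixed window of it. The sketch below is schematic only: the window size, gene name, and coordinates are made-up placeholders, and tools such as FUMA layer eQTL and chromatin-interaction evidence on top of this positional evidence.

def position_map(variant, genes, window=10_000):
    """Map one variant to genes within +/- `window` bp on the same chromosome.

    variant: (chrom, bp) tuple
    genes:   iterable of (name, chrom, start_bp, end_bp) tuples
    """
    chrom, bp = variant
    return [name for name, g_chrom, start, end in genes
            if g_chrom == chrom and start - window <= bp <= end + window]

# Hypothetical example: one gene on chromosome 13
genes = [("GENE_A", "13", 1_000_000, 1_050_000)]
print(position_map(("13", 1_045_500), genes))  # -> ['GENE_A']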
The data sources for overlapping approaches (such as eQTL) are not entirely identical between bioinformatics resources such as FUMA and Open Target Genetics, and it is therefore beneficial to utilize multiple tools. Open Target was used to provide additional information to aid the variant-gene linking interpretation.

Replication of published antidepressant treatment response GWAS top hits
Two published antidepressant treatment response GWAS meta-analyses 8,9 provide a full list of top hits with p < 0.0001 and p < 5 × 10^-6, respectively, in their supplemental material. Despite the difference in phenotype ascertainment, we attempted to replicate the reported findings, focusing on the remission status endpoint and adjusting for the number of top hits in the published GWAS meta-analyses. Associations passing the multiple-testing correction threshold were considered replicated; others with p < 0.05 were considered suggestive only. Results from other treatment response endpoints were cross-checked as well. No multiple testing correction was applied for the four treatment response phenotypes considered in this study or for the multiple endpoint definitions (symptom improvement vs. remission, 2 weeks vs. 12 weeks, whole samples vs. SSRI samples only).

Cross reference of UK Biobank (UKB) phenome-wide association study (PheWAS) and other antidepressant treatment response results for genome-wide significant variants from this study
Results from the UKB PheWAS analysis performed by the Neale Lab (Broad Institute of MIT and Harvard, Cambridge, Massachusetts) are available from Open Targets. UKB PheWAS association results were assessed for the top hits from the current study, especially for traits related to psychiatric conditions, as corroborating evidence. An association passing the phenome-wide significance threshold (p < 0.05/2,000 ≈ 2.5 × 10^-5) was considered significant, while p < 0.05 was considered suggestive. Furthermore, antidepressant studies, especially the STAR*D-GENDEP-MARS meta-analysis 8 (Pharmacogenetics - PhaCoGe in https://data.broadinstitute.org/mpg/ricopili/), were assessed using SNPs in linkage disequilibrium (LD) with the genome-wide significant variants.

Results
The sample size and demographics for each phenotype definition derived from the AES are described in Table 1, with additional details in Supplementary Table S1. A full list of suggestive associations with p < 5 × 10^-4 for all four treatment response endpoints is available in Supplemental Table S2. Analysis of the heritability estimates for responders vs. non-responders is shown in Supplementary Table S3. Overall, the heritability estimates for response phenotypes are still unreliable, with confidence intervals crossing zero except for two that were estimated using GCTA, suggesting the sample size is still not sufficiently large to yield a reliable estimate. Most of the disease phenotypes (responders vs. controls, or non-responders vs. controls) were similar to those estimated for MDD cases vs. controls from the PGC, as observed in the AESES study 6 . (see figure on previous page) Fig. 1 Genome-wide significant association signals. (A) Manhattan plot for the SSRI GWAS in the AES cohort; (B) SNRI responders vs. non-responders GWAS meta-analysis; (C) NTRD vs. TRD GWAS meta-analysis; (D) Regional plot for chromosome 13; (E) Regional plot for chromosome 3; (F) Regional plot for chromosome 10; (G) Circos plot for chromosome 13; (H) Circos plot for chromosome 3; (I) Circos plot for chromosome 10.
The dotted line indicates the genome-wide significance threshold of 5 × 10^-8. For the regional association plots generated by LocusZoom 26 , SNPs in genomic risk loci are color-coded as a function of their r^2 to the index SNP in the locus, as follows: red (r^2 > 0.8), orange (r^2 > 0.6), green (r^2 > 0.4) and light blue (r^2 > 0.2). SNPs that are not in LD with the index SNP (r^2 ≤ 0.2) are dark blue, while SNPs with missing LD information are shown in gray. For the circos plots, the outermost layer is the Manhattan plot, the middle layer highlights genomic risk loci (as defined by FUMA using a minimum p-value of lead SNPs of 1 × 10^-5 and default values for other parameters) in blue, and the innermost layer highlights eQTLs and/or chromatin interactions. Only SNPs with p < 0.05 are displayed in the outer ring. SNPs in genomic risk loci are color-coded as a function of their maximum r^2 to one of the independent significant SNPs in the locus, as follows: red (r^2 > 0.8), orange (r^2 > 0.6), green (r^2 > 0.4) and blue (r^2 > 0.2). SNPs that are not in LD with any of the independent significant SNPs (r^2 ≤ 0.2) are gray. The rsIDs of the top SNPs in each risk locus are displayed in the outermost layer. In the innermost layer, a gene mapped only by chromatin interactions or only by eQTLs is colored orange or green, respectively; it is colored red when the gene is mapped by both. AES Antidepressant Efficacy Survey, GWAS genome-wide association analysis, SSRI selective serotonin reuptake inhibitor, SNRI serotonin-norepinephrine reuptake inhibitor, TRD treatment-resistant depression.

MAGMA gene analysis identified one and three genes passing the multiple testing correction threshold for the SNRI and TRD phenotypes, respectively, in the AES cohort (Table 3, Supplementary Table S4), including lymphotoxin beta (LTB), an inflammation-related gene implicated in TRD 42 . None of the gene-level MAGMA associations (using meta-analysis association statistics) yielded genome-wide significance (Supplementary Table S5). Cell type analysis of the SNRI METACARPA meta-analysis results revealed potential enrichment of GABAergic neurons (p = 0.03, p_adj = 0.08 when adjusted within the Allen Brain Atlas Cell Type human MTG 43 dataset; multiple single-cell RNA-Seq [scRNA-Seq] datasets showed trends towards GABAergic neurons), while the SSRI, NDRI, and TRD meta-analysis results revealed potential enrichment of glutamatergic neurons and microglia (Supplementary Table S8). The enrichment was not statistically significant when adjusted across all scRNA-Seq datasets tested. We used our study results to attempt replication of antidepressant treatment response outcomes reported in the literature 8,9 . Among the top hits (p < 0.0001) for remission after up to 12 weeks of treatment in the meta-analysis of SSRI-treated participants in GENDEP and STAR*D (n = 54 top hits) and the meta-analysis of the entire GENDEP, MARS, and STAR*D samples (n = 60 top hits), only rs6540437 near complement C3b/C4b receptor 1 like (CR1L), which was suggestively associated with SSRI remission (p = 0.00004 in the GENDEP and STAR*D meta-analysis), was replicated in this study (p = 0.0008 for the NTRD vs. TRD analysis, against a threshold of p ≤ 0.05/54 ≈ 0.0009), with a consistent directional effect. The results from the NDRI responders vs. non-responders analysis (p = 0.003) and the SSRI responders vs. non-responders analysis (p = 0.01) were suggestively supportive. Other replication results for suggestive associations are shown in Supplementary Tables S9 and S10.
Among the genome-wide significant variants identified by Fabbri et al. 9 in the re-analysis of the GENDEP and STAR*D samples, rs76191705 had a nominal association in the NDRI responder analysis in the AESES cohort (p = 0.02, OR = 1.58) but not in the AES cohort (p = 0.08).

Table 2 Genome-wide significant SNPs for each phenotype in either the AES cohort or in the meta-analysis.

Conversely, we examined corroborating evidence from the UKB PheWAS and other antidepressant treatment response studies for the genome-wide significant variants identified in the current study. Interestingly, rs4884091, which was associated with SSRI responders (vs. non-responders), was also suggestively associated with "manic/hyper symptoms: I was more creative or had more ideas than usual" (p = 0.003) in the UKB PheWAS. In the meta-analysis between the AES and AESES cohorts, we identified two additional genome-wide significant loci. The rs4955665 variant associated with SNRI response was also suggestively associated with "longest period of unenthusiasm/disinterest" (p = 0.0004), "manic/hyper symptoms: I was more talkative than usual" (p = 0.02), and "diagnoses - main ICD10: F99 Mental disorder, not otherwise specified" (p = 0.004) in UKB (data source: Open Target, PheWAS analysis performed by the Neale Lab) 35 . The rs150245813 variant associated with NTRD (vs. TRD) was additionally suggestively associated with "Diagnoses - main ICD10: F33 Recurrent depressive disorder" (p = 0.01) and "Diagnoses - main ICD10: F43 Reaction to severe stress, and adjustment disorders" (p = 0.02). However, none of the PheWAS-suggestive associations would remain significant after adjusting for the more than 2,000 traits tested. In addition, the genome-wide significant variants from this study were not replicated in the STAR*D-MARS-GENDEP meta-analysis 8 for remission status after up to 12 weeks of treatment. Specifically, rs2804669, in LD (r^2 = 0.5, D' = 1) with rs150245813, was not associated with remission status (p = 0.88). Likewise, rs4955666, in LD (r^2 = 0.90, D' = 1) with rs4955665, was not associated with remission (p = 0.68). Lastly, rs9318544, in LD (r^2 = 0.92, D' = 0.98) with rs4884091, was not associated with remission (p = 0.64).

Discussion
In the current analysis, the use of GWAS identified several genetic markers potentially associated with TRD and with antidepressant treatment response in a large population of individuals using self-reported outcomes. To the best of our knowledge, this study included the largest cohort to date for a GWAS evaluation of antidepressant efficacy. Among the variants, genes, and gene sets identified in the various analyses, a common theme of immune regulation emerges. LTB is one of the genes implicated in TRD in the gene-based MAGMA analysis. LTB is an inducer of the inflammatory response system, implicating a role for inflammatory response modulation in TRD, which is consistent with the role of inflammation in MDD 45,46 and in antidepressant treatment response 46,47 . However, it is noteworthy that there are three genes in this region of chromosome 6 associated with TRD status in the gene-based analysis. This is a gene-dense, high-LD region of chromosome 6, which raises the caveat that it could be difficult to localize the causal associations in this region. MAGMA gene-set analysis also suggested an association with genes involved in organ- or tissue-specific immune response and innate immune response in the NDRI analysis.
The SNRI analysis in the AESES cohort also suggested an enrichment of genes involved in the inflammatory response and the inflammasome complex. Meta-analysis suggested an enrichment of genes involved in interleukin signaling in SSRI treatment response, consistent with the theme that the inflammatory response plays a role in antidepressant treatment response. Replication of the STAR*D and GENDEP meta-analysis also highlighted the involvement of CR1L. Dysregulation of synaptic plasticity and deficits in functional connectivity are hypothesized to contribute to symptoms associated with MDD. Holmes et al. 48 used the synaptic vesicle glycoprotein 2A (SV2A) radioligand to index the number of nerve terminals as an indirect estimate of synaptic density and showed that the severity of depressive symptoms was inversely correlated with SV2A density. In mouse models of demyelinating diseases, synapse loss coincided with gliosis and increased complement component C3 at synapses. Overexpression of the complement inhibitor Crry/Cr1l at C3-bound synapses decreased microglial engulfment of synapses 49 . It is intriguing that we observed the replication of a CR1L polymorphism for SSRI treatment response, despite the difference in phenotypes. Although the present study is suggestive of a converging theme of involvement of immune processes, several limitations are worth mentioning. First, a finding would be most convincing if it were identified in one of the cohorts (AESES or AES) and strengthened in the meta-analysis; this was generally not the case, likely due to a combination of the sample size not being large enough (i.e., small compared to disease-phenotype GWAS meta-analyses) and the phenotype definition not being based on ascertainment by depression symptom severity scales. Second, the gene-set evidence is suggestive, as the gene sets did not pass the stringent multiple testing correction threshold. Despite the use of self-reported phenotypes, the genetic heritability for the MDD disease phenotype estimated from our cohort is comparable to that estimated from the PGC2 MDD cohort 50 , supporting that the disease phenotype ascertained by self-reporting is comparable with that ascertained by clinical assessment. However, we cannot readily extrapolate this finding to the self-reported treatment outcome phenotype. To our knowledge, possibly due to the inadequate sample size of most studies, no reliable estimates of the heritability of antidepressant treatment response have been published to date. The Janssen-23andMe AESES survey GWAS, based on self-reported outcomes, identified one genome-wide association locus but did not take treatment duration into consideration 6 . The AES used in this study was aimed at interrogating self-reported predictors of TRD and did take antidepressant exposure time into consideration. The genome-wide significant locus reported from the NDRI analysis in the AESES cohort 6 was not replicated in the AES cohort. A limitation of this study is that the treatment outcome phenotype was self-reported and that heritability estimates remain very low/unreliable at the meta-analysis sample size used for the responder vs. non-responder analysis. Another limitation is that the survey participants were not representative of all patients with depression, as they volunteered to provide samples for genetic testing. In addition, outcome assessments were based entirely on retrospective self-reporting. The genome-wide significant findings reported here have not been replicated and thus require further study to provide supporting or refuting evidence.
The current study appears to be the largest cohort ever evaluated for a GWAS of antidepressant efficacy. Our results identified novel associations of genetic variants with antidepressant responders vs. non-responders, but the findings require replication. Further, the meta-analysis of two antidepressant efficacy surveys identified two additional loci at the single-variant level for the TRD and SNRI response phenotypes. Several additional loci at the gene level passed genome-wide significance for both the TRD and SNRI response phenotypes. Future GWAS with larger sample sizes and meta-analyses with additional cohorts will be needed to replicate the findings reported here. Meta-analysis with other antidepressant treatment response studies may eventually have enough statistical power to help predict treatment outcome for a specific antidepressant class and/or in TRD vs. NTRD.
A NOTE ON THE FEKETE–SZEGÖ PROBLEM FOR CLOSE-TO-CONVEX FUNCTIONS WITH RESPECT TO CONVEX FUNCTIONS

We discuss the sharpness of the bound of the Fekete–Szegö functional for close-to-convex functions with respect to convex functions. We also briefly consider other related developments involving the Fekete–Szegö functional |a_3 − λa_2^2| (0 ≤ λ ≤ 1), as well as the corresponding Hankel determinant for the Taylor–Maclaurin coefficients {a_n} (n ∈ N \ {1}) of normalized univalent functions in the open unit disk D, N being the set of positive integers.

Introduction
A classical problem in the geometric function theory of complex analysis, which was settled by Fekete and Szegö [4], is to find, for each λ ∈ [0, 1], the maximum value of the coefficient functional Φ_λ(f) given by (1.1) over the functions of the form (1.2) f(z) = z + Σ_{n≥2} a_n z^n (z ∈ D). By applying the Loewner method, Fekete and Szegö [4] proved the sharp bound recalled below. For various compact subclasses F of the class A of all analytic functions f in D of the form (1.2), as well as with λ being an arbitrary real or complex number, many authors computed or estimated the upper bound in (1.3) (see, e.g., [2,8,11,21]). Let S* denote the class of starlike functions, that is, the class of f ∈ A with Re(zf'(z)/f(z)) > 0 (z ∈ D). Given δ ∈ (−π/2, π/2) and g ∈ S*, let C_δ(g) denote the class of functions called close-to-convex with argument δ with respect to g, that is, the class of all functions f ∈ A such that Re(e^{iδ} zf'(z)/g(z)) > 0 (z ∈ D). We also set, given g ∈ S*, C(g) := ∪_{δ∈(−π/2,π/2)} C_δ(g), and, given δ ∈ (−π/2, π/2), C_δ := ∪_{g∈S*} C_δ(g). Let C := ∪_{δ∈(−π/2,π/2)} C_δ denote the class of close-to-convex functions (see, for details, [20, pp. 184-185], [6,10]). For the whole class C, the sharp bound of the Fekete–Szegö coefficient functional Φ_λ for λ ∈ [0, 1], given by (1.1), was calculated by Koepf [13], who extended the earlier result for the class C_0 and for λ ∈ R due to Keogh and Merkes [11]; Koepf's bound is also recalled below. For various subclasses of the class of close-to-convex functions, the problem of estimating the coefficient functional Φ_λ is continued in several subsequent works (see, for details, [9,12,[14][15][16]). Some interesting and important subclasses of the class C are the classes C^c_δ and C^c, which are defined below. Let S^c denote the class of convex functions, that is, the class of f ∈ A with Re(1 + zf''(z)/f'(z)) > 0 (z ∈ D). Since S^c ⊊ S*, the class C^c_δ := ∪_{g∈S^c} C_δ(g) is a proper subclass of the class C_δ, and the class C^c is a proper subclass of the class C. The class C^c_0 was defined by Abdel-Gawad and Thomas [1]. The class C^c of close-to-convex functions with respect to convex functions was introduced by Srivastava, Mishra and Das [23]. In both of these cited papers, the authors (Abdel-Gawad and Thomas [1] and Srivastava, Mishra and Das [23]) also considered the coefficient functional Φ_λ with λ ∈ [0, 1]. In fact, Srivastava, Mishra and Das [23] extended, to the class C^c, the earlier result of Abdel-Gawad and Thomas [1] for the class C^c_0. However, in each of the above-cited papers, the proof of the sharpness of the bound 5/6 in (1.3) for λ ∈ (2/3, 1] was proposed incorrectly. This note is motivated essentially by the earlier papers [1] and [23]. The main purpose of our investigation here is to discuss these sharpness results for the bound in (1.3). We also provide a rather brief consideration of other related developments involving the Fekete–Szegö functional |a_3 − λa_2^2| (0 ≤ λ ≤ 1) in (1.1), as well as the corresponding Hankel determinant for the Taylor–Maclaurin coefficients {a_n} (n ∈ N \ {1}) of normalized univalent functions of the form (1.2).
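For reference, the standard statements behind the displays referenced above are as follows (a reconstruction from the classical literature, not a verbatim copy of the original's numbered equations): the functional (1.1) and the normalization (1.2), the classical Fekete–Szegö theorem for the full class S of univalent functions, and Koepf's bound for the class C.

\[
\Phi_\lambda(f) = a_3 - \lambda a_2^2,
\qquad
f(z) = z + \sum_{n=2}^{\infty} a_n z^n \quad (z \in \mathbb{D}),
\]
\[
\max_{f \in S} \left| a_3 - \lambda a_2^2 \right|
= 1 + 2\exp\!\left( \frac{-2\lambda}{1-\lambda} \right)
\quad (0 \le \lambda < 1),
\]
\[
\left| a_3 - \lambda a_2^2 \right| \le
\begin{cases}
3 - 4\lambda, & 0 \le \lambda \le \tfrac{1}{3},\\[2pt]
\tfrac{1}{3} + \tfrac{4}{9\lambda}, & \tfrac{1}{3} \le \lambda \le \tfrac{2}{3},\\[2pt]
1, & \tfrac{2}{3} \le \lambda \le 1,
\end{cases}
\qquad (f \in C).
\]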
Main Observation
As we remarked in Section 1, in both of the afore-cited papers [1,23], the upper bounds of the Fekete–Szegö coefficient functional Φ_λ (0 ≤ λ ≤ 1) for the classes C^c_0 and C^c were computed. In fact, Theorems 5 and 6 of Srivastava, Mishra and Das [23] state that a sharp inequality, (2.1), holds true, and that this result is the same as in [1] for the class C^c_0 (a part of Theorem 3). However, the assertion that the extremal function, for which the equality in (2.1) is satisfied when λ ∈ (2/3, 1], belongs to C^c is incorrect. Indeed, here in this section, we note that the above-cited papers [1,23] contain a statement to the effect that the equality in (2.1) is attained by a function f ∈ A given by (2.2), where h ∈ S^c is of the form (2.3) and ω is a function of the form (2.4), with coefficients as in (2.5). Unfortunately, however, ω is not a Schwarz function for λ ∈ (2/3, 1]. We recall here that a Schwarz function means an analytic self-mapping ω of D with ω(0) = 0. Let us denote the class of Schwarz functions by B_0. In order to see that ω ∉ B_0, we verify (by straightforward computation) that, for λ ∈ (2/3, 1], the inequality (2.6) |β_2| ≤ 1 − |β_1|^2 is false, so a necessary condition for ω to be in B_0 (see, for example, [5, Vol. II, p. 78]) does not hold true. Alternatively, in order to get a contradiction, we suppose that ω, with its coefficients in (2.5), is a Schwarz function. Then, clearly, (2.6) holds true. Hence we find from (2.5) that 1 − |β_1|^2 ≥ |β_2| = |1 − β_1^2| ≥ 1 − |β_1|^2. Thus we have |1 − β_1^2| = 1 − |β_1|^2 and, therefore, β_1 = |β_1| or β_1 = −|β_1|. This means that β_1 is a real number, which by (2.5) is possible only for λ = 2/3. Consequently, for λ ∈ (2/3, 1], the function ω with its coefficients in (2.5) does not belong to B_0. So, in light of (2.2), it does not follow that f is in C^c or in C^c_0. Equivalently, let (2.7) p(z) := (1 + ω(z))/(1 − ω(z)) (z ∈ D), where ω is as given above. Then p has an expansion of the form (2.8), where, in view of (2.7), (2.4) and (2.5), we have c_1 = 2β_1 and c_2 = 2(β_2 + β_1^2) = 2. We observe further that, for λ ∈ (2/3, 1], the function p does not belong to the Carathéodory class. We recall here that the Carathéodory class, denoted by P, consists of analytic functions p of the form (2.8) with positive real part. In order to see that p ∉ P, we verify for λ ∈ (2/3, 1] that the inequality |c_2 − c_1^2/2| ≤ 2 − |c_1|^2/2 is false, which happens to be a necessary condition for p to be in the class P (see, for example, [22, p. 166]).

Concluding remarks and further developments
By means of Theorem 3 of Abdel-Gawad and Thomas [1], Theorems 1 to 4 of Srivastava, Mishra and Das [23], and in light of our observation in Section 2, we arrive at the following result.

Theorem 1. Each of the following assertions holds true:

We now note that, by the Loewner theorem (see, for example, [5, Vol. I, p. 112]), the inequality (3.3) defines the class C_δ(h), and further the class C(h). For the first time, the inequality in (3.3), treated as a univalence criterion, was distinguished explicitly in [20, p. 185]. For the class C(h), the upper bound of the Fekete–Szegö coefficient functional Φ_λ for λ ∈ R was recently obtained in [14], where the corresponding result was proven. The determinant H_q(n) has also been considered by several other authors. For example, Noor [18] determined the rate of growth of H_q(n) as n → ∞ for functions f given by (1.2) with bounded boundary rotation. In particular, sharp upper bounds on H_2(2) were obtained in the recent works [7,18] for different classes of functions.
We note, in particular, that H_2(1) = det [ a_1, a_2 ; a_2, a_3 ] = a_3 − a_2^2 (since a_1 = 1) and H_2(2) = det [ a_2, a_3 ; a_3, a_4 ] = a_2 a_4 − a_3^2. The Hankel determinant H_2(1) = a_3 − a_2^2 is the classical Fekete–Szegö coefficient functional. The upper bounds of H_2(2) for some specific analytic function classes were discussed quite recently by Deniz et al. [3] (see also [19]).
Photon directional profile from stimulated decay of axion clouds with arbitrary axion spatial distributions

We model clusters of axions with spherically symmetric momentum but arbitrary spatial distributions and study the directional profile of photons produced in their evolution through spontaneous and stimulated decay of axions via the process $a \rightarrow \gamma + \gamma$. Several specific examples are presented.

INTRODUCTION
Axions are copiously produced at the QCD phase transition. A possible way to detect these cosmological axions is through the observation of lasing axion clouds (clumps). If axions are a component of the cold dark matter (CDM), they can form density perturbations in the early Universe. If the overdense regions have a high enough number density, then ambient photons from the cosmic microwave background (CMB), or from spontaneous axion decays, can induce stimulated axion decay within the clumps, i.e., the axions can lase [1][2][3]. Besides the initial clumps, other axion structures can form. The initial density perturbations can infall and evolve to form caustics [4], which have complicated geometries. Yet another possibility is that axions can be produced after the formation of primordial black holes (PBHs). Such black holes can be the result of various early-universe processes, from cosmic string or domain wall singularities to density perturbations. However they form, if they have sufficient angular momentum, either initially or from mergers, then superradiance can occur, causing axions to populate an (n, l, m) = (2, 1, 1) hydrogen-like orbit around them if the axion Compton wavelength is comparable to the PBH's radius. If the axion density is high enough, they can lase [5]. The process can saturate, stop, and then repeat, in a pattern similar to what has been seen for fast radio bursts (FRBs). Lasing in the PBH superradiance case has so far only been approximated using the spherically symmetric model [5]. In this work and in [6] we point the way to improving this approximation, using multipole expansions of the spatial and momentum space distributions to more closely represent the physical axion distributions expected around a PBH.

PHOTON ANGULAR DISTRIBUTION
In [1,3] nonrelativistic axions of mass m_a were contained in a ball of radius R, with a maximum momentum value of p_max ≈ m_a β. Here we allow a non-spherically-symmetric spatial distribution X(θ, φ) to modify the axion cloud model previously studied, with the aim of finding the angular distribution Y(θ, φ) of photons resulting from decays of axions, provided that there is some outside constraint (e.g., a gravitational field or self-interactions) that can keep the axions in the initial spatial distribution. For such an axion distribution, assuming it factorizes, the occupation number f_a(p, r, θ, t) and number density n_a(r, θ, t) can be written as in (1) and (2), where we can translate between the two with (3) f_ac(t) = 6π^2 n_ac(t) / (m_a^3 β^3). Here and elsewhere we use the shorthand notation X(θ) for X(θ, φ), and likewise for Y, f and n. The photons are contained in a ball of radius R and a momentum spherical shell of inner and outer radii k_- = (m_a γ / 2)(1 − β) and k_+ = (m_a γ / 2)(1 + β), respectively [3], where we use β = v/c.
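Explicitly, the volume V_k of this momentum shell, which enters the relations below, follows directly from the radii just given:

\[
V_k = \frac{4\pi}{3}\left( k_+^{3} - k_-^{3} \right)
    = \frac{\pi}{6}\,(m_a \gamma)^{3} \left( 6\beta + 2\beta^{3} \right)
    \approx \pi\, m_a^{3} \beta \quad (\beta \ll 1).
\]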
Here f_λ(k, r, θ, t) and n_λ(r, θ, t) are the photon occupation number and photon number density of helicity λ = ±1, respectively, which are related by eq. (6), with V_k the volume of the momentum spherical shell given above. We assume that the number density of each helicity state is the same, so the total photon number density n_γ can be written as in (7), which defines n_xc. Hence the coefficient of the total photon number density is just 2 times that of the photon number density of each helicity state. The evolution relation between the axion and photon occupation numbers is given by equation (13) of [3], where f_λ(k) and f_λ(k_1) are photon occupation numbers at momenta k and k_1, respectively. The other variables in f_λ(k) and f_λ(k_1), i.e. r, θ, t, are the same, since they share the same spacetime point. f_a(k + k_1) is the axion occupation number at momentum k + k_1, and Γ_a is the spontaneous axion decay rate. This evolution equation can be integrated over the k and k_1 phase space to yield equation (9) (see the Appendix). Now we employ the nonrelativistic approximation (β ≪ 1). Substituting the derived relations (3) and (??) into (9), and taking photon surface loss into consideration, we have an equation which gives the number density for each helicity state, where we assume, as was shown in (7), that the total photon number density is twice that of the individual helicity states. Dropping the step function Θ(R − r), we then have an equation, (10), for the coefficient of the total photon number density. From the first to the last term on the right-hand side (RHS) of the equation, the terms account for spontaneous decay of axions, photon-stimulated decay of axions, back reaction of photons, and surface loss of photons, respectively. Following a similar approach, we obtain an equation, (11), for the coefficient of the total axion number density. The third term on the RHS of (11) is proportional to β, while the third term on the RHS of (10) has a factor of (β + 3/2). Keeping track of the two parts of the axions generated from the back-reacting photons, we find that the 3/2 in the third term on the RHS of (10) represents sterile axions; it should have been, and was, excluded in the derivation of (11). The left-hand sides (LHS) of (10) and (11) have no θ dependence, but the RHS does. Setting X(θ) = Y(θ) will not make (10) and (11) valid simultaneously. So even if there is some outside constraint which can keep the axions fixed in the X(θ) distribution, the photons cannot have the same distribution, i.e., Y(θ) ≠ X(θ). There is no simple way to find a closed form for Y(θ), because the LHS of equations (10) and (11) have no θ dependence, while the θ dependences on the RHS of these equations are different. This suggests the possibility that Y(θ) may be found as a series expansion in X(θ). As a first test of this idea we replaced the general form X(θ) with sin θ, to study a distribution with more axions accumulated near the equatorial plane and few near the polar regions, aiming at matching orders of sin θ on each side of the equations. But this fails, as it turns out that sin^n θ (n ∈ Z) is not an orthogonal set of functions, and thus the calculation leads to contradictions. Therefore, we must expand the occupation numbers and number densities in terms of a full set of orthogonal functions. We do this in the next section, where we choose the set to be the real spherical harmonics.
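For concreteness, one common convention for building the real spherical harmonics Y_{lm} from the complex Y_l^m is the following (a standard definition, stated here as an assumption about the convention used):

\[
Y_{lm} =
\begin{cases}
\dfrac{i}{\sqrt{2}} \left( Y_l^{m} - (-1)^{m}\, Y_l^{-m} \right), & m < 0,\\[6pt]
Y_l^{0}, & m = 0,\\[6pt]
\dfrac{1}{\sqrt{2}} \left( Y_l^{-m} + (-1)^{m}\, Y_l^{m} \right), & m > 0.
\end{cases}
\]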
REAL SPHERICAL HARMONICS EXPANSION
The set-up here is similar to the previous discussion, except that the axion and photon occupation numbers and number densities have coefficients labeled by the order indices l and m. For the axions, the occupation number and number density are expanded in real spherical harmonics, with the coefficient normalization set accordingly. Note that n_a cannot be an arbitrary superposition of real spherical harmonics: it has to be real and positive, so it should be put into the appropriate form in terms of the complex spherical harmonics Y_l^m. This also applies to the photons, where, similar to the axion case, we set the analogous normalization. Following the steps of the previous general discussion, we have an equation similar to (10) for each choice of lm, in which the coupling coefficients E_lm and F_lm appear. We also have equations similar to equation (11) for each choice of lm with regard to the changing number density of axions. The equation includes components representing spontaneous decay, stimulated decay, and back reaction, with sterile axions excluded. The sterile axions evolve according to their own equation, and the rate of change of each photon number density component can be expressed in terms of the changing components of the normal axions and sterile axions, together with the components of surface loss. We now proceed to explore some example choices of initial axion distributions.

Y00 distribution
As a first example we consider the spherically symmetric axion distribution, in which the only nonzero component of the axion number density is n_a00: n_a = Θ(R − r) n_a00 Y_00(Ω), so that n_alm = 0 (lm ≠ 00). Then the only nonzero component of the photon number density is n_γ00. So if there is spherical symmetry in the axion distribution, then spherical symmetry also exists in the photon distribution. Now we argue that this is the only solution with a finite spherical harmonics series. Suppose that the highest spherical harmonic in the photon number density n_γ(Ω, t) is Y_{la ma}. According to (20), and taking Y_00 as a constant, the highest spherical harmonic in [n_γ(Ω, t)]^2 is also Y_{la ma}. However, according to (21), the highest spherical harmonic in [n_γ(Ω, t)]^2 should be Y_{2la 2ma}. This contradiction can only be resolved when n_γlm = 0 (lm ≠ 00), i.e., the photon number density retains spherical symmetry. The reason why this is the only finite-series case is that the Y_00 distribution of axions mathematically requires the photons to couple in a specific way that retains the Y_00 distribution, as is implied by equations (18) and (19). Now that we know all the coupling coefficients E_lm and F_lm, equations (12), (15) and (16) reduce to equations (34'), (37'), (38') of [3], because it is n_γ00 Y_00 that describes the photon number density. Hence we have checked the spherically symmetric model results given in [3].

Y20 distribution
For a Y_20 axion distribution, the only nonzero component of the axion number density is n_a20: n_a = Θ(R − r) n_a20 Y_20(Ω), so that n_alm = 0 (lm ≠ 20). Equation (15) simplifies for lm = 20. The nonzero component n_a20 of the axion number density evolves accordingly, and the photon number density component n_γ20 grows correspondingly, while the other photon number density components n_γlm (lm ≠ 20) evolve only through the remaining terms. Since no spontaneous decay from the axions feeds into these components, they are negligible. This example is not physical, because a number density of the form Y_20 becomes negative in some regions; it is included here for demonstration purposes. The next example is physical and motivated by superradiance. A sin^2 θ distribution is toroidal and non-negative everywhere, and hence can represent a physical distribution of particles.
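Concretely, the toroidal profile contains exactly two real harmonics; writing P_2 for the second Legendre polynomial,

\[
\sin^2\theta = \frac{2}{3}\left[ 1 - P_2(\cos\theta) \right]
= \frac{2\sqrt{4\pi}}{3}\, Y_{00}(\Omega) - \frac{2}{3}\sqrt{\frac{4\pi}{5}}\; Y_{20}(\Omega),
\]

so a number density proportional to \(\sin^2\theta\) has \(n_{a20} = -\,n_{a00}/\sqrt{5}\), which is the relation used in the next subsection.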
For this case the only nonzero components of the axion number density are n_a00 and n_a20, so we can write n_a(r, θ, t) in several useful forms. The relation between n_a00 and n_a20 is (23) n_a20 = −n_a00/√5. Similar to the previous examples, we find that, for components other than 00 and 20, no spontaneous-decay source appears, so those components of the photon number density are negligible, as in the previous example. The nonzero axion number density components evolve as before and, because of (23), this leads to a corresponding relation between their rates of change. The photon number density components n_γ00 and n_γ20 grow accordingly and, because of (23) and (24), we can combine the previous two equations. We observe that if the part of the back reaction that results in sterile axions is neglected, then n_γ20(t) = −n_γ00(t)/√5, so the photons would remain in the sin^2 θ distribution.

General distribution
Suppose that we have a general axion number density expansion. For the components with n_alm = 0, according to (15) the corresponding coupling vanishes; substituting this condition into equation (12), we also find no growth for those photon components. Hence there is no source feeding those photon components. The parts of the back reaction that result in sterile axions, and the surface loss, are the only terms that contribute to these components. It is expected that these components die out quickly and thus have no effect on lasing. So n_γlm ≈ 0 (when n_alm = 0); i.e., the photon field has the same spherical harmonic components as the axion field, as the other components die out quickly due to the lack of sources. Suppose now that all the axion components are nonzero and proportional to each other, with α_lm numbers and n_al0m0 the fiducial component to which all other components are proportional. Then, if the part of the back reaction that results in sterile axions is neglected, n_γlm = α_lm n_γl0m0. Hence the distribution of photons would keep the same shape as that of the axions if sterile axions were neglected.

DISCUSSION
The calculation presented here gives the initial spatial distribution of the photons once the spatial distribution of the axions is specified. It does not give direct instructions on how to achieve observable effects from axion cluster lasing. The model does capture the mechanism whereby the stimulated decay of an axion produces photons that have the same momenta as the photons which induced the stimulated decay. However, there is a compromise made here by using this equation: the entire model is a local theory. The photon occupation number here and now depends only on particle occupation numbers here and now. If the cluster in the model is a ball and all the quantities are spherically symmetric, the local theory provides useful predictions about the lasing process. However, if the cluster is of some specific geometrical shape, then the local theory probably will not give pertinent information that reflects the geometry of the cluster. Thus we suggest a non-local lasing theory, which could be governed by an equation of the following form. In the non-local model, the photon occupation number here and now depends on all the past occupation numbers of events that are causally connected to here and now. The factor e^{−Γ_a(t−t')} accounts for the probability that photons propagate from x' to x without stimulating an axion or undergoing annihilation.
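Returning to the local, spherically symmetric model, the qualitative lasing behavior can be explored numerically. The sketch below integrates a pair of coupled rate equations containing the kinds of terms identified above (spontaneous decay, stimulated decay with the net back reaction folded in, and photon surface loss); all coefficients are illustrative placeholders, not the prefactors derived in the text.

import numpy as np
from scipy.integrate import solve_ivp

GAMMA_A = 1.0   # spontaneous axion decay rate (arbitrary units)
C_STIM = 5e3    # effective stimulated-decay coupling (illustrative)
L_SURF = 2e2    # photon surface-loss rate, of order c/R (illustrative)

def rates(t, y):
    n_a, n_g = y
    decay = GAMMA_A * n_a * (1.0 + C_STIM * n_g)  # spontaneous + net stimulated
    dn_a = -decay                                 # axions lost to decay
    dn_g = 2.0 * decay - L_SURF * n_g             # two photons per decay, minus loss
    return [dn_a, dn_g]

sol = solve_ivp(rates, (0.0, 0.1), [1.0, 1e-6], rtol=1e-9)
print(sol.y[1].max())  # peak photon number density during the burst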
APPENDIX
Starting from the evolution relation, (25), between the axion and photon occupation numbers [3], where f_λ(k) and f_λ(k_1) are photon occupation numbers at momenta k and k_1, respectively (the other variables in f_λ(k) and f_λ(k_1), i.e. r, θ, t, are the same, since they share the same spacetime point), f_a(k + k_1) is the axion occupation number at momentum k + k_1, and Γ_a is the spontaneous axion decay rate, we substitute (1) and (4) into (25). The first and second integrals are the same. The third integral is related to the back reaction of photons, and it is convenient to split it into two parts. The first part represents back reaction resulting in axions with energy less than m_a γ, i.e., axions that can again participate in stimulated emission, while the second part gives the back reaction resulting in sterile axions, i.e., those for which the total energy of the axion, k + k_1, is larger than m_a γ. Moving the step function Θ(R − r) in front of the curly brackets, substituting the results of the integrations, and collecting terms, f_λ(k) can be written in a compact form. The rate of change of the photon number density is then the integration of this equation over k space.
Creating a New Paradigm for Premedical Undergraduate Studies: Physicians' Perceptions of Subjects and Skills Critical for Success in Medical School and Practice.

Background/Purpose: The purpose of this study is to determine subjects and skills that are perceived by practicing physicians as essential for success in medical training and practice. Previous studies suggest that better premedical preparation for a future career as a physician may reduce the need for expanded study of non-clinical subjects and skills in the graduate medical curriculum. Methods: The study was performed with a random sample of licensed physicians in Ohio (n=2,100), who were queried utilizing a survey instrument of 54 questions, including demographics and perceptions of eight subjects and sixteen skills essential for success in medical school and practice. Completed surveys (n=356) were found to be representative of the national demographics of practicing physicians, including similar age, education, gender, type of practice, and specialty. Results: Respondents indicated that the subjects of business, communications, and technology were rated as most important for physician success, while communications, natural sciences, and technology were most important for students. Skills identified as most essential to both training and practice included the ability to utilize technology, being honest and truthful, the ability to explore, self-educate, and research, and the ability to communicate orally. Conclusions: The findings of the study support previous research and indicate that some students entering medical school may not have the breadth of study that practitioners identify as best preparing them for success as a student and practitioner.

In the current medical education environment there is strong evidence, both objective and subjective, to suggest that an interdisciplinary approach to the training of physicians for the future realities of medicine is warranted. [1][2][3][4][5][6] In response to this issue, many medical schools have begun to add elective classes in such areas as health finance, ethics, legal aspects of practice, and practice management. 7 Changes in curricula, teaching methodology, and content have also altered the graduate medical education experience. Interdisciplinary elective courses are often limited in timeframe because of a lack of free time in the curricula. When compared to introductory courses taken in undergraduate studies (40-50 contact hours), the amount of time given to these courses is insufficient to explore these complex subjects. The purpose of this study is to assess physicians' perceptions of undergraduate premedical subjects and skills critical for success in medical training and practice. With the increasing role of the physician as manager, educator, and patient advocate, it is questionable whether the predominantly science-focused premedical major (biology, chemistry, and other premedical courses), which is still the norm for the majority of entering medical students, is engendering the subjects and skills most necessary for future success. Results of this study were used to create a ranking of subjects and skills physicians felt were critical to success in medical school and the practice of medicine, in addition to current required premedical coursework.

Background
Medical education has evolved over centuries. In early, pre-20th-century America, medical studies were often informal and varied greatly in the breadth and depth of instruction.
In 1910, Abraham Flexner released his benchmark study for the Carnegie Foundation for the Advancement of Teaching. This study examined the adequacy of medical instruction across the United States. A primary recommendation of Flexner was the requirement of an undergraduate degree to study medicine. Since that time, with the exception of a very few combined M.D./B.S. programs, the undergraduate degree has been required for entrance into medical school. Over the course of the last 75 years, medical education has evolved with the addition of such instructional strategies as early clinical contact, case- and systems-based study, and care continuums. However, premedical undergraduate education has not seen similar development and has changed little over time. While the literature concerning the undergraduate premedical curriculum is somewhat limited, various studies point to some remarkable facts regarding the use of a science-based undergraduate experience for entrance into medical studies. Currently, approximately 65% of students enter medical school with a science-based undergraduate degree, 8 although research has shown that non-science majors are accepted at approximately equal rates as their science peers. 9 Koenig and Wiley, 10 Jones and Seeman, 11 and Shen and Comrey 12 found positive correlations between an undergraduate science-based curriculum and high Medical College Admission Test scores, as well as performance in the first and second years of medical school. Other studies, however, have questioned whether additional science courses in the undergraduate premedical curriculum affect performance in the initial pre-clinical years of education. Studies by Hall and Stocks 13 and Zeleznick, Hojat and Veloski 14 indicate that additional undergraduate premedical science courses offer no advantage to students in medical board examination scores or successful completion of medical studies. While these studies are sound research, they do not portray the entire picture for medical school matriculants. In a study of 1,135 medical school graduates, Gough 15 found that science-based and non-science-based students had no significant difference in performance in the third and fourth years of medical school (the clinical years), and no significant difference in successful completion of medical studies. Other researchers have supported these findings, including Smith, 16 Koenig, 17 Schaad, 18 and Herman and Veloski. 19 Each performed studies comparing success in medical studies correlated with undergraduate major, science grade point average, and admission scores. None found that a non-science undergraduate degree or a low grade point average in undergraduate science courses was a significant risk factor for a student not completing medical studies. In addition, Huff and Fang 20 found that incidence of academic difficulty in medical school occurred no more often in non-science undergraduate majors.

Methods
Following institutional review board approval at Ohio University, the July 2000 list of licensed physicians in Ohio was used to identify potential subjects for this study. Surveys were mailed to each survey participant via United States Postal Service bulk mail. Each survey included a self-addressed, postage-paid envelope for returns. The survey instrument was tested for validity by consultation with members of the Ohio University College of Osteopathic Medicine. Basic skill sets were determined using alumni questionnaires developed at three Ohio universities.
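As a concrete illustration of the internal-consistency measures used in the pilot testing described next, a minimal sketch follows. It assumes item responses are coded numerically in a respondents-by-items array; the split-half variant shown uses a Spearman-Brown corrected odd/even split, which may differ in detail from the procedure actually used in the study.

import numpy as np

def cronbach_alpha(items):
    """items: (n_respondents, n_items) array of numeric Likert responses."""
    k = items.shape[1]
    item_var = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1.0 - item_var / total_var)

def split_half(items):
    """Spearman-Brown corrected correlation between odd- and even-item halves."""
    odd = items[:, 0::2].sum(axis=1)
    even = items[:, 1::2].sum(axis=1)
    r = np.corrcoef(odd, even)[0, 1]
    return 2.0 * r / (1.0 + r)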
The survey consisted of four major sections: demographic data; a query regarding medicine as an art or a science on a nine-point Likert scale; and 24 questions regarding the skills and subjects necessary for success as a medical student and as a practicing physician, each rated on a five-point Likert scale. Reliability tests were conducted on the pilot test returns (n=77). Reliability was measured for all items and for constructs of the skill items. Alpha levels and split-half reliability tests were run as reliability measures. When all 48 subject and skill areas were included, reliability for the survey instrument was measured at an alpha of 0.9219, with a split-half coefficient of 0.8612 for part one and 0.8786 for part two. Data were analyzed using the Statistical Package for the Social Sciences (v 10.01, 2000).

Results
Demographics of the study respondents indicated that the population was relatively similar to the population of physicians as a whole in the state of Ohio and nationally. 24 Survey respondents were described as:
1) 71.5% male, 23.7% female, 4.8% no response;
2) 84.3% allopathic physicians, 15.7% osteopathic physicians;
3) 49% primary care, 51% specialty care;
4) average years since high school = 32.85 (average age = 50.85);
5) average years since medical school = 24.2;
6) average years since last residency = 19.27;
7) 76.36% of all time spent in clinical practice;
8) 72.4% holding a science undergraduate degree;
9) 14.4% holding a masters or doctoral degree.
Regarding the question of physicians' perceptions of medicine as an art or a science, the mean response on the nine-point Likert-type scale was 4.684 (sd=1.4744, variance=2.1739). Terms in this query were purposefully left undefined to gain insight into basic respondent beliefs. This response may indicate that equal importance should likely be placed on training future physicians in scientific and technical subjects and skills, as well as social, personal, and communications subjects and skills. Study results suggest that the natural sciences, a traditional emphasis for premedical studies, may not be solely indicative of success in practice. Physicians identified communications, natural sciences, and technology as the three subject areas most important for success in medical school. However, natural sciences were perceived as less important for success as a practicing physician. The three subjects deemed essential to the successful practice of medicine were business, communications, and technology. This follows the trends in additional coursework currently being included in the medical curriculum and reinforces the utility of broadening the undergraduate premedical experience. The sciences may also be viewed as lower in importance here because of the strong subject-area knowledge physicians hold upon completion of medical training. The subject areas of communications and technology were perceived as important for success in both the medical school and practice environments. The identification of these two subject areas is significant, as many entering medical students may be lacking education in these areas, regardless of premedical major. With the emphasis placed on these subjects by practicing physicians, it may be appropriate for these courses to be required as prerequisites to medical study, in addition to advanced study in these areas during the medical training experience. The skills deemed important for success in preparation for medical school and for success in practice, as well as the subject areas regarded as critical, share many commonalities.
The skill that ranked first for both training and practice was the ability to utilize computers/information technology. Skills ranked in the top five positions for both medical school and practice were 1) being honest and truthful, 2) the ability to explore, self-educate, and research ideas, and 3) the ability to communicate orally. Sensitivity to others was also included in the top five skills for success in medical school, while coping with complex moral and ethical issues was included in the top five for practice. In general, these skills share much in common with the general subject areas judged necessary, and they include skills inherent in a liberal, interdisciplinary undergraduate experience. ANOVA and Tukey's post hoc tests were also performed to identify any significant differences between groups based on age and years of practice. For subjects and skills identified as important for medical school, the natural sciences were perceived to become less important with greater age and experience, while the education subject showed the opposite pattern. Business was perceived to be more important by younger respondents. Writing well was perceived as a particularly important skill among the respondents with fewer years of practice experience, while using computer technology, coping with moral/ethical issues, sensitivity to others, and participating in the community were perceived as higher in importance by mid-career practitioners. Higher scores on sensitivity to others were also correlated with increased age. Differences noted regarding success as a practicing physician indicated that business was perceived to have increasing importance as years of practice increased, while education was perceived as more important by mid-career and mid-age physicians. Allied health was perceived as most important by mid-career practitioners. Regarding skills necessary for success in practice, respondents indicated that thinking analytically and sensitivity to others were deemed less important by mid-career practitioners than by early- or late-career practitioners, while participating in the community was deemed more important as respondents aged.

Discussion
As the medical school curriculum continues to be filled with an ever-increasing wealth of medical knowledge, it is appropriate that the skills necessary for success in medical school and the practice of medicine be included in the undergraduate premedical curriculum. Students and faculty of medical schools have little time for exploring non-medical subjects in the depth and breadth necessary for subject mastery. Therefore, defining premedical studies to meet the demands of the medical environment with sufficient scientific content, as well as providing a non-clinical premedical curriculum that will engender critical traits, serves both the student and the profession. It is clear that the skills necessary for success as a practicing physician and as a medical student differ in some aspects.
While the required undergraduate premedical courses in the natural sciences are certainly not contraindicated by this study, the general trend of the science major, with its associated strict curricula and limited time for liberal studies, is not supported, nor is any major that does not afford the student the opportunity to gain an interdisciplinary undergraduate experience. The identified skills share much in common with the general idea of a well-rounded liberal education. This indicates that acquired knowledge and skills are vastly more important than any particular area of expertise or knowledge. The importance of business, communications, and technology became very clear and was supported by responses in both the subject areas and the skills deemed important for success. In addition, coping with ethical and moral issues; the ability to explore, self-educate, and research; the ability to evaluate options and think analytically; and the ability to communicate orally were considered of paramount importance for success in practice. These areas and skills are certainly appropriate for inclusion in the undergraduate curriculum and may be consistent with some interdisciplinary degree programs. Further study identifying specific skills and courses deemed essential for the practice of medicine should be performed, and subsequent changes to preferred or recommended undergraduate premedical coursework should be developed. In addition, when the subjects and skills should be taught must be analyzed to determine proper placement in the continuum of medical education. Not all courses and subjects identified need necessarily be placed in the premedical environment; some may belong in the graduate medical environment. It is inherently important for the success of future students, as well as for the quality of future physicians, that every opportunity is taken to choose medical school matriculants with the skills and subject knowledge that will help to assure their success in training and practice. This study has identified broad skills and subjects that will assist in providing premedical advisors and medical school admissions committees with insight into choosing and preparing matriculants for the current medical training and practice environments.
Effective Assessment of Workplace Problem-Solving in Higher Education

INTRODUCTION

Even prior to the start of the 21st century, the terms 21st century skills, soft skills, and professional skills had long been buzzwords for governments, employers, and academics (Accreditation Board for Engineering and Technology [ABET], n.d.). Though in no way a true divider, the advent of the 21st century was used as a way to promote the need for meaningful change in education, and particularly tertiary education in the sciences, engineering and computing. In combination with the expansive growth of the Internet and the availability of information, access to data and information had been transformed so that access to knowledge was no longer the issue. The ability to interpret information, work effectively in teams, communicate ideas, and solve complex problems was becoming more of the challenge. If these challenges are to be met, learning outcomes pertaining to the 21st century skills need to be integrated into the curriculum. Worldwide, in fields such as computing and engineering, a historical curricular emphasis on theory, technical skills, and knowledge production rather than these more applied 21st century skills has left the fields open to criticism from employers (Ellis & Petersen, 2011; Farr & Brazil, 2010; Stawiski, Germuth, Yarborough, Alford, & Parrish, 2017). Specific to the Middle East, employers have found that engineering graduates are weak in 21st century skills (Batiyeh & Naja, 2010). Because of these issues, not only do 21st century skills need to be integrated into the curriculum, they need to be assessed regularly.

This paper aims to assess computing students' proficiency in one of the key 21st century skills, problem-solving. This is accomplished through the implementation of the Computing Professional Skills Assessment (CPSA), an assessment instrument that uses a scenario-based asynchronous discussion board to assess student groups' ability to problem-solve (Danaher, Schoepp, Rhodes, & Ater Kranov, 2019). The ability to solve problems has been rated as having top importance and as a core activity within the engineering field (Passow & Passow, 2017). These problems are workplace problems, not word problems with a single answer. They are ill-structured, complex, open-ended, collaborative, have multiple solutions, and may have conflicting goals (Jonassen, Strobel, & Lee, 2006). The ability to solve such problems is key to successful employment and to being able to contribute in a meaningful manner to a knowledge society.

The remainder of this article provides the overall background and a description of the instrument and method used to assess problem-solving, followed by a discussion of the findings. Results show that for the three problem-solving criteria (problem identification, recommendations for solutions, and stakeholder perspective), students often failed to meet the target level of performance, even though there was a general increase in performance from the 2nd year through the 3rd year, 4th year, and master's levels. All of this points to the need for more robust integration of ill-structured workplace problem-solving throughout the computing curriculum.
RESEARCH QUESTIONS

The importance of 21st century skills, especially the ability to solve ill-structured, complex, and open-ended problems within the fields of engineering and computing, is paramount to academic and workplace success. Because of this, an overarching research question, along with a set of sub-questions pertaining to the amount, types, and sophistication of problem-solving, was devised.

1. What are the abilities of students to solve ill-structured, complex, and open-ended problems within the computing program?
1.1. What is the prevalence of problem-solving within the discussions?
1.2. How does problem-solving manifest itself throughout the discussions?
1.3. Are there differences in the way problem-solving is manifested based on students' year of study?

LITERATURE REVIEW

Research into computing and engineering student problem-solving consistently brings forth two major themes. The first theme is that the ability to solve ill-structured, complex, workplace-driven problems is essential to employment. The second theme is that curricular modifications are needed if students are going to meet learning outcomes pertaining to problem-solving. Jonassen, Strobel, and Lee (2006) noted that learning to solve well-framed problems in the classroom does not automatically lead to graduates being able to solve the complex, multidimensional types of problems they will encounter in the workplace. Because of this, real-world problems need to be integrated into curricular experiences to prepare graduates for 21st century employment.

The ability to solve problems in the workplace has always been recognized as imperative to workplace success, especially for computing and engineering graduates, for whom problem-solving is often a main responsibility. In fact, researchers have recently stated that "problem solving is the core of engineering practice" (Passow & Passow, 2017, p. 475), and others have noted previously that practicing engineers are hired and retained for their ability to solve problems (Jonassen et al., 2006). Passow and Passow's (2017) review of what engineering programs should emphasize found that an engineer's ability to solve problems was the most important skill and core engineering practice. Regarding time usage, Robinson (2012) discovered that practicing engineers spent nearly 39% of their time understanding information and problem-solving, which were by far their most dominant activities. In essence, engineers are seen as problem-solvers and engineering as a method of solving problems (Korte, Sheppard, & Jordan, 2008).
The issue that arises is that while employers view engineering graduates as bright and technically sound, they also view them as weak in 21st century skills such as teamwork, leadership, critical thinking and, most importantly for our purposes, problem-solving (Ellis & Petersen, 2011). Part of this issue seems to occur because of the misalignment between the types of problems faced in educational programs and the types faced in the workplace. Many of the problems faced by engineering students lack the complexity, ambiguity, and contextualization that make workplace problems so challenging. Most workplace problems also require extensive teamwork, in which different knowledge and skills are distributed amongst team members, and can be solved in numerous ways, with project success rarely measured by engineering standards alone (Jonassen et al., 2006). The problems most often faced by engineering students when they are in school are end-of-chapter textbook problems designed to assess knowledge of important concepts that follow a systematic path of reasoning (Douglas, Koro-Ljungberg, McNeill, Malcolm, & Therriault, 2012; McNeill, Douglas, Koro-Ljungberg, Therriault, & Krause, 2016; Shaw, 2001). Hence, it is important to ensure that the curriculum embeds workplace problem-solving, because learning to solve well-structured [classroom] problems does not necessarily transfer to solving ill-structured workplace problems (Jonassen et al., 2006).

Though there are certainly schools and programs that embed ill-structured, workplace problem-solving into the curricular experiences of students, this remains an area in need of improvement in engineering and computing education. In response to this, it is generally agreed that "if we hope to educate a workforce and citizenry who will be equipped to thrive in an increasingly complex and interdependent world, we need to incorporate twenty-first-century skills into a wide range of educational curricula" (Stawiski et al., 2017, p. 336). As early as ABET's Engineering Criteria 2000 document, the need for 21st century skills, including problem-solving, has been used as the impetus for curricular change (ABET, 1997). McNeill et al. (2016) have demonstrated that because students have difficulty solving ill-structured, complex, open-ended problems, students need to engage with these types of problems early and throughout their coursework, so that they gain experience dealing with constraints, ambiguity, and numerous possible solutions. Morin, Thomas, and Saadé (2015) also believe that these types of problems should be included when working in an online environment, because this is the future of much collaboration. Currently, many programs seem to include more open-ended problem-solving experiences in their first and final years of study, but this needs to be changed so that students practice these skills throughout their years of study (Douglas et al., 2012).

Besides incorporating this type of problem-solving throughout the curriculum, it should also be collaborative, because that is the type of problem-solving in which working professionals engage (Jonassen et al., 2006; Zou & Mickleborough, 2015). This may mean shifting to more of a problem-based curriculum that has meaningful collaboration, including evaluations, embedded into it (Jonassen et al., 2006). In fact, during such a course redesign Stawiski et al.
(2017) found that students reported more improvement in "problem-solving suggesting creative and innovative solutions to help solve project challenges" (p. 344). Beyond the classroom, internships have been shown to be effective mechanisms to promote numerous skills and competencies, including problem-solving. Through a large-scale survey and a set of targeted interviews, Strayhorn and Johnson (2016) asserted that there is "persuasive evidence supporting the conclusion that engineering majors engagement in internships and co-ops produce significant learning gains in terms of problem-solving, communication, and learning more about work" (p. 10). In support of this, Floyd, Johnson, and Rabb (2017) have found that students recognize the importance of internships in enhancing problem-solving skills. During a summer internship program with 2nd and 3rd year engineering students, Floyd et al. found that problem-solving was the skill students felt was most developed through their summer experience. If curricular modifications, whether inside or outside of the classroom, are made, the disconnect between engineering and computing education and the focus on technical skills will be minimized, so that students are better prepared to solve workplace problems.

METHOD

In order to collect student data pertaining to problem-solving, this study utilized the Computing Professional Skills Assessment (CPSA). The CPSA is an assessment tool that has continually been improved over the past six years (Danaher, Schoepp, Ater Kranov, & Wallace, 2018) and has been used with both undergraduate and graduate students (Danaher, Schoepp, & Ater Kranov, 2017). The CPSA is an assessment method able to assess all six of ABET's Computing Accreditation Commission's (CAC) professional skills learning outcomes, which are problem-solving, teamwork, ethical, legal and security aspects, communication, impacts of computing, and continual learning. The CPSA learning outcomes are worded slightly differently from those of the CAC in order to be better aligned with the CPSA method. Table 1 shows the alignment between the CPSA and the ABET CAC as they pertain to problem-solving; the relevant CAC outcome is "(b) An ability to analyze a problem, and identify and define the computing requirements appropriate to its solution."

For the CPSA, the learning outcome of problem-solving has been simplified slightly, but the CPSA includes an expanded definition that is used to guide the criteria for the rubric (see Table 2). The criteria are 1) problem identification, 2) recommendations for solutions, and 3) stakeholder perspective. While problem identification and recommendations for solutions are obvious criteria for the skill of problem-solving, stakeholder perspective is also important because a focus on it forces participants to examine alternate perspectives, which is frequently important to the development of meaningful solutions. The rubric has six levels of performance and is scored from 0 to 5. The six levels are 0-Missing, 1-Emerging, 2-Developing, 3-Practicing, 4-Maturing, and 5-Mastering. Levels 1 and 2, and levels 3 and 4, share the same descriptors, as they are seen as closely related.

Table 2: The problem-solving skill in the CPSA rubric. Definition: Students define and differentiate between the problems raised in the scenario with reasonable accuracy. Students recommend potential non-technical and technical solutions from a computing perspective. Students identify relevant stakeholders and explain their perspectives.

Problem identification (ascending levels): Students do not identify the problems in the scenario (0-Missing). Students begin to define the problems; attempts to define the problems may be general, narrow, and/or inaccurate. Students define the problems with reasonable accuracy and differentiate between them with limited justification.

Recommendations for solutions (ascending levels): Students do not make any recommendations for potential solutions (0-Missing). Students may recommend potential solutions that do not fit the identified problems, or may make recommendations for potential solutions without identifying the problems first.

Stakeholder perspective (ascending levels): Students do not identify stakeholders (0-Missing). Students begin to identify stakeholders and their perspectives. Students explain the perspectives of major relevant stakeholders and convey these with reasonable accuracy. Students thoughtfully consider perspectives of diverse relevant stakeholders and articulate these with clarity and accuracy.
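The rubric is, in effect, a small criteria-by-levels data structure. The following R sketch, which is not part of the original paper, shows one hypothetical way a rater's scores could be encoded and banded; the function names and example ratings are invented for illustration.

```r
# Hypothetical encoding of the rubric: three criteria crossed with the 0-5
# scale, where levels 1-2 and 3-4 share descriptor bands.
criteria <- c("Problem identification",
              "Recommendations for solutions",
              "Stakeholder perspective")
band <- function(level) {
  ifelse(level == 0, "Missing",
         ifelse(level <= 2, "Emerging/Developing",
                ifelse(level <= 4, "Practicing/Maturing", "Mastering")))
}
# A rater records one 0-5 rating per criterion for a group's transcript:
score_group <- function(ratings) {
  stopifnot(identical(sort(names(ratings)), sort(criteria)),
            all(ratings %in% 0:5))
  data.frame(criterion = names(ratings),
             rating    = unname(ratings),
             band      = band(unname(ratings)))
}
score_group(c("Problem identification"        = 3,
              "Recommendations for solutions" = 2,
              "Stakeholder perspective"       = 1))
```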
The CPSA is implemented through the use of an asynchronous online discussion board and is comprised of 1) a short computing-related scenario (there is a pool of equitable and similar-in-structure scenarios, as different scenario topics are better aligned with specific courses), 2) a standard set of instructions and guiding questions, and 3) an analytic rubric with sections for problem-solving, teamwork, ethical, legal and security aspects, communication, impacts of computing, and continual learning. The procedure for the CPSA is that small groups of approximately 4-5 students working online read a 1.5-page scenario related to computing in which an ill-defined, real-world problem with no exact answer is addressed. Guided by the set of prompts and guiding questions, students discuss the scenario for 12 days and attempt to develop a reasonable solution to the problem posed. When the discussion ends, the discussion transcripts are evaluated according to the criteria presented within the CPSA analytic rubric by a team of trained faculty. In order to increase the students' familiarity with the discussion board and with the CPSA itself, prior to having a discussion formally assessed by faculty, students do a practice discussion, upon completion of which the strengths, weaknesses, and best practices of the discussion board transcripts are reviewed with their instructor.

The theoretical underpinning for the CPSA method comes from Vygotsky's sociocultural theory (Vygotsky, 1978) and the Communities of Inquiry model (The Community of Inquiry, n.d.). The former states that social interaction is essential to learning and that it is in the zone of proximal development where learners can interact with peers to advance learning. The latter model, designed specifically for asynchronous online discussion boards, includes both cognitive and social presence. Cognitive presence represents socially constructed knowledge developed through continuous communication, while social presence represents the open and honest communication required to facilitate the development of cognitive presence.

SAMPLE

Following approval from the institution's Research Ethics Committee, online discussion transcripts from courses appropriate for CPSA utilization, and where students had given consent to participate, were collected from the institution's learning management system. These transcripts were then collated and anonymized to ensure that student identities remained confidential. At the time of this study, computing students from the 2nd, 3rd, and 4th year and master's levels had agreed to participate in this research, and a number of faculty had agreed to utilize the CPSA in their courses. The process of sample collection was first to randomly select one participating course from each year of study and then to randomly choose one group's discussion transcript from each of those courses. Each set of discussion transcripts represents a single student group of 4-5 students, for a total sample size of 19 students.
The student population from which the sample was taken is highly homogenous in that all of the students are Emirati nationals, most are first-generation tertiary students of traditional post-secondary age, Arabic is the native language, English is a foreign language, and at the undergraduate level students study in a gender-segregated environment. Through the process of randomization, the selected undergraduate sample were all female Emiratis ages 18-24, while the master's students were a mix of male and female Emiratis ages 24-35.

ANALYSIS

For the initial phase of data analysis, general data concerning the number of posts, total word count, and the mean length of posts were calculated. For the main phase of data analysis, the discussion posts were analyzed using the framework provided by the CPSA rubric. Because online discussions offer ready-made transcripts, a form of transcript analysis was used to analyze the texts (Garrison, Cleveland-Innes, Koole, & Kappelman, 2006). Breen (2015) describes transcript analysis as a way to make valid and reliable interpretations from texts in their unique contexts. In this instance, the context was that groups of computing students from a face-to-face environment were participating in an online discussion where they were expected to begin to solve a problem and propose workable solutions as part of a team.

The ratings process itself was iterative in nature and began with an initial reading and re-reading of all of the discussion posts. Posts that contained aspects of problem identification, recommendations for solutions, or stakeholder perspective were identified and labelled. These posts were then reread, and the pertinent aspects were color-coded according to the criteria represented. In the next phase, the entire group of posts for a specific criterion, for example, problem identification, was re-read and given an initial rating of 0-5 using the pertinent descriptors from the rubric. These rated posts were then re-examined, and any of the initial ratings that seemed incorrect were adjusted. When completed, all data for each year of study were tabulated.

Some of the posts have been included as examples within the results section in order to strengthen the findings by utilizing student voice. In using the student posts as examples, any grammatical or spelling errors have been corrected to ease readability, while at the same time ensuring that the meaning has not been altered. Because a variety of scenarios were used in different classes, there is a selection of topics on display as part of the student voice, specifically illegal downloading, encryption, and privacy on social media.
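The tabulation step described above (labelled posts rolled up by cohort, criterion, and rating) can be illustrated with a small R sketch; the toy data below are invented and do not reproduce the study's actual counts.

```r
# Invented toy data: each labelled post carries cohort, criterion, and its
# 0-5 rubric rating, as produced by the iterative rating process above.
posts <- data.frame(
  cohort    = c("Y2", "Y2", "Y3", "Y3", "Y4", "MSc", "MSc", "MSc"),
  criterion = c("problem", "solution", "problem", "stakeholder",
                "problem", "problem", "solution", "stakeholder"),
  rating    = c(1, 1, 2, 1, 3, 3, 4, 5)
)
# Instances per cohort and criterion (cf. Figure 1):
table(posts$cohort, posts$criterion)
# Distribution of ratings per cohort for one criterion (cf. Tables 4-6):
with(subset(posts, criterion == "problem"), table(cohort, rating))
```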
RESULTS

Initial results are shown by year of study and include general numerical data about the posts and overall instances of problems, solutions, and stakeholders, and then conclude with specific instances and ratings of problems, solutions, and stakeholders. These data help answer research question 1.1 (What is the prevalence of problem-solving within the discussions?). The data that follow assist in answering research questions 1.2 (How does problem-solving manifest itself throughout the discussions?) and 1.3 (Are there differences in the way problem-solving is manifested based on students' year of study?). Examined in their entirety, the data offer a robust representation of problem-solving as it emerges within the CPSA across a range of years of study and address the overarching research question 1 (What are the abilities of students to solve ill-structured, complex, and open-ended problems within the computing program?).

General data about the discussion posts are presented within Table 3. Though group sizes were similar, there were large differences in the number of posts, total word count, and the mean length of posts. For the number of posts, both 2nd year and master's students had at least 33 independent posts, while 3rd year and 4th year had only 21 and 23, respectively. Total word count followed a similar pattern, with master's students having written over 7,000 words and 2nd year students more than 5,500 words, while 3rd year students had only 2,646 words and 4th year 4,714 words. For the mean length of each post, master's students were at 214 words, 4th year at 205, and 2nd year at 165. The one anomaly was that the 3rd year students averaged a post length of only 126 words, far less than any of the others.

The next set of results illuminates the degree to which each group of students wrote about problems, solutions or stakeholders, as these are the criteria that encompass the problem-solving component of the CPSA rubric. Each individual discussion post was analyzed for these criteria and labeled accordingly. Of course, it is possible for one post to contain more than one criterion, as it is quite natural for a student to write about both problems and solutions in a single posting, or to describe how a problem might impact a specific stakeholder, for example. Figure 1 presents these data as simple counts. Perhaps the most obvious count is that the 2nd year students had 28 instances of a post discussing the problems, the highest number recorded across any of the criteria by year of study. Another data set of interest emerged from the 4th year students in that they had by far the fewest total number of posts referring to the three criteria, with only 23 instances in total.

Stakeholders, an important aspect of problem-solving for viewing the problem from multiple perspectives, were an area where all student groups except the master's students recorded few instances. Master's students discussed stakeholders and their perspectives 13 times, while the other three groups combined discussed them only 18 times. The end product of effective problem-solving must be solutions, and with solutions it was again the master's students with the most posts discussing solutions, at 25.
Third year students were next with 23 posts, then 2nd year at 16 posts, and finally the 4th year group with only 6 solutions discussed. While instances help answer the first research question about the prevalence of problem-solving in the discussions, prevalence is not an indication of the quality of the discussions. The quality construct emerges through the upcoming instances-and-ratings tables and the qualitative analysis.

Figure 1: Instances of problems, solutions, and stakeholders

Taking the instances data and further breaking it down according to the actual ratings of each post is essential to identify the quality of the posts. This also allows us to illuminate differences in the quality of posts between years of study. Overall, whether analyzing the constructs of problem, solution, or stakeholder, there was a trend towards the more senior students achieving higher ratings for their posts. Given that the CPSA rubric has been designed to roughly align with year of study (that is, the target for 1st year students is a CPSA rating of 1-Emerging, the target for 2nd year students is a CPSA rating of 2-Developing, and so forth), the ratings appear to support this alignment. Results are first presented for problem identification, then recommendations for solutions, and finally stakeholder perspective. Each of these criteria is then described from the 2nd year to the master's level. As evidence for the ratings given, examples of student posts are included throughout this section.

For Table 4, problem identification, 2nd year students failed to achieve even a single rating at the desired score of 2. In fact, on 5 occasions they were rated 0-Missing because they were completely off topic. One student began to discuss issues surrounding the security of information networks, an unrelated topic, while a number of other students contributed to this discussion thread without attempting to get the discussion back on track. For example, one student wrote the following about problems related to network security:

Security is important for home networks as well as in the business world. Most homes with high-speed internet connections have one or more wireless routers, which could be exploited if not properly secured. According to Georgetown University, the risks that threaten the security of information networks are technology with weak security such as passwords, third-party entry and lack of encryption.

While this post and some of the others were well-crafted and posed problems, they did not discuss the topic being examined and were scored accordingly. The remaining 23 posts were rated as 1-Emerging. Also for problem identification, seven of the posts of 3rd year students were rated a 1 and 9 were scored a 2. An example of a 1 from this group is:

The music and movie companies suffer from piracy because they lose sales and face increases in intellectual property protection costs. Moreover, it affects the government in terms of lost tax revenue.
Though the students identified the associated financial implications of online piracy, this was done in a haphazard manner with no additional evidence or details provided to support what they had written. For an example of a post rated as a 2, a student shared:

Illegal downloading is an issue that is not taken very seriously, probably because millions of people do it, and they get away with it. The primary issue in the article is illegal downloading, and the secondary issue is people not getting punished for their crimes. The problem isn't awareness, because in my opinion, all online users illegally downloading music or movies are aware that this is illegal and that they are stealing.

Through this post the student was able to present a more nuanced understanding of the issue because they recognized that downloaders know what they are doing is wrong but do not seem to care. The student clearly understands some of the problem, but they do not add any additional evidence as support or begin to delve into the other complexities that exist. Fourth year students had a single post rated 1, two rated 2, five scored 3-Practicing, and two at 4-Maturing for problem identification. Posts rated as a 3 or 4 in problem identification are described in the CPSA rubric as follows: students define primary and secondary problems with reasonable accuracy and with justification. An example of a post rated 3 is:

According to the article provided, the primary problem is the type of encryption used in some mobile apps like WhatsApp that is unbreakable. It makes it hard for the government to access data in any emergency that requires hacking. The secondary problem is that PKC (Public Key Cryptography) has some disadvantages regarding privacy. As mentioned in the article, some countries spy on their citizens for any terrorism-related actions.

In this post the student is identifying both primary and secondary problems and begins to explain why these are problems. The post begins to get at the complexities and trade-offs between ensuring privacy and maintaining security, especially as it pertains to terrorism. Two posts were rated as 4; this post is an exemplar:

I think that the primary issue that was discussed in the article was whether or not governments had the right to spy on their citizens. It mentioned that some countries, like Japan and the Netherlands, support strong encryption and give their citizens privacy of communication, while others, such as Turkey and Pakistan, have strict laws against that. There is a huge, globally scaled debate about this topic, with people either siding with it being acceptable or unacceptable. According to a poll conducted by… The second issue at hand here is that people in general think it okay for governments to monitor suspected terrorists, and anything that might cause a breach in national security. …But they will not accept monitoring of themselves.
Master's students had a range of scores from 1 to 5-Mastering, and though the majority of scores were rated as 2 (6 times) or 3 (9 times), they were the only group to achieve 5s for problem identification. The descriptor for a post to be scored a 5 is: students convincingly and accurately define the primary and secondary problems, providing justification. An exemplar of a 5 is:

The primary cause of the problem is that people do not want to pay for content. Most individuals who opt to download music, software or films illegally want the content for free, and whenever an opportunity presents itself, they take it. Torrent websites and other sites for illegally downloading files are fuelled by these types of people, causing massive financial rip-offs to the content creators. Secondly, these people may not be in a position to purchase the files they want, and downloading illegally might be their only option. For example, the music subscription platform iTunes requires quite a substantial monetary commitment. Additionally, software like the Windows operating system or Adobe Photoshop is very expensive. For individuals who defend copyright, the argument may be that if you cannot afford it, leave it alone. However, for as long as people want to access content that they need and it is unaffordable, they will prefer to obtain it illegally if they can (Aguiar & Martens, 2016). Thirdly, many times music, films and software are not legally available in some regions or countries. Content creators limit access for certain reasons, mostly economic, and the populations there are deprived of legal access.

In this post, the student shared three causes of the problem and provided a justification that others did not when they recognized the high cost and lack of availability that may push people towards illegal downloading. In addition, to strengthen their argument, they cited some supporting work.

Table 5 offers the instances and ratings for each of the student groups for the quality of solutions provided. Second year students discussed solutions numerous times, but the vast majority (12 times) of their discussions were rated as a 1; a rating of 1 means that potential solutions may be general or naïve. Four other posts were scored as a 2 or 3. Two examples of posts rated as a 1 are:

A solution to this problem is to monitor the teens' social media activity.

I recommend social media addicts limit their daily usage of social media and get a new hobby.

While these are certainly solutions, they are quite general and naïve in that the solutions sound simple but would be terribly difficult to implement or put into action. Monitoring a teen's use of social media would not be easy for parents, and being an addict means one is stuck in an addiction cycle that is difficult to break. Third year students did not demonstrate much more proficiency in recommendations for solutions than the 2nd year students. In fact, they had no posting rated as a 3, but they were rated a 2 on six separate occasions. Though a 2 is again defined as general or naïve, these posts are superior in their sophistication:

The entertainment industry can consider lowering their prices since their competition (the Internet) is offering the same product for free, even if it is illegal. Blocking access to illegal file-sharing websites is also another way, but it won't stop new file-sharing websites from popping up.
These are more sophisticated posts in that the solutions are ones that have actually been implemented, but the complexity of lowering prices, for example, remains quite general. How, in what ways, and to what level prices would be lowered to combat pirating are just some of the questions that arise. Though 4th year students only discussed six solutions, four of the six were rated as a 3 or 4, which is more in line with what would be expected of senior students. For a rating of 3, students are expected to offer evidence that they have begun to formulate potential solutions from a computing perspective. In discussing the topic of encryption, a student mentioned the idea of creating a backdoor into these encrypted applications that only a few could access, with a court order and in serious matters. The student demonstrated an understanding of the serious nature of back doors for encrypted applications but still felt they are essential in important matters. Finally, for solutions, it was the master's students who provided the most advanced solutions, in that 19 out of 25 were rated as a 3 or 4. Examples of some of the 4s are:

The facts about Internet piracy should be included in the school curriculum; that will give the next generation solid piracy awareness, and it also will make sure that they will be ready to make logical and conversant decisions about electronic theft. Education will emphasize the consequences of copyright infringement to the next generation, but parents also should participate in educating their children about the risks of Internet piracy before teaching them how to use a computer (Solutions for Digital Piracy, 2007).

Therefore, the awareness should start from the educational sectors, committing as part of their duty to plant the concept of copyright and its importance. This could be done through several methods like seminars, programs, awareness emails, and sessions. Additionally, universities should send warning emails to those detected of illegal downloading and set penalties for them.
These responses go far beyond less advanced posts where the solutions were often nothing more than "raise awareness". Questions of to whom the awareness raising should be targeted, or what its focus should be, were rarely addressed. Unfortunately, no students, master's students included, had solutions rated as a 5, in which they would suggest detailed and viable potential solutions from a computing perspective. While possibly viable, the two examples of a 4 could not be described as detailed.

Stakeholder perspective, presented in Table 6, was an area where, again, as students progressed through the program many of their responses were rated higher than those of the previous year of study, and master's students showed a much more mature understanding of stakeholder perspective. Beginning with the 2nd year students, there is a clear lack of awareness where stakeholders are concerned. In fact, two of the three posts about stakeholders were rated as a 0 because the students did not identify stakeholders. In the example post scored a 0 below, the student has simply copied a paragraph about stakeholders that is unrelated to the scenario under discussion:

Some examples of key stakeholders are creditors, directors, employees, government (and its agencies), owners (shareholders), suppliers, unions, and the community from which the business draws its resources. Not all stakeholders are equal. A company's customers are entitled to fair trading practices but they are not entitled to the same consideration as the company's employees. An example of a negative impact on stakeholders is when a company needs to cut costs and plans a round of layoffs.

Third year students had 5 posts rated a 1 and another 3 rated a 2 for stakeholder perspective. A score of 1 is described as students beginning to identify stakeholders and their perspectives. Unlike more highly rated posts, these posts lack depth even though they demonstrate knowledge of some obvious stakeholders. For example:

Another stakeholder for piracy issues is the singers and the actors, because they will lose big amounts of money due to the drop-off in music and movie sales, and they may lose their jobs also.

Of the 3 posts rated a 2, still below the target for 3rd year students, a student shared two clear stakeholders and was able to provide more than one explanation as to how a stakeholder is impacted. However, they were not able to provide much detail:

In my opinion, the stakeholders of music and movie piracy are the companies behind these music and movies and the government of the country. …The music and movie companies suffer from piracy because they lose sales and because of rising intellectual property protection costs. Moreover, it affects the government in terms of lost tax revenue.

Students in the final year of the undergraduate program achieved two posts rated 3 and another five scored a 2, again below their target of a 4.
To be rated a 3, students need to explain the perspectives of major relevant stakeholders and convey these with reasonable accuracy. An exemplar of a 3 from the 4th year students is:

The major stakeholders are the government, but I would like to add that the users and the companies are also stakeholders in this case. The difference between the three stakeholders is the level of understanding of how encryption works and why to use it. The companies are trying to satisfy the users' needs. In this case, the users are supporting the idea because they want to keep their own privacy safe, while the government has argued against this so they can investigate and predict any terrorist actions. The companies are trying to maintain the users' private life, but the government still has some other ways to gain access and keep track of any suspicious action.

This post has a few stakeholders and accurately conveys some of their perspectives. Master's students were the only students to be rated a 4 or 5, and so were the only cohort to achieve their target, which was a 5. To be rated as a 4, students need to explain the perspectives of major relevant stakeholders and convey these with reasonable accuracy, but to a more sophisticated degree than for a score of 3. With seven posts having been rated a 4, there were many examples to choose from. One of the exemplars is:

Governments are major stakeholders in piracy. This is because they have the obligation to protect people's work and efforts. As my colleagues mentioned previously, piracy affects the industry, and by this the economy is affected. So far governments have placed policies and sanctions to stop piracy. This is considered not enough, as piracy is still growing every day. Governments cannot stop this because the Internet is a vast mass of communications, and once something is online it cannot be stopped.

Though this post described other stakeholders, for the government stakeholder the students demonstrated an obvious grasp of key elements as they relate to online piracy. To be rated a 5, students should thoughtfully consider perspectives of diverse relevant stakeholders and articulate these with clarity and accuracy, as is done in the example below:

In my opinion, the primary stakeholders are the artists, end users, and the hardware industry. Firstly, the artists, as I describe them, include all artists, singers, composers, songwriters, filmmakers, software developers, authors and publishers. Illegal downloads directly affect them financially, and it is therefore in their best interests to protect their intellectual property. These stakeholders view piracy as a significant financial barrier, which does not allow them to grow as content creators. For established stakeholders, piracy needs to be stemmed with strict copyright laws (Fetscherin, 2004). Secondly, end users are the interested parties in the industry, both individuals and organizations like schools and libraries. Individual consumers of digital content are against restrictions on content usage and access and perpetuate piracy, either knowingly or unknowingly. These users are against piracy laws and copyright regulation that paint them as criminals. Organizational consumers like schools and libraries are concerned with fair usage and privacy but are against excessive control, as it may affect their activity (Fetscherin, 2004).

This example was one of two that achieved the target of 5 and showed the sophisticated levels of understanding possessed by the graduate students.
DISCUSSION

This discussion is framed around the answers to the four research questions, as this provides an explicit narrative that targets the core elements of this study. After discussing the general prevalence of problem-solving within the discussions, the three criteria for problem-solving (problem identification, recommendations for solutions, and stakeholder perspective) are discussed according to student performance.

In terms of the number of posts there was no real trend, as second year students posted more than any other group, while in terms of the length of posts a more obvious trend appeared. Master's students had the longest posts, trailed closely by the 4th year students. Perhaps the more advanced learners had more to say when they posted, which points to more sophisticated and detailed postings, something that did emerge when the three problem-solving criteria were assessed. Further examination of the prevalence of problem-solving in the discussions showed that both stakeholder perspective and recommendations for solutions drew more posts from master's students than from any other student cohort. Especially with stakeholders, the master's students seemed to have far more to say than any of the others, producing 42% of all the discussion about stakeholders. Conceivably, because the master's students are working professionals, they have a richer understanding of stakeholders and of those impacted by the computing decisions they are involved in through the workplace. This could be an area where workplace experience is essential, so effective curricula need to get students into work environments (Floyd, Johnson, & Rabb, 2017; Strayhorn & Johnson, 2016).

The first criterion represented within the CPSA is problem identification. Of course, the ability to identify a problem is the initial step in being able to effectively solve a problem, especially when the problem is ill-structured, open-ended, and without an obvious answer. One consistent theme that emerged with problem identification is that, overall, students did not achieve the targets established in the CPSA. Accepting Passow and Passow's (2017) finding that problem-solving is an engineer's most important skill, this points to a serious weakness. Remembering that there is a rough alignment between year of study and rating on the rubric (5-Mastering for master's students, 4-Maturing for 4th year students, and so forth), while students at times reached the target, more often than not they fell short of their target ratings. With problem identification, only a few 4th year and master's students achieved their respective target ratings of 4 and 5. However, a pattern that did emerge is that the senior students consistently outperformed the more junior students even when the targets were not being met. Viewed holistically, it seems as though student skills in problem identification improve as they proceed through the program. Early in the program it is a skill with major deficiencies, but nearing graduation or in the graduate program, students begin to identify problems at a much higher rate. While it is certainly positive that improvement is occurring, the fact that targets are not being met suggests that curricular revision towards a more problem-based curriculum, as proposed by Jonassen, Strobel, and Lee (2006), should be considered.
Recommendations for solutions is the second criterion for problem-solving represented in the CPSA. This criterion is of the utmost importance because it is where students actually put forth solutions to the problem they have encountered in the scenario; researchers (Passow & Passow, 2017) have argued that problem-solving is the core skill for engineers, while others (Robinson, 2012) have noted that it is the skill in which they are most engaged. Similar to the problem identification criterion, recommendations for solutions was an area where most of the student groups did not meet the target. In fact, only the 2nd year and 4th year student groups had any ratings at or above their expected levels, with the 2nd year student group having the only rating above the target, a 3. In addition, investigating recommendations for solutions overall, there was a less obvious pattern of the more senior students putting forth more sophisticated solutions than the junior students. While the master's students did have the most advanced solutions, they also had numerous solutions well below expectations. Clearly, this is a skill that must be improved across the entire range of students and needs additional curricular interventions, because these types of problems, ones that are ill-structured, open-ended, and with no obvious answer, have been identified as key to workplace success (Jonassen et al., 2006; Passow & Passow, 2017). In addition, students have to work with these types of problems early and throughout their coursework, not just at specific points or at the end of their program, as often occurs (McNeill et al., 2016).

The final criterion used to describe the construct of problem-solving is stakeholder perspective, an important criterion since it provides a way to recognize and understand the perspectives of others. Viewing a problem through multiple lenses like this can only help one develop better solutions and become a better problem solver. As with problem identification, the pattern that emerged is that the senior students regularly outperformed the more junior students, even though the targets were mostly not being met. Moreover, this was the one criterion where the master's students were far superior to the other students. The master's students twice attained their target of 5 and were also the only cohort to achieve even a rating of 4. Again, while speculative, it may be that the work experience of the master's students means they have much more experience thinking about how a computing problem impacts stakeholders, because this is an authentic issue one faces in the workplace. If this is the case, a curriculum that promotes work experiences through methods such as internships seems essential (Floyd, Johnson, & Rabb, 2017; Strayhorn & Johnson, 2016). Not only have students recognized that internships improve their ability to solve problems, internships are also where they learn about work and, in turn, about the impact on stakeholders.
LIMITATIONS AND FUTURE RESEARCH

There are two major limitations that should be considered when interpreting the results of this study. The first is the use of an online asynchronous discussion board, and the second is the use of different scenarios amongst the student cohorts. With the discussion board, an issue may be that because the students lack familiarity with this medium in an academic setting, they are unable to perform to the best of their ability. However, to mitigate against this, students engaged in a practice discussion board and received instructor feedback a few weeks prior to the formal assessment component of the CPSA. The scenarios are another potential limitation because a selection of scenarios was used across the courses. Different scenarios are used because they are chosen to best align with the curriculum of a particular course. Nevertheless, all of the scenarios are written based upon a set of guidelines and then undergo a rigorous review process before they are implemented in courses. The purpose of this process is to limit, apart from the topic, any differences between scenarios.

With the current life cycle of the CPSA, the major area for future research has to do with the student population. Currently, research using the CPSA has only been conducted at a single institution with a fairly unique context. Though a pilot implementation has been done at an external organization, this has not led to formal research at this time. Hence, further research needs to be conducted at other institutions or organizations where further checks on instrument validity can be done.

CONCLUSION

Given the importance for the computing field of having working professionals who are able to effectively solve workplace problems that are ill-structured, complex, open-ended, collaborative, have multiple solutions, and may have conflicting goals, curricula that meet this need are essential. Having students practice these skills throughout the curriculum, not just in final-year experiences, is required if their education is to cultivate meaningful engagement in this 21st century skill. This paper described an instrument and method that uses an asynchronous online discussion board to assess these skills as students problem-solve in teams. Results showed that while students did increase their level of problem-solving from the 2nd year through the 3rd year, 4th year and master's levels, they generally failed to meet the desired level of performance. This supports the proposition that ill-structured problem-solving should be more thoroughly integrated into the computing curriculum in order to meet the demands of the 21st century workplace. In addition, the instrument was effective in assessing problem-solving.
Identification of key modules and driving genes in nonalcoholic fatty liver disease by weighted gene co-expression network analysis

Background: Nonalcoholic fatty liver disease (NAFLD) is characterized by excessive liver fat deposition and progresses to liver cirrhosis and even hepatocellular carcinoma. However, the invasive diagnosis of NAFLD with histopathological evaluation remains risky. This study investigated potential genes correlated with NAFLD, which may serve as diagnostic biomarkers and even potential treatment targets.

Methods: A weighted gene co-expression network analysis (WGCNA) was constructed based on dataset E-MEXP-3291. Gene Ontology (GO) and Kyoto Encyclopedia of Genes and Genomes (KEGG) analyses were performed to evaluate the function of genes.

Results: The blue module was positively correlated, and the turquoise module negatively correlated, with the severity of NAFLD. Furthermore, 8 driving genes (ANXA9, FBXO2, ORAI3, NAGS, C/EBPα, CRYAA, GOLM1, TRIM14) were identified from the overlap of genes in the blue module and GSE89632, and another 8 driving genes were identified from the overlap of the turquoise module and GSE89632. Among these driving genes, C/EBPα (CCAAT/enhancer binding protein α) was the most notable. By validating the expression of C/EBPα in the liver of NAFLD mice using immunohistochemistry, we discovered a significant upregulation of C/EBPα protein in NAFLD.

Conclusion: We identified two modules and 16 driving genes associated with the progression of NAFLD, and confirmed the protein expression of C/EBPα, which had received little attention in the context of NAFLD, in the present study. Our study will advance the understanding of NAFLD. Moreover, these driving genes may serve as biomarkers and therapeutic targets of NAFLD.

Supplementary Information: The online version contains supplementary material available at 10.1186/s12864-023-09458-3.

Background

Nonalcoholic fatty liver disease (NAFLD), characterized by excessive liver fat deposition, is a continuous disease spectrum including simple steatosis, nonalcoholic steatohepatitis (NASH), related liver cirrhosis, and even hepatocellular carcinoma in severe cases [1,2]. NAFLD accounts for 75% of chronic liver disease cases and is also a common cause of liver transplantation [3-5]. With changes in modern lifestyles, such as high energy intake and sedentary activities, the incidence of NAFLD is rapidly increasing [6]. In addition, NAFLD increases susceptibility to chronic kidney disease, sarcopenia, hyperuricemia, type 2 diabetes and other metabolic diseases and malignancies [7,8]. Hence, NAFLD has received much attention, and many efforts have been made toward its diagnosis and treatment [9]. The current gold standard for diagnosing NAFLD is histopathological evaluation of liver tissue biopsy, which is invasive, risky, and subject to sampling errors [10]. It is challenging yet tempting to seek non-invasive diagnostic biomarkers with easy detection and high accuracy for the diagnosis and even potential treatment of NAFLD [11].

Thanks to the great strides made in bioinformatics in recent decades, we can analyze large and complex gene sequencing data, and bioinformatics has been accepted as an important method in life science research [12-15]. Weighted gene co-expression network analysis (WGCNA) is a bioinformatics method that explores the correlations between or within different genomes, as well as the correlations between genes and clinical features, by establishing co-expression modules or gene networks [16-18].
The modules are established based on differences in expression profiles, and driving genes are critical in triggering key cell signaling pathways in important cell types [19]. WGCNA recognizes highly correlated modules, the characteristics of gene modules, and driving genes. It contributes to establishing correlations between gene modules and samples and to calculating module membership [20]. At present, WGCNA has been successfully applied in analyses of cancers (e.g., breast cancer, glioblastoma and prostate cancer) [21-23]. By investigating the correlations between tissue microarray data and clinical features, WGCNA predicts the survival outcomes of cancer patients and identifies candidate biomarkers or therapeutic targets of cancers [24,25].

In the present study, we analyzed the E-MEXP-3291 dataset using WGCNA. After establishing the correlations between gene modules and clinical data of NAFLD, it was found that the blue module was positively correlated with the severity of NAFLD. Subsequently, the overlap of genes in the blue module and upregulated genes in GSE89632 was searched, and 8 driving genes, including CCAAT/enhancer binding protein α (C/EBPα), were identified. After establishing a NAFLD model in mice, immunohistochemical data validated significantly upregulated C/EBPα in the liver tissues of NAFLD mice. We also identified the turquoise module and 8 driving genes negatively associated with the severity of NAFLD, indicating regression of the disease. Taken together, our findings provide novel directions and therapeutic targets for NAFLD.

Dataset acquisition and data preprocessing

The RNA microarray dataset GSE89632 [PMID 25581263] and the E-MEXP-3291 [PMID 21737566] profile were downloaded from the GEO (Gene Expression Omnibus) database (https://www.ncbi.nlm.nih.gov/geo) and ArrayExpress (https://www.ebi.ac.uk/arrayexpress/), respectively. The gene expression levels of 24 healthy controls, 20 cases with simple steatosis and 19 cases with NASH were included in GSE89632, along with the steatosis percentage, fibrosis stage, lobular inflammation severity, ballooning intensity, NAFLD activity score, age, sex, liver arachidonic acid level, liver eicosapentaenoic acid level, and liver docosahexaenoic acid level. The gene expression levels of 19 normal livers, 10 simple hepatic steatosis samples, 9 NASH with fatty liver and 7 NASH without fatty liver, together with sex and age, were included in E-MEXP-3291. A summary of the datasets is provided in supplementary Table 1. In the present study, the expression profiles of the 19 normal livers, 10 simple hepatic steatosis samples and 9 NASH with fatty liver were used to construct the WGCNA network, and the obtained results were further validated using the GSE89632 dataset.
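The paper names only the repositories, not the retrieval tooling. As an assumption, one common route in R is Bioconductor's GEOquery and ArrayExpress packages; the accession IDs below come from the paper, everything else is a sketch.

```r
# Assumed tooling (not stated by the authors): fetch the two accessions with
# Bioconductor packages GEOquery and ArrayExpress.
library(GEOquery)      # for GSE89632
library(ArrayExpress)  # for E-MEXP-3291
library(Biobase)

gse      <- getGEO("GSE89632", GSEMatrix = TRUE)[[1]]
expr_gse <- exprs(gse)   # probes x samples expression matrix
pheno    <- pData(gse)   # clinical annotations (steatosis, NASH, ...)

raw_ae <- getAE("E-MEXP-3291", type = "processed")  # processed data files
```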
For C/EBPα immunostaining, cross-sections were treated for antigen retrieval and incubated with primary antibodies (1:100) followed by secondary antibody. Peroxidase activity was revealed with 3,3′-diaminobenzidine tetrahydrochloride (DAB, Dako). Images were captured using an Upright Metallurgical Microscope (Leica DM4B, Germany). Negative controls were carried out by omitting the primary antibody. Construction of WGCNA The co-expression network was constructed with the WGCNA R package. In brief, genes with expression values > 10 in the 43 samples were used to draw a hierarchical clustering tree (dendrogram) using the flashClust function. The soft-thresholding power selected by the pickSoftThreshold function was a standard value in the scale-free topology network, chosen to make the established network follow a power-law distribution. This reduces errors and makes the results more characteristic of biological data by strengthening strong correlations and weakening weak correlations under the scale-free network assumption. The scale-free topology fit index presented an exponential change; a good fit (R2 approaching 1) indicated that the data network followed a scale-free topological distribution. Clinically significant modules Key modules were screened out by calculating the correlations between module eigengenes and clinical traits. In the linear regression between gene expression and clinical information, the log10 transformation of the P-value (GS = lg P) was considered the gene significance (GS). The average GS of all genes in one module was considered the module significance (MS). The module with the highest MS among all modules was considered to have the most significant correlation with clinical traits. Function enrichment analysis To obtain further insight into the function of genes in key modules, Gene Ontology (GO) enrichment analysis was performed for modules with the KOBAS tool (http://kobas.cbi.pku.edu.cn/kobas3). The gene lists of the modules were uploaded, and the results of biological process (BP) and Kyoto Encyclopedia of Genes and Genomes (KEGG) pathway analyses were obtained. An adjusted p-value < 0.05 was regarded as significant. Validation of driving genes The "limma" R package was used to screen the differentially expressed genes (DEGs) between healthy control samples and NASH samples in dataset GSE89632 for validation. The cutoff was |log2FC| > 1 with an adjusted P-value < 0.05. The volcano plot and the hierarchical clustering analysis were produced using the R packages ggplot2 and pheatmap, respectively. A Venn diagram was created using the online tool jvenn (http://jvenn.toulouse.inra.fr/app/example.html) to overlap the genes in key modules with the DEGs. Statistical analysis The results are expressed as the mean ± S.E.M. Statistical analysis was performed using an unpaired t-test for comparisons between two groups, followed by the Student-Newman-Keuls test (Prism 5 for Windows, GraphPad Software Inc., USA). P values < 0.05 were considered statistically significant. Expression value analysis of microarray data The E-MEXP-3291 profile containing 43 cases was downloaded from ArrayExpress, including 20 healthy controls, 15 cases with steatosis and 8 cases with NASH. Using R, the raw data of the E-MEXP-3291 profile were processed for background correction and normalization. Probes and gene symbols were matched using an R annotation package. For multi-matched genes, the median level was regarded as the final expression value.
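As a concrete illustration of this preprocessing step, the sketch below collapses multi-matched probes to a single value per gene using the median rule just described and then applies the mean-expression filter used immediately below. It is a minimal sketch rather than the authors' code, and the input names (exprs_probe, probe2gene) are illustrative assumptions.

# Minimal sketch of the probe-to-gene collapse and expression filter.
# Assumed inputs (illustrative, not from the paper):
#   exprs_probe : probes x samples matrix, background-corrected and normalized
#   probe2gene  : data.frame with columns 'probe' and 'symbol'
collapse_by_median <- function(exprs_probe, probe2gene) {
  m <- exprs_probe[probe2gene$probe, , drop = FALSE]
  idx_by_gene <- split(seq_len(nrow(m)), probe2gene$symbol)
  # For genes matched by several probes, take the per-sample median
  t(vapply(idx_by_gene,
           function(i) apply(m[i, , drop = FALSE], 2, median),
           numeric(ncol(m))))
}

expr_gene <- collapse_by_median(exprs_probe, probe2gene)
# Keep genes with an average expression level > 5 (6,731 genes in the paper),
# then transpose because WGCNA expects a samples x genes matrix
datExpr <- t(expr_gene[rowMeans(expr_gene) > 5, ])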
A total of 23,486 genes were identified, and those with an average expression level > 5 were selected for the following analysis. Ultimately, 6,731 eligible genes were included for cluster analysis. As shown in Fig. 1A, the 43 samples were classified into three clusters. Construction of WGCNA and identification of key modules An appropriate soft-threshold value was screened out to give the established network a scale-free distribution. Network topology analysis was conducted on the top 20 thresholding powers, aiming to identify a relative balance between the scale independence and mean connectivity of the WGCNA. The power value (β) was confirmed to be 9 (Fig. 1B and C) and used to produce a hierarchical clustering tree of the 6,731 genes. The obtained adjacency and topological overlap matrices were subjected to gene clustering based on dissimilarity. Subsequently, modules were cut by the dynamic tree cut algorithm to establish the WGCNA network. Similar modules were merged with MEDissThres set to 0.25, and 7 modules were generated (Fig. 2A and B). Notably, the grey module represented genes that could not be allocated to any module. In the hierarchical clustering, different colors represent different modules; those on the top are the modules initially obtained through the dynamic tree cut algorithm, while those on the bottom are the final merged modules. In detail, there were 609, 1154, 653, 1701, 1234, 527 and 853 genes in the black, blue, brown, green, grey, red and turquoise modules, respectively. Correlation between modules and key module identification Interactions among the seven modules were analyzed, and a network heatmap was depicted (Fig. 3A). Every module was relatively independent of the others, indicating a relative independence of the genes in different modules. Subsequently, the co-expression similarity of the modules was investigated by calculating eigengenes and clustering them based on their correlation, and two main clusters were obtained (Fig. 3B). (Fig. 1 caption: Sample clustering and soft-thresholding power determination. A Clustering was based on the expression data of E-MEXP-3291, and the color intensity was proportional to disease status (healthy controls, simple steatosis and NASH), sex and age. B Analysis of the scale-free fit index for various soft-thresholding powers (β). C Analysis of the mean connectivity for various soft-thresholding powers. In all, 9 was the best-fitting power value.) In addition, the heatmap of the driving gene network, depicted based on adjacencies, revealed similar results (Fig. 3C). In the present study, age, sex and stage were included as clinical traits. Pearson correlation analysis was performed between modules and clinical traits, with modules as rows and clinical traits as columns; the values in the cells represent the correlation and p-value. As shown in Fig. 3D, the blue module was positively correlated with stage, and the turquoise module was negatively correlated with stage, suggesting that the blue module could promote, and the turquoise module inhibit, the progression of NASH. The correlations between module membership and GS in the blue and turquoise modules are shown in Supplementary Fig. 1. Therefore, the blue and turquoise modules were ultimately selected for the following analysis.
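The pipeline just described maps onto a handful of WGCNA calls. The following is a minimal R sketch under the parameters reported above (β = 9, MEDissThres = 0.25), assuming datExpr is the samples × genes matrix from the preprocessing sketch and traits is a numerically coded samples × traits data frame (age, sex, stage); minClusterSize is an illustrative default not stated in the text.

library(WGCNA)

# Candidate soft-thresholding powers; beta = 9 gave a near scale-free fit
sft  <- pickSoftThreshold(datExpr, powerVector = 1:20)
beta <- 9

# Adjacency -> topological overlap -> hierarchical clustering -> dynamic tree cut
adj      <- adjacency(datExpr, power = beta)
dissTOM  <- 1 - TOMsimilarity(adj)
geneTree <- hclust(as.dist(dissTOM), method = "average")
mods     <- cutreeDynamic(dendro = geneTree, distM = dissTOM,
                          minClusterSize = 30)   # illustrative size
colors   <- labels2colors(mods)

# Merge similar modules (eigengene dissimilarity below 0.25)
merged <- mergeCloseModules(datExpr, colors, cutHeight = 0.25)

# Correlate module eigengenes with the clinical traits
MEs         <- moduleEigengenes(datExpr, merged$colors)$eigengenes
modTraitCor <- cor(MEs, traits, use = "p")
modTraitP   <- corPvalueStudent(modTraitCor, nSamples = nrow(datExpr))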
Function enrichment analysis As the blue module was positively correlated with disease stage, the genes in the blue module were enrolled for further KEGG and GO analyses; a p-value < 0.05 and FDR < 0.05 were considered statistically significant. A total of 62 biological processes and 57 pathways were enriched in the blue module, which was mainly enriched in the regulation of the Wnt, MAPK and AMPK pathways, among others (Fig. 4A and B). However, we did not identify significantly enriched pathways in the turquoise module. Identification of driving genes in the blue module The GSE89632 dataset was downloaded from the GEO database to examine the expression of driving genes. DEGs were screened out (|log2FC| > 1 and P < 0.05) as described [26,27], and are depicted as volcano plots (Fig. 5A) and in a hierarchical clustering heatmap (Fig. 5B). Subsequently, a Venn diagram was constructed to overlap the upregulated genes with the genes in the blue module, and 8 overlapping driving genes (ANXA9, FBXO2, ORAI3, NAGS, C/EBPα, CRYAA, GOLM1, TRIM14) were obtained (Fig. 5C). The expression levels of the 8 driving genes in healthy controls and NASH cases from GSE89632 are shown in Fig. 5D, and C/EBPα was the most upregulated gene (log2FC = 3.33).
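The DEG screen used for this validation follows the standard limma workflow. Below is a minimal sketch, assuming exprs89632 is a genes × samples expression matrix, group is a factor with levels HC and NASH, and blue_module_genes holds the blue-module gene list; it applies the adjusted-P cutoff stated in the Methods and is illustrative rather than the authors' code.

library(limma)

# Two-group design: healthy controls (HC) vs NASH
design <- model.matrix(~ 0 + group)
colnames(design) <- levels(group)   # "HC", "NASH"

fit <- lmFit(exprs89632, design)
fit <- contrasts.fit(fit, makeContrasts(NASH - HC, levels = design))
fit <- eBayes(fit)
tab <- topTable(fit, number = Inf, adjust.method = "BH")

# |log2FC| > 1 with adjusted P < 0.05, split by direction
up   <- rownames(tab)[tab$logFC >  1 & tab$adj.P.Val < 0.05]
down <- rownames(tab)[tab$logFC < -1 & tab$adj.P.Val < 0.05]

# Driving genes: upregulated DEGs that also fall in the blue module
driving_blue <- intersect(up, blue_module_genes)
# (the turquoise-module overlap is computed analogously)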
Experimental validation of driving genes in the blue module To verify our identification, the expression of CCAAT/enhancer binding protein α (C/EBPα), the most upregulated gene in the blue module, was determined in NASH model mice and normal controls. In Fig. 6A, the brown color represents positive staining of C/EBPα. The staining intensity was quantified using ImageJ software, and each value was divided by the mean value of the normal group, yielding the fold change relative to the normal group (Fig. 6B). The immunohistochemistry results showed that the protein level of C/EBPα was upregulated in the NASH group compared with the control group in this mouse model, consistent with our bioinformatics analysis. Discussion NAFLD has emerged as the leading cause of chronic liver disease in many countries worldwide. NAFLD represents a spectrum of disease severity, ranging from simple steatosis to NASH, cirrhosis, and hepatocellular carcinoma (HCC) [28]. Compared with the general population, NAFLD patients are at increased risk of liver-related, kidney-related, cardiovascular and all-cause mortality [29,30]. However, given its complex multifactorial pathogenesis, the genes and proteins related to the progression of NAFLD remain obscure. In recent years, the identification of key genes of a disease using WGCNA has become popular [31]. Establishing a WGCNA network contributes to screening and identifying key modules and genes that are responsible for specific features of a disease [32]. (Fig. 3 caption: A There was no significant difference in interactions among different modules, indicating a high degree of scale independence among these modules. B Hierarchical clustering of module eigengenes that summarize the modules yielded in the clustering analysis. Modules with similarity over 0.2 were incorporated before clustering. C Heatmap plot of the adjacencies in the driving gene network. D Heatmap of the correlation between module eigengenes and the disease status of NAFLD. The turquoise module was the most negatively correlated with status, and the blue module was the most positively correlated with status.) Traditional gene analysis mainly focuses on strong effector molecules rather than weak ones, although the latter are of significance as well [33,34]. WGCNA is a supplemental method for data mining of weak effector molecules. It strengthens the correlation of strong effector molecules after power-function processing and conversely weakens that of weak effector molecules in the same processing, thus leading to a scale-free topology criterion of networks [35]. In the current study, we applied WGCNA to two datasets, E-MEXP-3291 and GSE89632. Both contain samples from healthy controls, simple steatosis and NASH. Although simple steatosis and NASH are parts of, not identical to, NAFLD, they are important components of NAFLD, and the studies of E-MEXP-3291 and GSE89632 also use the term NAFLD to include simple steatosis and NASH [36,37]; hence, in the current study, we use the term NAFLD to describe the clinical information/trait of disease status in these datasets. The microarray dataset E-MEXP-3291 was downloaded from the ArrayExpress database. A total of 6,731 genes were retained after excluding those with an average expression level < 5, and these genes were subjected to WGCNA. Modules correlated with NAFLD were identified through cluster analysis. The data showed that the blue and turquoise modules were correlated with the stage of NAFLD. Afterwards, the two modules were subjected to GO and KEGG analyses, and they were determined to be mainly enriched in fat metabolism, insulin resistance and other biological processes of great significance in the development of NAFLD. We then investigated driving genes in each module. The term "driving genes" is similar to, and yet somewhat different from, "hub genes". The driving genes in the present study are the overlaps between modules and DEGs and are related to the disease, whereas, in general, hub genes refer to genes participating in transcriptional regulation [38,39]. To ascertain driving genes in each module, another dataset, GSE89632, was introduced to identify differentially expressed genes between healthy controls and NASH patients. The GSE89632 dataset serves as the external validation set to ensure the stability of the results. By overlapping the upregulated differentially expressed genes with the genes in the blue module, a total of 8 genes were obtained. Among them, C/EBPα was the top upregulated gene. To further validate our findings, we detected positive expression of C/EBPα in liver tissues of NAFLD mice by immunohistochemical staining. As expected, C/EBPα was significantly upregulated in NAFLD mice compared with mice fed a normal diet. CCAAT/enhancer binding protein (C/EBP) is a eukaryotic transcription factor family containing six members (C/EBPα, C/EBPβ, C/EBPδ, C/EBPε, C/EBPγ and C/EBPζ) [40]. These proteins are extensively distributed in various types of tissues, organs and cells. Functionally, C/EBPα is involved in hepatocyte proliferation and adipocyte differentiation [41]. C/EBPβ is necessary for the immune function of macrophages [42]. C/EBPδ is synergistically involved in adipocyte differentiation [43], and C/EBPε is specifically expressed in bone marrow cells, serving as a vital mediator of granulocyte production [44]. C/EBPγ and C/EBPζ, however, have been little studied. Structurally, the basic leucine zipper (bZIP) at the C-terminus of C/EBP family members is highly conserved and is responsible for dimerization and DNA binding [45]. C/EBP heterodimers or homodimers regulate gene transcription by binding the conserved sequence 5'-T(T/G)NNGNAA(T/G)-3', thereby participating in immune and inflammatory responses [46].
C/EBP binding sites exist in the promoter regions of many inflammation-related cytokines [47,48]. Hence, the C/EBP family plays a critical role in the inflammatory response. It has been reported that knockdown of C/EBPβ in mouse type II alveolar epithelial cells downregulates IL-1β-induced expression of IL-6 [49]. Stimulation of mouse bone marrow-derived macrophages (BMDMs) with low-dose LPS enhances the transcriptional activity of C/EBPδ [50]. Knockdown of C/EBPδ alleviates LPS-induced ALI/ARDS symptoms, mainly manifested as decreased numbers of neutrophils, decreased albumin (a reflection of the vascular epithelial permeability of lung tissue) and decreased cytokines in bronchoalveolar lavage fluid [51]. It is reasonable to speculate that suppressing the transcriptional activity of C/EBPα may alleviate the inflammatory response. However, the function of C/EBPα in the development of NAFLD has rarely been reported in the literature. Our bioinformatic analysis showed that C/EBPα was upregulated in the liver tissues of NAFLD, which was further confirmed in the NAFLD mouse model. Our results suggest that C/EBPα may aggravate the development of NAFLD. In conclusion, we identified two modules and 16 driving genes, including 8 genes positively correlated and 8 genes negatively correlated with the severity of NAFLD, which will advance the understanding of the mechanism of NAFLD. Our findings provide novel directions and therapeutic targets for NAFLD. (Fig. 5 caption: Identification of driving genes in GSE89632. A Volcano plot visualizing DEGs in GSE89632 (19 with nonalcoholic steatohepatitis (NASH) and 24 healthy controls (HC)). The green nodes are downregulated genes, and the red nodes are upregulated genes (|fold change| > 2, p < 0.05). B Heatmap hierarchical clustering reveals dysregulated genes in the NASH groups compared with the healthy controls. C-D Identification of common genes between upregulated genes and the blue module by overlapping them; C/EBPα was determined to be the most upregulated gene. E Identification of common genes between upregulated genes and the turquoise module by overlapping them.)
2023-07-24T13:33:12.171Z
2023-07-24T00:00:00.000
{ "year": 2023, "sha1": "9b607ac44fc5df5d4b83a30e153977dbe7edeb33", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Springer", "pdf_hash": "9b607ac44fc5df5d4b83a30e153977dbe7edeb33", "s2fieldsofstudy": [ "Biology" ], "extfieldsofstudy": [ "Medicine" ] }
16054765
pes2o/s2orc
v3-fos-license
Acute stress alters individual risk taking in a time‐dependent manner and leads to anti‐social risk Abstract Decision‐making processes can be modulated by stress, and the time elapsed from stress induction seems to be a crucial factor in determining the direction of the effects. Although current approaches consider the first post‐stress hour a uniform period, the dynamic pattern of activation of the physiological stress systems (i.e., the sympathetic nervous system and hypothalamic‐pituitary‐adrenal axis) suggests that its neurobehavioural impact might be heterogeneous. Here, we evaluate economic risk preferences in the gain domain (i.e., risk aversion) at three time points following exposure to psychosocial stress (immediately after, and 20 and 45 min from onset). Using lottery games, we examine decisions at both the individual and social levels. We find that risk aversion shows a time‐dependent change across the first post‐stress hour, evolving from less risk aversion shortly after stress to more risk averse behaviour at the last testing time. When risk implied an antisocial outcome for a third party, stressed individuals showed less regard for this person in their decisions. Participants' cortisol levels explained their behaviour in the risk, but not the antisocial, game. Our findings reveal differential stress effects on self‐ and other‐regarding decision‐making and highlight the multidimensional nature of the immediate aftermath of stress for cognition. Introduction Exposure to stressful situations triggers the activation of physiological and neuropsychological responses (particularly "fight-or-flight" responses) that have been selected throughout evolution for their ability to facilitate coping with life threats (McEwen, 2007). The physiological stress responses comprise the rapid (and generally transient) activation of the sympathetic nervous system (SNS), closely followed by activation of the hypothalamic-pituitary-adrenal (HPA) axis (Herman et al., 2012), whose actions can target the brain and affect ongoing and subsequent behavioural and cognitive functions (Roozendaal & McGaugh, 2011; Hermans et al., 2014), including social behaviours (Sandi & Haller, 2015). It is therefore not surprising that decision-making processes are susceptible to modulation by stress [for reviews, see Starcke & Brand, 2012; Morgado et al., 2015]. In our society, stress is rather ubiquitous in contexts where people are required to make important economic, social or political decisions. Decisions can vary in their targets; e.g., they can primarily affect the decision-making agent, other individuals, or both. Acute stress appears to modulate decisions related to these different targets. For self-related decisions, the emerging picture outlines risky decisions for gains, not losses (but see Pabst et al., 2013a,b; Robinson et al., 2015), as being particularly affected by stress (Lighthall et al., 2009; Porcelli & Delgado, 2009; Buckert et al., 2014); however, there is no consensus as to whether stress has any effect at all (Lempert et al., 2012; Gathmann et al., 2014), or whether it turns individuals less (Lighthall et al., 2009; Buckert et al., 2014) or more (Porcelli & Delgado, 2009) risk averse. Similarly, evidence is mixed regarding the effects of acute stress on other-regarding decisions. While some reports underscore prosocial effects of stress (von Dawans et al., 2012; Margittai et al., 2015), others describe antisocial effects (Vinkers et al., 2013; FeldmanHall et al., 2015; Margittai et al., 2015; Steinbeis et al., 2015).
These discrepancies may be accounted for by different factors, such as gender effects (Preston et al., 2007; van den Bos et al., 2009; Lighthall et al., 2009), individual differences in hormonal stress responses (Coates & Herbert, 2008; Starcke et al., 2011; van den Bos et al., 2013b; Buckert et al., 2014; Kandasamy et al., 2014; Cueva et al., 2015), or the nature of the stressors (Steinbeis et al., 2015). Importantly, recent evidence suggests that the time elapsed from stress induction to behavioural testing is a crucial factor in capturing the effects of stress on decision-making (Pabst et al., 2013a; Vinkers et al., 2013; Margittai et al., 2015). Several studies have focused on the distinction between two temporal domains (Joëls & Baram, 2009): the first taking place during the first hour after stress and the second lasting for several hours afterwards (Vinkers et al., 2013; Margittai et al., 2015). These phases are based on different mechanisms elicited by glucocorticoids (primarily cortisol in humans), the final products of the HPA axis, which involve non-genomic, rapid actions in the first phase and slower, genomic actions in the second, with each engaging divergent activation of large-scale brain networks (Joëls et al., 2011; Hermans et al., 2014). So far, changes in decision-making have been found during the first but not the second phase (Vinkers et al., 2013; Margittai et al., 2015), highlighting the first post-stress hour as the critical period for immediate stress effects on risk taking. From an evolutionary point of view, the sensitivity of the first hour in the aftermath of stress to changes in risk aversion makes sense, as it arguably corresponds with the period when responses to encountered threats are most needed. However, this first post-stress hour may not be a uniform period with regard to decision-making, but one that varies with time, as suggested by a recent study in which decisions changed at different time points throughout the hour (Pabst et al., 2013a). This view aligns well with the different physiological states experienced during this period, as exemplified by the dynamic pattern of activations elicited by acute stress in the SNS (very rapid and transient) and the HPA axis (typically, glucocorticoid levels rise slowly, peak at around 15-30 min post-stress onset, and then decline slowly over the subsequent 30 min). Therefore, we hypothesized that exposure to psychosocial stress would decrease risk aversion and lead to anti-social decision-making, with effects varying at discrete time points throughout the first post-stress hour. We set up this study to investigate these hypotheses regarding risky economic decision-making in the gain domain for self- and other-regarding decisions, and explicitly asked whether stress effects would vary across different time points within the first hour following stress exposure. Given the lack of prior information, we did not make specific predictions for each time point. We used two economic games given to different cohorts of control and stressed participants at different time points within the first hour after stress induction. Although a marked bias towards risk aversion has been observed in several species, including humans, both individual differences and intra-individual changes in risk taking have been documented (Markowitz, 1952; Kahneman & Tversky, 1979). We included males and females to test for gender effects, and measured heart rate and saliva cortisol levels to assess SNS and HPA axis responses, respectively.
Participants Healthy male and female participants were recruited at the University of Lausanne and the École Polytechnique Fédérale de Lausanne (EPFL). Exclusion criteria included current medication use; pregnancy or breastfeeding; experiencing a major life change or an unusual amount of stress; smoking more than five cigarettes per day; and having a history of medical or psychiatric illness, insomnia, night shift work or a history of drug or alcohol abuse. Three separate experimental blocks were conducted. Participants completed sessions in groups of five or six. The final sample size was 352 participants, randomly assigned to either stress (n = 173: 67 females and 106 males) or control (n = 179: 75 females, 104 males) conditions. Sessions took place daily either between 14:00 and 16:00 or between 16:00 and 18:00. We conducted one stress and one control session on each day, with session order counterbalanced across experiment days. Participant demographics are listed in Table 1. An additional group of 55 participants was recruited separately to play the role of second movers. These volunteers did not make any decisions, but received a cash payment depending on whom they were paired with for a series of games (mean payment = CHF 21.80). This study was approved by the Hautes Études Commerciales (HEC) Ethics Committee of the University of Lausanne. Experimental procedures The procedure is outlined in Fig. 1A. One week before the experiment, participants filled out a battery of questionnaires online, including the State-Trait Anxiety Inventory (Spielberger, 1983) and a 10-min timed version of the Bochumer Matrizen-test (Hossiep et al., 1999). Upon arrival at the laboratory, participants read and signed information and consent forms. They were then fitted with a heart rate monitor (POLAR CSX800; Polar Electro, Kempele, Finland). Saliva samples were collected using Salivette sampling devices (Sarstedt, Nümbrecht, Germany), and visual analogue scales (VAS) were given to assess subjective stress levels at different times throughout the experiment (see T1-T6 in Fig. 1A). The economic games were explained, and participants completed trial games in advance to ensure their full understanding of the tasks. Following the instructions, participants were told which condition they were assigned to and were given 10 min to prepare for the interview. Participants in the stress group were exposed to the Trier Social Stress Test for Groups (TSST-G; von Dawans et al., 2011), which involves the preparation and delivery of an oral presentation simulating a job interview, as well as performing a mental arithmetic task before an unresponsive jury and video cameras. Participants in the control group were given a text to read in a low voice, followed by an easy counting task. These measures have been shown to control for the different factors of the TSST-G procedure while excluding the psychosocial stress component (von Dawans et al., 2011). Following each of the speaking and arithmetic tasks, both groups played a series of economic games, including the standard risk and anti-social risk games. Participants performed these two games only once. Different cohorts of control and stressed participants performed these games at different time points within the first hour after stress induction, i.e., immediately after, and 20 and 45 min from onset.
At the end of the experiment, all participants completed an attention test (Brickenkamp & Zillmer, 1998) to ensure that potential differences in participants' performance did not arise from a lack of engagement. We verified the experiment's credibility by asking participants, during the debriefing session, whether they truly believed they were matched with a live person in the anti-social risk game, on a scale of 0 (no doubt at all) to 100 (highly doubtful). Most participants had little or no doubt (mean = 27.05, SD = 33.24). At the end of the experiment, payoffs were calculated. Participants were paid 45 Swiss Francs (CHF 45; CHF 1 = 1.03 USD) for participation and an additional amount based on their game choices, which varied between CHF 0 and CHF 35. The standard risk and anti-social risk games Participants were given two lottery games (see scheme in Fig. 2), one testing individual risk (i.e., the standard risk game) and the second testing other-regarding risky behaviour (i.e., the anti-social risk game). Following the strategy method, subjects were first asked to indicate the probability P at which they would choose a lottery with a 20 CHF gain over a certain outcome with a 10 CHF gain (standard risk game). This measure, dubbed the switching probability threshold [p(switch)], is an indicator of risk aversion: the higher the probability threshold, the more risk averse the subject. The game was then played a second time, whereupon a social dilemma was introduced (anti-social risk game). While the game structure remained the same, the outcome of the decision-making impacted a third party. When the subject obtained the certain gain, the third party also obtained 10 CHF. However, when the subject obtained the lottery, the third party obtained nothing. Thus, a pro-social move would be to give a high p(switch), minimizing the likelihood of obtaining the lottery and its concomitant anti-social outcome. Each participant performed these two lottery games only once, corresponding to a single testing time point for each risk condition. Different cohorts of participants were tested at different time points with regard to the stress or control manipulation. For the calculation of payments, a computer-generated random probability [p(random)] was assigned to each subject. When p(random) was higher than p(switch), the subject obtained the payoff from a lottery with a chance of winning equal to p(random) and a gain of 20 CHF. When p(random) was smaller than p(switch), the subject was paid the 10 CHF from option A. In the anti-social risk game, a third party was paid 10 CHF whenever the participant chose the certain gain (option A). Note that feedback regarding the results of the games and payment was only given once the subjects had finished all experimental procedures. (Fig. 2 caption: A schematic representation of the game used to assess risk preferences, under both standard risk and anti-social conditions. Participants (denoted here as Agents) provided a switching probability 'p(switch)' at which they would accept a minimum probability of winning to enter a lottery for 20 CHF (Option B) over the certain gain of 10 CHF (Option A).)
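To make the payoff rule concrete, the short simulation below implements it as described; this is an illustrative reconstruction, not the authors' code, and the 0.65 example value simply echoes the risk-averse level later reported for controls.

# Illustrative simulation of the strategy-method payoff rule described above.
# p_switch: the participant's stated switching probability threshold.
play_risk_game <- function(p_switch) {
  p_random <- runif(1)                 # computer-generated random probability
  if (p_random > p_switch) {
    # Lottery: win 20 CHF with probability p_random, otherwise nothing
    agent <- if (runif(1) < p_random) 20 else 0
    other <- 0                         # anti-social variant: third party gets nothing
  } else {
    agent <- 10                        # certain gain (option A)
    other <- 10                        # anti-social variant: third party also gets 10 CHF
  }
  c(agent = agent, other = other)
}

set.seed(1)
play_risk_game(p_switch = 0.65)

A high p(switch) therefore shields the third party in the anti-social variant, since the lottery (and its concomitant anti-social outcome) is only triggered by the relatively rare event p(random) > p(switch).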
Cortisol assessment Saliva samples were stored at −20 °C until processed. The assay protocol was conducted as follows: samples were first centrifuged at 3000 rpm for 15 min at room temperature, and salivary cortisol concentrations were then measured by enzyme immunoassay (Salimetrics, Suffolk, UK) according to the manufacturer's instructions. The analytical sensitivity of the cortisol assay is 0.007 µg/dL, with a standard curve ranging from 0.012 to 3.00 µg/dL. Coefficients of variation for low and high commercial controls were 4.75% for intra-assay and 8.2% for inter-assay. (Fig. 1 caption: The stress induction protocol successfully induced a stress response in stress-group subjects. Subjects performed either a TSST-G stress procedure (TSST-1 and TSST-2 denote the respective interview and mathematical portions of the TSST-G stressor) or a control procedure and then performed tasks as outlined (A). Subjects in the stress group reported significantly higher levels of subjective stress during each saliva measurement (B), and exhibited increased heart rates (C) and cortisol levels (D) during the experiment compared to controls. A portion of subjects exhibited a pattern of cortisol response such that they could be classified as either responders or non-responders, with non-responders demonstrating cortisol levels similar to control groups, and responders exhibiting significantly higher levels. Data are presented as mean and standard error, except for (E), which depicts the means.) Data analysis Data were analysed using STATA (2013, StataCorp). All simple comparisons and analyses, unless otherwise specified, were performed using between-subjects factorial ANOVA, and the reported statistics refer to group differences. Analyses involving covariates and interactions were performed using moderated regression with robust standard errors. Coefficients and significance levels are always reported in the relevant tables for regressions, and interaction terms are defined as such. Baseline parameters Following recruitment, subjects were randomly distributed into control and stress groups and exposed to the experimental procedure (Fig. 1A). As shown in Table 1, although these two groups significantly differed in age and trait anxiety, the mean differences between groups were very small (e.g., only 6 months' difference in age) and should not represent functional differences. Otherwise, no differences in baseline cortisol, cognitive scores or psychometric variables were found between control and stressed subjects. Stress induction Subjects in the stress condition gave higher subjective stress ratings on the visual analogue scale (VAS) (Fig. 1B), and showed significantly elevated heart rate (Fig. 1C) and cortisol levels (Fig. 1D) relative to participants in the control condition, indicating successful stress induction. We found no significant differences in subjective stress ratings between groups at the first time point [F(1,226) = 0.63, P = 0.43], nor at the last [F(1,226) = 1.35, P = 0.25]. At all other time points (T1-T4 in Fig. 1A), however, a difference in VAS ratings emerged (T1: F(1,226) = 50.5, P < 0.001; T2: 55.65, P < 0.001; T3: 4.58, P = 0.033; T4: 31.21, P < 0.001). Similarly, salivary cortisol measures validated the stress induction procedure: no group differences were found in cortisol levels in samples taken prior to stress induction (both Fs < 0.64, both ps > 0.42); however, participants in the stress condition exhibited higher levels of salivary cortisol relative to controls following stress induction (all Fs > 19.75, all ps < 0.001). In addition to these analyses, participants were further split into responder and non-responder groups. Responders included those participants that showed a cortisol-specific response to the stressor, defined as the ratio of cortisol at a specific time point to a baseline level. A subject was defined as a responder if the summed reactions across all time points fell at least one standard deviation above the mean reaction level of the control group.
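The responder rule just described can be written in a few lines. The sketch below assumes a matrix cortisol (subjects × sampling times, first column = baseline) and a logical vector is_control; the use of the control group's standard deviation is an assumption, as the text does not state whose SD is meant.

# Classify cortisol responders per the rule described above: the reaction is
# the ratio of cortisol at each time point to baseline, and a responder's
# summed reaction exceeds the control-group mean by at least one SD.
classify_responders <- function(cortisol, is_control) {
  baseline <- cortisol[, 1]
  reaction <- cortisol[, -1] / baseline   # ratio to baseline, per time point
  summed   <- rowSums(reaction)
  cutoff   <- mean(summed[is_control]) + sd(summed[is_control])
  summed >= cutoff                        # TRUE = responder
}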
According to this criterion, 18% of the control group and 45% of participants in the stress condition qualified as responders. Figure 1E represents the average reaction for responders and non-responders in both the control and stress groups. Heart rate measures showed differences between control and stress groups from the minute after the start of the measurement [F(1,221) = 4.07, P = 0.044], once participants received instructions for their task, further confirming that the TSST-G procedure was effective in eliciting a physiological stress response. Time-dependent effects of stress on risk aversion We then investigated whether there were time-specific effects of stress on risk aversion. Subjects were asked to choose between a sure gain of 10 Swiss Francs (10 CHF; CHF 1 = USD 1.08) or playing a lottery in which they could win 20 CHF with probability P or gain nothing (CHF 0) with probability 1 − P. Responses were collected using the strategy method: participants were asked at which probability P of winning 20 CHF they would choose the lottery over the sure gain. A switching probability threshold greater than 0.5 indicates risk aversion, and increases in risk aversion are reflected in increases in switching probabilities. As shown in Fig. 3, there was a significant effect of stress on risk aversion. A two-way factorial ANOVA revealed a significant interaction between time and stress [F(2,346) = 4.92, P < 0.01]. Stress significantly decreased risk aversion early after stress exposure, as demonstrated by a significantly decreased switching probability in stressed subjects compared to controls (t = −2.90, P < 0.01), but this effect of stress was absent in later decision-making [F(1,226) = 0.80, P = 0.35]. Control subjects exhibited risk aversion that was stable over time [F(2,176) = 0.86, P = 0.43]. A three-way factorial ANOVA revealed no differences between male and female participants [F(1,340) = 0.75, P = 0.39], and no interaction of gender with stress [F(2,340) = 1.09, P = 0.29] or timing [F(2,340) = 0.18, P = 0.83] on risk aversion. We then asked whether cortisol responsiveness to the stress and control manipulations interacts with risk aversion. Cortisol response had a similar effect on the standard risk game within both the control and stress groups. As shown in Table 2, we performed a moderated regression to analyse the effects of time, stress and cortisol response on risk behaviour. Similar to the analysis considering stress condition only, cortisol response (t = 2.09, P < 0.05) and stress (t = 2.26, P < 0.05) both interacted with time: the further the decision point was from the stressor, the more risk-averse the participant.
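Although the original analyses were run in STATA, the core models translate directly to R. The sketch below is a hedged reconstruction, assuming a data frame d with columns pswitch, stress (control/stress factor), time (three-level testing-time factor), time_min (numeric minutes from stress onset) and responder; the exact specification of the published Table 2 model is not given in the text.

# Two-way between-subjects factorial ANOVA: stress x testing time
fit_aov <- aov(pswitch ~ stress * time, data = d)
summary(fit_aov)

# Moderated regression with robust (heteroskedasticity-consistent) SEs,
# adding cortisol-responder status and its interaction with time
library(sandwich)
library(lmtest)
fit_lm <- lm(pswitch ~ stress * time_min + responder * time_min, data = d)
coeftest(fit_lm, vcov = vcovHC(fit_lm, type = "HC1"))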
Time-dependent effects of stress on anti-social risk aversion To examine the role of stress in anti-social risk aversion, we ran a second game in which participants were told they were matched with a randomly selected anonymous opponent also participating in the study. They were given the same choice as above (take a sure gain or play the lottery), but they were also told that their decision impacted the other individual. The latter would obtain the same gain of 10 CHF should the participant choose the sure gain, or get nothing should the participant opt for the lottery. Participants were then asked the same question as above (i.e., at what probability P of winning the 20 CHF they would choose the lottery over the sure gain). In this scenario, higher switching probability thresholds indicate more other-regarding behaviour, and vice versa. A two-way factorial ANOVA revealed a significant negative effect of stress on anti-social risk aversion, reflected in switching probabilities [F(1,346) = 7.39, P < 0.01], a general effect of time on anti-social risk aversion [F(2,346) = 18.54, P < 0.001], but no difference in the effect of stress across time [F(2,346) = 0.43, P = 0.65]. Thus, when risk encompassed an anti-social component, stressed participants did not modify their behaviour to take the other person into consideration as much as control subjects did. Control subjects, however, significantly increased their switching probabilities over time [F(2,176) = 7.17, P = 0.001], suggesting a decrease in anti-social risk behaviour at the later time points. As in the previous game, a three-way factorial ANOVA revealed no difference between male and female participants [F(1,340) = 0.04, P = 0.84], and no interaction with time [F(2,340) = 0.84, P = 0.43] or stress [F(1,340) = 0.25, P = 0.61]. Effects of stress on cortisol responders In the standard individual lottery game, there was an influence of both the experimental manipulation and the cortisol response on risk aversion (Table 2, in which significance is indicated by asterisks at the *P < 0.05, **P < 0.01 and ***P < 0.001 levels). The stressor and cortisol had similar effects on behaviour. Thus, subjects who showed a salivary cortisol response exhibited a stronger behavioural reaction than subjects who were not responsive. The smaller fraction of controls that exhibited a cortisol response also displayed the same pattern of altered behaviour as those in the treatment condition (t = −2.00, P < 0.05; Table 2, model 1). Non-responders in the stress condition exhibited the same reaction as responders in the control condition [Wald test, F(1,330) = 0.00, P = 0.96]. Responders and stressed participants had lower switching probability thresholds, indicating lower risk aversion, but this effect dwindled with time. In the case of the anti-social lottery, multiple regression showed only an effect of time on anti-social behaviour, with no further effect of cortisol response or stress (Table 2, model 2). Individual differences in risk aversion and their interaction with stress A moderated regression was performed to analyse the effects of both the cognitive ability test (CAT) and anxiety on standard risk aversion and anti-social risk aversion. Table S1 shows the results of the four regression models. Models 1 and 2 examine the effect of the cognitive test on standard risk aversion and anti-social risk aversion, respectively, and models 3 and 4 address the effect of anxiety. CAT results significantly predicted risk aversion (t = 2.96, P < 0.01), although this effect dwindled with time (interaction of time and CAT: t = −2.46, P < 0.05). There were no effects or interactions of anxiety on standard risk (t = 0.93, P = 0.35; interaction of stress and anxiety: t = −0.37, P = 0.71) or anti-social risk behaviours (t = −0.41, P = 0.68; interaction of stress and anxiety: t = −0.08, P = 0.93).
Table S2 shows that the main results discussed are robust when the regressions are run using a large series of covariates, including age, gender and personality. Discussion Stress is a complex phenomenon involving multiple physiological, behavioural and cognitive adaptations that follow dynamic time-dependent patterns. Although the neurobehavioural sciences tend to consider the aftermath of acute stress exposure a homogeneous period in terms of its modulatory influences, the first post-stress hour is in fact rather multidimensional. This is clearly illustrated by the changing pattern of activation typically exhibited by the SNS and HPA axis during the first hour following stress exposure. Specifically, at the peripheral level, a prototypic stress response consists of an initial transient predominance of SNS activation followed by a gradual increase in bloodstream cortisol levels that peak around 15-30 min post-stress and then steadily decline (van den Bos et al., 2013a). In the brain, different acute stress-induced waves of neurochemical changes (Joëls & Baram, 2009) are believed to correspond to differential regulation of multiple functional networks (Hermans et al., 2014). This dynamic picture of different neurophysiological states could engender different cognitive dispositions. Our study supports this view by showing that decision-making processes are differentially affected at three different time points within the first hour following exposure to psychosocial stress (immediately after, and 20 and 45 min from stress onset). Importantly, risk preferences under stress are selfish but follow a steady slope from reduced to enhanced risk aversion in the course of half an hour. Therefore, a major finding of our study is that risk aversion, as evaluated in the gain domain, shows a time-dependent change across the first post-stress hour: shortly after stress, individuals are less risk averse for gains, an effect that progressively vanishes with time, switching towards more risk averse behaviour at the 45 min post-stress time point. Thus, risky decisions following stress exposure show a positive slope in which behaviour turns in opposite directions, i.e., from an initial reduction to a subsequent increase in risk-averse choices. In the immediate aftermath of stress, individuals are more risk taking (i.e., less risk averse) despite the advantage provided by a risk avoidance strategy, which is in line with the proposed evolutionary value of the 'fight-or-flight' response (Starcke & Brand, 2012). At this early time point, there is a prevalence of SNS activation. Later in time, as cortisol responses increase, risk aversion is established. Importantly, cortisol responding showed the same effects as stress on risk taking, with high-responder subjects displaying the described fluctuating changes in risk aversion with time. Our findings hold important implications for understanding discrepancies in the literature surrounding the effects of acute stress on risk and antisocial behaviour. For example, at first glance, our results seem to be at odds with those of von Dawans et al., who did not find evidence of changes in nonsocial risk taking after exposure to the same stressor as in our study (von Dawans et al., 2012). However, in their study, different variants of the game were played several times within 15-30 min from stress onset and the results were averaged across different time points, which does not evaluate potential time-dependent differences.
In studies that measured behaviour shortly following stress (Starcke et al., 2008; Lighthall et al., 2009; Porcelli & Delgado, 2009; Buckert et al., 2014), risk aversion was decreased in stressed subjects, thus supporting our findings and underscoring the importance of factoring time into behavioural measurements. Moreover, the games in the von Dawans et al. (2012) study did not include the possibility of being antisocial; thus, our studies measure different types of preference. However, in the only study that, to our knowledge, has assessed decision-making at several time points following exposure to stress (TSST; note that this was performed in isolation, not in groups as in our study), more risk aversion was observed in the immediate aftermath of stress, with behaviour becoming riskier when subjects were tested 28 min from stress onset (Pabst et al., 2013b). The discrepancy with our results is probably due to the different nature of the games [the Game of Dice (Starcke et al., 2008)] used in the respective studies. In particular, our subjects were only presented with choices for gains, while in the Pabst et al. study each choice engendered both gains and losses (Pabst et al., 2013b). This is a key difference, as the domain(s) engaged in economic choices (i.e., gain, loss, or both) are known to affect decision-making (Kahneman & Tversky, 1979). In fact, higher risk aversion was also reported by another study that tested participants on choices related to gains and losses following exposure to a brief stressor (Porcelli & Delgado, 2009; cold pressor combined with a memory task). In support of our interpretation, no effect of psychosocial stress on the Game of Dice was found in the gain domain when participants were only given choices for gains and tested 10 min post-stress (Pabst et al., 2013b), a finding that fits with the lack of effects observed in our study at that time point. Moreover, some of the studies reporting results discrepant with ours (Pabst et al., 2013a) included several trials through which participants received feedback about gains and losses, a learning component that is absent from our experiment. Importantly, when feedback was not provided and individuals were tested immediately (0-15 min) after stress exposure, other studies found more risk taking for gains using financially incentivized lotteries similar to ours; however, the effects were sometimes only apparent for subjects that showed a robust cortisol response (Buckert et al., 2014). Our findings identifying high cortisol responders as particularly affected in their risk taking behaviour over time are in agreement with these and other studies that also found riskier (van den Bos et al., 2009) and less strategic (Leder et al., 2013) behaviours in subjects showing high cortisol responses to psychosocial stress. Our second main finding is that stress led to selfish decisions: stressed individuals focused on their own choices and neglected the negative consequences for other social agents. This effect was globally present across all testing times, as stressed subjects were significantly more likely to make risky decisions when the outcome was antisocial.
Across time, the choices of control subjects progressively took into account the anti-social consequence of choosing to play the lottery when a second subject was involved (recall that the second player would only receive earnings if the participant chose the sure gain, and nothing when the participant chose the lottery), correcting their decisions towards a higher preference for the certain gain rather than the lottery. However, stressed subjects appeared locked into their own decisions, as they did not modify their choices to account for the negative consequences for the other player. These observations are in line with emerging evidence indicating that stress can have deep effects on social behaviours (Sandi & Haller, 2015). However, in this case, cortisol did not explain the stress effects in the anti-social choice game, which conflicts with studies in animals implicating glucocorticoids in stress-induced aggressive behaviour (Haller, 2014). It is important to note that the degree of stress experimentally induced in the animal literature is typically well beyond the one recreated in human experimental settings. Accordingly, we cannot exclude an anti-social influence of glucocorticoids in humans under circumstances involving higher cortisol levels or actual social confrontations. Our behavioural results in the anti-social game contrast with previous studies in which stressed male participants showed increased pro-social behaviours, including elevated levels of generosity when making decisions (von Dawans et al., 2012; Margittai et al., 2015). It is important to note that in those studies participants were directly and explicitly asked to decide whether to give money to another subject. By contrast, in our study, the decision regarding the other subject is implicit. Specifically, participants were requested to make a choice regarding their preference in a risk game with consequences for themselves and were told that their choice would have monetary consequences for another subject. Moreover, whereas our study allowed participants to ponder their risk preferences across a full range of risk taking options, the games in other studies (von Dawans et al., 2012) included binary choices (e.g., trustworthiness or no trustworthiness, sharing or no sharing). Furthermore, detailed analyses of the reported stress-induced generosity indicated that it emerges only when socially close individuals are affected, in a modified version of the Dictator game shortly after stress exposure (Margittai et al., 2015; TSST-G). In our study, the 'other' was anonymous and unfamiliar, which fits with the lower generosity reported in stressed subjects when a donation was given to a charitable organization (Vinkers et al., 2013). Interestingly, this effect in trust behaviour (measured with the Ultimatum game) reported by Vinkers et al. was time-dependent, as it was observed immediately after stress exposure but not 75 min later (Vinkers et al., 2013). Additionally, in line with our findings, Starcke et al. also found more egoistic decision-making when participants tested in the immediate aftermath of social stress were confronted with social dilemmas (TSST; Starcke et al., 2011). Control subjects in our study showed persistent signs of risk aversion (scores around 0.65), which is in agreement with behaviour observed in humans and many other species (Caraco et al., 1980; Barkan, 1990). However, and surprisingly, their other-regarding decision-making showed a progressive change across the three testing times.
Although by the second and particularly the third time points control subjects were willing to forego lotteries with higher winning probabilities to take into account the anti-social consequences of playing the lottery, this behaviour was not observed when they played the game shortly after the 'control' manipulation. Although this effect was somewhat unexpected, there are possible explanations that could account for this differential behaviour when it is measured at different time points following the 'control' manipulation. In the control manipulation, subjects had to read a text followed by an easy counting task, all in a low voice; as opposed to the stress manipulation, in which subjects performed out loud and each participant spoke one at a time, subjects in the control group did their reading and counting simultaneously. However, the fact that subjects in the control group had to perform these tasks in close proximity to the other participants (note that they were tested in groups of six) and in front of a jury, even a 'friendly' one, is likely to induce mild arousal. An indication for this interpretation seems to be the mild increase in heart rate observed in Fig. 1C from baseline to the control manipulation, which might reflect not only changes elicited by moving from a sitting to a standing position but also a certain arousal. In fact, there is evidence that controls subjected to the same experimental procedures display increased markers of SNS activation (e.g., increased heart rate and increased salivary alpha-amylase, which is under adrenergic control and therefore an indirect marker of SNS activity) shortly after 'control' manipulations in the TSST, but not at later time points (Pabst et al., 2013a; Vinkers et al., 2013). This suggests that, during the early testing time point, control subjects responded under increased arousal and sympathetic activation (supposedly paralleled by increased brain noradrenergic activation), which might explain why at this, but not later, time points (note that the control group did not mount a cortisol response as observed in the stress group) they did not 'correct' their decisions to take into account the consequences for the other subject, resembling the pattern observed in stressed subjects. In addition, it is worth noting that control subjects in our study had higher trait anxiety levels than their experimental counterparts. Although the difference was rather small, we cannot rule out its potential influence on subjects' reactivity immediately after exposure to the 'control' manipulation. Note, however, that overall the reported effects of stress in the two games did not differ when anxiety was treated as a covariate. Regarding individual differences, we found no effects of gender or anxiety on self- or anti-social risk-taking. The lack of gender effects on risk-taking is surprising, as former studies found that, as opposed to males, females become more risk-averse following stress exposure (Preston et al., 2007; van den Bos et al., 2009; Lighthall et al., 2009; Mather & Lighthall, 2012). Similarly, gender has been proposed to be an important modulatory factor in the social impact of stress, with females' responses following a pattern of "tend-and-befriend", whereas a "fight-or-flight" pattern is pursued by males (Taylor et al., 2000), although the recent literature has not always validated this distinction (von Dawans et al., 2011).
Our data do not confirm a gender distinction for risk and anti-social risk responding under stress; however, a lack of statistical power might be responsible for this absence of gender effects. In addition, our study found that high cognitive scores, as assessed by the CAT test, predicted performance in the games (higher scores corresponded to higher risk aversion) but did not interact with stress effects in either game. These findings are somewhat at odds with emerging evidence indicating that individuals who score higher in tests of executive function are less vulnerable to performance deficits in cognitive tasks resulting from stress or anxiety (Johnson & Gronlund, 2009; Owens et al., 2014; Edwards et al., 2015; Thoresen et al., 2016). Time-dependent effects on behaviour and cognition occurring from minutes to hours following exposure to stress have been highlighted previously, with the temporal distinction mainly placed between two time points: the first occurring within the first post-stress hour, with an engagement of brain noradrenergic mechanisms and non-genomic corticosteroid effects, and the second from about 1-4 h post-stress, corresponding to genomic corticosteroid effects (Hermans et al., 2014). Two receptors are involved in corticosteroid actions: the mineralocorticoid (MR) and glucocorticoid (GR) receptors. Recent integrative models of stress actions on cognition (de Kloet et al., 2016; Vogel et al., 2016) propose a key role for the MR in mediating the rapid behavioural, cognitive and neural adaptations that follow exposure to acute stress, supposedly in close interaction with the known rapid activation of catecholamines (Arnsten, 2009), and a subsequent engagement of the widely distributed, lower-affinity glucocorticoid receptor (GR) involved in the later management of stress adaptation (de Kloet et al., 2009). Thus, within the post-stress time window considered in our study, the predominant rapid brain mechanisms are supposed to engage noradrenergic- and MR-mediated mechanisms, recruiting a salience network (Hermans et al., 2014) and a shift towards cognitively less-demanding processing, allowing a quick response to a situation (Vogel et al., 2016). These processes have been proposed to occur at the cost of an executive control network, which is activated in a second temporal window, normalizing emotional reactivity and enhancing higher-order cognitive processes (Hermans et al., 2014). Thus, rapid actions taking place immediately after stress exposure have been shown to engage striatal pathways (Schwabe & Wolf, 2013; Vogel et al., 2016), which might correspond with the increased interest in the incentivized lottery observed in our study in the immediate aftermath of stress. In addition, the noradrenergic activation taking place at this early time point may also play a role in modulating more selfish behaviour; indeed, noradrenergic blockade has been shown to decrease utilitarian judgment (Terbeck et al., 2013). However, a limitation of our study is the fact that our experimental design included a potentially confounding factor regarding stress timing. We followed previously published TSST-G procedures involving two stress induction blocks (see Fig. 1A; von Dawans et al., 2011, 2012; Goette et al., 2015), which implies that participants tested at the late testing time in our study were not only tested late from stress onset but also following an additional and recent stress induction procedure.
Although these observations imply that we should consider our late time-dependent effects of stress with caution, the linear pattern observed for the responses to the standard risk game over time and the sustained effect of stress in the anti-social risk game across the different testing times support the validity of our conclusions. Importantly, our data reporting time-dependent effects at three different time periods within the first post-stress hour argue for the need to redefine the dynamic mechanisms occurring within this period. Therefore, our findings argue for the need to investigate the pattern of neural dynamics at different time points within the first post-stress hour, in order to better understand the correspondence between the progressing pattern of neurobiological processes triggered by stress and the flexibly allocated behavioural and cognitive adaptations.

Conflict of interests
The authors declare no competing financial interests with respect to authorship or the publication of this article.

Supporting Information
Additional supporting information can be found in the online version of this article:
Table S1. Individual differences in cognitive ability and trait anxiety on risk taking.
Table S2. Test of robustness of main results when including various sets of covariates.
Facile Synthesis of Formaldehyde-Free Bio-Based Thermoset Resins for Fabrication of Highly Efficient Foams

Bio-based biodegradable foams were formulated from a crosslinkable network structure combining starch, furfuryl alcohol, glyoxal, and condensed tannin in the presence of p-toluenesulfonic acid (pTSA) and azodicarbonamide (AC) as a foaming agent. More importantly, the reinforcement of the gelatinized starch-furanic foam with tannin, sourced from forestry, resulted in excellent compressive strength and a lower pulverization ratio. Moreover, the addition of tannin guaranteed a low thermal conductivity and moderate flame retardancy. Fourier transform infrared (FTIR) spectroscopy confirmed the successful polycondensation of these condensing agents under the employed acidic conditions. Moreover, the catalytic effect of pTSA on the foaming agent induced the liberation of gases, which are necessary for foam formation during crosslinking. Scanning electron microscopy (SEM) showed foam formation comprising closed cells with uniform cell distribution and appropriate apparent density. Meanwhile, the novel foam exhibited biodegradation under the action of Penicillium sp., as identified by the damage to the cell walls of this foam over a period of 30 days.

Introduction
Foam materials can be used as building and packaging materials because of their advantages such as light weight, heat insulation, flame retardancy, sound insulation, and shock absorption [1-4]. However, the oil-derived foams on the market, such as polyurethane (PU) [5,6] and polystyrene (PS) [7,8], are nonrenewable materials that exemplify the high-carbon-emission mode dominated by fossil energy. Although PU foams have good thermal insulation, they burn easily, releasing highly toxic hydrogen cyanide, which causes great harm to human health and the environment. The mainstream energy of industrial development in most countries is still biased towards fossil energy. This results in a high level of carbon emissions, which seriously hinders the development of a low-carbon environment and energy conservation [9]. Biomass energy is clean, nontoxic, widely sourced, and renewable. It has become the fourth largest source of energy after coal, oil, and natural gas [10,11]. Improving the utilization rate of biomass energy is an effective way to achieve a low-carbon environment. Biomass materials originating from agriculture and forestry are important components of biomass energy. Among them, the planting area and total output of corn are second only to rice and wheat, so corn is regarded as an important renewable biomass resource among crops [12]. However, corn grains are mainly used as food, while corncobs are usually disposed of by open-air burning without efficient utilization. The large amount of residue resulting from this combustion releases substantial quantities of SO2 and CO2 into the atmosphere, further polluting the environment [13,14].

Materials
Mimosa (Acacia mearnsii De Willd) tannin extract powder (T) was purchased from the Wuming Grilled Rubber Factory (Guangxi, China). Corn (Zea mays L.) starch (S) was provided by Jilin COFCO Biochemical Energy Sales Co., Ltd (Changchun, China). Furfuryl alcohol (FA, with a purity of 98%), formaldehyde (F, with a purity of 37%), glyoxal (G, with a purity of 40%), p-toluenesulfonic acid (pTSA, with a purity of 97.5%), and silicone oil (with a viscosity of 100 mm²/s) were obtained from Sinopharm, Beijing, China.
Azodicarbonamide (with a purity of 98%) was obtained from Macklin's Reagent Co., Ltd. (Shanghai, China). Penicillium sp. colonies were prepared as follows: wild Armillaria mellea was collected in Daguan, Zhaotong, China. The fungal sample was placed in a sealed bag, then wetted at room temperature and cultured for 1-3 days. After mycelium growth appeared on the surface, it was inoculated into a sterile medium and cultured for 1-2 days at a temperature of 28 °C and a relative humidity of 75% to obtain the Penicillium sp. colonies.

Table 1 gives the detailed formulation for the preparation of the TSGFA resin-based foam. First, starch, tannin, and furfuryl alcohol were mixed in a beaker using a stirrer (JJ-200, Chengdu Test Instrument Co., Ltd., Chengdu, China) for 4 min; then glyoxal was added and the stirring was continued for 4 min before pTSA (65% aqueous solution) was added slowly to obtain the TSGFA resin within a few minutes. The other resins, tannin-starch-formaldehyde-furfuryl alcohol (TSFFA), prepared by replacing glyoxal with formaldehyde, starch-glyoxal-furfuryl alcohol (SGFA), and tannin-starch-furfuryl alcohol (TSFA), were also prepared for comparison with the TSGFA resin. Subsequently, silicone oil as a release agent and AC as a foaming agent were added into the TSGFA, TSFFA, SGFA, and TSFA resins, respectively, and the corresponding mixtures were homogenized using a simple agitator (HM-955, Dong Ling Electric Co., Ltd., Guangzhou, China) at a speed of 1500 r/min for 10 min. Then, each mixture was poured into a mold with a size of 90 mm × 90 mm × 90 mm. Afterwards, the mold was transferred to an oven (101, Rongshida Electronic Equipment Co., Ltd., Kunshan, China), where curing was achieved at 80 °C for 24 h to obtain the TSGFA-, TSFFA-, SGFA-, and TSFA-derived foam samples, respectively. The preparation process of the TSGFA-based foam is shown in Scheme 1.

Scheme 1. Preparation process of TSGFA-derived foam.

Characterizations
The prepared foam samples were placed in a room at 20 °C and 50% relative humidity for 1 day. Subsequently, their performance was evaluated by conducting a series of tests, in which every test was repeated five times and the average value was taken. The structure of the foam samples was elucidated using a Varian-1000 infrared spectrometer (Varian, Palo Alto, CA, USA). The foam samples were ground into powder (particle size around 35-38 µm) with a grinder (jms-130a, Jingfu, Guangzhou, China). One gram of KBr was mixed with 0.01 g of each foam powder, and the runs were conducted over the wavenumber range of 400-4000 cm⁻¹. The measurements of apparent density were carried out according to the Chinese national standard GB/T 6343-2009. The apparent density was calculated using Equation (1):

ρ = (m/v) × 10⁶ (1)
where m is the mass of the foam sample in g, v is the volume of the foam sample in mm³, and ρ is the apparent density in kg/m³.

A scanning electron microscope (S-4160 FE, Hitachi, Tokyo, Japan) was used to observe the microstructural details of foam samples with a size of 10 mm × 10 mm × 10 mm. The cell size and cell wall thickness of the various samples were calculated from the obtained SEM images with the help of the Nano Measurer 1.2 software (Microsoft, Redmond, WA, USA). A universal testing machine (AG-50KN, SHIMADZU, Berlin, Germany) was used to evaluate the compressive strength of foam samples, cut to a size of 30 × 30 × 30 mm³, at 25 °C and a relative humidity of 45-65%, employing a compression rate of 2 mm/min. The pulverization of the foam samples was evaluated according to the Chinese national standard GB/T 12812-2006 on samples with a size of 5 cm × 5 cm × 5 cm. The foam samples were placed horizontally on sandpaper (400 mesh) with a length of 250 mm, while a 200 g iron prop was placed on the foam. The foam sample was pulled from one end of the sandpaper to the other, with the pulling speed kept the same every time. After 30 repetitions, the remaining quantity of each foam sample was recorded, and the pulverization rate was calculated using Equation (2):

M = (m₀ − m₁)/m₀ × 100% (2)

where M is the pulverization rate, %; m₀ is the initial weight of the foam, g; and m₁ is the weight of the foam after being pulverized by the sandpaper, g.

The tensile parameters of the foam were examined according to the Chinese national standard GB/T 528-2009 on samples with a size of 30 × 30 × 15 mm³, with the load rate set at 1 mm/min. A thermal conductivity meter (Ybf-2, Dahua Technology Co., Ltd., Hangzhou, China) was used to measure the thermal conductivity of foam samples, customized into a cylindrical shape with a radius (R) of 50 mm and a height (h) of 10 mm, according to Equation (3), where λ is the thermal conductivity, W·m⁻¹·K⁻¹; m is the mass of the lower copper plate, g; c is the specific heat capacity of the bottom copper plate of the instrument; R_p and h_p are the radius and thickness of the lower copper plate, mm; R is the radius of the foam sample, mm; h is the height of the foam sample, mm; T₁ − T₂ is the temperature difference between the upper and lower copper plates; and dT/dt at T = T₂ is the cooling rate of the copper plate exposed to air.

A thermogravimetric analyzer (TG 209 F3, Netzsch, Selb, Germany) was used to investigate the thermal degradation behavior of the foam samples, employing a heating rate of 20 °C/min under a nitrogen atmosphere over the temperature range of 30 to 800 °C. Penicillium sp. was used to check the biodegradability of the TSGFA-based foam sample following other reported studies [45,46]. The TSGFA-based foam sample was placed in a Petri dish, inoculated with Penicillium sp., and the Petri dish was covered with parafilm (PM996, Bemis, Neenah, WI, USA). It was then kept at 28 °C and 75% relative humidity for 30 days. Eventually, the weight change of the TSGFA foam sample after the action of Penicillium sp. was recorded and compared to the initial weight. After that, the TSGFA foam treated with Penicillium sp. until the 30th day (a size of 3 mm × 3 mm × 3 mm) was fixed using 2.5% glutaraldehyde at 4 °C for 12 h. The growth and action of Penicillium sp. colonies on the TSGFA foam sample were additionally studied using a field emission scanning electron microscope (FESEM), Hitachi SU8010 (Hitachi, Ltd., Tokyo, Japan).
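To make Equations (1) and (2) concrete, the following minimal Python sketch computes both quantities from raw measurements; the function names and example numbers are ours, not from the original study, and the 10⁶ factor simply converts g/mm³ to kg/m³ as implied by the stated units.

```python
def apparent_density(mass_g: float, volume_mm3: float) -> float:
    """Apparent density in kg/m^3 from mass in g and volume in mm^3 (Eq. 1)."""
    return mass_g / volume_mm3 * 1e6  # 1 g/mm^3 = 1e6 kg/m^3

def pulverization_rate(m0_g: float, m1_g: float) -> float:
    """Pulverization rate M in percent (Eq. 2): relative mass lost to abrasion."""
    return (m0_g - m1_g) / m0_g * 100.0

# Example: a 90 mm cube weighing 36.45 g, and a sample losing 0.21 g of 10 g
print(apparent_density(36.45, 90**3))   # 50.0 kg/m^3
print(pulverization_rate(10.0, 9.79))   # 2.1 %
```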
Results and Discussion
Figure 1 shows the FTIR spectra of the S, T, SGFA, and TSGFA foams. The stretching vibration of -OH appeared at 3379-3449 cm⁻¹; however, due to the induction effect of the different side groups, the characteristic absorption peaks of -OH in the SGFA and TSGFA foams differ from those of S and T. The peak at 1720 cm⁻¹ represents the skeleton vibration of aromatic hydrocarbons. The hydroxymethyl group of furfuryl alcohol reacts with C6 or C8 in the A-ring of tannin, and the p-π conjugation effect makes the skeleton vibration of the aromatic hydrocarbons more intense, so that the peak at 1720 cm⁻¹ becomes more obvious in the case of TSGFA. The peaks in the range of 1652-1613 cm⁻¹ are attributed to the C-H stretching vibration of all polymeric and small structural units. Due to the influence of the side groups connected to the aromatic rings, the C-H absorption peak of the TSGFA foam differs from that of S in intensity and position, because the conjugation effect of the aromatic rings causes the C-H absorption peak of the TSGFA foam to shift. The absorption peak at 1567 cm⁻¹ in the case of the TSGFA foam, which is due to the etherification of the tertiary carbon atom of the side group of furfuryl alcohol, appears differently in the cases of SGFA, T, and S, indicating the formation of condensation products from the reaction of tannin with furfuryl alcohol. The peaks at 1518-1455 cm⁻¹, representing the aromatic skeleton, are absent in the spectrum of S, while the peaks at 1283-1125 cm⁻¹ are due to the C-O absorption of all polymeric, oligomeric, and basic structural units. Furthermore, the peak at 1006 cm⁻¹ refers to the C-OH of the furfuryl alcohol higher oligomers. The peak at 747 cm⁻¹ is indicative of the C-C stretching vibration. Due to the induction effect of the different structural units connected by covalent bonds in the case of TSGFA, the characteristic C-C absorption peak, appearing originally at 747 cm⁻¹, shifted to 795 cm⁻¹. It can thus be concluded that the condensation reactions between starch, tannin, furfuryl alcohol, and glyoxal occurred successfully. The main reactions of the TSGFA foam system are shown in Scheme 2a,b.

SEM images, apparent density, and cell size distribution of the different prepared foams are shown in Figure 2. All foams have a closed-cell structure, and the cells are either round or oval-shaped. This indicates that the AC foaming agent decomposes under the action of pTSA to produce carbon dioxide, nitrogen, and other gases that diffuse evenly during the resin solidification to generate homogeneous foams. However, the average cell size and cell wall thickness of the different foam samples vary. The average cell wall thickness and apparent density of the TSGFA-based foam sample are greater than those of the SGFA and TSFA foams, indicating that the added tannin and glyoxal underwent polycondensation reactions with starch and furfuryl alcohol, which increased the integrity of the resin and further upgraded the network structure of the foam system.
At the same time, the TSGFA- and TSFFA-based foam samples acquired similar average cell sizes, but the TSFFA-based foam showed a smaller apparent density than the TSGFA-based foam, indicating that, as a crosslinking agent, formaldehyde is more reactive than glyoxal [35]. This makes the reaction of tannin, starch, and furfuryl alcohol more efficient, leaves fewer oligomers in the TSFFA resin system, improves the compatibility of the foaming agent with the TSFFA resin relative to TSGFA, and makes the foaming process more uniform while generating more bubbles. According to the SEM images, the cells of the TSFFA foam are more uniform than those of TSGFA and almost circular; therefore, the apparent density of the TSFFA-based foam is lower than that of TSGFA.

Figure 3 shows the pulverization ratio and tensile strength of the different starch-based foams, which reveal the extent of damage the foam suffers when it is abraded [41]. The lower the pulverization ratio, the less likely the foam is to be damaged. The pulverization ratio of the SGFA foam sample (1.6%) is lower than that of TSGFA (2.1%) and TSFFA (1.7%), which indicates that the addition of tannin slightly increases the pulverization ratio of the foam; this is attributed to the high hardness acquired by the tannin resin after curing [29]. Meanwhile, according to the tensile strength data of the different foams, the tensile strength of the SGFA-based foam is higher than that of the TSGFA foam sample. Therefore, the addition of tannin improves the foam hardness but reduces its toughness. At the same time, the pulverization ratio of the TSFFA foam is lower than that of the TSGFA foam, and its tensile strength is higher, which is accounted for by the higher reactivity of formaldehyde with respect to glyoxal, building up a stronger network structure.
However, glyoxal, as a crosslinking agent, improved the crosslinking of the tannin, starch, and furfuryl alcohol system and further reduced the pulverization ratio of the foam, which reached 4.3% for the glyoxal-free TSFA-based foam. More importantly, compared with some biomass foams, such as the tannin-furanic-soybean protein isolate (SPI)-based foam (3.68%) [47] and the tannin-formaldehyde-furanic foam (16.49%) [48], the TSGFA-based foam exhibited a lower pulverization ratio.

Figure 4 shows the stress-strain curves of the different prepared foams. It can be seen that the compressive strength at yield of the TSGFA-based foam (1.751 MPa) is higher than that of the SGFA-based foam (1.486 MPa), which corroborates that the addition of tannin improves the compressive strength of the foam; this depends on the phenolic ring structure of tannin, which provides higher hardness [29].
At the same time, the strength at yield of the TSFFA-based foam (2.311 MPa) is higher than that of the TSGFA-based foam, which is attributed to the higher reactivity of formaldehyde compared with glyoxal. It can also be seen from Figure 2 that the cell wall of the TSFFA-based foam is thicker than that of the TSGFA-based foam, which is consistent with the pulverization ratio results. At the same time, the compressive strength at yield of the TSFA-based foam (1.183 MPa) is lower than that of the TSGFA-based foam, indicating the role of glyoxal in promoting the condensation reaction of tannin, starch, and furfuryl alcohol. In addition, the compressive strength at yield of the TSGFA-based foam is much higher than that of the tannin-formaldehyde-furanic foam (0.18 MPa) and the tannin-furanic-soybean protein isolate (SPI)-based foam (0.5 MPa) [47,48]. Thus, bio-based foam structures prepared by combining tannin and starch as condensing agents, in the presence of glyoxal as a crosslinking agent and AC as a foaming agent, present potential for more applications compared with foams based exclusively on tannin. It is worth noting that the strain at break of the TSFFA-based foam is not affected even with the elevation of the maximum strength.

Thermal conductivity is an important index of the thermal insulation efficiency of a foam material. Figure 5 shows the thermal conductivity of the different prepared foams, which are all characterized by a closed-cell structure. Therefore, compared with some commercial foams, such as polyethylene foam (0.047 W·m⁻¹·K⁻¹) [9], the starch-based foams acquired lower thermal conductivity.
More importantly, although the average cell size of the TSGFA-based foam (210 µm) is larger than that of the SGFA foam (200 µm) (Figure 2), the addition of glyoxal improves the crosslinking of the foam system and produces a more uniform closed-cell distribution, which leads to a lower thermal conductivity for the TSGFA-based foam (0.030 W·m⁻¹·K⁻¹) compared with that of TSFA (0.035 W·m⁻¹·K⁻¹). Meanwhile, the thermal conductivity of the TSGFA-based foam, prepared using glyoxal instead of formaldehyde, is similar to that of TSFFA, which was prepared using formaldehyde. These results show that the TSGFA-based foam combines considerable mechanical strength with good thermal insulation, which expands its application prospects.

Figure 6 displays the TG-DTG curves of the different foam structures prepared in this study. The mass of the foams decreases slightly between 100 and 280 °C, which reflects moisture loss and the gas generated by the residual blowing agent (AC) as it interacts with pTSA at higher temperature. Figure 6b indicates that, with increasing temperature, the highest degradation rates occurred at 200-280 and 420-470 °C, respectively. In the range of 200-280 °C, the starch in the foam starts to degrade, while the blowing agent decomposes into carbon dioxide and nitrogen. In the range of 420-470 °C, the starch oligomers degrade further. It is obvious that the mass loss of the SGFA-based foam is higher than that of the TSGFA-based foam, which indicates that the tannin addition improves the heat resistance of the foam. In addition, the mass loss of the TSFA-based foam is larger because no glyoxal or formaldehyde is added, considering that the reactivity of tannin or starch with furfuryl alcohol is poor in the absence of either of these aldehydes. It can also be seen from the curves that the TSGFA- and TSFFA-based foams behaved almost the same over the range of 300-800 °C, which illustrates their similar heat resistance.
A butane spray gun was used to characterize the combustion performance of the TSGFA-, TSFFA-, SGFA-, and TSFA-based foams, and the results are presented in Figure 7. At the beginning of the test, the four foam samples were all located at the same position relative to the nozzle of the spray gun, so as to reach the same combustion temperature. After 65 s, the TSGFA- and TSFFA-based foams were partially burned to red (Figure 7c,g). After cooling for a certain time, only a small part in both cases, the part in contact with the flame, was broken, while the untouched part remained intact (Figure 7d,h). In contrast, after 65 s, the SGFA and TSFA foam samples were observed to ignite (Figure 7k,o), and after cooling for a certain time, both samples were completely broken and carbonized. This can be explained by the fact that the addition of tannin and glyoxal to the starch-furfuryl alcohol resin system allows the resin to build a dense network structure, which enhanced the flame retardancy of the material by delaying flame diffusion.

Penicillium sp. was chosen to evaluate the biodegradability of the foam because it has a marked degradative effect on polysaccharides [49]. It can be seen from Figure 8a that, at the beginning, Penicillium sp. (marked by a rectangle) did not contact the foam sample. On the tenth day, Penicillium sp. (marked by an arrow) had surrounded the TSGFA foam. After 20 days, Penicillium sp. (marked by an arrow) had run into the TSGFA foam.
By completion of 30 days, the foam was covered by Penicillium sp. (marked by an arrow). The mass loss of a material is the most commonly used measure to follow the degradation induced by fungi [50]. Figure 8b shows the mass loss of the TSGFA-based foam induced by Penicillium sp. The mass loss of the foam was only 0.24% on the 10th day; with the elapse of time, it reached 0.68% by the 30th day. In order to further confirm the biodegradation of the TSGFA-based foam by Penicillium sp., the foam treated with Penicillium sp. for 30 days was examined using FESEM. The results are shown in Figure 9a-c, which reveal a large number of Penicillium sp. colonies (marked by an arrow) diffusing into the TSGFA sample. More importantly, some cell walls of the TSGFA-based foam were damaged (marked by the square). Further, it is clear from Figure 9d that the cell wall of the TSGFA-based foam was perforated by some Penicillium sp. colonies (marked by the arrow). These results present strong proof that the TSGFA foam is liable to undergo biodegradation.

Conclusions
1. Different bio-based foam structures in crosslinked form can be prepared from a polycondensation reaction incorporating starch, furfuryl alcohol, glyoxal, and condensed tannin under mild acidic conditions and an appropriate foaming agent. The selection of the catalytic system and foaming agent determines, to a large extent, the cell formation characteristics of the foam structure, while the mechanical strength depends on the condensing agents' formulation.
2. The addition of tannin into the formulation contributed significantly to the high compressive strength and low pulverization ratio.
3. The crosslinking between tannin, starch, glyoxal, and furfuryl alcohol under the employed reaction conditions provoked the formation of closed cells with uniform cell distribution and appropriate apparent density, which contributed to the good thermal insulation and flame retardancy of the foam.
4. The biodegradability of the prepared foams under the action of Penicillium sp. is ascribed to the bio-based nature of the structural units involved in building the chemical skeleton of the foam.
Incorporating rivalry in reinforcement learning for a competitive game

Recent advances in reinforcement learning with social agents have allowed such models to achieve human-level performance on certain interaction tasks. However, most interactive scenarios do not have performance alone as an end-goal; instead, the social impact of these agents when interacting with humans is as important and largely unexplored. In this regard, this work proposes a novel reinforcement learning mechanism based on the social impact of rivalry behavior. Our proposed model aggregates objective and social perception mechanisms to derive a rivalry score that is used to modulate the learning of artificial agents. To investigate our proposed model, we design an interactive game scenario, using the Chef's Hat Card Game, and examine how the rivalry modulation changes the agent's playing style and how this impacts the experience of human players in the game. Our results show that humans can detect specific social characteristics when playing against rival agents compared to common agents, which directly affects the performance of the human players in subsequent games. We conclude our work by discussing how the different social and objective features that compose the artificial rivalry score contribute to our results.

Introduction
The social aspects of interaction are usually overlooked when optimizing an artificial agent through reinforcement learning [1]. Most of the training loop is done in an offline manner, or focuses on optimizing objective metrics that do not directly involve social aspects, for instance by using planners [2] or human annotation feedback [3]. Some common success metrics in this regard involve solving the task in fewer steps, reducing predicted values, or achieving some predefined intermediate objective goals. When interaction with humans is the main goal, these artificial agents are evaluated mostly based on their objective performance [2]. In the few examples where humans are present in the loop, the success measures are mostly related to the embodied interaction [4,5], and not to the underlying decision-making process that these agents have learned.

One scenario where these problems are very evident is competitive interaction. In a competitive game, an agent can learn to adapt towards its opponents by using reinforcement learning [6], even when these opponents are humans [7]. However, it is extremely difficult to measure the social aspects of this interaction without relying on typical human-robot or human-computer interaction schemes [8]. Although providing important insight into some social aspects, these evaluations usually focus on controlled lab scenarios [9] and on the production of different robotic behaviors [10] and dialogues [11]. This in turn neglects exploring how the agents' various learning strategies influence their explicit behaviours and their interaction with humans [12], despite this being one of their most important characteristics. This can be evidenced even in the new area of explainable reinforcement learning [13,14].
In this study, we address the problem of including social aspects in the learning strategies of artificial agents in a competitive scenario. We propose an objective human-centered metric based on rivalry [15] to compose the reward function of the agents. Rivalry is a subjective social relationship arising between two actors, based on the competitive characteristics of an individual, as well as the increasing stakes and psychological involvement in the situation [16]. We chose rivalry as it showcases the competing relation between individuals, which often affects their motivation and performance during gameplay [16,17]. We model rivalry as a function of objective factors (such as game performance) and subjective information (such as certain personality traits and competitiveness level), and evaluate our model using the Chef's Hat Card Game [18].

To obtain the social features of rivalry and map the intrinsic personality traits arising from human perception of learning agents, we first run an exploratory experiment where human players face artificial agents implemented using Deep Q-Learning (DQL) [19] and Proximal Policy Optimization (PPO) [20]. Both learning agents implement COPPER [21], a continual learning adaptation for Chef's Hat agents. Using questionnaires, we collect how these agents impact the human players in terms of competitiveness, and how humans perceive the social characteristics of these agents.

Using the information compiled from this experiment, we measure how humans perceive such agents in terms of rivalry. We then attach different social characteristics to each artificial opponent and use them in a rivalry-synthesizing mechanism to calculate the rivalry of an agent towards an opponent. We then run two ablation studies: one to develop a social characteristic predictor that is used by the agents when perceiving their opponents, and one to find the best manner of integrating rivalry into the learning routines.

Finally, we run a second experiment, where each artificial agent synthesizes a rivalry score against human players. By collecting the same questionnaire information as in the first experiment, we can contrast the impact of the rival agents on the game with that of non-rival agents.

Our results demonstrate that both learning agents are perceived as distinct social agents when playing against a human, in particular when compared with the random agent. When the learning is modulated by the rivalry score, we observe a strong contribution of rivalry to the performance of the human players. We discuss these results in terms of the contribution of the social and objective features to the formation of the rivalry function, and how they impact human perception. Ultimately, we detail how the performance of each agent changes when using the rivalry modulation.

Related Work
Reinforcement learning has received ample attention in recent years, in particular for the development of artificial agents. However, understanding the social role and the derived impact of social interaction within a learning mechanism is not yet fully explored. In particular, in multi-agent competitive scenarios, there is still a focus on performance-based metrics, which makes it difficult to summarize, or even to verify, the social components that are directly affected when these models are deployed in scenarios involving humans. In this section, we detail the most relevant literature in this field, on which we base our explorations.
Reinforcement Learning in Competitive Games
In the late 1990s, several researchers tried to identify the impact of the Deep Blue artificial chess player [22] on the development of artificial intelligence [23,24]. They all argue that, beyond the technical challenge of beating a human, there is an underlying impact on how such an agent affects the opponents' behavior during the entire interaction. Over time, these investigations were set aside by the mainstream community, which focused mostly on solving more complex problems. This vision is reflected in the recent development of deep reinforcement learning and the research on training artificial agents to play competitive games that has flourished since [25]. AlphaGo [26] demonstrated that these agents can play competitively against humans in very complex games. The recent development of agents that play the StarCraft computer game [7] pushes these boundaries even further. These agents learn how to adapt to dynamic environments, how to map hypercomplex states and actions, and how to learn new strategies [27]. Most of the studies, however, focus on the final goal of these agents: how to be competitive against humans. As such, none of them focus on understanding the impact that these agents have on human opponents.

DQL and PPO Playing Behavior on the Chef's Hat Card Game
In the same development wave, the design and development of reinforcement learning agents to play the four-player Chef's Hat competitive card game was recently investigated [28]. These agents were based on Deep Q-Learning (DQL) [19] and Proximal Policy Optimization (PPO) [20], and successfully learned how to win the game in different tasks: playing against random agents, self-play, and online adaptation towards the opponents. However, it was observed that these agents present different behavior during gameplay, while maintaining a similar objective performance measured by overall wins over a series of games. What has not yet been done is to measure the social impact that such strategies can have when the agents play against humans.

Human-centric Analysis of RL
When analyzing the impact of artificial agents on humans, there are now decades of studies focused on Human-Robot [29] and Human-Computer [30] Interaction (HRI and HCI, respectively). Social and affective computing research suggests humans tend to treat computers as social actors [31], where attributes such as personality and emotions are modeled to affect how these agents are perceived [32,33]. In an interaction setting, these attributes can be used to change the behavior of humans or improve interaction in different contexts [34,35]. In RL, however, most of these studies focus on optimizing RL agents to solve a specific task, even a social one, without much feedback on the social aspects of the task as part of their learning mechanism. Such agents are usually designed to learn an expected outcome, such as improving engagement [4], or imitating humans [6]. None of the most recent studies focus on extracting the intrinsic behavior bias that different learning schemes impart to the final agent.
Rivalry in Cooperative and Competitive Games
One way to explain the behavior of an agent as a factor of its learning strategy is to measure its impact on humans. In cooperative scenarios, there exist several social metrics that take interaction into consideration [36,37], but most of them focus on the subjective impressions humans have of the embodied interaction [38], the subjective quality of the interaction [39], or the efficiency of the interaction in solving a task [40].

In a competitive game, however, one of the most informative metrics is the rivalry [15] between the human and the agents. Rivalry is defined as a competitive relation between individuals or groups, characterized by the subjective importance placed upon competitive outcomes (i.e., win or lose) independent of the objective characteristics of the situation (e.g., tangible stakes) [16,17]. A proposed theoretical model of rivalry suggests that the antecedents of rivalry are similarity factors, competitiveness, and the relative performance of the agents [16]. The presence of a rivalry effect, in turn, affects the motivation of the individual and their performance [17]. We aim to evaluate how different agents affect the users' perception and their performance due to the increased competitiveness and rivalry effects.

In competitive games, rivalry is a central concept that directly affects the opponent's behavior through their motivation to play. In human-to-human scenarios and in economics, a healthy rivalry is considered to be an important factor that can positively affect the performance of opponents, while in other situations it can also contribute to unnecessary risk-taking behavior [41].

In human-in-the-loop online learning scenarios for competitive games, the absence of rivalry or competitiveness might result in the human opponent losing motivation to play the game and showing sub-optimal performance during gameplay. The agents, which are learning actively from the human, are bound to learn from this poor performance input, which in turn would result in suboptimal learning. By introducing the notion of rivalry, we expect to increase motivation and engagement in the game, and thus also the users' performance.

Proposing Artificial Rivalry
Our rivalry modulation acts directly on two types of agents: a Deep Q-Learning (DQL) one and a Proximal Policy Optimization (PPO) one. Both agents were recently adapted and optimized for the Chef's Hat game through the COPPER modulation [21]. COPPER introduces an opponent-specific experience-prioritizing memory used to improve the continual learning capabilities of each agent when playing against known opponents. We use these agents' implementations as our artificial opponents and apply the rivalry modulation to their learning mechanism. It is important that our agents continually update their playing strategy when playing against the human opponents, so that the rivalry score is created and updated accordingly.

A Chef's Hat Agent
The DQL and PPO implementations of the agents were chosen due to their success in learning different strategies [28] and their good performance when playing against human players [21]. Both agents are implemented as COPPER-based agents and are set to keep learning during all of our experiments.
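Before detailing the game and the update rules (spelled out in the next subsections), a minimal NumPy sketch of the kind of Q-learning step with a target network that the DQL agent performs may be useful; the table sizes, hyperparameter values, and state discretization below are illustrative placeholders, not the values used in the study, and the real agent uses a neural network rather than a table.

```python
import numpy as np

N_STATES, N_ACTIONS = 1024, 200   # toy state discretization; 200 actions as in Chef's Hat
alpha, gamma = 0.1, 0.95          # learning rate and discount factor (assumed values)

Q = np.zeros((N_STATES, N_ACTIONS))   # online Q-values
Q_target = Q.copy()                   # time-delayed snapshot used for the TD target

def td_error(s, a, r, s_next):
    """Temporal-difference error, with the target network providing max Q(s', a)."""
    return r + gamma * Q_target[s_next].max() - Q[s, a]

def q_update(s, a, r, s_next):
    """One Q-learning step: Q <- Q + alpha * TD."""
    Q[s, a] += alpha * td_error(s, a, r, s_next)

def sync_target():
    """Refresh the target snapshot after a fixed number of training steps."""
    global Q_target
    Q_target = Q.copy()
```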
The Chef's Hat game, illustrated in Figure 1, is a multiplayer competitive card game. At the beginning of the game, each player receives 17 cards, and the player who discards all of them first wins the match. The full rules of the game and the capability of agents to learn different strategies were recently explored in different studies [18,21,28]. The game state is composed of 28 values, referring to the 17 possible cards each player has in hand and the 11 cards on the game board. The action space is represented by 200 different discard actions an agent can take, which reflects the complexity of strategy formation in this game.

DQL
Our DQL agent learns a state-action value function Q(S, A), where S is the state, in our case represented by the 28 values composed of the cards at hand and the cards on the board. The actions, A, are expressed using the 200 discrete values for all the possible actions. To update the Q-values, the algorithm uses the following function:

Q(s_t, a_t) ← Q(s_t, a_t) + α · TD(s_t, a_t)

where t is the current step, α is a pre-defined learning rate, and TD is the temporal difference function, calculated as:

TD(s_t, a_t) = r_t + γ · max_a Q(s_{t+1}, a) − Q(s_t, a_t)

where r_t is the obtained reward for the state (s_t) and action (a_t) association, γ represents the discount factor, a modulator that estimates the importance of future rewards, and max_a Q(s_{t+1}, a) is the estimate of the Q-value for the next state. This agent also implements a target model, which is a time-delayed policy that receives a snapshot of the original policy after a certain number of training steps. The target model is used to obtain the target Q-values when calculating the updated TD:

TD(s_t, a_t) = r_t + γ · max_a Q_target(s_{t+1}, a) − Q(s_t, a_t)

where max_a Q_target(s_{t+1}, a) is the Q-value obtained from the target network.

PPO
Our Proximal Policy Optimization (PPO) [20] agent implements an actor-critic model with an advantage function of the form

A(s, a) = r + γ · V(s') − V(s)

where V(s) represents the critic value for a given state and V(s') for the future state. The advantage function is used to stabilize the training of the actor network, while the critic network uses the discounted rewards as its target. The actor-critic base model is updated by implementing adaptive penalty control, based on the Kullback-Leibler divergence, to drive the updates of the agent at each interaction. This proved important for learning different and more optimized strategies [28], probably due to how this method reduces the need for a large-memory replay [42,43].

COPPER
Both agents implement the COPPER modulator [21], which extends prioritized experience replay (PER). Traditional PER can be expressed as:

PER(i) = p_i^a / Σ_k p_k^a

where PER(i) is calculated based on the network's loss after calculating TD in a forward pass of the network (using an input i), a indicates how much we want to rely on the priority, p is the priority, and k indexes the total number of saved experiences. COPPER, on the other hand, introduces a new opponent-specific term (o) into this prioritization. In recent experiments, COPPER was shown to be much more effective at learning new strategies against recurrent opponents, in particular when playing against humans [21].

Modeling Rivalry From a Human Perception
To optimally define the impact each agent has on the players, we used a standard formalization of rivalry [16]. Rivalry can be defined as a subjective social relationship arising between two actors based on the competitive characteristics of an individual, as well as the increasing stakes and psychological involvement in the situation. Thus, a proposed theoretical model of rivalry, illustrated in Figure 2, suggests that the antecedents of rivalry are similarity factors (S_a), competitiveness (C), and the relative performance (P) of the agents [16].

Fig. 2. The theoretical model of rivalry used for our new rivalry metric proposition, based on the framework proposed by [16].
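As a minimal illustration of how these three antecedents can be combined, the sketch below averages them with equal weights, which matches how the agent-side score R_h is defined later in the text; whether the human-side score used exactly this aggregation is our assumption, and the function names are ours. The performance term follows Eq. (10) below.

```python
def relative_performance(points_h: float, points_a: float) -> float:
    """Score gap between the human and opponent a, scaled by 15 as in Eq. (10) below."""
    return (points_h - points_a) / 15.0

def rivalry_score(similarity: float, competitiveness: float, performance: float) -> float:
    """Rivalry toward an opponent as the unweighted mean of its three antecedents."""
    return (similarity + competitiveness + performance) / 3.0

# Example: a moderately similar, highly competitive opponent that is 3 points behind
print(rivalry_score(0.5, 0.89, relative_performance(9, 6)))  # ~0.53
```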
Rivalry research suggests that individuals tend to evaluate their abilities by comparing their performance to persons who have similar characteristics to themselves. The similarity can be measured in demographics [16], gender [44], personality [16], perceived traits [45], and rank in competition [17] to assign social or behavioral attributes to the other. For this work, in addition to the ranking and performance similarity, we included trait similarity as a factor consisting of competence, agency, and communion traits as indicators of relevant behavioral attributes. These traits are frequently used in social stereotype research (e.g., gender, nationality, age, status) [46] and have been shown to affect liking [47]. Moreover, social perception [48] has previously been shown to affect competitive behavior [49]. Agency, competence, and communion traits have also been used in virtual agent research to examine users' judgments of expected agent behavior [50]. These traits were chosen as representative ones due to their comprehensibility, which facilitates self- and other-assessment [51]. We collect the agency (ag), competence (ct), communion (cm), and competitiveness (C) assessments of each of the agent players, from a human perspective, through an exploratory experiment. Once these values are calculated, we can define the rivalry score (R_a) towards an opponent a, with the relative performance given by:

P_a = (points_h − points_a)/15 (10)

where the index a denotes one of the opponents and the index h denotes the human, whose ratings of the social behavior of the opponents were obtained in a prior human study. The individual performance of each player is calculated by the Chef's Hat environment as the sum of the average score of each game (given by the sum of the points in each round, divided by the number of rounds in the game), divided by the total number of played games.

The rivalry score (R_a) is used as our main evaluation metric of how the agents impact the human players; thus, we expect it to change according to the game's development. As the participants evaluate the entire game behavior of an agent, and the only difference among the artificial players is the way they play the game, the measure of rivalry directly reflects the user's perception of the outcome of the reinforcement learning strategies.

Rivalry as a Learning Modulator
Once rivalry can be defined from a human perspective, we need to model it from an agent perspective and use it as a learning modulator. This will make the agents create a sense of rivalry against the other players and, most importantly, will help the agents develop a behavior that makes a human identify them as a rival.

To achieve a rivalry modulator, we calculate rivalry in a similar manner as we do from a human perspective. To predict similarity, though, the agents use a similarity predictor trained on the human responses collected during our exploratory experiment. The agent-player similarity predictor (pr_a(h)) matches the state+action pairs chosen by humans (h) with the human-assessed competence, agency, and communion traits (ag_a, ct_a, and cm_a), and it follows the survey scales used in virtual agent research for the calculation of the scores for each trait [50].
Our similarity predictor was built as a multi-layer perceptron (MLP) neural network mapping an interval of action/state pairs to a set of similarity scores. The similarity predictor of each agent inferred, during gameplay, the traits of each human the agent played against. Each type of agent (DQL and PPO) also has a single set of traits associated with it, derived from the human judgments provided in the exploratory study. The similarity from an agent's (a) perspective (S_h) is then obtained by comparing the traits predicted for the human opponent with the agent's own trait set.

The performance measure of the agents was given by their own assessment of their actions. To achieve this, each agent computed the introspective confidence (ic_a) [52] of each action, which scales the selected Q-value of an action towards the final goal, using a logarithmic transformation that computes the probability of success; in our case, this was the probability of winning the game. The introspective confidence gives us a self-assessment of the agent's actions, based on its own game experience. We use the introspective confidence, aggregated over the actions (act) the agent took during the game, as the agent's competitiveness (C_h).

The relative performance (P_h) is calculated similarly to the human perspective, but from the agent's point of view. Thus, the predicted rivalry (R_h) is defined as the mean of the previous three factors:

R_h = (S_h + C_h + P_h)/3

To guarantee that the agent learns how to be a rival of an opponent, we include the predicted rivalry (R_h) in the final reward of the agent through a simple weighted average, with the weight optimized in an ablation study described below.

Experimental Setup
Our rivalry modulation is directly related to a very specific reinforcement learning task: multi-agent competitive interaction. In this regard, the Chef's Hat card game was chosen as our primary investigation environment. As our experiments involve a mix of human-based studies and artificial agent optimization, we separate them into four categories: first, we perform an exploratory study to understand how the PPO and DQL agents are perceived by humans. Our second experiment uses the information collected from the exploratory study to train and validate the similarity predictor neural network. The third experiment optimizes and evaluates the rivalry learning modulator. Finally, we run a human-based study to identify the impact of the rival agent on human perception.

Chef's Hat Environment
The Chef's Hat Environment is an OpenAI Gym-based implementation of the Chef's Hat card game. It includes all the rules and mechanics of the original game and can be used to train and evaluate artificial agents. The game itself is based on turns, and on each turn a player can perform a discard action or a pass action. For every match played, the players gain points based on their finishing position, with the winner gaining 3 points. A full game consists of several matches, until one of the players reaches 9 points. Following our previous experiments with the Chef's Hat environment, we use a global reward scheme that only gives a full reward once the agent performs an action that leads to it winning the game. We also start the game with pre-trained agents, which learned how to play the game using a self-play strategy [28] and are available through the Chef's Hat Player's Club repository. To collect human data, and to allow humans to play against the agents, we use the Chef's Hat Online software, which is a browser-based interface for the Chef's Hat Environment and allows human players to participate in the game.
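Since the environment follows the standard OpenAI Gym interface, interacting with it reduces to the usual reset/step loop. The sketch below shows that loop in generic form; the environment id, the action selection, and the reward handling are placeholders, not the actual API of the Chef's Hat repository.

```python
import gym

# Placeholder id: the real environment comes from the Chef's Hat repository,
# whose registration name may differ.
env = gym.make("ChefsHat-v0")

obs = env.reset()
done = False
while not done:
    # A trained agent would map the 28-value observation to one of the
    # 200 discard/pass actions; random sampling stands in for that policy here.
    action = env.action_space.sample()
    obs, reward, done, info = env.step(action)
```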
Experimental Setup

Our rivalry modulation is directly related to a very specific reinforcement learning task: multi-agent competitive interaction. In this regard, the Chef's Hat card game was chosen as our primary investigation environment. As our experiments involve a mix of human-based studies and artificial agent optimization, we separate them into four categories: first, we perform our exploratory study to understand how the PPO and DQL agents are perceived by humans. Our second experiment uses the information collected from the exploratory study to train and validate the similarity predictor neural network. The third experiment optimizes and evaluates the rivalry learning modulator. And finally, we run a human-based study to identify the impact of the rival agent on human perception.

Chef's Hat Environment

The Chef's Hat Environment is an OpenAI GYM-based implementation of the Chef's Hat card game. It includes all the rules and mechanics of the original game and can be used to train and evaluate artificial agents. The game itself is based on turns, and on each turn a player can perform a discard action or a pass action. For every match played, the players gain points based on their finishing position, with the winner gaining 3 points. A full game consists of several matches until one of the players reaches 9 points.

Following our previous experiments with the Chef's Hat environment, we use a global reward scheme that only gives a full reward once the agent performs an action that leads to it winning the game. We also start the game with pretrained agents, which learned how to play the game using a self-play strategy [28] available through the Chef's Hat Player's Club repository.

To collect human data, and to allow humans to play against the agents, we use the Chef's Hat Online software, a browser-based interface for the Chef's Hat Environment that allows human players to participate in the game.

Exploratory Study: Understanding the Agents

In our first experiment, we perform an exploratory study using the Chef's Hat Online environment. The goal of this study is two-fold: we want to understand and measure the impact of each learning agent on the rivalry attribution, and we collect the human attributions to these agents and use them to describe both agents socially when synthesizing rivalry.

For this study, we implement three agents: a DQL and a PPO agent, both with COPPER, and a naive agent that only performs random actions during the entire game. A human plays a 9-point game against these agents, and we collect the entire game status over the entire experiment, which includes all players' performance. We also run two questionnaires, one at the beginning of the game to collect the human players' self-assessment, and one at the end of the game to investigate the participants' perspective on the agents.

In our questionnaire, we follow the standard trait questions to measure competence, agency, and communion [46]. In particular, we measure the competence score as an average of the scores of the items related to ambition, courage, decisiveness, and aggressiveness; agency as an average of intelligence, innovation, organization, and compassion; and communion as an average of compassion, affection, emotional response, and sensitiveness. Each of these terms is rated on a Likert scale that varies from 1 (Not at all) to 5 (Very).

We also asked humans to self-assess their competitiveness and the perceived competitiveness of each agent once the game was over. Each agent was only identifiable by one of three names (Evan, Dylan and Frankie), in order to base the entire evaluation on their game behaviour alone. The questionnaires are available in our Appendix. In total, 28 different persons played the game.

Using the collected information, we calculate the rivalry score from the human perspective for each agent. We then proceed with a statistical test to identify the contribution of each of the terms that compose rivalry (similarity, relative performance and competitiveness).

Rivalry Ablation: Similarity Predictor

The goal of this experiment is to obtain a reliable similarity predictor to be used by the agents. We implement it as a fully-connected multi-layer perceptron (MLP) neural network that receives as input a sequence of pairs of action and cards on board, and predicts a similarity label. We fine-tune and evaluate it using the data obtained from the previous exploratory study.

To optimize this network we used a Tree-structured Parzen Estimator optimizer (Hyperopt [53]) and cross-validation, with 70% of the collected data used for training and 30% for testing. We report the optimization search space and the final architecture in Appendix Section A.3. We calculate the performance in terms of mean accuracy over 30 runs.
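As a reference for the similarity-predictor tuning just described, the sketch below pairs scikit-learn's MLPClassifier with Hyperopt's TPE optimizer over a 70/30 split. The placeholder data, the feature encoding of the action/board pairs, and the small search space shown are all illustrative; the actual search space is the one reported in the paper.

```python
import numpy as np
from hyperopt import fmin, tpe, hp, Trials
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

# Placeholder encodings of action/cards-on-board pairs and labels.
rng = np.random.default_rng(0)
X = rng.random((2000, 40))
y = rng.integers(0, 5, 2000)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, train_size=0.7,
                                          random_state=0)

space = {
    "hidden": hp.choice("hidden", [(64,), (128,), (256, 128)]),
    "lr": hp.loguniform("lr", np.log(1e-4), np.log(1e-2)),
}

def objective(params):
    clf = MLPClassifier(hidden_layer_sizes=params["hidden"],
                        learning_rate_init=params["lr"],
                        max_iter=300).fit(X_tr, y_tr)
    return 1.0 - clf.score(X_te, y_te)  # Hyperopt minimizes the loss

best = fmin(fn=objective, space=space, algo=tpe.suggest,
            max_evals=20, trials=Trials())
print(best)
```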
Rivalry Ablation: Rivalry Learning Modulator

To maximize the effect of the rivalry modulator on the agent's learning, without losing the focus on winning the game, we run an optimization study to find the best weight for adding rivalry to the agent's reward function. Our rivalry score is defined in such a way that it increases when an agent plays against itself, as the social behavior, performance, and game strategy of the agents are then the same. Thus, we optimize the rivalry weight in the reward by maximizing the rivalry score while maintaining a similar performance of the agents against each other.

We run 1000 simulation games, where an agent plays against three other agents, among which one is another instance of itself. During all the games, we treat the rivalry score weight as an updatable parameter of the network and optimize it towards the opponent that has the same implementation as the agent. We stop the training when the rivalry score of both agent instances towards each other is maximized. We track the rivalry evolution over time, together with the agents' performance, to guarantee that the optimization reaches a satisfactory state.

Rivalry Impact: Playing Against Humans

Once the rival agents are implemented and we have found the optimal reward weight, we deploy them into the same scenario as our first exploratory experiment. We repeat the same settings, but now the human plays the game against a rival agent, a non-rival COPPER agent that continues learning, and a non-rival agent that does not learn during the game. In this experiment, as we are interested in the differences within these agents, we only implemented DQL-based agents. The goal of this experiment is to verify the impact of rivalry in terms of perceived game play. We collect the same data, using the same questionnaires, and calculate the rivalry score of all the participants towards the agents.

We then run statistical tests to identify the impact of the rival agent in the game, in particular when compared to the non-rival agents. This experiment aims to identify the contribution of rivalry in terms of social perception, but also in terms of the final performance of the agents.

Exploratory Study

A total of 28 games were completed and were therefore included in all subsequent evaluations in this experiment. Of these games, 13 were played in English, 14 in Portuguese, and 1 in Italian. All of them, however, were played using the same Chef's Hat Online platform and thus followed the same game rules. Of the participants who played these games, 54% were between the ages of 31-50, 36% reported to be between 18-30, 7% reported to be aged over 50, and one participant chose not to disclose their age.

On average, each game was played for 3.34 matches (SD = .47), with an average score of 2.25 (SD = .71). We calculated the rivalry scores for each participant towards each of the three agents, based on the proposed equations (see Section 3.3), by using the Similarity, Competitiveness and Performance measures.

The relative performance scores of the three agents showed a significant difference (χ² = 28.1, df = 2, p < .001), where both the DQL (Durbin multiple comparisons, p < .001) and PPO (p < .001) agents were significantly different from the random agent. No significant difference was seen between the DQL and PPO agents (p = .23). Finally, we used the self-attributed values for the competitiveness scores for all the users (M = .89, SD = .17). Rivalry scores calculated using these three measures were significantly different among the agents (χ² = 25, df = 2, p < .001). Pairwise comparisons revealed that both the DQL (p < .001) and PPO (p < .001) agents were significantly different from the random agent. No difference was found between the DQL and PPO agents (p = .791).
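The nonparametric comparisons above can be reproduced with standard tooling; the sketch below runs a Friedman test across the three agents' per-participant scores with Conover-Friedman pairwise follow-ups, assuming SciPy and the scikit-posthocs package and using random placeholder scores rather than the study data.

```python
import numpy as np
from scipy.stats import friedmanchisquare
import scikit_posthocs as sp

# One score per participant (n = 28) and agent; placeholders only.
rng = np.random.default_rng(0)
dql, ppo, rnd = rng.random(28), rng.random(28), 0.5 * rng.random(28)

stat, p = friedmanchisquare(dql, ppo, rnd)
print(f"chi2 = {stat:.2f}, p = {p:.4f}")

# Pairwise follow-ups on the (participants x agents) matrix.
scores = np.column_stack([dql, ppo, rnd])
print(sp.posthoc_conover_friedman(scores))
```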
Similarity Predictor

In our first experiment, we collected a total of 16,000 action/cards-on-board pairs and associated them with the "Decisive", "Innovative" and "Creative" labels, either self-assessed by the humans or attributed to the agents during the human data collection, using the average of these labels as our final similarity score. We report the optimized search space and the final architecture in Table 1.

Table 1 Search space and final architecture, in bold, used to optimize the similarity predictor.

The final performance of the similarity predictor, after running 30 cross-validation routines when trained with the data collected from humans and agents, is 83% accuracy, with a standard deviation of 0.5.

Rivalry Learning Modulator

After running 1000 games, the final reward was obtained as a weighted combination of R_o, the original reward of the agent, and R_h, the rivalry score, using the optimized weight value. We also tracked the performance of the agent over time. Without rivalry, when playing in the same setting, a DQL agent obtained an average game score of 1.3, with a standard deviation of 0.1. When the rivalry modulation was used, the average score stayed at 1.2, with a standard deviation of 0.2.

Rivalry Impact

In this experiment, a total of 25 games were completed. From the final set of completed games, 9 were played in English, 9 in Portuguese, 3 in Spanish and 4 in Italian. Of the participants who played these games, 55% reported to be aged between 31-50 and the remainder reported to be between 18-30.

On average, the games were played for 3.65 matches (SD = .48), with an average human score of 3.06 (SD = .68), which is significantly higher than the participants' scores from Study 1 (t(46) = -3.98, p < .001). Similar to the previous study, the rivalry scores for each participant were calculated using the proposed equations in Section 3, by using the Similarity, Competitiveness and Performance measures.

An analysis of the similarity scores, computed using Equation 9 (see Section 3.3), showed that the agents were not rated as significantly different in terms of their overall similarity with participants (Friedman test: χ² = 1.57, df = 2, p = .457). The relative performance scores of the three agents showed significant differences (χ² = 23.7, df = 2, p < .001), where the Durbin-Conover multiple comparisons test revealed that all agents performed significantly differently from each other. Pairwise comparisons showed the rival DQL agent performed the best, being significantly better than both the COPPER DQL (p = .036) and offline DQL agents (p < .001). The COPPER DQL agent was also significantly better (p < .001) than the offline DQL agent. Finally, the rivalry scores calculated using these two measures and the self-attributed competitiveness scores of the participants (M = .79, SD = .18) were significantly different among the agents (χ² = 15.7, df = 2, p < .001). Pairwise comparisons revealed that the rival DQL agent had significantly higher rivalry scores compared to both the COPPER DQL (p = .036) and offline DQL (p < .001) agents. Further, the COPPER DQL agent had significantly higher rivalry scores (p = .009) than the offline DQL agent. However, the participants' motivation to play with the agents was not significantly different among the different types of agents (χ² = 0.2133, df = 2, p = .9). Table 2 shows the values for the relative performance and rivalry scores of the three agents.

Discussions

In our study, we are mostly interested in how humans perceive the impact of different learning strategies when interacting with artificial agents. In particular, we evaluate whether we can modulate this perception in a controlled manner using the rivalry term. Our experiments demonstrate that we can indeed achieve such manipulation, although in a limited manner. In this section, we discuss these findings in more detail.

Are Learning Agents Perceived as Rivals?
Our experimental results provided insights into whether agents trained with different RL strategies yield distinct rivalry and whether including rivalry in the reward function enables agents to modulate humans' responses on a rivalry scale. Our first exploratory study showed that agents trained with DQL and PPO strategies yield distinct rivalry compared to a random agent when playing the Chef's Hat card game against humans, which confirms that their capability of learning a strategy is indeed perceived by humans. However, we failed to find a significant difference between the DQL and PPO agents, which mirrors the lack of significant difference between their relative performances and their scores for the "Decisive", "Innovative" and "Creative" traits.

Agents Optimization

When developing the agents, we needed to adapt them towards using rivalry as a learning modulator. Our experiments demonstrate that, in our simulation environment, the inclusion of the rivalry term does not affect the general performance of the agent and helps change their underlying behavior. This behavior can be explicitly demonstrated when a DQL rival agent plays a game against a non-rival version of itself, a PPO agent and a random agent. Figure 3 illustrates a match between a DQL agent with rivalry, a DQL agent without rivalry, a random agent, and a PPO agent. We observe that the rivalry value increases constantly towards the non-rival DQL agent, as the two share the same strategy and social traits. Against the PPO agent, the rivalry score increases at a different pace, which demonstrates that the rival agent can detect the strategy of the PPO agent and associate a different social trait set with it. A random agent, however, does not have a strategy, and the rivalry score fluctuates with each action it takes, as expected.

Fig. 3 Example of the rivalry score calculation of a rival DQL agent playing against a non-rival DQL, a PPO and a random-based agent.

The Role of Rivalry in Chef's Hat

Our last experiment, where we measured the impact of rivalry, showed that agents trained with predicted rivalry as a part of their reward function can modulate human responses on a rivalry scale and, in turn, yield significantly higher rivalry results. Moreover, the rivalry scores for the rival, COPPER-based, and offline learning agents were all significantly different from each other, with the rival agent exhibiting the highest score. Similar to the exploratory study, these differences mirrored the relative performances of the agents. The similarity results, instead, were not significantly different for any of the agents. This suggests that the agents' behaviors did not provide enough information to be perceived as different in terms of their agency, communion and competence traits.

Our experiments confirmed the importance of the agents' performances in triggering a sense of rivalry. Moreover, the agents learned to modulate their behavior enough to yield significantly higher rivalry. However, their performances did not reach those of the participants, suggesting there is still room for improvement.
The addition of rivalry and the better performances of the agents seemed to increase the motivation of participants to play better, as can be seen in the increase in average scores from the first exploratory study to the second one. Although we cannot exclude that this might also depend on a difference between the two player samples, such a result would be expected from the rivalry literature [16]. We saw that the significant difference in rivalry scores did not have a significant effect on participants' motivation to play with the different agents. Motivation to play again could be further investigated by allowing participants to only play with one type of agent in future studies. Another observation is that the rival RL agents were not associated with different judgments in terms of social traits such as agency, competence or sense of communion, on which the similarity estimation is based. This might be due to the scenario of the game, where the agent's behavior could be appreciated only through its choices in the game. This point could be further investigated in a study where agents have more indicators of social traits, such as an embodiment, gestures or emotional expressions.

However, even in such a socially impoverished setting, the rival agent succeeded in triggering higher rivalry in the human participants, revealing the potential of introducing rivalry in social reinforcement learning research.

In comparison with the first human experiment, the subjects of the second human experiment played slightly more turns but achieved a significantly higher average score. This might be an indication that the rival agent made the game more motivating and more challenging. Observing these characteristics from the behavior of each agent alone is a strong indication that, when adding explicit social traits to the agent's behavior, rivalry modulation could be a game-changer in social reinforcement learning research.

Conclusion

In this work, we examined the inclusion of social aspects in the learning strategies of reinforcement learning (RL) agents in a competitive card game scenario. We proposed the social metric of rivalry, based on background research in social psychology, and trained RL agents with a reward function that reflects this metric. The resulting agents, trained with the rivalry metric, were successful in yielding significantly different play styles, distinguishable in terms of a main antecedent of rivalry: relative performance. However, the similarity scores calculated based on social traits, another main factor in rivalry, failed to show differences. We plan to further address this issue in future studies by equipping RL agents with distinguishable social signals. Our results suggest that using social concepts such as rivalry shows promise in training agents to perform in human interaction contexts.

Fig. 1 Illustration of the Chef's Hat card game environment used for all our experiments. Available at: https://github.com/pablovin/ChefsHatGYM

Table 2 Relative performance and rivalry scores of the three agents.
2020-11-04T02:00:48.267Z
2020-11-02T00:00:00.000
{ "year": 2022, "sha1": "783ecd31b139bf5b6c99f5ee11cd3c70fe6e19b2", "oa_license": "CCBY", "oa_url": "https://link.springer.com/content/pdf/10.1007/s00521-022-07746-9.pdf", "oa_status": "HYBRID", "pdf_src": "ArXiv", "pdf_hash": "2fd728fded933a9f01f78df1be4c6675a81e41bf", "s2fieldsofstudy": [ "Computer Science" ], "extfieldsofstudy": [ "Computer Science" ] }
249251478
pes2o/s2orc
v3-fos-license
The Use of Twitter to #DefendDACA & DREAMers

The future of the Deferred Action for Childhood Arrivals (DACA) program, as well as the welfare of its recipients in the United States, has become a constant feature in the news since President Trump announced his intentions to end the program in September 2017. In response, a social movement of significance was engineered utilizing social media as one of its core pillars to support the program. This study analyzes the content of tweets with the #DefendDACA hashtag, tweeted within 30 days of Trump's initial announcement, in order to understand the intersection of digital activism and DACA, including functions, purpose, and tone. Results from the analysis found tweets primarily centered on calls to action, asking participants to defend DACA. Tweets also disseminated vital information, particularly with a positive tone. These findings aid in explaining the movement's strength.

The Deferred Action for Childhood Arrivals (DACA) program is an Obama-era program that protected from deportation eligible young undocumented immigrants who came to the U.S. as children (under the age of 16 at time of arrival and living in the U.S. as of 15 June 2007). On September 5, 2017, the Trump administration announced the end of DACA, describing President Barack Obama's executive order as an 'open-ended circumvention of immigration laws' and 'an unconstitutional exercise of authority by the Executive Branch' (Sessions, 2017). The renouncement generated a wave of anxiety and fear around the country as the program's recipients, DREAMers, realized their status and presence in the U.S. were at risk. Scholars have hypothesized that a complete repeal of DACA without a permanent solution would have dire consequences. For example, recipients could lose their work permits, impacting formal employment, which would then affect their ability to afford higher education (Golash-Boza & Valdez, 2018; Martinez & Salazar, 2018). Feelings of anger and shock permeated communities where undocumented immigrants lived and organizations serving DACA programs operated (Uwemedimo et al., 2017). The end of DACA generated a deep economic impact across the nation (Stone, 2017; Svajlenka et al., 2017).

The renouncement also generated commentary on social media, particularly on Twitter, where DACA recipients and supporters utilized the hashtag #DefendDACA to express their disdain, share their stories, and highlight the U.S. as the only home they knew. In times when social media platforms like Twitter and Facebook are under extreme scrutiny for data breaches (Cadwalladr & Graham-Harrison, 2018; Kushwah & Verma, 2021; Tromble, 2021) and the intervention of bots and trolls in elections (Boichak et al., 2021; McGill, 2016; Nonnecke et al., 2021), the examination of discourse on social media seems both timely and all the more relevant because the movement that began on social media later found an expression out on the streets (O'Connor, 2017). The use of the #DefendDACA hashtag among DREAMers and supporters creates an opportunity to examine the use of Twitter for digital activism. It is hard to establish the precise contribution of social media and the internet to collective action, but it is clear that, to some degree, they facilitate and support traditional forms of offline activism through the distribution of information or calls to action (Boulianne, 2015; Skoric et al., 2016; Svajlenka et al., 2017; Valenzuela, 2013; Van Laer & Van Aelst, 2010).
The purpose of the current study is to examine the tweets surrounding the initial renouncement of DACA that utilized the #DefendDACA hashtag, as well as the tone of the tweets and the differences and similarities between those who tweeted. To address these questions, the researchers employed the lens of digital activism to analyze 1,550 tweets posted in the 30-day period following the 5 September 2017 renouncement of DACA by the Trump administration.

Deferred Action for Childhood Arrivals Program

In June 2012, Obama established DACA by executive action. The primary objective of the order was to provide temporary, but renewable, deportation relief for children of undocumented immigrants (Mayorkas, 2012). Obama's executive order was in response to a long and tumultuous battle by immigrants and immigrant rights activists demanding immigration reform (Abrego, 2018). In 2010, activists and politicians were pushing for both Comprehensive Immigration Reform and the Development, Relief and Education for Alien Minors (DREAM) Act (Nicholls, 2013; Unzueta Carrasco & Seif, 2014). Only the latter made it to Congress for a vote, where it failed by a small margin in December 2010.

DACA has provided many educational benefits to almost 800,000 individuals since its inception (Abrego, 2018). DACA has not only afforded recipients the opportunity for higher education (Hooker et al., 2015), but research has also found DACA recipients to place a more significant value on higher education (Kevane & Schmalzbauer, 2016). The program has promoted civic engagement in local communities (Wong & Valdivia, 2014) and motivated recipients to avoid any unlawful or illicit behavior, as doing so would jeopardize their DACA status (Golash-Boza & Valdez, 2018). Research has argued that, for DACA recipients, higher educational institutions, coupled with social justice organizations and cultural programs, promote the development of youth activists through the fostering of oppositional consciousness (Martinez & Salazar, 2018). In a similar fashion, DACA has contributed to economic development through employment opportunities, affordable health insurance, and access to bank accounts and credit cards (Golash-Boza & Valdez, 2018; Gonzales et al., 2014; Wong et al., 2013). Many DACA recipients experience stressful financial situations or household poverty, and the ability to gain employment through the program reduced their level of economic stress (Golash-Boza & Valdez, 2018). As in the rest of the population, gender identity can be an additional barrier for DREAMers: undocumented queer immigrants, referred to as undocuqueer, continue to encounter difficulties regarding family acceptance and face employment discrimination (Cisneros & Bracho, 2019).

Moreover, most of the research on DACA, both academic and pragmatic, explicitly showcases its educational and economic successes. However, there are also studies that highlight the legal ramifications for recipients and their families. The instability of DACA has threatened recipients and their family members, who were often of mixed immigration status. Abrego (2018) highlighted negative consequences and limitations associated with DACA: "Having access to new resources and possibilities emphasized for recipients the family's internal stratification as some members still lacked protections" (Abrego, 2018, p. 14).
On 15 August 2017, a few weeks before Trump's announcement, immigrants' rights advocates and DREAMers organized protests and rallies in 40 cities across the United States demanding that lawmakers uphold DACA. As part of this movement, the #DefendDACA hashtag, among others, was used to disseminate information and as a call to action (Conley, 2017; Johnson, 2017). Several lawsuits were filed against the Trump administration immediately following DACA's termination. The Trump administration petitioned the Supreme Court for review and the court agreed. On 18 June 2020, the Supreme Court ruled 5-4 that the way in which DACA was rescinded by the Trump administration was unlawful, restoring the program completely. The ruling, however, left open the possibility for the Trump administration to end DACA in the future, provided they gave proper justification. Therefore, the goal of the current study is to examine the social media engagement around the initial termination of the program in 2017, when the hashtag #DefendDACA was created and went viral. Examining tweets surrounding this decision and timeframe will help in understanding the ways in which digital activism was employed by both proponents and opponents of DACA.

Digital Activism

The process of citizens using digital tools to effect social and political change is known as digital activism, cyber activism, or e-activism (Amin, 2010). Digital activism has been used to address political, social, and religious inequities and injustices all around the world, modifying the ways in which media are used to capture information and disseminate it to global citizens (Chiluwa & Ifukor, 2015). In recent years, social media has played an increasingly critical role within grassroots social movements and protests (Calvo, 2015; Howard & Hussain, 2011; Young et al., 2019). The interconnectedness and mass number of users on social media facilitate the development of large social movements (Marwell & Oliver, 1993). Movements on social media help mobilize and organize protests (Raynauld et al., 2016; Shen et al., 2020). Many of these social movements, such as Occupy Wall Street and Spain's Indignados, have relied heavily on social media and the internet to coordinate their activities and to promote and support their development (Bennett & Segerberg, 2013). Social media expand the potential of these movements (Hopke, 2012) and allow users to reach broader audiences with whom they establish a connection (Bonilla & Rosa, 2015).

Digital media, and the social movements generated and fostered on these platforms, have significantly contributed to promoting political activism (Howard et al., 2012). When young adults are involved in social movements on digital media, specifically, young activists become more critically involved in political participation (Boulianne & Theocharis, 2020; Owen, 2006; Park et al., 2009; Raynes-Goldie & Walke, 2008). Young people are more likely to engage in protests if they participate in these sites (Dalton et al., 2010; Maher & Earl, 2019). In Chile, Scherman et al. (2012) found that online platforms significantly influenced political participation. Similar results were found across countries (Bakker & de Vreese, 2011). The most optimistic scholars argue that democracy could be revitalized through the participation in social activism of people who would not otherwise take part in political conversations (Bekkers et al., 2011; Copeland & Römmele, 2014; Raynauld & Greenberg, 2014).
Social media have contributed to the erosion of political power from large media trusts and stakeholders through a process of segmentation and decentralization (Gibson, 2015; Kavada, 2015; Raynauld, 2014; Webster & Ksiazek, 2012). Social media connect people with different social and political concerns, thus increasing the flow of information among participants (Neuman et al., 2011; Theocharis et al., 2015; Turcotte & Raynauld, 2014). Social media also facilitate the organization of people who care about similar issues (Bimber et al., 2012; Shirky, 2008; Tye et al., 2018). Digital activism has been criticized for its lack of physical action (Cabrera et al., 2017; Gladwell, 2010) and for having little political or social impact (Morozov, 2009; Stekelenburg et al., 2013; Vissers & Stolle, 2013). Others have argued that it diverts attention away from more genuine types of collective action (Dean, 2005; White, 2010). Nonetheless, digital activism continues to be utilized by millions of minorities and grassroots movements. Hence, this paper examines the primary and secondary functions (purposes) of tweets using the #DefendDACA hashtag to understand how DREAMers and their supporters organized online in reaction to the Trump administration's announcement of the end of DACA.

Twitter and Immigration

One platform in particular that has been extensively utilized and studied for digital activism is Twitter. Valenzuela et al. (2018) explored the effectiveness of Twitter on weak-tie networks and the political participation of young people in Chile. People who belong to a social network site can stay informed of the group's activities, exchange timely information, and thus increase their opportunities for activism (Gil de Zúñiga & Valenzuela, 2011). Social media also largely promote socialization and interaction with family, friends, and peers.

One of the many forms of digital activism Twitter has been utilized for is immigration reform. Activists and DREAMers have harnessed the power of networked communication to advocate for social and civil rights. Twitter has been used as a means to facilitate, motivate, and supplement on-the-ground organizing (Zimmerman, 2016). Harlow and Guo (2014) examined how activists, rather than undocumented immigrants themselves, employ Twitter and Facebook in activism. Twitter was used to generate public awareness of immigration issues, for recruitment and mobilization, and to coordinate actions both online and offline. Research has also shown Twitter to be a site of political intervention on behalf of nonprofit organizations (NGOs). Li et al. (2018) examined tweets by, and conducted interviews with, immigrant-focused NGOs after the 2016 U.S. Presidential Election. The NGOs used Twitter to disseminate information on immigration-related issues and policies, recruit participation to influence political change, and engage in conversations with external stakeholders.

Zimmerman (2016) uses the term transmedia testimonio to describe "a personal narrative that represents a collective experience, and that is shared across various media platforms," in which undocumented youth activists reveal their legal status, provide accounts of their immigration experiences, and document their participation in civil disobedience (p. 1887). Declarations are strategically made through social media in the form of videos and podcasts and supplement physical real-world protests and meetings.
Transmedia testimonios are not separate from other forms of activism; they are a form of political agency used "as a way for undocumented students to participate in counter public spaces where they can invent and circulate discourses and formulate oppositional interpretations of their identities, interests, and needs" (Zimmerman, 2016, p. 1887).

This paper does not limit the analysis to DREAMers and other undocumented individuals but extends it to anyone and everyone who utilized the #DefendDACA hashtag on Twitter. These individuals may be undocumented, allies, or opponents of DACA. As previous research has documented, a Latino cyber-moral panic promotes "dehumanization, discrimination, oppression, and racial profiling of all Latinos who live in the U.S." (Flores-Yeffal et al., 2011, p. 583). Therefore, there may be adverse sentiment toward #DefendDACA expressed on social media. Digital activism encompasses all participants in a given digital space. Thus, the purpose of this study is to look at social movements through the voice of actual immigrants. Utilizing the theoretical lens of digital activism, as well as guidance from transmedia testimonios, we offer the following research questions:

RQ1. What were the primary functions of the tweets using the #DefendDACA hashtag?
RQ2. What were the secondary functions of the tweets using the #DefendDACA hashtag?
RQ3. What was the tone of the tweets using the #DefendDACA hashtag?
RQ4. Were there differences between users' purposes in the tweets using the #DefendDACA hashtag?

Method

To examine the content generated by #DefendDACA tweeters after the Trump administration announced it was going to end the program, a content analysis was conducted. The sample was selected in a two-step process. First, all the tweets with at least one #DefendDACA hashtag tweeted between 5 September 2017 (the day the Trump administration announced the end of DACA) and 5 October 2017 (30 days after the announcement) were collected. A total of 116,349 tweets were collected during this process. Second, following Raynauld et al. (2018), the sample was narrowed by randomly selecting 50 tweets posted on each day. A total of 1,550 tweets was selected from the full set for analysis in the present study.

All tweets containing the #DefendDACA hashtag were collected via Tweet Archivist, a web-based Twitter analytics platform that has been used by other scholars studying political communication and participation via Twitter (Boynton et al., 2014; Croeser & Highfield, 2014; Raynauld et al., 2016, 2018). Tweet Archivist is used to search, archive, analyze, visualize, and export tweets based on a search term or hashtag. Hashtags have been used to collect tweets by several scholars researching digital activism (Dubois & Ford, 2015; Gruzd & Roy, 2014; Harlow & Benbrook, 2019). Tweet Archivist does not have access to the historical records of all tweets ever tweeted; rather, tweets need to be collected in real time through the platform, but once the collection starts, all tweets related to a keyword, phrase or hashtag are captured (Tweet Archivist, n.d.). Upon activation, Tweet Archivist collected Twitter content with the #DefendDACA hashtag, creating a database of hashtag-specific content that was then downloaded into a Microsoft Excel file.
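A minimal sketch of the daily sampling step is shown below, assuming the archive has been exported to a CSV file with a timestamp column; the file and column names are illustrative and do not reflect Tweet Archivist's actual export schema.

```python
import pandas as pd

# Full #DefendDACA archive exported from Tweet Archivist (hypothetical
# file and column names).
tweets = pd.read_csv("defenddaca_archive.csv", parse_dates=["created_at"])
tweets["day"] = tweets["created_at"].dt.date

# Draw 50 random tweets per day over the 31-day window (~1,550 total).
sample = (tweets.groupby("day", group_keys=False)
                .apply(lambda d: d.sample(n=min(50, len(d)),
                                          random_state=42)))
print(len(sample))
```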
Coding Instrument

Tweet Archivist automatically assigns an ID number to each tweet, and that number was used to identify each tweet, the unit of analysis. Each tweet was coded for the primary and secondary function of the tweet (information/dissemination, call to action/mobilization, consequences, coming out, attacks, accolades, other), the tone of the tweet, original tweet or retweet, hashtags used, hyperlinks used, mentions, date, and who posted the tweet. The codebook was developed considering recent studies on digital activism on Twitter (Agarwal et al., 2014; Raynauld et al., 2016, 2018).

Coder Training and Intercoder Reliability

This study calculated Krippendorff's alpha to determine the degree of agreement among coders. The floor for intercoder reliability was set at .8, the level that is largely considered acceptable in mass communication research for non-exploratory content analysis (Lombard et al., 2002). Three coders participated in a training session and then independently coded 10% of the sample for intercoder reliability testing. All of them worked independently, and a timeline was given to each coder to promote prompt completion.

Results

RQ1 asked about the primary functions of the tweets using the #DefendDACA hashtag. The most common primary function was to call participants into action in defense of the program; one such tweet ended: '...Bring a sign!! They need us/We them. RSVP below… https://t.co/bAceksooen.' The second most important function of these tweets was to disseminate information among participants; 33% of them were in this category. Examples of tweets whose primary function was to disseminate information included:

• A user tweeted: 'this account stands with immigrant families who worked hard to be in US and to the children on the DACA program. https://t.co/BLtTJCyals'
• And many activist organizations retweeted a popular singer: 'They should be free to laugh and live without walls, borders, bans or repeals #DefendDACA #IStandWithTheDreamers #DACA.'

In third place, at 10.8%, were the tweets that alerted participants to the consequences of eliminating DACA. Among these tweets, some examples became viral:

• A DREAMer tweeted: 'I paid $400,000 in taxes last year and all I got was a free trip back to Slovakia #DefendDACA.'
• A user tweeted: 'Illinois can't afford to lose DACA. Ending DACA would cost the state $2.2 billion in annual GDP loss. #DefendDACA;' and a politician similarly tweeted 'Colorado can't afford to lose its 17,000 DREAMers. Ending DACA would cost the state $850 million in annual GDP. #DefendDACA'

Finally, a small percentage of the tweets, 1%, were dedicated to congratulating or acknowledging the support or endorsement of famous artists, and only .5% of the tweets, 8 out of 1,550, were dedicated to negative attacks on DACA. An example of an attack tweet is: '‼I refuse to #DefendDACA There are too many Americans with dreams that are neglected. #AmericanKidsHaveDreamsToo.'

RQ2 asked about the secondary functions of the tweets using the #DefendDACA hashtag. Results indicated that information dissemination was the most used secondary function; 58.1% of the tweets were dedicated to informing participants about DACA. On the other hand, 24.4% of the tweets did not seem to have a clear secondary motive, while 14.5% called participants to take action to either support or reject DACA, and only 1.8% of the tweets' secondary functions were dedicated to noting some of the consequences of rejecting DACA (see Table 2).

RQ3 asked about the tone of the tweets that used the #DefendDACA hashtag. Most of the tweets, 93.3%, had a positive tone towards DACA. In 4.8% of the tweets the tone was unclear, while in .8% of them the tone was negative.

RQ4 inquired whether there were differences between different types of users regarding the purpose of tweeting.
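As a sketch of how the RQ4 breakdown that follows can be produced from the coded data, the snippet below cross-tabulates tweeter type against coded primary function; the DataFrame and column names are hypothetical.

```python
import pandas as pd

coded = pd.read_csv("coded_tweets.csv")  # hypothetical coded export
# Row percentages: share of each primary function within a user type.
table = pd.crosstab(coded["user_type"], coded["primary_function"],
                    normalize="index") * 100
print(table.round(1))
```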
A large number (61.1%) of tweeters were individuals who were concerned with the future of DACA. On the other hand, 8.3% of the tweets were posted anonymously and 14.9% were posted by users whose accounts have been suspended or no longer exist. Together, these participants posted 84.3% of the total number of tweets. Moreover, organizations other than news organizations posted 5.5% of the tweets, and immigrant organizations were responsible for 3.4% of the tweets.

Tweeters and Primary Purpose of Tweet

Individuals supporting DACA posted 51.5% of their tweets with the primary purpose of calling participants to take action, to get involved, while 31.4% of their tweets were posted to disseminate information about DACA. On the other hand, 12.2% of their postings called attention to the potential socioeconomic consequences of revoking DACA; the remaining 4.9% were dedicated to other purposes. Of those who posted their tweets anonymously, 49.2% called on participants to take action, 35.2% posted their tweets to disseminate information, and 10.2% called the participants' attention to the possible consequences of rejecting DACA; the remaining 5.5% of tweets were dedicated to other primary purposes. Of the tweets posted by participants whose accounts could not be verified, 45.9% called participants into action, 34.2% disseminated information, 12.1% highlighted consequences of the end of DACA, and 6.1% were posted without a clear primary purpose; the remaining 1.7% were divided among less relevant categories. Tweets posted by immigrant organizations mostly contained some type of call to action (67.3%), dissemination of information (21.2%), or highlighting of the consequences of the elimination of DACA (7.7%). Journalists, news organizations and other organizations focused mostly on disseminating information. Tweets posted by other organizations mostly had the purpose of disseminating information (48.2%), calling to action (41.2%), or highlighting consequences of the elimination of DACA (5.9%). Journalists and news organizations posted information about the program (50%), calls to action (45.5%), and consequences of the end of DACA (4.5%).

Tweeters and Secondary Purpose of the Tweet

Individual participants supporting DACA posted 61.7% of their tweets with the secondary purpose of disseminating information about DACA, whereas 24.6% of their tweets were of unclear purpose, and 12.1% were posted to call participants into action. The remaining 1.6% was divided between noting potential consequences (.8%) and other categories. Anonymous participants posted 59.4% of their tweets to disseminate information; 18.8% posted to call participants into action, and 18% were of unclear secondary purpose. The remaining 3.9% was divided among accolades (2.3%), consequences of revoking DACA (.8%), and coming out as a DREAMer (.8%). Postings from those individuals whose accounts could not be verified had the secondary purpose of disseminating information (53.2%) or calling participants into action, with the rest dedicated to accolades (17.7%); however, an important number of these tweets had an unclear secondary purpose (23.4%). The secondary purpose of tweets posted by immigrant organizations was mostly to disseminate information (65.4%), some type of call to action (9.6%), or highlighting the consequences of the elimination of DACA (9.6%). In this category, there was also a number of tweets with an unclear secondary purpose (15.4%).
Tweets posted by journalists and news organizations mostly had the secondary purpose of disseminating information (45.5%) or calling to action (13.6%); in this category, a good number of tweets with an unclear secondary purpose was also present (36.4%). Other organizations posted tweets with the secondary purpose of disseminating information (47.1%), calls to action (25.9%), consequences of the end of DACA (3.5%), and coming out as an undocumented immigrant (1.2%). This was also a category with an important number of tweets with an unclear secondary purpose (21.2%).

Discussion

Digital technology and social media are transforming how grassroots movements, such as #DefendDACA, can organize and disseminate information, and even influence media coverage. Young people, in particular, are attracted by social media, and their structures and modes of interaction foster some of the conditions needed to build social movements (Bennett, 2008; Boulianne & Theocharis, 2020; Dalton et al., 2010; Shen et al., 2020). The current study examined the primary purpose of the tweets containing the #DefendDACA hashtag following the renouncement of DACA on 5 September 2017. It also examined the tone of the tweets and the differences and similarities between those who tweeted.

Results indicated the most important primary function of the tweets was to call participants into action and supplement on-the-ground organizing, as found in previous movements (Zimmerman, 2016). The hashtag itself is a call to defend the program. Tweets asked citizens to protest and call lawmakers, and almost 30% of the tweets directly mentioned politicians, asking them to vote and intervene on behalf of DREAMers. As with previous social movements, social media not only helped mobilize and organize but also forced formal political actors to acknowledge the movement's presence (Raynauld et al., 2016; Shen et al., 2020). The most important secondary function was to disseminate information about DACA, primarily educating the public on the program. These findings align with previous research showing that online activism facilitates and supports traditional forms of offline activism through the distribution of information and calls to action (Boulianne, 2015; Skoric et al., 2016; Valenzuela, 2013; Van Laer & Van Aelst, 2010). In the case of DACA, the findings show that both functions go hand-in-hand: proponents of DACA educated Twitter users about the program, its recipients (the DREAMers), and its economic impact in hopes that users would in turn defend the program.

The tone used in most of the tweets using #DefendDACA was positive, evidencing that this was a hashtag created and disseminated by DREAMers and immigrant activists. When taken in conjunction with the findings on disseminating information, the tweets highlighted the many positive benefits of the DACA program, in line with previous research on the benefits of the program with regard to education (Abrego, 2018; Hooker et al., 2015; Kevane & Schmalzbauer, 2016), civic engagement (Wong & Valdivia, 2014), the avoidance of illegal activities (Golash-Boza & Valdez, 2018), and the economy (Golash-Boza & Valdez, 2018; Gonzales et al., 2014; Martinez & Salazar, 2018; Wong et al., 2013). What this study adds to this previous research is the use of tweets to emphasize the consequences of terminating DACA. Almost 11% of the tweets warned of the impact on the economy and national workforce, as well as underscored the humanitarian and ethical ramifications for DREAMers themselves.
There were very few negatively valenced tweets that used #DefendDACA. The rationale for this could be attributed to the wording of the hashtag itself, which stresses defense of the program; opponents of the program used the hashtags #DefundDACA and #DACAshame, among others. Future research should also examine the counter-position. Above and beyond policy outcomes, it would be interesting to further explore the impact that social media have in promoting other social movements and to what extent this question will continue to be relevant in the academic analysis of social movements. Nonetheless, the findings from the study reinforce the importance of social media and hashtags, particularly on Twitter, in digital activism surrounding a specific national event, as demonstrated in previous research. Twitter continues to be used as a space to encourage political participation (Gil de Zúñiga & Valenzuela, 2011; Valenzuela et al., 2018) and immigration reform (Zimmerman, 2016), and to provide outlets and interventions for activists (Harlow & Guo, 2014) and NGOs (Li et al., 2018). Whereas a previous Latino cyber-moral panic has promoted the 'dehumanization, discrimination, oppression, and racial profiling of all Latinos who currently live in the United States' (Flores-Yeffal et al., 2011, p. 15), this study highlights the use of #DefendDACA to humanize, support, and contextualize immigrants, many of whom are Latinos, rather than discriminate against them.

The findings from this study have broader global implications. The study evidences social media, particularly Twitter, as a space for fostering not only civil discourse and activism, but also empathy and humanization. It provides academics and governing bodies a better understanding of the effects of policy on migrants' lived experiences, safety, and well-being. The study also highlights that online social movements are not monolithic, but rather intersectional, encompassing calls to action, education, affect, and humanity. Policies similar to DACA in other global regions are most likely just as litigious, and the implications of their cessation should be viewed with regard to their economic and humanitarian consequences. The hashtag #DefendDACA demonstrated such implications in the voice of those directly affected by the policy. Previous studies highlight immigrant social justice movements on social media instigated by activists and NGOs online (Harlow & Guo, 2014; Li et al., 2018), whereas this study's findings highlight personal narratives on social media from immigrants themselves, very similar to transmedia testimonios (see Zimmerman, 2019), in addition to activists.

The movement to #DefendDACA has built its own momentum and will continue to be of both political and social importance as the program's stability remains in limbo. At the time of writing, the Biden administration had introduced a new proposal under the Build Back Better Plan to preserve aspects of DACA; however, many legislators and government bodies continue to argue for congressional action on the policy. Immigration continues to be an enduring and salient issue not only in the U.S. but in the world at large. The current study highlights the weight that Twitter, and its hashtags, have on social justice issues. Social media continue to instigate action and influence political mobilization in physical, tangible spaces.
2022-06-02T15:18:53.551Z
2022-04-06T00:00:00.000
{ "year": 2022, "sha1": "7502b3c0fd1c3fb7c3c9d14587f96c45bed5e6b6", "oa_license": "CCBYNC", "oa_url": "https://www.ejecs.org/index.php/JECS/article/download/968/pdf", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "5b4df99b3b4d5698af0fce603e798f3080ea4c72", "s2fieldsofstudy": [ "Political Science", "Sociology" ], "extfieldsofstudy": [] }
232294754
pes2o/s2orc
v3-fos-license
Heparin-induced thrombocytopenia and COVID-19

Heparin-induced thrombocytopenia (HIT) has not been included as a possible cause of thrombocytopenia in Coronavirus Disease 2019 (COVID-19) patients. We report a case of HIT in a patient with COVID-19 treated with heparin. A 78-year-old man was admitted to our hospital for acute respiratory failure and acute renal failure due to SARS-CoV-2 infection; in the intensive care unit, he received one 5000 IU heparin dose (day 0, platelet count 305,000/μL). On day 2, haemoglobin started to decrease and heparin was stopped. On day 10, the platelet count was 153,000/μL and 5000 IU calcium heparin subcutaneously twice daily was started. The platelet count further decreased, reaching 49,000/μL on day 17, and the patient was investigated for suspected HIT: an IgG-specific chemiluminescence test for heparin-PF4 antibodies was positive and a femoral DVT was found at ultrasound. Argatroban was started, and the platelet count increased without any bleeding or thrombotic complication. Our experience shows that HIT may develop in heparin-treated COVID-19 patients and should be included among the possible causes of thrombocytopenia in such patients.

Introduction

Recent reports indicate that Coronavirus Disease 2019 (COVID-19) is a prothrombotic disease and that the presence of "COVID-19-associated coagulopathy" is associated with adverse outcomes. 1 The incidence of thrombosis in patients with COVID-19 is high and varies considerably according to the severity of disease and the presence of additional thrombotic risk factors. 2,3 A very high venous thromboembolic (VTE) prevalence, including a high proportion of potentially life-threatening proximal deep vein thrombosis (DVT), was observed in mechanically ventilated SARS-CoV-2 patients despite standard pharmacological thromboprophylaxis. 4 Anticoagulant treatment seems to confer a survival benefit in hospitalized patients with COVID-19; 5 in particular, the administration of heparin was associated with lower mortality in hospitalized patients with COVID-19. 6 As a result, more intense antithrombotic regimens have been suggested in this population. 7

Heparin-induced thrombocytopenia (HIT) is a rare complication of heparin treatment. 9 It is associated with increased in vivo thrombin generation, provoking both arterial and venous thrombosis. In case of HIT, heparin in any form should be immediately withdrawn. Patients with HIT require non-heparin anticoagulants, and high therapeutic levels of anticoagulation are needed to control such a hypercoagulable state. Pooled analyses of prospective cohort studies with historical controls have shown that untreated HIT can be complicated by further thrombotic events in 30-75% of cases, with 5-10% mortality. 9,10 Since thrombocytopenia is a common finding in patients in the ICU, and severe COVID-19 is often associated with thrombocytopenia, 11 HIT is seldom suspected and investigated. Here, we report a case of HIT in a patient with severe COVID-19 who received unfractionated heparin (UFH) treatment, highlighting that HIT may also occur in such patients.

Case Report

A 78-year-old man (weight 76 kg) with a history of chronic kidney disease, arterial hypertension and recurrent deep vein thrombosis was admitted to our hospital for acute respiratory and renal failure. He was on therapy with warfarin, amlodipine 5 mg once a day, atorvastatin 20 mg once a day, cinacalcet 30 mg three times a week, and calcium and vitamin D supplementation. He described generalized malaise, muscle ache and fever during the week before.
In the emergency department, the patient's temperature was 37.8 °C, he had sinus tachycardia at 108 beats per minute, blood pressure was 190/90 mm Hg, respiratory rate 18 breaths per minute, and oxygen saturation 97% on room air. Laboratory findings showed hypocapnic hypoxemia with metabolic acidosis; serum creatinine was 9 mg/dL, sodium 126 mmol/L, potassium 5.7 mmol/L, and INR 8.37. Warfarin was stopped and he received intravenous vitamin K. A diagnosis of viral pneumonia was based on the computed tomographic (CT) scan and the patient was transferred to the ICU. A nasopharyngeal swab for SARS-CoV-2 was performed but was negative; nevertheless, a treatment based on hydroxychloroquine, azithromycin, steroids and tocilizumab was started, given the high suspicion of COVID-19. During the following two days, INR decreased from 3.04 to 1.5, renal function did not improve, and hemodialysis was started three days after admission with the insertion of a catheter in the right femoral vein. One bolus dose of 5000 IU sodium UFH was used during the first dialysis treatment (day 0, platelet count: 305×10³/μL). The trend of the platelet count is shown in Figure 1.

On day 1 he received one dose of 40 mg enoxaparin. On day 2, haemoglobin started to decrease and heparin was stopped. On day 4, haemoglobin was 8.5 g/dL and melena was observed. On day 5, haemoglobin was 7.8 g/dL, the platelet count was 219×10³/μL, and he underwent a transfusion of packed red blood cells. From day 5 to day 9, no bleeding was observed and haemoglobin values were stable. On day 10, the platelet count was 153×10³/μL, the femoral catheter was removed, and 5000 IU calcium heparin subcutaneously twice a day was started. As shown in the figure, the platelet count further decreased: on day 17 the platelet count was 49×10³/μL, calcium heparin was stopped, and HIT was suspected. The pretest clinical score (4T's) 12 for the diagnosis of HIT was 4 (viral pneumonia and tocilizumab being possible alternative causes of thrombocytopenia) and the patient was investigated for a diagnosis of HIT. An IgG-specific chemiluminescence test for heparin-PF4 antibodies (AcuStar; HIT-IgG PF4-H) was positive (9.44 U/mL). The presence of HIT could not be confirmed by a platelet aggregation test because this test is no longer available in the Bologna area. On day 18 (platelet count 41×10³/μL), he complained of right lower extremity pain, a whole-leg ultrasound showed a right common femoral DVT (4T's score 6), and argatroban was started. During argatroban treatment, the platelet count increased from 51×10³/μL on day 19 to 267×10³/μL on day 31, and no recurrent thrombotic event or bleeding complication was observed. The patient was discharged on warfarin.

In the ICU, the patient required 48 hours of non-invasive ventilation; a nasopharyngeal swab for SARS-CoV-2 was repeated and the RT-PCR for SARS-CoV-2 was positive; on day 14, serological testing showed positive IgG and IgM against SARS-CoV-2. Thus, the diagnosis of acute renal failure and pneumonia due to SARS-CoV-2 infection was confirmed. When he was discharged, serum creatinine was 6.14 mg/dL, sodium 137 mmol/L, potassium 5 mmol/L, and INR 2.7. During the 12-week follow-up by our anticoagulation clinic, there were neither thrombotic nor bleeding events.
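For readers unfamiliar with the 4T's system, the sketch below shows how the score aggregates its four categories, each scored 0-2 according to the published criteria, 12 and how the documented thrombosis moved this patient from an intermediate (4) to a high (6) pretest probability. The per-category point split shown is illustrative, as the individual category scores were not reported.

```python
def four_ts(thrombocytopenia, timing, thrombosis, other_causes):
    # Each category contributes 0-2 points; the clinician assigns the
    # points by applying the published 4T's criteria.
    total = thrombocytopenia + timing + thrombosis + other_causes
    risk = ("low" if total <= 3
            else "intermediate" if total <= 5
            else "high")
    return total, risk

# Illustrative split yielding the day-17 score of 4 (intermediate):
print(four_ts(thrombocytopenia=1, timing=2, thrombosis=0, other_causes=1))
# After the DVT was documented on day 18, thrombosis adds 2 points:
print(four_ts(thrombocytopenia=1, timing=2, thrombosis=2, other_causes=1))
```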
Discussion

We describe a case of HIT that occurred during SARS-CoV-2 infection. Despite the obvious limitations of a case report, our experience demonstrates that HIT may develop in patients with COVID-19 treated with heparin and should be considered among the possible causes of thrombocytopenia in such patients. Guidelines recommend prophylactic or intermediate doses of low-molecular-weight heparin to prevent venous thromboembolism in patients with COVID-19. 8 In Italian hospitals, heparin was used at intermediate or anticoagulant doses in most COVID-19 patients. The prevalence of HIT increases in parallel with the dose and the type of heparin and can reach 1% in medical patients. 13 In line with the risk of HIT being higher for unfractionated heparin than for low-molecular-weight heparin, 9 HIT in our patient occurred during treatment with calcium heparin. Despite the widespread use of unfractionated heparin and low-molecular-weight heparin in COVID-19 patients, few cases have been described so far. 14-16

It is a common finding that patients in the ICU have a decreased platelet count, 17 as well as coagulation disorders. Moreover, thrombocytopenia is common in COVID-19 patients: it has been detected in 5-41.7% of patients, 18 and a meta-analysis of 7163 COVID-19 patients showed that thrombocytopenia might be a risk factor for COVID-19 progressing to a more severe state. 11 The cause of thrombocytopenia in COVID-19 patients is not clear, and several pathophysiological processes have been postulated: direct infection of hematopoietic stem cells, damage to the lungs by coronavirus-induced autoantibodies and immune complexes, decreased thrombopoietin production, increased platelet clearance, and platelet consumption. 18 Interestingly, there are no data on the role of thrombocytopenia in increasing the risk of bleeding in COVID-19 patients. Nevertheless, HIT has never been included as a possible cause of thrombocytopenia in COVID-19 patients. In our experience, HIT has been seldom investigated during SARS-CoV-2 infection, probably because thrombocytopenia is always ascribed to the SARS-CoV-2 infection itself.

Several drugs used in patients with COVID-19 may lead to thrombocytopenia. Tocilizumab is often used, and thrombocytopenia is one of its most common adverse events. In a recent study, 14% of COVID-19 patients treated with tocilizumab developed thrombocytopenia. 19 Tocilizumab-associated thrombocytopenia was highly unlikely in the present case: the presence of HIT antibodies, the platelet trend, and the thrombotic complication at the lowest platelet level were compatible with HIT and not with tocilizumab-associated thrombocytopenia.

Conclusions

Several limitations of the present case report should be acknowledged. Firstly, a platelet aggregation test could not be performed; thus, the HIT diagnosis was not functionally confirmed. However, the chemiluminescent test yielded a moderate-to-strong result and, taking into account the 4T's score (at least 6), our patient had a more than 90% chance of HIT, 20 which strongly supports a diagnosis of HIT. Whole-leg ultrasound was performed only when HIT was suspected, and we cannot exclude that the thrombosis was already present, even though DVT symptoms occurred during the platelet fall. Despite the limitations of a single case report, our observations reveal that HIT occurs in COVID-19 patients treated with heparin and support the intriguing hypothesis that in some COVID-19 patients the thromboembolic events may be secondary to anti-PF4-heparin antibodies.
In summary, HIT should also be suspected and investigated in heparin-treated COVID-19 patients who develop thrombocytopenia.
2021-03-22T17:33:51.389Z
2021-03-05T00:00:00.000
{ "year": 2021, "sha1": "6f8935ca06b2ba7fdc00836a258cacd2ed58aef5", "oa_license": "CCBYNC", "oa_url": "https://www.pagepress.org/journals/index.php/hr/article/download/8857/8537", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "6f8935ca06b2ba7fdc00836a258cacd2ed58aef5", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
8916731
pes2o/s2orc
v3-fos-license
Texture Analysis of T2-Weighted MR Images to Assess Acute Inflammation in Brain MS Lesions
Blood-brain barrier breakdown as assessed by contrast-enhanced (CE) T1-weighted MR imaging is currently the standard radiological marker of inflammatory activity in multiple sclerosis (MS) patients. Our objective was to evaluate the performance of an alternative model assessing the inflammatory activity of MS lesions by texture analysis of T2-weighted MR images. Twenty-one patients with definite MS were examined on the same 3.0T MR system with T2-weighted, FLAIR, diffusion-weighted and CE T1-weighted sequences. Lesions and mirrored contralateral areas within the normal-appearing white matter (NAWM) were characterized by texture parameters computed from the gray level co-occurrence and run length matrices, and by the apparent diffusion coefficient (ADC). Statistical differences between MS lesions and NAWM were analyzed. ROC analysis and leave-one-out cross-validation were performed to evaluate the performance of individual parameters, and of multi-parametric models using linear discriminant analysis (LDA), partial least squares (PLS) and logistic regression (LR), in the identification of CE lesions. ADC and all but one texture parameter were significantly different within white matter lesions compared to within NAWM (p < 0.0167). Using LDA, an 8-texture-parameter model identified CE lesions with a sensitivity Se = 70% and a specificity Sp = 76%. Using LR, a 10-texture-parameter model performed better, with Se = 86% / Sp = 84%. Using PLS, a 6-texture-parameter model achieved the highest accuracy, with Se = 88% / Sp = 81%. Texture parameters from T2-weighted images can assess brain inflammatory activity with sufficient accuracy to be considered a potential alternative to enhancement on CE T1-weighted images.
Introduction
Multiple sclerosis (MS) is a chronic autoimmune inflammatory disease of the central nervous system characterized by the onset of multifocal white matter (WM) inflammatory foci resulting in irreversible parenchymal damage. Shortly after its introduction into clinical practice, magnetic resonance imaging (MRI) became the most sensitive imaging modality for the detection of chronic lesions as well as for the assessment of inflammatory activity [1]. A conventional MR examination usually includes fluid-attenuated inversion recovery (FLAIR) and T2-weighted (T2-W) imaging for lesion load delineation, together with contrast-enhanced (CE) T1-weighted (T1-W) imaging to detect foci of blood-brain barrier (BBB) disruption due to local inflammation. Diffusion-weighted imaging (DWI), from which mapping of the apparent diffusion coefficient (ADC) is derived, may give additional information about cell loss and/or ultrastructural disorganization within diseased parenchyma. Though diffuse involvement of the CNS by the MS disease process has been highlighted by histopathological studies, acute inflammatory foci occur, which may be assessed either by the a posteriori demonstration of lesion size enlargement and/or de novo lesion appearance on serial T2-W images at the chronic phase, or by contemporaneous contrast enhancement on T1-W images of a single examination at the acute phase [2]. In the latter condition, the BBB breakdown allows leakage of the gadolinium chelates from the vascular compartment into the intercellular interstitium, resulting in a local shortening of the T1 relaxation time of adjacent spins and producing high signal intensity on CE T1-W images.
Despite recent technical advances in DW and diffusion tensor imaging, changes in diffusion parameters in MS remain equivocal, e.g. regarding the link between ADC values and inflammation within CE lesions on T1 images [3-5]. Texture analysis (TA) has been investigated as an alternative quantitative approach to detect contrast-enhanced MS lesions [6], differentiate MS lesions from cerebral microangiopathies [7], characterize different sub-areas (core, rim) within lesions undergoing 'active' demyelination [8], differentiate between relapsing and remitting MS lesions [9], study perfusion characteristics of MS lesions [10], act as a surrogate marker of lesion load and tissue integrity in MS [11,12], differentiate between primary progressive and relapsing-remitting MS phenotypes [13], differentiate MS lesions in patients with advanced vs mild disability status [14], make an outcome prognosis in patients with a clinically isolated syndrome [15], and assess the persistence or recovery of acute lesions in relapsing-remitting patients [16]. Texture refers to the spatial arrangement of primitive attributes, either visual or actual, of a surface. A brick in a brick wall (or, on a smaller scale, the grains of a brick) constitutes a trivial example of a primitive attribute of a surface following a regular spatial arrangement. In medical imaging, primitive attributes are defined by image pixels, and texture refers to the visual appearance, or perceived properties, of the image, which can be more or less coarse, fine, uniform, granular, periodic or irregular. In mathematical terms, texture refers to the spatial distribution of the gray levels in the image matrix. Contrary to bricks in a brick wall, gray levels in medical images often follow more complex patterns, requiring high-order statistics or frequency approaches to characterize their arrangement. Numerical expressions have thus far been developed to assess the contrast, homogeneity, coarseness and, more broadly, all complex (non-visible to the human eye) variations in the distribution of the gray levels. All these numerical expressions are referred to as 'texture parameters' [17,18]. In practice, TA generates a set of parameters that captures the pictorial content of the image, which may be useful for detection or classification purposes. The rationale behind the concept is that texture results from the process that created the surface. In MR imaging of MS lesions, it is assumed that the distribution of gray levels within the lesion results from the underlying ultra-structural properties of tissues affected by the disease process, with or without therapeutic intervention [19]; a concept which has recently been validated by the histopathological analysis of brain white matter lesions appearing hyper-intense on T2-W MR images [20]. In a pioneering study, Yu et al. differentiated between enhanced and unenhanced brain MS lesions using a combination of 8 texture parameters with a sensitivity (Se) of 88% and a specificity (Sp) of 96% [21]. T2-W MR images were acquired with a spin-echo sequence on a 0.28T MR system in a small group of 8 patients. To our knowledge, this study has remained the only one specifically investigating TA as a potential alternative to CE T1-W imaging for identifying acute inflammation within MS lesions. One major reason to repeat this study design was that TA critically relies on image quality as well as on the numerical solutions used to measure it [22,23].
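To make the co-occurrence approach concrete, the following Python sketch (using scikit-image; not the authors' Matlab pipeline) computes a few GLCM-based parameters for a region of interest, with settings analogous to those described later in the Methods: a one-pixel distance, averaging over the four main directions, and 5-bit (32-level) quantization. The synthetic ROI and the chosen feature subset are illustrative only.

```python
# Illustrative GLCM texture features for a 2D ROI (not the authors' code).
import numpy as np
from skimage.feature import graycomatrix, graycoprops

rng = np.random.default_rng(0)
roi = rng.integers(0, 256, size=(64, 64))          # stand-in for a T2-W ROI

# Quantize to 5 bits (32 gray levels), as in the computation parameters.
roi32 = (roi // 8).astype(np.uint8)

# Co-occurrence matrix: distance 1, the four main directions, normalized.
glcm = graycomatrix(roi32, distances=[1],
                    angles=[0, np.pi/4, np.pi/2, 3*np.pi/4],
                    levels=32, symmetric=True, normed=True)

# Average a few classic co-occurrence parameters over the four directions.
features = {prop: graycoprops(glcm, prop).mean()
            for prop in ("contrast", "homogeneity", "energy", "correlation")}

# Entropy is not provided by graycoprops; compute it from the matrix.
p = glcm.mean(axis=3)[:, :, 0]                     # direction-averaged GLCM
features["entropy"] = -np.sum(p[p > 0] * np.log2(p[p > 0]))

print(features)
```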
The experiment initially designed by Yu was repeated on a 3.0T system offering an increased signal-to-noise ratio and, consequently, improved spatial resolution. Two TA methods and three statistical classifiers were implemented. A three-step assessment was undertaken: (i) texture and ADC parameters were compared in MS lesions vs normal-appearing white matter (NAWM), (ii) the performance of individual parameters in identifying CE lesions was evaluated, and (iii) parameters were combined into multi-parametric models, the performances of which were assessed after cross-validation. The availability of such an alternative to contrast-enhanced MR imaging for monitoring inflammatory activity in MS patients would be clearly beneficial in an era of economic constraints and for limiting systemic risks in persons with impaired kidney function.
Institutional EC board approval
The study was approved by our institutional ethics committee (CEBHF, Commission d'Ethique Biomédicale Hospitalo-Facultaire, Université Catholique de Louvain). Written informed consent of patients in the retrospective group (group 1) of the study was obtained for retrospectively reprocessing their imaging data extracted from the institutional PACS. Written consent was also obtained from patients in the prospective group (group 2) for repeating the T2-W acquisition twice, before and after CA perfusion.
Inclusion criteria, patients' groups, and study design
The inclusion criteria were as follows for the two (see below) patient groups: (i) a definite diagnosis of MS according to the 2010 revised McDonald criteria for dissemination in space (DIS) and dissemination in time (DIT), (ii) a relapsing-remitting disease course, (iii) the presence of enhanced inflammatory lesions on CE T1-W images at the time of inclusion, and (iv) the absence of any other co-existing neurological disorder. The study included two distinct groups of patients. Patients in the retrospective group 1 came from routine clinical practice, in which patients receive intravenous injections of CA at a standard dose of 0.1 mmol·kg⁻¹ of gadobenate dimeglumine (MultiHance, Bracco Imaging Europe, Wavre, Belgium) outside the MR system. The timing of CA administration is synchronized with the end of the examination of the preceding patient. The MS patient is then introduced into the MR system almost immediately after CA perfusion. A standardized protocol is then applied, with T2-W, FSE-FLAIR, and DWI sequences being acquired before the acquisition of CE T1-W images. A constant delay ranging from 10 to 15 minutes between CA perfusion and T1-W image acquisition is therefore obtained. Twenty-one patients were extracted from the clinical database and PACS; 44 contrast-enhanced lesions, 37 unenhanced lesions and 44 regions of interest (ROI) in NAWM were delineated on the CE T1-W images of patients in group 1. Patients in the prospective group 2 had a different examination protocol. An intravenous access line was installed before the examination. An initial pre-contrast T2-W sequence was acquired before CA perfusion. After CA perfusion of 0.1 mmol·kg⁻¹ of gadobenate dimeglumine, followed by a 30 mL saline flush at a rate of 2 mL·s⁻¹ with an automated power injector, a two-minute pause was observed, and a protocol similar to that of group 1 was thereafter applied, including a repeated post-contrast T2-W sequence at the start. Both groups of patients thus had post-contrast T1-W images acquired within a similar delay of 10 to 15 minutes after CA perfusion.
Nine patients were recruited in this group, in which 14 contrast-enhanced lesions were delineated on CE T1-W images. TA was then performed on both pre- and post-contrast T2-W data. The rationale for recruiting two different groups of patients was the concern that TA was performed on post-contrast T2-W data in the main retrospective group 1. Since T2-W images were unaffected by CA perfusion at visual examination, the a priori hypothesis was that texture parameter values should also be unaffected by CA perfusion. To verify this hypothesis, the second, prospective validation group 2 was subsequently recruited.
Image analysis
MR images of all patients (groups 1 and 2) were consensually reviewed by a senior resident and an experienced neuroradiologist (1 and 25 years of experience, respectively). MS lesions were categorized as enhanced or non-enhanced from the analysis of CE T1-W images. For each lesion, the slice with the largest cross-sectional dimensions was selected. The region of interest corresponding to the whole lesion was manually segmented on the T2-W image at a similar slice location (Fig 1). A contralateral mirrored ROI in NAWM was generated thereafter. Only lesions with homogeneous enhancement of 5 mm in diameter or more (along the long axis) were considered for analysis. Prior to the calculation of texture and ADC parameters, DWI was spatially registered with the T2-W images using a rigid transformation [24], thereby replicating the ROIs drawn on the T2-W images on the diffusion-weighted ones, which resulted in an anatomical match between texture and ADC ROIs. The visual texture of the ROIs was analyzed using the gray level co-occurrence matrix (GLCM) and the run length matrix (RLM) [17,25]. From the GLCM, nine texture parameters describing the gray levels' interdependence in the image matrix were estimated. The computation parameters were: a distance of one pixel between two neighboring pixels, averaging of the angular relationships over the four main directions, and a gray-level depth of five bits (32 levels). From the RLM, eleven texture parameters describing the distribution of runs of gray levels in the image were estimated with the same computation parameters. The mean value (over all pixels in the ROI) of each texture parameter was calculated. The list of parameters is given in Table 1.
Statistical analysis
Parameter values were expressed as the mean ± standard deviation. In the first analysis, texture parameters and ADC values within both enhanced and unenhanced MS lesions were compared with those in mirrored ROIs in NAWM. A Wilcoxon signed rank test was used as a non-parametric test because the normality of the data distribution was not verified by the D'Agostino-Pearson test. A Bonferroni-type correction for performing three comparisons was applied, and a p-value < 0.0167 was therefore considered statistically significant. In the second analysis, the performance of individual parameters in the discrimination of enhanced vs unenhanced lesions was assessed by non-parametric receiver operating characteristic (ROC) curve analysis. Performance was interpreted as follows: AUC < 0.7 = poor, 0.7 ≤ AUC < 0.8 = fair, 0.8 ≤ AUC < 0.9 = good, 0.9 ≤ AUC ≤ 1.0 = excellent. Parameters were ranked according to their performance by comparing the areas under the ROC curves (AUC).
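As a concrete illustration of the first two analyses, the short Python sketch below applies a Wilcoxon signed-rank test to paired lesion/NAWM values with the Bonferroni-adjusted threshold of 0.0167, then computes the AUC of a single parameter for enhanced-vs-unenhanced discrimination and assigns the performance band defined above. All arrays are synthetic placeholders, not study data.

```python
# Sketch of the per-parameter statistics (synthetic data, not study data).
import numpy as np
from scipy.stats import wilcoxon
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)

# Paired values of one texture parameter: lesion ROI vs mirrored NAWM ROI.
lesion = rng.normal(1.2, 0.2, size=44)
nawm = rng.normal(1.0, 0.2, size=44)
stat, p = wilcoxon(lesion, nawm)
print(f"Wilcoxon p = {p:.4f}, significant: {p < 0.0167}")  # Bonferroni-type

# ROC AUC of the same parameter for enhanced (1) vs unenhanced (0) lesions.
values = np.concatenate([rng.normal(1.3, 0.2, 44), rng.normal(1.1, 0.2, 37)])
labels = np.concatenate([np.ones(44), np.zeros(37)])
auc = roc_auc_score(labels, values)
band = ("poor" if auc < 0.7 else "fair" if auc < 0.8
        else "good" if auc < 0.9 else "excellent")
print(f"AUC = {auc:.3f} ({band})")
```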
In the third analysis, texture parameters and ADC were combined. Three multi-parametric classifiers were tested: linear discriminant analysis (LDA) [26], logistic regression (LR) models [27], and partial least squares (PLS) models [28]. As one cannot know a priori how many and which parameters play a significant role in the classification of MS lesions, all possible combinations of 2 to 21 of the 21 parameters (20 texture parameters plus the ADC parameter, constituting the independent variables of the analysis) were successively submitted to the classifiers. No variable reduction technique was used. To estimate how accurately the classifiers would perform in practice, leave-one-out cross-validation was applied [29]. The percentage of correctly classified enhanced lesions defined the classifier sensitivity (Se) and the percentage of correctly classified unenhanced lesions defined the classifier specificity (Sp). Se and Sp were used to identify the set of parameters yielding the best classification models of enhanced lesions. All calculations were carried out with Matlab (Matlab R2011b, MathWorks, Natick, MA, USA) and R (R Project for Statistical Computing, http://www.r-project.org/). The open source codes "KeyRes-Technologies" and "grayrlmatrix" under Matlab were used for computing the texture parameters. The software ImageJ (http://rsbweb.nih.gov/ij/) was used for the segmentation of the ROIs.
Texture within NAWM vs MS lesions
Texture parameters and ADC values are given in Table 2 together with the significance levels of the statistical differences. Differences between enhanced lesions and NAWM, like those between unenhanced lesions and NAWM, were statistically significant (p < 0.0167) for all parameters except the texture parameter LRHGE.
Performance of individual texture parameters
AUC values, and the sensitivity and specificity of selected cut-offs, are given in Table 3, while the ROC curves are displayed in Fig 2. ROC analysis showed that the performance of texture parameters ranged from poor (AUC sum variance = 0.638) to good (AUC RLN = 0.835). Individually, three parameters (Sum variance, LRHGE, ADC) did not perform better than a random classifier (p(AUC > 0.5) > 0.0167). A comparison of the AUCs of parameters with a performance rated at least 'good' did not yield any statistically significant difference (p > 0.384, regardless of the comparison). A clear-cut ranking of these parameters according to their performance was therefore impossible, as was, consequently, the identification of the best performing parameter.
Performance of multi-parametric models
In the retrospective patient group (group 1), the best model from LDA classified enhanced lesions correctly in 31/44 cases (Se = 70%) and unenhanced lesions in 28/37 cases (Sp = 76%), relying on eight texture parameters (Entropy, Correlation, Sum Variance, SRE, LRE, RLN, RP, SRHGE) (Fig 2). The best model from PLS classified enhanced lesions correctly in 39/44 cases (Se = 88%) and unenhanced lesions in 30/37 cases (Sp = 81%), relying on two different sets of six texture parameters: either the combination (Correlation, Inverse Difference Moment, Sum Variance, GLN, RLN, LRHGE) or the combination (Energy, Contrast, Correlation, Inverse Difference Moment, GLN, LRHGE). According to the Youden index, the best model was based on LR and relied on ten texture parameters (Entropy, Homogeneity, Inverse Difference Moment, Difference Variance, LRE, RLN, RP, LGRE, SRHGE, LRLGE), through which enhanced lesions were classified correctly in 38/44 cases (Se = 86%) and unenhanced lesions in 31/37 cases (Sp = 84%).
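To make the cross-validated evaluation concrete, the following minimal Python/scikit-learn sketch (not the authors' Matlab/R pipeline) reproduces the leave-one-out loop that yields sensitivity and specificity figures of the kind reported above; the feature matrix is a synthetic placeholder with the study's class sizes (44 enhanced, 37 unenhanced lesions).

```python
# Leave-one-out cross-validation of a logistic regression classifier
# (sketch with synthetic features; 44 enhanced vs 37 unenhanced lesions).
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import LeaveOneOut

rng = np.random.default_rng(2)
X = np.vstack([rng.normal(0.5, 1.0, (44, 10)),   # enhanced lesions
               rng.normal(0.0, 1.0, (37, 10))])  # unenhanced lesions
y = np.concatenate([np.ones(44), np.zeros(37)])

preds = np.empty_like(y)
for train_idx, test_idx in LeaveOneOut().split(X):
    model = LogisticRegression(max_iter=1000)
    model.fit(X[train_idx], y[train_idx])        # fit without the held-out case
    preds[test_idx] = model.predict(X[test_idx]) # predict the held-out case

sensitivity = (preds[y == 1] == 1).mean()  # correctly classified enhanced
specificity = (preds[y == 0] == 0).mean()  # correctly classified unenhanced
print(f"Se = {sensitivity:.0%}, Sp = {specificity:.0%}")
```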
The best performing logistic regression model can be written as F(z) = e^z / (1 + e^z), where F(z) is the probability of presence of the characteristic of interest and z is the linear combination of the ten selected texture parameters. It should be noted that LDA, LR or PLS models relying on other combinations and/or a larger number of parameters did not improve the classification.
Table 2. Mean values (± standard deviation) of the texture parameters and the ADC parameter. The highly significant p-values demonstrate that the texture within NAWM is different from that within MS lesions (enhanced or unenhanced), suggesting differences in the actual structure of the two tissues.
Entropy was found to be higher in enhanced lesions than in unenhanced ones, suggesting that the randomness of the gray levels was higher. This was confirmed by the lower Homogeneity and Energy in this type of lesion. Overall, this may suggest that the histologic substrate of enhancing lesions is more heterogeneous; an assumption that, however, needs to be confirmed on experimental models allowing comparison between texture patterns and the anatomopathological substrate. Finally, the LR model previously identified as the best classification model was applied to the prospective patient group (group 2) of the study. Enhanced lesions were correctly classified as active lesions in 14/14 cases (Se = 100%), whether they were characterized with pre-contrast or with post-contrast texture parameters, thereby demonstrating that (i) CA perfusion has no substantial effect on texture parameters computed from T2-W images, and (ii) the 10-texture-parameter model enabled identification of enhanced lesions with a high sensitivity.
Discussion
The first observation drawn from the study was that all but one of the texture parameters were significantly different within white matter (WM) lesions compared with normal appearing white matter (NAWM) on the T2-W images. This observation confirmed the ability of the technique to discriminate between normal and diseased WM and was consistent with previously published results in the field [20,30-32]. It also supported the assumptions that (i) texture parameters are suitable for brain tissue classification and that (ii) texture parameters can be used to evaluate local changes in the MR appearance of the white matter, e.g. for monitoring the disease process. The second observation from the study was that the performance of eight of the individual texture parameters was evaluated as 'good' for differentiating between enhanced and unenhanced lesions. However, these mono-parametric models displayed high specificity but only fair sensitivity, thereby precluding accurate identification of acute inflammatory enhanced MS lesions. In turn, multi-parametric models based on texture parameters from T2-W MR images enabled differentiation between enhanced and unenhanced lesions with high sensitivity.
Table 3. Performance of individual parameters in differentiating between enhanced and unenhanced MS lesions, assessed by non-parametric receiver operating characteristic (ROC) curves. The significant p-values show that individual texture parameters are able to differentiate between the two types of MS lesions. Eight texture parameters displayed a level of individual performance that was at least 'good'; none of them performed significantly better than the others.
We therefore confirmed the results reported by Yu et al [21], using an updated MR technique and a larger data set, and testing different statistical classifiers for the decision rule. The performance level of our analysis appeared lower (Se = 86% / Sp = 84% based on LR) than that reported in Yu's study (Se = 88% / Sp = 96%). We assumed that the differences in MR protocols (3.0T vs 0.28T, in-plane spatial resolution 0.45 x 0.45 mm vs 1 x 1 mm, slice thickness 3 mm vs 6 mm, higher homogeneity of the RF field with the Achieva system, higher signal-to-noise ratio with the 32-channel receive-only SENSE head coil) yielded improved image quality of the T2-W images [33], which in turn affected texture parameter values and TA performance. The second reason for the difference in performance may arise from the difference in sample size and the absence of cross-validation in Yu's study, though such validation is mandatory to obtain an unbiased estimate of the predictive accuracy. The use of techniques such as cross-validation, bootstrapping or Bayesian confidence intervals should be systematic in such studies to obtain a reliable assessment of a classifier's performance, which is both useful for estimating the relevance of the working hypothesis and mandatory for clinical implementation. The ADC parameter was not demonstrated to contribute significantly to the identification of inflammatory lesions as defined by enhancement on post-contrast T1-W images. Microstructural tissue damage in MS leads to an overall reduction of the biological barriers of highly anisotropic healthy brain tissue. Disorganization and barrier breakdown (e.g. of myelin) theoretically lead to an increase in free water diffusivity within damaged tissue compared to the contralateral NAWM of patients or the normal WM of healthy subjects. Several studies have confirmed an increase in water diffusivity within MS lesions [3,34-36], resulting from an increase in 'free' extracellular space caused either by extracellular edema at the acute inflammatory phase, or by demyelination at the chronic phase. However, the assumption that acute inflammation within enhanced lesions could display significantly different ADC values (or mean diffusivity values when diffusion tensor imaging is used) than the chronic gliotic/demyelinated scar tissue of unenhanced lesions remains unverified, and the reasons for the variability of ADC value changes within enhanced lesions remain controversial [37,38]. Complex and mixed transient pathophysiological mechanisms, such as acute inflammation, ongoing demyelination and possibly secondary remyelination, may compete and modify changes in diffusivity in one direction or another. There are methodological limitations to our study. This study is mainly retrospective, using clinical material drawn from routine practice. The sensitivity of TA was assessed on a limited number of lesions. Therefore, while our first set of data served for model learning, a larger set of patient data would be required to validate the performance of the model, and to confirm that CA administration has no substantial impact on texture parameter values in an additional group of patients accepting repeated pre- and post-contrast T2-W acquisitions in the same imaging session. Further tests in machine learning should be carried out, since classifiers other than those tested in this study can be implemented, with a potential impact on the structure and performance of the model [39].
Several methods of texture analysis (2D or 3D) exist, from which numerous texture parameters can be derived [8,11,17,26,40,41]. None of these approaches is superior to the others, since their effectiveness basically relies on the visual properties of the images to which they are applied and on the task performed. Combining various texture parameters may improve the characterization of MS lesions, as demonstrated by our data. However, increasing the number of parameters involves the use of variable reduction techniques prior to classification and the use of sophisticated machine learning classifiers, as well as larger training datasets. All these requirements may delay the routine clinical applicability of the processing. Finally, although the ADC parameter was not useful in identifying enhanced lesions, other diffusion measurements, such as fractional anisotropy, which has been reported to be significantly lowered in active lesions [42], could demonstrate relevance here. In conclusion, this study provides additional evidence that texture analysis of T2-W MR images may be relevant in the identification of brain inflammatory activity in MS patients. These results are promising enough to trigger further investigation. Additional recruitment and tests are being performed to validate the structure and performance of the model. Such a fully automated post-processing method, implemented within a computer-aided diagnosis (CAD) system for clinical use, could allow the innocuous and non-invasive detection of subtle changes in texture properties within the white matter during relapses, and the monitoring of the overall MS disease process.
Author Contributions
Conceived and designed the experiments: NM TD. Performed the experiments: DR GM. Analyzed the data: NM AG CS. Contributed reagents/materials/analysis tools: NM AG. Wrote the paper: NM DR GM CS TD.
2018-04-03T00:00:40.077Z
2015-12-22T00:00:00.000
{ "year": 2015, "sha1": "20e5fa1a908c5935443c6f2267c1f0728c1f6e38", "oa_license": "CCBY", "oa_url": "https://journals.plos.org/plosone/article/file?id=10.1371/journal.pone.0145497&type=printable", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "20e5fa1a908c5935443c6f2267c1f0728c1f6e38", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
114128958
pes2o/s2orc
v3-fos-license
Reasons for changes in the value of unit pressure of compression products supporting external treatment
The paper presents the basics of modelling compression products with intended values of unit pressure for body circumferences with fixed and variable radii of curvature. The derived relationships, referring to the circumferential dimensions of the fabric in the relaxed state of the product, were based on Laplace law, local values of the radius of curvature, and the stretching and relaxation (deformation) characteristics of the knitted fabric, described by experimental relations for the stress and relaxation phases of the 6th hysteresis loop, taking confidence intervals into account. The article indicates the possibilities of using 3D scanning techniques of the human body to identify the radii of curvature of the various circumferences of the human silhouette for which the intended value of unit pressure is designed, and the quantitative changes in body deformation due to compression. The classic method of modelling and designing compression products, based on a cylindrical model of the human body, does not in every case provide the intended value of unit pressure according to specific normative requirements, because it neglects the effect of different values of the radius of curvature of the body circumference and the viscoelastic properties of the knitted fabrics. The model and experimental research allowed for a quantitative and qualitative assessment of the reasons for the changes in the value of unit pressure of compression products supporting the process of external treatment.
Introduction
Compression therapy is a method of supporting external treatment used, among others, in cases of varicose veins, lymphedema, and post-burn and post-surgical scars. An important parameter determining the effectiveness of this method is the value of the unit pressure which the product exerts on the user's body. The range of values of this parameter, depending on the type of therapy and the severity of the patient's condition, is determined from the medical point of view [1-3]. The unit pressure can be constant along the whole length of the body part (e.g. in treating post-burn scars) or graded, as in the prevention of varicose veins and lymphatic edema. Compression products used in adjuvant therapy are designed on the basis of a cylindrical model of the body. Deviations from this model apply in particular to the human trunk, where there are circumferences whose curvature differs substantially from that of a circle. In most studies [4-7], the modelling of unit pressure is based on the assumption of a circular geometry of the body circumferences. Ref. [4] presents the results of modelling unit pressure by the finite element method for a cylinder and a cone. In [8], however, the pressure exerted on the legs by the cuffs of socks was evaluated theoretically and experimentally for the actual geometry of the leg circumference. Research on the influence of the radii of curvature of body circumferences on the value of unit pressure is presented in [7].
The aim of this study is to document, on the basis of qualitative and quantitative analyses, the reasons for changes in the value of unit pressure of compression products resulting from the following factors: differences between the cylindrical body model and the actual geometry of the circumference; the deformability of the body and the resulting changes in body shape under the influence of the compression product; and the stress and relaxation characteristics of knitted fabrics related to their rheological properties.
Basis for modelling compression knitted fabrics
Modelling and designing of compression products is based on Laplace law (1), which describes the relationship between the unit pressure P exerted on a cylindrical body model of circumference G1 and the circumferential force F in a stripe of knitted fabric of width s:

P = 2πF / (G1 · s)   (1)

where P is the unit pressure, F the circumferential force in the stripe of knitted fabric of width s, G1 the circumference of the covered body part, s the width of the fabric stripe, G0 the fabric circumference in the free state, and ε the relative elongation of the fabric. In order to design the dimensions of a compression product in the free state with the intended value of unit pressure, it is necessary to know the mechanical characteristics of the knitted fabric in the form of a relation between force and relative elongation:

F = f(ε)   (2)

The relative elongation ε of the knitted fabric on the covered part of the body circumference is described by the well-known dependence (3):

ε = (G1 − G0) / G0   (3)

After substituting equations (2) and (3) into equation (1) and performing the necessary transformation, we obtain dependence (4) for the fabric circumference G0 in the free state as a function of the required value of pressure P and the body circumference G1 along the length of the covered body part, for the given characteristics of the knitted fabric:

G0 = G1 / (1 + f⁻¹(P · G1 · s / 2π))   (4)

Subject of study
Compression products used in the treatment of burns are most often made of plain-stitch warp-knitted fabrics with elastomeric threads, whose stitch is shown in Figure 1. The presented fabric is a three-guide knitted fabric composed of a binding stitch made of textured polyamide silk with a linear density of 78 dtex (76%) and vertical wefts made of polyurethane yarn with a linear density of 480 dtex (24%). The parameters describing the fabric are: course density Pr = 720, wale density Pk = 154, and surface mass G = 244 g/m².
Mechanical characteristics of the knitted fabric
Research on the knitted compression products, performed by the procedure shown in [8], was carried out for a range of relative elongation ε ∈ <0; 1>, in separate stretching ranges enlarged in steps of 0.1 of the relative elongation. For each value of the strain, the test was carried out on 5 rectangular samples with a length of 200 mm and a width of 75 mm, stitched with a flat seam along the shorter side, cut from different locations of the compression fabric and subjected (by the loop method) to stretching and relaxation at a speed of 200 mm/min on a Hounsfield tensile testing machine using needles stabilizing the width of the fabric. For each stretching range, 6 hysteresis loops were performed. Figure 2 shows the changes in the average values of the force as a function of the individual ranges of the strain for the 6th hysteresis loop for the stress and relaxation phases, taking into account, for the subsequent relative elongations, the maximum forces in the stress phase and the minimum forces in the relaxation phase from the confidence intervals.
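To illustrate how design equation (4) is used in practice, the Python sketch below numerically inverts an assumed force-elongation characteristic F = f(ε) to obtain the free-state circumference G0 that yields an intended pressure on a cylindrical segment. The power-law form and coefficient of f are illustrative assumptions, not the measured characteristic of the studied fabric.

```python
# Sketch: free-state circumference G0 for an intended unit pressure,
# per Laplace law P = 2*pi*F / (G1 * s) and design equation (4).
# The fabric characteristic f(eps) is an assumed stand-in, with its
# coefficient chosen so the solved elongation lands near 0.33.
import math
from scipy.optimize import brentq

def f(eps):                        # assumed force-elongation curve, N
    return 40.0 * eps ** 1.2       # illustrative, not the measured fabric

def free_circumference(P, G1, s):
    """P in Pa; G1 (body circumference) and s (stripe width) in metres."""
    F_required = P * G1 * s / (2 * math.pi)       # invert Laplace law
    eps = brentq(lambda e: f(e) - F_required, 1e-6, 1.0)
    return G1 / (1.0 + eps)                       # equation (4)

# Example: 20 hPa (2000 Pa) on a 66.8 cm circumference, 5 cm wide stripe.
G0 = free_circumference(P=2000.0, G1=0.668, s=0.05)
print(f"G0 = {G0 * 100:.1f} cm")   # about 50 cm for these assumptions
```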
The confidence interval for the mean value is defined by formula (5):

x̄ ± Uα · S / √n   (5)

where x̄ is the arithmetic average calculated from an n-element sample, S is the experimental estimate of the standard deviation, and Uα is the value of the random variable U of the standardized normal distribution N(0,1), defined so as to satisfy relation (6):

P(−Uα < U < Uα) = 1 − α   (6)

where, for 1 − α = 0.95, Uα = 1.96 [10]. The significant differences between the values of the forces in the stress and relaxation phases, for the same elongation values, result from the rheological properties of the compression fabric. A qualitative interpretation of these differences can be given on the basis of the standard Zener rheological model, according to which the process of force relaxation is described by relationship (7):

F(t) = ε · [C + C1 · exp(−t · C1/η)]   (7)

where C and C1 are the relative longitudinal rigidities and η is the relative absolute viscosity. Transferring the model interpretation onto the behavior of a compression product during use, it should be noted that the circumferential forces in the fabric will tend towards C·ε, because the expression exp(−t·C1/η) tends to 0 for t → ∞. This explains the significant differences between the values of the forces for the stress and relaxation phases.
3. Influence of the geometry of the circumference on the value of unit pressure
As mentioned above, compression products are designed according to Laplace law. The human body contains numerous circumferences of variable curvature, and therefore the obtained pressure values differ in places where the radius differs from the one assumed while designing the product for a cylindrical model of the body part. Figure 3 depicts the geometry of scanned circumferences selected from the trunk of a female body with and without a compression product. The observed difference in the geometry of the circumferences results from the body's susceptibility to pressure. The measurements were carried out by 3D scanning of the body. In order to determine the effect of various values of the radius of curvature on the value of unit pressure, the values of the radii Rn were determined using the Pythagorean theorem. The radius approximation is based on minimizing the mean square error (the distance between the points and the circle), while the weight of the error at the extreme points is reduced by half. The determined values of the radii of curvature of the given circumference in the range of angles 0÷360°, before and after putting on the compression product, i.e. including the body's susceptibility to pressure, are presented in Figure 4. Figure 5 shows the changes in the unit pressure P for the geometry of the circumference with and without the compression garment. Assuming that the value of the circumferential force is constant along the circumference, we obtain equation (9) describing the value of unit pressure along the circumference:

P = Pint · R / Rn   (9)

where Pint is the intended value of pressure, equal to 20 hPa, Rn are the values of the successive radii of curvature of the circumference, and R is the radius of curvature of the circumference treated as a circle. As a result of the body's susceptibility to pressure, the circumference with the compression clothing was reduced, resulting in a reduction of the relative elongation of the fabric and of the circumferential force. Local changes in the radius of curvature of the circumference in compression clothing and in the circumferential force cause a change of unit pressure (Figure 5).
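The local radii Rn described above come from fitting circles to scanned cross-section points. One simple implementation is the algebraic least-squares (Kasa) circle fit sketched below in Python, with the halved weighting of the extreme points mentioned in the text; the contour points are synthetic, and this is an illustration rather than the authors' exact procedure.

```python
# Sketch: algebraic (Kasa) least-squares circle fit to 2D contour points,
# with the weight of the extreme points halved as described in the text.
import numpy as np

def fit_circle(x, y, w=None):
    """Fit x^2 + y^2 + a*x + b*y + c = 0 by weighted least squares;
    return centre (cx, cy) and radius R."""
    if w is None:
        w = np.ones_like(x)
    A = np.column_stack([x, y, np.ones_like(x)]) * w[:, None]
    b = -(x**2 + y**2) * w
    a1, a2, a3 = np.linalg.lstsq(A, b, rcond=None)[0]
    cx, cy = -a1 / 2, -a2 / 2
    R = np.sqrt(cx**2 + cy**2 - a3)
    return cx, cy, R

# Synthetic arc of scanned points around a 10.6 cm radius, with noise.
theta = np.linspace(0, np.pi / 2, 50)
x = 10.6 * np.cos(theta) + np.random.default_rng(3).normal(0, 0.05, 50)
y = 10.6 * np.sin(theta) + np.random.default_rng(4).normal(0, 0.05, 50)

weights = np.ones(50)
weights[[0, -1]] = 0.5            # halve the weight of the extreme points
print(fit_circle(x, y, weights))  # centre near (0, 0), R near 10.6
```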
4. Changes in the value of unit pressure taking into account the impact of the force function and relative elongation of the knitted fabric for the stress and relaxation phases
In the next stage of the study, the changes in unit pressure occurring at the same relative elongation ε in the stress and relaxation phases were determined. First, the values of relative elongation for the body circumference without compression clothing were calculated according to the procedure currently used in the design of compression products, for an intended unit pressure P = 20 hPa. Then, for the same values of relative elongation, unit pressure values were determined using the function describing the forces in the relaxation phase (10). The results presented in Figure 8 show a significant decline in the value of unit pressure under the influence of stress relaxation. The graph shows that only in the stress phase, and only in those places of the circumference in which the radius of curvature is equal to the radius for which the product was designed, was the intended value of unit pressure achieved; in the relaxation phase this value was not reached.
Fig. 8. Unit pressure values taking into account the force function and relative elongation of the fabric for the stress and relaxation phases. Calculation parameters: G1 = 66.8 cm, R = 10.63 cm, ε = 0.33.
In the last stage of the research, the unit pressure results were generalized to a wide range of circumferences G1. In the first step of the calculations, the values of relative elongation were determined for the stress phase and the intended pressure P = 20 hPa for each circumference G1 ∈ <5; 110> cm. Then, the calculated values of relative elongation were introduced into equations (11) and (12) to calculate the value of unit pressure for the stress phase (11) and for the relaxation phase (12), respectively. The calculated pressure values for the stress and relaxation phases, taking into account the impact of the force function F and relative elongation ε of the knitted fabric and using the determined confidence intervals for both phases, are presented in Figure 9 as a function of the circumference G1, together with the intended value of 20 hPa.
Fig. 9. Changes in the value of unit pressure taking into account the impact of the force function and relative elongation of the fabric, using the determined confidence intervals for the stress and relaxation phases, as a function of circumference G1 and the intended value of 20 hPa.
The values of unit pressure in the relaxation phase are significantly lower than the intended ones, which confirms the significant impact of the relaxation phenomenon on the value of unit pressure.
5. Conclusions
On the basis of qualitative and quantitative analyses, the paper presents the changes in the value of unit pressure exerted by compression products on the user's body resulting from the following factors:
a) the stress and relaxation characteristics of the knitted fabric, resulting from its rheological properties. The study shows that the currently valid procedure of determining the mechanical characteristics, in the form of the relation between force and relative elongation of a knitted fabric limited to the stress phase, leads to values of unit pressure significantly lower than intended, because it does not take into account the relaxation processes occurring during the use of the products, which in the case of many therapies are worn practically all day and over a period of several months.
b) the actual geometry of body circumferences having different radii of curvature.
Design of compression products for circumferences of circular geometry makes it possible to obtain the intended values of unit pressure only in the body areas in which the real radius of curvature equals the radius of the assumed circle, which in some cases may reduce the effectiveness of the therapy. Thus, the local value of the radius of curvature should, in certain cases, be taken into account while designing compression products.
c) the susceptibility of the body to pressure. This factor is the reason for further changes in the values of unit pressure, due to changes in the circumferences and the radii of curvature.
2019-04-15T13:05:58.401Z
2016-07-01T00:00:00.000
{ "year": 2016, "sha1": "b073f8de25956f6f4049372d126334f6bce7c5a5", "oa_license": null, "oa_url": "https://doi.org/10.1088/1757-899x/141/1/012002", "oa_status": "GOLD", "pdf_src": "IOP", "pdf_hash": "4b2cd711a67b5cc2d97b9b3667aa4a9b82a897d4", "s2fieldsofstudy": [ "Engineering" ], "extfieldsofstudy": [ "Physics", "Engineering" ] }
256628274
pes2o/s2orc
v3-fos-license
Balancing new technology: Virtual reality for balance measurement case report
Rationale: Falling and the inability to maintain balance are the second leading cause of unintentional injury deaths globally. There are a number of chronic and acute conditions characterized by balance difficulties, including neurological diseases and sport injuries. Therefore, methods to monitor and quantify balance are critical for clinical decision-making regarding risk management and balance rehabilitation. New advances in virtual reality (VR) technology have identified VR as a novel therapeutic platform. VRSway is a VR application that uses sensors attached to a virtual reality headset and handheld remote controllers for the measurement and analysis of postural stability; it measures changes in spatial location relative to the center of mass and calculates various postural stability indexes. This case report evaluates balance measures in 2 healthy participants with no previous history of balance disorders using the VRSway software application and compares them to the output generated by the current gold standard of balance measurement, force platform technology.
Case Presentation: The primary objective of this case study was to validate the VRSway stability score for the evaluation of balance. Here, we present posturography measures of the VRSway in comparison with force plate readouts in 2 healthy participants. Body sway measurements were recorded simultaneously in both the force plate and VRSway systems. The data calculated by the proprietary software are highly correlated with the data generated by the force plates for each of the following measurements, for participant 1 and participant 2 respectively: sway index (r1 = 0.985, P < .001; r2 = 0.970, P < .001), total displacement (r1 = 0.982, P < .001; r2 = 0.935, P < .001), center of pressure mean velocity (r1 = 0.982, P < .001; r2 = 0.935, P < .001), ellipse radius 1 (r1 = 0.979, P < .001; r2 = 0.965, P < .001), ellipse radius 2 (r1 = 0.982, P < .001; r2 = 0.969, P < .001), and ellipse area (r1 = 0.983, P < .001; r2 = 0.969, P < .001).
Conclusions: Data from this case study suggest that VRSway measurements are highly correlated with the output of force plate technology, indicating that VRSway is a novel approach to evaluating balance measures with VR. More research is required to understand possible uses of VR-based balance measurement in a larger and more diverse cohort.
Introduction
Balance is the ability of the body to maintain the center of mass relative to the base of support, thereby resisting changes of equilibrium. [1] Maintaining balance is achieved by the complex integration and coordination of multiple systems, including the vestibular, visual, auditory, motor, and higher-level premotor systems. [2,3] Falls are the second leading cause of unintentional injury deaths globally, [4] and falling and the inability to maintain balance are major risk factors for adults aged ≥60 years, who suffer the greatest number of fatal falls annually. [4] A variety of chronic and acute conditions are also characterized by balance difficulties, including neurological diseases and sport injuries. [5] Methods to monitor and quantify balance are therefore critical for clinical decision-making regarding risk management and balance rehabilitation. [2,3] The current gold standard for accurate balance measurement is the force platform. Utilizing force measurement technology, force platforms measure ground reaction forces to calculate force development and center of pressure (CoP) positioning.
[6] This reflects the neuromuscular response to movements in the center of gravity and thus closely approximates the center of gravity in slow-moving or static conditions. [7] While this technology undoubtedly provides accurate and consistent force measurements, there are several drawbacks. Specifically, force platforms are expensive to purchase and require frequent calibration and substantial maintenance. In addition, a significant amount of space is required, limiting accessibility for some research and clinical facilities. [8]
Informed consent was obtained from the participants for publication of the details of this case report.
New advances in virtual reality (VR) technology have identified VR as a novel therapeutic platform. Most VR devices consist of a wearable 3D display device comprising a pair of glasses and headphones that are connected to a computer or cell phone. Some VR systems also have accelerometers: sensors that quantitatively measure static and dynamic acceleration in 3D and are common components in digital platforms that require human locomotion monitoring. Thus, VR systems provide computer-generated, multisensory, interactive three-dimensional (3D) experiences that can also serve as a therapeutic distraction. Relatedly, VR systems are under investigation for the management of anxiety and pain in patients undergoing certain medical treatments [9-14] and for stroke rehabilitation therapy. [15-17] Similarly, and in relation to the current case study, VR has been successfully utilized for gait and balance training and rehabilitation. [18,19] Tri-axial accelerometers are a key feature of VR hardware and have previously been used to analyze, monitor and affect postural stability. [20,21] Moreover, the VRSway software was developed with the intention of being used in a variety of active rehabilitation applications, the Sway Balance mobile application most recently having been validated against force platform technology for balance measurements. [20] This case report evaluates balance measures in 2 healthy participants with no previous history of balance disorders using the VRSway software application and compares them to the output generated by force platform technology. Specifically, VRSway is a VR application that uses sensors attached to a virtual reality headset and handheld remote controllers for the measurement and analysis of postural stability; it measures changes in spatial location relative to the center of mass and calculates various postural stability indexes.
Case presentation
The primary objective of this case study was to demonstrate the feasibility of the VRSway stability score for the evaluation of balance. For this reason, participants with preexisting symptoms overlapping with the major symptoms of cybersickness, such as headache, vertigo, ataxia, nausea, or vomiting, or with any neurological disorders that can impact balance, were excluded from this case study. Here, we present posturography measures of the VRSway in comparison with force plate readouts in 2 healthy participants, one 28-year-old male and one 29-year-old female, with no history of any conditions that could affect stability or their ability to use a VR environment. Both participants provided informed consent. Study staff assisted the participants with the placement of the VR equipment, which included a VR headpiece referred to as the head-mounted display (HMD), a right-hand sensor, and a left-hand sensor.
The VR hardware is connected to a computer capable of running the VRSway software for balance assessment output. Both participants were unmarried, spoke Hebrew, had an academic education, and were Jewish. This study was conducted at the Center for Advanced Technologies in Rehabilitation of Sheba Medical Center and was approved by the Institutional Review Board of Sheba Medical Center. In total, participants performed 30 measurements under 6 conditions, which probed the 3 systems involved in balance: visual, vestibular, and sensorimotor. For each condition, participants were asked to stand still on the force plate with the feet pelvic-width apart and parallel and the hip joints in a neutral position (Fig. 1). Body sway measurements were recorded simultaneously by the force plate (Advanced Mechanical Technology Inc.; AMTI, model OR6-7) and the VRSway system under the 6 conditions described in Table 1. Participants were asked to perform 2 sets of tests. The first set of tests included conditions 1 to 4 (detailed above), which were performed first on a stable surface and then on a dynamic unstable surface (Airex Elite Balance Pad, Switzerland). The duration of each test was 30 seconds, with a 1-minute break between tests. The second set of tests included conditions 5 and 6, which incorporated a visual conflict presented in the VR environment. As before, the participants were required to perform the test on a stable surface for 30 seconds and then on a dynamic surface for 30 seconds, but this time with a 2-minute break between tests. Conditions 1, 2, and 5 were measured 6 times for each participant, while conditions 3, 4, and 6 were measured 4 times for each participant. The VRSway results were calculated from the VR system HMD's 6-degrees-of-freedom raw data (X, Y, Z, roll, pitch, yaw) sampled at 90 Hz; data from the right-hand and left-hand sensors were collected but not used for the analysis. The gold standard data were collected from the AMTI force plate, which generates orthogonal raw data (X, Y, Z) at 120 Hz. Both the VRSway and the AMTI force plate output were processed using MATLAB 2016b. For each condition, the Sway index was calculated using an algorithm developed by XR Health specifically for the VRSway software. The correlation between the 2 balance measurement systems was calculated for 6 different sway indexes: Sway index, total displacement, CoP mean velocity, ellipse radius 1, ellipse radius 2, and ellipse area. [6,22-24] A Pearson correlation coefficient was calculated for each participant over all measurements, according to standard practice (Table 2, Fig. 2).
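For illustration, the Python sketch below computes a few of the posturographic indexes listed above (total displacement, CoP mean velocity, and the radii and area of a 95% confidence ellipse) from a CoP trace, and correlates paired readings from two systems with a Pearson coefficient. The definitions follow common posturography practice; the synthetic traces and the chi-square scaling are assumptions, not XR Health's proprietary Sway index algorithm.

```python
# Sketch of common posturographic indexes and a Pearson correlation
# (synthetic CoP traces; not the proprietary VRSway algorithm).
import numpy as np
from scipy.stats import pearsonr

def sway_metrics(x, y, fs):
    """x, y: CoP coordinates in cm; fs: sampling rate in Hz."""
    steps = np.hypot(np.diff(x), np.diff(y))
    total_displacement = steps.sum()                     # path length, cm
    mean_velocity = total_displacement / (len(x) / fs)   # cm/s
    # 95% confidence ellipse from the covariance of the CoP trace:
    eigvals = np.linalg.eigvalsh(np.cov(x, y))
    r2, r1 = np.sqrt(5.991 * eigvals)                    # chi2(2), p = 0.05
    area = np.pi * r1 * r2
    return total_displacement, mean_velocity, r1, r2, area

rng = np.random.default_rng(5)
x, y = rng.normal(0, 0.4, (2, 30 * 90))    # 30 s of HMD-derived CoP @ 90 Hz
print(sway_metrics(x, y, fs=90))

# Correlating one index between two systems across repeated trials:
vr = rng.normal(10, 2, 30)
plate = vr + rng.normal(0, 0.5, 30)        # closely tracking readings
r, p = pearsonr(vr, plate)
print(f"r = {r:.3f}, p = {p:.3g}")
```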
Discussion
This case report demonstrates that the results calculated by the proprietary software are highly correlated with the measurements generated by force plates, currently the gold standard of balance measurement. Of note, the correlation coefficients for all 6 tested measurement indexes indicate a strong level of predictability between the 2 balance measurement platforms. Given that this is a small case study of 2 healthy participants, the next phase of this research requires validation of these results in a larger cohort including both healthy participants and participants with balance disorders. This will provide more insight into how VR balance measurement can provide gains in mobility, functional independence and quality of life to a variety of individuals. While the current data surrounding VR-based therapies and rehabilitation remain controversial, there is mounting evidence that combining conventional treatments with VR-augmented therapies improves patient outcomes. [17,25-28] Moreover, the fact that VR platforms can provide treatment without the patient leaving the comfort of home is a major advantage in providing medical care to patients in more remote, less accessible areas of the world, and takes the concept of telehealth to the next level. Given that falls are a leading cause of unintentional injury deaths globally, particularly in individuals ≥60 years of age, there is a large area of unmet need that VRSway technology may be able to fill, both as a potential VR-based balance therapy and as a preventive medical application. [18] In addition to being highly correlated with force plate balance measurements, VRSway technology is less expensive to implement. Specifically, the training, set-up and maintenance of the current force plate platforms require specialized personnel to install, calibrate and maintain them. The force plates also require a significantly sized footprint, which limits the number of research facilities and medical centers able to accommodate such a system. VR-based technology, on the other hand, requires little space and is, as a result, becoming commonplace in medical settings. This technology does not require an official installation, and it offers remote training and infrequent, user-friendly calibration that does not require specialized personnel. Another advantage of VRSway technology is the ability to create a visual conflict inside the VR platform by defining only the parameters of the visual conflict, such as how fast the screen moves up-down-right-left and for how long. Until recently, visual conflict could only be introduced with heavy and expensive machines. [29-32] A limitation of this method of balance measurement is the learning curve of the VR headset, particularly for those not comfortable with new technology. Other common VR limitations, such as motion sickness, eye fatigue, disorientation, nausea and neurologic conditions like epilepsy, must also be taken into consideration when implementing this method of balance measurement. However, upfront training and education of study staff, participants, patients and caregivers on how to access and utilize the VR headset can minimize the learning curve. Moreover, training on the VRSway platform for larger-scale use requires less time, fewer specialized personnel, and less calibration and maintenance than the current gold standard force plate platform. In addition, the VRSway platform may allow for greater accessibility across clinical specialties and expand the ability to identify specific patient populations that may benefit from balance interventions and therapies. Taken together, data from this case study identify VRSway as a novel approach to evaluating balance with VR; more research is required to understand possible uses of VR-based balance measurement in a larger and more diverse cohort.
Table 1 (excerpt). Condition 5: visual conflict on a firm surface (some vision present, but the information conflicts with vestibular information); this condition brings in more vestibular and somatosensory inputs. Condition 6: visual conflict on a dynamic surface (some vision present, but the information conflicts with vestibular information); this condition evaluates the mediation of vision with and without vestibular and somatosensory inputs.
Table 2. Correlations between VRSway and force plate balance measures.
2023-02-08T06:17:50.873Z
2023-02-03T00:00:00.000
{ "year": 2023, "sha1": "9d54be59c2ae6b9222108081d080dea83b5eeeea", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "ScienceParsePlus", "pdf_hash": "a6b3986eb26f515cee165ba6b32fb35392298826", "s2fieldsofstudy": [ "Medicine", "Engineering" ], "extfieldsofstudy": [ "Medicine" ] }
55827957
pes2o/s2orc
v3-fos-license
Course of hatch and developmental changes in thyroid hormone concentration in blood of chicken embryo following in ovo riboflavin supplementation
The influence of riboflavin on the function of the hypothalamo-pituitary-thyroid axis during chicken embryogenesis is poorly understood. Therefore, examination of the effects of in ovo riboflavin supplementation on the possible linkage between thyroid gland function and chick embryonic development seems to be of interest. Eggs on the sixth day of incubation were injected with 0 (control), 60, or 600 µg of riboflavin/egg. Blood samples were collected on the 12th, 15th, 18th, and 20th days of embryogenesis. Thyroxine (T4) and triiodothyronine (T3) concentrations in plasma samples were determined by radioimmunoassay. The time of external pipping and hatching of each chick, as well as the body weights of sampled and newly hatched chicks, were recorded. Generally, riboflavin supplied in ovo did not affect chicken embryo mortality or hatchability; however, the dose of 600 µg of riboflavin/egg tended to reduce embryo body weight. Chicks exposed to 60 µg of riboflavin/egg hatched 3.7 h earlier than controls and were characterized by a higher degree of hatching synchronization. Both applied doses of riboflavin significantly elevated T4 concentrations in the blood plasma of the chicken embryos; however, on day 20 of embryogenesis, both doses decreased T3 levels in the circulation. The data presented here suggest that riboflavin supplementation at the early stages of embryogenesis markedly affects embryonic development and influences thyroid hormone metabolism during the second half of embryogenesis.
In vertebrates, flavin deficiencies lead to diseases such as glossitis, cheilosis, and organic acidurias (4). A deficiency of riboflavin is very rare in adult birds; however, its disappearance from the organism is associated with overall hypovitaminosis (5-7). In avian species, riboflavin is required for proper embryonic development; therefore, it is accumulated in the developing chicken egg in an amount of about 300 µg per egg (8). Riboflavin deficiency is often related to mutations in the Rd gene encoding a riboflavin-binding protein (RfBP), which is responsible for the deposition of riboflavin in eggs (9). In rd/rd mutants, the effects of deficiency become apparent after day 10 of incubation (5). Embryo death occurs suddenly around day 13 of incubation and is the result of the inhibition of the activity of flavin-dependent enzymes, hypoglycemia, and impaired fatty acid oxidation. The adverse effects of the Rd gene mutation can be abolished by in ovo administration of free riboflavin or FMN, but not of RfBP (10). Riboflavin-supplemented embryos survived and developed properly, but those injected with RfBP died. This indicates that unbound riboflavin injected into the egg white can be used by the developing embryo to ensure its proper development, while injected apo-RfBP is detrimental. The fates of the embryos depended on the relative amounts of the injected riboflavin and RfBP; an excess of the latter diminished the availability of riboflavin to the chicken embryos, leading to their mortality (10). In humans and other mammals, the relationship between plasma concentrations of riboflavin and thyroid hormones (THs) has already been described. It has been established that 3,3',5-triiodo-L-thyronine (T3) regulates flavocoenzyme biosynthesis by determining the activities of flavocoenzyme-forming enzymes (11,12). Possibly, low concentrations of B vitamins adversely influence hypothalamic, pituitary, and/or thyroid gland functions (12). The chicken embryo is the most useful and sensitive model for investigations of drugs, xenobiotics, and vitamins (5,13-16). Nevertheless, in the available literature there are no data showing the influence of riboflavin on the function of the hypothalamo-pituitary-thyroid (HPT) axis during chicken embryogenesis. Taking into consideration that THs play a crucial role during chicken embryogenesis and affect the time of hatching and the length of incubation (17,18), the present study was designed to answer the question of whether supplementation of the chicken embryo with riboflavin may change 3,3',5,5'-tetraiodo-L-thyronine (thyroxine; T4) and T3 levels in the blood circulation. Consequently, the effects of this vitamin on body and organ weights, as well as on the hatching parameters of embryo mortality and the timing of external pipping and hatching (during the last days of incubation), were determined in order to find a possible linkage between thyroid gland function (as affected by riboflavin) and embryonic development.
Experimental design
The experimental and animal procedures were approved by the Local Animal Ethics Committee in Krakow. Eggs of the parental broiler line Ross 308 (Aviagen) were used in the experiment. Before the start of incubation, eggs were numbered and weighed. Incubation was carried out in an incubator (Massales 65, Spain) under standard conditions (temperature of 37.8 °C; relative humidity of 50%), and eggs were turned once an hour through an angle of 90° from day 1 to day 18 of incubation (E1-E18). During incubation, eggs were candled on days 6, 8, and 18 (E6, E8, and E18). At the E6 candling, unfertilized eggs and those with dead embryos were discarded, while at the E8 candling, embryos that had died after the injection were removed. On E18, candled eggs with evidence of living embryos were transferred from the turning trays into hatcher baskets, and incubation was continued at 37.2 °C and 55%-65% relative humidity. The sixth day of incubation was considered optimal for carrying out the in ovo supplementation, according to previous experiments and publications (13-15), in order to reduce the sensitivity of the embryo to manipulation and to ensure the action of the administered substances over the longest possible period. Eggs candled on E6 with living embryos (n = 600) were divided into 3 equal groups, which were injected with riboflavin at doses of 0 (control), 60, or 600 µg/egg. The lower dose of riboflavin was based on the publication of Lee and White (10) as the minimum quantity of this vitamin sufficient for the proper development of the chick embryo, while the higher dose exceeds by about 2 times the amount of riboflavin contained in a chicken egg (8). Riboflavin (R9504, Sigma, USA) was dissolved in 50 µL of sterile 0.7% NaCl solution (Polpharma SA, Poland). Before the injection, the shell at the injection site was disinfected with 70% ethanol and a window (diameter of about 5 mm) was made in the eggshell. The injection was performed using a pipette with 100 µL volume tips, via the air chamber, at a depth of 5 mm under the chorioallantoic membrane into the albumen, without injuring the blood vessels or the amnion. The riboflavin solution was stored in a light-impervious vessel.
It has been established that 3,3',5-triiodo-L-thyronine (T 3 ) regulates flavocoenzyme biosynthesis by determining the activities of flavocoenzyme-forming enzymes (11,12). Possibly, low concentrations of B vitamins adversely influence the hypothalamus, pituitary gland, and/or thyroid gland functions (12). The chicken embryo is the most useful and sensitive model for investigations of drugs, xenobiotics, and vitamins (5,13–16). Nevertheless, in the available literature there are no data showing the influence of riboflavin on the function of the hypothalamo-pituitary-thyroid (HPT) axis during chicken embryogenesis. Taking into consideration that THs play a crucial role during chicken embryogenesis and affect the time of hatching and the length of incubation (17,18), the present study was designed to answer the question of whether supplementation of the chicken embryo with riboflavin may change 3,3',5,5'-tetraiodo-L-thyronine (thyroxine; T 4 ) and T 3 levels in blood circulation. Consequently, in this study the effects of this vitamin on body and organ weight, as well as the hatching parameters of embryo mortality and timing of external pipping and hatching (during the last days of incubation), were determined to identify a possible linkage between thyroid gland function (as affected by riboflavin) and embryonic development. Experimental design The experimental and animal procedures were approved by the Local Animal Ethics Committee in Krakow. Eggs of the parental broiler line Ross 308 (Aviagen) were used in the experiment. Before the start of incubation, eggs were numbered and weighed. Incubation was carried out in an incubator (Massales 65, Spain) under standard conditions (temperature of 37.8 °C; relative humidity of 50%), and eggs were turned once an hour at an angle of 90° from day 1 to day 18 of incubation (E1–E18). During incubation, eggs were candled on days 6, 8, and 18 (E6, E8, and E18). During E6 candling, eggs with unfertilized and dead embryos were discarded, while during E8 candling, embryos that died after injection were removed. On E18, candled eggs with evidence of living embryos were transferred from the turning trays into hatcher baskets and incubation was continued at 37.2 °C and 55%-65% relative humidity. The sixth day of incubation was considered optimal for carrying out in ovo supplementation according to previous experiments and publications (13–15), in order to reduce the sensitivity of the embryo to manipulation and to ensure the action of the administered substances for the longest period. Eggs candled on E6 with living embryos (n = 600) were divided into 3 equal groups, which were injected with riboflavin at doses of 0 (control), 60, or 600 µg/egg. The lower dose of riboflavin was based on the publication of Lee and White (10) as the minimum quantity of this vitamin sufficient for the proper development of the chick embryo, while the higher dose is about twice the amount of riboflavin normally contained in a chicken egg (8). Riboflavin (R9504, Sigma, USA) was dissolved in 50 µL of sterile 0.7% NaCl solution (Polpharma SA, Poland). Before the injection, the shell at the injection site was disinfected with 70% ethanol and a window was made in the eggshell (diameter of about 5 mm). Injection was performed using a pipette with 100 µL volume tips via the air chamber at a depth of 5 mm under the chorioallantoic membrane into the albumen, without injuring the blood vessels and the amnion. The riboflavin solution was stored in a light-impervious vessel.
After injection, the hole was sealed with Parafilm tape (Sigma) and eggs from every group were subsequently divided into 2 subgroups: 1) a "hatchability" subgroup for checking hatchability and the course of hatch (n = 145 eggs per group) and 2) a "sampling" subgroup for tissue sampling (n = 55 per group). Incubation was then continued under normal conditions. Hatching course and result of hatch analyses All eggs discarded during candling or unhatched from the "hatchability" subgroups were analyzed for the stage of development and malformations. Moreover, on E18, candled eggs from these subgroups with evidence of living embryos were transferred to hatching baskets and used to monitor the course of the hatching process. The process of hatching was checked every 2 h from the 460th hour of incubation. The time of external pipping and the time of hatching of each chick were recorded. At the end of the experiment, the unhatched eggs were broken and the stage of embryonic development was noted (8,19). Blood sampling, body and organ weight recording, and hormone analysis in plasma Blood samples were collected from 12 randomly selected embryos of each "sampling" subgroup on E12, E15, E18 (stage before internal pipping), and E20 (stage of external pipping). Blood was sampled into test tubes with heparin (Coaparin, Polfa Warsaw Ltd., Poland). Plasma samples were kept at -20 °C until hormone determination. After blood collection, each embryo was drained and body weights were recorded. Subsequently, the heart and liver were dissected and weighed. Thyroid hormone T 4 and T 3 concentrations in plasma samples were determined by means of radioimmunoassay using T 4 and T 3 kits (BRAHMS, Germany). The lowest limits of T 4 and T 3 assay sensitivity were 0.7 ng/mL and 0.09 ng/mL, and mean recoveries as performed in our laboratory were 96.3% and 95.0%, respectively. The intra- and interassay coefficients of variation for T 4 and T 3 analysis were 4.0% and 5.3%, and 3.5% and 6.3%, respectively. The cross-reactions of T 4 antibodies with T 3 and rT 3 (3,3',5'-triiodo-L-thyronine) were <0.2% and 5%, respectively, while with other iodothyronines and iodothyronine-like compounds they were below 0.5%. The cross-reaction of T 3 antibodies with T 4 was 0.06%, and with other iodothyronines and iodothyronine-like compounds it was below 0.2%. Statistical analyses Results of the pipping and hatching courses were presented as medians and means ± SDs and were analyzed by the Kruskal-Wallis one-way analysis of variance (ANOVA) on ranks, as the data failed the normality test. Data of each group were fitted with a linear regression y = a + bx, where y is the percentage of pipped/hatched chicks; x is the incubation hour; a is the intercept, i.e. the estimated start of the pipping or hatching process; and b is the slope, i.e. the degree of synchronization of the pipping or hatching process, expressed as the time (h) necessary to pip or hatch 1% of the chicks (14,15). The hatchability and mortality data were statistically analyzed by a z-test, while the results of TH concentrations as well as body and organ weight were studied by 2-way ANOVA, followed by Tukey's multiple range test. The statistical analyses were performed using Sigma-Stat 2.03 (SPSS, USA), while figures were made with Grapher 8.0 (Golden Software Inc., USA).
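The regression just described is straightforward to reproduce. The sketch below (Python, with invented cumulative-hatch numbers rather than data from this study) takes the parameter descriptions at face value, regressing incubation hour on cumulative percentage so that the intercept estimates the start of the process and the slope gives the time needed per 1% of chicks:

import numpy as np
from scipy.stats import linregress

# Invented course-of-hatch data: cumulative % hatched, checked every 2 h
hours = np.array([480.0, 482.0, 484.0, 486.0, 488.0, 490.0, 492.0])
pct_hatched = np.array([2.0, 10.0, 25.0, 45.0, 68.0, 85.0, 96.0])

# Regress incubation hour on cumulative percentage: the intercept then
# estimates the start of hatching (h) and the slope is the time (h)
# needed to hatch 1% of the chicks.
fit = linregress(pct_hatched, hours)
print(f"estimated start of hatching: {fit.intercept:.1f} h")
print(f"synchronization: {fit.slope * 60:.1f} min per 1% of chicks")

With these invented numbers the slope works out to about 7 min per 1% of chicks, the same order of magnitude as the 7.2–13.8 min values reported for Table 2 below.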
Because the radioimmunoassay revealed that there were no significant differences in plasma TH levels between male and female embryos during the incubation process, which is in agreement with previous findings (20), the data from both sexes were combined. The results were presented as mean ± SEM and were considered significant at P ≤ 0.05 and highly significant at P ≤ 0.01. Embryo hatchability and mortality Generally, in comparison with the control group, riboflavin supplied in ovo did not affect chicken embryo mortality and hatchability, except at the dose of 60 µg/egg. In this case, many embryos died immediately after manipulation (between E6 and E8), resulting in a decrease in hatchability by 11% (P ≤ 0.05; Table 1). ANOVA revealed that the embryonic body weight (EBW) was significantly influenced by stage of development (P ≤ 0.01) and riboflavin administration (P ≤ 0.05). The first effect was evoked by a gradual increase in EBW from 9.5 ± 0.25 g on E12 to 38.4 ± 0.90 g on E20 in the control group. The second resulted from a significant reduction of EBW, by 11% and 10% (P ≤ 0.05) on E12 and E15, respectively, at the higher dose of riboflavin (600 µg/egg) (Figure 1). The relationship between the EBW, the stage of embryogenesis, and the dose of riboflavin can be described by the following equation: y = −35.496 + 3.606x − 0.003z (R² = 0.927), where y is the body weight of the embryo, x is the day of incubation, and z is the dose of riboflavin. The weight of the liver and the heart gradually increased in all groups during embryogenesis (P ≤ 0.01), but there was no significant effect of riboflavin administration on the weight of these tissues, nor on their relative weight (P > 0.05; data not shown). Course of hatching Riboflavin treatment did not affect the time of external pipping (Table 2). However, statistical analysis of the course of hatching revealed that chicks exposed to 60 µg of riboflavin per egg hatched 3.7 h earlier than controls (P ≤ 0.05; Table 2). This was caused by a hatching period that was 2.1 h shorter (P ≤ 0.05; Table 2). Moreover, chicks from this group demonstrated a higher degree of hatching synchronization in comparison with the controls (P ≤ 0.05; Table 2). Regression equations revealed that 1% of chicks treated with 60 µg of riboflavin needed 11.4 min for pipping and 7.2 min for hatching, while chicks of the control group needed 13.2 and 13.8 min, respectively (Table 2). Administration of riboflavin at a dose of 600 µg did not influence the hatching indicators. Hormone concentration in blood plasma The concentration of T 4 in blood plasma of chicken embryos on E12 and E15 was relatively low; however, it increased significantly from E18 (P ≤ 0.01; Figure 2a). In the control group, on E12 it was 1.88 ± 0.13 ng/mL, and it did not change on E15 (Figure 2a). On E18, the plasma level of T 4 sharply increased, reaching a value 4.9-fold higher in comparison with E15 (P ≤ 0.01). An additional elevation in T 4 plasma concentration was observed on E20, when it was 1.2-fold higher in comparison to E18 (P ≤ 0.05; Figure 2a). Both applied doses of riboflavin significantly elevated T 4 concentrations in the blood plasma of the chicken embryo on E15 by about 60% (P ≤ 0.01). The stimulatory effect of riboflavin was also found on E18 and E20; however, only the higher dose of riboflavin effectively increased T 4 concentrations (by 24% and 23%, respectively; P ≤ 0.05; Figure 2a). On the other hand, the lower dose of riboflavin (i.e.
60 µg/egg) significantly reduced T 4 levels on E12 by 16% (P ≤ 0.05; Figure 2a). In the control group, the plasma concentration of T 3 was 0.90 ± 0.06 ng/mL on E12 (Figure 2b). It was significantly decreased (1.45-fold) on E15 (P ≤ 0.05), and subsequently, on E18, it rose 1.8-fold in comparison to E15 (P ≤ 0.01). A sharp increase (2.9-fold in comparison with E18; P ≤ 0.01) in the T 3 concentration in blood plasma of chicken embryos was observed on E20 (Figure 2b). Discussion The results of this experiment reveal that the injection of riboflavin at the beginning of embryogenesis may disturb the course of chick embryo development. It was found that only the lower dose of riboflavin (i.e. 60 µg) decreased hatchability, by 11%, which was caused by a significant elevation in embryonic mortality (by 20.7%) during the 1–2 days following the manipulation. Taken together with the fact that mortality among the 3 groups did not differ on the following days, this suggests that the observed effect could also be due to an uncontrolled external factor rather than embryo toxicity. This explanation can be supported by the fact that the sensitivity of the chicken embryo to in ovo manipulation is very high at early stages of embryogenesis and decreases gradually during embryonic development. It is thought that disturbance of embryo homeostasis caused by the applied in ovo manipulation is the main reason for mortality (13). Nevertheless, the results of experiments performed by Lee and White (10) indicate that the fate of an embryo depends on the proper ratio of RfBP and free riboflavin. In ovo manipulation might disturb this ratio, and a temporary excess of apo-RfBP might be detrimental to embryos. We may speculate that this is one of the reasons for the mortality of embryos injected with saline or riboflavin in the present experiments. Perhaps the low dose of exogenous riboflavin causes RfBP to diminish its binding properties. Several lines of evidence indicate that a reduction in body weight in chickens during the pre- and post-embryonic periods (6,21) is associated with riboflavin deficiency. In sharp contrast, in our experiment the highest dose of riboflavin (i.e. 600 µg/egg) substantially decreased the weight of the chick embryo. A reduction in weight gain was also observed in growing broiler chickens supplemented in ovo with 1.5 and 3 mg of riboflavin (22). It cannot be excluded that the reduction in body weight following riboflavin treatment is associated with an increase in the basal metabolic rate as a result of elevated activity of flavin-dependent enzymes (i.e. flavoprotein monooxygenases, acyl-CoA dehydrogenases, or cytochrome P450 reductase) (1). Moreover, riboflavin is a cofactor in the conversion of pyridoxine (vitamin B6) to pyridoxal phosphate. It seems likely that an excess of riboflavin accelerates depletion of the available pyridoxine. Deficiency of vitamin B6 can also be a cause of body weight loss in chickens (21). The statistical analysis of the course of pipping and hatching reveals that the elevation in riboflavin availability on E6 may affect the rate of hatching. It can be assumed that this is related to changes in embryonic metabolism associated with alterations in TH concentrations in blood circulation. It is well known that THs play an important role in the development of many systems in all vertebrates, including birds (23,24).
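As a quick numerical check of the body-weight model reported in the Results (a sketch of mine, not code from the paper):

def predicted_ebw(day, dose_ug):
    # Fitted model from the Results: y = -35.496 + 3.606*x - 0.003*z,
    # with x = day of incubation, z = riboflavin dose (ug/egg), y = EBW (g)
    return -35.496 + 3.606 * day - 0.003 * dose_ug

for dose in (0, 60, 600):
    print(f"E15, {dose} ug/egg: {predicted_ebw(15, dose):.2f} g")

At E15 the model gives about 18.6 g for controls versus about 16.8 g at 600 µg/egg, i.e. roughly the 10% reduction reported above.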
The rapid increase in T 3 at the end of incubation is necessary not only for stimulation of growth and differentiation, but also for preparation of the chick for a life outside the egg by regulating processes such as yolk sac retraction, the onset of pulmonary respiration, hatching, and the initiation of endothermic responses (25,26). The low T 3 concentration observed during most of embryogenesis is a result of high activity of D3 deiodinase, which in the liver and kidney converts T 3 to 3,3'-diiodo-L-thyronine (3,3'-T 2 ). A sharp increase in T 3 concentration, which appears slightly later in comparison with the peak of T 4 , occurring during the hatching period, is associated with the transition of the chick embryo from chorioallantoic to pulmonary respiration. It is associated with a decrease in activity of D3 deiodinase and elevation in activity of D1 deiodinase, which metabolizes T 4 to T 3 (23,24). In the present experiment alterations in TH concentration correspond with the changes described above; however, it should be noted that the release of T 4 from the thyroid gland into the blood circulation during hatching (i.e. on E20) in the group treated with riboflavin at a dose of 600 µg/egg was significantly higher in comparison with the control group. On the other hand, at the moment of pipping, the concentrations of T 3 in blood plasma in both experimental groups were significantly lower than in the control group. It can be concluded that the observed changes in T 4 levels are the result of HPT axis activity under the influence of riboflavin and changes in the metabolism of this hormone in peripheral tissues such as the liver. Recently, Grommen et al. (27) revealed that the elevated plasma T 4 levels observed during the last trimester of chicken embryogenesis associated with the increased synthesis and secretion of T 4 by the thyroid gland are caused by the increase in thyroglobulin, sodium/iodine symporter, and thyroid peroxidase mRNA expression. Therefore, the increased concentration of T 4 on E18 and E20 in the group treated with the highest dose of riboflavin might be connected with a direct influence of this vitamin on expression of thyroid-specific genes. Nor can it be excluded that the increase in T 4 concentration with a concomitant decrease in T 3 level could be evoked by the elevation in the iodotyrosine deiodinase (IYD) activity as a result of increased availability of FMN as a cofactor for this enzyme. The IYD is the only flavin-dependent deiodinase that facilitates the recovery of iodide in the thyroid tissue by catalyzing deiodination of mono-and diiodotyrosine (28,29). However, in order to verify this hypothesis, further studies are needed. Because in avian species almost all circulating T 3 is of peripheral origin (24), it can be assumed that the changes in T 3 levels following the riboflavin administration are the result of its impact on the activity of D1 and D3 deiodinases. The decrease in T 3 /T 4 ratio in the blood of embryos treated with vitamin B2 supports this assumption and suggests that riboflavin directly increases the activity of D3 deiodinase in the final stages of embryogenesis. The existence of the specific relationship between riboflavin and thyroid gland function observed in our experiment has already been postulated. In mammals, it has been shown that there is a correlation between the levels of riboflavin and concentration of iodothyronines in the blood circulation. 
In humans and rats with hypothyroidism, FAD levels decrease in the liver; they are similar to those observed in animals fed a vitamin B2-deficient diet. This phenomenon results from the T 4 -related activity of flavokinase, the enzyme responsible for the conversion of vitamin B2 into FMN and FAD. Although in the hyperthyroid state the activity of this enzyme is doubled, there is no increase in the levels of FMN and FAD, as a result of their increased consumption (30). Therefore, hormone replacement therapy in adults with hypothyroidism normalizes the metabolism of riboflavin. However, this treatment in newborns with congenital thyroid dysfunction does not change the levels of vitamin B2 and FMN in the blood, in spite of achieving the appropriate level of iodothyronines (30). In summary, riboflavin administration at the early stages of embryogenesis markedly affects embryonic development and influences thyroid hormone metabolism during the second half of embryogenesis. The negative effects of riboflavin on hatching success in the present experiments seem to be related to the early timing of its administration. Beneficial effects of riboflavin on post-hatch immunity may be expected in birds supplemented in ovo at the final stages of development, as was the case with vitamin E (3). In order to understand the molecular mechanism of riboflavin action in the chicken embryo, more research is needed.
2018-12-07T01:32:28.760Z
2014-05-18T00:00:00.000
{ "year": 2014, "sha1": "3ffafe24f4755a62a3588fc53c81dcee84621f1a", "oa_license": "CCBY", "oa_url": "https://ruj.uj.edu.pl/xmlui/bitstream/handle/item/6103/plytycz_et-al_course_of_hatch_and_developmental_changes_2014.pdf?isAllowed=y&sequence=1", "oa_status": "GREEN", "pdf_src": "ScienceParsePlus", "pdf_hash": "fe2fe3361c35540b27e3384e02d6a8b6e1d9cb92", "s2fieldsofstudy": [ "Biology", "Agricultural And Food Sciences" ], "extfieldsofstudy": [ "Biology" ] }
224930490
pes2o/s2orc
v3-fos-license
Spatio-Temporal Changes of Housing Features in Response to Urban Renewal Initiatives: The Case of Seoul: Over the past two decades, Seoul has been in a transitional period in terms of urban renewal approaches. Housing is a fundamental element of citizens' lives and the built landscape; thus, it deserves thoughtful scrutiny. As such, this study empirically investigates the dynamics of the spatial and temporal characteristics of housing stock within the context of new urban renewal policies in Seoul. A fine-grained and multifaceted analysis shows that the supply of new apartments has decreased over time, revealing that denser housing redevelopment in the inner city has become more difficult. In addition, an exploratory spatial data analysis indicates that although the spatial clustering of old housing units has been reduced, new housing units have become more spatially distributed and outwardly dispersed over time. Since the physical and locational changes of housing stock are closely related to urban renewal initiatives, this study suggests that the city government needs to incorporate the concept of sustainable urban growth management into its housing supply and renewal policies. Introduction The topic of restructuring cities has attracted the attention of many urban scholars and planners. In the early stages of the post-industrial era, urban spatial structures featured a central business district (CBD) consisting of high-rise office buildings, and the inner city provided residential space to working-class communities [1]. Rife with dilapidated, low-rise housing, inner-city areas have long suffered from poor housing services, clustered poverty, and high crime rates [2,3]. While some cities face a continued population march to peripheral areas [4,5], many others have embarked on large-scale inner-city redevelopment projects in order to upgrade urban environments that can attract new residents and retain old ones via housing reform [6–8]. In particular, the increase in inner-city apartment stock in denser residential developments has reflected a growing demand on the part of residents to be close to the city center in order to shorten commute times and enjoy vibrant city amenities and services [8,9]. Seoul, the capital of South Korea (hereafter Korea), has experienced significant restructuring in the past twenty years. Starting in the early 2000s, government leadership in Seoul initiated inner-city redevelopment projects to promote high-rise residential development in urban centers and upgrade old, dilapidated housing units in those areas. These projects often required large-scale clearance of low-rise residential areas and redevelopment into high-rise apartments. Since the clearance-and-redevelopment approach produced negative externalities, such as community destruction, displacement of incumbent residents, and loss of affordable housing [9–11], the city government announced a new approach to housing renewal in 2012 called the New Town Exit Strategy [12]. The New Town Exit Strategy has pursued rehabilitation, spot demolition, and infill programming within old, low-rise residential areas in lieu of clearance and redevelopment. The effort made it more difficult to overhaul previously developed areas with denser high-rise development plans. Recognizing that Seoul has taken several turns over the past two decades in terms of its restructuring approaches, this paper attempts to holistically understand the spatial and temporal features of the housing stock in Seoul.
Although Seoul's planned approach to urban renewal has vacillated, denser residential development in inner-city areas has long been considered a key component of smart growth and sustainable development [13,14]. However, successfully realizing dense development projects is neither simple nor straightforward, as the process of disinvestment and reinvestment in the built environment involves uneven space and geographical diversity [15,16]. While existing studies have examined the dynamics of spatio-temporal patterns of housing in various urban contexts, few have investigated how the housing market and urban renewal policy are intertwined. This paper attempts to furnish an in-depth review of urban spatial patterns with a fine-grained and multifaceted analysis, from the perspective that sustainable urban growth management requires an integrated, area-specific planning approach at local levels [17,18]. This paper aims to provide a holistic review of the dynamics of the spatial and temporal features of housing units, with a particular focus on urban renewal initiatives. Taking Seoul as a case study, this paper first looks into the change in housing stock that has occurred in the past decades. Then, it explores whether and how locational clustering patterns of old and new housing units have changed over time. Lastly, it addresses the question of how physical and locational changes in housing stock relate to Seoul's urban renewal initiatives. The paper is organized as follows: Section 2 explores the history of planned intervention for Seoul's urban renewal initiatives over the past two decades. Section 3 elaborates the study area, dataset, and methodology. Section 4 provides an analysis of the changes in the spatial and temporal characteristics of housing in response to new urban renewal policies. Finally, Section 5 summarizes the findings and their implications for housing and urban renewal policies. Housing Development and Renewal Policies Korea occupies a unique place in housing and urban development. In response to rapid economic growth and urbanization, privately driven housing development has been widely advocated to create a mass housing supply and improve residential communities [19,20]. In particular, Seoul, the capital city, has accommodated an enormous influx of residents from other regions of the country. In the 1960s, 2.44 million people resided in Seoul. The population doubled within 10 years and quadrupled in 30 years, housing 10.60 million residents by 1990. As the city's population rapidly grew, the government constructed more and more apartments to accommodate the tremendous inbound migration of rural Koreans [21,22]. During this building boom, high-rise apartments increased as a percentage of total housing stock from 3.9% in 1970 to 31.0% in 1990. Despite the massive construction efforts that took place in Seoul, the housing supply rate, as measured by the ratio of the number of housing units to the number of households, was still just 57.9% in 1990 [22]. It was against this backdrop that the Korean government embarked on its Two-Million House Construction Drive (1988–1992) to develop five new towns on the outskirts of Seoul. This initiative was regarded as a serious attempt to increase the supply of new housing on a large scale [20].
Since the towns were developed by converting agricultural land to residential land with a master-plan approach, the newly developed areas boasted high-quality residential environments with abundant green spaces. However, since those new towns were located far from the city center, residents faced longer commutes, which raised concerns about the negative impacts of excess energy consumption and environmental pollution. In response to these side effects, the Seoul government announced plans to introduce inner-city redevelopment initiatives in October 2002. The initiatives embraced new in-city town developments in the form of high-rise apartments and cleared away old, low-rise houses, as described in Figure 1. Since these projects were mainly carried out by the private sector, the government's role in the process of demolition and redevelopment was limited to that of area designator. With electoral stakes in mind, the government designated areas for the new town development projects. Within ten years, the target areas of the new town development were slated to account for 10% of the total area of Seoul [23,24]. Due to this large designation, many undeveloped areas were suspended in an incomplete state, which raised concerns about vacant housing and crime in these areas. (Figure 1: adapted from [24]; road views retrieved from https://map.naver.com.) Intending to usher in a paradigm shift for housing renewal, on 30 January 2012 the Seoul government announced its New Town Exit Strategy, as summarized in Table 1. In place of large-scale clearance and redevelopment, the New Town Exit Strategy pursued rehabilitation, spot demolition, and an infill program within the old, low-rise residential areas [12,24]. This gradual rehabilitation plan was backed by several pieces of legislation. The Act on the Improvement of Urban Areas and Residential Environments (revised in 2012) promoted and supported small-scale infill redevelopment, and the Special Act on the Promotion of and Support of Urban Regeneration (signed into law in 2013) aimed to rehabilitate the aging residential environment without displacing tenants. While these projects were criticized as ineffective at improving residential quality, they were also credited with establishing residential stability. Additionally, the government tightened regulation standards around apartment redevelopment. The original consideration timeframe for redevelopment, 20 years, was increased to 20–40 years based on construction year. Table 1. Key elements of the New Town Exit Strategy:
• Mayor Park announced the new renewal initiatives after a 3-month public hearing and discussion.
• The focus moved from owners to residents, and from profit-based demolition to community building.
• Establishment of tenant resettlement systems (reinforcement of housing rights).
• Reinvestigation of the numerous designated new town development areas (610 areas) in Seoul.
• Reclassification of designated new town areas after collecting residents' opinions: released areas were converted to rehabilitation projects based on community wishes, while continued new town areas proceeded under simplified processes with administrative support.
• Operation of a residential regeneration support center dedicated to mediating conflicts.
• A change in the fundamental perspective on renewal projects, from business to human rights.
Seoul has undergone considerable change over the past two decades. In the 2000s, the shift toward urban and housing renewal was encapsulated in the in-city new town development strategy (2002) and the follow-up New Town Exit Strategy in 2012.
Based on the changes created by these urban renewal initiatives, I now attempt to empirically assess the dynamics of the spatial and temporal characteristics of the housing stock between 2000 and 2018 and discuss the effects that inner-city renewal initiatives have had on Seoul's housing stock. Study Area and Data The study area, Seoul, currently accommodates 10 million residents and 5.3 million economically active people. As described in Figure 2, Seoul has three business districts: the Central Business District (CBD), the Yeouido Business District (YBD), and the Gangnam Business District (GBD). To understand the spatial and temporal housing characteristics of Seoul in the 2000s, I attempted to secure a housing inventory by exploring two types of publicly released datasets. I first considered the building registry data provided by the Ministry of Land, Infrastructure and Transport. Although the dataset yielded detailed information on residential buildings, such as physical characteristics and year of construction based on legislative address, the data were only available from 2014 onward. This is too short a period to capture housing stock changes over the time frame in question. Therefore, I chose to utilize non-aggregated housing census data from 2000 to 2018, which were accessed through Microdata Integrated Services with the approval of Statistics Korea. Constructing the spatio-temporal dataset from the housing census consisted of two steps. First, I calculated the number of old and new housing units in each administrative district based on a summation of household data. Using the Microdata Integrated Services, I generated a cross-analysis table of housing characteristics separated by administrative spatial level for each of the years 2000, 2005, 2010, 2015, and 2018. Since cells with fewer than five cases were masked in the process of extraction, the classification was kept minimal. Because high-rise apartments are the main type of housing in Korea, housing type was divided into two categories, apartment and non-apartment, where apartments are defined as housing units in buildings of five or more stories. Old housing stock was measured as the number of housing units still present more than 30 years after construction. Since the legal age constraint for redevelopment was originally set at a minimum of 20 years, I also measured the number of extant housing units more than 20 years after construction for comparison purposes. New housing stock was measured as the number of housing units constructed within the past five years. Second, I used a cartographical approach to spatially join the five-year data based on administrative districts. Studying the spatial structure first required defining the spatial unit with an understanding of spatial arrangement [25]. During the analysis period from 2000 to 2018, the administrative spatial boundaries in Seoul changed three times: there were 522 districts in 2000 and 2005, 425 districts in 2010, and 424 districts since 2015. The spatial district boundaries were redefined as indicated in Figure 2 below. When two or more administrative districts were integrated into one district, I used the newly established spatial boundary as the criterion to link and add the housing characteristics over time. However, when an existing administrative district was eliminated or a new administrative district was established, new boundaries were defined to cover both the initial boundary and the adjusted ones.
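The first step of this construction can be sketched in a few lines of pandas. This is an illustrative reconstruction, not the code used in the study, and the column names are invented:

import pandas as pd

# Hypothetical census extract: one row per housing-unit/household record
hh = pd.DataFrame({
    "district": ["A", "A", "B", "B", "B"],
    "year":     [2010] * 5,
    "built":    [1975, 2008, 1992, 2006, 1968],
    "type":     ["apartment", "non-apartment", "apartment", "apartment", "non-apartment"],
})

age = hh["year"] - hh["built"]
hh["old"] = age >= 30  # old stock: still present 30+ years after construction
hh["new"] = age <= 5   # new stock: constructed within the past five years

panel = hh.groupby(["district", "year", "type"])[["old", "new"]].sum().reset_index()
print(panel)

Repeating this for each census year and joining the counts onto the harmonized district boundaries yields the spatio-temporal panel used in the analysis.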
In sum, I made 419 spatial boundaries to link the housing stock data between 2000 and 2018. Methodological Approach This paper attempts to provide a holistic review of the spatio-temporal features of housing stock with a particular focus on the urban renewal strategies employed in Seoul over the past twenty years. I first investigate how the housing type ratio (non-apartment to apartment units) and the ratio of old to new housing units have changed in the past several decades. Then I ask whether and how locational clustering patterns of old and new housing units have changed over time. Lastly, by assessing the evolving spatial relationship between new housing supply and old housing stock, I address the question of whether the supply of new housing units has come from a large quantity of old housing stock. This study uses descriptive statistics, exploratory spatial data analysis, and rank correlation analysis to explore these questions. With the aid of descriptive statistics, I conducted an exploratory spatial data analysis to examine the spatial location and extent of statistically significant spatial clusters of old and new housing units. Applying Getis-Ord statistics, I detected global and local spatial autocorrelation. The Getis-Ord statistics have been used in prior studies to analyze spatial patterns of crime or accident occurrence [25,26], accessibility to grocery stores within residential areas [27], and urban vitality based on bicycle-sharing data [28]. I first used the global G statistic to evaluate whether old and new housing units are significantly clustered at different time periods. Then, to determine the location and the extent of spatial clusters, I used hot spot analysis based on the local Gi* statistic and mapped it to show regions of high and low values with positive spatial autocorrelation, which are referred to as hot spots and cold spots [25,29]. In addition, rank correlation analysis was used to explore the spatial relationship between new housing supply and old housing stock. Spearman's rank correlation analysis is used to explain the extent to which the ranks of two variables of interest are correlated. Since this correlation analysis operates on the ranks of the data rather than the raw values, the method is relatively insensitive to outliers and useful for verifying a monotonic association between two variables [30,31]. The Spearman rank correlation coefficient is calculated as in Equation (1): ρ = 1 − 6 Σ d_i² / (n(n² − 1)), (1) where n is the number of items (the number of spatial units in this study) and d_i is the difference in ranking between the two variables of interest in district i. As the rank correlation coefficient takes a value between −1 and +1, a greater positive coefficient means a stronger positive association between the two variables. In this study, the spatial association in ranking between old and new housing stock was estimated in each year to compare the tendency to supply new housing units in areas with many old and dilapidated housing units. The supply pattern of new housing units showed differences depending on the housing type. New apartments have decreased since 2000, while new non-apartments have shown an upward trend. According to a national survey by the Ministry of Land, Infrastructure, and Transport, more than half of survey respondents prefer apartments (53.3%), while 38.1% prefer detached houses, 5.6% prefer row houses, and 3.0% prefer other housing types [32].
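To make Equation (1) concrete, the sketch below computes the coefficient both with scipy and directly from the rank differences (invented numbers; in the absence of ties the two routes agree):

import numpy as np
from scipy.stats import spearmanr, rankdata

# Invented district-level counts: old stock in year t-1, new supply in year t
old_units = np.array([120, 80, 300, 45, 210, 95])
new_units = np.array([60, 15, 150, 10, 30, 90])

rho, p_value = spearmanr(old_units, new_units)

# Direct evaluation of Equation (1); equivalent when there are no ties
d = rankdata(old_units) - rankdata(new_units)
n = len(d)
rho_eq1 = 1 - 6 * np.sum(d**2) / (n * (n**2 - 1))

print(f"scipy: rho = {rho:.3f} (p = {p_value:.3f}); Equation (1): rho = {rho_eq1:.3f}")

The same computation underlies the year-by-year coefficients reported in Table 3 below.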
However, although apartments are the most preferred housing type due to affluent communal services [22], residents who prefer new apartment services have fewer options to choose from, whereas the number of new non-apartment options has increased over time. Spatial Clustering of Housing Growth and Deterioration To explore the existence of the spatial clustering of old and new dwelling units, the global and local Getis-Ord statistics were applied to each time frame (2000, 2010, and 2018). To calculate the Getis-Ord statistics, an inverse distance-based weighting was applied with a distance of 2000 m in consideration of the average district scale. Additionally, in order to control for the housing market size of each district, the quantities of old housing units and new dwelling units were divided by the number of total housing units of each district. As illustrated in Figure 4, I observed statistical evidence for strong spatial autocorrelation for old housing units and relatively weak spatial autocorrelation for new housing units. The statistics test whether an area with a high rate of old housing units is surrounded by other areas with high rates of old housing units. The global G statistics for old housing clustering are highly statistically significant, with significance levels of 0.01 for 2000 and 2010, and 0.05 for 2018. Although the ratio of old housing increased from 4.45% to 6.97% between 2000 and 2010, as described in Table 2, spatial autocorrelation was rather relaxed. The degree of spatial clustering for old housing units was recorded at 21.310 in 2005, 9. Despite the assumption that the new housing supply ushered in by the redevelopment process would expand into adjacent communities, and therefore produce spatial clustering [15], this study only shows weak locational clustering patterns within the new housing supply. While the test for global spatial autocorrelation indicates whether old and new housing units are statistically significantly clustered overall, it does not provide the locations or extents of the clusters. Figure 4 shows the results of hot spot analysis with the maps of the specific locations of statistically significant clusters of old and new housing units for each year. For old dwelling units, the year 2000 shows one major hot spot with a significance level of 0.01 in or around the urban center. This urban center, known as the old downtown of Seoul, contains some traditional houses called "Hanok" [33], but old and dilapidated row houses make up a fairly large portion of the housing in this area. Over time, the hot spots of old housing units began to spatially disperse, and by 2010, they had formed two clusters in the urban center. In 2018, the hot spots consisted of four clusters with a lower Z score, but the spatial autocorrelation coefficient is still statistically significant. The hot spots are located in or around the city center as well as in the south-eastern area. The hot spots of new housing units shifted location each time and did not appear spatially concentrated but relatively dispersed in comparison with old housing units. Taking a closer look, the hot spots of new housing units in 2000 were discovered in or around the urban center as well as on the outskirts of the city. In 2010, the spatial autocorrelation of new housing units was not statistically significant, meaning that the new housing supply was not in spatially contiguous regions. 
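A hot spot analysis along these lines can be sketched with the PySAL stack (libpysal for the distance-band weights, esda for the Getis-Ord statistics). The setup below uses random stand-in data rather than the study's districts, and the weighting options reflect my reading of the paper's description:

import numpy as np
import libpysal
from esda.getisord import G, G_Local

rng = np.random.default_rng(0)
coords = rng.uniform(0, 10_000, size=(50, 2))  # fake district centroids (m)
ratio = rng.random(50)                         # fake old-housing ratio per district

# Inverse-distance weights within a 2000 m band, mirroring the paper's setup
w = libpysal.weights.DistanceBand(coords, threshold=2000, binary=False, alpha=-1.0)

g = G(ratio, w)                    # global Getis-Ord G
print(f"global G = {g.G:.4f}, p = {g.p_norm:.3f}")

gl = G_Local(ratio, w, star=True)  # local Gi* statistics
print("hot-spot candidates (Z > 1.96):", int((gl.Zs > 1.96).sum()))

With real district data, the Zs values from G_Local are what would be mapped as hot and cold spots.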
In 2018, hot spots were detected on the western and the south-eastern borders, known as the 'Kimpo' and 'Wirye' new towns, respectively. In addition, many pockets of new housing unit cold spots emerged, especially in urban centers. So far, I have explored the temporal variations of the Seoul housing stock in terms of physical and locational characteristics. The next section brings our attention to why this change happened and how it relates to government policies for urban renewal. Given the fact that the Seoul government announced a new strategy for housing renewal in 2012, I performed the Spearman correlation analysis before and after this initiative, using the absolute numbers of old and new housing units for each housing type. While the ratio-based analysis is useful for describing the overall locational tendency of housing growth while controlling for the housing market size of each district, the absolute-number approach provides a better understanding of the areas where new housing is supplied with regard to a shift in urban renewal initiatives. The coefficients were calculated between the rankings of the number of new housing units (apartments or non-apartments) in the current year and the rankings of the number of old housing units (apartments or non-apartments) in the past year. Table 3 shows the results of the Spearman rank correlation analysis. In 2010, in areas with many old apartments, the supply of new housing units was not significant for either apartments or non-apartments. Instead, both new apartments and new non-apartments significantly increased in areas with a large inventory of old non-apartments. However, after the launch of the New Town Exit Strategy, the tendency to supply new apartments in areas with many old non-apartments had disappeared by 2018. This indicates that redevelopment from low-rise non-apartments to high-rise apartments was effective in 2010 through the new town development projects, but this type of redevelopment is no longer valid in the new era of urban renewal. The installation of new renewal initiatives that have pursued spot demolition and infill programming within old, low-rise communities has resulted in a change in housing supply patterns across the city. This has made denser, higher residential development more difficult, leading to a decline in the supply of new apartments in the last decade. (Table 3 note: *** p < 0.01, ** p < 0.05, * p < 0.1.) Conclusions and Discussion The city government of Seoul changed its urban renewal initiatives in the early 2000s. In 2002, the in-city new town development strategy embraced the form of high-rise apartments and cleared away old, low-rise houses. However, the follow-up New Town Exit Strategy in 2012 pursued rehabilitation, spot demolition, and an infill program in lieu of large-scale clearance and redevelopment. Given the fundamental importance of housing in cities, this paper has attempted to provide an in-depth review of the temporal variations of housing stock in terms of physical and locational characteristics. Using a fine-grained and multifaceted analysis, this study has sequentially looked at how the physical and locational features of housing stock have changed over time and how this relates to changes in Seoul's urban renewal initiatives.
First, the results of temporal variations of housing units show that even if the share of new housing supply in relation to the total housing stock remains at a similar level over several years, there is a big difference depending on the housing type. In particular, the share of new apartments has decreased from 32.4% to 10.4% over two decades, while the share of new non-apartments has increased from 6.8% to 21.8% during the same period. Unlike the housing supply patterns, the share of old housing units showed a similar increasing trend regardless of housing type. These numeric data identify a phenomenon wherein on an individual level, the option of choosing a new apartment has decreased although apartments are the most preferred housing type. Moreover, on a city-wide level, denser housing development with high-rise apartments has become more difficult over the past years. As Seoul's approach for urban renewal has avoided clearance and redevelopment, it has consequently resulted in a housing stock that is far from market expectations. Second, looking at the spatial clustering of housing growth and deterioration, the global G statistics showed a relatively weak spatial autocorrelation for new housing units, but a strong spatial autocorrelation for old housing units. More specifically, the clustering maps based on the local Gi* statistics showed that new housing units have become more spatially distributed and dispersed outward over time. In recent years, the hot spots of new housing supply were detected on the outskirts of the city, while cold spots have emerged in the urban center. For the old housing units, the degree of spatial clustering has decreased over two decades, while a strong hot spot of old housing units in the urban center has become smaller and moved to the southeast. Third, this study revealed that the physical and locational changes of housing stock are related to urban renewal initiatives. The decrease in rank correlation coefficients between old and new housing units over time indicates that the tendency to supply new housing units in areas with many old and dilapidated housing units has gradually decreased. Furthermore, after the launch of the new strategy for urban renewal in 2012, which avoided large-scale clearance and redevelopment and pursued spot demolition and infill programming, the tendency to supply new apartments in areas with many old non-apartments disappeared. This empirical result verifies the fact that redevelopment from low-rise housing to high-rise apartments is no longer valid in the new era of urban renewal. Based on these findings, this study concludes that Seoul's renewal plans have actually led to the spatial dispersion of new housing supply and a decrease in the new supply of high-rise apartments. This phenomenon, discovered in the last decade, runs contrary to the academic discussions on sustainable compact planning and development. As the housing market and urban renewal policy are significantly intertwined [34], sustainable urban management requires continuous monitoring to strike a balance between housing deterioration and new growth at local and city levels. Given the fact that successful urban renewal urges strategic spatial planning to reduce the social costs associated with urban activities [35,36], this study suggests that the city government needs to embrace the concept of sustainable urban growth management in formulating its renewal policies. 
It is also important to develop a spatial decision support system and to identify the locational features of housing stock in the early planning phases of urban renewal projects. Despite the fruitful findings provided in this study, further studies need to consider two aspects. First, this study uses housing census data to explore housing deterioration and new growth at the same time. Further efforts could focus on expanding the data sources, such as housing, building permit, and demolition data, to establish a deeper and clearer understanding of the dynamics of the housing market. Second, this study only focuses on physical changes, despite the importance of social and economic changes [37,38]. Future work also needs to address the potential for exclusionary displacement and housing affordability problems arising from urban renewal processes. Those efforts will enrich the literature on spatio-temporal housing dynamics by linking it to urban renewal policies. Conflicts of Interest: The author declares no conflicts of interest.
2020-10-19T18:10:10.898Z
2020-09-24T00:00:00.000
{ "year": 2020, "sha1": "f63899a20769ed0f81caf7411c1c51033af7fed8", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/2071-1050/12/19/7918/pdf", "oa_status": "GOLD", "pdf_src": "ScienceParsePlus", "pdf_hash": "540905b2ef1365c491609839673bc9f554b0633d", "s2fieldsofstudy": [ "Geography", "Sociology" ], "extfieldsofstudy": [ "Geography" ] }
119378493
pes2o/s2orc
v3-fos-license
$\nu_\tau$ Oscillation Experiments and Present Data Our goal in this paper is to examine the discovery potential of laboratory experiments searching for the oscillation $\nu_\mu(\nu_e) \rightarrow \nu_\tau$, in the light of recent data on solar and atmospheric neutrino experiments, which we analyse together with the most restrictive results from laboratory experiments on neutrino oscillations in a four-neutrino framework. Introduction If neutrinos have a mass, a neutrino produced with flavour α, after travelling a distance L, can be detected in the charged-current (CC) reaction ν N → l β N ′ with a probability P(ν α → ν β ) = |Σ i U αi U βi * exp(−i m i ² L/2E)|², where U is the mixing matrix. The average includes the dependence on the neutrino energy spectrum, the cross section for the process in which the neutrino is detected, and the detection efficiency of the experiment. The probability therefore oscillates, with oscillating phases ∆ ij /2 = 1.27 ∆m 2 ij L/E (with ∆m 2 ij in eV 2 , L in km, and the neutrino energy E in GeV). For ∆m 2 ij ≫ E/L the oscillating phase will have gone through many cycles before detection and will therefore have averaged to 1/2. On the other hand, for ∆m 2 ij ≪ E/L, the oscillation does not have time to develop any effect. Present data from solar and atmospheric neutrino experiments favour the hypothesis of neutrino oscillations. All solar neutrino experiments 2 find less ν e than predicted theoretically. As for atmospheric neutrino experiments, two of them 3,4,5 measure a ratio ν µ /ν e smaller than expected from theoretical calculations. Nevertheless, this interpretation requires confirmation from further experiments, in particular from laboratory experiments, where the experimental conditions, in particular the shape, energy, and flux of the neutrino beam, are under control. At present all laboratory neutrino experiments report no evidence for neutrino oscillation 6 , with the possible exception of LSND 7 , which looks for ν µ → ν e oscillations. In addition, a number of new experiments are starting to take data or being proposed, both at CERN 8 and Fermilab 9 . Our goal is to examine the discovery potential of these experiments searching for the oscillation ν µ (ν e ) → ν τ , in the light of recent data on solar and atmospheric neutrino experiments, which we analyse together with the most restrictive results from laboratory experiments on neutrino oscillations in a four-neutrino framework 1 . Four-Flavour Models Naive two-family counting shows that it is very difficult to fit all the experimental information mentioned above with three neutrino flavours. In the spirit of Pauli, one is tempted to introduce a new neutrino as a "desperate solution" to understand all present data. The nature of such a particle is constrained by LEP results on the invisible Z width as well as data on the primordial 4 He abundance. Those rule out the existence of additional, light, active neutrinos. In consequence the fourth neutrino state must be sterile. If one assumes a natural mass hierarchy with two light neutrinos with their main projection in the ν s and ν e directions and two heavy neutrinos with their largest component along the ν µ and ν τ flavours, the mixing matrix can be parametrized in a general way in terms of mixing angles θ i , with c i = cos θ i and s i = sin θ i . For the sake of simplicity we have assumed no CP violation in the lepton sector. We also required that the sterile neutrino does not mix directly with the two heavy states, in order to satisfy the constraints from big bang nucleosynthesis 10 .
Such a hierarchy appears naturally, for instance if one advocates an L e ± L µ ∓ L τ discrete symmetry for the mass matrix 12 . In Ref. 11 a similar mass pattern is also generated via a combination of see-saw and loop mechanisms. In this approximation m 1 , m 2 ≪ m 3 , m 4 , and the value of m 3 ≈ m 4 is inferred from the dark matter data. Currently, the best scenario to explain the data considers a mixture of 70% cold plus 30% hot dark matter 13 . This implies m 3 ≃ m 4 = 2–3.5 eV. Such a mass pattern has been argued to yield satisfactory results in Cold+Hot Dark Matter scenarios 14 . We define the three mass-squared differences ∆m 2 solar = m 2 2 − m 2 1 , ∆m 2 AT = m 2 4 − m 2 3 , and ∆M 2 DM ≃ m 2 3 ≃ m 2 4 (3). Transition probabilities between the different flavours will therefore have contributions from the three oscillation lengths due to the three different mass differences in the problem, which we will denote sin 2 (∆ solar /2), sin 2 (∆ AT /2), and sin 2 (∆ DM /2), respectively. Global Analysis At present the most precise laboratory experiments searching for neutrino oscillations are 6 the reactor experiment at Bugey, which looks for ν e disappearance, and the CDHSW experiment at CERN, which searches for ν µ disappearance. The E776 experiment at BNL searches for the ν̄ µ → ν̄ e appearance channel and the E531 experiment at Fermilab for the ν µ → ν τ channel. Neither of these experiments shows evidence for neutrino oscillation in those channels. Recently the Liquid Scintillator Neutrino Detector (LSND) experiment 7 announced the observation of an anomaly that can be interpreted as neutrino oscillations in the channel ν̄ µ → ν̄ e . Most of the oscillation parameters required to explain it are already excluded by the E776 and KARMEN experiments. For the Bugey reactor experiment the relevant transition probability is the ν e survival probability, which, for any value of the atmospheric mass difference, always satisfies the bound given in Eq. (4). For CDHSW the relevant probability is the ν µ survival probability, Eq. (5). For E776 the situation is somewhat more involved, since the value of the oscillating phase sin 2 (∆ DM /2) varies in the range ∆M 2 DM = 4–10 eV 2 due to the wiggles of the resolution function of the experiment. Also, the experiment is sensitive to the atmospheric mass difference. We find that the limit is satisfied for any value of the atmospheric mass difference and the µτ mixing angle if the condition in Eq. (6) holds. The limit from E531 on the eµ and eτ mixings is always less restrictive than the previous ones for any value of ∆m 2 AT and ∆M 2 DM . Combining these constraints, we obtain that the eµ and eτ mixings are constrained as in Eq. (7), where the range of sin 2 (2θ eµ ) depends on the specific value of ∆M 2 DM . If we now turn to the effect due to the oscillation with ∆ AT , we can rewrite the relevant probabilities for the different experiments by expanding in the small angles eµ and eτ. With the constraints in Eq. (7), the Bugey experiment is not sensitive to oscillations with ∆ AT . The relevant exclusion contours for each channel are shown in the Figures. We now turn to the atmospheric neutrino data. Neutrinos are produced when cosmic rays hit the atmosphere and initiate atmospheric cascades. The mesons present in the cascade decay, leading to a flux of ν e and ν µ which reach the Earth and interact in the different neutrino detectors. Naively the expected ratio of ν µ to ν e is in the proportion 2 : 1, since the main reaction is π → µν µ followed by µ → e ν̄ µ ν e .
However, the expected ratio of muon-like to electron-like interactions in each experiment depends on the detector thresholds and efficiencies as well as on the expected neutrino fluxes. Currently four experiments have observed atmospheric neutrino interactions. Two experiments, Kamiokande 3,4 and IMB 5 , have observed a ratio of ν µ -induced events to ν e -induced events smaller than the expected one. In particular, Kamiokande has performed two different analyses, for sub-GeV neutrinos 3 and multi-GeV neutrinos 4 , which show the same deficit. On the other hand, the results from Fréjus and NUSEX 15 appear to be in agreement with the predictions. The results of the most precise experiments, in terms of the double ratio R µ/e /R MC µ/e of the experimental to the expected ratio of muon-like to electron-like events, are: R µ/e /R MC µ/e = 0.55 ± 0.11 for IMB, 0.60 ± 0.09 for Kamiokande sub-GeV, 0.59 ± 0.10 for Kamiokande multi-GeV, and 1.06 ± 0.23 for Fréjus (9). The statistical and systematic errors have been added in quadrature. The systematic error contains a 5% contribution due to the neutrino flux uncertainties. For the Monte Carlo prediction we have used the expected fluxes from Ref. 16, depending on the neutrino energies. Use of other flux calculations would yield similar numbers. In each experiment the number of µ events, N µ , and of e events, N e , in the presence of oscillations is N µ = N µµ + N eµ and N e = N ee + N µe , where N αβ = ∫ dE ν dE β Φ α (E ν ) P αβ (E ν ) ε(E β ) dσ/dE β . Here E ν is the neutrino energy and Φ α is the flux of atmospheric neutrinos ν α ; E β is the final charged lepton energy and ε(E β ) is the detection efficiency for such a charged lepton; σ is the interaction cross section for ν N → N ′ l. The expected rate with no oscillation would be R MC µ/e = N 0 µµ /N 0 ee . The double ratio is then given by R µ/e /R MC µ/e = (N µ /N e )/(N 0 µµ /N 0 ee ). We perform a global fit to the data in Eq. (9). In Fig. 3 the results are shown for zero eµ and eτ mixings, as in a two-family scenario. Figure 4 shows the effect of the inclusion of the mixings. As seen in the figure, the inclusion of the eτ mixing leads to a more constrained area for the oscillation parameters. The effect of the eτ mixing is to increase the value of the double ratio, since there is a decrease in the number of ν e ; therefore a larger amount of µτ oscillation is needed to account for the deficit. Due to the small values allowed, a non-zero eµ mixing does not modify the analysis of the atmospheric neutrino data. Finally, we note that for solar neutrino experiments the presence of the new mixings affects very little the analysis performed in the two-family scenario; the small-mixing MSW 17 solution is still valid 18 . The large-mixing solution is also in conflict with the constraints from big bang nucleosynthesis 10 . ν τ Oscillation Experiments: Discovery Potential The two upcoming ν µ (ν e ) → ν τ experiments, CHORUS and NOMAD 8 , are ν τ appearance experiments, i.e., they search for the appearance of ν τ 's in the CERN SPS beam, which consists primarily of ν µ 's with about 1% of ν e 's. The mean energy of the ν µ beam is around 30 GeV and the detectors are located approximately 800 m away from the beam source. Their expected performances are summarized in Table 1. There are a number of future ν µ (ν e ) → ν τ experiments being discussed at present. As a specific example of these experiments, we have considered a suggestion to upgrade the NOMAD detector 20 . We will refer to this future detector by the generic name of Neutrino ApparatUS with Improved CApAbilities (NAUSICAA).
The detector performance is summarized in Table 1. At Fermilab, a neutrino beam will be available when the Main Injector becomes operational, around the year 2001. Compared with the CERN SPS beam, the Main Injector will deliver a beam 50 times more intense, but with an average energy around one third of that of the SPS neutrinos. There are currently two experiments proposed to operate in this beam 9. One is a short-baseline experiment, E803; the other, MINOS, is a long-baseline experiment which proposes two detectors separated by 732 km. This experiment can perform several tests to look for a possible ν_µ → ν_τ oscillation in the small mass-difference range. The expected performance of E803 and MINOS is summarized in Table 1. Finally, we will consider the possibility of installing the NAUSICAA detector as an alternative or a successor to E803 in the Fermilab beam, which would improve its sensitivity by one order of magnitude. After implementing the limits derived in Sec. 3 and considering the sensitivity of the experiments, one sees that for all facilities the only observable ν_e → ν_τ transition oscillates with the oscillation length ∆_DM, such that

P_eτ ≃ sin²(2θ_eτ) sin²(∆_DM/2).

Figure 1 shows the regions accessible to the experiments in the (sin²(2θ_eτ), ∆M²_DM) plane. For transitions ν_µ → ν_τ, a four-neutrino framework predicts (unlike the naive two-family framework) two oscillations, dominated by the characteristic lengths ∆_DM and ∆_AT. All experiments are in principle sensitive to both oscillations, depending on the values of the mixing angles:

P^DM_µτ ≃ sin²(2θ_eµ) sin²(θ_eτ) sin²(∆_DM/2),
P^AT_µτ ≃ sin²(2θ_µτ) cos²(θ_eτ) sin²(∆_AT/2).    (14)

In Fig. 2 we show the regions accessible to the different experiments for the oscillation with oscillation length ∆M²_DM, for an optimum value of the eτ mixing. Figures 3 and 4 show the regions accessible to the experiments for the oscillation in ∆m²_AT, for different values of the other mixings.
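To make the appearance-channel reach concrete, the following sketch inverts P_eτ ≃ sin²(2θ_eτ)⟨sin²(∆_DM/2)⟩ to estimate the smallest mixing an experiment could probe at a given sensitivity; the probability sensitivity and the crude energy averaging are illustrative assumptions of mine, not numbers taken from Table 1.

```python
import numpy as np

def avg_sin2_half_phase(dm2_eV2, L_km, E_GeV, n=2000):
    """<sin^2(Delta/2)> averaged over a crude flat energy spread (+-20%),
    standing in for the experimental resolution function."""
    E = np.linspace(0.8 * E_GeV, 1.2 * E_GeV, n)
    return np.mean(np.sin(1.27 * dm2_eV2 * L_km / E) ** 2)

def min_sin2_2theta(P_sensitivity, dm2_eV2, L_km, E_GeV):
    """Smallest sin^2(2theta_etau) reachable if the experiment can
    detect an appearance probability P_sensitivity."""
    return P_sensitivity / avg_sin2_half_phase(dm2_eV2, L_km, E_GeV)

# Illustrative CHORUS/NOMAD-like configuration: L ~ 0.8 km, <E> ~ 30 GeV,
# with an assumed appearance-probability sensitivity of 1e-4.
for dm2 in (4.0, 6.0, 10.0):  # eV^2, spanning the dark-matter range
    print(dm2, min_sin2_2theta(1e-4, dm2, 0.8, 30.0))
```

The reach improves where the averaged phase factor is largest, which is why the accessible regions in the (sin²(2θ_eτ), ∆M²_DM) plane narrow toward small mixings only around favourable values of ∆M²_DM.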
Qualitative and health-related evaluation of point-of-use water treatment equipment performance in three cities of Iran

Background: Application of point-of-use water treatment (POU-WT) systems has increased steadily during the last decade in Iran. In this study, the qualitative performance of reverse osmosis-based POU devices in selected cities of Iran was investigated.

Methods: This applied descriptive study was conducted in the three cities of Tehran, Rasht, and Ahvaz in 2016 (selected based on the POU device sales index, in three phases). After selecting the five most popular brands of six-stage POU devices, 360 water samples were taken from sampling zones and POU consumer households in the selected cities. The awareness of consumers about POU-WT system selection and performance was also investigated through a designed questionnaire.

Results: The qualitative parameters of tap water in the three cities were acceptable (p < 0.05), except for EC in Ahvaz. The output water was as follows: pH = 6.05–7.5, EC = 49.8–58.2 µS/cm, TOC = 0.01–0.23 mg/L and nitrate = 0.52–4.5 mg/L NO₃ (lower than or within the regulatory limits); total hardness = 33–41.5 mg/L and fluoride = 0.01–0.23 mg/L (lower than the admissible limit, p < 0.05); HPC values were in the range of 543–676 CFU/mL, which exceeded the regulatory level. ANOVA showed significant differences between the selected cities. The questionnaire survey showed that dissatisfaction with tap water quality and health-related concerns were the two main reasons for household adoption of POU-WT systems; 64% of these households had weak awareness of the performance of their POU systems. Social media were the channel most used by POU-WT users for brand selection.

Conclusion: Based on the tap-water quality results, application of POU-WT systems is not recommended in Tehran and Rasht; regarding the outputs of these systems, the side effects of softened water, the lack of fluoride, and a remarkable increase in bacterial counts should be considered. In Ahvaz, application of POU-WT systems can decrease health-related problems, and it is necessary to improve consumer access to POU-WT efficiency information.

Introduction

Safe drinking water and convenient sanitation are crucial for poverty reduction, sustainable development, and attaining any and every one of the Millennium Development Goals [1]. They are also essential for establishing an effective policy on health and well-being protection [2]. According to the World Health Organization (WHO), between 50 and 100 L of water per person per day are needed to ensure that most basic needs are met and few health concerns arise [3]. Hence, the quality of drinking water has progressively been questioned from a health viewpoint in recent decades [4]. Contamination of an urban water supply can arise from various sources, such as pollution within the water distribution system [5]. Extensive data, principally collected at the governmental level from most areas, indicate that tap water quality in Iran meets drinking water standards [2]. Nevertheless, many people are inclined to use point-of-use (POU) water treatment (POU-WT) systems, especially reverse osmosis (RO)-based POU systems. These have become increasingly common owing to extensive advertising by sellers and public worries about drinking water quality, including aesthetic characteristics (taste and odor), hardness, fluoride, and nitrate [6].
The POU-WT systems are marketed as being effective at removing undesirable odors and tastes and eliminating unpleasant pollution in tap water. According to a report of the US Water Quality Association (WQA), there are at least 325 POU-WT producers, and 41% of all American households used such systems in 2000 [7]. Generally, RO-based POU-WT systems are multi-stage systems comprising pretreatment and post-treatment stages and an RO membrane unit. Pretreatment consists of sediment filters or microfilters and activated carbon; post-treatment also uses activated carbon filters [8]. Such systems are typically used to purify tap water from a public supply and can be located under the kitchen sink. Monitoring and maintenance are important issues in the use of POU-WT systems. Maintenance includes replacement of the pre- and post-filters every 6 to 18 months, replacement of the exhausted membrane every 2 to 3 years, and cleaning of the storage tank. The system price ranges from US$200 to 700, depending on brand and flow rate, and annual operating costs are about US$85–135 [9]. In general, the most common drawbacks of RO-based POU-WT systems include a complex and relatively expensive installation process, service and replacement requirements, dependence on source water quality, and the possibility of bacterial growth [6,9-11]. Fahiminia et al. [12] analyzed data from 240 water samples at the input and output of different POU-WT systems in Iran and concluded that these systems were able to decrease dissolved solids by more than 90% and produce soft water. Removal of heavy metals by the POU-WT systems varied from 5% for Al to 86% for Cd, with an average removal of 43%. Adel et al. [13] investigated the efficiency of domestic water filters in Kuwait and showed that the RO filter was severely damaged by residual chlorine in the water; the impaired membrane was unable to reduce water salinity effectively, causing high total bacterial counts in the filtered water. Tarun et al. [14], studying the attenuation of trace organic compounds in water by POU-WT systems, demonstrated that these devices have a high capacity to remove significant amounts of organic contaminants, although removal of a specific compound depends on its molecular properties, the treatment technology, water quality, and the lifetime of the cartridge. In view of the above, this work focuses on an evaluation of tap water and RO-based POU-WT output quality in residential areas of Tehran, Rasht, and Ahvaz; a comparison of the selected water types with regulatory guidance from the Institute of Standards and Industrial Research of Iran (ISIRI), the World Health Organization (WHO), and the US Environmental Protection Agency (US-EPA), with emphasis on health impact; and an assessment of the necessity of using POU-WT instead of tap water, together with user awareness.

Materials and methods

This applied descriptive study was carried out to investigate the chemical and microbiological quality of POU-WT systems installed in residential houses of the cities of Tehran, Rasht, and Ahvaz, Iran, in 2016.

Description of study area

Tehran, the capital of Iran, is situated at an altitude of 1100–1800 m above sea level, with 11.7 million residents and an area of 730 km². The drinking water in Tehran is supplied by five water treatment plants (70%) and 480 wells (30%).
Rasht, the center of Gilan province, is located in the north of Iran, southwest of the Caspian Sea, with a total population of 748,711. The source of drinking water in Rasht is the Bijar Reservoir Dam, located in Bijar on the Zilaki River, 35 km from Rasht. Ahvaz, the center of Khuzestan province, with a total population of 1,302,000, is located in the south of Iran and is one of the country's major metropolises. The Karoon River is the source of drinking water for the city of Ahvaz (Fig. 1).

RO-based POU water treatment

The POU-WT systems studied in this survey were designed to produce between 10 and 12 L of drinking water per day. Each six-stage POU-WT system includes a 5-µm sediment filter, a carbon pre-filter, a cellulose-acetate RO membrane, a 1-µm sediment filter, a carbon post-filter, a mineral filter, and a small storage tank. The POU-WT systems were certified under the NSF protocols for aesthetic effects and reverse osmosis (NSF/ANSI 42, NSF/ANSI 58). The age of the studied POU-WT systems, including their filters, was between two and six months in all three cities.

Sample collection

A total of 360 water samples from the input (tap water) and output of the three most common POU-WT brands were collected, with the sample size determined according to Cochran's formula and homes selected randomly. The selected brands cover almost 68% of the Iranian market. The collection, conservation, and analysis of samples were performed according to standard methods for water quality examination [15]. The analyzed parameters were pH, electrical conductivity (EC), hardness, nitrate, fluoride, total organic carbon (TOC), and microbial indicators (heterotrophic plate count (HPC), total coliform count (TCC), and fecal coliform count (FCC)) under standard conditions.

Consumer awareness of POU systems

Questionnaire-based surveys were used to investigate the knowledge and training of people who use POU-WT systems. In a preliminary investigation, the residential areas using these kinds of systems were identified. From this preselected set of residents, 100 homes were randomly chosen in each studied city, and questionnaires and face-to-face interviews were carried out with these 100 participants. The questionnaires consisted of two parts. The first covered general information and the personal background of the participants, such as name, age, gender, education level, and occupation. The second focused on the conditions of operation and maintenance; users' knowledge, information, and satisfaction; the conduct of quality tests; and guidance by device providers. Information about the POU-WT systems was gathered from manufacturers and sellers. The validity of the questionnaire was evaluated using content validity: the questionnaire was given to 20 faculty members of the Faculties of Health of Tehran and Iran Universities of Medical Sciences to be examined against the objectives of the study and the questions relating to attitude and awareness. The reliability of the questionnaire was evaluated using Cronbach's alpha (α = 0.86) [16].

Physicochemical analyses

The pH and EC were analyzed using a portable pH/conductivity multi-parameter meter (HQ40d, HACH, USA).

Chemical analyses

Total hardness (TH) was determined by the titration method. Nitrate (mg/L NO₃) and fluoride (F⁻) ions were quantified using a spectrophotometer (DR600). The optimum fluoride concentration for a community may be determined from the annual mean maximum temperature.
It was determined by the following formula: Optimal Fluoride Concentration (mg/L) = 0.022 / (0.0104 + 0.000724 × AMMT), where AMMT is the annual mean maximum temperature in the selected cities. Total organic carbon (TOC) was determined by catalytic combustion oxidation using an online TOC analyzer (VCSH, Shimadzu).

Microbiological analyses

Total coliform count (TCC) and fecal coliform count (FCC) were analyzed by the most probable number (MPN) method. The heterotrophic plate count (HPC), by the pour-plate method, was used to examine the bacterial count in the POU-WT systems [15].

Statistical analysis

One-way analysis of variance (ANOVA) was performed to compare the experimental values of the selected cities with each other and with national and international guidelines. The assumption of normality of the data was confirmed with the one-sample Kolmogorov-Smirnov test. All statistical analyses were performed using SPSS 16.0, and a p value of less than 0.05 represents a significant difference between groups (confidence level 95%).

Questionnaires and face-to-face interviews

The questionnaire results on consumer knowledge of POU-WT systems showed that most users (86%) complained about tap water hardness, and all of them believed that a high solids content in water leads to diseases such as bladder and kidney stone formation and blocked arteries. The systems were introduced to 72% of users through advertisements and social media (Fig. 2). The knowledge of 64% of POU-WT system users about the treatment process and operating principles of the device was inadequate (Fig. 2). The systems were installed by sellers, who offered customers only brief and, in most cases, unscientific and undocumented information. Users judged the performance of the POU-WT systems by the taste of the treated water and by sediment formation in the kettle. Only one of the POU-WT brands had conducted quality tests of device performance. The interval for cleaning and replacing POU-WT filters and membranes in Tehran and Rasht was between six months and a year, while in Ahvaz it was shorter (the first-stage filter was sometimes replaced in less than a month). None of the users were aware of biofilm formation and bacterial regrowth in the filters of POU-WT systems. Overall, 72% of users were highly satisfied with the support services provided by sellers. Figure 3 shows the pH variations of the three selected brands of POU-WT systems (input and output) in the studied cities. The input pH in Tehran, Rasht, and Ahvaz was in the range of 7.65–7.75, 6.50–7, and 7.28, respectively. Based on these results, all input samples were within permissible levels [2,17]. Output pH was reduced in all samples (lowest in Rasht, 5.70). The input EC in Tehran, Rasht, and Ahvaz was 483–503, 445–450, and 1443.5 µS/cm, respectively (Fig. 3). The EC values of all output samples were below 60 µS/cm. The maximum water hardness in the inputs of Tehran, Rasht, and Ahvaz was 166.5, 287.5, and 210 mg/L as CaCO₃, respectively, all within admissible limits [17]. For the POU-WT outputs, water hardness in all samples was below 52.5 mg/L as CaCO₃ (Fig. 4).
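As a worked example of the optimal-fluoride formula given in the Methods above, the sketch below evaluates it for a few temperatures. Note that the parenthesization of the denominator is a reconstruction of the garbled printed formula (chosen because it yields values in the 0.5–1.5 mg/L range the paper cites as optimal), and the sample AMMT values are placeholders, not the study cities' actual temperatures.

```python
def optimal_fluoride_mg_per_L(ammt):
    """Optimal fluoride concentration (mg/L) from the annual mean
    maximum temperature (AMMT), as given in the text:
    F_opt = 0.022 / (0.0104 + 0.000724 * AMMT)."""
    return 0.022 / (0.0104 + 0.000724 * ammt)

# Placeholder AMMT values spanning cool to hot climates
for ammt in (15, 25, 35, 45):
    print(f"AMMT = {ammt}: F_opt = {optimal_fluoride_mg_per_L(ammt):.2f} mg/L")
```

The formula decreases with temperature, reflecting higher water consumption in hotter climates: roughly 1.0 mg/L at AMMT = 15 falling to about 0.5 mg/L at AMMT = 45.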
According to Fig. 4, the maximum levels of nitrate in the input were 16.3, 5, and 1.8 mg/L NO₃ for Tehran, Rasht, and Ahvaz, respectively, which decreased to 4.8, 1.5, and 0.4 mg/L in the output water. The nitrate levels in the input and output of the POU-WT systems were below the maximum contaminant level (50 mg/L NO₃) proposed by WHO [2]; nitrate was highest in Tehran. Figure 5 shows the fluoride concentration in the input and output of the POU-WT systems in the three studied cities. WHO and ISIRI have set admissible values for fluoride in drinking water. As seen in the figure, the maximum fluoride in the input was 0.38, 0.33, and 0.51 mg/L for Tehran, Rasht, and Ahvaz, respectively; all brand samples had concentrations below the admissible levels [2,17], and the fluoride level in Ahvaz was the highest. The maximum total organic carbon (TOC) was 0.24, 0.12, and 1.69 mg/L for Tehran, Rasht, and Ahvaz, respectively (Fig. 5); WHO and ISIRI have not set a guideline for TOC in drinking water. The TOC of the brand outputs in Ahvaz was higher than in the other two studied cities.

Microbial quality

The HPC levels of the samples are summarized in Fig. 6. ISIRI proposed an MCL of 100 CFU/mL for the HPC count in desalinated drinking water in industrial and household systems [18]. However, as seen in the figure, the maximum HPC values in the inputs of the brands in Tehran, Rasht, and Ahvaz were 5, 4, and 44 CFU/mL, respectively, which reached 593, 542, and 700 CFU/mL at the output. Furthermore, the MPN-based microbiological tests (total coliform and fecal coliform) were negative in all 360 water samples (input and output of the POU-WT systems) in the three cities.

Comparative analysis

The statistical analyses of the input and output of the POU-WT systems in Tehran, Rasht, and Ahvaz are summarized in Tables 1 and 2, respectively. One-way analysis of variance (ANOVA) for the input and output of the POU-WT systems indicated that, except for EC and total hardness (TH) of the output, the mean levels of all parameters differed among the selected cities (p < 0.05). The quality of the tested waters was compared with the WHO, EPA, and ISIRI guidelines; there was no significant difference between the parameter values of the POU-WT input and the above-mentioned guidelines.

Discussion

The pH of drinking water indicates its acidic or basic character [19]. It is difficult to establish any clear relationship between human health and drinking water pH. Water with pH < 6.5 can be acidic, soft, and corrosive. Acidic water may cause aesthetic problems, such as a metallic or bitter taste, and can also corrode metal pipes, thereby releasing harmful metals such as copper and lead. Potable water pH will shift depending on dissolved substances and distribution-system components [20]. WHO has not advised a health-based guideline value for pH, while the US-EPA includes pH in its table of secondary drinking water standards, with a recommended range of 6.5 to 8.5 [21]. Based on Fig. 3, the input pH was within the admissible limits of the ISIRI and EPA guidelines [2,21], and it decreased in the outputs of all brands in the selected cities. It can therefore be concluded that the RO process led to pH reduction and may ultimately contribute to health problems.
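For readers unfamiliar with the one-way ANOVA used for the city comparisons reported in Tables 1 and 2, a minimal sketch follows; the three arrays are made-up stand-ins for per-city measurements of a single parameter, not the study's data.

```python
from scipy import stats

# Hypothetical input-EC measurements (uS/cm) for the three cities
tehran = [483, 490, 495, 503, 488]
rasht  = [445, 447, 450, 446, 449]
ahvaz  = [1390, 1420, 1443, 1460, 1410]

f_stat, p_value = stats.f_oneway(tehran, rasht, ahvaz)
print(f"F = {f_stat:.1f}, p = {p_value:.3g}")
# p < 0.05 would indicate a significant difference among the city means,
# mirroring the comparisons reported in Tables 1 and 2.
```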
Tables 1 and 2 show a significant difference (p < 0.001) between the selected cities in the mean pH of the input and output of the POU-WT systems. In the study by Yari et al., none of the selected POU-WT systems met the admissible limit [22]; in another similar study, all output samples had pH values within the admissible ISIRI limit [23]. In addition, research on the quality of water from desalination facilities in Iranian cities and villages showed that the pH of POU system outputs tended toward acidity and corrosivity. Electrical conductivity (EC) is a measure of the ability of water to conduct electrical current; this capacity depends on the concentration of ions, ionic mobility, and temperature, and the EC of water is directly related to its total dissolved solids content [24]. The EC values of the POU-WT inputs in the studied cities ranged from 445 to 1443.5 µS/cm, with the highest values in Ahvaz. All POU-WT output samples had EC values below 60 µS/cm, with the lowest in the outputs of brand B1 (Fig. 3). WHO and ISIRI provide no guideline value for EC, while the European Union (EU) considers 400–1000 µS/cm desirable for drinking water [25]. The inputs of all POU-WT systems were within this range except in Ahvaz (1307 ± 59.8), while all output samples were below the minimum of the range. EC values of POU-WT water samples in Qom were reported at 99–1590 µS/cm, with 47% below 400 µS/cm. The mean input EC of the POU-WT systems differed significantly among the cities (p < 0.05), while the output did not. Water hardness was classified according to WHO (2004) [26]. Based on this classification and Fig. 4, the input water of Tehran was moderately hard, and Rasht and Ahvaz fell into the hard category, while all output samples from the POU-WT systems were in the soft category. The evidence for a relationship between total hardness (TH) in drinking water and harmful health consequences in humans is not sufficiently strong [1]; conversely, an inverse correlation has been reported between TH in drinking water and cardiovascular disease (CVD) [27,28]. Some studies suggest that drinking water with a TH below 75 mg/L may adversely affect mineral balance [1]; the results of this study are consistent with these findings. According to the results of Fahiminia et al., the average concentrations of magnesium, calcium, total hardness, and fluoride in the samples were 0.9, 5.7, 60, and 0.05 mg/L, respectively [12], whereas, with respect to the TH standard, water above 500 mg/L is considered aesthetically unsuitable [29]. The use of drinking water with high nitrate levels may increase the risk of cancer in adults and of methemoglobinemia in infants and young children [30]. The level allowed by WHO and ISIRI for nitrate in drinking water is 50 mg/L as nitrate ion [1,2]. Based on Fig. 4, the nitrate levels in the input and output of all brands were below the ISIRI limit [2]. The highest input nitrate level (13.75 ± 5.2) was found in Tehran, and the outputs of all selected brands in the three studied cities were significantly reduced, which is inconsistent with the results of Verma et al. [31]. There is a significant difference (p < 0.05) between the selected cities in the mean nitrate of the input and output of the POU-WT systems (Tables 1 and 2).
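As a quick sanity check on the nitrate figures quoted in the Results above, the sketch below computes the implied removal efficiencies; the formula is the standard percent-removal expression, and the input/output pairs are the maximum values reported in the text.

```python
def removal_efficiency(c_in, c_out):
    """Percent removal across the POU-WT unit."""
    return 100.0 * (c_in - c_out) / c_in

# Maximum nitrate levels reported in the text (mg/L NO3), input -> output
cities = {"Tehran": (16.3, 4.8), "Rasht": (5.0, 1.5), "Ahvaz": (1.8, 0.4)}
for city, (cin, cout) in cities.items():
    print(f"{city}: {removal_efficiency(cin, cout):.0f}% nitrate removal")
```

All three cities show roughly 70–78% nitrate removal, consistent with the significant output reductions described above.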
Water fluoridation is regarded as one of the most effective public-health measures for reducing dental caries, with its greatest impact on children, who have a higher incidence of tooth decay [32]. WHO has recognized dental caries as a global epidemic and has suggested adding fluoride to drinking water that has less-than-optimal fluoride levels [33]. The optimal fluoride levels per ISIRI are between 0.5 and 1.5 mg/L. In the present study (Fig. 4), the highest fluoride level in the output of the POU system samples was 0.26 mg/L, in Ahvaz. The fluoride level in Tehran and Rasht was about zero, significantly below the WHO regulatory limit and the ISIRI permissible limit [2]. This result is inconsistent with the results of Miranzadeh et al. [34]. According to Tables 1 and 2, the fluoride levels of the POU-WT input and output differ significantly (p < 0.05) among the studied cities. TOC in drinking water is a precursor for the formation of disinfection by-products (DBPs). Drinking water containing DBPs in excess of the limit may lead to adverse health effects, liver or kidney problems, or nervous-system effects, and may increase the risk of cancer [35]. As seen in Fig. 5, the highest input TOC was in Ahvaz, at 1.69 ± 0.05 mg/L, while the TOC of the POU-WT output water in all cities was below 0.15 mg/L (Fig. 5); therefore DBP formation in the presence of chlorine is very low. The TOC of the input and output samples of the POU-WT systems differs significantly (p < 0.05) among the studied cities. The microbiological quality of the POU-WT systems in the selected cities was assessed using HPC and MPN indexing [36]. An increased HPC count in drinking water does not in itself represent a remarkable health risk, and no health-based guideline has been recommended so far [37]. ISIRI proposed an MCL of 100 CFU/mL for the HPC count in desalinated drinking water in industrial and household systems [38]. The POU-WT input studied in Ahvaz, at 44 ± 21.4 CFU/mL, had the highest value (Fig. 6). The HPC counts confirmed an increase in the level of heterotrophic bacteria in the output water of all brands (above 510 CFU/mL), exceeding the ISIRI limit [38]; the HPC of the POU-WT input and output differs significantly (p < 0.05) among the studied cities. It has been reported that high HPC values in some samples indicate the potential for microbial growth and contamination during storage of treated water [13]. Total and fecal coliform counts were negative in all inputs and outputs in Tehran, Rasht, and Ahvaz, indicating that these microorganisms were absent in the studied POU-WT systems (Table 1); these findings meet the ISIRI and WHO guidelines for drinking water.

Conclusion

This paper has presented an evaluation of three different POU-WT brands available to consumers in Tehran, Rasht, and Ahvaz. The findings were compared with tap-water quality and with regulatory limits from WHO, US-EPA, and ISIRI. The quality of tap water in the three studied cities (except the EC value in Ahvaz) is either below or meets the national and international standards and poses no risk to consumers' health. Based on our findings, and owing to the different water quality in each city (p < 0.05), the output of each brand also varied.
In general, the POU-WT systems in the three cities eliminated useful minerals such as calcium, magnesium, and fluoride, and increased the growth of heterotrophic bacteria; hence their long-term use not only fails to promote health but may also adversely affect human health. Before installing an RO-based POU-WT system, it is essential to know the baseline quality of the tap water and to strike a balance between the potential advantages of these systems and the potentially harmful effects of reduced mineral content. Since reverse osmosis (RO) is not especially beneficial from a nutritional viewpoint, it is proposed that RO-based POU-WT be used particularly where it is suitable to eliminate inorganic chemicals of potential concern, such as nitrate. In conclusion, the use of RO-based POU-WT systems in cities with tap water quality similar to that in the present study is not strongly recommended, especially if the systems are not consistently and properly maintained and operated by proficient personnel. Considering the water hardness of Rasht and the EC value and hardness of Ahvaz, which exceed the desirable levels, in the current situation it is recommended to use softening systems instead of RO-based POU-WT systems.
Effects of large-scale Amazon forest degradation on climate and air quality through fluxes of carbon dioxide, water, energy, mineral dust and isoprene

Loss of large areas of Amazonian forest, through either direct human impact or climate change, could exert a number of influences on the regional and global climates. In the Met Office Hadley Centre coupled climate–carbon cycle model, a severe drying of this region initiates forest loss that exerts a number of feedbacks on global and regional climates, which magnify the drying and the forest degradation. This paper provides an overview of the multiple feedback process in the Hadley Centre model and discusses the implications of the results for the case of direct human-induced deforestation. It also examines additional potential effects of forest loss through changes in the emissions of mineral dust and biogenic volatile organic compounds. The implications of ecosystem–climate feedbacks for climate change mitigation and adaptation policies are also discussed.

INTRODUCTION

The climate and air quality in Amazonia depend strongly on the character of the vegetation cover, through its influence on the physical properties of the land surface and on biogeochemical fluxes. Large-scale changes in vegetation cover, for example a reduction in the current forest area, would therefore be expected to modify the local climate. Moreover, a reduction in forest cover would also be expected to contribute to global climate change through the release of stored carbon, contributing to the rise in atmospheric CO₂. Vegetation cover change, mostly in the form of deforestation, is currently occurring as a direct result of human activities in the Amazon region. By 2001, the original forest area of approximately 6.2 million km² had been reduced to 5.4 million km², 87% of the original area (Malhi et al. 2008). Current plans for infrastructure expansion and integration could further reduce forest cover to 3.2 million km², which is 53% of the original area, by 2050 (Soares et al. 2006). Global climate change may also lead to changes in the Amazonian vegetation cover, especially if it leads to significant reductions in precipitation in this region. The relationship between the warming of global average temperatures and changes in regional precipitation patterns is highly uncertain, but a number of climate models suggest that global warming could lead to particular patterns of warming in the north Atlantic and tropical east Pacific sea surface temperatures (SSTs), which change the atmospheric circulation such that precipitation is reduced across part or all of Amazonia (Good et al. 2008; Harris et al. 2008). Strong drying of Amazonia or northeast South America is simulated by variants of the Hadley Centre climate model (Betts et al. 1997; Cox et al. 2000; Murphy et al. 2004), although it must be emphasized that many other climate models do not simulate such a drying in this region (IPCC 2007; Li et al. 2008). This paper reviews simulations performed with the Hadley Centre climate model including changes in the vegetation cover, to quantify and compare several processes through which large-scale Amazon forest degradation may affect climate. Specifically, these involve changes in the physical properties of the land surface, and net emissions of carbon dioxide, dust and isoprene to the atmosphere (Cox et al. 2000; Sanderson et al. 2003; Woodward et al. 2005).
The discussion considers the roles of these effects as feedbacks on global climate change, should this lead to a drier climate and forest loss in Amazonia, and also their roles as forcings of climate change due to direct human-induced deforestation.

BIOPHYSICAL EFFECTS OF FOREST DEGRADATION ON REGIONAL AND GLOBAL CLIMATES

In the HadCM3LC coupled climate–carbon cycle model, the regional warming and drying of the Amazonian climate simulated for the twenty-first century lead to a 'dieback' of large areas of forest (figure 1a). The forest loss itself plays a key role in the simulated drying of the Amazonian climate. Relative to bare soil, vegetation (especially forest) can enhance the evaporative flux of moisture to the atmosphere through the extraction of moisture deep in the soil by plant roots for transpiration. Furthermore, the vegetation canopy can capture a greater fraction of precipitation that is then re-evaporated back to the atmosphere, compared with bare soil, which holds less water on the surface before run-off and infiltration. In addition, the higher aerodynamic roughness of a vegetated land surface can promote the flux of moisture to the atmosphere through enhanced turbulence. Changes in the nature of vegetation cover, particularly from forest to non-forest, can therefore significantly alter the surface moisture budget and exert further effects on the surface energy budget. Forest loss reduces evaporation, causing a greater proportion of the available energy at the land surface to flow to the atmosphere in the form of sensible heat rather than latent heat; this exerts a warming influence on the near-surface air temperature. Reduced evaporation also reduces the flux of moisture to the atmosphere, potentially decreasing the quantity of moisture available for precipitation. Betts et al. (2004) examined these feedbacks with two simulations with HadCM3LC, one including interactive vegetation and the other with global vegetation cover fixed at the present-day state. In order to remove carbon cycle feedbacks and isolate the biogeophysical feedbacks, CO₂ concentrations were prescribed to the standard IS92a scenario in both simulations. This scenario projects the atmospheric CO₂ concentration to rise to 713 ppmv by 2100; for comparison, pre-industrial and present-day concentrations are 278 and 378 ppmv, respectively. The general global patterns of climate change were similar in the two simulations, with almost all changes in temperature and precipitation being of the same sign irrespective of the inclusion of vegetation feedbacks. This implies that vegetation feedbacks do not have a significant influence on the atmospheric circulation in comparison with the greenhouse gas (GHG) forcing. However, some of the regional climate changes were significantly affected by vegetation feedbacks. In particular, the precipitation reduction over Amazonia was greater with interactive vegetation than with prescribed present-day vegetation. With present-day vegetation, the precipitation reduced by 1.9 mm d⁻¹, but with forest dieback the reduction was 2.4 mm d⁻¹ (table 1). Biogeophysical feedbacks from the forest dieback therefore enhanced the local drying by approximately 26%. In the western part of the basin, the feedback was greater still, magnifying the precipitation reduction by over 30% (figure 1b). The larger precipitation decrease in western Amazonia was attributed to drought-induced dieback of the eastern forests contributing to further rainfall reductions in the west.
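A one-line check of the feedback magnification quoted above, using the two precipitation reductions from Table 1; the arithmetic is mine, the input values are from the text.

```python
fixed_veg = 1.9       # precipitation reduction, mm/day, prescribed vegetation
interactive = 2.4     # precipitation reduction, mm/day, with forest dieback

magnification = 100 * (interactive - fixed_veg) / fixed_veg
print(f"Biogeophysical feedback enhances drying by ~{magnification:.0f}%")  # ~26%
```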
The forest loss also increased the surface albedo, which reduced convection and moisture convergence, providing a further positive feedback on the rainfall reduction. The Amazon forest dieback therefore magnified the local drying of the climate, providing a reason why the drying in this model is more extreme than in other climate models. This result is consistent with previous model results suggesting that human-induced deforestation would impact the regional climate of Amazonia, principally by reducing local precipitation and increasing temperature (Lean & Rowntree 1997). It therefore provides further evidence that forest degradation, by whatever cause, would lead to a hotter, drier climate in Amazonia. The positioning of Amazonia on the equator means that large-scale forest loss could also exert more far-reaching effects by modifying the global atmospheric circulation. Amazonia lies beneath the region of ascending air that moves northwards and southwards across the equator with the seasons, and which derives its energy from the solar heating of the Earth's surface below. With forest present, higher rates of evaporation cause a larger proportion of the energy to be transferred to the atmosphere in the form of latent heat, which allows energy to be transported higher into the atmosphere before conversion to sensible heat upon condensation of the water vapour. This mechanism drives deep convection that enhances ascent and the overturning motion of the Hadley circulation. Gedney & Valdes (2000) found that, in a climate model, removal of the Amazonian forest caused more energy to be transferred to the atmosphere as sensible heat, heating the lower atmosphere rather than higher levels and providing a weaker driver of ascent. The subsequent reduction in the Hadley circulation modified the atmospheric circulation at higher latitudes through the poleward propagation of Rossby waves, altering regional climates many thousands of kilometres from Amazonia.

CONTRIBUTION OF FOREST DEGRADATION TO RISING CO₂ AND GLOBAL WARMING

The forest dieback in HadCM3LC also exerted feedbacks on global and local climate changes through the carbon cycle, and again these were isolated by further HadCM3LC simulations in which various processes were enabled or disabled (Cox et al. 2004). In a HadCM3LC simulation that included carbon fluxes between the atmosphere, oceans and terrestrial biosphere, but in which the radiative forcing by rising CO₂ was omitted, uptake of carbon by the oceans and terrestrial biosphere, due to increased dissolution in ocean waters and enhanced photosynthesis, caused the rise in CO₂ to be approximately half the rate of anthropogenic emissions throughout both the twentieth and twenty-first centuries (table 2). The simulated CO₂ concentration at 2100 was 700 ppmv (table 2), close to that in the standard IS92a scenario provided to the prescribed-CO₂ simulations described in §2, which similarly was generated without consideration of the effects of climate change on the carbon cycle. Relative to pre-industrial, the total uptake of carbon by the global oceans by 2100 was 367 gigatonnes of carbon (GtC), and uptake by the terrestrial biosphere was 633 GtC by 2100, with 64 GtC of this being in South American vegetation (largely in Amazonia). However, in the simulation with CO₂ radiative forcing included, the climate change led to a number of changes in the oceanic and terrestrial carbon cycles that overall exerted a positive feedback on the CO₂ rise and global warming (table 1).
Ocean carbon uptake increased to 495 GtC by 2100, but this was more than offset by the terrestrial biosphere becoming an overall net source of carbon instead of a sink. The main process was an increase in soil respiration in response to higher temperatures, but Amazonian forest dieback also played a part. The overall loss of carbon from the terrestrial biosphere relative to pre-industrial was 98 GtC, with global soils losing approximately 150 GtC and South American vegetation losing 73 GtC; vegetation elsewhere in the world still largely gained carbon, and the total global vegetation carbon uptake was approximately 60 GtC. Compared with the uptakes when climate change was excluded, the global terrestrial carbon deficit was therefore 731 GtC, with 137 GtC of the deficit coming from Amazonian vegetation carbon decreasing rather than increasing. The overall atmospheric increase, accounting for both ocean and terrestrial feedbacks, was 590 GtC, so Amazonian forest dieback provided 22% of this global feedback. In the simulation with CO₂ concentrations prescribed to the IS92a scenario, which ignored climate–carbon cycle feedbacks, global average temperature rose by 4°C (table 1). When carbon cycle feedbacks were included, global warming was 5.5°C (table 1). Approximating the global temperature response to be proportional to the CO₂ rise, the Amazon forest dieback therefore increased global warming by approximately 0.3°C. Compared with the non-feedback warming of 4°C, Amazon forest loss increased the rate of twenty-first century global warming by approximately 8%. The regional drying in Amazonia was also more severe in the simulation with carbon cycle feedbacks than in that with these feedbacks neglected. Without carbon cycle feedbacks the precipitation reduction had been 2.4 mm d⁻¹, but with carbon cycle feedbacks the reduction was 3.0 mm d⁻¹. Assuming the local precipitation change to be linearly related to global mean temperature change, Amazon forest dieback therefore further enhanced the local drying by approximately 0.05 mm d⁻¹ through its contribution to global carbon cycle feedbacks. As a feedback on global warming, the process of Amazon forest loss relies on particular responses of the regional climate to the radiative forcing; since not all climate models simulate such responses, this feedback remains uncertain. (Table 2: Global and South American carbon storage changes between 1860 and 2100 with and without effects of climate change on the carbon cycle; adapted from Cox et al. 2004.)

MULTIPLE FEEDBACKS BETWEEN CLIMATE CHANGE AND FOREST DEGRADATION

The Amazonian drying and forest dieback in HadCM3LC is therefore a complex coupled process involving multiple interactions between atmospheric CO₂, radiatively forced climate change, regional temperature and precipitation patterns, and vegetation. The drying is initiated by atmospheric circulation responses to particular patterns of SST change, associated mainly with radiatively forced climate change but also modified by physiological forcing of climate via vegetation responses. Despite CO₂ fertilization, the climate warming and drying cause forest dieback that then exerts two positive feedbacks on the precipitation reduction: reduced forest cover causes further suppression of local evaporative water recycling (a biogeophysical feedback); and carbon release contributes to a global positive feedback on the CO₂ rise, which accelerates global warming and magnifies the associated patterns of precipitation change (figure 2).
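The carbon-cycle bookkeeping in the previous section can be reproduced with back-of-envelope arithmetic; the sketch below uses only the figures quoted in the text.

```python
# Figures quoted in the text (GtC unless noted)
amazon_deficit = 137      # Amazonian vegetation: swing from +64 uptake to -73 loss
total_feedback = 590      # extra atmospheric carbon from all climate-carbon feedbacks
warming_no_fb = 4.0       # deg C, prescribed-CO2 (no feedback) simulation
warming_with_fb = 5.5     # deg C, fully coupled simulation

print(64 + 73)                                         # 137: the Amazon contribution
print(f"{100 * amazon_deficit / total_feedback:.0f}%")  # ~23%, close to the quoted 22%
extra = (warming_with_fb - warming_no_fb) * amazon_deficit / total_feedback
print(f"Amazon dieback adds ~{extra:.1f} deg C")        # ~0.3 deg C, ~8% of 4 deg C
```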
This analysis helps to explain why the Amazonian precipitation reduction simulated by HadCM3LC is more extreme than that simulated in other GCMs. In the fully coupled climate–carbon cycle simulation, approximately one-third of the precipitation reduction in Amazonia is attributable to a combination of biogeophysical and global carbon cycle feedbacks. In addition, a small part of the precipitation reduction is attributable to physiological forcing by the rise in CO₂ concentration, both in Amazonia and across the globe. These processes are often not included in other GCM simulations of future climate change. Direct human-induced forest degradation could initiate parts of the above multiple feedback process by emitting CO₂ and reducing evaporation (figure 2).

(Figure 2 caption: Schematic of potential feedback processes involved in Amazonian climate change and forest degradation, involving either or both of global warming and direct human impacts on the forest. Feedbacks involving specific SST changes, atmospheric circulation and Amazon precipitation rely on particular responses of regional climate change in the Hadley Centre climate models; these are seen in some other models, but not all. A large number of studies suggest impacts on regional climate through reduced evapotranspiration following deforestation. T*, surface temperature.)

INCREASED DUST PRODUCTION AND ITS EFFECTS ON RADIATIVE FORCING

Forest degradation could result in increased exposure of bare soil, especially if this were accompanied by a drying climate. This raises the possibility of further effects on climate through the release of mineral dust, which can affect climate by exerting radiative forcings in both the short wave and the long wave. The net effect is complex and depends on other factors such as the albedo of the underlying surface. Woodward et al. (2005) used a fully interactive dust scheme (Woodward 2001) within the atmospheric component of HadCM3LC to simulate the changes in atmospheric dust load as a consequence of global vegetation change, including forest loss and the associated drying climate in Amazonia. The model included six size classes of dust from 0.03 to 30 µm radius, and produced dust from the bare-soil fraction of a grid box when the friction velocity exceeded a threshold, which depends on soil moisture and particle size. Horizontal and vertical dust flux calculations are based on those of Marticorena et al. (1997). Dry deposition through gravitational settling and turbulent mixing in the boundary layer, and below-cloud scavenging processes, are included. Radiative properties were calculated using refractive index data from a range of sources, in an attempt to produce globally representative values rather than properties applicable to one particular source region. Two 10-year simulations were performed with the atmospheric model HadAM3, using prescribed vegetation states, SSTs and CO₂ concentration obtained from the simulations for 2000 and 2100 in the HadCM3LC coupled climate–carbon cycle simulation described in §2. In the 2100 simulation, Amazonia was a greater dust source than the present-day Sahara (figure 3), due in part to loss of vegetation cover and drying of the soil. However, the area of bare soil was much smaller than the Sahara and the soil was not as dry. The strength of the new dust source in Amazonia was largely due to the increased speed of the surface winds, which occurred as a consequence of the reduced aerodynamic roughness of the landscape due to loss of the forest.
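A minimal sketch of the threshold-plus-cubic wind-speed scaling behind this sensitivity (made explicit in the next paragraph); the functional form follows the standard saltation-flux picture, but the constant, threshold value and wind speeds are placeholders, not the parameters of the Woodward (2001) scheme.

```python
def dust_flux(u, u_thresh=0.35, c=1.0):
    """Schematic horizontal dust flux: zero below the threshold friction
    velocity, then growing roughly with the cube of the wind speed."""
    return c * u**3 * (1.0 - (u_thresh / u)**2) if u > u_thresh else 0.0

# Doubling the wind speed multiplies the flux by roughly 2^3 = 8,
# whereas doubling the bare-soil area would only double the emission.
for u in (0.3, 0.5, 1.0, 2.0):   # friction velocities, m/s (illustrative)
    print(f"u* = {u}: flux ~ {dust_flux(u):.3f} (arbitrary units)")
```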
As the sketch above illustrates, this reflects the fact that dust flux increases with the cube of the wind speed, but only linearly with area. Mineral dust absorbs and scatters incoming short-wave radiation, giving a negative surface forcing, but the change in top-of-atmosphere short-wave flux depends not only on the properties of the dust, such as size distribution and refractive index, but also on the underlying albedo. Short-wave top-of-atmosphere forcing tends to be positive over bright surfaces such as ice and deserts or over cloud, and negative over dark surfaces such as ocean or forests. Dust absorbs long-wave radiation, and the top-of-atmosphere long-wave forcing is positive. In the case of the Amazonian dust over the source region, the long-wave forcing dominates, but the short-wave forcing is also predominantly positive, leading to a decadal-mean positive net forcing in excess of 10 W m⁻² locally (figure 3). The equivalent net surface forcing is negative and also exceeds 10 W m⁻². The experiments were designed to calculate the direct radiative forcing due to dust excluding any feedbacks, and as such do not simulate changes in climate due to the dust. However, it may be supposed that the cooling of the surface and the warming aloft caused by the dust would tend to reduce convection and low-level winds, thus producing a negative feedback on dust production. Lower surface temperatures could also result in reduced evaporation and a somewhat moister soil, again producing a negative feedback. However, these effects are likely to be much smaller than the climate changes driving the desertification of Amazonia. The dust produced by the drier, windier, desertified Amazonia was transported considerably beyond the confines of the Amazonian region itself (figure 3). Particularly high atmospheric dust loads were simulated above the equatorial east Pacific, but dust loads were also increased above the whole tropical Pacific. Dust loads also increased over the north and south Atlantic, although it is difficult to determine the relative contributions of the Amazonian and Saharan dust sources to these. SST anomalies in the equatorial east Pacific and the Atlantic, and in particular the north-south SST gradient in the Atlantic, have been identified as drivers of regional climate change in Amazonia (Good et al. 2008; Harris et al. 2008), and these SSTs could be affected by the radiative forcing exerted by changes in the dust loading above (figure 3). Emissions of mineral dust aerosol from Amazonia could therefore provide a further feedback on the regional climate change by modifying the SSTs and the associated atmospheric circulations. These results also have important implications regarding the effects of human-induced forest degradation. Although the drying of the Amazonian climate due to global warming is uncertain, large-scale removal of the forest could expose more bare soil and also lead to local precipitation reductions, as discussed in §2. This could lead to increased dust emissions. Moreover, increased wind speed due to forest loss has been identified as a key factor in increasing dust emissions. These results suggest that Amazonia has the potential to become a significant new dust source, whether forest degradation occurs through global warming or direct human action.

CHANGING EMISSIONS OF BIOGENIC VOLATILE ORGANIC COMPOUNDS AND THEIR EFFECTS ON RADIATIVE FORCING AND AIR QUALITY

Changes in the cover of vegetation exert a further impact on climate, radiative forcing and air quality via surface ozone levels.
Tropospheric ozone levels have increased since the pre-industrial era (Volz & Kley 1988), which exerts a positive radiative forcing (IPCC 2007). Vegetation emits a wide range of volatile organic compounds (referred to as BVOCs), of which the most important is isoprene. These BVOCs are highly reactive, with correspondingly short lifetimes (Kesselmeier & Staudt 1999). They can create or destroy ozone, depending on the levels of nitrogen oxides (NOₓ ≡ [NO + NO₂]). When NOₓ levels are low, these BVOCs react directly with ozone, reducing its levels; however, when NOₓ levels are larger, net ozone production occurs. A consequence of forest degradation in the Amazon region would be reduced emission of BVOCs, with a subsequent impact on ozone levels at the surface. The impact of Amazon forest degradation and other global vegetation changes on BVOC emissions and surface ozone levels was studied by Sanderson et al. (2003). These authors used the HadCM3LC model coupled to a global Lagrangian chemistry model, STOCHEM (Collins et al. 1997). The emissions of the BVOCs were calculated using the algorithms developed by Guenther et al. (1995), which use temperature, radiation intensity and various plant data, such as leaf area index and vegetation type. For this study, the vegetation changes were calculated using the prescribed levels of CO₂; the direct effect of CO₂ on isoprene was not included (Rosenstiel et al. 2003). There was no direct feedback between climate and changes in carbon uptake or loss by the vegetation, but the vegetation could change dynamically in response to the changes in climate. Isoprene emissions and surface ozone levels were simulated for the 1990s and 2090s. Two simulations were performed for the 2090s, one with 1990s vegetation and the other with 2090s vegetation including Amazon forest dieback, so that the impact of changed vegetation on projected future ozone levels could be assessed. Global emission totals of anthropogenic pollutants were taken from the IS92a scenario and distributed over the globe according to the IPCC SRES A2 scenario. With the vegetation distribution fixed at that for the 1990s, isoprene emissions were projected to increase from 550 to approximately 740 Tg yr⁻¹ by 2100. However, a smaller increase, to 700 Tg yr⁻¹, was simulated when the 2090s vegetation distribution was used. Isoprene emissions were therefore approximately 40 Tg yr⁻¹ lower when vegetation change was included in the simulations. Changes in summertime surface ozone levels between the 2090s and the 1990s are shown in figure 4. When the vegetation distribution was fixed at the 1990s state (figure 4a), ozone levels over Amazonia were projected to be up to 25 ppbv larger in the west and 5–15 ppbv larger in the east. When the changed vegetation distribution was used (and global isoprene emissions were smaller), the increase in surface ozone levels was projected to be approximately 5 ppbv smaller in eastern Amazonia (figure 4b). A significant loss mechanism for ozone is dry deposition, whereby ozone is irreversibly removed at the surface. Deposition to vegetation is the major loss route, thus any changes in vegetation will also affect the dry deposition sink. However, the global deposition fluxes calculated for the two future simulations were almost identical, differing by less than 1%. The simulations for the 2090s included the effect of increasing levels of CO₂ on dry deposition via reduced stomatal conductance (Sanderson et al. 2007).
Reduced stomatal opening due to higher CO₂ led to a reduced flux of ozone into the stomatal cavities within leaves. For these particular simulations, the increase in deposition fluxes caused by larger surface ozone values was at least partly offset by reduced stomatal conductance. Changes in the dry deposition sink are therefore not the cause of the different future ozone levels in these simulations. Isoprene emissions have a significant impact on the projected future surface ozone levels: ignoring vegetation changes meant that future simulated ozone levels were greater by 5–10 ppbv, owing to larger isoprene emissions. This may have implications for air quality in the region, with potential consequences for the health of humans, animals, ecosystems and crops. Although tropospheric ozone is a GHG, so that a relative reduction in ozone due to Amazon forest dieback would provide a negative feedback on radiatively forced climate change, the changes simulated here as a consequence of Amazon forest degradation would exert only a minor radiative forcing, and are thus likely to provide only a small feedback on climate change. However, this feedback effect could be larger if the ozone changes affected carbon uptake by vegetation, with consequent effects on atmospheric CO₂ (Sitch et al. 2007). A reduction in surface ozone concentrations would decrease the damaging effect of ozone on plants and therefore partly ameliorate any reduction in carbon uptake that may occur as a result of ozone poisoning. Carbon uptake could therefore be slightly larger as a consequence of the reduced isoprene emissions, providing a further negative feedback on climate change.

CONCLUSIONS

It is concluded that future forest degradation in Amazonia could interact with climate and air quality in complex ways, acting as both a feedback on climate change from other causes and a driver of climate change in its own right. The extreme twenty-first century precipitation decrease and forest dieback simulated in Amazonia by the HadCM3LC coupled climate–carbon cycle model are a coupled process emerging from multiple interactions between the atmosphere, the oceans and the land ecosystems of the Amazon and elsewhere. Following Amazon forest degradation, by whatever cause, biogeophysical and carbon cycle effects can all act to reduce the local precipitation, although it can be speculated that dust effects may partially offset the drying. Isoprene emissions, affecting local air quality through ozone concentrations, may also be affected by forest degradation. Global emission-reduction policies may need to take account of these feedbacks and their associated uncertainties if GHG stabilization targets are to be met. Policies to avoid deforestation in Amazonia may have greater benefits than previously assumed, through both reducing the vulnerability of wider areas of forests and facilitating easier adaptation to climate change.
Relationship between phospholipase C zeta immunoreactivity and DNA fragmentation and oxidation in human sperm

Objective: The study aimed to evaluate the feasibility and reproducibility of measuring phospholipase C zeta (PLCζ) using immunostaining in human sperm, and to investigate the relationship between PLCζ immunoreactivity and DNA fragmentation and oxidation in human sperm.

Methods: Semen samples were obtained from participants (n=44) and processed by the conventional swim-up method. Sperm concentration, motility, normal forms by strict morphology, the DNA fragmentation index assessed by the terminal deoxynucleotidyl transferase dUTP nick end labeling method, and the immunofluorescent expression of 8-hydroxy-2'-deoxyguanosine (8-OHdG) and PLCζ were assessed.

Results: When duplicate PLCζ tests were performed on two sperm samples from each of the 44 participants, similar results were obtained (74.1±9.4% vs. 75.4±9.7%). The two measurements of PLCζ were highly correlated with each other (r=0.759, P<0.001). Immunoreactivity of PLCζ was not associated with donor age, sperm concentration, motility, the percentage of normal forms, or the DNA fragmentation index. However, immunoreactivity of PLCζ showed a significant negative relationship with 8-OHdG immunoreactivity (r=-0.404, P=0.009).

Conclusion: Measurement of PLCζ by immunostaining is feasible and reproducible. Lower expression of PLCζ in human sperm may be associated with higher sperm DNA oxidation status.

Introduction

The sperm-specific phospholipase C zeta (PLCζ) is a gamete-specific 70 kDa protein which is predominantly localized to the equatorial region of the human sperm, with relatively lower levels in the acrosomal and post-acrosomal regions [1]. PLCζ exhibits the expected properties of a sperm-associated oocyte-activating factor in humans [1,2]. Previous studies have demonstrated that PLCζ is the physiological agent responsible for inositol 1,4,5-trisphosphate pathway-mediated Ca²⁺ release in oocytes, a process known as oocyte activation [1,3]. A deficiency in the mechanism of oocyte activation is regarded as the principal cause of fertilization failure, or abnormally low fertilization after intracytoplasmic sperm injection (ICSI), and can recur over several cycle attempts [2]. Given the fundamental role of PLCζ in activating the oocyte after gamete fusion, reduced PLCζ protein levels or mutated forms of PLCζ have been reported to correlate with specific types of male infertility, such as repeated ICSI failure and globozoospermia [4]. Accordingly, numerous studies have reported assisted oocyte activation as a treatment for failed or low fertilization after ICSI [2-4]. In addition to its therapeutic role, PLCζ is thought to represent a prognostic biomarker of sperm quality. Several studies have demonstrated differential expression of some key PLCζ mRNAs in infertile males compared with fertile males [5,6]. In addition, a significant reduction in PLCζ mRNA levels has been reported in individuals with low or failed fertilization with ICSI compared with fertile controls [7]. DNA fragmentation is an important factor in the etiology of male infertility [8]. Men with high DNA fragmentation levels have significantly lower odds of conceiving, naturally or through procedures such as intrauterine insemination and in vitro fertilization [9]. The most common causes of DNA fragmentation in spermatozoa are reactive oxygen species and oxidative stress [10].
It is clinically important to investigate the sources of oxidative stress, such as smoking, because in most cases they are modifiable. Although PLCζ, DNA fragmentation, and oxidation have each been investigated extensively as possible biomarkers of sperm quality, no study has examined the association among these three factors. In addition, most human data on PLCζ have been based on PLCζ mRNAs, and no study has evaluated the clinical feasibility of sperm PLCζ levels measured by immunostaining. Based on the foregoing, we aimed to investigate the feasibility and reproducibility of measuring PLCζ expression by immunostaining in human sperm. We also aimed to investigate the relationship between PLCζ immunoreactivity and DNA fragmentation or oxidation status in human sperm.

Methods

Study subjects

Semen samples were obtained from male participants between April 2013 and February 2014. Informed consent for enrollment in the study and for the use of semen in the analysis was obtained from all participants. The institutional review board at Seoul National University Bundang Hospital reviewed and approved the study (B-1205-155-003). Subjects with no history of genital inflammation or genital surgery, no subjective symptoms, no self-reported medical risk factors, and not taking any prescription medications were included.

Conventional semen analysis

Semen samples were collected by masturbation after 3 days of sexual abstinence. After liquefaction for 30 minutes at room temperature, sperm quality was assessed using computer-assisted semen analysis (SAIS-PLUS 10.1, Medical Supply Co., Seoul, Korea) and classified according to the World Health Organization guidelines published in 2010. Strict criteria for the definition of normal spermatozoa were used during morphological assessment. Baseline semen characteristics were as follows: volume, 2.9±1.4 mL (range, 1.0 to 6.0 mL); concentration, 99±82 million/mL (range, 18 to 460 million/mL); motility, 54.6±15.3% (range, 10.1% to 75.8%); total motile sperm, 177±238 million (range, 17 to 1,485 million); and normal forms by strict morphology, 11.3±5.6% (range, 2.3% to 25.0%). All semen samples contained motile sperm, and no sample had significant numbers of round cells or leukocytospermia according to the World Health Organization guidelines (<1 million round cells/mL).

Conventional swim-up

The semen was processed by the conventional swim-up method. After centrifuging the semen (300 × g for 5 minutes), a pellet was obtained by removal of the seminal plasma. The pellet was suspended in fresh Ham's F10 (1.5 mL) supplemented with 10% serum substitute supplement (Irvine Scientific, Santa Ana, CA, USA). After centrifugation (300 × g for 5 minutes), the supernatant was discarded, and Ham's F10 with 10% serum substitute supplement medium (0.5 mL) was gently layered on the pellet and incubated at 37°C in a 5% CO2 atmosphere for 1 hour. The supernatant (0.5 mL) was then transferred to a conical tube. Computer-assisted semen analysis and normal forms by strict morphology were assessed in the processed samples.

Terminal deoxynucleotidyl transferase dUTP nick end labeling assay

Nuclear DNA integrity was measured by the terminal deoxynucleotidyl transferase dUTP nick end labeling (TUNEL) assay as described previously [11]. The samples were smeared on a silane-coated slide (DAKO, Glostrup, Denmark) and air-dried. Sperm samples were fixed with 4% paraformaldehyde for 1 hour at 15°C to 25°C and then washed with phosphate-buffered saline (PBS).
Sperm were permeabilized with 0.1% Triton X-100 in 0.1% sodium citrate (Sigma-Aldrich, St. Louis, MO, USA). A commercial apoptosis detection kit was used (In Situ Cell Death Detection Kit, Roche Diagnostics GmbH, Mannheim, Germany). The remaining procedures were performed per the manufacturer's instructions. Counterstaining was performed using a mounting medium with 4',6-diamidino-2-phenylindole (Vector Laboratories, Burlingame, CA, USA). Sperm with fragmented DNA had nuclei stained green, whereas the nuclei of other cells were blue. A sperm head with >50% of its area stained green was considered positive. At least 500 sperm were counted per experimental set, and the percentage of sperm with fragmented DNA was recorded as the DNA fragmentation index (DFI).

Immunofluorescence assay for 8-hydroxy-2'-deoxyguanosine

This method was used for the detection of 8-hydroxy-2'-deoxyguanosine (8-OHdG), a known biomarker of oxidative stress. A specific antibody (Argutus Medical OxyDNA Test, BD Biosciences, Franklin Lakes, NJ, USA) conjugated to fluorescein isothiocyanate (FITC) was used. The intensity of FITC fluorescence was then assessed by fluorescence microscopy (Fig. 1). Briefly, sperm samples were fixed with 4% paraformaldehyde and permeabilized. The primary antibody was then added for 1 hour, according to the manufacturer's instructions. At least 500 sperm were counted in different areas of each slide. A sperm head with >50% of its area stained green was considered positive.

Phospholipase C zeta immunostaining

PLCζ was detected by immunofluorescent staining with a polyclonal anti-PLCζ antibody as described previously [12]. Sperm samples were fixed with 4% paraformaldehyde/PBS, permeabilized with 0.5% (v/v) Triton X-100/PBS, and stored at 4°C until use. Sperm smears were created on pre-coated glass slides.

Results

Forty-four males were enrolled regardless of their fertility status. The mean age of the participants was 32.0±5.5 years (range, 23 to 49 years). Of those 44, 16 (36%) were smokers. While 47% (21/44) of the study participants were married, fertility was proven in only 5 of the 21 married participants. The remaining 16 participants were not infertility patients. Table 1 shows the basic sperm characteristics of the participant samples, including the DFI and the immunoreactivity of 8-OHdG and PLCζ in swim-up samples.

When duplicate PLCζ tests were performed on the same sperm samples from each of the 44 participants, similar results were obtained (74.1±9.4% vs. 75.4±9.7%). The two measurements of PLCζ immunoreactivity were highly correlated with each other (r=0.759, P<0.001) (Fig. 3). Thus, the PLCζ test by immunostaining was highly consistent within the same sample. The mean intra-assay coefficient of variation for PLCζ was estimated to be 3.4%.

Table 2 presents the correlation coefficients among the sperm parameters, the immunoreactivity of PLCζ and 8-OHdG, and the DFI. Immunoreactivity of PLCζ showed a negative relationship with 8-OHdG immunoreactivity (r=-0.404, P=0.009). Because 8-OHdG immunoreactivity was negatively correlated with sperm concentration, multivariate analysis was performed to eliminate the confounding effect of sperm concentration; as a result, the immunoreactivity of PLCζ showed a negative correlation that did not reach statistical significance. When a subgroup analysis was performed between smokers and non-smokers, no differences were found in the sperm parameters, the immunoreactivity of PLCζ and 8-OHdG, or the DFI before or after swim-up between smokers and non-smokers (data not shown).
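The reproducibility analysis above rests on two simple statistics: the Pearson correlation between duplicate measurements and the mean intra-assay coefficient of variation. A minimal sketch of how both could be computed is given below; it assumes the duplicate PLCζ immunoreactivity values are available as two equal-length arrays (one pair per donor), and the function name is ours for illustration, not from the study.

```python
import numpy as np
from scipy.stats import pearsonr

def reproducibility_stats(rep1, rep2):
    """Pearson r between duplicate measurements and mean intra-assay CV (%).

    rep1, rep2: PLC-zeta immunoreactivity (%) from the first and second
    test on each donor's sample, in the same donor order.
    """
    rep1 = np.asarray(rep1, dtype=float)
    rep2 = np.asarray(rep2, dtype=float)
    r, p = pearsonr(rep1, rep2)                # between-replicate correlation
    pairs = np.stack([rep1, rep2], axis=1)
    cv = pairs.std(axis=1, ddof=1) / pairs.mean(axis=1) * 100.0
    return r, p, cv.mean()                     # e.g., r = 0.759, mean CV ~ 3.4%
```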
In smokers, the degree of cigarette exposure (pack-years) was not correlated with the sperm parameters, including the immunoreactivity of PLCζ and 8-OHdG and the DFI, before or after swim-up.

Discussion

Semen analysis based on sperm concentration, motility, and morphology has been used for the diagnosis of male fertility for many years. However, a significant number of patients with normal sperm parameters still have difficulty achieving a successful pregnancy [13]. Several new diagnostic biomarkers of semen quality have been investigated recently, including PLCζ, oxidative stress, and the DFI [14]. Here, we demonstrate for the first time that sperm PLCζ immunoreactivity has a negative correlation with an oxidation marker, 8-OHdG. Although PLCζ has been investigated extensively since its role in oocyte activation became known, this study is the first to validate the clinical feasibility of measuring PLCζ immunoreactivity in human sperm. Although the sample size was small, the high reproducibility of sperm PLCζ immunostaining in this study suggests a possible clinical application of the sperm PLCζ test in evaluating male infertility.

The negative association between sperm PLCζ immunoreactivity and 8-OHdG immunoreactivity may be interpreted as a detrimental effect of oxidation on sperm-mediated oocyte activation, which is consistent with previous studies. Morado et al. [14] reported that reactive oxygen species (ROS) levels differed between activated and nonactivated human oocytes. In our study, a positive association was observed between 8-OHdG and the DFI (r=0.216), but it was not statistically significant. Because oxidative stress is known to be a major causal factor in DNA fragmentation, it is now widely agreed that excess ROS contributes significantly to sperm DNA damage [15]. Previous studies have reported a significant positive relationship between sperm 8-OHdG levels and the DFI [16,17]. The absence of statistical significance in our study might be attributable to the small sample size (i.e., low statistical power).

In our study, there was no difference in 8-OHdG immunoreactivity or DFI between smokers and non-smokers. This is consistent with a previous study reporting no association between smoking and DNA fragmentation in the sperm of healthy men [18]. However, our result is discordant with two previous studies reporting significantly higher amounts of 8-OHdG in the sperm of smokers than of non-smokers [19,20]. In fact, 8-OHdG immunoreactivity in smokers (21.1±15.9) was higher than that in non-smokers (14.6±10.7), but the difference did not reach statistical significance; a further large-scale study would be needed to clarify this point.

The clinical strengths of this study can be summarized in two points. First, this is the first study to evaluate the laboratory feasibility of measuring sperm PLCζ immunoreactivity, which may promote its use in the field of male infertility. Second, by examining the possible association between sperm PLCζ and 8-OHdG immunoreactivity, our results provide meaningful pilot data for future larger studies to investigate further the efficacy of sperm PLCζ as a biomarker for predicting semen quality and, in turn, assisted reproductive technology outcomes.

There are several limitations to this study. Because of the small number of participants, it was impossible to draw population-based inferences from our findings. Also, our study subjects were an unselected population, and the subjects' fertility status could not be controlled. Finally, the effect of smoking was not evaluated thoroughly because of the small number of subjects.
In conclusion, sperm PLCζ can be measured by immunostaining and could possibly be used as a semen quality marker in conjunction with other biomarkers such as conventional semen analysis and 8-OHdG. Because of the relatively small number of subjects, however, a definitive judgment on the efficacy of sperm PLCζ measurement as a semen quality marker cannot yet be rendered. Further, larger studies will be needed to determine whether sperm PLCζ has an impact on embryo development and ART pregnancy outcomes.
Duck Tembusu virus in North Vietnam: epidemiological and genetic analysis reveals novel virus strains

Tembusu virus (TMUV) causes an important infectious disease of ducks, leading to economic losses in duck production. Since the first report of TMUV infection in Vietnam in 2020, the disease has persisted and affected poultry production in the country. This study conducted epidemiological and genetic characterization of the viral strains circulating in north Vietnam based on 130 pooled tissue samples collected in six provinces/cities during 2021. The TMUV genome was examined using conventional PCR. The results indicated that 21 (16.15%) samples and 9 (23.68%) farms were positive for the viral genome. The positive rate was 59.26% for ducks aged 2-4 weeks, which was significantly higher than for ducks aged >4 weeks and <2 weeks. Genetic analysis of the partial envelope gene (891 bp) sequences indicated that the five Vietnamese TMUVs shared 99.55-100% nucleotide identity, while the rates were in the range 99.59-100% based on the pre-membrane gene sequences (498 bp). The five Vietnamese TMUV strains obtained formed a novel single subcluster. These strains were closely related to Chinese strains and differed from the vaccine strain, suggesting that the Vietnamese TMUV strains were field viruses. Further studies on vaccine development are needed to prevent the effects of TMUV infection on poultry production across Vietnam.

Introduction

Duck farming is a long-standing agricultural venture in Asia. However, this sector faces challenges from various infectious agents, including duck circovirus, Sitiawan virus, and duck Tembusu virus (TMUV) (1-3). TMUV, classified within the genus Flavivirus of the family Flaviviridae, is an arthropod-borne virus characterized by a single-stranded, positive-sense RNA genome. It was originally isolated in 1955 from Culex tritaeniorhynchus mosquitoes in Kuala Lumpur, Malaysia (3). Nevertheless, its potential implications for human and animal health remain incompletely understood. Within the genus Flavivirus, several members, such as West Nile virus (WNV), dengue virus (DENV), yellow fever virus (YFV), Japanese encephalitis virus (JEV), and Zika virus, exhibit zoonotic properties and serve as major vector-borne pathogens responsible for millions of infections annually. These infections can manifest with a spectrum of clinical presentations, ranging from mild febrile symptoms to severe and potentially fatal hemorrhagic or neurologic diseases. This viral genus is characterized by a 30-60 nm enveloped icosahedral capsid containing a single-stranded, positive-sense RNA genome approximately 11 kb in length.
The genome encodes a single polyprotein comprising three structural proteins, capsid (C), pre-membrane (prM), and envelope (E), and seven non-structural (NS) proteins (NS1, NS2A, NS2B, NS3, NS4A, NS4B, NS5), which are produced through the actions of viral and cellular proteases. The coding region is flanked by untranslated regions at the 5′ and 3′ ends, which form conserved stem-loop structures (3,4). The composition of the viral particle is shaped primarily by the structural proteins, which play a pivotal role in mediating viral fusion with host cells. These proteins are essential in various processes, including binding to virus receptors, facilitating entry, and mediating fusion events. In contrast, the non-structural (NS) proteins are primarily engaged in activities such as viral RNA replication, the assembly of virions, and the evasion of innate immune responses (5).

Within natural ecosystems, avian species are reservoir hosts for a multitude of flaviviruses, such as West Nile virus (WNV) (6), Sitiawan virus (3), Usutu virus (7,8), and Bagaza virus (9). In April 2010, the causative agent of duck egg-drop disease in China was first recognized as duck TMUV. This ailment is distinguished by a decrease in egg yield, an abrupt reduction in feed intake, and the emergence of neurological manifestations among afflicted egg-laying and breeder ducks (4). Egg-drop disease affects a diverse array of duck breeds, encompassing both meat-producing and egg-laying categories. This spectrum includes Pekin ducks, Cherry Valley ducks, Shaoxing ducks, Jinyun ducks, Longyan ducks, Jinding ducks, Khaki Campbell ducks, Muscovy ducks, and domesticated mallards (10). The virus is still circulating and affecting duck production in many countries. Further epidemiological and genetic characterization of the circulating viral strains is therefore needed, as it provides important information for developing vaccines and preventive strategies.

In Vietnam, the first study of TMUV infection was reported by Dang et al. Those authors used a PCR method to detect the TMUV genome in clinically suspected ducks in Hanoi city. Phylogenetic analysis of the partial NS5B gene sequences suggested that the three Vietnamese TMUV strains were genetically related to the DK/TH/CU-1 strain (KR061333) detected in Thailand (11). The aim of the current study was to carry out further epidemiological and genetic characterization of the TMUV strains circulating in several cities and provinces in north Vietnam based on their partial E and prM gene sequences.

Ethics statement

The study did not involve human participants. Samples were collected from ducks farmed in north Vietnam under the auspices of the Vietnam National University of Agriculture, and the sampling protocol was submitted to and approved by the Committee on Animal Research and Ethics of the University (CARE-2021/04). Permission was obtained from the duck farm owners before sampling.

Sample collection

In total, 130 tissue samples (brain, lung, liver, kidney, and bursa of Fabricius) of broiler ducks aged 2-7 weeks were collected from Bacgiang, Haiduong, Hanoi, Hungyen, Thaibinh, and Thainguyen in 2021 (Figure 1). In each flock, 2-6 diseased ducks were collected by local veterinarians, necropsied, packed with dry ice, and transported to the Vietnam National University of Agriculture for laboratory analysis. The tissue samples of each duck were pooled for further testing. A 10% homogenate of the pooled samples was prepared in phosphate-buffered saline.
RNA extraction, cDNA synthesis and conventional PCR for TMUV detection

A Viral Gene-spin™ Viral DNA/RNA Extraction Kit (iNtRON Biotechnology; Seoul, Korea) was used for RNA extraction from the homogenized samples, following the manufacturer's instructions. M-MLV enzyme (Invitrogen; Carlsbad, CA, United States) was used to synthesize cDNA. In total, 20 μL of reagents, consisting of 4 μL of 5X M-MLV buffer, 1 μL of dNTP, 1 μL of random primer (Invitrogen; Carlsbad, CA, United States), 1 μL of M-MLV reverse transcriptase (Invitrogen; Carlsbad, CA, United States), and 9 μL of distilled water, was mixed with 4 μL of RNA. The mixture was then placed in a thermal cycler at 25°C for 10 min, 37°C for 1 h, and 65°C for 10 min.

Identification of the TMUV genome was carried out using PCR to detect the 400 bp target gene, with the TV-3F and TV-3R primers (Table 1), as described elsewhere (13). PCR was performed using GoTaq™ Green Master Mix (Promega) at 94°C for 5 min, followed by 40 cycles of 94°C for 30 s, 55°C for 30 s, and 72°C for 30 s, with a final extension step.

Nucleotide sequencing

Two pairs of primers (E-F/R and prM-F/R; Table 1) were used for amplification of the partial prM and E gene sequences of the TMUV strains. The thermal conditions were 94°C for 4 min, followed by 35 cycles of 94°C for 60 s, 50°C for 60 s, and 72°C for 60 s, with a final extension step at 72°C for 5 min. The PCR products were loaded on 1.5% agarose gels for electrophoresis. The PCR products were purified using GeneClean® II Kits (MP Biomedicals; Santa Ana, CA, United States). Sequencing of the TMUV strains was performed by 1st BASE (Malaysia).

Genetic and phylogenetic analyses

The nucleotide sequences of the Vietnamese TMUV strains identified in this study were aligned using the BioEdit software supplemented with Clustal W (14,15). Nucleotide identities among the Vietnamese sequences and other sequences downloaded from GenBank were determined using the Basic Local Alignment Search Tool (BLAST) and the GENETYX version 10.0 software (GENETYX Corp.; Tokyo, Japan). In total, 40 and 38 sequences of the E and prM genes, respectively, from different TMUV strains in GenBank (Table 2) were used to construct phylogenetic trees and further conduct genetic characterization of the viral strains. Maximum likelihood methods, based on the Tamura 2-parameter model, were used to establish the phylogenetic trees. The MEGA X software was used, with bootstrapping of 1,000 replicates, to determine the confidence values of the tree branches. The partial E and partial prM gene sequences obtained were deposited in GenBank under the accession numbers OR727885 to OR727894.

Analyses of recombination and natural selection profiles

The selected TMUV strains obtained in this study and other sequences from GenBank were used to examine recombination events (13). Evaluation of natural selection profiles was performed using the FUBAR (Fast Unconstrained Bayesian AppRoximation) method (16).

Statistical analysis

Significant differences in the rate of the TMUV genome between geographical regions, ages, or flock size groups were detected using Fisher's exact test. A value of p < 0.05 was used to determine significant differences.
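For illustration, the pairwise nucleotide identities reported in this study (99.55-100% for the partial E gene) can be computed directly from an alignment. The sketch below is a minimal stand-in for the BLAST/GENETYX workflow described above, not the tool actually used; it assumes the two sequences have already been aligned to equal length.

```python
def pairwise_identity(seq_a: str, seq_b: str) -> float:
    """Percent nucleotide identity between two aligned sequences.

    Positions where either sequence has an alignment gap ('-') are
    skipped, so only aligned bases are compared.
    """
    if len(seq_a) != len(seq_b):
        raise ValueError("sequences must be aligned to equal length")
    compared = matches = 0
    for a, b in zip(seq_a.upper(), seq_b.upper()):
        if a == "-" or b == "-":
            continue
        compared += 1
        matches += (a == b)
    return 100.0 * matches / compared

# Toy usage (hypothetical fragments, not real TMUV data):
print(pairwise_identity("ATGGCT-ACT", "ATGGCTAACT"))  # 100.0
```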
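The age-group comparison that opens the Results below can likewise be reproduced with Fisher's exact test. In the sketch that follows, the 2×2 counts are back-calculated from the percentages reported below (59.26% of 27 ducks aged 2-4 weeks positive; 5.43% of 92 ducks aged >4 weeks positive), so they are our illustrative reconstruction, not figures taken from the study's tables.

```python
from scipy.stats import fisher_exact

# Rows: age 2-4 weeks, age >4 weeks; columns: TMUV-positive, TMUV-negative.
# Counts back-calculated from the reported rates (16/27 = 59.26%; 5/92 = 5.43%).
table = [[16, 27 - 16],
         [5, 92 - 5]]
odds_ratio, p_value = fisher_exact(table, alternative="two-sided")
print(f"odds ratio = {odds_ratio:.1f}, p = {p_value:.2e}")  # p << 0.05
```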
Results

The percentage of TMUV-positive ducklings aged 2-4 weeks was 59.26%, which was significantly higher than for those aged >4 weeks (5.43%), while no samples from ducks aged <2 weeks were positive for the viral genome by PCR. In this study, the flocks were divided into three size levels: level 1 (<500 ducks), level 2 (500-1,000 ducks), and level 3 (>1,000 ducks). The positive rates were 17.24%, 18.75%, and 10.81% for level 1, 2, and 3 flocks, respectively; however, these rates were not significantly different (Table 4).

Phylogenetic trees were established based on the partial E (891 bp) and prM (498 bp) sequences of the five TMUVs and other viruses from GenBank. The results indicated that the five Vietnamese TMUV strains obtained in this study formed a novel single cluster (2.1b). The five Vietnamese TMUV strains were genetically related to Chinese strains and differed from the vaccine strain China/JXSP-310/2017 (MZ031023.1), as shown in Figures 2 and 3.

No recombination event was found among the Vietnamese TMUVs obtained in this study based on the partial E gene sequences. The selection profiles of the five obtained TMUV strains and other strains from GenBank were analyzed based on the partial E protein (297 amino acids). These results indicated that 169 sites were under negative selection (Supplementary Table 2), whereas no positive selection was found in the partial E protein of the Vietnamese TMUV strains.

Discussion

TMUV was first isolated from mosquitoes in Malaysia in the 1950s (17). Subsequently, the virus spread and has been reported in many countries globally (1,2,4,7,11,18). Understanding the infection is important for creating strategies to control the disease. In Vietnam, the first report of TMUV infection was published in 2019 (11). The current study has continued to describe viral infection among ducks raised in several provinces in north Vietnam. Epidemiological analysis was first conducted in this study, and the five novel Vietnamese TMUV strains obtained genetically formed a single cluster (2b), separated from the vaccine strain based on the partial E gene sequences (891 bp). This finding suggested that these strains were field strains.

Another study noted that TMUV infection could reach 90% among ducks within a farm, while the mortality rate varied from 5 to 30% (19). In the current study, the rates of TMUV-positive samples and farms were 16.15% and 23.68%, respectively, based on PCR. These rates were lower than the rate of 46.59% (41/88) reported in China during 2010-2016 (20); the sampling numbers, locations, and time periods differed, likely accounting for the differences in infection rates. Li et al.
conducted a study to assess the pathogenicity of TMUV strains isolated in China (21). One-, 3-, and 7-week-old ducks were inoculated with TMUV strains. The study pointed out that one-week-old ducks showed severe symptoms, while the other groups showed milder pathological lesions (21). This finding agreed with a previous study revealing that young ducks were more sensitive to TMUV strains than older ducks, suggesting a relationship between viral susceptibility and age (21,22). Later, researchers found that ducks aged 18-21 weeks and 55 weeks were also susceptible to the disease caused by TMUV (23,24). Changes in age-related susceptibility thus occurred among ducks infected with TMUV strains. Host immune responses, different viral strains, or evolution among viral strains may contribute to this age-related susceptibility; further studies should be conducted to elucidate these points. The current study indicated that the highest rate of viral infection was in young ducks aged 2-4 weeks (59.26%), compared with older ducks aged >4 weeks (5.43%), with no positive samples detected in ducks aged <2 weeks. Older ducks may have a stronger immune response, leading to lower positive rates (21). Further studies should be conducted to explain the relationship between age and infection rates in north Vietnam.

Genetic and phylogenetic analyses based on the partial E and prM gene sequences revealed that the current five Vietnamese TMUV strains formed a novel subcluster, which was closely related to Chinese strains. These results suggest that the TMUV strains obtained in this study might have an origin similar to that of the Chinese strains. N-linked glycosylation, which contributes to the entry of the virus into host cells, has been predicted in the E protein of TMUV at residues 103, 154, and 314 (25,26). In the current study, no substitutions were found at these three residues, suggesting conservation of the glycosylation sites of the E protein among Vietnamese TMUV strains, similar to the findings of Huang et al. Fritz et al. suggested that protonation of histidines plays an important role in the membrane fusion of flaviviruses (27). No substitutions of histidine were observed at residues 144, 153, 163, 219, 246, 263, 285, 320, and 398 of the E protein of the five Vietnamese TMUVs in the current study, nor were substitutions found in the epitopes at residues 220-226 and 374-380, located in domains DII and DIII of the E protein of the Vietnamese TMUV strains. In the current study, three substitutions were found among the Vietnamese TMUV strains: residues 110 (K → E), 157 (A → V), and 345 (D → N). To our knowledge, no evidence on the functional contribution of these substitutions has been reported; further studies will be conducted to evaluate this point.

Recombination, one of the key evolutionary processes, was first found among TMUV strains in China (28). The current study did not find any recombination events among the Vietnamese TMUV strains based on analysis of the partial E gene. Expanding the analysis to the complete genome and increasing the number of sequences will be necessary to examine possible recombination events among Vietnamese TMUV strains. Dai et al. (16) reported that negative selection was strong among Chinese TMUV strains. A few positive selection sites were found among the TMUV strains as a result of genetic drift, similar to other flaviviruses (29,30). In the current study, 169 residues of the Vietnamese TMUV E protein were under negative selection.
Conclusion

In this study, the rates of TMUV infection in samples and on duck farms were 16.15% and 23.68%, respectively, in six provinces/cities in north Vietnam in 2021. The infection was most commonly detected in young ducks aged 2-4 weeks (59.26%), at a significantly higher level than in ducks aged <2 weeks and >4 weeks. Genetic and phylogenetic analyses of the five Vietnamese TMUV strains based on the partial E and prM gene sequences supported that the current Vietnamese TMUV strains belong to a novel subcluster, which is closely related to the Chinese strains and differs from the vaccine strain. No putative recombination event was detected among the Vietnamese TMUV strains. Strong negative selection was found among the Vietnamese TMUV strains, based on the analysis of the partial E protein. Further studies should be conducted to better understand the evolution of TMUV strains across the country.

FIGURE 2A Maximum likelihood phylogenetic tree of partial E gene (891 bp) sequences of Vietnamese Tembusu virus strains compared with those available in GenBank. GenBank sequences are indicated by country name/accession number. The maximum likelihood method in the MEGA X software was used to establish the phylogenetic trees (1,000 bootstrap replicates). Numbers at each branch point indicate bootstrap values ≥50% in the bootstrap interior branch test. The current Vietnamese strains are indicated by black-filled circles, while the vaccine strain is marked by a black-filled square.

FIGURE 3A Maximum likelihood phylogenetic tree of partial prM gene (498 bp) sequences of Vietnamese Tembusu virus strains compared with those available in GenBank. GenBank sequences are indicated by country name/accession number. The maximum likelihood method in the MEGA X software was used to establish the phylogenetic trees (1,000 bootstrap replicates). Numbers at each branch point indicate bootstrap values ≥50% in the bootstrap interior branch test. The current Vietnamese strains are indicated by black-filled circles, while the vaccine strain is marked by a black-filled square.

TABLE 1 Primers used in this study.
TABLE 2 Description of Tembusu virus strains used in this study.
TABLE 3 Identification of Tembusu virus in individual ducks and farms from different locations in north Vietnam.
TABLE 4 Detection of Tembusu virus genome in field samples based on age and flock size.
TABLE 5 Nucleotide identities of partial E gene (891 bp) sequences among Vietnamese and vaccine TMUV strains.
TABLE 6 Nucleotide identities of partial prM gene (498 bp) sequences among Vietnamese and vaccine TMUV strains.
TABLE 8 Amino acid substitutions of the E protein among Vietnamese TMUV strains compared with the vaccine strain. (a) The consensus sequence was derived from 100 partial E gene sequences of TMUV strains from GenBank using the GENETYX software. (b) Same as the consensus sequence.
A Comparative Study to Assess the Fetal and Placental Outcome among Anaemic and Non-Anaemic Mothers of Selected Hospital of District Mohali, Punjab, India

Abstract

Pregnancy is the period from conception to birth. Pregnancy begins with the fertilization of an ovum by a sperm and the subsequent implantation of the egg, which leads to conception. Pregnancy may be recognized by the cessation of menses, an enlarged uterus, and a positive pregnancy test. Anaemia is a pathological condition in which the oxygen-carrying capacity of the red blood cells is insufficient to meet the body's needs, which creates problems for the mother and leads to poor fetal and placental outcomes. The aim of the study was to compare the fetal and placental outcomes of anaemic and non-anaemic mothers at a selected hospital of district Mohali, Punjab. A quantitative research approach with a comparative design was adopted. In total, 100 mothers, of whom 50 were anaemic and 50 non-anaemic, were recruited using a non-probability, purposive sampling technique to identify differences in fetal and placental outcomes between anaemic and non-anaemic mothers. The tool comprised a protocol to assess fetal outcome and a protocol to assess placental outcome. The collected data were analyzed by descriptive and inferential statistics. Of the 100 mothers, the majority of the anaemic mothers (74%) were in the age group of 21-30 years, and among the non-anaemic mothers 86% were in the age group of 21-30 years; 66% of anaemic mothers were from joint families, while 71% of non-anaemic mothers were from nuclear families; most of the anaemic mothers (48%) had a family income of 5,001-10,000 per month, as did 66% of the non-anaemic mothers; the largest share of anaemic mothers (44%) were of Sikh religion, as were 58% of the non-anaemic mothers; 46% of anaemic mothers had primary education and 32% of non-anaemic mothers had secondary education; 78% of anaemic mothers and 66% of non-anaemic mothers were homemakers; 54% of anaemic mothers were vegetarian and 66% of non-anaemic mothers were non-vegetarian; and 58% of anaemic mothers were from rural areas, while 60% of non-anaemic mothers were from urban areas. Regarding fetal outcome, 28.0% of anaemic mothers had a good fetal outcome and 72.0% a poor fetal outcome, whereas 68.0% of non-anaemic mothers had a good fetal outcome and 32.0% a poor fetal outcome. Regarding placental outcome, 18.0% of anaemic mothers had a good placental outcome and 82.0% a poor placental outcome, whereas 46.0% of non-anaemic mothers had a good placental outcome and 54.0% a poor placental outcome. The χ2 values showed no statistically significant association with the demographic variables of the anaemic and non-anaemic mothers.

Introduction

Pregnancy is the period from conception to birth. Pregnancy begins with the fertilization of an ovum by a sperm and the subsequent implantation of the egg, which leads to conception. A high-risk pregnancy is one that poses a greater risk to the mother and her baby. The risk is compounded by factors that adversely affect the pregnancy outcome. Some of the high-risk factors in the reproductive history are anaemia, pre-eclampsia, eclampsia, grand multiparity, medical and surgical disorders associated with pregnancy, previous stillbirth, and previous preterm labour. 1 In pregnancy, anaemia is a condition in which the haemoglobin level is lower than normal. Anaemia is not a specific disease but a sign of underlying disorders.
Anaemia is of two types. Hypoproliferative anaemia: the bone marrow cannot produce an adequate number of erythrocytes. It may result from bone marrow damage due to the side effects of medication or from a lack of factors such as iron and folic acid. Haemolytic anaemia: premature destruction of erythrocytes results in the liberation of haemoglobin from the destroyed erythrocytes into the plasma. 2

During the antenatal period, the mother needs to pay extra attention to her diet to have a healthy baby. Alterations in the health of the mother, such as minor disorders like nausea, vomiting, and other minor illnesses, may lead to anaemia. Anaemia is not a disease but a clinical condition characterized by a deficiency of nutrients, especially iron and vitamins, that requires immediate attention from health personnel to reduce the morbidity and mortality of mother and fetus. Hence, it is considered a life-threatening condition. 3

Anaemia during pregnancy is a common and considerable health problem in developing countries. Although most cases of anaemia seen in pregnancy are largely preventable and easily cured if detected in time, anaemia continues to be a common cause of mortality and morbidity in India. Factors often reported to influence maternal anaemia are low socioeconomic status, young maternal age, multiple pregnancy, pre-pregnancy underweight, faulty dietary habits, faulty absorption mechanisms, increased iron loss (through sweat, repeated pregnancies, and excessive blood loss during menstruation), and an inadequate supply of nutrients such as iron, folic acid, vitamin B12, proteins, amino acids, vitamins A and C, and other vitamins of the B-complex group. 4

Anaemia severely affects placental and fetal outcomes. The placenta is an organ that develops during pregnancy to provide nutrition and oxygen to the fetus, to eliminate excretory wastes, and to act as a protective barrier. The placental outcomes in a healthy mother are as follows: the placenta is delivered within 15 minutes; the weight of the placenta is 500 g; the placenta is oval in shape; the diameter is 15 cm; the thickness is 1.5 cm; the number of cotyledons is 20; the colour is thick white and red; the length of the umbilical cord is 50 cm; there are three vessels (two arteries and one vein); and the insertion of the cord on the fetal surface is central. If the placenta is affected by anaemia, it adversely affects the growth of the fetus. Neonates can have pathological conditions such as birth asphyxia, prematurity, IUGR, and low birth weight, and the placenta varies in its measures, including its weight, morphometry, number of cotyledons, and thickness. The normal parameters of a healthy newborn are: an APGAR score of 7-10, a weight of 2.6-3.1 kg, a temperature of 99.5 °F, a crown-heel length of 50 cm, a crown-rump length of 35 cm, a head circumference of 33-35 cm, and a chest circumference of 30-33 cm; all the reflexes, such as rooting, glabellar, grasp, Moro, suckling, and swallowing, should be present in a normal newborn. 5 Hence, a good fetal outcome depends on the mother's health and her diet during the antenatal period.

The aim of the study was to compare the fetal and placental outcomes of anaemic and non-anaemic mothers at a selected hospital of district Mohali, Punjab, India. The objectives of this study were to assess the fetal outcome in anaemic and non-anaemic mothers; to assess the placental outcome in anaemic and non-anaemic mothers; to compare the fetal outcome and placental outcome in anaemic and non-anaemic mothers;
and to determine the association between the fetal and placental outcomes and selected demographic variables in both anaemic and non-anaemic mothers.

Assumption

There will be a significant difference between the fetal and placental outcomes of anaemic and non-anaemic mothers at the selected hospital of district Mohali, Punjab.

Delimitations

The present study is delimited to: the selected hospital of Mohali; and anaemic and non-anaemic mothers.

Materials and Methods

In the present study, a quantitative approach with a comparative research design was adopted. By purposive sampling, 100 mothers (50 anaemic and 50 non-anaemic) were selected. Data were collected using a protocol to assess the fetal and placental outcomes in anaemic and non-anaemic mothers. Analysis of the data was done using descriptive and inferential statistics. The study was conducted in March 2016. Formal written permission was obtained from the SMO of the civil hospitals of Mohali and Kharar after discussing the purpose and objectives of the study. Analysis and interpretation of the data were done according to the objectives of the study using descriptive and inferential statistics.

Ethical consideration

In view of ethical considerations, the researcher took permission from the Principal of Mata Sahib Kaur College of Nursing, Mohali. The researcher then discussed the type and purpose of the study with the SMO of the civil hospitals of Mohali and Kharar, and written permission was obtained. The mothers were also told about the purpose of the study, and verbal consent for their participation was taken from them. They were informed of their right to refuse to participate in the study.

Results and Discussion

Major findings of the study

Section I: Findings related to socio-demographic variables

The majority of the anaemic mothers (74%) were in the age group of 21-30 years, and among the non-anaemic mothers 86% were in the age group of 21-30 years; 66% of anaemic mothers were from joint families, while 71% of non-anaemic mothers were from nuclear families; most of the anaemic mothers (48%) had a family income of 5,001-10,000 per month, as did 66% of the non-anaemic mothers; the largest share of anaemic mothers (44%) belonged to the Sikh religion, as did 58% of the non-anaemic mothers; 46% of anaemic mothers had primary education and 32% of non-anaemic mothers had secondary education; 78% of anaemic mothers and 66% of non-anaemic mothers were homemakers; 54% of anaemic mothers were vegetarian and 66% of non-anaemic mothers were non-vegetarian; and 58% of anaemic mothers were from rural areas, while 60% of non-anaemic mothers were from urban areas (Table 1).

Section V: Findings related to the comparison

Among the 50 anaemic mothers, 28.0% had a good fetal outcome and 72.0% had a poor fetal outcome, while among the 50 non-anaemic mothers, 68.0% had a good fetal outcome and 32.0% had a poor fetal outcome. Among the 50 anaemic mothers, 18.0% had a good placental outcome and 82.0% had a poor placental outcome, while among the 50 non-anaemic mothers, 46.0% had a good placental outcome and 54.0% had a poor placental outcome (Tables 5 and 6). The calculated χ2 values showed that there was no significant association between the fetal and placental outcomes and the age of the mothers, type of family, total family income per month, dietary habits, or residence of the mothers.

To compare the fetal outcome and placental outcome in anaemic and non-anaemic mothers:
Based on this objective of the study, among the 50 anaemic mothers, 28.0% had a good fetal outcome and 72.0% had a poor fetal outcome, while among the 50 non-anaemic mothers, 68.0% had a good fetal outcome and 32.0% had a poor fetal outcome. Likewise, among the 50 anaemic mothers, 18.0% had a good placental outcome and 82.0% had a poor placental outcome, while among the 50 non-anaemic mothers, 46.0% had a good placental outcome and 54.0% had a poor placental outcome.

The present study's findings are supported by a comparative study assessing placental weight and fetal outcome among normal and anaemic mothers. A purposive sampling technique was used, with a sample of 20 normal and 20 anaemic mothers admitted to the labour room for delivery at selected hospitals of Bijapur. The study was completed in 6-7 weeks, and it showed that 67% of anaemic mothers had lower placental weight and poorer fetal outcomes, while 92% of normal mothers had good placental weight and good fetal outcomes (Tables 7 and 8).

In the present study, poor fetal outcomes (low APGAR score, low newborn weight, and short crown-heel length) and poor placental outcomes (low placental weight, reduced placental thickness, and poor placental condition) were found in the babies and placentae of the anaemic mothers. The findings of the study revealed a need to enhance knowledge regarding the importance of diet during the antenatal period. Many MCH programmes are organized by the Government of India for the prevention of anaemia, and ASHA workers also provide health-related information to mothers; regular hospital visits reduce the risk of anaemia. It was an enlightening study experience.
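As a closing illustration of the comparison just summarized, the 2×2 chi-square test such a design calls for can be run on counts reconstructed from the reported percentages (14/50 anaemic vs. 34/50 non-anaemic mothers with a good fetal outcome). The sketch below is our reconstruction for illustration; the counts are back-calculated, not taken from the study's tables.

```python
from scipy.stats import chi2_contingency

# Rows: anaemic, non-anaemic; columns: good, poor fetal outcome.
# 28% and 68% of 50 mothers each -> 14 and 34 with a good outcome.
table = [[14, 36],
         [34, 16]]
chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, df = {dof}, p = {p:.4f}")  # p < 0.05 -> outcomes differ
```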
Characterizing Neck Shrivel in European Plum

Abstract. Neck shrivel is a physiological disorder of european plum (Prunus × domestica L.) fruit, characterized by a shriveled pedicel end and a turgescent stylar end. Affected fruit are perceived as of poor quality. Little is known of the mechanistic basis of neck shrivel, but microcracking of the cuticle has been implicated. The objective of our study was to quantify transpiration through the skin surfaces of european plums with and without symptoms of neck shrivel. Cumulative transpiration increased linearly with time and was greater in the susceptible european plum cultivar Hauszwetsche Wolff with neck shrivel, compared with fruit of the same cultivar but without neck shrivel and compared with fruit of the nonsusceptible unnamed clone P5-112. Cumulative transpiration of epidermal skin segments (ES) excised from symptomatic 'Hauszwetsche Wolff' from near the pedicel end exceeded that from ES excised from near the stylar end. The permeance of ES from near the pedicel end of 'Hauszwetsche Wolff' with neck shrivel (12.4 ± 2.6 × 10⁻⁴ m·s⁻¹) exceeded that of ES from near the stylar end (2.9 ± 0.4 × 10⁻⁴ m·s⁻¹) 4.3-fold. However, in the clone P5-112, the same difference was only 1.6-fold (1.3 ± 0.8 × 10⁻⁴ m·s⁻¹ vs. 0.8 ± 0.3 × 10⁻⁴ m·s⁻¹). Microscopy revealed numerous microcracks near the pedicel end of symptomatic 'Hauszwetsche Wolff' fruit but markedly fewer microcracks near the stylar end. The microcracks near the pedicel end were oriented parallel to the pedicel/style axis, whereas those near the stylar end were randomly oriented. Juices extracted from near the pedicel end of susceptible cultivars had consistently more negative osmotic potentials [ψS (e.g., for Doppelte Hauszwetsche, −5.1 ± 0.1 MPa)] than those from near the stylar end (e.g., for Doppelte Hauszwetsche, −4.0 ± 0.1 MPa) or than those from fruit without symptoms of neck shrivel (e.g., for the pedicel end and stylar scar regions of Doppelte Hauszwetsche, −3.8 ± 0.1 vs. −3.3 ± 0.1 MPa, respectively). Our results indicate that increased transpiration through microcracks near the pedicel end may contribute to neck shrivel but that the causes of neck shrivel are likely more complex.

Neck shrivel is a nonpathogenic, physiological disorder of european plum that occurs preharvest during late fruit development and continues to develop postharvest. Symptomatic fruit is perceived to be of poor quality and so has reduced commercial value (Widmer and Stadler, 2003). The visual symptoms are a loss of turgescence in the pedicel (proximal) end of the fruit, whereas the stylar (distal) end remains turgescent. Cultivars differ in susceptibility to neck shrivel, but several commercial cultivars are susceptible. In Germany, these include several clones of 'Hauszwetsche'. There has been no systematic research on neck shrivel in european plum. The causes of the disorder are unknown. The lack of a mechanistic understanding makes it difficult to derive effective counterstrategies for breeders or to develop cultural practices for its mitigation. European plum is not the only fruit susceptible to preharvest shrivel. Shrivel symptoms are also reported in grapes [Vitis vinifera L. (Bondada and Keller, 2012)], kiwifruit [Actinidia chinensis Planch. (Burdon et al., 2014)], and sweet cherries [Prunus avium L. (Schlegel et al., 2018)]. The only article in a peer-reviewed journal in more than 30 years on neck shrivel in european plum is that of Stösser and Neubeller (1985).
The authors hypothesized that neck shrivel was associated with 1) cracks in the cuticle; 2) a thinner cuticle at the pedicel end, compared with the stylar end; 3) a decrease in the cuticle constituents believed to inhibit transpiration (waxes, alkanes, fatty acids); and 4) large day/night temperature fluctuations. The latter were thought to cause tension in the cuticle, which in turn was thought to cause microcracking. Unfortunately, no theoretical or experimental evidence was presented in support of these supposed causal relationships. Consequently, the aforementioned conclusions remain largely speculative. Widmer and Stadler (2003) conducted a questionnaire survey in a fruit-growing region in Switzerland to try to identify common factors involved in the disorder. The results were that: 1) european plum cultivars differ in susceptibility to neck shrivel; 2) the rootstock has no effect on the incidence of neck shrivel; 3) hot and dry weather conditions increase neck shrivel; and 4) cuticular microcracks do not necessarily result in neck shrivel.

The evidence suggests that increased transpiration through microcracks in the cuticle may be an important factor contributing to neck shrivel. We do know that microcracks impair the barrier properties of the cuticle and that this increases transpiration (Knoche and Peschel, 2006). Hence, fruit exhibiting neck shrivel is expected to have a greater rate of transpiration than fruit without neck shrivel. Furthermore, for a given fruit with neck shrivel, transpiration near the pedicel end should exceed that near the stylar end. The objective of our study was to test this hypothesis. We quantified transpiration through the surfaces of european plums with and without symptoms of neck shrivel.

Materials and Methods

PLANT MATERIAL. Ripe fruit of european plum cultivars (Hauszwetsche Wolff, Doppelte Hauszwetsche, Hauszwetsche Etscheid, Toptaste) and of the unnamed clone P5-112 were sampled from experimental orchards at the horticultural research station of the State Education and Research Center of Viticulture, Horticulture and Rural Development Rheinpfalz, Ahrweiler, Germany (lat. 50°32′N, long. 7°05′E) and at the research station of the Leibniz University Hannover in Ruthe, Germany (lat. 52°14′N, long. 9°49′E). The cultivars Hauszwetsche Wolff, Doppelte Hauszwetsche, Hauszwetsche Etscheid, and Toptaste are considered susceptible to neck shrivel, whereas the unnamed clone P5-112 is not. All trees were grafted on 'GF 655' rootstocks. Fruit were selected for uniformity of size and color and were processed fresh or held at 4°C for no longer than 7 d before processing.

TRANSPIRATION ASSAYS. For the transpiration assays, fruit with and without neck shrivel symptoms were selected. Transpiration was quantified on a whole-fruit basis or using ES mounted in stainless-steel diffusion cells (Knoche et al., 2000). The ES were excised using a razor blade, carefully blotted, and then mounted on diffusion cells using high-vacuum grease (Korasilon Paste hochviskos; Kurt Obermeier, Bad Berleburg, Germany). The diffusion cells were filled with deionized water and then sealed with clear transparent tape. Whole fruit and the diffusion cells were then placed in a closed polyethylene box above dry silica gel at 22°C. The diffusion cells were placed upside down such that the ES in the orifice faced the silica gel. Fruit and/or ES were weighed repeatedly. The rate of transpiration (F) was then determined as the slope of a regression line fitted through a plot of cumulative mass loss vs. time.
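A minimal sketch of this calculation is given below, together with the permeance estimate derived from it as defined in the next paragraph. It is an illustration under stated assumptions, not the authors' code: the saturation vapor concentration at 22 °C is an approximate tabulated value of the kind given in Nobel (1999), and the function names are ours.

```python
import numpy as np

# Approximate saturation water vapor concentration at 22 °C (kg per m^3).
# Above dry silica gel the outside concentration is ~0, so the driving
# force delta_c equals this saturation value (assumed approximate value).
C_SAT_22C_KG_M3 = 19.4e-3

def transpiration_rate(time_s, cum_mass_loss_kg):
    """Transpiration rate F (kg/s): slope of a straight line fitted to
    cumulative mass loss vs. time, as described above."""
    slope, _intercept = np.polyfit(time_s, cum_mass_loss_kg, deg=1)
    return slope

def permeance(F_kg_s, area_m2, delta_c_kg_m3=C_SAT_22C_KG_M3):
    """Permeance p (m/s) from Fick's law: p = F / (A * delta_c)."""
    return F_kg_s / (area_m2 * delta_c_kg_m3)

# Toy usage with made-up readings (not study data):
t = np.array([0.0, 3600.0, 7200.0, 10800.0])      # time, s
m = np.array([0.0, 0.35e-6, 0.71e-6, 1.05e-6])    # cumulative mass lost, kg
F = transpiration_rate(t, m)
print(permeance(F, area_m2=5e-5))                 # on the order of 1e-4 m/s
```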
The permeance (p) was calculated using Fick's law of diffusion, p = F / (A · Δc): the flow F of water vapor across the skin per unit time was divided by the product of the transpiring skin surface area of the ES (A) and the driving force (Δc) for water transport, i.e., the difference in water vapor concentration between the cut inner surface of the ES (Ci) and the outer surface of the ES (Co). Because the water vapor concentration above dry silica gel is essentially zero (Geyer and Schönherr, 1988), the concentration of water vapor on the inner surface of the ES represents the driving force for transpiration. Ci equals the water vapor concentration at saturation at the respective temperature. Values of Ci are tabulated for various temperatures (Nobel, 1999). The permeance estimate so obtained is a material constant that is characteristic of a particular fruit surface and, hence, is independent of experimental settings such as surface areas or driving forces. Thus, it is a useful parameter for comparisons of cultivars and treatments.

Using this experimental setup, time courses of whole-fruit transpiration of susceptible 'Hauszwetsche Wolff' with or without neck shrivel and of the nonsusceptible clone P5-112 were established. Subsequently, the time courses of transpiration of ES excised from near the pedicel and stylar ends of symptomatic 'Hauszwetsche Wolff' and of P5-112 (no shrivel) were determined as described previously. In another experiment, transpiration of excised fruit skins of susceptible 'Hauszwetsche Etscheid' with and without neck shrivel was compared. For this experiment, fruit from the same batch were selected for the presence or absence of neck shrivel. To identify the role of the pedicel in transpiration, the transpiration of whole fruit of 'Doppelte Hauszwetsche' with and without the pedicel attached was quantified as described previously (Athoo et al., 2015). To summarize, fruit without pedicels were prepared by cutting the pedicel and sealing the cut end of the pedicel stub (including the pedicel/fruit junction) with epoxy glue (UHU plus schnellfest; UHU, Bühl/Baden, Germany). Subtracting the mean rate of transpiration of fruit without pedicels from the mean rate of fruit with pedicels yielded the mean transpiration rate of the pedicels. Excised detached pedicels were used as a control.

MICROCRACKS. Microcracks were inspected using the procedure described by Peschel and Knoche (2005). To summarize, fruit of susceptible 'Hauszwetsche Wolff' and 'Doppelte Hauszwetsche' and of the nonsusceptible clone P5-112 were immersed for a minimum of 5 min in an aqueous solution containing 0.05% acridine orange. Thereafter, the fruit were removed, rinsed with deionized water, and then carefully blotted. ES were excised from the pedicel end and the stylar end such that the orientation of the ES relative to the pedicel/style axis was known. The ES were inspected using a fluorescence microscope (MZ10F and GFP filter module; Leica Microsystems, Wetzlar, Germany). Calibrated images were taken in incident and fluorescent light (DP 73; Olympus, Hamburg, Germany). The lengths and numbers of microcracks per unit area and the orientation of microcracks relative to the pedicel/style axis were quantified using image analysis (cellSens Dimension 1.7.1; Olympus).

ISOLATION OF CUTICULAR MEMBRANES (CMS). Epidermal discs were excised from susceptible 'Hauszwetsche Wolff' with and without neck shrivel and from nonsusceptible P5-112 using a cork borer (15 mm in diameter).
The discs were incubated in 50 mM citric acid buffer containing pectinase [90 mL·L⁻¹ (Panzym Super E flüssig; Novozymes, Bagsvaerd, Denmark)] and cellulase [5 mL·L⁻¹ (Cellubrix L; Novozymes)] (Orgell, 1955). The pH was adjusted to 4.0 using NaOH. Microbial growth was prevented by adding NaN3 at a final concentration of 30 mM. The isolation medium was refreshed periodically until the CMs separated. The isolated CMs were carefully cleaned of adhering cellular debris using a soft camel-hair brush, desorbed in deionized water (at least five changes), air-dried, and then weighed on an analytical balance.

OSMOTIC POTENTIAL. To investigate whether the stylar end of fruit with neck shrivel may have dehydrated the pedicel end due to a more negative ψS, juice was extracted from fruit with or without neck shrivel and the ψS quantified. Fruit were cut perpendicular to the pedicel/style axis into three sections of about equal height. The ψS of the expressed juice from the pedicel-end and stylar-end sections was determined, and the equatorial sections were discarded. Juice was extracted using a garlic press and analyzed by water vapor pressure osmometry (Vapro 5520; Wescor, Logan, UT). Preliminary experiments established that ψS did not differ significantly between the crude juice, the supernatant, or the pellet of centrifuged juice (M. Knoche, unpublished data).

DATA ANALYSIS AND TERMINOLOGY. Data for permeances were log transformed before analysis of variance (ANOVA) to obtain normal distributions. The ANOVA and mean comparisons (Proc. GLM) were carried out using SAS (version 9.1.3; SAS Institute, Cary, NC). We refer to the mass loss that occurs when incubating intact fruit, or ES mounted in diffusion cells, above dry silica gel as transpiration. The mass loss attributable to respiration is negligibly small. Furthermore, transpiration comprises the loss of water along several parallel pathways, including the cuticle, stomata, microcracks, and, if present, the pedicel surface. Because the density of water at room temperature is approximately 1 kg·L⁻¹ (Nobel, 1999) and transpiration was quantified gravimetrically, the data are given in units of mass.

Results

The cumulative amount of water transpired increased with time and was significantly larger in susceptible 'Hauszwetsche Wolff' and in fruit exhibiting neck shrivel, compared with fruit of the same cultivar without neck shrivel or with fruit of the nonsusceptible clone P5-112 (Fig. 1A). Also, transpiration of ES excised from symptomatic 'Hauszwetsche Wolff' was larger when the ES were obtained from near the pedicel end than from near the stylar end. Compared with susceptible 'Hauszwetsche Wolff', transpiration through the ES of nonsusceptible P5-112 was markedly lower (Fig. 1B). In 'Hauszwetsche Wolff', the permeance of ES from neck shrivel fruit taken near the pedicel end exceeded that of ES from near the stylar end 4.3-fold, but in P5-112, the pedicel end/stylar end difference was only 1.6-fold (Table 1). A comparison of fruit with and without neck shrivel within the susceptible 'Hauszwetsche Etscheid' revealed consistently greater transpiration for ES excised from near the pedicel end than from near the stylar end. In addition, transpiration of skins from symptomatic fruit exceeded that of skins from fruit without neck shrivel, regardless of the skin region (Fig. 1C). These differences were all significant at P ≤ 0.05.
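The significance statements above rest on the log-transform-then-ANOVA procedure described under Data Analysis. A minimal Python equivalent of that step is sketched below (the study itself used Proc. GLM in SAS); the function name and the use of scipy's one-way ANOVA are our assumptions for illustration.

```python
import numpy as np
from scipy.stats import f_oneway

def anova_on_log_permeance(*groups):
    """One-way ANOVA on log10-transformed permeance values.

    Each argument is an array of permeance measurements (m/s) for one
    cultivar/region group; the log transform normalizes the data before
    testing, mirroring the procedure described in the text.
    """
    logged = [np.log10(np.asarray(g, dtype=float)) for g in groups]
    return f_oneway(*logged)  # returns an F statistic and a p value
```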
Visual inspection and fluorescence microscopy revealed numerous microcracks in the skin near the pedicel end of symptomatic 'Hauszwetsche Wolff' fruit but markedly fewer near the stylar end of the same fruit. There were only a few microcracks in the nonsusceptible P5-112, either near the pedicel or the stylar end (Fig. 2). Furthermore, in 'Hauszwetsche Wolff', microcracks near the pedicel end differed from those near the stylar end: the former were more frequent, as indexed by their larger cumulative crack length per unit area, and were also of greater individual length (Table 2). Furthermore, a striking difference was the high degree of orientation of microcracks near the pedicel end of 'Hauszwetsche Wolff' fruit with neck shrivel (Figs. 2 and 3). Frequency distributions of crack orientation indicate that essentially all cracks near the pedicel end run parallel to the pedicel/style axis and hence, viewed from the pedicel end, appear oriented like the spokes of a wheel (with the pedicel being the hub). Generally, the cuticle of the susceptible 'Hauszwetsche Wolff' had a greater mass per unit area compared with the nonsusceptible P5-112 (Table 3). Furthermore, in symptomatic 'Hauszwetsche Wolff', cuticle mass per unit area near the pedicel end exceeded that near the stylar end of the same fruit and also that near the pedicel end of nonsymptomatic fruit of the same cultivar (Table 3). For fruit of susceptible european plum cultivars, the Ψ_S of juice extracted from near the pedicel end and also from near the stylar end was consistently more negative for fruit with neck shrivel than for asymptomatic fruit of the same cultivar (Table 4). Furthermore, whether or not fruit were symptomatic, the Ψ_S of juice from near the pedicel end was more negative than that from near the stylar end. Also, the difference in Ψ_S between the pedicel end and the stylar end was greater for fruit with neck shrivel than for fruit without neck shrivel; 'Toptaste' was an exception, where this difference was not significant. It is particularly interesting that pedicel transpiration also contributed to the water lost by a detached fruit (Fig. 4). Cumulative fruit transpiration increased linearly with time up to 120 h but was slightly greater for fruit with the pedicel attached than with the pedicel detached. Similarly, transpiration from the pedicel (calculated by subtracting the cumulative transpiration of fruit without pedicels from the cumulative transpiration of fruit with pedicels) also increased about linearly up to 120 h (Fig. 4A). In contrast, cumulative transpiration of detached pedicels continued only for about 24 h, the rate (slope) decreasing to zero over this period; i.e., cumulative transpiration asymptoted as the isolated pedicel dried out (Fig. 4B).
Discussion
Our paper establishes that 1) skin permeance is greater near the pedicel end than near the stylar end, particularly so in fruit exhibiting neck shrivel symptoms; 2) the high permeance of the skin near the pedicel end is likely the result of extensive cuticular microcracking; and 3) in fruit with neck shrivel, the Ψ_S of the expressed juice is generally more negative near the pedicel end than near the stylar end.
IN FRUIT WITH NECK SHRIVEL, THE SKIN NEAR THE PEDICEL END HAS GREATER PERMEANCE. The cuticle forms the primary barrier to transpiration, and microcracks impair this barrier function. Hence, cuticular microcracking results in increased water vapor permeance in european plum. This is also true in apple [Malus ×domestica Borkh.
(Maguire et al., 1999)] and sweet cherry (Knoche and Peschel, 2006). In european plum, the vascular water inflow to the pedicel end does not keep pace with the transpirational water outflow from this part of the fruit. This may contribute to neck shrivel and also to the more negative Ψ_S of the flesh at the pedicel end of the fruit, compared with that at the stylar end. This observation was consistent among the three cultivars investigated. It is particularly interesting that the tendency for flesh cell contents to be more concentrated (Ψ_S more negative) by increased skin transpiration near the pedicel end was not offset by a redistribution of water from the flesh cells near the stylar end (Ψ_S less negative). The axial osmotic gradient would be expected to favor such a basipetal water movement. Fruit exhibiting neck shrivel had more negative Ψ_S near both the pedicel and stylar ends, compared with nonsymptomatic fruit. Because mature european plum does not have significant turgor, fruit water potential is essentially equal to fruit Ψ_S (Knoche et al., 2014). Thus, it is intriguing that within a european plum fruit, there appears to be a standing gradient of water potential between the stylar end (less negative) and the pedicel end (more negative). This indicates the existence of a high internal resistance to water movement that somehow prevents establishment of water potential equilibrium within the flesh of a european plum fruit. We suggest that increased transpiration near the pedicel end of the fruit is not the only factor involved in neck shrivel of european plum. Empirical observations indicate that vascular transport also may be involved. For example, it has been reported for a number of fruit crops [e.g., sweet cherry (Grimm et al., 2017; Winkler et al., 2016)] that a wave of increasing xylem dysfunctionality progresses basipetally from the stylar end of the fruit to the pedicel end. If this were also the case in european plum, a diurnal backflow of xylem water from fruit to tree, under daytime conditions of high foliar evaporative demand, may dehydrate the pedicel end of the fruit (via its still-functional xylem), but not the stylar end of the fruit, which is protected by its now-dysfunctional xylem (Lang and Volz, 1993, 1998). Similarly, the pedicel transpiration that occurs postharvest in detached fruit also dehydrates the fruit by pulling water out of it. Hence, the fruit serves as a water reservoir for transpiration that supports the continuing turgescence of the pedicel tissues. Whether the dehydration of the fruit is limited to the pedicel end also will depend on the spatial distribution in the fruit of (still) functional and (now) dysfunctional xylem.
MICROCRACKING PATTERN. We observed a drastic difference in the patterns of cuticular microcracking between the pedicel end and the stylar end of european plum. The skin near the pedicel end had numerous microcracks per unit area, and these were almost all oriented parallel to the fruit's pedicel/style axis. Meanwhile, the microcracks near the stylar end were much fewer and were oriented more or less at random. This observation was unexpected, as the european plum cultivars investigated here were essentially symmetrical in shape about the equatorial plane. This symmetry indicates that the historical growth trajectories (and thus the patterns of skin strain and stress) in the proximal and distal parts of the fruit would have been very similar. Hence, the reason for the differential pattern of microcracking remains obscure.
Several factors may be involved. Microcracking of the cuticle can result from a mismatch between the rate of surface area expansion and that of cuticle deposition, as demonstrated in sweet cherry (Lai et al., 2016), grape (Becker and Knoche, 2012), and apple (Lai et al., 2016). This also applies to the equatorial region of european plum, where decreased cuticle deposition, the onset of elastic strain, and the beginning of microcracking all coincide (Knoche and Peschel, 2007). However, to account for differential microcracking between the pedicel and stylar ends, one would anticipate that surface expansion and/or cuticular deposition also must differ between the two regions. There is no conclusive evidence for differential cuticle deposition in the pedicel-end and stylar-end regions of european plum. In our present study, the pedicel end of the susceptible 'Hauszwetsche Wolff' fruit with neck shrivel tended to have a thicker cuticle than the stylar end. The greater mass per unit area of cuticle near the pedicel end of a shriveled fruit (compared with that near the turgescent stylar end) may be the result of a release of elastic strain and thus skin shrinkage during shriveling. This is consistent with the smaller thickness difference between the pedicel and stylar ends of nonshriveled fruit of 'Hauszwetsche Wolff'; it is also consistent with the absence of a basal/distal gradient in cuticle deposition in our earlier study (Knoche and Peschel, 2007). Nothing is known of the distribution of growth stresses and strains across the surface of european plums. However, the symmetry of this prolate spheroid makes differential growth stresses and strains between the two poles rather unlikely (see above). Previous studies established that surface moisture induces microcracks in the cuticle of sweet cherry (Knoche and Peschel, 2006) and apple fruit (Knoche and Grimm, 2008). The surfaces of both crops, however, lack a delicate fine structure in their epicuticular wax. This makes both surfaces easy to wet, in contrast to european plum; in the latter, the bloom of the epicuticular wax renders the surface difficult to wet. This, and the absence of a pedicel cavity to harbor raindrops, makes moisture-induced microcracking less likely in european plum. Using the theory of plates and shells, Considine and Brown (1981) developed a physical model that predicts the distribution of mechanical stress and the associated failure pattern in a fleshy fruit. For a prolate spheroid, such as the european plum, the model predicts a lengthwise fracture pattern; i.e., the microcracks near the equator will run parallel to the long axis of the fruit. Meanwhile, any cracking near the pedicel and stylar ends will take the form of concentric rings, centered on the pedicel and stylar scars. Again, the observed differential microcracking between the two poles of the symmetric fruit is not accounted for. Clearly, further study is needed to identify the mechanistic basis of the observed highly differential incidence and orientation of cuticular microcracking in european plum.
Conclusions
Increased transpiration through microcracks near the pedicel end of susceptible european plum cultivars is an important factor that contributes to neck shrivel. Whether this is the only factor, or whether dehydration of the pedicel end while still on the tree via pedicel xylem efflux is also involved, remains to be investigated. Furthermore, the mechanistic basis of the pattern of microcracking near the pedicel end is unknown.
The 'stress relaxation analysis' of fruit skins, as recently proposed, may be helpful in identifying the mechanisms underlying microcracking (Knoche and Lang, 2017; Lai et al., 2016).
Fig. 4. Transpiration of 'Hauszwetsche Etscheid' european plum with and without the pedicel and of the pedicel only. (A) Transpiration of the pedicel was determined from detached pedicels or calculated by subtracting the cumulative transpiration of a fruit without its pedicel from that of a fruit with its pedicel. Data are presented as means ± SE; where error bars are not visible, they are smaller than the plotting symbols. (B) Same data for pedicel transpiration as those shown in A, but redrawn on a different y-axis scale. SE bars were omitted in B for clarity.
High-density EEG mobile brain/body imaging data recorded during a challenging auditory gait pacing task
In this report we present a mobile brain/body imaging (MoBI) dataset that allows study of source-resolved cortical dynamics supporting coordinated gait movements in a rhythmic auditory cueing paradigm. Use of an auditory pacing stimulus stream has been recommended to identify deficits and treat gait impairments in neurologic populations. Here, the rhythmic cueing paradigm required healthy young participants to walk on a treadmill (constant speed) while attempting to maintain step synchrony with an auditory pacing stream and to adapt their step length and rate to unanticipated shifts in tempo of the pacing stimuli (e.g., sudden shifts to a faster or slower tempo). High-density electroencephalography (EEG, 108 channels), surface electromyography (EMG, bilateral tibialis anterior), pressure sensors on the heel (to register the timing of heel strikes), and goniometers (knee, hip, and ankle joint angles) were concurrently recorded in 20 participants. The data are provided in the Brain Imaging Data Structure (BIDS) format to promote data sharing and reuse, and to allow the inclusion of the data into fully automated data analysis workflows.
In this report, we present a multimodal dataset from 20 healthy young participants that allows study of the coordination of steps to the timing and rate of the auditory pacing stream, as well as of executive function in gait adaptation. Participants were instrumented with high-density EEG (108 channels), surface electromyography (EMG) with electrodes placed on the tibialis anterior muscle of both legs, pressure sensors on the heel to measure heel strikes, and goniometers measuring joint angles of the ankle, knee, and hip. Participants walked on a treadmill at a constant speed while attempting to step in synchrony with an auditory pacing stream and were required to adapt their step length and rate to shifts in tempo of the pacing stimulus (e.g., to unexpected shifts to a faster or slower pacing tempo). To our knowledge this is the first published dataset featuring EEG recordings during a dynamic gait adaptation task requiring synchronization of steps to auditory cues, and it adds to the only three other public EEG-MoBI datasets recorded during walking 28-30. We have used this dataset to investigate top-down inhibitory control in gait adaptation, modeling source-resolved oscillatory cortical dynamics and event-related potentials time locked to cue tones and heel strikes following cue rate shifts 31,32. These data could be used to support further studies of gait adaptation, error processing, and auditory-motor synchronization during walking, analyses that might give further insights into the underlying cortical mechanisms of auditory rhythmic cue training. This has significant importance for the field of gait rehabilitation in the elderly and in Parkinson's disease 12-15. The multimodal nature of the dataset allows for investigation of relationships between EEG and EMG during walking, including corticomuscular coherence, and joint analysis of movement parameters and EEG. The dataset contains data from 20 subjects, 18 of whom have sufficiently clean EEG data for meaningful analysis, and 16 of whom have kinematic data (the latter are not crucial for EEG analysis, since heel strike markers come from the pressure sensors, whose records are present for all subjects).
The number of subjects in our study is relatively high compared to other EEG studies of walking with fewer subjects, in which significant effects have nevertheless been demonstrated. These include studies using fewer than 15 participants that have demonstrated significant power modulations relative to the gait cycle 23,25,26,33,34 and significant effects of visual feedback during walking 33. Other studies have demonstrated corticomuscular coherence during walking using fewer than 12 participants 35,36 and significant relationships between EEG and kinematics in only 6 subjects 37. We therefore believe that our data set is suited to addressing many questions concerning the EEG brain dynamics that accompany cue-paced gait and gait adjustment.
Methods
Participants. Twenty healthy volunteers (9 females and 11 males, 22-35 years of age; mean 29.1 years, SD 2.7 years) with no neurological or motor deficits participated in this study. The EEG data of two subjects (participant IDs 19 and 20) were heavily contaminated by artifacts and were therefore excluded from the analysis for data validation. Nonetheless, we provide the data of these two subjects for download and tag them as noisy, since they might be useful for people developing tools for artifact removal. All participants reported being right handed. Research shows that footedness follows handedness in right handers, although not consistently so in left handers 38. The experimental procedures were approved by the human ethics committee of the Medical University Graz, Austria. Each subject gave signed informed consent before the experiment.
Data acquisition. The recordings were performed at the Institute of Neural Engineering at Graz University of Technology, Austria. Seven 16-channel amplifiers (g.tec GmbH, Graz, Austria) were combined to record EEG data from 108 passive scalp electrodes positioned as in the 5% International 10/20 System (EasyCap, Herrsching, Germany) 39. Each subject's head circumference was measured to allow for selection of an appropriately sized EEG cap. The cap was aligned on the head such that Cz was at 50% of the distance from the nasion to the inion along the midsagittal plane and at 50% of the distance between the preauricular points. Electrode locations that extended below the conventional 10-20 System layout included F9, FT9, F10, FT10, P9, PO9, P10, PO10, I1, Iz, and I2 (for a schema of the electrode layout see Fig. 1b). Reference and ground electrodes were placed on the left and right mastoids, respectively. All EEG electrode impedances were brought below 5 kΩ before recording. Electromyographic (EMG) signals were recorded from the skin over the tibialis anterior muscles of both legs using standard adhesive-fixed disposable Ag/AgCl surface electrodes. These EMG channels were also recorded using the left and right mastoids as reference and ground, respectively. The EEG and EMG data were sampled at 512 Hz, high-pass filtered >0.1 Hz, low-pass filtered <256 Hz, and a notch filter was applied at 50 Hz to remove power line noise. Foot-ground contacts were measured by mechanical foot switches placed over the calcaneus bone in the heel of each foot. These switches produced event markers for gait cycle heel strike and heel off events. We also recorded data from three flexible segment twin axis goniometers placed on the right hip, knee and ankle (Plux, Wireless Biosignals, Arruda dos Vinhos, Portugal).
The goniometers were attached to the body using medical adhesive tape and placed such that the center of the goniometer was over the joint on the right side of the leg (compare to 29). The data were recorded with the TOBI SignalServer, a custom software application for data acquisition developed at the Institute of Neural Engineering at Graz University of Technology 40,41, and a Simulink script running on MATLAB 2013 (The MathWorks). The TOBI SignalServer, a cross-platform data acquisition system implemented in C++, is designed to support concurrent multirate acquisition from different hardware devices with a focus on performance and stability. The code for the TOBI SignalServer can be obtained from tools4bci.github.io/SignalServer. In each trial the Simulink model calculated step-by-step cadence (the time interval between heel strikes) and adjusted the pacing of the auditory stimulus in the preferred (steady-state) walking condition using the mean of the most recent 6 steps of uncued walking. We also classified left versus right heel strikes in the Simulink model to make sure that the first auditory cue in each trial, as well as the cues marking the tempo changes, always occurred relative to a right heel strike. An application in Ruby (https://www.ruby-lang.org/en/) was developed to play the auditory cues. To record the exact timing of the auditory cues, we recorded the auditory stimulation as a digital input to one of the amplifiers. To this purpose we split the auditory output of the computer so as to feed the auditory cues into a digital circuit that amplified the analog signal so as to drive a transistor into saturation, thus providing a 5-V (TTL) signal to be directly connected to a digital input of the EEG amplifier. All triggers were synchronized via the hardware input of the amplifiers. The goniometers were synchronized to the EEG data using the TOBI SignalServer. During the recording we monitored the EEG data for possible large muscle and movement artifacts. We instructed the participants to relax their shoulders to avoid neck muscle artifacts and to move their head as little as possible. The amplifiers were placed next to the treadmill on two stacked tables so that they were close to the participant's head.
Experimental design and procedure. Trial structure. During the experiment participants walked on the treadmill and were instructed to synchronize their steps to a regular auditory pacing stream into which were introduced infrequent sudden shifts to a slower or faster tempo. Participants were asked to adapt their steps as quickly as possible to the new tempo so as to synchronize with the auditory cues. In each trial (Fig. 1), participants walked at their self-selected comfortable pace without auditory cueing for 10 s, after which a stream of auditory cue tones was delivered at their current step tempo. To make it easier for participants to synchronize with the auditory cue stream, the first cue onset was always close to a right heel strike. The tempo of the auditory cue tones was computed as the mean heel strike interval (heel strike to heel strike) across their 6 most recent non-cued steps. This was done to ensure that the auditory cues always matched their current comfortable walking speed, which varies slightly across trials. Auditory cue tones were delivered via in-ear headphones. The cue sequence was an alternating series of high and low tones presented so as to allow a match to the participant's alternating right and left heel strikes; high/low tone assignment to left/right or right/left steps, respectively, was randomized over subjects. The auditory tones were 100 ms in duration; low tones were sinusoids at 325 Hz, high tones at 512 Hz. The tones were generated in MATLAB (The MathWorks) and played as .wav files. Participants were asked to attempt to synchronize their heel strikes to the regular sequence of auditory cue tones, thus building an expectation of when the next cue would occur. After walking 8-12 s to auditory cues at the preferred cadence, the tempo of the cue stream was suddenly increased ('step-advance' perturbation) or decreased ('step-delay' perturbation) by one-sixth of a step cycle, plus a random jitter of up to ±25 ms. The cue tempo shift always occurred relative to a right heel strike (as in 42). In step-advance or step-delay perturbations, the cue marking the tempo shift would thus seem to participants to arrive either 'too early' (step-advance) or 'too late' (step-delay). Participants were instructed to adjust their steps as quickly as possible in order to synchronize their heel strikes to the tone cues at the new pacing tempo, which was maintained for 30-70 steps (see Fig. 1). Since the treadmill moved at a constant speed throughout the experiment, participants had to implement gait adjustments either by producing (in step-delay trials) one-sixth longer steps or (in step-advance trials) one-sixth shorter steps. After 30-70 steps at the new stepping rate, the next trial began immediately, again with uncued walking. Participants were instructed to return to their most comfortable step length and tempo during this period. We conducted a total of 60 step-advance and 60 step-delay trials in 10 blocks of 12 trials. Each block comprised 6 step-advance and 6 step-delay trials presented in random order. Between blocks, 5-min breaks were given if requested by participants; during breaks, participants either remained standing or sat on a chair that we placed on the treadmill. For a picture of the experimental setup and paradigm see Fig. 1.
Fig. 1. (b) Electrode layout: 108 EEG channels placed according to the 10% system; reference and ground electrodes were placed on the left and right mastoids. (c) Trial structure: treadmill speed was adapted and fixed at a comfortable walking speed by the participant and remained fixed throughout the experiment. During each trial, participants first walked for ~10 s without auditory cues, then walked for 10-18 s while attempting to synchronize their footfalls to brief cue tones delivered at their then-prevailing step rate and phase. Thereafter, beginning at a right heel strike, a sudden (accelerated or decelerated) tempo shift occurred in the pacing cue sequence. In response, participants were instructed to adapt their step length, rate, and phase as quickly as possible, so as to again synchronize their steps with the cue tones at the new tempo. After 30-70 steps, the next trial began immediately, returning again to 10 s of uncued walking during which participants were instructed to return to their most comfortable step rate. The tempo shift always occurred relative to a right step, the first deviant tone indicating the new tempo by being early (in step-advance trials) or late (in step-delay trials).
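The pacing computations themselves lived in the Simulink model and Ruby player described under Data acquisition above; the short Python sketch below merely re-creates our reading of that logic (heel-strike times are hypothetical, and whether the jitter applied to every interval or only to the first deviant tone is simplified here):

```python
import random

def cue_interval_s(heel_strike_times):
    """Pacing interval: mean of the 6 most recent uncued step intervals."""
    t = heel_strike_times[-7:]                       # 7 strikes -> 6 intervals
    intervals = [b - a for a, b in zip(t, t[1:])]
    return sum(intervals) / len(intervals)

def shifted_interval_s(base, direction):
    """Tempo shift by one-sixth of a step cycle plus <= +/-25 ms jitter.
    direction = -1 for 'step-advance' (faster), +1 for 'step-delay' (slower)."""
    jitter = random.uniform(-0.025, 0.025)
    return base + direction * base / 6.0 + jitter

# Hypothetical alternating right/left heel-strike times (s) during uncued walking
strikes = [0.00, 0.61, 1.20, 1.82, 2.41, 3.02, 3.62]
base = cue_interval_s(strikes)
print(base, shifted_interval_s(base, -1), shifted_interval_s(base, +1))
```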
In addition, we recorded two blocks of four minutes each: participants first walked on the treadmill without cues for two minutes; we then stopped the treadmill, and participants remained standing on the treadmill while listening to auditory cues at the cadence of their previous footfalls. These blocks were randomly interspersed with the task blocks.
Training. The experiment used a conventional treadmill (Kettler, Track S4, Ense-Parsit, Germany). Before starting the experimental procedure, we asked participants to walk on the treadmill for 2-3 min to become familiar with treadmill walking. During the subsequent practice period, we asked participants to adapt the belt speed to their most comfortable walking speed, and so determined, for each participant, a belt speed that was held fixed throughout the experiment. Walking speed ranged from 3.0 to 3.7 km per hour between participants, lower than the typically reported comfortable overground walking speed (near 4.6 km/h for women and 5.2 km/h for men 43). To familiarize themselves with the task, participants then practiced walking on the treadmill for about 5 min while attempting to step in synchrony with an auditory cue tone stream. Before beginning the experiment we made sure the participant understood the walking task and was able to perform the gait synchronization task to an acceptable performance level, meaning that participants lengthened or shortened their steps appropriately to adapt to cue advance or delay tempo shifts in the auditory cue sequence, so as to synchronize to the new pacing tempo.
Data Records
All the published data sets are de-identified. All data files are available at OpenNeuro.org under accession number ds001971 44, organized and archived following the EEG extension of the Brain Imaging Data Structure (EEG-BIDS) 45,46. The study was converted to EEG-BIDS using the EEGLAB-to-BIDS plug-in by Delorme & Pernet (github.com/sccn/bids-matlab-tools) for EEGLAB 47, running on MATLAB (The MathWorks). The BIDS specification 45 is a human brain research community standard for organizing and sharing brain imaging data within and between laboratories, which has become widely used for archiving functional magnetic resonance imaging (fMRI) data 45. Linked standards for magnetoencephalographic (MEG) 48 and EEG data 46 have recently been published; see bids-specification.readthedocs.io/en/stable/01-introduction.html for an overview. To preserve detailed information about experimental events occurring during the data recording, BIDS uses the Hierarchical Event Descriptor (HED, version 2.0) system described at HEDtags.org 49,50. Datasets are available in .fdt format, containing EEG, EMG, goniometer and event data in the same files. Multiple .fdt files are available for each subject, representing multiple runs (blocks in one session) of the experiment. For a description of the files available for each run see Table 1 below. For a description of channel types see Table 2. For a full description of event markers see Online-only Table 1.
Technical Validation
EEG data. The EEG setup was carefully prepared to minimize potential artifacts and maximize data quality (see Data acquisition above). For a basic estimation of data quality and validity of the experimental conditions, we compared the change of power in sensorimotor rhythms between standing and walking periods of the experiment.
Power in the α and β bands is typically smaller during movement than during non-movement rest (event-related desynchronization, ERD), as has been widely documented for upper limb movement 51. The EEG data analysis for technical validation was performed on the data of 18 subjects using scripts written in MATLAB 2014a (The MathWorks) incorporating functions from EEGLAB 14.1.2 47. The EEG data were high-pass filtered above 1 Hz (using a zero-phase FIR filter, order 7500) to minimize slow drifts, and low-pass filtered below 200 Hz (using a zero-phase FIR filter, order 36). EEG channels with prominent artifacts were identified by visual inspection and removed. On average, 106 channels per participant (SD ± 2.2; range 102-108) were retained for analysis. The EEG data were then re-referenced to a common average reference. To perform automatic rejection of large movement-related artifacts, we applied artifact subspace reconstruction (ASR, in EEGLAB) 53,54. Thresholds used for artifact rejection were conservative; the threshold for window rejection was disabled and the burst threshold was set to 20. The data were then segmented into step-locked epochs, from 1 s before to 3 s after each right heel strike during the uncued walking period. Epochs containing potential values exceeding ±3 SD of the mean were rejected. Because of the large number (>1,000) of gait cycles per participant, we randomly selected 500 epochs from the uncued gait periods of each participant's data for further analysis. Differences in log spectral power between standing and walking were obtained for each channel. These differences were then averaged over subjects, and the average relative log power in the mu band (8-12 Hz) and beta band (14-25 Hz) was projected onto the scalp using the EEGLAB function topoplot. We also plotted the mean log spectral power for uncued walking and standing for electrode locations Cz, Pz, I1 and I2 (see Fig. 2). The expected smaller power of sensorimotor rhythms during movement compared to non-movement periods 25,33,34,51 was also observed in these data. As shown in Fig. 2a,b, there is clearly less alpha and beta band power over the central scalp when the participants are walking compared to standing. By contrast, EEG power over lateral scalp areas was larger during movement than during standing, likely because of contributions from neck and facial muscle EMG during walking. At electrode locations Cz and Pz, power at higher frequencies (>30 Hz) during walking did not differ from standing (Fig. 2b,c), while at lateral electrode locations I1 and I2, power at all frequencies was larger during walking than during standing. Artifactual contamination of lateral electrode signals by neck muscle EMG during walking has been shown in previous studies 22,25,26 and can be minimized by using blind source separation, typically Independent Component Analysis (ICA) 55,56, or frequency clustering 26.
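To re-create the walking-versus-standing power comparison just described, a minimal sketch along the following lines could be used (synthetic single-channel data stand in for real recordings; the band limits follow the text, and the Welch parameters are our own choices):

```python
import numpy as np
from scipy.signal import welch

fs = 512.0  # Hz, sampling rate of the recordings

def band_log_power(x, fmin, fmax):
    """Mean log10 power of a 1-D EEG signal within [fmin, fmax] Hz."""
    f, pxx = welch(x, fs=fs, nperseg=int(2 * fs))   # 2-s Hann windows
    sel = (f >= fmin) & (f <= fmax)
    return np.mean(np.log10(pxx[sel]))

# Hypothetical single-channel data (e.g., Cz) for the two conditions.
rng = np.random.default_rng(0)
walking = rng.standard_normal(60 * int(fs))
standing = rng.standard_normal(60 * int(fs))

for name, (lo, hi) in {"mu": (8, 12), "beta": (14, 25)}.items():
    diff = band_log_power(walking, lo, hi) - band_log_power(standing, lo, hi)
    print(f"{name}: walking - standing log power = {diff:+.3f}")
```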
Quality of foot switches and temporal precision of auditory cues. The quality of the foot switch data was essential to this experiment, since the timing of auditory stimulation was adjusted online to match the pace of footsteps. The foot switches were therefore continuously monitored during the whole experiment. If there were faulty activations, the treadmill and the paradigm were stopped so that the foot switches could be adjusted. To make sure that we recorded the exact timing of the auditory cues, we split the auditory output of the computer to feed the auditory cues into the digital input of the amplifier, as described in the Methods above. During post-processing, by subtracting the intended latencies of auditory cue onsets from the actual moments at which the cues sounded, we determined that there was a jitter in auditory cue timings of up to ±25 ms.
EMG data. The quality of the EMG signals was assessed before beginning the experiment; participants were instructed to repeatedly execute brisk foot dorsiflexions. During analysis, the EMG data were re-referenced to bipolar derivations, then high-pass filtered above 30 Hz (using a FIR filter, order 226), then rectified and low-pass filtered below 5 Hz (using a FIR filter, order 846) to obtain the signal envelope. The data were then segmented −1 to 3 s around right heel strikes during uncued walking. We then time warped the envelopes of the signal to the median step latency (across subjects) using linear interpolation. This procedure aligned the latencies of right and left heel strikes across trials. As shown in Fig. 3a, the muscle activations can be clearly seen during the walking condition; as expected, the right leg tibialis anterior is maximally active shortly before the right heel strike. The EMG recorded from the tibialis anterior muscle of the left leg was noisier and is not displayed.
Fig. 2. Technical assessment of the EEG data. (a) Scalp maps showing the scalp distribution of log power differences in mu and beta power between walking and standing periods. Cool colors represent negative differences, warm colors positive differences. Over the central scalp, the maps show a clear reduction in power in the mu and beta bands during walking compared to standing. (b) Log power spectra for channels Cz and Pz during standing and walking. Mu and beta power is higher during standing (red trace) compared to walking (blue trace). Envelopes show ±3 standard errors of the mean. (c) Log power spectra for channels I1 and I2 during standing and walking. Power, especially at higher frequencies, is higher during walking (blue trace) compared to standing (red trace).
Goniometer data. The goniometers were carefully set up according to the manufacturer's instructions (compare to 29). After attaching the three goniometers to the hip, knee, and ankle, the signals were checked while the participant walked on the treadmill. The goniometer signals were continuously monitored to guard against interference and data corruption. To visualize the goniometer data, we high-pass filtered the data above 0.5 Hz (FIR filter order 3380) and then low-pass filtered the data below 5 Hz (FIR filter order 846). The data were then segmented (−1 to 3 s from right heel strikes) during uncued walking. We then time warped the goniometer data to the median step latency (across subjects) using linear interpolation. This procedure aligned the time points of right and left heel strikes over trials. Sixteen of 18 subjects had usable goniometer data. Figure 3b-d shows the hip, knee, and ankle joint angles for 16 subjects during the uncued walking period on the treadmill.
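A compact sketch of the envelope extraction and linear-interpolation time warping described in this section is given below (illustrative only: the filter lengths follow the stated FIR orders, zero-phase filtfilt stands in for the original filtering, and the filters should be applied to the continuous recording before epoching, since filtfilt needs signals several times longer than the filter):

```python
import numpy as np
from scipy.signal import firwin, filtfilt

fs = 512.0  # Hz, sampling rate

def emg_envelope(bipolar):
    """EMG envelope: >30 Hz high-pass, full-wave rectify, <5 Hz low-pass.
    FIR orders 226/846 in the text correspond to 227/847 taps here."""
    hp = firwin(227, 30.0, fs=fs, pass_zero=False)
    x = np.abs(filtfilt(hp, [1.0], bipolar))      # rectified high-passed signal
    lp = firwin(847, 5.0, fs=fs)
    return filtfilt(lp, [1.0], x)                 # smoothed envelope

def timewarp(x, ev_s, target_ev_s):
    """Piecewise-linearly warp one epoch so its event latencies (left/right
    heel strikes, in s) land on the across-subject median latencies."""
    t = np.arange(len(x)) / fs
    src = np.concatenate(([t[0]], ev_s, [t[-1]]))
    dst = np.concatenate(([t[0]], target_ev_s, [t[-1]]))
    # for each output time, find where to sample in the source timeline
    return np.interp(np.interp(t, dst, src), t, x)
```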
Usage Notes
Since EEG data recorded during walking are usually noisy, we recommend the use of advanced artifact rejection and artifact correction methods (available, for example, in the EEGLAB toolbox, freely available from sccn.ucsd.edu/eeglab/index.php 47). For analysis of EEG data recorded during walking, see the following references: 22,24-26,31. Other studies have looked at the types and morphologies of movement artifacts in EEG data during walking; these reports can be helpful for identifying EEG artifacts recorded during MoBI experiments (see 57-59).
Code Availability
The code we developed to record EEG/EOG and goniometer data, run the paradigm, and adapt the cue rate online to the participants' cadence in each trial is based on the TOBI SignalServer, freely available online (tools4bci.github.io/SignalServer), a custom Simulink model running in MATLAB (The MathWorks), and a Ruby application for playing the auditory cues, which is available upon request from the authors.
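As a further usage sketch, the BIDS-formatted data can presumably also be read in Python with MNE-BIDS as an alternative to EEGLAB; note that the subject, task, and run entity values below are placeholders, to be replaced by the labels actually found in the downloaded dataset tree:

```python
# Minimal loading sketch using MNE-BIDS (entity names below are guesses;
# check the task/run labels in the local copy of OpenNeuro ds001971).
from mne_bids import BIDSPath, read_raw_bids

bids_root = "ds001971"                       # local clone of the dataset
path = BIDSPath(root=bids_root, subject="001",
                task="AudioCueWalkingStudy",  # hypothetical task label
                run="1", datatype="eeg")
raw = read_raw_bids(path)                    # EEG/EMG/goniometer channels + events
print(raw.info["ch_names"][:10])
print(raw.annotations[:5])                   # event markers as annotations
```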
Language-Agnostic Model for Aspect-Based Sentiment Analysis
In this paper, we propose a language-agnostic deep neural network architecture for aspect-based sentiment analysis. The proposed approach is based on a Bidirectional Long Short-Term Memory (Bi-LSTM) network, which is further assisted with extra hand-crafted features. We define three different architectures for the successful combination of word embeddings and hand-crafted features. We evaluate the proposed approach for six languages (i.e. English, Spanish, French, Dutch, German and Hindi) and two problems (i.e. aspect term extraction and aspect sentiment classification). Experiments show that the proposed model attains state-of-the-art performance in most of the settings.
Introduction
Sentiment analysis (Pang and Lee, 2008) is often target-centric. In aspect-based sentiment analysis (ABSA), we aim to identify the polarity of expressed sentiments towards a feature or aspect. These features or aspects are usually explicitly mentioned in the text. Also, a sentence may contain more than one aspect term, and the task is to assign separate sentiments to each of them; e.g., in "The food was great! But service was below par." there are two aspects ('food' and 'service'), and the expressed sentiments towards food and service are positive and negative, respectively. Such analysis offers fine-grained information to a user or an organization seeking users' opinions towards any specific entity. For example, based on users' feedback, an individual can draw a general perception about a specific attribute or aspect of a product or service, and he/she can make an informed decision about the product or service under observation. Similarly, an organization can utilize the feedback to refine its product/service or to take decisions about its business model. Aspect-based sentiment analysis (Pontiki et al., 2014, 2016) has two subproblems at its core, i.e., aspect term identification (or opinion target extraction) and aspect sentiment classification. Given a text, the aspect term identification task aims to find the boundaries of all the aspect terms present in the text, whereas the aspect sentiment classification task classifies each of these identified aspect terms into one of the predefined sentiment classes (e.g., positive, negative, neutral). A sentence may contain any number of aspect terms or no aspect term at all. The terms 'aspect term' and 'opinion target' are often used interchangeably and refer to the same span of text.
Motivation and Contribution
A survey of the literature for ABSA reveals a number of works for different languages (Brun et al., 2016; Çetin et al., 2016). Although the reported performances of these works are good, they usually suffer in handling language diversity, i.e., systems that report state-of-the-art performance for one language typically do not work well for other languages. The unavailability of such a generic system motivates us to build a language-agnostic model for aspect-based sentiment analysis. We propose a generic deep neural network architecture that handles language divergence to a great extent. Our model is based on a Bidirectional Long Short-Term Memory (Bi-LSTM) network (Graves et al., 2005) that also utilizes extra hand-crafted features. We evaluate our proposed approach on four European languages (i.e., Spanish, French, Dutch & German), one Indian language (i.e., Hindi), and English.
The contributions of our work are three-fold: a) we propose an efficient and generic neural network architecture that works across multiple languages; b) we utilize a small set of hand-crafted features (one each for aspect extraction and aspect classification) for the training and evaluation; and c) we report new state-of-the-art performance for the two problems of ABSA across six different languages. The rest of the paper is organized as follows: in Section 2, we present the literature survey; the proposed methodology is discussed in detail in Section 3; in Section 4, we present experimental results and the necessary analysis; finally, we conclude in Section 5.
For ABSA, system GTI (Alvarez-López et al., 2016) used a Support Vector Machine (SVM) and Conditional Random Field (CRF) based approach for aspect extraction and sentiment classification. They used language-dependent features like lemmas and Part-of-Speech (PoS) tags to achieve the state-of-the-art score for aspect extraction in Spanish. IIT-TUDA also used a number of hand-crafted features, like character n-grams, dependency relations, prefixes and suffixes, for SVM and CRF; they achieved comparable performance for Spanish, French & Dutch. System XRCE (Brun et al., 2016) used a feedback ensemble network that obtained the best performance for aspect classification on the French dataset. System TGB (Çetin et al., 2016) used a Logistic Regression based model to address aspect sentiment classification and reported the best score on the Dutch dataset. Mishra et al. (2017) used a Bi-LSTM based model, whereas Naderalvojoud et al. (2017) adopted a deep recurrent neural network model for the German dataset. For Hindi, an aspect-based sentiment analysis dataset was developed whose creators employed CRF and SVM for aspect term extraction and aspect sentiment classification, respectively. For aspect-based sentiment analysis in English, Kiritchenko et al. (2014) reported the best performance in the SemEval-2014 shared task on ABSA (Pontiki et al., 2014). There have been a few attempts at injecting hand-crafted features into neural network architectures for enhancing the overall performance of sentiment analysis (Araque et al., 2017). One line of work combined CNN representations and optimized features for learning a Support Vector Machine. The authors of (Araque et al., 2017) proposed a classifier ensemble model that combines surface-level features and generic word vectors for sentiment classification. However, our work differs from these systems in the following ways: a) we perform aspect-level sentiment analysis for six different languages (belonging to different language families); b) we propose four different architectures to successfully combine the neural network learned representations and the hand-crafted features; c) the proposed architectures handle both aspect extraction (a sequence labelling task) and aspect sentiment classification (a classification task); and d) we achieve better performance for most of the problem/language pairs.
Proposed Method
Overall, aspect-based sentiment analysis can be thought of as a two-step process, i.e. aspect term extraction and aspect sentiment classification. Aspect term extraction is a sequence labelling task where each token of a sentence needs to be classified as either inside the boundary of an aspect term or outside of it. We adopted the BIO notation to mark each token as either Begin, Intermediate or Outside of an aspect term. A 'B' signifies the beginning of an aspect term and successive 'I's signify a multi-token aspect term (e.g. spicy tuna rolls); a single-token aspect term is tagged as 'B'. For the second problem, i.e. aspect sentiment classification, we define a context window of size ±5 around each aspect term and consider all the tokens within the window for an instance. The intuition behind such an approach is that the sentiment-bearing clue words often occur close to the aspect terms. An example scenario is depicted in Table 1.
Table 1: An example review from the restaurant domain and its processing for aspect term extraction (BIO notation) and aspect sentiment classification (context windows).
Review: Rice was good but the main attraction was spicy tuna rolls .
BIO notation: Rice/B was/O good/O but/O the/O main/O attraction/O was/O spicy/B tuna/I rolls/I ./O
Context window (±5) for 'Rice': Prev5-Prev1 = null, null, null, null, null; Next1-Next5 = was, good, but, the, main
Context window (±5) for 'spicy tuna rolls': Prev5-Prev1 = but, the, main, attraction, was; Next1-Next5 = ., null, null, null, null
Aspect sentiment: positive for 'Rice' and positive for 'spicy tuna rolls'.
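A minimal Python sketch of this BIO encoding and context windowing (whitespace tokenization; the helper names are purely illustrative, not the system's actual code) reproduces the windows of Table 1:

```python
def bio_tags(tokens, aspects):
    """Tag tokens with B/I/O for the given aspect-term token spans."""
    tags = ["O"] * len(tokens)
    for start, end in aspects:                  # end is exclusive
        tags[start] = "B"
        for i in range(start + 1, end):
            tags[i] = "I"
    return tags

def context_window(tokens, start, end, k=5):
    """k tokens left and right of an aspect span, padded with 'null'."""
    left = ["null"] * max(0, k - start) + tokens[max(0, start - k):start]
    right = tokens[end:end + k]
    right += ["null"] * (k - len(right))
    return left, right

tokens = "Rice was good but the main attraction was spicy tuna rolls .".split()
aspects = [(0, 1), (8, 11)]                     # 'Rice', 'spicy tuna rolls'
print(bio_tags(tokens, aspects))                # B O O O O O O O B I I O
print(context_window(tokens, 0, 1))             # Rice: 5 nulls | was good but the main
print(context_window(tokens, 8, 11))            # spicy tuna rolls: but the main attraction was | . + 4 nulls
```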
Our proposed neural network architecture employs a Bi-LSTM network for learning sentence embeddings, which are then fed to fully-connected dense layers for classification. Given a sentence, we first compute the word embeddings of each word and feed them into the Bi-LSTM network at different time steps for the prediction. We refer to this architecture as A1. In addition, we inject extra hand-crafted features to assist the neural architecture. We design three architectures (i.e. A2, A3 & A4 in Figure 1) for the successful combination of word embeddings and the hand-crafted features. The basic difference among these three architectures is the way the features are injected into the model. A high-level architecture of our proposed method is depicted in Figure 1. Architecture A1 makes use of word embeddings as the sole input for the network. In A2, we concatenate the word embeddings with the hand-crafted features at the input and then feed this combined input to the network for learning. In comparison, architecture A3 learns the sentence embedding through the Bi-LSTM network on top of the word embeddings only; this is then merged with the hand-crafted features before being fed into the fully connected layers for prediction. In contrast, architecture A4 utilizes two separate Bi-LSTM networks for the word embeddings and the hand-crafted features, respectively. Subsequently, the learned sequences of the two Bi-LSTMs are concatenated and fed into the fully-connected layers for further prediction. The choice of separate Bi-LSTMs for the hand-crafted features in architecture A4 is driven by the fact that the dimension of a word embedding is usually very high compared to that of its corresponding hand-crafted features. If trained together, as in architecture A2, extracted features of low dimension usually get overshadowed by the high-dimensional word embeddings, making it nontrivial for the network to learn from the extracted features. Further, to exploit the sequence information of words in a sentence, we pass the hand-crafted features of each word through a separate Bi-LSTM layer. E.g., in the following sentence there is one negative word (i.e. horrible) and one negation (i.e. not) but no positive words; a model that takes into account only simple polar word scores would therefore assign the sentence high relevance towards the negative sentiment. However, the sequence information of the phrase "not any more" dictates the positive sentiment of the sentence: "It used to be a horrible place to eat but not any more." In contrast to A4, architecture A3 does not rely on the sequence information of the extracted features and allows the network to learn on its own. We use 300-dimensional Word2Vec (Mikolov et al., 2013) word embeddings for the experiments. Each Bi-LSTM layer contains 100 neurons, while the two dense layers contain 100 and 50 neurons, respectively.
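For concreteness, a Keras sketch of architecture A4 for the aspect extraction (sequence labelling) setting is shown below; the Bi-LSTM and dense layer sizes follow the text, while the maximum sequence length, feature dimension, activation functions and loss are illustrative assumptions:

```python
from tensorflow.keras import layers, Model

MAX_LEN, EMB_DIM, FEAT_DIM, N_TAGS = 60, 300, 10, 3   # FEAT_DIM, MAX_LEN assumed

words = layers.Input(shape=(MAX_LEN, EMB_DIM), name="word_embeddings")
feats = layers.Input(shape=(MAX_LEN, FEAT_DIM), name="handcrafted_features")

# Two parallel Bi-LSTMs: one over word embeddings, one over hand-crafted features
h_w = layers.Bidirectional(layers.LSTM(100, return_sequences=True))(words)
h_f = layers.Bidirectional(layers.LSTM(100, return_sequences=True))(feats)

h = layers.concatenate([h_w, h_f])            # merge the learned sequences
h = layers.Dense(100, activation="relu")(h)   # dense sizes as stated in the text
h = layers.Dense(50, activation="relu")(h)
out = layers.Dense(N_TAGS, activation="softmax")(h)   # per-token B/I/O prediction

model = Model([words, feats], out)
model.compile(optimizer="adam", loss="categorical_crossentropy")
# For aspect sentiment classification, the same trunk with
# return_sequences=False and a sentiment-class softmax would apply.
```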
Features
As additional features, we extract the following information for each token in an instance.
- Aspect term extraction: A distributional thesaurus (DT) (Biemann and Riedl, 2013) defines the lexical expansion of a token based on similar contexts. It is usually very effective for handling unseen text: if a token in the test set never appears in the training set, making a correct prediction becomes a non-trivial task for the classifier. By employing the DT feature, the classifier can additionally utilize the lexical expansion of the current token for mapping to the training set, thus reducing the problem of unseen text. For each token, we use its top 3 DT expansions as features.
Datasets
We evaluate our proposed approach on the benchmark datasets of the SemEval-2016 shared task on aspect-based sentiment analysis (Pontiki et al., 2016) (Task 5), which contain user reviews across multiple languages. The datasets for English, Spanish, French and Dutch are related to reviews of consumer electronics and restaurants. We also evaluate our approach on the GermEval-2017 shared task on ABSA (Wojatzki et al., 2017), which comprises reviews in the German language. The training datasets contain 2,070, 1,733, 1,711 & 19,432 reviews in Spanish, French, Dutch and German, respectively, whereas the test datasets contain 881, 696, 575 & 2,566 reviews for the respective languages. For Hindi, we employed the ABSA dataset developed by Akhtar et al.; there are a total of 4,469 aspect terms in 5,417 sentences across 12 domains, and we perform 10-fold cross validation for the evaluation in this work. Table 2 lists brief statistics of the various datasets for the different languages.
Preprocessing
We extract each instance from the SemEval and GermEval datasets so as to take into account only the relevant information, and we remove the XML tags. We use NLTK (a shallow parser for Hindi) to tokenize each sentence of the dataset. Aspect terms can span multiple words in a sentence; hence, we use the BIO encoding scheme, in which B, I and O denote the beginning, internal and outside tokens of an aspect term, respectively.
Table 3 reports the results for each language/problem pair. In the aspect extraction problem, architecture A4 yields the best F1-score for Spanish (73.0%), German (24.0%), English (64.9%) and Hindi (53.5%), whereas for French and Dutch we obtain the best F1-scores with architectures A2 (67.8%) and A3 (65.7%), respectively. We observe similar trends for aspect classification as well, with architecture A4 performing better for Spanish (87.2% accuracy), German (87.2% F1-score), English (83.4% accuracy) and Hindi (66.9% accuracy). Similar to aspect extraction, architectures A2 and A3 report better performance for French (75.34%) and Dutch (81.9%), respectively. Among all four architectures, architecture A1 has the lowest performance across all six languages for both problems.
This suggests that the hand-crafted features, when fused into the network, help the system learn better than with word embeddings alone. We also performed a statistical significance test (t-test) on the obtained results and observed that the performance of architecture A4 is significant with 95% confidence for English, Spanish, German and Hindi for both problems. Further, we compare our proposed system with state-of-the-art systems, as listed in Table 4. Our proposed system shows an improvement over the existing state of the art for 9 out of 12 language/problem pairs. For aspect extraction, the system achieves improvements of 4.5, 1.2, 8.8, 2 and 12.5 points for Spanish, French, Dutch, German and Hindi, respectively. Our system improves the sentiment classification scores for Spanish, Dutch, German and Hindi by 3.56, 4.17, 12.3 and 1 points, respectively. The improvement of the system performance across language/problem pairs suggests the generic nature of our proposed approach. Also, the significance t-test shows that the improvements of the proposed method over the state-of-the-art systems are statistically significant, with p-values < 0.05. From Table 3, we observe that architecture A4 performs best for four languages, i.e., Spanish, German, English and Hindi, irrespective of the problem. Similarly, the performance of architectures A2 & A3 is best for French and Dutch, respectively. Since architecture A4 is the clear winner in 8 out of 12 language/problem pairs and also reports comparable performance in the other cases (at most 2.9 points below the best architecture, as reported in Table 3), we recommend it as the default choice for all languages and problems.
Error Analysis
We perform error analysis on the predicted outputs, using automatic translations (Google) for the languages we are not proficient in. The following are a few cases where our proposed system often faces challenges.
Aspect term extraction: Aspect term extraction is a challenging task. The BIO notation is an effective solution for tagging aspect terms; however, it is highly skewed towards the O class, i.e., only a small percentage of tokens in the vocabulary qualify as aspect terms. Despite this limitation, the BIO notation results in decent outputs, with a few exceptions. In Table 5, we list a few common error patterns along with examples. Our system faces difficulties when one or more tokens can independently qualify as an aspect term. In the first two examples, our system misclassifies the multi-token aspect terms 'customer service' and 'atención del personal' (attention of the staff) as single-token aspect terms: it predicts the first token of the aspect term (i.e., 'customer' in the first example and 'atención' (attention) in the second) as one aspect term and the last token (i.e., 'service' and 'personal' (staff)) as another. Although both tokens of the aspect term 'customer service' are identified as aspect terms, exact-match evaluation yields recall = 0 and precision = 0. Another common error pattern involves conjunctions or prepositions ('and', 'with', etc.) in multi-token aspect terms (i.e. 'riz arborio aux truffles' (arborio rice with truffles)). In general, 'and', 'with' and other conjunctions do not qualify as part of an aspect term except in the company of multi-token aspect terms. However, such occurrences are not very common, and the underlying system misclassifies them as outside an aspect term, i.e., O. The second example (i.e. 'atención del personal' (attention of the staff)) may also qualify for a similar reason.
Aspect sentiment classification: For aspect sentiment classification, we observed the two most common sources of error across languages to be a lack of polar information inside the defined context window (±5 neighbouring words) and the presence of sarcastic or metaphoric phrases in the review. We list a few error cases in Table 6. The first example belongs to the Spanish language and contains the aspect term 'calidad-precio' (quality-price). The actual sentiment towards the aspect term is positive; however, in the absence of the clue words (i.e. 'restaurantes de referencia de Zaragoza' (recommended restaurants of Zaragoza)) inside the context window, our proposed system predicts its sentiment as neutral. Predicting sentiment for sarcastic and metaphoric text is usually challenging due to the difference between the textual meaning and the actual meaning (i.e., what is said is not what is meant, or vice versa). Our system also finds it non-trivial to correctly classify an aspect term in the presence of sarcastic (second example of Table 6) or metaphoric (third example) text. In the second example, the staff's unresponsive behaviour irked the writer, who had to ask for a table sarcastically. Similarly, in the third example, the writer was not amused by the quality of the lemon chicken and compared it with sticky sweet donuts as a figure of speech.
Conclusion
In this paper, we have proposed a language-agnostic deep neural network approach for solving the problems of aspect-based sentiment analysis. Our system employs a Bi-LSTM network for learning sentence embeddings, assisted by a few hand-crafted features. To show its effectiveness, we evaluated the proposed approach on six languages (i.e. English, Spanish, French, Dutch, German and Hindi) and two problems (i.e. aspect term extraction and aspect sentiment classification). We also evaluated different architectures for combining sentence embeddings and hand-crafted features. Comparisons with existing systems suggest that our proposed approach attains state-of-the-art performance for almost every language/problem pair.
Acknowledgement
Asif Ekbal acknowledges the Young Faculty Research Fellowship (YFRF), supported by the Visvesvaraya PhD scheme for Electronics and IT, Ministry of Electronics and Information Technology (MeitY), Government of India, implemented by Digital India Corporation (formerly Media Lab Asia).
Improved nutritional status and bone health after diet-induced weight loss in sedentary osteoarthritis patients: a prospective cohort study BACKGROUND/OBJECTIVES: Obese subjects are commonly deficient in several micronutrients. Weight loss, although beneficial, may also lead to adverse changes in micronutrient status and body composition. The objective of the study is to assess changes in micronutrient status and body composition in obese individuals after a dietary weight loss program. SUBJECTS/METHODS: As part of a dietary weight loss trial enrolling 192 obese patients (body mass index >30 kg/m²) with knee osteoarthritis (>50 years of age), vitamin D, ferritin, vitamin B12 and body composition were measured at baseline and after 16 weeks. All followed an 8-week formula weight-loss diet of 415-810 kcal per day, followed by 8 weeks on a hypo-energetic 1200 kcal per day diet with a combination of normal food and formula products. Statistical analyses were based on paired samples in the completer population. RESULTS: A total of 175 patients (142 women), 91%, completed the 16-week program and had a body weight loss of 14.0 kg (95% confidence interval: 13.3-14.7; P<0.0001), consisting of 1.8 kg (1.3-2.3; P<0.0001) lean body mass (LBM) and 11.0 kg (10.4-11.6; P<0.0001) fat mass. Bone mineral content (BMC) did not change (−13.5 g; P=0.18), whereas bone mineral density (BMD) increased by 0.004 g/cm² (0.001-0.008 g/cm²; P=0.025). Plasma vitamin D and B12 increased by 15.3 nmol/l (13.2-17.3; P<0.0001) and 43.7 pmol/l (32.1-55.4; P<0.0001), respectively. There was no change in plasma ferritin. CONCLUSIONS: This intensive program with formula diet resulted in increased BMD and improved vitamin D and B12 levels. Ferritin and BMC were unchanged, and the loss of LBM was only 13% of the total weight loss. This observational evidence supports the use of formula diet-induced weight loss therapy in obese osteoarthritis patients. INTRODUCTION Obese subjects often show micronutrient deficiencies. 1-5 The reasons for this are complex. Obesity reduces the bioavailability of several vitamins, and nutrient metabolism may be altered. Furthermore, the quality of ingested foods may be poor. 2 Obesity and being overweight represent a rapidly growing threat to the health of populations in an increasing number of countries. 6 Positive energy balance deriving from excessive food intake in relation to energy expenditure is the pathophysiological basis of obesity in most cases. Weight loss is expected to result in a significant reduction in the risk of the majority of obesity-related comorbid conditions. 7 Weight loss has, however, also been associated with a potentially harmful loss of muscle mass and bone in obese individuals. 8 A variety of weight loss methods are available today, including diet therapy approaches such as low-calorie diets and lower-fat diets, changes in physical activity patterns, behavior therapy techniques, pharmacotherapy, surgery and combinations of these techniques. Among these, bariatric surgery is the most effective, but it has been found to aggravate insufficiencies of several micronutrients. 9 As obesity and micronutrient deficiencies are both associated with increased risk of morbidity, one must consider the nutritional value and capacity of weight-loss treatments to secure adequate amounts of nutrients and prevent detrimental effects while losing weight.
In the current study, we used a prospective cohort of sedentary obese knee osteoarthritis patients who completed a weight loss trial to examine the effect of a formula low-energy diet on micronutrient status (vitamin D, vitamin B12 and ferritin) as well as on body composition. Clinically, osteoarthritis causes painful joints and is a leading cause of impaired mobility in the elderly; most patients with symptomatic knee osteoarthritis have limitations in function that prevent them from engaging in their usual activities. 10 Our objective was to assess and evaluate changes in micronutrient status (vitamin D, B12 and ferritin) and body composition in obese knee osteoarthritis patients after 8 weeks of a low-energy diet followed by 8 weeks of a hypo-energetic diet including two formula diet products daily. PARTICIPANTS AND METHODS The results presented in this paper are from a prospective cohort of 192 well-characterized obese knee osteoarthritis patients over 50 years of age. In this study, we assessed micronutrient status and body composition, including total body bone mineral content (BMC) and bone mineral density (BMD), in the completer population, that is, all the participants who entered and completed the weight loss trial, using data from the baseline and 16-week assessments. The CAROT study ('Influence of weight loss or exercise on CARtilage in Obese knee osteoarthritis patients Trial') was a randomized controlled trial designed to answer the question of how to maintain the anticipated symptomatic effect by sustaining weight loss for 1 year (ClinicalTrials.gov identifier: NCT00655941). All the participants initially received dietary support for 16 weeks in order to lose body weight and obtain a clinically important reduction in pain and improvement in physical function and mobility. 11 The current study looks into the nutritional benefits and harms following the 16-week diet scheme. Setting Participants included in this pragmatic trial were recruited between November 2007 and August 2008 from the outpatient clinic at the Department of Rheumatology, Frederiksberg Hospital, Frederiksberg, Denmark, through advertisements in newspapers and on the website of the Parker Institute. Additionally, local general practitioners were informed about the possibility of assigning patients to the project. Participants Individuals who were >50 years of age with knee osteoarthritis confirmed by standing radiographs were eligible for inclusion, 12 and obese as defined by a body mass index (BMI) ≥30 kg/m². Exclusion criteria were: lack of motivation to lose weight, inability to speak Danish, planned antiobesity surgery, total knee alloplasty and receiving pharmacological therapy for obesity. In all, 192 patients were enrolled in the trial. The participants were asked not to change any medication or nutritional supplement during the study. The study was approved by the ethics committee of the Capital Region of Denmark (H-B-2007-088) and all participants signed an informed consent form. Interventions The first phase of the study consisted of an 8-week weight reduction program where the participants used either an all-provided very low energy diet (VLED) with 420-554 kcal/d (1743-2327 kJ/d) or a low energy diet (LED) with 810 kcal/d (3402 kJ/d) in a supervised dietary program (products provided by the Cambridge Diet, the Cambridge Weight Plan, UK).
Participants were weighed on a decimal scale and given nutritional and dietetic instructions by an experienced dietician in weekly sessions of 1½-2 h. The VLED program consisted of a powdered formula mixture dissolved in water. Women below a height of 173 cm were given three sachets a day, ≈415 kcal per day (1743 kJ per day; 43.2 g protein). Men, and women taller than 173 cm, were given four sachets a day, ≈520 kcal per day (2327 kJ per day; 57.6 g protein). The LED program consisted of a powdered formula mixture dissolved in skimmed milk and water. Participants were given four sachets a day, three of which were dissolved in milk using 7.5 dl of milk per day and one in water (total: 3402 kJ per day, 83.9 g protein). Both programs met all recommendations for daily intake of essential amino acids, fatty acids, vitamins and minerals. Daily intake of vitamin D was 5 µg, vitamin B12 was 2 µg, iron was 14 mg and calcium was 912 mg in the VLED group. In the LED group, daily intake of vitamin D was 7.3 µg, vitamin B12 was 6.4 µg, iron was 19 mg and calcium was 2146 mg. Daily intake of protein was at least 43.2 g, and of the essential fatty acids linoleic acid and linolenic acid 3 and 0.4 g, respectively. Dietary fiber intake was 7.2 g per day at minimum. The second phase of the study consisted of an 8-week hypo-energetic diet program of ≈1200 kcal per day (5040 kJ per day) incorporating two formula diet products daily. All participants were taught to make diet plans with 5-6 small meals a day. The principles of the diet were in line with the guidelines for healthy eating issued by the Danish National Board of Health, that is, low fat, low sugar and high fiber. The two daily diet products supplied 3.4 µg of vitamin D, 1.4 µg of vitamin B12, 9.4 mg of iron and 608 mg of calcium. The aim and focus of the dietary education was to modify long-term habitual eating patterns. Variables Body weight was measured on digital scales (TANITA BW-800, Frederiksberg Vaegtfabrik, Frederiksberg, Denmark). Other outcome measures were changes in BMI, calculated as a person's weight (in kg) divided by the square of his/her height (in m), where height was measured to the nearest 0.01 m, blood-hemoglobin, plasma parathyroid hormone (PTH), plasma-25-OH-vitamin D3 (vitamin D), plasma-cobalamin (vitamin B12) and plasma-ferritin (iron). All were measured at baseline and at week 16. All blood samples were analyzed at the Clinical Chemistry Department at Frederiksberg Hospital. Plasma-25-OH-vitamin D3 was measured on an Abbott Architect ISR using a microparticle chemiluminescence immunoassay, plasma-cobalamin and plasma-ferritin were measured on an Abbott Architect i2000SR using a two-step immunoassay with chemiluminescent microparticle technology, and PTH was measured on a Cobas e601 using a sandwich immunoassay with chemiluminescence detection. Micronutrient deficiency was defined according to the references from the Clinical Chemistry Laboratory at Frederiksberg Hospital: cutoff values were P-25-OH-vitamin D3 <50 nmol/l, P-cobalamin <200 pmol/l and P-ferritin <12 µg/l. The rationale for selecting these three micronutrients is that they are linked to important processes in the body, and both obesity and older age increase the risk of deficiency. The cutoff value for too-high levels of PTH was 6.9 pmol/l.
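For reference, the BMI definition quoted above can be written compactly; the worked number on the right is a back-of-envelope check, since the 102.4 kg mean weight and 37.1 kg/m² mean BMI reported below imply an average height of roughly 1.66 m:

$$\mathrm{BMI} = \frac{m\,[\mathrm{kg}]}{h^{2}\,[\mathrm{m}^{2}]}, \qquad \text{e.g.}\ \frac{102.4\ \mathrm{kg}}{(1.66\ \mathrm{m})^{2}} \approx 37.1\ \mathrm{kg/m^{2}}.$$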
Lean body mass (LBM, kg), body fat (kg), BMD (g/cm²) and BMC (g) were determined by dual energy X-ray absorptiometry using a Lunar DPX IQ Full Body Bone Densitometer (GE Medical Systems, Madison, WI, USA), measured at baseline and after 16 weeks' diet therapy. The cohort was analyzed in total as well as stratified by sex. The rationale for this is that both blood levels of certain vitamins and minerals and body composition depend on sex. Statistics The overall statistical analysis plan scrutinized the null hypothesis that none of the included outcome measures had changed significantly during an intensive weight-loss program. Thus H0: ΔX = 0, which was tested using one-sample, paired t-tests. A priori we considered a P-value <0.05 (two-sided) as indicating rejection of the null hypothesis. For sensitivity, in order to support the results from the group-level one-sample t-tests, we also applied Spearman's correlation analyses to assess whether there was an association between the weight change and the subsequent change in nutritional status and/or bone health at the level of the individual patient. The SAS statistical package (version 9.2; SAS Institute Inc., Cary, NC, USA) was used for all statistical analyses. RESULTS Of the 192 participants randomized to the trial, 175 (91%) completed the study (returned for final data collection at week 16). Only participants returning for the final examination are included in these analyses. The baseline characteristics of the cohort are presented in Table 1. The mean age of the participants (±s.d.) was 62.6 ± 6.3 years. The majority of the participants were women (142 of 175), which is typical for knee osteoarthritis. The mean weight at baseline was 102.4 ± 14.5 kg, corresponding to a BMI of 37.1 ± 4.4 kg/m². LBM was 50.6 ± 8.7 kg and fat mass was 46.6 ± 9.2 kg. The mean P-25-OH-vitamin D3 was 48.9 ± 20.1 nmol/l at baseline, with 84 (48%) participants having values lower than 50 nmol/l, the limit for insufficiency applied by the hospital laboratory. The mean value of B12 at baseline was 293.2 ± 120.1 pmol/l; 34 (19.4%) participants had values below the recommended level of 200 pmol/l, the threshold applied by the hospital laboratory. The mean ferritin was 117.1 ± 94.6 µg/l, with two (1.1%) participants having values lower than the threshold of 12 µg/l. The mean parathyroid hormone was 6.4 ± 2.2 pmol/l, with 55 (31.4%) participants having excessive values, that is, above 6.9 pmol/l. The mean BMC was 2780.7 ± 462.5 g and the mean BMD was 1.20 ± 0.09 g/cm². After the first 8 weeks, the participants had lost 12.0 kg (95% confidence interval (CI): 11.4-12.5 kg; P<0.0001) and showed statistically significant increases in all three micronutrients (see Appendix I). As illustrated in Figure 1, at week 16 the relative change from baseline in the group of 175 participants was in favor of the intensive weight loss program, with substantial improvements in vitamin D (31.3%) and vitamin B12 (14.9%) and a clinically relevant weight loss (13.7%), to a large extent because of loss of fat mass (23.6%) rather than LBM (3.6%). The participants had lost a mean of 14.0 kg (95% CI: 13.3-14.7 kg; P<0.0001). The BMI was reduced by 5.1 kg/m² (95% CI: 4.8-5.3 kg/m²; P<0.0001). Sixty-two participants (35.4%) had a BMI <30 kg/m² at week 16. The week-16 values are summarized in Table 2: PTH decreased significantly (P<0.0001), and the number of participants with too-high PTH values had fallen to 28 (16%).
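As a minimal sketch of the statistical plan above (one-sample paired t-tests at α = 0.05, with Spearman correlations as sensitivity analyses), assuming SciPy and statsmodels rather than the SAS package actually used; the small arrays are illustrative stand-ins for the per-patient measurements, and the final lines reproduce the prospective BMC power calculation reported in the Ancillary analyses below:

```python
import numpy as np
from scipy import stats
from statsmodels.stats.power import TTestPower

# Illustrative stand-ins for per-patient values (not trial data).
baseline = np.array([102.5, 98.0, 110.2, 95.4, 120.1])
week16   = np.array([ 88.1, 85.3,  97.0, 82.9, 104.6])

# H0: mean(week16 - baseline) = 0, one-sample paired t-test.
t, p = stats.ttest_rel(week16, baseline)

# Sensitivity: association between weight change and change in
# nutritional status at the individual-patient level.
d_weight    = week16 - baseline
d_vitamin_d = np.array([18.2, 12.5, 20.1, 9.8, 25.3])  # illustrative
rho, p_rho = stats.spearmanr(d_weight, d_vitamin_d)

# Prospective power for the BMC null result: common s.d. 475 g and
# correlation 0.95 give an s.d. of the paired difference of
# 475 * sqrt(2 * (1 - 0.95)) ~= 150 g, so the effect size for a
# 13.5 g mean difference is ~0.09.
sd_diff = 475 * np.sqrt(2 * (1 - 0.95))
power = TTestPower().power(effect_size=13.5 / sd_diff,
                           nobs=175, alpha=0.05,
                           alternative="two-sided")
print(round(power, 3))  # ~0.22, i.e., well below 80% power
```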
We did not find any change in BMC (−13.5 g (95% CI: −33.3 to 6.2 g; P = 0.18)). Being aware that BMC was a secondary outcome (amongst many), we cannot exclude the possibility that this finding may be due to a type-2 error (see Ancillary analyses). Finally, there was a statistically significant decrease in the bone area of 20.5 cm² (95% CI: −36.6 to −4.5 cm²; P = 0.013) and an increase in BMD of 0.004 g/cm² (95% CI: 0.001 to 0.008 g/cm²; P = 0.025). Ancillary analyses Spearman's correlation analyses were carried out to answer the question concerning the release of vitamin D bound in fat with weight loss, on the individual-patient rather than the group level, as fat is a known storage location for vitamin D. 13 A strong correlation between weight loss and vitamin D increase was found, whereas a lesser, but still statistically significant, correlation was seen between fat loss and vitamin D increase (see Appendix II). In order to prospectively explore whether it is reasonable to accept the null hypothesis that intensive weight loss does not change the BMC level, we performed a prospective power analysis under the assumption that 175 patients were in a new study (like the present): for a paired t-test of a normal mean difference with a two-sided significance level of 0.05, assuming from Table 1 a (conservative) common s.d. of 475 g and correlation r = 0.95, a sample size of 175 pairs has a power of 0.219 (that is, statistical power <80%) to detect a mean difference of 13.5 g. This is also supported by the width of the 95% CI, whose lower limit (−33.3 g) implies a potentially clinically relevant loss in BMC. In a study concerning vitamin D, one will always be aware of possible differences in sun exposure with the time of year in a Northern country. Looking at this aspect in our population, we carried out a post hoc analysis of the variability of vitamin D between the groups who started treatment between January and April and the groups who started treatment between May and August. No difference between the groups was found. DISCUSSION Our study showed that intensive weight loss achieved by use of a low-energy formula diet was accompanied by significant increases in vitamin D and B12 levels. This is striking, as nearly half of our participants had a deficiency of vitamin D at baseline and about one in five showed deficiency in vitamin B12. At week 16, the percentage of participants deficient in vitamin D and B12 had decreased significantly. The correction of these vitamin deficiencies may very likely have been due to the formula products given, as the products were enriched in both vitamin D and B12. However, some of the vitamin responsible for the improvements, that is, vitamin D, may have been liberated from fat tissue during the weight loss. 13 The participants lost, on average, 13.7% of their weight during this short period of 16 weeks, and as this weight loss was mainly due to fat loss from fat stores, these stores could have been a source of vitamin D far larger than that given in the supplement. 14 The increase in vitamin D was paralleled by a decrease in PTH (Pearson's correlation coefficient r = 0.21; P<0.01). One may speculate whether this influenced the unchanged BMC and even the increased BMD during the program, a most interesting finding. Measurement of BMD by DXA is the most widely used surrogate marker of bone status. However, using BMD to determine a response to therapy may take 1-2 years. 15
From other weight-loss studies, however, changes in BMC and BMD have been seen already after 3 months. 16 Our results are in disagreement with earlier studies of other weight loss programs, which led to decreased BMC and BMD and accelerated bone turnover. 16-19 In a calorie restriction study by Redman et al. (2008), the participants were offered diets providing the recommended daily intake of all essential vitamins and minerals, and the participants did not experience any negative effect on bone status with weight loss. 20 This supports the idea that, by making sure the diet applied includes all essential nutrients (like vitamin D and calcium), it is possible to minimize or prevent loss from muscle and bone. The formula diet program provided at least 100% of the daily recommended intake of calcium and vitamin D for adults between 18 and 65 years of age at the time the study was conducted. The Danish guidelines recommended a daily intake of 5 µg of vitamin D and 800 mg of calcium. It has previously been shown that diets high in calcium or dairy products can suppress bone resorption. 21,22 As calcium absorption is dependent on the presence of 1,25(OH)2 vitamin D, the requirement for calcium can only be meaningfully discussed if the vitamin D status is sufficient. The optimal vitamin D intake is not known, and there is evidence suggesting that the present recommended intake is actually inadequate and needs to be increased. 23 However, based on our data we can conclude that the intake of both vitamin D and calcium was sufficient to cause an increase in BMD and to prevent loss of BMC. With their weight loss, >60% of the participants experienced clinically significant improvements in pain and disability. 11 This could also have caused an increase in physical activity and, if this was the case, increased physical activity could explain the relatively low loss of LBM as well as the low loss of bone observed in our participants. The participants were, though, not advised to change their physical activity pattern during the study, but to stick to their usual routines. Measurements of BMC and BMD by dual energy X-ray absorptiometry are known to be affected, although to a minor degree, by layers of excessive fat. 24 There are also differences between the dual energy X-ray absorptiometry scanners used. In general, Hologic scanner measurements show an increase in BMD whereas Lunar scanner measurements show a decrease in BMD with weight loss. 24 In our study, we used a Lunar scanner (GE Medical Systems). Despite this, we found an increase in BMD with weight loss. We must therefore conclude that this observation is real and that, if anything, the increase in BMD is measured as too low. In this study we measured the blood levels of the micronutrients we expected to be of clinical importance in relation to our study population. However, it would have been interesting to measure changes in a wider panel of micronutrients in connection with this type of weight loss program, as it is well known that obesity is often accompanied by low micronutrient status. General implications Obesity may be associated with nutrient deficiencies, and the average overweight subject may suffer from a nutritionally inadequate diet. When trying to lose weight by consuming less food, individuals may unwittingly reduce essential nutrient intake even further.
This creates an important role for nutrient-dense foods like formula diets, which allow adequate intake of macro- and micronutrients while still providing smaller amounts of energy. Given the growing rate of obesity, it is important for subjects deciding to reduce their energy intake to maintain a nutritionally sound diet providing adequate vitamins, minerals and macronutrients. Our data suggest that weight loss can be achieved effectively and safely with low-energy formula products, as long as the diet contains a sufficient amount of nutrients. This is supported by the Look AHEAD study, where the number of meal replacements consumed in the first 6 months was significantly related to weight loss at week 26 (r = 0.32, P<0.001), as was the total number consumed over the year to weight loss at week 52 (r = 0.30, P<0.001). 25 As our obese patients need support to keep to a healthy diet, the formula diet may remove the burden of having to deliberately choose the low-fat healthy option at each meal time, and replacement of one or two meals a day with the low-calorie formula diet may be the 'medicine' that helps the patients to keep their micronutrients at acceptable levels as well as maintain the obtained weight loss on a more permanent basis. CONFLICT OF INTEREST AR Leeds is employed as medical director of the Cambridge Manufacturing Company (Cambridge Weight Plan). Pia Christensen, Henning Bliddal, Birgit Falk Riecke, Robin Christensen and Arne Astrup received travel grants to attend scientific meetings from the Cambridge Manufacturing Company.
2014-10-01T00:00:00.000Z
2011-12-21T00:00:00.000
{ "year": 2011, "sha1": "f3c7b99df2fd61709d0bb0337208215de0e1e47f", "oa_license": "CCBYNCND", "oa_url": "https://www.nature.com/articles/ejcn2011201.pdf", "oa_status": "HYBRID", "pdf_src": "PubMedCentral", "pdf_hash": "f3c7b99df2fd61709d0bb0337208215de0e1e47f", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
233829273
pes2o/s2orc
v3-fos-license
Dual-mode dual-band bandpass filter design utilising cylindrical TM-mode cavities This letter presents the design of a novel dual-mode dual-band bandpass filter that utilises the TM210 and TM020 modes of cylindrical cavity resonators for millimetre-wave operation when fed with standard WR-10 waveguide ports. In this manner, the two selected modes of the cylindrical cavity resonators are demonstrated as a single propagation path for fixed dual-passband responses without the need for cavity perturbations or tuning screws. The passbands are designated for centre frequencies at approximately 102.8 and 110.9 GHz and exhibit a four-pole Chebyshev characteristic in each of the passbands, which are separated by a transmission zero. Simulated and measured results of the prototype are presented to verify the design. Introduction: With an ever-increasing demand on wireless communication systems, methods for increasing the capacity of satellite and terrestrial communications systems have required successive advancements in design schemes as well as novel technologies. To overcome many of the stringent requirements imposed by manufacturers, multi-band systems have been proposed as a viable solution because of their inherent compact size and lowered material cost. Although these multi-band systems have been demonstrated in a variety of technologies, standard rectangular and circular waveguide technologies have been at the forefront of high-frequency applications due to superior characteristics such as high quality factor, low loss and high power handling [1-4]. As trends continue toward the allocation of high-frequency bands well into the terahertz and sub-terahertz regions, multi-band designs depend on continuous filter developments in order to meet these demands. To the best of the authors' knowledge, only a few dual-band bandpass filters (DBBPFs) have been demonstrated in the WR-3 and WR-10 bands [5-9]. Each of these designs has demonstrated notable results by taking advantage of multiple paths through the waveguide or by splitting a broad passband into dual sub-bands. In this letter, we seek to demonstrate a novel dual-mode dual-band filter based on the concepts introduced in [10-12], which exploits the TM210 and TM020 modes in each of the filter's resonator cavities. In this manner, a DBBPF composed of unperturbed cylindrical cavity resonators is presented in single-path operation by taking advantage of the passband locations determined by the resonance of each mode; the two distinct modes share the same resonator geometry and, therefore, provide predictable and fixed passband locations that can be exploited in high-frequency designs where tuning mechanisms become difficult or impractical to implement. To this end, the design demonstrates a four-pole Chebyshev filtering characteristic in the upper W-band and lower D-band through a dual-mode resonator path, which is fed by standard WR-10 waveguide ports. The prototype is designed and manufactured for centre frequencies at approximately 102.8 and 110.9 GHz, effectively taking advantage of the larger dimensions to support both frequencies of operation. Along with maintaining narrow passband bandwidths of approximately 1%, the use of higher-mode resonators in waveguide technology allows a low insertion loss to be obtained in each of the passbands.
Filter design: For the design of the filter structure, cylindrical cavities are selected for their TM-mode properties and are connected to rectangular waveguide input/output sections. Many other designs with similar interconnecting waveguide structures have demonstrated good results in the literature for single- or multi-band use in this manner, several examples being [2-4, 11-19], where, in contrast to most, this filter utilises cylindrical resonators to create dual passbands within common resonator dimensions without the need for tuning screws or perturbations within the cavities. The use of these types of larger cavities is favourable not only for their higher quality factor, but also for their larger and less restrictive dimensions during the milling procedure. For the design of a cylindrical cavity resonator, the resonant frequencies and initial dimensions can be found from the standard expression of [20] for each mode,

$$f_{nml} \;=\; \frac{c}{2\pi\sqrt{\mu_r \varepsilon_r}} \sqrt{\left(\frac{p_{nm}}{a}\right)^{2} + \left(\frac{l\pi}{d}\right)^{2}}$$

where c is the speed of light, μr is the relative permeability, εr is the relative permittivity, n, m and l are the mode numbers, p_nm is a tabulated coefficient determined from [20] (for TM modes, the m-th root of the Bessel function J_n), and a and d are the radius and height of the cavity, respectively. Modelling of the cavity in CST Microwave Studio's eigenmode solver helps to discern the desirable field distributions for possible filter operation. For the case at hand, a cavity with a radius of 2.35 mm and a height of 1.27 mm is selected for its TM210 and TM020 mode properties. As the selected TM modes depend only on the radius of the cylindrical cavities, the height of 1.27 mm was selected to match the milling depth of standard WR-10 waveguides. Figure 1 depicts the electric field distributions of both of these modes within the desired cavity. In order to utilise the cylindrical resonators in a higher-order design, we cascade the filter in the same manner as [10-12] by utilising a non-resonating node (NRN) section as an interconnect between the second and third resonators. Figure 2 demonstrates the topology of the dual-mode paths through the filter, which comprises a dual NRN section where each of the modes shares a quarter-wavelength inverter path. This, in turn, also affects the definition of the coupling matrix, which must be extended to handle NRNs in the diagonal entries M33 and M44, as discussed by Amari and Rosenberg [10]. The quarter-wavelength inverter between the NRNs is set to unity (M34 = 1) for convenience, in the same manner as [10-12]. This section is first defined for a centre frequency of 106.85 GHz (the centre of the two passbands). Since there are loading effects from the cylindrical resonators on the NRN section, the slot is extended and, therefore, acts as a quarter-wave inverter section for each of the passbands. A 3-D view and the corresponding dimensions of the filter are shown in Figure 3. A general coupling matrix (2) can be formulated as presented in [21], where the coupling coefficient k takes the form of (3), the frequency transformation that of (4), and the external quality factor that of (5); here f1, f2, f3 and f4 are the resonant peaks of two coupled dual-mode dual-band resonators, Qe = 43.12, γ = 57, ω1 = 2π · 102.82 GHz, ω2 = 2π · 110.9 GHz, ωm = 2π · 107.337 GHz, and Qen for n = 1, 2 are the external quality factors of each mode. Figure 4 demonstrates the effect of changing the iris dimensions I1 and I2, as defined in Figure 3, on the coupling parameters of (2).
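As a quick cross-check of the mode selection above, the closed-form expression can be evaluated directly; this is an illustrative sketch (the letter itself used CST's eigenmode solver), and the small offsets from the measured passbands are expected, since the unloaded formula ignores the coupling irises and slots:

```python
# Unloaded TM210 and TM020 resonant frequencies for the a = 2.35 mm
# cavity, using the m-th roots p_nm of the Bessel function J_n.
# Since l = 0 for both modes, the cavity height d drops out.
import numpy as np
from scipy.special import jn_zeros

c = 299_792_458.0          # speed of light (m/s)
a = 2.35e-3                # cavity radius (m); air-filled: mu_r = eps_r = 1

def f_tm(n, m, l=0, d=1.27e-3):
    p_nm = jn_zeros(n, m)[-1]              # m-th root of J_n
    return (c / (2 * np.pi)) * np.sqrt((p_nm / a) ** 2
                                       + (l * np.pi / d) ** 2)

print(f"TM210: {f_tm(2, 1) / 1e9:.1f} GHz")   # ~104.3 GHz
print(f"TM020: {f_tm(0, 2) / 1e9:.1f} GHz")   # ~112.1 GHz
# Both sit slightly above the measured 102.8/110.9 GHz passbands;
# the coupling structures load the cavities downward in frequency.
```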
A comparison is made in Figure 5 between the lossless simulated results and the coupling matrix profile of (2) over the range 98-116 GHz. The lossless simulated response of each passband demonstrates a return loss better than 20 dB, with corresponding bandwidths of approximately 1%. The centre frequency of the first band is located at approximately 102.8 GHz, the upper end of the W-band, while the second band is located at the lowest end of the D-band and centred at approximately 110.9 GHz. Simulation of the TM210 and TM020 modes within the filter is depicted in Figure 6. These images serve as a visual representation of the electric field interactions (in magnitude) throughout the structure. Manufacturing and results: For the manufacture of the structure in waveguide technology, the filter is split into five separate blocks to be milled by CNC (computer numerical control). The dimensions of the cascaded structure are defined in Figure 3(b), while the milling radius is designated as 0.4 mm. It can be noted that the use of the WR-10 waveguide input/output ports allows us to designate a passband response in the lower D-band while still taking advantage of the WR-10 waveguide's larger and less restrictive dimensions. Although this technique is well known in industry, it remains a suitable method of overcoming manufacturing issues in very high frequency components. Brass has been selected as the cutting material due to its machinability and final surface finish. Figure 7 depicts the manufactured pieces before final assembly. The brass component shown in the centre of Figure 7 houses each of the four main resonator cavities on either side of the structure, while the NRN section is milled through the block to connect resonators 2 and 3. The other brass pieces act to enclose the resonator cavities of the centre brass section as well as house the input/output waveguides and their associated irises. Once assembled, the filter is tested using a Rohde & Schwarz ZVA67 with W-band up-converters. Figure 8 presents a comparison of the simulated and measured results of the WR-10 dual-mode dual-band filter over 98-116 GHz. This direct comparison demonstrates good measured results over the entire frequency region of interest. The measured return loss is better than 20 dB in both the first and second passbands. A small shift in centre frequency can be observed, which has pushed both passbands to slightly lower frequencies. The simulated insertion losses at the centre frequencies of the lower and upper passbands are better than 0.96 and 1.12 dB, respectively, when the conductivity of brass is taken as 1.59e+07 S/m. The measured insertion loss values reach approximately 1.57 and 1.8 dB for the lower and upper passbands, respectively, i.e., less than 1 dB of additional loss at each passband centre compared with the simulated results. Although the simulated conductivity of brass is viewed as an overestimate, additional losses can be attributed to the as-milled surface roughness as well as any misalignment or gaps between each of the five brass parts after assembly. Table 1 is provided as a general comparison of dual-band WR-10 waveguide filters proposed in the literature. Although this design utilises fixed frequencies based on the modes of a shared resonator size and path, the measured results are quite similar to the achievements presented in [8] and [9].
Conclusion: A new dual-mode dual-band bandpass filter that utilises the TM210 and TM020 modes of cylindrical resonators has been presented for operation in the upper W-band and lower D-band, where the shared resonator size and the selected modes allow the designer to use fixed and predictable centre-frequency locations. A discussion of the design approach and coupling matrix profile has been presented. The prototype has been manufactured as five separate brass pieces and tested in the laboratory. Measurements of the filter have shown a return loss better than 20 dB and an insertion loss better than 1.8 dB in each of the passbands. The measured results agree well with the proposed filter simulations, thus allowing the design approach to be verified. A table outlining the existing dual-band WR-10 waveguide filters in the literature has been presented for comparison of general characteristics. This work provides a progressive step toward the implementation of higher-order cylindrical cavities at millimetre-wave frequencies without the use of tuning screws or cavity perturbations.
2021-05-07T00:03:11.070Z
2021-03-05T00:00:00.000
{ "year": 2021, "sha1": "1825ef2715d455963524104b544c13db349a315d", "oa_license": "CCBY", "oa_url": "https://onlinelibrary.wiley.com/doi/pdfdirect/10.1049/ell2.12127", "oa_status": "GOLD", "pdf_src": "Adhoc", "pdf_hash": "2a6f2490805bf39e704e4aee48749ca8fd9a5ef4", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Materials Science" ] }
3319666
pes2o/s2orc
v3-fos-license
Visible to near-IR fluorescence from single-digit detonation nanodiamonds: excitation wavelength and pH dependence Detonation nanodiamonds are of vital significance to many areas of science and technology. However, their fluorescence properties have rarely been explored for applications and remain poorly understood. We demonstrate significant fluorescence from the visible to near-infrared spectral regions from deaggregated, single-digit detonation nanodiamonds dispersed in water, produced via post-synthesis oxidation. The excitation wavelength dependence of this fluorescence is analyzed in the spectral region from 400 nm to 700 nm, as well as the particles' absorption characteristics. We report a strong pH dependence of the fluorescence and compare our results to the pH-dependent fluorescence of aromatic hydrocarbons. Our results significantly contribute to the current understanding of the fluorescence of carbon-based nanomaterials in general and detonation nanodiamonds in particular. Schematic drawing of the custom-built in-solution fluorescence spectroscopy setup. Experimental parameters used for data acquisition with the above setup: pulse repetition rate 80 MHz (fluorescence spectra) / 10 MHz (fluorescence decay); spectrometer/CCD camera: 5-pixel binning, 10 s integration time. Table S1. Nanoparticle (NP) solution, HCl and NaOH solutions used to prepare the samples investigated in this study. Sample 4 (no HCl or NaOH) was used to investigate the excitation wavelength dependence shown in Figures 2 and 3 in the main text. Sample 4: pH 6.1; sample 5: pH 7.7; sample 6: pH 9.7; sample 7: pH 10.5; sample 8: pH 11.8; sample 9: pH 12.7. Figure S3. Energy-dispersive X-ray spectroscopy (EDS) results for DND particles show carbon (Kα at 0.277 keV) to be the predominant element in our sample. The sample also contains significant amounts of oxygen (Kα at 0.525 keV), and we find trace amounts of Cu, Si, Zr and Ca. Particles were deposited on a holey carbon TEM grid. Figure S4. Electron energy loss spectroscopy (EELS) results for the DND particles compared to glassy carbon (GC), used to determine the sp² and sp³ carbon content in our samples. A: The low-loss EELS spectra for DND and GC. The dominant feature is the plasmon peak, which is a measure of the effective density of the material, assuming a free-electron-gas model. For example, GC has a plasmon peak at 22.5 eV, which equates to a density of 1.54 g cm⁻³. The plasmon peak of DND, however, consists of two distinct components with peaks at 22.3 eV (1.51 g cm⁻³) and 34 eV (3.51 g cm⁻³). This suggests the presence of diamond as well as graphitic material in our sample. B: Ionization K-edge spectra of the same samples. The presence of both sp²- and sp³-bonded carbon is also evident in the ionization K-edge spectra, where the π* transition is minimal for DND compared to GC and shows that about 18% of the carbon bonds in our sample are sp²-hybridised and 82% sp³-bonded, as calculated below. Acquisition and analysis of EELS spectra Electron energy loss spectra (EELS) were collected on a JEOL 2100F TEM operating at 200 keV with a Gatan Imaging Filter (GIF Tridium) in imaging mode. The carbon K-edge and low-loss plasmon spectra were both acquired. The K-edge spectra were processed by removing the inherent background and the contribution due to multiple scattering. To obtain the sp² fraction, the 1s→π* feature was fitted using a Gaussian distribution and its intensity was compared to the intensity of (1s→π*) + (1s→σ*).
This ratio was compared to the K-edge spectra collected from a glassy carbon sample, which is 100% sp², according to the formula

$$\frac{\mathrm{sp}^2}{\mathrm{sp}^2+\mathrm{sp}^3} \;=\; \frac{I_{\pi^*}^{\mathrm{S}}\,/\,I_{\Delta E}^{\mathrm{S}}}{I_{\pi^*}^{\mathrm{GC}}\,/\,I_{\Delta E}^{\mathrm{GC}}}$$

where I_π*^S is the integral under the 1s→π* feature of the sample, I_ΔE^S is the integral under the (1s→π*) + (1s→σ*) features of the sample, I_π*^GC is the integral under the 1s→π* feature of glassy carbon, and I_ΔE^GC is the integral under the (1s→π*) + (1s→σ*) features of glassy carbon. Figure S5. Raw fluorescence spectra for DND samples in water at neutral pH (colored lines) compared to water only (black line) for all excitation wavelengths as indicated in the graphs. (Cf. Figure 3B in the main text.) B: Fluorescence decay traces for the different excitation wavelengths as indicated in the graph. The long fluorescence lifetime component Tau2 was determined by fitting a single exponential to the decay traces, as shown in the graph (black lines). We find the lifetimes determined this way to vary by ±0.35 ns depending on the exact region used for fitting, which is reflected in the error bars shown in Figure 3C in the main text. The same approach was used for the analysis of the pH-dependent results shown in Figure 5 in the main text. (Legend: pH 3.7, 4.5, 5.4, 6.2, 7.7, 9.7, 10.5, 11.8; IRF.) Figure S8. A: Integrated fluorescence intensity as a function of time. A DND nanoparticle solution (300 µL, 1.33 mg/mL) was excited with 500 nm light and the fluorescence collected with a spectrometer at 5 frames per second. An aqueous solution of HCl (100 µL, 1 mM) was added at time t = 0 seconds. The fluorescence decreases by 75% within less than 0.5 s and remains stable thereafter. This decrease is caused by a dilution of the starting solution by 33% as well as by the decrease in pH, resulting in a decrease of fluorescence in agreement with Figure 5B in the main text. Nanoparticle aggregation is a diffusion-limited process. The mean displacement of a 5 nm spherical particle due to Brownian motion after 500 ms is below 1 nm, which makes a collision with another particle (a prerequisite for aggregation to occur) within this timeframe highly improbable in our nanoparticle solutions. B: The same data as in A, but zoomed into the region where the HCl addition occurs. C: The same experiment as in A, but using NaCl instead of HCl. Here, the intensity decreases due to the dilution of the solution, but only by around 23% instead of the expected 33% dilution. This is likely caused by incomplete mixing, which can be difficult to achieve in these measurements. D: Fluorescence spectra of DND particles dispersed in water (black line) and in 250 µM NaCl (green line), which is the final HCl concentration used in panel A and the NaCl concentration used in panel C. The spectra were measured ~30 s after the addition of either water or NaCl. In the presence of salt the spectrum shows a slight red-shift, which is typical for partially aggregated particles. Overall, the difference in fluorescence intensity is <1%, which is within the experimental error. Figure S11. Absorption spectrum of DND particles in water (0.04 mg/mL) for the spectral range from 250 nm to 800 nm. The data was acquired using a Cary 700 absorption spectrometer (Agilent Technologies) and an integrating sphere. Calculation of the Debye length The Debye length λD is commonly defined by the equation 1

$$\lambda_D \;=\; \sqrt{\frac{\varepsilon_0 \varepsilon_r k_B T}{\sum_i c_i e^{2} z_i^{2}}}$$

where ε0 is the permittivity of free space, εr the dielectric constant of the solvent, kBT the thermal energy, ci the ionic concentration (number density) of the i-th ion species in solution, e the elementary charge and zi the valency of the i-th ion species.
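The Debye length formula can be evaluated numerically; a minimal sketch (not from the supplement) for the 250 µM NaCl condition of Figure S8, treating NaCl as a fully dissociated 1:1 electrolyte in water:

```python
# Debye length for 250 uM NaCl in water at room temperature.
import numpy as np

eps0 = 8.854e-12          # F/m, vacuum permittivity
eps_r = 78.4              # dielectric constant of water (~25 C)
kT = 4.11e-21             # J, thermal energy (value used in the text)
e = 1.602e-19             # C, elementary charge
NA = 6.022e23             # 1/mol, Avogadro's number

c_molar = 250e-6                    # 250 uM NaCl
n = c_molar * 1e3 * NA              # ions per m^3, per species
ionic_sum = 2 * n * e**2 * 1**2     # z = +1 and z = -1 species

debye = np.sqrt(eps0 * eps_r * kT / ionic_sum)
print(f"{debye * 1e9:.0f} nm")      # ~19 nm
```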
The Debye length is a characteristic length for the range of the electrostatic potential into the solvent. Calculation of the mean square displacement The mean square displacement ⟨X²⟩ of a particle in 3 dimensions was estimated using the equation ⟨X²⟩ = 6Dt, where D is the Stokes-Einstein diffusion coefficient and t is time. D was calculated using the following equation and parameters:

$$D \;=\; \frac{k_B T}{6 \pi \eta r}$$

with kBT = 4.11×10⁻²¹ J (thermal energy), η = 8.94×10⁻⁴ kg m⁻¹ s⁻¹ (viscosity of water) and r = 2.5 nm (particle radius). For a diffusion time of 1 second and a particle size of 5 nm (most particles are larger than this, so this is an upper bound for X), the mean square displacement is ~0.3 nm. Estimation of the average particle separation in solution We have used the Wigner-Seitz radius to estimate the average separation of DND particles dispersed in water at a concentration of 3 µM, or ~1.8×10¹⁸ particles per liter (1 dm³). The Wigner-Seitz radius in 3D is given by

$$R_s \;=\; \left(\frac{3V}{4\pi N}\right)^{1/3}$$

where V is the volume of the solvent and N the number of particles. Using the values given above one obtains Rs = 51 nm. The nearest-neighbor distance would thus be 102 nm (2Rs). This value is more than two orders of magnitude larger than the mean square displacement of 5 nm particles after diffusing for 1 second in water. Determination of the relative fluorescence quantum yield The quantum yield (Φ) was determined using the equation

$$\Phi_{\mathrm{DND}} \;=\; \Phi_{\mathrm{F}} \cdot \frac{b_{\mathrm{DND}}}{b_{\mathrm{F}}}$$

where ΦDND and ΦF are the quantum yields of DND and fluorescein, respectively, and bDND and bF are the corresponding gradients b of the linear fits to the data shown in Figure S12 C and F. The quantum yield of fluorescein in 10 mM NaOH of ΦF = 0.93 was used, as reported by Kubista et al. 2 This yields a value of ΦDND = 0.22% for the quantum yield of the DND particles.
2018-04-03T00:31:46.739Z
2018-02-06T00:00:00.000
{ "year": 2018, "sha1": "3acd447319118427b7e83a2bd673ea8309ca7945", "oa_license": "CCBY", "oa_url": "https://www.nature.com/articles/s41598-018-20905-0.pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "3acd447319118427b7e83a2bd673ea8309ca7945", "s2fieldsofstudy": [ "Chemistry", "Materials Science" ], "extfieldsofstudy": [ "Medicine", "Materials Science" ] }
237592179
pes2o/s2orc
v3-fos-license
Global Thrombosis Test: Occlusion by Coagulation or SIPA? The global thrombosis test (GTT) is a point-of-care device that tests thrombotic and thrombolytic status. The device exposes flowing whole blood to a combination of both high and low shear stress past and between ball bearings, potentially causing thrombin and fibrin formation. The question arises as to whether thrombosis in the GTT is dominated by coagulation-triggered red clot or high-shear-induced white clot. We investigated the nature of the thrombus formed in the GTT, the device efficacy, human factors use, and limitations. The GTT formed clots that were histologically fibrin-rich with trapped red blood cells. The occlusion time (OT) was more consistent with coagulation than with high-shear white clot and was strongly lengthened by heparin and citrate, two common anticoagulants. The clot was lysed by tissue plasminogen activator (tPA), also consistent with a fibrin-rich red clot. Changing the bead to a collagen-coated surface and eliminating the low shear zone between the beads induced a rapid OT consistent with a platelet-rich thrombus that was relatively resistant to heparin or tPA. The evidence points to the GTT as occluding primarily due to fibrin-rich red clot from coagulation rather than the high-shear platelet aggregation and occlusion associated with arterial thrombosis. Introduction The global thrombosis test (GTT, Montrose Diagnostics, London, United Kingdom) is described as the first physiologically relevant point-of-care test to assess the risk of thrombosis or bleeding, or to monitor antiplatelet medication, without citrate. 1,2 The GTT's flow scenario induces an initial high-shear-stress stimulus followed by a low-shear-stress portion, using a double ball-bearing system located inside a conical test tube. Each bead has gaps between its surface and the inner tube wall. When blood is added to the tube, it flows through the gaps past the ball bearings, into large spaces above and between the beads, and the droplets are collected in a reservoir. Occlusive thrombus can be formed along a continuum that depends on a variety of factors and flow conditions. Virchow, in 1856, described a triad required for coagulation-based red thrombus: stasis (low-shear-rate conditions), a non-endothelial surface, and hypercoagulable blood. 3 This triad stimulates the coagulation cascade via the intrinsic pathway to form fibrin with trapped red blood cells (35-60%), sometimes called a red clot because of its appearance. 3-7 Coagulation is strongly blocked by the anticoagulants heparin and citrate. Dissolution of these fibrin-rich red clots can be triggered by tissue plasminogen activator (tPA), which breaks down fibrin.
In contrast, occlusive arterial thrombi, which form under high-shear-rate conditions, can form over collagen exposed after plaque rupture; the collagen captures von Willebrand factor (vWF) that, in turn, aggregates platelets to form a white clot. 8,9 In the case of arterial thrombosis, a stenotic atherosclerotic plaque is responsible for creating pathologically high wall shear rates ranging from 5,000 to 100,000 1/s, much higher than the typical wall shear rate of <1,000 1/s found in normal arteries. 10,11 When a plaque cap ruptures, prothrombogenic collagen from the extracellular matrix is exposed. The exposed collagen surface binds vWF, and subsequent platelet adhesion and shear-induced platelet aggregation (SIPA) occur. The formation of an occlusive white clot occurs in less than 5 minutes at microfluidic dimensions. Histology studies estimate white clots to be approximately 50 to 80% platelets by volume, 12 with smaller amounts of fibrin. These red and white thrombi are morphologically different and created by different factors, so they can be distinguished experimentally by different methods, including Carstairs staining, 13 in which the different components react to give different colors. Platelet function tests can be designed to measure blood samples for their propensity to form thrombi by these two mechanisms, since venous thrombi are associated with low shear and arterial thrombi are formed under high-shear conditions. The GTT, as a point-of-care test, purports to be 'the first, pathologically relevant, point-of-care test of thrombotic and thrombolytic status.' 2 It has been suggested that shear-activated platelet-derived procoagulant activity plays a crucial role in thrombus formation, 14,15 but no experiments have been reported to reveal the nature of the thrombi formed within the test section of the GTT. Beyond the formation of occlusion in the device, the GTT may detect endogenous thrombolytic activity, which may be another major determinant of hemostasis. 16 Correlations between 'lysis time' (LT) values and major adverse cardiovascular events (MACEs) have been observed, but not between 'occlusion time' (OT) and MACE. 17 The GTT flow system is a simple and clever tool that appears to have both low- and high-shear zones. We investigate whether the GTT creates occlusions (OT) from low-shear coagulation or high-shear platelet aggregation, and whether LT represents fibrinolysis with restoration of blood flow. We then modify the GTT to create a different type of thrombus. Materials and Methods To evaluate the type of thrombi formed in the GTT, we examined the histological appearance of thrombi formed in situ by Carstairs staining, 13 compared OT against expected clotting times for SIPA versus coagulation, tested the sensitivity of OT to anticoagulants, and assessed the reaction of the system to tPA (Sigma Aldrich, United States).
Carstairs staining clearly differentiates coagulation clots, which stain red, from SIPA clots, which stain blue. 12,13,18 We then modified the surface characteristics and shear rate zones in an attempt to identify alternative mechanisms of thrombosis. This series of tests was used to distinguish thrombosis in the GTT as being primarily from coagulation or SIPA. The GTT device has a main unit where all the electronic equipment is located. The measured quantities of the tests are stored on an SD card and the results displayed on a screen on the front of the device. The second part is a disposable cartridge consisting of a test tube with a conical taper at the bottom trapping two beads aligned vertically in series. Gaps are formed between the spherical beads' surfaces and the molded inner wall of the test tube. The device used in this study is the latest GTT-3 model, which can be operated as GTT-2 or GTT-3. GTT-2 records OT and LT, while GTT-3 can additionally assess 'thrombus stability' time and the rate of thrombolysis by applying external pressure. Unless otherwise stated, we operated our instrument in the GTT-2 mode, where blood flow is driven through the test tube by gravity. As thrombus forms, blood flow is gradually reduced. The instrument detects the time interval, d, between two consecutive blood drops falling into a reservoir by means of a light sensor (►Fig. 1); d increases with time as the flow rate decreases. When d ≥ 15 seconds, the device reports the elapsed time, displayed in seconds, as OT. OT has a maximum value of 900 seconds. After OT, a period follows (typically set to 300 seconds, described by the manufacturer as a 'thrombus stabilization period'). The first drop of blood detected by the photosensor after this 'stabilization period' indicates the beginning of spontaneous thrombolysis, according to the manufacturer. The total time from the beginning of the test until this point is called T1. The lysis time (LT) is defined as LT = T1 − OT. If lysis does not occur within 6,000 seconds after OT (the LT cutoff time), 'no lysis' is displayed. 1 We first examined the beads and the test section dimensions and material. The manufacturer does not provide details as to the composition of or surface coating on the beads. We scanned the beads in the tube using micro-computed tomography (micro-CT; Scanco µCT50) to visualize the size and number of gaps between the beads and the test tube. To run the device, 4 mL of blood from a syringe is injected into the test tube by hand within 3 to 5 seconds. Since the procedure was performed manually, we scrutinized whether results were user-dependent and evaluated their variability. Once the blood was injected, it flowed by gravity, passing through the gaps. The time interval between two drops is recorded onto the SD card and used to quantify the OT and LT of the sample with the standard parameters set up in the device for the GTT-2 mode. Shear rate in the space between the beads and the inner wall of the test section was estimated by approximating each gap as a rectangle with a length L (length of the gap arc) and width w. The shear rate in the gap was estimated using the following equation for 2D channel flow:

$$\dot{\gamma} \;=\; \frac{6Q}{L w^{2}} \qquad (1)$$

where the flow rate (Q) was estimated experimentally by measuring the mass rate and assuming a blood density of 1.06 g/mL. 19
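A short numerical sketch of Eq. (1) follows (not the authors' code); the per-gap dimensions below are illustrative placeholders, since the measured gap geometry is given in Table 1:

```python
# Wall shear rate for 2D (parallel-plate) channel flow, Eq. (1):
# gamma_dot = 6 Q / (L w^2).

def wall_shear_rate(q_m3s, length_m, width_m):
    """Wall shear rate (1/s) for flow q through an L x w slot."""
    return 6.0 * q_m3s / (length_m * width_m**2)

mass_rate = 4e-6          # kg/s (4 mg/s total, as measured)
rho = 1060.0              # kg/m^3, blood density
q_total = mass_rate / rho # ~3.8e-9 m^3/s, i.e. ~3.8 uL/s

# Hypothetical single gap: arc length 1.0 mm, width 0.1 mm.
print(f"{wall_shear_rate(q_total, 1.0e-3, 1.0e-4):.0f} 1/s")
# ~2300 1/s for these assumed dimensions, the same order as the
# ~3,500 1/s average quoted below for the original 4 mm bead gap.
```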
The flow shear rate generated in the gaps was stated by the manufacturer to be 4,000 to 12,000 1/s, without further evidence. Using our shear rate estimated from Eq. (1) and the initial geometrical characteristics of the channels, we used the predictive model of SIPA thrombus formation developed by Mehrabadi et al 20 to predict the OT from SIPA in the gap. Once clots were formed and OT was reached, histology of the retrieved clots allowed us to identify the composition of the thrombus formed in the test tube. We harvested the clots at and after OT and used Carstairs' stain 13,21,22 to identify erythrocyte, platelet, and fibrin content. Blood was drawn from healthy volunteers at the Georgia Tech Stamps Health Laboratory (IRB Protocol Number: H18238) using 21-gauge butterfly needles and syringes, without or with anticoagulants. The preloaded anticoagulants used were heparin sodium or sodium citrate (Sigma Aldrich, United States). Blood samples were collected under the following conditions: 4 mL of fresh blood without any anticoagulant, tested within 15 seconds of drawing; 20 mL of blood heparinized at 3.5 USP units/mL; or 18 mL of blood treated with 2 mL of citrate solution (3.2 wt% in 0.9% saline). The anticoagulated blood was stored at room temperature on a shaker and tested within 3 hours. OT and LT were measured for the blood samples. For separate samples, LT was measured for blood treated with phosphate-buffered saline (PBS, Sigma Aldrich, United States) enriched with 50 nM tPA (study) or with PBS only (control). We hypothesized that to produce SIPA, there should be a surface for vWF and platelets to attach to. 18,23-25 Fibrillar collagen is a known, potent surface for SIPA. 26 We modified the original GTT device, replacing the two ceramic beads with one glass bead of 4 mm diameter (soda-lime glass, Sigma Aldrich) that we coated with collagen (fibrillar type I collagen, Sigma Aldrich, United States) as a prothrombotic surface. We call the GTT with this modification included the 'modified GTT' or mGTT. Finally, we compared the OT and LT in the mGTT to our microfluidic thrombosis assay (MTA) representing an arterial stenosis, as described by Griffin et al. 27 The MTA incorporates the major factors for arterial thrombosis, including fibrillar collagen and a well-defined high-shear zone, to create a predictive model of SIPA that has been validated against clinical arterial thrombosis. Occlusion Time The OT obtained for fresh human blood in our GTT device was 526 ± 188 seconds, with an intra-assay variation of 15% (n = 7) and an inter-individual variation of 36% (n = 7). The mean value for OT is consistent with several reference articles, but not with others (►Table 2). Clot Histology Although blood clots were expected to appear at the gap between the beads and the wall, 28 we did not see clots at this location at the time of OT. It could be that clots were too small to be visible to the naked eye. Clots were clearly visible after OT, between 500 and 1,000 seconds. To observe the time course of clot growth, we tested identical blood samples and rinsed the tube with PBS at different times after the OT to remove all blood components except those forming the clot. The blood clot was not visible at an OT of 415 seconds but was visible at 800 seconds between both the 3 and 4 mm ceramic beads. The clot attached to the beads was located not only between the two beads but also above the 4 mm bead. We harvested those clots and used Carstairs' stain to visualize fibrin, red blood cells, and platelets.
The histology of the GTT clot was predominantly fibrin (red) without platelets (►Fig. 2). Some light blue tinge appears close to all artificial surfaces. Dark blue concentrations could be observed at the extensions (presumably the gaps), with more platelets at the 3 mm bead. We could not distinguish from histology whether flow occlusion was predominantly from clots at the gaps or from the large clot forming between the beads.

[Figure caption: Occlusion happens due to several stages of activation and aggregation. 1 GTT, global thrombosis test; OT, occlusion time.]

[Table 1 caption: The bead diameter (D), arc lengths (L), width (w), area (A), flow rate (Q), and shear rate (γ) in each gap around the 4- and 3-mm beads, for a total flow rate of 3.8 µL/s estimated from the experimentally measured total mass rate of 4 mg/s.]

Lysis Time
LT was obtained for fresh blood and with added tPA. For the standard GTT procedure, the average LT was 1,660 ± 495 seconds, corresponding to the "normal spontaneous thrombolytic activity" reported in the GTT user manual (LT <2,000 seconds). In an effort to characterize the composition of the occluding clot, the blood samples were treated with tPA in PBS at the beginning of the experiment, before measuring OT. The tPA group was compared with a control group with added PBS solution so that dilutional effects were factored into the experiments. Blood from the same individuals, collected at the same time, was used for both groups. In the tPA group, LT was reduced to 865 ± 404 seconds, approximately half of the time seen in the control group (►Fig. 3). A paired t-test confirms the significance of the difference between the two groups (p = 0.004, n = 6, including two measurements from each of three individuals). The reduction of LT is consistent with a previous GTT study administering a similar concentration of tPA. 28

Heparin
To further explore the properties of clots in the GTT, we used additives that differentially affect the formation of coagulation versus shear-induced platelet aggregation (SIPA). Heparin strongly inhibits coagulation and the formation of fibrin, so we would expect increases in OT if clots are fibrin rich. In the GTT device, fresh blood without heparin occludes with an average OT of 527 ± 204 seconds, whereas heparinized blood has an average OT of at least 847 ± 106 seconds as a lower bound (p = 0.00005, n = 12 measurements, including two measurements from each of six individuals), since nine out of the 12 measurements never reached OT within the 900-second limit of the GTT (►Fig. 4). This strong effect of heparin suggests that the main cause of occlusion in the GTT is coagulation. For comparison, the OT for fresh blood of 527 ± 204 seconds is almost double the predicted OT of 222 seconds from SIPA (►Fig. 4). 20 Thus, the exposure of blood to the ceramic contact surface, an OT longer than expected from SIPA, the histological appearance of a predominantly fibrin-rich red clot, the short LT with tPA, and the lengthening of OT by heparin all point to thrombotic occlusion in the GTT as strongly dominated by coagulation.

Modified GTT (mGTT)
We next explored whether the system could be modified to create high-shear, platelet-vWF-rich thrombi instead. SIPA thrombi form in the presence of three factors: (1) vWF and platelets in whole blood; (2) high shear rates; and (3) a fibrillar collagen surface. 18,23–25 The first two factors are already present in the GTT. Previous work has demonstrated that fibrillar collagen can be deposited on glass. 26
Consequently, we modified the original GTT (mGTT) to promote the creation of platelet-rich clots at high shear rates by replacing the two ceramic beads with a single glass bead of 4 mm diameter, coated with fibrillar type I collagen. With a single bead, we also avoid the low-shear zone between the beads, where the clots formed in the GTT. The blood mass flow through the gap of the coated 4 mm bead was 16 ± 1 mg/s, corresponding to a measured volumetric flow rate of Q = 15 µL/s. The resulting initial shear rate under these conditions was 6,700 1/s, approximately twice the average of 3,500 1/s reached in the gap of the original 4 mm ceramic bead of the GTT device.

Occlusion Time
We tested the mGTT using fresh blood from the same seven healthy subjects tested on the GTT system. The average OT in the mGTT was 217 ± 71 seconds, about half of that obtained in the GTT (OT = 526 ± 188 seconds; p = 0.0002, n = 14, including two measurements from each of seven individuals) (►Fig. 5). Note that variability decreased by approximately 10% in the mGTT compared with the GTT device. The OT of 217 ± 71 seconds obtained by the mGTT is also close to the empirical prediction of 222 seconds for high-shear SIPA developed by Mehrabadi et al. 20

Clot Histology
Histology of thrombi obtained in the mGTT system revealed platelet-rich clots, visualized in blue with Carstairs staining of clots harvested at approximately 200 and 400 seconds. The mGTT clots did show the presence of some fibrin (red) (see ►Fig. 6). At approximately 200 seconds the blood clot is barely visible as a dot on the bead, which then grows into a tail by 400 seconds.

Lysis Time
We measured LT for the mGTT in the same way as for the GTT, by enriching the blood sample with tPA. For comparison purposes, we used blood from the same individual for the tPA group (blood with tPA/PBS solution) and the control group (blood with PBS solution). In the control group (n = 3), the average LT was 4,375 ± 1,660 seconds, while with tPA the value obtained was 2,110 ± 2,116 seconds (p = 0.12, n = 6, including two measurements from each of three individuals). Note that the values for LT in the mGTT are more than double those produced in the GTT.

Heparin and Citrate
We tested the effects of anticoagulants on the OT for the mGTT. Heparin was expected to prevent fibrin formation, so the effect of heparin may inform the amount of fibrin contributing to the formation of the clot. In the mGTT system, heparinized blood at 3.5 USP units/mL increased the OT to 244 ± 53 seconds, compared with 148 ± 30 seconds for nonheparinized blood (p = 0.002, n = 10, including two measurements from each of five individuals), suggesting that there is some contribution of fibrin to SIPA thrombus formation within the mGTT (►Fig. 7). Note that the OT in the mGTT was shorter than in the GTT even with heparin, so occlusion was happening faster in the mGTT than in the GTT. The variance is also lower in the mGTT than in the GTT for fresh blood measurements. We also compared the OT from the GTT and mGTT to an equivalent OT obtained using an MTA 27 with heparinized blood. The equivalent OT in the MTA was 189 ± 50 seconds (n = 44 measurements in total: eight measurements from each of five individuals plus another four measurements from one extra individual). The difference between the mGTT OT of 244 ± 53 seconds for heparinized blood and the MTA OT was significant (p = 0.01); the p-value was obtained with an unpaired t-test without assuming equal variance.
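For readers who want to reproduce this style of comparison, the minimal Python sketch below shows the two tests used in this paper: a paired t-test (as in the tPA vs. control LT comparison) and Welch's unpaired t-test without assuming equal variance (as in the mGTT vs. MTA OT comparison). The arrays are placeholder data, not the study's measurements.

```python
# Minimal sketch of the two significance tests used above (placeholder data,
# not the study's measurements).
import numpy as np
from scipy import stats

# Paired t-test: LT with tPA vs. control, from the same blood draws.
lt_control = np.array([1500.0, 2100.0, 1400.0, 1900.0, 1600.0, 1450.0])
lt_tpa     = np.array([ 800.0, 1300.0,  600.0,  950.0,  850.0,  700.0])
t_paired, p_paired = stats.ttest_rel(lt_control, lt_tpa)

# Welch's unpaired t-test (unequal variances): mGTT OT vs. MTA OT.
ot_mgtt = np.array([240.0, 300.0, 190.0, 260.0, 230.0])
ot_mta  = np.array([150.0, 210.0, 180.0, 240.0, 160.0, 200.0])
t_welch, p_welch = stats.ttest_ind(ot_mgtt, ot_mta, equal_var=False)

print(f"paired: t={t_paired:.2f}, p={p_paired:.3f}")
print(f"Welch:  t={t_welch:.2f}, p={p_welch:.3f}")
```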
The OT measured on the MTA is also much shorter than the OT of 847 ± 106 seconds obtained on the GTT for heparinized blood (p = 5.3 × 10⁻¹¹ under an unpaired t-test without assuming equal variance). To further explore the stability of clots formed in the mGTT, we used blood enriched with citrate, a known anticoagulant used for platelet studies that is reported to prevent occlusion in the GTT. 28 Citrated blood in the mGTT yielded an OT of 453 ± 240 seconds (►Fig. 7), similar to the OT for the GTT using fresh blood.

Use of Noncoated Glass Beads
We have previously shown that the collagen coating is critical in the formation of SIPA occlusions. 18,23–25 To confirm the same effect in the GTT, we tested the difference in OT between a collagen-coated and a clean glass bead in the mGTT device using blood from the same population, without an anticoagulant or with heparin. Clean glass beads (without coating) led to an OT of 457 ± 240 seconds, approximately three times longer than the 148 ± 30 seconds obtained for collagen-coated glass beads (p = 0.0003, n = 10, including two measurements from each of five individuals; see ►Fig. 8). We did not find a significant difference in OT between a single uncoated glass bead and the two ceramic beads in the GTT. Adding heparin lengthened the OT for the mGTT with a clean glass bead to 628 ± 215 seconds, compared with 244 ± 53 seconds when the glass surface was coated with collagen (p = 0.00003, n = 10, including two measurements from each of five individuals). The surface properties of the bead also affect the OT for blood samples with heparin (p = 0.006 in ►Fig. 8 between ceramic and glass surfaces in the GTT device). With the ceramic beads in the GTT, only 3 out of 12 samples had a detectable OT within 900 seconds (OT = 847 ± 106 seconds, n = 12, including two measurements from each of six individuals), whereas the clean glass bead in the mGTT led to occlusion in 7 out of 10 samples (OT = 628 ± 215 seconds, n = 10, including two measurements from each of five individuals).

Beads and Gaps
Thrombosis requires a nucleating surface, and the material can significantly alter its thrombogenicity. 29 The GTT has been described as having beads made of steel 30–32 or ceramic. 14,31 The test tubes supplied to us in early 2019 had beads that were white ceramic, of 4 and 3 mm in diameter. We attempted to verify the number of gaps and the gap size between the beads and the inner wall surface of the test tube by the micro-CT scan shown in ►Fig. 9. The gaps were much thinner than in published drawings, measuring approximately 40 microns. Three unevenly sized gaps were seen by micro-CT for both the 4- and 3-mm beads. As the beads sit in a conical section, small changes in the angulation of the tube may cause distortions in the visualization of the gaps and also in the wall thickness. The ambiguity in gap size made calculation of the true shear rates difficult.

Shear Rate Through the Gaps
Once blood was injected, it flowed through the gaps between the beads and the inner wall of the test tube at an average measured mass rate of 4 ± 2 mg/s (n = 8 measurements), corresponding to a flow rate Q = 3.8 µL/s for a blood density of 1.06 g/mL. Using the images from the micro-CT scan, we measured the total arc angle of the gaps, α, to estimate the length of each gap, L, using Eq. (2), L = αR, where R is the bead radius and α is expressed in radians. The width of the gaps (w) is approximated by the width of the widest section among the gaps. The 4-mm bead has three gaps with arc angles of 113, 101, and 110 degrees. The 3-mm bead has three gaps with arc angles of 102, 65, and 13 degrees by micro-CT (►Table 1). We do not know if other test tubes are manufactured differently. The equivalent gap length (L) is multiplied by the gap width (w) to obtain the gap's transversal area (A) (►Fig. 10). The flow rate (Q) passing through each gap is taken to be proportional to its area. The shear rate, γ, is then calculated for each gap using Eq. (1). The resulting shear rates produced in each gap around the 4- and 3-mm beads are summarized in ►Table 1. The configuration of the gaps in the GTT creates different shear rate zones.
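The per-gap arithmetic just described (arc length from Eq. (2), area A = L × w, flow split in proportion to area, and shear rate from Eq. (1)) can be condensed into a short script. The sketch below is purely illustrative: it uses the arc angles reported above with a uniform, assumed gap width, so its output will not exactly reproduce Table 1, whose per-gap widths we could only approximate.

```python
# Illustrative per-gap shear-rate calculation (Eqs. 1 and 2).
# Arc angles are the micro-CT values reported above; the uniform 40-micron
# gap width is an assumption (real per-gap widths differ; see Table 1).
import math

Q_TOTAL = 3.8e-9   # total volumetric flow rate through each bead, m^3/s
W = 40e-6          # assumed uniform gap width, m

def gap_shear_rates(bead_diameter_m, arc_angles_deg, w=W, q_total=Q_TOTAL):
    r = bead_diameter_m / 2.0
    lengths = [math.radians(a) * r for a in arc_angles_deg]  # Eq. (2): L = alpha * R
    areas = [length * w for length in lengths]               # A = L x w
    total_area = sum(areas)
    rates = []
    for length, area in zip(lengths, areas):
        q_gap = q_total * area / total_area                  # flow split by area
        rates.append(6.0 * q_gap / (length * w ** 2))        # Eq. (1)
    return rates

for diameter, angles in [(4e-3, [113, 101, 110]), (3e-3, [102, 65, 13])]:
    rates = gap_shear_rates(diameter, angles)
    print(f"{diameter * 1e3:.0f} mm bead:", [f"{g:.0f} 1/s" for g in rates])
```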
[Fig. 8 caption: Box charts of OT obtained for three different bead surfaces: ceramic, glass, and collagen coated. GTT and mGTT were tested with blood samples without anticoagulants ("none" in the graph) and with heparin as an anticoagulant. OT, occlusion time.]

[Fig. 9 caption: Micro-CT scans of the 4 mm and 3 mm beads ((a) and (b), respectively). The gaps are the dark lines between the ceramic bead (white) and the test tube (gray), indicated by red arrows. Yellow arrows indicate the places where the bead and test tube surfaces touch.]

[Fig. 10 caption: The gap geometry is approximated as a rectangle of length L, obtained from Eq. (2), where α is the gap arc angle and R the bead radius (shown as an example in the image), and width w, which is taken from roughly the middle of the gap (blue arrow in the image). The gap area is then calculated as A = L × w.]

Injection Rate
To our surprise, the results for OT were quite dependent on the manual injection rate, the pressure, air inside the syringe while injecting blood, and resultant bubble formation in the tube. Initially, we had poor reproducibility while following the manufacturer's instructions and discussion by email. Eventually, we obtained more consistent results after >50 injections of blood into tubes placed on a rack where we could visualize the blood after injection. Depending on the injection rate and angle, the blood could fail to flow at all or flow too fast through the exit. In our hands, the rate and duration of injection could easily be varied to yield different results. After weeks of trials, we settled on a consistent technique of injecting the blood over 4 seconds each time, just off axis.

Data Readout
We found inconsistent values of OT and LT between the values displayed on the front panel and the values obtained from the saved traces on the SD card in the majority of cases. There are no user functions to modify the data points. These stored values had jumps back and forth in time (x axis), even showing some negative time points (►Fig. 11b and c). The y-axis displays the time between individual drops. In ►Fig. 11b, one can see that no drops were recorded between 1 and 316; then rapid dropping is established. We also noted that the numbers along the x-axis are not in seconds, but in an unknown fraction of a second. The manufacturer confirmed that the time points are not in seconds.

Drop Size
Due to these inconsistencies, we decided to remove the test tube and measure OT outside the device, observing each drop directly. We observed that the drop size was not the same for all tubes but varied considerably. This variation in size produced different OTs depending on the tube. In a similar manner, we corroborated that LT is measured, according to its definition, as the time interval between OT and the first drop to fall after the "thrombi stabilization period." Thus, the value of LT also depended on the drop size and was ultimately tube dependent.
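To make the OT/LT bookkeeping described above concrete, the following Python sketch applies the endpoint definitions quoted earlier (d ≥ 15 seconds for OT, a 900-second OT cap, a 300-second stabilization period, and a 6,000-second lysis cutoff) to a hypothetical drop-interval trace. It is a reconstruction of the stated definitions, not the manufacturer's firmware.

```python
# Sketch of the GTT-2 endpoint definitions applied to a drop-interval trace.
# Reconstruction of the definitions quoted in the text; not the device firmware.

OT_THRESHOLD = 15.0      # OT reported when the drop interval d >= 15 s
OT_MAX = 900.0           # maximum reportable OT
STABILIZATION = 300.0    # "thrombi stabilization period" after OT
LT_CUTOFF = 6000.0       # "no lysis" if no drop within 6,000 s after OT

def occlusion_and_lysis(drops):
    """drops: list of (cumulative_time_s, interval_d_s) pairs, one per drop."""
    ot = None
    for t, d in drops:
        if d >= OT_THRESHOLD:
            ot = min(t, OT_MAX)
            break
    if ot is None:
        return None, None                      # no occlusion detected
    # T1 is the first drop after the stabilization period; LT = T1 - OT.
    for t, _ in drops:
        if t > ot + STABILIZATION:
            lt = t - ot
            return ot, (lt if lt <= LT_CUTOFF else float("inf"))
    return ot, float("inf")                    # "no lysis"

# Hypothetical trace: drops every ~2 s, then a long 18 s interval (occlusion),
# then one late drop marking spontaneous lysis.
trace = [(2.0 * i, 2.0) for i in range(1, 200)] + [(416.0, 18.0), (2100.0, 5.0)]
print(occlusion_and_lysis(trace))              # -> (416.0, 1684.0)
```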
GTT-3 Pressure Mode
When the device was used in GTT-3, or hyper-shear, mode, the test tube was connected by a metal needle, through silicone tubing, to an air pump via a flow/pressure-adjusting bleed valve. The test tube is then a closed chamber and can only vent through the silicone tube to the machine. The injection pressure can escape only through the valve and the idle motor, both inside the instrument, relatively far from the test tube. As such, any extra pressure escapes from the system much more slowly compared with a test tube with a vent hole, which makes the injection of blood different from the GTT-2 mode. We attempted to use the device in GTT-3 mode but were not able to obtain consistent results. One of the channels lifted the entire tube during the pressure cycle. The pressure cycle was not specified; the tube once sucked blood back into the machine; and the manufacturer later told us to adjust the length of the tubes shipped with the machine, as one tube was 3 mm longer than the others. The manufacturer stated that the length of the silicone tubing is critical to avoid any malfunction during the test, particularly to prevent lifting of the syringe after blood injection. We fixed the length; nevertheless, the results did not improve. Because of this, we decided to use the device only in GTT-2 mode for all our measurements. We do not know if our device was broken for the GTT-3 mode, or if it is simply an unstable feature.

[Fig. 11 caption: Examples of the readout from the SD-stored data versus the data shown on the display. The x axis of the graphs should be cumulative running time, and the y axis should display the time, d, between two consecutive drops. In (a), the stored values increase smoothly and monotonically, as expected from the measurements. In contrast, in the traces shown in (b) and (c), the stored values had jumps back and forth in time (x and y axes), even showing negative time points. In (b), the displayed data were OT 612 and LT 1925, but the readout from the SD card gives OT 317 and LT 2167; there is also a long time delay before the first drop registers. The x-axis records time in a nonstandard unit that is not seconds. In (c), it is difficult to interpret the output at all. LT, lysis time; OT, occlusion time; SD, standard deviation.]

Discussion
Our experiments indicate that occlusion in the GTT is dominated by coagulation and that the clots are fibrin rich. Coagulation red clots were consistently shown by (1) histological staining of fibrin, (2) an OT longer than expected from SIPA, (3) significant lengthening of OT by adding anticoagulants, and (4) a shortening of LT by adding tPA. Coagulation induced by Virchow's triad would require a stagnant or low-shear-rate zone, which exists between and after the beads, where the red clots formed. Thus, the formation of a blood clot at a long OT is consistent with a mechanism of the coagulation cascade rather than the faster SIPA. We demonstrated that an alternative mechanism of occlusion by SIPA could be induced by modifying the test section, where SIPA was characterized by (1) histological staining of platelets, (2) more rapid occlusion (shorter OT), (3) relative resistance to anticoagulation, and (4) a long LT even with tPA. The mean OT of 526 seconds in our version of the GTT was similar to the OT of 495 seconds by Suehiro et al, 32 481 seconds by Yamamoto et al, 14 and 524.9 seconds by Otsui et al, 33 all published after 2010.
This OT appears to be much longer than the values reported using earlier versions of the GTT from 2003 and 2006 (►Table 2). Unfortunately, comparisons between papers are obscured because the GTT has been produced with different gaps and beads that are not always reported. At times, the GTT has had two, three, or four gaps. Similarly, the bead has variously been made of steel or ceramic. It is unknown when and what changes were made over the years to both the cartridge and the instrument. We do not know the dimensions of the gaps, nor the size of the opening that produces the droplets. These dimensions could have a strong influence on the OT measured between studies.

The intrasample variability in our study was 15% (n = 7), with an interindividual variation of 36% (n = 7). Our intra-assay CV was higher than the intra-assay CV of 6% (n = 1) reported by Rosser et al, 34 and our interindividual variation exceeded the 27% (n = 32) reported by Yamamoto et al. 14 A large interindividual variation (i.e., large false-positive and false-negative rates) would reduce the ability of this test to distinguish normal from abnormal clinical populations.

Point-of-care devices are meant to be used at the location of blood draws. Citrate or heparin might be used to extend the time-to-testing. Unfortunately, heparin effectively destroyed the OT for the GTT, which verifies previous reports that anticoagulants should not be used with this test. A previous study also reported occlusion suppression (OT >1,000 seconds) using citrate and ethylenediaminetetraacetic acid (EDTA). 35 In contrast, heparin and citrate had a smaller effect on the mGTT, likely due to its SIPA mechanism of occlusion. Both heparin- and citrate-treated blood led to OTs in the mGTT that might be correlated back to untreated blood values, since the variance was not high.

The importance of the surface (steel, ceramic, glass, or collagen) for the performance of the GTT was evident. The GTT system's changes in bead material, gap size, gap number, shear rate, and user injection rate, and the problems in the electronic readouts, are not described by the manufacturer. Care should be taken not to rely on results from outdated versions of the GTT as representative of newer versions with system changes.

We came to appreciate that the system-defined LT only requires a single drop to emanate from the tip of the test tube. As the drop grows slowly, the drop size and detachment were highly variable and influenced greatly by the tube and by a "skin" holding the last drop. The company describes LT as "endogenous lysis," suggesting reperfusion of an end organ. We never witnessed the resumption of normal blood flow at LT without tPA. An alternative explanation may be that serum continues to seep through the permeable clot and collects as a drop. Such thrombus permeability has been described and quantified for fibrin and platelet-rich thrombi. Consequently, LT may be related to clot structure rather than to endogenous lysis.

We experienced some limitations with the device. The GTT device we evaluated was purchased directly from the manufacturer in December 2018 and was used according to all provided instructions. Our struggles with the injection rate early in the testing may reflect our inexperience rather than any issue with the machine; for this reason, we excluded all results from the first 6 months of use. GTT timing on the chip is not in seconds but in some unit smaller than a second, since 10 minutes yielded many more than 900 time points. The printout has spurious results.
The pressure tubes in our unit, and thus the GTT-3 mode, did not work. The manufacturer's manual does not specify whether there are two, three, or four gaps between the beads and the tube walls; we therefore used micro-CT to establish that our test tube had three gaps of different sizes. Apparently, the manufacturer has changed the beads from steel to ceramic, but it is unknown when this took place and whether the OT values from prior iterations are consistent with current devices. The drop size was highly variable by injection speed, by tube, and over time. Additionally, it is unknown whether the normal ranges are the same as described by the manufacturer or need to be calibrated at each center, like PT/PTT. LT is defined as the time for a single new drop to appear; its interpretation may reflect endogenous plasminogen levels, thrombus permeability, or something else.

Conclusion
The GTT is a point-of-care device designed to test for thrombi from untreated blood. We explored the behavior of the GTT in terms of two possible mechanisms of thrombosis: coagulation and SIPA. The GTT develops a red clot that is rich in fibrin, occludes in approximately 8 minutes (consistent with the kinetics of the coagulation cascade), has the low-shear zone and artificial surface required by Virchow's triad, and is affected strongly by heparin. The LT could be shortened by tPA. These results consistently point to occlusion by a coagulation mechanism, despite the existence of a high-shear gap in the system. The GTT could be modified to induce SIPA with a platelet-rich thrombus, which occluded much faster over collagen and was relatively resistant to heparin and tPA. The evidence points to the GTT occluding primarily due to a fibrin-rich red clot from coagulation rather than the high-shear platelet aggregation and occlusion associated with arterial thrombosis.

'What is Known on This Topic'
• The global thrombosis test (GTT) is a point-of-care device that tests thrombotic and thrombolytic status from untreated blood.
• The endpoints used to evaluate thrombotic and thrombolytic status are occlusion time and lysis time, respectively.
• The test is performed by exposing untreated blood to flow in a zone that produces both high and low shear stress, where clots are formed.

'What Does This Paper Add?'
• We demonstrated that the GTT occludes primarily due to a fibrin-rich red clot from coagulation rather than high shear-induced platelet aggregation (SIPA) and the occlusion associated with arterial thrombosis.
• We demonstrated that an alternative mechanism of SIPA, or white clot, might be induced in the GTT by modifying the device.
• Coagulation (red clots) was shown by histology, an occlusion time longer than expected from SIPA, significant lengthening of occlusion time by adding anticoagulants typically used in clinics (heparin and citrate), and a shortening of lysis time by adding human tissue plasminogen activator (tPA).

Funding
The study was funded by the Atlanta Center for Microsystems Engineered Point-of-Care Technologies (ACME-POCT), funding number NIH-5U54EB027690.
Comparison of persistence rates of acetylcholine-esterase inhibitors in a state Medicaid program.

Objective
To compare levels of persistency between cholinesterase inhibitors (ChEIs) among a Medicaid patient population of older adults.

Methods
Survival analysis was used to assess differences in discontinuation between ChEIs (donepezil versus rivastigmine and galantamine), and for differences by patient gender, age, race, and care setting.

Results
Rates of discontinuation increased from 42.7% (95% CI = 39.9–45.5) at 12 months to 84.8% (95% CI = 82.3–87.3) at 24 months. In multivariate models, no significant difference in discontinuation existed prior to 365 days. However, patients dispensed donepezil were less likely to discontinue as compared with users of the other two ChEIs after the first year (RR = 0.70; CI = 0.499–0.983; p < 0.04). Patients of white race were less likely to discontinue (RR = 0.549; 95% CI = 0.43–0.82; p = 0.0015), while gender, care setting, and age were not associated with discontinuation.

Conclusions
One-year persistence rates were similar between different ChEIs. Among patients persisting with ChEI medication for at least 12 months, users of donepezil were slightly more likely to continue to persist at 24 months. Nearly half of patients failed to persist with ChEI therapy for at least 12 months. Our findings underscore the limitations of the ChEI medications and the urgent need for effective and tolerable therapeutic options for patients having dementia.

Background
Alzheimer's Disease (AD) is an irreversible, progressive disorder characterized by neuronal deterioration that results in loss of cognitive functions such as memory, communication skills, judgment, and reasoning (Lanctot et al 2003). It is a common (Fratiglioni 1993; Zuard 2001) and chronic dementia disorder among elderly people (Fratiglioni 1993), responsible for nearly 70% of all dementias (Zuard 2001). Approximately 4.5 million Americans suffer from AD, and this number is expected to increase almost 3-fold, to 13.2 million, by 2050 (Hebert et al 2003). The incidence and prevalence of AD increase exponentially between the ages of 65 and 85, approximately doubling with every 5 years of age (Rocca et al 1991). The proportion of new-onset cases who are 85 years of age or older is expected to increase from 42% in 1995 to 62% in 2050 (Hebert et al 2001). In 1995, 7.1% of all deaths in the US were attributable to AD, placing it on a par with cerebrovascular diseases as the third leading cause of death (Ewbank 1999). The 1991 estimate of the total prevalent cost of the disease was $67.3 billion ($173,932 per case), with $20.6 billion in direct costs ($47,581 per case) (Ernst and Hay 1994). It is a disease with a significant economic burden and a high societal impact, with the proportion of older adults in the population increasing (Fratiglioni 1993; Hebert et al 2003).

The FDA approved four cholinesterase inhibitor drug therapies for AD: tacrine, donepezil, galantamine, and rivastigmine, collectively known as acetyl-cholinesterase inhibitors (ChEIs). By inhibiting the enzyme acetylcholine-esterase, these agents are hypothesized to prolong the action of acetylcholine at the postsynaptic receptor by preventing its hydrolysis. Cholinesterase inhibitors are also prescribed for other conditions with cholinergic-system dementia such as vascular dementia, Parkinson's disease, and multiple sclerosis dementia (Kloszewska 2002).
Though not curative, these medications can slow the progression of AD, rather than reverse its decline, and have been shown to have a modest beneficial impact on neuropsychiatric outcomes for AD patients (Trinh et al 2003). The major therapeutic effect of ChEIs is to maintain cognitive function at a stable level during a 6- to 12-month period (Giacobini 2000a, b; Giacobini 2001a, b; Giacobini 2002). Additional drug effects are to slow cognitive deterioration, improve behavioral problems, increase the ability to perform daily living activities (Jann 1998; Giacobini 2000a, b; Giacobini 2001a, b; Giacobini 2002), and improve the patient's mood (Grutzendler and Morris 2001). Recent studies show that the cognitive stabilization effect may be prolonged up to 24 (Giacobini 2000a, b; Giacobini 2001a; Giacobini 2002) to 36 months (Giacobini 2001b). The four therapies for AD differ in selectivity and specificity for brain tissue, as well as in their ability to interact with other drugs, adverse events on the nervous system and gastrointestinal tract, and hepatotoxicity (Zuard 2001). Tacrine is no longer marketed in the US because of safety precautions (Clark and Karlawish 2003), and donepezil was the most frequently prescribed ChEI (Auriacombe et al 2002; Bullock and Connolly 2002) at the time of the study.

Persistence with these agents is expected to be suboptimal, as patients are often poorly adherent to chronic medications (Barat et al 2001; McDonald et al 2002). In a number of chronic illnesses, noncompliance with medications has been shown to have a significant negative health impact (Luscher et al 1985; Col et al 1990; Psaty et al 1990; Chin and Goldman 1997; McDermott et al 1997; Bergen et al 1998; Paterson et al 2000; Tsuyuki and Bungard 2001), and is estimated to cost the US $25 billion annually when indirect costs are included (Sullivan and Hazlet 1990). Older adults are especially prone to be non-adherent (Gray et al 2001) because of susceptibility to adverse events (Monane et al 1998; Golden et al 1999); deficits in physical dexterity, cognitive skills, and memory; and the large number of medications they are prescribed (Cramer 1998). Some researchers have found that older patients are generally more likely than younger patients to discontinue their medication (Applegate 2002; Benner et al 2002; Jackevicius et al 2002). Patients may discontinue ChEI drug therapy as a result of intolerable adverse events, rapid clinical deterioration, or failure to improve, stabilize, or reduce the rate of decline in AD (Fillit and Cummings 2000). While persistence with specific ChEI medications has not been adequately examined by researchers, switching between different ChEIs has been frequently reported (Auriacombe et al 2002; Bullock and Connolly 2002; Emre 2002). There is limited information outside the clinical trial setting (Mauskopf et al 2005), and the results of previous studies examining persistence to ChEIs have been inconsistent (Sicras and Rejas-Gutierrez 2004; Mauskopf et al 2005; Sicras-Mainar et al 2006) or have included only one type of ChEI (Roe et al 2002), thus preventing a comparison. The objective of this research was to compare rates of persistence between donepezil and other types of ChEIs in usual care settings among a patient population of older adults enrolled in a state Medicaid program. Studies have reported donepezil to have better tolerability than the other ChEIs (Rogers et al 1998; Emre 2002; Inglis 2002; Wilkinson et al 2002; Birks 2006).
Thus, we assessed persistence with each ChEI medication separately, and overall.

Study population and design
We conducted a retrospective cohort study among patients enrolled in the Rhode Island Medicaid program between January 1, 2001 and December 31, 2003, who received at least one dispensing of a ChEI medication. New users of ChEIs were identified by selecting patients receiving an initial prescription for a ChEI medication between July 1, 2001 and December 31, 2003. Those included had no prior dispensing of a ChEI medication in the previous 6 months, and an initial prescription prior to June 30, 2003, such that all patients had at least 6 months of follow-up time. Cases were excluded if they were less than 50 years of age. Information describing demographic and other patient characteristics was made available. Members with greater than a 6-month period (180 days) between refills, or between the last refill and the end of the study period, were considered to have discontinued the drug. Switching to other types of cholinesterase inhibitors was assessed separately among those who continued their medication at 6 months and at 1 year.

Independent variables included the class of ChEI dispensed (donepezil versus rivastigmine and galantamine), gender, age (50–69 years, or 70 years or greater), race (white versus nonwhite), and care setting (long-term care versus community-dwelling). The age cutoff of 70 years was chosen after assessing the parametric form of the age variable. Descriptive statistics were used to determine the frequencies of various patient characteristics. Survival analysis was used to assess differences in persistence by ChEI product dispensed, and by the patient characteristics identified above. Kaplan-Meier (KM) curves were independently constructed for each of the predictor variables (class of ChEI, gender, age, race, setting), and the log-rank statistic was used to evaluate group differences in persistence. Extended Cox proportional hazards models were used to estimate rate ratios (RR) and 95% confidence intervals (CI) for the association between patient characteristics and ChEI discontinuation. Statistical analyses were carried out using the SAS statistical package, version 8.01.
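The survival workflow described here (KM curves by group, a log-rank comparison, and a Cox model for covariate-adjusted rate ratios) can be sketched in a few lines. The example below uses Python's lifelines package with hypothetical column names, since the original analysis was done in SAS; it fits a standard Cox model, whereas this study's extended Cox model additionally allowed the donepezil effect to differ before and after 365 days.

```python
# Minimal sketch of the survival analysis described above, using the Python
# lifelines package instead of SAS. File and column names are hypothetical.
import pandas as pd
from lifelines import KaplanMeierFitter, CoxPHFitter
from lifelines.statistics import logrank_test

df = pd.read_csv("chei_persistence.csv")  # hypothetical extract of refill data
# Expected columns: days_to_discontinuation, discontinued (1/0),
# donepezil (1/0), male (1/0), age70plus (1/0), white (1/0), ltc (1/0)

# Kaplan-Meier curves by initial ChEI dispensed.
kmf = KaplanMeierFitter()
for label, grp in df.groupby("donepezil"):
    kmf.fit(grp["days_to_discontinuation"], grp["discontinued"],
            label="donepezil" if label else "other ChEI")
    kmf.plot_survival_function()

# Log-rank test for a group difference in persistence.
a, b = df[df.donepezil == 1], df[df.donepezil == 0]
res = logrank_test(a["days_to_discontinuation"], b["days_to_discontinuation"],
                   a["discontinued"], b["discontinued"])
print("log-rank p =", res.p_value)

# Cox proportional hazards model for adjusted rate ratios.
cph = CoxPHFitter()
cph.fit(df[["days_to_discontinuation", "discontinued", "donepezil",
            "male", "age70plus", "white", "ltc"]],
        duration_col="days_to_discontinuation", event_col="discontinued")
cph.print_summary()  # exp(coef) column gives RRs with 95% CIs
```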
Results
A total of 1,564 patients met the study's inclusion criteria. Baseline characteristics of the population are presented in Table 1. The mean age of these patients was 83 years, and 76% were female. Most of the patient population was in long-term care (LTC) (86%), and 61% were of white race. Donepezil was the most widely dispensed ChEI, accounting for 56.4% of the patients.

Table 2 presents the discontinuation rates at 12 months and 24 months, overall and according to patient characteristics. Overall discontinuation increased with time, from 42.7% (95% CI = 39.9–45.5) at 12 months to 84.8% (95% CI = 82.3–87.3) at 24 months. In univariate analyses, the initial type of dispensed ChEI medication was not associated with discontinuation (p = 0.22). During the first 12 months, males were more likely to discontinue than females (p = 0.04), and whites were less likely to discontinue (p < 0.0001) than non-white patients.

A small percentage of patients switched ChEI medication type during the study timeframe. Among patients who were started on donepezil and continued the medication for 6 and 12 months, 96% remained on donepezil at 6 months and at 12 months. At 6 months, 1% had switched to rivastigmine and 3% to galantamine, and at 12 months 2% were receiving rivastigmine and 2% were receiving galantamine. Among patients who were started on rivastigmine, 92% were still on rivastigmine at 6 months, while 4.4% had switched to donepezil and 3.5% to galantamine. At 12 months, 89% were still on rivastigmine, 7.8% were receiving galantamine, and 3.2% were receiving donepezil. Finally, among patients who were started on galantamine, 98% remained on galantamine at 6 months, while 1.5% had switched to donepezil and 0.8% to rivastigmine. At 12 months, 98.7% of these patients were receiving galantamine and 1.3% were receiving rivastigmine. The Medicaid plan had no influence on drug switches, as no restrictions on the use of any particular ChEI medication were in place during the study timeframe.

In multivariate models, persistence rates did not differ among the ChEI medications during the first 12 months of therapy (RR = 1.002; CI = 0.807–1.243; p = 0.9879). Among those persisting for at least 12 months, users of donepezil were less likely to discontinue during subsequent months as compared with users of the other two ChEIs (RR = 0.70; CI = 0.499–0.983; p = 0.0397). Overall, patients of white race showed better persistence than those of nonwhite race (RR = 0.549; 95% CI = 0.43–0.82; p = 0.0015). This, however, was based on 66% of the population, since 34% had missing values for race. Gender, care setting, and age were not associated with differences in discontinuation. These results are presented in Table 3. Figure 1 shows the survival curves for those dispensed donepezil versus other ChEI medications.

Discussion
Our findings suggest that discontinuation rates for ChEIs are high, as previously reported (Roe et al 2002; Sicras and Rejas-Gutierrez 2004; Mauskopf et al 2005; Sicras-Mainar et al 2006), and indicate slightly better persistence with donepezil than with other ChEIs beyond 1 year of therapy, adjusting for gender, age, race, and living arrangements (adjusted RR = 0.70; CI = 0.499–0.983; p = 0.0397). Similar rates of persistence for all ChEI medications were observed for the first 12 months of use (adjusted RR = 1.002; CI = 0.807–1.243; p = 0.9879). Patients of white race were shown to have better persistence than nonwhites (41% vs 69%), although this finding was based on an analysis of only 66% of the study population, as 34% of individuals had no information describing race in the available data sources. Despite the high rate of missing values for this variable, we believed it was important to include this covariate in our analysis given the magnitude of the difference in the percentages persisting, and because race has also been reported to be among the factors associated with noncompliance (Balkrishnan 1998). The smaller percentage of nonwhite cases overall merits further analysis, as it is possible that nonwhite patients were less frequently prescribed ChEI medications. In the multivariate analysis, we did not find persistence rates to differ significantly by patient age, gender, or living arrangement. Other researchers evaluating these factors have described inconsistent findings (Coons et al 1994; Balkrishnan 1998).

Two studies carried out in primary care health centers in Spain (Sicras and Rejas-Gutierrez 2004; Sicras-Mainar et al 2006) also demonstrated better persistence with donepezil than with rivastigmine and galantamine.
Mauskopf et al (2005) found similar rates of persistence between rivastigmine and donepezil in a retrospective community-based study. That research, however, looked at 6 months of medication persistence and did not evaluate persistence rates after 1 year. Thus, our results are consistent, since we found no significant difference in persistence prior to 1 year of therapy. Furthermore, Mauskopf et al (2005) recognized that the limited sample size of rivastigmine patients might have limited the authors' ability to detect differences in persistence.

Leading reasons for discontinuation of ChEI therapy include patient- or physician-perceived ineffectiveness, intolerance of side effects, or an inconvenient dosing schedule (Mauskopf et al 2005). Reported side effects include nausea, vomiting, and diarrhea (Rogers and Friedhoff 1996; Rogers et al 1998; Emre 2002; Mauskopf et al 2005). Donepezil has been reported to have better tolerability and a milder side effect profile than other ChEIs (Rogers et al 1998; Emre 2002; Inglis 2002; Wilkinson et al 2002), which might explain the observed difference in persistency rates beyond 1 year. This, nonetheless, does not explain why similar persistency rates were observed prior to 1 year, since side effects are expected to appear before 1 year. Additionally, based on the comparison of persistence rates, our results do not support reports that rivastigmine provides a greater magnitude of benefits than donepezil (Rogers et al 1998; Emre 2002; Inglis 2002; Wilkinson et al 2002).

The cost-effectiveness of ChEI therapy has been questioned, particularly given the high direct cost of these medications (Clegg et al 2001, 2002; Fillit and Hill 2004; Curtiss 2005; Loveman et al 2006). Outcome evaluations should also consider the quality of life of the patient and of caregivers, and the importance of developing a quality of life instrument for both (Loveman et al 2006). It is difficult to quantify benefits as reported in the literature, since improvements in tests such as the ADAS-cog (Alzheimer's Disease Assessment Scale cognitive subscale) may not be reflected in changes in daily life (Clegg et al 2001). Because there is currently no cure for AD, one cannot expect the initial cognitive improvement observed in the first few months of therapy to be sustained indefinitely. However, one should expect that some patients who are treated early and persistently with AD medications will show less evidence of behavioral and cognitive deterioration over a period of time than one would expect in the absence of pharmacotherapy, and less decline over the long term (Geldmacher et al 2006). By reducing cognitive and functional declines over time, long-term therapy may enable patients to stay at home longer and decrease the burden faced by patients, caregivers, and society (Geldmacher 2003). Geldmacher et al (2003) reported that taking donepezil for 9–12 months delays nursing home placement. Hill et al (2002) demonstrated lower costs for 204 AD patients in a large Medicare managed care plan on donepezil compared with 204 matched patients not receiving therapy, where annual costs of prescriptions and medical services were $3,891 lower for the study group. Longer-term therapy (≥270 days) also achieved lower costs compared with shorter-term therapy. In our study population, 686 (57.3%) patients remained on therapy for 12 months, a figure that may represent cost savings considering the potential outcomes described above.

Several limitations of this study can be described.
Clinical data regarding diagnosis, doses used, and the exact reason for discontinuation were lacking. Prescription dispensing data were used to evaluate persistence, and filling a prescription does not ensure that the drug was actually consumed. Nonetheless, refill data are considered more objective than self-report, which can overestimate compliance (Choo et al 1999), and are a useful tool for assessing drug use in population-based studies (Steiner and Prochazka 1997). We could not account for the use of samples or for hospitalizations during the follow-up period. However, we believe that our criterion of 6 months without a prescription being dispensed for classification as nonpersistent mitigates the potential influence of these factors, because it would be difficult to obtain drug samples covering such a long period of time, and it is a long period for a continuous hospitalization. Severity of AD was not assessed, but including newly treated patients in the study addresses this concern to some degree, and these medications were approved only for use in mild to moderate disease as of the time of the study. We were unable to control for potential confounders pertaining to patient comorbidities, and 34% of the population had missing values for race. Additionally, as all the patients were enrolled in a Medicaid program, they are expected to have lower incomes and no co-payments for medications. This might limit the generalizability of the results to other populations.

While the improvements gained with the use of ChEI medications may be small or modest, sustained benefits of therapy can be realized only by patients who persist with therapy. Yet our results indicate that persistence rates with these medications are quite low. Persistence rates were higher for donepezil users compared with those who received other ChEI medications beyond 12 months; yet given the marginal difference and the limitations of our data source, we are unable to conclude that this difference suggests superiority of donepezil.

Conclusions
We found similar persistence rates for each of the three available ChEI medications, with a discontinuation rate of 42.7% at 12 months, and those dispensed donepezil were slightly more likely to persist at 24 months. While we were not able to ascertain whether discontinuation was due to lack of efficacy or lack of tolerability, our analyses revealed that nearly half of patients failed to persist with ChEI therapy for at least 12 months. Our findings underscore the limitations of the ChEI medications and the urgent need for effective and tolerable therapeutic options for patients having dementia. From a drug policy perspective, consideration of the utility of ChEI medications should include both results from clinical trials and insights from observational studies such as ours, which reveal that for many patients ChEI medications cannot be relied upon to provide longer-term benefit in managing dementia. Reports describing measures of therapy persistence can be important to caregivers and patients in forming expectations for pharmacotherapy.
Adoption and Impact of the Improved Fallow Technique on Cotton Productivity and Income in Zambia

An improved fallow is a soil fertility agroforestry technique that has commonly been used in the staple maize production systems of Zambia and sub-Saharan Africa. Several studies have assessed the adoption and impact of the improved fallow on maize production. Generally, it has been observed that though the improved fallow does increase maize yields, its efficacy on welfare in terms of increased income is low. The use of the technique on cash crops that could significantly contribute to household welfare has rarely been investigated. This study assessed the factors affecting the adoption and impact of improved fallows on a commonly grown cash crop, cotton, in the cotton growing provinces of Zambia. The study used a sub-sample (N=1206) of the nationally representative 2014/15 Rural Agricultural Livelihoods Survey (RALS) data, which was randomly collected by the Indaba Agricultural Policy Research Institute (IAPRI) and the Central Statistical Office (CSO) of Zambia. The determinants of improved fallow adoption among the cotton farmers were examined using the probit model, while the impact of the technique on cotton production and income was evaluated using propensity score matching and endogenous switching regression models. Socioeconomic factors significantly increasing the probability of improved fallow adoption included increases in the age, education level, and per capita productive assets of the farmer, in addition to the area under cotton production and the distance of the homestead to the market. Institutional factors found to increase the farmer's likelihood of adopting the improved fallow in the cotton production systems included farmer membership in a cooperative, receiving improved fallow seedlings from government projects, and having information on agroforestry tree species. On the other hand, an increase in land size per capita was found to negatively affect the likelihood of improved tree fallow adoption. Impact estimates showed significant cotton yield and income increases as a result of adopting the technique. The continuous provision of information on relatively new techniques such as improved fallows, preferably in farmer-organized groups, and support towards the provision of the technique's planting materials are some of the areas requiring government and NGO attention. In addition, the study recommends that farmers' formal education levels should be enhanced and that improved tree fallows should also be explicitly promoted on cash crops that have similar agronomic requirements to maize, such as cotton.
Keywords: improved tree fallows, adoption, probit model, propensity score matching, endogenous switching regression model, Zambia

Introduction and Background
Deforestation and land degradation are some of Zambia's key environmental issues (Vinya et al., 2011). Deforestation rates are significant in Zambia, with approximately 300,000 ha of forest cover lost per year (Day et al., 2014). Apart from wildlife reduction, loss of biodiversity, and lost ecosystem value, land degradation is one of the key environmental problems in Zambia linked to rampant deforestation (Slunge, 2010). This problem constrains poor households' income opportunities by lowering agricultural productivity and access to various non-timber products (Slunge, 2010). Agroforestry systems of soil improvement can be an efficient strategy to ease the problem of land degradation and nutrient depletion, and therefore to address food security issues, by potentially improving crop productivity, sustaining crop yield increases, diversifying smallholder farmers' income, and protecting the environment (Leakey, 2010). Agroforestry systems in the form of improved tree fallows, also commonly referred to as fertilizer tree fallows, can help farmers improve their yields in addition to improving the microbiological, chemical, and physical conditions of the soil. In most cases, the fallows can also control weeds and are a source of useful by-products such as firewood and medicine (Ajayi et al., 2005).

The improved tree fallow technique is an ecologically robust approach to soil fertility improvement, composed of fast-growing, mostly nitrogen-fixing trees of Faidherbia albida, Sesbania sesban, Gliricidia sepium, Tephrosia vogelii, and Cajanus cajan, that allows a soil restoration period as short as 2-3 years. Thereafter, farmers can grow their crop on formerly improved fallow plots for the next 3-4 years without applying any fertilizer. Agroforestry technologies are cheaper and do not require the direct cash expenses associated with mineral fertilizers (Ajayi, 2007). However, unless farmers widely adopt these technologies as part of their farming system, the potential benefits of agroforestry for livelihoods and the environment will not be realized. Despite positive results from on-station and controlled field experiments, farmer uptake of improved tree fallow technologies in Zambia has been generally dismal (Kuntashula and Mungatana, 2014).
In Zambia, most research and evaluation studies (Ajayi, 2003; Franzel, 2004; Kuntashula and Mungatana, 2014) on the adoption and impact of improved tree fallows have concentrated on staple maize yields and the associated income. The use of improved tree fallows, and their impact, on cash crops other than maize, such as cotton, which has similar nutrient demands to maize, has rarely been investigated. A rigorous literature search shows that the experimentation and promotion of the technology has largely focused on maize as the beneficiary crop for the nutrients fixed by the fallows. Since maize is grown mainly for subsistence requirements, the impact estimates of the technique on outcomes such as household income could be underestimated. Moreover, if farmers observe relatively small impacts from using the technique on a crop that does not produce a significant cash outlay, the adoptability potential could be low. Will the impact of the technology on cash crops be more pronounced, and hence encourage adoption? Thus, the main objective of this study was to determine the impact of improved tree fallows on cotton production, a crop grown solely for cash, in selected cotton producing areas of Zambia, and to identify the determinants of this adoption. This study contributes to the literature on improved fallow adoption and impact estimation in two ways. First, as far as our literature search is concerned, this is the only study in Zambia that has assessed the adoption of improved tree fallows on a cash crop such as cotton. Secondly, to ensure high quality impact estimates that account for endogeneity bias, only matched samples of adopters and non-adopters are subjected to the robust endogenous switching regression impact estimation. Results show that factors such as membership in a cooperative, receiving improved tree fallow seedlings from government projects, and having information on agroforestry tree species, as well as farmer characteristics such as increased age of the household head, education level of the household head, productive assets per capita, and area of the cotton field, have an effect on adoption of the technique in cotton production. Impact estimates proved the technique's efficacy in increasing both cotton yields and income from the crop.

Data and Data Sources
The study used a sub-sample (N=1206) of the nationally representative 2014/15 Rural Agricultural Livelihoods Survey (RALS) data, which was randomly collected by the Indaba Agricultural Policy Research Institute (IAPRI) in collaboration with the Central Statistical Office (CSO) and the Ministry of Agriculture (MA) between June and July 2015. For the nationally representative sample, the CSO draws the sample from all the districts in Zambia. For sampling purposes, the CSO subdivides each administrative division of a district into Census Supervisory Areas (CSAs) and Standard Enumeration Areas (SEAs). Each SEA contains between 100 and 150 households. A total sample of 680 CSAs is allocated nationally, to each province and district proportional to its size in terms of households. About 20 households are randomly selected from each of the 680 SEAs in the sample (RALS12 Sampling Manual).
Recognizing the fact that not all regions are dominant cotton producing areas in Zambia, we imposed an inclusion condition that the households in our analysis sub-sample should come from the major cotton producing areas. With this requirement, the study used a sub-sample of 1,206 households (out of the RALS15 sample of 7,934) from Central, Eastern, Muchinga, and Southern provinces, which were found to be the major cotton growing provinces in Zambia.

Farmer and Household Characteristics
Studies by Ajayi et al. (2003), Ajayi et al. (2006), Gladwin et al. (2002), and Kuntashula et al. (2004) showed that the variables age, availability of information about the technology, the technology's perceived relative advantage and usefulness, land or farm size, and tenure influence a farmer's adoption of agroforestry in general, and improved fallows in particular. Other factors which increase the probability of adoption of improved fallows among farmers include the level of formal education and the level of environmental awareness (Kwesiga et al., 2003). In a review of gender studies and agricultural productivity in Sub-Saharan Africa (Quisumbing, 1996; Thapa, 2009; Peterman et al., 2010; Ragasa et al., 2013), it was shown that male-headed households were more likely to adopt new technologies compared to their female-headed counterparts. Jera & Ajayi (2008) and Kassie et al. (2012) contend that females respond less favorably to new technology compared to male-headed households, though some female-headed households are enthusiastic and may be equally willing to try new technology. Some of the limitations affecting the adoption of agroforestry technologies such as improved fallows include the labour required for establishing the trees every year and dependency on late rainfall for the trees to become established (Thangata et al., 2008). Further, Kuntashula and Mungatana (2015) showed that farmers with access to large quantities of inorganic fertiliser are less likely to adopt improved fallows. This is because improved fallows and inorganic fertiliser are direct competitors in the provision of soil fertility. A study by Matata et al. (2010) on socio-economic factors influencing the adoption of improved fallow practices among smallholder farmers in Western Tanzania found that lack of awareness of improved tree fallows and unwillingness or inability to wait for two years are the major limiting factors of improved tree fallow adoption.

Market Access and Institutional Factors
A study by Mafongoya et al. (2006) in Zambia on the impact of improved tree fallow technology showed that to make a viable impact, agricultural technology innovations should be directed to the real needs of farmers in significant locations, through active encouragement of user modification and adaptation of the technology. The study also showed that adoption of the technology by farmers is not a direct association centered wholly on technological characteristics, but is influenced by several factors, including institutional and policy factors such as fertilizer subsidies, spatial and geographical factors, and household-specific variables. Nyoka et al. (2011) discovered that inadequate availability of seed and seedlings is one of the barriers to adoption of improved fallows. Additionally, according to Namwata et al.
(2010), extension contact has a significant positive impact on the adoption of agricultural technologies. The role of extension contact in influencing adoption is similarly reported elsewhere (Kwesiga et al., 2003; Solomon et al., 2011; Ayinde et al., 2010). These studies showed that the frequency of contact with extension agents influences farmers' technology adoption decisions. Distance to both input and output markets could also be a significant contributor to adoption of improved fallows (Kuntashula and Mungatana, 2015). Haggblade et al. (2004) indicated that while economic considerations and the short-term profitability of renewable soil fertility replenishment technologies generally increase the probability of their adoption, economic models alone do not fully explain farmers' adoption behaviour regarding these technologies. Farmers' adoption decisions appear to be guided by their household's level of resource endowment and the prevailing social context, such as customs, obligations, and beliefs, which are strongly affected by factors such as farmers' formal education level, age, and family size. Beliefs hypothesized to affect farmers' adoption decisions include belief in witchcraft as a route to success and belief that prayer matters more than hard work for success.

Regional Characteristics

The spatial and geographical location of a farmer significantly influences adoption of agricultural technologies (Mafongoya et al., 2006). Regions differ in several biophysical and climatic factors, which influence the performance of some agricultural technologies such as improved fallows. For instance, Nyanga et al. (2011) and CFU (2007) contend that conservation agriculture (which encompasses improved fallows) cannot do well in certain high-rainfall regions of Zambia.

Factors Hypothesized to Affect Adoption of Improved Fallows

Factors hypothesized to influence adoption, and hence the impact, of improved fallows in the key cotton-growing areas are shown in Table 1. Among the socioeconomic variables, the age of the household head was expected to have either a positive or a negative influence on adoption of improved fallows: older farmers could have the relevant experience in adopting technologies, while younger farmers could be more enthusiastic about trying out new ideas. The gender of the household head was also hypothesized to have an ambiguous effect on adoption of improved fallows in cotton production, for the reasons discussed in the section above. The formal education level of the household head was expected to have a positive influence on adoption of improved fallows. The marital status of the household head was hypothesized to have an ambiguous relationship with adoption of improved fallows. It was also expected that full-time labour equivalence, area under cotton production, total land size, assets, secure land tenure, and quantity of fertiliser used would all positively influence farmers' decisions to adopt the technology (Table 1).
Among the institutional factors, the following variables were hypothesized to have a significant influence on adoption of improved fallows in cotton-producing areas. Access to information on agricultural goods' prices was hypothesized to have an ambiguous relationship with adoption. Membership in network groups such as a farmer group, a savings group, and/or a women's group was expected to have a positive effect on improved tree fallow adoption, as was receiving improved fallow seedlings from the government. Distance to the market or town was expected to negatively influence adoption of improved tree fallows, while receiving an extension message on the technology was expected to increase the probability of adoption. Cultural factors expected to affect the probability of adoption of improved fallows included trust in prayers and trust in witchcraft for success; both had an ambiguous expected effect on the adoption potential of improved fallows in the major cotton-producing areas of Zambia (Table 1).

The Probit Model

The behavioural preference of the cotton farmers to adopt or not adopt improved tree fallows was analyzed using a discrete choice probit model for binary (yes, no) responses, referring to adoption and non-adoption, respectively. The probit model is a statistical probability model with two groups in the dependent variable (Liao, 1994), founded on the cumulative normal probability distribution. The binary dependent variable $Y_i$ takes the value of either one or zero (Aldrich and Nelson, 1984); in this study it took the value of one for adoption and zero for non-adoption. Following Greene (2011), the probit model is generally specified as:

$$P_i = \Pr(Y_i = 1 \mid X_i) = \Phi(\beta_0 + \beta' X_i)$$

where $P_i$ is the probability that an observation with particular characteristics falls into the adopter group, $Y_i$ is the observed outcome of the binary choice problem, $X_i$ are the explanatory variables for individual $i$, $\beta_0$ is the constant, $\beta$ are the regression coefficients (the impact of changes in $X$ on the probability), and $\Phi(\cdot)$ is the cumulative distribution function of a standard normal random variable. In this study, the relationship between a specific explanatory variable and the outcome probability was interpreted by means of the marginal effect, which gives the ceteris paribus effect of changes in a regressor on the outcome. The regression coefficients of the explanatory variables indicate the significance and direction of each variable's influence on a farmer's adoption of improved tree fallows. The marginal effect associated with a continuous explanatory variable $x_k$, holding the other variables constant, can be derived as follows (Greene, 2011):

$$\frac{\partial P_i}{\partial x_k} = \phi(\beta' x)\,\beta_k$$

where $\phi(\cdot)$ is the probability density function of a standard normal variable evaluated at $\beta' x$, and $\beta' x$ is the product of the row vector of selected covariate values $x$ and the column vector of parameter estimates $\beta$. The marginal effects on dummy variables $d$ refer to discrete changes in the predicted probabilities and are specified differently as:

$$\Delta P_i = \Phi\!\left(\beta' \bar{X} \mid d = 1\right) - \Phi\!\left(\beta' \bar{X} \mid d = 0\right)$$

where $\bar{X}$ represents the means of all the other variables in the model (Greene, 2011). The statistical package STATA 14 was used to implement the probit analysis.
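To make the estimation step concrete, the following is a minimal sketch of the adoption probit and its marginal effects in Python using statsmodels, rather than the authors' STATA 14; the DataFrame `df` and the column names are illustrative assumptions, not the study's actual variable names.

```python
import statsmodels.api as sm

# Hypothetical covariates standing in for the Table 1 variables.
covariates = ["age_head", "educ_years", "cotton_area_ha",
              "assets_per_capita", "dist_market_km",
              "got_seedlings", "got_fallow_advice"]

X = sm.add_constant(df[covariates])        # df: survey data; add intercept
probit = sm.Probit(df["adopted"], X).fit() # "adopted": 0/1 adoption dummy

# Average marginal effects; dummy regressors are treated as discrete
# 0 -> 1 changes, matching the specification in the text above.
print(probit.get_margeff(at="overall", dummy=True).summary())
```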
Propensity Score Matching

Propensity score matching and endogenous switching regression were used to estimate the impact of improved fallows on cotton production. In this context, the propensity score is the probability of adopting improved tree fallows conditional on the covariates, or observable characteristics, of the farmers, $X_i$. Essentially, the propensity score matching framework matches observations of adopters and non-adopters based on the predicted propensity of adopting a superior technology (Rosenbaum and Rubin, 1983; Heckman et al., 1998; Smith and Todd, 2005; Wooldridge, 2005). The idea of propensity score matching (PSM) is to match adopting and non-adopting farmers who, based on observables, have a very similar probability of adopting the improved tree fallow technique. With such matching, the difference in the outcome variable can be attributed to the effects of adoption. In other words, the propensity score reflects the probability that a cotton farmer adopts improved tree fallows given his or her observed characteristics. Following Rosenbaum and Rubin (1983), the propensity score can be expressed as:

$$p(X) = \Pr(D_i = 1 \mid X) = E(D_i \mid X)$$

where $X$ is a vector of the covariates postulated to affect adoption of improved tree fallows, $D_i$ is a dummy variable equal to 1 if the cotton farmer adopted improved tree fallows and 0 otherwise, and $E(\cdot)$ is the expectation operator. The average treatment effect on the treated, or adopters (ATT), of the technique was then calculated as the mean difference in outcomes (crop yields and income) between the adopters and matched non-adopters. Letting $Y_1$ and $Y_0$ be the yield or income outcomes of adopters and non-adopters, respectively, and $T$ be an indicator variable equal to one for adoption and zero for non-adoption, the ATT is:

$$ATT = E(Y_1 - Y_0 \mid T = 1) = E(Y_1 \mid T = 1) - E(Y_0 \mid T = 1)$$

where the expectation is over the difference between the outcome under treatment (adoption, $T = 1$) and the counterfactual outcome had the technology not been adopted ($T = 0$). Two assumptions are required for successful estimation of the impact of a technology using the propensity score. The first is the overlap, or common support, assumption, which states that for each value of $X$ there is a positive probability of being both an adopter and a non-adopter of improved fallows. The second is conditional independence, which requires that there exists a set $X$ of observable covariates such that, after controlling for these covariates, the potential outcomes are independent of treatment status. Under these two conditions, within each cell defined by $X$, assignment to the technique is as good as random, and the outcomes of control households can be used to estimate the counterfactual outcomes of the adopting households in the absence of treatment (Nannicini, 2007). Several models were specified until the most comprehensive and robust specification, one that fulfilled the balancing tests and established the common support region, was obtained. To ensure robustness within the PSM estimation, two matching algorithms were used, nearest neighbour and kernel matching, which offer a trade-off between matching quality and estimator efficiency (Caliendo and Kopeinig, 2008).
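As an illustration of the matching logic, here is a minimal nearest-neighbour PSM sketch of the ATT computation in Python; the logistic propensity model, single-neighbour matching, and variable names are simplifying assumptions, and the balancing tests and kernel matching used in the paper are omitted.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import NearestNeighbors

def att_nearest_neighbour(X, treated, outcome):
    """ATT = mean(Y_adopter - Y_matched_control), matching each adopter
    to the non-adopter with the closest propensity score p(X)."""
    pscore = (LogisticRegression(max_iter=1000)
              .fit(X, treated).predict_proba(X)[:, 1])
    t, c = treated == 1, treated == 0
    # For each adopter, find the non-adopter with the closest score.
    nn = NearestNeighbors(n_neighbors=1).fit(pscore[c].reshape(-1, 1))
    _, idx = nn.kneighbors(pscore[t].reshape(-1, 1))
    return np.mean(outcome[t] - outcome[c][idx.ravel()])

# e.g. att_yield = att_nearest_neighbour(X, adopted, cotton_yield_kg_ha)
```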
Endogenous Switching Regression (ESR)

The endogenous switching regression was used to control for unobservable characteristics that could bias our results. The ESR model consists of a selection equation and two continuous regressions that describe the farmer's behaviour under the two regimes of adopting or not adopting the technology. The selection equation in this study was specified as:

$$I_i^* = \gamma X_i + \alpha Z_i + \varepsilon_i, \qquad I_i = 1 \text{ if } I_i^* > 0, \quad I_i = 0 \text{ otherwise}$$

where $I_i^*$ is the unobservable (latent) variable for technique adoption and $I_i$ is its observable counterpart, the dependent variable, which equals one if the farmer has adopted and zero otherwise; $\gamma$ and $\alpha$ are vectors of parameters; $X_i$ is a vector of exogenous variables also included in the two outcome equations below; and $Z_i$ is a non-stochastic vector of variables that explain only the selection process and have no direct effect on the outcome. These variables, also referred to as instruments, are essential for identification. $\varepsilon_i$ is a random disturbance associated with adoption of the technology (Maddala and Nelson, 1975). The two outcome regression equations for the adoption and non-adoption regimes were defined as:

$$Y_{1i} = \beta_1 X_{1i} + \epsilon_{1i} \quad \text{if } I_i = 1, \qquad Y_{2i} = \beta_2 X_{2i} + \epsilon_{2i} \quad \text{if } I_i = 0$$

where $Y_{ji}$ are the outcome variables (crop yield or crop income) in the continuous equations, $\beta_j$ are vectors of parameters, and $\epsilon_{1i}$ and $\epsilon_{2i}$ are random disturbance terms. The selection equation and the outcome regression equations were estimated jointly using full information maximum likelihood (FIML) (Lokshin and Sajaia, 2004). After estimation, the endogenous switching regression model was used to compare the various conditional expected outcomes of the farm households.

Socio-economic Characteristics of the Cotton Farmers in the Study

To determine the statistical differences in the sample's socio-economic characteristics, a two-sample t-test was carried out between improved tree fallow adopters and non-adopters. Socio-economic characteristics were grouped under four broad categories: farm household characteristics, market access and institutional characteristics, cultural factors, and regional characteristics, as shown in Table 2. Table 2 has three major columns describing the total sample, adopters of improved tree fallows, and non-adopters; within these columns are the variable means and their standard deviations. Estimates showed that 17.2% of the 1,206 cotton farmers adopted improved tree fallows. The characteristics that differed significantly between adopters and non-adopters, that is, variables whose t-statistics were greater than two, included the formal education level of the household head, land size per capita, total landholding size, cotton yield, total cultivated land, tropical livestock units, receiving seedlings from the government, distance to the market, and receiving advice on improved fallows. In terms of location, 5%, 71%, 14%, and 11% of the adopters were located in Central, Eastern, Muchinga, and Southern Provinces, respectively; the corresponding shares of non-adopters were 11%, 72%, 10%, and 7.2%. The average land size per capita in the whole sample was about 0.73 hectares, with 0.53 hectares for adopters and 0.78 hectares for non-adopters. The average quantity of fertiliser applied in the whole sample was 162.04 kg, with 106.48 kg applied by adopters and 174.11 kg by non-adopters.
The average landholding size was 4.2 hectares, with 2.93 hectares for adopters and 4.48 hectares for non-adopters. Non-adopters held larger land areas, which may in the first place have made it difficult for them to plant and manage large areas of improved fallows. The average cotton yield for the whole sample was 1,078 kg/ha; the average cotton yields for adopters and non-adopters were 1,290.5 kg/ha and 1,035 kg/ha, respectively. The average total cultivated land for the whole sample was 3.0 ha, with 2.44 ha for adopters and 3.07 ha for non-adopters. Non-adopters had significantly higher livestock holdings than adopters, suggesting that cotton farmers with more livestock are generally less likely to adopt improved fallows, perhaps because livestock are devoted to providing labour for conventional practices such as ploughing. However, this finding does not agree with results from studies by Kassie et al. (2012), Phiri et al. (2004), and Keil et al. (2005). About 4% of the total sample received improved tree fallow seedlings from the government, compared with approximately 10% of adopters and 0.2% of non-adopters.

The average distance to the market for the whole sample was 46.0 kilometres, while the average distances to the market for adopters and non-adopters were 57.72 kilometres and 44.2 kilometres, respectively. About 44% of household heads in the whole sample received advice on improved fallows: on average, 60% of adopters and 41% of non-adopters. The finding that availability of information on the technology and extension services has a positive impact on farmers' adoption of a technology has been acknowledged and documented in many studies, such as Solomon et al. (2011), Ayinde et al. (2010), Namwata et al. (2010), Odoemenem and Obinne (2010), Matata et al. (2010), Kwesiga et al. (2002), Boahene et al. (1999), and Omoregbee (1998).

The whole sample's average cultivated area for cotton was about 0.93 hectares; for adopters it was 0.90 hectares and for non-adopters 0.93 hectares. The slightly larger cotton areas of non-adopters might, in the first place, have made it seemingly difficult for them to plant and manage larger areas of improved fallows. Cotton yield was statistically higher for adopters than for non-adopters. This result is similar to the findings of Kuntashula and Mungatana (2013), Quinion et al. (2010), Ajayi et al. (2009), Ajayi et al. (2007), Franzel (2004), and Place et al. (2002), which show that improved fallows increase crop yields, though not specifically cotton. These studies motivated the present work, since no study had yet evaluated the efficacy of improved fallows on cotton yields.
Overall, the average household size for the whole sample, adopters, and non-adopters was about 6 people. Household heads had on average about 6 years of formal education; adopter and non-adopter household heads averaged 6 and 5 years, respectively. The full-time labour equivalence for the whole sample was about 6 workers. In terms of marital status, about 3% of households in the whole sample were single-headed, as were 3% of adopters and 3% of non-adopters. The average age of household heads in the whole sample was 46.0 years, with adopters and non-adopters averaging 45 and 46 years, respectively, as shown in Table 2. About 85% of the total sample were male-headed households. About 2.0% of the whole sample, for both adopters and non-adopters, held formal land tenure. The average value of productive assets per capita was about ZMW 2,730 for the whole sample, ZMW 2,800 for adopters, and ZMW 2,460 for non-adopters. The average value of cotton sales at actual prices was ZMW 2,501.8 for the whole sample, ZMW 2,986 for adopters, and ZMW 2,401 for non-adopters. On average, about 1% of the total sample had access to information on agricultural goods' prices. Other socio-economic characteristics included in this study were membership in a farmer group, membership in a savings group, membership in a women's group, belief in prayer rather than hard work for success, belief in witchcraft to become successful, and the location of the farmer (regional characteristics). Despite statistical insignificance for some variables, nearly all variables had similar mean values, as seen in Table 2; it is therefore apparent that the two groups were comparable in their characteristics. This observation motivated the need to estimate the impact of improved fallows in a more robust way. Note: a TLU (Tropical Livestock Unit) is an animal unit referring to an animal of 250 kg live weight, used to aggregate diverse species and classes of livestock as follows: bullock: 1.25; cattle: 1.0; goat, sheep, and pig: 0.1; guinea fowl, chicken, and duck: 0.04; turkey: 0.05 (compiled after Jahnke, 1982).

Factors Affecting the Adoption of Improved Fallows among Cotton Farmers in Zambia

Estimates of the factors affecting adoption of improved tree fallows in cotton production, from the probit regression, are shown in Table 3. Significant factors that positively influenced farmers' adoption of improved fallows included: the age of the household head, the education level of the household head, the area under cotton production, the value of productive assets per capita, distance to the market in kilometres, receiving improved tree fallow seedlings from the government, receiving advice on improved fallows, and the farmer's location. Landholding size per capita negatively influenced adoption of improved fallows among the cotton farmers.
An increase in the age of the household head by one year was found to increase the probability of improved fallow adoption by 4.4%. One could speculate that as farmers grow older they accumulate experience that makes them more open to trying productive and sustainable technologies, not only for staple foods but also for cash crops such as cotton. An increase in education by one year would increase the probability of improved fallow adoption by 3.2%. Improved fallows are generally regarded as a knowledge-intensive technology, so more educated farmers are more likely to adopt them. Other studies (Matata et al., 2010; Kwesiga et al., 2003) have also shown a positive relationship between education and adoption of technologies. An increase in the cultivated area for cotton by one hectare increases a farmer's probability of improved fallow adoption by 9.9%. This finding could imply that adopters of improved fallows in the cotton-producing areas are doing so out of a need to capture the maximum soil-fertility benefits for cotton production. When a household's value of productive assets per capita increases by one thousand Zambian Kwacha, the probability of improved fallow adoption increases by 1%. Generally, productive assets broaden the range of farming enterprises in which a farmer can engage. For example, farmers who own oxen can cultivate larger pieces of land within a short time, or can hire out oxen for extra resources to pay for labour or purchase other inputs. Farmers who own productive assets are therefore more likely to try out new technologies such as improved fallows; earlier studies by Keil et al. (2005) and Phiri et al. (2004) postulated a similar relationship. An increase in distance to the market increases the probability of adoption of improved tree fallows by 0.4%. It appears that farmers in the remotest areas are more likely to take up the technology than those near trading centres, probably because farmers far from markets have fewer soil-fertility options.

As expected, receiving seedlings from government projects and receiving advice on the improved fallow technique increase the probability of improved fallow adoption, by 62.4% and 15.6%, respectively. It should be noted that some farmers who received seedlings and advice on improved fallows did not adopt the technique. With the culture of input subsidies still widespread among farmers, the free provision of inputs such as seedlings would likely increase the adoption potential of improved fallows. Similar results were obtained by Nyoka et al. (2011) and Kabwe (2010). The positive relationship between receiving improved fallow advice and adoption of the technique underscores the importance of knowledge sharing about technologies taken to farmers. Several studies (Mwase et al., 2015; Nyoka et al., 2011; Solomon et al., 2011; Kwesiga et al., 2003; Adesina et al., 2000) have shown a positive relationship between information on a technology and its adoption. In terms of the farmer's location, the results showed that farmers in Muchinga Province were significantly more likely to adopt improved tree fallows in cotton production than those in Central Province, the base region: cotton farmers in Muchinga Province are 37% more likely to adopt improved fallows.
An increase in land size per capita was found to significantly reduce adoption of improved fallows among cotton farmers: an increase in landholding size per capita by one hectare reduces the probability of improved fallow adoption by 27.3%. The negative relationship may arise because larger landholdings are harder to manage, making farmers less likely to venture into additional farm activities such as planting improved fallows.

Cotton Yields and Sales Differences between Adopters and Non-adopters of Improved Fallows

The cotton yields and sales of improved fallow adopters and non-adopters are shown in Table 4. Descriptive analysis shows significant differences in cotton yields and the value of cotton sales between adopters and non-adopters, with adopters having significantly higher cotton yields and revenue from cotton sales. Without controlling for confounding factors, however, it is difficult to attribute these differences to improved fallow adoption.

Impact of Improved Fallow Adoption after Controlling for Observable Factors

Of the many existing impact evaluation methods, this paper favours the PSM model because it does not depend on functional form or distributional assumptions, and it compares the observed outcomes of technology adopters with the outcomes of counterfactual non-adopters (Heckman et al., 1998). The propensity scores for improved tree fallow adopters and non-adopters were estimated using the probit model. Several PSM specifications were tried until the most comprehensive and robust specification, one that fulfilled the balancing tests and established the common support region, was obtained. The PSM model results for improved fallow adoption are shown in Table 5. Variables significant for the estimation of the propensity score included: age of the household head, formal education level of the household head, area planted to cotton, distance to the market in kilometres, receiving improved fallow seedlings from government projects, receiving advice on improved tree fallows, having information on agroforestry trees, land size per capita, and being located in Muchinga Province. All observations that did not meet the common support condition were dropped from the analysis, in order to improve the quality of the results obtained from both the PSM and ESR models; about 1,203 observations fell within the common support region. The propensity scores were used to estimate the ATTs of improved fallow adoption in the cotton-producing areas of Zambia. Results from the two matching algorithms, nearest neighbour and kernel matching, are shown in Table 6. In both cases, controlling for observable covariates, the technique increases cotton yields and income from cotton sales. Adoption of improved fallows in cotton production increased cotton yields by about 279 kg/ha and 302 kg/ha when estimated using the nearest neighbour and kernel matching algorithms, respectively. The technique also increased cotton income by about ZMW 822/ha and ZMW 793/ha under nearest neighbour and kernel matching, respectively.
Impact of Improved Fallow Adoption after Controlling for Unobservable Factors

Tables 7 and 8 present the full information maximum likelihood estimates of the endogenous switching regression model. The first and second columns in both tables show the welfare functions (cotton yield and value of cotton sales) for households that did and did not adopt the improved fallow technique, while the last column reports the selection equation for adoption of improved fallows, which contains the instrumental variable. Receiving tree seedlings from the government was highly correlated with adoption of improved fallows but uncorrelated with either cotton yields or income from cotton sales, making it a suitable instrument in both models. The correlation coefficient between the adopters' regime and the selection equation in the cotton yields model was negative and significantly different from zero, meaning that farmers who adopted improved fallows obtained higher cotton yields than a randomly selected farmer from the sample would have achieved. Both observed and unobserved factors therefore influence the decision to adopt improved fallows and the welfare outcome given that decision. The switching regression results for the expected welfare outcomes of yield and income under actual and counterfactual conditions are shown in Table 9. Adopters had significantly higher yields than in their counterfactual state, that is, had they not adopted, suggesting that improved fallows have a positive effect on cotton yields: those who adopted the technique increased cotton yields by 188.5 kg/ha solely as a result of using improved fallows. Earlier studies (Obuoyo and Ochola, 2015; Place et al., 2005; Kuntashula and Mungatana, 2014) obtained similar results on yield increases for crops other than cotton. Similarly, estimates of the value of cotton sales show that adopters gained significantly from using improved fallows, with the causal effect of the technique estimated at around ZMW 500 per hectare. For the non-adopters, the predicted estimates show that they would have obtained lower cotton yields had they adopted the technique; however, the yield difference between non-adopters and their counterfactual state was statistically insignificant, as was the difference in the value of cotton sales per hectare. Given their unobservable characteristics, this could be one reason why the non-adopters did not adopt in the first place. Note: TT, treatment effect on the treated (adopting minus had not adopted); TU, treatment effect on the untreated (had they adopted minus not adopted); BH, base heterogeneity (adopted minus had they adopted); TH, transitory heterogeneity (TT minus TU).
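For readers who want the TT and TU quantities in the note above in symbols, a compact restatement in the standard ESR notation of Lokshin and Sajaia (2004) is sketched below; the $\lambda$ terms are the inverse Mills ratios from the selection equation, and this exact parameterization is our assumption rather than a reproduction of the authors' derivation.

```latex
\begin{align*}
TT &= E(Y_{1i}\mid I_i=1) - E(Y_{2i}\mid I_i=1)
    = X_{1i}(\beta_1-\beta_2) + (\sigma_{1\varepsilon}-\sigma_{2\varepsilon})\,\lambda_{1i},\\
TU &= E(Y_{1i}\mid I_i=0) - E(Y_{2i}\mid I_i=0)
    = X_{2i}(\beta_1-\beta_2) + (\sigma_{1\varepsilon}-\sigma_{2\varepsilon})\,\lambda_{2i}.
\end{align*}
```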
Conclusion and Policy Implications

Evaluation of sustainable land management practices such as improved fallows in sub-Saharan Africa has mostly been conducted on the staple maize crop. We assessed the factors affecting adoption and estimated the causal effect of improved fallows on a cash crop, cotton, among cotton-producing farmers in Zambia, using probit regression and propensity score matching complemented by endogenous switching regression models. Factors identified as positively affecting adoption of improved fallows include membership in a cooperative, receiving improved fallow seedlings from government projects, having information on agroforestry trees, the age of the household head, the education level of the household head, productive assets per capita, the cultivated area for cotton, and distance to the market. On the other hand, an increase in land size per capita was found to negatively affect adoption of improved fallows among cotton farmers.

Both the propensity score matching and the endogenous switching regression suggest that improved fallow adoption significantly increased farmers' cotton yields and the value of cotton sold at actual prices, and hence farmers' cotton income. This was particularly true for adopters of the technique. For non-adopters, predicted results showed that, controlling for unobservables, the technique had no significant influence on these outcome variables. This finding suggests the importance of uncovering the hidden role of unobservable characteristics in influencing the adoption and impact of technologies. Beyond what is observed or measured, more differences are likely to exist between adopters and non-adopters of improved fallows in the cotton-producing areas of Zambia; more detailed anthropological studies might help untangle the underlying factors that influence the impact of improved fallows in cotton production.

Given the positive impacts of improved fallows on cotton production among adopters, the study recommends that the government explicitly promote the technique for cotton production in addition to the common practice of promoting it for maize. This could be done by improving extension services and messages and by continuously training farmers in the use of innovative techniques such as improved fallows. Providing farmers with improved fallow seedlings, alongside increases in productive assets, enhances the probability of adoption; thus, just as with fertiliser subsidies, the government should consider expanding subsidies to improved fallow seedlings. Farmers' formal education levels should also be improved, as this study found that higher formal education increased adoption of improved fallows.

Table 1. Description of variables used in the probit model.
Table 2. Descriptive statistics of the study sample.
Table 3. Estimates of the probit regression model on adoption of improved fallows in cotton-producing areas of Zambia (*significant at the 10% level; **significant at the 5% level; ***significant at the 1% level).
Table 4. Average differences in several outcome variables, including cotton yield (kg/ha), between adopters (n = 208) and non-adopters (n = 998), with mean differences and t-statistics; figures in parentheses are standard errors of the mean (source: authors' own analysis, RALS 2015).
Table 5. Estimated propensity score model results for improved tree fallow adoption.
Table 6. ATT estimation of various outcome variables using nearest neighbour matching.
Table 7. Full information maximum likelihood estimates of the switching regression model on cotton yield.
Table 8. Full information maximum likelihood estimates of the switching regression model on value of cotton sales.
Table 9. Endogenous switching regression model results.
Automating human intuition for protein design

In the design of new enzymes and binding proteins, human intuition is often used to modify computationally designed amino acid sequences prior to experimental characterization. The manual sequence changes involve both reversions of amino acid mutations back to the identity present in the parent scaffold and the introduction of residues making additional interactions with the binding partner or backing up first shell interactions. Automation of this manual sequence refinement process would allow more systematic evaluation and considerably reduce the amount of human designer effort involved. Here we introduce a benchmark for evaluating the ability of automated methods to recapitulate the sequence changes made to computer-generated models by human designers, and use it to assess alternative computational methods. We find the best performance for a greedy one-position-at-a-time optimization protocol that utilizes metrics (such as shape complementarity) and local refinement methods too computationally expensive for global Monte Carlo (MC) sequence optimization. This protocol should be broadly useful for improving the stability and function of designed binding proteins. Proteins 2014; 82:858-866.

INTRODUCTION

Computational protein design has been used to design proteins with new structures or functions. The new functions range from small-molecule binding to specific protein binding to catalytic activity. 1-4 The computational design of proteins that bind reaction transition state models, and ligands more generally, often starts from a set of naturally occurring protein scaffolds of known structure. It proceeds by first identifying placements of the ligand in the scaffolds and second, optimizing the surrounding residues for favorable interactions with the ligand without compromising the overall stability of the protein. The resultant designed proteins are usually inspected by a researcher and modified before they are experimentally tested. These modifications are based on human intuition about protein stability, aggregation, and binding interactions. Sequence changes far from, or facing away from, the designed site are often reverted, and larger residues substituted for smaller ones (very small clashes during fixed-backbone computations may disfavor larger residues, with better packing, from being selected). Automation of these human intervention steps is desirable for systematically optimizing the design process, for reducing the human time required for design, and more generally, for making protein design more broadly accessible. Automation of a process requires a benchmark for evaluation of performance. Several types of benchmarks have previously been described for protein-small-molecule interaction modeling. These include small-molecule docking, prediction of small-molecule-protein interaction affinity, 5-7 and amino acid sequence recovery at natural protein-small-molecule interfaces. 8,9 The problem of how to alter the sequence of a naturally occurring protein to bind a new small molecule is much more challenging, and not directly addressed by existing benchmarks. For example, it is necessary to consider whether an amino acid substitution that increases the apparent binding affinity for a new ligand overly compromises the stability of the protein scaffold.
To guide the automation of human intuition in the manual stages of protein design, we assembled a benchmark set of 51 proteins that tests the ability of a method to recapitulate mutation decisions made by human protein designers in realistic novel-design situations. We also developed a new local sequence optimization procedure that uses a greedy algorithm and allows multiple sampling methods to be carried out in serial using metrics too computationally expensive for global sequence design. We show that the new protocol improves on traditional design methods on the human designer benchmark. Monte Carlo (MC) based Rosetta design together with the novel greedy optimization provide a fully automated pipeline for computational design of enzymes and ligand-binding proteins with minimal human intervention.

Match-design-order benchmark: human design interventions on Rosetta designed proteins

The match-design-order (MDO) benchmark consists of proteins gathered from our protein engineering efforts: design of a de novo Morita-Baylis-Hillman catalyst (MBH prefix) 10; design of a phosphorylated-ester binding protein (1kux1 prefix; Nivon, unpublished); design of a binding protein for digoxigenin (DIG prefix; Tinberg et al.) 4; design of a de novo galactosidase (GA and GF prefixes; Bjelic, unpublished); design of a binder for the fluorophore 3,5-difluoro-4-hydroxybenzylidene imidazolinone (DFHBI; MB prefix; Bick, unpublished); design of a beta-lactamase (BL prefix; Khersonsky, unpublished); and design of a de novo chorismate mutase (dCM prefix; Richter, unpublished). The MDO benchmark is available via the link: http://robetta.bakerlab.org/downloads/ligand_design_benchmarks/MDOBENCH/

Overall the design set differs from the native (match) set by a mean of 19.0 (SD 5.5) mutations (Fig. S1A, Supporting Information). The design set differs from the order (human modified) set by a mean of 10.3 (SD 4.0) mutations (Fig. S1A, Supporting Information). Human designers place fewer mutations than Rosetta does, and show less variance in the number of mutations introduced. The design and order sets of structures differ by a total of 527 mutations, of which 62.4% (328) are reversions to the native sequence identity. The mutations in the order set are not strongly weighted toward hydrophobic or polar residues, with 22.4% going from hydrophobic in the design to polar in the order, and 25.2% going in the opposite direction. Mutations from the design to the order set are slightly more likely to increase amino acid size (54.5%) rather than decrease it (45.5%). The most frequent type of change was a slight size increase of 10-20 Da, for example, adding a methyl group (Fig. S1B, Supporting Information, mass distribution). Mutations made by human designers in the order set range from adjacent to the ligand (3-4 Å distance from residue CA to the ligand) up to the second shell (12-13 Å distant), with a small minority of mutations over 13 Å away (Fig. S1C, Supporting Information, distance distribution). The greedy protocol performs sidechain repacking with a stochastic MC algorithm, and therefore it is not deterministic. To estimate sample variation we tested the best greedy protocol (see Results section, below; ES10_broad2) over five independent runs, giving a mean of 8.75 with SEM 0.03. Because the SEM is small, we report the results of single runs over the full benchmark set, and only draw conclusions from differences at least three times the SEM (0.1 mutations).
Native sequence recovery benchmark for protein-ligand complexes

We chose a representative member from each protein class (binding, immunological, transport, etc.) in the Binding Mother Of All Databases (MOAD) to construct the sequence recovery benchmark. 11 The proteins in MOAD are well resolved (<2.5 Å) with biologically relevant ligands (small organic molecules and cofactors, but not crystallographic additives, salts, etc.) and binding data derived from the literature. The proteins in each class were inspected manually and curated to include only binders of natural ligands in the affinity range of 10 mM or lower. Structures were prepared for Rosetta calculations as described in Supporting Information Appendix A. Our data set was directed specifically toward natural small-molecule binders and excludes enzymes and catalytic antibodies. Small-molecule binding proteins should be evolutionarily optimized only for binding and overall stability, which we can effectively model. Enzyme modeling would require additional information about the functional residues, such as the requirement for a catalytic triad at a specified set of distances from a substrate peptide in a protease. Catalytic antibodies were also excluded from the benchmark as they did not have an evolutionary timeframe over which to evolve, and have less converged sequences. Transition metal-binding proteins were also removed from the benchmark, as these require additional metal-specific interactions with amino acids to be included for optimal performance. The resulting set consists of 51 proteins with ligands, as summarized in Supporting Information Appendix B. The protein-ligand native sequence recovery benchmark is available as part of the standard Rosetta package on github at: Rosetta/main/tests/scientific/biweekly/enzdes_benchmark

Sampling and algorithm

The protein-ligand native sequence recovery benchmark enabled evaluation of new scoring terms and new sampling algorithms for protein-ligand interaction design. These were tested by adding the modifications to the standard Rosetta energy terms one at a time. 12,13 Evolutionary information from a multiple sequence alignment (MSA) was introduced via a position-specific scoring matrix (PSSM) to give a likelihood score to each residue at each position. The MSA implementation uses a PSSM generated by sequence alignment of homologs with a maximum E-value cutoff of 0.0009 using blastpgp. This gives a log-odds score derived from the relative proportion of each amino acid and the prior probability of observing each amino acid. 14 The influence of the PSSM score on sequence recovery was investigated by iterating the PSSM weight over a set of 11 discrete values (1, 2, 3, 4, 5, 10, 20, 50, 100, 200, and 300) in the native recovery benchmark. Energy terms for sidechain repacking are represented in a graph-like data structure with connections between each residue describing their pairwise interactions. 15 The enzyme design protocol 16 is typically run with a higher weight on protein-ligand interactions in the energy graph, so that alterations in these energies play a larger role during MC sidechain repacking steps. This up-weighting is only applied to residues that change identity, and is not applied during minimization, when residue identity is static. However, finding an appropriate value for this protein-ligand interaction adjustment is problematic without a large training set. Here protein-ligand interactions were up-weighted between one- and threefold in increments of 0.2; the default benchmark was always run with a weight of 1.8.
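To illustrate how a PSSM term of the kind described above can enter a design objective, here is a minimal sketch; the data layout and function names are our illustrative assumptions, not Rosetta's internal implementation.

```python
# Sweep values used for the PSSM weight in the benchmark, as listed above.
PSSM_WEIGHTS = [1, 2, 3, 4, 5, 10, 20, 50, 100, 200, 300]

def pssm_term(sequence, pssm, weight):
    """pssm[i][aa]: log-odds of amino acid `aa` at position i, from a
    blastpgp alignment of homologs (E-value cutoff 0.0009). Favorable
    (high) log-odds lower the total energy, so the term is subtracted."""
    return -weight * sum(pssm[i][aa] for i, aa in enumerate(sequence))

def total_score(physical_energy, sequence, pssm, weight):
    # Combined design objective: physical energy plus evolutionary bias.
    return physical_energy + pssm_term(sequence, pssm, weight)
```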
Modulation of the repulsive part of the Lennard-Jones (LJ) potential has been demonstrated to improve sampling and free energy calculations, 17 and a reduced-repulsion "soft" LJ term allows for higher recovery of native sequence during design. Here we test different methods for applying the "soft" LJ potential during design. Deeper sampling of rotamers can be accomplished by increasing the number of MC cycles within a trajectory or by running multiple trajectories in parallel. Rotamer sampling is carried out using MC while the temperature is slowly decreased, in a simulated annealing scheme. Each step in the temperature cycle is an "inner" iteration, with a set number of rotamer sampling steps. The "outer" iterations carry out each "inner" cycle a set number of times while the temperature is varied. A set number of "inner" quench cycles can optionally be performed N times with the multi-cool annealer (MCA): after N independent runs of temperature annealing, the best individual final score is passed on as the output structure. The MCA may allow for better sampling in many cases, due to the stochasticity of an MC trajectory. Here we tested the effect of outer iteration scaling and MCA sampling on sequence recovery in the native sequence benchmark (Supporting Information). In the enzyme design protocol applied here, a "design cycle" is a round of MC rotamer substitution (as described above) followed by gradient minimization. We determined the effect on sequence recovery of increasing the number of design cycles up to five. The number of cycles defaults to two for the enzyme design protocol, which always allows at least one round of sampling with a soft LJ potential while the last cycle is performed with a hard potential. A higher number of design cycles will increase overall sampling, but may lock the structure into energy minima encountered during the earliest cycles, or minimization steps may perturb the backbone and introduce errors. Rotamer sampling can be improved by utilizing the existing sidechain rotamers from the input structure. These rotamers are used until they are swapped for lower energy ones, which eventually leads to the loss of these particular rotamers from the set of allowed conformations.
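As a concrete picture of the inner/outer annealing scheme just described, here is a toy sketch; `propose` and `energy` are illustrative placeholders rather than Rosetta's actual packer API, and the geometric temperature schedule is an assumption.

```python
import math
import random

def anneal(state, energy, propose, outer_iters=10, inner_iters=1000,
           t_start=100.0, t_end=0.3):
    """One annealing trajectory: outer iterations lower the temperature,
    inner iterations attempt single-rotamer substitutions (Metropolis)."""
    current, cur_e = state, energy(state)
    for outer in range(outer_iters):
        # Geometric cooling across the outer cycles.
        t = t_start * (t_end / t_start) ** (outer / max(outer_iters - 1, 1))
        for _ in range(inner_iters):
            trial = propose(current)            # substitute one rotamer
            d_e = energy(trial) - cur_e
            # Metropolis criterion: always accept downhill, sometimes uphill.
            if d_e < 0 or random.random() < math.exp(-d_e / t):
                current, cur_e = trial, cur_e + d_e
    return current, cur_e

def multi_cool(state, energy, propose, n_runs=5, **kw):
    """The MCA idea: N independent annealing runs, keep the best result."""
    runs = [anneal(state, energy, propose, **kw) for _ in range(n_runs)]
    return min(runs, key=lambda r: r[1])
```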
Code version and availability

Rosetta deposited SVN revision 51912 was used throughout the study to enable reproducibility of the results presented here. 15 Sequence recovery calculations over the protein-ligand benchmark were carried out with the enzyme design application (which is used for any protein-ligand design problem, "enzyme" design being accomplished by introducing a set of extra geometric constraints in addition to Rosetta scoring). 16 The final greedy optimization Rosetta protocol (in RosettaScripts format 18 with an example run in Supporting Information Appendix C) for recapitulation of human design intervention is available in the standard Rosetta package: Rosetta/main/source/src/apps/public/enzdes/ES10_broad2.xml

RESULTS AND DISCUSSION

Match-design-order benchmark: recapitulating human design interventions

Native protein sequences have been optimized over an evolutionary timescale for stability and function. Protein design algorithms that optimize overall protein stability can correctly recover many of the native residue identities. 8 Alternative design methods can be evaluated based on the extent of recovery of native sequence over a set of monomeric proteins, and a similar approach can be used to optimize ligand-binding design methods. However, native sequence recovery is an imperfect measure of the performance of a method for designing new small-molecule binding sites. The protein backbone is pre-configured for ligand binding, the second-shell (and further) sidechain interactions are also preconfigured to buttress first-shell interactions, and the ligand is already placed in the optimal conformation and orientation. In contrast, in a novel design scenario, neither the backbone nor the surrounding sidechains are likely to be precisely configured to support the new binding site. A native sequence recovery approach also cannot be used to assess the utility of a bias toward the native sequence, which is often used to reduce the incidence of potentially destabilizing mutations from the native sequence. We have developed a new benchmark of raw Rosetta designs along with the final human-designer modified sequences to test design algorithms in a more realistic context: design of a novel function into a protein backbone structure previously lacking that function. We call this the MDO benchmark. The benchmark consists of 51 protein triplets: (a) a native PDB structure (here the output from the matcher, or "match," with all-native sequence except at important catalytic or binding positions specified in advance); (b) the raw output from Rosetta with designed residues around the ligand of interest ("design"); and (c) the final human-modified sequence, which is often substantially different from the raw design output ("order"). The MDO benchmark consists of proteins gathered from protein engineering efforts in the Baker lab (see Materials and Methods section for details of the proteins and mutations made by human designers).

Algorithm choice and development

We sought an algorithm to recapitulate the changes introduced by human designers over the 51-protein MDO set: essentially a piece of software that would produce an output design as similar as possible to a human designer's sequence. This algorithm should be as general as possible, allowing for hypothesis testing; for example, does filtering using shape complementarity 19 measures between protein and ligand help imitate human behavior? It should allow for complex scoring and sampling methods that require long computation times. Since human designers typically consider residue positions one by one, we chose an algorithm for sequence optimization that tests mutations one by one around the active site (with adjustable sampling and scoring methods) and then incorporates those changes in rank order by score. This protocol for navigating a tree of decisions in a multi-parameter search space is a greedy algorithm, as it evaluates each possibility independently, sorts the possibilities by a selection function, and then takes the best options first. Greedy algorithms may not be able to locate a global optimum, instead getting stuck in a local minimum, but in some cases they very quickly converge on a global optimum. One would expect a greedy algorithm to perform well when the starting sequence is already close to the optimal sequence, but not to do well in an overall sequence optimization problem starting from a random sequence. For the late-stage design optimization problem considered here, the input is already MC-optimized and somewhat close to a global optimum.
Greedy algorithms have been applied to many problems in computational biology including sequence alignment, 20 fragment selection, 21 RNA structure building via a stepwise approach, 22 protein-peptide specificity prediction, 23 and a recent study using a greedy algorithm after an MC rotamer search for sidechain placement. 24 The greedy algorithm applied to the protein design problem is most similar to the Self-Consistent Mean-Field method, 25,26 but with mutations applied in rank order, and without iteration or a check for self-consistency. The protocol uses a variety of easily swappable structure assessment conditions, called filters, and sampling methods, referred to as movers. 18 The protocol operates on a designed structure and examines each position that has been altered from the native structure. Every amino acid point mutant and rotamer state at every position is sampled independently as follows. After rotamer optimization, gradient minimization of all neighbor sidechains within an 8 Å sphere, and a user-defined further optimization (termed the "mover"; e.g., rotamer optimization in a larger sphere, ligand torsion-angle minimization, and others as detailed below), the total energy is stored. Substitutions that fail any user-defined quality filters (e.g., shape complementarity) are eliminated. After all point mutants and/or rotamers have been evaluated, substitutions at each position are sorted by energy, and positions are rank-ordered by the energy of the optimal substitution at each position. Substitutions are combined by first attempting placement of the optimal substitution at the top-ranked position, evaluating the total energy, and accepting the substitution if the total score improves. The substitution at the second-ranked position is then attempted, accepted or rejected, and the process continues until substitutions at all positions have been attempted. Due to the deterministic nature of the algorithm, this approach converges reliably to nearly identical solutions, with slightly more variation when a more aggressive mover is applied. We do not know whether human intuition systematically improves designs, and a difficulty in even formulating an answer to this question is that every human has somewhat different preferences in design. The MDO framework allows us to begin to frame and rigorously test hypotheses about how to improve protein design; without such a benchmark, evaluation of new algorithms is largely anecdotal.

Recapitulation of designer interventions in the match-design-order set

We ran a series of tests using the MDO benchmark to find the optimal mover, filter, and native sequence-favoring weight to use with the greedy protein design refinement protocol. We ran the different protocols on the design set to produce an output set of structures, calculated the average number of mutations between this output set and the order (human modified) set, and used this as the scoring metric. Lower numbers are better, and zero indicates that the protocol has perfectly recapitulated all of the human design decisions in the order set. We also calculated the number of mutations from the output set to the native sequence (match), and the number of mutations to the input design set (designs), to keep track of how many sequence changes the protocol is making. The starting point is 10.3 mutations to the order set and 19.0 mutations to the natives (and 0.0 to the design set, which is the input).
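A condensed sketch of the greedy one-position-at-a-time protocol described above is given below; `evaluate` stands in for the repack/minimize mover plus the quality filters (returning None on a filter failure), and `mutate` for the point-substitution step. These names are illustrative, not Rosetta's API.

```python
def greedy_refine(design, positions, amino_acids, evaluate, mutate):
    # Assumes the input design itself passes all filters.
    base_score = evaluate(design)
    # 1) Score every point substitution at every designable position.
    best_at = {}
    for pos in positions:
        candidates = []
        for aa in amino_acids:
            trial = mutate(design, pos, aa)   # repack + local minimize
            score = evaluate(trial)
            if score is not None:             # passed all quality filters
                candidates.append((score, aa))
        if candidates:
            best_at[pos] = min(candidates)    # best (score, aa) per position
    # 2) Rank positions by the score of their best substitution.
    ranked = sorted(best_at.items(), key=lambda kv: kv[1][0])
    # 3) Accept substitutions in rank order only if the total score improves.
    current, cur_score = design, base_score
    for pos, (_, aa) in ranked:
        trial = mutate(current, pos, aa)
        score = evaluate(trial)
        if score is not None and score < cur_score:
            current, cur_score = trial, score
    return current
```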
We experimented with the use of a favorable weight on native residues through the favor sequence profile (FSP) mover in Rosetta. For example, an FSP weight of 2 gives a bonus of -2 Rosetta Energy Units (REU) to the native residue at each position, while any other residue receives no bonus. The number of mutations to the order set achieves a broad minimum from 1.5 to 3 as the FSP weight is adjusted, centered around 2-2.5 [Fig. 1(a)]. Optimization of a native-sequence-favoring weight with a traditional sequence recovery benchmark is impossible, since the recovery of native sequence would simply increase monotonically with increasing native-sequence weight; the MDO benchmark makes this test possible. We tested a number of different movers in the greedy algorithm for energy minimization upon introduction of each mutation at each position (Table I). The same filter is used in all cases: the shape complementarity (SC) filter with a weight of 25, in addition to the total energy. These movers range from a relatively simple mover (local repack around the mutated residue followed by minimization of the protein-ligand interface) to more complex movers with multiple cycles of repacking and minimization (Table I). A mover of (repack interface with low LJ repulsion → minimize interface → repack with normal LJ repulsion → minimize) has the best performance; more complex movers are not able to improve the number of mutations to the order set (Table I). Larger design shells give a lower number of mutations to the order set. The standard Rosetta design protocol optimized the identity of residues within 6 Å of the ligand, or 8 Å if the residue Ca-Cb vector points toward the ligand. Expanding the design shell to 10/12 Å allows the protocol to alter 3312 residues in 51 proteins, versus 1049 residues in the standard design shell. Human designers tend to make changes outside of the standard design shell, for example, adding backing-up interactions to keep first-shell residues in place; the smaller design shell by definition cannot recapitulate human design decisions outside of the sphere of residues it examines. The best mover produces on average 8.8 mutations from the order set (vs. 10.3 mutations in the starting structures; Table I). For comparison, the same protocol over an expanded shell without any bias toward native sequence produces 17.7 mutations from the order set. We tested the greedy protocol with a variety of filters to find the optimal behavior in the MDO benchmark set for recapitulation of human-designed sequences and found similar behavior for total energy alone or total energy plus an SC filter with a weight of 25 (Table S1, Supporting Information). All other filter combinations gave worse behavior, such as a heavy negative weight on SC or any weight on the SASA filter.

Sequence analysis of outputs from the best greedy protocol

Most sequences in the MDO benchmark are slightly improved, with 0 to 8 fewer substitutions [Fig. 1(b)]. In some cases there is actually an increase in the number of mutations to the ordered sequence [negative numbers in Fig. 1(b)], but in most of these cases the method has simply placed a number of reversions to native that were not placed by the human designer. The best case is MB11, with 15 mutations from the ordered sequence in the input design and only 6 in the output from the greedy protocol [Fig. 1(c); residue identities in the design on the left in blue, residues after the greedy protocol on the right, in orange].
More than half of the residues that are altered have a Cα-Cβ vector pointing away from the ligand. These mutations from wild type are unlikely to favorably impact ligand-protein interaction energies. In this case all nine of the correctly altered amino acid positions are reversions to the amino acid identity in the original scaffold. The case with the least improvement is BL23, with only 5 mutations from the ordered sequence in the input design and 14 in the output from the greedy protocol. Again most of the changes introduced by the greedy protocol are reversions to native, but in this case those changes do not agree with those made in the ordered sequence.

Native sequence recovery benchmark

With results from the MDO benchmark and the new greedy protocol in hand, we sought to optimize the MC-based design protocol used before manual modification or the greedy protocol. This MC-based protocol has previously been optimized for monomeric native proteins, not for protein-ligand interaction design. Our results from the MDO indicated that the preservation of native sequence is important to maintain the stability of engineered proteins. We aimed to optimize the MC enzyme design protocol in Rosetta to minimize the need for sequence reversion in subsequent greedy optimization and, more generally, to improve the quality of design outputs. To optimize the existing MC-based computational design protocol for the protein-ligand design problem, we assembled a protein-small-molecule benchmark set of wild-type (as opposed to computationally designed) protein structures (as described in more detail in the Materials and Methods section and Supporting Information Appendices A and B) with biological ligands, high structure resolution, and measured binding affinity better than 10 mM. The benchmark samples 1041 amino acid positions in 51 proteins, which gives an average of 20 designable residues per protein active site. We used this benchmark to assess protein sequence recovery with different design algorithms or score functions. The overall sequence recovery for the benchmark set with the standard Rosetta enzyme design protocol is 44%. 16

Monte Carlo design algorithm improvement with the native sequence benchmark

General features of sequence recovery benchmark

To quantitatively evaluate how the MC design algorithm behaves with different scoring and sampling methods, we first examine the complexes with the highest and lowest sequence recovery. The best case is the 1DB1 27 complex, in which 22 residues out of 35 are recovered [Fig. 2(a)], for a total sequence recovery of 63%. In the case of the 2PFY 28 complex only 2-3 residues out of 11 are correctly predicted [Fig. 2(b)], with a resulting sequence recovery of only 24%. In general the sequence recovery is correlated with the chemical composition of the active site and the ligand, as the energy function performs better with more hydrophobic amino acids. 8 1DB1 is a nuclear receptor in complex with vitamin D, which is large and relatively hydrophobic [Fig. 2(a)]. 2PFY is an extra-cytoplasmic receptor bound to pyroglutamic acid, which is quite small and polar [Fig. 2(b)].

Incorporating evolutionary information with a position-specific scoring matrix

Protein design onto an existing protein structure can benefit from knowledge of the close evolutionary homologs encoded in a PSSM 29 and included in the energy function as an additional term.
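Concretely, folding a PSSM into design scoring amounts to a per-position bonus for amino acids favored among evolutionary homologs. The sketch below shows one plausible form of such a term; the weight `w`, the `pssm` array layout, and the way it combines with the physical energy are illustrative assumptions, not the exact Rosetta implementation.

```python
import numpy as np

# Illustrative PSSM-augmented score: the physical energy minus a weighted
# log-odds bonus for the amino acid chosen at each designed position.
# pssm has shape (n_positions, 20); aa_index maps residue letters to columns.

AA = "ACDEFGHIKLMNPQRSTVWY"
aa_index = {a: i for i, a in enumerate(AA)}

def pssm_biased_score(sequence: str, pssm: np.ndarray,
                      physical_energy: float, w: float = 1.0) -> float:
    bonus = sum(pssm[i, aa_index[a]] for i, a in enumerate(sequence))
    return physical_energy - w * bonus  # lower is better, as in Rosetta

# Because the PSSM rewards homolog-favored residues rather than the native
# residue per se, sequence recovery peaks at an intermediate weight instead
# of increasing monotonically.
```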
Including a PSSM term provides a relatively large increase in sequence recovery (15%) with only a very small increase in total Rosetta score [Fig. 2(c)]. We observe an optimal PSSM weight above which sequence recovery deteriorates; this behavior is different from a native-sequence-favoring weight, which would simply produce perfect recovery at a high enough level [Fig. 2(c)]. Sequence design for a novel function might benefit less from a PSSM term, although conserved residues that are vital to stability would be preserved using this method. We next explored the combination of the force field and algorithm improvements with each other and with the native sequence and rotamer bias terms. The combination of the best sampling method, MCA, with the Song et al. force field corrections yielded an additional small improvement in sequence recovery (Table II; other combinations did not generally lead to improvements). The best sequence recovery was achieved with a PSSM score term, MCA sampling, and the Song energy corrections (55.4%), and we recommend this combination for most protein-ligand design cases (Table II). The scoring behavior of Song et al. 12 and Leaver-Fay et al. 30 is the default in Rosetta as of git tag @2fac63a via the "talaris2013" scoring function (Supporting Information). Native rotamer inclusion leads to an even better sequence recovery of 56.0%, but as soon as one needs to redesign the active site to introduce a new function (instead of recapitulating the native ligand-binding site as we do here) it is advantageous to use the more general PSSM information instead, with the weight set to one.

CONCLUSION

The human-design benchmark (MDO) is uniquely suited to evaluating the ability of algorithms to recapitulate human intuition during the design of novel function into protein scaffolds. It formalizes a test system for design algorithms, allowing for rigorous hypothesis testing without resorting to individual design examples. Of course we do not know if human intuition improves upon computationally designed proteins. Now, with the greedy algorithm and the MDO benchmark, we can systematically evaluate different human-imitating algorithms (e.g., one emphasizing shape complementarity, another emphasizing solvent-accessible surface area). The optimal greedy protocol combines the best mover, FSP weight, and filters. The sampling in this protocol is local (an attempted mutation is introduced at a given position and only nearby residues are optimized), not global over the entire designed interface as in standard Rosetta MC-based sampling. This protocol should perform well for small-molecule binding proteins. A separate optimization would be required for other problems, such as protein-protein interaction design, with an appropriate benchmark set. The two primary bottlenecks in the production of high numbers (hundreds) of computational designs are the human time required to evaluate and refine each structure, and the cost and complexity of synthesizing large numbers of genes. The greedy protocol reduces the amount of time required to produce each design, while increasing the likelihood that individual designs are stable and functional. The recent developments in array-based DNA synthesis will increase the number of testable independent sequences. 32 In many instances of computational protein design a very small fraction, approximately 1%-2% of designs, is folded and active.
The combination of the greedy optimization protocol and array-based DNA synthesis could significantly increase the chance of success for difficult design challenges.
Preparation and Characterization of Pioglitazone Cyclodextrin Inclusion Complexes

Pioglitazone, a class II Biopharmaceutical Classification System drug with poor water solubility and a slow dissolution rate, may exhibit subtherapeutic plasma drug levels leading to therapeutic failure. In order to improve its water solubility and thus dissolution, a cyclodextrin complexation technique was followed. The phase solubility studies were carried out using three different types of cyclodextrins, viz., β-, methyl-β- and γ-cyclodextrins. The Gibbs free energy was calculated in order to determine the ease of complexation. Binary systems of pioglitazone with cyclodextrins were prepared by the kneading method and the spray drying method. The phase solubility profiles with all three cyclodextrins were classified as AL-type, indicating the formation of 1:1 stoichiometric inclusion complexes. The complexation capability of the cyclodextrins with pioglitazone increased in the order methyl-β > β > γ-cyclodextrin. The Gibbs free energy was found to be in the order γ > methyl-β > β-cyclodextrin. Characterization of the inclusion complexes was done by solubility studies, in vitro dissolution studies, Fourier transform-infrared spectroscopy, scanning electron microscopy, differential scanning calorimetry, and X-ray powder diffractometry studies. Inclusion complexes exhibited higher rates of dissolution than the corresponding physical mixtures and the pure drug. Greater solubility was observed with the spray-dried methyl-β cyclodextrin complexes (2.29 ± 0.001 mg/ml) in comparison to the kneaded methyl-β cyclodextrin complexes (1.584 ± 0.053 mg/ml) and the pure drug (0.0714 ± 0.0018 mg/ml).

responsible for development of insulin resistance leading to type 2 diabetes mellitus. TZDs act through the nuclear hormone receptor peroxisome proliferator-activated receptor-γ (PPARγ), which increases insulin sensitivity by enhancing the expression of proteins responsible for modulating glucose and lipid metabolism, leading to improved insulin sensitivity in the liver, muscle, and adipose tissue. Members of the TZD class include troglitazone, pioglitazone, darglitazone, and rosiglitazone. [3,4] Pioglitazone is a member of the thiazolidinedione group of drugs used for the treatment of type 2 diabetes mellitus. The solubility of its hydrochloride salt is low, which may have a negative impact on its dissolution rate, leading to subtherapeutic plasma drug levels affecting therapeutic action. Methods for increasing the aqueous solubility of poorly soluble drugs include complexation, addition of surface-active agents, preparation of a soluble prodrug, cosolvency, salt formation, hydrotropism, crystal engineering, and addition of ionic liquids. Among these approaches, preparation of cyclodextrin (CD) inclusion complexes of the drug has proven effective in enhancing the solubility of poorly water-soluble drugs. [5][6][7] CDs consist of glucopyranose units linked by α-(1,4) bonds, comprising six to more than 100 glucose units. The intramolecular transglycosylation reaction of starch by the CD glucanotransferase (CGTase) enzyme yields a mixture of CDs such as α-, β- and γ-CDs, consisting of six, seven, or eight glucose units, respectively. CDs form inclusion complexes with a wide range of hydrophobic molecules. In solution, the hydrophobic CD cavity is occupied by water molecules bound by weak forces that can be displaced by hydrophobic molecules such as drugs to form complexes.
Depending on the size of the internal cavity, one or two hydrophobic molecules can be entrapped by one, two, or even three CDs. [8] In the present research work, inclusion complexes of pioglitazone with CDs were prepared by the kneading method for a comparative evaluation of β-cyclodextrin, its water-soluble derivative methyl-β cyclodextrin, and γ-cyclodextrin. As per the solubility studies of the kneaded complexes, Mβ-CD was selected for preparation of complexes by the spray drying method to further enhance the solubility of pioglitazone. Formation of the complexes was further confirmed by solubility studies, in vitro dissolution studies, Fourier transform-infrared spectroscopy (FTIR), scanning electron microscopy (SEM), differential scanning calorimetry (DSC), and X-ray powder diffraction (XRPD) studies.

MATERIALS AND METHODS

Pioglitazone HCl was a gift sample from Dr. Reddy's; γ-CD and Mβ-CD were kindly provided by Roquette, France; β-CD was kindly provided by Signet Chemicals, Mumbai. All other chemicals and reagents were of analytical grade.

Phase solubility studies

The stability constants for inclusion complex formation between pioglitazone and the CDs were determined using the phase solubility method (Higuchi and Connors). [9] Samples were prepared by adding 20 ml of distilled water as the medium to 30 ml screw-capped bottles containing successively increasing concentrations of cyclodextrin as follows: 0.5, 1, 1.5, 2.5, 5, and 10 mM. An excess amount of pioglitazone was added to each bottle to maintain saturated conditions in the aqueous solutions of β-CD, Mβ-CD, and γ-CD. Each bottle was capped and shaken for 24 h at 25 ± 1°. Aliquots were filtered with a Whatman filter, and complexed pioglitazone was analyzed by UV spectrophotometer (270 nm). The apparent stability constants (KS) were calculated from the straight-line portion of the phase solubility diagram using the following equation: KS = slope/[S0 (1 − slope)], where S0 is the solubility of the drug at 25° in the absence of CDs and the slope is that of the corresponding phase-solubility diagram. The Gibbs free energies of transfer of the drug from aqueous solution to the cavity of the cyclodextrin were calculated from the following equation: ΔG0 = −2.303RT log (S0/SS), where SS and S0 are the solubility of the drug in the presence and absence of cyclodextrin, respectively. [10]

Preparation of solid binary systems

The following binary systems of pioglitazone and CDs were prepared at a 1:1 molar ratio.

Preparation of physical mixture of pioglitazone and cyclodextrin

The physical mixture (PM) of pioglitazone and the CDs, viz., β-, methyl-β and γ-CD, in 1:1 molar ratio was prepared by mixing the individual components, which had previously been sieved through mesh number 60. [11]

Preparation of inclusion complexes by kneading method

Drug and cyclodextrin were weighed in equal molar ratio (1:1); this molar ratio was selected after the phase solubility studies. β-CD, Mβ-CD, or γ-CD was wetted in a ceramic mortar with 50% ethanol solution until a paste was obtained. The required amount of pioglitazone was added slowly and kneaded for about 45 min. The product was dried at 40° for 24 h. The dry mass so obtained was powdered and passed through sieve no. 60. Prepared complexes were stored in a desiccator for further studies. [12]

Preparation of inclusion complexes by spray drying method

Drug and CD were weighed in equal molar ratio (1:1); this molar ratio was selected after the phase solubility studies.
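Both relations above are one-liners to evaluate. The sketch below computes KS from the linear portion of a phase-solubility diagram and ΔG0 as printed in the text; the numeric inputs are placeholders rather than values from this study, and note that the sign of ΔG0 depends on which solubility is placed in the numerator of the logarithm.

```python
import math

R = 8.314  # J/(mol*K)

def stability_constant(slope: float, s0: float) -> float:
    """Higuchi-Connors 1:1 stability constant, KS = slope / (S0 * (1 - slope))."""
    return slope / (s0 * (1.0 - slope))

def gibbs_transfer(s0: float, ss: float, temp_c: float = 25.0) -> float:
    """DeltaG0 = -2.303 * R * T * log10(S0/SS), following the formula as printed."""
    t = temp_c + 273.15
    return -2.303 * R * t * math.log10(s0 / ss)

# Placeholder inputs: slope of the AL-type diagram and intrinsic solubility S0 (M).
print(stability_constant(slope=0.4, s0=2.0e-4))            # ~3333 M^-1
print(gibbs_transfer(s0=2.0e-4, ss=1.6e-3) / 1000, "kJ/mol")
```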
Drug was dissolved in 25 ml of ethanol-acetone (1:1) mixture with constant stirring and added to a solution of cyclodextrin (Mβ-CD) dissolved in 25 ml of the same solvent mixture. This mixture of solutions was sonicated for 15 min, and the feed was fed to a mini spray drier (Lab Ultima-222, Mumbai) and sprayed into the chamber from a nozzle with a diameter of 0.7 mm (700 μm) under the following conditions: inlet air temperature of 65°, outlet air temperature of 55°, cool temperature of 50°, inlet high of 75°, outlet high of 60°, feed rate of 3 ml/min, atomization air pressure of 2.5 kg/cm2, aspirator speed of 30%, and vacuum in the system of 100 wc/mm. The product thus obtained was collected, packed in aluminum foil, and stored in a desiccator for further studies.

Characterization of pioglitazone CD inclusion complexes

Solubility studies

An excess of the prepared inclusion complexes was dispersed in 20 ml of distilled water in screw-capped bottles to get a supersaturated solution. These bottles were shaken continuously for 4 h at ambient temperature until equilibrium was attained. Aliquots were filtered with a Whatman filter, and total pioglitazone was analyzed by spectrophotometer (270 nm). Solubility studies were also performed for the pure drug. [10]

In vitro dissolution studies

In vitro dissolution of the pure drug and the inclusion complexes was studied in a USP XXIII dissolution apparatus (Electrolab) employing a paddle stirrer at 75 rpm and using 500 ml of distilled water at 37 ± 0.1° as the dissolution medium. Complexes equivalent to 15 mg of pioglitazone were used in each test. Aliquots of dissolution medium (5 ml) were withdrawn at known intervals of time and filtered through Whatman filter paper, and 1 ml of the filtrate was made up to 10 ml with 0.2N HCl in 10 ml volumetric flasks. Suitable dilutions were further made when required. The absorbance of the samples was read at 270 nm against a blank. The aliquot withdrawn at each time interval was replaced with the same volume of fresh dissolution medium. All the experiments were run in triplicate. [13]

Fourier transform-infrared spectroscopy

Infrared spectra were obtained using a Shimadzu Fourier transform-infrared (FTIR)-8700 spectrophotometer using KBr disks. The samples of pioglitazone, Mβ-CD, physical mixtures of pioglitazone and Mβ-CD, and inclusion complexes were previously ground and mixed thoroughly with KBr. The KBr disks were prepared by compressing the powder. The scanning range was kept from 4000 to 450 cm−1.

Scanning electron microscopy studies

The surface morphology of pioglitazone, the physical mixtures, and the inclusion complexes was examined by using scanning electron microscopy (SEM). A small amount of powder was manually dispersed onto a carbon tab adhered to an aluminum stub. These stubs were then coated with a thin layer of gold by employing a POLARON-E 3000 sputter coater. The samples were examined using a Joel JSM 840A SEM and photographed under various magnifications with direct data capture of the images onto a computer.

Differential scanning calorimetry studies

Differential scanning calorimetry of pioglitazone, Mβ-CD, the physical mixtures, and the inclusion complexes was conducted using a DSC Q2000 V24.2 Build 107 instrument. The masses of the empty pan and the reference pan were taken into account for the calculation of heat flow. The sample mass varied from 3-10 ± 0.5 mg, and it was placed in sealed aluminum pans. The coolant used was liquid nitrogen. The samples were scanned at 10°/min from 20° to 300°.
X-ray powder diffractometry studies

X-ray powder diffractometry (XRPD) patterns for pioglitazone, Mβ-CD, the physical mixtures, and the inclusion complexes were traced employing an X-ray diffractometer (Bruker Axs D8 Advance, Germany) with a scanning rate of 4°/min; the voltage/current used was 40 kV/50 mA, and the target/filter was copper.

Stability studies

The selected prepared formulation was stored at 40°C ± 2°C/75% RH ± 5% RH in a Newtronic Temperature/Humidity Control Chamber QLH-2004 for a period of 6 months. The samples were withdrawn every month and evaluated for drug content and in vitro dissolution.

Phase solubility studies

The phase solubility diagrams for the complex formation between pioglitazone and β-CD/Mβ-CD/γ-CD are presented in Figure 1. The plots show that the aqueous solubility of pioglitazone increases linearly as a function of cyclodextrin concentration up to 2.5 mM. It is clearly observed that the solubility diagram of pioglitazone in the presence of β-CD/Mβ-CD and γ-CD can be classified as AL type according to Higuchi and Connors. As the slope is less than 1 and the plot is of AL type, a 1:1 ratio of drug and the respective CD was selected for complexation. The apparent stability constants (KS) were calculated from the straight-line portion of the phase solubility diagram [Table 1]. The cavity size of Mβ-CD seems to be optimal for entrapment of pioglitazone molecules, as it provides the greatest solubilization effect. The stability constants obtained for pioglitazone are in the rank order methyl-β > β > γ-CD. The change in Gibbs free energy (ΔG0) is the net energy available to do useful work and is a measure of the "free energy" [Table 2]. ΔG0 gives the criterion for spontaneity at constant pressure and temperature. [14] If ΔG0 is negative, the process is spontaneous; as ΔG0 becomes more negative, the reaction becomes more favorable. In the present case, the reaction consists of the solubilization of the drug in cyclodextrin solution. ΔG0 is related to the equilibrium constant K of a reaction by the relation ΔG0 = −2.303RT log K, where K is calculated from (S0/SS). It was observed that the ΔG0 values obtained were negative and increased in magnitude with increasing CD concentration for all the different types of CDs evaluated in this study. This indicates that CD solutions offer a more favorable environment than water for pioglitazone.

Preparation of inclusion complexes by kneading method

Cyclodextrin complexes of pioglitazone were prepared using a 1:1 molar ratio of drug and the corresponding CDs (β-CD, Mβ-CD, and γ-CD), as indicated by the phase solubility studies. The products formed were free-flowing in nature and white in color. The prepared complexes subjected to solubility analysis showed a constant reading after 4 h of study. The solubility studies indicated that among the prepared complexes, the Mβ-CD complexes had the maximum solubility (1.584 ± 0.053 mg/ml) and were therefore selected for further evaluation.

Preparation of inclusion complexes by spray drying method

The spray drying method represents one of the most commonly employed methods to produce inclusion complexes starting from a solution. The mixture is passed to a fast elimination system to produce complexes with high efficiency. [15] As per the solubility studies of the kneaded complexes, Mβ-CD was selected for preparation of the drug complexes by the spray drying method. As pioglitazone is insoluble in water, water cannot be used as the solvent.
Precipitation must be controlled in order to avoid particles blocking the atomizer or spray nozzle; hence, a clear solution is desirable. [16] An ethanol-acetone mixture in different ratios was used as the solvent system for preparation of the complexes, based on the solubility of the drug and Mβ-CD. The 1:1 ratio was selected as it gave a clear solution, whereas the other ratios formed turbid solutions. The process parameters for the spray drying technique were optimized in preliminary studies. The various process parameters considered were inlet air temperature, outlet air temperature, feed rate, atomization air pressure, and aspirator speed. The inlet air temperature and outlet air temperature were optimized to 65° and 55°, respectively, based on the boiling points of the solvents used and the stability of the drug, to get a dry and stable product. Different feed rates, i.e., 3, 5, and 7 ml/min, were evaluated. It was found that the higher the feed rate, the lower the yield. When the feed rate was kept at 2 ml/min, the yield decreased and the time taken for the process was long. Therefore, 3 ml/min was selected as the optimum feed rate to get the maximum yield of the complexes. The atomization air pressure and aspirator speed were optimized at 2.5 kg/cm2 and 30%, respectively, so as to achieve a good yield and decrease the time taken for the process. The product formed was white in color and free-flowing in nature. However, the yield was less compared to the kneading method due to adherence of the product to the drying chamber and accumulation in the scrubber. The product obtained by this method had uniform particles, which in turn improves the solubility and thus the dissolution rate of the drug in complex form.

Solubility studies

The pioglitazone complexes were subjected to solubility studies in water, and the results are shown in Table 3. It was observed that the kneaded Mβ-CD complexes exhibited a solubility of 1.584 ± 0.053 mg/ml, whereas the kneaded γ-CD and kneaded β-CD complexes exhibited solubilities of 0.1448 ± 0.011 mg/ml and 0.425 ± 0.041 mg/ml, respectively. Based on these solubility data, Mβ-CD was selected for preparing drug complexes by the spray drying technique in order to further enhance the solubility of pioglitazone. The Mβ-CD complex prepared by the spray drying method showed the maximum increase in solubility, to 2.29 ± 0.001 mg/ml. The complexes prepared by the spray drying method showed a 32-fold increase in solubility over the pure drug (0.0714 ± 0.0018 mg/ml), as shown in Figure 2.

In vitro dissolution studies

The prepared kneaded complexes of β-CD, Mβ-CD, and γ-CD, along with the Mβ-CD complexes prepared by the spray drying technique, were subjected to dissolution studies. It was observed that the spray-dried complexes showed a drug release of 84.66% in 60 min, whereas the γ-CD, β-CD, and Mβ-CD kneaded complexes showed releases of 34.71%, 41.64%, and 51.13% in 60 min. The enhanced dissolution rate of the spray-dried products might be attributed to decreased particle size, increased surface area, and the formation of uniform-sized complexes. The dissolution profiles obtained for the inclusion complexes are shown in Figure 3. Characterization studies such as FTIR, SEM, DSC, and XRPD were further carried out for the drug-Mβ-CD kneaded and spray-dried complexes, in order to confirm the formation of the inclusion complex.

Fourier transform-infrared spectroscopy studies

The IR spectra of the pure drug, the physical mixtures of pioglitazone-Mβ-CD, and the complexes are shown in Figure 4.
The IR spectrum of pioglitazone revealed the presence of a peak at 3085.89 cm−1 due to N-H stretching, while the peaks at 2927.74 and 2740.66 cm−1 are due to aliphatic C-H stretching. Strong absorption peaks observed at 1743.53 and 1689.53 cm−1 were assigned to the drug carbonyl stretching vibration (C=O). A peak at 1612 cm−1 indicates the

Scanning electron microscopy studies

The SEM study indicated that the pure drug particles were irregular in shape, whereas Mβ-CD showed spherical particles. The physical mixture of the drug and Mβ-CD shows that the drug particles remain dispersed and physically adsorbed on the surface of the Mβ-CD particles. Following inclusion complexation of the drug with Mβ-CD by the kneading and spray drying methods, Mβ-CD showed loss of sphericity, a smooth surface, and reduced particle size. A drastic change in the morphology and shape of the drug particles was observed in the inclusion complex; it was no longer possible to differentiate the two components, the drug-Mβ-CD complexes and Mβ-CD. Hence, the changes in particle shape and size suggested an apparent interaction between the drug and Mβ-CD [Figure 5].

Differential scanning calorimetry studies

The DSC thermograms of the pure drug, Mβ-CD, the physical mixtures, and the complexes are shown in Figure 6. The DSC thermogram of pure pioglitazone showed an endothermic peak at 201.9°, corresponding to its melting point. The peak at 68.13° observed in the Mβ-CD thermogram corresponds to its dehydration process. The thermogram of the pioglitazone and Mβ-CD (1:1) physical mixture showed two peaks: the peak at 63.13° is due to the dehydration process of Mβ-CD, and the peak at 196.9° is the drug peak shifted to a lower temperature, indicating that a true complex had not formed. DSC can be used for the recognition of inclusion complexes: when guest molecules are embedded in CD cavities, their melting, boiling, or sublimation points generally shift to a different temperature or disappear. The thermal curve of the pioglitazone and Mβ-CD (1:1) complex prepared by the kneading method showed only one peak, at 181.1°, and that prepared by spray drying showed a peak at 154.1°, which is due to the dehydration process of Mβ-CD. The disappearance of the endothermic peak due to pioglitazone in these systems indicated the formation of a true complex of pioglitazone and Mβ-CD at a 1:1 molar ratio.

X-ray powder diffraction studies

The X-ray diffraction patterns of the pure drug, Mβ-CD, and the complexes are shown in Figure 7. In the X-ray diffractogram of pioglitazone it is possible to observe several sharp peaks in the range from 0 to 30 (degrees 2θ), suggesting that the drug is present in a crystalline form. In the case of the physical mixtures, the XRPD spectrum is simply the superposition of those of the single components. A decrease in peak intensity, i.e., a loss of crystallinity, was observed in the physical mixtures' diffractogram, probably due to the amorphous character of the cyclodextrin. XRPD is a useful method for the detection of cyclodextrin complexation in powder or microcrystalline states. The diffraction pattern of the complex is expected to be clearly distinct from that of the drug and the CD. Crystallinity is determined by comparing representative peak heights in the diffraction patterns. In both the kneaded and spray-dried complexes, some characteristic pioglitazone peaks were still detectable but with lower intensity, indicating that the crystallinity of the drug was reduced to a great extent, with conversion into an amorphous state.
No new peaks could be observed, suggesting the absence of chemical interaction between the drug and the carrier.

Stability studies

The selected formulation was subjected to accelerated stability studies as per ICH guidelines for 6 months. The samples were tested for any changes in physical appearance, drug content, and in vitro dissolution profile at monthly intervals. The results showed that there was no significant difference (P > 0.05) in the drug content and dissolution behavior of the selected formulation.

CONCLUSIONS

In the present work, an attempt has been made to achieve a faster onset of hypoglycemic action with pioglitazone by enhancement of its solubility through complexation with CDs. Phase solubility studies indicated the formation of complexes at a 1:1 molar ratio, with the most stable inclusion complex formed with methyl-β-CD, followed by β-CD and γ-CD. The pioglitazone-Mβ-CD systems allowed a marked improvement of the initial drug water solubility: in particular, an increase of about 22 times was obtained for the kneaded product and about 32 times for the spray-dried product. FTIR, SEM, DSC, and XRPD studies showed that complexes can be prepared by the kneading and spray-drying methods, demonstrating that spray drying can be an efficient method for inclusion complex formation between pioglitazone and Mβ-CD.
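As a quick sanity check, the fold-increase figures quoted in the conclusions can be recomputed directly from the reported mean solubilities:

```python
# Fold-increase in solubility over pure pioglitazone, from the reported means.
s_pure = 0.0714      # mg/ml, pure drug
s_kneaded = 1.584    # mg/ml, kneaded Mβ-CD complex
s_spray = 2.29       # mg/ml, spray-dried Mβ-CD complex

print(f"kneaded: {s_kneaded / s_pure:.1f}-fold")      # ~22.2-fold
print(f"spray-dried: {s_spray / s_pure:.1f}-fold")    # ~32.1-fold
```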
Proteomic Analysis Identifies Potential Markers for Chicken Primary Follicle Development

Simple Summary

Our study presents a comprehensive approach elaborating the mechanism of primary follicle development in the chicken. The identified differentially expressed proteins of small and developing primary follicles (SPFs and DPFs) could be used as potential markers of chicken primary follicle development. The DEPs are functionally involved in different processes including glycolysis, pyruvate metabolism, amino acid synthesis, and oocyte meiosis. Anxa2, Pdia3, and Capzb have been implicated in primary follicle development. These findings were validated by real-time quantitative PCR and provide a basis for the exploration of DEPs as suitable markers related to primary follicle development in chicken.

Abstract

Follicle development in the chicken has a major impact on egg production. To enhance egg-laying efficiency, comprehensive knowledge of the different phases of follicular development is a prerequisite. Therefore, we used the tandem mass tag (TMT)-based proteomic approach to find the genes involved in the primary follicular development of chicken. The primary follicles were divided into two groups: small primary follicles (81-150 μm) and developed primary follicles (300-500 μm). Differential expression analysis (fold change > 1.2, p-value < 0.05) revealed a total of 70 differentially expressed proteins (DEPs), of which 38 were upregulated and 32 were downregulated. Gene ontology (GO) enrichment analysis disclosed that the DEPs were involved in cellular protein localization, the establishment of protein localization, and nucleoside phosphate-binding activities. Kyoto Encyclopedia of Genes and Genomes (KEGG) pathway enrichment indicated the involvement of DEPs in different metabolic pathways such as glycolysis, pyruvate metabolism, galactose metabolism, and fructose and mannose metabolism. The current proteomic analysis suggested markers such as Anxa2, Pdia3, and Capzb, which may play a potential role in primary follicle development. The present study provides the first insight into the proteome dynamics of primary follicle development and lays the groundwork for further studies in chicken to improve egg productivity.

Introduction

Eggs are an inexpensive source of high-quality protein, essential vitamins, and minerals necessary for a well-balanced diet and a healthy life. Current global per capita egg consumption estimates approach 9 kg annually but vary greatly on a regional basis. By 2050, the world's population is expected to reach 9 billion, with the highest population growth rates occurring in regions suffering mostly from food insecurity [1]. Therefore, it is imperative to enhance egg production to fulfill the growing demand. In avian species, egg production depends mainly on follicle development, as follicles contain oocytes surrounded by layers of granulosa and theca cells in contact with the oocyte plasma membrane. Follicular development involves a series of complex processes that ultimately result in ovulation, while at the same time thousands of developing follicles undergo atresia [1,2]. The chicken is an excellent experimental model, owing to its ovarian follicles, for studying follicular development [3]. Various studies have documented the mechanisms that govern the follicles' transition from the prehierarchical to the ovulatory stage [4,5].
This transition phase is a highly coordinated biological process affecting different stages of follicular development, including oocyte maturation and the proliferation and differentiation of granulosa and theca cells within the follicles, controlled by several regulatory factors [6][7][8]. At the time of follicle selection, the candidate follicle initiates the steroidogenic pathway via protein kinase A/cAMP signaling involving the steroidogenic acute regulatory protein (STAR) and the cholesterol side-chain cleavage enzyme (Cyp11a) [9,10], while inhibitory signals such as protein kinases retain the granulosa cells in an undifferentiated state in all follicles irrespective of size [11]. Similarly, several proteins can also contribute to the follicle maturation process; for instance, in theca cells, the expression of annexin A2 (Anxa2) can induce angiogenic factors, which contribute to follicular development and eventually ovulation [12]. The matrix metalloproteinase (MMP) enzymes are involved in the follicle ovulatory process through regulating protein disulfide isomerase A3 (Pdia3), which is in turn involved in protein folding via oxidation, reduction, and isomerization of disulfide bonds in proteins [13] and has also been implicated in cell adhesion events such as sperm-oocyte interaction [14]. It has also been implicated in cellular estradiol sequestering [15]. Recent advances have been made in the field of chicken primordial follicle development [16,17], but very little is known about the mechanisms underlying the development of primary follicles. Atresia is a major hindrance: not all primary follicles pass into the second stage, and naturally only a few of them develop to the maturation stage [18]. Thus, increased primary follicle selection for further development could improve the chicken's egg-laying capacity. Therefore, a comprehensive understanding of the mechanism of primary follicular development is needed. Proteomic methods are important molecular-level approaches used to analyze complex mechanisms and are helpful in understanding complex biological processes [19]. Furthermore, they can determine the functionally viable proteins, their properties, and their modes of action [20]. Advanced proteomics procedures help to identify protein abundance, differential expression, and sensitivity levels. The present study aimed to investigate the molecular mechanisms associated with primary follicles involved in early development. This study also provides a comprehensive understanding of primary follicle development at the proteome level, which could pave the way for further molecular-based studies.

Animal Selection

A schematic illustration of the experimental design is presented in Figure 1. The Ethics Committee constituted by the Animal Welfare Department of Guangxi University approved the present study. Six Guangxi yellow-feather chickens, 20 weeks of age, were selected and euthanized in the study. The ovaries were removed carefully and immediately transferred into normal saline solution at 39 °C.

Isolation of Follicles

Ovaries were thoroughly washed using normal saline and chopped into smaller pieces of 1.5 to 2 mm in size. The chopped material was again washed with phosphate-buffered saline/polyvinyl alcohol (PBS/PVA) solution. The PBS/PVA solution was prepared by mixing 1 g of polyvinyl alcohol (PVA) in 100 mL of phosphate-buffered saline (PBS). Primary follicles were isolated by the enzymatic method.
Briefly, the chopped ovaries were placed into a Petri dish and incubated (37 °C, 5% CO2, 25-30 min) with trypsin-ethylenediaminetetraacetic acid (EDTA) solution (0.25%, Sigma-Aldrich, St. Louis, MO, USA), as described previously [21,22]. After digestion, all the enzymatic solution was removed by using a micropipette and washed with PBS/PVA solution to stop the digestion, and the dispersed follicles were then placed into a Petri dish. Loosely attached follicles were isolated by using an insulin syringe and placed into a separate glass plate. All the isolated follicles were washed with PBS/PVA solution three times to remove debris and blood. After the isolation, the follicles were graded as small primary follicles (SPFs, 81-150 µm) and developing primary follicles (DPFs, 300-500 µm), respectively.

Figure 1. Ovaries were isolated from 20-week-old chickens. The small follicles were aspirated and divided into two groups: tandem mass tag (TMT)-127 indicates small primary follicles (81-150 µm) and TMT-129 developing primary follicles (300-500 µm). The proteomic analysis was conducted by using liquid chromatography-mass spectrometry (LC-MS/MS). The obtained data were analyzed for differential expression of proteins. The differentially expressed proteins (DEPs) were subjected to gene ontology analysis and Kyoto Encyclopedia of Genes and Genomes (KEGG) pathway analysis. The proteomic data were validated by real-time quantitative PCR.
Protein Extraction, Digestion, and Peptide Labeling

Follicles from each group were incubated in 20 µL of lysis buffer on ice for 10 min; then, 2 µL of magnetic bead stock and 20 µL of tetrafluoroethylene (TFE) were added to each tube. Sonication was then performed for 15 min in a Bioruptor with 30 s cycles. After sonication, 0.75 µL of 0.1% formic acid was added, heat exposure was applied to each tube for 5 min at 95 °C, and the tubes were placed on ice for 30 s. The tubes were then incubated for 30 min at 45 °C in a PCR machine, supplemented with 5 µL of 400 mM iodoacetamide (IAA), and incubated for 30 min at 24 °C. The reaction was stopped by adding 5 µL of 200 mM dithiothreitol (DTT) to each tube. To purify the proteins, 1% formic acid and 100% acetonitrile (1:1) were added to 10 µL of lysate and incubated for 8 min. The supernatant was then removed, and the pellet was washed twice with 70% ethanol. Acetonitrile was added to the pellet, which was transferred to a magnetic stand. Digestion was performed by adding 10 µL of trypsin and incubating at 37 °C for 16 h. In the last step, the proteins were washed twice with acetonitrile, and then 10 µL of 2% dimethylsulfoxide (DMSO) and 1% formic acid were added to the tube. The digested peptides of SPF and DPF were labeled with TMT-127 (SPF) and TMT-129 (DPF), respectively, with labeling reagents according to the manufacturer's protocol. Briefly, 41 µL of anhydrous acetonitrile (ACN) was added to 0.8 mg of tandem mass tag (TMT) labeling reagent, and the mixture was then transferred to each digested peptide sample. The solution was incubated at room temperature for 1 h, and the reaction was stopped by adding 8 µL of 5% hydroxylamine. Equal quantities of TMT-labeled peptides from both samples were taken, combined, and evaporated under vacuum.

Peptide Identification

The desalted and dried fractions were eluted with 10 µL of solvent A (2% ACN and 0.1% formic acid). A 2 µL volume of each sample was applied to a trapping column (PepMap RSLC C18 column, 50 µm × 15 cm, 2 µm, Thermo Fisher Scientific, Bremen, Germany) at a maximum pressure of 600 bar and a flow rate of 300 nL/min. An analytical column (0.075 × 150 mm, 3 µm, 100 Å, Thermo Fisher Scientific, Bremen, Germany) was used for the analysis. Buffer A (2% ACN and 0.1% formic acid) and buffer B (98% ACN and 0.1% formic acid) were used for a 60 min gradient elution to separate the peptides: 5-40% buffer B over 45 min, 40-100% buffer B over 10 min, and finally 100% buffer B for 5 min. An LTQ-Orbitrap Elite hybrid mass spectrometer (Thermo Fisher Scientific, Bremen, Germany) connected to an Easy-nLC 1000 nano liquid chromatography system (Thermo Fisher Scientific, Odense, Denmark) was used to analyze all peptides online. A data-dependent acquisition mode was used for the mass spectrometry (MS) analysis over a scan range of 350-1800 m/z, with survey scans acquired in the Orbitrap analyzer at a mass resolution of 60,000 at m/z 400. The 10 strongest precursor ions were selected for secondary mass spectrometry (MS2) analysis in high-energy collision dissociation mode in the linear ion trap. The dynamic exclusion parameters included an exclusion count of two and an exclusion time of 40 s. The siloxane ion (m/z = 445.1200) was used for internal calibration. The raw proteomics data are presented in Supplementary Tables S1-S4.
Bioinformatics Analysis

The differential expression analysis to determine differentially expressed proteins (DEPs) was performed by using the shiny-based R program (https://infinityloop.shinyapps.io/TCC-GUI/, accessed on 15 January 2021). Gene ontology analysis of the DEPs was performed by using an online database (v6.8; http://david.abcc.ncifcrf.gov/, accessed on 15 January 2021). The interactive network containing the identified DEPs was generated using the software R. KEGG pathway analysis of the DEPs was performed by using an online platform (https://www.kegg.jp/kegg/pathway.html, accessed on 15 January 2021). The DEPs are listed in Supplementary Table S4. The proteome profile of each replicate (n = 3) is listed in Supplementary Tables S1-S3.

Validation of Proteomics Data by Using Real-Time Quantitative PCR

The proteomic data were validated using real-time quantitative PCR. For this purpose, total RNA from small primary follicles (SPFs) and developing primary follicles (DPFs) was extracted by using the Trizol method [23]. The extracted RNA was converted to cDNA by using a reverse transcription kit (6210A TaKaRa, Japan) following the manufacturer's instructions. The primers used in the present study are presented in Table 1. The PCR cycling profile was carried out under the following conditions: 5 min at 95 °C, followed by 22 cycles of 30 s at 94 °C, 30 s at 57 °C, and 30 s at 72 °C, with a final extension of 10 min at 72 °C. The 2^-ΔΔCt method was used to analyze the qRT-PCR data [24]. The experiment was performed in triplicate, and β-actin was used as a reference gene for normalization.

Results

The present study elaborates on the proteome profiles of SPFs (81-150 µm) and DPFs (300-500 µm). The precision of the proteomics was assessed by calculating the coefficient of correlation, which revealed a significant correlation between the replicates of each group. The correlation matrix validated the stability of the experimental data, which can be used for subsequent analysis (Figure 2). The experiment was performed in three biological replicates; a total of 716 and 744 proteins were quantified in the SPF and DPF groups, respectively, and 464 proteins were found to be common between both experimental groups. Figure 3 shows the Venn diagram presenting the number of identified and common proteins between both experimental groups. The differential proteomic analysis (p-value < 0.05 and fold change (FC) > 1.2) showed 70 DEPs (38 upregulated and 32 downregulated). Figure 4 displays the heat map showing the expression pattern of the differentially expressed proteins between the replicates of the experimental groups. Supplementary Table S4 shows the details of the differentially expressed proteins.
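Two of the analysis steps just described reduce to a few lines of code. First, the 2^-ΔΔCt normalization used for the qPCR data is a double subtraction followed by exponentiation; the Ct values below are placeholders, with β-actin as the reference gene per the Methods.

```python
# 2^-ΔΔCt relative expression (Livak method), β-actin as the reference gene.
# ΔCt = Ct(target) - Ct(reference); ΔΔCt = ΔCt(test) - ΔCt(control).

def relative_expression(ct_target_test, ct_ref_test, ct_target_ctrl, ct_ref_ctrl):
    ddct = (ct_target_test - ct_ref_test) - (ct_target_ctrl - ct_ref_ctrl)
    return 2 ** (-ddct)

# Placeholder Cts: a target gene in DPF (test) vs. SPF (control).
print(relative_expression(22.0, 16.0, 24.0, 16.5))  # ~2.83-fold higher in DPF
```

Second, the DEP thresholds (p < 0.05, 1.2-fold change in either direction) translate into a simple filter; the column names here are assumptions, not the actual supplementary-table headers.

```python
import pandas as pd

df = pd.DataFrame({
    "protein": ["Anxa2", "Pdia3", "Col6a1", "Gapdh"],   # illustrative rows
    "fold_change": [1.8, 1.5, 0.7, 1.05],               # DPF vs. SPF
    "p_value": [0.01, 0.03, 0.02, 0.40],
})

significant = df["p_value"] < 0.05
up = significant & (df["fold_change"] > 1.2)
down = significant & (df["fold_change"] < 1 / 1.2)      # >1.2-fold decrease

deps = df[up | down]
print(len(deps), "DEPs:", deps["protein"].tolist())     # 3 DEPs
```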
Gene ontology enrichment analysis of the DEPs was performed to determine their role in the biological process, molecular function, and cellular component categories (Figure 5). Proteins in the cellular component category were associated with extracellular matrix disassembly, extracellular vesicles, the extracellular region part, and the extracellular exosome. The molecular functions of the DEPs included nucleotide binding, small molecule binding, carbohydrate derivative binding, and organic cyclic compound binding. The DEPs were involved in biological processes such as localization activities, including cellular protein localization, cellular macromolecule localization, macromolecule localization, and establishment of protein localization. The KEGG pathway enrichment analysis indicated that the DEPs were associated mainly with glycolysis/gluconeogenesis, carbon metabolism, pyruvate, galactose, and fructose and mannose metabolism, oocyte meiosis, endocytosis, and lysosome activities. Figure 6 illustrates the distribution of proteins in the various KEGG pathways. The proteomic data were validated by real-time quantitative PCR analysis, which revealed that gene expression and protein expression showed a similar pattern (Figure 7). The validation of the proteomics data was performed by analyzing the expression patterns of Anxa2, Capzb, Pdia3, Park7, and Faf2 using real-time qPCR. The expression levels of Capzb, Pdia3, Anxa2, Faf2, Rps19, and Hsp40 were found to be upregulated, while Col6a1 and Col6a2 were found to be downregulated. The results of the real-time quantitative PCR were found to be consistent with those of the LC-MS/MS.

Discussion

In chickens, follicular development involves different phases, with follicle sizes ranging from 0.05 mm to more than 25 mm. The later follicular development stages, such as prehierarchal follicles and preovulatory follicles, are well characterized, while knowledge related to the mechanism of development and growth of primary follicles is not well documented in previous studies.
The purpose of the present study was to explore mechanisms associated with primary follicle development by using a TMT-based quantitative proteomics approach. In the present study, the key role of metabolic pathways such as energy metabolism, the insulin signaling pathway, and the biosynthesis of amino acids during the development of primary follicles was identified.

Role of Glycolysis during Primary Follicle Development

The results of the present study demonstrated glycolysis, rather than oxidative phosphorylation, as a key energy homeostasis pathway involved in primary follicle development, based on the higher expression of glyceraldehyde-3-phosphate dehydrogenase (Gapdh) and pyruvate kinase (PK) and the lower expression of the pyruvate dehydrogenase E1 component subunit alpha (Pdha1). Gapdh is a glycolytic enzyme that converts glyceraldehyde-3-phosphate into D-glycerate 1,3-bisphosphate in the presence of nicotinamide adenine dinucleotide (NAD+) and inorganic phosphate, and it also facilitates the formation of NADH and adenosine triphosphate (ATP) [18]. In the final step of glycolysis, the pyruvate kinase enzyme converts phosphoenolpyruvate into pyruvate with the production of ATP [25], whereas Pdha1 is part of the pyruvate dehydrogenase complex, which is responsible for the conversion of pyruvate into acetyl coenzyme-A [26].

Role of ANXA2, PDIA3, and CAPZB during Primary Follicle Development

The proteomic data obtained in the present study illustrated the higher expression of annexin A2 (Anxa2) in large primary follicles compared to small primary follicles. The real-time quantitative PCR analysis also validated the findings of the proteomic data. Our study is in line with the previous study of Zhu et al. [12], which presented the role of Anxa2 during follicle development. Various other studies have also reported the key role of Anxa2 in cell proliferation and angiogenesis in various types of tissues [27,28].
Similarly, it is also involved in cellular proliferation and angiogenesis during chicken follicle development [29], as confirmed by the findings of the present study. Protein disulfide isomerase A3 (Pdia3) belongs to a family of 17 different protein disulfide isomerases (Pdis) capable of formation (oxidation), reduction, and rearrangement (isomerization) of the disulfide-bonding patterns of proteins, and it has a major role in the folding of newly synthesized proteins [30]. It is associated with matrix metalloproteinases, which play a significant role in follicular development [13,31]. Therefore, it is suggested that the higher expression of Pdia3 in small primary follicles compared to the developing primary follicles might be used as a marker of primary follicle development in chicken. However, further investigations are required to elaborate on its role during chicken follicle development. The role of Pdia3 during follicle development is further strengthened by the study conducted by Huo et al. [32], which revealed that, in mammals, the protein disulfide isomerase may increase the secretion of follicle-stimulating hormone, which ultimately improves follicular development. The proteomic data from the present study suggested that large primary follicles exhibited higher expression of the F-actin-capping protein subunit beta (Capzb) compared to small primary follicles. The actin cytoskeleton is an essential component of various dynamic biological systems and processes [33]. Capping proteins are recognized to enhance actin filament depolymerization and promote cell motility [34,35]. In Drosophila, it is involved in actin cytoskeleton organization during organogenesis [36]. Recently, another study has also provided evidence of Capzb involvement in the cellular growth and motility of cancerous cells [37]. Therefore, it could be inferred from the previous and present findings that Capzb may be associated with the actin cytoskeleton during the development of chicken primary follicles. Taken together, the proteomic data depicted the expression pattern of the proteome of primary follicles during their development. Some key genes were identified that might have a functional role in promoting the development of primary follicles, cellular proliferation, and growth. Ultimately, these proteins are suggested to take part in imparting the developmental competence of follicles to attain ovulatory capacity. These proteins may also have a role in developing the follicles from the prehierarchical stage to ovulation acquisition.

Conclusions

This is the first study elaborating on the mechanism of primary follicle development in chickens. In the present study, DEPs were identified between the small primary follicles and the developing primary follicles, and these DEPs were mainly involved in glycolysis, pyruvate metabolism, amino acid synthesis, and oocyte meiosis. The identified key genes, including Anxa2, Pdia3, and Capzb, might be involved in primary follicle development, and their expression was validated further by RT-qPCR. Moreover, the present study paves the way for further functional studies of follicular development in chicken.
Decision-Making Tool for Enhancing the Sustainable Management of Cultural Institutions: Season Content Programming at Palau De La Música Catalana

The cultural sector has become increasingly relevant to the economic and social development of different countries. However, this sector still operates with little input from multi-criteria decision-making (MCDM) techniques and sustainability analysis, which are widely used in other sectors. This paper proposes an MCDM model to assess the sustainability of a musical institution's program. To define the parameters of the proposed model, qualitative interviews were performed with relevant representatives of Catalan cultural institutions and highly recognized professionals in the sector. The content of the 2015-2016 season of the 'Palau de la Música Catalana', a relevant Catalan musical institution located in Barcelona, was used as a case study to empirically test the method. The method allows the calculation of a season value index (SVI), which serves to make more sustainable decisions on musical season programs according to the established criteria. The sensitivity analysis carried out for different scenarios shows the robustness of the method. The research suggests that more complex decision settings, such as the MCDM methods that are widely used in other sectors, can be easily applied to the sustainable management of any type of cultural institution. To the best of the authors' knowledge, this method has never before been applied to a cultural institution with real data.

Introduction

"Cultural industries" (see the United Nations Organization for Education, Science, and Culture (UNESCO) and the United Nations Development Program (UNDP) [1] for a discussion on the historic origins of the concept) broadly refer to "forms of cultural production and consumption that have at their core a symbolic or expressive element" [1] (p. 20). Over the years, they expanded to include a wide range of fields such as visual and performing arts, publishing, film and audio-visual arts, and music, as well as crafts and design [2], which are not, strictly speaking, industries, but which are similar in their management and advertising [3]. This approach matches well with business management principles, setting issues like efficiency and high standards of quality and performance as primary objectives in the management of cultural institutions. A broader term that is often used refers to "creative industries", which include "goods and services produced by the cultural industries and those that [...]".

One study of the marketing mixes implemented by private and public Catalan museums highlights the importance of an adequate education and professional training of cultural managers to enable them to design good marketing and communication strategies for the visitors. As a matter of fact, in Spain, for example, most of the professionals working in the cultural and creative industries usually have a degree in humanities and specialize afterwards in management. This could explain why cultural entities do not always have managers/directors specialized in strategic management and decision-making tools, and why the decision-making process is the outcome of a combination of the creativity, intuition, and personal work experience of their directors. In many countries, studies on cultural management and the preparation of professionals in the field are relatively recent initiatives, with authors like Foord [25] (p. 98) pointing to "the lack of business awareness amongst practitioners" in the cultural sector.
For Hewison, Holden, and Jones [26] (p. 117), "effective leadership" is particularly critical in the cultural and creative industries, being understood as "the ability to marry rhetorical power with practical innovations so as to create a sustainable, resilient, well-networked organization, capable of growing its own capacity to act, and providing high-quality results for its customers, staff, and funders". In the same fashion, Pérez-Pérez and Bastons [27] discuss the paradoxical impact of new technologies on the management of cultural institutions, claiming that they actually increase the role of managers (human beings): the survival of the organization in the long run and its profitability both depend on the leadership of the managers, who are thus challenged to continuously reinforce the mission of the organization.

All in all, the increasing complexity of performance measurement systems in the cultural industries advocates in favor of using alternative decision-making tools, such as multi-criteria decision-making (MCDM) and multi-attribute utility theory (MAUT), which are widely used in other fields [28,29]. Our main focus here is to verify how the application of MCDM tools could help improve, objectify, and better clarify the work processes and procedures underlying future decisions in the cultural industries (see also [30]). To do this, we designed a two-step mixed methodological framework combining qualitative and quantitative research methods. First, we perform qualitative interviews with professionals in the cultural management field in order to determine the parameters (performance criteria and weights) of the MCDM model. Next, we calculate optimal values of the performance criteria within the framework provided by the MCDA and the MAUT with data from the Palau de la Música Catalana (Barcelona). The proposed model allows the calculation of a season value index (SVI) that can be used by cultural managers to optimally match the season's program with the performance and audience objectives. A sensitivity analysis has shown the capacity of the model to easily adjust to the characteristics and environment of any cultural institution, thus proving its robustness.

Both the methodological setting and the empirical evidence add, in our view, to the literature on research methods in cultural management by focusing, on the one hand, on the analysis of a music institution and, on the other, by providing a decision-making framework that combines technology (e.g., computer software), mathematics and economics (e.g., multiple criteria, utility), and knowledge supplied by human expertise through qualitative interviews. To the best of our knowledge, there is no paper in the academic literature addressing the application of an integrative research framework that combines specific decision-making tools, such as MCDA, with qualitative data for strategic planning and management in the creative industries, and in particular in music institutions. This analysis is intended to help generate a first registry of empirical evidence that could serve to improve decision-making processes in the cultural industries in order to achieve financial self-sufficiency and long-term sustainability, and to provide an object of study for the training of future professionals in the field.
Thus, MCDM methods can be particularly useful in the assessment of the outcomes of managerial decisions, in this case by taking into account a whole range of different criteria in order to ensure the optimal growth and development of the cultural institution. Moreover, the criteria used in the analysis can be tackled simultaneously. In the cultural sector, where most performance measures rely merely on economic profitability indicators, the use of this method allows the enlargement of the spectrum of criteria considered to include others that are often left aside due to their difficult quantification (e.g., the quality of an artistic event, social commitment to the local language and culture, sustainable cultural practices, etc.). Due to their "predictive ability", MCDM methods can assist cultural managers in the design of sustainable season programs and of contingency plans to reduce or avoid risks. Last but not least, MCDM methods make possible a process of individual and collective learning; during the interaction with the MCDM tool, individuals and teams reveal their preferences and choices, learn on a trial-and-error basis, and eventually agree upon a negotiated outcome [31].

The paper unfolds as follows: In Section 2, the method is presented; Section 3 discusses the empirical parameters of the model; Section 4 is dedicated to the case study of the Palau de la Música Catalana; Section 5 discusses some managerial implications.

Multi-Criteria Decision-Making Framework: Introduction and Applications

According to Bērziņš [32], decision-making is one of the most important daily tasks of any manager and the main attribute of all managerial functions at all management levels. In the same vein, Šarka and colleagues [33] mention that decision-making problems have always been especially significant for any state, company, or individual at any level of management, either strategic or functional. MCDM stands for a set of methods that allow the aggregation and consideration of numerous (often conflicting) criteria, multiple objectives, uncertainty, etc., in order to choose among, rank, sort, or describe a set of alternatives so as to "aid" a decision-making process (see, e.g., Mulliner et al. [34]). Intuitively, this kind of tool usually helps to build a "value tree" with a weight assigned to every branch of the tree. With many applications in different industry sectors, MCDM gives support to managers in achieving efficiency and sustainability, contributes to reducing eventual conflicts and arbitrary behavior in the decision-making process, and helps justify results in case of audit controls (see [35] for some examples). MCDM methods allow mathematics to come into play, taking advantage of the development of powerful computer programs. Due to this powerful combination, their use has increased substantially over the last decades [36]. Within the MCDM framework, several methodological approaches have been developed, enlarging the spectrum of application fields and the complexity of the analyses performed (see [37-40] for various applications). These decision-making tools have already begun to penetrate many new areas, with applications in public-sector decisions, negotiations, scientific areas, e-commerce, finance, engineering [30], and heritage conservation [41], among others.
They could be equally useful in the cultural sector by contributing to improving the clarity of the nature and priority of the different types of processes and indicators involved in the strategic management of (cultural) businesses, given their capacity to enable managers to plan ahead across various possible scenarios [32]. Much uncertainty still exists about the potential use of decision-making tools in cultural institutions, specifically in the music industry. As stated by Jones and colleagues [42] (p. 138), "managing creative enterprises involves many of the management disciplines, albeit with a specific emphasis, common to any business there are some distinctive challenges". This is the reason why it is important to improve the decision-making process, and we intend to do this here.

Theoretical Model: The Multi-Attribute Utility Theory

The theoretical framework of this paper builds on the multi-attribute utility theory (MAUT) developed by Keeney and Raiffa [43]. The MAUT was selected over other MCDM methods because it has a solid foundation and has been effectively applied in many areas, such as engineering, investments, and sustainability, among others. The conceptual setting is based on the existence of a utility or value function that represents the utility or value each alternative has for the decision-maker. The utility function integrates the different criteria, generally in conflict, thus reducing the multi-criteria decision problem to a multi-criteria optimization problem where the preferences of the decision-maker are expressed in terms of their utility. The degree of fulfilment of the established objectives is characterized by a set of criteria and subcriteria, which represent the aspects to be taken into account when making a decision. The weights represent the relative importance of the different criteria for the decision-maker, while the alternatives are the options being compared. In this research, the alternatives are the possible programming contents of a season. The indicators measure the behavior of the alternatives according to the criteria. The magnitudes of the different indicators cannot be compared directly because, in most cases, indicators are measured in different units. To ensure the comparability of the alternatives, it is necessary to use value functions that transform the different measurement units of the indicators into units of value or satisfaction. When one alternative is preferred over another, the value associated with the former is greater than the value associated with the latter. Satisfaction or value is often measured by values between 0 and 1, where 1 corresponds to maximum satisfaction and 0 to null satisfaction. As shown in Figure 1, the value functions may show varying trends (increasing, decreasing, or mixed) and shapes (linear, convex, concave, or sigmoidal), depending on how satisfaction varies as the indicator varies.
Therefore, when applying the MAUT to the season's programming in music institutions, a season value index $SVI_i$ of season $i$ can be defined as presented in Equation (1). It is a measure of the overall value provided by a season's program, considering all of the criteria relevant to the institution and their importance:

$$SVI_i = \sum_{j=1}^{n} w_j \cdot value_{ij}, \qquad (1)$$

where $w_j$ is the importance or weight assigned to criterion $j$ and $n$ is the total number of criteria. Criteria and a set of reference weights are provided in Sections 3.2.1 and 3.2.2, respectively. The term $value_{ij}$ is the value provided by season $i$ regarding criterion $j$. The SVI, expressed as a numerical index between 0 and 1, enables the evaluation and comparison of the success of different season programs based on several criteria.

This tool was developed to bring objectivity to programming within cultural institutions. It is intended to avoid intuition in decision-making, so that decisions are based on data and on the degree of priority of each criterion in relation to the set of established criteria.
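As a minimal computational sketch of Equation (1), the weighted aggregation can be written in a few lines of Python. The weights below follow the reference set described later in Table 2; the per-criterion satisfaction values are placeholders, not the Palau's actual figures.

```python
# Minimal sketch of Equation (1): SVI_i = sum_j w_j * value_ij.
# Weights follow the reference set of Table 2; the satisfaction values
# are placeholders in [0, 1], not real data from the Palau.
weights = {
    "quality": 0.25, "audience": 0.15, "attractiveness": 0.15,
    "dose_of_risk": 0.10, "singularity": 0.10, "locality": 0.05,
    "internationality": 0.05, "education": 0.05,
    "social_commitment": 0.05, "efficient_management": 0.05,
}
values = {criterion: 0.8 for criterion in weights}  # placeholder values

assert abs(sum(weights.values()) - 1.0) < 1e-9  # weights must sum to 100%

svi = sum(weights[c] * values[c] for c in weights)
print(f"SVI = {svi:.3f}")  # 0 = null satisfaction, 1 = maximum satisfaction
```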
Current State of Decision-Making in the Cultural Sector: Interviews with Experts

The theoretical model proposed here considers data and first-hand information supplied by expert professionals engaged with the Catalan public and private cultural sector. The methodological design of the qualitative research builds on the generic framework offered by Grounded Theory [45], usually recommended when the information about the phenomenon under study is rather scarce. In this fashion, Grounded Theory techniques, such as the constant comparison of the data [46] (p. 337), an inductive approach, and open coding to generate concepts from the data [47], are used to disentangle the decision-making process underlying the programming of the artistic season of a music institution. Based on the information provided by the experts interviewed, several key criteria are derived and molded within the analytic framework provided by the MAUT and the MCDA model. Thus, rather than building a new theory from the observed phenomenon, our purpose here is to offer a practical decision-making tool for cultural managers, a tool with the capacity of prediction and control [45,48,49].

Two types of semi-structured and long or intensive interviews [50] were performed to collect the data: (1) interviews with top managers of some of the most relevant cultural institutions, such as the concert halls Palau de la Música Catalana and L'Auditori de Barcelona, the opera house El Gran Teatre del Liceu, and the international festivals Mercat de les Flors-Barcelona (music, dance, and performing arts) and Castell de Peralada (lyric and dance); and (2) interviews with other professionals who are knowledgeable about the functioning of the Catalan cultural sector, from institutions such as ARTImetria, the National Council for Culture and the Arts (CoNCA), and the Universitat Internacional de Catalunya. The combination of both internal and external expertise was meant to ensure a better understanding of the cultural institutions and the cultural sector. The interviewer's creativity and initiative were also preserved, especially for the interviews with the other experts in the field, allowing the eventual adaptation of the questions (or the asking of new ones) in order to obtain more in-depth responses from the interviewees [51] (see also Bakir and Bakir [49]). Given that the Palau de la Música Catalana was taken as a case study, the other cultural managers and experts were also asked about it. Guiding questions included in the interviews (see also [49]) are provided in Table S1 of the Supplementary Materials.

From the interviews with the experts, it was possible to extract information regarding the decision-making methods currently used in the institutional Catalan cultural framework. Table 1 summarizes the main concepts and criteria extracted from the data with open coding (see also [49]).

Table 1. Main concepts and criteria derived from the data on each institution's season programming.

Palau de la Música Catalana
• Private institution; decisions based on experience; the artistic director uses ten criteria
• Quality of the show
• Formative value of the show, embedded in the quality
• Specific department dedicated to analyzing the audiences
• Shows must be attractive
• Risk (innovative shows, new creators)
• Singular and unique shows
• Local agents should be involved (Catalan artists, the creation of the figure of the "resident creator")
• Prestigious international orchestras
• Cultural center with an educational vocation
• Social commitment via artistic initiatives for those at risk of exclusion
• Great variety of music genres included in a season's content
• Efficient management (economic profitability, transparency)

L'Auditori
• Public (not-for-profit) institution; most active at the national level; extensive programming
• Season's content programming is a team task, led by the programming director
• Internal criteria ordered by priority: eclecticism, quality of the shows, internationality, the audience, educational and formative mission, economic profitability
• External criteria: artists' availability, the city's main cultural events, great orchestras' tours, etc.
• The criteria are different from those of the Palau de la Música Catalana, where economic profitability is more binding

Liceu Opera House
• Various criteria: the title of the opera, the soloists, the budget, and the reference artists of the lyric scene, their availability and repertoire
• Economic balance among their own productions, new ones, and those contracted from other theatres
• Balanced repertoire: Italian and German opera, bel canto, contemporary music; avoid repetition
• Key criteria: quality and balance (also for the artists: frontline vs. new ones)
• The audience: of great tradition and nostalgic, interested in great voices and great artists
• Season's content at the Palau: centered around frontline artists and orchestras with exclusive performances

The Castell de Peralada Festival
• Season's content (with a backbone of ballet and music) is programmed one year ahead
• Relevant criteria: excellence in quality, reinforcement of the personality of the festival, and making a difference with the competitors (360 festivals in Catalonia)
• Other criteria: the festival's own identity; facilitating the incorporation of scene and theatre directors into opera shows
• Undertake risk: innovation and modernity must go hand in hand with tradition
• International projection and audiences
• Economic profitability is not binding: public pricing strategy, social commitment, and philanthropy
• Aspect to be improved by the Palau de la Música: the balance between the time allocated to visitors, artistic productions, and rehearsals
• Decision-making in the cultural sector: shift towards a more collaborative system (artistic and programming directors working together with the marketing department); increasing importance of efficacy measures, costs, and box office; the artistic director has converted himself into a "fundraiser par excellence", striving to promote the institution and so contributing to increasing its sustainability

Mercat de les Flors Festival
• Programming in the cultural sector is not based on objective and measurable methods and criteria
• Criteria: build a loyal audience; experimental shows; the thematic itinerary of the festival; quality, emerging from new thematic threads or new staging proposals; transmitting a new message through the capacity of the show and updating its meaning to reach new audiences; the local artists; new creators
• Quality, measured in a romantic sense, consists of the magic emerging from the show, the aesthetic levels given by the work of the artists, etc.
• The programming of a season is determined one and a half years ahead
• Economic profitability is not binding; the festival fixes a priori the revenues to be obtained and so runs no deficit

Universitat Internacional de Catalunya
• Very few cultural institutions make programming decisions based on explicit criteria; "decision trees" are used to a great extent, but the underlying criteria are neither disclosed nor measured, analyzed, or explicitly compared
• Good decisions are based on the "good work", "good knowledge", and "good will" of people

ARTImetria
• No knowledge of any cultural institution using a parameterization method to automate multi-criteria decision-making
• Quality is difficult to parameterize
• Main criterion for any cultural institution: the audience (loyal and occasional); only a few cultural institutions have marketing departments specialized in analyzing the audience
• Other criterion: the budget

CoNCA
• Trend towards a dual responsibility in the decision-making process (as in France or the UK): the artistic director and the manager; in Spain and France, the artistic director has greater decision-making power, although exceptions exist (e.g., the Liceu Opera House, where the general director has a managerial profile)
• Main criteria: the artistic concept, which is fundamental in the programming of a season's content, and the budget
• Not-for-profit institutions must have well-defined objectives in order to establish quality standards
• Palau de la Música Catalana: quality should be the main programming criterion (the standard must be fixed by the artistic director, keeping in mind that it is about high culture); promotion of local artists and creators; transversal repertoire; the audience; educating the audience through music
• Quality can be parameterized by establishing benchmarks (top artists, top orchestras, feedback from the audience, reviewers, how much artists get paid, etc.)

As can be noticed from Table 1, all of the participants interviewed agreed that no standards, tools, or standardized and consolidated decision-making processes are applied in the management of the cultural sector, and that most decisions are based on the intuition and experience of the (artistic) managers: "In the field of culture, traditionally, there have been no scientific approaches. The decisions were made based on experience rather than reflection and rigor backed by results. Certainly, we must have a scientific knowledge of music or of the professional environment: musicians, orchestras, the agencies that represent these artists... But it is also true that, so far, there are no manager associations" (Víctor García, former Artistic Director of the Palau de la Música Catalana).

Regarding the internal organizational process around the programming of the content of a season, as well as the specific criteria applied by each institution, the interviews revealed no common standards. Although each institution had some more or less defined criteria and evaluation procedures, no one was aware of the existence of any specific decision-making tool applied in any of these cultural institutions in terms of parameterization and multi-criteria analysis. Overall, in the Catalan music sector (and this also applies to the rest of Spain), programming decisions are generally made according to the experience and intuition of the person in charge of making that decision, usually a great connoisseur of the subject.
In this context, the Palau de la Música Catalana was found to be a special case, with ten well-defined criteria that were used to decide which concerts to include on a program and which ones to discard: quality, audience, attractiveness, dose of risk, singularity, locality, internationality, education, social commitment, and efficient management (these criteria are described in detail in the next section).

In the case of L'Auditori, the programming director explains that he must discuss and agree on the content of the season with the technical directors of the Barcelona Symphony Orchestra and the National Orchestra of Catalonia, as well as the Symphonic Band, and, at the same time, keep the program sufficiently eclectic to please all types of audiences: "The mission of L'Auditori [as a public institution] is the dissemination of music, and this governs the criteria under which a season is designed" (Robert Brufau). In contrast, the Liceu Opera House follows a pattern according to which each season should include some of the titles best known by the public (such as Mozart's 'The Marriage of Figaro'), as well as some other styles, trying not to repeat operas during a period of at least five years. For the Castell de Peralada Festival, the main goal is to offer performances that can be distinguished from the rest of the more than 360 annual festivals organized in Catalonia. The planning priority for the Mercat de les Flors Festival is the thematic line that forms the backbone of the content of the shows to be exhibited during a certain season.

Concerning the issue of economic performance, the interviews have shown a general awareness of the importance of cultural value vs. economic profitability: "It is important to free ourselves from the stigma of profitability per person in purely economic terms when we speak about a cultural value. The danger of looking for economic profitability in a musical program is that we would necessarily tend to simplification and pop. We cannot live only on the symphonies of Beethoven or Bach's Mass" (Robert Brufau, L'Auditori).

As the purpose of the interviews was to select a set of relevant criteria to be used in the design of a season's content (see Table 1), it is important to note that the procedures being applied (e.g., strategic plans, program contracts, etc.) allow for the analysis of the quantifiable results obtained after one or several specific seasons or projects. Thus, in the cultural sector, the results are measured after completing the season's program, and the potential explanatory causes of the results obtained are analyzed and applied to future seasons' programming. Hence, once the results of a season are known, the objectives that need to be further pursued (e.g., revenues, audience, quality, etc.) are established. However, the respondents pointed out the lack of a procedure that relates the extent to which each objective is achieved with the degree of interest each objective has for the institution. Thus, while some objectives would be a priority for a public institution, they would not be for a private one, and vice versa. When an institution has a great variety of shows to choose from, as well as different objectives to be achieved and different priorities, the difficulty of choosing a show is much greater.
In short, the MAUT, which provides a mathematical framework to specify the importance of the criteria and the value of the different alternatives available in order to make efficient decisions, could be very useful in the creative industry for choosing the contents to be included in a season's program.

Criteria

Following the qualitative interviews presented above, in this section we describe the criteria identified as relevant for the programming of a season's content. More specifically, ten criteria were selected: the ones employed by Víctor García, the former Artistic Director of the Palau de la Música Catalana (see also Cabré [52]). The criteria were used to design and evaluate the 'Palau 100' program scheme of the Palau de la Música Catalana. We give below a brief description of each criterion.

1. Quality: The condition of superiority or excellence by which the value of a particular good is judged. This is measured by the musical trajectory of the artists, their contribution to the musical market, national or international recognition, participation in major festivals, the level of technical difficulty required by the repertoire presented, and, lastly, prizes in relevant international festivals.
2. Audience: The opinions, interests, musical or artistic tastes, or hobbies of the audience targeted by each artistic action proposed by the institution, considering that a balance in the programs should be maintained in order to increase and diversify the institution's audience.
3. Attractiveness: The event presented should arouse interest in the group and contain a discourse that is sufficiently provocative or eloquent to motivate public attendance.
4. Dose of Risk (Risky Programming): Beyond pursuing excellence through unique and extraordinary artistic events involving consecrated international artists, the Palau de la Música Catalana is also committed to promoting and projecting emerging local talent, Catalan composers, new creators, and minority genres that have not yet established themselves in the artistic market but are on track to achieve this and become part of the Catalan cultural heritage.
5. Singularity: The distinctive quality for which the Palau is completely exceptional and original, differentiating itself from other institutions within the domain of classical music by programming activities and artists that no other institution includes.
6. Locality: Work with local agents of the territory. The institution must function as a cultural platform that promotes the local talent of the city of Barcelona, working with groups, orchestras, or musical associations as a strategy of inclusion and promotion in the job market.
7. Internationality: To design actions, projects, and shows with personality, that is, to go beyond expectations or present elements that are sufficiently attractive to the public, especially to foreign audiences. This requires international orchestras, which may bring visibility and prestige both nationally and internationally.
8. Education: Generate awareness of cultural activity through training activities in order to build an audience with vocation and criteria, as well as a strategy of audience replacement in years to come.
9. Social Commitment: Generate activities within the program that facilitate access for members of displaced and disadvantaged social groups, as a social inclusion strategy of the institution.
10. Efficient Management: The adequate, optimal, and efficient management of the economic, administrative, organizational, logistic, and functional resources of an institution with a view towards achieving sustainability over time.

A key feature of the MCDM method is adaptability, since the criteria described here can be easily adjusted to different institutions or case studies within the cultural sector, or to any other specific project according to its objectives or preferences. This mechanism is very versatile when making evaluations, since it allows the creation of scenarios in which one can add or remove a criterion or change its weight.

Weights

The weights assigned to the criteria described above are presented in Table 2. These weights were assigned by Mr. García according to the values and mission of the Palau de la Música Catalana when defining the content of a season. Thus, greater weights (25% and 15%) were assigned to criteria considered fundamental, an intermediate weight (10%) to those with an intermediate value, and a low weight (5%) to those that are less relevant but must still be considered.

Indicators

Indicators are the way of evaluating the performance of a season's content with respect to the different criteria. The indicators of criteria 1 to 9 (from quality to social commitment) are defined on a percentage scale ranging from 0% to 100%: a 0% score corresponds to a season program that does not comply at all with the established criterion, and a 100% score to one that fully complies with it. Clear guidelines on how to evaluate each indicator should be established, indicating what circumstances have to occur in order to assign a specific rating (between 0% and 100%) based on the analysis of objective data. That is to say, if the institution has data on, for example, the attractiveness (number of tickets sold) of past performances, these data can be analyzed and correlations between attractiveness and other variables, such as the number of people performing, the style of the performance, etc., can be discovered. Such data gathering and analysis is beyond the scope of this paper.

In order to assess the indicator results for the whole season, including all of the shows, the indicators are first assessed for each show of the season individually. Then, the indicator result for the whole season is calculated as the arithmetic mean of the indicator results of all the individual shows of the season, as presented in Equation (2), where $i$ denotes one of the shows of the season and $n$ is the total number of shows in the season:

$$Indicator_{season} = \frac{1}{n}\sum_{i=1}^{n} Indicator_i. \qquad (2)$$

In the case of criterion 10, efficient management, the indicator is defined as the profit (margin) of the season, that is, the sum of the economic results of each show, whether positive (profits) or negative (losses).

Value Functions

The value function transforms the units of the indicators (in this study, percentages (%) and monetary units (€)) into units of value or satisfaction ranging from 0 (null satisfaction) to 1 (maximum satisfaction). From the possible forms of the value function shown in Figure 1, the increasing linear value function was adopted for criteria 1-3 and 5-9, which means that the higher the result of the indicator, the higher the value. The value function for these criteria is defined in Equation (3):

$$Value = \frac{Indicator}{100\%}. \qquad (3)$$
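Equations (2) and (3) translate directly into code. The short sketch below assumes per-show scores expressed in percent; the numbers are placeholders, not the Palau's data.

```python
# Sketch of Equations (2) and (3): the season-level indicator is the
# arithmetic mean of the show-level scores (in percent), and the linear
# value function maps 0-100% onto a satisfaction value in [0, 1].

def season_indicator(show_scores):
    """Eq. (2): arithmetic mean over the n shows of the season."""
    return sum(show_scores) / len(show_scores)

def linear_value(indicator_pct):
    """Eq. (3): increasing linear value function, 0% -> 0 and 100% -> 1."""
    return indicator_pct / 100.0

quality_by_show = [95, 100, 90, 98]        # placeholder per-show scores (%)
print(linear_value(season_indicator(quality_by_show)))  # ~0.96
```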
Criterion 4, dose of risk, does not follow the previous pattern. A result of 0% for the risk indicator means that all of the shows of the season have a null dose of risk, whereas a 100% dose of risk in a season means that all of the shows of the season have the maximum risk. Neither of the two situations provides the highest value. According to the Palau de la Música, its programming is mainly traditionalist, and the optimal dose of risk for a season is around 40%. Therefore, the unimodal value function defined in Equation (4) is adopted:

$$Value = \begin{cases} \dfrac{Risk}{40\%}, & 0\% \le Risk \le 40\%, \\[4pt] \dfrac{100\% - Risk}{60\%}, & 40\% < Risk \le 100\%. \end{cases} \qquad (4)$$

With respect to criterion 10, the value function has to transform the season's profit into units of value or satisfaction. According to the Palau de la Música, it is assumed that the maximum losses allowed per season are −500,000 euros and the maximum profit allowed per season is 3,000,000 euros. Establishing a maximum economic margin enables the sustainability of the programming: the situation of having an extremely positive economic margin at the expense of serious damage to the rest of the indicators is avoided, since it would surely lead to a loss of audience in the following seasons and, consequently, to those seasons not being economically sustainable. Season programs that would produce margins above or below these limits are initially discarded and not considered in the analysis.

Taking these allowed margins into account, the value function for the efficient (sustainable) management criterion is defined in Equation (5) and Figure 4. The maximum value or satisfaction (1) is obtained when the margin of the season coincides with the maximum margin allowed, and the minimum margin allowed per season provides the minimum value or satisfaction (0):

$$Value = \frac{Margin + 500{,}000}{3{,}500{,}000}, \qquad -500{,}000\ \text{€} \le Margin \le 3{,}000{,}000\ \text{€}. \qquad (5)$$
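The two remaining value functions can be sketched in the same way. Equation (4) is the tent-shaped function peaking at the 40% optimum, and Equation (5) maps the allowed margin range linearly onto [0, 1]; both reconstructions are consistent with the values reported in the next section (0.98 for a 39% dose of risk and 0.06 for a margin of about −307,000 €).

```python
# Sketch of Equations (4) and (5): a unimodal (tent-shaped) value function
# for the dose of risk, with its optimum at 40%, and a linear mapping of
# the season's margin from [-500,000, 3,000,000] euros onto [0, 1].

def risk_value(risk_pct):
    """Eq. (4): 0 at 0% and 100% risk, maximum value 1 at 40%."""
    if not 0.0 <= risk_pct <= 100.0:
        raise ValueError("dose of risk must lie in [0, 100] %")
    if risk_pct <= 40.0:
        return risk_pct / 40.0
    return (100.0 - risk_pct) / 60.0

def management_value(margin_eur, lo=-500_000.0, hi=3_000_000.0):
    """Eq. (5): seasons outside the allowed margins are discarded."""
    if not lo <= margin_eur <= hi:
        raise ValueError("season discarded: margin outside allowed limits")
    return (margin_eur - lo) / (hi - lo)

print(round(risk_value(39.0), 2))            # 0.97-0.98, cf. Table 5
print(round(management_value(-307_000), 2))  # ~0.06, cf. Table 5
```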
Evaluation of the 'Palau 100' Programming Season 2015-2016

The data used to test the model belong to the so-called 'Palau 100' programming scheme, covering the season 2015-2016. 'Palau 100' was implemented with the purpose of becoming a 'label' of excellence by gathering together, each season, the best artists, musicians, and orchestras in the world. A brief description of the programming corresponding to the 'Palau 100' season 2015-2016 is presented in Table 3. The ten qualitative criteria extracted from the analysis of the interviews with the experts are used as parameters of the MCDM model. The weights used to rank the criteria were provided by the former artistic director of the Palau de la Música Catalana, Mr. García.

Table 4 presents the individual evaluation of all the events included in the 2015-2016 season program with respect to each of the 10 criteria of Table 2, by means of the indicators defined in Section 3.2.3. Mr. García did the evaluation based on data from that season; in further implementations, the assessment could be carried out by the different members of the team responsible for programming, either individually or by means of seminars. The indicator results for the whole season were calculated according to Equation (2). The resulting values, weighted values, and SVI are presented in Table 5. The values were calculated according to Equation (3) for criteria 1-3 and 5-9, Equation (4) for criterion 4 (dose of risk), and Equation (5) for criterion 10 (efficient management). Finally, each weighted value was calculated by multiplying the value by the corresponding weight, and the SVI is the sum of all the weighted values, as described in Equation (1).

As previously explained, the resulting SVI must be between 0 and 1, with 0 indicating minimum satisfaction and 1 indicating maximum satisfaction. In this case, the resulting SVI is 0.829, which shows that the 2015-2016 season is highly satisfactory according to the criteria and priorities established by the Palau de la Música Catalana. The season's program for 2015-2016 achieved an excellent performance regarding quality, the criterion considered most important by the Palau de la Música, which reaches a value of 0.97 out of 1, almost the maximum. This means that it would be very difficult to improve the quality with a different season program. The other two most important criteria, audience and attractiveness, with a weight of 15% each, achieved a very good performance, too, with a value of 0.83 each. The next criterion in importance, dose of risk, with 10% of the weight, obtained an indicator result of 39% for season 2015-2016, very similar to the optimal 40% benchmark; it therefore also shows an excellent satisfaction performance, with a value of 0.98 out of 1. The criterion of singularity, with 10% of the weight, also performed very well, with a value of 0.81. Lastly, the least important criteria, each accounting for 5% of the total weight, exhibited good performance values, too: education and social commitment each had a value of 1.
This means that all of the shows of the season included educational activities and people belonging to socially disadvantaged groups. The criterion of internationality obtained a high value (0.83), while the criterion of locality obtained a low value (0.30), which could be improved by including local agents in more shows. The 2015-2016 season ended with losses of around 307,000 €, making efficient management the criterion with the lowest performance, with a value of only 0.06. The poor performance of this criterion has a limited impact on the global SVI, which is high (0.829), due to the low weight of the criterion of efficient management in the programming of the season.
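Combining the weights of Table 2 with the per-criterion values just discussed reproduces the reported index: the weighted sum below gives approximately 0.83, matching SVI = 0.829 up to the rounding of the published values.

```python
# Reproducing the season value index from the per-criterion values
# reported above (order: quality, audience, attractiveness, dose of risk,
# singularity, locality, internationality, education, social commitment,
# efficient management).
weights = [0.25, 0.15, 0.15, 0.10, 0.10, 0.05, 0.05, 0.05, 0.05, 0.05]
values  = [0.97, 0.83, 0.83, 0.98, 0.81, 0.30, 0.83, 1.00, 1.00, 0.06]

svi = sum(w * v for w, v in zip(weights, values))
print(round(svi, 3))  # ~0.830, cf. the reported SVI of 0.829
```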
Sensitivity Analysis

To assess the robustness and stability of the tool developed, a sensitivity analysis was performed considering different alternative scenarios for the programming of the season of the Palau de la Música Catalana. For this purpose, the initial weights assigned to the different criteria (see Table 2) were taken as a starting point; afterwards, some variations were introduced in the weights of several criteria to test how these changes would affect the resulting SVI. In the case of the indicators, no changes were applied, given that the content of the season's program did not vary. Hereafter, we briefly explain the results obtained for each alternative scenario tested.

Sensitivity Analysis 1: Internationality

The first sensitivity analysis (scenario 1) assigned greater importance to the criterion of internationality, that is, to the quota of international artists that allows the institution to gain visibility worldwide. Hence, a weight of 20% (greater than the original 5%) was assigned to this criterion, while the weights of the remaining criteria were redistributed so that they maintained the same proportions as the original weights (Table 2) and the weights of all the criteria summed to 100%. As presented in Table 6, the SVI estimated for this scenario was 0.828, almost the same as in the original setting. This means that if the Palau de la Música Catalana were to prioritize internationality, a season program like the one scheduled in 2015-2016 would still be a very good choice according to the SVI coefficient.

Sensitivity Analysis 2: Locality

To simulate a scenario 2 in which the Palau de la Música Catalana assigned more importance to the participation of local artists (criterion 6), the weight of the locality criterion was increased from 5% (original weight) to 15%. The rest of the weights were redistributed accordingly to preserve proportionality with the original ones and thus ensure that the weights of all the criteria summed to 100%. As shown in Table 6, the SVI resulting from this scenario was 0.774, lower than the original one. As expected, by increasing the importance of a criterion with a bad performance (a low indicator result) for the season analyzed, the new SVI returned a lower coefficient. The estimated SVI implies that if, in an alternative scenario, the Palau de la Música Catalana were to assign more importance to locality, the program of the 2015-2016 season could be improved by including more shows with the participation of local artists.

Sensitivity Analysis 3: Audience, Dose of Risk, and Singularity

A third analysis proposes a scenario 3 in which greater importance is assigned to a risky program (the original weight of 10% for the dose of risk was increased to 15%) and to uniqueness (the weight of the singularity criterion was increased from 10% to 15%), so as to make these work as factors of differentiation that would help build a competitive advantage for the institution. This would simultaneously enable the generation of new audiences, a criterion whose importance also grows in this scenario (from the original 15% to 20%). In sum, the criteria of dose of risk, singularity, and audience each increased their weight by 5 percentage points, while the weights of the remaining criteria were redistributed proportionally to the original weights to ensure a total of 100%. In this case, the obtained SVI was 0.839 (see Table 6), slightly greater than that obtained with the original weights. This means that even if the Palau de la Música Catalana decided to give more importance to audience, dose of risk, and singularity, the program scheduled for the 2015-2016 season would still be a very good one, indeed slightly better than under the original preferences.

In sum, the SVI proved to be quite stable under variation of the weights, as shown by the sensitivity analysis carried out. The robustness of the proposed MCDM method favors its application in strategic management decisions in the cultural sector, offering a useful decision-making tool to managers. They can thus not only test and design optimal programming scenarios for each season, but also implement strategic planning of future seasons' programming and resources, covering a mid- and long-range time period.
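The reweighting rule common to the three scenarios (raise the weights of selected criteria and rescale the remaining ones proportionally so that the total stays at 100%) can be sketched as follows, using the reference weights of Table 2.

```python
# Sketch of the sensitivity-analysis reweighting: the selected criteria
# receive new weights, and the remaining weights are rescaled
# proportionally to their original values so that the total is 100%.

def redistribute(weights, overrides):
    fixed = sum(overrides.values())
    rest = {c: w for c, w in weights.items() if c not in overrides}
    scale = (1.0 - fixed) / sum(rest.values())
    reweighted = {c: w * scale for c, w in rest.items()}
    reweighted.update(overrides)
    return reweighted

base = {
    "quality": 0.25, "audience": 0.15, "attractiveness": 0.15,
    "dose_of_risk": 0.10, "singularity": 0.10, "locality": 0.05,
    "internationality": 0.05, "education": 0.05,
    "social_commitment": 0.05, "efficient_management": 0.05,
}

scenario_1 = redistribute(base, {"internationality": 0.20})
scenario_2 = redistribute(base, {"locality": 0.15})
scenario_3 = redistribute(base, {"audience": 0.20, "dose_of_risk": 0.15,
                                 "singularity": 0.15})
for s in (scenario_1, scenario_2, scenario_3):
    assert abs(sum(s.values()) - 1.0) < 1e-9  # weights still sum to 100%
```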
Discussion and Implications for Management

The research framework, combining qualitative and quantitative methods, and the empirical evidence presented here show that MCDM methods can also be applied to the management of cultural institutions. Our findings indicate that the MCDM method has several advantages for cultural institutions: it allows for the consideration of as many criteria as the decision-maker finds necessary, and the weights assigned to the criteria can be varied depending on their importance and on the institution's priorities, which may vary from year to year. The (possibly contradictory) interests of the various stakeholders involved in the control and governance of cultural institutions can thus be taken into account, and the results obtained are justified with objective parameters. The introduction of these types of decision-making methods in the cultural sector (they are much appreciated in other industry sectors, where managers are subject to continuous industry changes and pressure to obtain positive results) does not imply the substitution of human decisions; rather, it represents an aid that increases the so-called rationality of the agents involved in decision-making processes, who are usually limited by the numerous variables, budget restrictions, and uncertainty involved in the analysis. The sensitivity analysis has shown that the proposed method is robust to changes in the weights (priorities) of the criteria considered, thus allowing for the mid- and long-range design of alternative management scenarios by the managers.

This aspect is particularly important in the cultural sector, where managers can rely only on 'posterior control' mechanisms, whose feedback can be introduced only after observing the results of a season and which do not allow any foresight of the effects of eventual changes. It must also be noted that one of the a priori requisites of this methodological framework is knowing the importance (weights) of the criteria to be applied, as this has a direct impact on the result of the optimization process, that is, on the SVI. Although the information extracted from the interviews with the representatives of the cultural institutions shows that each one deals with this issue in a different way, it also reflects the importance of the managers' own expertise in the field and the nature of the activity analyzed. Furthermore, as the main objective of this type of method is to act as an aid in the decision-making process (and not to become a substitute for it), this framework postulates itself as an optimal combination of human expertise and a greater capacity for data analysis with the help of mathematics and technological innovations (e.g., computer software).

In the same vein, a key element for the successful implementation of this framework of analysis is teamwork: a good understanding, definition, communication, and measurement of the criteria to be used in the analysis, as well as the identification of other potential criteria, require the participation of various administrative departments/areas of the institution (e.g., finance, human resources, marketing, etc.), as well as the expertise of all agents working in the institution and involved in the realization of an artistic event. The interviews with experts in the field have stressed the fact that, in order to adopt more integrative decision-making tools, cultural institutions must have well-defined and clear objectives, mission, and values, as their parameterization would then be much easier. A good analysis and understanding of the audience's preferences and expectations with respect to the quality of the artistic events programmed is another important requisite (e.g., audience surveys, artist and orchestra rankings, box office data, etc., could be used to quantify quality). The training of the internal staff and future managers in these assessment procedures, together with the observed trend towards a more collaborative decision-making framework (involving the artistic director, the manager, and the marketing department), could serve as a preparatory step prior to the adoption of the MCDM method. The "predictive ability" of the MCDM method could also serve as a training tool to estimate future scenarios (with different weights and rankings of criteria) in order to adjust the expected results to the objectives of the institution. Eventually, teamwork will also contribute, in the mid-/long-term, to building a strong organizational culture, enhancing those values and missions that better suit the identity of the cultural institution and thereby increasing its competitive market advantage, given that organizational culture is one of the few resources of any organization that is difficult to imitate.

The use of this type of decision-making tool in performance measurement in the cultural sector could also contribute to increased transparency of the management process and of the results obtained, highlighting eventual financial difficulties and signaling the potential solutions to be applied to ensure long-run sustainability.
Given that most cultural institutions also rely on public funding (subsidies) and/or private resources (donations, etc.), an efficient and transparent management system could contribute to increasing the social impact of their activities by ensuring the accomplishment of all of the objectives chosen by the institution. It is also important to stress that the MCDM method presented here is easily adaptable to any type of organization in the cultural and creative sector, regardless of its organizational structure and size. Last but not least, in the present scenario, with the performing arts threatened by the significant impact of the Covid-19 pandemic, the MCDM method can help cultural managers in the programming of sustainable season contents.
Thermodynamic geometry and phase transition of spinning AdS black holes

Employing the thermodynamic geometry approach, we explore the phase transition of four-dimensional spinning black holes in anti-de Sitter (AdS) space and find the following novel results. (i) Contrary to the charged AdS black hole, the thermodynamic curvature of the spinning AdS black hole diverges at the critical point without needing normalization. (ii) There is a certain region with small entropy in the space of parameters for which the thermodynamic curvature is positive and the repulsive interaction dominates; such behavior exists even when the pressure is extremely large. (iii) The dominant interactions in the microstructure of extremal spinning AdS black holes are strongly repulsive, similar to an ideal gas of fermions at zero temperature. (iv) For the Van der Waals fluid in the supercritical region, the maximum of the thermodynamic curvature, $|R|$, coincides with the maximum of $C_P$, while for the black hole the two maxima are close to each other near the critical point.

I. INTRODUCTION

Thermodynamic fluctuation theory provides a unique frame for the geometrical description of thermodynamic systems in equilibrium. Particular interest goes to the covariant version, known as Ruppeiner geometry [1], which consists of a metric that measures the probability of a fluctuation between two thermodynamic equilibrium states. The Riemannian scalar curvature arising from such a metric, known as the thermodynamic curvature, is a fundamental object in Ruppeiner geometry which contains information about inter-particle interactions. More specifically, a negative (positive) sign of the thermodynamic curvature indicates an attractive (repulsive) interaction between particles, while a zero value means there is no interaction between particles [2-4]. The absolute value of the thermodynamic curvature in the asymptotic critical region is related to the correlation length in fluids [3].

Since the discovery of the entropy and temperature of black holes [5,6], it has been well established that one can regard a black hole as a thermodynamic system characterized by a set of thermodynamic variables. During the past decades, various thermodynamic properties of black holes, especially phase transitions and critical behavior, have been widely studied in the literature [7-10]. In recent years, considerable attention has been devoted to the thermodynamic phase transitions of anti-de Sitter (AdS) black holes in an extended phase space, where the first law of black hole thermodynamics is extended by treating the cosmological constant as a thermodynamic variable [11-17]. The investigations of thermodynamic phase transitions of black holes in the extended phase space have disclosed some interesting phenomena, such as the Van der Waals liquid-vapor phase transition [17], zeroth-order phase transitions [18], reentrant phase transitions [19,20], a triple critical point [21], a superfluid-like phase transition [22], and many others.

In the context of black hole thermodynamics, the thermodynamic curvature of the Ruppeiner geometry provides a powerful tool to explore the microscopic behavior of black holes. The obtained results can also be compared with accessible experimental systems. Thermodynamic curvature has been investigated for various types of black holes (see, e.g., [23-28] and references therein).
It has been disclosed that the thermodynamic curvature does not diverge at the critical point, contrary to the case of fluid systems. Recently, two new normalized thermodynamic curvatures for the charged AdS black hole have been proposed, which diverge at the critical point of the phase transition [29][30][31]. These thermodynamic curvatures are constructed via the heat capacity at constant volume [29,30] and the adiabatic compressibility [31], and have the same behavior for the large black hole. In [31] it was shown that the normalized thermodynamic curvature diverges to positive infinity for extremal black holes. More recently, the behavior of these two normalized thermodynamic curvatures was studied for several different black holes [32][33][34][35][36][37]. In this paper, we explore the thermodynamic phase structure of the four-dimensional rotating AdS black hole. We consider an extended phase space in the pressure (P) and entropy (S) plane, in which the small-like and large-like black holes are separated by the maximum of the specific heat at constant pressure in the supercritical region. Besides, we provide simple analytical expressions for the critical quantities. From the thermodynamic fluctuation metric in the entropy representation, we obtain a Ruppeiner line element of rotating AdS black holes in the pressure-entropy coordinates, which is also valid for ordinary thermodynamic systems, such as the simple Van der Waals fluid. Then, by using the thermodynamic curvature, we explore the microscopic properties of the system and compare them with those of the Van der Waals fluid. In particular, we investigate the behavior of the maximum of the specific heat at constant pressure and the minimum of the thermodynamic curvature for these systems in the supercritical region. We find that, for both cases, the thermodynamic curvature diverges at the critical point and goes to positive infinity for extremal black holes. Finally, the critical behavior of the thermodynamic curvature along the characteristic curves is studied and the corresponding critical exponents are calculated. The rest of the paper is organized as follows. In Sec. II, we first give a brief review of the thermodynamics of the four-dimensional rotating AdS black hole in the extended phase space and then determine the thermodynamic phase structure in the P-S plane. Next, we obtain the Ruppeiner metric in (P-S) coordinates and, using this, we study in detail the microscopic properties of the black hole and of the Van der Waals system in Sec. III. Section IV is devoted to investigating the thermodynamic curvature near the critical region. In Sec. V, we present our summary and discussion. In the Appendix we calculate the thermodynamic curvature of the Van der Waals system using the Ruppeiner metric in (P-S) coordinates.

II. THERMODYNAMIC PHASE STRUCTURE

Let us begin with a brief review of the thermodynamics of single spinning AdS black holes in four dimensions, based on Refs. [12,38]. The mass of the Kerr-AdS black hole in the presence of the pressure P is [12]

$$M(S,P,J)=\frac{1}{2}\sqrt{\frac{S}{\pi}\left(1+\frac{8PS}{3}\right)^{2}+\frac{4\pi J^{2}}{S}\left(1+\frac{8PS}{3}\right)}, \qquad (1)$$

where S and J are the entropy and angular momentum, respectively.
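As a sanity check on Eq. (1), rewritten here in the standard extended-phase-space form familiar from Dolan-type analyses (treat this form as an assumption of the sketch), the J -> 0 limit should reduce to the Schwarzschild-AdS mass with r = sqrt(S/pi):

import sympy as sp

S, P, J = sp.symbols('S P J', positive=True)

# Kerr-AdS enthalpy, Eq. (1), assumed standard form
M = sp.sqrt(S / sp.pi * (1 + 8*P*S/3)**2
            + 4 * sp.pi * J**2 / S * (1 + 8*P*S/3)) / 2

# J -> 0 should give the Schwarzschild-AdS mass M = (r/2)(1 + r^2/l^2):
M_schw = sp.sqrt(S / sp.pi) * (1 + 8*P*S/3) / 2
# Compare squares (both sides are positive, so this implies equality)
print(sp.simplify(M.subs(J, 0)**2 - M_schw**2))  # -> 0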
By identifying the black hole mass with the enthalpy, the first law of thermodynamics reads

$$dM = T\,dS + \Omega\,dJ + V\,dP, \qquad (2)$$

where T is the Hawking temperature, Ω the angular velocity and V the thermodynamic volume, which are given by

$$T=\left(\frac{\partial M}{\partial S}\right)_{J,P}=\frac{1}{8\pi M}\left[\left(1+\frac{8PS}{3}\right)\left(1+8PS\right)-\frac{4\pi^{2}J^{2}}{S^{2}}\right], \qquad (3)$$

$$\Omega=\left(\frac{\partial M}{\partial J}\right)_{S,P}=\frac{\pi J}{M S}\left(1+\frac{8PS}{3}\right), \qquad (4)$$

$$V=\left(\frac{\partial M}{\partial P}\right)_{S,J}=\frac{2}{3\pi M}\left[S^{2}\left(1+\frac{8PS}{3}\right)+2\pi^{2}J^{2}\right]. \qquad (5)$$

The internal energy U is obtained from M via the Legendre transformation, $U = M - PV$. In this representation, the first law of black hole thermodynamics is written as

$$dU = T\,dS + \Omega\,dJ - P\,dV. \qquad (6)$$

Now, we turn to study the critical behavior of the rotating AdS black hole by investigating the specific heat at constant pressure,

$$C_{P}=T\left(\frac{\partial S}{\partial T}\right)_{J,P}, \qquad (7)$$

where we have also fixed J. For constant J, the critical point $P = P_c$ is determined by the inflection-point conditions

$$\left(\frac{\partial T}{\partial S}\right)_{J,P}=0, \qquad \left(\frac{\partial^{2} T}{\partial S^{2}}\right)_{J,P}=0. \qquad (8)$$

Using the temperature formula in Eq. (3), the critical quantities can be obtained analytically. These quantities are numerically the same as the ones found in Ref. [39]; here we present their analytical expressions for the first time in a compact form. For $P > P_c$, the specific heat at constant pressure is positive, i.e., the black hole is thermodynamically stable. However, below $P_c$ there exists a certain range of quantities for which the specific heat at constant pressure is negative ($C_P < 0$). This corresponds to a thermodynamic instability of the black hole, which is remedied by the Maxwell equal area construction, $\oint V\,dP = 0$, indicating a first-order phase transition between small and large black holes. The region of the first-order phase transition, obtained from the Maxwell construction, is identified in the P-S plane in Fig. 1. The small and large black hole phases are located at the left and right of the shaded region, respectively. In Fig. 1, the extremal black hole curve (corresponding to zero temperature) is denoted by the gray dashed line and the critical point is indicated by a black solid circle. The region to the left of the gray dashed curve is physically excluded because the temperature becomes negative. For the supercritical region, which lies at higher pressures and entropies than the critical point, we illustrate the local maximum of the specific heat at constant pressure ($C_P$) in Fig. 1 by the purple dotted line. The local maximum of $C_P$ commences from $(\tilde{P}, \tilde{S}) \approx (1.69, 1.45)$ and terminates at the critical point, where it goes to infinity; here $\tilde{P} = P/P_c$ and $\tilde{S} = S/S_c$ are the reduced pressure and entropy, respectively. This curve can be viewed as an extension of the coexistence line, which divides the supercritical region into two phases [40,41]. Here, the small-like and large-like black holes are separated by the local maximum of $C_P$ in the supercritical region beyond the critical point.

III. THERMODYNAMIC CURVATURE

To set up a thermodynamic Riemannian geometry, we consider the rotating AdS black hole in the canonical (fixed J) ensemble of the extended phase space, so that its thermodynamic state is specified by the internal energy U and volume V. The line element of the geometry, which characterizes the distance between thermodynamic states, is given by [1]

$$\Delta l^{2}=-\frac{\partial^{2} S}{\partial x^{\mu}\,\partial x^{\nu}}\,\Delta x^{\mu}\Delta x^{\nu}, \qquad (9)$$

where S is the entropy and $x^{\mu} = (U, V)$. Using the first law for the rotating AdS black hole, Eq. (6), and the Maxwell relation, one can express the line element Eq. (9) as follows¹:

$$\Delta l^{2}=\frac{1}{T}\left(\frac{\partial T}{\partial S}\right)_{P,J}\Delta S^{2}-\frac{1}{T}\left(\frac{\partial V}{\partial P}\right)_{S,J}\Delta P^{2}. \qquad (10)$$

By computing the Riemannian curvature scalar R (the thermodynamic curvature) from this metric, one can get some information about the inter-particle interactions in the thermodynamic system. In particular, a positive (negative) sign of the thermodynamic curvature indicates that the dominant interaction is repulsive (attractive) [2][3][4]. On the other hand, R = 0 shows there is no interaction in the system [42].
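Before proceeding, the conjugate quantities of Eqs. (3)-(5) and the specific heat of Eq. (7) can be checked by direct differentiation of the assumed form of M; a brief, self-contained sympy sketch (the spot-check point is arbitrary):

import sympy as sp

S, P, J = sp.symbols('S P J', positive=True)
M = sp.sqrt(S / sp.pi * (1 + 8*P*S/3)**2
            + 4 * sp.pi * J**2 / S * (1 + 8*P*S/3)) / 2  # Eq. (1), as above

T = sp.diff(M, S)        # Hawking temperature, Eq. (3)
Omega = sp.diff(M, J)    # angular velocity, Eq. (4)
V = sp.diff(M, P)        # thermodynamic volume, Eq. (5)
C_P = T / sp.diff(T, S)  # C_P = T (dS/dT)_{J,P}, Eq. (7)

# Spot-check the quoted closed form of the temperature, Eq. (3)
T_closed = ((1 + 8*P*S/3) * (1 + 8*P*S) - 4*sp.pi**2*J**2/S**2) / (8*sp.pi*M)
pt = {S: 10, P: sp.Rational(1, 100), J: 1}
print(sp.N((T - T_closed).subs(pt)))  # -> 0 (up to numerical precision)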
In what follows, we examine the behavior of the thermodynamic curvature for the rotating AdS black hole and the Van der Waals fluid.

¹ Although this line element is derived for the rotating AdS black hole, it remains valid for an ordinary thermodynamic system [43].

For the four-dimensional rotating AdS black hole, the thermodynamic curvature is readily calculated in closed form; its explicit expression involves a complicated function $B(\tilde{S}, \tilde{P})$ of the reduced pressure $\tilde{P}$ and entropy $\tilde{S}$, with $\tilde{T} = T/T_c$ the reduced temperature. Note that R is proportional to the inverse of the angular momentum in the reduced parameter space. The behavior of R is depicted in Fig. 2 as a function of $P/P_c$ and $S/S_c$. One can see from Fig. 2 that R is positive in some region of the parameter space. From this expression, R diverges at $\tilde{T} = 0$ and at $(\partial \tilde{T}/\partial \tilde{S})_{\tilde{P}} = 0$, corresponding to the extremal black holes and to the diverging specific heat at constant pressure, respectively. In order to examine the thermodynamic curvature more closely, we plot in Fig. 3 the vanishing (brown dotted line) and diverging (gray dashed line) curves of R as well as the transition curve (light blue solid line) of small and large black holes and the local maximum of $C_P$ (purple dotted line), which were shown already in Fig. 1. In Fig. 3, the shaded regions represent positive values of R, where the dominant interaction is repulsive. In contrast, R is negative everywhere outside the shaded regions, indicating a dominant attractive interaction. Remarkably, the transition and diverging curves coincide at the critical point, which is highlighted by a black spot. This situation also occurs for ordinary thermodynamic systems [24]. The white area to the left of the gray dashed line on the left side of the figure is excluded because of a negative temperature. One can see from Fig. 3 that the associated R for the large black hole phase is negative. However, for the small black hole phase there exists a certain region with positive R, which is also present in the higher-pressure regime. In this region, when approaching the gray dashed curve from above, R diverges to $+\infty$ and the dominant interaction becomes strongly repulsive. The inset in Fig. 3 reveals the existence of a region with negative R inside the shaded region when $\tilde{P}$ is greater than $\approx 242.78$. Moreover, in Fig. 3 we also display the local minimum of R in the supercritical region by the thin green line, which begins from $(\tilde{P}, \tilde{S}) \approx (1.41, 1.38)$ and ends at the critical point, where R goes to negative infinity. In Fig. 4, we depict the coexistence curve (light blue solid line) of the Van der Waals vapor-liquid phase transition and the maximum of $C_P$ (purple dotted line), as well as the diverging (gray dashed line) and minimum (thin green line) curves of R, where the expression for R is given in the Appendix. According to Eq. (A4) and Fig. 4, R is negative everywhere, indicating a dominant attractive interaction among the molecules. The coexistence and diverging curves coincide at the critical point, which is marked by a black dot. Furthermore, as also seen in Fig. 4, the maximum of $C_P$ and the minimum of R curves match each other in the supercritical region. For the region below the coexistence curve, the Van der Waals model is inapplicable, so it is not considered here.

IV. CRITICAL PROPERTIES

To further clarify the critical behavior of the thermodynamic curvature of the rotating AdS black hole and the associated critical exponent, we investigate the thermodynamic curvature along characteristic curves around the critical point.
To do so, in Fig. 5 we illustrate R along its minimum and along the maximum-of-$C_P$ curve, as well as along the transition curve of small and large black holes, in the neighborhood of the critical temperature. As evident from the figure, the large black hole sits at higher |R| than the small black hole and, upon approaching the critical point, R in both phases diverges as

$$|R| \sim |t|^{-2},$$

with a universal critical exponent of 2, where $t = T/T_c - 1$ is the deviation from the critical temperature. In the supercritical regime, the local minimum of R and the maximum of $C_P$ curves are close together in thermodynamic curvature and they diverge from above $T_c$ as $|R| \sim t^{-2}$, implying a critical exponent of 2. For the Van der Waals fluid, the thermodynamic curvature of the vapor and the liquid along the coexistence curve near the critical temperature has the same form, $|R| \sim |t|^{-2}$. Moreover, upon approaching the critical point from above along the minimum-of-R and maximum-of-$C_P$ curves, R again diverges with the exponent 2, $|R| \sim t^{-2}$.

V. SUMMARY AND DISCUSSION

The thermodynamic geometry of black holes provides a powerful tool to explore the microscopic structure of these systems and disclose the nature of the interactions between their constituent particles. In this paper, we have presented simple exact analytical expressions for the critical quantities of the Kerr-AdS black holes and constructed the phase diagram in the pressure-entropy parameter space, where the small black hole and large black hole phases are separated by a first-order phase transition region below the critical point. Based on the locus of the maxima of the specific heat at constant pressure, we divided the supercritical region into small-like and large-like black hole regions. Indeed, the line of maxima is used as the Widom line, which is characterized by the maximum of the correlation length. In addition, starting from the Ruppeiner geometry in the entropy representation, we have derived the thermodynamic metric for the Kerr-AdS black holes in the pressure-entropy coordinates, which is also valid for any ordinary thermodynamic system. We have explicitly shown that, contrary to the charged AdS black hole [43], the thermodynamic curvature of the Kerr-AdS black hole diverges at the critical point, without needing normalization. Compared to the simple Van der Waals fluid, which has negative thermodynamic curvature everywhere, we have found that there is a certain region for the spinning AdS black holes with small entropy in the space of parameters for which the thermodynamic curvature is positive and the repulsive interaction dominates. Such behavior exists even when the pressure is extremely large. Another distinction is that the dominant interactions in the microstructure of extremal Kerr-AdS black holes are strongly repulsive, similar to an ideal gas of fermions at zero temperature [2]. Taking into account the fact that the magnitude of the thermodynamic curvature is related to the correlation length, we have used the locus of the maximum of |R| to characterize the Widom line. We have found that the maxima of |R| coincide with the maxima of $C_P$ for the Van der Waals fluid in the supercritical region, while for the black hole they are close to each other near the critical point. Finally, we determined the critical behavior of the thermodynamic curvature of the spinning AdS black hole and found that it is governed by a universal critical exponent of 2, which is the same as for the Van der Waals fluid.
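The exponent quoted above can be read off numerically from a log-log fit; here is a small illustration with synthetic data generated to follow $|R| \propto t^{-2}$ (the values are made up, standing in for samples taken along the Widom line):

import numpy as np

t = np.logspace(-4, -1, 40)            # reduced temperature t = T/T_c - 1
absR = 0.5 * t**-2.0                   # synthetic |R| obeying the claimed law
slope, _ = np.polyfit(np.log(t), np.log(absR), 1)
print(round(-slope, 3))                # -> 2.0, the critical exponent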
It would be interesting to study reentrant phase transitions and universal properties of higher-dimensional rotating AdS black holes by employing the thermodynamic Riemannian geometry based on the fluctuations of the entropy and pressure.

Note Added: When this work was completed, we learned that another article [45] had addressed the same issue, where it was shown that the thermodynamic curvature has a different behavior at small entropy. However, our results differ from [45] in that we find a region within the repulsive-interaction area in which the thermodynamic curvature takes negative values.

Appendix A: Van der Waals model

In this Appendix, we calculate the thermodynamic curvature of the Van der Waals fluid in the P-S plane. The specific Helmholtz free energy of the Van der Waals fluid, which contains two parameters (a, b) reflecting the intermolecular interaction and molecular size effects, is given by [44]

$$f(T,v)=-\frac{a}{v}-T\left[\ln\!\left((v-b)\,T^{3/2}\zeta\right)+1\right], \qquad (A1)$$

where $\zeta = (m/2\pi)^{3/2}$, m is the mass of an atom, and we set $k_B = 1$. Here, T and v are the temperature and specific volume, respectively. It is important to note that $v > b$. Using Eq. (A1), the pressure and entropy are obtained as

$$P=-\left(\frac{\partial f}{\partial v}\right)_{T}=\frac{T}{v-b}-\frac{a}{v^{2}}, \qquad S=-\left(\frac{\partial f}{\partial T}\right)_{v}=\ln\!\left((v-b)\,T^{3/2}\zeta\right)+\frac{5}{2}, \qquad (A2)$$

and the pressure can be expressed in terms of the reduced thermodynamic variables as

$$P=\frac{s}{(v-b)^{5/3}}-\frac{a}{v^{2}},$$

where $s \equiv e^{(2S-5)/3}/\zeta^{2/3}$ and S is the entropy. The critical quantities are

$$P_{c}=\frac{a}{27b^{2}},\qquad v_{c}=3b,\qquad s_{c}=\frac{2^{11/3}a}{27b^{1/3}},\qquad T_{c}=\frac{8a}{27b}. \qquad (A3)$$

Using the line element in (P-S) coordinates, Eq. (10), the thermodynamic curvature is obtained in closed form as Eq. (A4); notably, it is independent of a and b.
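The critical values in Eq. (A3) can be checked directly from the equation of state in Eq. (A2); a short sympy verification, with $k_B = 1$ as in the appendix:

import sympy as sp

T, v, a, b = sp.symbols('T v a b', positive=True)
P = T / (v - b) - a / v**2           # equation of state from Eq. (A2)

# Critical point: inflection of the isotherm, dP/dv = d2P/dv2 = 0
sol = sp.solve([sp.diff(P, v), sp.diff(P, v, 2)], [v, T], dict=True)[0]
vc, Tc = sol[v], sol[T]
Pc = sp.simplify(P.subs({v: vc, T: Tc}))
print(vc, Tc, Pc)                    # -> 3*b, 8*a/(27*b), a/(27*b**2)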
2021-07-08T01:16:27.806Z
2021-07-03T00:00:00.000
{ "year": 2021, "sha1": "07a43eeb54fe1a741c0f9ac10881332f76e48af4", "oa_license": "CCBY", "oa_url": "http://link.aps.org/pdf/10.1103/PhysRevD.104.104066", "oa_status": "HYBRID", "pdf_src": "Arxiv", "pdf_hash": "07a43eeb54fe1a741c0f9ac10881332f76e48af4", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics" ] }
221590869
pes2o/s2orc
v3-fos-license
Structural brain damage and visual disorders in children with cerebral palsy due to periventricular leukomalacia

Highlights
• There is a strong correlation between brain lesion severity and visual function, evident also with structural MRI.
• The validity of the semi-quantitative MRI scale published by Fiori et al. (2014) is confirmed.
• There is a frequent association of PVL with thalamic lesions, with important repercussions on visual function.

Introduction

Children with cerebral palsy (CP) very often have visual perceptual disorders, including those that involve the dysfunction of the anterior, the posterior, or a combination of both visual pathways (Dutton and Jacobson, 2001; Good et al., 1994, 2001; Schenk-Rootlieb et al., 1994). Due to their relevance, visual perceptual disorders are considered a core symptom of CP, rather than an associated symptom (Rosenbaum et al., 2007). Cerebral Visual Impairment (CVI) is a major cause of low vision worldwide, and about 60-70% of children with CP manifest it (Schenk-Rootlieb et al., 1994). Causes of CVI include dysfunction of the posterior visual pathways (including the optic radiations, occipital cortex and visual associative areas). Its reported prevalence in the CP population varies greatly among studies, very often depending on the selection criteria (population-based studies, baseline data for clinical trials, and the like) and on the source of clinical information (registers, hospital records, direct testing) (Guzzetta, 2014). It is not surprising that the incidence of CVI within the CP population is so high, since the brain areas whose injury determines the motor deficit resulting in CP are anatomically close to the distributed network of brain areas responsible for visual perception (Schenk-Rootlieb et al., 1994). Periventricular leukomalacia (PVL) is the lesion that most clearly illustrates this relationship between brain damage, CP and CVI, given that it is known to involve both the corticospinal tract and the visual pathways, mostly including the optic tracts, posterior thalamus and optic radiations (Jacobson and Dutton, 2000; Lanzi et al., 1998). In children with PVL, posterior visual pathway dysfunction manifests through visual field abnormalities, reduced visual acuity, refractive errors, altered contrast sensitivity, abnormal stereopsis and optokinetic nystagmus. However, impairment of the oculomotor system is also typical in children with PVL, determining strabismus and disorders of fixation, following and saccadic movements. Finally, PVL may also determine a typical fundus oculi appearance, with a pale, cupped optic disc, reflecting an atypical form of secondary optic nerve hypoplasia (Ruberto et al., 2006). In recent years, there has been an increased interest in the understanding of the critical areas for the development of visual function, which seems to depend on the integrity of an enlarged network that includes not only the optic radiations and the primary visual cortex but also other cortical and subcortical areas, such as the frontal or temporal lobes or the basal ganglia (Ramenghi et al., 2010). However, despite its frequency and clinical relevance, there are few reports, generally including small samples, about the relationship between visual dysfunction and brain lesion characteristics in children with CP due to PVL (Uggetti et al., 1996).
Furthermore, there has been increasing evidence that the thalami are often primarily or secondarily affected in children with PVL (Lin et al., 2001) and that they play a role in the development of visual function (Rushmore et al., 2005). In this sense, congenital brain lesions are a fascinating model to study the relationship between brain structure and function through dysfunction. The purposes of the present study were: a) to explore the relationship between brain lesion severity and visual function in a large sample of children with PVL; b) to define the possible role of specific brain areas and structures in fixation, following, saccades, nystagmus, visual acuity, visual field, stereopsis and colour perception in the same group of children with PVL.

Participants

Participants were recruited at the IRCCS Stella Maris Foundation in Pisa, Italy, a research hospital devoted to neurodevelopmental disability. Children who had undergone at least one MRI after three years of age and a comprehensive visual assessment were considered eligible. The study was approved by the Ethics Committee of the Meyer Hospital. Informed parental consent was obtained for all participants.

Procedure

All children underwent an assessment of visual function at the vision laboratory of the hospital. Children were assessed by a well-trained developmental therapist (G.P.) and a paediatric neurologist (F.T.), routinely involved in the clinical evaluation of children with CP and visual disorders. Brain lesions were scored using a semi-quantitative MRI scale (sqMRI scale; Fiori et al., 2014, 2015). The scale was applied by a child neurologist (S.F.), supported by a neuroradiologist (R.P.) when needed (Fiori et al., 2014). Both MRI raters were blinded to the outcome of the visual assessment.

Visual score

All children underwent a battery of age-specific tests assessing visual function, including fixation, following, saccades, nystagmus, visual acuity, visual field, stereopsis and colour perception. Fixation was tested by observing the ability of the child to fix on a black/white or coloured target. Following was tested by observing the ability of the child to follow a coloured target horizontally, vertically and in a full circle. Saccades were tested by observing the ability of the child to move his/her eyes quickly from one target to another. Nystagmus, a condition in which the eyes move rapidly and uncontrollably, was assessed by observing the children's eyes. Acuity was assessed binocularly by means of the Teller acuity card procedure (Teller et al., 1986). This method is based on an inborn preference for a pattern (black and white gratings of decreasing stripe widths depicted on cards) over a uniform field. The threshold of acuity is taken as the minimum stripe width to which the subject consistently responds. Acuity values were compared to age-specific normative data reported in the literature (van Hof-van Duin et al., 1992). A result within 2 standard deviations was considered normal. Binocular visual fields were assessed using kinetic perimetry, according to the technique described in detail by van Hof-van Duin (van Hof-van Duin et al., 1992). The apparatus consists of two 4-cm-wide black metal strips, mounted perpendicularly to each other and bent to form two arcs, each with a radius of 40 cm. The child is held sitting or lying in the centre of the arc perimeter, with the chin supported.
During central fixation of a 6°-diameter white ball, an identical target is moved from the periphery towards the fixation point, along one of the arcs of the perimeter, at a velocity of about 3°/s. Eye and head movements towards the peripheral ball are used to estimate the outline of the visual fields. Age-specific normative data are reported in the literature (Wilson et al., 1991). Stereopsis is the highest form of binocular coordination that can be assessed (Afsari et al., 2013). In this study it was evaluated by means of the Frisby Stereopsis Screening Test (Frisby et al., 1996). Briefly, the participant's task is to detect a circle containing a pattern of geometric objects (target) visible within a mosaic of similar geometric shapes. The target and background are printed on opposite sides of a Perspex plate, and so differ in physical depth. The Frisby test comprises three plates, each of which can be presented at one of several different possible distances to obtain a range of disparities. A positive result is recorded if the subject's scanning eye movements stop consistently at the correct target upon repeated testing. Stereopsis values are expressed in sec/arc. Participants who could not identify the target at 600 sec/arc were classified as stereo-negative. Colour perception was evaluated by means of the Color Vision Test Plates for Infants. The child has to recognize the picture made up of differently coloured circles and, if he/she is not able to talk, can indicate with the hand or eyes in which part of the book the picture named by the observer is located. Each item is scored 0 if the function is not compromised or 1 when there is an impairment. A visual total score (VTS) was obtained from the sum of all of the items, ranging from 0 to 8. See the Table in the Supplementary Materials for a detailed description of visual severity scores in the population.

MRI assessment

MRIs were classified according to a previously described, reliable and validated semi-quantitative scale for assessing brain lesion severity in children with cerebral palsy (Fiori et al., 2014). According to the scoring procedure described by Fiori and collaborators (Fiori et al., 2014), the brain lesion is graphically represented onto a six-axial-slices template. Raw scores for each lobe, the subcortical structures (basal ganglia, thalamus, posterior limb of the internal capsule (PLIC) and brainstem), the corpus callosum and the cerebellum are calculated. The scoring procedure results in summary scores for the right and left hemispheres: lobar score (frontal, parietal, temporal, occipital; bilateral maximum score of 6 for each lobe); hemispheric score (the sum of the frontal, parietal, temporal and occipital scores; bilateral maximum score of 24); subcortical score (the sum of the lenticular, caudate, PLIC, thalamus and brainstem scores; bilateral maximum score of 10); global score (the hemispheric summary score on both sides plus the corpus callosum and cerebellum scores; maximum score of 40), with higher scores representing more severe pathology. For the purposes of this study, all scores were calculated as bilateral. See the Table in the Supplementary Materials for a detailed description of brain lesion severity scores in the population. Examples of the scoring sheets are provided in Supplementary Figs. 1 and 2.
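A minimal sketch of how the summary scores could be tallied programmatically follows; the per-structure maxima for the corpus callosum and cerebellum (0-3 each) are inferred from the quoted totals (24 + 10 + 3 + 3 = 40) and should be checked against Fiori et al. (2014).

def sqmri_summary(lobes, subcortical, corpus_callosum, cerebellum):
    """Summary scores of the sqMRI scale from bilateral raw scores.

    lobes: dict of bilateral lobar scores, each 0-6, e.g.
        {'frontal': 2, 'parietal': 4, 'temporal': 1, 'occipital': 3}
    subcortical: dict with 'lenticular', 'caudate', 'plic', 'thalamus',
        'brainstem' (bilateral, summing to at most 10)
    corpus_callosum, cerebellum: assumed 0-3 each (see note above)
    """
    hemispheric = sum(lobes.values())    # bilateral maximum 24
    sub = sum(subcortical.values())      # bilateral maximum 10
    global_score = hemispheric + sub + corpus_callosum + cerebellum  # max 40
    return {'hemispheric': hemispheric, 'subcortical': sub,
            'global': global_score}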
Statistical analyses

The hemispheric score, the subcortical score, the global score and the visual total score were first included in the correlation analysis; we further explored correlations between single lobes (frontal, temporal, parietal and occipital) and the visual total score. For the correlation analyses, we used the Spearman (rho) correlation coefficient and evaluated statistical significance considering a Bonferroni adjustment of the p-value to account for multiple comparisons, thus setting the significance level for those comparisons at p < 0.007. The relationships between single subcortical structures (lenticular, caudate, PLIC, thalamus and brainstem scores) and the visual total score, and between the corpus callosum and cerebellum and the visual total score, were also explored. Moreover, each item of the visual assessment was included separately in the analysis, to check for differences in brain lesion site and severity underlying the item dysfunction (normal/abnormal). In order to better understand the role of brain lesion site on each visual function item, a between-group t-test was performed by splitting the sample according to the abnormality of one specific function (i.e. those subjects with score 0 or 1 at that specific item). Mean and standard deviation (SD) were used to describe the global MRI score and the hemispheric score, and Student's independent t-test was used for comparisons between groups. Subcortical scores were reported as median and 25th-75th percentile because of their skewed distribution, and comparisons between groups were performed using the Mann-Whitney U test (see Table 1). The correlations between gestational age and lesion scores (hemispheric, subcortical, global) and between gestational age and the visual total score were evaluated using the Spearman (rho) correlation coefficient. In the same way, we analyzed the correlation between age at test and the visual total score. P-values < 0.05 were considered statistically significant.

Demographics

Ninety-four children (57 males, 37 females) with CP and a brain MRI indicating PVL were recruited. One subject was excluded because of complete blindness due to a prematurity-related stage III retinopathy, and 21 other subjects were excluded because of the lack of a visual assessment within 1 year from the MRI exam. This is a retrospective study, which explains the high percentage of excluded subjects (more than 23%). The final sample consisted of 72 children with cerebral palsy (42 males and 30 females), mean gestational age 32.4 weeks (range 24-40; SD 4.6 weeks), mean age at visual assessment 5.6 years (range 3.2-14.4 yrs; SD 3.4 yrs), mean age at MRI 5.8 years (range 3-14.4 yrs; SD 3.7 yrs). We considered the visual assessment performed at the same age as the MRI, or the nearest to the MRI exam. Regarding the Gross Motor Function Classification System, the subjects were distributed as follows: I = 7; II = 21; III = 20; IV = 17; V = 4; unknown = 3. Fifteen subjects (21%) had no visual disorder, 4 (5%) had solely a peripheral visual disorder, 15 (21%) had a cerebral visual disorder and 38 (53%) had a mixed visual disorder.

Correlation between visual total score and single lobe scores

Concerning cortical lesion location, occipital lobe damage positively correlated with the VTS (p < .001; rho = 0.443), while no significant correlation was found with frontal, temporal and parietal lobe involvement.
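The correlation step described above maps onto a few lines of scipy; the arrays below are random stand-ins for the per-child scores, and the count of 7 comparisons is an assumption chosen to reproduce the quoted 0.007 threshold (0.05/7 ≈ 0.0071).

import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(0)
vts = rng.integers(0, 9, size=72)            # visual total score, 0-8
global_score = rng.integers(0, 41, size=72)  # sqMRI global score, 0-40

rho, p = spearmanr(global_score, vts)
alpha = 0.05 / 7                             # Bonferroni-adjusted threshold
print(rho, p, p < alpha)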
Correlation between visual total score and corpus callosum and cerebellum

No correlation was found between the corpus callosum and the VTS, nor between the cerebellum and the VTS.

Comparison of MRI scores between subjects with normal and abnormal scores for each visual item

Global MRI score mean values were significantly higher in children with an impairment in all items except nystagmus (see p-values in Table 1). Hemispheric severity scores were significantly higher (corresponding to more severe lesions) in children with impaired visual acuity, visual field, stereopsis and colour perception (see p-values in Table 1). Subcortical score values were significantly higher in children with an impairment in all the visual items except nystagmus (see p-values in Table 1).

Correlation between gestational age and lesion and VTS, and age and VTS

We found no correlation between gestational age and lesion severity, between gestational age and the visual total score, or between age and the visual total score.

Discussion

To our knowledge, this is the first study that correlates brain lesion severity and visual function impairment in a large sample of children with PVL. First of all, we found that the most compromised visual items in our sample were ocular motricity (including fixation 71%, following 77.7% and saccades 80.5%), visual acuity (65.3%) and stereopsis (74%). These results are in close agreement with the previously published literature and with the most recent paper by Fazzi and colleagues (Fazzi et al., 2012), where the authors found an impairment in particular of saccadic movements and visual acuity in a sample of 129 children with cerebral palsy. The semi-quantitative MRI scale for assessing brain lesion severity demonstrated a relationship with visual function measures in children with CP due to PVL. The three summary scores (global, hemispheric and subcortical) of the semi-quantitative brain MRI scale positively correlated with the visual total score with high statistical significance. This means that children with more severe brain lesions have more severe visual dysfunction. Further, considering single lobes, only occipital lobe lesion severity correlated with the visual total score. The occipital cortex is indeed where the primary visual cortex (V1) is localized. In primates, including humans, the perception of visual information is mediated by a pathway from the retina to the primary visual cortex (V1, striate cortex) via the lateral geniculate nucleus (LGN) of the thalamus and the optic radiations (Felleman and Van Essen, 1991). V1 contains a detailed map of the whole visual field and is the first station in which binocular cells are present; in the macaque monkey, cells with strong orientation and direction selectivity were also described (Hubel and Wiesel, 1968). Moreover, during evolution, the expansion of the primary visual cortex is associated with an increase in visual acuity (Mazade and Alonso, 2017). From V1, the visual information is then distributed to extrastriate cortical areas following two parallel pathways: the dorsal visual stream, which progresses to the parietal cortex via the middle temporal area (MT) and mediates visually guided behaviors; and the ventral visual stream, which reaches the temporal cortex via areas V2, V3 and V4, and mediates object perception (Goodale and Milner, 1992).
On the other hand, among the subcortical structures, a strong positive correlation with the visual total score was found for the thalamic severity scores, and a tendency for the PLIC. In the visual system, the LGN of the dorsal thalamus is the gateway through which information reaches the cerebral cortex. The thalamus is a nexus connecting the subcortical and cortical oculomotor centres that orchestrate the coordination of the voluntary and reflexive eye movements necessary for coherent visually guided behaviour. An oculomotor function for the central and posterolateral thalamus has been suggested by animal studies showing that saccades can be elicited by electrical stimulation of thalamic nuclei and that single units in them are active in relation to saccades (Schlag-Rey and Schlag, 1984). Abnormalities of voluntary and visually triggered saccades have been reported in patients with acute thalamic lesions, even if recently Rafal et al. (2014) demonstrated that the thalamus is involved in the control of fixation for visually triggered, but not for voluntary, saccades. Moreover, the link between the thalamus and the PLIC can be easily understood, since the retrogeniculate part of the internal capsule contains fibers of the optic system coming from the LGN of the thalamus and, more posteriorly, becomes the optic radiation. In light of these premises, the results of the comparison of MRI scores and single items of visual function (normal or impaired) are logical. It is of great interest to analyze the differences we found between the hemispheric and the subcortical scores. The comparison of hemispheric scores was statistically significant only for visual acuity, visual field, stereopsis and colour, indicating that these functions might recruit the primary visual cortex. Instead, when we considered the subcortical scores, a statistically significant difference was found for all visual items except nystagmus; however, all subjects with subcortical impairment also had hemispheric involvement (see the Table in the Supplementary Material). What subcortical involvement seems to add is a marked disorder of fixation, following and saccades. Therefore, once again, the importance of the integrity of the subcortical structures for ocular motricity was confirmed. These results are in agreement with those reported by Ricci and colleagues (Ricci et al., 2006), where 6 of the 12 subjects with clear indications of atrophy of the thalami had severe and wide-ranging abnormalities of visual function in all testing domains, i.e. ocular movements, acuity, visual field and fixation shift. What is clear now from the literature is that thalamic involvement commonly accompanies PVL and is commensurate with the extent of white matter lesions (Lin et al., 2001). At present, however, it is difficult to clarify the pathogenesis of thalamic lesions in infants with PVL. Some years ago, Lin and colleagues (Lin et al., 2001) suggested that intrinsic vulnerability is a probable factor related to the thalamic involvement. The metabolic demands of the thalami are considered to be higher than those of the white matter; thus, the thalami are readily damaged by hypoxic-ischemic injury. Growth restriction was proposed as another explanation. Severe white-matter damage is likely to influence the growth of brain structures that have connections to the damaged white matter. Axonal damage in the cerebral hemisphere, which manifests as PVL, may reduce the growth of the thalamus.
Kersbergen and colleagues (Kersbergen et al., 2015) argued, instead, that it is the white matter damage in cystic PVL that leads to axonal disturbances, which subsequently affect thalamic development. Impaired maturation of the late oligodendrocyte progenitors, known to be especially susceptible to ischemic damage, leads to a failure in myelination (Back and Miller, 2014). Afferent and efferent axons between the thalamus and the cortex, as well as between the thalamus and the brain stem and cerebellum, may be affected, thereby impairing the normal development of the connections that are being formed during the last trimester of gestation (Volpe, 2009). Alternatively, neuronal loss and gliosis may directly influence thalamic atrophy because of impaired input to the thalamus. In conclusion, this study demonstrates that visual disorders in children with PVL correlate with the severity of the brain lesion assessed by a semi-quantitative MRI scale, and that visual acuity, visual field, stereopsis and colour impairment seem to be linked to cortical damage, while ocular motricity disorders are closely linked to subcortical damage.
2020-09-11T13:26:16.088Z
2020-09-11T00:00:00.000
{ "year": 2020, "sha1": "739b557b5e586c31f3bc025eb504ba8deb5e0a26", "oa_license": "CCBY", "oa_url": "https://doi.org/10.1016/j.nicl.2020.102430", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "e5357cee1aba54bcd71f84ef896cd9ac5eabc429", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
226238955
pes2o/s2orc
v3-fos-license
Visual–tactile object recognition of a soft gripper based on faster Region-based Convolutional Neural Network and machine learning algorithms Object recognition is a prerequisite for a soft gripper to successfully grasp an unknown object. Visual and tactile recognition are two commonly used methods in a grasping system. Visual recognition is limited if the size and weight of the objects are involved, whereas the efficiency of tactile recognition is a problem. A visual-tactile recognition method is proposed in this article to overcome the disadvantages of both methods. The design and fabrication of the soft gripper considering the visual and tactile sensors are implemented, where the Kinect v2 is adopted for visual information, and bending and pressure sensors are embedded in the soft fingers for tactile information. The proposed method is divided into three steps: initial recognition by vision, detailed recognition by touch, and data-fusion decision making. Experiments show that the visual-tactile recognition achieves the best results; the average recognition accuracy for daily objects by the proposed method is also the highest. The feasibility of the visual-tactile recognition is verified.

Introduction

Soft grippers made of soft materials have attracted widespread interest for their capability of holding objects with various shapes, interacting effectively with unstructured environments, and performing tasks in a more dynamic manner. [1][2][3][4] Until now, there has been a large variety of soft grippers, including those made from elastomeric pneumatic actuators, [5][6][7] shape memory alloy (SMA)-driven grippers, 8,9 and those based on shape memory polymers, 10,11 dielectric elastomers, 12,13 ionic polymer-metal composites, or electroadhesive polymers. 14,15 Among these, soft and smart materials have mainly been applied and developed. Besides investigating new materials, some research proposed a novel technique for direct three-dimensional (3D) printing of soft pneumatic actuators 16 and a fully multimaterial 3D-printed soft gripper. 17 With the innovations in material development, structural design, and manufacturing, the soft gripper has been utilized in building integrated systems for different application scenarios, such as rehabilitation 18 and assistance. 19 From the existing research on soft grippers, it is found that the majority focuses on materials and manufacturing techniques, while research on the practical application of soft grippers is still deficient; the latter is, in fact, an important issue in constructing autonomous systems with the soft gripper as the execution unit. Object recognition is the first problem to be solved in the application of a soft gripper. In general, visual and tactile recognition are the fundamental approaches commonly adopted in research on grippers. Visual recognition uses a camera to obtain the object image and identify its features. 20 In recent years, the accuracy of visual recognition has gradually improved with the progress of computer hardware and algorithms. 21 However, several factors affect the performance of the extracted features and limit the performance of vision-based methods, such as scaling, rotation, translation, and illumination. 20,22 Moreover, some characteristics of the object, for instance hardness, temperature, and weight, cannot be identified by vision. Tactile recognition receives information from the tactile sensors installed in the grippers.
For example, BioTac sensors are adopted to obtain the vibration of the object or the texture of the surface in order to classify object material and shape. [23][24][25] Angle and pressure sensors are utilized to analyze the bending information of the finger for recognizing different objects. 26 In summary, the influence of scaling can be removed in tactile recognition, as the real dimensions and shape of the contacted object are mapped directly to the tactile sensor. In addition, tactile recognition can be used to capture properties like texture, roughness, spatial features, compliance, and friction, 27,28 which are difficult to recognize by vision. Hence, it seems promising to adopt tactile recognition in an autonomous system containing a soft gripper. Since the existing tactile recognition methods are applied mainly to rigid grippers, there are two main questions if we want to apply them to the soft gripper: (1) Due to the infinite degrees of freedom enabled by the soft material, tactile sensors for the rigid gripper might not be suitable for the soft gripper, so the choice of available tactile sensors is limited. (2) Unlike the rigid gripper, whose motion and force features are the main concern, the soft gripper also needs to account for large deformations. In the research community, there are some prior attempts to develop soft sensors for tactile recognition. She et al. combined a resistive flexible sensor with an SMA driver for curvature detection and feedback. 29 Chossat et al. applied ionic and liquid metals to develop highly flexible strain sensors. 30 A flexible "skin" sensor that could identify pressure and strains independently was invented. 31 Similarly, a flexible and extensible capacitive sensor was designed by Li et al., 32 and a soft optical sensor for measuring fingertip contact forces was proposed by Cho et al. 33 In all these works, the soft sensors are designed for specific soft grippers with specialized materials and structures. They are expensive and might not be applicable to most soft grippers, and are thus hard to utilize in practical grasping tasks. A more promising and more efficient solution is employing existing sensors for the recognition tasks of soft grippers. In the application of existing tactile sensors to soft grippers, Homberg et al. 34 were the first to use bending sensors for a haptic recognition that provides configuration estimations to distinguish among a set of objects. Gandarias et al. 35 used a high-precision array-type tactile sensor to detect the tactile images of a two-finger flexible gripper in contact with an object. Chen et al. 36 embedded a bending sensor into a soft pneumatic gripper and established the relationship between the diameter of the grasped ball and the output value of the bending sensor by curve fitting. In all these methods, plenty of experiments are implemented to capture accurate tactile information; they require strenuous effort if a wider range of objects is to be accurately recognized. Having realized the pros and cons of the existing visual and tactile recognition methods, we came up with the idea of combining the two methods for low-cost, efficient, and accurate object recognition. A visual-tactile recognition method is proposed in this article. Visual recognition is first applied for a rough classification of the objects, which allows recognizing objects with obvious features like color and shape.
Tactile recognition is then applied to achieve accurate identification by further assessing properties of the object, such as size and weight. To elaborate on the proposed recognition method, a self-developed soft gripper is adopted as the study object. The organization of the article is as follows. The second section briefly describes the structure and fabrication of the soft gripper, where the camera and embedded tactile sensors are introduced. The third section summarizes the proposed visual-tactile recognition method, after which the visual recognition based on the faster RCNN 37 algorithm and the tactile recognition based on machine learning algorithms are illustrated in the fourth and fifth sections, respectively. The sixth section introduces the control system. The experiments are given in the seventh section before the conclusions are drawn in the eighth section.

Figure 1 shows a grasping robotic system consisting of an articulated serial robot and a soft gripper. The articulated serial robot is a UR3 robot 38 adopted for changing the position and orientation of the gripper. A Kinect v2 39 is selected as the visual sensor. As shown in the figure, the Kinect v2 consists of a color camera, an infrared camera, and an infrared transmitter. The color camera obtains the RGB image of the view. The infrared transmitter emits infrared (IR) light, which is reflected when it hits the surface of an object. The reflected IR is captured by the infrared camera. From the time at which the reflected IR is received, the depth image of the object is formed.

Soft gripper and adopted sensors

The proposed soft gripper has three fingers, which are actuated by pneumatic actuators. As shown in Figure 2, each finger has an actuation part enabling the bending of the finger. The actuation part is composed of a multi-chamber structure and an inextensible layer. The former is made from a silicone elastomeric material (Dragon Skin 30, Smooth-On Inc., Macungie, Pennsylvania, USA) and the latter is a fiberglass mesh. When air is pumped into the chambers, the pressure caused by their inflation results in bending about the inextensible fiberglass mesh. Considering the future tactile recognition, a perception part is designed to integrate the tactile sensors when fabricating the finger, in which bending sensors and pressure sensors are embedded. As shown in Figure 2(b), a Spectra Symbol 40 sensor is selected as the bending sensor, which is applied to capture the bending information of the finger. The Spectra Symbol deforms along with the fiberglass mesh, and its bending angle is converted into a change of resistance. Its length can reach up to 95.25 mm, which is long enough to measure the bending of the finger. It has good flexibility and is thus suitable to be attached to the surface of the soft material. Figure 2(c) shows the chosen pressure sensor, the FSR 402 by Interlink Electronics Inc., Westlake Village, Southern California, 41 which measures the force between two contacting surfaces during object grasping. The resistance of the force sensor decreases as the contacting force increases, and its thin and bendable structure allows it to be embedded in the soft material. The sensors were calibrated by experiments before being embedded in the soft finger. The force sensors were calibrated against a strain dynamometer, with the output voltage of the force sensor measured as shown in Figure 3(a). With increasing contact force, the resistance decreases and the output voltage increases accordingly. By exerting forces between 0 N and 10 N onto the pressure sensor, the corresponding voltage is measured, and a cubic polynomial function is applied to fit the relationship between the force and the voltage.
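A calibration of this kind reduces to a least-squares cubic fit of reference forces against measured voltages; the following is a brief sketch with made-up readings (the actual calibration data and fitted coefficients are those of Figure 3(a) and are not reproduced here):

import numpy as np

# Hypothetical calibration pairs: applied force (N) vs sensor voltage (V)
force = np.array([0.0, 1.0, 2.0, 4.0, 6.0, 8.0, 10.0])
voltage = np.array([0.10, 0.85, 1.40, 2.20, 2.75, 3.15, 3.45])

coeffs = np.polyfit(voltage, force, 3)  # cubic: force as a function of voltage
estimate = np.polyval(coeffs, 2.0)      # force estimate at a 2.0 V reading
print(coeffs, estimate)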
The input of the bending sensor is air pressure and the output is voltage. When a given air pressure is applied, the soft finger bends and the corresponding curve is drawn, as shown in Figure 3(b). The voltage corresponding to each air pressure is measured. Hence, the bending information of the soft finger is described by the relationship between air pressure and voltage. The range of the air pressure is 0-60 kPa and that of the output voltage is 1.74-2.26 V. As the air pressure increases, the voltage decreases. The soft finger is made by casting. Instead of assembling the sensors and the soft finger after fabrication, the sensors are directly embedded into the soft fingers during fabrication. As shown in Figure 4, the molds are 3D printed, with which the soft finger is cast step by step, that is, first the actuation part and then the embedded sensor part. The casting of the actuation part is summarized as follows:
(1) Assemble molds A and B. Pour in the uncured silicone.
The casting of the embedded sensor part is shown in the following:
(4) Pour the uncured silicone into mold C until it reaches a height of 1.2-1.8 mm. This casts the base for the sensors, as in step 1. Repeat the heating-up and cooling-down process, as in step 2.
(5) Attach the pressure sensors onto the base. Pour in the uncured silicone until the pressure sensors are covered. Repeat the heating-up and cooling-down process. Attach the bending sensors onto the force-sensor layer and repeat a similar process as above. On top of the bending-sensor layer, put the fiberglass mesh and repeat the process as above.
(6) The actuation part and the embedded sensor part are finally connected by the casting of uncured silicone.

Visual-tactile object recognition method

As mentioned above, visual and tactile recognition methods have their own pros and cons. Visual recognition is fast in localization, but accurate object identification requires a high-performance camera, and some features are difficult to capture. Tactile recognition is precise in collecting features and identifying the object; however, the soft fingers are required to make one or more contacts with an unknown target object to obtain tactile information, and this entire process reduces object recognition efficiency. Inspired by the human perception process, which identifies objects first by vision and then by touching and grasping, we propose a visual-tactile fusion recognition method for efficient and practical object recognition. As shown in Figure 5, visual recognition is first applied for object localization and initial classification, which is realized by the depth image and the RGB image obtained by the Kinect v2, respectively. A region-based algorithm called faster RCNN 37 is selected to extract the object features from the images and then classify these features with a classifier. If the object cannot be recognized from the RGB image, the characteristics obtained from visual recognition are not enough to identify it; due to this lack of information, similar objects would fail to be recognized. We define such objects as attribute-missing categories, for instance, balls with the same color but different sizes, or the same bottle with different volumes of water. To solve this problem, tactile recognition is then applied.
For the tactile recognition, N sets (roughly within the range 70-100) of object grasping experiments are implemented to collect tactile information from the embedded force and bending sensors. The tactile information is stored in a vector of tactile features. The "bagged trees" algorithm is adopted as a classifier. Randomly select n1 sets of experimental data to perform data training and cross-validation; the remaining n2 = N - n1 sets of experimental data are used to validate the classification models. With the aid of the trained models, more features of the objects are captured. Usually, one classifier corresponds to one tactile feature: for instance, classifier 1 relates to the size of the object, and classifier 2 to the weight. By assigning one feature to one classifier, each classifier requires less training data to classify objects. Thus, fewer experiments are needed and the efficiency of information collection improves. It is worth mentioning that the tactile features relate closely to the accuracy of the object recognition: the greater the number and the better the performance of the tactile sensors, the higher the accuracy of the recognition results. In our work, force and bending sensors are adopted as a compromise between accuracy and practical fabrication. More tactile sensors can be embedded in the soft fingers to collect more information if another soft gripper is employed and more complicated application scenarios are considered. Herein, visual and tactile recognition deal with their own data separately, followed by a decision-making step. 42 The results from the visual and tactile recognition are fused on the decision-making layer as the final recognition conclusion. The object is expected to be accurately recognized by the combination of the two recognition methods.

Visual recognition based on faster RCNN

As mentioned, the Kinect v2 is applied to obtain the depth and RGB images of the object. Since the information on the location of the object also needs to be acquired for grasping, a target detection algorithm called faster RCNN 37 is selected to address the problems of identification, classification, and localization. In the faster RCNN model, the image is put into a convolutional neural network to generate a feature map. A region proposal network is then applied to propose candidate regions, from which the detailed features are identified by region-of-interest pooling. The features are classified and the visual recognition results are obtained. The details are carried out with the object detection API from Google. To show the effectiveness of faster RCNN, another well-known visual recognition algorithm called SSD 43 is applied. The visual recognition based on SSD is similar to the procedure above; the difference lies in the training model, which for the SSD algorithm is SSD MobileNet v1.

Tactile recognition based on machine learning

The tactile recognition process can be divided into two steps. First, object grasping experiments are implemented; data from the bending and force sensors are collected, from which the tactile features are extracted. Then, these tactile features are classified by the machine learning algorithms and the grasped object can be recognized.

Data collection

The process of tactile data collection is provided in Table 1. An object is handed to the soft gripper by the operator with different orientations and positions for better recognition robustness. The soft gripper gradually touches and grasps the object. After the object is successfully lifted, the soft gripper remains still for about 5-10 s. The data from the sensors are recorded and stored in a matrix Y_i, where i denotes the i-th grasping experiment. The data in matrix Y_i are plotted, the steady state is identified from the plot, and the corresponding values are kept in a vector o_i.

Table 1. Data collection procedure.
Algorithm 1: Tactile data acquisition and feature extraction
While true do
  Pump inflates and the soft gripper begins to grasp.
  Grasp the object and maintain a steady state for 5-10 s.
  Record sensor data.
  Release the object.
  Import sensor data into the computer and save them in the matrix Y_i.
  Calculate the output values of the sensors when the grasp is stable.
  Concatenate these output values, obtaining the feature vector o_i.
End
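Algorithm 1's "output values when the grasp is stable" can also be automated; in this sketch, the steady segment is taken as the lowest-variance window of the haptic sequence, a stand-in for the visual inspection described in the text:

import numpy as np

def extract_feature(Y, window=120):
    """Average the most stable `window` frames of a (frames x channels)
    haptic sequence Y, yielding the feature vector o_i of Algorithm 1."""
    stds = np.array([Y[i:i + window].std(axis=0).sum()
                     for i in range(Y.shape[0] - window)])
    start = int(stds.argmin())                  # most stable stretch
    return Y[start:start + window].mean(axis=0)

# Example with a synthetic 400-frame, 12-channel sequence
Y = np.random.default_rng(0).normal(size=(400, 12))
Y[150:300] *= 0.01                              # quiet "steady" segment
print(extract_feature(Y).shape)                 # -> (12,)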
After the object is successfully lifted, the soft gripper remains still for *5-10 s. The data from the sensors are recorded and stored in a matrix Y i , where i represents i'th grasp experiments. The figure of the data in matrix Y i is drawn, from which the steady state is recognized, and the corresponding values are kept in a vector o i . An example is given to illustrate the data collection process as follows. Ten pressure sensors and two bending sensors were embedded in the soft fingers. After holding the Table 1. Data collection procedure. Algorithm 1: Tactile data acquisition and feature extraction algorithms While true do Pump inflates and the soft gripper begins to grasp. Grasp the object and maintain a steady state for 5*10 s. Record sensor data. Release the object. Import sensor data into the computer and save it in the matrix Y i . Calculate the output value of sensors when the grasping is stable. Concatenate these output values, obtaining the feature vector o i . End object 5-10 s, the data acquisition card started collecting the sensors' data. The acquisition frequency is *20-50 Hz. The data were stored in the matrix Y 1 , whose dimension is s  12 (shown in Figure 6), where s is the number of frames in haptic sequence. The middle 100-150 values were roughly constant. This period is selected as a steady state, as shown between the dotted lines. The same experiment was repeated 70-100 times for different grasping poses, and the average of the values at steady state was calculated. Finally, the tactile feature vector o 1 is obtained as Machine learning algorithms After obtaining the tactile feature of the objects, machining learning algorithms are adopted to train and classify the objects. We applied five different machining learning algorithms, as given in Table 2. The decision tree algorithm 44 is easy to be implemented, in which three decision tree classifiers are used in feature processing. For the discriminant analysis, linear and quadratic discriminant analysis classifiers are studied. Support vector machine (SVM) 45 has been widely used in various classification problems and has achieved good results. Herein, six different SVMs are used in feature processing. The K-nearest neighbor (KNN) algorithm has also been widely used in current tactile recognition, 46 and six KNN classifiers are employed for comparative analysis. In addition, some classifiers integrated from different algorithms have good behavior in classification problems. 47 Five ensemble classifiers 48 are adopted. In total, 22 different classifiers are applied for analysis, from which the one with the best accuracy will be selected for fusion recognition. Visual recognition In the visual recognition experiment, 20 different objects 28 that are commonly seen in daily life are selected, as shown in Figure 7 and Table 3. The objects are in different shapes, colors, sizes, and weights. Especially, the shape of the balls (C14, C15, C16, and C17) is the same but they are with different sizes (diameters are 63, 83, 98, and 120 mm, respectively). The color of C16 and C17 is the same but they are in different colors compared with C14 and C15. In addition, to test the recognition accuracy of the objects with the same shapes and color but different weights, the same bottle with different volumes of water is set. They are C18, C19, and C20. In the experiment, 2700 images of the objects are taken, in which 1800 images are used for training models and the remaining 900 images for verifications. 
The recognition results are provided in Table 4. Both models show acceptable accuracy for objects with distinctive shapes and colors (C1-C13, for instance). Among the balls, C14 and C15 are well recognized, but C16 and C17 could not be recognized by either model. For the bottles, the recognition accuracy of the SSD model is slightly higher than that of the faster RCNN model; however, neither reaches an acceptable level. In conclusion, visual recognition is weak for objects that differ only in size or weight. Between the two models, the average recognition accuracy of the faster RCNN model is 80.78% (727/900), which is higher than that of the SSD model. Therefore, faster RCNN is chosen as the visual algorithm, and tactile information is necessary for better recognition.

Tactile recognition

Tactile experiments are carried out on the same objects, as given in Table 3 and Figure 7. Each object is grasped by the soft gripper 70-100 times (see Figure 8), of which data from 50-70 grasps form the training set and the rest form the testing set. The training set is used for data training by the machine learning algorithms given in Table 2. To avoid overfitting, a 10-fold cross-validation method 48,49 is applied to assess the algorithms. The recognition accuracy is given in Table 5. Among the 22 classifiers from the different machine learning algorithms, the average recognition accuracy of the bagged trees is the highest (87.8%). The testing set is applied to further assess the recognition of the bagged trees. An object category label is obtained during data training by the classifier on the training set; the value of this label is called the predicted value. Similarly, an actual category label is defined by the testing set, whose value is called the actual value. These two labels are used in the 10-fold cross-validation method and a confusion matrix is generated. As shown in Figure 9, the horizontal axis denotes the predicted value and the vertical axis represents the actual value; the diagonal elements show the probability that the predicted and actual values are the same. By analyzing the confusion matrix, it is found that the average recognition accuracy of the bagged trees reaches 88.76% (545/614), among which apple (C1), orange (C2), pencil sharpener (C5), and conditioner (C9) reach 100%. In the visual recognition experiment, the objects with the same shape but different weights (C18, C19, and C20) failed to be recognized. In the tactile recognition experiment, the recognition accuracies of C19 and C20 are up to 91% and 94%, indicating that tactile recognition can distinguish objects with different weights. However, the recognition accuracy of C18, as well as the cup (C6) and ball 1 (C14), is less than 70%. The reason for this result might be that the information from the tactile sensors is not rich enough to recognize objects with similar features, especially objects with similar shapes, sizes, and weights.

Visual-tactile recognition

As shown by the visual and tactile recognition experiments in the "Visual recognition" and "Tactile recognition" sections, visual recognition can efficiently distinguish objects with varied shapes and colors, but it is weak for objects that differ in size and weight. Tactile recognition can solve the problem of size and weight; however, many grasping experiments are necessary to collect enough information for good accuracy.
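Before turning to the fusion experiment, the tactile pipeline just described can be summarized in a short sketch. The scikit-learn code below is an illustrative reimplementation, not the paper's own code; the placeholder data, the number of trees, and the split ratio are assumptions.

import numpy as np
from sklearn.ensemble import BaggingClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import cross_val_score, train_test_split

def steady_state_feature(Y, start=100, stop=150):
    # Average the roughly constant middle frames of a grasp recording Y (s x 12)
    # to obtain the feature vector o_i, as in Algorithm 1.
    return Y[start:stop].mean(axis=0)

# Placeholder grasp recordings; replace with the real sensor matrices Y_i.
recordings = [np.random.rand(300, 12) for _ in range(200)]
labels = np.random.randint(0, 20, size=200)   # one of the 20 object classes

X = np.stack([steady_state_feature(Y) for Y in recordings])
y = np.asarray(labels)

# Hold out part of the grasps for testing (the paper uses 50-70 of 70-100 for training).
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3)

# "Bagged trees": an ensemble of decision trees fit on bootstrap resamples.
clf = BaggingClassifier(DecisionTreeClassifier(), n_estimators=30)
print("10-fold CV accuracy:", cross_val_score(clf, X_train, y_train, cv=10).mean())

clf.fit(X_train, y_train)
print("Held-out accuracy:", clf.score(X_test, y_test))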
To take full advantage of both the visual and tactile recognition methods, the visual-tactile recognition method is proposed in this article and the corresponding experiment is carried out in this section. The same 20 objects are used again in the visual-tactile recognition experiment. As shown in Figure 9, tactile recognition is implemented to compensate for the missing attributes of objects after the visual recognition. In particular, ball 3 (C16) and ball 4 (C17) have the same color and differ only in size, while the empty bottle (C18), the half-full bottle (C19), and the filled bottle (C20) differ only in weight. These two groups of objects are difficult to recognize visually and must be recognized by touch. The trained faster RCNN model is used for visual recognition, and tactile classifiers are used to recognize the different sizes and weights. As shown in Figure 10, visual recognition is implemented first. If the object can be identified, the recognition result is forwarded to the decision-making level. If not, tactile recognition is carried out, where four classifiers are assigned: tactile classifiers 1 and 3 are used for identifying the size of the object, while tactile classifiers 2 and 4 are applied for recognizing different weights. During the implementation of the visual and tactile recognition, the respective procedures of the "Visual recognition based on faster RCNN" and "Tactile recognition based on machine learning" sections are followed. The 10-fold cross-validation method is adopted to assess the accuracy of the recognition results. The accuracies of recognizing the 20 objects are assessed via the confusion matrix shown in Figure 11. On average, the accuracy is 98.70%, showing a good recognition result. Ball 3 (C16) and ball 4 (C17) fail to be recognized by the visual method; by combining the information from vision and tactile classifiers 1 and 3, the recognition accuracy of C16 and C17 is 100%. Similarly, the empty bottle (C18), the half-full bottle (C19), and the filled bottle (C20) cannot be identified by the visual recognition method alone; after applying the visual-tactile fusion method, the recognition accuracies of C18, C19, and C20 are all 100%. The recognition accuracy of the balls and bottles is greatly improved compared with the accuracy of the tactile recognition method alone. This is attributable to the combination of information from both the visual and tactile sensors. It shows that the visual-tactile fusion method at the decision-making level can make full use of the visual and tactile recognition methods and improves the recognition accuracy of objects. Comparisons of the accuracy of the different recognition methods are provided in Table 6. The average accuracies of the visual, tactile, and visual-tactile recognition methods are 80.78%, 88.76%, and 98.7%, respectively, indicating that the best recognition results are obtained by the visual-tactile recognition. For the objects with the same shape but different sizes, the accuracy of visual recognition is only 20%. Tactile recognition improves this considerably, reaching 86.36%, but the best accuracy is achieved by the visual-tactile recognition, an increase of 11.37 percentage points over the tactile recognition. Similarly, for the objects with the same shape and color but different weights, the recognition accuracy of visual recognition is the lowest (28.15%), followed by tactile recognition (86.29%), and the highest accuracy is from the visual-tactile recognition (95.97%).
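The fusion rule itself is simple enough to state as code. A minimal sketch, assuming a trained vision model and trained tactile classifiers; all function and label names here are illustrative, and the paper's two classifiers per attribute (1 and 3 for size, 2 and 4 for weight) are collapsed into one each for brevity.

AMBIGUOUS_SIZE = {"ball3", "ball4"}                    # C16, C17: same shape and color
AMBIGUOUS_WEIGHT = {"bottle1", "bottle2", "bottle3"}   # C18, C19, C20: differ only in weight

def recognize(rgb_image, tactile_features, vision_model, size_clf, weight_clf):
    """Vision first; fall back to touch only when vision cannot disambiguate."""
    label = vision_model.predict(rgb_image)        # faster RCNN class label
    if label in AMBIGUOUS_SIZE:
        # Same shape and color: only the grasp data resolve the size.
        return size_clf.predict([tactile_features])[0]
    if label in AMBIGUOUS_WEIGHT:
        # Same shape and color: only the grasp data resolve the weight.
        return weight_clf.predict([tactile_features])[0]
    return label                                   # vision alone suffices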
The results show that the visual-tactile recognition method can identify daily objects with high accuracy.

Conclusions

A visual-tactile recognition method is proposed to efficiently and accurately identify unknown objects for successful grasping by the soft gripper. A three-step procedure is presented: initial recognition by vision based on the faster RCNN model, detailed recognition by touch based on machine learning algorithms, and data fusion at the decision-making level. Taking the visual and tactile sensors into account, the design and fabrication of the soft gripper are implemented first. A Kinect v2 is adopted, with which the RGB and depth images of the object are collected. Bending sensors and pressure sensors are calibrated and embedded into the soft fingers during fabrication; they convert the bending and contact forces into resistance changes. For the initial recognition by vision, faster RCNN is applied for classification and localization of the object. The identified result is directly regarded as the final result if the object does not involve size or weight ambiguities and can be fully recognized. If not, detailed recognition by touch is carried out: machine learning algorithms are adopted to train on the grasping data. The information from both vision and touch is finally combined at the decision-making layer, and the output is the recognition result. Experiments are implemented to verify the proposed method. The average accuracy of the proposed method is higher than that of visual recognition or tactile recognition alone, confirming the feasibility of visual-tactile recognition.

Declaration of conflicting interests

The author(s) declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.
Fundamental Superstrings as Holograms

The worldsheet of a macroscopic fundamental superstring in the Green-Schwarz light-cone gauge is viewed as a possible boundary hologram of the near horizon region of a small black string. For toroidally compactified strings, the hologram has global symmetries of AdS_3 \times S^{d-1} \times T^{8-d}, (d = 3, ..., 8), only some of which extend to local conformal symmetries. We construct the bulk string theory in detail for the particular case of d = 3. The symmetries of the hologram are correctly reproduced from this exact worldsheet description in the bulk. Moreover, the central charge of the boundary Virasoro algebra obtained from the bulk agrees with the Wald entropy of the associated small black holes. This construction provides an exact CFT description of the near horizon region of small black holes both in Type-II and heterotic string theory arising from multiply wound fundamental superstrings.

Introduction

Consider a 'macroscopic' fundamental superstring wrapping p times around a circle of radius R in the limit of large radius. Some spatial directions transverse to the string could be compactified on a torus and the remaining are noncompact. In this case, the worldsheet theory living on such a macroscopic string is particularly simple. For a string winding the circle once, this theory consists of free bosons and free fermions corresponding to the transverse oscillations of the string. As long as the energy scales of the excitations are much smaller than the string scale, the macroscopic string cannot break up or emit smaller loops of string. At very weak coupling, these low energy excitations along the string are expected to decouple from the surrounding supergravity fields. Moreover, the free worldsheet theory is manifestly superconformal. These observations raise the question of whether a fundamental macroscopic superstring could be interpreted as a hologram of some bulk dual theory. To find the holographic dual, one could examine how the spacetime geometry is modified by the backreaction of the string. The supergravity solution corresponding to such an infinitely extended fundamental superstring was found in [1,2] using the two-derivative string effective action. A fundamental superstring is in many ways the most basic 'solitonic' object in string theory, and this solution is the most elementary brane solution in string theory. Indeed, all other p-brane solutions can be constructed from it simply by applying T and S duality transformations to the supergravity fields. A characteristic property of this solution in all dimensions is that near the core of the string, the effective string coupling g_s^2, determined by the local value of the dilaton field, goes to zero. This suggests that even after taking into account the backreaction, the worldsheet would continue to decouple from the bulk. On the other hand, the string metric near the core is singular and the curvatures become of the order of the string scale. This suggests that it would be necessary to take into account higher derivative terms in the tree-level string effective action to fully analyze the 'geometry' near the core. In fact, since the curvature is of the order of the string scale, corrections arising from the various higher derivative terms would be equally important, and an exact CFT description would be necessary.
One might hope that after taking into account the corrections to the geometry to all orders in the α′ expansion of the tree level effective action, and possibly exactly by using some bulk worldsheet conformal field theory, it would be possible to obtain the holographic dual of the fundamental string hologram. Further support for this idea comes from investigations of the higher derivative corrections to the 'geometry' and entropy of what have been termed 'small' black holes [3,4,5,6,7,8,9,10]. If we take the radius R of the circle along which the string is wrapping to be very small instead of very large, then one can view the string as a point-like object in one lower dimension. The string can in addition carry some quantized momentum q along the internal circle. In this case, one obtains a BPS point-like object with two charges q and p. From the perturbative analysis of the spectrum one finds that these states have an exponentially large degeneracy that goes as exp(c√(pq)) as a function of the two charges, where the constant c equals 4π for heterotic strings and 2π√2 for Type-II strings in all dimensions. It is natural to ask, then, whether there is a two-charge BPS black hole whose entropy corresponds to the degeneracy of these microscopic states, similar to the three-charge case [11]. This expectation is indeed borne out in a number of examples, with a beautiful consistency between the macroscopic and microscopic aspects of the theory. The best studied examples are the heterotic small black holes in four dimensions with N = 4 supersymmetry. These black holes were analyzed using certain F-type four-derivative supersymmetric corrections to the effective action which depend on a particular quadratic contraction of the Riemann tensor. This analysis reveals that upon inclusion of these α′ corrections, the geometry near the core is no longer singular but is of the form AdS_2 × S^1 × S^2. The sphere S^2 has radius of order one in string units and can be regarded as the 'horizon' of this extremal small black hole. The dilaton no longer vanishes at the core, and the four-dimensional string coupling g_4^2 ∼ 1/√(pq) is now small but finite. As a result, the area of the horizon measured in units of the four dimensional Planck length is large and scales as √(pq). The resulting entropy, incorporating the modifications due to Wald [12,13,14] to the Bekenstein-Hawking formula [15,16], is in perfect agreement with the microscopic degeneracy, including the precise numerical coefficient. Inclusion of other higher derivative corrections is expected to correct the geometry further. Moreover, in string theory the metric, like all other fields, is subject to field redefinitions. Geometric notions at the string scale determined by a given metric are not invariant under such field redefinitions. What makes the above analysis tractable and reliable is the fact that the Wald entropy of a black hole is a much more robust physical quantity than the 'geometry' of the horizon. To begin with, for these black holes, the absolute degeneracy of these states equals a topological index given by a helicity supertrace [9,8,7,10]. Furthermore, the system can be analyzed from a five-dimensional point of view. The radius of the circle S^1 of the near horizon region gets attracted to the near horizon value of q/p in string units, irrespective of the asymptotic value R of the radius.
The AdS_2 and the S^1 factor can then be combined into a fiber bundle as an AdS_3, with possible global identifications, which can be viewed as the near horizon region of a small black string. 1 Using the larger symmetries of AdS_3 in this set-up, the Wald entropy can then be related to the anomaly in the boundary R-current and in turn to the bulk Chern-Simons terms [17,18]. These are already included in the four-derivative action and are not further corrected by other higher derivative terms. Thus the Wald entropy computed from the five-dimensional four-derivative supersymmetric action is determined entirely by symmetries and anomalies, under the reasonable assumption that the near horizon region continues to have the symmetries of AdS_3 even after including all higher derivative corrections. This reasoning explains why analysis of the four derivative action is adequate for computing certain quantities such as the Wald entropy. One can also show explicitly, using the entropy function formalism [6] that includes higher curvature terms, that the Wald entropy is invariant under field redefinitions, barring singular ones that take AdS_3 × S^2 to a singular space. 2 One can actually go further and compare even subleading corrections to the statistical entropy in an asymptotic expansion in 1/√(pq). The subleading corrections to thermodynamic quantities are of course ensemble dependent, but there are finitely many possibilities to choose from, which can be compared with the microscopic counting to determine which is the correct one. The microscopic counting of these states is exact since it can be done in string perturbation theory. For the macroscopic analysis, one can use the ensemble proposed in [19] or in [8] with an appropriate measure [20,21]. One then finds that the macroscopic entropy and the microscopic entropy are in striking agreement to all orders in an asymptotic expansion, which is governed by the same associated Bessel function. Since the asymptotic expansion is determined entirely by the saddle point quantities, this comparison is independent of subtleties having to do with the choices of contours for the inverse Laplace transform that enters the definition of the ensemble. It is nontrivial that the same associated Bessel function appears in the two analyses, which are a priori completely unrelated. Such a comparison of macroscopic and microscopic entropies to all orders constitutes a nontrivial check of the consistency of string theory. Even though the agreement between the microscopic counting and the Wald entropy is best understood for heterotic small black holes in four dimensions and the corresponding string in five dimensions, there are strong indications that many aspects of the story are true in all dimensions and also for Type-II strings. A general scaling argument due to Sen [22] gives the correct dependence √(pq) of the entropy on the charges in all dimensions [23], assuming that upon inclusion of the higher derivative corrections the geometry near the core has a black hole horizon. The precise numerical coefficient cannot be computed, because the supergravity analysis of higher derivative actions in higher dimensions is more complicated. The important point, though, is that the scaling argument seems to work uniformly in all dimensions and for all superstrings, because it relies only on the tree level bosonic action for the NS fields that is common to all string theories.
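As an aside, the numerical constants quoted for the degeneracy follow from a standard Cardy estimate; the following is a sketch using textbook values of the effective left-moving central charge, not a derivation specific to the references above:

S_{\rm stat} = \log d(q,p) \simeq 2\pi\sqrt{\frac{c_{\rm eff}\,N_L}{6}}\,, \qquad N_L \simeq pq\,.

For the heterotic string the 24 left-moving transverse bosons give c_eff = 24 and hence S ≈ 4π√(pq), while for the Type-II string the 8 transverse bosons and 8 fermions give c_eff = 8 + 8/2 = 12 and hence S ≈ 2π√(2pq) = 2π√2 √(pq), reproducing the constants c = 4π and c = 2π√2.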
The scaling argument can also be successfully generalized to states with spin, assuming that upon inclusion of the higher derivative corrections the geometry near the core has a black ring horizon. The entropy in this case has the form √(pq − rJ), with the correct dependence on the spin J and a dipole charge r, in agreement with the microscopic counting [24,25,26]. One could elevate these observations to a general principle: corresponding to every solitonic system in string theory which has a large entropy, there must be a solution realizing a black object in the low energy effective action that has the same entropy. This would include not only big black holes and black rings but also the small ones. The microscopic and macroscopic structure of the theory can then be consistent with each other in a natural way. By the same token, and from general experience in holography, one expects that any solitonic object with a worldvolume theory, which will typically be conformal in the deep infrared, must have an AdS holographic dual as long as gravity decouples from the worldvolume. This reasoning suggests that corresponding to the worldvolume of the fundamental string, a holographic dual must exist in all dimensions. Encouraged by these general arguments and some of these successes, we examine in this paper the idea of a fundamental superstring as a hologram, taking seriously the AdS_3 symmetries of the near horizon region. For the reasons outlined above, we choose to be guided by symmetries, anomalies, and the Wald entropy, and refer to the string scale geometry only as a shorthand for signifying the relevant symmetries. We find that these considerations lead us to a very tightly constrained theory describing the worldsheet dynamics of strings in the bulk. This worldsheet theory involves a noncompact WZW model SL(2)_{k=2} (and its heterotic counterpart) which gives us the correct entropy. Precisely for this theory, we find that the boundary (super)symmetries are realized in the bulk string theory. The discussion is organized as follows. In §2 we set up our conventions, review what is known about small black holes, and list the expected global superconformal symmetries of the near horizon region in this context, containing an AdS_3 factor. This raises a number of puzzles, which we outline and resolve in the subsequent sections. In §3 we consider the fundamental superstring as a hologram in the Green-Schwarz light-cone formalism. This analysis makes transparent how the global superconformal symmetries can be realized in the hologram and which of them can be extended to local superconformal symmetries. We also discuss an unusual light-cone gauge which is relevant for the comparison with the bulk dual. In §4 we specialize to the case of the five-dimensional, Type-II small black string and construct the dual bulk theory with the symmetries of AdS_3 × S^2. In §5 we repeat the analysis for five-dimensional heterotic small black strings. In particular, we construct explicitly the boundary symmetries from the bulk, compute the boundary entropy from the bulk, and show that these are in agreement with the hologram. We conclude in §6 with a discussion of conclusions, open problems, and outlook. There are a number of related works that have some overlap with the considerations here [27,28,29,30,31,32]; we will comment on some relations to these works during the course of the discussion.
Macroscopic Superstrings

To discuss the various toroidally compactified superstrings uniformly, we take the spacetime to be of the form IR^{1,1} × IR^d × T^{8−d} with coordinates X^M; M = 0, . . . , 9, split as M = (µ, i, m). The macroscopic string worldsheet extends along the Lorentzian space IR^{1,1} with coordinates X^µ; µ = 0, 9, where X^0 is the time coordinate and X^9 is a circle coordinate, X^9 ∼ X^9 + 2πR. There are d noncompact transverse directions X^i; i = 1, . . . , d along a Euclidean space IR^d, and the remaining coordinates X^m parameterize the internal torus T^{8−d}. The worldsheet action in conformal gauge for these ten bosonic spacetime coordinates is given by

S = \frac{1}{2\pi\alpha'} \int d^2\sigma \, \partial_+ X^M \, \partial_- X^N \, \eta_{MN} , (2.1)

where η_{MN} is the 10d Lorentzian metric with mostly positive signature and we have defined σ^± = τ ± σ. In addition, there are worldsheet fermionic partners appropriate for the heterotic or the Type-II string, and left-moving bosons H^I with I = 1, . . . , 16 for the heterotic string that parameterize an internal torus of E_8 × E_8. The total action is subject to Virasoro constraints which we discuss in some detail later in §3. Now consider a fundamental string wrapping p times and carrying quantized momentum q along the circle. We define dimensionless left-moving and right-moving momenta (2.2). If we take the right-movers of the superstring to be in the ground state, then this state is supersymmetric and the mass M saturates the BPS bound. The left-moving oscillation number N_L of the transverse oscillations satisfies the Virasoro constraint

N_L = pq + 1 (2.4)

for the heterotic string and

N_L = pq (2.5)

for the Type-II string. There is a large degeneracy d(q, p) of such states, since this constraint can be satisfied by exciting the various oscillators in many different ways. The statistical entropy, given by the logarithm of d(q, p), goes as

S(q, p) = \log d(q, p) \simeq c \sqrt{pq} , (2.6)

with c = 4π for heterotic and c = 2π√2 for Type-II. In the limit of large R for fixed q, this state can be viewed as an infinitely extended string that acts as a source for the various supergravity fields. Let r be the radial coordinate along the noncompact directions, r² = x^i x^i. The dilaton field Φ(r) in the (d + 2) noncompact dimensions is given by a transverse harmonic function,

e^{−2Φ(r)} = H(r) = 1 + \frac{Q}{\Omega \, r^{d−2}} , (2.7)

where Ω is a geometric factor. The metric in the string frame then takes the form

ds² = H^{−1}(r) \left( −(dX^0)² + (dX^9)² \right) + dx^i dx^i + dx^m dx^m , (2.8)

and the nonvanishing components of the 2-form field B_{MN} are given by

B_{09} = H^{−1}(r) − 1 . (2.9)

Small Black Holes, Scaling, and Near Horizon Symmetries

Taking the higher derivative corrections into account is in general very complicated, because one has to solve higher order nonlinear differential equations. The task is greatly simplified using supersymmetry. In four dimensions, using the superconformal formulation of higher derivative supergravity, one can incorporate four-derivative F-type terms and find the BPS solutions [33,34,35,36,37]. The solutions corresponding to the two-charge heterotic BPS states discussed above are found to have a string scale near horizon geometry of AdS_2 × S^1 × S^2 [3,4]. This system can also be analyzed from a five-dimensional point of view using the four derivative supergravity action of [38]. The corresponding small black string solution with an AdS_3 × S^2 near horizon geometry is discussed in [39,40]. The main virtue of the four derivative action in five dimensions is that it already incorporates the gravitational Chern-Simons interaction and all terms related to it by supersymmetry. Under suitable conditions, which will be discussed in greater detail in §2.3, one can determine the Wald entropy of the black holes completely using symmetries and anomalies.
The four derivative action is thus adequate for drawing reliable and useful conclusions about the entropy of the heterotic small black holes. The attractor values of the dilaton and of the radius are determined entirely in terms of the charges; here g_5 denotes the 5d string coupling, g_4 the 4d string coupling, and R the radius of the circle (in string units) around which the string wraps. This shows in particular that for large p, the near horizon string coupling can be made arbitrarily small. One can therefore consistently assume that the worldsheet of the fundamental string, which we will later interpret as the hologram, decouples from the massless supergravity fields.

Let us now list the symmetries of this near horizon solution for the heterotic small black string. To start with, we expect the global symmetries of AdS_3 × S^2, which are SL(2, IR) × SL(2, IR) × SO(3). We also expect a local conformal symmetry Virasoro × Virasoro from a Brown-Henneaux construction [41]. The string is a half-BPS state to start with, so we have eight unbroken global spacetime supersymmetries. Near the horizon, in the N = 2 formalism that we have used, the supersymmetry is enhanced to include 4 additional superconformal symmetries. So we expect altogether at least 12 superconformal symmetries, and possibly 16 superconformal symmetries if the problem could be analyzed in a manifestly N = 4 formalism. In the Type-II case, if a small black hole were to exist, we would expect at least 12 + 12 and possibly 16 + 16 superconformal symmetries. As mentioned in the introduction, a general scaling argument suggests that a small black hole ought to exist in all dimensions [22,23]. If a small black string were to exist in higher dimensions for IR^{1,1} × IR^d × T^{8−d} compactifications with d = 3, . . . , 8, we would expect possible near horizon geometries that have the symmetries of AdS_3 × S^{d−1} × T^{8−d}. If we assume that there is a left-moving Virasoro and a right-moving Virasoro, as happens for the D1-D5 system, then we expect for the right-movers at least a global SL(2, IR) symmetry. The supercharges must transform under Spin(d) × Spin(8 − d), and so we are led to look for a supergroup that contains the bosonic symmetry SL(2, IR) × Spin(d) × Spin(8 − d) and at least 12 and possibly 16 global superconformal supersymmetries. 4 Possible supergroups that contain sixteen supersymmetries are limited in number. The list of symmetries of heterotic small black strings, with a possible supergroup containing them, is summarized in Table 1. For example, the group OSp(8|2) contains Spin(8) × Sp(2) as a bosonic subgroup, and the fermionic generators transform as a vector of Spin(8) and a doublet of Sp(2) ∼ SL(2). Similarly, OSp(4*|4) contains SO*(2,2) ∼ SL(2) × Spin(3) and Sp(4) ∼ Spin(5) as bosonic subgroups. See [42] for a nice introduction to supergroups in this string theory context. There are a number of puzzles that arise from these identifications of the supergroups for the global symmetries of the horizon. It is well-known that the maximal allowed local superconformal symmetries are given by an N = (4, 4) superconformal theory that has SU(2) R-symmetry both on the left and on the right. This algebra of local currents has a closed subalgebra whose fermionic part consists of 8 + 8 = 16 global superconformal charges. 5 How can this be reconciled with the much larger global symmetries, which for example require 16 + 16 = 32 supersymmetries in the Type-II case?
For these reasons, we will not commit ourselves to the supergroups in Table 1 and regard them as a tentative identification. We will be guided instead by the holograms discussed in §3, where it is easy to write down the symmetry algebras quite explicitly. The question of global and local symmetries is somewhat subtle even in the hologram, and we shall discuss this issue in more detail in §3. The usual global supersymmetries are easy to display, but the realization of the global superconformal symmetries involves an analog of spectral flow. It is not possible to make all global and local symmetries manifest at the same time.

Wald Entropy and Anomalies

We now briefly review the arguments that utilize the AdS_3 symmetry and anomalies to compute the Wald entropy [17,18,43]. The dynamics of the theory in this background will be governed by an effective three dimensional action, obtained by compactifying the remaining directions including the angular coordinates of the horizon. This effective action will have the form of a Lagrangian density L_0 with manifest general coordinate invariance, plus the gravitational Chern-Simons term, proportional to a constant K times the Lorentz Chern-Simons 3-form Ω_3. The action admits an AdS_3 solution

ds_3^2 = L^2 e^2 (−r^2 dt^2 + r^{−2} dr^2) + (dy + e r dt)^2 , (2.13)

where L is an overall scale. We have written it in the form of a fiber bundle: the fiber is a circle with coordinate y, and the base is an AdS_2 with coordinates (r, t), so that e can be viewed as a unit of charge associated with the Kaluza-Klein reduction along y. One can then show, both in the Euclidean action formalism [17,18,44] as well as using Wald's formula [45,46], that the entropy of the black hole with near horizon geometry described in (2.13) has the form

S = 2\pi \sqrt{c_L Q/6} \quad (Q > 0) \qquad {\rm or} \qquad S = 2\pi \sqrt{c_R |Q|/6} \quad (Q < 0) , (2.14)

where Q is the electric charge associated with the Kaluza-Klein gauge field, c_L and c_R are the constants given in (2.15), and L_0 in (2.16) has to be evaluated on the near horizon background (2.13). This gives a concrete form of the Q dependence of the entropy in terms of the constants c_L and c_R. The constants c_L and c_R given in (2.15) can be interpreted as the left- and right-moving central charges of the two dimensional CFT living on the boundary of the AdS_3 [18,17,44]. The Kaluza-Klein momentum Q is interpreted as the momentum in this boundary CFT, which is the (L_0 − L̄_0) eigenvalue of a given state in this CFT. The two cases in (2.14) correspond to L̄_0 = 0 with Q > 0, or L_0 = 0 with Q < 0. With these identifications, (2.14) can be interpreted as simply the Cardy formula in this CFT. This argument can be summarized by saying that the Wald entropy of the bulk equals the Cardy entropy of the boundary. If the theory has at least N = (0, 2) supersymmetry, then one can actually do more and determine even c_L and c_R using anomalies. In our case the boundary theory will in fact have N = (0, 4) supersymmetry. In this case, the central charge c_R is related to the central charge of an SU(2) R-current algebra which is also a part of the N = (0, 4) supersymmetry algebra. Associated with the SU(2) R-currents there will be SU(2) gauge fields in the bulk, and the central charge of the SU(2) R-current algebra will be determined in terms of the coefficient of the gauge Chern-Simons term in the bulk theory. This determines c_R in terms of the coefficient of the gauge Chern-Simons term in the bulk theory [18,17,43]. On the other hand, from (2.15) we see that c_L − c_R is determined in terms of the coefficient K of the gravitational Chern-Simons term.
Since both c_L and c_R are determined in terms of the coefficients of the Chern-Simons terms in the bulk theory, they do not receive any higher derivative corrections. This completely determines the entropy from (2.14). Furthermore, the expression for the entropy derived this way is independent of all the near horizon parameters and hence also of the asymptotic values of all the scalar fields. Since this argument is quite general and three-dimensional, it is expected to work for higher dimensional small black strings as well, with transverse space of the type S^{d−1} × T^{8−d}, as long as the Spin(d) symmetry couples chirally to bulk gauge fields [47]. While the argument works beautifully for heterotic small black strings, it appears to fail spectacularly for Type-II small black strings. For instance, for Type-II on T^6, the F-type four-derivative terms are zero, and hence to this order the horizon continues to be singular with vanishing horizon area; the resulting Wald entropy would appear to be zero. What is worse, if we identify the isometries of S^{d−1} with the conformal R-symmetries, then one would conclude by similar reasoning that c_L = c_R = 0, giving vanishing entropy, in contradiction with the microscopic counting and the scaling argument. The correct interpretation of these results, as we will argue in the next two sections, is that in the Type-II case the geometric rotational symmetry of the horizon is nonchiral and does not correspond to the conformal R-symmetry. There are additional chiral gauge symmetries of stringy origin which can be identified with the conformal R-symmetries both for the right-movers and the left-movers; the geometric symmetries of the horizon are a nonchiral linear combination of these R-symmetries. This structure will also be quite clear from the hologram that we discuss in §3. Now, the coefficient of the gravitational Chern-Simons term is proportional to c_L − c_R, which vanishes. The gauge Chern-Simons term in supergravity, being nonchiral, also couples to c_L − c_R. Therefore, unlike in the heterotic case, the Chern-Simons terms are not useful for determining the entropy. 6 This means in particular that analyzing only the four-derivative action is not adequate for finding the correct entropy; one must take into account all α′ corrections, as suggested by the scaling argument. Application of the scaling argument will then tell us that the entropy has the right dependence on the charges, but the determination of the precise coefficient is intrinsically stringy and not easily doable in supergravity. This explains why small black holes and black strings have been difficult to find in the Type-II case. Our stringy construction in §4 will give a way to compute this entropy using an exact CFT construction of the worldsheet. Many of these confusing issues are neatly resolved by looking at the holograms that we expect for this system. We therefore turn next to the hologram for some guidance about the structure of the various symmetries.

The Fundamental Superstring as a Hologram

There is a simple way to realize all the required symmetries expected for the near horizon of a small black string using a free field representation, which is furnished by the worldsheet of a toroidally compactified Green-Schwarz macroscopic superstring in a particular light-cone like gauge. For our purposes, this specific free field representation is not only simple but has a direct physical interpretation as the boundary hologram. It is an instructive exercise to work out this representation in some detail.
In particular, it will illuminate the role of global and local symmetries and will provide some guidance as to which of the global symmetries can become local conformal symmetries. We would like to regard all transverse oscillations as the fields along the worldsheet and also solve the Virasoro constraints. For this purpose it will be useful to choose a slight variant of the usual light cone gauge, using the compact X^9 direction as one of the light-cone coordinates. We discuss this 'compact light-cone gauge' and the various resulting algebras in §3.1. In §3.3 we choose a further variation of this gauge by using one of the internal compact directions as a light-cone direction. This will prove useful for later comparison with the bulk holographic dual. A theory of p identical strings also has a symmetry S_p which permutes the different strings. The full holographic theory is then a symmetric product of p strings. This is consistent with S-duality, which maps it to a theory of D-strings. In the bulk theory, which we discuss in the following sections, the corresponding statement is that there are states with non-zero values of the spectral flow number in SL(2). In this section, we shall discuss the symmetries of the system, for which it is sufficient to consider the free theory of the transverse oscillations of the string.

Holograms in the Compact Light Cone Gauge

The action of the superstring in the conformal gauge is subject to the Virasoro constraints

\partial_+ X^M \, \partial_+ X_M + T^{int}_{++} = 0 , \qquad \partial_- X^M \, \partial_- X_M + T^{int}_{--} = 0 ,

where T^{int}_{++} and T^{int}_{−−} are the stress tensor components of the fermionic and internal coordinates, which we discuss more explicitly later. We would like to solve these constraints explicitly, so that we have to deal with only the transverse physical oscillations. For this purpose, it is useful to define the 'compact' light-cone coordinates

X^\pm = \frac{1}{\sqrt{2}} (X^0 \pm X^9) .

Ignoring the oscillators, the zero mode expansion of the two fields X^0 and X^9 is given by

X^0 = x^0 + \alpha' p^0 \tau , \qquad X^9 = x^9 + \frac{\alpha' q}{R} \tau + p R \sigma .

We now choose the following light-cone like gauge,

X^+ = x^+ + \alpha' p^+ \tau ,

so that the X^+ coordinate has no oscillators. Note that this is different from a discrete light cone. Since the coordinate X^9 is compact, X^9 ∼ X^9 + 2πR, the light-cone direction spirals around the cylindrical (X^0, X^9) space but has infinite extent. As in (2.2), we can define dimensionless left-moving and right-moving light-cone momenta. This allows us to solve the Virasoro constraints for the remaining longitudinal mode X^− in terms of the transverse modes, where the superscript tr refers to all spacetime and internal degrees of freedom that are transverse to (X^0, X^9). The mass-shell conditions (2.3), (2.4), and (2.5) follow from identifying the Fourier modes of these constraints. We see that in the limit R → ∞, in the zero winding sector p = 0, this gauge reduces to the usual light-cone gauge. On the other hand, for nonzero winding p ≠ 0 and for fixed q, it resembles the static gauge, which is what we are interested in. Unlike the static gauge, however, it has the virtue of the light-cone gauge that the Virasoro constraints are explicitly solvable. 7 For the naive static gauge X^9 = σR, the Virasoro constraints are quadratic in the X^0 oscillators and are difficult to solve at the level of quantum operators. Since all longitudinal degrees of freedom are now either gauge fixed or determined in terms of the transverse modes, we can focus on the physical transverse modes. Fermions can be incorporated in the usual way, and we will use the light-cone Green-Schwarz formalism.
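Before writing the transverse action, it may help to sketch how the mass-shell conditions quoted above arise in this gauge (standard level matching, with normalization conventions suppressed). The difference of the zero modes of the two Virasoro constraints gives

N_L - N_R = pq + (a_L - a_R) ,

where a_{L,R} are the normal ordering constants. With the right-movers in the ground state, N_R = 0, this reproduces N_L = pq for the Type-II string (a_L = a_R = 0 in the Green-Schwarz light-cone) and N_L = pq + 1 for the heterotic string (a_L = 1, a_R = 0), as in (2.4) and (2.5).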
The transverse action for the Type-IIB superstring on the IR^{1,1} × IR^d × T^{8−d} compactification is given by the free action

S = \frac{1}{2\pi} \int d^2\sigma \left( \frac{2}{\alpha'} \, \partial_+ X^i \, \partial_- X^i + i S^a \partial_- S^a + i \tilde{S}^a \partial_+ \tilde{S}^a \right) . (3.10)

Symmetries of the Holograms

We would now like to view this theory defined by (3.10) as a hologram and in particular understand all its symmetries. The various global and local symmetries are very easy to work out, because the computations are identical to those that appear in the first quantization of the light-cone superstring. The physical interpretation of these symmetries here is, however, completely different. The theory given by the action above should be viewed as the 'second quantized' string field theory action of the strings moving in the holographically dual bulk theory. The construction of the holographically dual worldsheet, which we discuss in §4, gives the first quantized realization of this symmetry algebra. We consider here the special case d = 8 to simplify the discussion and also focus only on the right-movers while discussing the chiral currents. The mode expansions of the basic fields are

X^i = x^i + \alpha' p^i \tau + i \sqrt{\frac{\alpha'}{2}} \sum_{n \neq 0} \frac{1}{n} \left( \alpha^i_n e^{-in\sigma^-} + \tilde{\alpha}^i_n e^{-in\sigma^+} \right)

for the bosons and

S^a = \sum_n S^a_n e^{-in\sigma^-} , \qquad \tilde{S}^a = \sum_n \tilde{S}^a_n e^{-in\sigma^+}

for the fermions. The oscillators satisfy the usual canonical commutation relations

[\alpha^i_m, \alpha^j_n] = m \, \delta^{ij} \delta_{m+n,0} , \qquad \{S^a_m, S^b_n\} = \delta^{ab} \delta_{m+n,0} , (3.13)

and similarly for the left-movers. In addition there are the bosonic zero modes x^i and p^i, which satisfy the Heisenberg commutation relations [x^i, p^j] = i δ^{ij}. To begin with, the action has a global Spin(8) rotational symmetry generated by J^{ij}, which we write as

J^{ij} = L^{ij} + E^{ij} + K^{ij}_0 + \tilde{E}^{ij} + \tilde{K}^{ij}_0 , \qquad L^{ij} = x^i p^j - x^j p^i , (3.18)

with E^{ij} and K^{ij}_0 the contributions of the right-moving bosonic and fermionic oscillators, and similarly for the contributions \tilde{E}^{ij} and \tilde{K}^{ij}_0 from the left-moving oscillators. Note that even though the oscillator contributions are chiral, the piece L^{ij}, which depends on the zero modes x^i and p^i, is nonchiral, and as a result the rotation symmetry generated by the J^{ij} is nonchiral. This fact will be important later. In addition to this global, nonchiral symmetry, there are a large number of local, chiral symmetries. For the right-movers, we have the conformal symmetries generated by the spin-2 stress tensor T(σ^+), supersymmetries generated by the spin-3/2 currents Q^{ȧ}(σ^+), as well as a Spin(8) affine algebra generated by the spin-1 currents K^{ij}(σ^+). These operators are given by (schematically, with normalizations suppressed)

T(\sigma^+) = \frac{1}{\alpha'} \partial_+ X^i \partial_+ X^i + \frac{i}{2} S^a \partial_+ S^a , \qquad Q^{\dot{a}}(\sigma^+) \propto \gamma^i_{a\dot{a}} S^a \partial_+ X^i , \qquad K^{ij}(\sigma^+) = \frac{1}{4} S^a \gamma^{ij}_{ab} S^b .

The index i transforms in the vector representation 8_v of Spin(8), the index a in the Majorana-Weyl spinor representation 8_s of positive chirality, and the index ȧ in the conjugate Majorana-Weyl spinor representation 8_c of negative chirality; the γ^i_{aȧ} are the Clebsch-Gordan coefficients between these three representations. There are similar currents for the left-movers. In the heterotic case, one does not have the supersymmetries and the Spin(8) current algebra on the left, but instead the E_8 × E_8 current algebra, which also contributes to the stress tensor as usual. Using the mode expansions of the operators above and the commutation relations (3.13), it is easy to obtain the Virasoro algebra with central charge c = 12p, 8 the commutators

[L_m, Q^{\dot{a}}_n] = \left( \frac{m}{2} - n \right) Q^{\dot{a}}_{m+n} , (3.23)

and the Kac-Moody algebra of the modes K^{ij}_m. There is in addition a nontrivial anticommutator of the supercurrent modes Q^{ȧ}_m [50], whose central extension involves some constants k̂ and ĉ. The modes of the supercurrent Q^{ȧ}_m transform as a spinor under the global rotations. There is an anomaly-free subalgebra of the Virasoro algebra generated by (L_0, L_1, L_{−1}) and (L̃_0, L̃_1, L̃_{−1}), which generates the global SL(2, IR) × SL(2, IR) that can be identified with the isometries of an AdS_3. We also have the rotational Spin(d) symmetries generated by the J^{ij}, which can be identified with the isometries of a spherical 'horizon' S^{d−1}. There are sixteen supersymmetries (Q^{ȧ}_0, Q̃^{ȧ}_0). In addition, there can be conformal supersymmetries, which we will discuss shortly.
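To see why precisely the modes m = 0, ±1 survive as global symmetries, note that the central term in

[L_m, L_n] = (m - n) L_{m+n} + \frac{c}{12} \, m (m^2 - 1) \, \delta_{m+n,0}

vanishes for m = 0, ±1, so that (L_0, L_{±1}) close into an anomaly-free SL(2, IR), with [L_0, L_{±1}] = ∓L_{±1} and [L_1, L_{−1}] = 2L_0; together with the left-moving counterpart this gives the SL(2, IR) × SL(2, IR) isometries of AdS_3.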
These symmetries together already give us enough reason to consider this worldsheet theory as the hologram of the near horizon geometry of a small black string in d + 1 noncompact dimensions, discussed in the previous subsection, that we are after. Taking this hologram seriously then predicts that the bulk theory must have not just these global symmetries but also the local Virasoro symmetries on the left and the right, as well as additional chiral symmetries, which we now discuss. The hologram also makes it transparent which symmetries can possibly be realized as chiral, local symmetries and which are only global symmetries. For example, it is clear that even though this algebra looks very close to a possible superconformal algebra of (8, 0) type with a possible Spin(8) conformal R-current, this is not true. This is because the commutator of K^{ij}_m with Q^{ȧ}_n does not close: the right-hand side does not equal \frac{i}{2} \gamma^{ij}_{\dot{a}\dot{b}} Q^{\dot{b}}_{m+n}, as one might expect if this were to form a closed algebra and if the Q^{ȧ}_m were to transform as the modes of a spinor operator under the R-symmetry. The reason for this failure is of course obvious, since the generators Q^{ȧ}_m transform as spinors only under the total angular momentum J^{ij}, and not if we consider only K^{ij}_0. More explicitly, the Q^{ȧ}_m defined above contain terms that are proportional to p^i, which commute with K^{ij}_0 and transform as a vector only when we take into account the orbital angular momentum L^{ij}. This shows that even though we have a global Spin(8) R-symmetry that acts on the supercharges, it cannot be extended to a local, chiral conformal R-symmetry [51]. This is just as well, because otherwise one would obtain a closed N = (0, 8) superconformal algebra from the commutators of (L_m, Q^{ȧ}_n, K^{ij}_l) with a Spin(8) chiral R-symmetry. This would contradict general theorems which state that the maximal allowed (right-moving) linearly realized superconformal symmetry is N = (0, 4) [52,53]. The failure of the R-symmetry to be chiral simply stems from the fact that p^i must transform under Spin(8) in order that Q^{ȧ}_m transform as a spinor; this necessitates the inclusion of the nonchiral L^{ij} piece in the R-symmetry generated by J^{ij}. The action (3.10) does admit an N = (2, 2) and an N = (4, 4) superconformal symmetry if we are willing to forgo the Spin(8) global symmetry. This fact is of particular physical significance in this context, because it instructs us as to which of the symmetries of the near horizon of the small black string we might hope to realize simultaneously and which not. Moreover, the N = (2, 2) superconformal symmetry is the minimum that is required for us to be able to apply the Kraus-Larsen argument to obtain the Wald entropy correctly. We would now like to exhibit this N = (2, 2) superconformal symmetry and, in particular, display the 12 global superconformal symmetries for the right-movers, of which we have only seen 8 thus far, namely Q^{ȧ}_0. For this purpose let us choose the embedding SU(4) × U(1) ⊂ Spin(8), under which the vector and the spinor representations decompose as

8_v = 4^{+1} \oplus \bar{4}^{-1} , \qquad 8_s = \bar{4}^{+1} \oplus 4^{-1} , (3.28)

where 4 is the fundamental representation of SU(4), 4̄ its complex conjugate, and the superscript denotes the U(1) charge. One can now define the local U(1) current J as the fermion bilinear (3.29), schematically J ∼ S^{a+} S^−_a, and the supercurrents G^± as the components of the supercurrent carrying U(1) charge ±2, where we have suppressed the worldsheet spin index and use the notation that S^{a+} transforms as 4^+ and S^−_a as 4^−, etc. It is easy to check that the modes of these currents, along with the L_m and J_n, satisfy the usual N = 2 superconformal algebra.
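For reference, in the textbook normalization (supercurrents of U(1) charge ±1) the remaining relations of the N = 2 superconformal algebra read

\{G^+_r, G^-_s\} = 2 L_{r+s} + (r - s) J_{r+s} + \frac{c}{3} \left( r^2 - \frac{1}{4} \right) \delta_{r+s,0} , \qquad [L_m, G^\pm_r] = \left( \frac{m}{2} - r \right) G^\pm_{m+r} , \qquad [J_m, G^\pm_r] = \pm G^\pm_{m+r} ,

with mode indices appropriate to the sector. Conventions for the normalization of J vary across references; here the G^± constructed above carry charge ±2 under the J defined in the text.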
In particular, in addition to the Virasoro algebra (3.22), we have

[J_m, J_n] = k \, m \, \delta_{m+n,0} . (3.33)

Note that the anomaly in the current-current commutator is proportional to k = 2p, which is related to the anomaly c = 12p in the Virasoro algebra. To see the global OSp(2|2) algebra one has to use spectral flow, and it is easy to check that (L_0, L_{±1}, J_0, G^±_0, G^−_{+1}, G^+_{−1}) have the desired commutation relations. We see from here that G^±_0 generate the usual supersymmetries and (G^−_{+1}, G^+_{−1}) generate the conformal supersymmetries. It is useful to summarize this construction of the N = 2 superconformal algebra using group theory. We have chosen above two linear combinations G^± of the eight supercharges Q^{ȧ}, such that they form a closed algebra with the J_n. The spinor Q^{ȧ} transforms in the 8_c. When the 8_v and 8_s decompose as in (3.28), the conjugate spinor representation 8_c decomposes as

8_c = 6^0 \oplus 1^{+2} \oplus 1^{-2} .

We are discarding the 6^0 and keeping only the 1^{+2} and 1^{−2} in the form of G^±. In the same way, one can use the decomposition SU(2)^4 ⊂ Spin(8) and use one of the SU(2) factors as the conformal R-symmetry. Choosing an appropriate combination of the eight supercharges that transform as a doublet of this SU(2), and discarding the others, one obtains a closed N = 4 superconformal algebra. It is not possible to construct a conformal R-symmetry larger than SU(2), consistent with the fact that N = 4 is the largest superconformal algebra that is allowed. We can repeat this analysis for the other compactifications, where the global R-symmetries are Spin(d) × Spin(8 − d). In all cases, the full global symmetry cannot be extended to linearly acting local currents, and the maximal local R-current algebra possible is SU(2), which corresponds to N = (4, 4) superconformal symmetry.

Holograms in the Internal Light Cone Gauge

We now discuss a somewhat unusual gauge that is a slight variant of the compact light cone gauge. The reason for considering this particular gauge will be clear after our bulk construction of the boundary symmetries in the next section. We will encounter there commutation relations between the Virasoro symmetries and the rotational symmetries that are somewhat unusual, but that are quite natural from the point of view of the gauge that we call the 'internal light cone gauge'. Let us first consider the bosonic side of the heterotic string. This consists of the bosons {X^M, M = 0, . . . , 9} and {H^I, I = 1, . . . , 16}. Instead of picking the light cone gauge to be (3.6), we define the bosons Y, Ȳ = \frac{1}{\sqrt{2}}(X^9 ± H^1). We define the internal light cone coordinates to be

Y^\pm = \frac{1}{\sqrt{2}} (X^0 \pm Y) (3.35)

and fix the internal light cone gauge so that Y^+ has no oscillators (3.36). As before, we can define dimensionless left-moving light-cone momenta, and we can solve as before the Virasoro constraints for the remaining longitudinal mode in terms of the transverse modes, where the superscript tr refers to all spacetime and internal degrees of freedom that are transverse to Y^±. In terms of modes, we have

q^+_L \alpha^-_n = \frac{1}{2} \sum_m : \alpha^a_{n-m} \alpha^a_m : - \, \delta_{n,0} , (3.40)

where the α^a_n are the modes of the transverse fields φ^a. The L_n ≡ q^+_L α^−_n obey the Virasoro algebra with c_L = 24. So far, everything went as in the usual light cone gauge quantization. But since we did something funny, there are certainly differences. Note that the SO(32) or E_8 × E_8 gauge currents of the heterotic string involve the boson H^1. The manifest symmetries are therefore broken to SO(28) and E_7 × E_8. The SO(8) rotation symmetry, on the other hand, does not get enhanced: the new transverse oscillator Ȳ cannot be rotated into the X^i because of the non-zero winding around X^9.
Since we expect the theory to be independent of the choice of gauge, we expect that the full gauge symmetry is actually restored. The SU(2) ⊂ E_8 symmetry would be generated by the operators ∂_−H^1, e^{±i√2 H^1} as before, except that these fields must now be expressed in terms of the appropriate transverse modes. We will not study this in detail for now, and simply note the fact that the currents involving H^1 will no longer be good conformal currents under the conformal algebra described above. Consider, for example, the U(1) current j generated by ∂_−H^1. The modes of this current have an anomalous commutation relation with the conformal generators (3.45). This shows that even though the theory has a manifest conformal affine symmetry in the compact light cone gauge, in this peculiar gauge the current is not conformal. Thus, the conformal affine symmetry is not manifest and is broken by the gauge choice that mixes the compact direction with an internal one. Note that the zero mode of the current, j_0 = ∮ dσ j, however, continues to commute with the Hamiltonian and still remains a manifest symmetry. If we only wanted to quantize the string, this is all we would need, since the only physical objects are integrated worldsheet currents. We are, however, looking for a macroscopic extended string, where the local currents on the worldsheet are important. For the superstring, we do a similar analysis. The internal direction in this case is a little more subtle and comes from within the fermionic lattice. We should then worry about how to construct the spin fields in order to get the spacetime supercharges, and whether they transform correctly under the physical symmetries. We begin with the RNS fields X^M, ψ^M, M = 0, 1, . . . , 9. We pick the fermions ψ^{1,2,3}, bosonize two of them, and get a system (θ, ψ_θ), where θ is at the free-fermion radius. This breaks the symmetry from Spin(10) → Spin(7) × Spin(3). Defining, as for the bosonic case, the fields Y, Ȳ = \frac{1}{\sqrt{2}}(X^9 ± θ), we define the new internal light-cone coordinates and fix the internal light cone gauge as in (3.35), (3.36). In addition, we also define the fermions (ψ^Y, ψ^{Ȳ}) in an analogous manner and set the oscillators of ψ^+ ≡ \frac{1}{\sqrt{2}}(ψ^0 + ψ^Y) to zero. The remaining transverse fields are {X^i (i = 1, . . . , 8), Ȳ, ψ^m (m = 1, . . . , 5), ψ^{Ȳ}}. As for the bosonic case, we can solve the theory explicitly and find that the operators L_n ≡ q^+_R α^−_n obey a Virasoro algebra with c_R = 12. In addition, we can also solve for ψ^− ≡ \frac{1}{\sqrt{2}}(ψ^0 − ψ^Y) in terms of the other oscillators. As in the usual light cone gauge, the operators G_n ≡ q^+_R ψ^−_n combine with the operators L_n to form an N = 1 superconformal algebra. The manifest symmetries that remain in this gauge are Spin(5); the Spin(3) which rotates the directions (1, 2, 3) is broken by the choice of gauge. 9 As in the heterotic case, the Spin(3) current that rotates the fermions, which used to be a conformal current of weight one, obeys an equation like (3.45). The zero mode again is a manifest symmetry, and this can be added to the bosonic rotations to recover the Lorentz rotations. To summarize, there is a global Lorentz rotation which is manifest in the internal light cone gauge; the local currents which rotate the chiral spinors on the worldsheet are, however, not manifest symmetries. Let us make a few comments about the Green-Schwarz spinors in this gauge. We can consider the spinors which are formed by bosonizing the RNS fermions ψ^{1,...,8} and refermionizing them. As in §3.3, we can write the spinors as S^{aα}, transforming as (2, 4) under Spin(3) × Spin(5).
The SU(2) above, which rotated ψ^{1,2,3}, can be rewritten as S^{aα} σ^{ij}_{ab} S^b{}_α. Our choice of gauge breaks this local symmetry on the worldsheet, but the zero mode is recovered and transforms well under the Virasoro algebra. The first thing to note is that these spinors are not frozen by the choice of gauge. At the quantum level, we have set all the oscillator modes of the operator ∂_+(\frac{1}{\sqrt{2}} X^0 + \frac{1}{2} X^9 + \frac{1}{2} θ) to zero. However, the boson θ also enters the spinor lattice, which is not fixed to be zero by this choice of gauge. The operators in the spinor lattice should of course be written in terms of the correct transverse modes. The second observation is that under the Hamiltonian L_0, the spinor currents are not of dimension one-half, as one might have thought, since the boson θ is not a free transverse oscillator. Similar comments now apply to the supercharge Q^{aα}: it seems to transform locally under Spin(3) × Spin(5), but the Spin(3) local chiral rotations are not manifest. Because of the way the fermion is used in the choice of the light-cone, the construction of the supercharges is a bit subtle in this gauge, and more work is needed to fully understand it. However, since this is just the familiar worldsheet theory in an unusual gauge, it is clear that such a construction must exist.

Holographic Dual of the Type II Superstring

We now consider the special case of d = 3 for the type II theory compactified on IR^{1,1} × IR^3 × T^5, with the worldsheet of the macroscopic string hologram extending along IR^{1,1}. Let us summarize the group theory associated with this compactification. The Spin(8) rotation symmetry is broken to Spin(3) × Spin(5), under which the supercharges transform as (2, 4). From these commutation relations of the symmetries of the hologram, we expect the near horizon theory to have Virasoro × Virasoro symmetry. The hologram also instructs us that there should be a Spin(3) chiral symmetry current corresponding to the symmetry generated by the K^{ij}_n. We further expect the symmetries of T^5, 10 and at least eight supersymmetries that correspond to the zero modes Q^{aα}_0. We will focus only on the right-movers, since the discussion is similar for the left-movers. The Virasoro symmetry in the boundary is most naturally realized by having an AdS_3 factor. String theory on AdS_3 is by now well understood as a WZW model based on the SL(2) current algebra on the worldsheet (see, for example, [54,55] and references therein). We therefore start with the SL(2, IR) super-affine algebra at level k, which factorizes into a bosonic SL(2, IR) affine algebra at level k_b = k + 2 and three free fermions, with total central charge

c = \frac{3(k+2)}{k} + \frac{3}{2} . (4.1)

Given such a super-affine SL(2, IR) algebra in the bulk, there is an elegant construction due to Giveon, Kutasov, and Seiberg to obtain the boundary Virasoro algebra, which has central charge

c_R = 6kp , (4.2)

where the integer p enters naturally and is to be identified with the winding number. Since we want to identify it with the right-moving transverse superstring, which has central charge 12p, we are forced to the choice k = 2 if we want agreement with the physical Wald entropy. The central charge of the SL(2, IR) factor for k = 2 is thus 15/2 from (4.1). In addition, to account for the T^5 factor, we must have five bosons and their NSR fermionic partners, with total central charge 15/2. Together these factors already account for all the central charge that is allowed for the right-moving NSR superstring.
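The central-charge bookkeeping that forces k = 2 can be made explicit:

c_{SL(2)_k} = \frac{3(k+2)}{k} + \frac{3}{2} \Big|_{k=2} = \frac{15}{2} , \qquad c_{T^5} = 5 \left( 1 + \frac{1}{2} \right) = \frac{15}{2} ,

summing to the full c = 15 of the right-moving NSR superstring, while the boundary central charge (4.2) becomes c_R = 6kp = 12p, matching the 8 transverse bosons and 8 fermions (8 + 8/2 = 12 per unit winding) of the light-cone hologram.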
Our bulk worldsheet thus has a target space of the form
$$SL(2,\mathbb{R})_{k=2}\times T^5.\qquad(4.3)$$
This particular type II model at level $k=2$ was proposed earlier in the context of small black holes in [28], where it was arrived at from different considerations, by taking a limit of magnetically charged states. However, the physical interpretation that we advance here, for the type II theory as well as for the heterotic theory in the next section, will be substantially different, especially with regard to the symmetries, as we now summarize. Given a bulk worldsheet target space as in (4.3), we are immediately led to a puzzle if we wish to identify it with the near horizon theory of a small black string. Since the allowed central charge of $c=15$ has already been used up, there is apparently no room for anything that can account for the rotational symmetries of the horizon. It was even suggested in [28] that these symmetries may completely disappear in the near horizon limit. The existence of symmetries of the near horizon geometry of the fundamental string has been a confusing issue at the level of supergravity solutions, because the answer is hidden at the string scale. Using the holograms as our guide proves to be very useful here. From the analysis of the boundary hologram for the $d=3$ case, there is no doubt about the existence of the $Spin(3)\times Spin(5)$ symmetry. As we have seen in §2.2, the existence of this symmetry is also required for understanding the R-symmetry and the entropy through its relation to anomalies. Therefore, if we wish to identify the bulk theory (4.3) as the holographic dual of the $d=3$ hologram, we must correctly exhibit the symmetries of the hologram, in particular the $Spin(3)$ generated by the $K^{ij}_n$. Otherwise, we would be led to conclude that the holographic identification is incorrect. It turns out that the rotational symmetries can be realized in a somewhat subtle way, using some special properties of the $k=2$ theory. For this purpose, we can view the target space theory as
$$\frac{SL(2,\mathbb{R})_{k=2}}{U(1)}\times U(1)\times T^5.\qquad(4.4)$$
This string background can be interpreted in a few different ways. In this paper, we always consider the time direction to be inside the coset, so that it is really a two dimensional black hole [56,57,58]. Having said that, we note that all the calculations are done in a Euclidean setting with $H^+_3$ (e.g. [59]), as is standard in string theory, and one has to perform a Euclidean continuation. This spacetime has zero temperature, and thus admits supersymmetry. Although the irrational nature of the above conformal field theory introduces many subtleties, in many respects string perturbation theory can be understood as usual; in particular, a modular invariant one-loop partition function can be written down. The partition function, symmetries, and moduli space of precisely these theories, for both the type II and heterotic cases, were discussed in a slightly different context in [61,62]. Now, precisely when $k=2$, the $U(1)$ boson happens to be at the free fermion radius, so that the $U(1)$ symmetry is enhanced. The boson can then be fermionized into two fermions which, together with its fermionic partner, generate the $SU(2)_{k=2}$ current algebra. Using this current algebra on the worldsheet, one can then construct the symmetry currents. We discuss this construction of the boundary Virasoro algebra and the boundary $Spin(3)$ symmetry in detail in §4.2 and §4.3, using a particular (almost) free field representation. The construction of symmetries raises a new puzzle.
One finds that the commutation relations of the $SU(2)$ currents with the Virasoro generators are not what one would expect from the modes of a dimension one current. However, we find that the commutators are precisely as would be expected in an internal light cone gauge in which the $SU(2)$ boson is used as one of the light cone directions. This leads us to identify the bulk theory defined by (4.3) as the holographic dual of the type II microscopic string hologram for the $T^5$ compactification, but in the internal light cone gauge. Another related issue that we clarify in this section is the construction of supersymmetries in the bulk theory. As we discussed in §3.2, the boundary hologram clearly has $(8,8)$ two dimensional supersymmetry. The supercharges we construct commute with the Hamiltonian $L_0$ and are interpreted as the zero modes of the $(8,8)$ supercurrents in the R sector; this implies [63] that the background is not pure global $AdS_3$, but rather that the fermions in the bulk can have boundary conditions corresponding to a space which is not simply connected. An example of such a space is the extremal $J=M=0$ BTZ black hole [64,65], which is singular in general relativity. The smooth string theory we have constructed seems to capture some aspects of the physics of this extremal black hole. It would be nice to understand the relation to earlier attempts to understand the entropy of this black hole using the symmetries of $AdS_3$ [66,67]. A consistent interpretation of our theory (we thank Per Kraus for a clarifying discussion on this point) is that it describes strings moving in a background which is one of the many Ramond ground states that make up the extremal massless BTZ black hole. As we shall see below, this vacuum carries the maximal allowed R charge and can be identified with a smooth $AdS_3$ geometry with a constant gauge connection turned on, which induces the fermions to change periodicity [68,69,70]. Let us make a quick comparison with the more familiar worldsheet construction of supersymmetric $AdS_3\times S^3$ in [60]. That construction gave rise to eight supercharges from the leftmovers (and another eight from the rightmovers) which formed among themselves the closed subalgebra involving the lowest $\pm\frac12$ modes of the supercharges of the $N=(4,4)$ superalgebra in the NS sector. Our construction of the spacetime supercharges is explicitly different. (Such a construction was written down in the appendix of [60], where the discussion was restricted to theories in which the $SU(2)$ currents and the $SL(2)$ currents do not mix on the worldsheet; the result was interpreted as a topological theory in spacetime after the imposition of an additional constraint. In our case, an additional constraint is not required.) We expand on this later when we discuss supersymmetry. Having explained the issues involved, we now present our discussion as follows. Starting with type II strings on (4.3), we shall choose variables on the string worldsheet such that we can build the symmetries of $AdS_3\times T^5$. In these variables, it will be clear that there are also additional $SU(2)\times SU(2)$ symmetries in the system. These symmetries are "stringy" and replace the geometry at small scales.

Superstrings on $AdS_3\times T^5$

Superstrings on $AdS_3$ were studied in [60] using the first order $\beta$-$\gamma$ system relevant to the $SL(2)$ symmetry. We instead use the fields $(\tau,\theta,\rho)$ mentioned above, which are better suited to the symmetry of our problem and which, near the boundary of the AdS space, represent the global time, the angular direction, and the radial direction of global $AdS_3$ (these variables were mentioned in [60] and discussed in more detail in [71,72]). $(\tau,\theta)$ are free fields, and $\rho$ has a linear dilaton of slope $Q=\sqrt{2/k}$, with central charge $c_\rho = 1+3Q^2$. We discuss a zero temperature supersymmetric $AdS_3$ theory which is Euclidean; correspondingly, the $\tau$ direction will be compact. The symmetry algebra that we will obtain is the $SL(2)$ algebra with a timelike direction, and its infinite extension, Virasoro.
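One can check that these free fields reproduce the super-affine $SL(2,\mathbb{R})$ central charge (4.1); with $Q=\sqrt{2/k}$, the arithmetic (ours) is:
$$c_{(\tau,\theta,\rho)} + c_{(\psi^\tau,\psi^\theta,\psi^\rho)} = \big(2 + 1 + 3Q^2\big) + \frac{3}{2} = \frac{9}{2} + \frac{6}{k} = \frac{3(k+2)}{k} + \frac{3}{2}.$$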
The correlation functions of the Lorentzian theory need, as usual, an analytic continuation. Our variables must not be thought of as the standard $AdS_3$ variables (note that they actually parameterize a flat three dimensional solid cylinder in string frame); they are instead related to them by a "T-duality" discovered in [73] using Buscher's rules. The geometric action of this duality has a fixed point and actually even changes the boundary conditions, but, as we discuss below, demonstrating the infinite dimensional Virasoro algebra associated with $AdS_3$ elevates it to an exact stringy statement. These types of exact string backgrounds were introduced in [74,75,76]. In addition, there are three fermions $\psi^\tau, \psi^\theta, \psi^\rho$ which make the worldsheet theory $N=1$ supersymmetric. We add the torus $T^5$, represented by the free $N=1$ system $X^i, \chi^i$, $i=1,\dots,5$. All the directions are Euclidean. Here $k$ is the supersymmetric level, and the central charge is
$$c = 3 + \frac{6}{k} + \frac{3}{2} + 5\times\frac{3}{2}.$$
To make this a critical string theory, we need to add the $(b,c,\beta,\gamma)$ ghosts with $c=-15$. Demanding that the total central charge vanishes fixes $k=2$ (indeed, $3+6/k+3/2+15/2=15$ requires $6/k=3$). In these variables, there is a strong coupling singularity associated with the $\rho$ direction. To keep string perturbation theory under control, we need to cap off this singularity. To do this, we notice that the variables $\rho,\tau,\psi^\rho,\psi^\tau$ have central charge equal to that of the $N=2$ coset $SL(2)_{k=2}/U(1)$. This "cigar" coset has a geometry which smoothly caps off the strong coupling region; there is a modulus associated with the value of the string coupling at the tip (in the actual $AdS_3$ space, this modulus corresponds to the fixed value of the dilaton), which can be made small so that string perturbation theory is well-defined. This is summarized in Appendix A. We have essentially spelt out the decomposition of the $SL(2)$ WZW model as $SL(2)/U(1)\times U(1)$, which has been used recently in many discussions of the $SL(2)$ model, e.g. to understand spacetime supersymmetry [77,78], the spectrum [79], the partition function [80], and interactions [81]. Like other related representations, this one has its advantages and drawbacks. The manifest spacetime $SL(2)$ symmetry is lost, but we will recover explicit expressions asymptotically, where we can use the above free field variables; in the full theory, we must use the coset algebra instead. On the other hand, the symmetries related to the other fields, like $\theta, \psi^\theta$, are always manifest in our variables. The symmetry algebra currents which we will write down are non-normalizable towards the weak coupling end as vertex operators on the worldsheet, and therefore act on worldsheet configurations which are localized in that asymptotic region. These correspond to inserting operators on the boundary in the AdS/CFT correspondence [82]. As for the $\beta$-$\gamma$ variables, this has precise implications, e.g., for the understanding of the central charge [83]. The angular direction of the cigar is at a specific radius, which in our case is the free fermion radius $R=2$ (we keep $\alpha'=2$ throughout this section).
Since this direction is associated with the Euclidean time direction, its compactness is not directly significant to us; however, the Euclidean $AdS_3$ geometry [60] dictates that the angular direction $\theta$ also be at the same radius. This leads to an enhancement of symmetry which we discuss below. For now, it implies that the vertex operators must have integer momenta in units set by this radius. In the asymptotic region where the string coupling is small, the currents of the $N=1$ superconformal algebra take their standard free field form with background charge $Q=\sqrt{2/k}$ (4.5). We choose exactly the same structure for the left movers.

4.2 The $SL(2)_R$ symmetry from the worldsheet

If the above system indeed represents $AdS_3$, it should be possible to find the infinite dimensional $SL(2)$ symmetry algebra as operators built from these fields. Below, we construct such operators; as shown in [71], this is equivalent to the construction in [60]. Consider a family of dimension half operators $J_n$ on the worldsheet, labeled by $n\in\mathbb{Z}$ and built from the null exponentials $e^{n(\tau+i\theta)}$ dressed with the fermions (4.6); they obey simple OPEs with the worldsheet supercurrent and among themselves (4.7). Firstly, note that the absence of higher order poles in the OPE involving the supercurrent implies that all the currents $e^{-\varphi}J_n$ in the $(-1)$ picture ($J^{(0)}_n$ in the zero picture) are BRST invariant on the string worldsheet and thus act on physical string states. Secondly, the constant $a$ appearing in these currents can be changed by adding the BRST trivial operator $e^{-\varphi}e^{n(\tau+i\theta)}(\psi^\tau+i\psi^\theta)$ mentioned above. In the zero picture this is a total derivative, and its addition can be thought of as shifting the vacuum energy by a constant. This is also obvious from the Virasoro algebra written below: the linear term in $n$ in the central extension can be reabsorbed into a constant shift of $L_0$. We set this constant to unity, as is the usual convention. The charges $L_n = \oint dz\, J^{(0)}_n(z)$ obey a Virasoro algebra (4.8). The central term arises from the second order pole:
$$\oint dw\oint dz\,\frac{1}{(z-w)^2}\,e^{2n(\tau+i\theta)}(z)\,e^{2m(\tau+i\theta)}(w) = \oint dw\,\partial_w\big(e^{2n(\tau+i\theta)}\big)(w)\,e^{2m(\tau+i\theta)}(w) = n\oint dw\,2\,\partial_w(\tau+i\theta)\,e^{2(m+n)(\tau+i\theta)}(w).$$
We see that $c=12p$, where $p$ is measured by the winding integral (4.9) of $\partial(\tau+i\theta)$ and is interpreted as the number of fundamental strings in the system [60]. As explained in [83], this central charge computation, done in a single string Hilbert space, is the one measured by the long strings near the boundary of AdS; the central charge measured by the short strings in the center of AdS arises from disconnected diagrams [82]. Note that if we want the $SL(2,\mathbb{R})$ currents above to be local with respect to the Hilbert space of states involving $(\tau,\theta)$, the boson $\tau$ must be compact on a circle of the same size as the $\theta$ circle, which we already have. The extrapolation from the semiclassical picture in (4.5) thus seems to be consistent with the full quantum picture. Recalling that the radius of the circle is tied intrinsically to the enhancement of symmetry, we can restate this as follows: the consistency of the perturbative string theory with the correct symmetries produces exactly the expected entropy of the system. We take this as strong evidence for the existence of the hologram.

4.3 The $SU(2)_R$ symmetry from the worldsheet

As mentioned earlier, we get an enhancement of symmetry, since the boson $\theta$ is at the free fermion radius.
The angular coordinate $\theta$ can be written in terms of two free fermions, $e^{\pm i\theta} \equiv \frac{1}{\sqrt2}(\psi^1\pm i\psi^2)$, which along with the fermion $\psi^\theta \equiv \psi^3$ generate a left moving $SU(2)_2$ current algebra with currents $K^i(z)$ and corresponding charges $K^i = \oint K^i(z)$. This $SU(2)$ is a physical symmetry of the string theory, as can be seen from the fact that its generators in the $(-1)$ picture, given by the dimension half currents $\psi^i$, have a single pole with the worldsheet supercurrent (4.5). From general arguments [83], we expect that these symmetries would be extended to current algebras on $AdS_3$, giving rise to an infinite set of conserved charges, just as the global $SL(2)$ is extended to the infinite dimensional Virasoro algebra [41]. However, it seems difficult to extend the $SU(2)$ global symmetry in such a manner; the technique above of using the null operator $e^{n(\tau+i\theta)}$ naively fails, because the boson $\theta$ which generates the $SU(2)$ zero modes is also involved in making the dimension zero operator $e^{n(\tau+i\theta)}$. One could try to define the infinite set of operators by using the OPE between the null operators and the $SU(2)$ currents above to define a normal ordering. This can be summarized in a nice way by defining the boundary currents $K^i(x)$ as an integral over the worldsheet weighted by a dimension zero operator $\Lambda(x,z)$ [83]. This is a nice exercise, and $x$ acquires the interpretation of parameterizing the worldsheet on the boundary. But even if we do that, there seems to be a puzzle. One expects [83] that the worldsheet currents $K^i$ lead to corresponding currents in spacetime which are conformal currents under the spacetime Virasoro, i.e. the charges $K^i$ should be thought of as the zero modes $K^i_0$ of an infinite set of charges $K^i_n$ which obey the commutation relations of a dimension one conformal current (4.10). In particular, the zero mode should commute with all the Virasoro generators. We can check that the expected commutation relation (4.10) of a conformal current does not hold. For example, the commutator of the charge $K^3_0 = \oint i\,\partial\theta$ with the Virasoro charges built from (4.6) is nonvanishing (4.11). This puzzle is resolved by noting that this commutation relation is precisely the one found in the internal light cone gauge of the boundary theory in the previous section! To summarize: on the bulk string worldsheet, the Virasoro generators and the $SU(2)$ generators mix, and this makes the conformal nature of the $SU(2)$ currents in spacetime non-manifest. The holographic dual of this statement is that the choice of the internal light cone gauge on the boundary string breaks the conformal nature of the $SU(2)$ current in the same way. It would be very interesting to understand whether there is a different formulation of the theory in which the choice of gauge is not built in, but can be added, and the change of gauge is covariant. The identification of the $SU(2)_R$ symmetries above also allows us to identify the spacetime vacuum more precisely. The integral (4.9) tells us that the spacetime vacuum carries the maximal allowed $U(1)_R\subset SU(2)_R$ charge and therefore should be interpreted as the unique vacuum in the Ramond sector with the corresponding value of the charge.

The $T^5$ symmetries from the worldsheet

The translation symmetries associated to the $T^5$ at a generic point in its moduli space can also be extended into a level one $U(1)^5\times U(1)^5$ current algebra in spacetime. The right moving operators for these symmetries are
$$P^{iR}_n = \oint dz\, e^{-\varphi}\chi^i\, e^{n(\tau+i\theta)}(z),\qquad(4.12)$$
and there is a similar set for the left movers.
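For orientation, if the spacetime currents just discussed were conformal currents of dimension one, their modes would obey the standard commutators; the normalizations below are a sketch of the conventional form (shown for the $SU(2)$ case):
$$[L_m, K^i_n] = -n\,K^i_{m+n},\qquad [K^i_m, K^j_n] = i\,\epsilon^{ijk}K^k_{m+n} + \frac{\hat k}{2}\,m\,\delta^{ij}\,\delta_{m+n,0},$$
so that, in particular, $[L_m, K^i_0]=0$ for all $m$. It is precisely the first of these relations that fails for the charges constructed here, in the manner characteristic of the internal light cone gauge.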
The supersymmetries from the worldsheet

Now that we have understood the bosonic symmetries fully from the worldsheet point of view, we turn to the supersymmetries. The supercharges must live in representations of the bosonic symmetries discussed above. To get spacetime supersymmetric theories, the standard procedure in the case of compactifications to flat space is to use an $N=2$ algebra on the worldsheet [84]. In the case of theories on $AdS_3$, it was pointed out in [60] that the algebra expected from the boundary superconformal theory is actually reproduced in the bulk using a different construction, wherein one simply makes spin fields out of ten free fermions and keeps those that are physical and mutually local. (For the case $k=2$, there does exist a different $N=2$ structure which reproduces these supercharges, as we briefly mention below; this is not true for generic $k$ [80].) We instead use the standard procedure of [84], using the $N=2$ worldsheet structure. (Such a construction was sketched in the appendix of [60], and was interpreted, after an additional projection which threw out four of the eight supercharges, as a possible description of the R sector of the $N=(4,4)$ algebra of the D1/D5 system on $T^4$. As we have discussed, the bosonic as well as the supersymmetries in our boundary theory are explicitly different.) This ensures that the supercharges we build are physical operators. The spacetime supercharges we thus obtain are indeed not those of the NS sector of a boundary $N=(4,4)$ algebra, but instead the supercharges of an $N=(8,8)$ superalgebra, which have zero conformal dimension. This is in accord with the discussion of the hologram in §3. To proceed, we split the worldsheet fields into two groups. The first consists of the cylinder formed by $\rho,\theta,\psi^\rho,\psi^\theta$. This has an $N=2$ algebra (A.3) with a $U(1)_R$ symmetry $J^1_R \equiv i\partial\phi = -i\psi^\rho\psi^\theta + i\partial\theta$. This is summarized in Appendix A, with $X\equiv\theta$. (Note that in this construction, since $\tau$ and $\theta$ are at the same free fermion radius, there is a different $N=2$ supersymmetry on the worldsheet in which $X\equiv\tau$ is fermionized and $\psi^\theta$ is paired with $\psi^5$ from the torus. Using that structure to build the supercharges gives the standard construction of [60] for the case $k=2$, wherein the eight supercharges have conformal dimension $\pm\frac12$ and form part of a spacetime $N=(4,4)$ algebra. The two sets of eight supercharges are not local with respect to each other, so we have to choose one or the other.) The rest of the fields, $\tau, X^i$ and their superpartners $\psi^\tau, \chi^i$ ($i=1,\dots,5$), are paired up to get a complex structure and a corresponding $N=2$ structure. The fermions can be bosonized into three bosons $H_{1,2,3}$, and the $U(1)_R$ current is then expressed as a sum of bilinears in these fermions: $J^2_R \equiv i(\partial H_1 + \partial H_2 + \partial H_3)$. To perform a chiral $\mathbb{Z}_2$ projection, we can use the symmetry generated by the $U(1)_R$ current $J^1_R + J^2_R$. In practice, the GSO projection is best implemented by introducing target space supercharges and demanding locality of physical operators, as in [85]. We introduce the $(1,0)$ supercurrent operator (4.13), where $S_a$ is the spin field of $SO(6)$ built out of the three pairs of free fermions and $\varphi$ is the bosonized superghost. There are $2^4 = 16$ such supercurrent operators, and 8 of them are mutually local; these are all of one chirality in the six dimensions. Of course, we would have obtained the same supercharges by simply making spin fields out of our ten free fermions and demanding consistency.
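Although (4.13) is not reproduced here, the standard flat space construction suggests a schematic form for these supercurrents; the sign assignments $\epsilon_I=\pm1$ and the normalization below are our guess at the conventions, with $\phi$ the $U(1)_R$ boson of the cigar and $H_{1,2,3}$ as above:
$$S(z)\ \sim\ e^{-\varphi/2}\,\exp\Big[\tfrac{i}{2}\big(\epsilon_0\,\phi+\epsilon_1 H_1+\epsilon_2 H_2+\epsilon_3 H_3\big)\Big](z),$$
which gives $2^4=16$ candidate operators; demanding mutual locality (the GSO projection) keeps the 8 with, say, $\prod_I\epsilon_I=+1$.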
For the type II theories, there is also a similar condition on the leftmoving side, giving rise to the IIA or IIB theories. Now, we do not really have an $SO(6)$ symmetry, and we must arrange our supercurrents in the $SU(2)\times Spin(5)$ symmetry. From the reduction $Spin(6)\to Spin(5)$, it is clear that the supercurrents are spinors under $Spin(5)$. One can also check that they are spinors under the $SU(2)$: recall that the $K^3$ of the $SU(2)$ is given by $K^3 = \oint dz\,\partial\theta$, and the supercharges above all have a $\theta$ dependence in the exponent with coefficient $\pm\frac12$. We then have eight mutually local supercurrents which fall into the minimal spinor of this group, which is a $(\mathbf2,\mathbf4)$ with a (pseudo)reality condition using the antisymmetric charge conjugation matrices $\Omega_{ab}$ and $C_{\alpha\beta}$, as described in §3.3 and Appendix B. We accordingly call the supercurrents $S^{a\alpha}(z)$. Note that the supercurrents are local on the worldsheet with respect to all the vertex operators generating the spacetime bosonic symmetry currents described earlier, in particular the spacetime Virasoro currents. Again, we note the special nature of the $k=2$ theory; this does not happen for generic $k$, as was discussed in [78]. The algebra of the supercharges $Q^{a\alpha} = \oint dz\, S^{a\alpha}(z)$ can be deduced by examining the OPE of the currents (4.13) above. After performing the usual picture changing operation on the right hand side, we obtain the algebra (4.14), involving $L_0 = \oint\partial\tau$ and the torus momenta $P^i$ ($i=1,\dots,5$). Since there is no $\tau$ dependence in the supercurrents (4.13), it is clear that all the supercharges have vanishing conformal dimension. If we restrict to the subspace where $P^i = 0$, $i=1,\dots,5$, we get the supersymmetry algebra discussed earlier. These supercharges are dimension zero under the spacetime Virasoro algebra (4.8), but they involve the boson $\theta$, and hence suffer from the same problem as the $SU(2)_R$ symmetry: the supercurrents in spacetime seem not to be dimension half conformal currents. Again, this is what is seen in the boundary theory in the internal light cone gauge.

Holographic Dual of the Heterotic String

The heterotic string shares with the type II string a chiral set of fields, and the physics governed by these fields is similar. In this section, we shall try to emphasize the novel features of the heterotic theory arising from the leftmovers and from the process of combining the two chiralities of the string fluctuations. From the leftmovers, we expect chiral symmetry currents of $E_8\times E_8\times Virasoro$. We then have the same bosonic fields $\rho, \tau, X^m$, $m=1,\dots,5$, as on the right, with $c=10$, and the gauge lattice of $E_8\times E_8$ or $SO(32)$ with $c=16$ (for brevity we will often refer only to $E_8\times E_8$, but our considerations apply to both possibilities). This already gives a total central charge of 26. Counting the central charge as before, this means that the target space must be of the form
$$\frac{SL(2,\mathbb{R})_{k_b=4}}{U(1)}\times T^5\times\big(E_8\times E_8\big)_1.\qquad(5.1)$$
To build a heterotic string theory, we need to combine these left movers with the rightmovers of (4.4), with $k_b = k+2$. For generic values of $k$, these heterotic cosets have not been studied very well, but it is known that the radius of the left moving boson generating the $U(1)$ is related to the radius of its right moving counterpart by a factor of $\sqrt{k_b/k}$, which is $\sqrt2$ in our case [28]. In the case of $k=2$ we can actually understand this better: the modular invariance of the partition function dictates that the left moving boson $\tilde\theta$ generating the $U(1)$ must be at the self dual radius (consistent with the above factor of $\sqrt2$), so that the symmetry is enhanced to $SU(2)_1$ [62].
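The left moving counting parallels the superstring check above; with the shared linear dilaton slope $Q=\sqrt{2/k}\,\big|_{k=2}=1$, the arithmetic (ours) is:
$$c_\rho + c_\tau + c_{T^5} = (1+3Q^2) + 1 + 5 = 10,\qquad 10 + c_{E_8\times E_8} = 10 + 16 = 26 = c^{\rm bosonic}_{\rm crit}.$$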
If we now wish to use a construction similar to that of the previous section to construct the Virasoro symmetry in the boundary, we would require such a boson $\tilde\theta$ at the self dual radius. Since we would like the torus to have free moduli corresponding to its radii, this boson must actually arise from within the $E_8\times E_8$ lattice, in the same way that it arose from the $SU(2)_2$ represented by three free fermions on the supersymmetric side. This will have non-trivial consequences which seem strange at first sight but, as we shall see, simply correspond to a particular gauge choice in the boundary theory, as in §3.3.

The $SL(2)_L$ symmetry from the worldsheet

In the heterotic theory, the form of the generators on the bosonic side is different; they are actually much simpler, since there is no constraint arising from $N=1$ worldsheet supergravity. The form of the $SL(2)$ currents is very similar to the supersymmetric case (4.6, 4.7), but simpler. For a boson $\tilde\theta$ with canonical normalization $\tilde\theta(z)\tilde\theta(0)\sim-\ln z$, using the techniques of the previous section, it can be checked that the analogous currents, built from the exponentials of $\tau+i\tilde\theta$, generate a boundary Virasoro algebra whose central extension is again measured by the winding of $\tilde\theta$. This implies that for the heterotic side $\kappa=4$ and $c=24p$. The central charges are then simply $c = 6\kappa p$, with $\kappa=2$ and $\kappa=4$ for the supersymmetric and heterotic sides. As was noted in [28], this is consistent with the fact that the levels of the supersymmetric coset and the bosonic coset are $k=2$ and $k_b=k+2$ for the two theories. The interpretation of this fact in [28] was in terms of a "thermodynamic" entropy, wherein the cigar angular variable is the Euclidean time of a finite temperature theory. It is not very clear what such an interpretation means when the radius of the circle (more precisely, the operator content of a boson at the given radius) on the left and on the right are not equal, as in the heterotic case above. The microscopic computation of the entropy above follows from the central charge computation in the Virasoro algebras on the left and the right. The asymmetric nature of this circle direction is completely consistent with the factorization of the theory into left and right movers. The theory is at zero temperature, consistent with supersymmetry; there are two non-interacting Hamiltonians, and the two corresponding central charges $c_{L,R}$ arise simply from counting the various vacuum configurations. It is indeed interesting that for the type II case such a thermodynamic calculation of the entropy agrees with our microscopic one. It would be nice to understand this better. As mentioned in the previous section, our interpretation of the resulting spacetime is also different: there is a stringy $Spin(3)$ symmetry and maximal supersymmetry, as discussed in detail there. The spacetime gauge symmetry currents are built as in (4.12), by dressing the dimension one worldsheet gauge currents $J^{ab}(z)$ with the null exponentials (5.3). Note that, because of the way we have 'borrowed' $\tilde\theta$ from the gauge lattice, only the $E_8\times E_7$ (or $SO(28)$) part of the gauge currents is realized as a conformal affine algebra. The $SU(2)\subset E_8\times E_8$ generated by the boson $\tilde\theta$ suffers from the same problem as the $SU(2)_R$, i.e. it seems to be non-conformal in spacetime. As for the $SU(2)_R$ symmetries, we interpret this as a consequence of the particular gauge we have chosen for this construction, which corresponds to an internal light cone gauge. The 'non-conformal' commutators between the $SU(2)$ currents (5.3) and the Virasoro generators are then exactly what one expects in this particular gauge.
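In summary, the boundary central charges on the two sides follow a single formula, as stated above:
$$c_R = 6\kappa_R\,p = 12p\quad(\kappa_R = k = 2),\qquad c_L = 6\kappa_L\,p = 24p\quad(\kappa_L = k_b = 4),$$
which is 12 per unit winding for the right-moving transverse superstring and 24 per unit winding for the left-moving bosonic (heterotic) degrees of freedom.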
In fact, we could have embedded the $SU(2)$ in the original gauge lattice in many different ways, which should probably be interpreted as different possible gauge choices. Note, however, that the global $E_8\times E_8$ symmetry, generated by the zero modes of the currents, commutes with the Hamiltonian, and hence one can surely assert the existence of the full $E_8\times E_8$ symmetry.

Conclusions and Open Problems

We have proposed that a simple, free two dimensional SCFT living on a macroscopic superstring can be regarded as the hologram for the gravitational theory on $AdS_3$ in the vicinity of a macroscopic string. For the $T^5$ compactification, we have written down the holographic dual as an exact worldsheet theory in the bulk. As we have seen, the logic of this construction is very tight, and we summarize it below.

1. For the superstring, the consistency of the perturbative string theory demands that the level of the supersymmetric WZW model be $k=2$, which gives the correct right moving central charge.

2. For the heterotic string, the consistency of the perturbative string theory demands that the level of the left moving WZW model be $k_b = k+2 = 4$, which gives the correct left moving central charge.

3. If in addition we have a $T^5$ factor for the superstring, and an additional level one $E_8\times E_8$ factor for the left-movers of the heterotic string, then one finds that the maximally allowed worldsheet central charges of 15 for the superstring and 26 for the bosonic string are already saturated. The target space therefore must be of the form (4.4) for the superstring and (5.1) for the left-moving heterotic string.

4. The form of the target space in (4.4), however, raises an important puzzle about the symmetries. If this is to be identified with a small black string in an $\mathbb{R}^3\times T^5$ compactification, then its global symmetries must contain a $Spin(3)$ factor corresponding to $\mathbb{R}^3$ rotations. Fortunately, precisely for $k=2$, the $U(1)$ boson in (4.4) is at the free fermion radius, and it is then possible to construct the $Spin(3)$ currents using this fact. All global symmetries expected for the horizon of a black hole, and independently from the boundary hologram, can be constructed from the bulk theory. Similarly, for the heterotic string, the boson is at the self dual radius, which makes it possible to recover all the symmetries.

5. One also expects $Spin(3)$ affine currents from the bulk, corresponding to the $K^{ij}_n$ in the hologram. The symmetry currents can be constructed from the bulk, but one finds that the commutation relations with the Virasoro generators are unusual and are not what one might expect for the modes of a conformal dimension one current. We note, however, that the commutators are exactly what one might expect from the boundary hologram (3.45) if it were gauge fixed using the unusual internal light cone gauge discussed in §3. This suggests that one should identify the symmetry algebra constructed from the bulk in these particular variables with the corresponding algebra in the hologram in a particular internal light cone gauge.

6. One can construct in the bulk eight chiral supersymmetries corresponding to the zero modes $Q^{a\alpha}_0$ expected from the boundary in the Ramond sector.

7. To obtain a small black hole from a small black string, we should identify along the length of the string to obtain a compact circle. The generator of such a translation is $L_0 - \bar L_0$. Note that both $L_0$ and $\bar L_0$ commute with the $Spin(3)$ and $E_8\times E_8$ currents, and hence such an identification commutes with the symmetries.

It is nontrivial that such a consistent worldsheet theory exists. The bulk worldsheet construction is very tightly constrained.
The requirements of the maximal allowed central charge of the bulk worldsheet, together with the physical requirements following from symmetries and the Wald entropy, lead almost uniquely to the theory that we have used. Using this theory, we are then able to give a detailed construction of all boundary symmetries in a particular free field realization, which seems to correspond to a particular choice of internal gauge in the boundary theory. The most unsatisfactory part of our construction is the necessity of choosing this particular unusual gauge. If the basic identification of the target spaces (4.4) and (5.1) is correct, then it should be possible to construct the symmetries in a way that corresponds to the usual (compact) light cone gauge, where they are manifest. This suggests that it may be possible to generalize the GKS construction [60] to construct the boundary Virasoro algebra abstractly from the $SL(2,\mathbb{R})/U(1)$ factor alone, in a way that does not require us to borrow a $U(1)$ factor. The coset theory does not admit $SL(2,\mathbb{R})$ symmetries. However, what we are really after is a Virasoro symmetry in the boundary. The coset theory is expected to have an extended chiral algebra. For example, in the compact analog, $SU(2)_k/U(1)$ is just the parafermion theory, which does not have $SU(2)$ symmetry but admits a conserved spin-3 current that generates the $W_3$ algebra, which is nonlinear. Perhaps one can obtain a realization of the boundary Virasoro algebra utilizing these additional (nonlinear) symmetries in the bulk. This is a very interesting open problem, and could be related to large extended algebras, as suggested in [29] (see [32] for a recent discussion). The existence of a worldsheet construction resolves many of the puzzles relating to small black holes, and in particular gives a construction of the near horizon geometry of both heterotic and type II small black holes in four dimensions. The identification of the macroscopic string worldsheet theory as a boundary hologram is very useful in understanding the physics. In particular, the issues of global and local symmetries, and the applicability of the Kraus-Larsen argument in this context, become transparent. There are chiral stringy currents generated by $K^{ij}$, a linear combination of some of which can be identified with an R-current. These do not correspond to the nonchiral gauge symmetries generated by $J^{ij}$ that are visible in supergravity, for which the bulk Chern-Simons terms vanish. There are a number of possible generalizations and open questions.

1. The holograms make it clear that there is nothing special about $d=3$. This is consistent with what one might expect from scaling analysis in supergravity. So if a holographic dual exists for $d=3$, it is expected to exist for all values. It seems likely that the other higher dimensional theories can be obtained simply by decompactifying the $T^5$. This is what is required in the boundary hologram, and it should be true also in the bulk. For example, when $T^5$ is replaced by a noncompact $\mathbb{R}^5$, one can add to the angular momentum $J^{mn}$ an orbital angular momentum term involving $L^{mn}$. The full $Spin(8)$ symmetry is not manifest, but this is because of the choice of gauge that breaks $Spin(8)$ to $Spin(3)\times Spin(3)$. Getting the off-diagonal currents $J^{mi}$ should also be possible, but requires more work.

2. Since both the bulk and the boundary are given by tractable worldsheet conformal field theories, it ought to be possible to test this holography in greater detail than has been possible in other contexts.
For example, a comparison of correlation functions might be possible, as was done for the related F1-NS5 system [87,88].

3. One thing to keep in mind is that the boundary theory is expected to correspond to a string field theory of the bulk theory that includes the multi-string states as well. It would be interesting to see whether it is possible to construct the string field theory of the bulk using conventional methods of string field theory and to compare it with the boundary hologram. In another related direction, it would be interesting to try to understand the known non-perturbative objects, like D-branes in the bulk $AdS_3$, from the perspective of the boundary theory.

4. In the bulk, the $AdS_3$ structure is an important part of the symmetry algebra, which manifests itself in the associated Brown-Henneaux stress tensor. In the boundary, this translates to closed algebras which contain the Virasoro generators. As we have seen, with a few additional assumptions, such as linear realization and the absence of higher spin currents, these algebras are constrained very tightly. It would be interesting to look for non-linear generalizations which involve higher spin operators.

5. There is also the related issue of single string vs. multi string Hilbert spaces. The boundary theory has a large extended chiral algebra which involves all the chiral operators on the string; these are not expected to be realized in the bulk single string Hilbert space. The symmetries of the single-string Hilbert space in the bulk form a closed subalgebra. So it seems reasonable to expect that only a maximal closed algebra that includes Virasoro will be realized in the single-string Hilbert space, and not all extended algebras.

6. The orbital angular momentum generators $L^{ij}$, which rotate only the bosons on the boundary, appear to be absent in the bulk. On the boundary, for the heterotic case, there are two symmetries generated by $J^{ij}$ and $S^{ij}$, but in the bulk we have only one. More work is needed to fully understand the details of this correspondence.

7. A single macroscopic string with $p$ windings along a single circle is marginally unstable against decay into $p$ strings with unit winding. One then has to take into account the multi-string branch, analogous to the Coulomb branch in the D1-D5 system [89]. In the context of F1-NS5 systems, this necessitates turning on Fayet-Iliopoulos terms in the gauge theory that correspond to RR fluxes in the bulk. For fundamental strings, the multi-string branch can be prevented by the simple device of adding momentum or winding, but not both, along an internal circle. This makes the configuration stable against marginal decay without changing the entropy, as explained in §2.2. For such configurations, an internal light-cone gauge would be more natural.

8. The way we measured the entropy is by using a long fundamental string probe in $AdS_3$. This involved a definition (although a natural one) of the "number" of F-strings $p$, which in the context of the $SL(2,\mathbb{R})$ current algebra is related to the spectrally flowed sectors.

9. The hologram that we have discussed can also be related to the usual gauge-gravity duality [90,91,92] by S-duality. If we consider $N$ D1-branes in this context [93], then the dilaton becomes strong near the core. So one must perform an S-duality transformation to go to the weakly coupled F-string description in order to see the horizon that we have discussed. We are taking a deep infrared limit of the D1-brane worldvolume theory.
In this limit, the $1+1$ dimensional theory is simply the symmetric product $(\mathbb{R}^8)^N/S_N$, where $S_N$ is the symmetric group of $N$ objects [94]. There are many twisted sectors of this orbifold, classified by the conjugacy classes of the symmetric group, which are given by collections of cycles of various lengths (see [71,95,96] for a discussion in a similar context). Here we have discussed the sector in the orbifold with cycle length $p$.

Appendix B

The matrix $B = C^t$ is used to impose a Majorana type condition. In this case, we have a pseudo-Majorana condition using the matrices $B_{\alpha\beta}$ and $\Omega_{ab} = i\sigma_2$. The pseudo-Majorana condition is $\lambda^* = B\Omega\lambda$, which is consistent provided $(B\Omega)^*(B\Omega) = 1$. Another way to write this is to define the Majorana conjugate as $\bar\lambda = \lambda^t\Omega^t C$ and then impose $\bar\lambda = \lambda^\dagger$. Spinors carry a lower index, $\lambda_\alpha$, and their Majorana conjugates carry an upper index, $\lambda^\alpha$. Indices are raised and lowered with $C$ and $\Omega$. In terms of the indices, the pseudo-reality condition is $(\lambda_{a\alpha})^* = \lambda^{a\alpha}$.
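As a check of the consistency condition just stated (our computation; recall that $\Omega = i\sigma_2$ is real with $\Omega^2 = -1$, and that $B$ and $\Omega$ act on different index types and hence commute):
$$\lambda = (\lambda^*)^* = \big(B\Omega\lambda\big)^* = B^*\Omega\,B\Omega\,\lambda\ \Longrightarrow\ (B^*B)\,\Omega^2 = 1\ \Longrightarrow\ B^*B = -1,$$
so the obstruction $B^*B = -1$ to an ordinary Majorana condition is exactly compensated by the symplectic invariant $\Omega$; this is the usual symplectic (pseudo-)Majorana mechanism.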
Encapsulation of Grapefruit Essential Oil in Emulsion-Based Edible Film Prepared by Plum (Pruni Domesticae Semen) Seed Protein Isolate and Gum Acacia Conjugates

A dry-heated Maillard reaction was used to prepare plum seed protein isolate and gum acacia conjugates. Emulsion-based edible films (EBEF) were prepared by the encapsulation of grapefruit essential oil, using the conjugate solution as the continuous phase. The conjugates formed after 3 days of dry heating showed a significant improvement in emulsifying properties due to the unfolding of the protein, as confirmed by structure analysis. The droplet size, electrical charge, and viscosity of the emulsions increased with increasing essential oil concentration, and all emulsions exhibited 'gel'-like behavior. The water vapor barrier property, surface hydrophobicity, mechanical properties, and thermal stability of the films improved as the essential oil content increased in the range of 1-4%, owing to enhanced intermolecular interaction and compatibility as well as a denser microstructure. Furthermore, all films exhibited an inhibitory effect against E. coli, while their radical scavenging activity depended on the release rate from the films. The results obtained in this work confirm that EBEF could be used as a novel active food packaging in the near future.

Introduction

Proteins, polysaccharides, and lipids are the main components used to produce edible films. However, each class of polymer has its disadvantages. Protein films, such as those made from soy protein, corn protein, whey protein, and caseinates, possess better barrier properties but poorer mechanical properties than polysaccharide films [1,2]. Films made from polysaccharides (such as chitosan, fiber, and starch) are more transparent and oil-resistant but exhibit higher water permeabilities than protein films. Lipid films, such as those made from beeswax and fatty acids, are fragile and brittle but are used to reduce water transmission [3]. Therefore, it is important to produce hybrid films by combining a hydrophilic matrix with hydrophobic compounds in order to obtain multiple desirable properties. Composite films from hydrocolloid and lipid mixtures can be obtained by bilayer or emulsion technology. Bilayer films have shown cracks or non-uniform surfaces [4]. Therefore, emulsion-based edible films (EBEF), which can easily be obtained by casting a film-forming emulsion followed by drying, have attracted increasing research interest [5]. The properties of EBEF can be significantly affected by the emulsification technique, as well as by the type and quantities of hydrocolloids or lipids and their compatibility [6]. Therefore, the emulsifying properties of the hydrocolloids are important to the properties of the films.

Physicochemical Properties of Conjugates

The contents of lysine and arginine were lower in the PSPI-GA conjugates (PSPI-GA 1, PSPI-GA 3, PSPI-GA 5) than in the PSPI and GA mixtures (PSPI/GA) (Figure 1A), which is consistent with a previous study showing that lysine and arginine are the main free amino groups taking part in the Maillard reaction [9]. These results indicate that the conjugates were formed by covalent binding. As shown in Figure 1B,C, both the emulsifying activity index (EAI) and the emulsion stability index (ESI) of the conjugates increased as the conjugation reaction continued up to 3 days, and decreased afterwards. The increase is attributed to the conjugates combining the emulsifying property of the protein with the solvation property of the polysaccharide [20].
The decrease might be due to the generation of polymerization products as the Maillard reaction proceeded [18]. As suggested in Figure 1D, the H0 values of the conjugates were significantly (p < 0.05) lower than those of the PSPI/GA mixtures. As shown in Figure 1E, λmax was shifted to longer wavelengths (a bathochromic shift) when PSPI was grafted with GA, indicating changes in the PSPI conformation. As shown in Figure 1F, the conjugates exhibited an increase in unordered coils. These structural changes all suggest that the attachment of GA to PSPI leads to more hydrophilic and disordered structures with greater conformational flexibility.

Figure 1. [...] and circular dichroism spectra (F) of conjugates. PSPI/GA: mixture of plum seed protein isolate (PSPI) and gum acacia (GA); PSPI-GA 1/3/5: the PSPI-GA conjugates prepared from 1, 3, or 5 days of dry heating. Different letters in the same pattern represent significant difference (p < 0.05).

Size Distribution

Considering the negative effects of extended dry heating on the emulsifying properties, the conjugates prepared from the 3-day reaction were chosen for preparing the film-forming emulsion. The size distribution of the film-forming emulsion containing different contents of grapefruit essential oil (EO) was analyzed. Only one peak was observed in Figure 2A, indicating that the incorporation of EO into the emulsion of conjugates resulted in a mono-modal distribution of droplets. This result is consistent with previously reported soy protein films containing cinnamon and ginger EO [21]. In contrast, researchers observed a multimodal distribution of droplets when cinnamon or ginger EO was incorporated in sodium caseinate-based films [22]. Multimodal distributions were also reported for soy protein-based films emulsified with flaxseed oil [23], gelatin-based films emulsified with olive oil [24], and whey protein-based films emulsified with rapeseed oil [25]. These authors attributed the phenomenon to the aggregation of polymers and the coalescence of smaller droplets into larger ones after emulsification. In addition, the volume-mean diameter (D[3,4]) values of the emulsions with EO concentrations ranging from 1% to 6% were 33.02 ± 2.01, 34.17 ± 1.94, 40.90 ± 3.04, and 63.84 ± 7.78 µm, respectively, indicating that the droplet diameter of the film-forming emulsion increased as the EO concentration increased.
This result might be due to the fact that, with increasing EO concentration, insufficient adsorption of conjugates at the oil-water interface might occur, leading to flocculation and coalescence after emulsification. On the other hand, increasing the EO concentration could also result in a thinner interfacial film surrounding the oil droplets, which is more susceptible to rupture.

Figure 2. PSPI-GA conjugate emulsions containing grapefruit essential oil at the 1%/2%/4%/6% level. PSPI: plum seed protein isolate. GA: gum acacia. EO: grapefruit essential oil. Different letters in the same pattern represent significant difference (p < 0.05).

ζ-Potential

It is well known that the electrical charge of the oil droplets is important to the stability of an emulsion, since it affects the electrostatic repulsion. As suggested in Figure 2B, due to the anionic nature of the PSPI-GA conjugates adsorbed onto the oil droplets, a negative electrical charge was observed in all emulsions. The EO concentration significantly affected the electrical charge, probably owing to the ionizable compounds present in the EO. Furthermore, the differences in ζ-potential might change the intermolecular electrical repulsion in the films, which could in turn change the film structures.

Rheological Behavior

The viscosity of the film-forming solution is important to the film properties, since it can affect the removal of air bubbles [26] and the elimination of sagging [27] during processing. Therefore, the flow curves of the film-forming emulsions loaded with different concentrations of EO were analyzed. As shown in Figure 3A, the viscosity of the emulsion increased as the EO concentration increased, which is probably due to the high viscosity of the oil. In general, the film-forming emulsions showed shear thinning behavior. The R² (coefficient of determination) values for the emulsions with EO concentrations ranging from 1% to 6% were 0.7331, 0.6891, 0.9456, and 0.9262, respectively. These values indicate that the fitted curves for the emulsions with higher concentrations of EO gave good agreement with the experimental data and that the degree of shear thinning increased as the concentration increased. The flow behavior index of the emulsions with EO concentrations ranging from 1% to 6% was calculated as 0.677, 0.629, 0.476, and 0.412, respectively.
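The flow behavior index and consistency coefficient quoted here are presumably the parameters of the standard Ostwald-de Waele (power-law) model; the exact fitting equation is not stated in this excerpt, so the following is the conventional form:
$$\sigma = K\,\dot\gamma^{\,n}\qquad\Longleftrightarrow\qquad \eta_{\rm app} = \frac{\sigma}{\dot\gamma} = K\,\dot\gamma^{\,n-1},$$
where $\sigma$ is the shear stress, $\dot\gamma$ the shear rate, $K$ the consistency coefficient (Pa s$^n$), and $n$ the dimensionless flow behavior index; $n < 1$ corresponds to shear thinning (pseudoplastic) behavior, and a smaller $n$ indicates a stronger degree of shear thinning.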
These results indicate that all the film-forming emulsions exhibited pseudoplastic behavior (flow behavior index < 1), in which the viscosity decreases as the shear rate increases. This behavior is attributed to the disruption of interactions between the emulsion components as the shear rate increases [23]. A previous study reported a similar result, in which shear thinning behavior was observed when flaxseed oil was emulsified into soy protein films at concentrations ranging from 3% to 10% [23]. The consistency coefficient of the emulsions with EO concentrations ranging from 1% to 6% was calculated as 0.0093, 0.0158, 0.0352, and 0.0798 Pa s^n, respectively. The value of the consistency coefficient is related to the intermolecular forces in the emulsion, according to a previous study [28]. Thus, the intermolecular forces in the emulsion increased as the EO concentration increased. Figure 3B,C shows the influence of the EO concentration on the storage modulus (G′) and loss modulus (G″) of the film-forming emulsions. It can be observed that the value of G′ is higher than that of G″ for all samples. These results indicate that all samples exhibited 'gel'-like behavior along the entire frequency range. The concentration of EO significantly affected the values of G′ and G″ of the film-forming emulsion.
The possible reason for this phenomenon is that the concentration of EO could change the particle size distribution, which has been shown to have dramatic effects on flow properties [29]. On the other hand, the concentration of EO could also change the intermolecular forces, which have been shown to affect the formation of the gel network [30].

Film Physical Properties

Transparency, Whiteness Index, and Swelling Ability

As shown in Table 1, the transparency of the films was significantly affected by the EO concentration. The values decreased as the EO concentration increased from 1% to 4%, followed by an increase at the 6% EO level. On the one hand, the transparency of emulsion-based films is affected by the oil type and concentration, because the oil changes the extent of light scattering. On the other hand, the film surface and internal structure also affect light reflection and absorption. Therefore, the effects of the oil concentration on the microstructure of the films were studied later.

As shown in Table 1, the concentration of EO had a significant effect on the whiteness index of the films. Overall, the films became darker as the EO concentration increased. The increase in darkness is presumably due to the presence of EO, which contains pigments and is orange in color. This result is consistent with the visual appearance of the films (Table 1). Changes in whiteness index caused by the color of the added oil were also observed in a previous study, in which the lightness of a soy protein-based emulsion-type film decreased as the flaxseed oil concentration increased [23].

As suggested in Table 1, the swelling ability of the films decreased with increasing EO concentration. The decrease in swelling ability is thought to be associated with the increased hydrophobicity conferred by the EO, which could prevent the matrix from binding strongly to water and thus reduce water uptake. Our result is consistent with previous research showing that the swelling ability of chitosan-based films decreased as the virgin coconut oil concentration increased [31], and is similar to results observed for soy protein films with flaxseed oil [23].

Water Vapor Permeability (WVP), Contact Angle, and Mechanical Properties

The water vapor permeability is directly related to the ability of the film to restrict water migration from the food. The most striking feature of EBEF is their excellent water vapor barrier property, which can be attributed to the oil incorporation. As shown in Table 1, the WVP decreased as the EO concentration increased from 1% to 4%. This is probably because the viscosity of the film-forming emulsion increased as the EO level increased; it has been shown that increasing the viscosity of a film-forming emulsion can reduce water mobility through the film [23]. However, the WVP did not decrease further as the EO level increased to 6%, probably owing to a disruption of the structure that might provide channels for water migration.

The water contact angle is important to the film surface wettability and moisture transport. Higher surface hydrophobicity is desirable for EBEF used as packaging or coatings; hence, the contact angle of edible films should be as large as possible. As suggested in Table 1, the films with EO at the 4% level possessed the highest contact angle. This helps to explain why this film showed the lowest water vapor permeability.

Table 1 note: EO 1/2/4/6: PSPI-GA conjugate films containing grapefruit essential oil at the 1%/2%/4%/6% level. PSPI: plum seed protein isolate. GA: gum acacia. EO: grapefruit essential oil. Different letters within a column represent significant difference (p < 0.05).
Mechanical properties, which are usually measured by tensile strength (TS) and elongation at break (EB), are key factors that determine the industrial applications of films. In theory, the incorporation of oil disrupts the biopolymer network in the film, leading to increased flexibility (higher EB) and decreased TS [32]. For instance, the incorporation of beeswax into pea-starch films decreased the tensile strength when the lipid content exceeded 20% [33]; the incorporation of olive oil decreased the tensile strength of a gelatin-based film when the oil-to-protein ratio was increased from 5% to 10% [24]; and the incorporation of sunflower oil decreased the tensile strength of quinoa protein-chitosan films when the lipid concentration was increased from 2.9% to 34.7% [34]. In contrast, as shown in Table 1, the tensile strength of our films increased as the EO concentration increased in the range of 1-4%. This trend has been observed previously: the incorporation of flaxseed oil into soy protein-based films increased the tensile strength when the lipid concentration was increased from 1% to 5% [23], and Atarés et al. also reported that increasing the cinnamon oil content increased the tensile strength of soy protein-based films [21]; they attributed this phenomenon to protein rearrangement, which resulted in a more ordered structure. The tensile strength decreased as the EO content increased to 6%, probably owing to disruption of the biopolymer network and the formation of a holey microstructure. As shown in Table 1, the elongation of the films increased significantly when the EO content increased from 1% to 4% and then decreased with a further increase in EO content.
The increase in elongation was due to the increase in the electrical charge of the film-forming emulsion and in the droplet sizes. It has been proven that repulsive forces among molecules can increase the distance between polymers and that larger droplet sizes can decrease chain-chain interactions, resulting in a plasticizing effect [28]. In fact, the mechanical properties of films likely depend on a variety of parameters, such as the type of ingredient, the oil content, the properties of the film-forming emulsion, and the microstructure of the films. In this study, the 6% EO concentration might have produced a discontinuous microstructure, giving a lower elongation. This is probably due to the EO migrating upward in the films and volatilizing during water evaporation, leading to a holey structure.

Thermal Properties of Films

Differential scanning calorimetry (DSC) is generally employed to evaluate the thermal transitions of edible films. As shown in Figure 4A, endothermic peaks appeared in the range of 180-200 °C, which can be attributed to the melting temperature (Tm) of the films. The films with 2% or 4% EO showed a single endothermic peak, indicating good compatibility between the film components. Multiple peaks were observed as the EO concentration increased to 6%. These results indicate that the concentration of EO affects the interactions between the polymers, which can change the Tm [35]. On the other hand, the appearance of a new peak suggests that a Maillard reaction occurred during the film-forming process [36]. TG curves are widely applied to study the thermal decomposition of films, as reflected by their weight loss under continuous heating; a higher onset decomposition temperature indicates better thermal stability [37]. As shown in Figure 4B, the thermal stability of the films improved significantly as the EO content increased in the range of 1-4% and then decreased with a further increase in EO concentration.

X-Ray Diffractometry

XRD is widely applied to study the compatibility of components in films. As shown in Figure 4C, a single peak located around 2θ = 21° was observed in the films with 1%-4% EO, indicating that the film components were in an amorphous state. The peak became broader as the EO concentration increased from 1% to 4%, suggesting good compatibility between the EO and the polymers in the films. A strong peak located around 2θ = 19° appeared as the EO content increased to 6%, indicating the formation of new crystalline domains. This result supports the appearance of multiple peaks in the DSC analyses.

Fourier Transform Infrared Spectroscopy (FTIR)

FTIR was employed to analyze the functional groups in the EBEF and to evaluate the effect of EO on the interactions between the polymers. The absorption band of amide A is commonly used to study hydrogen bonding, because the N-H and O-H bands participate in the formation of hydrogen bonds [38]. As shown in Figure 4D, the band intensity of amide A was affected by the EO content, indicating that the concentration of EO affects the hydrogen-bonding interactions between the polymers. In addition, the amide I and amide II bands are common parameters for studying the Maillard reaction (C=N stretching vibration) between proteins and polysaccharides [35].
As suggested by Figure 4D, the band intensities of amide I and amide II increased with increasing EO concentration, suggesting that the concentration of EO affects the Maillard reaction during the film-forming process and thereby changes the physicochemical properties of the films.

Microstructure

The surface (S) and cross-section (C) of the films are shown in Figure 5. In general, the films containing EO showed a rough surface. This result is consistent with previous studies in which the incorporation of EO increased the coarseness of film surfaces [28,39]; this was attributed to oil droplets migrating and volatilizing toward the top of the films, leading to an irregular surface. Moreover, both the surface and the cross-section became denser and more compact as the EO content increased from 1% to 4%, which helps explain the improvements in TS and thermal stability. However, a holey cross-section was observed as the EO concentration increased to 6%. The appearance of this holey microstructure might help explain why the film with 6% EO showed decreases in transparency, TS, EB, and thermal stability.

Figure 5. SEM images of the surface (S) and cross-section (C) of PSPI-GA films containing essential oil. EO 1/2/4/6: PSPI-GA conjugate films containing grapefruit essential oil at the 1%/2%/4%/6% level. PSPI: plum seed protein isolates. GA: gum acacia. EO: grapefruit essential oil.

Antioxidant and Antimicrobial Activity

Grapefruit is regarded as highly nutritional because of the presence of various phytonutrients, such as vitamins, terpenes, and other compounds, and the essential oil from grapefruit has been reported to possess antibacterial and antifungal effects [16]. A previous report suggested that films emulsified with cinnamon essential oil exhibit antioxidant activity that increases as the essential oil content increases [40]. In our study, however, although all films with EO exhibited radical scavenging activity, the film with 1% EO showed the best antioxidant activity (Figure 6A). In fact, the antioxidant activity of a film is related to the amount of EO released from it; therefore, the release kinetics of EO from the PSPI-GA films were also investigated in this study. Furthermore, all films containing EO demonstrated an antimicrobial effect against E. coli (as shown in Figure 6B), and this effect was not significantly affected by the EO concentration (p < 0.05). This result is consistent with previous studies in which essential oils incorporated into polysaccharide films showed antimicrobial behavior against E. coli due to destruction of the bacterial cell membrane [41,42].
Figure 6. Antioxidant (A) and antimicrobial (B) activity, and oil release kinetics (C), of PSPI-GA films containing essential oil. EO 1/2/4/6: PSPI-GA conjugate films containing grapefruit essential oil at the 1%/2%/4%/6% level. PSPI: plum seed protein isolates. GA: gum acacia. EO: grapefruit essential oil. Different letters in the same pattern represent a significant difference (p < 0.05).

Release Kinetics of EO from Films

It can be observed from Figure 6C that the release kinetics of EO from the PSPI-GA films followed a typical exponential pattern. This result is consistent with a previous study of the release behavior of lemongrass oil from an alginate film [43]. As suggested in Table 2, the release data of EO fitted the Peppas model better (r² close to 1) than the Weibull model. In general, the n constant of the Peppas model took values lower than 0.5, indicating that the release mechanism is a combination of partial diffusion through a swollen matrix and through pores filled with water [44]. Interestingly, the amount of essential oil released from the films decreased as the EO content increased in the range of 1-4%, probably owing to the formation of a more ordered structure, as supported by the TS and microstructure analyses. The released amount increased as the EO content increased to 6%, probably because of the formation of a holey structure, which might facilitate the release of EO from the films. In addition, the released amounts were positively related to the films' antioxidant activity.
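To make the model comparison above concrete, the Weibull and Peppas fits can be reproduced with a short curve-fitting script. The following is a minimal sketch using SciPy, assuming arrays of sampling times and released fractions; the data values, function names, and initial guesses are illustrative stand-ins, not the study's actual measurements or code.

```python
import numpy as np
from scipy.optimize import curve_fit

# Illustrative release data (time in hours, Q = fraction of EO released).
t = np.array([1.0, 2.0, 4.0, 8.0, 12.0, 24.0])
Q = np.array([0.10, 0.17, 0.26, 0.38, 0.45, 0.58])

def peppas(t, Kp, n):
    # Korsmeyer-Peppas model: Q = Kp * t^n
    return Kp * np.power(t, n)

def weibull(t, a, b):
    # Weibull model: Q = 1 - exp(-a * t^b), i.e. ln(1 - Q) = -a * t^b
    return 1.0 - np.exp(-a * np.power(t, b))

def r_squared(y, y_fit):
    # Coefficient of determination for comparing the two fits.
    ss_res = np.sum((y - y_fit) ** 2)
    ss_tot = np.sum((y - np.mean(y)) ** 2)
    return 1.0 - ss_res / ss_tot

for name, model, p0 in [("Peppas", peppas, (0.1, 0.5)),
                        ("Weibull", weibull, (0.1, 1.0))]:
    params, _ = curve_fit(model, t, Q, p0=p0, maxfev=10000)
    r2 = r_squared(Q, model(t, *params))
    print(f"{name}: params = {np.round(params, 4)}, r^2 = {r2:.4f}")
```

With real release data, the model whose r² is closer to 1 would be retained, mirroring the Table 2 comparison; an n constant below 0.5 in the Peppas fit would point to the combined diffusion mechanism discussed in the text.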
Materials and Methods

Analysis of Grapefruit Essential Oil

Solid-phase microextraction (SPME) coupled with gas chromatography-mass spectrometry (GC-MS) was used to analyze the main components of the grapefruit essential oil. The volatiles were extracted using a 50/30 µm divinylbenzene/carboxen/polydimethylsiloxane (DVB/CAR/PDMS) fiber (Supelco, Bellefonte, PA, USA). The essential oil (10 mL) was placed in a 20 mL vial and extracted with the fiber at 40 °C for 30 min. After extraction, the fiber was thermally desorbed in the GC injection port for 5 min. The analysis was performed with an Agilent 7890B GC coupled to an Agilent 7000 mass spectrometer (Agilent, Santa Clara, CA, USA), equipped with an HP-5MS fused silica capillary column (30 m length, 0.25 mm inner diameter, 0.25 µm film thickness; Agilent). The ion source temperature was 230 °C, and the electron energy was −70 eV. The oven temperature was first raised from 60 to 210 °C at 3 °C/min, then raised to 240 °C at 20 °C/min and held for 8 min. The compounds were identified by comparing their mass spectra with those in the NIST11 database.

Determination of Amino Acids, Emulsifying Properties, Surface Hydrophobicity, and Structure of Conjugates

The lysine and arginine levels, emulsifying activity index (EAI), emulsifying stability index (ESI), surface hydrophobicity, intrinsic emission fluorescence spectra, and circular dichroism spectra were measured according to our previous study [9]. The lysine and arginine contents were measured with an Agilent 1100 high-performance liquid chromatograph (Agilent Technologies Co., Ltd., Santa Clara, CA, USA) equipped with an ODS Hypersil column (5 µm, 250 × 4.6 mm). Fluorescence intensity (FI) was determined with a Hitachi F-7000 fluorescence spectrometer (Hitachi, Ltd., Tokyo, Japan) at an excitation wavelength of 390 nm and an emission wavelength of 470 nm; the initial slope of the FI versus protein concentration plot was used as the index of surface hydrophobicity. Intrinsic emission fluorescence spectra of the samples were analyzed with a Hitachi F-7000 fluorescence spectrophotometer (Hitachi, Ltd., Tokyo, Japan). The circular dichroism spectra of the samples were obtained with a MOS-450 CD spectropolarimeter (Biologic, Claix, France).

Preparation of Film-Forming Emulsion

The conjugate solution (3 days of incubation; 5%, w/v) was stirred for at least 12 h (4 °C), after which glycerol (2%, w/v) was added and the mixture was stirred for another 1 h. To form the coarse emulsion, grapefruit essential oil at proportions of 1%, 2%, 4%, and 6% (w/w) was incorporated into the dispersion using an FM200 homogenizer (FLUKO, Shanghai, China) at 10,000 rpm for 2 min. The coarse emulsion was then treated with a high-pressure homogenizer (ATS, Beijing, China) at 70 MPa for 2 passes. The resulting emulsion was degassed for at least 30 min in a vacuum oven.

Particle Size and ζ-Potential

The film-forming emulsions were diluted with distilled water (1/20, w/w) and stirred for 5 min at 25 °C. The particle sizes of the samples were measured by light scattering using a Mastersizer 2000 equipped with a Hydro 2000 MU dispersion unit (Malvern Instruments Ltd., Worcestershire, UK). The pump speed was set at 1800 rpm, and the refractive index and absorption parameter were 1.330 and 0.001, respectively. The ζ-potential of the film-forming emulsions was measured using a Nano ZS instrument (Malvern Instruments, Worcestershire, UK).

Rheological Behavior of Film-Forming Emulsions

An AR2000 rheometer (TA Instruments, Leatherhead, UK) fitted with parallel plates (50 mm diameter, 1 mm gap) was employed to measure the rheological behavior of the film-forming emulsions. The shear rate was increased linearly from 0 to 100 s⁻¹. A dynamic strain sweep was conducted over an angular frequency (ω) range of 0.1-100 rad/s at an amplitude (γ) of 0.1%. The flow properties of the emulsions were fitted to the power-law equation, log(viscosity) = (n − 1) log(shear rate) + log m, which is applied extensively to describe the rheological behavior of food emulsions [46]. The flow behavior index n and the consistency coefficient m were calculated from this fit [47].
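Because the power-law model is linear in log-log coordinates, n and m can be recovered with a plain linear regression rather than a nonlinear fit. The sketch below assumes shear-rate and apparent-viscosity arrays exported from the rheometer; the numbers are placeholders, not measured values.

```python
import numpy as np

# Placeholder rheometer data: shear rate (1/s) and apparent viscosity (Pa·s).
shear_rate = np.array([1.0, 5.0, 10.0, 25.0, 50.0, 100.0])
viscosity = np.array([2.10, 1.15, 0.88, 0.61, 0.47, 0.36])

# Fit log(viscosity) = (n - 1) * log(shear_rate) + log(m).
slope, intercept = np.polyfit(np.log10(shear_rate), np.log10(viscosity), 1)

n = slope + 1.0          # flow behavior index
m = 10.0 ** intercept    # consistency coefficient

print(f"flow behavior index n = {n:.3f}")
print(f"consistency coefficient m = {m:.3f}")
# n < 1 indicates shear-thinning behavior.
```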
Film Formation

The films were prepared by casting the emulsion described in Section 3.4 onto leveled polytetrafluoroethylene plates (42 × 42 cm²) and drying in a KBF720 ventilated chamber (Binder, Germany) at 30 °C and 43% relative humidity for 18 h. The transparency of the films was determined by Rubilar's method using a Spark 10M microplate spectrophotometer (Tecan, Switzerland) [48] and calculated by Equation (1), where T600 is the transmittance of light through the film at 600 nm and x is the film thickness (mm), measured with a micrometer with an accuracy of 0.001 mm (Jiangsu, China). The color of the films was obtained using a Hunter-Lab colorimeter (Reston, VA, USA), and L (lightness), a* (redness-greenness), and b* (yellowness-blueness) were recorded; the whiteness index of the films was calculated according to Equation (2). The swelling ability of the films was measured by immersing the films in water at 25 °C for 5 h and calculating the weight gain according to Equation (3) [23], where M1 is the mass (g) of the initial film before immersion in water and M2 is the mass (g) of the film after immersion in water for 5 h. The water vapor permeability (WVP) of the films was tested using the gravimetric method [49]: films were placed over measuring cells containing silica gel and deposited in a ventilated chamber at 25 °C and 75% relative humidity, and the WVP value was calculated according to Equation (4), where Δm is the weight gain (mg) of the cups during time Δt (d) and A is the area of exposed film (cm²).

Contact Angle-Sessile Drop Method

The contact angle of the EBEF was tested using an OCA15EC goniometer (Stuttgart, Germany). Deionized water (10 µL) was released onto the EBEF, and the image was recorded after 5 s. The contact angle was defined as the angle between the baseline and the tangent to the drop boundary.

Mechanical Properties

The tensile strength (TS) and elongation at break (EB) of the films were determined using a TA-XT2i texture analyzer (London, UK). The initial separation distance and crosshead speed were fixed at 50 mm and 1 mm/s, respectively. TS was calculated according to Equation (5), where F is the maximum force at break (kg) and S is the initial transverse section (mm²); EB was calculated according to Equation (6), where L1 is the original length (mm) and L2 is the length at break (mm).

Differential Scanning Calorimetry (DSC)

DSC was performed using a thermal analyzer (DSC214 Polyma, Netzsch, Germany). Films (2-4 mg) were heated in an aluminum pan from 25 to 250 °C at a rate of 10 °C min⁻¹ under a nitrogen atmosphere. The data were analyzed with TA Universal Analysis software.

Thermal Gravimetric Analysis (TG)

Thermal gravimetric analysis was carried out on a Q500 thermal analyzer (TA Instruments, New Castle, USA). Film samples (7 mg) were sealed in ceramic pans, and the temperature was raised from 25 to 800 °C at a heating rate of 10 °C min⁻¹. Nitrogen was supplied at a constant flow rate of 60 mL min⁻¹.

3.8.6. X-Ray

X-ray diffraction patterns were obtained with a Smartlab-3kW X-ray diffractometer (Rigaku, Japan) using Cu Kα radiation at 40 kV and 30 mA. The scan rate was 10° min⁻¹, and the patterns were collected over a 2θ range of 5° to 55°.

Fourier Transform Infrared Spectroscopy (FTIR)

The Fourier transform infrared spectra of the samples were obtained with an FTIR-7600 (Lambda, Australia). Scanning was carried out in the range from 4000 to 400 cm⁻¹ at 4 cm⁻¹ resolution.
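The bodies of Equations (1)-(6) did not survive extraction; only their variable definitions remain above. For reference, the standard forms that those definitions correspond to in the edible-film literature are sketched below. These are reconstructions under that assumption, not the paper's verbatim equations; Equation (4) in particular normally also involves the film thickness and the water vapor pressure difference, which the surviving text does not define, so only its gravimetric core is shown.

```latex
% Reconstructed standard forms of Equations (1)-(6); assumptions, not verbatim.
\begin{align}
\text{Transparency} &= \frac{-\log T_{600}}{x} && (1)\\
\text{Whiteness index} &= 100 - \sqrt{(100 - L)^2 + (a^{*})^2 + (b^{*})^2} && (2)\\
\text{Swelling ability (\%)} &= \frac{M_2 - M_1}{M_1} \times 100 && (3)\\
\text{WVTR} &= \frac{\Delta m}{A \, \Delta t} \quad \text{(gravimetric core of Equation (4))} && (4)\\
\text{TS} &= \frac{F}{S} && (5)\\
\text{EB (\%)} &= \frac{L_2 - L_1}{L_1} \times 100 && (6)
\end{align}
```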
Scanning Electron Microscopy (SEM)

The films were sputter-coated with gold-palladium, and their microstructures were observed with a MERLIN scanning electron microscope (ZEISS, Germany). The films were fractured by immersion in liquid nitrogen to observe the microstructure of the cross-section.

Antioxidant and Antimicrobial Activity of Films

The ABTS radical scavenging activity was determined by the method of Yikling et al. [50] with modifications. Films were mixed with ABTS (7 mM) to give a final film concentration of 0.5 mg/mL. After incubation (10 min), the sample was centrifuged at 12,000 × g and 4 °C for 2 min. The absorbance of 200 µL of supernatant was recorded using a Spark 10M microplate spectrophotometer (Tecan, Switzerland). A solution containing films without free radicals was used as the blank. For the antimicrobial assay, films (1 mg) were placed into Luria-Bertani medium (2 mL) that had been previously seeded with an inoculum containing the indicator bacteria in the range of 10⁶-10⁸ CFU/mL, and then incubated at 37 °C for 24 h. The absorbance of 200 µL samples at 600 nm was measured using the Spark 10M microplate spectrophotometer at 0, 4, 8, 12, and 24 h. Medium containing films without bacteria was used as the blank.

Essential Oil Release Kinetics from Films

The release kinetics of the EO from the films were measured according to a previous method [44]. An ethanol:water solution (50:50, v:v) was used as the simulant to study the migration. The films (2 × 2 cm²) were placed inside dialysis bags (12,000 Dalton) and submerged in 30 mL of simulant. A calibration curve was obtained for the EO under study using dilutions of EO in the simulant, with absorbance measured at a wavelength of 274 nm using the Spark 10M microplate spectrophotometer (Tecan, Switzerland); the concentration of EO released into the simulant was then determined. The release data were fitted to the Weibull (Equation (7)) and Peppas (Equation (8)) models:

ln(1 − Q) = −a·t^b (7)

Q = Kp·t^n (8)

where Q is the fraction of EO released, a and Kp are constants, b and n are constants indicative of the release mechanism, and t is time. (Equation (8) is written here in the standard Korsmeyer-Peppas form implied by the constants Kp and n; the original equation body was lost in extraction.)

Statistical Analysis

All tests were repeated three times, and the data obtained were analyzed by one-way analysis of variance using SPSS for Windows version 17.0. Values are expressed as means ± standard deviation. Duncan's multiple range test was used to identify significant differences (p < 0.05) between means.

Conclusions

The results obtained in this research give some insights into the preparation of edible films using PSPI-GA/EO emulsions as the film-forming solution. It was found that the emulsifying properties of PSPI could be improved after grafting with GA; the improvement is related to changes in surface hydrophobicity, secondary structure, and tertiary structure. The droplet size, surface charge, and viscosity of the emulsion increased as the EO concentration increased. The water vapor barrier property, surface hydrophobicity, mechanical properties, and thermal stability of the EBEF improved as the EO content increased in the range of 1-4%, but declined as the EO concentration increased to 6%, owing to the formation of a holey microstructure. The release data of EO from the films fitted well to the Peppas model, and the radical scavenging activity of the EBEF was significantly affected by the different release patterns arising from the variation in EO concentration.
Author Contributions: Conceptualization, C.L. and F.X.; investigation, J.P.; writing-original draft preparation, C.L.; writing-review and editing, X.X.; project administration, X.X. All authors have read and agreed to the published version of the manuscript.

Conflicts of Interest: The authors declare no conflict of interest.

Abbreviations

The following abbreviations are used in this manuscript:

EBEF emulsion-based edible films
PSPI plum seed protein isolate
GA gum acacia
EO grapefruit essential oil
What Works for Controlling Meningitis Outbreaks: A Case Study from China

The meningococcal meningitis (MM) vaccine reduces the incidence of MM significantly; however, outbreaks still occur in communities with high vaccine coverage. We aimed to analyze the driving factors of infection in a community outbreak. A total of 266 children aged 9 to 15 years old from the three junior high schools of Tongzi County were identified. We documented infection cases using laboratory tests and analyzed attack rates, infection rates, and risk factors for transmission. The index case in School A was identified, and the attack rate in School A was 0.03%. Children showed a significantly low infection rate of MenC in School A (13.2% vs. 19.5% in total children, p = 0.002), while exhibiting significantly high infection rates of MenA in School B (44.1% vs. 24.8% in total children, p < 0.001) and MenB in School C (11.1% vs. 4.1% in total children, p = 0.015). The infection rate of MenA for females (30.0%) was higher (p = 0.055) than for males (19.9%). In School A, 63.19% of children were vaccinated against MenC, while in School B the rate was 42.65% and in School C it was 59.26%. Three male MenC infection cases were detected as breakthrough infection cases in addition to the index case. The findings suggest that the current full-course immunization has limited long-term effectiveness and is inefficient in preventing the transmission of MM among older children.

Introduction

Meningococcal meningitis (MM) is an acute contagious infectious disease and one form of bacterial meningitis that may cause epidemics [1]. As a global public health issue, the estimated MM incidence in the 1980s was about 25,000 to 200,000 cases annually [2], with an annual incidence rate ranging from 0.07 to 12.6 cases per 100,000 population from 1950 to 2002 [3]. The incidence of MM in China reached its peak in the spring of 1967, with 403 cases per 100,000 population, whereas it was controlled at 0.09 cases per 100,000 population from 2005 to 2010, thanks to the government's long-term efforts in vaccine development and immunization [4]. However, despite the small number of MM cases, the high mortality and sequelae of MM impose a heavy burden on families. Compared to the mortality rate of 0.30 per 100,000 population in the United States of America in 2019, the mortality in China reached 0.51 per 100,000 population according to the GBD 2019 study [5]. According to the National Notifiable Diseases Registry System data, MM is the leading cause of mortality, with the highest fatality, among contagious infectious diseases [4].
The introduction and utilization of conjugate meningococcal C vaccines in the UK and the USA, which follow a 2-, 4-, 6-, and 12-15-month schedule, reduced N. meningitidis serogroup C disease by over 90%. In addition, near-elimination of Haemophilus influenzae disease has been documented following the introduction of conjugate Hib vaccines [6]. However, MM remains one of the major public health challenges, and progress in reducing its burden lags significantly behind that of other vaccine-preventable infectious diseases [7]. In addition, breakthrough infection, defined as the detection of Neisseria meningitidis carriage in throat swabs after full-course immunization with the corresponding meningitis serogroup vaccine, is another major challenge for immunization plans. Breakthrough infections occur from time to time, and some meningitidis serogroups not included in the current national immunization program may still cause large-scale infections. Outbreaks of MM are detected especially frequently in children. In day-care settings or schools, healthy children (aged 6-18 years old in China) can be infected through close contact with MM cases or asymptomatic carriers, leading to further outbreaks. Previous literature has reported similar cases infected by index cases, some ultimately resulting in death [8][9][10]. One study pointed out that meningitis was either the second or third most important infectious syndrome [11]. Owing to its severe symptoms and complex complications, MM outbreaks cause an enormous epidemiological and economic burden at the individual, community, and national levels [12][13][14].

There is a lack of records on the characteristics of recent meningitis infections among Chinese children. It must be noted that immunization plans and emerging pandemics may have changed past infection patterns, which may in turn require updating immunization plans and prevention and control strategies. Cheng [15] suggests that the distribution of MM pathogens in China has undergone significant changes after COVID-19 and that relevant monitoring needs to be strengthened. However, the existing literature provides limited evidence for reform of immunization plans. In addition, vaccine characteristics also matter. At present, there are five types of meningococcal meningitis vaccines in China, with MPSV-A and MPSV-AC included in the national immunization plan; children are vaccinated free of charge at the sixth month, ninth month, third year, and sixth year after birth, respectively. The remaining vaccines (MPCV-AC, MPSV-ACYW135, and MPCV-ACYW135) must be administered to eligible children at their own expense. Therefore, most infants and children are vaccinated with polysaccharide vaccines. However, on one hand, nonconjugated polysaccharide vaccines do not elicit a protective immune response in children younger than 2 years [16], and on the other, only the conjugate vaccines produce a memory-enhancing immune effect after repeated vaccination. This prevailing situation raises concerns about long-term protection against meningitis.
In addition, the decline of vaccine-induced antibodies, immunodeficient individuals, and exposure to a higher inoculum of the pathogen may cause breakthrough infections, potentially impacting vaccination strategies. On one hand, if breakthrough infections occur rarely, or are mild and have a comparatively low probability of causing wide transmission, watchful waiting may be appropriate. On the other hand, if breakthrough infections are comparatively common, additional vaccine doses, changes in vaccine formulations, or non-pharmaceutical interventions should be considered as a response. The limited existing evidence has focused more on infections and incidents among children under five years old, ignoring the potential risk of infection outbreaks among school-age children. In dense classrooms, infections are more likely to occur and can quickly form outbreaks among children in the short term. This characteristic may exacerbate the potential risk of infection among school-age children and further cause a huge burden of disease.

Exploring the current driving factors of MM outbreaks is necessary, especially given breakthrough infections in the high-immunization-coverage era in China. It is also necessary to evaluate the impact of prevention and control measures during an outbreak, to provide evidence for the precise prevention and control of subsequent MM outbreaks. This study aims to investigate the outbreak of meningococcal infection among school-age children in Tongzi County, Guizhou Province, at the end of 2022, to provide evidence for formulating improved prevention and control strategies for subsequent MM outbreaks in other settings.

Index Case Identification

An index case refers to the first detected and reported case infected with the pathogen during an infection outbreak. On 17 November 2022, at 10:00 a.m., the Guizhou Provincial Center for Disease Control and Prevention (Guizhou CDC) received notification of a suspected case of MM from Zunyi City. The patient was a 13-year-old male resident student from School A in Liaoyuan Town. After feeling unwell on the afternoon of November 15, the patient returned from school to his home in the same town, where he lived with only his grandfather and grandmother. On the morning of November 16, the student's face was cyanotic (with ecchymosis), his eyelids were puffy, and he felt weak. His temperature was 38 °C, and he was sent to the emergency department of the local hospital at around 7 o'clock. On the afternoon of November 16, the patient was transferred to a higher-grade hospital and diagnosed as a "suspected MM case"; the case was reported through the epidemic network at 8:38 on 17 November and further confirmed as meningitidis serogroup C by laboratory diagnosis. The main method was real-time PCR detection of Neisseria meningitidis species-specific and common serogroup-specific nucleic acid fragments in blood samples, capturing specific genes of Neisseria meningitidis (the ctrA gene and the serogroup C-specific gene). The patient died on November 20 and was eventually defined as the index case.
After the confirmation of the index case, the local health and education departments took prompt action. First, the school strictly implemented morning and afternoon inspections and a daily reporting system. Second, all medical institutions and townships were ordered to carry out an active search for suspected cases, improve awareness of suspicious MM symptoms, report suspicious cases, and carry out isolation and treatment in a timely manner. An active search for additional MM cases was performed through symptom monitoring in schools, communities, and the hospital information system. Specifically, six carriers were detected among the index case's close contacts at the same school. These carriers were isolated and observed at home for 10 days, and no suspicious clinical symptoms were found. Medical observation was also conducted on the family members of the index case and of the carriers above, as well as on the school's teachers and students, and no suspicious symptoms, such as fever, headache, or vomiting, were found. There was no suspected MM case or subsequent incident in this outbreak. Finally, the local government encouraged residents aged under 18 who had not previously been vaccinated to receive doses of the MenA plus MenC vaccine. For the school where the index case was located, the students' immunization histories were verified through systematic record checks combined with child vaccination certificates. Other residents aged over 24 months who had not completed vaccination against MM in the past were vaccinated with doses of the MenA plus MenC vaccine according to the immunization program. Residents could also choose ACYW135 vaccines (not included in the current immunization program) as an alternative, based on the principles of being informed, voluntary, and self-funded.

Environmental Description and Case Exploration

The index case was a resident student at School A in Liaoyuan Town, Tongzi County. School A has 57 classes, 3215 students, and 208 staff members, and allows students to attend as day students. Most of the students and teachers left School A on November 17th because of the COVID-19 epidemic; thereafter, the provincial-, municipal-, and county-level CDCs searched actively for suspected cases in Tongzi County, and no other case was detected in this process.

The index case's class has a total of 52 students, including 26 resident students and 26 day students. The classroom is in the middle of the corridor on the south side of the School A teaching building, with 11 other classrooms on the same floor. Two large windows open onto the corridor, and four small windows open on the back side. The distance between the seats in the classroom is normal. The canteen consists of two floors with independent entrance and exit channels, and a staggered-peak dining system is implemented. The index case's dormitory contains 10 beds (5 upper and 5 lower), and 8 other students lived in this room with the index case.
Data Collection and Analysis

We investigated the implementation of planned MM vaccination in Tongzi County. At the same time, we conducted a survey of MM vaccination history, meningococcal carriage, and serum antibodies among children in three schools: the school where the case occurred (School A), another middle school in the same town as School A (School B), and a middle school in a town far away from the town School A is in (School C). School B has 3051 students and 167 teachers, and School C has 558 students and 45 teachers. This study adopted judgment sampling based on the professional knowledge of the CDC investigators: students aged 10-15 in the same dormitory, class, grade, and school as the index case, as well as students in the same age group from the other schools, were surveyed.

The children's immunization histories were surveyed by checking vaccination certificates and vaccination cards and by asking parents. A total of 266 children were surveyed, including 144 in School A, 68 in School B, and 54 in School C. After informed consent was obtained (a signed informed consent statement), investigators collected venous blood and tested for meningitis antibodies. Blood was collected from a total of 246 children, including 124 in School A, 68 in School B, and 54 in School C. We defined an infected case as: (a) a Neisseria meningitidis carrier detected on a throat swab; or (b) a serum test positive for antibodies against meningococcal meningitis in a child without the corresponding meningitis serogroup vaccination history. This study tested the student samples in the three schools, excluding the index case, and further calculated the infection rate for every meningitidis serogroup and the attack rate of MenC. In addition to infection cases, we also analyzed breakthrough infection cases. We collected the class, gender, residency, vaccination history, specimen collection time, and laboratory test results from the three schools (A, B, C). SPSS 27.0 was used for statistical analysis. One-way ANOVA was used for mono-factor analysis to describe the differences between groups with different characteristics and to explore the relationships between infection and characteristics.
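As a minimal sketch of this analysis step, the per-school infection rates, the attack rate, and the mono-factor comparison could be computed as follows. The data frame and column names are illustrative stand-ins for the survey records, not the study's actual dataset or code.

```python
import pandas as pd
from scipy import stats

# Illustrative survey records: one row per surveyed child.
df = pd.DataFrame({
    "school": ["A"] * 6 + ["B"] * 6 + ["C"] * 6,
    "men_a_infected": [0, 1, 0, 0, 1, 0,
                       1, 1, 0, 1, 0, 1,
                       0, 1, 0, 0, 1, 0],
})

# Infection rate of a serogroup per school = infected / surveyed.
rates = df.groupby("school")["men_a_infected"].mean()
print(rates)

# Mono-factor comparison across schools via one-way ANOVA, as in the study.
groups = [g["men_a_infected"].to_numpy() for _, g in df.groupby("school")]
f_stat, p_value = stats.f_oneway(*groups)
print(f"F = {f_stat:.3f}, p = {p_value:.3f}")

# Attack rate = clinical cases / population at risk; e.g. 1 case among
# 3215 students in School A gives 1 / 3215, approximately 0.03%.
```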
Results

Among the 3215 children in School A, one MenC case (attack rate in School A: 0.03%) was detected after breakthrough infection and listed as the index case. However, no other new secondary case was detected (attack rate: 0 in School B vs. 0 in School C). We then identified 266 children aged 9 to 15 years old from the three junior high schools of Tongzi County. To explore the spatial pattern of infection, this study examined the differences in infection cases and rates among the children in Schools A, B, and C in Tongzi County (Table 1). Among the 144 children in School A, there was laboratory evidence of only MenA infection in 11 children (7.6%), only MenB infection in 2 children (1.4%), only MenC infection in 9 children (6.3%), only MenW infection in 2 children (1.4%), MenA plus MenC in 9 children (6.3%), and MenA plus MenB plus MenC in 1 child (0.7%). In addition, one child (0.7%) was found to be infected with unclassified Neisseria meningitidis. Among the 68 children in School B, there was laboratory evidence of only MenA infection in 8 children (11.8%), only MenB infection in 1 child (1.5%), only MenC infection in 1 child (1.5%), MenA plus MenC in 21 children (30.9%), and MenA plus MenB plus MenC in 1 child (1.5%). Among the 54 children in School C, there was laboratory evidence of only MenA infection in 6 children (11.1%), only MenB infection in 5 children (9.3%), only MenC infection in 1 child (1.9%), MenA plus MenC in 8 children (14.8%), and MenA plus MenB plus MenC in 1 child (1.9%). Note: School A is the school where the case occurred, School B is another middle school in the same town as School A, and School C is a middle school in a town far away from the town School A is in.

Mono-Factor Analysis

To explore the pattern of Neisseria meningitidis infection, this study examined the differences in infection cases and rates of serogroup A, B, C, and W Neisseria meningitidis (MenA, MenB, MenC, MenW). Table 2 further presents the Neisseria meningitidis infection cases and rates of children with different characteristics. The results showed a significantly low infection rate of MenC in School A (13.2% vs. 19.5% in total children, p = 0.002), whereas there were significantly high infection rates of MenA in School B (44.1% vs. 24.8% in total children, p < 0.001) and of MenB in School C (11.1% vs. 4.1% in total children, p = 0.015). In addition, the infection rate of MenA for females (30.0%) was higher (p = 0.055) than for males (19.9%).
Vaccination and Breakthrough Infection

The corresponding immunization program has long been implemented by the local government. As shown in Table 3, 99.29% of the children of the corresponding ages in Tongzi County were vaccinated with the first dose and 97.28% with the second dose; in Liaoyuan Town (a subregion of Tongzi County), the rates were 99.24% for the first dose and 98% for the second dose, respectively. A vaccination survey of 30 children aged 1 to 6 years at the site where the case lived showed that both the MenA and the MenA plus MenC vaccination rates were 100%. The index case had been vaccinated with one dose of vaccine against MenA on 29 February 2012 and two doses of vaccine against MenA plus MenC on 27 September 2012 and 28 October 2015, respectively. In our study, 18.75% of the children in School A were vaccinated with only one dose of vaccine against MenA plus MenC, 61.11% with two doses, 18.06% had no MenA plus MenC vaccination record, 2.08% were vaccinated with the ACYW135 vaccine, and 63.19% were vaccinated with vaccines against MenC. In School B, 4.41% of the children were vaccinated with only one dose of vaccine against MenA plus MenC, 42.65% with two doses, 52.94% had no MenA plus MenC vaccination record, and 42.65% were vaccinated with vaccines against MenC. In School C, 5.56% of the children were vaccinated with only one dose of vaccine against MenA plus MenC, 59.26% with two doses, 35.19% had no MenA plus MenC vaccination record, and 59.26% were vaccinated with vaccines against MenC (Table 4). According to the laboratory evidence and the samples' vaccination histories, three male MenC infection cases were further detected as breakthrough infection cases in addition to the index case. These three breakthrough cases were all studying in the eighth grade of School A, and one of them was a classmate and roommate of the index case.

Discussion

Vaccination is the best strategy to prevent MM and control meningitis outbreaks [17]. However, in this study, based on a large infection event that occurred in Tongzi County in southwestern China, some key issues were identified in the existing childhood MM immunization.

First, an enhanced emergency response can protect high-risk populations through prophylactic medication and other measures. In this study, there were no MM cases other than the index case, and the results showed a low attack rate of MenC in School A.
Therefore, it is necessary to point out other potential protective measures that may have played a role in addition to vaccines: 25 (17.2%) of the 144 children surveyed from School A had received prophylactic medication. When a higher proportion of the population receives prophylactic medication, the transmission route of meningitidis may be interrupted, and those who do not receive prophylactic medication may also benefit, resulting in higher overall effectiveness. Therefore, when the level of immunity is not sufficient to protect close contacts against a specific meningitidis serogroup, an emergency response of this kind, together with social isolation measures, can interrupt the main transmission route of the meningitidis in quite a short time [18]. However, it should be noted that mass prophylactic medication is often unfeasible, given its high cost and logistic problems [19]. Considering the common phenomenon of meningococcal carriage among healthy children in this study, further consideration needs to be given to the potential contribution of vaccination in addition to prophylactic medication.

Second, there is a need to update the National Immunization Programme (NIP) to address the existing issues with meningitidis serogroup vaccines. Although no new secondary MM case was found in this study, many children were still infected by healthy carriers. With regard to MenA and MenC, we found some infected children in this event, even though MenA and MenC have been included in the NIP. Due to a paucity of systematic genomic evidence, we cannot speculate about the impact of exposure to a higher infectious inoculum on breakthrough infections. What is certain is that the existing vaccines do not appear to control transmission, which should encourage vaccine producers to develop better vaccines or a booster schedule. It should also be noted that in the current NIP, two doses of the MenA polysaccharide vaccine are administered to children at 6 and 9 months old, and two doses of the MenA plus MenC polysaccharide vaccine are administered at 3 years and 6 years, respectively; children aged over 6 years no longer receive booster doses [20]. Therefore, for the children aged 9 to 15 years old included in this study, the protective effect could hardly persist [19,21,22]. Booster immunization in the NIP is required to prevent the high infection rates of MenA and MenC in older children, such as a quadrivalent meningococcal conjugate vaccine (MenACWY) for adolescents aged 11 or 12 years and a booster dose at age 16 years [23]. Some countries have expanded this vaccination schedule to older children: for example, Italy extended the immunization population to those aged up to 18 years in 2017, and Switzerland and Belgium opted for vaccination strategies that include adolescents [24].
In addition to MenA and MenC, which are currently included in the NIP, we also detected a large number of MenB-infected children. This differs from the existing literature, perhaps owing to spatiotemporal population heterogeneity within China [22,25]. A national study reported that serogroup C, other serogroups, and NG (nongroupable) strains were the major causes among students aged over 7 years old, but highlighted that one of the important tasks for MM control and prevention in the future is precisely to develop and provide new vaccines for serogroup B [26]. In China, the number of MM cases caused by MenB has been rapidly increasing since 2015. The effectiveness of some MenB vaccines has reached 83%, and their protective effect can last for more than 3 years in 75% of the vaccinated [27,28]. However, neither 4CMenB nor rLP2086 (the two mainstream MenB vaccines) has been introduced into the NIP [20], which may be another weakness of the existing immunization program. The active introduction and development of MenB-containing vaccines is needed to prevent the potential infection threat posed by MenB. Similar conclusions have been highlighted in other studies: for example, Truong [29] noted that after the introduction of the Hib vaccine, the leading cause of bacterial meningitis became S. pneumoniae. Surveillance of potential changes in serotype distribution over time is also encouraged, especially in low- and middle-income countries with limited resources for managing vaccine-preventable bacterial infectious diseases. In China, economic heterogeneity is large: vaccines not included in the NIP are taken up on a voluntary basis, and children in less-developed regions remain at great risk of infection by an emerging leading serotype owing to the low coverage of the corresponding vaccines. It should be noted that the distribution of the causative organisms of pediatric bacterial meningitis may vary by age [30], which should also be considered in the NIP.

Finally, although the children in School A showed the highest proportion vaccinated against MenC, and the infection rates of MenC in Schools B and C were higher than in School A, both the breakthrough infections and the clinical attack after breakthrough infection occurred only in this school. Further identification of the genotype of the serogroup C strain prevalent in the school should be carried out to explore the causes of the outbreak. In addition, the distributions of the infection rates of the different serogroups showed significant differences among the schools. We tend to attribute these differences to different sources of infection among the schools. On the one hand, although the only MenC case was in School A, many healthy carriers of other serogroups were still detected there. On the other hand, School A and School B are relatively close and have close population linkage, making it reasonable to believe that their populations are homogeneous; nevertheless, we still observed certain differences between Schools A and B, which helps exclude a possible impact of population heterogeneity on the distributions of serogroups. Therefore, genomic studies are needed to trace the transmission pathways and to determine whether the transmission abilities of the different serogroups differ, clarifying whether the existing vaccines are off target.
The contribution of this study is to provide the latest evidence on the weaknesses of current MM outbreak control and prevention among school-age children in China, offering improvement strategies for the immunization program. On the one hand, it raises concerns about the effectiveness of the existing vaccines for older children and encourages a wider immunization age range. On the other hand, it suggests actively introducing and developing vaccines against the main serogroups that cause MM cases in China so as to improve the NIP. From a global perspective, this study aims to call on public health professionals in other parts of the world to pay attention to the attack and infection risks of older children, and we hope that governments will introduce more vaccines for adolescents and adults to avoid potential productivity losses.

This study has several limitations. First, it tested only some of the main meningococcal serogroups and cannot evaluate infection by all serogroups; future surveys targeting other serogroups should be encouraged to provide further evidence on the full distribution of serogroups among Chinese children. Second, the study carried out only an exploratory analysis of risk factors, so its conclusions should be interpreted with caution. Because the data came from a survey designed to detect the infection distribution during a public health emergency, additional demographic characteristics were not a primary consideration; when more data become available, formal causal inference techniques should be applied. Finally, owing to the non-random sampling, the results may be influenced by the subjective judgment of the investigators, and their representativeness should be viewed carefully.

Conclusions

We evaluated the attack rates and the infection rates of different Neisseria meningitidis serogroups and proposed strategies for controlling meningitis outbreaks. On the one hand, effective emergency response measures can be considered a short-term tool to prevent N. meningitidis transmission. On the other hand, the extension of full-course immunization and the active introduction and development of vaccines should be encouraged to improve the current NIP.

Table 1. The number and rate of infected cases in the three schools.

Table 2. The number and rate of infection by different characteristics; columns report the number (rate) of cases infected with MenA, MenB, MenC, and MenW. (MenW) denotes only MenW. NA: laboratory tests on MenW were conducted only in School A, so the number (rate) of cases infected with MenW is not applicable in Schools B and C.

Table 3. Vaccination status of study subjects (by region).

Table 4. Vaccination status of study subjects (by school).
2023-11-29T16:06:28.103Z
2023-11-27T00:00:00.000
{ "year": 2023, "sha1": "0399bb2235ea454fa355b961f0702ca33ed429c9", "oa_license": "CCBY", "oa_url": null, "oa_status": null, "pdf_src": "PubMedCentral", "pdf_hash": "e52e37256935a550102ddd6c9d3798a452edf5c7", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
237435561
pes2o/s2orc
v3-fos-license
Complex open elbow fracture-dislocation with severe proximal ulna bone loss: a case report of massive osteochondral allograft surgical treatment

We report a case of a 69-year-old right-dominant man who had an open Monteggia-like lesion of the right elbow (Gustilo-Anderson IIIA) with severe proximal ulna bone loss associated with an ipsilateral ulnar shaft fracture due to a motorcycle accident. The patient underwent two-stage surgery. Wound debridement and bridging external fixation were performed first. Three months later, a frozen massive osteochondral ulnar allograft was implanted and fixed with a locking compression plate. A superficial wound infection appeared 5 weeks after the second surgery. Superficial wound debridement, negative pressure therapy, and antibiotics were administered for 3 months, achieving infection healing. At 3 years post-surgery, the elbow range of motion was satisfactory, with a Disabilities of the Arm, Shoulder and Hand (DASH) score of 16.7. Radiographs and computed tomography scans showed good allograft-bone integration without allograft resorption or hardware loosening. Although not complication-free, massive ulnar osteochondral allograft implantation can be considered a valid option in cases of open Monteggia-like lesions associated with ulnar shaft fracture and severe bone loss in active patients, whenever osteosynthesis or joint replacement is not a proper solution. This type of bone stock restoration allows for future surgery, if needed.

Massive bone allografts are used in several settings, including limb-sparing tumor resection, joint revision surgery, and infections [2]. Open elbow fracture-dislocations associated with severe bone loss are uncommon and rarely described. In this paper, we report the use of a frozen massive ulnar allograft for an open Monteggia-like lesion with severe proximal ulna bone loss, associated with an ipsilateral ulnar shaft fracture, in an active patient.

CASE REPORT

This study was approved by the Institutional Review Board of the Surgical Department ASUGI (IRB No. 0539-1256). The subject of this case signed an informed consent approving the discussion of his medical history in the present manuscript. A 69-year-old man was admitted to our department after a high-speed motorcycle accident. The patient presented with large soft tissue damage in his right elbow with bone exposure (Fig. 1A); neither neurovascular deficits nor any other injuries were noted. He had no other comorbidities. Elbow X-rays showed a Monteggia-like lesion with a multifragmented articular fracture of the proximal ulna with severe bone loss and an ipsilateral oblique fracture of the distal third of the ulnar shaft (Fig. 1B and C). This injury was classified as Gustilo-Anderson type IIIA. We planned a two-stage strategy for this complex open injury, and the patient was immediately prepared for the first surgery. Under general anesthesia, wound irrigation with saline and iodine solutions was performed, followed by surgical debridement and fracture-dislocation reduction and fixation using a bridging external fixator, according to damage control principles. Minimal fixation of the distal ulnar fracture fragments using an intramedullary K-wire and further stabilization of the radial head with a K-wire were performed to achieve acceptable forearm alignment (Fig. 2). The lateral ulnar collateral ligament (LUCL), radial collateral ligament (RCL), annular ligament, and medial collateral ligament (MCL) were completely torn and were repaired with simple sutures. The wound was closed without suture tension.
Systemic antibiotic prophylaxis with amoxicillin/clavulanic acid and clindamycin was administered for 7 days, according to our hospital protocol for open fractures. Complete blood count, renal function, erythrocyte sedimentation rate (ESR), and C-reactive protein (CRP) were monitored weekly, with a rapid decrease of the inflammation markers at 3 weeks post-surgery. Considering the high risk of infection in this complicated injury, and after a brief consultation with the infectious disease specialist, we continued oral amoxicillin/clavulanic acid (1.0 g four times a day) for a total of 12 weeks. Inflammation markers normalized in 8 weeks, and the soft tissue condition improved such that a definitive surgical intervention could finally be planned at 3 months. During this time, the distal ulna fracture showed almost complete healing; however, nonunion of the residual proximal ulna was observed at the 12-week follow-up. Given the comminuted fracture, the articular surface disruption, and the extent of the articular bone loss, we planned to use a massive ulnar allograft fixed by plate and screws. We used an allogeneic frozen proximal ulna from the Bone Bank, per the international standard ISO 9001:2008 and the European Guidelines of Tissue and Bone Banks. The patient underwent the second surgery after 3 months under general anesthesia, after having signed an informed consent. The external fixator was removed. We performed a posterior approach to the olecranon, and the ulnar nerve was identified and preserved during the procedure. The triceps tendon was released from the residual fragments of the olecranon, exposing the proximal part of the forearm. We aimed to preserve the MCL, the LUCL, and their humeral attachments. The RCL, previously repaired during the first surgery, was tight; therefore, we did not perform any further procedure on it. A synostosis between the proximal radius and the ulna was identified and removed; an ulnar osteotomy was then performed 8 cm from the tip of the olecranon, removing the proximal part of the ulna and the residual fragments. The ulnar allograft was prepared preserving the donor triceps tendon insertion, implanted in the patient's elbow, and fixed with an eight-hole 2.7/3.5 LCP plate (DePuy Synthes, Warsaw, IN, USA). The new ulno-humeral joint was reduced and stabilized with two K-wires. The triceps tendon was then reattached at its insertion using a 5-mm titanium suture anchor (Healix, Mitek; DePuy, Warsaw, IN, USA) according to a modified Krackow suture technique, using the preserved donor triceps tendon to increase the stability of the construct. The MCL, the LUCL, and the residual capsule were reattached to the graft using trans-osseous sutures. Good articular congruity was achieved. The wound was closed in a standard fashion, and a simple brace was used for 4 weeks. Postoperative standard radiographs of the elbow showed good positioning of the LCP plate with sufficient compression of the docking site and restoration of ulno-humeral joint congruity (Fig. 3). The two K-wires were removed 5 weeks after surgery to start rehabilitation. Ten days after removal, a posterior wound dehiscence with sero-hematic effusion appeared. Although no infectious agents were identified, levofloxacin 500 mg twice a day and rifampicin 600 mg were administered for 12 weeks, after a brief consultation with the infectious disease specialist. In addition, superficial wound debridement was conducted. V.A.C. (Vacuum Assisted Closure, Acelity; KCI, St.
Paul, MN, USA) negative pressure wound therapy was applied for 3 weeks, followed by PICO (Smith & Nephew, Watford, UK) negative pressure wound therapy for an additional 7-week period, achieving complete skin closure. CRP and ESR levels were back in range at 8 weeks, and the antibiotic therapy was well tolerated by the patient during the entire treatment period. The patient was assessed 3 years after the second surgery. At examination, the soft tissues looked normal; the range of motion of the elbow was 110° of flexion, 30° of extension, 10° of supination, and 0° of pronation; and the patient had already returned to his normal daily activities, with limitations in lifting heavy objects and some residual pain after work activities (Fig. 4). No pain was present at rest or during flexion/extension movements related to simple daily activities. The Disabilities of the Arm, Shoulder and Hand (DASH) score was 16.7. There was no medial instability. Mild discomfort and apprehension during the lateral pivot-shift test and the posterolateral rotatory drawer test suggested mild posterolateral rotatory instability. Radiographs at 3 years showed good allograft osteointegration without any signs of bone resorption at the docking point, hardware loosening, or severe osteoarthritis of the radio-humeral joint; partial lateral humeral condyle resorption was noted (Fig. 5A and B). Computed tomography (CT) scans confirmed the radiographic findings and showed good creeping substitution at the docking site (Fig. 5C).

DISCUSSION

Elbow fracture-dislocations can lead to osteoarthritic changes, articular stiffness, and recurrent instability. These injuries are typically addressed with compression and locking plates, with or without associated capsular-ligament repair or reconstruction. When the fracture is comminuted and the articular surface is severely impaired, joint restoration becomes difficult to achieve, and the soft tissue biology is often compromised. In such cases, open reduction and internal fixation is not always possible and is often associated with a high risk of hardware failure and functional insufficiency. As in other articular injuries, joint replacement can be used. Several authors have reported good clinical and radiographic results for total elbow arthroplasty in AO type C distal humeral fractures in elderly patients with osteoporotic bone [3]. Barco et al. [4] reported survival rates of 85% at 5 years and 76% at 10 years of follow-up in geriatric rheumatoid patients treated with primary total elbow replacement for distal humeral fractures. However, compared with hip and knee replacement, this survival rate is undoubtedly lower; in fact, the complication rate of total elbow replacement is higher than that of other joint arthroplasties. Complications are also more frequent in young, obese, and smoking patients, and functional recovery is better in rheumatic elbows than in fractured ones [5]. In our 69-year-old patient, an elbow mega prosthesis could have been considered because of the large ulnar bone defect. However, even though Capanna et al. [6] described good results in 31 oncological patients treated with this technique, an elbow mega prosthesis was not a suitable option for our patient.
In this case, the patient would have had a high risk of implant loosening, related to the poor ulnar bone stock and to his high functional demand (the injured arm was his dominant one), and he would have been at high risk of infection because of the previous severe soft tissue damage. Therefore, we decided to use a massive osteochondral allograft, considering the proximal ulna bone loss and the need to provide the best functional restoration, as is typically required for active patients. Even though the elbow is an infrequent site for tumors and metastases, the use of massive bone allografts for limb-sparing tumor resection is widely described in the literature [7], especially in hip and knee surgery. The main advantage of a massive allograft is the possibility of fully restoring the bone stock while maintaining joint function; in our opinion, this approach can safely be considered in active patients. Preoperative planning is fundamental: a mismatch between the articular portion of the allograft and the host trochlea can cause elbow instability and cartilage wear because of the altered mechanical stress on the joint surface. Preoperative CT scans and contralateral elbow radiographs helped in measuring the trochlea size and finding a suitable ulnar allograft. The allograft storage protocol is also important to ensure good cell biology and matrix content; freezing and sterilization reduce the cellular component [8]. In our case, the graft had been frozen and preserved at -80°C prior to surgery, according to the Bank of Tissues protocols. Massive bone allografts are not complication-free: fractures, infections, nonunions, articular degeneration, and joint instability are frequently described in the literature, with an overall complication rate ranging from 40% to 70% [2]. Allograft fractures are seen in 10% to 52% of cases [2] and usually occur after graft healing, with little to no trauma. These fractures are probably associated with incomplete creeping substitution, and the larger the osteochondral allograft, the higher the risk of fracture. In our patient, neither allograft fractures nor graft-host nonunions occurred. We should consider, however, that massive bone allografts are generally used alone or in an allograft-prosthesis composite in oncological patients who have undergone perioperative chemotherapy or radiotherapy, which can also explain their high complication rate; our patient was non-oncological. In addition, the ulna is not a weight-bearing bone, so the mechanical stress on it is much lower. Graft fracture can also depend on the fixation technique. In our patient, we used an LCP plate, which provides rigid and stable fixation, reduces the risk of fracture, and allows compression and contact at the docking site. These considerations are crucial to ensure healing of the graft-host junction while avoiding nonunion. Infection is the most feared problem for orthopedic surgeons; in fact, it is the most common complication leading to graft removal in the first 2 or 3 years after reconstruction [9]. Infections are reported in up to 16% of previous case cohorts [2,9]. There is no consensus about the management of massive allograft infections. A topic of debate is whether to remove the graft, considering that graft removal might lead to severe dysfunction and, in some cases, limb amputation. Aponte-Tinao et al.
[10] retrospectively analyzed 673 patients who underwent reconstruction with massive bone allografts for tumors or for a previous limb rescue procedure. Only 18% of the infected patients were treated successfully with surgical debridement and antibiotics without graft removal; the remaining 82% were treated with graft removal, a cemented spacer, and a second reconstruction, and 34% of these subjects presented with new infections. In our patient, the infection was superficial, such that wound debridement, negative pressure therapy, and 3 months of antibiotics were sufficient to achieve wound healing. The use of negative pressure therapy is well documented in the literature, although its mechanism remains unclear. It has been suggested that it promotes wound healing by absorbing excess fluid, by preserving microcirculation dynamics through removal of toxins from the surrounding tissue, and by decreasing the bacterial load in case of infection. The time lapse between trauma and bone allograft implantation is another point to consider. Given the few case reports of allografts for knee fractures described in the literature, there are at the moment no guidelines defining the proper timing. In our case, 3 months were necessary before definitive surgery; in fact, even though there were no signs of infection, ESR and CRP normalized only 8 weeks after surgery. Because timing is crucial to reduce the risk of infection, we suggest that complete wound healing and normalized laboratory exams should be mandatory before bone allograft implantation. Moreover, close collaboration between the orthopedic surgeon and the infectious disease specialist is fundamental for deciding the timing of the second stage after the first surgical treatment, as well as the duration of antibiotic therapy after allograft implantation. In some cases, when the soft tissues are significantly injured and muscle flaps are required to cover the massive allograft, a plastic surgeon can be very helpful when available. Another key point of these fracture-dislocations is the management of capsular-ligament injuries, which are usually severe, as in our case. Ligament repair or reconstruction should be performed to avoid instability: even though the osseous anatomy of the ulnohumeral joint provides intrinsic stability, acting as a hinged joint, the MCL and LCL complexes are the primary static constraints. Open elbow fracture-dislocations are particularly challenging injuries. A massive ulnar osteochondral allograft can be considered a valid option in cases of open Monteggia-like lesions with large bone defects. Based on our study, this procedure restores joint function and offers a satisfactory bone stock for a future total elbow replacement.
2021-09-08T06:16:55.183Z
2021-08-10T00:00:00.000
{ "year": 2021, "sha1": "b8f3a9c1fd0f88085153ce384943f3f17edb6244", "oa_license": "CCBYNC", "oa_url": "https://www.cisejournal.org/upload/pdf/cise-2021-00220.pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "da9b7903490b7e52ea198431ff8e0de01a75a888", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
21494227
pes2o/s2orc
v3-fos-license
Gap between contact and content in maternal and newborn care: An analysis of data from 20 countries in sub-Saharan Africa

Background Over the last decade, coverage of the maternal and newborn health indicators used for global monitoring and reporting has increased substantially, but reductions in maternal and neonatal mortality have remained slow. This has led to increased recognition and concern that these standard, globally agreed measures of antenatal care (ANC), skilled birth attendance (SBA) and postnatal care (PNC) only capture the level of contacts with the health system and provide little indication of the actual content of services received by mothers and their newborns. Over this period, large household surveys have captured measures of maternal and newborn care mainly through questions assessing contacts during the antenatal, delivery and postnatal periods, along with some measures of content of care. This study aims to describe the gap between contact and content (as a proxy for quality) of maternal and newborn health services by assessing the level of co-coverage of ANC and PNC interventions.

Methods We used Demographic and Health Surveys (DHS) data from 20 countries collected between 2010 and 2015. We analysed the proportion of women with at least one and with four or more antenatal care visits who received 8 interventions. We also assessed the percentage of newborns delivered with a skilled birth attendant who received 7 interventions. We ran random effect logistic regressions to assess factors associated with receiving all interventions during the antenatal and postnatal periods.

Results While on average 51% of women in the analysis received four ANC visits with at least one visit from a skilled health provider, only 5% of them received all 8 ANC interventions. Similarly, during the postnatal period, though two-thirds (65%) of births were attended by a skilled birth attendant, only 3% of newborns received all 7 PNC interventions. The odds of receiving all ANC and PNC interventions were higher for women with higher education and higher wealth status.

Conclusion The gap between coverage and content, as a proxy of quality of antenatal and postnatal care, is excessively large in all countries. In order to accelerate maternal and newborn survival and achieve the Sustainable Development Goals, increased efforts are needed to improve both the coverage and the quality of maternal and newborn health interventions.
Electronic supplementary material: The online version of this article contains supplementary material.

Over the past 25 years, concerted global efforts have led to dramatic reductions in maternal and under-five mortality. Globally, the maternal mortality ratio has declined by nearly 44% [1], while the under-five mortality rate has fallen by 53% [2]. Yet most low- and middle-income countries failed to attain the maternal, newborn and child health goals set out in the Millennium Development Goals (MDGs) [3], and unacceptably large numbers of women, newborns and children are still dying. About 800 women and 7700 newborns die each day from complications during pregnancy and childbirth and in the postnatal period [4]. Increasing newborn survival is a continuing challenge that must be addressed, as neonatal deaths account for a growing share of under-five deaths [3]. Thus, a major unfinished agenda is the annual toll of 2.9 million neonatal deaths, which account for 45% of all under-five deaths [5,6]. It is now well established that care around the time of birth has the potential to avert more than 40% of neonatal deaths and must be prioritized as the world seeks to eliminate preventable neonatal deaths [7]. Key proven interventions include care by a skilled birth attendant, emergency obstetric care, and immediate care for every newborn baby, including breastfeeding support and clean birth practices such as cord and thermal care and newborn resuscitation [2]. Evidence also suggests that increased coverage and quality of preconception, antenatal, intrapartum and postnatal interventions by 2025 could avert 71% of neonatal deaths, 33% of stillbirths and 54% of maternal deaths per year [7]. Monitoring the coverage of effective and affordable maternal, newborn and child health interventions is central to assessing progress [8,9]. For the purpose of global monitoring and reporting, a set of coverage indicators along the continuum of care has been adopted by global monitoring frameworks such as the Global Strategy for Women's, Children's and Adolescents' Health 2016-2030 and the Every Newborn Action Plan, among others [10-12]. More women are now receiving antenatal care and delivering with a skilled attendant.
Globally, antenatal care coverage for four or more antenatal visits by any provider increased from 35% in 1990 to 58% in 2015 [13], while the proportion of births delivered with a skilled birth attendant rose from 61% to 78% between 1990 and 2015 [14]. However, these changes in coverage of maternal and newborn health have not been reflected in the expected progress in impact indicators related to maternal and newborn survival. It is increasingly recognized that the global measures of coverage of maternal and newborn health capture only contacts with the health system, with little information about the quality of care received. Maximizing coverage of measures focused on contacts alone is insufficient to reduce maternal, newborn and child mortality. To move towards elimination of preventable causes of maternal and newborn deaths, increased coverage of recommended contacts should be accompanied by an increased focus on the content of services [4,15-21]. Recent evidence shows that closing the quality gap of facility-based maternal and newborn health services could prevent an estimated 113 000 maternal deaths, 531 000 stillbirths and 1.325 million neonatal deaths annually by 2020 [7]. Currently, the global indicators specific to the pregnancy, delivery and postnatal periods that are common to the Global Strategy and ENAP include antenatal care (at least four visits), skilled attendant at birth, and postnatal care for mothers and newborns within 48 hours of birth. These global maternal and newborn health indicators are truly the tip of the iceberg, as they focus only on contacts between women or newborns and the health system and provide no indication of the content of services and quality of care delivered, which limits their usefulness for programmatic purposes [22]. A critical gap has been noted in the measurement and reporting of the quality of services received by women and children, with the recommendation of adding core indicators assessing the quality of maternal and newborn health care to the global coverage indicators [4,12,18,23,24]. Recently, the World Health Organization proposed standards of care and measures assessing the quality of maternal and newborn health care [4]. Large-scale, nationally representative household surveys, such as the UNICEF-supported Multiple Indicator Cluster Surveys (MICS) [25] and the USAID-supported Demographic and Health Surveys (DHS) [26], are the largest source of data on maternal and child health outcomes at the population level. However, these surveys are limited in terms of providing information on the content of care during the antenatal, labour, delivery and postnatal periods. Data are often collected on basic services received during antenatal care, such as weighing, testing of urine and blood, measuring blood pressure, and tetanus protection. During the intra- and postpartum periods, information on initiation of breastfeeding, weighing, immunization and postnatal care of the mother and newborn is collected. While this information does not cover the breadth of all services required, especially in cases of emergency care and treatment, taken together it allows an assessment of whether women and newborns are receiving the minimum expected services. Thus, data collected through MICS and DHS have the potential to provide an indication of the level of quality of care, at least at a basic level.
Unlike health facility or quality-of-care surveys, which focus on care provided at service delivery sites, these household surveys have the advantage of providing nationally representative estimates that can also be disaggregated by relevant background characteristics, including sub-national regions, mother's education, mother's age, sex of the child and wealth quintiles, and they allow relevant equity analyses, which are a priority in the Sustainable Development Goals (SDGs) era. In this paper, we analyse the co-coverage of content interventions, used as a proxy for the quality of care received by women during antenatal care and by newborns during the postnatal period, using data from nationally representative surveys. We then compare this co-coverage estimate with the global coverage indicators assessing contacts with the health system to highlight the gap between contact and content.

Data Source

Data for this study are from DHS surveys conducted between 2010 and 2015. We used data on interventions during the antenatal, delivery and postnatal periods from DHS surveys in 20 countries (see Table S1 in Online Supplementary Document). These 20 countries were included because of the availability of data on the 8 antenatal care (ANC) and 7 postnatal care (PNC) interventions included in this analysis. Of the 20 countries, 18 had data on the full set of ANC interventions and 17 reported on all 7 PNC interventions.

Method of analysis

To assess the quality of maternal and newborn health services during pregnancy, birth and the postnatal period, we analysed the co-coverage of selected interventions received by mothers and newborns. The co-coverage indicator, proposed in 2005, is a simple count of how many interventions out of a selected set are received by mothers and newborns [27]. For the purpose of this analysis, we included 8 ANC content interventions as a proxy for the quality of antenatal care (Table 1). We first assessed the contact coverage estimates, defined as (1) the percentage of women with a live birth in the last 2 years who had at least one ANC visit with a skilled provider and (2) the percentage of women with a live birth in the previous 2 years who had four or more ANC visits, with at least one visit with skilled health personnel. We then described coverage of content among all women with a live birth in the previous 2 years and, restricted to women who reported an ANC contact, as the proportion of women with at least one ANC visit, and of those with four or more visits, who received all 8 interventions. In order to compare the gap between contact and content at the time of birth, we included 7 PNC interventions. Interventions such as weighing the newborn at birth, early initiation of breastfeeding, and vaccinating the newborn with Polio dose 0 and BCG were included as proxies for quality, as these are directly within the control of the skilled birth attendant. No prelacteal feeds for the first 3 days was included because educating and assisting women in initiating exclusive breastfeeding and maintaining successful breastfeeding has been identified as a core function of skilled health personnel [28]. Postnatal health checks within 48 hours of birth for the mother and newborn were included owing to the lack of data on the content of postnatal care in the analysed household surveys. For PNC, we analysed women delivering with a skilled birth attendant (SBA) whose surviving newborn received the 7 interventions.
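To make the co-coverage construction concrete, the sketch below counts interventions per woman and computes the share receiving all of them among those with a given contact. It is a minimal illustration in Python/pandas under assumed data: the 0/1 indicator column names are hypothetical placeholders, not actual DHS recode variables.

# Minimal sketch of the co-coverage count described above, assuming a
# DataFrame with one row per woman and 0/1 indicator columns for each of
# the 8 ANC interventions. Column names are hypothetical placeholders,
# not actual DHS recode variable names.
import pandas as pd

ANC_ITEMS = ["bp_measured", "urine_tested", "blood_tested", "weighed",
             "tetanus_protected", "iron_given", "iptp_3_doses", "counselled"]

def co_coverage(df, items=ANC_ITEMS):
    """Number of interventions received per woman (0..len(items))."""
    return df[items].sum(axis=1)

def pct_all_items(df, contact_col, items=ANC_ITEMS):
    """Share (%) of women with the given contact who received every item."""
    sub = df[df[contact_col] == 1]
    return (co_coverage(sub, items) == len(items)).mean() * 100

# Example: pct_all_items(women, "anc_4plus") gives the kind of
# "all 8 interventions among women with 4+ visits" figure reported below.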
In the present analysis, a skilled birth attendant was identified based on the database maintained by UNICEF and Countdown, which validates the skills and qualifications of health personnel. For postnatal interventions, data on immunization were collected only for surviving children. We therefore restricted the analysis to surviving children under 2 years of age at the time of the survey. This may positively bias the results if it is assumed that children who died were more likely to have received low-quality care. To assess the factors associated with the receipt of all interventions during the ANC and PNC periods, we carried out random effect logistic regression on pooled data for women who had a contact. The regression model controlled for several maternal and socio-demographic characteristics: maternal age, education status, parity, area of residence and wealth status.

Table 1. Set of interventions included for co-coverage analysis.

Antenatal period

The analysis presented in Figure 1 characterizes the quality of care received among women who reported receiving at least one ANC visit with a skilled provider and among those with four or more ANC visits. The gap between contact and content, defined as the difference between the percentage with four or more antenatal care visits and the percentage who received all 8 interventions, is huge in the antenatal period: compared with an average of 51% [range: 32%-76%] of women who received four or more ANC visits with at least one visit with a skilled health provider, only 5% (range: 0.3%-19%) of women received all 8 ANC interventions (panel A in Figure 1). Among the interventions provided to women who had a contact during the antenatal period, receipt of three doses of intermittent preventive treatment of malaria in pregnancy was lowest (panel B in Figure 1). The gap between contact and content was widest in Congo and Gabon, where a difference of 70 percentage points was noted between the percentage of women who received 4+ ANC visits and the percentage who received all 8 ANC content interventions (see Table S2a in Online Supplementary Document). The logistic regression analysis showed that women who had four or more ANC visits had twice the odds of receiving all 8 interventions compared with those with only one ANC visit (odds ratio (OR) = 2.06, 95% confidence interval (CI) = 1.72-2.46). Primiparous women had 23% higher odds of receiving all 8 ANC interventions compared with women with 5 or more children. The odds of receiving all ANC interventions increased significantly with greater levels of education and wealth status (Figure 2).

Postnatal period

The gap between contact and content of care shows that although about two-thirds (65%, range: 34% to 93%) of women and newborns had contact with the health system, only a handful received all 7 interventions considered (3%, range: <1% to 9%) (Figure 3). In the postnatal period, this gap was also widest for Congo and Gabon (see Table S2b in Online Supplementary Document). As with the ANC interventions, the likelihood of receiving all 7 PNC interventions was higher for newborns born to women with higher education (OR = 1.23, 95% CI = 1.12-1.35) and wealth status (OR = 1.31, 95% CI = 1.02-1.67). Contact during the antenatal period was also associated with the receipt of PNC interventions.
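For readers who want to see how odds ratios of this kind can be derived, the sketch below fits a logistic model and exponentiates the coefficients. The paper used a random-effects model on pooled data; this simplified version uses a plain logit with country fixed effects in place of a random intercept, and all variable names are hypothetical placeholders.

# Simplified sketch of the odds-ratio analysis described above. The paper
# fit a random-effects logistic regression; here a plain logit with country
# fixed effects stands in for the random intercept. Variable names are
# hypothetical placeholders, not actual DHS recode variables.
import numpy as np
import statsmodels.formula.api as smf

def fit_odds_ratios(df):
    model = smf.logit(
        "all8 ~ anc_4plus + C(education) + C(wealth) + C(parity)"
        " + C(residence) + age + C(country)",
        data=df,
    ).fit(disp=False)
    odds_ratios = np.exp(model.params)   # exponentiated coefficients = ORs
    ci = np.exp(model.conf_int())        # 95% CIs on the odds-ratio scale
    return odds_ratios, ci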
Analysis revealed that the odds of newborns receiving all PNC interventions were 17% higher (OR = 1.17, 95% CI = 0.94-1.46) for newborns whose mothers received four or more ANC visits than for those who received 1-3 visits (Figure 4). Our analysis demonstrates that there are large gaps between contact and content of care during the antenatal, birth and postnatal periods across all countries, as assessed using mothers' recall in household surveys. Among the ANC interventions included in the analysis, measurement of blood pressure was the most commonly received intervention. Our finding resonates with an earlier study that assessed the content of antenatal care when data on antenatal interventions such as height and weight checking, blood pressure testing, and blood and urine testing first became available in Demographic and Health Surveys [29]. The findings of the present study are also consistent with other studies that examined coverage of high-quality contacts during the antenatal and postnatal periods [24,29-31]. A recent study noted a substantial decline in the coverage of at least one antenatal contact and of skilled birth attendance when content was added, in Nigeria, Ethiopia and India [30]. Such gaps between the globally recommended coverage indicators measuring contacts and the actual content indicate ineffective care, resulting in a lack of accelerated progress towards maternal and newborn survival. A limitation of this analysis is that we were able to analyse only interventions that were available in household surveys across the countries included. We recognize that the scope of essential newborn care is broader and encompasses a range of interventions. Additional essential newborn care interventions, such as thermal care and cord care, have recently started to be included in household surveys; however, at the time of analysis, data on these additional interventions were available for only a few countries. Thus, our analysis included a subset of interventions in the antenatal and postnatal periods for which data were available for a larger number of countries. Another limitation is that all measures included in the analysis are based on mothers' recall of care during the antenatal and postnatal periods and may therefore be subject to differential recall bias. Further, only a few studies have assessed the validity of coverage indicators for MNCH interventions measured through household surveys. A recent series on "Measuring Coverage in MNCH" found that the sensitivity and specificity of coverage indicators are highly variable across interventions and that women report less accurately on interventions that occurred immediately following childbirth [9]. An area for further research would be linking data from facility surveys with population-based data in order to better understand the quality of available services. Recent studies linking these two sources have found an association between service readiness in health facilities and the likelihood of receiving an appropriate set of essential newborn care interventions, and have highlighted important gaps in service delivery as obstacles to universal access to health services [32,33]. The current global maternal, newborn and child health coverage indicators for the pregnancy, labour and postnatal periods focus merely on contacts with the health system, with no information on the quality and process of care.
These measures of MNCH coverage show only whether services are reaching the intended beneficiaries but do not assess the effectiveness or actual content of the care received. Our analysis establishes that focusing merely on contacts with the health system, rather than on the content of care, is a critical gap in assessing the true effectiveness of maternal and child health interventions. For example, we observed that although 2 in 3 births were attended by a skilled birth attendant, only 3% of births received all 7 interventions recommended during the immediate postnatal period. There is increasing evidence that increased coverage of recommended contacts alone is insufficient to reduce maternal and neonatal mortality and morbidity [4,7,15-21,24]. Quality of care is being recognized internationally as a critical aspect of the unfinished maternal and newborn health agenda [4,15]. Our findings also highlight the need to include elements of quality of care in regular monitoring through health management information systems (HMIS) and household and facility surveys in order to identify the real gaps in effective coverage. Periodic programme assessments could include a content analysis of ANC and PNC visits in a given sample of mothers and newborns and explore the reasons for omitting certain interventions, which can vary from lack of competency to stock-outs of urine and haemoglobin test kits. Further research is also required to identify more sensitive indicators of quality of care and to include these in future household surveys.
2018-04-03T00:16:20.802Z
2017-12-01T00:00:00.000
{ "year": 2017, "sha1": "d946a49dabdfadc70820f4b635e1ce7baeccc825", "oa_license": "CCBY", "oa_url": "https://doi.org/10.7189/jogh.07.020501", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "d946a49dabdfadc70820f4b635e1ce7baeccc825", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
118650797
pes2o/s2orc
v3-fos-license
A tunable, dual mode field-effect or single electron transistor

A dual mode device behaving either as a field-effect transistor or a single electron transistor (SET) has been fabricated using silicon-on-insulator metal oxide semiconductor technology. Depending on the back gate polarisation, an electron island is accumulated under the front gate of the device (SET regime), or a field-effect transistor is obtained by pinching off a bottom channel with a negative front gate voltage. The gradual transition between these two cases is observed. This dual functionality exploits both vertical and horizontal tunable potential gradients in a non-overlapped silicon-on-insulator channel.

Owing to its superior control of short channel effects together with negligible dopant-induced variability [1], fully-depleted silicon-on-insulator (FD-SOI) is nowadays considered a consistent solution for future low power applications [2]. One of the key challenges for FD-SOI is to design low access resistances. On the other hand, single electron transistors (SETs) require access resistances of the order of the quantum resistance (25.8 kΩ) to exhibit Coulomb blockade oscillations (CBO) [3,4]. Hence these two devices have so far been designed separately, although a simple modification of the source/drain architecture makes it possible to obtain SET operation at low temperature with a metal-oxide-semiconductor field-effect transistor (MOSFET) structure [5]. Silicon SETs greatly benefit from mature silicon technologies: recently, scaling below the 5 nm range allowed room temperature operation [6]. Coupled SET-FET circuits have been studied for multi-valued logic applications [7], realized with silicon technologies [8] or by integrating CMOS devices [9]. Nevertheless, a CMOS FD-SOI facility has never been used to realize such hybrid circuits, though it is an excellent tool to benchmark these concepts. Here we report on the use of the substrate back gate to switch between FET and SET behaviour within the same device, fabricated in a CMOS facility.

We have fabricated n-MOSFETs adapted from FD-SOI technology (see Fig. 1). The SOI layer is etched to pattern the active area above the 150 nm thick buried oxide (BOX). Silicon is then oxidised (5 nm) and polycrystalline silicon is deposited, resulting in a conventional gate stack. After gate etching, the source and drain module is fabricated: silicon nitride spacers are formed on both sides of the gate, epitaxy is performed to raise the source and drain, and finally arsenic is implanted, leading to a typical concentration above 10^20 cm^-3. The resulting junction profile is such that the device is non-overlapped: the undoped region below the spacers, acting as a potential barrier for electrons, is responsible for the SET behaviour described hereafter [5]. The silicon substrate below the BOX is used as a back gate. This low-doped substrate used in industrial CMOS processes is not well suited to changing the voltage at low temperature: attempts to change the back gate voltage lead to very slow relaxations, of the order of days, making experiments impossible. By shining light directly on the sample with a red LED thermally anchored at 4.2 K, with an optical fiber transmitting the light down to the lower temperature stages, the substrate reacts much faster, making substrate polarisation studies possible. The experimental procedure followed here consists in shining light for only a few seconds after each change of the back gate voltage.
In this paper we show data for two devices; however, many samples have been produced and show the same behaviour. Fig. 1a shows a transmission electron micrograph of a device similar to the samples we measured. The first sample has a channel thickness T_Si = 8 nm, an active width W = 40 nm and a gate length L_g = 70 nm. Considering the small T_Si and the relatively long L_g, this sample is designed to have good sub-threshold electrostatic control by the gate. Its electrical characteristics at 300 K are shown in Fig. 2a on linear and logarithmic scales. It exhibits a sub-threshold slope of 70 mV/decade (measured with V_d = 1 mV), which is at the state of the art for an oxide thickness of 5 nm and close to the theoretical limit for thermally activated transport, (k_B T/e) ln(10) (60 mV/decade at 300 K). For such a non-overlapped geometry a moderate on-current level is expected, due to the extra access resistance to the channel. Fig. 2c shows the variation of the conductance (in the linear regime) at V_g = 1.6 V, above the threshold voltage. Using a positive back gate voltage V_bg, we increase the normalized on-current by 20%, reaching 0.4 mA/µm (at V_bg = 39 V and source-drain bias V_d = 1 V, not shown). This is a high value for an etched Si nanowire. Fig. 2a shows that this gain is obtained without degrading the sub-threshold slope. Increasing V_bg leads to a decrease of the threshold voltage V_t, plotted in Fig. 2b. From the slope dV_bg/dV_t we can obtain the ratio of the effective front gate and back gate capacitances [10]. We found a ratio of 40, which is not far from a crude estimate with planar capacitors: |dV_bg/dV_t| ≈ T_BOX/T_ox = 30. The deviation from this model can be explained using a more realistic 3D model of the capacitances. For practical applications a much thinner BOX should be used to lower the applied V_bg. At low temperature the non-overlapped geometry of our FET turns it into a SET without using the back gate, as reported before [5,11]. Here we investigate how the device is modified by increasing the substrate voltage at T = 1 K. Source-drain conductance is shown in Fig. 3 for increasing V_bg. For each V_bg, the V_g range is chosen to see the onset of current through the device. A clear SET regime is found at V_bg = 0 V: the observed CBO are regularly spaced with a period of ΔV_g = 4.5 mV, which corresponds to a gate capacitance C_g = e/ΔV_g ≃ 35 aF (consistent with the 27 aF given by a planar capacitor model), and the conductance of each oscillation does not change significantly throughout the V_g range. This SET regime is well described by the orthodox theory of Coulomb blockade because the island contains a large number of electrons at the onset of current. It is useful to note that the flat band condition is reached at V_bg ≃ V_g ≃ 0 in our devices at T = 0; we can therefore estimate that around one hundred electrons are present in the SET of Fig. 3 at V_bg = 0. The situation changes gradually when increasing V_bg: the spacing between Coulomb peaks becomes larger and irregular, which characterizes a SET with a much lower density of electrons. The Coulomb island is progressively fragmented into low electron density flakes, down to a Coulomb glass regime [12]. The electron island is well defined at V_bg = 0 V because the electron gas experiences a sharp gradient of potential at the top of the Si wire, induced by V_g. On the contrary, these gradients are smoother near the BOX interface at positive back gate voltages, leading to several electron flakes.
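As a quick numeric check of the three back-of-envelope relations used above (the thermal subthreshold limit, the planar-capacitor ratio, and the gate capacitance extracted from the CBO period), the sketch below evaluates them with the values quoted in the text; it is illustrative only, not the authors' analysis code.

# Numeric check of the back-of-envelope relations quoted in the text:
# thermal subthreshold limit (k_B*T/e)*ln(10), the planar-capacitor
# estimate |dV_bg/dV_t| ~ T_BOX/T_ox, and the CBO gate capacitance
# C_g = e/dV_g. All input values are taken from the text.
import numpy as np
from scipy.constants import e, k

T = 300.0
print(k * T / e * np.log(10) * 1e3)  # ~59.5 mV/decade thermal limit at 300 K

T_BOX, T_ox = 150e-9, 5e-9           # buried oxide and gate oxide thicknesses
print(T_BOX / T_ox)                  # = 30, vs the measured ratio of 40

dVg = 4.5e-3                         # CBO period of sample 1 (V)
print(e / dVg * 1e18)                # ~35.6 aF gate capacitance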
Therefore the sharp transition between high electron density regions and the potential barrier, which is necessary for an orthodox SET, is progressively lost at large V_bg. Finally, for V_bg = 39 V, an electron gas is located at the buried interface and pinched off by a negative front gate voltage, leading to a FET characteristic. The observed bumps below threshold are due to remaining disorder affecting the smooth parabolic potential of the pinch-off region. The second sample is designed with a reduced L_g of 30 nm; T_Si is now slightly thicker, at 12 nm, and W is unchanged. We aim to see the effect of scaling down L_g on both the SET and the FET. Fig. 4a shows the characteristic of sample 2 at 4.2 K and V_bg = 0 V. Regular CBO appear due to the non-overlapped geometry, but the measured period, ΔV_g = 21 mV ± 6 mV, corresponding to a capacitance C_g = e/ΔV_g ≃ 8 aF, is larger by a factor of about 4 compared with sample 1. This is mainly a consequence of the smaller gate length, that is, the smaller gate-channel overlap capacitance. In sample 2 the first few electrons in the island are detected (at V_g ≳ 0). This is due to the larger tunnel coupling to source and drain at larger T_Si. The low density limit explains the fluctuations of ΔV_g through quantum capacitance effects. By applying V_bg = +20 V (red line in Fig. 4b) an excellent FET regime is now observed, with both a very steep current rise and a good on-conductance. We have not significantly improved the on-current level compared with sample 1 because it is limited by the access resistance. The sub-threshold swing at T = 4.2 K is excellent, reaching 8 mV/decade. Compared with sample 1, fewer conductance bumps are observed near threshold, indicating that the pinch-off potential is steeper (smaller gate length) and less sensitive to the disordered potential. The sub-threshold swing is nevertheless larger than the best theoretical limit at T = 4.2 K, which is 0.85 mV/decade. We attribute the observed swing to thermal activation with a lever arm parameter α = δφ/δV_g ≃ 0.1, where φ is the potential at the electron gas location. At V_bg = 0 V, from the analysis of the CBO, we measured α ≃ 0.3 when the electron gas is located near the top interface. The lower α value found in the FET regime is consistent with the fact that the electron gas has been pushed towards the BOX interface. For comparison, the FET characteristic at 300 K and V_bg = 0 V is shown on the same plot; it exhibits a sub-threshold swing of 112 mV/decade, as a result of the short channel and the relatively thicker T_Si. The maximum transconductance at 4.2 K is 30 µS (not shown), which makes this FET a good candidate for designing an amplifier at cryogenic temperature [13]. In conclusion, we report the fabrication of a device acting either as a SET or a FET depending only on the substrate voltage. The CMOS fabrication process enables large scale integration. We showed that scaling down the gate length both decreases the size of the SET, down to the few-electron limit, and improves the FET characteristic at T = 4.2 K. This work opens up the opportunity to design hybrid circuits with SET and FET devices using a local back gate. Such circuits are well adapted to low power applications, for which they will not suffer from the relatively high access resistance of the FETs due to the non-overlapped geometry. The research leading to these results has received funding from the European Community's Seventh Framework Programme (FP7 2007/2013) under Grant Agreement No. 214989.
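The link between the lever arm and the measured swing can also be checked numerically: with α ≃ 0.1, the thermally activated swing S ≈ (k_B T/(α e)) ln(10) at 4.2 K reproduces the ~8 mV/decade observed for sample 2. The sketch below is illustrative only.

# Sketch: thermally activated subthreshold swing for a lever arm alpha,
# S ~ (k_B*T/(alpha*e))*ln(10). With alpha = 0.1 at T = 4.2 K this
# reproduces the ~8 mV/decade swing reported for sample 2.
import numpy as np
from scipy.constants import e, k

def swing_mV_per_decade(T, alpha=1.0):
    """Subthreshold swing (mV/decade) at temperature T for lever arm alpha."""
    return k * T / (alpha * e) * np.log(10) * 1e3

print(swing_mV_per_decade(4.2))        # ~0.83 mV/decade ideal limit (alpha=1)
print(swing_mV_per_decade(4.2, 0.1))   # ~8.3 mV/decade, close to measurement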
The samples studied in this work were designed and fabricated by the AFSID Project Partners (http://www.afsid.eu).
2012-01-18T11:23:02.000Z
2012-01-18T00:00:00.000
{ "year": 2012, "sha1": "df31bad8dfd638b52288fbf13555e54a567e6001", "oa_license": null, "oa_url": "http://arxiv.org/pdf/1201.3760", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "df31bad8dfd638b52288fbf13555e54a567e6001", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics" ] }
245121328
pes2o/s2orc
v3-fos-license
A Nanoparticle-Conjugated Anti-TBK1 siRNA Induces Autophagy-Related Apoptosis and Enhances cGAS-STING Pathway in GBM Cells

Department of Neurosurgery, Xiangya Hospital of Central South University, Changsha 410008, Hunan, China; National Clinical Research Center for Geriatric Disorders, Xiangya Hospital, Central South University, Changsha, China; Health Management Center, Xiangya Hospital of Central South University, Changsha 410008, Hunan, China; Department of Thoracic Surgery, Xiangya Hospital of Central South University, Changsha 410008, Hunan, China; Department of Microbiology, School of Basic Medical Science, Central South University, Changsha 410008, China

Introduction

The incidence of glioblastoma (GBM, grade IV) in Western countries and China is increasing year by year, and GBM has the highest mortality rate among gliomas [1,2]. Some biomarkers, such as isocitrate dehydrogenase (IDH) mutation and 1p/19q codeletion, have been proposed to indicate a favorable prognosis in glioma patients [3,4]. Besides, age is considered a prognostic factor for glioma patients, with younger age indicating a favorable prognosis [5,6]. The conventional therapies include surgery, chemotherapy, and radiotherapy; however, the average survival time of GBM patients ranges from 12 to 15 months [7,8]. Although gene regulation is regarded as a key factor in tumorigenesis, the application of gene therapy remains controversial. Hence, the treatment of GBM is challenging.

TANK-binding kinase 1 (TBK1) is a noncanonical member of the inhibitor of nuclear factor κB (IKK) family and is involved in cell survival, autophagy, mTOR/AKT signaling, and KRAS-driven tumorigenesis [9,10]. Moreover, EGFR constitutively complexes with TBK1 and leads to TBK1 phosphorylation in glioblastoma [11], and the loss of TBK1 inhibits kidney cancer cell growth [12]. Therefore, downregulation of TBK1 may be a promising approach to suppress the progression of GBM, which has not been reported yet.

As an emerging and promising tool, gene therapy has attracted increasing attention in the treatment of cancers and autoimmune diseases [13]. RNA interference (RNAi) is a conserved biological response to double-stranded RNA in which small RNAs of about 20-25 nucleotides bind to specific target transcripts and manipulate gene expression by suppressing mRNA and protein levels. As is well known, siRNA can regulate gene expression specifically, which is essential to the RNAi phenotype, and siRNA may therefore be a promising strategy in the treatment of cancers, including GBM. However, the practical application of free siRNA is limited, owing to inefficient cellular uptake and nuclease degradation arising from its enzyme vulnerability and negative charges. Consequently, it is urgent to modify siRNA so that it can be used more widely, especially in GBM.

Based on our previous work [14], graphene oxide (GO) can stabilize proteins. Building on this, we designed TBK1si/rGO-PEG for the targeted delivery of TBK1 siRNA into GBM cells. This work may facilitate and expand the experimental, and even clinical, applications of siRNA in targeted therapy.

Data Extraction. The expression of TBK1 in gliomas and normal brain tissues was extracted from The Cancer Genome Atlas (TCGA), Chinese Glioma Genome Atlas (CGGA), and Genotype-Tissue Expression (GTEx) databases. A total of 672 and 1013 glioma patients were included in the TCGA and CGGA datasets, respectively. The GEPIA website was used to compare the expression of TBK1 between gliomas and normal brain tissues [15].

Subgroup Analysis.
Characterization of rGO-PEG Nanoparticles. For GO functionalization, 2 mg of GO was diluted in 2 mL of ultrapure deionized (DI) water, 20 mg of PEG-NH2 was added, and the mixture was sonicated for 90 min. Then the mixture was combined with 20 mg of EDC and stirred for 12 h. Afterward, it was centrifuged at 20,000 rpm for 30 min to remove excess free PEG-NH2, and the precipitate was resuspended in 2 mL of ultrapure DI water. The rGO-PEG was synthesized by reducing the GO-PEG with NaBH4. Finally, rGO-PEG at a final concentration of 2 mg/mL was obtained after centrifugation to remove excess free reagents. The TBK1si/rGO-PEG was diluted with distilled water, placed on a copper grid with nitrocellulose, and then stained with phosphotungstic acid. Afterward, it was measured with a Nano ZS-90 (Malvern Instruments, Malvern, UK) at room temperature. Atomic force microscopy (AFM) images were taken with a Nanoscope V multimode atomic force microscope (Veeco Instruments, USA). TBK1si/rGO-PEG was diluted with ultrapure DI water to a final concentration of 1 × 10^−6 M for AFM. Twenty μL of the TBK1si/rGO-PEG sample was placed on freshly cleaved muscovite mica and dried in a critical point dryer. Images were taken in tapping mode at room temperature. The samples were mixed and stirred for half a day at 37 °C. Then the samples were centrifuged to remove free TBK1si. The TBK1si/rGO-PEG nanoparticles at weight ratios (siRNA : rGO-PEG) of 5 : 1, 50 : 1, and 500 : 1 were electrophoresed at 150 V for half an hour. Then the agarose gel was stained and illuminated to visualize the RNA bands. Cell Transfection. The U251 cells, purchased from the American Type Culture Collection (Manassas, VA, USA), were cultured in six-well plates at a density of 2 × 10^5/well with 4 mL of complete DMEM for a day. The medium was then changed to fresh serum-free DMEM for transfection. Fluorescein isothiocyanate (FITC)-labeled TBK1si (TBK1si-FITC) was designed and used at a quantity of about 0.1 nmol/well. The weight ratios of TBK1si-FITC/rGO-PEG were 5 : 1, 50 : 1, and 500 : 1. To detect the transfected cells and evaluate the transfection efficiency, the fluorescence signal was measured with the LSR Fortessa device (BD Biosciences, San Jose, CA, USA) and a fluorescence microscope (Carl Zeiss Meditec AG, Jena, Germany). The Consi sequence was 5′-UUCUCCGAACGUGUCACTUTT-3′. Cell Cycle Analysis. GBM cells were cultured in 12-well plates at a density of 1 × 10^5/well with 2 mL of complete DMEM with 10% FBS. After 12 h, GBM cells were treated with NS, rGO-PEG, Consi/rGO-PEG, or TBK1si/rGO-PEG for 48 h. Then all GBM cells were harvested and fixed in 70% ethanol for 12 h at 4 °C. After being washed twice with PBS, GBM cells were incubated with RNase A for 1.5 h at room temperature. Afterward, propidium iodide (PI) (Becton-Dickinson, San Jose, CA) and Triton X-100 were used to stain the GBM cells. At last, the LSR Fortessa device (BD Biosciences, San Jose, CA, USA) was used to acquire the data, which were analyzed with FlowJo V10.
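As a minimal illustration of the kind of DNA-content gating that FlowJo performs on such PI histograms, the sketch below uses synthetic intensities and placeholder gate boundaries; real gates would be set on the measured G1 and G2/M peaks, not on these hypothetical values.

import numpy as np
import pandas as pd

# Synthetic single-cell PI (DNA content) intensities: two Gaussian peaks stand
# in for G0/G1 (2N) and G2/M (4N) populations, with an S-phase bridge between.
rng = np.random.default_rng(1)
pi = np.concatenate([
    rng.normal(50_000, 3_000, 600),    # G0/G1 peak
    rng.normal(100_000, 5_000, 250),   # G2/M peak
    rng.uniform(60_000, 90_000, 150),  # S phase
])
events = pd.DataFrame({"pi": pi})

def phase(v):
    # Placeholder gate boundaries; real gates follow the observed peaks.
    if v < 55_000:
        return "G0/G1"
    if v > 90_000:
        return "G2/M"
    return "S"

# Fraction of cells assigned to each cell-cycle phase.
print(events["pi"].apply(phase).value_counts(normalize=True))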
Cell Apoptosis Analysis. GBM cells were cultured in six-well plates at a density of 1 × 10^5 cells/well with 4 mL of complete DMEM for 12 h. Then GBM cells were treated with NS, rGO-PEG, Consi/rGO-PEG, or TBK1si1/rGO-PEG for 12 h. Afterward, the cells were collected and stained with the Annexin V-CF Blue/PI apoptosis detection kit (Abcam) according to the protocol for 20 min at 25 °C in the dark. The LSR Fortessa device (BD Biosciences, San Jose, CA, USA) was used to evaluate the apoptotic cells, including early and late apoptotic GBM cells. Statistical Analyses. The data were analyzed and visualized with R version 3.6.0 and GraphPad Prism version 8.0.2. Kaplan-Meier analysis was used to estimate the survival difference between TBK1-high and TBK1-low groups. The optimal cutoff point of TBK1 expression in the survival analysis was determined using the "survminer" R package. Statistical significance was assessed by the unpaired two-tailed Student's t-test or one-/two-way ANOVA. All data are shown as mean ± standard deviation, and error bars represent the mean ± standard deviation in all relevant graphs. Asterisks indicate levels of statistical significance (*p < 0.05, **p < 0.01, ***p < 0.001); n.s. means nonsignificant (p > 0.05). TBK1 Was Highly Expressed in Gliomas and Correlated with the Prognosis of Glioma Patients. To preliminarily explore the potential role of TBK1 in gliomas, we extracted data from online databases including TCGA, CGGA, and GTEx. The expression of TBK1 was significantly elevated in lower-grade glioma (LGG, grade II and III gliomas) and GBM compared with normal brain tissues (p < 0.05) (Figure 1(a)). Glioma patients with low expression of TBK1 had relatively longer survival times than those with high TBK1 expression (p < 0.05) (Figure 1(b)). In subgroup analysis, low expression of TBK1 indicated better prognoses for patients with grade III or IV glioma (p < 0.05) (Figures 1(c) and 1(d)). Moreover, among both IDH-wildtype and IDH-mutant glioma patients, those with low expression of TBK1 had a relatively long overall survival time (p < 0.05) (Figures 1(e) and 1(f)). These findings were consistent in glioma patients both younger and older than 41 years (p < 0.05) (Figures 1(g) and 1(h)). These results indicated that TBK1 played a potential carcinogenic role in glioma and correlated with the prognosis of glioma patients.
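A minimal sketch of this kind of survival comparison, written here in Python with the lifelines package rather than the R/survminer workflow the paper reports: a simple median split on TBK1 expression stands in for survminer's optimal cutpoint search, and the cohort values are illustrative placeholders, not the paper's data.

import pandas as pd
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

# Illustrative placeholder cohort; real values would come from TCGA/CGGA.
df = pd.DataFrame({
    "time": [10, 14, 20, 25, 30, 36, 40, 55, 60, 72],   # follow-up in months
    "event": [1, 1, 1, 0, 1, 0, 1, 0, 0, 0],            # 1 = death observed
    "tbk1": [8.1, 7.5, 6.9, 5.2, 7.0, 4.8, 6.5, 4.1, 3.9, 4.5],
})

# Median split as a simplification of survminer's optimal cutpoint search.
cutoff = df["tbk1"].median()
high, low = df[df["tbk1"] >= cutoff], df[df["tbk1"] < cutoff]

kmf = KaplanMeierFitter()
kmf.fit(high["time"], event_observed=high["event"], label="TBK1-high")
ax = kmf.plot_survival_function()
kmf.fit(low["time"], event_observed=low["event"], label="TBK1-low")
kmf.plot_survival_function(ax=ax)

# Log-rank test for the difference between the two survival curves.
res = logrank_test(high["time"], low["time"],
                   event_observed_A=high["event"], event_observed_B=low["event"])
print("log-rank p =", res.p_value)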
Preparation and Characteristics of TBK1si/rGO-PEG. GO was obtained according to our previous work [14,16] and conjugated with six-arm PEG into GO-PEG in the presence of EDC [17]. Then GO-PEG was reduced by NaBH4 to yield rGO-PEG. PEG functionalization is critical to raising the chemical and physiological stability of GO and rGO. The rGO-PEG adsorbed siRNA to form TBK1si/rGO-PEG, which had a sheet shape (Figure 2(a)). rGO-PEG significantly improved stability in culture medium, which approximates physiological conditions (Figure 2(b)). The average diameter of the siRNA-loaded nanoparticles was 102.00 ± 20.53 nm, and the height of TBK1si/rGO-PEG was 18.00 ± 3.18 nm in the AFM images (Figures 2(c) and 2(d)). The UV-vis absorption spectrum revealed that, after the conversion of GO-PEG into rGO-PEG, the absorption maximum showed a significant redshift from 230 nm to 265 nm, whereas PEG itself showed no peak near 265 nm (Figure 2(e)). Those results indicated that the redshift of the UV spectrum arose from the restoration of electronic conjugation in the rGO rather than from the presence of PEG [17]. Stability and Release Rate of TBK1si/rGO-PEG. The fluorescence emission spectrum was measured after the TBK1si/rGO-PEG nanoparticles were mixed with a fixed concentration (1 μM) of complementary DNA. Only 20% of the labeled TBK1si was released from rGO-PEG into complete DMEM after 3 d (Figure 3). However, over 70% of the TBK1si was released in the presence of the corresponding complementary dye-labeled siRNA within 10 h (p < 0.05). This result could lay a solid foundation for a new and efficient platform based on siRNA/rGO-PEG nanoparticles for gene targeting in vitro and even in vivo.
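As an aside on how such cumulative-release measurements are often summarized, the sketch below fits a simple first-order release model f(t) = f_max(1 − exp(−kt)); the time points and fractions are illustrative placeholders shaped like the trend reported above, not the paper's data.

import numpy as np
from scipy.optimize import curve_fit

# Illustrative cumulative-release measurements: time (h) vs. fraction released.
t = np.array([0.5, 1, 2, 4, 6, 10, 24, 72])
released = np.array([0.10, 0.18, 0.30, 0.45, 0.55, 0.70, 0.72, 0.73])

def first_order(t, f_max, k):
    # f(t) = f_max * (1 - exp(-k * t)): plateau f_max, rate constant k (1/h).
    return f_max * (1.0 - np.exp(-k * t))

(f_max, k), _ = curve_fit(first_order, t, released, p0=(0.7, 0.2))
print(f"fitted plateau = {f_max:.2f}, rate constant = {k:.2f} per hour")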
TBK1si/rGO-PEG Inhibited Autophagy but Activated the cGAS-STING Pathway. To reveal the mechanism of the anti-GBM effect of TBK1si/rGO-PEG, the expression levels of critical apoptosis and autophagy proteins were measured after the interventions of NS, rGO-PEG, Consi/rGO-PEG, or TBK1si/rGO-PEG (Figure 6). Discussion Gene regulation plays a critical role in the development of cancer, including cell proliferation, angiogenesis, immunology, and metastasis, and gene mutations can lead to tumorigenesis [18-21]. However, the treatment of malignant tumors including GBM remains a challenge. Gene therapy is promising for the treatment of cancer and certain autoimmune diseases [22-25]. In particular, a gene can be transferred into an abnormal cell to produce a functional molecule, such as a protein, that can correct a genetic disorder [26-29]. TBK1 and IκB kinase epsilon (IKKε) are noncanonical members of the IKK family. Their roles in innate immune signaling and cancer have been well characterized, including the promotion of cell survival, autophagy, and AKT-mTOR signaling, and TBK1 activation promotes KRAS-driven tumorigenesis and development [9,10]. Furthermore, TBK1 signaling in both cancer and immune cells can promote immunosuppression, and potent, specific TBK1 inhibitors have been shown to potentiate immune checkpoint inhibitor (ICI) responsiveness in preclinical models [10,30]. TBK1 also activates type-1 IFN signaling downstream of the cGAS-STING pathway and other viral and pathogen sensors, and thus can regulate both pro- and antitumorigenic innate immune pathways [31]. Moreover, EGFR constitutively complexes with TBK1, leading to TBK1 phosphorylation in glioblastoma [11]. Based on these points, inhibition of TBK1 could be expected to prevent GBM proliferation. TBK1 can be inhibited by various means, such as siRNA, shRNA, or small-molecule inhibitors. The transfection efficiency of shRNA is low because it is a plasmid system, which is hard to deliver via a nonviral vector. Although the transfection efficiency can be enhanced by a viral vector system, the safety issues must then be addressed. In addition, small-molecule inhibitors are limited in their specificity, despite recent attention. As a result, siRNA is more promising for its safety and reliability in nonviral vector transfection. Due to specific properties such as enzyme vulnerability and negative charges, the application of siRNA is limited by inefficient cellular uptake and nuclease degradation. Thus, it is urgent to develop novel strategies for transporting siRNA. Viral systems and physical methods are popular, but their delivery and safety issues must be addressed [32,33]. Since the 1960s, many chemical transfection systems, including calcium phosphate, lipids, and cationic polymers, have been designed as alternatives to viral vectors to overcome these shortcomings [24,34]. Afterward, many efforts were devoted to modifying chemical molecular features such as structure, size, and surface potential to enhance the transfection efficiency [35,36]. However, the ideal siRNA delivery system should possess not only high transfection efficiency but also low toxicity. Therefore, TBK1si/rGO-PEG was designed to inhibit TBK1 for GBM treatment. First, three different target RNA sequences were synthesized to inhibit TBK1 protein expression, and the results showed that TBK1si1 was the most efficient. Second, TBK1si1 was used to evaluate the anti-GBM effect. Those results showed that TBK1si1/rGO-PEG could impede the proliferation of GBM significantly more than the other groups, including NS, rGO-PEG, and Consi/rGO-PEG. As a result, the TBK1si1/rGO-PEG nanoparticle was used as the targeted delivery system for TBK1 siRNA. In this study, autophagy-related molecular expression was investigated, and the results showed that inhibition of TBK1 can upregulate the expression of caspase-3, P53, P62, LC3-I, cGAS, and STING, while downregulating Bcl-2, p-TBK1, p-P62, and LC3-II. Therefore, we found that TBK1 siRNA probably promotes apoptosis via inhibiting autophagy. Meanwhile, the damaged DNA activated P53, which results in cell cycle arrest and apoptosis. As is known, autophagy can be inhibited by the knockdown of TBK1, which lays the foundation for the accumulation of P62. Interestingly, the cGAS-STING pathway was probably enhanced by the accumulation of P62 [37], which could promote cell survival or initiate resistance in the presence of damage or drug treatment [31]. The cGAS-STING pathway is an evolutionarily conserved defense mechanism against viral infections. It has been reported that activation of TBK1 and cGAS-STING resulted in cancer progression and inflammation [38]. In cancers, it remains unclear how the cGAS-STING axis suppresses type-1 IFN signaling and upregulates NF-κB pathways to enhance metastatic behaviors. Perhaps the intensity of cGAS-STING activation determines the switch between tumor suppression and promotion [39]. Yet the underlying mechanisms remain poorly understood and require further research. The shortcomings of our work should be acknowledged: the possible mechanism of the anti-GBM effect needs to be studied further, and whether TBK1si/rGO-PEG can effectively inhibit GBM growth in vivo remains to be tested. Conclusions The rGO-PEG could be an efficient system for the delivery of siRNA, and TBK1si/rGO-PEG could be a novel therapeutic approach for GBM treatment.
2021-12-13T16:10:38.737Z
2021-12-11T00:00:00.000
{ "year": 2021, "sha1": "c54eb53854b13ecf4535b4944662caa8c3644ec2", "oa_license": "CCBY", "oa_url": "https://downloads.hindawi.com/journals/ecam/2021/6521953.pdf", "oa_status": "GOLD", "pdf_src": "MergedPDFExtraction", "pdf_hash": "46b440c9b470746af70f8343dddc45f990a85272", "s2fieldsofstudy": [ "Biology" ], "extfieldsofstudy": [ "Medicine" ] }
238844877
pes2o/s2orc
v3-fos-license
Urban Rituals in Sacred Landscapes In this book, I have asked the question why autochthonous, local or regional sanctuaries were so vital to the development of poleis in Hellenistic Asia Minor even though they were located at great distances from the urban center. Although I have focused this research on a few case studies, the phenomenon was fairly common, as discussed in the introduction with the list of cities whose major sanctuaries were situated at a distance, sometimes even in faraway places (Table 1.1). In examining current approaches from archaeological and historical studies, it soon becomes apparent that available models are tailored to answer very different questions, regarding either the rural setting of urban sanctuaries in the context of Archaic and Classical Greece, or the degree of autonomy and economic, social or political dimensions of sanctuaries in Asia Minor. While both approaches have informed the framework of analysis applied here, they nonetheless leave a gap in interpreting the urban roles of major sanctuaries in the chora of poleis in Hellenistic Asia Minor, particularly regarding the dynamics of change that many of these local or regional shrines underwent as they were drawn into the orbit of the polis to become its primary sanctuary. The difference between the two main approaches lies not only in the nature of the disciplines of archaeology and history, but also in the different kinds of material or epigraphic data. I have attempted a synthesis, but have also noticed that the major studies in this area are largely informed by dualistic paradigms, with core-periphery, urban-rural, civilized-wild, and even Greek-non-Greek polarities that are more reflective of modern concerns than ancient realities. Since such biases will inevitably steer the results, I took a step back to look to other disciplines in order to gain a broader perspective on some of the fundamental issues at hand. Perceptions of space and landscape, ritual, cross-community contact, and identity are often taken at face value in studies of antiquity, yet are central concerns to the cognitive, social and spatial sciences. These disciplines help problematize these issues from very different angles, even if they require some tweaking before being applicable to the ancient world. This current study incorporates relevant issues drawn from these various approaches that should be taken into account. The resulting framework of analysis, discussed in Chapter 2, provides a holistic tool that can help assess the multifarious contexts of sanctuaries in Asia Minor in the Hellenistic period. No two sanctuaries were alike, nor were their relationships with their communities. But this tool allows for a deep assessment of the areas of change over time and their urban impact, as well as comparative analyses of different city-sanctuary relationships by pinpointing the most relevant areas of transformation and revealing larger patterns. With its focus on sources, the framework balances theoretical potential with empirical data; its fruitfulness has been demonstrated through the case studies. Before turning to the larger themes, it is worthwhile to briefly review the results, focusing on the role of landscape, major turning points in communal scope and their key interpretations. This volume opened with a brief sketch of the landscape of Labraunda. 
Among timeless boulders near a strategic mountain pass and with a view that embraces much of southern Karia, this landscape of power was surely one of the main attractions for the Hekatomnids. They turned the old Karian sanctuary into a center of their domain, with the grand architecture and banqueting halls that framed the splendid views, and placed it within a defense network. This was Karia at its finest. It is no wonder that the polis of Mylasa laid claims to the shrine after the passing of the dynasty. Yet in doing so it shifted the focus of Labraunda from Karia to Mylasa. This marks an important second phase in the scope of the shrine, from a regional (and dynastic) sanctuary under the Hekatomnids to one concentrated on the polis. This shift in focus met with opposition from the priests as the shrine became contested space. While the formal matter concerned administrative control over the extensive sacred lands, the real debate was over who controlled the shrine and its deep heritage. The wealth of Labraunda lay in the layers of memory residing in its monuments, as well as its landscape and panorama that included Mylasa. Both priests and polis laid claims to the Hekatomnid past in their effort to legitimate control over the shrine and its landscape. The need by the democratic polis to engage the memory of the rulers whom they had once called 'tyrants' (I.Labraunda 41) can only be explained by the power of the past. The sanctuary was still visited by other Karian communities, but inscriptions of the Hellenistic era primarily concern political manifestations of the polis, and they mark key spots at the shrine. The city put itself on display here, before all of Karia, with the memory of the Hekatomnids as backdrop. Mylasa was a composite polis and the identity of the sub-groups was typically celebrated at their local shrines, yet Zeus Labraundos gave the polis a single face that it could present to the wider world by capitalizing on its Karian heritage. The sanctuary of the Karian god Sinuri provides a rich contrast with Labraunda. Tucked away in a valley, the landscape of the sanctuary seems more connected with agriculture, yet the shrine nonetheless also drew the attention of the Hekatomnids. The main shift in scope here concerns the administration of the sanctuary, which passed from the Pelekos syngeneia in the late Classical period, who had direct relations with the ruling dynasty and appear to have been independent from Mylasa, to the Pormounos syngeneia. This group appears in the epigraphic record towards the end of the fourth century and is clearly under the jurisdiction of Mylasa by the second century BC. The two main phases at the shrine of Sinuri are primarily distinguished by a lateral shift in the community using the sacred center. One might argue that in the Hellenistic period the sanctuary does not really qualify as urban space, since its scope appears always to have focused on a subset, the syngeneia, rather than the entire body of citizens, and because its priesthood was hereditary rather than being elected by a central body. Yet their rituals at the shrine clearly show that it functioned like urban space, reflecting civic structures, albeit on a smaller scale. The Pormounos syngeneia used the same Mylasan official jargon in their decrees -they followed the Mylasan calendar, adhered to its legal system, and adopted its institutions of financial administration and overall management of the sanctuary.
They were among the most active subgroups of Mylasa in bestowing honors. They also constructed a stoa at the shrine that helped create an enclosed, urban-like space. Although we do not know whether they held processions across their landscape, their inscriptions highlight their sacred lands while their designated locations mark the key ritual spaces at the shrine. The nested levels of identity of the syngeneia would have been typical for most citizens of Mylasa, and probably several poleis in Karia or in Asia Minor for that matter. But their rituals especially show how a sanctuary offered a once-autonomous community a channel to assert its identity, while its members remained full members of the polis. These were not the only sanctuaries in the sacred landscape of Mylasa. No doubt the picture would be greatly enhanced if we knew more about the sanctuary of Zeus Karios, or the identity of the god worshiped at Gencik Tepe. One of the most important cults for the polis was that of Zeus Osogollis, of which very little remains besides the inscriptions.1 These show nonetheless that it was the fulcrum of much of the religious and political life of Mylasa and thereby formed an important urban counterpart to the sanctuary of Zeus Labraundos. Finally, the pottery record from the Hellenistic period is notoriously difficult to identify -a finer chronological resolution would certainly enhance and challenge many of the interpretations postulated here. While such lacunas in our current knowledge must preclude any hard modeling, there is sufficient evidence to at least confirm that both of these distant country sanctuaries were critical to the identity of Hellenistic Mylasa, albeit at different levels and for different reasons: Labraunda underwent a fundamental shift in concept as the polis, rather than the priests, assumed final control over the administration of the sanctuary and its assets, which contained, besides cash crops, the symbolic capital of the memory of the Hekatomnids; on a tighter scale, the syngeneia of Pormounos used the sanctuary of Sinuri to redefine itself as a community under the aegis of the polis, mirroring its institutions. Both show how the city and its surroundings adjusted to the polis model that was gaining momentum in the Hellenistic period. Stratonikeia was similar in the disparate nature of its citizen base, yet its urban origins were more recent. Whereas Mylasa appears to have undergone an internal synoikism by the early fourth century under the Hekatomnids, the urbanization of Stratonikeia was a new development, having been founded as a Macedonian colony by the Seleukids in the second quarter of the third century BC. The surrounding communities would have merged gradually into the citizen base of the rising polis by the late third century BC. The sanctuary of Hekate at Lagina seems especially to have played a key role in unifying these communities. The shrine is located on a lush hillside near the conjunction of the Hayırlıdere valley and the north-south Marsyas valley. The steep mountains just north of this form a natural border and may well have been the northern limits of the chora of the new polis.
The communal scope of the sanctuary underwent at least three phases: 1) when it belonged to the local polis of Koranza, in the late fourth and third century; 2) when it became attached to Stratonikeia as a major urban sanctuary, in the third and second centuries, while Koranza became a deme of the polis; and finally, 3) when the festivals for Hekate and Rome were used to create political networks following the grant of asylia by Rome after the Mithridatic wars. Hekate's appearance on the early coinage attests to the strong bond with the polis in the second phase, as well as the radical transformation of her sanctuary into a large and monumental complex. But it is especially in this last phase that her sanctuary experienced a surge as inscribed urban space, with the many honorific decrees and dedications. Several concern the processions that integrated the diverse communities as the old road was now ritual space, carrying the urban body from the new town towards the sanctuary, but especially back into town, with the centripetal processions of the sacred key, the kleidos agoge. The cult of Hekate and its spectacles served to merge the composite citizen body and its dispersed territory into a unified polis, and later helped position the polis in the world of cities through its festival network. Forging a sense of community is a constant theme among these case studies, but is perhaps most evident at Panamara. Situated on a peak in the forested hills near the Marsyas valley south of Stratonikeia, the sanctuary was evidently at a strategic point, as Philip V used it to garrison his troops. The scope of the cult of Zeus Karios underwent at least four phases: 1) in the late third and probably early second century BC, when it was administered by the koinon of the Panamareis, and attracted a regional following that extended well into the Rhodian peraia, and even across the Gulf of Keramos; 2) a transitional phase around the mid-second century BC, when the sanctuary was still run by the koinon but with a priest from Stratonikeia, who revived the cult and expanded its network; 3) a period of stability under Stratonikeian control, probably when the cult of Hera was added; and 4) the period following the epiphany of Zeus during the attack by Labienus. This miracle had a galvanizing effect on the identity of the polis and became a common focus. It led to a grant of asylia by Rome and probably to the renewed expansion of the cult network, initiated again by one of the priests. Panamara became a focus for urban ritual in a number of ways, with the entire population of Stratonikeia 'performing the landscape' through processions that emphasized male or female unity across the multiplicity of origins and social classes. Perhaps most spectacular was the procession that brought the festival and cult image of Zeus, now with the epithet of Panamaros, rather than Karios, into the center of town. The sanctuary clearly played a mediatory role in forging connections between Stratonikeia and the surrounding communities, creating a focus for the various strata of the polis, while extending its regional network. The second century BC was a critical time for Stratonikeia, with two essential concerns: internal social cohesion and territorial integrity. Both sanctuaries were critical in this regard. Drawing the two remote sanctuaries into orbit and placing their gods at the heart of urban life was an act that bound the polis, sanctuary and landscape together in a locked relationship.
Stratonikeia connected to the older communities north of town through the sanctuary of Hekate at Lagina; at the same time this gave it a strong presence in the area overlooking the Marsyas valley. Panamara was located in the hill country south of town, with a good view to the Marsyas valley as well, but also with its own ties to the communities in the southern Marçat mountains. Gaining control over this sanctuary was a major step for Stratonikeia in the direction of the Gulf of Keramos, and surely opened up new economic avenues of commerce for the land-bound city. Both Lagina and Panamara were in their own ways determining factors in the territorial development of the polis in the Hellenistic period. The success of this is indicated by Strabo who, when giving a bird's eye view of Karia, states: In the interior are three noteworthy cities: Mylasa, Stratonikeia, and Alabanda. The others are dependencies of these or else of the cities on the coast, among which are Amyzon, Herakleia, Euromos, and Chalketor. As for these, there is little to be said.2 Despite real territorial concerns and their likely locations near the edges of the Stratonikeian chora, Lagina and Panamara were nevertheless not true 'frontier' sanctuaries. There were many other issues at stake besides the definition of territorial borders. Strabo paints an image of Stratonikeia as a foreign 'Macedonian colony' dropped onto the landscape, which does not entirely seem to be the case. But unlike Mylasa it was a new polis developing in an environment that was already highly articulated socially, politically and religiously. The road to success lay in the integration of pre-existing communities in such a way that they could maintain their local identities while being incorporated into the larger citizen body of the polis. This was achieved not only by retaining their communities as the new demes of the polis, but especially by mobilizing a sacred center where their various backgrounds could coalesce into a common citizen identity. Lagina and later Panamara both provided excellent outlets where this could take place. Landscape clearly played a role, but in a kind of inversion of Turner's theory -here urban communitas was being forged as the rituals at these country shrines served to produce and reinforce urban social structures. The difference is perhaps best articulated by the actions of the polis in inverting the rituals and bringing both cults into the urban center.3 Community-building, territorial ambition, and regional networking all must have been involved in Stratonikeia's choice to lay her identity in the gods of these two distant sanctuaries. But this could only take hold when the surrounding communities clearly understood that these gods, their sanctuaries, and the city were now inseparable. Coins, legends, inscriptions, architecture, and especially festivals and processions carried this message in overlapping layers, repeated over and over again until the pattern was simple and commonplace. The sanctuaries thus acted as turning points for the perception of the landscape. By shifting the focus of the sacred landscape to the polis itself, Stratonikeia simultaneously realigned the political composition of this area along the Marsyas as well, with the polis at the logical center of both the physical and the cognitive environment. The four case studies thus revolved around two cities, Mylasa and Stratonikeia, that were each seeking to define or redefine its position in the wake left by Alexander the Great.
Country sanctuaries provided a much-needed structuring principle -as economic centers, memory theaters, institutional organizations, spaces of geopolitical negotiation and social identity, but also eventually as magnets of urban pride. One of the criteria for selecting these four case studies was their proximity to natural or geographical borders, in order to test their potential as frontier sanctuaries. This model is a ready explanation for major 'extra-urban' sanctuaries with a strong urban dimension in the Archaic Greek world. It could also easily be applied to these sanctuaries when considering their acquired urban status in connection with their location on the map. Labraunda, for example, is in a heavily fortified area along a mountain pass between Mylasa and Alinda to the north, and is nearly equidistant from these poleis. Mylasa is known to have aggressively pursued a course of expansion and the shrine of Zeus was clearly a critical concern to the polis; also, the fortress at the shrine was intensively used in the Hellenistic period, but whether this was by Mylasa or more probably by the strategos Olympichos is unclear. Moreover, Alinda hardly figures in the politics of Mylasa. From a territorial perspective, the sanctuary is much more likely to have been considered a station at a critical point along the pass, rather than a frontier marker, although the festivals and fairs would certainly have given the road quite a bit of traffic, adding to the mediatory function of the shrine. For Mylasa, however, this border does not seem to have been a prime concern. One might have expected the eastern perimeter, near the shrine of Sinuri, to be a sensitive zone, as neighboring Stratonikeia was also eager to expand where possible. Both of these assertive poleis were known to have had at least one border conflict (although the location is unclear).4 Looking at the map alone, the monumental sanctuary of Sinuri would seem a likely candidate for a frontier shrine. There are, however, two principal objections to this. One is that despite the attention of the Hekatomnids, this shrine is tucked away in a pocket rather than set at a critical thoroughfare -in fact the terrain to the east becomes much more difficult to cross, although this may have been very different in antiquity. The second and more obvious factor is that, of all the shrines discussed in this volume, the shrine of Sinuri was the least concerned with urban politics. Although absorbed by Mylasa, it was not used to display any form of Mylasan identity other than through the sub-community to which it belonged. At the same time, as Mylasa expanded it seems to have left the sacred centers in its path alone, rather than converting them to 'polis' sanctuaries. This shows a different strategy than the colonies of Magna Graecia presumably followed. As a colony, Stratonikeia might be a more plausible candidate for the frontier sanctuary model, with not one but two major shrines in the outer reaches of its territory. The sanctuary of Hekate at Lagina was located in a landscape befitting the goddess of the crossroads. The conjunction of the Marsyas valley and the renown of her sanctuary would be one of the reasons that Strabo (14.2.29) lists Lagina, rather than Stratonikeia, among his few measuring points in Karia. The mountains to the north probably indicate the extent of Stratonikeian territory, but if so, the polis does not appear to have been preoccupied with it as such.
It was, however, much more interested in the capacity of the shrine to establish links with communities, both within the territory as well as beyond its confines. Connectivity can be a property of frontier sanctuaries, and the festivals and later markets at Lagina would certainly have drawn crowds from afar, as they did at Labraunda. Yet this was more related to its capacity as network hub than as border mediator. With Panamara, however, borders could have played a more fundamental role in the absorption of the shrine by the polis. The hilltop shrine was presumably situated at the northern frontier of the Rhodian peraia in the early second century BC, and Stratonikeian interest in the cult may well have been related to territorial concerns as it exploited the mediatory role of the shrine. But it does not appear to have used the shrine to accentuate the border in any way. Moreover, by the first century BC the focus of the cult of Zeus at Panamara had clearly shifted inward to the urban center, and its festivals were used to unite the disparate population of the polis. None of the sanctuaries exhibit any signs of marking frontiers, despite their position near what was most likely a territorial boundary. Even their inherent mediatory function seems more related to connecting with other poleis and communities, rather than their immediate neighbors. Certainly these shrines came to signify the polis, but borders were only one of many functions that country sanctuaries could fulfill with regard to geography and civic territory. Labelling them as frontier shrines imposes political border issues that take us in the wrong direction with these sanctuaries, as charged as this term has become. The primary concerns of these two rising poleis in Hellenistic Asia Minor have already been shown to lie not at the border, but within. A recurring challenge that shines through each case study is internal social cohesion, and in most cases external connectivity. Both factors would have been a central matter as each city was faced with developing a strategy to position itself within the larger world of cities. Internal Social Cohesion -Building the Polis Dispersed communities were native to Karia, but the polis model was an imported concept, known principally along the coastal fringe. As the model took hold in Karia, cities were often created as agglomerations of much older communities. Mylasa incorporated a multiplicity of older clans with religious hearths and local identities of their own. As a Macedonian foundation, Stratonikeia drew its citizen base from the surrounding communities. The influx of 'Greek' and later 'Roman' citizens will have compounded matters in a way that was by no means limited to Karia. The challenge was to get this diverse and disparate group of people to identify with the idea of the polis together -this kind of coordination problem is exactly what ritual is equipped to deal with. Ritual is a powerful instrument in creating a common focus, whatever that might be. Theories on the mechanisms of ritual help analyze its capacity as a coordinating mechanism through its focus on performance in space. This involves the mnemonic effect of ritual cognition, the element of spectacle or 'flashbulb memories', as well as the frequency of repetition and the creation of ritual habit, regardless of content.
Within the contexts of the cults in these case studies, however, the knowledge conveyed and reiterated through ritual is significant as well -the inseparable link between the god, the community, and its territory: this is particularly true at Lagina and Panamara, but also at Labraunda and even the sanctuary of Sinuri, albeit at a smaller scale. Social cohesion as a conscious goal is particularly apparent at Stratonikeia. The ritual space at Lagina was built to literally embrace the community in a concentrated setting resembling an agora, while the rituals at Panamara, aimed at men or at women, deliberately cut across all of the usual social boundaries, bringing the entire population together under this common denominator. Both cults also turned their gaze to the city, as the processions moved the sacred space and objects of the gods inwards towards the political center. This was an excellent means to create a common focus for a diverse citizen base, with ritual and spectacle burning the shared experience into the minds of the participants, and inscriptions writing this memory into the minds of the community for generations to come. Once-autonomous communities were now subjugated to the polis as its demes and syngeneia. Mylasa seems to have had an even more federated political structure than Stratonikeia, as the hearth of identity continued to reside in the shrines of the clans -this is where honors were bestowed the most, with the shrine of Sinuri as an extreme example. The polis functioned almost as an abstract concept, a distributed model of cooperation as needed. This makes the polis-wide participation in the sanctuary of Zeus Labraundos all the more significant -here Mylasa is represented as a singular entity through the central institutions of the demos and the boule. This is also where international decrees and contracts or grants of proxeny were on display. Labraunda, with its renowned legacy and power of place, was understood to symbolize the polis, both by the Mylasans and by the international world of poleis. The sanctuary functioned as the outer face of Mylasa to the world. Obviously, sanctuaries are all about creating community. Turner argued that this was a fundamental dynamic of distant sanctuaries, 'the center out there'. Yet in his view remote shrines foster an alternative community, an 'antistructure', that is distinct from urban or political identity, often even at odds with it. In this study, we have seen that it is precisely at such sanctuaries where urban identity, as polis religion, is promoted the most visibly. This study has elaborated on the dynamics involved in this in two significant ways. One is that the cults and festivals at these distant sanctuaries actively incorporated the landscape and space itself as part of the ritual focus. A sense of identity but also territorial integrity was constructed through ritual performance. The regular processions, the rituals at the sanctuaries and the memories recalled, and inscribed on the spot, helped configure social but also spatial memory. Such ritual acts and objects were vital towards creating a shared focus, common knowledge, and therefore a sense of unity. A second significant hallmark is that cities in Hellenistic Asia Minor could have a very local interpretation of what polis religion meant between the layers of their social composition.
Communities continued to celebrate their older, indigenous identities through their own cult centers, while major shrines were used to shape the contours of evolving urban identity, both internally and externally. Having a central sanctuary for federated communities was in fact a very Karian solution to the problem of coordination, and in this regard the polis functioned as yet another level in the complexity of nested identities. This distinguished these cults from polis religion in the sense of classical Greece, where the polis presumably permeated every level of society. Sanctuaries had an exceptional predisposition towards creating community, but especially their locations helped anchor a sense of polis identity and urban integrity, sealed through ritual. These ritual ties were a critical factor in establishing links with the wider network of communities. External Connectivity -Festival Networks These sanctuaries became involved in the public relations of the polis as it engaged with other poleis and communities. As centers of federated communities, this was a natural function of Karian shrines, but they could also connect communities together through less political ties of cult, such as syngeneia, kinship based on ancestral or mythical ties. The sanctuary of Zeus Karios at Panamara, for example, was held in common by Kallipolis, across the Gulf of Keramos, and the more local koina of the Londeis and the Laodikeis, as well as the Panamareis themselves. Connecting with these communities could have been a reason behind Stratonikeian involvement at Panamara in the first place. The role of such sanctuaries as hubs in a regional network seems to have been one of their main drawing features. Through endowments at the shrine, a polis could initiate a dialogue of goodwill with its communities. This is more apparent with the cults of Stratonikeia than of Mylasa, possibly due to the long-established presence of Mylasa thanks to the Hekatomnids, while Stratonikeia still needed to legitimate its position. But it does explain in part the claims Mylasa laid on Labraunda. Thanks to their social capital, certain sanctuaries could provide a critical means for the polis to address the wider region at large. This pivotal function, however, was not restricted to major regional sanctuaries; it could also be leveraged at shrines with a more modest scope prior to their incorporation by the polis, as at Lagina. In this case, Stratonikeia used Hekate's festivals not only to coordinate local communities with a common focus, but also to mobilize Rome and the wider Greek world to acknowledge the authority and asylia of the shrine, and thereby the position of the polis. The contests of the festivals further served to promote the polis among the regional circuit of athletic competitions, especially in Karia and Ionia. Festivals, especially those celebrating an epiphany of the god, were important vehicles for poleis on the rise in building inter-urban networks. Rational rituals, i.e. the means which the polis used to create a shared focus and thus generate common knowledge, worked equally well at this global level as they did at the local level of the polis. While the contests and honors allowed for distinction among individuals, as well as individual communities, the collective event helped to create a sense of global community, with the polis hosting the event at the ritual center of the festival, side by side with the deity which was being celebrated.
This was a relevant message, and the place of cult in the territory of the polis served to foreground the sanctuary in the minds of the larger world as well, ensuring at the same time its relation with the polis. Besides these political goals, the general circulation of knowledge that traveled with the foreign delegates along the festival circuit surely helped increase the cosmopolitan standing of the polis, while raising awareness of its own place in the wider world. With the natural function that these sanctuaries already possessed as connectors of communities at multiple levels, it would have been a small step to extend this to engage in the panhellenic trend that was sweeping across the Hellenistic world.5 This new tradition was well underway by the time of the Hekatesia-Romaia, although Stratonikeia was one of the earlier cities to involve Rome. By this time, deploying the inherent connectivity of such regional-turned-urban sanctuaries to put the polis on the world stage would have been a natural means to meet both local and geopolitical needs at once. Urban Identity The emphasis on social cohesion and internal and external networking indicates urban identity as a root cause for polis involvement at most of the sanctuaries discussed here. Criteria for this involvement with a sanctuary may be traced to aspects such as strategic location, visual region, position as hub in a nexus of communities, and the social and symbolic capital of the deity. All of these factors point to the ability of a cult to both capture and hold public attention while transforming the perception of civic territory at the same time: controlling the cult meant controlling its sacred, social and political landscape. The decisions taken at the polis level and the impact at these sanctuaries coincide well with the steps involved in building 'regional identity', in modern terms, as outlined by Paasi.6 He defines the following stages in this process: territorial shaping, symbolic shaping, institutionalization, and finally, establishment through external recognition. Considering the polis through this lens, with administrative, social, and territorial concerns, helps analyze how the different indicators of polis involvement at these sanctuaries coalesced in the construction of urban identity, and why they are especially apparent in poleis that were undergoing a momentous new phase in their development. The first stage of territorial shaping matches the expansionist tendencies of the poleis discussed in the case studies. Borders were important, yet as we have seen there was much more to defining territory than just establishing its extent. Sanctuaries at sites perceived to be vital to the polis, such as a tactical location in the evolving political landscape, would logically have been 'tagged' for special treatment. The ability of a sanctuary to mobilize a local community or network of communities, discussed in the previous section, would also have been a positive factor for the polis. Another critical factor would have been the scope of a sanctuary's visual region, or viewshed, since this would have been added to the visual region of the polis itself. Of the case studies analyzed in this research, the sanctuaries that came to occupy a central position in the political scope of the polis were those that also possessed broad vistas. This may also explain why the sanctuary of Sinuri, although monumentalized by the Hekatomnids, was much less critical to Mylasa than Labraunda was; it looks out over its valley but not beyond.
The sweeping panorama at Labraunda, on the other hand, was surely one of its primary assets -adding its visual region to that of Mylasa, located in the plain below, significantly expanded its visual and strategic reach. Similar observations were made above for the case studies concerning Stratonikeia; located in a narrow east-west valley, the strategic reach of the polis was greatly extended to the north and to the south with the inclusion of the visual regions of Lagina and Panamara. The significance of the sanctuaries helped foreground them in the minds of the local and regional communities, making them seem closer. This was surely one of the reasons that modest sanctuaries were turned into 'big' places through architecture and their festivals pumped up as major spectacles, and especially why the processions were so crucial, as they ensured the entire population physically 'performed the landscape' and inscribed it in their memory. Foregrounding these places in this way was even more effective in making them feel like they belonged to the polis than their legal status or position inside a border. The second stage in building regional identity is that of symbolic shaping. Here too the sanctuaries, their deities and their festivals played a critical role in creating the shared focus that was necessary for a common identity. Foregrounding the sanctuaries in the minds of the citizens through processions and festivals also turned the landscape of the sanctuary itself into an emblem of the polis, becoming more and more familiar with each procession and each festival until it was naturally equated with the polis. The image of the god also became the icon of the city, especially on its coinage with its wider circulation. Mylasa's choice to depict Zeus Labraundos as a conscious echo of his image under the Hekatomnids illustrates the power of divine symbolic capital, especially in a form already familiar to the community while recalling its legendary past. The new image of Zeus Panamaros by Stratonikeia on its post-miracle coinage would have had a similar function; portraying Zeus as a rider-god broadcast the contemporary processions of the Panamareia while recalling his miraculous salvation of the polis. Adding epithets or changing them, as with Hekate Soteira and Zeus Panamaros, was also a clear statement of the adaptation of cult focus to meet the needs of the polis, thereby creating a new identity for both. The territorial and symbolic shaping of these sanctuaries was channeled through institutions, with decisions taken at a central level. While this is most evident in polis administration, in several cases a sanctuary was run by a local community. In this study, the syngeneia at Sinuri are seen to have functioned as a kind of polis in miniature. Also the koinon of the Panamareis at Panamara, or the katoikountes at the sanctuary at Lagina, were decision-making bodies with institutions of their own. But these were not always understood in the same way across the board; a fundamentally different perception of priesthood and its chain of responsibilities seems to lie at the root of the conflict between Mylasa and the priests at Labraunda. The escalation of this is logical considering that the priesthood was one of the most important institutions in leveraging the resources of the sanctuary. This is evidenced by the priest Leon, who advanced the link between Stratonikeia and Panamara. Priests were critical actors in tailoring the attention of the local gods to suit the rising polis.
As leading figures in the rational rituals that bound polis and sanctuary together, shaping the memories of the citizens and the politics of the region, priests should certainly be seen as professional urban producers, or in Paasi's terms, as part of the cultural and media elite.7 Human relationships with the gods were always contingent on divine will. Shifting the focus of a sanctuary would only have been successful if the deity was perceived as the principal actor, whose idea it was to take the developing polis under his or her wings. Whereas a complex process of negotiation likely took place, probably between the power brokers of the polis and the local elite at the sanctuary, for all involved it would ultimately have been the decision of the gods. An epiphany, a supernatural act of salvation on the part of the deity, was the ultimate seal of godly approval for the polis. This was in turn a major reason to obtain the right of asylia, which could then be used to acquire local, regional, or international recognition, spark festivals and engage in geopolitics; hence the surge in epiphanies in the later Hellenistic period.8 This is the final stage in the development of the identity of a region, i.e. the polis: its establishment. This study has demonstrated the importance of a close reading of the data in combination with an awareness of theoretical potential. It has shown the perils of applying models without considering the wider context, but also the necessity of providing alternative interpretations. While all of these sanctuaries were presumably located near frontiers, interpreting them based on this type of location alone will not get us very far. At the other end, a micro view of the epigraphic evidence will give us a story from the perspective of the polis, and a complicated one at that, but not a complete one. Solely empirical approaches will ultimately allow unconscious biases to enter if not tested against different options. In order to understand the evolving relationship between a sanctuary and a nearby urban center, a more holistic approach to the data is needed. Besides expanding the data set, this requires a wider range of theories to draw on in properly assessing the different data, while yielding a list of factors to consider. The framework of analysis applied in this study was developed with this aim in mind. It is now time to assess this framework, starting with the theories borrowed from other disciplines and how they were adapted to this study, and the overall value of this approach. Assessing the Theoretical Approaches In the review in Chapter 2, a gap was discerned in studies in the ancient world concerning major country sanctuaries. On the one hand, there is a strong and primarily archaeological focus on the Archaic and Classical Greek world, in which such shrines are largely seen as frontier sanctuaries, defending a sensitive border of a developing polis. On the other, in studies of Hellenistic Asia Minor sanctuaries are caught up in discussions of autonomy, based on economic and social status, or urban-rural bias, and largely based on epigraphy. Each tangent identifies significant facets that should be addressed in any assessment of the impact of urban centers on sacred landscapes, yet they also pursue a different line of inquiry than the present investigation. The difficulties of applying these models have been discussed at length above.
In order to fill the gaps, I turned to other disciplines to better conceptualize the processes at work in the developing spatial and social relationships between local or regional sanctuaries and expanding poleis. These are discussed in Chapter 2, but because these are 'new' approaches to studies of antiquity it is worthwhile to further assess their value and future potential with regard to this kind of research. 5.1 Visual Regionalization Theories on spatial memory are one of the foundations of this research. They show that mental snapshots and 'snippets' of spatial information are pieced together in our mind as 'cognitive collages', as Tversky (1993) calls them. Places appearing within the same 'snapshot' subconsciously feel like they are closer together, no matter what the actual distances in between might be. Ellard (2009, 126-128) calls this effect 'chunking space', emphasizing how the brain zones places in a process of visual regionalization. Foregrounding spaces, as Hirsch (1995) emphasizes, creates points of heightened awareness that further help compress space in the mind's eye. More than 'mental maps', these concepts better describe how spatial memory works and the importance of visual perception in creating mental associations and hierarchies among places. In the context of this present research, the concept of visual region underscores the importance of viewsheds, or visibility from specific places, in the definition of civic territory. As observed above, combining the visual region of a sanctuary with that of the polis created a greater single unit that would coincide with physical territory. But how were such visual regions integrated? In this present research, ritual action was the key, such as processions that literally melded the visual regions together, but also the grand festivals that took place at the sanctuaries, creating indelible memories. This is what served to foreground these places, together with monumental architecture, literally heightening their visual and symbolic significance in collective memory. Visual regions were therefore especially critical to the territorial shaping of the polis, not only because they increased its strategic reach, but also for their symbolic value, by literally bringing the distant sanctuaries and their landscapes at the edges of the territory within emotional reach, thus creating a broad sense of place and belonging that extended across the chora.
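To make the viewshed concept concrete, a minimal sketch of a line-of-sight computation over a digital elevation model follows; the grid, observer cell, and heights are all hypothetical, and real GIS viewshed tools handle curvature, refraction, and resolution far more carefully.

import numpy as np

# Hypothetical 100 x 100 digital elevation model (heights in meters).
rng = np.random.default_rng(0)
dem = rng.uniform(0.0, 500.0, size=(100, 100))
observer = (50, 50)                 # grid cell of the observation point
observer_h = dem[observer] + 2.0    # eye height above the terrain

def visible(dem, obs, obs_h, target):
    # Sample cells along the ray obs -> target; the target is visible if no
    # intermediate cell rises above the straight sight line between them.
    (r0, c0), (r1, c1) = obs, target
    n = max(abs(r1 - r0), abs(c1 - c0))
    if n == 0:
        return True
    for i in range(1, n):
        t = i / n
        r, c = round(r0 + t * (r1 - r0)), round(c0 + t * (c1 - c0))
        sight_h = obs_h + t * (dem[r1, c1] - obs_h)
        if dem[r, c] > sight_h:
            return False
    return True

viewshed = np.array([[visible(dem, observer, observer_h, (r, c))
                      for c in range(dem.shape[1])]
                     for r in range(dem.shape[0])])
print("share of terrain visible:", viewshed.mean())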
Urban sanctuaries were increasingly separated from the outer world by their delimiting architecture, becoming highly intense and focused spaces that were charged with intent and agency at multiple scales. Linear space, on the other hand, is used to interpret how the sanctuary was integrated in the landscape and connected to places of significance, both physically and visually. The importance of visual linear space has already been stressed in several places as a tool that helps analyze ways in which associations were created or emphasized through 'framing'. This should coincide with an analysis of the kinds of places that were visually ignored; although this was not pursued here, it could be relevant to other case studies as a way of addressing reception and resistance. Kinetic linear space refers to embodied movement through the landscape and in this context largely applies to sacred roads. This concept has helped in understanding the dynamics of processional routes that physically and ritually connected city and sanctuary. Two factors are involved in such routes: one is how they were determined, possibly involving environmental aspects (e.g. the 'shortest path' or 'least-cost corridors') as well as the need to wind them along places of significance; the other concerns the ability of such a route, once established, to attract places of meaning. Of the case studies discussed in this research, the two sacred roads that are known seem to reflect a combination of these factors. As places where the entire population regularly traveled up and down, they would have been magnets for social activity, accumulating meaning with the passage of time - evidence for this is found near the springs and by the many tombs that typically line the sacred roads near sanctuaries, just as they do around more urbanized areas. Future studies of this kind should search for other kinds of signs of presence en route, often difficult to trace.

5.3 Rational Rituals

The shapes of ritual space, concentric and linear, are related to rational rituals in that they provide either a static or a moving focus for the entire community. This visual focus occupies a central place in Chwe's theory on rational rituals as a coordinating device and one of the most direct means of generating common knowledge, a prerequisite for cooperation and social cohesion.14 Common knowledge is transmitted directly through joint attention towards a shared focus, which ritual readily provides. Cognitive studies on ritual have shown it to strengthen the neural pathways of memory, particularly through frequency, repetition, and spectacle, i.e. the 'flashbulb' memories.15 Of course, a shared focus is in itself not enough to create joint action, but it is a prerequisite according to rational ritual theory. As straightforward as it seems, this theory has helped interpret a number of phenomena at the sanctuaries and in the cults with regard to the need for social cohesion, particularly in poleis that were made up of heterogeneous communities. Considering public rituals and ceremonies as rational rituals has led to insights into how ritual performance in festivals and processions worked to unite the population, not only by bringing them together in an enclosed space, but by giving them a shared focus embedded in ritual. That this did not automatically produce harmony or the desired effect is evident from the number of sacred laws that were established at various sanctuaries. But these were the exceptions that may prove at least the intention behind the rule.
In focusing on rational rituals as a means of 'saturation advertising' among humans, Chwe's theory purposefully leaves out the authority of the divine, or at least the perception of this authority. Although even this authority was not foolproof - e.g. oaths were often broken and sanctuaries were frequently sacked - it clearly brought the earnestness of the rituals to a higher level by making them contingent on the pleasure of the divine will.16 The element of the supreme power of the deity behind the rituals would, in theory, certainly increase the compulsion to participate in them. This study does not presume that the rituals discussed here were consciously engineered or intended to be 'rational' - they were in the first place religious ceremonies. Nonetheless, understanding festivals and especially the processions as rational rituals makes the logic behind the sanctuaries, the cults, and ultimately the relationship with the polis much more lucid, as it leads to a better comprehension of the effects of ritual, especially by identifying the shared focus, or foci, at these festivals and interpreting the various ways in which it was produced.

5.4 Network Model

Network theory is gaining momentum in studies of the ancient world.17 One dimension of this concerns webs of associations, the general domain of Actor-Network Theory.18 In this study this has proved useful in understanding different levels of the past that can be evoked through architecture, providing another facet to the dimension of 'memory theater'. The network is certainly useful as an analogy to describe the role of sanctuaries as nuclei in a wider nexus of communities. Rigorous network analysis, however, would require a higher degree of data collection than this present book can accommodate, as well as a greater consistency of data quality, in order to study the weights of nodes and how strong or weak their ties were. But such studies are being conducted.19 Future approaches could include agent-based modeling, where computer simulations can help elucidate patterns from incomplete data sets. In this present study, however, even a metaphorical application of network theory has proven fruitful. The information gathered here shows that three of the sanctuaries clearly functioned as a connecting factor between communities, while the larger (ego-)network aspirations of one of these, Lagina, can be traced with a fair degree of accuracy.20 Ma's application of the 'peer polity interaction' model has furthermore elucidated the basis of these ties, and how the wider community of poleis was typically founded on the inter-urban recognition of grants of asylia (inviolability) and claims of syngeneia (kinship).21 Both terms could be used to draw communities to a sanctuary, as at Lagina and Panamara respectively, and are flags of network activity, as with the other items on Ma's list, the theoria (foreign delegates), and the foreign judges who settled disputes. These two categories are less prevalent at the sanctuaries in the case studies in this research, but should be on the list of things to watch for in discerning the wider interests of a polis in a particular sanctuary.

19 See the Connected Contests project at the University of Groningen, connectedcontests.org, for data on festivals and contestants, used to perform network analysis of inter-civic or panhellenic festival culture. 20 Van Nijf and Williamson (2016). 21 Ma (2003).

Ma's list should furthermore be expanded to include new panhellenic festivals, e.g.
the rise of quadrennial festivals, as well as inter-state treaties, such as isopoliteia or sympoliteia, which were often sealed by oaths in a sanctuary of relevance.22 Also, the role of rulers in initiating some of these festivals, or festivals initiated by cities in connection with ruler cult, deserves to be addressed from a network perspective, even though Ma excluded this from his equation. In short, network theory is certainly a way forward in considering how ritual served to create ties among cities and as a way for new cities to enter the geopolitical playing field, integrating them into the age-old model of Greek festival culture, but with a new twist that reinforced inter-urban bonds and goodwill with superpowers in the turbulent Hellenistic era.

5.5 Regional Identity

Regional identity, as modeled by Paasi,23 has already been extensively discussed above as well as in the case studies. This model has proven to be extremely useful in examining how sanctuaries were used to build and establish urban identity. When considering the ancient city-state as a region, then all of the stages which Paasi describes - territorial and symbolic shaping, institutionalism, and establishment through external recognition - remarkably fall into place: the overlap between ritualized landscape and civic territory, a central cult focus, the institutionalization of the priesthood and the role of priests as urban leaders, and the use of the sanctuary for inter-urban networking. At the same time, Paasi's categories are broad enough to accommodate many of the theories and models previously mentioned. Because of this, very few issues were encountered in transferring the characteristics of this modern concept of 'regional identity' to 'polis identity' in the ancient Greek world. Nonetheless, based as it is on modern political and geographical studies, a few minor technical modifications were needed. This concerns in the first place the importance of three-dimensional features in the landscape as experienced, e.g. mountains, rivers, and foregrounded places, rather than cartographic outlines, as the primary expressions of territorial shaping in the ancient world. Also, most of the symbolic shaping would have been inundated with cult and ritual, even more so than in modern times, adding the weight of divine authority to the idea of the 'region', i.e. polis. This perception is important in understanding the role of the institutions, particularly the priests, but also the wider community who 'performed the region',24 as actors who in principle were following the divine will. With these few details in mind, the model of regional identity would also prove useful in studies of the ancient city in general that focus on similar issues of territory, social composition and institutions, and symbolic focus.

22 Kamphorst (forthcoming) addresses terms of connectivity in inter-state relations. 23 Paasi (2009); see above, Chapter 2.

Assessing the Framework of Analysis

The bulk of this book has been channeled through the lens of the framework of analysis, developed in the second chapter (Table 2.2). Perhaps one of the principal assets of this study, this framework provides a tool through which the many changes in the evolving relationship between a shrine and a community can be weighed, analyzed, and compared in all their diversity. Regarding the historical development, the case studies revealed that each relation between a city and an outlying sanctuary was a unique combination.
This was particularly evident in the analyses of just the two cities and their fundamentally different paths of connecting to the major sanctuaries drawn into their orbit. Each shrine initially had a different radius, from local to regional, demanding unique strategies to incorporate it into the civic sphere. This was perhaps most evident with Stratonikeia and the different approaches applied towards Lagina and Panamara as the scope of each was realigned towards the urban community. Inseparable from the chronology of both city and shrine is the role of the environment - the physical and social geography that constituted the foundation of the nature and impact of the cult, if not its essence. The potential relevance of frontiers has already been discussed, but the landscape itself played an important role. The timelessness of the eroding slopes near Labraunda, along with the strategic location near a passage and panoramic view over the plain below, surely demanded a cult for a primordial and supreme deity. The power of place and cult was neither lost on the dynasty that came to rule nor on the nearby city left behind in its wake. Social geography is equally significant. Proximity to local communities and accessibility to road networks may help explain the attraction of Lagina, but for the shrine of Sinuri this is less obvious and leads us to search for alternative explanations, such as its embeddedness in the productive landscape. The rapidly disappearing cultural landscape around Panamara will leave many questions unanswered, but the hilltop shrine would in any case have acted as a beacon of urban presence for a region otherwise visually separated from the city. Landscape has an agency of its own in this equation. The topochronic conditions of shrine and city constitute the foundational layers of their connection, but the shape that this takes is just as varied. In assessing the degree of integration of a shrine into the urban sphere, a variety of factors need to be considered individually before they can be lumped together. One of the most obvious is the physical appearance of the place of cult, the monumental and ritual space. In a quick assessment, one might think that if it looks and acts like a civic sanctuary, then it must be one too. All of the cases here present as such, but a closer investigation shows a marked deviation in intent early on. With the possible exception of Lagina (initially under the aegis of Koranza), monumentalization processes at these shrines were well underway, if not largely completed, prior to the advent of the city. Labraunda was a dynastic showcase that remained one even under Mylasa, while the shrine of Sinuri never became a polis shrine, although the resemblance was strong. Panamara blossomed under Stratonikeia, but its monumentalization began early on as an investment by local communities. Concentric ritual space, so conducive to community-forming as discussed above, was articulated at all of the shrines, and this capacity would have been another major asset. Linear space, on the other hand, is in most cases visible with the advent of the polis through the use of processions that connected city and shrine. Ritual performance is another critical indicator of change that allows us to observe the shift in cultic scope in perhaps even higher resolution, depending on the survival of the sources.
The crowds that the shrines drew may be evidenced by increasing water supplies, as at Labraunda and Lagina, ceramics such as tableware or terracottas (although most of this has not been published), but especially inscriptions. Festivals provided a joint focus of attention for the newly incorporated communities and are seen to be increasingly scripted events, particularly at Lagina and Panamara. The element of spectacle was equally on the rise and, besides the sacrifices and singing, collective rituals such as banqueting and especially contests would have sharpened the sense of 'community spirit' , even though these rituals were simultaneously used to define and label the various segments of the population. The changes in ritual performance would have been gradual and tailored to each situation. Sanctuaries in Asia Minor have long been studied with an eye towards their economies, the nature of their priesthoods, and their degree of autonomy. The legal administration and organization of these religious centers, primarily informed in the present cases by epigraphic evidence, is an important barometer of change. Change is evident at all of the cases analyzed in this study, but is perhaps the most poignant at Labraunda, which for a time was contested space between the priests and the polis, at odds over administrative control of the resources of the shrine. Economy is certainly an important factor and is especially apparent in the landscape of Mylasa, where a construction allowing for private lands to pass to the sanctuary, only to be leased back to the original owners for further exploitation, originated in the third century. This is another sign that each city devised its own strategy concerning the administration of its shrines, and that this also developed over time. Especially interesting are the fluctuations observed in the local communities who are attached to the sanctuary. While this is best visible with the syngeneiai at the shrine of Sinuri, the other sanctuaries in this study also had communities of their own, e.g. the Korrides syngeneia at Labraunda, or the katoikountes at Lagina; interestingly, the least is known of a residing community at Panamara after the passing of the koinon of the Panamareis. Cult and festival were clearly instrumental in the urban mediatization strategies of the polis. Mylasa clung to the image of Zeus Labraundos as established by the Hekatomnids while using the grand shrine as civic podium. Besides resetting the scope and focus of the cult onto the city, the incorporated deity could also be used to establish geopolitical connections with peer communities or other powers of authority. Epiphanies were an important precedent as they were used to demonstrate the importance and relevance of the city on the political map. At Lagina an epiphany of Hekate accelerated Stratonikeia's claims of loyalty to Rome that eventually led to an extension of territory and especially the privileged recognition of asylia for the shrine. This in turn gave the polis reason to host a festival and petition for recognition and participation from its peer cities. The gods of these country sanctuaries increasingly appeared on the coinage of the cities, one of a variety of avenues that realigned the cults to their new communities. This framework provides a lens to examine the many different ways that a sanctuary and its cult could become attached to a community. The outcome may or may not be surprising, but the main merit is three-fold. 
In the first place, it provides a means for identifying explicit areas of change, allowing us to move beyond a general impression that the available data gives. In the second place, the framework takes into account a wide variety of data, more than can probably be addressed in any one case. But this forces us to integrate the variety of sources and to look across the gaps in data and beyond a single data type. Finally, the structured approach allows for at least a degree of comparative analyses across different sanctuaries, despite the widely differing circumstances. This can help us understand the repertoire of options that cities had, leading towards a better understanding of why such sanctuaries were critical to developing cities and how such relationships were forged and maintained. Through this lens we can gain clearer insight into the strategies deployed by developing urban communities as they sought to position themselves and anchor their identity in a world outlined by warfare and local rivalries, but also one with increasing paths of connectivity. Rather than providing a monolithic model, this framework, with its integration of data and structural approach, can expose the wide diversity of solutions, which should lead to new questions in turn.

Final Remarks

This book began with a description of the shrine of Labraunda and the example of the power of its landscape, asking the question of who it belonged to. As we have seen, the answer is complex and depends on one's perspective in time and place. This may be said of Karia in general, but it may also be said of developing relationships between sanctuaries and cities across Asia Minor in the Hellenistic era, particularly in the later third and second centuries BC. At a time when local lines of organization were being blurred or erased, communities were blending together or being torn apart, and power alliances were constantly shifting, sanctuaries offered a haven of stability and local divine authority, at least on the surface. This is surely a major factor behind the surge in poliad deities that cities began to identify themselves with more and more, a phenomenon designated by Andrew Meadows as the 'Great Transformation'.25 The realignment of the scope of local or regional sanctuaries to first include the rising city, sometimes even including a revival of cult, then to solidify the bond, and finally to present it as the will of the gods is no small feat. Continuity of sacred landscapes would need to be stressed all the more as they merged with civic territory, making the presence of the new community appear natural and divinely sanctioned. The logic behind the surge of local and regional sanctuaries being absorbed by rising cities, as shown at the beginning of this book, now seems clear. Current models address the role of outlying sanctuaries in the Archaic and Classical Greek world in the context of the rise of the polis, but omit Asia Minor or the second rise of the polis in the Hellenistic era. Studies of sanctuaries in Asia Minor have, in turn, focused on their economy and autonomy, but omit their relation to landscape. Both models are driven by disciplinary focus, but lack the holistic approach needed to address the multifarious situation in Asia Minor. And so, a framework of analysis was developed.

25 Meadows (2018), a phenomenon which he observes through the increasing portrayal of deities on civic coinage in the second century BC.
Informed by a variety of theories, models, and approaches, the framework provides an approach to these many different sanctuaries and their contexts which is both systematic and yet not too rigid, in that it allows for their diversity to become apparent. A handful of case studies were selected that could yield sufficient data to intensively test the framework and allow for comparative analyses of the results. The fruitfulness of this overall and combined approach was demonstrated above. It was shown that these relationships could develop in many ways, with many different manifestations. Each combination of city and sanctuary was unique, yet despite their many differences, a number of recurring concerns emerged - especially social cohesion and the need for external recognition as cities sought to put themselves on the larger map. This study has exposed some strategies that were developed to address these concerns, and it has made clear that sanctuaries such as those examined here were linchpins in this process.

7.1 Suggestions for Further Research

A number of issues were raised in the preceding pages, opening up areas of inquiry that deserve further exploration. In the first place, a further application of the framework of analysis from this study would help examine the relationships between other expanding poleis and their country sanctuaries, such as those shown in the introduction (Table 1.1). This framework could on the one hand provide interpretations for cities and sanctuaries in analogous situations, such as Pisidian Antioch and the sanctuary of Men Askaenos, or Aizanoi and the sanctuary of Meter Steunene. It could also help with tentative interpretations for sanctuaries whose data sets are much more restricted, such as the sanctuaries in the chora of Myra, or the sanctuary of Zeus Stratios near Amaseia. It might even be of help with sanctuaries whose locations are as yet unknown - one of the most prominent examples is the extramural Nikephorion of Pergamon, but also the temple of Artemis Pergaia, somewhere outside of Perge - of course the section on geographical data would remain empty, but other areas could still be indicative of the role that the sanctuary fulfilled for the polis. With some adaptations it could also help in understanding the relationship between federation sanctuaries and the poleis which 'hosted' them, such as the Letoon, home of the Lykian League, and Xanthos. This framework of analysis is meant to be dynamic; studies at other sanctuaries and poleis may well lead to very different conclusions, and in any event to a modification of this framework, based on the situation at hand. The visual regions of a sanctuary were discussed as an important factor in their being drawn into the orbit of the polis, as their panoramas were merged with the view from the city, literally expanding its horizons. In this respect a more comprehensive visual analysis of sanctuary viewsheds would be worthwhile, to investigate how these may have been related to their overall function.26 Viewshed analysis could be an important research tool for addressing questions such as whether viewshed shapes correspond to particular types of sanctuaries, or whether viewshed size is a valid indicator of a sanctuary's relevance for the polis. Visual studies should also be further incorporated in studies that address polis religion, as well as in studies that explore other kinds of sensory perception for a holistic approach to understanding how these places and their festivals functioned.
Polis religion, now being reassessed through numerous angles,27 should also be viewed through an Anatolian lens. This present study has revealed alternative views on how state cult, or polis religion, may have been interpreted in Karia. At the same time, the function of sanctuaries in integrating heterogeneous societies could be brought into sharper relief through comparative studies with colonization processes in Magna Graecia or the Black Sea region. This could also impact views on Hellenization in general, and at least on the mediatory role of sanctuaries and the porosity of frontiers. While addressing the indicators of urban involvement in this research, several very different areas pertaining to sanctuaries were explored, such as priesthoods, processions, and ritual performances. A number of studies examine these as institutions in closer detail.28 But a deeper analysis of the social composition of the polis and understanding how this was expressed, or at least projected, during the urban festivals of these sanctuaries would certainly enhance our appreciation of the role these sanctuaries had in consolidating community; such analyses might also show the impact of class differentiation, the mechanisms of power, and channels of resistance, and the many voices that invested layers of identity and pride at the shrine. At the same time, a closer examination of communities residing in local settlements at or near country sanctuaries would greatly increase our understanding of their social function. These approaches would surely lead to important refinements of the framework while embedding this relationship in the context of the wider academic discourse on these social and economic topics. Cult networks were a central part of the discourse of this study, but these could be subjected to much more detailed network analysis to discover their topologies and reveal patterns of interest, especially concerning aspects of reciprocity. This would include analyzing the weights of nodes, and whether their ties are weak or strong. It might also lead to 'shortest path' connections between poleis via sanctuaries, especially when situated in a geographic information system, with least-cost path analysis, and navigational routes by land or by sea, in combination with seasonal data. Agent-based modelling can also complement insufficient data, with intensive and 'random' simulations that provide material for pattern analysis. This has the potential to reveal local, regional, or even 'global' inter-polis festival circuits. Connected to this should be a study of the other kinds of exchanges that may have taken place between these cities, e.g. not just trade, but perhaps the extension of citizenship to certain foreigners (proxeny), or inter-state treaties (such as sympoliteia). Studying festivals through this lens may very well prove them to have been the glue of international Hellenistic society, and one of the prime facilities through which the global political culture was developed.29 Finally, much more archaeology is needed to adequately address the issue of the political and social impact of country sanctuaries in Asia Minor. I have tried as far as possible to indicate the local contexts of sanctuaries, particularly with regard to the locations and nature of their local settlements. But these have hardly been the object of research until now. 
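To make the kind of network analysis suggested above more tangible, the following minimal sketch (in Python, using the networkx library) shows how attested ties between poleis and sanctuaries could be encoded as a weighted graph and queried for 'shortest path' connections and node centrality. The edge list and weights below are invented placeholders for illustration, not historical data.

```python
import networkx as nx

# Hypothetical web of ties between poleis, sanctuaries, and outside powers;
# the weights are invented placeholders standing in for the strength of a tie
# (e.g., the number of inscriptions attesting asylia, syngeneia, or theoria).
G = nx.Graph()
edges = [
    ("Stratonikeia", "Lagina", 5),
    ("Stratonikeia", "Panamara", 4),
    ("Mylasa", "Labraunda", 5),
    ("Lagina", "Rome", 2),
    ("Panamara", "Rhodes", 1),
]
for a, b, w in edges:
    G.add_edge(a, b, weight=w)

# Invert the weights so that stronger ties count as shorter 'distances'.
for a, b, data in G.edges(data=True):
    data["distance"] = 1.0 / data["weight"]

# A 'shortest path' between a polis and a distant power, routed via a sanctuary.
print(nx.shortest_path(G, "Stratonikeia", "Rome", weight="distance"))

# Degree centrality as a crude proxy for a sanctuary's role as a connector.
print(nx.degree_centrality(G))
```

In an actual study, the weights would be derived from counts of inscriptions, festival invitations, or grants of asylia and syngeneia, and a GIS-based least-cost distance could replace the simple inverse-weight metric used here.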
The last suggestion for further research, with which I will close this work, focuses on understanding the sacred landscapes of these sanctuaries, and of the poleis, in a much higher resolution than is now available. This will entail not only more literary and epigraphic studies, but especially archaeological surveys which will help place the sanctuary in its own social context, including not only the local settlement of the sanctuary, but also other nearby settlements, shrines, and necropoleis, and even farmsteads or other kinds of activity. Only through this high resolution can it be determined what the 'spatial continuum' around a sanctuary was actually like.30 In short, the results have been promising so far and the framework has proven its worth, highlighting the areas of the greatest change and continuity for cult and community as urban rituals were etched onto sacred landscapes. But much remains to be done.

29 Van Nijf (2012). 30 See Frejman (2020) and (2018).
Loss of Retinal Function and Pigment Epithelium Changes in a Patient with Common Variable Immunodeficiency

Common variable immunodeficiency (CVID) has only scarcely been associated with ocular symptoms and rarely with retinal disease. In this case we describe a patient with distinct morphological and functional alterations in the retina. The patient presents with characteristic changes in the retinal pigment epithelium, autofluorescence, and electrophysiology.

Introduction

Common variable immunodeficiency (CVID) is a group of heterogenic disorders affecting both children and adults [1]. CVID is characterized by disturbances in both the innate and the adaptive immune system resulting in recurrent infections, lymphoproliferative disease, granulomatous disease, and autoimmunity affecting multiple organs. The patients have hypogammaglobulinemia with poor antibody production and responses. Eye involvement in CVID has only been scarcely described. One report described retinal vasculitis in a case series of 3 patients affected by CVID in childhood [2], and two case reports have described the occurrence of uveitis in a child and in a young adult [3]. Another adult patient showed signs of keratoconjunctivitis as an onset manifestation of CVID [4]. One case report has described choroidal changes in patients with CVID [5]. We describe a patient with CVID who developed loss of retinal function and distinct morphological changes at the retinal pigment epithelium (RPE) level. To our knowledge this has not been observed before.

Case Presentation

Our patient is a 63-year-old female with a long medical history of recurrent infections and problems with the gastrointestinal system. Blood tests revealed very low levels of immunoglobulin G (IgG), and a diagnosis of CVID was made about ten years ago. She now receives treatment with intravenous immunoglobulins. We first saw the patient in 2007 because of an itching and burning sensation in both eyes. She had quite severe keratoconjunctivitis sicca syndrome, but visual acuity was normal, and the rest of the ophthalmological examination was unremarkable. In 2011 she complained of diminished vision in both eyes and photophobia. Best corrected visual acuity had dropped to 0.3 in both eyes. She had developed concentric visual field defects to approximately 10-30° in both eyes, assessed with Goldmann visual field testing, and diminished colour vision using the Farnsworth Panel D-15. Magnetic resonance imaging (MRI) of the cerebrum was normal apart from small areas of gliosis. Full-field electroretinogram (ERG) showed diffuse amplitude reduction and delayed implicit time. Multifocal ERG revealed markedly reduced amplitudes in the central area. Funduscopy revealed retinal changes in both eyes (Figure 1). Autofluorescence imaging showed a diffuse pattern of increased autofluorescence around the entire macula with a marked increase of autofluorescence around the fovea (Figure 1). Spectral-domain optical coherence tomography (SD-OCT; Heidelberg Engineering, Heidelberg, Germany) showed marked RPE changes with accumulation of material in the areas of increased autofluorescence. The RPE changes were characterized by an increased thickness of the RPE band on SD-OCT, and RPE thickening was predominantly localized around the area of pigment derangement observed on funduscopy (Figure 1). There were no areas of separation between Bruch's membrane and the RPE and no signs of drusen either on funduscopy or on SD-OCT.
Fluorescein angiography revealed hypofluorescence related to the RPE changes, but no leakage or any vascular changes were observed. Observation over one year showed a gradual increase in the RPE changes.

Discussion

Ocular involvement in CVID is rarely described, and no previous reports have described loss of retinal function or suggested an involvement at the RPE level in CVID. This patient has two forms of ocular involvement. One is keratoconjunctivitis sicca, which has been previously described in CVID [4], but our patient also has diffuse loss of retinal function and marked symmetrical changes in the RPE, resulting in increased autofluorescence patterns and changes of the RPE on SD-OCT. Even though there are some resemblances to age-related macular degeneration (AMD), some hallmarks, such as drusen, are missing. Furthermore, AMD rarely presents with circumscribed lesions in the RPE around the fovea. One could consider whether this could be Pattern Dystrophy; however, these lesions are more confluent, predominantly butterfly shaped and yellowish, whereas the changes in this patient are more pigmented. Another possibility could be Stargardt's disease, but some of the hallmark characteristics of Stargardt's disease, such as a dark choroid and "fish tails," are missing, and the age of onset is not typical for Stargardt's disease. Also, the lesions in Stargardt's disease are, just as in Pattern Dystrophy, more yellowish rather than pigmented as seen in this patient. None of the known age-related macular changes produce loss of function like the one which has been demonstrated in this patient. The patient never received any medication known to cause RPE changes. There were no signs of granulomatous choroidal involvement or uveitis, suggesting a local inflammatory component at the RPE level. This is the first description of RPE and retinal functional changes in a patient with CVID. A distinct causal relationship still needs to be established, and since simultaneous CVID and retinopathy has only been seen in this one patient, our finding could simply be due to chance. However, this paper might draw attention to possible retinal findings in this rare disorder.

Conflict of Interests

The authors have no financial or proprietary interests in the material described in the paper.

Disclosure

The findings have not been presented previously in any form.
Bandwidth Adaptation for Scalable Videos over Wireless Networks

Multicast/broadcast services (MBS) are able to provide video services for many users simultaneously. A fixed amount of bandwidth allocation for all of the MBS videos is not effective in terms of bandwidth utilization, overall forced call termination probability, and handover call dropping probability. Therefore, variable bandwidth allocation for the MBS videos can efficiently improve the system performance. In this paper, we propose a bandwidth allocation scheme that efficiently allocates bandwidth among the MBS sessions and the non-MBS traffic calls (e.g., voice, unicast, internet, and other background traffic). The proposed scheme reduces the bandwidth allocation for the MBS sessions during the congested traffic condition only, to accommodate more calls in the system. Our scheme allocates variable amounts of bandwidth for the MBS sessions and the non-MBS traffic calls. The performance analyses show that the proposed bandwidth adaptation scheme maximizes the bandwidth utilization and significantly reduces the handover call dropping probability and overall forced call termination probability.

I. Introduction

The existing wireless network technologies such as femtocell [1], WiFi, Mobile WiMAX, 3G, and 4G are able to support multicast/broadcast mechanisms [2], [3]-[6]. However, the bandwidth of existing wireless networks is still inadequate to support huge voice, data, and video services with full quality of service (QoS). To provide the high data rate video services, e.g., multicast/broadcast services (MBS) and unicast services, along with the existing voice, internet, and other background traffic services over the wireless networks, it is very important to efficiently manage the wireless bandwidth in order to ensure the admission of the maximum number of calls in the system during the congested traffic condition, to maximize the overall service quality, and to maximize the revenue. The scalable video coding (SVC) technique [2], [7]-[9] allows variable bit rate video broadcast/multicast/unicast over wireless networks. This technique utilizes multiple layering. Each of the layers improves the spatial, temporal, or visual quality of the rendered video to the user [2]. The base layer, or highest priority layer, guarantees the minimum quality of a video stream. The addition of any enhanced layer, or low priority layer, improves the video quality. The number of layers for a video session and the bandwidth per layer can be manipulated dynamically. Therefore, to broadcast/multicast/unicast videos through a wireless network, layered transmission is an effective approach for supporting heterogeneous receivers with varying bandwidth requirements [9]. In this paper, we propose a bandwidth-adaptation-based bandwidth allocation scheme that efficiently allocates bandwidth among the MBS sessions and the non-MBS traffic calls (e.g., voice, unicast video, internet, and other background traffic). The proposed scheme decreases the bandwidth allocation for each of the MBS sessions during the congested traffic condition only, to accommodate more calls in the system. Our scheme allocates variable amounts of bandwidth for them. However, the minimum quality of each of the videos is guaranteed by allocating the minimum bandwidth to each of the video sessions. The SVC technique allows the reduced bandwidth allocation for the MBS sessions and the unicast videos.
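As a rough illustration of how SVC layering permits such variable allocations, the sketch below (in Python; the function name and all rate values are hypothetical, not taken from this paper) picks the largest number of enhancement layers that fits a given bandwidth budget, while the base layer guarantees the minimum quality.

```python
def svc_rate(base_rate, layer_rates, available_bw):
    """Pick the highest scalable-video rate that fits within available_bw.

    base_rate    -- bandwidth of the base layer (guaranteed minimum quality)
    layer_rates  -- bandwidth of each enhancement layer, in priority order
    available_bw -- bandwidth budget granted to this session
    Returns (allocated_rate, number_of_enhancement_layers).
    """
    if available_bw < base_rate:
        raise ValueError("cannot even serve the base layer")
    rate, layers = base_rate, 0
    for lr in layer_rates:           # add enhancement layers while they fit
        if rate + lr > available_bw:
            break
        rate += lr
        layers += 1
    return rate, layers

# Hypothetical example: 1 Mbps base layer plus three enhancement layers.
print(svc_rate(1.0, [0.5, 0.5, 1.0], available_bw=2.2))  # -> (2.0, 2)
```

Because the base layer is always transmitted, degrading a session under congestion amounts to dropping enhancement layers from the top, which is exactly what makes the variable allocation below feasible.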
The proposed scheme also reduces the bandwidth allocation for the background traffic based on the QoS adaptability [10], [11] of the multimedia traffic. The rest of this paper is organized as follows. Section II presents the proposed bandwidth adaptation scheme. Performance evaluation results of the proposed scheme are presented and compared with other schemes in Section III. Finally, Section IV concludes our work.

II. Proposed Bandwidth Adaptation Scheme

For the low traffic condition, all of the calls are provided with the maximum qualities. However, for the congested traffic condition, the bandwidth allocations for the MBS sessions and the non-MBS traffic calls are decreased. Suppose C_max,nB and C_min,nB are, respectively, the maximum allowable and the minimum allowable bandwidths for the non-MBS traffic calls, and C_max,B and C_min,B are, respectively, the maximum allowable and the minimum allocated bandwidths for the active MBS video sessions. The bandwidth C_max,B is provided to the MBS sessions only if the allocated bandwidth for the non-MBS traffic calls is less than or equal to C_min,nB.

The proposed scheme permits reclaiming some of the bandwidth from already admitted QoS-adaptive multimedia traffic calls and MBS sessions, so as to accept more calls in the system. Therefore, our scheme can accommodate more calls. Fig. 2 shows the procedure of bandwidth degradation for the proposed scheme. The proposed scheme gives the highest priority to handover calls. Suppose C_req,max and C_req,min are, respectively, the maximum and the minimum required bandwidths for a requested call. The system accepts a handover call if it can provide only the C_req,min amount of bandwidth; for a new call arrival, however, the required amount is C_req,max. The qualities of the unicast video calls are degraded only to accept handover calls in the system.

The overall resource allocation scheme is divided into four categories based on the traffic characteristics. The resource allocation and QoS adaptation for each of the traffic types is different. Suppose β_v, β_min,v, and β_max,v are, respectively, the currently allocated, minimum allocated, and maximum allocated bandwidths for each of the voice calls. The bandwidth relation for the voice calls is found as:

β_min,v ≤ β_v ≤ β_max,v.

The bandwidth relationships for the unicast video calls are stated as follows:

β_min,uni ≤ β_uni ≤ β_max,uni,
β_min,uni = β_0,uni + Σ_{k=1..K_min} β_k,uni,
β_max,uni = β_0,uni + Σ_{k=1..K_max} β_k,uni,

where β_uni, β_min,uni, and β_max,uni are, respectively, the currently allocated bandwidth, minimum allocated bandwidth, and maximum allocated bandwidth for each of the unicast video calls; β_0,uni is the allocated bandwidth for the base layer of each of the unicast video calls; K_max and K_min are, respectively, the maximum and the minimum numbers of supported enhanced layers for each of the unicast video calls; and β_k,uni is the required bandwidth for the k-th layer of a unicast call.

The bandwidth relationships for the MBS video sessions are expressed as follows:

β_min,m ≤ β_B,m ≤ β_max,m,

where β_B,m, β_min,m, and β_max,m are, respectively, the currently allocated, minimum allocated, and maximum allocated bandwidths for the m-th MBS session. Similarly, for a background traffic call of the i-th class,

β_min,back(i) ≤ β_back(i) ≤ β_max,back(i),

where β_min,back(i) and β_max,back(i) are, respectively, the minimum and maximum allocated bandwidths for a background traffic call of the i-th class, and x_i is the maximum number of levels by which the bandwidth of a background traffic call of the i-th class can be degraded.
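To make the degradation procedure concrete, the following minimal sketch (in Python) admits an arriving call by reclaiming bandwidth from already admitted QoS-adaptive calls and MBS sessions down to their minimum allocations, with handover calls admitted at C_req,min and new calls requiring C_req,max. The data structures, the greedy degradation order, and all numeric values are our own illustrative assumptions, not the exact procedure of Fig. 2.

```python
def try_admit(calls, capacity, req_min, req_max, is_handover):
    """Admission control with bandwidth degradation (simplified sketch).

    calls    -- list of dicts with 'alloc' and 'min' bandwidths for the
                already admitted QoS-adaptive calls and MBS sessions
    capacity -- total cell bandwidth
    Handover calls need only req_min; new calls need req_max.
    """
    need = req_min if is_handover else req_max
    free = capacity - sum(c["alloc"] for c in calls)
    if free >= need:
        return True                      # admit without degradation
    # Reclaim bandwidth by degrading existing calls toward their minimums.
    reclaimable = sum(c["alloc"] - c["min"] for c in calls)
    if free + reclaimable < need:
        return False                     # block (new) or drop (handover)
    deficit = need - free
    for c in calls:                      # degrade greedily until covered
        give = min(c["alloc"] - c["min"], deficit)
        c["alloc"] -= give
        deficit -= give
        if deficit <= 0:
            break
    return True

# Hypothetical example: three sessions in a 10-unit cell, handover arrival.
sessions = [{"alloc": 4.0, "min": 2.0}, {"alloc": 3.0, "min": 1.5},
            {"alloc": 2.5, "min": 1.0}]
print(try_admit(sessions, capacity=10.0, req_min=1.0, req_max=2.0,
                is_handover=True))       # -> True, after degrading 0.5 units
```

In a fuller implementation, the degradation order would follow the traffic priorities described above (background traffic first, then unicast video, with unicast degraded only for handover calls), rather than the simple list order used here.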
III. Performance Analysis

In this section, we present the results of the numerical analysis of the proposed scheme. We compared the performance of our proposed scheme with that of the "fixed bandwidth allocation for MBS sessions" schemes. Table 1 summarizes the parameter values assumed in our analysis. The call arrival process is assumed to be Poisson, and the cell dwell times are assumed to be exponentially distributed, with an average cell dwell time of 540 sec [12].

Fig. 3 shows that the proposed scheme provides a negligible handover call dropping probability even under very high traffic conditions. A fixed allocation of 14 Mbps for the MBS sessions reduces the maximum bandwidth available for the non-MBS traffic calls; this reduced bandwidth, together with the lack of priority for handover calls, causes a very high handover call dropping probability. A fixed allocation of 6 Mbps for the MBS sessions leaves more bandwidth for the non-MBS traffic calls, but the lack of priority for handover calls still causes a high handover call dropping probability. Fig. 4 shows the overall forced call termination probability comparison. Our proposed scheme provides the best performance due to the dynamic nature of the bandwidth allocation for both the MBS sessions and the non-MBS traffic calls. The other two schemes cannot improve the overall forced call termination performance because both are QoS non-adaptive, and because of the reduced bandwidth available to non-MBS traffic in the scheme where 14 Mbps is allocated to the MBS sessions.

IV. Conclusions

Video over the wireless link is one of the most promising services for current and next-generation communications. In this paper, we proposed a QoS-adaptive bandwidth allocation scheme for MBS videos over wireless networks. Our idea behind the proposed scheme is that, during the congested traffic condition, the system releases some part of the bandwidth from the MBS video sessions and other running QoS-adaptive calls to accommodate more calls in the system. More bandwidth is released to support handover calls over new calls. Also, more bandwidth is released to support new voice and unicast video calls over new background traffic calls. Thus, the scheme results in a negligible handover call dropping probability for all traffic types and a lower new call blocking probability for voice and unicast calls. The proposed scheme provides the opportunity for the network operator to increase revenue. Therefore, the proposed scheme is expected to be of considerable interest for MBS provision through wireless networks.
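As a closing, back-of-the-envelope illustration of why fixed MBS allocations penalize the remaining traffic in Section III's comparison, one can compute the classical Erlang-B blocking probability for the bandwidth left over to non-MBS calls. The sketch below (in Python) is not the paper's analytical model; the offered load and per-call bandwidth are hypothetical values chosen only to show the direction of the effect.

```python
def erlang_b(load_erlangs, servers):
    """Erlang-B blocking probability via the numerically stable recursion."""
    b = 1.0
    for n in range(1, servers + 1):
        b = (load_erlangs * b) / (n + load_erlangs * b)
    return b

# Hypothetical example: a 20 Mbps cell carrying 64 kbps voice channels,
# with 250 Erlangs of offered voice load.
cell_bw_kbps, voice_kbps, load = 20_000, 64, 250.0
for mbs_kbps in (6_000, 14_000):          # the two fixed MBS allocations compared
    channels = (cell_bw_kbps - mbs_kbps) // voice_kbps
    print(mbs_kbps, channels, round(erlang_b(load, channels), 4))
```

Under these assumed numbers, the 14 Mbps allocation leaves far fewer voice channels than the 6 Mbps allocation and hence a much higher blocking probability, consistent with the qualitative trend reported in Figs. 3 and 4.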
Point of View

THE ECOSYSTEM OF CARDIOLOGY GUIDELINES - PATRONS OF THE STATE OF THE ART IN CARDIOLOGY

"... guidelines, viewed as anything less than objective and reflecting broad consensus, may be inadequate to overcome the rugged individualism inherent to physicians..." Anthony N. De Maria (Editor-in-chief - Journal of the American College of Cardiology)

"You know how difficult this profession is for one who is conscientious and exact, and who states only that which he can support by argument or authority, or for one who cannot recall where he saw it mentioned or proved..." Moshe ben Maimon-Maimonides (1135-1204)

"Experience is a hard teacher because she gives the test first, the lesson afterwards." Vernon Sanders Law (1930-...)
I - Preliminary comment

I hereby declare that I harbor no conflict of interests, but rather an interest in conflicts inherent to the guidelines - conflicts that relate to the choice of scientific data, to interpretation seen as evidence, and to classification of the recommendations and their implementation. My interest constitutes a way to follow up the process of balancing physicians' two focal poles - medical practice and patients - from the inside, from within the ecosystem of cardiology. It is a way to keep close track of the dynamics between what, technically speaking, would be in compliance with the best medical practice available to the doctor-patient relationship (which implies ongoing exercise), the application of its units in the manner that would best complete individual needs (which implies "on-the-spot" exercise), and thus result in the "best decision" for any given situation.

It makes it easier to analyze responsibilities that stem from the privilege of being qualified to make decisions - as the locomotive - and clarifies rights and obligations of those in a subordinate position - that of the railway cars - the same tracks, the same stations, the same destination.

It engenders reflection on the question of ethics, of the conflict between a science of objective measures - but one that brings to mind a production line and fosters a culture that features control and standardization - and the art of competence and judgment that neither separates persons who know from their knowledge, nor loses sight of intuition and "hunches" 1.

It helps us weigh the outpouring of questions stemming - but not exclusively - from the bedside, inspired by direct contact with the realities of disease and which, transforming themselves into hypotheses for research, generate scientific data, source answers capable of returning to the bedside from whence they originated in the form of value-added evidence.

It helps when delving into the extreme complexity of human biology to discover subtleties inherent to the raw materials furnished by illnesses and to work with them on the various levels of knowledge that, in the universe of heart diseases, comprise what we might call the "gross domestic product" of the ecosystem of cardiology.

It leads us to understand to what extent the bedside would admit to guidelines functioning with the "authority" to recommend and to demand obedience. Although they may inspire confidence, guidelines cannot stifle physicians' professional freedom, and in addition it helps us appreciate how much that same bedside suffers, not only from pernicious ambivalence - fed back by certain ethical infractions - but also from abdication of the decision-making process - due to bad attitude stemming from lack of knowledge and qualification that can result in malpractice.

It seeks the real dimension of the extent to which the bedside is a fitting environment for indecision - Shall I do it? What shall I do? When shall I do it? How shall I do it? - and thus falls back on the use of guidelines as a strategy for perfecting inversion of the concept (that physicians are already accustomed to practice at the bedside) that humanity "... is divided into decision-makers and abdicators... the majority tends to behave as abdicators... many require the stimulating pressure of deadlines to make any decision... running the risk of impulsivity..."
(Theodore Isaac Rubin, 1923-...). In therapeutics, the answers seem to be more positive, but in prevention, the implementation of guidelines has been negatively influenced by the above-mentioned abdicating tendency in the patient's mind.

Guidelines harmonize decision-making support, but their use cannot be abused by those who, in their desire to solve conflicts of conscience, consult said guidelines systematically whenever faced with some theme on which they lack knowledge. Guidelines are not omniscient and must not, in themselves, be seen as orders, as though they were capable of transferring the ethical-professional responsibility of the recommendation, which is personal within the scope of the doctor-patient relationship, to a collective mea culpa.

Guidelines are a serious business and, within the limits of the heritage of medical practice - representing the selection of the good part - are sufficient reason not to be seen as a safe harbor for those who aim for models of perfectionism, whatever they may be, within the cost-risk-benefit ratio.

Guidelines are a rendering of the service of rediscovery, not of a rewriting of medical literature. They are officially seen as revision, not as original messages. However, the finishing touches that they provide make them the origin of messages on expectations regarding performance and safety, demanding, as state of the art, of the physician's knowledge and leadership, of the efficient decoding by the wisdom of the patient's body, and of the social focus of health.

Guidelines organize the "chaos" of literature and offer one sole language in a tower of Babel; they turn all physicians into "doctors without borders". By classifying gathered past experience, they help strengthen the habit of foreseeing consequences of practice. They fulfill the need for anticipation in a labyrinth, play the role of oracle when doubts arise, and act as codes to invert morbid happenings. However, the real bedside world is not exactly monolingual, and therefore, experiences actually lived through - always "multilingual" - constitute filters interposed in the connecting channels between the perspective informed in the text and that which one perceives at the bedside, including moral values that cardiologists cannot fail to notice. "The heart has its reasons of which reason knows nothing" (Blaise Pascal, 1623-1662). What this favors is good discernment regarding objectives and good mentalization of end-views in three stages: OBSERVATION of the current clinical situation of the case; retrospective KNOWLEDGE of similarities previously observed within the cardiological ecosystem environment; JUDGMENT of the prospective meaning of the OBSERVATION-KNOWLEDGE meeting point for the case at hand.

Guidelines are not akin to Kantian categorical imperatives (Immanuel Kant, 1724-1804) because, if they could be acknowledged as moral duties (imperatives), they would not reach all, without exception (categorical), even though the universal principle "do unto others what you would have all others do unto all" must always be present. Further in the Kantian line, an explanation for the non-categorical is that if a guideline represents a theoretic reason for a recommendation, the desire expressed by the patient's own free will can determine a practical reason for not heeding said recommendation.
Far be it from me any intention to philosophize - merely a passing thought - but a guideline can be analyzed by its capacity to answer a triad of basic questions from Kantian criticism, when posed at the bedside: a) What can I know? A guideline might have the intention of answering that the physician can know about probabilities of utility and diagnostic, therapeutic, and preventive efficacy; b) What should I do? Mentalizing what should be universal conduct, start from the "best scientific evidence", which is diffuse, and arrive at the "best clinical evidence", individualized according to precepts of freedom, clarification, and reorganization within the doctor-patient relationship; c) What can I expect? Maximization of successes thanks to knowledge of the best practices, and interpretation of failures due to the biological nature of human beings.

A guideline is a plural tool when one perceives that it can satisfy philosophical theorizations such as the realism of a fact that serves as a reference point to a recommendation (example: blood pressure of 180x120 mmHg measured with a sphygmomanometer); the idealism of the need to take measures to reduce the blood pressure levels; the rationalism of the essential human and technical qualification to perform that mission; and the empiricism of the research-based recommendations.

If we were to draw an analogy with the concept of media, guidelines would be mass media communication - high probability of a good (clinical) return on (scientific) investment. Guidelines represent an emerging power that must be considered from the viewpoint of their inter-relation with the classic concept that "the clinic is sovereign" and, more recently, from the viewpoint of the notion that "the clinic is sovereign and the image is powerful" 2.

In the analysis that follows, I sought to support the concept that data/evidence - due to ongoing construction/deconstruction - has both the strength of the "efficient scientific stint of duty" and the fragility of "probability, not precisely clinical certification", to cater to the best bedside practices. We must keep in mind that the essence of medical practice comprises the many anxieties physicians are subject to in transforming the recommendation "should apply" into the statement "applied in the line of duty".

I tried to keep a keen "outside view" to ensure that the text was, as far as possible, based on evidence from learned publications about guidelines and that it contained an inkling from the teachings of Osler, who said that the more you look outside of the narrow circle of your work, the better equipped you will be for the struggle in your profession. Any feeling of rarefaction of general culture in the atmosphere of the ecosystem of cardiology jeopardizes the technical-scientific rationality in the context of the specialty. I justify certain excerpts featuring the use of the personification style of language because of the view of dialogue we must have with the content of the guideline - which is "a dialogue with colleagues" with the power to influence the "me with myself" dialogue - and due to the custom of personifications (anthropomorphisms) at the bedside such as: the heart accelerated, the blood pressure dropped, the hypertrophy pulled the vector back, the aorta is overriding the interventricular septum, or the echocardiogram didn't say anything about the mitral valve score.
I explain some repetitions in the course of the article by the chance of the "revolving reading" that occurs with texts that deal with viewpoints, because these lead to thoughts and engender ideas in a permanent merry-go-round that, contrary to an original article, needn't -or shouldn't -have the virtuosity of rigorous conciseness. And the merry-go-round in this instance is called GUIDELINES TO BE RESPECTED IN THE PROPER DOSAGE. Consensus regarding what the proper dosage is seems as impossible to determine as the chance of each little horse representing a supposed answer and all racing off to produce a winner.

I presume that the text may contain certain biases of appreciation -self-conflicts of interest between an unconsciously partial self and a self that wants to be impartial -but they will not be prejudices. I believe that, at least in part, possible biases could be attributed to habits in the treatment of valvular heart disease, because the conceptual bases of conduct in this pioneer sub-specialty of cardiology, entwined as it is with cardiovascular surgery, have been put to the test for decades; so many years at the bedside, and countless randomizations due to human nature in the broadest sense, lent them high "study power", strong significance, and a narrowing of the confidence interval.

Time -that cruel purifier -because it is the unblemished guardian of the truth, is the best method of evidence. And, in this respect, the Framingham Heart Study, nearly sixty years old and with 3900 grandchildren "born" in 2002, is worthy of praise -an hors-concours purifier of evidence. Chronos is the father of Chiron, who raised Asclepius, a hero and the god of medicine: "...at night, Asclepius appeared in the dreams of the ill and gave them advice... in the morning the priests collected the prescriptions and explained them..." Meaningful coincidences!

II -Plato on the duty roster

Here I transcribe an excerpt from the Banquet (Symposium) by Plato (428-347 B.C.) that deals with "ascending dialectic", heading constantly upward without stopping, stage by stage, after an initiation. The object is to carpet some of those steps with the subject of guidelines. "...to pass from the love of one beautiful body, to the love "of all beautiful bodies", and after the beauty of bodies, to the "beauty of souls", then to the "beauty of actions and laws", then to that of the sciences, until ultimately, his spirit fortified and broadened, he perceives one sole science...".

III -Kafka parable

Below is a parable by Franz Kafka (1883-1924) 3. My intent in reproducing this parable is to provoke thought about conciliating guidelines with the bedside. I suggest that the term 'antagonist' be decoded as the interface between the past -the body of knowledge that we possess and that can be synthesized in a guideline -and the future -the next case -while 'he' is a physician eager to deal with beneficence/nonharmfulness.
"He has two adversaries: the first (knowledge of medicine) presses him from behind, from the origin.The second blocks the road ahead (the patient in need of expertise).He gives battle to both.To be sure, the first supports him in his fight with the second (guideline furnishing the recommendation), for he wants to push him forward, and in the same way the second supports him in his fight with the first, since he drives him back (the specificity of a case).But it is only theoretically so.For it is not only the two antagonists who are there, but he himself as well (the physician between the past and the future), and who really knows his intentions?(to lend meaning to the care provided).His dream, though, is that some time in an unguarded moment -and this would require a night darker than any night has ever been yet -he will jump out of the fighting line (preserve his autonomy in the face of the patient's autonomy) and be promoted, on account of his experience in fighting, to the position of umpire (beneficence/nonharmfulness of what the past taught for the future of the clinical situation) over his antagonists in their fight with each other." IV -Gross domestic product The gravitational pull of the cardiological ecosystem's atmosphere attracts the specialty's evolution and stabilizes it in its three layers -assistential, educational, and research.Gravity, by definition, means 'a natural force of attraction', and also means 'seriousness'.Cardiologists are zealous and have their feet planted firmly on the ground, and Brazilian cardiologists, heirs to 60 years of accumulated expertise in this specialty in Brazil, have been committed to technical-scientific biosafety, preservation of the interpersonal relationship environment, decisions that entail the least risk of offending human nature due to carelessness or negligence -in short -committed to protecting the ecosystem of cardiology. Because of their moral responsibility, cardiologists all get involved, to the best of their ability, in investments, consumption, costs, imports and exports, and consequently help promote the accumulation of scientific assets and cardiological services. The consequence is the composition of a true 'gross domestic product' of cardiology, a clinical wealth that, in the name of technical-scientific equity, has various forms of per capita distribution. It is worthwhile to reiterate one of these -the entity that plays the role of vector of the idea that optimizing clinical practice cannot be separated from the capacity of knowing how to choose literature -the GUIDELINES.The well-known highlighting in modern medical communication, which personifies scientific excellence, is a very good example of the truism "knowledge is power". V -Human warmth, ethical humidity, and a bioethical breeze In the ecosystem of cardiology, the guidelines lend the bedside a dry climate devoid of any major thermal fluctuations, with a tendency toward cold, less cloudy, and with scant wind.However, the monotony seems incapable of fulfilling human nature's various needs.Therefore, equilibrium in the ecosystem of cardiology requires at least three bedsidesoftening factors: human warmth, ethical humidity, and a bioethical breeze.The guidelines help physicians make decisions according to their own conscience.When that conscience is contrary to the evidence and its alleged universal validity, they set out to search for some agreement by means of a medicine-physician-patient dialogue that can be symbolized in a bedside triangulation 4 . 
The triad thus formed tempers respect for the limits of moderation between the intensive and the extensive, of grasp, and of the acknowledgment of inadequacies in the face of circumstances that are marginal to what was idealized elsewhere under rigid methodological controls -"...research customarily uses carefully selected populations..." 5. This proviso is reinforced, in unison, by the introductions to the updated guidelines: "...therefore, deviating from these guidelines may be appropriate under certain circumstances..." 6, or "...there are circumstances in which deviation from these guidelines is appropriate..." 7. The same is seen in the AMB/CFM Guidelines Project: "...the relation between study quality and recommendation grade is insufficient if used in an isolated manner, and it will be up to the physician to judge the form, timing, and pertinence of utilization according to the guideline..." 8.

At the bedside, inclusion criteria, exclusion criteria, limitations of the study, editorials, and letters to the editor transform themselves according to individual needs of beneficence/nonharmfulness in the ecosystem of cardiology. The actual patient's right to autonomy influences the weight of the beneficence and the weight of the harmfulness to be placed on the plates of the scales of humanization of the doctor-patient link. Prospects of harmfulness are usually feather-light in comparison to the heavy-weight measures of so-called heroic beneficence in light of imminent risk to life, and heavy as lead in view of the light nihilism of certain "discharges upon request". The guidelines can be relegated to a second level.

Virtual patients mentioned in publications represent characters to whose conflict -after having been worked out -the foreseeability of a fact was added at the end ('A' would work well; 'B', not so well). Real patients represent a conflict that contains an explanatory factor to be associated with the foreseeability of the virtual patient. The nuances of the conflict we are faced with raise additional arguments to explain the "whys" of a decision. Furthermore, the endpoints that encourage research are not necessarily identical to those of real patients, in terms of need or in terms of priority.

When the boomerang of evidence, launched with the expectation stemming from research, returns, it may be surprised by a combination of forces (given by the care provided) that changes the course that had been foreseen in view of the concept. Nature teaches: take a scientific element, which we will call hydrogen, and a clinical fact, which we will call oxygen: the first element can cause a fire, the second can cause it to spread. When the two elements combine to form water, the effects are inverted.

But it is certain that the presence of guidelines oxygenates the organizational climate of the bedside. They lend vitality and support against routine factors of clinical anoxia: the myriad difficulties of arriving at a diagnosis, the toxicity of ambivalent therapeutics, and those dreadful immersions in the swampland of prognosis.
On the other hand, it is also certain that "breathing" guidelines leads to "exhaling" carbon dioxide, which raises concern about excesses; exaggerations are associated with the risk of hypercapnia and a dulling of the clinical senses. In addition, paradoxical as it may seem, excess can result in a greenhouse effect and consequently heighten possible imbalances of humanization irradiation in the bedside atmosphere, especially at times when holes are detected in the ethical-ozone layer.

VI -Truth -an endless quest

Research finds "truths", teaching divulges them, and health care applies them. The quotation marks are justified by the sense of revelations that sound like truths because they happen to be useful and effective. The majority of these, however, are susceptible only to endless forms of discussion, because in medicine, contrary to other environments, the "self-evident" nature of truth is not the rule. Without the quotation marks, the opposite meaning could be "lie" or "error" -which certainly does not comply with the spirit of good faith in the quest for truth in medicine; nor does it comply with the conviction that not using a guideline is an error because what one knows by means of it is true.

It is in these three doctorly activities -etymologically speaking, a doctor is one who teaches a colleague or layperson, directly or indirectly -that the unchangeable truth, "devoid of competition", and the "on-call truths" circulate (or at least as far as we can see), and the latter, being viewpoints, represent approximations of the truth by the momentary force of their connection with utility within the ecosystem of cardiology. It is what distinguishes a pericardial rub -eternal in its clinical meaning -from the dosage of corticosteroids it provokes -something temporary, depending on a new conclusion regarding utility. The "on-call" truths are habitually seen as having a contingent nature. They feature a built-in uncertainty, and the "will it or will it not occur" duality is the fuel that keeps the flame of the search for the "best evidence" always burning.

The history of cardiology registers some peculiar types of immutable truth: the "adopted" truth, such as the Doppler effect (Christian Andreas Doppler, 1803-1853); or that found by a physician during a war, such as the Korotkoff sounds (Nikolai Sergeyevich Korotkoff, 1874-1920); or the "lay" truth of Musset's sign (Louis Charles Alfred de Musset, 1810-1857); or the "literary" truth (Charles Dickens, 1812-1870) of the Pickwickian syndrome (proposed by Charles Sidney Burwell, 1893-1967); or that "learned from a layperson", such as Withering's (William Withering, 1741-1799); or the "predestined", such as Chagas disease (Carlos Justiniano Ribeiro Chagas, 1879-1934); and the monomorphic truth of Ebstein's anomaly (Wilhelm Ebstein, 1836-1912) 9. One historic "on-call truth" is Peter's aphorism, which in fact we can consider an emblematic forerunner of a guideline. Michel Peter (1824-1893) "inaugurated" a Class III recommendation when he discouraged pregnancy in women with heart disease because "...it puts the mother's life at risk... it is associated with premature birth or abortion... it aggravates heart disease..." 9. There is also the truth that was idealized and is not yet realized half a century later, known as Harken's criteria for an ideal heart valve prosthesis 10.

The enthusiasm of the moment usually speaks out loudly and makes statements that quickly become obsolete: "...a remarkable characteristic of modern medical treatment..."
was written by William Osler (1849-1919), always Osler..., in the early 20th century -something along the lines of the hot-cold empathy gap 11. It is a warning of a symptom of the "syndrome of the most recently published article": an ardent "we have the truth..." instead of a guarded "we may have a truth..." The guidelines, somehow, constitute a counterbalance to this context of indisputable superiority of the latest information, even though they deal with the most recent, as long as it meets criteria such as "...a moral imperative..." 12. The context of a new fact in medical literature is different from that of a new clinical fact, and at times we tend toward group reasoning as though we were conditioned by the habits of day-to-day communication. One thing is the mobility of a temporary certainty of the clinical moment: blood glucose tests repeated several times to follow up on the treatment of decompensated diabetes, a sine qua non condition for immediate therapy. Another thing is the mutability of a temporary certainty of the scientific moment. For example: "...statins significantly reduce the hemodynamic progression of moderate-to-severe aortic stenosis, an effect that may not be related to cholesterol lowering..." -Raphael Rosenhek, Circulation, 7 September 2004 13. Nine months later: "...in view of studies by Cowell and colleagues, prescribing statins is not justified for aortic stenosis, unless due to other indications..." -Raphael Rosenhek, NEJM, 9 June 2005 14.

The guidelines are collections of "truths" that would be closest to scientific excellence: evidence of reasons for use and the probability of success -class I recommendations -or reasons for avoiding use and the probability of failure -class III recommendations.

The guidelines testify that progress in medicine is achieved by establishing "truths" of undefined validity; those of infinite validity are the minority. Each novelty is capable of "vanquishing the on-call truth" due to acknowledged superiority, or it may even re-label the one it replaced as an untruth. Thus, there is a cycle of truthfulness in the guidelines. To parody Lewellys Barker, successor of William Osler -that name again -as physician-in-chief at Johns Hopkins University School of Medicine, in his observation on early 20th-century technological advances (sphygmomanometers, electrocardiogram, x-ray) 9: if a cardiologist were to sleep for quite some time, he would wake up astounded -and feeling rather disoriented -by the changes in the environment.

The notion of a succession of truths over time, in medicine, is closely linked to a permanent state of doubt, atavistic in physicians -diagnostic hypothesis, surgical risk, guarded prognosis. Degrees of skepticism are useful drivers of advantageous replacements -the pencil-and-eraser style of research: write and erase, erase and write, in search of the "best expression". The intention to achieve perfection, knowing it to be unachievable, is inherent to the "best evidence" concept of the guidelines. The commitment to time is promising: "...the guidelines will be reviewed annually..." 6.
Inclusions and exclusions in accelerated remakes of the 'state of the art' reinforce the notion that guidelines are advisors to those reasonably up to date in knowledge of the theme and qualified to apply them. The problem lies in how much we fill in -or fail to fill in -gaps in the guidelines with what we know -or do not know -and, then, how much prevails of the information researched and of the complementation mentalized. Following are two illustrative examples based on the ACC/AHA 2006 Guidelines for the Management of Patients with Valvular Heart Disease 6. The first: "...aortic valve replacement may be considered for asymptomatic patients with severe aortic stenosis and abnormal response to..." was downgraded from "application is reasonable" (class IIa) to "could be considered" (class IIb) from 1998 15 to 2006 6. There was an impact, although in this case slight, since all cardiologists, according to their knowledge and qualification, can have their own convictions regarding the usefulness and effectiveness of functional evaluations. The second: "...aortic balloon valvotomy might be reasonable as a bridge to surgery in hemodynamically unstable adult patients who are at high risk for valve replacement..." was downgraded in exactly the same way as the first. However, cardiologists must have already changed their concept regarding the beneficence/nonharmfulness of the method quite some time before, by staying abreast of medical literature and through their own experience. In other words, anyone who had not experienced complex cases of aortic stenosis and, up to August 2006, was following a guideline valid for eight years, was running the risk of following the advice of an unsuitable "application is reasonable" recommendation. It is well worthwhile to think about the following phrase: "...within the context of rapid technological evolution, guidelines must be dynamic and reviewed/revised with greater frequency..." 16.

VII -Guideline reader -bedside author

Guidelines are practice, both noun and adjective. Three uses stand out:

Use 1 -A guide for what I should do. When faced with a clinical situation, guidelines work like a compass to orient the information to be obtained from the anamnesis, the prime signals of a physical examination, the dynamics of complementary tests, and the therapeutic conduct to deal with the manifestation and, at the same time, benefit prolongation of life and the programming of prevention. From the moral and ethical viewpoint, they should be used preferably by those who have expertise -past, tradition -in the theme: "...physicians who are not familiar with the evidence that supports the recommendation are not prepared to follow the guidelines..." 17. Guidelines must be internalized, in the sense that they help us add missing links to our chain of knowledge of the theme. On principle, guidelines must not overprotect us from lack of knowledge of the theme.

Analogy 1A -Cake recipe. Guidelines list ingredients and recommendations on how to handle them 18-20. However, the actual making of a cake requires something more, and those who apply guidelines as though they were using cake mix "...tend to disregard the patient's participation in the decision..." 19, and to disregard the fact that "...uniform recommendations may ignore a patient's special needs..."
20. One might say that the degree of acknowledgment of guidelines as "a monopoly of conduct" is reduced in the same proportion as the weight of another type of cake grows -the stack of prescriptions signed and stamped by the hands of commitment and involvement at the bedside.

Analogy 1B -Changing a light bulb. Guidelines officialize certain obvious features that are intuitive: "...postoperative visits... for patients with valve prostheses, anamnesis, physical examinations and appropriate tests should be carried out upon the first postoperative outpatient evaluation..." 6. The level of evidence is C (consensual opinion of specialists). Obviously, there is no justifying a randomized study that would sound like evaluating the usefulness and effectiveness of the sequence: get a ladder, place the ladder under the light bulb, find a new light bulb, climb up on the ladder, remove the burned-out bulb, and screw in the new bulb. No double-blind trial is suggested..., nor is there any fear that someone might suggest holding the ladder and spinning the world... The fact is that "...a well-informed and highly trained practitioner can practice content equivalent to a guideline without ever actually having resorted or adhered to the guidelines..." 20.

Analogy 1C -The ventriloquist's art. Guidelines speak with the voice of medical literature; physicians may speak with the voice of guidelines. Or could it be that the guidelines open physicians' mouths and the physicians say what serves their purpose? In situations of poor clinical outcome, blaming the guideline recommendation is tantamount to blaming the ventriloquist's dummy.

Analogy 1D -The Peter Pan effect. Guidelines, when seen merely as labels to be glued on with superficial involvement and commitment, inhibit both full accountability and the growth of clinical qualification. "...Physicians are concerned because their management of patients is increasingly worse due to a standardized and automated process..." 21. Bertrand Russell (1872-1970) left this legacy: "...Passive acceptance of the teacher's wisdom... seems rational because the teacher knows more... Yet the habit of passive acceptance is a disastrous one in later life. It causes man to seek and to accept a leader, and to accept as a leader whoever is established in that position...".

Analogy 1E -The magic flute behavior. Guidelines may be the sound that "eliminates rats", or the sound that "makes children disappear", depending on intra- and interpersonal variables. Bedside music may not be the same as the desk-side music of health care managers.

Analogy 1F -The chameleon effect. Guidelines must be integrated into the clinical circumstances and environmental aspects determined by the health care system. "...There is a vacuum between the scientific aspects of medicine and clinical practice..." 17. The analogy goes beyond this change-of-skin folkloric mimicry. It includes flexibility in a broader sense, with aspects of freedom -we need only recall that chameleons have independent eye movement. Osler (that man again): "...educating the eyes to observe facts takes time, but it begins with the patient, continues with the patient, and ends with the patient..."

Use 2 -A guide for a brief overall vision of the theme. Guidelines make it easier to appreciate the submerged part of the iceberg.

Use 3 -A guide for delving more deeply into medical literature. Guidelines orient the choice of articles that reveal the submerged part of the iceberg.
VIII -Nomenclature

• Scientific fact -the result of scientific research. Scientific fact constitutes the largest component of the cardiological ecosystem's gross domestic product. More than "an original", it must be "a universal asset", because more than scientific novelty, what counts is reproducibility and safety.

• Clinical fact -an occurrence whose existence can be indisputably verified; many could dispense with research. The following was published in the British Medical Journal 22, with this summary. Objective: to determine if parachutes are effective in preventing major trauma related to gravitational challenge. Structure: systematic review of randomized controlled trials. Data sources: Medline, Web of Science, Embase, Cochrane Library, Internet sites, and lists of citations. Study selection: studies on the use of parachutes during free fall. Points of interest: death or major trauma, defined as a lesion with severity score > 15. Results: we were unable to identify any randomized and controlled trial on intervention with parachutes. Conclusion: as is the case with many interventions that aim to prevent disease, the efficacy of parachutes has not been submitted to strict evaluation by randomized controlled trials. Followers of evidence-based medicine have criticized the adoption of interventions evaluated exclusively by observational data. It is our opinion that benefit would be forthcoming if the more radical proponents of evidence-based medicine were to organize and participate in randomized, double-blind, placebo-controlled, crossover trials of the efficacy of the parachute. A word to the wise: "...the popular belief that only randomized controlled trials produce reliable results and that all observational trials are illusions constitutes a disservice to patient care and to the education of health professionals..." 23.

• Evidence -the interpretation and qualification of scientific data is an attribute. To the judgment of the scientific finding (fact), a value judgment is added. The authors conclude and publish after passing through the filter of the publishing house. Based on the data, the power of the society of the specialty, substantiated through a committee, analyzes the status of approximation to the truth by a mixture of soothsaying (in the sense of an asset, a value, because it will come to be good for the patient) and dogmatism (in the sense of a truth to be offered to the patient), each portion of which, when joined in the realm of science, loses its everyday meaning -not well thought of in the medical field. Humanist philosopher André Comte-Sponville (1952-...) defines soothsaying + dogmatism as utopia, not in the sense of something unattainable because based on a concept judged fanciful, but in the sense of another philosophical school that sees it as something with the potential to be attained, because it is plausible to give the concept the benefit of the doubt.

• There is no "best" objective evidence -there is a choice (and a degree of subjectivity is inherent to choice) that is considered best, under certain premises, to "reintegrate itself" at the bedside upon each new case. Because we can disagree with the committee-certified qualification at any time, the evidence cannot be termed absolute truth. An insightful definition is: "...evidence is a status granted based on a fact; it reflects, at least in part, that this subjective and social judgment of the fact increases the probability of a given conclusion being true...
thus, evidence is not merely a research datum or a fact, but the result of some interpretations that cater to social and philosophical needs..." 24.

• Guideline -a structured recommendation that stems from scientific data, "reproduced" as evidence and transcending individual limitations in the figure of a committee that aspires to draw up excellence -the best that can possibly be hoped for, with the best evidence. A guideline constitutes a disclosure, colleague to colleague, stipulated and delegated to a specialty society -the most common arrangement -a clinical identity, an articulation of values "assured" in an a priori manner, a sort of authority franchise -worthy of respect for a scientific reason, but without the sense of coercion -collective, with freely granted permission to use. However, having in hand, as it were, an authorization to believe that the guideline is true, the members of the specialty society, given their due responsibilities, maintain their free will in regard to the scientific fact -after all, "scientific obedience" is not mandatory. But there is vigilance, "...once the guideline is adopted by the health care service, all physicians are expected to comply with it..." 25, and there are ways of thinking, such as "...without due reason, deviation from compliance should lead to corrective action..." or "...latitude for exercising professional judgment..." or "...replacing the vague language of 'standards of care' with explicit contractual terminology, such as 'expectations of performance', and incorporating guidelines selected and adjusted directly to physicians' routine could result in equilibrium between public concern regarding the quality of health care and physicians' interest in a fair performance evaluation review carried out by other physicians to ensure quality health care in the institution (peer review)..." 25.

The strength and value of the guidelines for consensual treatment lie in the best possible approximation to usefulness and effectiveness, in the critique and hierarchy of options, in scientific updating, and in clinical clarity as opposed to the contradictions of medical literature. Stratifying the recommendations into critical and non-critical seems to be useful in the realm of the cost-risk-benefit ratio 26.

A relevant aspect is the cultural influence, most commonly of the language used to decode the message. How is it possible to properly grasp and apply "is reasonable", "may be considered", and "might be considered", and "translate" them at the bedside into "I'm going to do it & I'm not going to do it, I'll probably do it & I'll probably not do it"?
Guidelines do not cause physicians to exist; it is physicians who lend existence to the guidelines. And that existence comes to take part in the physician's professional life in such a manner that it becomes the goose that lays the golden eggs, yet fails to reveal which came first. Guidelines are not exactly an exaltation of science; they are linked to the state of the art in medicine and, because of the "little bit more" that they provide, they require clinical reading. In other words, guidelines do not constitute a manual for those who have no clinical vision, or an interpreter for those who are deaf and mute in regard to their patients. Guidelines are not exactly an "out" for building up a stereotyped professional image. Triangles, due to the interdependence of their vertices, help one to perceive the multiple facets of what might be considered admissible as a humane attitude in regard to disease 4: there is the facet resulting from the "dialogue" between the medical recommendation (guideline) and the patient's preferences and perceptions; there is that resulting from the "dialogue" between the medical recommendation (guideline) and the physician's ethical, moral, and legal responsibilities; and there is that resulting from the "dialogue" between the patient's preferences and perceptions and the physician's ethical, moral, and legal responsibilities. Each "dialogue" has its "evidence" regarding beneficence/nonharmfulness/autonomy, and thus "...dealing with patients should be seen more as the cooperation of a team of specialists (doctor, nurse, lab personnel, patient, family, etc.) than as a physician shooting a magic bullet of authoritarian competence... protecting the freedom and equality of this cooperation would be the prime clinical objective on both individual and institutional levels..." 27.

• Conflicts of interest -conditions in which judgment of a primary interest has the potential to be unduly influenced by a secondary interest that may be linked to an economic, or even personal, social, or scientific aspect 28.

• Selected publication -an article in a scientific journal that has passed the quality control of the guideline committee and was included in the references. It represents certification as an asset of scientific value and, in our specialty, is to be held under the guardianship of the ecosystem of cardiology. Having been examined and selected to be "part of such a blessedly select group" is the high point of a publication's curriculum -the feeling of winning an Oscar for best script or a Nobel Prize for medical literature. Such an honor broadens the definition of "...primarily, a means of scientific communication, information for colleagues, proof of academic competence, a criterion for academic promotion, an argument for funding, and a fundamental prestige-enhancing factor for universities..." 12. We must also keep in mind the academic maxim, "publish or perish".
Every article is a primary source as long as it is in the hands of its author(s), a stage that includes the publication of data and the interpretation of those data by the person(s) who obtained them. The community's reading of the article is a re-interpretation -an interpretation of the interpretation actually contained in the original, or an interpretation of the non-interpretation of data -which, after having been classified and awarded merit, can become a secondary source in a revision, in an update, in a viewpoint, in the discussion of some similar article, or in a guideline. As is usually the case with a good book, it is the first paragraph -the primary source -and the last paragraph -the reproducibility -that foretell the quality of a guideline text.

• The aspects of colloquial language -It would be fitting to say "no scientific data are available", but it would not be fitting to say "no evidence is available". Availability refers to the research that generates data, not to the interpretation of those data, because any data, if available, can be qualified as evidence -good or bad, positive or negative. In like fashion, it would not be fitting to say "no guideline is available", unless that is literally the case -in other words, that no guideline was drawn up by a committee. It would not be fitting to state that no scientific data capable of serving as a foundation and guide exist. A scientific fact is not born as a guideline. First it must become a source; second, be analyzed as evidence; third, acquire a new form -that of acceptance or refusal of its recommendation status.

IX -Guidelines and moral commitment

A guideline is not an eleventh commandment. We might see it as a compact ideal of scientific data and of the value (evidence) of those data, or as a guide to what might be missing in our treatment of the patient. If we were to see it as an ideal, we would be admitting duty and subjection, but if we see it as a guide, we do away with the submission.

What we might lack could be the objectivity of a pharmacological effect, or it could be the subjectivity of the patient's perceptions and preferences. One can deduce that although a guideline may not complete a bedside course of conduct -when, for example, a patient favors emotional reasoning to the detriment of intellectual foundations -it can certainly arouse awareness of what may not have been thought out and point out possible strategies. After all, if the bedside-care concept did not exist, why would we have guidelines? But that which is capable of completing the picture nominates itself as the spokesperson of a scientific consortium that offers a product of continued education made of global raw material (we are doctors without borders), produced, packaged, and labeled in certain communities. The right to one sole quota for all harks back twenty-five centuries, acquired and eternalized in the rationale of the Hippocratic Oath: "...to teach them this art -if they desire to learn it... to give a share of precepts and oral instruction and all the other learning...". One deduces that guidelines have roots in an archetypal fraternity of medicine.
Guidelines imply awareness of the data's legitimacy, reasoning as to their reproducibility, and honesty of purpose regarding what may or may not be recommendable. One deduces that guidelines are an "open system" that turns on an axis of renewed reflection on contradictory probabilities and on the Aristotelian classification of agonist and antagonist trends that coexist in the field of medical literature.

Guidelines bring into focus a strong moral commitment -in their elaboration, in view of the data selection process, and in their application, since the intention to use what is a diffuse asset of medicine ("the best evidence" in the literature) must not clash with what would in fact be good for the patient, which is specific ("the best evidence" at the bedside) -in accordance with what is implied in the beneficence/nonharmfulness binomial.

One concept that knocks on door after door of medical scholars from the earliest days of their training is that technical availability is not a synonym of clinical recommendation -after graduating, it is also useful to knock on the door of one physician after another. Having a valve prosthesis still in its original packaging, with a presumption of "best hemodynamics", is not, in itself, an argument for replacing an abnormal native valve, knowingly in "worst hemodynamics". And not having something available is not an antonym -the lack of expertise to perform a mitral balloon valvuloplasty does not eliminate that procedure from the list of options to be mentioned by the physician who practices autonomy with a patient suffering from mitral stenosis. One might add that the relevance of the moral aspects of a decision is proportional to the degree of risk of each option to attain benefit and/or avoid harm. One can deduce that the effort to achieve a communion of interests in the doctor-patient relationship is an indisputable stimulus for the collective "best evidence" in the literature to adjust itself to the individualized "best evidence" at the bedside.

Guidelines constitute a reciprocity agreement among colleagues. By means of a tacit agreement of wills, a physician furnishes a clinical situation and the guideline committee offers a recommendation certified as reliable. This is in compliance with "...formulation of a clear patient-based clinical issue... search for relevant articles in medical literature... critical evaluation of the evidence... selection of the best evidence for the clinical decision... linking of the evidence with clinical experience, knowledge, and practice... implementation of the useful findings in clinical practice..." 29. Be it a convention, "...echocardiography is recommended annually for patients with asymptomatic mitral stenosis and mitral valve area >1.5 cm²..." 6, or an imperative, "...mitral balloon valvotomy is not recommended in patients with mild mitral stenosis..." 6, there is a capital commitment related to the Hippocratic "...to no one shall I give advice that induces loss..." One can deduce that guidelines incorporate a symbolism of service rendered with zeal and prudence.

It just so happens that, because this process involves an exchange, it requires an endorsement: we must verify whether we, putting ourselves in the place of the guideline committee and having total autonomy, would apply the same process that resulted in the final product we are accepting -"we" in this case referring to any cardiologist who is familiar with heart disease, whether active in academic life or not. The answer is complex. One can deduce that it must be broken down.
During the phase of panning for scientific data, we could explore the same veins of the 'literature mine' and make an identical selection and classification of nuggets -restrictions are not usually of an intellectual nature; they are on duty on days that have "only" 24 hours. We can conclude that, conceptually, this first half of the answer has a good chance of being 'yes, we would use an analogous process'.

The answer in the phase of certifying scientific data as evidence -where those data supposedly closest to the truth (useful and effective intervention) are chosen -has a good chance of being ambiguous. The tasks of ranking evidence and making the "best evidence" hegemonic are subject to subjective judgment and are therefore far from neutral. One can deduce that giving a 'yes' in this second part is associated with a high degree of transfer of trust to the patron committee.

In the case of a double 'yes', approval is complete and we project onto it the outlook of the best result. We feel that a result differing from what was expected would not be due to malpractice. One of the angles of the concept of guidelines is exactly to shield, as far as possible, from imprudence -because by following the guidelines we would not be doing something that the majority does not do either -as well as from negligence -because we would not be failing to do something that the majority does -so the guidelines would represent the guardian of "...acting in the benefit of the patient..." (Art. 6 of the Code of Medical Ethics). But there is the other side of the bedside: adherence to a guideline can become imprudent when its beneficence is not passed through the filter of non-harmfulness. Here is an example in valve treatment. Let us suppose that there has been adherence to the following recommendation: "...a bioprosthesis is indicated for mitral valve replacement in a patient who will not receive anticoagulation, is incapable of receiving anticoagulation, or has a clear contraindication to anticoagulation therapy..." (class I recommendation in the selection of a mitral prosthesis) 6. Immediately following implantation of the bioprosthesis selected because the patient does not have a profile for anticoagulation, care must be taken to avoid adhering to "...during the first three months after replacing the mitral valve with a bioprosthesis, the use of warfarin is reasonable..." (class IIa recommendation for antithrombotic therapy in patients with prosthetic heart valves) 6. One deduces that standardization by guideline, as an ideology, must have interdependencies legitimized as being of real interest to the community.

To be well suited to the intended result, guidelines should not be seen as bedside "package inserts" or as "cheat sheets" for examining patients. For those who have not read a bare minimum of the articles mentioned in the references, guidelines are little more than caricatures of the available literature. In fact, when guidelines are not read as the revelation of secrets, they contribute to professional success in that they represent a second reading, in accordance with the concept that "...Curiously enough, one cannot read a book: one can only reread it..." (Vladimir Nabokov, 1899-1977). Nabokov teaches us that the first reading leaves a sediment, a mark of the effort to understand, and that when we are faced with a need, we recover the stored recollection -a sort of mental search-copy-paste that produces a second, more immediate reading, a second look at the literature just to make sure.
It is essential that physicians who intend to hitch a ride on the guidelines do not do so without first checking the itinerary at the ticket booth of experience. At this point, "doubly informed" and clarified by their own clinical sensibility, they can fulfill one more of Osler's words of wisdom (one more and he will be a co-author): "...lack of systematic personal training... leads to... misapplication..." For everything else we have the "credit card guidelines", which even admit an effort to develop "...systems of guidelines interpretable by computers and targeting non-specialists..." 30. It brings to mind pioneer scientific publications, centuries ago, which were anagrams that preserved the credit of authorship but were intelligible only to those who shared the password.

It hardly seems ethical to consider ourselves multispecialized simply because we have access to a collection of guidelines. Contrary to what many may believe, one can deduce that guidelines are not meant to be used as Personal Protective Equipment (PPE) by those who are not intimate with the subject. Guidelines may even give us a feeling of being free and protected, but using "gloves and goggles" without a proper initiation ritual hampers our sensitivity to light touch and our visual acuity, and we therefore run the risk of protected freedom in a labyrinth. Once caught up in that labyrinth, we will have to face a series of realities with which we are not familiar, and we will soon start contradicting ourselves. It is something like "not confirming tomorrow the untruth that you uttered today", because those who are unable to tell a story based on their own experience -the more kaleidoscopic, the more they tend to base their practice on theory -will probably not feel at ease playing the role of the character that "recites guidelines" at the bedside with the authenticity of an expert's opinion. In situations far from the clinical interrelations of our daily routine, applying guidelines like cake recipes -other than in cases of "extreme isolation from colleagues", when the guidelines become lifesavers -constitutes, ethically speaking, a borderline case of Munchausen's syndrome. Hippocrates would say to Osler: they forgot that they swore "...I will follow that system of regimen which, according to my ability and judgment, I consider for the benefit of my patients..." -and Osler would reply: "...lack of systematic training is apt to place us, in the eyes of the public, on a level with empirics and quacks..."
If some fact is interpreted as the most applicable, in complementation there must be another interpreted as the least, but not necessarily inapplicable. Likewise, if there is best evidence, we can assume that there is also worst, but not necessarily bad. Comparisons are made in relation to non-absolute references, not as when one uses a placebo. Thus, evidence should not admit any appreciation other than as the representation of the probability -and not the veracity -that recommends (class I) or discourages (class III). In other words, the dictionary of biostatistics respects alphabetical order: probability (evidence closest to success) always precedes veracity (proven success). British authors Lockey, Crewdson, and Davies of the London Helicopter Emergency Medical Service 31 analyzed a highly complex biopsychosocial situation -cardiac arrest. They observed that 19% (13 of 68) of the survivors of post-traumatic cardiac arrest -a situation that carries a dismal prognosis and in which many consider resuscitation to be futile -contradicted recently published guidelines. The conclusion was: "...adherence to the guidelines may rule out a number of patients with chances of survival..."

And what happens with the "less-than-best evidence"? What do those data represent that, making up the gross domestic product of cardiology and far from being scientific trash, acquired the status of a lower grade of evidence on the scale of scientific value in accordance with the agreed criteria? They make up a sort of vice-evidence of the "best" evidence. This being the case, there is no denying their legitimacy to take first place when the lead evidence is barred. A strong argument in favor of the view that "less than the best evidence" is not the antithesis of beneficence is that we know how improbable it is that all patients with the same disease will always require the same prescription 32. One can deduce that recommendations, of both classes I and II, can determine similar results at the bedside regardless of the degree of probability we may sense in the scientific source.

This raises the question: the "best" evidence for whom? If it is for the guideline committee, it is insufficient. If it is for the academic satisfaction of a pathophysiological concept, it is insufficient. If it is for the administrative view of recomposing the cost-benefit ratio, it is insufficient. If it is for the physician who will apply it, it is insufficient. If it is for beneficence/nonharmfulness following the patient's informed consent, it is sufficient. One can deduce that the guarantor feedback "guideline evidence"-"bedside result" depends on the multiplicity of case series, because "...observation data can cooperate with the randomized trials to confirm or not whether the same efficiency obtained under controlled conditions occurs in routine practice..."
33 -curiously enough, an inferior scientific method in the concept of evidence. "On-call truths" dealing with prevention show notorious differences, in relation to the timing for appreciation of usefulness and efficacy, as compared to those dealing with therapeutics. Therapeutics features the proposition -opening the mitral valve by an intervention in view of acute pulmonary edema; the execution -obtaining a mitral valve area of 2.1 cm² by means of balloon valvuloplasty; and the result -functional class I, in the short term. As far as clinical epidemiology and biostatistics and their effects on Evidence-Based Medicine are concerned, the conceptual beneficial effects of a reduction (qualitative and quantitative) in exposure to hypercholesterolemia are less explicit in the real-life history of the "nearly ill". This finding, twenty years after the catch phrase "...a dimmer reduces the light when we leave the plasma compartment and enter the arterial wall..." 34, brings to mind the teachings of epidemiologist Geoffrey Rose, famous for formulating that the concept ill x non-ill (medicate or observe) is a dichotomous clinical logic that does not apply to the population realm, where there is a subclinical continuum: "...in a mass population policy, a small benefit for each individual can become unexpectedly large..." 35, and who warns that the nature of the beneficence of a change in habits cannot be seen separately from "...disease is a restraint applied to daily routine... medicine must insert itself in the context and values of the restraints of each life..." 36.

A paradox with ethical implications seems to be taking shape: "...according to evidence-based medicine, there is no acceptable evidence that evidence-based medicine is a way to the truth... randomization, when there is a discrepancy with some other method, is a belief, not evidence. After all, divergence between studies is the rule, not the exception; otherwise, if everything were identical, why would there be any need for systematic review?" 5. Remember that one of the most common expressions in cardiologic literature is "conflicting results" -a justification for new research projects. One can deduce that it is valid to examine the context of the guidelines from the point of view of current medical culture, resorting to a metonym linked to Gaius Maecenas, a Roman citizen of the imperial era who helped the autocracy satisfy the interests of the aristocracy: do guidelines, given the determination to gather a valuable collection, correspond to the Maecenas of the state of the art in the ecosystem of cardiology?

X -"Guideline-Centrism": the current era's cultural phenomenon

Guidelines immigrated to the Brazilian ecosystem of cardiology about 15 years ago, and upon their arrival demonstrated international competence for a veni, vidi, vici attitude in regard to selecting & interpreting & organizing. They never failed to recall the difficulties represented by the notion that extracting a finding from the "scientific recipient" was no guarantee of its subsequent smooth insertion -now in the form of evidence -into the "recipient patient", due to: a) a not very realistic -unskillful -bedside vision, "...the proclaimed hemodynamic superiority of a stentless bioprosthesis does not necessarily translate into best clinical evidence..."
37; b) ambiguity of the recommendation; c) the lack of tips on application; d) the lack of encouragement in regard to educating the patient. The guideline's identity was registered on the immigration form: a specialized committee's recommendation, based on qualified scientific data, for zeal and prudence in the management of specific clinical circumstances. The admission opinion was: [guidelines] should contribute toward cultural improvement, divert from medical judgment any unwarranted trust, and lend structured and responsible support.

Adherence to the guidelines came face to face with the "teflon effect": while "...some practitioners found the guidelines useful, others described them as anti-intelligent..." 20. An atmosphere of postural ambivalence led to something like a 'good past conduct certificate' issued by experience vs. a 'good past conduct certificate' issued by a committee.

And why is that? Because clinical habits were always strong, fed back by the fact that practitioners feel confident about what they do and protected by a feeling of veneration toward antiquity, as Professor Décourt 38 reminds us; because dealing with change depends on each individual's rhythm; because there is always a climate for a fantasy version; and because people always fear impositions -realities worked out with a high degree of freedom could dictate standards and then, in a climate of restriction of that freedom, every standard would dictate reality. Concerns in regard to ethical attitudes are obvious: "...ethics should value the use of the best available in medical research, but the ethics of an attitude cannot be limited to the best medical evidence..." 39.

Fears? That a movement intended to sanitize the impurities of medical literature might serve as a pretext for authoritarian interference of the classes of evidence in the Hippocratic and Oslerian traditions of dealing with conflict, which could thus lead to centralized control in the power relations of the ecosystem of cardiology and, because of the deformation of historic roots or the distortion of realities, bring about a consequent threat of ostracism for conventional medicine; that freezing knowledge in a guideline could depreciate certain skills classically considered essential to being a physician, such as skepticism, with its permanent doubting attitude, or self-criticism and its lessons; that exalting the superiority of guidelines would be to depreciate experience itself, which would depreciate truth, which in turn would depreciate a dream -something like the feeling that physicians would disappear, replaced by professionals who medicate.

Benefits? Anti-imprudence & anti-malpractice, useful when making a clinical decision; a less abstract connection with secular principles; individual or group teaching; and a guide for costs. In other words, a favorable cost-risk-benefit equation.

Harmfulness? Instilling a conformist and complacent attitude; going counter to awareness of inductive/deductive reasoning in the practice of medicine; serving as an instrument of "sub-clinical" intentions to limit professional practice; and representing a sword more than a shield 40 in the case of lawsuits.
Prospects? That selection, analysis, construction, and incorporation would increasingly gain receptivity and that guidelines would be appreciated as weavers of clinical strategies; that coexistence might gradually lead to a softening of the "orders to fulfill" image, because this image carries with it the assumption that a state of disease should -or could -be controlled, partially or completely, only according to the reasoning of its recommendations, and that, thus, all the rest could be iatrogenic. And we must not assume that guidelines "...that fail to solve all the uncertainties of habitual medical practice and should be seen merely as strategies aimed to improve the quality of health care..." 41 are an active immunization against iatrogenesis, since guidelines building up antibodies against malpractice by means of the knowledge they impart is one thing, and how far undue attitudes toward patients can go in multiplying antigens is another.

Based as they are on the ideology of Evidence-Based Medicine, the guidelines:

1. Progressively broke through "mental frontiers". Here and there they managed to overcome a climate of indifference and the image of second-hand products -in the form of reviews -and expanded on an ascending course: scientific fact - evidence - clinical result. They won over sympathizers of the concept that guidelines are assets and that approving and applying them is a joyful obligation. Guidelines encrusted themselves like barnacles on the clinical consciousness and came to influence the generation of physicians with higher medical association numbers as of their internships -that generation that never had any tailor-made clothes and, accustomed as it is to "off-the-rack" (prêt-à-porter) clothing, must be warned against "already-thought-out" (prêt-à-penser) guidelines.

Nevertheless, the guidelines aroused adversaries of this "moral duty" attitude -adversaries who attributed a mystic character to them -more "...faith in a dogma that confuses evidence with truth..." 12 than reason by means of proof. Even taking into account those who usually manifest a compulsively independent attitude in regard to prevailing opinion, it seems that rejection stems from a pretentious exaltation of the self-evaluated superiority of the adjective "Evidence-Based" as compared to conventional -and perhaps "unadjectivable" -medicine, since wisdom is merely wisdom.

In the course of these last 15 years, medical literature has collected countless tales of grief in regard to the underutilization of the guidelines: "...they're more preached than practiced..." 20. By the tone of many articles, suggestions for increasing the degree of compliance 42-44 achieved only modest results. To judge by the results of the many pro-adherence juggling acts, a safety net under the trapeze is still essential -and a very well anchored net at that.

A certain "Official Gazette" nature of the scientific literature of the ecosystem of cardiology raises controversy. The Norwegian government 32, for example, considered simply mailing the guidelines to physicians insufficient to change prescription habits and reduce costs. A proactive attitude featuring organized visits to physicians was frustrating: despite this personalized promotion, 83% of the patients of the physicians visited failed to receive thiazides for hypertension in accordance with the guidelines. Even so, this result was better than the 89% in the control group, whose physicians received the guidelines by mail and simply trashed the envelope unopened;
2. Highlighted by praise for advances in science and technology, and by the fear of vulnerabilities stemming from them, the guidelines came to be seen as an avant-garde value, an ever safer future, and a safeguard of a present more scientific than the past;

3. Incorporated in the image of modernity and referenced on progress, the guidelines optimized themselves as tutors, those that know what is best, highlighting from the protocol of intentions: unquestionable reliability in their elaboration, sagacious interpretation of the content of the references, and objective contribution to people's health;

4. The guidelines legitimized themselves as partners, advisors, and teachers, with altruism and confidence. Databases facilitated communication: "...The Guidelines Finder of The National Library for Health provides an index with more than 1500 guidelines, updated weekly and featuring easy download..." (45);

5. The intelligentsia adopted them with the spontaneity of critical post-reflection, but they also spread in other forms that, on the contrary, raise critical reflections and, in the name of conflicts of interest, managed to gather sponsors and promoters by recommending the collection of "a practical encyclopedia";

6. They assimilated the "good tree-good fruit" concept and articulated themselves with both bedside decisions (tree by tree) and health care managers' desk-side decisions (the forest);

7. They took care to present themselves as clinically friendly, to ease the clinical encounter, and scientifically concise, to provide re-encounters in the midst of the labyrinths of medical literature;

8. They promoted a curricular remodeling with the highlight on evidence (interpreted data);

9. They filled gaps in the university training space and in the time for professional updating, encouraged capacities for a next conscious step, and catalyzed interdisciplinarity. As a point of honor, they made an effort to make very clear what is evidence and what is opinion as the basis for giving answers at the bedside;

10. They were harshly criticized, indirectly, through critical vigilance over bias in their support base, in regard to: changes in research in the course of its execution; arbitrariness of endpoints; discrepancies between statistical and clinical meaning, "...statistical meaning must not be confused with clinical importance... biological pertinence, the test of the power of theoretic consideration and the force of correlated evidence are more important than the value of p... the erroneous idea that one sole number can capture both delayed effects of an experiment and the meaning of one sole result..." (46); non-publication of unfavorable results; the effects of polypharmacy; and conflicts of interest (47). They were unable to eliminate the reductionist impression of a certain "magical realism" (class I vs. class III), of well imposed occultism (class IIa vs. class IIb), and even of "canned" medical literature.

Sources of Funding: There were no external funding sources for this study. Study Association: This study is not associated with any graduation program. Potential Conflict of Interest: No potential conflict of interest relevant to this article was reported.
Oligodendrocyte Progenitor Cells Increase Chi3l1 Secretion in Exosomes to Activate Myh9 and Promote Proliferation through Connexin47 Connections to Astrocytes

Background: Neurodegenerative diseases, caused by the loss of neurons or of the myelin sheath, are some of the most important neurological diseases threatening the health of the elderly. In the CNS, oligodendrocytes (OLs) are the only cells that can form myelin. Astrocytes (ASTs) play a generally beneficial role in remyelination, including in the proliferation and differentiation of oligodendrocyte precursor cells (OPCs) into OLs. However, the specific downstream mechanism is unclear. Methods: This study investigated the proliferation of OPCs in OPC mono-culture, OPC culture with AST supernatant, and AST-OPC co-culture. Gene Ontology (GO) analysis was used to analyze the differentially expressed genes after transcriptome sequencing of these OPCs. Electron microscopy, nanoparticle tracking analysis (NTA), fluorescence tracing of exosomes, and western blot were used to evaluate the effects of exosomes. Pull-down, co-immunoprecipitation (Co-IP) and mass spectrometry analyses were conducted to find the downstream signal transmitting proliferation information into OPCs. Results: Direct-contact co-culture of ASTs and OPCs promotes the proliferation of OPCs. After Cx47 siRNA interference under AST-OPC co-culture, Chi3l1 secretion in exosomes decreased correspondingly, and OPC proliferation decreased. The cell proliferation induced by Chi3l1 was inhibited after siRNA interference with Myh9, and the expression of cyclin D1 was also decreased. Conclusions: These results suggest that ASTs transmit information to OPCs by increasing the gap junction channel Cx47, thereby promoting the secretion of Chi3l1 in exosomes of OPCs. The secretory form of Chi3l1 in exosomes may enter the target cell more readily than Chi3l1 in the extracellular supernatant, which is beneficial to the activation of Myh9 to promote OPC proliferation. This may be a potential target for drugs rescuing neurodegenerative diseases related to remyelination.

Introduction

The incidence of neurodegenerative disease is gradually increasing, and neurodegenerative disease has become a major disease type that jeopardizes the physical and mental health of the elderly population. Many studies have confirmed that neurodegenerative diseases, including Alzheimer's disease, Parkinson's disease and multiple sclerosis, are associated with damage to and destruction of the myelin sheath [1][2][3][4]. In the central nervous system (CNS), oligodendrocytes (OLs) are the only cells that can form myelin. The proliferation of OLs depends on the proliferation and differentiation of oligodendrocyte precursor cells (OPCs) [5]. When the body undergoes myelin damage, OPCs that retain proliferative potential proliferate, migrate to the damaged region of the myelin sheath, and differentiate into mature myelin-forming OLs, thereby replacing the myelin sheath that was lost [6,7]. It is therefore very important to regulate OPC proliferation for the prevention and rehabilitation of neurodegenerative disease. Astrocytes (ASTs) are the glial cells with the largest number and volume in the CNS, supporting, protecting and nourishing other nerve cells. There are extensive gap junctions (GJs) between ASTs and OPCs that play a vital role in the process of myelination [8,9]. Among these GJs, the most common is the connexin 47 (Cx47)/Cx43 type [10,11], which, under normal conditions, has an irreplaceable effect on myelin formation [12].
OPCs can connect with ASTs through Cx47 (expressed in OPCs). Loss of Cx47 causes damage to the myelin sheath and severe demyelinating lesions [13,14]. Chitinase-3-like protein 1 (Chi3l1) is a secreted glycoprotein implicated in cell proliferation and inflammation. For example, Chi3l1 increases the malignancy of tumors in gliomas [23,24]. Chi3l1 is also expressed in the brain, as an inflammatory marker in Alzheimer's disease [19,25]. Exosomes are small vesicles containing complex proteins; they can regulate the biological activity of target cells by carrying proteins, nucleic acids and lipids. There has been no report that Chi3l1 exists in exosomes. Moreover, the mechanism by which ASTs regulate the proliferation and differentiation of OPCs is unclear. Studies of the GJ protein Cx47 and of Chi3l1 in OPC exosomes are currently lacking, and the relationship between Cx47 and exosomal Chi3l1 has not yet been investigated. Here, transcriptome sequencing, electron microscopy, NTA, pull-down, co-immunoprecipitation (Co-IP) and mass spectrometry identified expression changes in Cx47, Chi3l1 and exosomes, and we then explored the mechanism linking Cx47 and Chi3l1 in OPC proliferation.

Materials and Methods

Experimental animal source. The study was approved by the Ethics Committee of Chongqing Medical University. Experimental animals were provided by the Experimental Animal Center of Chongqing Medical University.

Collection of B104 supernatant. The frozen B104 cell strain was rapidly thawed in a 37 °C water bath, and the thawed cell suspension was added to a medium containing 12% fetal bovine serum (FBS, Zhejiang Tianhang Biotechnology, 11011-8511) and centrifuged at 1000 rpm for 5 min. The cells were then plated in a petri dish containing 12% FBS medium, which was changed every other day until the cells were 70%-80% confluent. The medium was replaced with a medium containing 1% N-2 supplement (Gibco, A1370701), and cells were incubated for 4 days at 37 °C. Then, the supernatant was centrifuged, filtered, collected and stored at -80 °C.

Primary culture of OPCs. The cerebral cells of 1-3-day-old Sprague-Dawley (SD) rats were cultured, and 12% FBS (Gibco, 10099-141) medium was added. After 4-5 days, the medium was replaced with a selective medium containing 20% B104 supernatant and 1% N-2 supplement. Cells were cultivated for 4-6 days. OPCs were separated by digestion with 0.01% EDTA, which cannot detach ASTs because of its weak digestion, and the OPCs were seeded in new medium.

Primary culture of ASTs. The cerebral tissue of newborn SD rats was trypsinized (Gibco, 40127ES60) for 5 min, and the cells were seeded in poly-D-lysine (PDL)-coated dishes and cultured for 5-6 days in a medium containing 12% FBS.

OPC mono-culture (group O). Primary cultured OPCs were digested with 0.01% EDTA and seeded in petri dishes coated with PDL. A medium containing 1% N-2 and 20% B104 supernatant was added, and cells were incubated for 4-6 days.

OPCs cultured with AST supernatant (group C). ASTs and OPCs were digested with 0.25% trypsin and 0.01% EDTA, respectively, and ASTs were seeded in the upper chamber above the OPCs. A medium containing 1% N-2 and 20% B104 supernatant was added. Cells were incubated for 4-6 days.

ASTs and OPCs co-culture (group A). After ASTs were seeded in PDL-coated dishes for 1 day, OPCs were seeded on the cell surface of the ASTs. A medium containing 1% N-2 and 20% B104 supernatant was added. Cells were incubated for 4-6 days, then digested with 0.01% EDTA when collecting cells.

Cx47 siRNA interference (group Cx47si). Interfering agents were added to co-cultured ASTs and OPCs according to the instructions of the siRNA Interference Kit (RiboBio).
NC siRNA was a scrambled RNA used as a control, and the Cx47 interference sequence was CCGAGAAGACTGTCTTCTT. Cells were digested with 0.01% EDTA when collected.

siRNA interference of Cx47 + exosomes (group Cx47si+). Purified OPCs in direct contact with ASTs were cultured in OPC proliferation medium for 24 h. Then, Cx47 siRNA was transfected and about 9 × 10^10 particles/ml of exosomes were added for 24 h at the same time.

Myh9 siRNA interference. Interfering agents were added to OPC mono-cultures according to the instructions of the siRNA Interference Kit (RiboBio). The interference sequence of Myh9 siRNA1 is GCCTGTTCTGTGTGGTCAT; siRNA2 is GCATCGAGTGGAACTTCAT; siRNA3 is GCGTGACTGGTCTCCTTAA.

Pull-down (group goat and group Ch). Cell lysate was added to OPCs to extract total cellular protein. The proteins were mixed with goat IgG (group goat) (Beyotime, A7007) or Chi3l1 antibodies (group Ch) (Santa Cruz, sc-393484) and incubated overnight at 4 °C with shaking on a shaker. Then, agarose beads (Santa Cruz, sc-2344) were added to the protein, incubated overnight at 4 °C, and collected. Loading buffer was added to the agarose beads and boiled for 10 min, followed by gel electrophoresis. The gel was subjected to Coomassie blue staining, then the protein bands obtained after staining were cut out and subjected to mass spectrometry to identify the protein species.

Co-immunoprecipitation (Co-IP, group goat and group My). The total cellular protein of OPCs was extracted, mixed with goat IgG or Myh9 antibodies (PRB-440P, Biolegend), and incubated overnight at 4 °C with shaking on a shaker. Then, agarose beads were added to the protein, incubated overnight at 4 °C, and collected. Loading buffer was then added to the beads and boiled for 10 min. Next, the final obtained protein was analyzed by western blot.

Flow cytometry. OPCs were digested with 0.01% EDTA and fixed with 75% alcohol overnight at 4 °C. The cell cycle was detected by flow cytometry.

Transmission electron microscopy. The exosomes of each group were resuspended in PBS and dropped onto the copper mesh grid for electron microscopy in the form of water droplets. The droplets were retained on the copper grid for 1 min and fixed in 2% uranyl acetate for 1-10 min. The grids were then dried naturally at room temperature and observed and photographed under a 120 kV bio-transmission electron microscope.

Nanoparticle tracking analysis (NTA). The exosome samples were appropriately diluted using 1X PBS buffer to measure the particle size and concentration. Exosome particle size and concentration were measured by NTA at VivaCell Biosciences with a ZetaView PMX 110 (Particle Metrix, Meerbusch, Germany) and the corresponding ZetaView 8.04.02 software. NTA measurement was recorded and analyzed at 11 positions. The ZetaView system was calibrated using 110 nm polystyrene particles. Temperature was maintained at around 23 °C-30 °C.

Western blot. After the total protein of OPCs was extracted, loading buffer was added and the sample was boiled for 10 min. The protein was then subjected to gel electrophoresis, and the strips were cut and electrotransferred. After the strips were blocked with 5% skim milk powder for 2 h, they were incubated with the primary antibody.

EdU. EdU incubation and fluorescent staining were performed according to the protocol for the EdU kit (RiboBio, C10310-1, C10310-3). Fluorescence microscopy was used to detect cellular fluorescence.

Exosome isolation and quantification. Exosomes of each group were isolated from 50 ml of OPC proliferation medium during a 36 h culture period and were suspended in 500 µl PBS. NTA and western blot analysis of the exosomal marker protein Alix were used for exosome quantification.
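As a small illustration of how repeated readings from the 11-position NTA protocol described above might be summarized, the following minimal Python sketch computes mean ± SD for size and concentration; all values are hypothetical, not data from this study.

```python
import statistics

# Hypothetical NTA readings from 11 measurement positions (illustrative only).
readings_nm = [131, 137, 140, 129, 135, 138, 133, 136, 134, 139, 132]   # particle size, nm
conc_per_ml = [8.1e10, 7.6e10, 8.4e10, 7.9e10, 8.2e10, 8.0e10,
               7.8e10, 8.3e10, 8.1e10, 7.7e10, 8.5e10]                  # particles/ml

print(f"size: {statistics.mean(readings_nm):.1f} ± {statistics.stdev(readings_nm):.1f} nm")
print(f"concentration: {statistics.mean(conc_per_ml):.2e} ± "
      f"{statistics.stdev(conc_per_ml):.2e} particles/ml")
```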
Cellular uptake of exosomes. Exosomes were labeled with the PKH26 Red Fluorescent Cell Linker Kit (Sigma-Aldrich, USA). OPCs were co-incubated with labeled exosomes at 37 °C for 24 h. Images were acquired by confocal microscopy after mounting with Antifade Mounting Medium with DAPI.

Statistical analysis. One-way analysis of variance was used in this study. All data were processed using SPSS 17.

Results

Direct co-culture of ASTs and OPCs promotes the proliferation of OPCs. Mono-cultured OPCs were labeled with PDGFRα (Fig. 2A, green fluorescence), and mono-cultured ASTs were labeled with GFAP (Fig. 2B, red fluorescence). Cell purity reached more than 95%. OPCs were cultured under three different culture conditions (AST-OPC direct co-culture, OPCs cultured with AST secretions, and OPC mono-culture). EdU staining showed (Fig. 2B and C) that the proportion of neonatal OPCs (red fluorescent-labeled) increased significantly under culture in direct contact with ASTs, and flow cytometry (Fig. 2D and E) revealed that the proportion of OPCs entering the S phase of DNA replication also increased significantly, from (6.10 ± 0.27)% in group O to (11.69 ± 3.27)% in group C and (21.46 ± 2.89)% in group A. Therefore, ASTs promote OPC proliferation through direct contact with OPCs.

Figure 2 legend. A: PDGFRα is specifically expressed in OPCs, labeled with green fluorescence; GFAP is specifically expressed in ASTs, labeled with red fluorescence. B: Red indicates the nuclei of newly generated cells labeled with EdU; blue indicates nuclei labeled with DAPI. C: Statistical analysis of the proportion of newborn cells in each group, **p<0.01, n = 9. D: Statistical analysis of each group's cell cycle, **p<0.01, n = 9. E: Detection of the cell cycle of OPCs under three different culture conditions by flow cytometry.

ASTs promote OPC proliferation by inducing Cx47 expression in OPCs. The OPCs collected under the above three conditions were subjected to transcriptome sequencing, and differentially expressed genes were screened by Gene Ontology (GO) analysis. Genes enriched in channel activity (see Supplementary Material 1) are displayed in a heat map (Fig. 3A). Among these genes, the difference in Cx47 expression was significant among the three groups. To examine the relationship between Cx47 and OPC proliferative capacity, the expression level of Cx47 after Cx47 siRNA interference was tested under AST-OPC co-culture conditions. Western blot and immunofluorescence confirmed that Cx47 siRNA successfully reduced the expression of Cx47 (Fig. 3B, C, D and E). Then, we examined the proliferative capacity and cell cycle of OPCs after Cx47 siRNA interference. The relative number of red fluorescent-labeled neonatal cells was significantly reduced after decreasing Cx47 expression (Fig. 3F and G), and flow cytometry revealed that the ability of OPCs to enter the S phase of DNA replication was inhibited with decreased Cx47 expression (Fig. 3H and I). Thus, the ability of ASTs to promote the proliferation of OPCs is closely related to Cx47.

Cx47 regulates the expression of Chi3l1 in OPCs under AST-OPC co-culture conditions. Cx47 promotes the proliferation of OPCs. GO analysis of the transcriptome sequencing of OPCs under Cx47-present (group A) or Cx47-absent (group Cx47si) conditions revealed that the proliferation-related genes are mainly concentrated in extracellular exosomes (Fig. 4A).
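As an aside on the statistics used for the group comparisons above: the study ran one-way ANOVA in SPSS. A minimal equivalent in Python is sketched below; the per-replicate values are hypothetical, chosen only to echo the reported S-phase means (6.10 ± 0.27%, 11.69 ± 3.27%, 21.46 ± 2.89%), not the actual raw data.

```python
from scipy.stats import f_oneway

# One-way ANOVA across the three culture conditions (hypothetical replicates).
group_O = [5.9, 6.1, 6.3]      # OPC mono-culture
group_C = [8.9, 11.3, 14.9]    # AST supernatant culture
group_A = [18.6, 21.8, 24.0]   # AST-OPC direct co-culture

F, p = f_oneway(group_O, group_C, group_A)
print(f"F = {F:.2f}, p = {p:.4f}")   # p < 0.01 is consistent with the reported difference
```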
A secondary analysis of these genes (see Supplementary Material 2) by volcano map suggested that Chi3l1 may be a downstream signal for the Cx47-induced proliferation of OPCs (Fig. 4B). Therefore, we examined the expression of Chi3l1 after Cx47 siRNA interference under AST-OPC co-culture conditions. The western blot results showed that the gray value of the Chi3l1 band decreased from (1.01 ± 0.05) in group A and (1.04 ± 0.22) in group NCsi1 to (0.48 ± 0.15) in group Cx47si (Fig. 4C and D). The immunofluorescence results also confirmed that the expression of Chi3l1 in OPCs decreased after Cx47 siRNA interference (Fig. 4E and F).

Figure 4 legend. A: GO analysis was performed on the differentially expressed genes of group A and group Cx47si; combined with the bubble plot and barplot results, the differentially expressed genes of the extracellular exosome were selected for further analysis, |log2(FC)| ≥ 1.5. B: Volcano map analysis of the differential genes enriched in extracellular exosomes. The red and green dots denote the upregulated and downregulated differentially expressed genes, respectively. C: Western blot detection of the expression of Chi3l1 after Cx47 siRNA interference. D: Statistical analysis of the expression of Chi3l1 by western blot, **p<0.01, n = 5. E: Statistical analysis of the expression of Chi3l1 by immunofluorescence, **p<0.01, n = 9. F: Detection of the fluorescence intensity of Chi3l1 after Cx47 siRNA interference.

Cx47 regulates the secretion of Chi3l1 in exosomes to promote OPC proliferation. Cx47 promotes the secretion of exosomes by OPCs. NTA revealed that the exosomal concentration increased from 9 × 10^9 particles/ml in the CO-OPC group to 8 × 10^10 particles/ml in the AST-OPC group. The exosomal concentration was significantly reduced in the Cx47 siRNA group (8.8 ± 0.4 × 10^9 particles/ml) compared with the AST-OPC group (1.5 ± 0.1 × 10^10 particles/ml) (Fig. 5A and B). 98.6% of the exosome particles were approximately 135 nm in size (Fig. 5A and C). Exosomes from the cultures were characterized by TEM and western blot analyses. TEM demonstrated vesicles of approximately 100 nm in diameter (Fig. 5C). Western blot results confirmed the exosome-specific markers Alix and CD63 and the expression of Chi3l1 in OPC-secreted exosomes (Fig. 5D).

Cx47 regulates the secretion of Chi3l1 in exosomes. Exosomal secretion was significantly increased in the AST-OPC group compared with the CO-OPC group. The expression of Chi3l1 in exosomes of the AST-OPC group (10.29 ± 1.71) was significantly higher than in exosomes from the supernatant of the CO-OPC group (2.87 ± 0.31) (Fig. 5C and D). Cx47 siRNA significantly reduced the expression of Chi3l1 in exosomes of the Cx47si group (2.09 ± 0.58) compared with the AST-OPC group (Fig. 5E and G). Cx47 siRNA also significantly decreased the proliferative capacity of OPCs: the results showed a lower percentage of S-phase OPCs in group Cx47si (17.59 ± 0.64%) than in the AST-OPC group (26.78 ± 0.78%). The ability of Cx47 siRNA to inhibit OPC proliferation was alleviated by exosome supplementation. Exosomes labeled with PKH26 were co-incubated with cells of the AST-OPC group for 24 h to evaluate whether they could be internalized; PKH26 was observed to transfer into the cells (Fig. 5I). The proportion of OPCs in S phase increased from 17.38 ± 0.32% in the Cx47si group to 22.07 ± 0.15% in the Cx47si+ group (Fig. 5J and K). These results indicate that exosomes play a vital role in the proliferation of OPCs.
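The "almost 10 times" statement made later in the Discussion can be reproduced from the NTA concentrations just quoted; a two-line arithmetic check (values as reported above):

```python
# Fold-change arithmetic behind the roughly ten-fold increase in exosome secretion.
co_opc  = 9e9    # particles/ml, OPC mono-culture
ast_opc = 8e10   # particles/ml, AST-OPC direct co-culture
cx47_si = 8.8e9  # particles/ml, Cx47 siRNA group

print(f"co-culture vs mono-culture: {ast_opc / co_opc:.1f}-fold")   # ~8.9-fold
print(f"Cx47 siRNA vs mono-culture: {cx47_si / co_opc:.2f}-fold")   # ~0.98-fold
```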
Many studies have reported that Chi3l1 can promote tumor cell growth during tumor progression [22,26], but its role in OPCs has not been reported. To test whether Chi3l1 also promotes cell proliferation in OPCs, 5 ng/ml and 10 ng/ml exogenous Chi3l1 were added to OPCs under mono-culture conditions. Exogenous Chi3l1 increased the proportion of green fluorescent-labeled neonatal OPCs (Fig. 6A and B) and the proportion of OPCs entering the S phase of the cell cycle (Fig. 6C and D), confirming that Chi3l1 also promotes cell proliferation in OPCs.

Figure 6 legend. A: After addition of exogenous Chi3l1 at 5 ng/ml and 10 ng/ml, respectively, EdU staining was used to detect the proliferative capacity of OPCs; neonatal OPCs are labeled with green fluorescence. B: Statistical analysis of the proportion of new cells detected by EdU, **p<0.01, n = 9. C: Statistical analysis of each group's cell cycle, **p<0.01, *p<0.05, n = 5. D: Flow cytometry detection of the proportion of each cell-cycle phase after adding exogenous Chi3l1.

Chi3l1 may bind Myh9 to transmit information into OPCs. Chi3l1 is an exocrine glycoprotein with a molecular weight of 40 kDa. Its role in promoting cell proliferation must depend on a receptor, but Chi3l1 has no known specific receptor. A pull-down experiment was performed on the protein homogenate of OPCs using agarose beads combined with a Chi3l1 antibody, and the protein bound to the agarose beads was electrophoresed. The gel was then stained with Coomassie blue, and the bands were cut out for mass spectrometry to identify the protein species (Fig. 7A). A total of 111 proteins (see Supplementary Material 3) were detected by mass spectrometry, of which Myh9 had the highest degree of peptide matching (Fig. 7B). Then, we used the Myh9 antibody for Co-IP experiments and performed western blot on the pulled-down proteins, revealing that the protein pulled down by the Myh9 antibody contained Chi3l1 (Fig. 7C). This experiment shows that Chi3l1 is able to bind Myh9.

Figure 7 legend. A: The electrophoresis gel of the pull-down protein was stained with Coomassie brilliant blue; Myh9 was identified in the strip labeled "a". B: The map of Myh9 analyzed by mass spectrometry, with the amino acid sequence shown above the map. C: Western blot of the protein pulled down by the Myh9 antibody, with goat IgG used as a control.

The addition of 10 ng/ml exogenous Chi3l1 was shown to promote the proliferation of OPCs. On this basis, Myh9 siRNA interference was performed to observe whether OPC proliferation was inhibited. The three siRNA interference sequences in the siRNA kit were screened and verified to select the sequence with the best interference effect. The western blot results (Fig. 8A and C) showed that the M1 interference sequence was most effective, which was confirmed by immunofluorescence (Fig. 8D and E). Here, 10 ng/ml exogenous Chi3l1 protein was added to mono-cultured OPCs, and M1 siRNA was used to knock down the expression of Myh9. EdU staining (Fig. 8F and G) showed that M1 siRNA decreased the number of green fluorescent-labeled neonatal cells, and flow cytometry (Fig. 8I and J) revealed that M1 siRNA suppressed the ability of OPCs to enter the S phase. Western blot (Fig. 8B and H) showed that the expression of cyclin D1 also decreased after Myh9 knockdown.

Discussion

Neurodegenerative diseases, caused by the loss of the myelin sheath, are some of the most important neurological diseases that threaten the health of the elderly. In the CNS, OLs are the only cells that can form myelin [27]. The proliferation of OLs depends on the proliferation and differentiation of OPCs.
It is very important to regulate OPC proliferation for the prevention and rehabilitation of neurodegenerative disease. The interaction between glial cells exerts a regulatory function on the growth of the glial cells themselves. As the most abundant glial cells, ASTs have an important influence on the proliferation of OPCs under both pathological and physiological conditions [28][29][30][31]. Under normal conditions, ASTs provide nutrients and necessary cytokines for the growth and survival of OPCs, such as lipids, tumor growth factor β1 (TGFβ1), the chemokine stromal-derived factor 1 (CXCL12), and fibroblast growth factor receptor 3 (FGFR3) [9,28,[32][33][34]. Moreover, the effect of ASTs on OLs may be a bidirectional regulation. For example, in MS, ASTs can promote inflammation and thereby strengthen myelin damage on the one hand, and on the other hand protect OLs and axons from inflammatory damage [35]. It can be seen that when the body undergoes demyelination, ASTs can regulate the growth of OLs to compensate for the loss of myelin [31,[36][37][38][39][40]. A similar phenomenon was found in this study, which examined the proliferative ability of OPCs under conditions of OPC mono-culture, AST supernatant culture, and AST-OPC co-culture. The OPCs in direct contact with ASTs were found to proliferate vigorously, indicating that the chemical channels produced by direct contact between ASTs and OPCs must transmit information that promotes OPC proliferation and cannot be replaced by AST supernatant.

GJs are information channels formed by hemichannel protein subunits or hexamer linkers between adjacent cells, allowing small molecules and ions to pass through [41,42]. Their molecular weights range from 25-62 kDa and are used to name the different gap junction proteins [43]. There are 21 GJ proteins expressed in the human CNS [44], of which OLs mainly express three types, Cx47, Cx32 and Cx29, while ASTs mainly express Cx43, Cx30 and Cx26 [45,46]. In addition to the GJs formed between OLs and their own kind, Cx47/Cx43 and Cx32/Cx30 junctions are mainly formed between OLs and ASTs, whereas Cx29 does not participate in the formation of GJs with ASTs [47]. Cx47 is expressed in the oligodendrocyte lineage and is a hemichannel that can be linked to Cx43 expressed by ASTs in the brain [48]. GO analysis of differentially expressed genes in OPCs under the three culture conditions screened 47 differentially expressed genes related to channel activity, among which Cx47 expression was significantly different between the three groups and was highly expressed in group A. The stable expression of Cx47 depends on the presence of ASTs [10,49]. Specifically, Cx47 siRNA interference under AST-OPC co-culture conditions inhibited the proliferative capacity of OPCs, thus confirming that Cx47 can transmit proliferation information to OPCs. Bioinformatic analysis of the transcriptome sequencing results revealed that ASTs increase the secretion of Chi3l1 in exosomes, and of exosomes themselves, from OPCs via Cx47, suggesting that this increase might be responsible for OPC proliferation. Exosomes are carriers of information exchange and material transfer between cells and can participate in cell immune response, migration, differentiation, proliferation and other functions [50]. The results of electron microscopy, immunoblotting and nanoparticle tracking analysis showed that astrocyte adhesion increased the secretion of exosomes by OPCs almost 10-fold.
At the same time, the concentration of Chi3l1 in exosomes was also much higher than that of Chi3l1 in the OPCs themselves, suggesting that exosomal Chi3l1 is a high-concentration transport form. In other words, ASTs increase the autocrine and paracrine secretion of Chi3l1 in OPC exosomes, and of the exosomes themselves, via Cx47. Chi3l1 is a member of the glycoside hydrolase family 18 and has the ability to bind chitin but has no enzymatic activity [51][52]. Chi3l1 is a tumor-associated factor that is highly expressed in various tumor cells, activating the phosphatidylinositol-4,5-bisphosphate 3-kinase (PI3K)/AKT and nuclear factor (NF)-κB pathways to promote tumor growth, invasion and metastasis [26,[53][54][55]. It is also an inflammatory factor that activates the inflammatory response [19,[56][57][58]. This study found that Chi3l1 also promotes cell proliferation in OPCs. Chi3l1 is a secreted protein and inevitably plays its biological role by binding to a corresponding receptor. Because Chi3l1 has no reported specific receptor, many studies have focused on finding receptors for Chi3l1; interleukin-13 receptor (IL-13R), CRTH2 and CD44 can interact with Chi3l1 and act as receptors [20,[59][60][61]. In this study, we used pull-down, mass spectrometry and Co-IP to identify the specific binding of Myh9 to Chi3l1. These experiments showed that Myh9 is the downstream signal by which Chi3l1 transmits proliferation information into OPCs. Myh9, also known as non-muscle myosin heavy chain IIA (NMMHC-IIA), is located on chromosome 22q12.3 [62] and is expressed in many cell types. Myh9 is a cytoskeletal component composed of a complex multimeric protein [63]. It is essential for biological processes such as cell migration, division and signal transduction [62,[64][65][66]. It also has angiogenic and vascular remodeling abilities in vascular endothelial cells [66] and promotes tumor invasion and metastasis in some tumors [64,65,[67][68][69][70]. In addition, some studies have shown that Myh9 also contributes to cell proliferation, cell contraction, adhesion and cytokinesis. In this study, Myh9 siRNA interference in OPCs under the added exogenous Chi3l1 condition revealed that Myh9 knockdown blocked the proliferation of OPCs induced by Chi3l1. The expression of cyclin D1 also decreased with Myh9 siRNA interference. Our research and other studies have shown that exogenous Chi3l1 can promote the proliferation of OPCs. However, this study also showed that Chi3l1 is secreted in the form of exosomes. We speculate that the secretory form of Chi3l1 in exosomes may enter the target cell more readily than Chi3l1 in the extracellular supernatant, which is beneficial to the activation of Myh9. In summary, ASTs activate the expression of Chi3l1 in OPCs via the gap junction protein Cx47. Subsequently, the expression and secretion of Chi3l1 in OPC exosomes is increased. Then, Chi3l1 binds Myh9 in OPCs and transmits proliferation information into the cells to stimulate expression of cyclin D1, thereby promoting OPC proliferation (Fig. 9). This provides a new understanding for the treatment of neurodegenerative diseases by promoting myelin sheath regeneration.

Conclusions

ASTs play a generally beneficial role in the proliferation and differentiation of OPCs, but the specific mechanism has been unclear. This study showed that direct contact between ASTs and OPCs, and the exosomes secreted under this condition, play an important role in the proliferation of OPCs.
These results suggest that ASTs transmit information to OPCs by increasing the gap junction protein Cx47. More interestingly, ASTs also regulate the expression of Chi3l1 in OPC exosomes via this gap junction. Exosomes may be a special transport form of Chi3l1 out of the cells, beneficial to the transmembrane transport of Chi3l1 and to the proliferation of OPCs. Chi3l1 binds Myh9 in OPCs and stimulates expression of cyclin D1. This provides a new understanding of OPC proliferation and may be of potential help for drugs rescuing neurodegenerative diseases related to remyelination.
Boundary Liouville Field Theory I. Boundary State and Boundary Two-point Function

Liouville conformal field theory is considered with a conformal boundary. There is a family of conformal boundary conditions parameterized by the boundary cosmological constant, so that observables depend on dimensionless ratios of the boundary and bulk cosmological constants. The disk geometry is considered. We present an explicit expression for the expectation value of a bulk operator inside the disk and for the two-point function of boundary operators. We comment also on the properties of the degenerate boundary operators. Possible applications and further developments are discussed. In particular, we present exact expectation values of the boundary operators in the boundary sine-Gordon model.

Liouville field theory

During the last 20 years the Liouville field theory has permanently attracted much attention, mainly due to its relevance in the quantization of strings in non-critical space-time dimensions [1] (see also refs. [2,3,4]). It is also applied as a field theory of 2D quantum gravity. E.g., the results of the Liouville field theory (LFT) approach can be compared with the calculations in the matrix models of two-dimensional gravity [5,6], and this comparison shows [7,8] that when the LFT central charge c_L ≥ 25 this field theory describes the same continuous gravity as was found in the critical region of the matrix models. Although there are still no known applications of LFT with c_L < 25, the theory is interesting in its own right as an example of a non-rational 2D conformal field theory.

In the bulk the Liouville field theory is defined by the Lagrangian density

L = (1/4π)(∂_a φ)² + μ e^{2bφ}   (1.1)

where φ is the two-dimensional scalar field, b is the dimensionless Liouville coupling constant and the scale parameter μ is called the cosmological constant. This expression implies a trivial background metric g_ab = δ_ab. In a more general background the action reads

A = ∫ d²x √g [ (1/4π) g^{ab} ∂_a φ ∂_b φ + (Q/4π) R φ + μ e^{2bφ} ]   (1.2)

Here R is the scalar curvature associated with the background metric g, while Q is an important quantity in the Liouville field theory called the background charge

Q = b + 1/b   (1.3)

It determines in particular the central charge of the theory

c_L = 1 + 6Q²   (1.4)

In what follows we will always consider only the simplest topologies, like the sphere or the disk, which can be described by a trivial background. For example, a sphere can be represented as a flat projective plane, where the flat Liouville Lagrangian (1.1) is valid if we put away all the curvature to spatial infinity, where it is seen as a special boundary condition on the Liouville field φ,

φ(z, z̄) = −Q log(z z̄) + O(1) at |z| → ∞   (1.5)

called the background charge at infinity.

The basic objects of LFT are the exponential fields V_α(x) = exp(2αφ(x)), which are conformal primaries w.r.t. the stress tensor

T(z) = −(∂φ)² + Q ∂²φ   (1.6)

The field V_α has the dimension

Δ_α = α(Q − α)   (1.7)

In fact not all of these operators are independent. One has to identify the operators V_α and V_{Q−α}, so that the whole set of local LFT fields is obtained by "folding" the complex α-plane w.r.t. this reflection. The only exception is the line α = Q/2 + iP with P real, where these exponential fields, if interpreted in terms of quantum gravity, seem not to correspond to local operators. E.g., in the classical theory they appear as hyperbolic solutions to the Liouville equation and "create holes" in the surface [9,10]. Instead, these values of α are attributed to the normalizable states.
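The relations (1.3), (1.4) and (1.7) reconstructed above are standard and easy to sanity-check numerically; the following minimal Python sketch (parameter values are illustrative, not from the paper) verifies the reflection symmetry of the dimensions and the dimension on the physical line α = Q/2 + iP.

```python
# Numeric sanity check of Q = b + 1/b, c_L = 1 + 6Q^2 and Delta_alpha = alpha(Q - alpha).
b = 0.7
Q = b + 1/b
delta = lambda a: a * (Q - a)

alpha = 0.3 + 0.2j
assert abs(delta(alpha) - delta(Q - alpha)) < 1e-12      # V_alpha ~ V_{Q-alpha}

P = 0.45
assert abs(delta(Q/2 + 1j*P) - (Q**2/4 + P**2)) < 1e-12  # physical line

print("reflection and physical-line checks pass; c_L =", 1 + 6*Q**2)
```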
The LFT space of states A consists of all conformal families [v_P] corresponding to the primary states |v_P⟩ with real 0 ≤ P < ∞, i.e.,

A = ⊕_{0 ≤ P < ∞} [v_P]   (1.8)

The primary states |v_P⟩ are related to the values α = Q/2 + iP and have dimensions Q²/4 + P², while other values of α are mapped onto non-normalizable states. This is a peculiarity of the operator-state correspondence of the Liouville field theory which distinguishes it from conventional CFTs with discrete spectra of dimensions but makes it similar to some conformal σ-models with non-compact target spaces. In what follows the primary physical states are normalized as

⟨v_P′|v_P⟩ = π δ(P − P′)   (1.9)

The solution of the spherical LFT amounts to constructing all multipoint correlation functions of these fields,

⟨V_{α₁}(x₁) V_{α₂}(x₂) · · · V_{αₙ}(xₙ)⟩   (1.10)

In principle these quantities are completely determined by the structure of the operator product expansion (OPE) algebra of the exponential operators, i.e., they can be completely restored from the two-point function

⟨V_α(x₁) V_α(x₂)⟩ = D(α) |x₁ − x₂|^{−4Δ_α}   (1.11)

which determines the normalization of the basic operators, and the three-point function

⟨V_{α₁}(x₁) V_{α₂}(x₂) V_{α₃}(x₃)⟩ = C(α₁, α₂, α₃) |x₁₂|^{2(Δ₃−Δ₁−Δ₂)} |x₂₃|^{2(Δ₁−Δ₂−Δ₃)} |x₃₁|^{2(Δ₂−Δ₃−Δ₁)}   (1.12)

Once these quantities are known, the multipoint functions can in principle be reconstructed by purely "kinematic" calculations relying on the conformal symmetry only. Although these calculations present a separate, rather complicated technical problem, conceptually one can say that a CFT (on a sphere) is constructed once these basic objects are found. For LFT these quantities were first obtained by Dorn and Otto [11,12] in 1992 (see also [13]). We will present here the derivation of the simplest of them, the two-point function D(α), to illustrate a different approach to this problem proposed more recently by J. Teschner [14], which seems more efficient. Close ideas are also developed in the studies of LFT by Gervais and collaborators [15].

Among the exponential operators V_α there is a series of fields V_{−nb/2}, n = 0, 1, . . ., which are degenerate w.r.t. the conformal symmetry algebra and therefore satisfy certain linear differential equations. For example, the first non-trivial operator V_{−b/2} satisfies the following second-order equation

( (1/b²) ∂_z² + T(z) ) V_{−b/2} = 0   (1.13)

and the same with the complex conjugate differentiation in z̄ and T̄(z̄) instead of T. In the classical limit of LFT the existence of this degenerate operator can be traced back to the well-known relation between the ordinary second-order linear differential equation and the classical partial-derivative Liouville equation [16]. The next operator V_{−b} satisfies two complex conjugate third-order differential equations, and so on. It follows from these equations that the operator product expansion of these degenerate operators with any primary field, in the present case with our basic exponential fields V_α, is of a very special form and contains in the r.h.s. only a finite number of primary fields. For example, for the first one there are only two representations

V_{−b/2} V_α = C₊ [V_{α−b/2}] + C₋ [V_{α+b/2}]   (1.15)

where C± are the special structure constants. What is important to remark about these special structure constants is that general CFT and Coulomb gas experience suggests that they can be considered "perturbative", i.e., they are obtained as certain Coulomb gas (or "screening") integrals [17,18]. For example, in our case in the first term of (1.15) there is no need of a screening insertion and therefore one can set C₊ = 1. The second term requires a first-order insertion of the Liouville interaction −μ ∫ exp(2bφ) d²x and gives

C₋(α) = −πμ γ(2αb − b² − 1) / ( γ(−b²) γ(2αb) )   (1.16)

where as usual γ(x) = Γ(x)/Γ(1 − x).
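To make the origin of (1.16) concrete, here is a sketch of the first-order screening computation; the intermediate steps are ours (not spelled out in the text) and assume the standard Dotsenko-Fateev integral ∫ d²x |x|^{2a} |1−x|^{2c} = π γ(1+a) γ(1+c) / γ(2+a+c).

```latex
% One screening insertion of -\mu\int e^{2b\phi}\,d^2x in the free-field
% correlator with charges -b/2 at 0, \alpha at 1 and the conjugate charge at
% infinity; the x-dependence of the integrand follows from Wick contractions.
\begin{aligned}
C_-(\alpha) &= -\mu\int d^2x\,
   \big\langle V_{-b/2}(0)\,V_{\alpha}(1)\,V_{b}(x)\,
               V_{Q-\alpha-b/2}(\infty)\big\rangle_{\mathrm{free}}
 = -\mu\int d^2x\; |x|^{2b^2}\,|1-x|^{-4\alpha b} \\
&= -\pi\mu\,\frac{\gamma(1+b^2)\,\gamma(1-2\alpha b)}{\gamma(2+b^2-2\alpha b)}
 = -\pi\mu\,\frac{\gamma(2\alpha b-b^2-1)}{\gamma(-b^2)\,\gamma(2\alpha b)}\,,
\end{aligned}
```

where the last equality uses the identities γ(x+1) = −x² γ(x) and γ(x) γ(1−x) = 1.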
It is remarkable that all the special structure constants entering the special truncated OPEs with the degenerate fields can be obtained in this way. Now let us take the two-point function D(α) and consider the auxiliary three-point function

⟨V_{−b/2}(z) V_α(x₁) V_{α+b/2}(x₂)⟩

Then, tending z → x₁, we see that in the OPE only the second term survives and in fact our auxiliary function is ∼ C₋ D(α + b/2). Instead, tending z → x₂, we can "lower" the parameter of the second operator down to α, which results in C₊ D(α). Equating these two things we arrive at the functional equation for the two-point function

D(α + b/2) = C₋^{−1}(α) D(α)   (1.17)

This equation can be easily solved in terms of gamma-functions,

D(α) = (πμ γ(b²))^{(Q−2α)/b} γ(2αb − b²) / ( b² γ(2 − 2α/b + 1/b²) )   (1.18)

which coincides precisely with what was obtained for this quantity in the original studies. One also checks directly, using γ(x)γ(1−x) = 1 and γ(x)γ(2−x) = −(x−1)², that (1.18) satisfies D(α)D(Q−α) = 1, in accordance with the identification of V_α and V_{Q−α}. In fact there are many solutions to the above functional equation. It is relevant for the moment to stop at the remarkable duality property of LFT. Besides the abovementioned series of degenerate operators V_{−nb/2}, there is a "dual" series with b replaced by 1/b. This results in another "dual" functional equation for D(α), with the shift by 1/b instead of b. The solution becomes unique (at least if these two shifts are incommensurable) [14]. Note that these two equations are compatible only if in the dual equation the cosmological constant μ is replaced by the "dual cosmological constant" μ̃, related to μ as follows

π μ̃ γ(1/b²) = ( πμ γ(b²) )^{1/b²}   (1.19)

With this definition of μ̃ the duality property, which turns out to hold exactly in LFT, can be formulated as the symmetry of all observables w.r.t. the substitution b → 1/b and μ → μ̃.

In the same way one can readily obtain and solve the functional equations for the three-point function [14], which reads

C(α₁, α₂, α₃) = ( πμ γ(b²) b^{2−2b²} )^{(Q−α₁−α₂−α₃)/b} × Υ₀ Υ(2α₁) Υ(2α₂) Υ(2α₃) / ( Υ(α₁+α₂+α₃−Q) Υ(α₁+α₂−α₃) Υ(α₂+α₃−α₁) Υ(α₃+α₁−α₂) )   (1.20)

where Υ₀ = Υ′(0) and a special function Υ(x) has to be introduced,

log Υ(x) = ∫₀^∞ (dt/t) [ (Q/2 − x)² e^{−t} − sinh²((Q/2 − x)t/2) / ( sinh(bt/2) sinh(t/2b) ) ]   (1.21)

This integral representation converges only in the strip 0 < Re x < Q; otherwise it is defined by analytic continuation. In fact Υ(x) is an entire function of x with zeroes at x = −nb − m/b and x = Q + nb + m/b, with n and m non-negative integers.

In the sense mentioned above, the explicit results (1.11) and (1.20) constitute the exact construction of the Liouville field theory on a sphere. For example, the four-point function can be explicitly expressed in terms of the three-point function,

⟨V_{α₁}(x₁) · · · V_{α₄}(x₄)⟩ = ∫₀^∞ (dP/4π) C(α₁, α₂, Q/2 + iP) C(Q/2 − iP, α₃, α₄) |F(Δ_{α_i}, Δ, x_i)|²   (1.22)

where the integration is over the variety of physical states |v_P⟩ and F(Δ_{α_i}, Δ, x_i) is the four-point conformal block, determined completely by the conformal symmetry [19]. In the four-point case, which we are considering now, the latter can be further reduced to a function of the single projective invariant of the four points x_i. The parameters α_i are related to Δ_{α_i} as in eq. (1.7), and the intermediate dimension is Δ = Q²/4 + P².

The boundary Liouville problem

The basic ideas of 2D conformal field theory with a conformally invariant boundary were developed long ago, mostly by J. Cardy [20], who also applied them successfully to rational CFTs, in particular to the minimal series [21,22]. Here we'll try to apply these ideas to the Liouville CFT with boundary. A conformally invariant boundary condition in LFT can be introduced through the following boundary interaction

A_∂ = ∫_{∂Γ} g^{1/4} ( Q K φ / 2π + μ_B e^{bφ} ) dξ   (2.1)

where the integration in ξ is along the boundary, while K is the curvature of the boundary in the background geometry g. In what follows we consider only the geometry of a disk, which can be represented as a simply connected domain Γ in the complex plane with a flat background metric g_ab = δ_ab inside. The action simplifies to

A = ∫_Γ [ (1/4π)(∂_a φ)² + μ e^{2bφ} ] d²x + ∮_{∂Γ} [ Q k φ / 2π + μ_B e^{bφ} ] dξ   (2.2)

where k is the curvature of the boundary in the complex plane. Typically the most convenient domain is either a unit circle or the upper half-plane.
In the last case the boundary ∂Γ is the real axis, and one can omit the term linear in φ in the boundary action (2.2). The price is again a "background charge at infinity", i.e., the same boundary condition on the field φ at infinity in the upper half-plane as in the case of the sphere. It seems natural to call the additional parameter μ_B the boundary cosmological constant. We see that in fact there is a one-parameter family of conformally invariant boundary conditions characterized by different values of the boundary cosmological constant μ_B. Contrary to the pure bulk situation, where the cosmological constant enters only as a scale parameter, the observables in the boundary case actually depend on the scale-invariant ratio μ/μ_B². For example, a disk correlation function with the bulk operators V_{α₁}, V_{α₂} · · · V_{αₙ} and the boundary operators (see below) B_{β₁}, B_{β₂} · · · B_{βₘ} scales as follows

⟨V_{α₁} · · · V_{αₙ} B_{β₁} · · · B_{βₘ}⟩ ∼ μ^{(Q − 2Σαᵢ − Σβⱼ)/2b} F(μ_B²/μ)   (2.4)

where F is some scaling function and we indicate only the dependence on the scale parameters μ and μ_B (Footnote 4: In the presence of boundary operators it is possible to impose different boundary conditions at different pieces of the boundary, each being characterized by its own value of μ_B. In this case the scaling function in (2.4) may depend on several invariant ratios; see below.) A one-line derivation of this scaling is sketched at the end of this subsection. Our present purpose is to study this dependence.

In the boundary case we have to introduce the boundary operators. In LFT the basic boundary primaries are again the exponential boundary fields B_β = exp(βφ). Their dimensions are

Δ_β = β(Q − β)   (2.5)

To avoid any confusion we shall always use the parameter α for the bulk exponentials and the parameter β in relation with the boundary operators. In general a boundary operator is not characterized completely by its dimension, because the conformal boundary conditions at the two sides of the location of the boundary operator may in general be different. One has to specify which boundary conditions it joins. Therefore in general we are talking about a juxtaposition boundary operator between, in our case, two boundary conditions with the parameters μ_{B1} = μ₁ and μ_{B2} = μ₂, and denote it B^{μ₁μ₂}_β(x). To define completely the boundary LFT on the disk, i.e., to be able to construct an arbitrary multipoint correlation function including bulk and boundary operators, we have to introduce a few more basic objects in addition to the bulk two- and three-point functions (1.18) and (1.20) we already have.

1. First is the bulk one-point function (we imply almost constantly the upper half-plane geometry)

⟨V_α(z)⟩ = U(α|μ_B) / |z − z̄|^{2Δ_α}   (2.6)

In fig. 1a it is drawn, however, as the one-point function in the unit disk.

2. Second, one needs the boundary two-point function

⟨B^{μ₁μ₂}_β(x) B^{μ₂μ₁}_β(0)⟩ = d(β|μ₁, μ₂) |x|^{−2Δ_β}   (2.7)

which in general depends on two boundary cosmological constants μ₁ and μ₂ (see fig. 1b).

3. The bulk-boundary structure constant, which determines the fusion of a bulk operator V_α with the boundary, resulting in the boundary operator B^{μ_B μ_B}_β. This is basically the same as the bulk-boundary two-point function (fig. 1c)

⟨V_α(z) B^{μ_B μ_B}_β(x)⟩ = R(α, β|μ_B) |z − z̄|^{Δ_β − 2Δ_α} |z − x|^{−2Δ_β}   (2.8)

In fact the one-point function (2.6) is a particular case of this quantity with β = 0, so that its introduction, however convenient, is redundant.

Finally, there is a boundary three-point function, which in fact depends on three different boundary parameters μ₁, μ₂ and μ₃, related to the corresponding sides of the triangle as shown in fig. 1d. These three basic boundary structure constants, together with the bulk structure constants, allow one in principle to write down intermediate-state expansions for any multipoint function.
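Here is the one-line derivation of the scaling (2.4) promised above (our sketch; it uses only the action (2.2) and the Gauss-Bonnet theorem on the disk, χ = 1):

```latex
% Shift \phi \to \phi + \tfrac{1}{2b}\log\lambda in the functional integral.
% The interactions absorb the shift if \mu \to \lambda\mu and
% \mu_B \to \sqrt{\lambda}\,\mu_B; each insertion rescales as
% V_{\alpha_i} \to \lambda^{\alpha_i/b} V_{\alpha_i},
% B_{\beta_j} \to \lambda^{\beta_j/2b} B_{\beta_j},
% while the background-charge terms produce the factor
% \lambda^{-Q\chi/2b} with \chi_{\rm disk}=1. Hence
\Big\langle \prod_i V_{\alpha_i}\prod_j B_{\beta_j}\Big\rangle_{\mu,\,\mu_B}
 = \lambda^{-\frac{1}{2b}\left(Q-2\sum_i\alpha_i-\sum_j\beta_j\right)}
   \Big\langle \prod_i V_{\alpha_i}\prod_j B_{\beta_j}
   \Big\rangle_{\lambda\mu,\,\sqrt{\lambda}\,\mu_B}\,.
```

Setting λ = 1/μ yields the μ-power quoted in (2.4), with all remaining dependence carried by the invariant ratio μ_B²/μ.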
An instructive example is the bulk two-point function. Joining the two bulk operators together with the bulk structure constant, we can reduce this quantity to the bulk one-point function and write down the expansion (2.11), in which

η = (z₁ − z₂)(z̄₁ − z̄₂) / ( (z₁ − z̄₂)(z̄₁ − z₂) )

is the projective invariant of the four points z₁, z₂, z̄₁ and z̄₂, and F is the same four-point conformal block that enters the expansion of the four-point bulk function; see (1.22), (1.23). Notice that while in that case it entered in a sesquilinear combination (1.22), here it appears linearly (J. Cardy [21]). Expansion (2.11) is appropriate if the bulk operators are close to each other, i.e., η → 0. Another representation, (2.13), is suitable in the limit η → 1, where the points z₁ and z₂ approach the boundary and the bulk operators can be expanded in the boundary ones. Equating these two expressions, we see that the basic boundary quantities also must satisfy bootstrap relations analogous to those in the bulk case.

It is interesting to note that there is another application of this relation. The conformal block itself, although completely determined by the conformal symmetry, is in fact a complicated function which is not in general known explicitly. On the other hand, it is important, since it explicitly enters the conformal bootstrap equations [19]. Besides, one might expect that it encodes some information about the structure of the representations of the conformal symmetry. In particular, the conformal block must satisfy a cross-relation (2.14), with some cross-matrix K which determines the monodromy properties of the conformal block. Suppose now we have managed to find the basic quantities of the boundary Liouville problem, in particular the one-point function U(α) and the bulk-boundary structure constant R(α, β). Then the crossing relation becomes a linear equation for the cross-matrix of the symmetric (i.e., α₃ = α₁ and α₄ = α₂) conformal block, from which this matrix can be figured out.

Bulk one-point function

We start with the calculation of the bulk one-point function U(α|μ_B). For this we apply the degenerate operator insertion, as above for the bulk two-point function. Consider the auxiliary bulk two-point function with an additional degenerate bulk field V_{−b/2}(z),

G(x, z) = ⟨V_α(x) V_{−b/2}(z)⟩

Apply first the OPE at z → x, where the degenerate operator V_{−b/2} generates only two primary fields, so that

G(x, z) = C₊(α) G₊(x, z) + C₋(α) G₋(x, z)   (2.16)

where C±(α) are the special structure constants as given by the screening integrals, and G±(x, z) are expressed through the special conformal blocks F±(x, z) related to these special values of the parameters. In fact, V_{−b/2} satisfies the second-order differential equation (1.13). Therefore these special conformal blocks are solutions of a second-order linear differential equation and can be expressed in terms of hypergeometric functions; this is a particular case of a more general conformal block with a degenerate operator.

Now, as both operators approach the boundary, they are expanded in the boundary operators. It turns out that the degenerate bulk operator V_{−b/2} near the boundary gives rise to only two primary boundary families, B₀ and B_{−b}. The simplest thing is to find the contribution of B₀ = I. The fusion of V_α to the unity boundary operator is described by the quantity R(α, 0) = U(α), while the fusion of the field V_{−b/2} to the boundary (V_{−b/2} → boundary) is described by a special bulk-boundary structure constant R(−b/2, Q).
It can be computed as a boundary screening integral with one insertion of the boundary interaction −μ_B ∫ B_b(x) dx. Comparing this with the behavior predicted by the bulk expansion (2.16), we find a functional equation for the one-point function (in the last term of which enters the bulk special structure constant C₋(α) from (1.16)). The equation is solved by the following simple expression

U(α) = (2/b) ( πμ γ(b²) )^{(Q−2α)/2b} Γ(2bα − b²) Γ(2α/b − 1/b² − 1) cosh( (2α − Q) π s )   (2.24)

where the parameter s is related to the scale-invariant ratio of the cosmological constants through

cosh²(πbs) = (μ_B²/μ) sin(πb²)   (2.25)

This expression also satisfies the dual functional equation, provided the dual bulk cosmological constant μ̃ is related to μ as before in (1.19), while the parameter s is self-dual; i.e., the dual boundary cosmological constant μ̃_B is defined by

cosh²(πs/b) = (μ̃_B²/μ̃) sin(π/b²)

It is remarkable that expression (2.24) automatically satisfies the "reflection relation" [13] U(α) = D(α) U(Q − α) for the operator V_α, with D(α) the bulk Liouville two-point function (1.18). If α corresponds to a physical state, i.e., α = Q/2 + iP with P real, expression (2.24) reads

U(Q/2 + iP) = (2/b) ( πμ γ(b²) )^{−iP/b} Γ(1 + 2ibP) Γ(2iP/b) cos(2πPs)   (2.28)

This quantity is interpreted as the matrix element ⟨B_s|v_P⟩ between a primary physical state |v_P⟩ from (1.8) and the boundary state ⟨B_s|. Of course the functional relation does not fix the overall constant, so that the solution can be multiplied by any (self-dual) factor U₀(b) = U₀(1/b). In (2.24) this factor is chosen in such a way that all the residues at the "on-mass-shell" poles 2α = Q − nb, with n = 1, 2, 3, . . ., are equal precisely to the corresponding perturbative integrals appearing in the expansions in μ and μ_B, i.e., res_{2α=Q−nb} U(α) is given by the free-field screening integrals, where ⟨· · ·⟩₀ denotes the correlation function w.r.t. the upper half-plane free field with μ = μ_B = 0, i.e., with free boundary conditions. In particular, the pure boundary perturbations in μ_B reproduce the Dyson integrals over a unit circle.

Several remarks are in order in connection with the expression (2.24).

1. Semiclassical tests. Consider the limit b → 0, while P in eq. (2.28) is of order b and s is of order b^{−1}. In this limit the minisuperspace approximation is expected to work. Take the geometry of a semi-infinite cylinder of circumference 2π and consider the states on the circle. In the minisuperspace approximation one takes into account the dynamics of the zero mode φ₀, neglecting completely all the oscillator modes of the field φ(x). The primary state |v_P⟩ is now represented by a wave function ψ_P(φ₀) (a modified Bessel function K of imaginary order) which satisfies the minisuperspace Schrödinger equation and has the following asymptotic at φ₀ → −∞,

ψ_P(φ₀) = e^{2iPφ₀} + S(P) e^{−2iPφ₀}

where S(P) is the reflection amplitude. The matrix element ⟨B_s|v_P⟩ can be evaluated explicitly and agrees precisely with the corresponding limit of (2.28). Note that this calculation is sensitive to the prefactor in eq. (2.24) and confirms our choice U₀(b) = 1, at least in the limit b → 0.

2. Compare (2.44) with the minisuperspace distribution (2.36). This result implies that the Schrödinger equation (2.37) in the logarithm of the scale, φ₀ = b^{−1} log(l/2π) (which is sometimes called the Wheeler-deWitt equation), holds not only in the semiclassical limit but is in fact exact, with suitable renormalizations of the constants (see in this relation the paper [24], where this equation first appeared in the context of the Liouville field theory, and also [25,26], where similar expressions are obtained in the framework of random surface models). Let us also present the double distribution in the length (2.42) and the area A; it is given by a rather simple expression.
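The minisuperspace statement in remark 1 can be tested directly. In one common normalization of the zero-mode Hamiltonian, ψ_P(φ₀) = K_{2iP/b}((2√(πμ)/b) e^{bφ₀}) solves (−(1/4)∂²_{φ₀} + πμ e^{2bφ₀}) ψ = P² ψ; both the potential coefficient and the Bessel argument here are assumptions of this sketch, chosen self-consistently, and are not fixed by the excerpt. A finite-difference check:

```python
import mpmath as mp

# Finite-difference check that psi_P(phi0) = K_{2iP/b}((2*sqrt(pi*mu)/b)*exp(b*phi0))
# satisfies  -(1/4) psi'' + pi*mu*exp(2*b*phi0) psi = P^2 psi
# (constants per the assumed normalization stated above).
mp.mp.dps = 30
b, mu, P = mp.mpf("0.8"), mp.mpf("1.0"), mp.mpf("0.6")

def psi(phi):
    return mp.besselk(2j*P/b, (2*mp.sqrt(mp.pi*mu)/b)*mp.exp(b*phi))

phi0, h = mp.mpf("-1.0"), mp.mpf("1e-6")
second = (psi(phi0 + h) - 2*psi(phi0) + psi(phi0 - h))/h**2
residual = -second/4 + mp.pi*mu*mp.exp(2*b*phi0)*psi(phi0) - P**2*psi(phi0)
print(abs(residual))   # ~0 up to O(h^2) discretization error
```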
3. "Heavy" α semiclassics. Consider again the limit b → 0, but with a large value of α = η/b, not necessarily close to Q/2. The exact expression (2.47) in this limit reduces to the semiclassical expression (2.48), where C is the Euler constant. On the other hand, the corresponding classical solution with the area A and the boundary length l can be written down explicitly for the classical field ϕ = 2bφ (we imply here the geometry of the disk |z| ≤ 1 with the unit circle as the boundary), with a parameter a related to the area (we imply here that η ≤ 1/2 and l² > 4πA(1 − 2η), so that a real classical solution exists). The classical Liouville action for this solution is readily evaluated and coincides with (2.49). In principle it might be possible to check the prefactor in (2.48) by performing the one-loop correction. This has not yet been done, however.

4. "Light" α semiclassics. A direct semiclassical calculation of the one-point function (2.24) is possible also in the case α = bσ, with b → 0 and σ fixed. In particular, one can calculate the semiclassical approximation to the function (2.47) by taking the saddle-point contribution to the corresponding functional integral over φ with fixed area A and boundary length l. In the present case α ∼ b the exponential insertion does not affect the saddle-point configurations. The nature of these classical solutions depends on the relative value of A and l². Here we consider explicitly only the negative-curvature situation 4πA < l², in which case the classical configurations form an orbit under the action of SL(2, R). To be specific, we adopt the upper half-plane geometry with the boundary at the real axis. A generic classical solution φ_G is then obtained from a "standard" solution by an SL(2, R) transformation G. The semiclassical approximation to the expectation value (2.47) is evaluated as an integral over this manifold of classical configurations, eq. (2.56), where S_cl is the classical action (2.52) with η = 0, dμ(G) stands for the SL(2, R)-invariant integration measure, and the factor N combines the determinant of zero modes and the contributions of positive modes to the Gaussian integral around a given classical solution. It is important to note that, while N can very well depend on A/l², it carries no dependence on σ, i.e., all the σ-dependence of the one-point function in this approximation comes from the integral in (2.56). The integrand in (2.56) can be simplified by a shift of the integration variable G → G_z G, where G_z is any fixed (z-dependent) SL(2, R) transformation which maps the point z to the point i in the upper half-plane; this gives the form (2.57) for the integral in (2.56). To evaluate this integral one can introduce coordinates (2.58) on the group manifold of SL(2, R), where x is real and y and ȳ are complex conjugate with Im y > 0. The invariant measure takes the form

dμ(G) = 2i dx d²y / ( (y − ȳ) |y − x|² )   (2.59)

and the integral in (2.57) can then be rewritten as (2.60). This integral is readily evaluated, and one obtains for (2.56) an expression in which the factor Ñ = π l² N / A does not depend on σ; as mentioned above, its determination requires an analysis of the fluctuations around the classical configurations, which we did not perform. The σ-dependent part agrees with the b → 0 limit of (2.47).

5. Boundary state. Once the function U(P) is constructed, the boundary state ⟨B_s| can be written down explicitly as (2.62), where the so-called Ishibashi states [27] are designed in a way to match the conformal invariance of the boundary. Since the combination U(P) ⟨P| is invariant w.r.t.
the reflection P → −P, one can formally extend the integral (2.62) to negative values of P and write it as an integral over the whole real P-axis, eq. (2.64). It is natural to call u(P) the boundary state wave function. Note that the state ⟨P|, although consistent with the conformal invariance, does not correspond to any conformal boundary state, i.e., to a state created by a local conformally invariant boundary condition. However, it can be constructed as a linear combination of boundary states. In view of eq. (2.64) one can write down the inverse relation (2.66). This equation allows one to single out a conformally invariant state containing only one primary state ⟨v_P| and its descendants. In the finite-dimensional situation of rational conformal field theories this trick has been frequently used by J. Cardy [22].

Boundary two-point function

In this section the boundary two-point function d(β|μ₁, μ₂) of (2.7) will be derived. To this purpose we apply basically the same Teschner trick which was used in the first section to determine the bulk structure constants. Considering the boundary operators B_β(x), we find that all the operators B_{−nb/2}(x) (and also, of course, the dual fields B_{−n/2b}) with n = 0, 1, . . . are degenerate, i.e., contain primary states among their descendants. A complication here is that not all of these "null vectors" necessarily vanish, contrary to what happens in the bulk situation. For example, the simplest non-trivial degenerate boundary operator B^{s₁s₂}_{−b/2} (from now on we shall denote the exponential boundary operators B^{s₁s₂}_β = (e^{βφ})_{s₁s₂} instead of B^{μ₁μ₂}_β, having in mind the relation (2.25)) in general does not satisfy the second-order differential equation. This means that the null vector in the corresponding Virasoro representation is some non-vanishing primary field, and therefore the second-order differential equation has non-zero terms on the right-hand side.

This effect can already be seen at the classical level, where the upper half-plane boundary Liouville problem is reduced to the classical Liouville equation for the field ϕ = 2bφ in the upper half-plane, with a boundary condition at the real axis. The boundary value of the classical stress tensor is easily computed. The boundary operator B^{s,s}_{−b/2} in the classical limit reduces to the boundary value of exp(−ϕ/4), and for this field the second-order "null-vector" combination does not vanish: on the right-hand side there appears a primary Virasoro operator (exp(3ϕ/4))_{s,s}, which has exactly the same dimension as the null vector in the corresponding degenerate representation. It is interesting to note that there is a unique relation between the cosmological constants at which this right-hand side vanishes. Here, however, we are interested in the general situation, where this operator is of no use, since it does not always satisfy the second-order differential equation. It happens, however, that the next degenerate boundary operator B^{s,s}_{−b} does satisfy the third-order differential equation when placed between identical boundary conditions. Therefore it can be used in our calculations instead of B_{−b/2}. As in the bulk, the differential equation predicts a truncated OPE of this operator with any exponential boundary primary,

B^{s,s}_{−b} B^{s,s′}_β = c₊(β) [B_{β−b}] + c₀(β) [B_β] + c₋(β) [B_{β+b}]

where c_σ(β) are again special boundary structure constants, which can be calculated as certain screening integrals. Considering again an auxiliary three-point boundary function with a B_{−b} insertion, one figures out immediately the functional equation (3.7) for d(β|s₁, s₂). The structure constant c₋(β) entering it can be evaluated as a combination of screening integrals.
The boundary value of the classical stress tensor can be easily computed The boundary operator B_{−b/2}^{s,s} in the classical limit reduces to the boundary value of exp(−ϕ/4), for which we have In the right-hand side there is a primary Virasoro operator (exp(3ϕ/4))_{s,s} which has exactly the same dimension as the null-vector in the corresponding degenerate representation. It is interesting to note that there is a unique relation between the cosmological constants Here we are interested in the general situation, where this operator is of no use since it does not always satisfy the second-order differential equation. It happens, however, that the next degenerate boundary operator B_{−b}^{s,s} does satisfy the third-order differential equation when placed between identical boundary conditions. Therefore it can be used in our calculations instead of B_{−b/2}. As in the bulk, the differential equation predicts the following truncated OPE of this operator with any exponential boundary primary where c_σ(β) are again the special boundary structure constants, which can be calculated as certain screening integrals. Considering again the auxiliary three-point boundary function with a B_{−b} insertion, one figures out immediately that The structure constant c_−(β) can be evaluated as a combination of screening integrals. These are of two types: a volume screening by the bulk Liouville interaction term e^{2bφ} and two boundary screenings e^{bφ} related to the boundary interaction where the contours C_i are chosen as in fig.2, while µ_i are the corresponding values of the boundary cosmological constant, as is also indicated in the same figure. Both contributions can be carried out explicitly and we have and s₁ and s₂ are again related to µ₁ and µ₂ as in eq.(2.25). It has zeroes at x = Q + nb + m/b and poles at x = −nb − m/b (m and n are non-negative integers). In the strip 0 < Re x < Q the following integral representation is allowed With this definition it satisfies also the "unitarity" relation It is also convenient to introduce a self-dual entire function G(x), which contains only zeroes at x = −nb − m/b, m, n = 0, 1, 2, . . ., and enjoys the following shift relations This function is "elementary" in the sense that both Υ(x) from eq.(1.21) and S(x) are simply expressed in terms of G(x) The integral representation, valid for all 0 < Re x, reads With this function one can easily construct a solution to (3.7). This solution satisfies also the "dual-shift" relation analogous to (3.7), so that (3.18) is the unique self-dual solution to (3.7). It is of course possible to express the ratio of two G-functions in terms of the S-function times some ordinary Γ-functions. We prefer to present d(β|s₁, s₂) in the form (3.18) to make obvious the "unitarity" relation d(β|s₁, s₂) d(Q − β|s₁, s₂) = 1 (3.19) Note that an overall β-independent constant, which is allowed by (3.7) and its dual, is completely fixed by (3.19).
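For the reader's convenience we collect the standard properties usually quoted for these objects in the boundary Liouville literature; the precise normalizations are conventions, so the formulas below should be read as a hedged summary rather than as the exact definitions used above:

```latex
% Commonly quoted form of the parametrization (2.25) of the boundary
% cosmological constant mu_B in terms of the boundary parameter s:
\cosh^{2}(\pi s b) \;=\; \frac{\mu_{B}^{2}}{\mu}\,\sin(\pi b^{2}),
% standard shift and "unitarity" relations of the special function S(x),
% consistent with its zeroes at x = Q + nb + m/b and poles at x = -nb - m/b:
S(x+b) \;=\; 2\sin(\pi b x)\, S(x), \qquad S(x)\,S(Q-x) \;=\; 1,
% together with the dual relation under b -> 1/b; these underlie the
% self-duality of (3.18) and the relation d(beta|s1,s2) d(Q-beta|s1,s2) = 1.
```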
Concluding remarks

• Eq. Under our suggestion B_{−b/2}^{s,s±ib}(x) satisfies a second-order differential equation in x and therefore has a special operator product expansion with any B_β Then, exactly the same trick which led to eq.(3.7) gives the following shift relation As usual we adopt the structure constant with no screenings required c^{(±)} This integral is evaluated quite easily (unlike (3.8) or (3.9)) sin πb(β ∓ ib(s₁ + s₂)/2) sin πb²(β ∓ ib(s₁ − s₂)/2) It is easy to see that the two-point function (3.18) satisfies both relations (4.3). After this support one may suggest further that any degenerate field B_{−nb/2}^{s,s′} has a vanishing null-vector (and therefore has truncated operator product expansions) if s − s′ = ibk or s + s′ = ibk with k = −n/2, −n/2 + 1, −n/2 + 2, . . . , n/2, in close analogy with the fusion rules for degenerate bulk fields.

• The boundary two-point function (3.18) is readily applied as the reflection coefficient in the reflection relations for the one-point function of an exponential boundary operator in the boundary sine-Gordon model. The latter is defined by the following two-dimensional Euclidean action where the bulk part of the action is integrated over a half-plane Γ, so that the boundary ∂Γ is a straight line. For the moment β denotes the standard sine-Gordon coupling constant. Apart from it, the boundary model depends on three parameters µ, µ_B and φ₀ [28]. The dimensional parameters µ and µ_B can be given a precise meaning by specifying the normalization of the composite fields they couple to. As these operators are combinations of exponentials, it suffices to specify a normalization for the exponential fields in the volume and at the boundary. Here we adopt the conventional normalization of these fields (see e.g. [29]) corresponding to the short-distance asymptotics at |x − y| → 0, and z* is the complex conjugate of z. The details and some applications will be published elsewhere.

• In the next publication [30] we will present an explicit expression for the bulk-boundary structure constant (2.8). Equation (2.66) then permits one to resolve the system (2.11), (2.13) and (2.14) for the cross-matrix and to obtain an explicit expression for the special case of the symmetric cross-matrix K^{α₁α₁}_{α₂α₂}|P, P′.

• The random lattice models of 2D quantum gravity allow one in many cases to find explicitly the partition functions of minimal models on a fluctuating disk with some bulk and boundary operators inserted [25,26]. A detailed comparison with the Liouville field theory predictions seems quite interesting. Work on this is in progress.

Acknowledgements

The present study started as a common project with J. Teschner. In the course of the project it turned out that the methods and results were rather complementary, so it was decided to present the respective points of view in separate publications; see ref. [31]. The work of V.F. and Al.Z. was partially supported by the EU under contract ERBFMRX CT 960012. The work of A.Z. is supported in part by DOE grant #DE-FG05-90ER40559.
Insights into granulosa cell tumors using spontaneous or genetically engineered mouse models

Granulosa cell tumors (GCTs) are rare sex cord-stromal tumors that have been studied for decades. However, their infrequency has delayed efforts to research their etiology. Recently, mutations in human GCTs have been discovered, which has led to further research aimed at determining the molecular mechanisms underlying the disease. Mouse models have been important tools for studying GCTs, and have provided means to develop and improve diagnostics and therapeutics. Thus far, several genetically modified mouse models, along with one spontaneous mouse model, have been reported. This review summarizes the phenotypes of these mouse models and their applicability in elucidating the mechanisms of granulosa cell tumor development.

Introduction

Granulosa cell tumors (GCTs) are sex cord-stromal tumors that comprise 5% of all ovarian tumors in women [1,2]. Although GCTs arise mainly from granulosa cells, they can develop in both the ovaries in women and the testes in men [3,4]. GCTs are usually detectable at an early stage; however, 43% of patients experience recurrence, and 80% of those patients die from the disease [5,6]. Due to the indolent nature of these tumors, along with their propensity for relapse and malignancy, patients with GCTs need long-term follow-up to monitor whether recurrence or metastasis has occurred [7,8]. Inhibins have been used as reliable markers to diagnose GCT recurrence and progression [9-11]. GCTs are classified into juvenile granulosa cell tumors (JGCTs) and adult granulosa cell tumors (AGCTs) based on histology, nuclear morphology, the age of occurrence, and the potential for disease recurrence. AGCTs are the most common type of GCT, and occur in peri- and postmenopausal women [12]. JGCTs occur in girls from infancy through puberty and have the potential for malignancy [7]. AGCTs often show prominent nuclear and histological features, such as nuclear grooves (coffee-bean nuclei) and Call-Exner bodies (small fluid-filled spaces surrounded by granulosa cells). By contrast, granulosa cells in JGCTs are neoplastic, round, non-grooved, luteinized, and have hyperchromatic nuclei [13]. Moreover, histological analysis shows the presence of follicle-like spaces in JGCTs. Advances in the identification of molecular mechanisms implicated in AGCTs have identified the C402G missense mutation of the FOXL2 gene as present in 95% of AGCT patients [14,15]. This mutation has not been found in JGCTs, and, furthermore, the loss of FOXL2 expression has been observed in aggressive JGCTs [16]. The absence of FOXL2 can alter the fate of granulosa cells, pushing them into uncontrolled growth, because FOXL2 expression is important for establishing and maintaining granulosa cell identity. The FOXL2 C134W mutation may induce AGCT formation by regulating targets in apoptotic [17] and steroidogenic [18] pathways. However, no clear pathophysiological mechanisms have been described. While FOXL2 mutations at other loci induce mislocalization, protein aggregation, and impaired transactivation [19], the C402G missense mutation of the FOXL2 gene does not lead to alterations in FOXL2 protein subcellular localization, protein aggregation, mobility, or transactivational activity on its target promoter in vitro compared to the wild-type FOXL2 protein [20]. Recently, it was proposed that GSK3β regulation of serine 33 (S33) of mutant FOXL2 is the cause of oncogenicity in AGCT [21].
Two activating mutations (R201C and R201H) of the stimulatory α subunit of a trimeric G protein (Gαs) were discovered in JGCT patients [22], and in-frame duplications within the pleckstrin homology domain of AKT1 were discovered in >60% of JGCT patients [23]. Two cell lines derived from human GCTs have been investigated to understand the etiology and molecular mechanisms of AGCTs and JGCTs [24,25]. Despite the important information that has been obtained using these cell lines, some discordances have been observed with data obtained from studies of human tumors [26], suggesting that the use of mouse models for studying GCTs may be necessary to more fully understand their origins. Here, we summarize the phenotypes of the currently available GCT mouse models and what they have revealed about the molecular mechanisms underlying GCT development.

SWR mice

SWR/Bm (SWR) mice were reported in 1985 as a model for studying pathways leading to the formation of spontaneous JGCTs [27]. Approximately 1% of inbred female SWR mice develop malignant JGCTs at approximately 8 weeks of age, starting at the time of the first ovarian follicle maturation at approximately 3 to 5 weeks of age [28]. Possible tumor susceptibility modifiers include the Gct loci, such as Gct1 on chromosome 4 and Gct4 and Gct6 on the X chromosome. Gct1 is essential for GCT development and is responsive to the androgenic precursor dehydroepiandrosterone, which has been shown to increase tumor frequency [29]. Although other loci such as Gct2, Gct3, Gct4, Gct5, Gct6, Gct7, Gct8, and Gct9 are also linked with Gct1 and may be associated with the formation of GCTs, the Gct1 SW allele is an essential driver of the ovarian tumor phenotype [30]. Four genes within the Gct1 interval (Vps13d, Tnfrsf8, Tnfrsf1b, and Dhrs3) may be involved in tumor formation in SWR mice. The tumors in this model are endocrinologically active, secreting high levels of inhibin and estrogen [29]. The initial formation of spontaneous GCTs is dependent on endocrine hormones, such as androgenic steroids at puberty, implying that the time frame of the first wave of maturing follicles is critical in the development of JGCTs. GCTs from SWR mice have neoplastic potential, as demonstrated by the incidence of metastases through consecutive transplantation. SWR mice have significantly decreased serum levels of follicle-stimulating hormone (FSH) and luteinizing hormone (LH), as seen in human GCT patients, while inhibin-α is robustly increased [31]. The serum levels of progesterone, dihydrotestosterone, and testosterone are also reduced in SWR mice, while tumor-bearing mice have a high capacity for aromatization. Therefore, this mouse model has histological and endocrine characteristics similar to those of human JGCT patients.

Inhibin-α null mice

Inhibins are members of the transforming growth factor-β (TGF-β) family and inhibit the synthesis and secretion of pituitary FSH [32,33]. These peptide hormones are expressed in the adrenal gland, pituitary, brain, spleen, kidney, central nervous system, placenta, and the gonads [32]. Targeted deletion of the Inha gene causes the development of gonadal stromal tumors as early as 4 weeks of age in both males and females, with nearly 100% penetrance [34]. Female mice develop multifocal, hemorrhagic, bilateral tumors with tubular or cord-like structures.
Comparison of the serum FSH levels in these Inha-null mice shows a two- to threefold increase compared to heterozygous or wild-type controls, a characteristic that is secondary to the lack of suppression by inhibin. This suggests that downstream molecules in the inhibin signaling pathway are important for GCT formation, and that imbalances in gonadotropins might also play a role [35]. As shown in follitropin receptor knockout (FORKO) mice [36], perturbations in the gonadotropin signaling pathway and milieu of the ovary induce the development of GCTs. These inhibin-deficient mice have a high serum level of activin secreted from the ovarian tumor, resulting in a cachexia-like wasting syndrome that is lethal to the mice at the onset of ovarian tumor development [34,37]. High levels of activin caused by elimination of the Inha gene induce activation of the SMAD2/3 signaling pathway in granulosa cells, stimulating proliferation [34]. The importance of SMAD3 for tumor progression is supported by studies of Madh3−/− (SMAD3-null) and Inha double-knockout mice, which show slower progression of GCTs; SMAD2 is not necessary for inducing tumor formation in inhibin-deficient mice [38-40]. Although inhibin-α null mice develop GCTs, their relevance for human GCTs is not completely understood, because the majority of human GCT patients have high serum levels of inhibins [41,42]. Nonetheless, this mouse model has been useful for understanding the downstream molecular pathways of GCT formation.

Mice with the simian virus 40 T-antigen fusion gene

The simian virus 40 T-antigen is a proto-oncogene that can transform cells. Transgenic mice overexpressing the simian virus 40 T-antigen driven by the murine inhibin-α subunit promoter (Inha/Tag) were originally developed as a source of granulosa tumor cell lines to investigate the characteristics of GCTs. The ovarian tumors in these mice are prominent at 5 to 6 months of age and occur with 100% penetrance [43]. Tumor cells from these mice are atypical, mitotic, and have the appearance of granulosa cells, though the tumor cells cannot be classified as AGCT or JGCT. Additionally, a transgenic mouse line overexpressing the simian virus 40 T-antigen driven by the anti-Müllerian hormone (AMH) promoter also develops ovarian tumors [44]. The ovarian tumors are bilateral, with 10% of mice developing tumors at 3 to 8 months of age. These mouse ovarian tumors contain serous cystic spaces, large hemorrhages, and necrosis, with further metastases to the lungs or liver in later stages. Cell lines derived from these mice maintain granulosa cell characteristics, such as the expression of the LH receptor, the production of estradiol, responsiveness to human chorionic gonadotropin (hCG), and the presence of granulosa cell markers, as well as AMH type II receptors [44].

Mice with overexpression of the LHβ subunit (bLHβ-COOH-terminal peptide)

These mice were generated using a bovine LH β-subunit/hCG β-subunit COOH-terminal peptide fusion (bLHβ-CTP) [45]. Tumorigenesis in these mice occurs at 4 to 8 months of age. These transgenic mice show unusually high levels of LH, as well as precocious puberty, a prolonged luteal phase, the formation of cysts and, thus, GCTs, causing infertility in transgenic female mice depending on their genetic predisposition [46]. High levels of estradiol, testosterone, and progesterone are present, and LH is especially high, suggesting that elevated LH might contribute to the formation of GCTs. According to a report describing the crystal structure of hCG [47], LH has growth factor-like properties.
Excessive levels of LH in transgenic mice result in angiogenesis and growth aberrations, indicating that abnormal gonadotropin stimulation is tumorigenic. However, these mice have many unique non-gonadal phenotypes due to chronically elevated steroid levels. The importance of excessive LH in inducing tumor formation is also supported by mice deficient in both inhibins and LH [48]. These mice show a delay in tumor progression and increased survival. Moreover, bLHβ-CTP and Inha/Tag double-transgenic mice show much faster gonadal tumorigenesis with elevated serum levels of LH [49]. Altogether, these mouse models indicate that excessive LH levels can induce tumor formation.

Double conditional knockout mice of SMAD1 and SMAD5 in granulosa cells

Female mice deficient in both Smad1 and Smad5 in granulosa cells (using Amhr2-cre) are subfertile and develop GCTs with 100% penetrance [59]. This mouse model shows a phenotype at 2 to 3 months of age that is similar to that of humans with JGCTs [60]. Furthermore, 80% of these mice develop peritoneal and lymphatic metastases by 8 months of age [59]. This mouse model suggests that a signaling pathway involving the activation of SMAD2/3 or the disruption of SMAD1/5 is conducive to JGCT pathogenesis. SMAD1/5 double-knockout mice had lower serum levels of FSH and altered LH and estradiol levels compared with control animals. Moreover, serum levels of inhibin-α and AMH are highly elevated in this mouse model. WNT/β-catenin signaling in SMAD1/5 double-knockout mice is not significantly different from that observed in wild-type mice, suggesting that WNT/β-catenin may not contribute to JGCTs in the SMAD1/5 mouse model. Recent reports have demonstrated that TGFβ-SMAD signaling contributes to JGCT development, based on a study of Smad1/5/4 triple-knockout mice, which were found to exhibit delayed tumor formation and no evidence of metastasis, in contrast to Smad1/5 double-knockout mice [61]. These findings suggest that the TGFβ signaling pathway contributes to tumor formation in JGCTs through the repression of apoptosis.

Mice with double-mutant BMPR1A and BMPR1B in granulosa cells

The BMP signaling pathway is important for granulosa cell development. Among the type I receptors of the BMP signaling pathway, BMP receptors 1A and 1B are expressed in granulosa cells [62], and the knockout of both genes results in GCTs [63]. Tumors from Bmpr1a/1b double-knockout mice show upregulated TGFβ and TGFβ target genes. Bmpr1a/1b double-knockout mice develop bilateral ovarian tumors by 8 months (≤40%) and 16 months (≤90%) of age. The gene expression profiles of ovarian tumors from Bmpr1a/1b double-knockout mice are similar to those of Smad1/5 double-knockout mouse tumors, although some differences between the two exist. This implies that the BMP signaling pathway, from the BMP ligands through BMPR1A/1B to SMAD1/5, is important for the regulation of tumor suppressor pathways in mouse granulosa cells.

Estrogen receptor-β knockout mice

Pituitary and ovarian tumors are observed in female estrogen receptor (ER)-β knockout mice at 2 years of age. GCTs in ERβ−/− mice secrete estrogen and have high expression of ERα [64]. Pituitary tumors induce high expression of gonadotropin-releasing hormone, which consequently causes the proliferation of granulosa cells, as well as endometrial hyperplasia, resulting in ovarian tumors [64].
Regarding the role of ERα in ovarian tumor formation, Couse and Korach [65] showed that 40% of ERα/β double-knockout female mice developed sex cord-stromal ovarian tumors between 15 and 20 months of age. However, Fan et al. [64] showed that ERα/β double-knockout female mice did not develop ovarian tumors, emphasizing the necessity of ERα in the development of GCTs. Consistent with this observation, female ERα−/− and Inha−/− double-knockout mice show an enhanced onset of GCT formation, shorter survival, and hypergonadotropism caused by disruption of the negative feedback mechanism in the absence of ERα [66]. Burns et al. [66] also showed that ERα was the genetic modifier involved in the development of ovarian tumors, using ERα/Inha double-knockout mice and ERα/β/Inha triple-knockout mice. However, the survival curves for ERβ/Inha double-knockout mice overlapped with those of Inha mice, indicating that the loss of ERβ alone is not enough to influence tumor formation. High expression of the LH receptor and SMAD3 is seen in both ERβ/Inha double-knockout and inhibin-α knockout mice. Therefore, ER signaling pathways may have protective effects against tumor formation in females, with acceleration of tumor formation upon mutations at the ERα locus and the loss of both ERα and ERβ.

Mice with depletion of Foxo1/3 and Pten

The selective inactivation of the Foxo1 and Foxo3 genes in mouse ovarian granulosa cells leads to the development of GCTs in ≤20% of Foxo1/3 double-knockout mice by 6 to 8 months of age. Although Pten conditional knockout mice with loss of the Pten gene in granulosa cells (Pten f/f;Cyp19-Cre) show persistent nonsteroidogenic luteal cells [67], some mice (1%-7%) develop GCTs in the Pten f/f;Amhr2-Cre strain [58]. Additional inactivation of the Pten gene in the Foxo1/3 strain enhances the onset of GCT formation to 65% in Foxo1/3/Pten triple-knockout mice at approximately 2 to 3 months of age [68], suggesting that the loss of Pten in the Foxo1/3 double-knockout strain has a synergistic effect, inducing the formation and growth of GCTs. The loss of Foxo1/3/Pten contributes to the formation of GCTs because FOXO1/3 in granulosa cells regulates follicular development and apoptosis. This mouse model shows high expression of the Foxl2, Gata4, and Wnt4 genes, similar to what is observed in human GCT patients. Furthermore, the serum hormone profiles, which indicate elevated estradiol levels, high levels of activin (specifically, activin βB) and inhibin, and low serum LH and FSH levels, suggest that the Foxo1/3 double-knockout mice can serve as a model for adult human GCTs. The tumor granulosa cells exhibit high expression of p-SMAD2/3 in their nuclei, indicating that activin/TGFβ signaling is active in the formation of GCTs.

Oocyte-driven PIK3CA* mice

This mouse model was generated by crossing mice expressing oocyte-specific Cre-recombinase (GDF9-iCre) [69] with mice expressing a constitutively active mutant PI3K (PIK3CA*) [70]. In these mice, the elevation of phosphatidylinositol (3,4,5)-trisphosphate levels within oocytes promotes the survival of follicles and anovulation due to endocrine abnormalities. This mouse model develops GCTs when the mice are mature, at 2 months of age [71]. The hormonal profiles show high levels of activin, inhibin, AMH, testosterone, and progesterone, and low levels of FSH and LH. The molecular signatures of this mouse model include high levels of SMAD3, FOXL2, and GATA4.
This mouse might provide a good model for identifying the molecular mechanisms of GCT initiation and formation due to the absence of the mutation in the granulosa cells [70,71].

Conclusion

Despite significant progress in understanding GCT biology, questions remain regarding the molecular mechanisms of JGCT and AGCT development. Although some mutations that induce the formation of AGCTs and JGCTs have been discovered, the events that initiate tumors and drive recurrence are still unclear. Therefore, suitable models are very important in understanding the molecular pathways underlying GCT formation. In particular, it is important to revisit these mouse models (Table 1) and reassess their characteristics compared to human GCTs based on histopathology, molecular pathways, and recurrence. Novel mouse models will be useful in answering challenging and persistent questions about GCT etiology. In conclusion, mouse models are powerful tools that aid in understanding the etiology and biological mechanisms driving the initiation and progression of GCTs, as well as help in the development of new detection methods and treatments.

Table 1 - Summary of the mouse models of granulosa cell tumors. These are the most relevant mouse models, but others exist for determining molecular mechanisms and signaling pathways in detail. GCT, granulosa cell tumor; FSH, follicle-stimulating hormone; LH, luteinizing hormone; AMH, anti-Müllerian hormone; DHEA, dehydroepiandrosterone; JGCT, juvenile granulosa cell tumor; N/A, not available; ES cells, embryonic stem cells; CTP, C-terminal peptide; ER, estrogen receptor; AGCT, adult granulosa cell tumor.
IDENTIFICATION AND VALIDATION OF REFERENCE GENES FOR THE NORMALIZATION IN REAL-TIME RT-QPCR ON RICE AND RED RICE IN COMPETITION, UNDER DIFFERENT NITROGEN DOSES

Real-time reverse transcription polymerase chain reaction (RT-qPCR) is an important technique for analyzing differences in gene expression due to its sensitivity, accuracy, and specificity. However, before analyzing the expression of a target gene, it is necessary to identify and evaluate the stability of candidate reference genes for proper normalization. This study aimed to evaluate the stability of candidate reference genes in order to identify the most appropriate genes for the normalization of transcription in rice and red rice in competition under different nitrogen levels, as well as to demonstrate the effectiveness of the selected reference gene for the expression of cytosolic ascorbate peroxidase (OsAPX2). Eleven candidate reference genes were assessed using RefFinder, which integrates the four leading software tools (geNorm, NormFinder, BestKeeper, and the comparative delta-Ct method), in addition to analysis of variance to identify genes with lower standard deviation and coefficient of variation values. Eight of the eleven genes showed the desired amplification efficiency and, among them, the gene UBC-E2 had the highest stability according to RefFinder and the analysis of variance. The expression of the gene OsAPX2 proved effective in validating the candidate reference gene. This study is the first survey on the stability of candidate reference genes in rice and red rice in competition, providing information to obtain more accurate results in RT-qPCR.

INTRODUCTION

The analysis of gene expression is essential to understand several aspects of plant biology (Martin et al., 2008). Real-time reverse transcription polymerase chain reaction is currently one of the most powerful and sensitive techniques for the analysis of gene expression, and it contributes substantially to improving the understanding of signaling, the development of metabolic pathways, and cell processes (Paolacci et al., 2009). Reliable quantification of gene expression levels by real-time RT-qPCR analysis requires the normalization and control of many parameters, such as initial sample size, RNA integrity, the enzymatic efficiency of cDNA synthesis and PCR amplification, and the transcriptional activity of the tissue and cells analyzed (Bustin, 2002; Ginzinger, 2002). For normalization, the use of internal control genes (reference genes) is the most reliable and convenient method to estimate the amount of initial RNA (Thellin et al., 1999), as well as to reduce possible errors in quantifying the expression of the gene, which is obtained by comparing the expression levels of the gene of interest in samples with those of stably expressed control genes (Paolacci et al., 2009). It is likely that one or more genes are expressed constitutively in a specific organ and environment (Andersen et al., 2004). Thus, the systematic selection and validation of reference genes should be performed prior to all real-time RT-qPCR analyses (Gutierrez et al., 2008). Some genes are usually designated as reference genes due to their roles in basic cell processes, primary metabolism, and maintenance of cell structure (Czechowski et al., 2005; Wong and Medrano, 2005).
Reference genes traditionally used in real-time RT-qPCR studies on plants thus include actin (Maroufi et al., 2010), tubulin (Wan et al., 2010), ubiquitin (Chen et al., 2011), 18S ribosomal RNA (Jain et al., 2006), and 40S ribosomal RNA (Cruz-Rus et al., 2011). The reliability of gene expression results therefore depends on the use of reference genes suitable for the crop and study conditions. To date, studies on reference genes in weeds are scarce, and only a few reference genes have been validated under herbicide stress, in a study of Alopecurus myosuroides and herbicides inhibiting acetyl-CoA carboxylase (Petit et al., 2012) and of acetolactate synthase inhibitors in Lolium sp. (Duhoux and Délye, 2013). Among the factors that directly interfere with a cropping process, the presence of weeds stands out due to the competition for resources. Competition may be either intra- or interspecific, and occurs when one or more resources required for development and growth are too limited to meet the needs of all individuals present in the environment (Radosevich et al., 2007). In the present study, we assessed the stability of candidate reference genes in order to identify the most appropriate genes for the normalization of transcription in rice and red rice competing under different nitrogen levels, and demonstrated the effectiveness of the selected reference gene through the expression of cytosolic ascorbate peroxidase (OsAPX2), a key enzyme in the antioxidant metabolism.

Plant material and experimental conditions

We used the rice cultivar IRGA 424 and a red rice biotype, varying the ratio of plants per pot (without (100:0) and with (50:50) competition). Different nitrogen dosages (0, 120, and 240 kg N ha⁻¹) were added to the soil. Sixty days after germination, we collected the shoots of the plants and stored them at −80 °C until total RNA extraction and molecular analysis.

Total RNA extraction and cDNA synthesis

Total RNA was extracted from the leaves of rice and red rice using the PureLink™ reagent (Plant RNA Reagent, Invitrogen™), as recommended by the manufacturer. The cDNAs were obtained using the commercial kit SuperScript First-Strand System for real-time RT-qPCR (Invitrogen™), following the manufacturer's recommendations. The quality of the RNA was evaluated by 1% (w/v) agarose gel electrophoresis. The quantity and purity of the RNA were determined using a NanoDrop™ 2000 spectrophotometer (Thermo Scientific), with a 260/280 nm ratio in the 1.9 to 2.2 interval and a 260/230 nm ratio around 2.0 considered acceptable for use in real-time RT-qPCR.

Determination of reference and target genes and real-time PCR conditions

For reference genes, we selected 11 primer pairs mentioned in the literature in studies on rice, used as internal controls in real-time RT-qPCR analyses and supposedly showing no significant variation among the treatments analyzed (Table 1). In order to validate the reference gene, we used the ascorbate peroxidase gene (OsAPX2, EC 1.11.1.11), Forward (5'AGAGTCAGTACGATCAAGAC3') and Reverse (5'TCTTGACAGCAAATAGCTTGG3') (Zhang and Hu, 2009).
For the amplification reaction, we used a total volume of 12 µL, containing 6.25 µL of LightCycler® 480 SYBR Green I Master (Roche Applied Science), 0.5 µM of each primer (10 mM stock), 1 µL of cDNA (0.2 µg), and water to complete the final volume. The amplification conditions were as follows: one cycle of 95 °C (5 min), followed by 45 cycles of denaturation at 95 °C (20 s), 60 °C (15 s), and 72 °C (20 s), followed by the dissociation curve with denaturation at 95 °C (5 s), cooling to 70 °C (1 min), gradual heating in 0.11 °C steps up to 95 °C, and cooling to 40 °C (30 s), using the LightCycler 480 system (Roche Applied Science). All reactions were carried out in triplicate for each cDNA sample. Amplicon purity was confirmed by the presence of a single melting peak.

Efficiency and expression stability analysis of the reference genes

The PCR efficiency was obtained from four serial dilutions of cDNA (1:1, 1:5, 1:25, and 1:125) to generate the standard slope for each primer pair tested. The value of E was estimated by the equation E = 10^(−1/slope) (Rasmussen, 2001), and efficiency values between 1.8 and 2.2 were considered acceptable for reference genes. To classify and determine the performance of each reference gene, the average Ct values for each test sample obtained in each real-time RT-qPCR reaction were used. Data were subjected to analysis of variance using the Statistical Analysis System Winstat, Version 2.0 (Machado and Conceição, 2003). Genes with lower standard deviation and coefficient of variation were considered stable. At the same time, we observed changes in the expression levels both in the crop and in the weed using the web-based tool RefFinder (www.leonxie.com/referencegene.php), which integrates all four software algorithms: geNorm, NormFinder, BestKeeper, and the comparative delta-Ct method. The detailed calculations for each of these methods are described in Chen et al. (2011). The mean Ct value of each sample for each primer was used as input data on the website, and the Ct values belonging to the crop and the weed were analyzed together.

Validation of reference genes

In order to ensure the reliability of the potential reference gene, the expression profile of OsAPX2 was measured and normalized with the most stable reference gene as determined by RefFinder and the analysis of variance. The amplification conditions for real-time RT-qPCR were the same as those described above. The relative expression data were calculated using the formula QR = 2^(−ΔCt), modified from that proposed by Pfaffl et al. (2002), and were subjected to analysis of variance (p ≤ 0.05) to test possible variations for the factors nitrogen, competition, and plant (rice and red rice), in isolation, as well as the interaction between the factors, according to Tukey's test (p ≤ 0.05).
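As a minimal numerical sketch of the two calculations just described, the amplification efficiency E = 10^(−1/slope) from a serial-dilution standard curve and the relative expression QR = 2^(−ΔCt), the following Python snippet uses purely hypothetical Ct values; the numbers are illustrative, not data from this study:

```python
import numpy as np

# Hypothetical mean Ct values for the four-point serial dilution (1:1, 1:5, 1:25, 1:125)
dilutions = np.array([1.0, 5.0, 25.0, 125.0])
ct = np.array([18.2, 20.6, 22.9, 25.3])          # illustrative Ct values only

# Standard curve: Ct is linear in log10 of the relative template amount
log_amount = np.log10(1.0 / dilutions)
slope, intercept = np.polyfit(log_amount, ct, 1)

# Amplification efficiency, E = 10^(-1/slope) (Rasmussen, 2001);
# values between 1.8 and 2.2 were taken as acceptable in this study
E = 10.0 ** (-1.0 / slope)
print(f"slope = {slope:.3f}, E = {E:.2f}, acceptable = {1.8 <= E <= 2.2}")

# Relative expression normalized to the reference gene, QR = 2^(-dCt),
# with dCt = Ct(target) - Ct(reference); values again illustrative
ct_target, ct_reference = 24.1, 21.5
QR = 2.0 ** (-(ct_target - ct_reference))
print(f"QR = {QR:.3f}")
```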
Efficiency and stability of the reference genes

The amplification efficiency of the reference genes was calculated individually from the logarithm (log) of the cDNA dilutions for the crop and the weed (Table 1). The most appropriate dilution for sample amplification was 1:25, which was subsequently used to validate the target gene. The efficiency ranged from 1.5 to 2.87 for rice and from 1.82 to 3.45 for red rice. For both competitors, the endogenous genes 18S ribosomal RNA (18S), cyclophilin, eukaryotic elongation factor 1-a (Eef-1a), eukaryotic initiation factor 4-a (eLF-4a), glyceraldehyde-3-phosphate dehydrogenase (GAPDH), ubiquitin-conjugating enzyme E2 (UBC-E2), ubiquitin 5 (UBQ5), and ubiquitin 10 (UBQ10) had efficiencies in the expected range (between 1.8 and 2.2) and were used to test stability; afterwards, the most stable one was used along with the target gene. On the other hand, the actin (ACT), beta tubulin (β-Tubulin), and aquaporin (TIP41) genes had efficiencies outside the expected range.

For the study of stability, we analyzed the coefficient of variation (CV%), standard deviation (SD), and means of the reference genes of rice and red rice that had efficiencies between 1.80 and 2.20. For the crop, UBC-E2, UBQ10, and eLF-4a had the lowest CV (3.39, 4.14, and 3.45, respectively) and SD (0.94, 1.03, and 1.08, respectively) values, whereas for the weed the genes UBC-E2, UBQ5, and eLF-4a had the lowest CV (1.34, 1.57, and 2.18, respectively) and SD (0.38, 0.42, and 0.60, respectively) values, indicating higher expression stability (Table 2). In order to evaluate the expression stability of the reference genes, in addition to the analysis of variance, we also calculated and compared the mean expression stability (M) using the geNorm, NormFinder, and BestKeeper software and the comparative delta-Ct method through the web-based tool, which provides stability rankings. The lower the geometric mean, the higher the expression stability of the reference gene, and M values exceeding the cut-off value of 1.5 are not considered stable among treatments (Chen et al., 2011).

The determination of an appropriate normalizing gene for the study of gene expression is the first step in the analysis of the relative expression patterns of the genes of interest in a given experiment, providing more reliability to the results obtained. The geNorm software is based on the principle that the ratio between the expression levels of two ideal reference genes should remain constant across different experimental conditions and/or organs/tissues. The value of M is determined as the average variation of a gene compared to the other ones tested. Based on the M values calculated for the eight candidate normalization genes for rice competing with red rice, Cyclophilin (M=1.27) and UBC-E2 (M=1.27) were the most stable genes, and 18S (M=1.67) and GAPDH (M=1.85) were the most variable ones (Figure 1A). For red rice, the genes eLF-4a (M=0.70) and UBC-E2 (M=0.70) were the most stable, whereas UBQ10 (M=1.01) and 18S (M=1.37) were the least stable ones (Figure 2A).
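A compact sketch of this kind of stability screening follows: the variance-based criterion (SD and CV% of the raw Ct values) together with a geNorm-style M, taken here as the average standard deviation of the pairwise Ct differences across samples (for efficiencies near 2 this approximates the SD of the pairwise log2 expression ratios). The Ct matrix and gene values below are hypothetical, and the real geNorm additionally proceeds by stepwise exclusion of the least stable gene:

```python
import numpy as np

# Hypothetical Ct matrix: rows = samples, columns = candidate reference genes
genes = ["UBC-E2", "Cyclophilin", "eLF-4a", "UBQ5", "18S"]
ct = np.array([
    [21.3, 22.1, 20.8, 19.9,  9.5],
    [21.5, 22.4, 21.1, 20.3, 10.6],
    [21.2, 22.0, 20.9, 20.0,  9.9],
    [21.6, 22.6, 21.2, 20.5, 11.1],
])

# Variance-based criterion used alongside RefFinder: SD and CV% per gene
sd = ct.std(axis=0, ddof=1)
cv = 100.0 * sd / ct.mean(axis=0)

# geNorm-style stability M: for gene j, the mean over all other genes k of the
# SD (across samples) of the pairwise difference Ct_j - Ct_k
def genorm_m(ct_matrix: np.ndarray) -> np.ndarray:
    n_genes = ct_matrix.shape[1]
    m_values = np.empty(n_genes)
    for j in range(n_genes):
        pair_sds = [np.std(ct_matrix[:, j] - ct_matrix[:, k], ddof=1)
                    for k in range(n_genes) if k != j]
        m_values[j] = np.mean(pair_sds)
    return m_values

m = genorm_m(ct)
for i in np.argsort(m):  # lower M = more stable
    print(f"{genes[i]:12s} M = {m[i]:.3f}  SD = {sd[i]:.2f}  CV% = {cv[i]:.2f}")
```

RefFinder then aggregates the per-method rankings through a geometric mean of ranks, which is why a gene that ranks near the top under every criterion, as UBC-E2 does here, ends up first overall.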
According to the algorithm of the NormFinder software, which analyzes both intra- and intergroup variations, the candidate genes in rice presenting the lowest M values are cyclophilin (M=0.70) and UBC-E2 (M=0.81). The highest M values were for 18S (M=1.85), followed by GAPDH (M=2.11) (Figure 1B). The same was observed in red rice: the genes UBC-E2 (M=0.41) and cyclophilin (M=0.49) were the most stable ones, whereas UBQ10 (M=1.23) and 18S (M=2.34) were the least stable ones (Figure 2B).

The BestKeeper software algorithm calculates the standard deviation (SD) and establishes the value 1 as the cut-off (SD = 1), with genes presenting an SD value lower than 1 (SD < 1) considered stable and those with values higher than 1 (SD > 1) considered unstable. The analysis indicated only UBQ5 as a stable reference gene in rice, presenting SD = 0.66, whereas the other genes were considered unstable (Figure 1C). For red rice, the most stable genes were UBQ5 (M=0.65), UBC-E2 (M=0.77), cyclophilin (M=0.83), eLF-4a, and GAPDH (M=0.95), while the others did not show stability (Figure 2C).

According to the comparative delta-Ct method, in rice the genes cyclophilin (M=1.56) and UBC-E2 (M=1.60) were the most stable, and the stability ranking of the first three candidate normalization genes was close to that of the geNorm algorithm, even though different M values were obtained for each gene (Figure 1D). In red rice, the same genes, cyclophilin (M=1.09) and UBC-E2 (M=1.15), were the most stable ones, whereas UBQ10 and 18S (M=1.54 and 2.45, respectively) were the least stable ones (Figure 2D).

According to the general acceptance criteria, the ideal reference gene is stably expressed (or, at least, varies only slightly in expression) among the sets of samples investigated and has an expression level comparable to the target gene (Andersen et al., 2004). Appropriate reference genes have already been identified for many crops, especially for model plants (Czechowski et al., 2005); however, few reference genes have been validated for weeds, because most of them are species with no genomic information available. Also, extrapolation from other species, even from closely related taxa, is not indicated, since the expression of putative reference genes varies between different sets of organs and different experimental conditions (Hruz et al., 2011); i.e., there is no universal reference gene.

Based on the results of efficiency, analysis of variance, and mean stability according to the combination of the geNorm, NormFinder, BestKeeper, and comparative delta-Ct algorithms, the normalizing gene UBC-E2 was selected for rice and red rice. One of the three main components of the ubiquitination system is the ubiquitin-conjugating enzyme (UBC-E2), which binds ubiquitin (Ub) to the substrate (Unver et al., 2012). Ubiquitination is involved in many important processes, such as plant growth and development, hormonal regulation, flowering, and responses to biotic and abiotic stresses (Dreher and Callis, 2007).

In an analysis of ten normalizing genes as internal controls for gene expression studies on 25 rice samples, UBQ5 and Eef-1a were found to be the most stably expressed, whereas UBQ10 exhibited the least stable expression in different tissues or cell types at different developmental stages (Jain et al., 2006). This result differs from that observed in this study, since the gene UBC-E2 was the most stable one, presenting low CV, SD, and M values, indicating higher expression stability both for rice and red rice. Nevertheless, Li et al.
(2012), submitting rubber tree (Hevea brasiliensis) to different conditions, found the gene UBC to be the most stable of several tested. Reference genes are regulated differently in different plant species and may exhibit distinct expression patterns. Therefore, a reference gene presenting stable expression in one organism may or may not be appropriate for the normalization of gene expression in another organism under a given set of conditions, and needs to be validated before its use (Jain et al., 2006). Several studies have shown that the expression of the same reference gene may vary in certain situations (Thellin et al., 1999). This may be partly explained by the fact that reference genes are not only implicated in basal cell metabolism, but also take part in other cell functions (Singh and Green, 1993; Ishitani et al., 1996).

Validation of the reference gene: expression of OsAPX2

For the gene OsAPX2 in rice competing with red rice, there was no interaction between the factors tested, only an effect of nitrogen dose, with increased expression at the dosage of 240 kg ha⁻¹ nitrogen (QR=3.43), differing from 0 kg ha⁻¹ (QR=1.00) and 120 kg ha⁻¹ (QR=0.90) (Figure 3). In red rice, there was an interaction between the factors tested for the expression of the gene OsAPX2, with higher expression of the gene at the dosage of 240 kg ha⁻¹ and the 50% ratio in comparison to the monoculture (QR=3.34), and no differences between the ratios at the dosages of 0 kg ha⁻¹ and 240 kg ha⁻¹ (Figure 4).

The APXs are the main peroxidases removing H2O2 inside cells and act along with other enzymes that play a role in the ascorbate-glutathione cycle (Foyer and Noctor, 2000). APX may show increased activity in response to environmental stresses such as salinity, low temperatures, metal poisoning, drought, high temperatures, ozone, high light intensity, and pathogens, among others, as reported for different plant species (Yoshimura et al., 2000; Mittova et al., 2004; Sharma and Dubey, 2004). When rice and red rice were in interspecific competition and subjected to the dosage of 240 kg ha⁻¹ nitrogen, the expression of the gene OsAPX2 increased, indicating possibly greater oxidative stress. According to Kandlbinder et al. (2003), excess nitrogen could promote an increase of APX in rice.

Based on the results of efficiency, analysis of variance, and RefFinder, the normalizing gene UBC-E2 was selected for rice and red rice in competition. The expression of the gene OsAPX2 proved effective in validating the candidate reference gene. The study provides a basis for other gene expression analyses aimed at understanding the mechanisms of competitive stress tolerance in rice and red rice plants.

Figure 1 - Average expression stability (M) according to the geNorm (A), NormFinder (B), BestKeeper (C), and comparative delta-Ct (D) algorithms for eight candidate normalization genes in rice competing with red rice in different ratios and nitrogen levels.

Figure 2 - Average expression stability (M) according to the geNorm (A), NormFinder (B), BestKeeper (C), and comparative delta-Ct (D) algorithms for eight candidate normalization genes in red rice competing with rice in different ratios and nitrogen levels.

Figure 3 - Relative quantification of OsAPX2 gene expression in rice competing with red rice under different nitrogen doses. Upper-case letters are comparisons of nitrogen doses.

Figure 4 - Relative quantification of OsAPX2 gene expression in red rice competing with rice in different ratios and nitrogen doses. Upper-case letters are comparisons of plant ratios within nitrogen doses and lower-case letters are comparisons of nitrogen doses within plant ratios.
Table 1 - Reference primers used for real-time RT-qPCR in rice and red rice in response to the stress caused by competition and nitrogen levels.

Table 2 - General mean (M), coefficient of variation (CV%), and standard deviation (SD) of the eight endogenous genes in rice and red rice.
Heterogeneous microstructure design by pre-manipulating ferrite recrystallization in a cold-rolled medium Mn steel

Medium Mn steel (MMS), regarded as the most representative candidate among the third-generation advanced high strength steels, has received wide attention during the last several decades owing to its exceptional advantages of low cost and excellent strength-ductility properties. In this study, a microstructural strategy for developing heterogeneous microstructures in a cold-rolled MMS is presented. By pre-manipulating the occurrence of ferrite recrystallization, both lamellar-shaped and granular-shaped ultra-fine retained austenite can be obtained after a two-step intercritical annealing process. Various amounts of recrystallized ferrite and martensite of different morphologies can be obtained by adjusting the pre-annealing temperature, which effectively contributes to producing the two types of heterogeneous retained austenite, i.e., lamellar and granular, in the following annealing process. The heterogeneous-structured retained austenite enables an excellent strength-ductility combination and reduced Lüders strain in the cold-rolled MMS.

Introduction

Over the past decades, significant efforts have been made to develop advanced high strength steels (AHSSs) towards the targets of lightweighting, energy efficiency, and structural safety of automobiles [1]. Among various candidates, medium Mn steel (MMS) with 3-12 wt% Mn content is becoming the most promising of the third generation of AHSSs due to its excellent strength and ductility combination [2-4]. MMS is essentially a typical transformation induced plasticity (TRIP) assisted steel, so that sufficient retained austenite with suitable stability is required to afford good mechanical properties [5]. For MMS, the retained austenite is commonly produced through austenite reverse transformation (ART) annealing [6]. However, very long annealing durations are usually required due to slow Mn diffusion, which leads to low production efficiency and limits practical application in industry. The ART process can be accelerated by introducing deformation dislocations into the MMS by cold rolling, such that austenite reversion and ferrite recrystallization are easily triggered to produce ultra-fine duplex microstructures of recrystallized ferrite and metastable austenite, which can lead to the comprehensive mechanical properties of high strength and good elongation [7]. However, typical discontinuous yielding behavior is then to be expected, deriving from the ultra-fine equiaxed duplex microstructure during tensile deformation. The formation and propagation of Lüders bands can deteriorate the surface quality of the cold-worked steel product, which is undesirable for the commercialization of cold-rolled MMS [8].
Several methods have been proposed to reduce the Lüders strain in cold-rolled MMSs, e.g., pre-heat treatment [9], designing bimodal grain size microstructures [10], preserving deformed structures [11], reducing the deformation reduction [12], introducing pre-strain [13], and forming heterogeneous microstructures [14]. The transformations during the intercritical annealing of cold-rolled MMS are intrinsically related, including ferrite recrystallization, austenite reverse transformation, and alloy element partitioning, which allows great flexibility in tailoring diversified microstructures. In this study, a two-step annealing strategy was used to design heterogeneous microstructures in a 0.15C-4.7Mn cold-rolled MMS. The effectiveness of recrystallization control via adjusting the pre-annealing temperature was analyzed. The formation of heterogeneous microstructures and their influence on the stability of retained austenite and the mechanical properties are discussed.

Experimental

The nominal composition of the MMS used in the present study was Fe-4.7Mn-0.15C (wt.%). The equilibrium transformation temperatures Ae1 and Ae3 were calculated to be 502 °C and 731 °C, respectively, using the Thermo-Calc software. The as-received MMS was cast in a 50 kg vacuum induction melting furnace, then homogenized at 1200 °C for 10 h and rolled between 1050 °C and 900 °C to produce a hot-rolled plate of 3 mm thickness. After soft annealing at 500 °C for 1 h, the plate was cold-rolled to a final thickness of 1.5 mm. The microstructure of the cold-rolled sheet mainly contains deformed martensite with cementite precipitates, as shown in figure 1a. Here, a two-step intercritical annealing strategy was adopted. The cold-rolled MMS was first pre-annealed at 680, 700, or 720 °C for 10 min, such that different scenarios of ferrite recrystallization took place, producing various microstructures with different degrees of recrystallization. Then a conventional ART annealing of 650 °C for 30 min was subsequently performed using the pre-manipulated recrystallized microstructures as the precursors. The samples after two-step annealing are referred to as P680-ART, P700-ART, and P720-ART. For comparison, a conventional ART annealing, during which the cold-rolled MMS was intercritically annealed at 650 °C for 1 h, was also considered (hereafter referred to as C-ART). The microstructures of the annealed samples were characterized with a MERLIN Compact field emission scanning electron microscope (SEM) equipped with an electron backscatter diffraction (EBSD) detector operated at 20 kV. Samples for EBSD were prepared by mechanical polishing finished with fine polishing using a SiO2 suspension. A Talos F200X field emission transmission electron microscope (TEM) with energy dispersive X-ray spectroscopy (EDXS) was used to characterize the fine microstructures and the distributions of alloy elements. The TEM foils were prepared by twin-jet electro-polishing with a solution containing 5% perchloric acid and 95% ethanol at a temperature of about −30 °C and a voltage of 30 V. Uniaxial tensile tests were carried out on a Zwick/Roell Z150 testing machine at a constant rate of 2.4 mm/min. The tensile samples were prepared along the rolling direction with a gauge length of 25 mm and a width of 4 mm.
Results and Discussion

Figure 1 shows the microstructures of the cold-rolled MMS produced by the two-step annealing using the microstructure pre-annealed at 700 °C, and by conventional ART annealing. It can be seen that an ultra-fine duplex microstructure of granular-shaped ferrite (αG) and austenite (γG) is produced by the conventional ART processing. The heavily deformed martensite matrix has been completely recrystallized. In the P700-ART sample, however, typical heterogeneous microstructures consisting of the granular-shaped αG and γG, as well as ultra-fine lamellar-shaped ferrite (αL) and austenite (γL), have been obtained. The volume fraction of retained austenite in the P700-ART sample is measured to be about 36.2%, which is higher than the 33.2% of the C-ART sample. This suggests that the ART process has been significantly accelerated by introducing the pre-annealing step.

Figure 2 - IPF maps (a-c), band contrast images (d-f), and microstructure schematics (g-i) of the cold-rolled 0.15C-4.7Mn MMS pre-annealed at 680 °C (a, d, g), 700 °C (b, e, h), and 720 °C (c, f, i). The gray and black lines in the IPF images (a-c) represent boundaries with misorientations of 2° < θ < 15° and θ > 15°, respectively. The green, blue, and black lines in the band contrast images (d-f) represent boundaries with misorientations of 2° < θ < 5°, 5° < θ < 15°, and θ > 15°, respectively.

Ferrite recrystallization manipulation during the pre-annealing

Figure 2 shows the microstructures of the cold-rolled MMS pre-annealed at different intercritical temperatures. Since the pre-annealing temperature is relatively high, the recrystallization of ferrite is easily triggered. It can be seen that after annealing at 680 °C, 700 °C, and 720 °C, ferrite recrystallization and austenite formation have taken place simultaneously, forming a fine dual-phase structure composed of equiaxed recrystallized ferrite and reversed austenite. However, since the adopted pre-annealing temperatures are higher than the optimal temperature of conventional ART treatment for the 0.15C-4.7Mn steel, the concentrations of C and Mn in the formed austenite are low. Therefore, the reversed austenite formed during pre-annealing transforms into martensite during cooling due to its low thermal stability. The microstructure of the sample pre-annealed at 680 °C is composed of 60% recrystallized ferrite and 40% martensite, and the microstructure remains equiaxed with a uniform distribution, as shown in figure 2d. When the pre-annealing temperature is increased to 700 °C, the austenite transformation is obviously promoted. Although the volume fraction of recrystallized ferrite decreases, the grains remain fine and equiaxed at the level of 1-2 μm, which indicates that the coarsening of recrystallized ferrite grains in the formed duplex structure is effectively inhibited. At the same time, some austenite grains tend to merge and coarsen, forming reversed austenite with various grain sizes (figure 2e). It can be seen that the large austenite grains are transformed into fresh martensite during cooling. When the pre-annealing temperature rises to 720 °C, the volume fraction of recrystallized ferrite is further reduced to about 10%, and large lath martensite is formed from the coarsened austenite grains, as shown in figure 2f. There are abundant high-angle grain boundaries and subgrain boundaries in the new martensite grains. In addition, stress is introduced while the martensite transformation takes place. The existence of these substructures and the local stored energy will greatly promote the
subsequent austenite transformation during the ART annealing. In short, control of the volume fraction of recrystallized ferrite can be achieved, and fresh martensite structures with diverse proportions and sizes are obtained, which provides room for the control of retained austenite in the subsequent ART annealing.

Formation of the heterogeneous microstructures

The final microstructures of the two-step annealed cold-rolled MMS are shown in figure 3. For comparison, the microstructure of the C-ART sample is also discussed. Based on the precursors obtained via the different pre-annealing treatments, the samples were subjected to ART treatment at the lower intercritical temperature of 650 °C. Consequently, the thermal stability of the retained austenite is improved, allowing it to be retained at room temperature. As seen in figure 3e, both the ferrite and retained austenite grains of the P680-ART sample inherit the fine and equiaxed characteristics of the pre-annealed sample. The austenite transformation mainly takes place in the equiaxed martensite formed during the pre-annealing. The grain size of the recrystallized ferrite is obviously smaller than that of the C-ART sample, and the austenite grains are also equiaxed, with an average grain size of about 500 nm, which means that no obvious grain growth occurs. The grain refinement significantly reduces the diffusion distance of carbon, so that the kinetics of austenite reverse transformation can be accelerated significantly.

As mentioned above, the volume fraction and grain size of recrystallized ferrite and fresh martensite can be controlled through the pre-annealing temperature. Unlike the fine martensite of the P680-ART sample, coarsened martensite with sub-structures can also be formed when increasing the pre-annealing temperature, which gradually acts as the major nucleation site for austenite formation. In addition to equiaxed ferrite grains, two kinds of fine austenite grains with heterogeneous morphology are simultaneously obtained in the P700-ART sample: lamellar (γL) and equiaxed (γG). The lamellar-shaped austenite is mainly formed along the martensite lath interfaces inside the large fresh martensite generated in the pre-treatment, while the equiaxed austenite mainly originates from the small martensite grains. Therefore, in this case, heterogeneous-structured austenite composed of two different morphologies is formed, as shown in figure 3f. For the P720-ART sample, large-scale coarsening occurs in the reversed austenite, and abundant grain boundaries and substructures are formed during cooling. In the subsequent annealing, the formation of austenite mainly occurs at the martensite lath interfaces, which contributes to the generation of uniform lamellar-shaped austenite (figure 3g). Generally, by pre-annealing, two morphologies of martensite, small and large, are formed in the microstructure. After the second-step ART treatment, martensite of relatively small size transforms into equiaxed retained austenite (γG), while larger martensite colonies transform into lamellar retained austenite (γL) and lamellar ferrite (αL), resulting in the formation of ultra-fine heterogeneous microstructures.

Figure 4 shows the duplex microstructures and associated Mn distributions developed by the pre-annealing at 700 °C and after the two-step ART treatment. It can be seen that significant Mn partitioning from the ferrite matrix to the intercritical austenite has occurred during pre-annealing. The increased Mn diffusivity due to the high annealing temperature
may accelerate Mn partitioning between the two phases, introducing pronounced chemical heterogeneity in the resulting duplex microstructures, as shown in figure 4b. During the subsequent ART annealing, the Mn-enriched fresh martensite colonies may act as active Mn reservoirs and provide preferred routes for austenite formation, owing to the higher chemical driving force and densely tangled dislocations. The Mn enrichment is evidently inherited by the newly formed heterogeneous austenite in the second ART step. The equiaxed austenite forms by the same austenite-reversion mechanism as in the pre-annealing, but its transformation kinetics are accelerated by the prior Mn enrichment. In addition, continued Mn partitioning from the ferrite stabilizes the equiaxed austenite down to room temperature. The second type, fine lamellar austenite, forms in the interior of the relatively coarse martensite colonies. Its increased Mn content comes not only from the chemical heterogeneity inherited from the prior fresh martensite, but also from the further formation of the adjacent lamellar ferrite through a diffusive austenite reverse transformation (figure 4e). The nanoscale lamellar austenite therefore gains higher stability, as the lamellar ferrite inhibits austenite coarsening. These two types of retained austenite span wider ranges of chemical concentration and morphology, producing heterogeneous TRIP effects during tensile testing compared with the more homogeneous austenite in the C-ART sample.

Tensile behaviors of the heterogeneous microstructures

Figure 5 shows the tensile behavior of the cold-rolled MMS deformed at room temperature. By pre-manipulating the ferrite recrystallization, the mechanical properties of the final annealed samples are significantly improved. The product of strength and elongation for the two-step annealed samples increases to 37-41 GPa·%, compared with 32.3 GPa·% for the C-ART sample. More importantly, the Lüders strain of the pre-manipulated samples is significantly reduced. The work-hardening behavior of the P680-ART sample is similar to that of the C-ART sample, mainly because of their similar fine, equiaxed duplex microstructures [15]. However, the shorter ART treatment reduces the mechanical stability of the retained austenite that forms. In this case, TRIP can occur first in the austenite with low mechanical stability, which can suppress Lüders deformation already at a low strain. Moreover, the deformation-induced martensite continues to contribute to strengthening during further deformation, so the tensile strength is significantly improved. The high volume fraction of austenite also affords a high elongation. Among the heterogeneous austenite morphologies obtained in the P700-ART sample, the equiaxed austenite has low mechanical stability, while the lamellar austenite, with its higher Mn concentration and smaller size, has high mechanical stability. This heterogeneous austenite enables the TRIP effect to proceed continuously and gradually over a wide strain range, which is also the main reason the annealed sample attains high strength with no obvious reduction in plasticity. In addition, the strengthening caused by the TRIP effect at the initial stage of deformation helps to accommodate localized plastic deformation, thus effectively suppressing the development of bands [16].
The results show that the work-hardening rate of the P720-ART sample exhibits no fluctuation during Lüders deformation in the early stage, indicating that the TRIP effect has not yet been activated effectively, mainly because of the high mechanical stability of its lamellar austenite. Nevertheless, the mechanical stability of this austenite is still lower than that of the C-ART sample, and the TRIP that does occur reduces the Lüders strain and improves both strength and plasticity.

Conclusions

By manipulating ferrite recrystallization at different pre-annealing temperatures, diversified microstructural precursors consisting of recrystallized ferrite and fresh martensite with different volume fractions and grain sizes can be obtained. Using these microstructures as precursors for the subsequent ART processing, ultra-fine heterogeneous microstructures with both lamellar-shaped and granular-shaped retained austenite are successfully produced, which may provide diversified mechanical stability and allow the TRIP effect to proceed continuously during tensile deformation. Compared with the conventional ART treatment of the cold-rolled MMS, significantly enhanced strength and suppressed Lüders strain can be achieved by including the recrystallization-manipulating pre-annealing. This suggests a feasible way to tailor the final heterogeneous microstructures with a shortened ART duration and to optimize the mechanical properties of cold-rolled medium-Mn steel.

Figure 4. TEM images (a, d), Mn distribution maps (b, e) and corresponding Mn concentration profiles (c, f) along the white arrows L1 and L2 of the cold-rolled 0.15C-4.7Mn MMS after pre-annealing at 700 °C (a-c) and after two-step annealing (d-f).

Figure 5. Tensile curves (a) and work-hardening rate curves (b) of the cold-rolled 0.15C-4.7Mn MMS after two-step annealing and conventional annealing.
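As a quick numerical illustration of the strength-ductility metric quoted above (the product of strength and elongation), here is a minimal Python sketch. The tensile values are illustrative, chosen only so that the products land near the quoted 32.3 and roughly 40 GPa·%; they are not the measured data of this study.

```python
# Sketch: the product of strength and elongation (PSE) in GPa*%, the
# strength-ductility metric discussed above. Inputs are illustrative.

def pse_gpa_percent(uts_mpa: float, total_elongation_percent: float) -> float:
    """Ultimate tensile strength (MPa) times total elongation (%), in GPa*%."""
    return (uts_mpa / 1000.0) * total_elongation_percent

samples = {
    "C-ART (illustrative)":    (950.0, 34.0),   # UTS in MPa, elongation in %
    "P700-ART (illustrative)": (1150.0, 35.0),
}

for name, (uts, elongation) in samples.items():
    print(f"{name}: PSE = {pse_gpa_percent(uts, elongation):.1f} GPa*%")
# -> about 32.3 and 40.3 GPa*%, matching the scale of the values quoted above
```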
Introducing low-quality feedstocks in bioethanol production: efficient conversion of the lignocellulose fraction of animal bedding through steam pretreatment

Background
Animal bedding remains an underutilized source of raw material for bioethanol production, despite the economic and environmental benefits of its use. Further research concerning the optimization of the production process is needed, as previously tested pretreatment methods have not increased the conversion efficiency to the levels necessary for commercialization of the process.

Results
We propose steam pretreatment of animal bedding, consisting of a mixture of straw and cow manure, to deliver higher ethanol yields. The temperature, residence time and pH were optimized through response-surface modeling, where pretreatment was evaluated based on the ethanol yield obtained through simultaneous saccharification and fermentation of the whole pretreated slurry. The results show that the best conditions for steam pretreatment are 200 °C, for 5 min at pH 2, at which an ethanol yield of about 70% was obtained. Moreover, the model also showed that the pH had the greatest influence on the ethanol yield, followed by the temperature and then the residence time.

Conclusions
Based on these results, it appears that steam pretreatment could unlock the potential of animal bedding, as the same conversion efficiencies were achieved as for higher-quality feedstocks such as wheat straw.

Background
Lignocellulosic ethanol is still too expensive to compete with fossil fuels on the commercial scale, due to its relatively high production cost, the main contributions being the cost of the feedstock and the capital cost [1]. In fact, the cost of the feedstock can represent as much as one-third of the total production cost [2], and the use of new feedstocks with zero or negative value will be required to achieve cost competitiveness [1]. Animal manure is one example of such a low-value feedstock, and ethanol production could offer a way of valorizing a biomass source that is usually lost otherwise [3]. The use of this material as feedstock would reduce the cost of the raw material in the ethanol production process and, at the same time, alleviate the problem of waste disposal, which would counterbalance the environmental impact of the ethanol production process [3]. Animal manure is thus an attractive feedstock from both the economic and environmental perspectives.

In spite of these advantages, animal manure has been little explored as a resource in bioenergy production [4], although a few studies have been carried out on ethanol production from animal manure. For example, Gomaa et al. concluded that this feedstock had potential as a raw material for biogas and bioethanol production [5]. However, the ethanol yields obtained in their study were low, and they pointed out the need for further research to optimize the production process. Some studies have found that a pretreatment step is necessary to enhance the release of sugars from animal manure, and thus improve the ethanol yield [6,7]. This would be especially the case for animal manure with a high fiber content, such as farmyard cow manure [8]. The effect of acid concentration, pretreatment time and cellulase dosage in pretreatment involving acid hydrolysis followed by enzymatic hydrolysis, on the fermentability of farmyard cow manure has been studied by Vancov et al. [9]. They reported the highest ethanol yield to date of 55% of the theoretical maximum based on the glucose in the raw material.
Although this is a significant improvement compared with previous yields of 20% [8], the authors stated that further development was needed to realize the potential of cow manure as a feedstock for bioenergy production [9].

This study was carried out to investigate the effect of steam pretreatment instead of acid hydrolysis. This technology would reduce both the environmental impact and the cost of pretreatment [10], and we hypothesized that higher ethanol yields could be obtained from the fibrous fraction of animal bedding, which is a mixture of straw and cow manure. To validate this hypothesis, we tested several operating conditions in a steam pretreatment reactor to identify the maximum ethanol yield obtainable and compared the results with the yields obtained previously from similar materials. In addition, we modeled the effect of temperature, residence time and pH in the pretreatment step on the ethanol yield to identify trends that explained the results obtained, which could be extrapolated to the design of other processes based on similar materials.

Raw material composition
Table 1 gives the composition of the raw material and the fiber fraction after washing in a concrete mixer with deionized water at room temperature. Fermentable carbohydrates accounted for almost 40% of the dry mass of the unwashed bedding, which proves that this material could become an important source of substrate for bioethanol production. Moreover, 30% of the dry mass of the unwashed bedding (the organic part of the manure) could potentially be used as a substrate for biogas production, which illustrates the high potential of animal bedding as a resource for bioenergy production, since approximately 70% of its dry mass could be used for this purpose.

The composition of the animal bedding presented in this study is similar to that reported by Bona et al. [8]. However, the manure content is lower, and the fermentable carbohydrate content is higher, than those reported by Vancov et al. [9], while the opposite is true compared with the composition reported by Chen et al. [6]. This variation can be expected, as the composition of such material is affected by many factors, such as the kind and number of animals, their diet, animal housing and time spent in the stable [11].

Washing reduced the manure content of the material from 43 to 10% (Table 1), as the average washing efficiency was 75.8% with a standard deviation of 3.6%. After washing, the material has a composition very similar to that of wheat straw [12,13], despite the fact that the washed fiber still contains a small fraction of manure. Although the residual manure could give rise to the Maillard reaction during pretreatment [14], it can be expected that the washed fiber would behave similarly to wheat straw during steam pretreatment, as the materials have very similar compositions.

Pretreatment
The fiber fraction obtained after washing the material with water was pretreated with steam and subsequently analyzed before its conversion to bioethanol. Rather than discussing the complete composition of all the materials, which can be found in Additional file 1, the intention of this section is to validate the data by checking its consistency with the chemistry of steam pretreatment reported previously in the literature, and comparing the results with those obtained when performing steam pretreatment on wheat straw.
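Before the pretreatment results are discussed, the washing mass balance quoted above can be made concrete with a minimal Python sketch of the bookkeeping that is formalized later in the Methods (Eqs. 2 and 3); all numbers are the ones quoted above.

```python
# Sketch: residual manure content of the washed bedding, taken as the native
# manure content times the fraction of manure NOT removed by washing.

manure_native = 0.43        # manure content of the native bedding (fraction of TS)
washing_efficiency = 0.758  # average over the 13 samples (sd 0.036)

manure_washed = manure_native * (1.0 - washing_efficiency)
print(f"residual manure after washing: {manure_washed:.1%} of total solids")
# -> ~10% of TS, consistent with the reduction from 43% to 10% reported above
```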
The fiber fraction of the pretreated materials contained 43-59% glucan, 4-14% xylan and 32-38% lignin, depending on the pretreatment conditions, while the liquid fraction contained mainly xylose, at concentrations between 21 and 41 g/L, and only minor amounts of glucose and other sugars. This implies that cellulose and lignin remained mostly in the solid phase after pretreatment, while hemicelluloses were solubilized (Fig. 1), which is consistent with the chemistry of pretreatment performed at low pH [15]. Moreover, these compositions are similar to those reported for steam-exploded wheat straw in previous studies [16,17], which indicates that washed animal bedding behaves similarly to wheat straw during steam pretreatment.

A fraction of the sugars released during pretreatment was degraded into other by-products, such as furfural and hydroxymethylfurfural (HMF); this effect became more pronounced as the severity of the pretreatment was increased (Fig. 2). This is consistent with the conclusions about carbohydrate degradation during steam pretreatment presented by Li et al. [18]. The generation of by-products during steam pretreatment does not follow the same pattern as for wheat straw, as the furfural production was higher and the HMF production lower than the results reported by Ballesteros et al. [19]. It thus appears that, although sugars are recovered in a similar fashion, carbohydrate degradation during steam pretreatment differs between animal bedding and wheat straw, possibly due to the presence of residual manure in the material that can trigger various degradation mechanisms, such as the Maillard reaction [14].

Despite the differences in by-product generation, the amount of furfural generated during pretreatment is not high enough to compromise the efficacy of fermentation, since furfural concentrations over 3 g/L are necessary to affect the performance of S. cerevisiae [20]. Thus, by-product formation does not appear to be critical when pretreating the fiber fraction of animal bedding, as the concentrations of the by-products obtained are not toxic to the fermenting microorganism. However, this may be a problem when using pretreatment techniques that produce a material with a higher dry matter content, as the resulting inhibitor concentrations may be higher.

Simultaneous saccharification and fermentation
The material steam pretreated at each of the conditions tested was converted into ethanol through simultaneous saccharification and fermentation (SSF), and the yield obtained in each case is given in Table 2. The ethanol yield ranged from 36.3 to 69.3% depending on the pretreatment conditions, and the maximum error between duplicates, obtained for conditions 2 and 8, was 0.03 g ethanol/g glucose in the washed fiber.

The maximum yield obtained in this study (69.3% for condition 4) was higher than those previously reported for acid hydrolysis pretreatment, although the results are not strictly comparable since the methods used to perform the biological steps were not the same. For example, a yield of 55.3% has previously been achieved using acid hydrolysis [9], while a lower yield of 22.2% was reported in another study using the same technology [8]. The results obtained with steam pretreatment also compare well to those from other technologies based on high pH, such as the NaOH pretreatment applied by You et al. [21], with which the authors achieved a yield of 39.9%.
The outcome is also favorable when compared to radically different technologies, such as pretreatment by anaerobic digestion followed by NaOH treatment, proposed by Yue et al. [22]. They obtained a highly digestible fiber, leading to high enzymatic hydrolysis and fermentation yields (90% and 72%, respectively) but, based on their mass balances, there is a cellulose loss of 24% during NaOH treatment, which lowers the combined sugar yield based on the original fiber to 46.7%. Thus, it seems that our initial hypothesis is valid, and steam pretreatment allows higher yields to be obtained than with previously tested technologies. However, to confirm the hypothesis irrefutably, it would be necessary to evaluate the performance of the different technologies with the same methodology to obtain results that can be directly compared.

The limitation on the ethanol yields obtained from farmyard cow manure could be attributed to the relatively low recoveries achieved in the pretreatment step, as the hydrolysis and fermentation yields are usually within an acceptable range. For example, acid hydrolysis provided 79% sugar recovery [9], which is very similar to that obtained using NaOH pretreatment [22]. This means that the excellent sugar recovery that characterizes steam pretreatment [18] might be the reason why this technology enables higher conversion efficiencies (approximately 90-100% recoveries were obtained in this study).

The maximum yield in our study is also in the same range as those reported for ethanol production from steam-exploded wheat straw [19, 23-26], which indicates that fractionation with water (i.e., washing) followed by steam pretreatment allows the same conversion efficiencies to be achieved as for higher-quality residues. This implies that the technology proposed in this study could help to unlock the potential of cow manure as a resource for bioenergy production, since the same conversion efficiencies can be achieved, but at a reduced feedstock price.

Modeling and optimization

Model development and validation
We developed a model that relates the ethanol yield to the operational parameters in the pretreatment step using multiple linear regression (Eq. 1). The model was developed based on the coded variables, which implies that the coefficients in the model are a measure of the significance of each of the terms included in the model [27], i.e., a larger coefficient in Eq. 1 means that the term has a greater influence on the response (ethanol yield).

To validate the model, the variance was disaggregated into several fractions through an ANOVA analysis (Table 3). The variance due to the residuals, i.e., the variance not explained by the model, can be used to calculate the value of R² for the model, which was 0.758. Although this value may seem low, R² is not sufficient to evaluate the goodness of fit of a model, since it does not consider the degrees of freedom, and contains no information on the source of the error in the prediction [28]. In fact, when considering the degrees of freedom using a test for the significance of the regression, it can be said that there is an 85% probability (p = 0.1405) that at least one of the coefficients in Eq. 1 is different from zero or, in other words, that the model is significant, which is acceptable for this kind of system.
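For readers who want to reproduce this kind of fit, here is a minimal Python sketch of a full quadratic response-surface model estimated by multiple linear regression on coded variables, in the spirit of Eq. 1. The design points and yields are random placeholders, not the 18 runs of this study, and the column layout of the design matrix is an assumption of the sketch.

```python
import numpy as np

def design_matrix(X):
    """Columns: 1, x1, x2, x3, x1^2, x2^2, x3^2, x1*x2, x1*x3, x2*x3."""
    x1, x2, x3 = X[:, 0], X[:, 1], X[:, 2]
    return np.column_stack([
        np.ones(len(X)), x1, x2, x3,
        x1**2, x2**2, x3**2,
        x1 * x2, x1 * x3, x2 * x3,
    ])

rng = np.random.default_rng(0)
# Coded T, t, pH; +/- sqrt(3) is the axial distance of a spherical CCD
# with three factors. Placeholder points, not the actual design table.
X = rng.uniform(-1.73, 1.73, size=(18, 3))
y = rng.uniform(36, 70, size=18)          # ethanol yields in %, placeholder

A = design_matrix(X)
beta, *_ = np.linalg.lstsq(A, y, rcond=None)   # fitted coefficients

ss_res = np.sum((y - A @ beta) ** 2)           # residual sum of squares
ss_tot = np.sum((y - y.mean()) ** 2)           # total sum of squares
print("R^2 =", 1 - ss_res / ss_tot)
```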
The ANOVA addresses this apparent inconsistency by further separating the variance due to the residuals into: (i) the variance due to the lack of fit, which corresponds to that originating from bad fitting of the coefficients, and (ii) the variance due to experimental uncertainty. Based on these variances, a test for the lack of fit was applied, and the result showed that there was only a 66% probability (p = 0.3394) that the lack of fit is significant, which is low compared to the usually applied 95% confidence level. It can then be said that the effects of temperature, residence time and pH on the ethanol yield are correctly fitted, even though the predictive power of the model is low due to the relatively large experimental errors and possible uncontrolled factors.

The practical meaning of these results in terms of the ethanol production process is that the operating conditions in the steam pretreatment determine the ethanol yield of the process to a large extent, but not completely. This implies that the ethanol yield cannot be predicted based solely on the conditions chosen for pretreatment. Small fluctuations can be expected due to random errors in the overall process, and larger errors may arise from uncontrolled changes in factors deemed constant, such as the composition of the feedstock, the activity of the enzymes and the vitality of the yeast.

Size of the effects
The influence of the pretreatment variables on the ethanol yield was further investigated by performing a test for a set of parameters to determine the significance of each part of the model. The test showed that there is a 98% (p = 0.0159) probability that the linear terms are significant, while this probability is only 36% (p = 0.6426) for the quadratic terms, and 76% (p = 0.2343) for the interaction terms. The reason for this is that, of the 75% variance explained by the model, 69% is explained by the linear terms, 2% by the quadratic terms and 29% by the interaction terms. From this it can be seen that the pretreatment variables have a strong linear effect on the ethanol yield in the range studied, and that there are relevant interactions between them, while the curvature due to quadratic effects is minimal.

It is possible that the small size of the quadratic terms is a result of the range of conditions included in the study, which is relatively small, and not necessarily because these effects do not exist. A function with a curvature may appear linear when analyzed over a small range and, therefore, larger quadratic effects could have been found if the pretreatment variables had been studied over larger ranges. However, a larger range in the pretreatment variables would lower the precision in the fitting of the model [28,29]. Thus, the model presented provides a more accurate representation of the effects near the optimal operating conditions, although it may not be valid for extreme conditions at much lower or higher combined severity, as defined by Chum et al. [30], than those tested in this study.

The curvature in the model is the result of interaction effects, which means that increasing the severity by changing one of the variables limits the severity that can be achieved through changing the others. This result is consistent with the fact that the optimal pretreatment severity is governed by carbohydrate degradation [18], and also with the results reported by Vancov et al. [9], where interactions between the pretreatment variables were also found for acid hydrolysis pretreatment of cow manure.
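The lack-of-fit test described above can be sketched as follows; the sums of squares and degrees of freedom are placeholders, not the entries of Table 3 (with four center-point replicates, the pure-error term has three degrees of freedom).

```python
from scipy import stats

# Sketch: lack-of-fit F-test, assuming the residual sum of squares has been
# split into a lack-of-fit part and a pure-error part (the latter estimated
# from the replicated center points). Numbers below are placeholders.

ss_lof, df_lof = 120.0, 5   # lack of fit: residual df minus pure-error df
ss_pe,  df_pe  = 60.0, 3    # pure error: 4 center-point replicates -> 3 df

f_stat = (ss_lof / df_lof) / (ss_pe / df_pe)
p_value = stats.f.sf(f_stat, df_lof, df_pe)    # upper-tail probability

print(f"F = {f_stat:.2f}, p = {p_value:.4f}")
# A large p (e.g. the 0.3394 reported above) means no significant lack of fit.
```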
The existence of these interactions makes the prediction of the outcome of pretreatment more complex, because the pretreatment variables are not completely interchangeable, i.e., different results could be obtained when increasing the severity by raising the temperature than by increasing the residence time. To understand which of the pretreatment variables is more significant, it was necessary to analyze the model as a whole, rather than just a set of parameters. Response surfaces were used for this purpose, in particular three surfaces at the value of each pretreatment variable giving the best conditions tested, i.e., T = 200 °C, t = 5 min and pH = 2 (Fig. 3). The more significant a variable is, the more it can compensate for suboptimal values of the other variables; therefore, the change in ethanol yield represented by the response surface can be used as an indication of the significance of the variable. For example, when the time is at its optimal value but the other variables are not, the ethanol yield decreases to 35% (Fig. 3b), but in the analogous situation for the pH, the yield is only reduced to 55% (Fig. 3c), which indicates that the pH has a greater influence on the yield than the residence time. Based on this, it can be said that the residence time influences the ethanol yield to a much lower extent than the temperature and the pH, which have a similar degree of influence, although that of the pH is slightly higher.

Model-based optimization
Based on the optimization of the model, the best conditions for steam pretreatment are 200 °C, for 5 min, at pH 2, which is one of the tested conditions, so no further validation was required. The optimal condition found for animal bedding was the same as that previously reported for wheat straw [19,24], which shows that the time the bedding spends in the stable does not help overcome the recalcitrance of the material, as the same severity is needed in its pretreatment. Due to the relatively low predictive power of the model, the optimum may instead be at 190 °C, for 10 min, at pH 2. Other authors have reported that these two conditions gave very similar results in terms of ethanol yield in the subsequent biological processes [19], and it is therefore difficult to reach a level of accuracy that allows differentiation between them. In spite of this, the optimum does not lie outside the tested range, since the best yields were not obtained for either the lowest or the highest severity, although a more accurate estimate might be obtained.

Conclusions
Design of experiments together with response-surface modeling was used to optimize the pretreatment conditions to maximize the ethanol yield from animal bedding. The optimal conditions were 200 °C, for 5 min, at pH 2, at which an ethanol yield of 69.3% was obtained. The yield obtained when using steam pretreatment was higher than that obtained with other pretreatment technologies previously tested and was in the same range as that for steam-exploded wheat straw. This means that steam pretreatment may provide a means of unlocking animal bedding as a resource for bioenergy production, as the same conversion efficiencies can be obtained as for higher-quality feedstocks. Further analyses of the model showed that pH has the greatest influence on the ethanol yield, followed closely by the temperature, and that residence time has considerably less influence.
Although the effects were properly fitted, the predictive power of the model may be low due to the high experimental variability, and the possible existence of uncontrolled factors. This implies that, in an ethanol production process based on animal bedding, it would not be possible to predict the yield of the process based only on the pretreatment conditions, although they determine it to a large extent.

Animal bedding collection and preparation
Animal bedding was collected from a dairy farm at Lille Skensved, a small town close to Køge, in Denmark. The barn is approximately 600 m², has a rectangular shape, and hosts 150 dairy cows in a loose housing regime; approximately 500 tonnes of straw are used per year as bedding. Samples were collected from 13 different positions in the barn and stored frozen until further use, according to previous recommendations [31].

After sample collection, the animal bedding from each of the sampling positions was washed with deionized water to separate the manure from the straw. Washing was performed by mixing 4 kg of animal bedding (approximately 1 kg dry animal bedding) with 10 L of deionized water at room temperature in a concrete mixer for 2 min. The material was subsequently pressed in a filter press to remove the liquid, which contained most of the manure. After washing, a subsample of the washed animal bedding was taken from the material collected at each of the 13 sampling positions after thorough mixing of the material in the concrete mixer. The 13 subsamples were then mixed to produce an average sample that is representative of the whole barn. This average material was used in the pretreatment and fermentation experiments.

Steam pretreatment
The washed animal bedding was impregnated with sulfuric acid by soaking in a dilute sulfuric acid solution (0.3-0.6 wt% depending on the pretreatment conditions) for 1 h. Soaking was performed at a solid-to-liquid ratio of 1:20, and sulfuric acid was added progressively until the desired pH was reached. Different pH levels, from 1.6 to 3.4, were tested according to the experimental design described in Sect. "Experimental design and statistical analysis". The material was pressed in a filter press at 13 bar to remove the liquid, and the soaked fiber was left overnight at room temperature in a sealed container prior to steam pretreatment. The soaked fiber was then subjected to steam pretreatment in a 10 L reactor (Process & Industriteknik AB, Kristianstad, Sweden), which has been described elsewhere [32]. Steam pretreatment was performed at various conditions, from 186 to 204 °C, for 3-12 min, according to the experimental design described in Sect. "Experimental design and statistical analysis". At each condition, 600 g dry soaked fiber was pretreated, and the pretreated materials were stored at 4 °C before further use for analysis or experiments.

Simultaneous saccharification and fermentation
SSF experiments were performed on the whole pretreated slurry in 2 L Labfors bioreactors with a working weight of 1 kg. Prior to running the experiments, the fermenters with the slurry were sterilized (after correcting the pH of the material to 5). A water-insoluble solid (WIS) load of 8%, Cellic CTec2 (Novozymes, Denmark) enzyme cocktail at a load of 0.05 g enzyme/g WIS (which corresponds approximately to 10 FPU/g WIS) and Ethanol Red (Lesaffre Advanced Fermentations, France) yeast at a dry weight concentration of 3 g/L were used during the experiments.
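As a small worked example of the SSF loadings just listed, the sketch below computes the absolute amounts for a 1 kg working weight; approximating the broth volume by the working weight (density of roughly 1 kg/L) is an assumption of the sketch, not a value given in the text.

```python
# Sketch: SSF loading arithmetic for a 1 kg working weight at 8% WIS,
# 0.05 g enzyme/g WIS (~10 FPU/g WIS) and 3 g/L dry yeast.

working_weight_kg = 1.0
wis_fraction = 0.08                  # water-insoluble solids load
enzyme_dose_g_per_g_wis = 0.05       # Cellic CTec2
fpu_per_g_wis = 10.0
yeast_g_per_l = 3.0                  # Ethanol Red, dry weight

wis_g = 1000.0 * working_weight_kg * wis_fraction      # 80 g WIS
enzyme_g = enzyme_dose_g_per_g_wis * wis_g             # 4 g enzyme
total_fpu = fpu_per_g_wis * wis_g                      # ~800 FPU
yeast_g = yeast_g_per_l * working_weight_kg            # ~3 g (1 L assumed)

print(f"WIS: {wis_g:.0f} g, enzyme: {enzyme_g:.1f} g "
      f"(~{total_fpu:.0f} FPU), yeast: {yeast_g:.1f} g")
```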
Due to severe mixing difficulties at the start of SSF, mixing at 400 rpm was applied 1 h after adding the enzymes and the yeast, when the material had become sufficiently liquefied to be mixable. SSF was performed at 35 °C and the pH was maintained at 5 through the automatic addition of 10% NaOH solution. The SSF media were supplemented with 0.5 g/L (NH4)2HPO4, 0.025 g/L MgSO4, 1 g/L yeast extract and, to avoid the risk of infection, 10 mg/L streptomycin and 10,000 U/L penicillin. All the SSF experiments were performed in duplicate.

Samples obtained from the SSF experiments were centrifuged in 2 mL Eppendorf tubes at 13,000 rpm for 5 min. The supernatant was filtered through 0.2 μm syringe filters (GVS North America, Sanford, USA) and stored at −20 °C prior to high-performance liquid chromatography (HPLC) analysis. Ethanol, organic acids and other by-products were analyzed using a Shimadzu LC-20 AD HPLC system equipped with a Shimadzu RID 10A refractive index detector (Shimadzu Corporation, Kyoto, Japan). The chromatography column used was an Aminex HPX-87H, with a Cation-H Bio-Rad Micro-Guard column (Bio-Rad Laboratories, Hercules, United States) at 50 °C, and a 5 mM sulfuric acid solution was used as eluent at a flow rate of 0.5 mL/min.

Compositional analysis
Animal bedding is considered to be a mixture of two components, manure and straw, and the manure content is assumed to be equal to the mass removed after ten washing cycles [33]. The manure was further analyzed by incinerating a sample at 575 °C for 3 h to determine the organic matter content, and the inorganic matter content was calculated as the difference between the total solids and the organic matter content. The straw was analyzed following the protocols from the National Renewable Energy Laboratory (NREL) [34-37].

The manure content of the washed animal bedding was calculated as the product of the manure content in the native material and one minus the average washing efficiency of the 13 samples (Eqs. 2 and 3):

manure_washed = manure_native × (1 − WE_avg)    (2)

WE_avg = (1 / N_samples) × Σ (M_liquid × TS_liquid) / (M_bedding × manure_native)    (3)

where manure_native is the manure content in the native material, expressed as %TS; M_liquid the mass of the expressed liquid after washing; TS_liquid the total solids content of the expressed liquid after washing; M_bedding the dry mass of the animal bedding washed; and N_samples the number of samples that were washed (13 in this study). The rest of the composition of the washed animal bedding was calculated assuming that the compositions of the manure and the straw remained constant during washing and are therefore the same as those in the native material.

The WIS content of the pretreated materials was determined using the non-wash method described by Weiss et al. [38]. The structural carbohydrates and lignin content of the solid fraction and the composition of the liquid fraction were analyzed following NREL protocols [37,39]. Monomeric sugars in the liquid fraction were analyzed using the HPLC system described above, with an Aminex HPX-87P chromatography column and a De-Ashing Bio-Rad Micro-Guard column at 85 °C, using reagent-grade water as eluent at a flow rate of 0.6 mL/min. Pretreatment by-products in the liquid fraction were analyzed using the same HPLC system, chromatographic column, and conditions as described in Sect. "Simultaneous saccharification and fermentation". Sugar samples generated during the analyses of structural carbohydrates and lignin were analyzed using high-performance anion-exchange chromatography coupled with pulsed amperometric detection.
A Dionex system with a CarboPac PA1 column, a GP50 gradient pump and an AS50 autosampler was used. The flow rate was 1 mL/min, the temperature was 30 °C and the solutions used as eluents were: deionized water, 200 mmol/L NaOH, and 200 mmol/L NaOH mixed with 170 mmol/L sodium acetate.

Yield calculations
The ethanol yield was calculated based on the total available glucose in the washed fiber, which corresponds to 1.11 times the amount of glucan in the fiber (due to the addition of water during hydrolysis). The yield is presented as g ethanol/g glucose in the raw material (washed fiber), and also as a percentage of the theoretical stoichiometric ethanol yield (0.51 g/g), which are the values used in the development of the model described in Sect. "Experimental design and statistical analysis".

Experimental design and statistical analysis
The effects of the three pretreatment variables, temperature (T), residence time (t) and pH during soaking (pH), on the ethanol yield were investigated using response-surface modeling. A spherical central composite design was chosen due to its improved performance [40], and four replicates were performed at the center point (195 °C, 7.5 min, pH 2.5), which was chosen based on previously reported optimal conditions for wheat straw [19]. The variables were coded to prevent scale effects from influencing the modeling. The coding was based on centering, so that the zero value was assigned to the values of the variables at the center point, and the rest of the values were calculated based on the following conversion factors: 5 °C/coded unit, 2.5 min/coded unit and 0.5 pH units/coded unit. Table 4 gives the values of the variables in each of the 18 runs in both uncoded and coded units.

An empirical model was constructed through multiple linear regression, as described previously by Brereton et al. [27]. The model includes an intercept term, linear effects, quadratic effects and interaction terms (Eq. 4):

y = b0 + Σ bi·xi + Σ bii·xi² + Σ bij·xi·xj    (4)

where y is the ethanol yield and the xi are the coded pretreatment variables. The interaction terms account for the possibility that the value of one variable may influence the effect of another on the response [27]. For example, the effect of the temperature on the ethanol yield may depend on the pH during soaking.
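A minimal Python sketch of the yield bookkeeping defined above follows; the ethanol and glucan masses are placeholders, chosen only so that the result lands near the reported optimum of 69.3%.

```python
# Sketch: ethanol yield bookkeeping. Available glucose is 1.11 x glucan
# (water added on hydrolysis); the stoichiometric maximum is 0.51 g ethanol
# per g glucose. Input masses below are placeholders, not measured data.

STOICH_YIELD = 0.51          # g ethanol / g glucose, theoretical maximum
GLUCAN_TO_GLUCOSE = 1.11     # hydration factor on hydrolysis

def ethanol_yield(ethanol_g: float, glucan_in_fiber_g: float):
    glucose_available = GLUCAN_TO_GLUCOSE * glucan_in_fiber_g
    y_gg = ethanol_g / glucose_available       # g ethanol / g glucose
    y_pct = 100.0 * y_gg / STOICH_YIELD        # % of theoretical maximum
    return y_gg, y_pct

g_per_g, percent = ethanol_yield(ethanol_g=17.6, glucan_in_fiber_g=44.9)
print(f"{g_per_g:.3f} g/g = {percent:.1f}% of theoretical")
```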
Barley yellow dwarf virus-PAV management using seed-treatment with the insecticide imidacloprid

Najar, A., I. Ben Fekih, H. Ben Ghanem, S.G. Kumari and A. Varsani. 2017. Barley yellow dwarf virus-PAV management using seed-treatment with the insecticide imidacloprid. Arab Journal of Plant Protection, 35(3): 178-184.

This research aimed to investigate the effectiveness of the insecticide imidacloprid as a seed treatment against Barley yellow dwarf virus-PAV (BYDV-PAV) infection in barley. Artificial inoculation of resistant (QB 813-2) and susceptible (Manel, Rihane and Cyclon) barley cultivars with BYDV-PAV under field conditions was conducted through the bird cherry-oat aphid (Rhopalosiphum padi). Following inoculation, virus incidence was monitored on the various cultivars, and growth parameters such as plant height, biomass, grain yield and thousand seed weight were measured. Seed treatment with imidacloprid was applied at concentrations of 0.7, 1.4 and 2 g a.i./kg seed. The highest reduction in BYDV incidence was observed after treatment with the concentration of 1.4 g a.i./kg seed. The results also indicated that the 1.4 g a.i./kg seed treatment significantly reduced the impact of BYDV-PAV infection on plant height for the susceptible cultivars Manel and Cyclon. An improvement in the biomass, grain yield and thousand kernel weight was recorded after imidacloprid treatment of the susceptible cultivars at the concentration of 1.4 g a.i./kg seed, and this concentration can be considered an economic practice for BYDV management in barley fields at locations where BYDV incidence is high.

In Tunisia, barley is among the important economic crops and plays a key role in the country's agri-food sector. Commonly, barley is sown during early November in Tunisia. During the early stage of vegetative growth, the plants are most susceptible to BYDV infection, and significant numbers of winged aphids are often observed in barley fields in Tunisia (4). This vulnerable vegetative growth stage in early winter, with mild temperatures, offers a suitable environment for aphid infestation and thus leads to primary spread of viral diseases. The impact of aphid infestation during the early and mid-vegetative stages on the establishment of primary and secondary spread of barley-associated viral diseases was highlighted previously (11). Other studies in Tunisia reported the wide occurrence of BYDVs in barley, reaching 30% incidence when infection starts during the early growth stage of the plant (2,19). Earlier virus surveys of barley fields in Tunisia reported the natural occurrence of BYDV-PAV as the predominant virus species on cereal crops (16). Several methods are reported for BYDV management, such as: (i) cultural management based on high plant density, since winged aphid vectors prefer to land in fields with low plant density (21); (ii) timely seeding, based on delayed sowing to avoid the peak of aphid flights (3); and (iii) genetic resistance, based on the selection and use of BYDV-resistant cultivars (9). Among the insecticides used to control virus vectors, imidacloprid, a neonicotinoid and the highest-selling insecticide worldwide (13), was successfully used as a seed treatment applied early in the season to control insect pests of several crops such as sugar beet, maize and vegetables (1). This insecticide is commonly applied to control sucking insects such as aphids, thrips,
plant hoppers, some coleopteran and some lepidopteran pest species (7,8,30). Several studies reported the benefit of seed treatment in protecting chemical molecules from rain and UV degradation, besides reducing labor costs (23). However, few studies have investigated the effect of imidacloprid in reducing BYDV incidence on cereal crops through aphid control (18). Within this context, we carried out this study to assess, under field conditions, the effectiveness of imidacloprid as a seed treatment against R. padi to reduce BYDV-PAV infection. The major aim was to define the appropriate economic concentration of imidacloprid as a seed treatment, applied mainly to barley varieties commonly sown by Tunisian farmers.

Plant material
Four barley cultivars (Manel, Rihane, Cyclon and QB 813-2) were used in this study. Manel and Rihane are common Tunisian cultivars that are high yielding but susceptible to BYDV infection. Cyclon and QB 813-2 (+ Yd2), both provided by ICARDA, served as susceptible and resistant controls to BYDV infection, respectively.

Field trial
Field experiments were conducted in the 2003/2004 cropping season at the experimental station in Oued Beja (North of Tunisia) of the National Agricultural Research Institute of Tunisia (INRAT). Annual fertilization was based on broadcasting DAP (100 kg/ha) one week prior to sowing, and two applications of urea (75 kg/ha) at the 3-leaf stage and at early tillering. Before tillering, weed control was performed using the herbicides Illoxan (2 l/ha) against monocots and Sansac (0.8 l/ha) against broad-leaf weeds. Faba bean was the crop previously cultivated in the experimental field used for this study.

Seed treatments and experimental setup
The insecticide imidacloprid (N-{1-[(6-Chloro-3-pyridyl)methyl]-4,5-dihydroimidazol-2-yl}nitramide), registered worldwide, mixed with starch as an additive, was evaluated for seed-treatment application. Seeds were coated with imidacloprid at rates of 0.7, 1.4 and 2.0 g a.i./kg seed. Two groups of untreated seeds were included: (i) virus-inoculated at the 2-3 leaf stage (0) and (ii) non-inoculated (Control). The imidacloprid treatments were the main factor evaluated and the barley cultivars were the sub-factor. The adopted design included three blocks; each block included five sub-blocks. Each sub-block was divided into four experimental units to accommodate the tested cultivars. For each treatment, around 40 seeds were sown in four rows (1.5 m long) spaced 25 cm apart.

BYDV isolate maintenance
The BYDV-PAV Tunisian isolate used in this study was previously identified in infected barley plants collected from the Cap-Bon region, Tunisia (19). This isolate was maintained on the Manel cultivar through successive serial aphid (Rhopalosiphum padi) transfers.

Aphid vector rearing
R. padi, known for its high efficiency in transmitting BYDV-PAV (20), was used in this study. Aphid rearing was initiated by using apterous aphids fed on a nutritive medium (20% sucrose) through a Parafilm membrane, as previously described (5). Petri dishes, 90 mm in diameter, were used as support to ensure first-instar nymph production. Aphids were reared in polyvinyl chloride (PVC) plastic cages in a growth chamber at 20 °C under a 16:8 h light:dark cycle.

Virus acquisition and transmission
Aphid colonies were kept on infected plants for 48 h to provide access to the virus. Twenty days after sowing, at the 2-3 leaf stage, R.
padi were placed on all plants in the field (5 viruliferous aphids/plant) to ensure inoculation. The aphids were killed 3 days later with a non-systemic insecticide spray, to avoid their multiplication and their effect as pests. Fifteen days later, the presence of BYDV-PAV in the inoculated plants was confirmed by the tissue blot immunoassay (TBIA) test (17).

Data collection
Data were collected from an area of 1 m² in the center of each experimental plot. In order to estimate the infection rate, the number of infected plants per 100 tested plants was measured. Plant height (cm), biomass (g/m²), grain yield (g/m²) and thousand kernel weight (g) were also determined.

Data analysis
The adopted model was a split-plot experimental design:

y_ij(k) = µ + R_i + T_j + e_(a) + C_k + (T×C)_jk + e_(b)

where y_ij(k) is the response to treatment (j) in replicate (i) for the corresponding cultivar (k); R_i, T_j, e_(a) and C_k refer to the block effect, treatment effect, error associated with the treatments and cultivar effect, respectively; and (T×C)_jk and e_(b) represent the cultivar × treatment interaction and the associated experimental error. ANOVA and the least significant difference (LSD) test at P = 0.05, using SAS 9.1 software, were used to analyze the data.

Results
Analysis of variance (ANOVA) showed a highly significant (P = 0.001) effect of the interaction between treatment and cultivar on infection rate (INF), biomass yield (BM) and grain yield (RG), and a significant (P = 0.05) effect on stem height (H) (Table 1).

Evaluation of the interaction between seed treatment and cultivars
Effect on infection rates - Seed treatment with imidacloprid reduced BYDV-PAV incidence regardless of the applied dose (Figure 1). For the four varieties, a significant decrease in the infection rate was observed at concentrations of 0.7 and 1.4 g a.i./kg seed. After treatment, the infection rate was 30.65%, 10.34%, 32.33% and 25.34% for the varieties Cyclon, QB813-2, Manel and Rihane, respectively. No significant further effect was found when using a concentration higher than 1.4 g a.i./kg seed.

Effect on plant height - As can be seen in Figure 2-A, except for Manel, BYDV-PAV infection significantly reduced the plant height of all cultivars tested. The imidacloprid treatment at concentrations of 1.4 and 2 g a.i./kg seed reduced the effect of BYDV-PAV such that plant height was not significantly different from that of the healthy control. The average plant height of the susceptible variety Cyclon was 40% less than that of the control. In response to the 0.7, 1.4 and 2 g a.i./kg seed treatments, the plant height increased by 50, 90 and 100%, respectively.

Effect on biomass - The impact of the imidacloprid treatment on biomass also depended on the applied concentration and the tested cultivar (Figure 2-B). Except for the resistant variety QB813-2, a significant biomass increase was observed among treatments. However, the concentration of 0.7 g/kg was not sufficient to protect the susceptible cultivar Cyclon from BYDV-PAV infection. The same results were found for the Tunisian varieties Manel and Rihane.
A highly significant effect of applying imidacloprid at 1.4 g a.i./kg seed on biomass was obtained for the three BYDV-PAV-sensitive cultivars, Cyclon, Manel and Rihane. In fact, seed treatment simultaneously decreased viral infection and increased biomass yield by 93.33, 65.40 and 50.90% for Cyclon, Manel and Rihane, respectively. No difference was found between the effects of the 1.4 and 2 g a.i./kg seed concentrations.

Effect on grain yield - Seed treatment of the susceptible variety Cyclon at the concentrations of 0, 0.7 and 1.4 g a.i./kg seed led to significant increases in grain yield of 84.30, 175.30 and 317.33 g/m², respectively (Figure 2-C). In addition, grain yield increases of 37.66 and 32.10% were obtained for the Manel and Rihane cultivars, respectively, in response to the 0.7 and 1.4 g a.i./kg seed treatments. The BYDV-resistant variety QB 813-2 did not show a significant increase in grain yield in response to seed treatment.

Effect of imidacloprid treatment and cultivars on thousand kernels weight (TKW) - An effect of the imidacloprid treatment on the thousand kernel weight was obtained only for the Cyclon and Manel cultivars (Figure 2-D). For cv. Cyclon, a significant increase in this parameter was obtained in response to the concentration of 0.7 g a.i./kg seed, but it was not comparable to that of the uninfected plants (control) at the concentration of 1.4 g a.i./kg seed. For the Manel variety, TKW tended to increase with the treatment concentration; however, the difference relative to the inoculated, untreated plants was significant only at the concentration of 1.4 g a.i./kg seed.

Discussion
This study documents, for the first time in Tunisia, the efficiency of imidacloprid as a seed treatment to control BYDV-PAV under field conditions. Imidacloprid seed treatment reduced BYDV-PAV infection in both the resistant (QB 813-2) and the susceptible barley cultivars (Cyclon, Manel and Rihane). Several earlier studies reported the efficiency of imidacloprid in reducing BYDV infection in cereals (10,11,26). Other studies on imidacloprid seed treatment also reported its efficacy in reducing Potato leafroll virus (27) and Sugar yellows virus (12) infection.

Imidacloprid seed treatment resulted in a significant reduction in BYDV-PAV infection when the 1.4 g a.i./kg seed treatment was used, whereas no further significant effect was obtained with the 2 g a.i./kg seed treatment. The 1.4 g a.i./kg concentration also showed a significant effect on the biomass and yield of the susceptible cultivars. Therefore, imidacloprid seed treatment at 1.4 g a.i./kg seed to reduce BYDV incidence in barley fields seems to be an appropriate concentration for use by farmers in years of high BYDV incidence. In this study, the use of a higher concentration was found to be unnecessary and is consequently not recommended. This finding is in agreement with an earlier study (18), which reported that seed dressing with imidacloprid at 1.4 g a.i./kg seed effectively reduced the incidence of Bean leafroll virus and Faba bean necrotic yellows virus in faba bean and lentil crops under both glasshouse and field conditions.

The use of imidacloprid as a seed or foliar treatment against aphids has been investigated previously (8,11,15). In this study, only the application of imidacloprid as a seed treatment to control primary infection by BYDV-PAV transmitted by R.
padi during fall was investigated. A significant effect of the adopted treatment in reducing BYDV-PAV incidence and improving both biomass and yield was highlighted. This is consistent with an earlier study (26) that demonstrated no significant difference in BYDV incidence in wheat between imidacloprid seed treatment alone and seed treatment followed by seven foliar insecticide applications.

Even though imidacloprid has been shown to be efficient against virus vectors, the residual effect of this systemic insecticide after seed treatment has always been a concern. In this study, the residual effect of imidacloprid in barley was not considered. However, an earlier study (22) showed that there is no residual effect after seed treatment with systemic neonicotinoids in cereal crops such as wheat.

The use of chemical insecticides remains an important component of pest control. However, the need for eco-friendly control approaches remains an important aspect to be adopted within the integrated pest management context. The use of virus-resistant cultivars will always be the most practical and cost-effective means of BYDV management, as long as the resistance genes are able to express themselves in the target environment. This is the case for the Yd2 gene, which has shown effectiveness in international barley breeding programs for resistance to BYDV (9). In the absence of resistant cultivars, and in regions at high risk of BYDV epidemics, seed treatment with imidacloprid may further reduce the primary and secondary spread of BYDV in cereal-producing regions.

Figure 2. Effect of seed treatment with the insecticide imidacloprid on plant height (A), biomass (B), grain yield (C) and thousand kernel weight (D) of four barley cultivars (Cyclon, QB813-2, Rihane and Manel) after infection with Barley yellow dwarf virus-PAV (BYDV-PAV). Bars with similar letters for each trait are not significantly different at P = 0.05.
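As an illustration of the mean-separation procedure used in the Data analysis section (LSD at P = 0.05), the following Python sketch computes an LSD threshold from placeholder ANOVA error terms; none of the numbers are taken from Table 1 or the figures.

```python
import math
from scipy import stats

# Sketch: Fisher's least significant difference (LSD) at P = 0.05.
# MSE, error df and replicate count are placeholders (the study used
# three blocks).

mse, df_error, n_reps = 12.0, 24, 3

t_crit = stats.t.ppf(1 - 0.05 / 2, df_error)   # two-sided 5% critical t
lsd = t_crit * math.sqrt(2.0 * mse / n_reps)

mean_treated, mean_control = 78.0, 64.0        # placeholder trait means
significant = abs(mean_treated - mean_control) > lsd
print(f"LSD = {lsd:.2f}; difference significant: {significant}")
```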
Regional Specializations of the PAZ Proteomes Derived from Mouse Hippocampus, Olfactory Bulb and Cerebellum

Neurotransmitter release as well as structural and functional dynamics at the presynaptic active zone (PAZ), comprising synaptic vesicles attached to the presynaptic plasma membrane, are mediated and controlled by its proteinaceous components. Here we describe a novel experimental design to immunopurify the native PAZ complex from individual mouse brain regions such as olfactory bulb, hippocampus, and cerebellum with the high purity that is essential for comparing their proteome composition. Interestingly, quantitative immunodetection demonstrates significant differences in the abundance of prominent calcium-dependent PAZ constituents. Furthermore, we characterized the proteomes of the immunoisolated PAZ derived from the three brain regions by mass spectrometry. The proteomes of the release sites from the respective regions exhibited remarkable differences in the abundance of a large variety of PAZ constituents involved in various functional aspects of the release sites, such as calcium homeostasis, synaptic plasticity and neurogenesis. On the one hand, our data support an identical core architecture of the PAZ for all brain regions and, on the other hand, demonstrate that the proteinaceous compositions of their presynaptic active zones vary, suggesting that changes in the abundance of individual proteins strengthen the ability of the release sites to adapt to specific functional requirements.

Introduction
Communication between neurons involves chemical signaling via synaptic contacts consisting of the presynaptic neurotransmitter release site, the synaptic cleft, and the receptor-loaded postsynaptic site. Synaptic signaling is governed by the concerted action of a large variety of proteins. Due to the refinement of mass spectrometric methods, proteomic studies of murine brain-derived synaptosomes, synaptic vesicles, postsynaptic densities, and presynaptic active zones (PAZs) have recently provided increasing information on the proteomic composition of chemical synapses and their subcompartments (reviewed in [1]). These synaptic proteomes provide the basis for studying the interactome of the molecular constituents identified (reviewed in [2]). In contrast to the complex dynamics of postsynaptic densities [3,4], the dynamic composition of the PAZ is less well characterized, but now arouses increasing interest as a molecular platform of presynaptic plasticity (reviewed in [1]). The structural and functional dynamics at synaptic contacts in the adult CNS are reflected by presynaptic rearrangements of the proteinaceous inventory [5].

The detailed understanding of regional specializations requires the analysis of the proteomes of the presynaptic active zone (PAZ) from specific brain regions. Based on our experimental expertise in immunopurifying the PAZ from rat [6,7] and mouse brain [5,8], we developed a novel experimental approach for the purification of subregion-specific PAZ proteomes. By combining stringent subcellular fractionation with subsequent immunopurification, we obtain a highly purified native PAZ proteome. This is now exploited for comparing the proteinaceous composition of the PAZ from mouse olfactory bulb, hippocampus and cerebellum by quantitative immunodetection and mass spectrometry. All these brain regions are involved in synaptic plasticity, memory formation and memory consolidation.
We hypothesized that the presynaptic release sites of the neuronal subpopulations in these brain regions are equipped with proteomes adapted to their circuitry-specific functions. For example, the olfactory bulb receives, via the glomeruli, odorant information from the olfactory epithelium. Willful and unconscious odorant signal processing takes place through an interplay of neuronal populations prior to odor processing in higher brain regions. The olfactory bulb is also the target region of migrating neuroblasts that originate in the neurogenic niche of the subventricular zone and are potentially involved in olfaction plasticity [9-11]. The hippocampal formation receives its input via the perforant path to the dentate gyrus. The concerted action of the hippocampal neuronal populations contributes to long-term memory formation. The hippocampus harbors a neurogenic niche, the subgranular zone of the dentate gyrus. Newborn neuroblasts migrate into the granule cell layer of the dentate gyrus and differentiate into interneurons potentially involved in declarative memory formation and consolidation [12,13]. The cerebellum plays an essential role in motor control and may also be involved in cognitive functions such as attention and language. Its neuronal network is important for motor memory acquisition and the storage of complex motion sequences [14,15].

Our proteomic analyses reveal region-specific differences in the proteinaceous components of the PAZ proteomes concerning, e.g., calcium homeostasis, synaptic plasticity and neurogenesis, implicating adaptations to specific functional demands. These data present novel insight into the PAZ proteome and provide a solid basis for further characterizing differences in the proteinaceous inventory of the release sites derived from distinct brain regions.

Animals
Animal treatment was performed under veterinary supervision according to European Guidelines. Mouse strain C57BL/6N was purchased from Charles River (Sulzfeld, Germany). Mice of both sexes, 3 months of age, were kept under 12 h light/dark cycles with food and water ad libitum.

Subcellular Fractionation of the PAZ from Mouse Brain Regions
Olfactory bulb, hippocampus, and cerebellum were dissected from native mouse brain prior to subcellular fractionation. Synaptic vesicles were isolated from synaptosomes according to the protocol guidelines of Whittaker [16]. The protocol had previously been adapted to the fractionation of individual mouse brains [5] and was now downscaled for individual mouse brain regions (downscaling II). The following modifications were applied: individual brain regions (olfactory bulb, hippocampus, and cerebellum) were each homogenized in 0.4 mL of preparation buffer (5 mM Tris-HCl, 320 mM sucrose, pH 7.4) containing the protease inhibitors antipain, leupeptin, chymostatin (2 µg/mL each), pepstatin (1 µg/mL) and benzamidine (1 mM). Unless otherwise mentioned, the material was kept at 4 °C during the entire preparation. The brain homogenate was centrifuged in a Beckman TLX Optima 120 with rotor TLA 120.2, accelerating (mode 4) up to 2800 rpm for 2 min. The resulting pellet was discarded and the supernatant was further fractionated by discontinuous Percoll gradient centrifugation. The Percoll gradient was prepared by layering 1.0 mL of supernatant onto three layers of 1.0 mL Percoll solution (3%, 10%, 23% (v/v) in preparation buffer).
After centrifugation using a TLA 100.4 rotor for 7 min at 35,000 × g_av, fractions containing synaptosomes were collected, diluted twofold in preparation buffer and centrifuged using a TLA 100.4 rotor for 35 min at 50,000 × g_av. For hypoosmotic lysis of synaptosomes, the resulting pellet was triturated in four volumes of lysis buffer (5 mM Tris-HCl, pH 7.4) at room temperature. The suspension was centrifuged using a TLA 100.4 rotor for 60 min at 250,000 × g_av. The pellet was resuspended and homogenized in 300 µL sucrose buffer (10 mM HEPES-NaOH, 0. Immunopurification of the Presynaptic Active Zone via Docked Synaptic Vesicles The immunopurification protocol for the presynaptic active zone (PAZ) via docked synaptic vesicles, as described recently [5,7], was modified for individual mouse brain regions. In brief, 100 µL of magnetic beads pre-coupled with an anti-mouse monoclonal antibody were washed with Tris-buffered saline (TBS, pH 7.4) and incubated with TBS containing 1% glycine, 1% lysine and 0.5% saponin, followed by three washing steps in TBS. Magnetic beads were then incubated for 1 h with the anti-SV2 antibody (3 µg of antibody per 10^7 magnetic beads to obtain a representative SV2 population). Crosslinking of the antibodies was performed with 0.1% glutardialdehyde in TBS for 5 min and stopped by adding TBS containing 1% glycine and 1% lysine. Finally, the beads were incubated overnight at 4 °C with the pooled lower sucrose gradient fractions (LF, fractions 16-30). Beads containing the bound material were washed three times with TBS and incubated with ice-cold acidified acetone (acetone containing 125 mM HCl) for 30 min at −20 °C. Elution was performed with different elution agents for 30 min. For Western blot analysis, proteins were eluted with sample buffer containing 2% SDS. For MS analysis, proteins were eluted with 25 mM ammonium bicarbonate (ambic). The elution of PAZ proteins was supported by applying short ultrasonic pulses. Lactate Dehydrogenase (LDH) Lactate dehydrogenase (LDH) activity was determined as described by Johnson [17]. In brief, mouse brain homogenate was subjected to discontinuous Percoll gradient centrifugation as described. After centrifugation for 7 min at 35,000 × g_av, 26 fractions (130 µL each) were collected from the top to the bottom of the gradient. Ten microliters of sample were added to the substrate solution containing 150 mM NaCl with 50 mM Tris/HCl adjusted to pH 7.4, to which 2.0 mg sodium pyruvate and 3.5 mg NADH were added per 50 mL. The amount of free LDH was measured using a spectrophotometer (Colora SPC 300, Hitachi, Tokyo, Japan). Subsequently, 50 µL of 10% Triton X-100 was added to determine the total LDH activity (katal per mL). The activity of occluded LDH was obtained by subtracting free from total LDH activity. Western Blotting For quantification of protein contents, the BCA assay kit (#23225; Pierce, Rockford, IL, USA) was used. Immunopurified material was eluted from the beads with 2% SDS, 62.5 mM Tris, pH 6.8 prior to protein determination. The BCA kit tolerates up to 5% SDS, and 2% SDS is recommended to eliminate interference by lipids. Subsequently, proteins were dissolved in sample buffer containing 2% SDS, 62.5 mM Tris, pH 6.8, 10% glycerol, and 0.01% bromophenol blue. Equal amounts of protein (100 ng) were resolved on a 15% Tris-glycine SDS-PAGE [18] and transferred onto nitrocellulose membranes (GE Healthcare, Solingen, Germany) using semi-dry blotting techniques (BioRad, Munich, Germany).
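As an aside on the LDH assay just described, the occluded activity follows from a simple subtraction once free and total (post-Triton) activities are known. The following is a minimal sketch of that calculation; the fraction values are hypothetical placeholders, not measured data.

```python
# Occluded LDH = total LDH (after Triton X-100 lysis) - free LDH.
# Activities are in arbitrary units (e.g., katal per mL); values are
# hypothetical examples for three gradient fractions.

def occluded_ldh(free_activity, total_activity):
    """Return the membrane-occluded LDH activity."""
    if total_activity < free_activity:
        raise ValueError("total activity cannot be smaller than free activity")
    return total_activity - free_activity

free = [0.8, 1.1, 2.5]    # hypothetical free activities per fraction
total = [2.0, 3.9, 6.4]   # hypothetical total activities per fraction
for i, (f, t) in enumerate(zip(free, total), start=1):
    print(f"fraction {i}: occluded LDH = {occluded_ldh(f, t):.2f}")
```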
Membranes were blocked with 5% skimmed milk powder in PBS/T (123 mM NaCl, 7.4 mM Na2HPO4, 4.3 mM KH2PO4, 0.1% Tween 20) for 1 h. Incubation with the respective primary antibody was performed overnight at 4 °C, followed by washing (five times, 10 min each), a second blocking step with 5% skimmed milk powder, subsequent incubation with the respective HRP-conjugated secondary antibody (GE Healthcare) and a final washing step in PBS/T (five times, 10 min each). Quantification and Statistics Immunoblots were incubated with Western Lightning ECL substrate and visualized using an ImageQuant LAS 4000 (both GE Healthcare). Quantification of immunosignals was performed with samples obtained under identical experimental conditions (n = 3-8) and run in one gel. Pixel intensities of non-saturated bands (±SEM, standard error of the mean) from the same blot were measured in voxels using ImageQuant TL software (version 8.1.0.0; GE Healthcare, 2011). Data were statistically processed employing the unpaired Student's t-test. Mass Spectrometry-LC-MS/MS Analysis of Individual Brain Regions (B, H, C) The immunopurified presynaptic active zone derived from mouse hippocampus, olfactory bulb and cerebellum was subjected to enzymatic digestion using the well-established serine protease trypsin. The amount of trypsin (Proteomics Grade, Sigma Aldrich, St. Louis, MO, USA) was adjusted to an enzyme-to-substrate ratio of 1:50 for each sample according to the protein concentrations determined by BCA Protein Assay (Pierce, Thermo Scientific, Waltham, MA, USA). The digestion was performed at 37 °C for 18 h and stopped by adding 3 µL of formic acid (FA). Samples were dried down and solubilized in solvent A (5% MeCN, 0.1% FA) to obtain a final concentration of 1 µg of peptide mixture per µL. Chromatographic separation of peptides was performed using an EASY nLC II system (Thermo Scientific, Bremen, Germany). Precolumns as well as analytical columns were packed in-house with XBridge BEH C18 material (3.5 µm, 130 Å, Waters, Eschborn, Germany), and an optimized gradient with increasing amounts of solvent B (95% MeCN, 0.1% FA) was applied over 130 min at 300 nL/min. Mass spectrometric measurements were performed online using a micrOTOF-Q II ESI-Qq-TOF instrument (Bruker Daltonics, Bremen, Germany) equipped with a nano-ESI source, using the following parameters of a previously optimized acquisition method [19]: electrospray voltage 4500 V, end plate voltage 50 V, nebulizer gas pressure 0.4 bar, dry gas 4 L/min, dry gas temperature 180 °C, scan range 50-2000 m/z, scan rate 1.25 Hz. Nitrogen was used as collision gas with the flow rate set to 30%. Collision sweeping was set as active, with the collision RF changing from 800 Vpp to 200 Vpp. A maximum of seven precursors per MS spectrum was selected for MS/MS acquisition. Quadratic calibration was performed using a calibration tune mix for ESI measurements with extended mass range (Bruker Daltonics, Bremen, Germany). Acquired spectra were post-processed using Compass Data Analysis (V4.0, Bruker Daltonics, Billerica, MA, USA), including deconvolution of spectra, detection of compounds and compilation of Mascot generic format (MGF) files for database searching. MS/MS searches were performed employing an in-house Mascot server (V2.4.1, Matrix Science Ltd., London, UK [20]) using the following parameters: 25 ppm peptide mass tolerance, 0.05 Da fragment mass tolerance, tryptic enzyme specificity, up to one allowed missed cleavage and ESI-QUAD-TOF as instrument setting.
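To make the search tolerances above concrete: a 25 ppm peptide mass tolerance scales with the precursor m/z. A minimal sketch, with arbitrary example m/z values:

```python
# Convert a relative ppm tolerance into an absolute mass window.
def ppm_window(mz, ppm=25.0):
    """Return the (low, high) bounds of a +/- ppm window around m/z."""
    delta = mz * ppm * 1e-6
    return mz - delta, mz + delta

for mz in (450.0, 800.0, 1200.0):
    lo, hi = ppm_window(mz)
    print(f"m/z {mz:7.1f}: +/- {hi - mz:.4f} Da ({lo:.4f} - {hi:.4f})")
```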
The database (SwissProt, released on 13 November 2013) was restricted to murine proteins, and the false discovery rate (FDR) was estimated by a decoy search and set to be ≤1.5%. Protein identifications with two or more matched peptides were considered significant. Results The first aim of this study was to immunopurify the proteome of the presynaptic active zone (PAZ) from defined mouse brain regions. Therefore, we developed a new experimental protocol for subcellular fractionation and immunoisolation of the PAZ derived from olfactory bulb (B), hippocampus (H) and cerebellum (C), based in principle on the method for individual total mouse brain previously described in detail [5]. For this purpose, we evaluated the key steps of subcellular fractionation for all three brain regions, starting with the purification of synaptosomes via Percoll gradient centrifugation, followed by sucrose density gradient centrifugation of synaptic vesicles docked to the presynaptic active zone. Lactate dehydrogenase (LDH) activity was chosen as a marker for the accumulation of metabolically active, membrane-sealed compartments during Percoll gradient centrifugation. Sealed compartments containing synaptic vesicles were regarded as synaptosomes. Upon Percoll gradient centrifugation, membrane-occluded LDH revealed a peak in fraction 6 and a broad plateau ranging from fractions 12-20 (Figure 1A). The curve progression was highly comparable between the individual brain regions. Immunodetection of characteristic marker proteins of the PAZ, the ubiquitous synaptic vesicle protein SV2 and the presynaptic plasma membrane constituent amyloid precursor protein (APP), revealed strong signals for SV2 and APP within fractions 12-20. The results demonstrate the simultaneous presence of membrane-occluded LDH and marker proteins, indicating an enrichment of metabolically intact nerve terminals referred to as synaptosomes. The synaptosome-enriched fractions were pooled and subjected to hypoosmotic lysis prior to discontinuous sucrose gradient centrifugation and immunodetection. Marker proteins for synaptic vesicles (SV), SV2 and synaptotagmin-1, as well as the plasma membrane (PM) constituents APP and Na+/K+-ATPase (NKA), co-migrated to denser fractions (fractions 16-24), indicating the presence of synaptic vesicles attached to the presynaptic membrane via the SDS-resistant SNARE complex (Figure 2). These fractions were employed for immunoisolation of the PAZ using a monoclonal antibody directed against the synaptic vesicle protein SV2, which contains 12 transmembrane spans. The immunopurified PAZ was subsequently analyzed by quantitative immunodetection to evaluate potential differences between PAZs derived from individual brain regions. The immunosignal for SV2, the target for immunopurification, revealed no significant differences between the PAZs derived from the three different brain regions (Figure 3), and immunopurification yielded similar amounts of PAZ protein. The fast calcium buffer protein calbindin was more abundant in the cerebellar PAZ as compared to the olfactory or hippocampal PAZ (*** p < 0.001). The calcium/calmodulin-dependent kinase CaMKII, which is involved in presynaptic signaling and is an important mediator of learning and memory, was most abundant in the hippocampal PAZ (* p < 0.05), whereas the neuronal cell adhesion molecule NCAM (** p < 0.01) was most abundant in the olfactory PAZ (Figure 3).
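The group comparisons above rest on the unpaired Student's t-test described in the Quantification and Statistics section. A minimal sketch of such a comparison is given below; the band intensities are hypothetical placeholders (n = 4 per region), not the measured data behind Figure 3.

```python
# Unpaired Student's t-test on immunosignal intensities, as used for
# comparing PAZ marker abundance between two brain regions.
import numpy as np
from scipy import stats

# Hypothetical non-saturated band intensities (arbitrary units):
calbindin_cerebellum = np.array([1520.0, 1480.0, 1610.0, 1555.0])
calbindin_olfactory = np.array([640.0, 720.0, 690.0, 700.0])

t_stat, p_value = stats.ttest_ind(calbindin_cerebellum, calbindin_olfactory)
print(f"cerebellum: {calbindin_cerebellum.mean():.0f} "
      f"+/- {stats.sem(calbindin_cerebellum):.0f} (mean +/- SEM)")
print(f"olfactory bulb: {calbindin_olfactory.mean():.0f} "
      f"+/- {stats.sem(calbindin_olfactory):.0f} (mean +/- SEM)")
print(f"unpaired t-test: t = {t_stat:.2f}, p = {p_value:.2g}")
```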
Constituents of the immunopurified PAZ from olfactory bulb, hippocampus, and cerebellum were further identified by mass spectrometry. We identified 648 individual PAZ proteins in total (B: 359; H: 418; C: 424), including prominent constituents such as SV2, synaptotagmin-1, synaptophysin, SNAP25, syntaxin-1, Munc18 and Na+/K+-ATPase. The data highlighted here focus on the composition of selected PAZ proteomes derived from different brain regions. Setting the threshold to proteins that could be reproducibly identified with a significance of FDR < 1.5% and with two or more peptides in a minimum of three independent experiments yielded 199 proteins (B = 96, H = 139, and C = 129 proteins), resulting in a high overlap of 61 proteins between the three brain regions (Venn diagram; Figure 4). This underpins the occurrence of common core constituents of PAZs and is in line with previous observations [5]. Furthermore, 25 (B), 41 (H), and 29 (C) individual proteins were exclusively identified within one of the respective brain regions. Core constituents of the PAZ abundant in all three brain regions included integral and associated synaptic vesicle proteins such as the SNARE-complex constituents VAMP2, SNAP25, syntaxin-1 and Munc18, the glycolytic machinery, signaling proteins such as 14-3-3 isoforms, CNPase, DRP-2, and subunits of CaMKII, plasma membrane-allocated proteins such as Thy-1 and NKA, numerous cytoskeletal proteins involved in actin filament and microtubule dynamics, and spectrin (S1). A selection of proteins identified in only one of the respective PAZs and involved in calcium homeostasis and synaptic plasticity is listed in alphabetical order in Figure 4. With the exception of atlastin-1, these proteins have recently been assigned to the PAZ of the entire rat and mouse brain. They are involved in diverse functional aspects of the release sites such as calcium homeostasis (calretinin, calbindin, Purkinje cell protein 1), synaptic plasticity (neuromodulin, paralemmin-1, contactin-1, protein NDRG2), cellular dynamics (septins-3/5/7/11), and structural reorganization (stathmin, dihydropyrimidinase-1). Additional examples include the olfactory marker protein OMP, which was exclusively identified in the PAZ derived from olfactory bulb, neurochondrin in the hippocampal PAZ, and calbindin in the cerebellar PAZ. This suggests that these proteins have an increased abundance in the respective PAZs. Discussion The presynaptic active zone represents a focal hot spot that is not only involved in the regulation of neurotransmitter release but also in multiple plastic structural and functional alterations underlying neural activity in the adult CNS [5]. The concerted action of a set of proteins present at the release site governs central functions in synaptic signaling (reviewed in [1,2]). Furthermore, constituents of the presynaptic active zone are targets of numerous potent neurotoxins and are vulnerable to neurodegenerative diseases. Lassek and coworkers allocated the amyloid precursor family members APP, APLP1 and APLP2 to the release sites [8]. Here we present proteomic data from different brain regions that originate from heterogeneous populations of neurons. The highly purified native PAZ proteomes derived from olfactory bulb, hippocampus and cerebellum reveal a conserved common core composition, indicative of common functional and structural principles.
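Returning to the overlap analysis above: the common core and the region-exclusive sets follow directly from set operations on the per-region identification lists. A minimal sketch with toy protein sets (illustrative stand-ins, not the actual 96/139/129 identifications):

```python
# Core and region-exclusive PAZ proteins as set operations on the
# per-region identification lists. The sets below are toy examples.
bulb = {"SV2", "VAMP2", "SNAP25", "Syntaxin-1", "OMP"}
hippocampus = {"SV2", "VAMP2", "SNAP25", "Syntaxin-1", "Neurochondrin"}
cerebellum = {"SV2", "VAMP2", "SNAP25", "Syntaxin-1", "Calbindin"}

core = bulb & hippocampus & cerebellum          # shared by all three regions
print("common core:", sorted(core))
print("exclusive to B:", sorted(bulb - hippocampus - cerebellum))
print("exclusive to H:", sorted(hippocampus - bulb - cerebellum))
print("exclusive to C:", sorted(cerebellum - bulb - hippocampus))
```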
Moreover, our data elucidate pronounced differences between brain regions in the abundance of PAZ constituents, implicating specific adaptations of the proteinaceous inventory of the PAZ to specific tasks in neural circuitry and plasticity. In agreement with previous data derived from entire rat [6,21] and mouse brain [5], mitochondria are common to all PAZ proteomes, whereas constituents of the postsynaptic density are virtually absent. In the following, we briefly highlight, for each brain region, one selected protein with increased abundance in its PAZ. In addition, selected proteins in the immunopurified PAZ from olfactory bulb, hippocampus and cerebellum are portrayed in the supplementary material (Text S2). Interestingly, many of these proteins are involved in adult neurogenesis and synaptic plasticity. Moreover, they have been implicated in learning and memory formation, which is often impaired in neurodegenerative disorders. The olfactory marker protein (OMP) was exclusively identified in the PAZ derived from olfactory bulb. OMP immunohistochemistry identifies olfactory receptor cell axons in the olfactory bulb [22]. OMP is present only in mature neurons [23]. Double labeling demonstrated that OMP and the microtubule-associated protein MAP2 are distributed in distinct regions within the glomerulus, revealing the compartmental nature of subglomerular organization. The synaptic vesicle protein synaptophysin was found to strongly co-localize with OMP [24]. OMP knock-out pups fail to show a preference between their biological mothers and unfamiliar lactating females [25]. The neuron-specific protein neurochondrin was identified in the hippocampal PAZ. Prominent expression of neurochondrin in the adult brain was previously observed in hippocampus, amygdala, septum, and nucleus accumbens, with moderate expression in the dorsal striatum [26,27]. Neurochondrin was originally discovered as a protein that induces neurite outgrowth [28]. A synaptosome fraction purified from mouse brain contained both neurochondrin and mGluR5 [27]. Neurochondrin knockout attenuated mGluR5-dependent stable changes in synaptic function (LTP and LTD) in the hippocampus [27]. Neurochondrin acts as a negative regulator of calcium/calmodulin-dependent protein kinase II phosphorylation and is essential for the spatial learning process [29]. Neurochondrin knockout also led to a behavioral phenotype associated with an animal model of schizophrenia, as indexed by alterations both in sensorimotor gating and in psychotomimetic-induced locomotor activity [27]. The EF-hand calcium-binding protein calbindin was highly abundant in the immunopurified cerebellar PAZ. It binds calcium ions with high affinity [30] and is enriched in Purkinje cells [30,31]. Cells that displayed calbindin during brain development were also calbindin-positive in the adult animal. Positive cells represented 74% of the Purkinje cells of the cerebellar cortex, whereas less than 1% of the neurons in the frontal cortex were immunopositive for calbindin [32]. The adult expression pattern developed steadily in the cerebellum [33], with calbindin contributing about 15% of total cellular protein in mature Purkinje cells [34]. Selective deletion of calbindin from cerebellar Purkinje cells resulted in distinct cellular and behavioral alterations, with permanent deficits in motor coordination and sensory processing [35,36].
In summary, the proteome of the immunopurified PAZ derived from olfactory bulb, hippocampus, and cerebellum revealed common core constituents that play a central role in presynaptic function. It is of note that the abundance of several protein constituents differed considerably between the respective brain regions, presumably reflecting region-specific functional adaptations of the presynaptic release site. This information helps to understand the impact of therapeutic drugs on their targets and to elucidate their subsequent effects on the PAZ proteome. In a similar way, the dynamics and functional diversity of postsynaptic AMPA receptor proteomes were found to reflect context-specific modulation [4]. Conclusions The identification of individual presynaptic active zone protein components is a prerequisite for further functional investigations and also provides a solid basis for evaluating their interactions. Our data suggest that the differences in the PAZ proteome reflect specific adaptations to regional neuronal circuitries and the functional and structural dynamics of their corresponding release sites. Moreover, our novel experimental setup opens avenues for studying presynaptic active zone proteomes in time and space under native conditions. The findings reported here may also serve as a template for studying the impact of brain-region-specific mutants on the presynaptic proteome and presynaptic physiology.
2015-09-18T23:22:04.000Z
2015-05-13T00:00:00.000
{ "year": 2015, "sha1": "38c6ce4302f4faa6435a09b01fc841faffa08853", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/2227-7382/3/2/74/pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "38c6ce4302f4faa6435a09b01fc841faffa08853", "s2fieldsofstudy": [ "Biology" ], "extfieldsofstudy": [ "Biology", "Medicine" ] }
108292406
pes2o/s2orc
v3-fos-license
Assessment of Tp-Te Interval and Tp-Te/QT Ratio in Patients with Aortic Aneurysm BACKGROUND: Arrhythmic disorders in aortic aneurysm (AA) have rarely been reported. AIM: The study aimed to assess repolarisation indices of ventricular arrhythmia (VA) (mainly the Tp-Te interval and Tp-Te/QT ratio) in patients with AA. METHODS: A group of 98 patients with AA and 75 control subjects was recruited. Several indices of ventricular arrhythmia were assessed. RESULTS: Many indices, including QT, QTc, QTpc, Tp-Te/QT, Tp-Te/QTc, Tp-Tec/QTc, S-Tp, S-Tpc, S-Te, S-Tec and fQRS, were found to be significantly different in the AA group (for all P < 0.05). However, QTp, mean Tp-Te and Tp-Tec were not found to differ (for all P > 0.05). Aortic diameter (Ao-D) was found to have a positive correlation with QTc, QTpc, S-Tp, S-Tpc, S-Te, S-Tec and fQRS (for all P < 0.05) and a negative correlation with Tp-Te/QT (P = 0.047). The best cut-off level for the prediction of Tp-Te ≥ 100 ms was found to be Ao-D > 43.5 mm in ROC analysis (AUC: 0.69; P = 0.151), with sensitivity 60% and specificity 79.6%. CONCLUSIONS: Although our study did not find any difference in the mean Tp-Te interval between groups, many other indices of TDR were found to be significantly different. Ao-D was found to have significant correlations with many indices. Introduction Thoracic aortic diseases include degenerative, structural, acquired, genetic-based, and traumatic diseases of the aorta, and aortic aneurysm (AA) is a major part of this spectrum; its diagnosis is easily made by transthoracic echocardiographic (TTE) study [1], [2]. AA has a complicated pathogenetic process involving degenerative changes, significantly diminished aortic distensibility, and substantially increased aortic wall stress and stiffness, which have been demonstrated to be predictive risk factors for increased cardiovascular disease and arrhythmic events [3], [4], [5]. Although there are several case reports about disorders of atrioventricular conduction in AA complicated by dissection, there is little knowledge in the literature about arrhythmic disorders in patients with AA without rupture or dissection [6], [7]. On the surface electrocardiogram (ECG), the T wave is inscribed by the sum of opposing voltage gradients across three different cell layers (epicardial, M and endocardial cells) in the ventricular wall. The Tpeak-Tend (Tp-Te) interval has been considered a measure of transmural dispersion of repolarization (TDR), and prolongation of Tp-Te (≥ 100 milliseconds [ms]), as well as of QTc, QT and the Tp-Te/QT ratio, has been found to be a risk factor for developing cardiac arrhythmia, especially ventricular arrhythmia (VA) and sudden cardiac death (SCD), in various cardiac diseases as well as in normal healthy individuals [8], [9], [10], [11], [12], [13], [14]. Fragmented QRS (fQRS) is another important novel ECG risk predictor for electromechanical dyssynchrony, VA, SCD and poor prognosis in patients with heart failure (HF) and hypertrophic cardiomyopathy [15], [16]. This study therefore aimed to determine whether the Tp-Te interval and other indices of TDR, such as QT, QTp, Tp-Te/QT and fQRS, differ significantly in patients with AA compared with a healthy control group. Methods The study was completed between March 2017 and January 2018 with a total of 173 patients. Ninety-eight patients with AA and 75 normal healthy persons were included.
Baseline characteristics and disease history, including diabetes mellitus (DM), hypertension (HT) and coronary artery disease (CAD), as well as any treatment or diet, were assessed at baseline. AA was evaluated according to previous guidelines, with the upper limit of the normal ascending aorta diameter accepted as 39 millimetres (mm) [1], [2]. Patients with prior pacemaker implantation, cancer, other major illnesses, abnormal thyroid function tests, abnormal electrolyte values or on antiarrhythmic drug treatment were excluded, as these may affect the ECG and thereby alter T-wave measurements. Approval of the Ethics Committee The study protocol was approved by the Ethics Committee of Afyon Kocatepe University, and informed consent was obtained from each patient. ECG All ECGs were recorded using a General Electric MAC 5000 (GE Healthcare, Milwaukee, WI, USA). All 12-lead ECGs were recorded at 25 mm/s with standard lead positions. After magnification by 200%, all indices were measured. To eliminate both interobserver variability and bias, all measurements were made by a single observer who was blinded to all clinical findings. QT intervals were taken to be from the onset of the QRS complex to the end of the T wave. The Tp-Te interval was defined as the interval from the peak of the T wave to the end of the T wave [17]. Q-Tpeak (QTp) was measured from the onset of the QRS to the peak of the T wave (Figure 1). The Tp-Te value reported was the average of the values obtained in all precordial leads. The Tp-Te/QT ratio was calculated as the ratio of Tp-Te in a given lead to the corresponding QT interval. Other novel indices were the S-Tpeak (S-Tp) and S-Tend (S-Te) intervals, measured from the nadir of the S wave to the peak and to the end of the T wave, respectively, in the precordial leads. Bazett's formula (n/√RR) was applied to all indices to obtain their heart-rate-corrected forms (c: heart rate corrected) [18]. fQRS included various RSR′ patterns and was defined by the presence of an additional R wave (R′), a notch in the nadir of the S wave, a notch of the R wave, or the presence of more than one R′ (fragmentation) in two contiguous leads corresponding to a major myocardial segment [15]. Echocardiography A Vivid 5 Pro echocardiographic unit (GE, USA) with a 3.5 MHz probe was used. The echocardiographic study was performed in the standard position, and standard measurements (M-mode, two-dimensional and Doppler echocardiography) were performed and/or reviewed by experienced staff cardiologists, compliant with the recommendations of the American Society of Echocardiography. Mitral inflow was determined by continuous- and pulsed-wave Doppler echocardiography at the tips of the mitral leaflets. Early diastolic mitral peak flow velocity (E), late diastolic mitral peak flow velocity (A) and the E/A ratio were measured. Left ventricular diastolic dysfunction (LVD-Dys) was defined as a mitral continuous-wave (CW) Doppler E < A, as stated in previous guidelines [19], [20]. Statistical analysis Continuous variables were expressed as mean ± SD (standard deviation), and categorical variables were presented as frequencies (%, per cent). Continuous and categorical measures were compared with t-tests or χ² statistics, as appropriate. For correlations, appropriate correlation analyses were performed. A p value < 0.05 was accepted as statistically significant. All analyses were performed using SPSS Version 16.0 (SPSS Inc., Chicago, IL, USA).
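The corrections and ratios defined above reduce to a few lines of arithmetic. A minimal sketch, with hypothetical interval values in milliseconds:

```python
# Bazett's heart-rate correction (interval / sqrt(RR in seconds)) and the
# Tp-Te/QT ratio. The ECG values below are hypothetical placeholders.
import math

def bazett(interval_ms, rr_ms):
    """Heart-rate-corrected interval (ms) by Bazett's formula."""
    return interval_ms / math.sqrt(rr_ms / 1000.0)

qt_ms, tp_te_ms, rr_ms = 400.0, 95.0, 850.0

qtc = bazett(qt_ms, rr_ms)
tp_tec = bazett(tp_te_ms, rr_ms)
print(f"QTc        = {qtc:.1f} ms")
print(f"Tp-Tec     = {tp_tec:.1f} ms")
print(f"Tp-Te/QT   = {tp_te_ms / qt_ms:.3f}")
print(f"Tp-Tec/QTc = {tp_tec / qtc:.3f}")
```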
Results A group of 173 patients was included in our study (98 patients with AA and 75 patients in the control group). Some of the baseline features are displayed in Tables 1 and 2. Many baseline parameters were found to be significantly different in the AA group compared with the control group, except LDL cholesterol (LDL-chol) and pulse rate (for LDL-chol P = 0.178; for pulse rate P = 0.610; all others P < 0.05). The mean ascending Ao-D was 41.8 ± 3.0 mm in the AA group and 27.8 ± 3.2 mm in the control group (P < 0.0001). Significant differences were found between groups for posterior wall (PW) and left atrial (LA) diameter, QT time and mitral E < A or E > A (for all P < 0.05), but not for right atrial (RA) and right ventricular (RV) dimensions, P time, QRS time and T time (all P > 0.05). For TDR, significant differences were found between groups for QTc, QTpc, Tp-Te/QT, Tp-Te/QTc and Tp-Tec/QTc (for all P < 0.05), but not for QTp, Tp-Tec and Tp-Te (for all P > 0.05; Table 3). (Table footnotes: *: Chi-square test. S-Tp: measurement from the nadir of the S wave to the T peak. S-Te: measurement from the nadir of the S wave to the T end. fQRS: fragmented QRS. SD: standard deviation; ms: millisecond; c: heart-rate-corrected form by Bazett's formula (n/√RR); min: minimum; max: maximum.) When considering all patients with a Tp-Te interval ≥ 100 ms, there was no difference between groups (P = 0.382). Significant differences were also found between groups for S-Tp, S-Tpc, S-Te, S-Tec and fQRS (for all P < 0.05; Table 4 and Figure 2). In correlation analysis, Ao-D was found to have a positive correlation with QTc, QTpc, S-Tp, S-Tpc, S-Te, S-Tec and fQRS (for all P < 0.05; Table 5). However, a negative correlation was found with Tp-Te/QT (r = -0.158; P = 0.047). (Figure 2: comparison of indices of TDR.) To determine the best cut-off point of Ao-D for the prediction of Tp-Te ≥ 100 ms, analysis of ROC (receiver operating characteristic) curves demonstrated a cut-off level of Ao-D > 43.5 mm, with an area under the curve (AUC) of 0.69 (P = 0.151), sensitivity 60% and specificity 79.6%. Discussion A histopathological feature of AA is degeneration of the medial muscular layer of the vessel wall, whose main structural proteins are collagen and elastin [21]. The pathogenesis of AA includes aortic wall degeneration with passive luminal dilation and active dynamic remodelling; aortic stiffness also plays a major role as a contributing risk factor in this pathogenetic process, as well as being a result of AA progression [22]. Aortic stiffness, together with other risk factors of AA, has been accepted as a risk factor for increased major cardiovascular events and some arrhythmias [4], [5]. Some reports have been published about arrhythmic consequences of aortic disease, especially acute aortic dissection [6], [7]. However, there is limited information about arrhythmic events and disorders in patients with AA without dissection. TDR within the ventricular myocardium has been attributed to three electrophysiologically different cell types: endocardial, epicardial and M cells [23]. The peak of the T wave was shown to coincide with epicardial repolarisation and the end of the T wave with repolarisation of the M cells, so that Tp-Te provides a measure of TDR [24].
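For illustration, the reported cut-off selection can be reproduced in outline with a standard ROC routine, choosing the threshold that maximises Youden's J (sensitivity + specificity - 1). The sketch below uses synthetic aortic diameters and outcomes, not the study data:

```python
# ROC-based cut-off selection for Ao-D predicting Tp-Te >= 100 ms,
# on synthetic data (not the study cohort).
import numpy as np
from sklearn.metrics import auc, roc_curve

rng = np.random.default_rng(0)
ao_d = np.concatenate([rng.normal(41, 3, 80),   # Tp-Te < 100 ms group
                       rng.normal(45, 3, 20)])  # Tp-Te >= 100 ms group
outcome = np.concatenate([np.zeros(80), np.ones(20)])

fpr, tpr, thresholds = roc_curve(outcome, ao_d)
youden_j = tpr - fpr
best = int(np.argmax(youden_j))
print(f"AUC = {auc(fpr, tpr):.2f}")
print(f"best cut-off: Ao-D > {thresholds[best]:.1f} mm "
      f"(sensitivity {tpr[best]:.0%}, specificity {1 - fpr[best]:.0%})")
```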
Prolongation of TDR indices such as the QTc, QTp, Tp-Te and Tp-Tec intervals and the Tp-Te/QT ratio has been suggested to index TDR and to be a risk factor for VA in various clinical scenarios, such as in patients with Brugada syndrome (BS), hypertrophic cardiomyopathy, ST-segment elevation myocardial infarction and HF with low ejection fraction [12], [13], [14], [25], [26], [27], [28]. In these studies, various cut-off levels for Tp-Te values ≥ 100 ms have been proposed to predict adverse outcomes [27]. In our study, the mean ascending Ao-D was higher in the AA group (P < 0.0001). As the main finding of our study, we found significant differences between groups for indices of TDR such as QTc, QTpc, the Tp-Te/QT ratio, the Tp-Te/QTc ratio and the Tp-Tec/QTc ratio, but not for QTp and Tp-Tec. Interestingly, the mean Tp-Te interval was not found to be different between groups (P = 0.111). When considering all patients with Tp-Te ≥ 100 ms, there was no difference between groups (P > 0.05). The newer indices S-Tp, S-Tpc, S-Te, S-Tec and fQRS were found to be significantly different (for all P < 0.05). In correlation analysis, Ao-D was found to have an important positive correlation with QTc, QTpc, S-Tp, S-Tpc, S-Te, S-Tec and fQRS (for all P < 0.05) and a negative correlation with the Tp-Te/QT ratio (P = 0.047). To determine the best cut-off level of Ao-D for a Tp-Te interval ≥ 100 ms, ROC (receiver operating characteristic) curves demonstrated a cut-off level > 43.5 mm, with an area under the curve (AUC) of 0.69 (P = 0.151), sensitivity 60% and specificity 79.6%. Limitations: There are some important limitations to this study. It was a cross-sectional study, and these findings need further evaluation in a cohort study to establish the importance of these indices for the prediction of cardiovascular outcomes. In conclusion, although our study did not find any difference in the mean Tp-Te interval between groups, many other indices of TDR were found to be significantly different. Ao-D was found to have significant correlations with many indices. Their clinical usefulness for the prediction of adverse outcomes needs to be assessed in future studies.
2019-04-12T13:29:36.055Z
2019-03-13T00:00:00.000
{ "year": 2019, "sha1": "5e70fc89c9a012db16d631658dc67083b23cff59", "oa_license": "implied-oa", "oa_url": "https://europepmc.org/articles/pmc6454177?pdf=render", "oa_status": "GREEN", "pdf_src": "PubMedCentral", "pdf_hash": "5e70fc89c9a012db16d631658dc67083b23cff59", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
195825568
pes2o/s2orc
v3-fos-license
A Qualitative Investigation of the Experiences of Students and Preceptors Taking Part in Remote and Rural Community Experiential Placements During Early Medical Training Background: Medical education can help alleviate the chronic undersupply of physicians to rural communities. Providing students with early rural clinical experiences may allow the gaining of the necessary knowledge and skills to practice and live rurally, as well as the desire to do so. Purpose: This study aims to provide a detailed understanding of Remote and Rural Community Placements (RRCPs) which occur in the second year of a Doctor of Medicine programme. Methodology/Approach: Using a thematic analysis approach, we examined the experiences of students and preceptors in the RRCP. Data were collected using semi-structured interviews and focus groups. Findings/Conclusions: Students valued RRCPs as a formative clinical experience, and preceptors gained professionally from participating. The RRCPs enhanced students' regard for, and knowledge of, rural medicine. Yet, contrary to the stated aims of the placement, students spent very little time in activities outside of the clinic, neither learning about the community nor about the life of a physician as a community member. Implications: Medical educators should recognise that students and preceptors will inevitably place different value on the different sociocultural and perceptual aspects of placements, namely clinical and non-clinical. As such, the curriculum should draw clearly articulated links between each. Introduction Rural communities struggle to retain physicians, which contributes to health inequities characterised by higher rates of disease morbidity and mortality. 1 For example, in Canada, rural residents have a shorter life expectancy than those living in urban centres. 2 Although this is in part due to a higher incidence of workplace accidents in rural communities, increasing access to health care services remains a key priority in rural settings. 1 While insufficient physical infrastructure is sometimes evident in rural locations, it is the lack of health-care practitioners that predominantly underlies the poorer health status of rural residents. 3 The reasons for the lack of rural physicians are many and include personal and professional factors such as geographic remoteness, lack of professional support including high workload, lack of recreational facilities, minimal or no educational choice for children, and a deficit of employment opportunities for spouses. [4][5][6] These actual or perceived negative aspects of rural practice have had a detrimental impact on both recruitment and retention of physicians, leading to a mismatch between supply and demand. For example, although 31% of Canadians live in rural areas, only 17% of family physicians and 4% of specialists live and work rurally, with this disparity expected to increase. [7][8][9] Improving the health of rural populations is therefore, at least in part, conditional on increasing the supply of physicians practicing in rural communities. Training physicians to possess the necessary knowledge, skills, and attitudes for rural practice is a key mechanism to address this gap, with rural community-based medical education being a primary means of achieving this. 10 The development of community-based medical education has been driven by the desire to train doctors where they will base their future practice, a practice which occurs increasingly within communities.
This is in contrast with the more traditional method of teaching students in large urban hospitals. Underlying this change is the idea that medical practice is place-dependent, and that learning in one location does not necessarily equip the student to practice somewhere else. 10,11 This applies both to the knowledge and skills learned, but also to the attitude and adaptability of the graduate to practice in contexts familiar or unfamiliar to them, as well as to develop a place-informed professional identity. 12 As such, the training of physicians who can, and who want to, practice in rural communities is best done experientially in rural settings. 3 Informed by the wider place-based education movement, 13 it is this idea that has led to the development of the Remote and Rural Community Placements (RRCPs) at the Northern Ontario School of Medicine (NOSM) which form the subject of this study. 14 The RRCPs were based upon rural elective placements at other institutions which began at Dartmouth in the 1970s, and later at the Morehouse School of Medicine and Eastern Virginia Medical School in the 1980s. [15][16][17][18] Such placements had been shown to be effective for developing rural physician identities, [8][9][10][11][12][13][14][15][16][17][18][19][20][21] and for nurturing positive attitudes towards rural medicine. [22][23][24][25] NOSM was established in 2005 with a social accountability mandate to improve the health and healthcare of those living in Northern Ontario, a large region covering approximately 800 000 square km. 14 Although some residents live in smaller cities of approximately 80 000-100 000 people, many live in small rural and often remote communities, communities which have experienced difficulties recruiting and retaining physicians. 14 The RRCPs represent one of the main approaches for training physicians in rural communities in a manner that prepares them for their later practice in the same northern communities. The NOSM MD programme comprises a 2-year, mainly classroom-based foundational phase occurring in the two largest cities in the region, followed by a longitudinal integrated clerkship which occurs in smaller rural communities, followed by a rotation-based clerkship in the hospitals located in the two larger cities. The RRCPs are embedded within year 2 of the NOSM MD programme and are mandatory experiences which all students must complete before progressing to year 3. 14,26 The RRCPs occur within a 6-week teaching module with the first and last week of the module being on-campus. Both placements are 4 weeks long, providing students with the opportunity to live in a rural community and learn from one or more of the physician preceptors. Each RRCP placement week includes 15 hours of 'clinical time' and 3 hours spent with other health professionals in the community or healthcare-related agencies. These experiences are in addition to the academic curriculum, which is taught using either pre-recorded lectures or phone-in small group sessions while the students are away from their home campuses. The curriculum of the clinical time component of the RRCP was deliberately left only very generally defined as '(Students) will learn about what it is like to live and practice medicine in these settings' 27 due to a desire to allow the preceptor to teach students what they view as being relevant to the practice of medicine in their own community.
This has, however, left it rather unclear as to what occurs during the placement and how these activities relate to the desired outcome of preparing students for rural practice. To address this gap, this study investigated the experiences of medical students and their preceptors in the RRCPs to better understand the pedagogies that contribute to meaningful engagement and preparation. The study sought to understand what occurs during the placements, identify outcomes of the RRCPs, and guide future models for RRCPs and similar activities occurring elsewhere. Moreover, given that the RRCPs are experiential in nature they fall within the 'Perceptual' dimension of place-based education, 28 and we ask, 'perception of what?'. Methods Participants Participants were recruited by purposive and convenience sampling. Preceptors (P) who had taken part in the RRCP during at least one of the previous two academic years were invited to participate. Student participants (S) were recruited over two academic years from the Lakehead University campus of the medical school. All participants gave informed consent before taking part in the study according to a protocol approved by the Lakehead University Ethics Board (File # 1462163). In total, 13 preceptors (8 female and 5 male) and 20 students agreed to participate. The gender of the student participants was representative of the gender mix of the class. All students had grown up in Northern Ontario, with 11 having grown up in smaller communities and 9 in Thunder Bay. Data collection Preceptors were interviewed individually by telephone using a semi-structured interview 29 (P1-P13); student participants took part in two focus groups (FG1 and FG2) held in person, except for one student who was interviewed individually for scheduling reasons (S1). Interviews and focus groups lasted between approximately 30 and 90 minutes. Student focus groups took place immediately following the first RRCP of year 2. Both preceptors and students were asked to describe (1) a typical clinical learning session; (2) what experiences, both positive and negative, stood out in their minds; and (3) what they had learned (students) about rural medicine or what they thought students had learned (preceptors). In addition, students were asked specifically if and how their attitudes towards rural practice had changed after the RRCP, and preceptors were asked about why they were involved in the RRCPs and what they personally gained or lost from their participation. The semi-structured interview questions were developed based on the research question and existing knowledge about the RRCPs. Detailed field notes, written immediately following the interviews and focus groups, on body language, researcher biases, and detectable affect served as another important source of data in the study. Data analysis All interviews and focus groups were audio-recorded, professionally transcribed, and uploaded to ATLAS.ti (Scientific Software Development GmbH, Germany). Field notes were also transcribed and uploaded to ATLAS.ti. One member of the research team performed the initial coding for the project. Thematic analysis was undertaken using reflexive memoing and successive rounds of coding. The researcher first immersed themselves in the data by reading it twice, followed by a process of open coding, examining small sections of text made up of words, phrases, and sentences. This formed the basis for a preliminary and ever-evolving master 'code-book' for analysing subsequent data.
30 Peer debriefing with other members of the team throughout the process also added rigour and ensured validity. Open coding was followed by axial coding, which helped make connections between the emerging categories and eventually, after the categories were sorted, compared, and contrasted until saturation, led to key themes. In the study, rigour was enhanced using the following strategies: (1) detailed field notes as a form of description, (2) reflexive investigator memoing, (3) professional transcription, (4) data-source and theoretical triangulation, and (5) coders' detailed audit trails including reporting on 'code drift'. 31 Results Preceptors and students (interviewed after the first RRCP) were asked about their experience of the RRCPs and what they found meaningful regarding their participation. In the data, four main themes emerged: (1) motivation of preceptors; (2) clinical experiences of students; (3) communication between preceptors, students and/or the institution; and (4) valuing place and community in medical education, which are described below. Theme 1: motivation of preceptors The interviews with the preceptors revealed why they had chosen to be involved in the RRCPs. Preceptors identified four main motivations. (i) Enhancement of regional healthcare The involvement of preceptors in the RRCPs flowed from a desire to be part of the mission of the school to enhance the provision of healthcare to the region: 'when I heard about (the medical school) I wanted to be involved … teaching students so they could actually work here in the future was really exciting, a medical school that actually might help' (P4). (ii) Enhancing clinical capacity The community preceptors also hoped that their involvement with the school would benefit their clinical practice, although this was not generally realised: 'it would be good to also have some residents here at the same time to help with the load' (P5) and 'I am happy to take these young students but I was hoping there would be some new docs here by now or even post-graduate learners but that's not happened' (P2). Rather, preceptors articulated how the teaching of novice learners takes time: 'my students have been generally good to teach but it does slow me down clinically but that's to be expected and we are prepared for that' (P9). Such comments reinforce the mission of NOSM, to enhance the supply of rural physicians, while signalling the need to assess the burden of RRCP placements on preceptor workload. (iii) Teaching students about rural medicine Preceptors also wanted to teach students about the work of a rural physician: 'I get to be the one to show (students) what it is like to be working in a small town, some like it, some probably don't, but they all gain something useful from this' (P2) and 'when I was at medical school I never met a single rural physician and (at NOSM) we are the first (physicians) they get to experience clinical work with' (P13). (iv) Professional development as teachers Finally, preceptors also viewed the RRCPs as having enhanced their development as teachers, particularly in the mentoring of such novice learners: 'I had only taught residents before and it took a bit of discussion with the student to plan out the time, and even after that I was learning about what their needs were as we went along' (P1) and 'with these students I can't assume much, and I had to learn to break things down for them and really think about what I do and why' (P10).
As such, the RRCP structure enabled preceptors to reflect on their own practice and how best to share their situated knowledge with medical students. Theme 2: clinical experiences of students The clinical experiences of the students represented the majority of what was said during their focus groups and can be separated into three subthemes: clinical confidence, formative clinical experiences, and learning about rural medicine. (i) Building clinical confidence The student participants expressed how much they had enjoyed their first substantial clinical experience in medical school and how it had increased their confidence compared to purely classroom-based learning: 'on the first day I was terrified, I thought I was going to be in the way but by the end I was really enjoying it, I grew a lot' (FG2). The students also referred to how the RRCP helped them feel prepared for their longitudinal integrated clerkship the following year: 'I was really worried about going away for so long next year but I found (the RRCP) helped me see what that might be like and that it would be okay' (FG2). (ii) Formative clinical experiences The students and preceptors both highlighted the advantages of having formative experiences in a rural practice. They spoke about the opportunity to apply the knowledge gained in the classroom: 'It was good to try out what I had learned in (clinical skills classes) with actual patients, I felt I got a lot better at communicating with patients' (S1) and 'I realise that this is the first clinical experience these students have had and that is a big deal for me, I am glad they had it here' (P2). Second, the need to integrate knowledge gained in the body-systems-based curriculum was found to be both challenging and useful: 'the range of patients and things we were doing surprised me, I was struggling to keep up but I learned a lot' (S1), which was echoed by another who talked about a need to integrate clinical knowledge, saying 'in (clinical skills classes) I knew what sort of case we would have but in (the RRCP) I had to put a lot of different things together' (FG1). Finally, the variety of clinical experiences was seen as an advantage of the RRCP: 'In one week I was at a birth, saw chemotherapy administered, and had a shift in ER' (FG2) and 'I can't imagine a better place than a small community to learn the basics of medicine. You need to do a lot yourself and I think that leads to a better understanding' (P8). (iii) Learning about rural medicine Student participants recalled many experiences that were specific to rural medicine: 'one patient was really upset when they were told that they would have to go to (larger urban centre) for treatment' (FG1) and 'I learned about how (rural physicians) worked with the physicians in (larger urban centre) to do things they could not do in (the rural community)' (FG1). Interprofessional teamwork was also identified by student participants: 'working with (Nurse Practitioner) was really interesting, I really felt I was part of a team' (FG2). Student participants also shared their growing understanding of, and appreciation for, rural medicine: 'I liked the variety of things I did and how everyone worked together' (FG1) and 'I am glad I got to see what being a physician in a small town is like and I really admire those who do it but, to be honest, it's not for me' (FG2).
Theme 3: communication between preceptors, students and/or the institution The nature and quality of the interaction between students, teachers and the institution emerged as a key theme in the data. First, the relationship between preceptors and the medical school was viewed as lacking: 'I did not hear much from (NOSM) except when they wanted me to take a student, but I figured it out' (P1). The poor communication impacted two different aspects of the curriculum. The first related to student well-being, such as a preceptor's experience with a disengaged learner: 'I think they were missing home, they did not seem to really want to be there but I was not sure what was going on with them' (P6). When asked if they knew how to obtain support from the institution for such a scenario, they replied that they did not and commented, 'there are a few of us who do this here, we basically help each other'. Second, a lack of clarity regarding the curriculum was expressed: 'I gave (the student) lots of feedback but (NOSM) doesn't seem interested in knowing what (their students) are achieving except that they showed up' (P10) and 'my preceptor was not clear about what we should be doing' (FG2), although this was not always viewed negatively: 'I was glad there were not too many set objectives which gave us a lot of freedom to create something with the student' (P4). Theme 4: valuing place and community in medical education One of the main aims of the RRCP is for students to explore their host community and what life is like for a rural physician outside of the clinic. The importance placed on this objective of the placement was starkly different between teachers and students. Preceptors valued this aspect particularly as it related to professionalism: 'I spoke to (the students) about what to do when they met patients outside (the clinic)' (P2) and 'It's important to know that they have to behave really well in public, so I tell them things like I am never seen with a drink in my hand because patients might think that I am revealing all their secrets' (P1). They also noted, however, that students were not so interested in this aspect of the placement: 'the (community events) that go on around here are usually on the weekend and students don't have to be here then so they miss them' (P2), while another commented, 'I find it hard to interest students in anything outside the clinic' (P7). When the students were questioned about what they did when they were not in the clinical environment or 'in class', one student laughed and said, 'sleep and eat' (FG1), and when they were outside of the clinic they spent time mainly with their own peers. The lack of community involvement was not seen as a major deficiency by students: 'I just wanted to spend time learning about medicine' (FG1), 'I was not really interested in the community to be honest because I will never practice there so what's the point?' (FG2) and 'I grew up in (the same community as the placement) so I know all about it already' (FG1). In addition, students commented on feeling overwhelmed during the RRCP, as the clinical time with their preceptors was in addition to the regular curriculum: 'I found going to the (regular curriculum sessions) and working with my preceptor exhausting … (the preceptor) did not seem to know I had other things to do' (FG2) and 'I was asked to come in on the weekend, I just did not want to do it, but I said yes because I wanted to keep my preceptor happy' (FG1).
That this could lead to conflict within the teaching relationship was evident from both preceptors and students: '(My preceptor) was inviting me to additional things over my 15 hours and I had to just say no, they were kind of upset about that' (FG1) and 'I had set up some additional experiences in line with what the student said they were interested in but they refused to come' (P12). Discussion and Conclusion Our data suggest that both students and preceptors view the RRCPs as valuable and as a formative clinical experience. The RRCPs gave the students an opportunity to apply and improve their classroom-acquired knowledge in an authentic clinical setting (see Theme 2). The findings suggest the RRCPs contribute to increased clinical confidence, a similar outcome to that of other early clinical experiences in medical school. 32 There was also evidence, in the data collected from students, that the RRCPs may be viewed in part as an 'orientation' for clerkship, and we suggest that programmes which seek to include community-based clerkships also include shorter 'in residence' placements in earlier years of their undergraduate programmes for this reason (see Theme 2.i). In addition, it is notable that the RRCPs and community-based clerkship occur in different places. This may allow the student to develop an understanding of how place affects practice and, in doing so, improve their ability to adapt to new practice contexts. As such, an explicitly sequenced curriculum in which students build on what was learned in previous placements, perhaps using a combination of articulated learning objectives in concert with a process of self-reflection, may be warranted. Our study (see Theme 2.iii) also indicates that the RRCPs allowed students to discern experientially important features of rural healthcare such as interprofessionalism, health-care teams, and generalism, all widely accepted as key components of rural medical practice. [33][34][35][36] Students also learned about the limits of rural community-based care, and how urban and rural physicians interact to deliver healthcare. There was evidence of students developing a positive regard for rural medicine, which may act to enhance the reputation of rural medicine within the profession, as well as to allow students to build their identity as rural physicians, in agreement with previous studies. [18][19][20][21]37 As such, experiencing rural medicine early in training may be effective in forming such an identity, as opposed to experiencing rural practice later in training when a presumably non-rural identity has already formed. 3 What can be clearly concluded from the data, however, is that the RRCPs allow students to learn about rural medicine and discern whether, or not, they see themselves as rural physicians in training. The impact of poor learning experiences (as suggested by our data and that of others), 38,39 such as feeling overwhelmed, not being able to gain desired clinical experiences, or having conflict between student and teacher, may reduce the desire to practice rurally, as these relate to the personal and professional aspects of community life that are known to affect physician recruitment and retention. 40,41 Indeed, the suitability of such an exposure model, promoted both by NOSM and elsewhere, 42,43 as an aid to physician recruitment is unclear. This is a key question as it is an important motivator of physician involvement (see Theme 1.ii).
While NOSM and others have reported that rural-based training enhances the likelihood of future rural practice, 18,20,22,44,45 it is unknown how the RRCPs affect physician recruitment to these communities. Indeed, preceptors voiced concerns that clinical capacity had not been increased in their community, this being compounded by a lack of senior learners, for example residents, being placed there that could offset the drag on clinical practice that novice learners represent. Having various stages of learners in the same community at the same time, termed integrated clinical learning, can reduce this effect as more senior learners can add to clinical capacity, but this clearly had not occurred at all placement sites. 46 Such comments also suggest a quid pro quo of preceptors taking junior learners with the understanding they would also be able to share their clinical and teaching load with more senior learners or fully qualified physicians, although this also requires further investigation. In the meantime, we would recommend that those designing similar placements pay close attention to the overall student experience if enhanced recruitment to rural communities is desired, given that the affective outcome of the placement likely plays an important role. Our data also indicated that in addition to a desire to teach students about rural medicine and build clinical capacity, the RRCPs also contribute to the development of the professional identity of preceptors as academic physicians, something that is the norm in large centres but is much less a part of rural medicine. Viewed in this way, the RRCPs may play an important role in the development of rural academic medicine in that they represent an important initial step towards increasing clinical teaching capacity in small communities that previously had very little. Further movement along such a developmental trajectory is dependent on ongoing and effective communication with the placement communities, something our data suggest can be difficult, perhaps due to geographic isolation. While improved communication in the distributed learning environment may be advantageous for the enhancement of collaborative partnerships with community, using this to exert too much control over the learning experience may not be universally welcomed (Theme 3). While broad curricular aims should be articulated and made mandatory, we would suggest that more detailed curricular materials should be made optional, to be utilised by those who need more assistance in structuring their own teaching, particularly those who have had little experience teaching novice learners. One aspect of our data that we found surprising was the different value that students and preceptors placed on learning outside of the clinic. Given that developing a place-based professional and social identity is key to recruitment of physicians to rural communities, this is, in this study, a significant finding, 11 and highlights that curriculum intent and actual student experience can markedly differ. Viewed through the lens of place-based educational theory, this is fundamentally a difference in how students and preceptors relate to the sociocultural aspects of the placement location: as short-term residents, students would not be expected to value learning about the wider community context as much as the permanently residing preceptors.
In other words, to answer the question about perception asked in the introduction to this paper, what is desired to be perceived and, to a large extent, what is perceived differ between students and their teachers. It is therefore likely that including mandatory community exploration experiences in the curriculum would not result in students valuing such learning unless there is a well-articulated connection to clinical work. It is advisable that those contemplating inclusion of such placements in early clinical learning consider making this aspect of the curriculum visible, in the form of conveying more precise placement learning objectives and facilitating better communication between students and preceptors, perhaps in the form of formalised learning agreements which include a plan to learn outside of the clinical environment. In summary, this study highlights that the RRCPs were valued by both students and teachers alike and are effective vehicles for learning about rural medicine and places. Our study shows, however, that students and their teachers may place different value on experiences gained inside and outside of the clinical environment, something that we would advise needs to be explicitly addressed in the curriculum within the overall context of rural medical education. We also would recommend that those contemplating the inclusion of rural placements during early clinical education pay close attention to the overall student experience and the quality of communication with the placement sites, particularly if the placements are intended to aid in recruitment of rural physicians.
Plasmon-Triggered Ultrafast Operation of Color Centers in Hexagonal Boron Nitride Layers

High-quality emission centers in two-dimensional materials are promising components for future photonic and optoelectronic applications. Carbon-enriched hexagonal boron nitride (hBN:C) layers host atom-like color-center (CC) defects with strong and robust photoemission up to room temperature. Placing the hBN:C layers on top of Ag triangle nanoparticles (NPs) accelerates the decay of the CC defects down to 46 ps from their reference bulk value of 350 ps. The ultrafast decay is achieved due to the efficient excitation of the plasmon modes of the Ag NPs by the near field of the CCs. Simulations of the CC/Ag NP interaction show that higher Purcell values are expected, although the measured decay of the CCs is limited by the instrument response. The influence of the NP thickness on the Purcell factor of the CCs is analyzed. The ultrafast operation of the CCs in hBN:C layers paves the way for their use in demanding applications, such as single-photon emitters and quantum devices.

Optical microscope images of the different stages of the dry transfer method.

Emission spectra of the hBN:C film as a function of excitation wavelength

Figure S4. (a) Emission spectrum of an hBN:C film on the SiO2/Si substrate (the same sample as that studied in Fig. 3(a)) with the excitation wavelength varying from 700 to 745 nm, bottom to top, in 5 nm steps. The excitation wavelengths are highlighted by vertical arrows in the gray region, where signal transmission to the spectrometer is rejected by a long-pass filter with a cut-on wavelength of 750 nm. It is observed that the two Raman peaks shift in parallel with the excitation wavelength, as expected. In contrast, the color-center luminescence peak (denoted by CC in the figure) stays unshifted at around 804 nm, though its intensity gradually changes with the excitation wavelength. (b) Excitation spectrum of the color-center peak at 804 nm. The maximum luminescence intensity is observed when the excitation wavelength is around 724 nm, due to phonon-assisted resonant excitation (the TO phonon energy of hBN is 169.5 meV).

Temperature dependence of luminescence decay signals in hBN:C on SiO2

Figure S5. Luminescence decay curves in the hBN:C exfoliated film on SiO2 for different temperatures from 68 to 300 K, where we excite the sample at a wavelength of 724 nm using a ps pulsed laser (5-mW average power and 76-MHz repetition rate) and detect the color-center signal at a wavelength of 804 nm. The measured curves reveal a constant emission lifetime, free from nonradiative relaxation, up to room temperature.

Figure S6. (a) Position-dependent emission spectra of the hBN:C sample interacting with the Ag NPs on the SiO2/Si substrate at 68 K. The sample is the same as the one studied in Fig. 4(a) of the main text. Comparison of emission spectra measured inside the hBN:C sample region (top) and outside the hBN:C sample region (bottom). The top curve reveals the presence of strong peaks at around 754 and 780 nm, which we ascribe to the Raman signals of the Si substrate, and a small but significant peak at 804 nm, which we ascribe to the color-center emission from the hBN:C ultrathin film. In the bottom curve, where we observe the response from the substrate and Ag NPs alone, the strong Raman signals remain, but the color-center signal disappears, as expected.
(b) Series of emission spectra, where we move the monitoring position in one-micrometer steps across the sample edge of the hBN:C film. It shows the reproducible presence of the color-center signals inside the hBN:C region and the absence of the signals outside the hBN:C region.

Numerical analysis of the decay data

The defect density of an hBN:C monolayer has been measured experimentally to be 2.4 × 10^-4 nm^-2, by measuring current distributions through an STM tip over varying applied voltage. Considering a surface of (90 nm)^2, the hBN:C monolayer includes ~2 defects; then, in the volume of the 26 nm film, a single Ag NP interacts with ~150 defects. Most of the counts in the relaxation measurements are provided by the atom-like CC defects of the hBN:C that are further away from the Ag triangle NPs. Also, CCs that are close to the edges of the Ag triangle NPs have higher interaction strength, which also depends on each defect's dipole orientation. The Purcell factor of the CCs extracted from the lifetime measurements, Γ ~ 8, is therefore only a moderate estimation and does not reveal the full strength of the CC/Ag NP interaction. The single-exponential relaxation fitting is connected with a single lifetime value for all the defects, although, as we have discussed, there is a distribution of CC lifetime values due to their positions with respect to the Ag NP. To proceed, we numerically analyze the relaxation process of the atom-like defects by resolving it into a lifetime distribution using the fitting expression

I(t) = A exp[-(t/τ)^β],  (1)

where τ is the fitted lifetime of the atom-like defect (in ns units), β is the stretching exponent, and A is the integration constant; the corresponding distribution ρ(Γ, β) over the different contributing lifetimes τ_Γ = τ/Γ, where Γ is the Purcell factor, depends on the exponent β. In Figure S7a we observe that the closer we get to the single-exponential expression, β = 1, the closer the fitting is to the experimental data. The different values of β are connected with different values of the relaxation τ, which vary over the range from 44 ps, for β = 0.8, to 48.7 ps, for β = 0.99, values that are relatively close to the experimental value. For the different (τ, β) parameter pairs used to fit the relaxation of the atom-like defects, different distributions ρ(Γ, β) are extracted, and they are presented in Figure S7b [10.1063/1.4984608]. We observe that the more stretched the fitting parameter (β < 1), the broader is ρ(Γ, β). The value β = 1 corresponds to the lifetime τ_Γ = τ/Γ; interestingly, we observe that a smaller value, β < 1, shifts the peak of the distribution ρ(Γ, β) away from the τ/Γ value exhibited for β = 1. At the same time, the distribution ρ(Γ, β) has a tail for Γ > 1 which is appreciable for all values of β, meaning that the distribution has contributions from higher Purcell factor values of the defects. We focus on β = 0.95 and τ = 47.9 ps and introduce the effect of the Purcell factor through an upper limit Γ_c on the integration of the distribution function ρ(Γ, β), to simulate the partial relaxation of the atom-like defect:

I(t) = ∫_0^{Γ_c} ρ(Γ, β) exp(-tΓ/τ) dΓ.  (2)

In Figure S7c we consider different values for the Purcell factor cutoff Γ_c used in the integration of Eq. (2) to describe the relaxation of the atom-like defect; we start with the value Γ_c = 8, which is the value extracted from the experimental data. We observe that the numerical fitting is not close to the normalized experimental data, which at t = 0 it only approaches to 67.8%.
Thus, we need to include higher Purcell factor values to approach the full relaxation of the atom-like defect in hBN:C. For a value Γ_c = 10Γ, the relaxation described through Eq. (2) approaches 99.3% of the experimental value. Thus, the atom-like defects of the hBN:C layer that are closer to the Ag triangle NPs present an enhanced Purcell factor of at least 80. Up to now we have presented the theory of the stretched distribution ρ(Γ, β); in the remainder of this section we present the exact fitting to the experimental data. In Figure S8a we present a contour plot of the Purcell factor Γ of a single CC scanning the x-y plane, when placed 4 nm above a Ag triangle NP of 16 nm thickness. The CC transition dipole moment is along x and the transition wavelength is 804 nm. The red lines enclose the area where Γ > 100 and the dark red areas where Γ > 200. Thus, the simulated data show that the Purcell factor of the CC surpasses the experimentally extracted value, Γ = 8, by two orders of magnitude when the CC is close to the Ag triangle NP. In the inset of Figure S8b we present the Purcell factor distribution ρ(Γ) extracted by numerical fitting of the experimentally measured atom-like CC decay data (points in Figure S7a). The Purcell factor distribution can give us the percentage of CCs that achieve Purcell factor values above the experimental value Γ_exp, given by P(Γ > Γ_exp). Our analysis reveals that 29% of the CCs achieve values above the experimentally measured value, P(Γ > Γ_exp) = 29%. Here, we would like to stress again that the laser response limits the relaxation measurements, where higher Purcell values are expected. We again use a multi-component fitting of the relaxation of CCs within the bulk hBN:C material, for the time span of [0, 1000 ps], to analyze the lifetime distribution and Purcell factor percentage in Figure S8. In Figure S8a we present the lifetime distribution of the CC defects for bulk hBN:C, which has a peak value of ~500 ps and is slower than the single-exponential fit. In Figure S8b we observe that 37% of the quantum emitters have a Purcell factor value above 1, that is, an accelerated lifetime. Only 9% of the CCs present a Purcell factor enhancement above 2, translating to a lifetime shorter than 175 ps. Thus, we conclude that the nonradiative intrinsic relaxation is limited and that most of the CC defects in the hBN:C nanosheet are extremely high-quality quantum photon sources following a purely radiative relaxation.

Figure S9. (a) The simulated nanostructure to investigate the atom-like color-center (CC) defect interacting with a Ag triangle nanoparticle (NP). (b,c) Contour plots of the Purcell factor and radiative emission of the CC defect, varying the emission wavelength and the position of the defect along the blue arrow in (a); the Ag NP thickness is 16 nm.

Figure S10. (a-d) Contour plots of the Purcell factor on the x-y plane of the atom-like color-center (CC) defect of the hBN:C layer with a thickness of 26 nm, considering different separation planes from the Ag NP with a thickness of 16 nm.

Figure S11. Average Purcell factor value of the CC distribution using the data from Figure S10 for the different CC/Ag NP separation distances. The hBN thickness is 26 nm, the emission wavelength of the CC is 804 nm, and the Ag NP thickness is 16 nm.

Figure S12. Contour plots in the plane of the (a) Purcell factor, (b) radiative emission, and (c) quantum efficiency of a CC, 4 nm above the Ag nanoparticle embedded in the hBN:C nanosheet of 26 nm thickness.
In (a,b) the areas where the enhancement factor is above 100 and 25 are encircled in green for the total and radiative emission of the color centers, respectively. In (c), the areas where the enhancement of the relaxation rates is above 40 and the quantum efficiency, defined as η = Γ_rad/Γ, is above 40%.

Figure S13. Contour plots in the plane of the (a) Purcell factor, (b) radiative emission, and (c) quantum efficiency of a CC, 8 nm above the Ag nanoparticle embedded in the hBN:C nanosheet of 26 nm thickness. In (a,b) the areas where the enhancement factor is above 50 and 20 are encircled in green for the total and radiative emission of the color centers, respectively. In (c), the areas where the enhancement of the relaxation rates is above 20 and the quantum efficiency, defined as η = Γ_rad/Γ, is above 40%.
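To make the lifetime-distribution analysis above concrete, the following Python sketch is a minimal illustration of the same idea, not the authors' code: it decomposes a stretched-exponential decay with the quoted parameters (β = 0.95, τ = 47.9 ps) into a non-negative mixture of single-exponential components on a Purcell-factor grid, then truncates the integration of Eq. (2) at a cutoff Γ_c. The grid ranges and the use of non-negative least squares are our assumptions, and the leading comment assumes the standard hBN interlayer spacing of roughly 0.33 nm.

    import numpy as np
    from scipy.optimize import nnls

    # Defect-count check from the text: 2.4e-4 defects/nm^2 per monolayer over a
    # (90 nm)^2 footprint gives ~2 defects per monolayer; a 26 nm film is roughly
    # 26/0.33 ~ 79 monolayers, i.e. on the order of 150 defects per Ag NP.

    beta, tau = 0.95, 47.9                  # stretched-exponential parameters (tau in ps)
    t = np.linspace(0.0, 1000.0, 2000)      # time grid, ps
    decay = np.exp(-(t / tau) ** beta)      # Eq. (1) with A = 1

    # A component with Purcell factor Gamma decays as exp(-t * Gamma / tau).
    gammas = np.logspace(-1, 2, 200)        # Gamma grid, 0.1 to 100
    kernel = np.exp(-np.outer(t, gammas) / tau)

    # Non-negative least squares yields a discrete distribution rho(Gamma, beta).
    rho, _ = nnls(kernel, decay)
    rho /= rho.sum()                        # normalize to a probability distribution

    # Fraction of emitters above the single-exponential Purcell estimate (Gamma = 8).
    print("P(Gamma > 8) =", rho[gammas > 8.0].sum())

    # Truncating the integration at Gamma_c (cf. Eq. (2)) cannot reproduce the
    # full decay amplitude: the recovered I(0) falls short of 1 for small Gamma_c.
    for gamma_c in (8.0, 80.0):
        mask = gammas <= gamma_c
        partial = kernel[:, mask] @ rho[mask]
        print(f"Gamma_c = {gamma_c:5.1f}: recovered I(0) = {partial[0]:.3f}")

Whether the printed fraction lands near the 29% quoted above depends on the grid and on regularization, so the sketch should be read qualitatively.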
Combined Visualization of Nigrosome-1 and Neuromelanin in the Substantia Nigra Using 3T MRI for the Differential Diagnosis of Essential Tremor and de novo Parkinson's Disease

Differentiating early-stage Parkinson's disease (PD) from essential tremor (ET) remains challenging. In the current study, we aimed to evaluate whether visual analyses of neuromelanin-sensitive magnetic resonance imaging (NM-MRI) combined with nigrosome-1 (N1) imaging using quantitative susceptibility mapping (QSM) in the substantia nigra (SN) are of diagnostic value in the differentiation of de novo PD from untreated ET. Sixty-eight patients with de novo PD, 25 patients with untreated ET, and 34 control participants underwent NM-MRI and QSM. NM and N1 signals in the SN on MR images were visually evaluated using a 3-point ordinal scale. Receiver operating characteristic (ROC) analyses were performed to determine the diagnostic values of the visual ratings of NM and N1. The diagnostic values of the predicted probabilities were calculated via logistic regression analysis using the combination of NM and N1 visual ratings, as well as their quadratic items. The proportions of invisible NM and invisible N1 were significantly higher in the PD group than those in the ET and control groups (p < 0.001). The sensitivity/specificity for differentiating PD from ET was 0.882/0.800 for NM and 0.794/0.920 for N1, respectively. Combining the two biomarkers, the area under the curve (AUC) of the predicted probabilities was 0.935, and the sensitivity/specificity was 0.853/0.920 when the cutoff value was set to 0.704. Our findings demonstrate that visual analyses combining NM and N1 imaging in the SN may aid in the differential diagnosis of PD and ET. Furthermore, our results suggest that patients with PD exhibit larger iron deposits in the SN than those with ET.
INTRODUCTION

Parkinson's disease (PD) and essential tremor (ET) are common movement disorders, especially among older adults (1). PD is characterized by motor symptoms including bradykinesia, resting tremor, and rigidity, while ET often manifests as isolated tremor in the bilateral upper limbs. Although they are distinct entities, these two movement disorders may share some clinical characteristics, such as non-motor features and resting/postural tremor, as well as genetic and pathological mechanisms (2,3). Hence, the differentiation of PD and ET remains challenging, especially early in the disease course. Neuroimaging may aid in the differentiation of the two movement disorders. Dopamine transporter (DAT) imaging of the striatum is recommended in the differential diagnosis of PD (4). However, DAT imaging is expensive, subjects the patient to low doses of radiation, and is only available in specialized centers, limiting its clinical application. In contrast, MRI is widely available and does not subject the patient to radiation. Recent studies have provided evidence that MRI biomarkers may aid in the diagnosis of movement disorders (5,6) and help to reveal the pathological changes correlated with motor and non-motor symptoms (7). The pathological hallmarks of PD include progressive neurodegeneration of dopaminergic neurons and iron overload in the substantia nigra (SN) (8). In contrast, whether ET is a degenerative disease is still debated, although some studies suggest that it is associated with cerebellar degeneration (9). Dopaminergic neurons in the SN contain a black pigment called neuromelanin (NM). Based on the paramagnetic properties of NM, high-resolution T1-weighted fast spin echo (FSE) imaging at high field strength (e.g., 3T) can visualize NM-generated contrast. This technique is referred to as neuromelanin-sensitive MRI (NM-MRI) (10). Previous studies have indicated that the signal intensity of NM is decreased in patients with PD (10,11), even in the early stage of the disease (12), while it remains unchanged in patients with ET (13,14), when compared with that in healthy controls. In addition, nigrosome-1 (N1), the largest of the five described nigrosomes, is the most affected in patients with PD (~98% neuronal loss in N1) (15). N1 represents the pockets of high signal intensity in the dorsal part of the healthy SN, at intermediate and caudal levels on high-resolution T2*/SWI, and can be visualized as a "swallow-tail sign" (16,17). However, hyperintensity of N1 is absent in most patients with PD (17), possibly due to increases in iron deposition that occur in parallel to the loss of dopaminergic cells (16). Quantitative susceptibility mapping (QSM) may overcome several nonlocal restrictions of SWI and phase imaging, allowing for the quantification of iron content (18,19). Indeed, this method may be more sensitive for detecting iron-related changes in patients with PD (20). To our knowledge, however, no studies have investigated N1 appearance in patients with ET. Unlike voxel-based morphometry, diffusion tensor imaging, or blood oxygenation level-dependent imaging, which require complicated post-processing or quantitative measurements, NM on NM-MRI and N1 on QSM can be assessed visually, making these methods feasible for clinical application. To the best of our knowledge, no previous studies have investigated the combination of these two MR sequences for the differential diagnosis of PD and ET.
In the current study, we aimed to evaluate whether visual analyses of NM imaging using NM-MRI combined with N1 imaging using QSM in the SN are of diagnostic value in the differentiation of de novo PD from untreated ET.

Participants

Sixty-eight patients with de novo PD, 25 patients with untreated ET, and 34 healthy controls were voluntarily recruited between September 2016 and April 2018. Patients with PD were diagnosed in accordance with the MDS clinical diagnostic criteria for PD (4), while patients with ET were diagnosed in accordance with the criteria outlined in the Consensus Statement of the Movement Disorders Society on Tremor (21), by two movement specialists (J.L.R. and H.Z.). All patients were drug naïve. All control participants were recruited as volunteers from the community and had no history of neurological/psychiatric disorders. Exclusion criteria were as follows: history of other neurological/psychiatric disorders, severe infection, liver dysfunction, renal insufficiency, past/current substance abuse, tremor-related dysmetabolism including thyroid dysfunction and drug toxicity, and abnormal signals that affected further analyses on structural MRI. Unified Parkinson's Disease Rating Scale (UPDRS) motor scores were obtained for all patients with PD and ET. This study was approved by the Committee on Medical Ethics of Zhongshan Hospital, Fudan University. Written informed consent was obtained from all participants.

Imaging Protocol

All MR images were acquired using a 3T MR unit (Discovery MR750, GE Healthcare, Milwaukee, WI). A T1-weighted fast spin-echo sequence was obtained for NM-MRI images, as previously described (22), and the imaging parameters were as follows: repetition time/echo time (TR/TE), 600/13 ms; echo-train length, 2; section thickness, 2.5 mm, with no intersection gap; number of slices, 16; matrix size, 512 × 320; field-of-view (FOV), 220 mm; NEX, 5. A three-dimensional multi-echo GRE sequence was used to acquire T2*-weighted images, and the scanning parameters were as follows: TR, 51.5 ms; number of echoes, 16; first TE, 2.9 ms; TE spacing, 3 ms; bandwidth, 62.50 kHz; flip angle, 12°; FOV, 22 cm; matrix, 220 × 220; slice thickness, 2 mm; acceleration factor, 2; slices, 66. Afterwards, the QSM images were reconstructed from the T2*-weighted images as described in previous studies (20). In addition, conventional MRI sequences including T1-weighted images, T2-weighted fluid-attenuated inversion recovery (FLAIR) images, and diffusion-weighted images (DWI) were obtained to exclude other pathological imaging findings that may have interfered with further assessment. The axial sections were scanned parallel to the anterior commissure-posterior commissure line with whole-brain coverage for QSM and routine MRI scans, and with coverage from the posterior commissure to the pons for NM-MRI.

Visual Analysis of Imaging Data

The NM-MR images were transferred to a workstation (ADW4.6, GE Healthcare) and displayed using fixed settings (window width: 400, window level: 800-900) for analysis. The method for visual analysis was based on that described in a previous study (23), with modifications.
We classified NM-MR images according to a 3-point ordinal scale, as follows: 0, normal view of the SN with high signal intensity bilaterally and no volume loss, indicating a healthy SN; 1, possible abnormality with reduced signal or volume of the SN unilaterally or bilaterally, indicating possible SN pathology; 2, definite abnormality with reduced signal or volume of the SN, indicating SN pathology (Figure 1). QSM data were transferred to a local computer, and images were viewed using MRIcro software (Version 1.40, build 1). N1 was visualized as an oval-shaped area of low signal intensity surrounded by hyperintensity on QSM in the dorsal part of the healthy SN, at intermediate and caudal levels. Visual analysis of N1 was performed using a 3-point ordinal scale, as follows: 0, normal, N1 present bilaterally; 1, non-diagnostic, indecisive presence of N1 unilaterally or bilaterally; 2, pathological, N1 absent bilaterally (Figure 1). Two radiologists (rater 1: W.J. and rater 2: L.D.L.), who were blinded to participant information, independently performed visual analyses twice, with an interval of at least 7 days. The intra- and inter-rater agreement of visual scores was determined using weighted kappa values. For the conflicting cases between the two raters, visual analyses were conducted by a third radiologist (with 30 years of experience) to acquire a final rating for statistical analyses.

Statistical Analyses

The results were expressed as the mean ± SD. One-way analyses of variance (ANOVA), the Mann-Whitney U-test, and the chi-square test were used to compare demographic data. Logistic regression analyses were employed to estimate the combined predicted probabilities of the visual ratings for NM and N1. Diagnostic performances in visual assessments of NM and N1, separately, and in the predicted probabilities were analyzed using receiver operating characteristic (ROC) curves. The chi-square test was used to compare the area under the curve (AUC) values of the different diagnostic models. The level of significance was set to 0.05, and all tests were two-sided. Statistical analyses were performed using SPSS version 22.0 (SPSS Inc., Chicago, IL) and Stata/SE version 14.0 (StataCorp LP).

Demographic and Clinical Data

The demographic and clinical characteristics of all participants are summarized in Table 1. No significant differences in gender, age, Mini-Mental State Examination (MMSE) scores, or Montreal Cognitive Assessment (MoCA) scores were observed among the three groups. Disease duration was significantly longer in patients with ET than in patients with PD (p < 0.001), although there was no significant difference in UPDRS tremor scores between the ET and PD groups. All patients with PD had mild disease severity (Hoehn and Yahr stages 1 to 2).

Visual Analyses of the SN on NM-MRI and N1 on QSM

The proportion of conflicting cases was 22.8% (29/127) for NM-MRI analyses and 11.8% (15/127) for N1-QSM analyses for rater 1. For rater 2, the proportion of conflicting cases was 28.3% (36/127) for NM-MRI analyses and 22.0% (28/127) for N1-QSM analyses. The proportion of conflicting cases between the two observers was 25.2% (32/127) for NM-MRI analyses and 24.4% (31/127) for N1-QSM analyses. Thus, the weighted kappa coefficient was calculated to evaluate intra- and inter-rater agreement for visual analyses. For rater 1, intra-rater agreement values for NM and N1 were 0.837 and 0.903, respectively. For rater 2, intra-rater agreement values for NM and N1 were 0.828 and 0.815, respectively.
Furthermore, the inter-rater agreement was 0.827 for NM and 0.777 for N1. Thus, visual analyses were highly consistent. For NM visual analysis, scores were 0 in 8 patients with PD (11.8%), 1 in 27 patients with PD (39.7%), and 2 in 33 patients with PD (48.5%). In the ET group, NM scores were 0 in 20 patients (80.0%) and 1 in 5 patients (20.0%). In the control group, NM scores were 0 in 30 participants (88.2%) and 1 in 4 participants (11.8%) (Figure 2A). NM scores of 2 were not observed in the ET and control groups, and the proportion of NM ratings did not significantly differ between these two groups (p = 0.385). However, the proportion of NM ratings in the PD group differed significantly from that in the ET group (p < 0.001). N1 scores were 0 in 3 patients with PD (4.4%), 1 in 11 patients with PD (16.2%), and 2 in 54 patients with PD (79.4%). In the ET group, N1 scores were 0 in 11 patients (44.0%), 1 in 12 patients (48.0%), and 2 in 2 patients (8.0%). In the control group, N1 scores were 0 in 10 participants (29.4%), 1 in 19 participants (55.9%), and 2 in 5 participants (14.7%) (Figure 2B). The proportion of N1 ratings in the ET group did not significantly differ from that in controls (p = 0.454). However, the proportion of patients with N1 scores of 2 was significantly lower in the ET group than in the PD group (p < 0.001). Representative images obtained from patients with PD and ET and controls are presented in Figure 1.

Diagnostic Performances of Visual Analyses of NM and N1, and of the Predicted Probabilities Combining the Two Biomarkers

Based on our findings, we then employed ROC analysis to assess the diagnostic values of several models for differentiating PD from ET. The AUC of NM-MRI (model 1) for differentiating PD from ET was 0.890 (95% CI 0.822, 0.958), and the sensitivity and specificity were 0.882 and 0.800, respectively, when the cutoff value for NM scores was set to ≥1. The AUC of N1 on QSM (model 2) for differentiating PD from ET was 0.882 (95% CI 0.802, 0.962), and the sensitivity and specificity were 0.794 and 0.920, respectively, when the cutoff value for N1 scores was set to 2. No significant differences in AUC values were observed between these two models (p > 0.05, Table 2). We calculated the predicted probabilities using logistic regression to further explore the diagnostic performance of these two biomarkers. Model 3 was established using the predicted probabilities obtained by simply combining these two biomarkers (logit = −2.176 + 1.923 × NM + 1.429 × N1). The AUC of model 3 was 0.933 (95% CI 0.883, 0.983). The sensitivity and specificity of model 3 were 0.809 and 0.960, respectively, when the cutoff value was set to 0.848. The AUC of model 3 was significantly higher than that of model 1 (p = 0.009), whereas it did not significantly differ from that of model 2 (p = 0.051, Table 2). Furthermore, the specificity of model 3 was higher than those of models 1 and 2, while the sensitivity of model 3 was lower than that of model 1. Model 4 was established using the predicted probabilities obtained by combining NM, N1, and N1² (logit = −1.499 + 2.014 × NM − 1.194 × N1 + 1.267 × N1²). The AUC was 0.935 (95% CI 0.884, 0.986). The sensitivity and specificity of model 4 were 0.853 and 0.920, respectively, when the cutoff value was set to 0.704. The AUC of model 4 was significantly higher than those of model 1 (p = 0.041) and model 2 (p = 0.014). The sensitivity and specificity of model 4 were the best among the four diagnostic models (Table 2).
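As a concrete illustration of how the reported model 4 would be applied to an individual case, here is a short Python sketch; the function names are ours, and the coefficients and cutoff are simply those quoted above, so this is an illustration rather than a validated diagnostic tool.

    import math

    def model4_probability(nm: int, n1: int) -> float:
        """Predicted probability of PD from model 4, using the reported
        coefficients: logit = -1.499 + 2.014*NM - 1.194*N1 + 1.267*N1^2.
        nm and n1 are the 3-point visual ratings (0, 1, or 2)."""
        logit = -1.499 + 2.014 * nm - 1.194 * n1 + 1.267 * n1 ** 2
        return 1.0 / (1.0 + math.exp(-logit))   # inverse logit

    def classify(nm: int, n1: int, cutoff: float = 0.704) -> str:
        """Label a case as PD or ET using the reported cutoff of 0.704."""
        return "PD" if model4_probability(nm, n1) >= cutoff else "ET"

    # Both biomarkers pathological (scores of 2) -> probability ~0.99 -> PD;
    # both normal (scores of 0) -> probability ~0.18 -> ET.
    print(model4_probability(2, 2), classify(2, 2))
    print(model4_probability(0, 0), classify(0, 0))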
We further established model 5 using the predicted probabilities obtained by combining NM, N1, NM², and N1². However, the results of model 5 were not better than those of model 4 (data not shown). The ROCs of the four diagnostic models are shown in Figure 3.

Figure 2 caption (fragment): NM scores of 2 were not observed in the ET and control groups, and the frequency of nigrosome-1 absence (score: 2) was significantly lower in the ET group than in the PD group (p < 0.001).

DISCUSSION

In the current study, we aimed to evaluate whether visual analyses of NM and N1 imaging in the SN are of diagnostic value in the differentiation of de novo PD from untreated ET. Patients with PD exhibited reduced signal intensity on NM imaging and an absence of N1 in the SN, relative to patients with ET and healthy controls. Moreover, when visual analyses of NM and N1 imaging were combined, the model exhibited high diagnostic accuracy for differentiating de novo PD from untreated ET. To date, although ET is a common neurological disease, the pathogenic mechanisms remain poorly understood (24), and there are no adequate diagnostic biomarkers. Our findings suggest that non-invasive neuroimaging studies may aid in the differential diagnosis of tremor disorders, particularly PD and ET. NM plays an important role in the pathogenesis of PD (25). Previous MRI studies have indicated that NM signal intensity in the SN is decreased even in the early stages of PD (22,26). NM has a high binding affinity for iron. However, Reimao et al. reported that there is no significant correlation between NM and iron content in the SN (26). A recent review demonstrated that NM is not only directly involved in reactive oxygen species (ROS) reduction but also in Ca2+ homeostasis, with NM loss leading to the death of dopaminergic neurons (27). Consistent with our results, a previous quantitative study (14) demonstrated that NM levels in the SN are not significantly decreased in patients with ET. Indeed, only 20% of patients with ET obtained scores of 1; all others obtained scores of 0. In contrast, 39.7 and 48.5% of patients with PD obtained scores of 1 and 2, respectively. These results may provide evidence against a possible pathogenic link between PD and ET. Consistent with the findings of previous reports (17), our results indicated that N1 was absent on QSM images in 79.4% of patients with PD. While NM is known to participate in intracellular iron metabolism, loss of nigrosome signals may be related to iron deposition in the brain (28). At present, QSM is the optimal imaging method for quantifying iron content in the brain in vivo (18,20). Thus, our findings indicated that iron overload occurs in N1 in patients with PD. Previous studies have indicated that changes in N1 can aid in the differential diagnosis of PD (29), as they can be observed in both the early and late stages of the disease (17). Our data also demonstrated the presence of N1 in most patients with ET, in contrast to our findings for patients with PD. To date, several studies have indicated that ET is likely to be a neurodegenerative disease, especially affecting the cerebellar system, as shown by clinical, neuroimaging, and postmortem studies (30,31). In addition, patients with ET exhibit a 4-fold higher risk of developing PD than those without ET (32). One MRI T2*-relaxometry study revealed that ET is associated with iron deposition in the SN (33).
Another functional MRI study provided further evidence of neurodegeneration in patients with ET, reporting over-activation in the parietal cortex and dorsolateral prefrontal cortex (34). Mild abnormalities in striatal DATs have also been observed in patients with ET, along with a typical PD-like pattern of uptake loss (35). However, there is no loss of DAT binding over time in patients with ET, providing evidence against the neurodegeneration hypothesis (36). To our knowledge, our study is the first to report that the rate of N1 absence is significantly lower in patients with ET than in patients with PD, suggesting that ET is associated with lower iron deposition than PD. By combining our NM and N1 findings for patients with PD and ET, we developed diagnostic models for the differentiation of the two disorders. Molecular imaging techniques such as DAT imaging may further improve diagnostic accuracy. Novellino et al. reported that DAT-SPECT and MIBG scintigraphy findings are abnormal in patients with probable PD, while they are normal in patients with ET (37). Another study also reported that DAT imaging can be used to differentiate early-stage PD and ET with high sensitivity (84.4%) and specificity (96.2%) (38). However, this technology is not feasible for routine clinical application. Several transcranial sonography (TCS) studies have also aimed to distinguish patients with PD from those with ET (38)(39)(40)(41), achieving moderate sensitivity and high specificity. Despite these results, recording failures due to an insufficient acoustic bone window may limit the use of TCS (42). A previous NM-MRI study reported sensitivity and specificity values of 67.7 and 93.3%, respectively, when high-signal areas in the SN were used to distinguish ET from PD (13). Because visual analysis is fast and more convenient for clinical applications, we performed visual analyses of NM and N1 to aid in the differential diagnosis of de novo PD and untreated ET. Sensitivity values were >79% for both biomarkers, while specificity values were 80% and >90% for NM and N1, respectively. Despite the relatively good diagnostic ability of both visual assessments for NM and N1, combining the two biomarkers may provide higher diagnostic accuracy in clinical practice for individual diagnosis. Indeed, the combination of both biomarkers achieved a sensitivity of 85.3% and a specificity of 92%, values higher than those for each biomarker alone. The present study has some limitations of note. First, the number of participants was relatively small, especially the number of patients with ET, and all participants were recruited from clinical settings. Therefore, replication in larger population-based samples is warranted to confirm the effectiveness of the diagnostic models used in our study. Second, NM-MRI acquisition times were relatively long in our study, which many patients may be unable to tolerate. However, a recent study introduced a three-dimensional NM-MRI sequence that takes only slightly more than 4 min, which may facilitate clinical application of this method (43). Third, the analysis in our study was qualitative rather than quantitative, which may be more suitable for clinical application. However, QSM can be used to quantify iron content (20), while NM content can be quantified on NM-MRI based on volume/width (22,44). Further studies should therefore focus on the quantitative differences between PD and ET.
Notably, the patients recruited in this study were all drug naïve, which eliminated undefined confounding factors associated with medication use (45). In conclusion, our findings indicated that visual analyses combining NM and N1 may represent a diagnostic biomarker for the differentiation of tremor disorders. Furthermore, our results suggest that iron deposition is greater in patients with PD than in those with ET.
Age Differences in COVID-19 Risk Perceptions and Mental Health: Evidence From a National U.S. Survey Conducted in March 2020

Abstract

Objectives: Theories of aging posit that older adult age is associated with less negative emotions, but few studies have examined age differences at times of novel challenges. As COVID-19 spread in the United States, this study therefore aimed to examine age differences in risk perceptions, anxiety, and depression. Method: In March 2020, a nationally representative address-based sample of 6,666 U.S. adults assessed their perceived risk of getting COVID-19, dying if getting it, getting quarantined, losing their job (if currently working), and running out of money. They completed a mental health assessment for anxiety and depression. Demographic variables and precrisis depression diagnosis had previously been reported. Results: In regression analyses controlling for demographic variables and survey date, older adult age was associated with perceiving larger risks of dying if getting COVID-19, but with perceiving less risk of getting COVID-19, getting quarantined, or running out of money, as well as less depression and anxiety. Findings held after additionally controlling for precrisis reports of depression diagnosis. Discussion: With the exception of perceived infection-fatality risk, U.S. adults who were relatively older appeared to have a more optimistic outlook and better mental health during the early stages of the pandemic. Interventions may be needed to help people of all ages maintain realistic perceptions of the risks, while also managing depression and anxiety during the COVID-19 crisis. Implications for risk communication and mental health interventions are discussed.

When COVID-19 entered the United States, reports from China were already indicating that case-fatality rates increased with older adult age (Novel Coronavirus Pneumonia Emergency Response Epidemiology Team, 2020). Generally, older adult age has been associated with reporting less negative emotions (Carstensen, Pasupathi, Mayr, & Nesselroade, 2000), perceiving stressful events as less unpleasant (Neubauer, Smyth, & Sliwinski, 2019), and scoring lower on anxiety and depression (Löwe et al., 2010). Socioemotional Selectivity Theory posits that adults who are relatively older are more motivated to maximize their well-being in the limited time they perceive to have left (Carstensen, 2006). However, the Strength and Vulnerability model suggests that older adults may find it harder to cope with serious or prolonged stressors (Charles, 2010). As COVID-19 spread through the United States in March 2020, this study examined whether older adult age was associated with lower risk perceptions for COVID-19 and with less depression and anxiety. The former reflect cognitive/deliberative perceptions of threat, and the latter emotional responses (Kobbeltved, Brun, Johnsen, & Eid, 2005).

Method

Sample

Between March 10-31, 2020, 6,666 of 8,489 invited members of the University of Southern California's (USC) Understanding America Study (UAS) aged 18-100 (M = 48.56, SD = 16.62) answered the questions analyzed here (response rate = 79%). To obtain a nationally representative sample, UAS members were recruited from randomly selected U.S. addresses (Understanding America Study Recruitment Protocol, 2019), sampling probabilities were adjusted for underrepresented populations, and internet-connected tablets were provided to interested individuals if needed (Alattar, Messel, & Rogofsky, 2018).
Address-recruited online panels tend to be better than opt-in online panels at achieving national representativeness (Tourangeau, Conrad, & Couper, 2013) and delivering high-quality data (Kennedy et al., 2020). Following the survey literature (Valliant, Dever, & Kreuter, 2013), poststratification weights were used to further align the present sample to the U.S. adult population regarding age, gender, race/ethnicity, education, and location (see https://uasdata.usc.edu/page/Weights). A sample size of 1,481 would have been sufficient to discover r ≥ 0.10 with 0.90 statistical power and α = 0.01. Demographic characteristics are discussed in the Results section. There were no significant differences between invitees who completed the questions analyzed here and those who did not, regarding age, gender, education, and race/ethnicity. However, compared to invitees who did not complete the survey, those who did were slightly less likely to report below-median income (50% vs 45%), χ2(1) = 12.23, p < .001, slightly more likely to be married (51% vs 55%), χ2(1) = 8.26, p < .01, and slightly less likely to live in worst-hit states (26% vs 22%), χ2(1) = 9.77, p < .01.

Procedure

The online survey was approved by USC's Institutional Review Board, as part of the UAS. Survey and data are publicly available (https://uasdata.usc.edu/index.php; #230).

Risk perceptions

Participants were asked "On a scale from 0 to 100%, what is the chance that you will get the coronavirus in the next three months?" and "On a scale of 0 to 100 percent, what is the chance that you will be quarantined within the next three months?" with the explanation that "In a quarantine, someone who has been exposed to coronavirus but is not presently sick may have to stay away from other people for 14 days." Perceived infection-fatality risk was assessed by asking "If you do get infected with the coronavirus, what is the chance you will die from it?" Participants who indicated being employed were asked "What is the percent chance that you will lose your job because of the coronavirus in the next three months." All answered "What is the percent chance that you will run out of money because of the coronavirus in the next three months?" Responses were provided on a validated visual linear scale ranging from 0% to 100% (Bruine de Bruin & Carman, 2018).

Control variables

Experiences with COVID-19 were assessed by asking "has a doctor or another healthcare professional diagnosed you with the coronavirus (COVID-19)?" and "do you think you have been infected with the coronavirus (COVID-19)?" with response options yes, no, and unsure. Demographic variables were on record at the UAS, including gender (male = 1; female = 0), marital status (married = 1; not married = 0), non-Hispanic white race/ethnicity (yes = 1; no = 0), college education (yes = 1; no = 0), below-median income (yes = 1; no = 0), and residing in states that were worst-hit by COVID-19 at the time of the survey, including California, Massachusetts, New Jersey, New York, and Washington (yes = 1; no = 0). The date on which participants completed the survey was treated as a dichotomized variable (March 10-12, 2020 = 0; March 13-31, 2020 = 1) because half completed the survey within the first 3 days and very few completed it on later days (Bruine de Bruin & Bennett, in press). Participants were asked whether they were currently employed (yes = 1; no = 0).
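The sample-size statement in the Sample section above (N = 1,481 for detecting r ≥ 0.10 with power 0.90 and α = 0.01) can be reproduced with the standard Fisher z approximation; the sketch below is our own check, not the authors' calculation.

    import math
    from scipy.stats import norm

    def n_for_correlation(r: float, alpha: float = 0.01, power: float = 0.90) -> int:
        """Approximate N needed to detect correlation r (two-sided test),
        via Fisher's z: n = ((z_alpha/2 + z_beta) / C)^2 + 3, where
        C = 0.5 * ln((1 + r) / (1 - r))."""
        c = 0.5 * math.log((1 + r) / (1 - r))
        z_alpha = norm.ppf(1 - alpha / 2)    # 2.576 for alpha = .01
        z_beta = norm.ppf(power)             # 1.282 for power = .90
        return round(((z_alpha + z_beta) / c) ** 2 + 3)

    print(n_for_correlation(0.10))  # -> 1481, matching the figure quoted above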
To incorporate precrisis depression diagnosis, the present survey data were merged with data from a survey conducted between December 2019 and January 2020, on which 5,638 (85%) of the 6,666 participants reported whether they had ever been diagnosed with depression (yes = 1; no = 0). Separate poststratification weights were used in analyses that included this variable, to align that sample to the U.S. adult population regarding age, gender, race/ethnicity, education, and location (see https://uasdata.usc.edu/page/Weights).

Control Variables

Likely because the survey was conducted as the COVID-19 epidemic emerged in the United States, none of the participants reported a diagnosis with COVID-19, but 0.3% were unsure. None thought that they had been infected, with 6.9% being unsure. Older adult age, which was treated as a continuous variable in all analyses, was not associated with being unsure about a diagnosis (r = −0.02, p = .08), but it was associated with being less unsure about infection (r = −0.10, p < .001). Because the low variability of the former likely undermined the ability to discover relationships, only the latter was included as a control variable. Overall, 48% of participants were male, 55% were married, 64% were non-Hispanic white, 34% had a college degree, and 22% lived in states that were worst-hit at the time (California, Massachusetts, New Jersey, New York, and Washington). Participants' median income was $50,000-$59,999. By comparison, national statistics suggest that the U.S. population is 49% male, 50% married, 63% non-Hispanic white, 32% college-educated (if aged 25+), and 25% living in worst-hit states, with median income being $60,293 (Parker & Stepler, 2017; United States Census Bureau, 2018). In the present survey, 62% reported having a job. In a precrisis survey, 18% of N = 5,638 participants reported a depression diagnosis. As noted, half of participants completed the survey between March 10-12, 2020, and half between March 13-31, 2020.

Risk perceptions

Pearson correlations indicated that older adult age was associated with perceiving greater infection-fatality risks, but smaller risks for getting COVID-19, getting quarantined, experiencing job loss (among N = 4,119 reporting current employment), and running out of money (Table 1; see also Figure 1). Except for job loss, these relationships with age held in linear regressions that controlled for being unsure about having been infected with COVID-19, gender, marital status, employment status, race/ethnicity, education, residing in the states that were worst-hit at the time, income, and survey date (Table 1).

Mental health

Pearson correlations indicated that relatively older adults scored lower on depression and anxiety, or their combination, with a similar pattern for exhibiting warning signs (Table 1; see also Figure 2). These results held in regressions that included the same control variables as above (Table 1).

Discussion

In a national life-span sample, this study examined age differences in risk perceptions and mental health during the early stages of the COVID-19 outbreak in the United States. Older adult age was associated with perceiving greater infection-fatality risk. However, older adult age was also associated with seeing lower risks of getting COVID-19 and of experiencing negative economic consequences. Furthermore, older adult age was associated with less depression and less anxiety, for better overall mental health.
Results for risk perceptions and mental health outcomes held after accounting for demographic control variables and whether or not participants had precrisis depression diagnoses, as reported between December 2019 and January 2020. The present findings agree with studies suggesting that adults who are relatively older tend to report less negative emotions, better mental health, and less responsiveness to daily stressors (Carstensen et al., 2000; Neubauer et al., 2019), and experience less depression and anxiety (Löwe et al., 2010). Although concerns have been raised that such findings may not hold for more severe or prolonged stressors (Charles, 2010), the present findings suggest that older adult age was associated with less negative responses to the emerging COVID-19 crisis in the United States. Similarly, older adult age was associated with less distress after the 9/11 attacks, less fear of future attacks, and a steeper decline in post-traumatic stress over time (Scott, Poulin, & Silver, 2013). While the COVID-19 epidemic was outside of their control, adults who were relatively older may have regulated their emotions by focusing on the positive, or choosing activities and interactions that reduced their stress (Carstensen, 2006; Neubauer et al., 2019). Time will tell, however, whether older adults were too positive in their outlook. While unrealistic optimism can help to regulate emotions in the short run, it may sometimes leave people unprepared for negative outcomes occurring in the future (Shepperd, Waters, Weinstein, & Klein, 2015). Like any study, the present study has potential limitations. One limitation is that it did not track individual participants over time. The survey was conducted in March 2020, at the early stages of the epidemic. As more people get sick, have loved ones fall ill and die, and suffer economic consequences, age differences in responsiveness may become less pronounced, disappear, or even reverse, especially because COVID-19 infection-mortality disproportionally affects older adults (Novel Coronavirus Pneumonia Emergency Response Epidemiology Team, 2020). Indeed, analyses of survey data from the later stages of the COVID-19 outbreak in China suggest that there were no longer age differences in depression and anxiety (Qiu, Shen, Zhao, Xie, & Xu, 2020; Wang et al., 2020), even though the traditional finding of older adults being less depressed and anxious held in China before (Prina, Ferri, Guerra, Brayne, & Prince, 2011). Another limitation is that ill and vulnerable individuals may have been less likely to respond to the survey, potentially undermining extensive efforts toward recruiting a nationally representative sample. Regardless, interventions may be needed to help people of all ages maintain realistic perceptions of the risks, while also managing depression and anxiety during the COVID-19 crisis. The Centers for Disease Control and Prevention's (2014) guidelines on risk and crisis communication suggest that communications must be timely, accurate, and responsive to people's need for information, while identifying what is known and unknown, and how the unknowns will be addressed.
Additionally, the literature suggests that moderate fear appeals may be effective when pointing to preventive behaviors that allow people to control their risks (Witte & Allen, 2000). To manage mental health without requiring in-person meetings, psychological counseling services in China were delivered online and through voice-over-internet during their COVID-19 outbreak (Liu et al., 2020). Before COVID-19, it was already recommended that telemedicine be used when in-person care was impossible (García-Lizana & Muñoz-Mayorga, 2010). Preliminary evidence suggests the potential effectiveness of depression self-management through self-administered computer-based cognitive behavioral therapy (Grist & Cavanagh, 2013) and smartphone apps (Firth et al., 2017). Follow-up research is needed to understand age differences in risk perceptions and mental-health impacts of COVID-19 over time, as well as to inform and subsequently test intervention strategies as the crisis unfolds.

Note (Table 1): Control variables included being unsure about already having been infected (yes = 1; no = 0), gender (male = 1; female = 0), marital status (married = 1; not married = 0), non-Hispanic white race/ethnicity (yes = 1; no = 0), college education (yes = 1; no = 0), residing in worst-hit states (yes = 1; no = 0), below-median income (yes = 1; no = 0), and survey date (March 10-12, 2020 = 0; March 13-31, 2020 = 1). All regression models except those predicting risk perceptions for job loss also included a control variable for being currently employed (yes = 1; no = 0). Precrisis depression diagnosis was reported in December 2019 and January 2020 (yes = 1; no = 0). ***p < .001. **p < .01.

Supplementary Material

Supplementary data are available at The Journals of Gerontology, Series B: Psychological Sciences and Social Sciences online.

Funding

Data collection was supported and conducted by University of Southern California's Center for Economic and Social Research. W. Bruine de Bruin was supported by the National Science Foundation (#2028683), University of Southern California's Schaeffer Center for Health Policy and Economics, and the Swedish Riksbankens Jubileumsfond Program on "Science and Proven Experience."

Figure 2. Age differences in warning signs for depression and anxiety disorder. Note. Age groups were computed for presentation purposes only; the reported analyses treated age as a continuous variable. Warning signs of depression and anxiety disorder referred to scores of ≥6 on the 4-item Patient Health Questionnaire (PHQ-4), and warning signs of either depression or anxiety disorder referred to scores of ≥3 on the PHQ-4 subscales (Kroenke et al., 2009; Löwe et al., 2010). N = 874 for age group <30, N = 1,630 for age group 30-39, N = 1,045 for age group 40-49, N = 1,102 for age group 50-59, N = 1,199 for age group 60-69, N = 816 for age group ≥70.
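As an illustration of the PHQ-4 warning-sign definitions in the figure note above, here is a minimal Python sketch; the function and variable names are ours, and item scoring follows the standard PHQ-4 convention of 0-3 points per item.

    def phq4_warning_signs(anx1: int, anx2: int, dep1: int, dep2: int) -> dict:
        """Flag PHQ-4 warning signs from four item scores (each 0-3).
        anx1/anx2 are the two anxiety items; dep1/dep2 the two depression
        items. Cutoffs follow the figure note: a subscale score >= 3 flags
        warning signs of that disorder; a total score >= 6 flags both."""
        anxiety = anx1 + anx2
        depression = dep1 + dep2
        return {
            "anxiety_warning": anxiety >= 3,
            "depression_warning": depression >= 3,
            "combined_warning": anxiety + depression >= 6,
        }

    # Example: elevated anxiety items but low depression items.
    print(phq4_warning_signs(2, 2, 1, 0))
    # {'anxiety_warning': True, 'depression_warning': False, 'combined_warning': False}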
Herpes Simplex Virus Type-2 Cervicovaginal Shedding Among Women Living With HIV-1 and Receiving Antiretroviral Therapy in Burkina Faso: An 8-Year Longitudinal Study

Background. The impact of antiretroviral therapy (ART) on herpes simplex virus type-2 (HSV-2) replication is unclear. The aim of this study was to assess factors associated with cervicovaginal HSV-2 DNA shedding and genital ulcer disease (GUD) in a cohort of women living with human immunodeficiency virus type-1 (HIV-1) in Burkina Faso. Methods. Participants were screened for cervicovaginal HSV-2 DNA, GUD, cervicovaginal and systemic HIV-1 RNA, and reproductive tract infections every 3-6 months over 8 years. Associations with HSV-2 shedding and quantity were examined using random-effects logistic and linear regression, respectively. Results. Of the 236 women with data on HSV-2 shedding, 151 took ART during the study period. Cervicovaginal HSV-2 DNA was detected in 42% of women (99 of 236) in 8.2% of visits (151 of 1848). ART was associated with a reduction in the odds of HSV-2 shedding, which declined for each year of ART use (odds ratio [OR], 0.74; 95% confidence interval [CI], .59-.92). In the multivariable model, the impact of ART was primarily associated with suppression of systemic HIV-1 RNA (adjusted OR, 0.32; 95% CI, .15-.67). A reduction in the odds of GUD was also observed during ART, mainly in those with HIV-1 suppression (adjusted OR, 0.53; 95% CI, .25-1.11). Conclusions. ART is strongly associated with a decrease in cervicovaginal HSV-2 shedding, and the impact was sustained over several years.

Herpes simplex virus type 2 (HSV-2) infection is one of the most common sexually transmitted infections, with the highest burden in Africa [1]. HSV-2 coinfection is associated with increased plasma and genital human immunodeficiency virus type 1 (HIV-1) loads [2,3] and increased quantities of genital tract inflammatory cells [4,5]. People living with HIV tend to have more frequent HSV-2 clinical manifestations, with recurrent and persistent genital ulcerative disease (GUD) attributed to impaired immune responses [6,7]. HIV coinfection has also been shown to increase genital shedding of HSV-2 [8] and the likelihood of transmission [9]. Antiretroviral therapy (ART) should reduce clinical and asymptomatic manifestations of HSV-2 infection through immune restoration. The impact of ART on GUD and HSV-2 shedding has been described in multiple contexts, with varying results depending on sampling frequency [10,11]. Both GUD and HSV-2 genital shedding can increase during the first 1-3 months of ART, particularly among women with low CD4+ T-cell counts at ART initiation, likely owing to immune reconstitution [12,13]. The impact of ART on HSV-2 shedding beyond 6 months has not been described. In this article, we present data on the short-term and long-term effects of ART on symptomatic HSV-2 genital shedding (defined as the presence of GUD) and asymptomatic HSV-2 genital shedding in a cohort of high-risk women living with HIV-1 in Burkina Faso.

METHODS

Participants were women living with HIV-1 and coinfected with HSV-2 who were enrolled in the Yerelon cohort in Bobo-Dioulasso, Burkina Faso [14][15][16]. Combined ART has been available since 2004 for women with World Health Organization clinical stage 3/4 HIV disease or a CD4+ T-cell count of ≤200 cells/µL (or ≤350 cells/µL, beginning in 2009) [17]. First-line treatment for most participants was based on nonnucleoside reverse transcriptase inhibitors.
Participants were followed approximately every 3-6 months. A subset of women were enrolled in a randomized trial of valacyclovir to suppress HIV-1 genital shedding, with fortnightly visits over a 12-week period in 2004-2005 [18,19]. All visits corresponding to regular cohort visits were included in this analysis, excluding those with valacyclovir use. At each visit, a clinician performed a gynecological examination; recorded whether GUD was present, based on detection of vesicles or ulcers; and collected genital samples. Enriched cervicovaginal lavage (eCVL) was performed by infusing 2 mL of normal saline into the vagina for 60 seconds and collecting it into a cryotube. A swab was rotated 360 degrees in the cervical os and placed into the same cryotube [20]. Women with symptoms of reproductive tract infections were treated according to national syndromic management guidelines, which did not include acyclovir during the study period. Visits were deferred during menses. The research protocol was approved by the institutional review boards at the London School of Hygiene and Tropical Medicine and Centre Muraz and the research ethics committee at the Burkina Faso Ministry of Health. All women provided written informed consent. HSV-2 serology was assessed using the Kalon IgG2-ELISA kit (Kalon Biologicals). HIV-1 RNA in plasma and eCVL specimens was detected and quantified using real-time polymerase chain reaction (PCR) analysis (Generic HIV Viral Load; Biocentric) [21]. HSV-2 DNA was extracted from 200 µL of eCVL fluid by using the QIAamp DNA mini kit (Qiagen) and was eluted in 100 µL of buffer. HSV-2 DNA was amplified from 5 µL of eluate by Taq-Man real-time PCR analysis, using the ABI Prism 7000 Sequence Detection Systems, and was quantified using the HSV-2 Quantitated External Control (Tebu-Bio) [22]. The lower limit of detection was 300 copies/mL (2.50 log 10 copies/mL). Cervical swabs were tested for Neisseria gonorrhoeae and Chlamydia trachomatis, using PCR (Amplicor CT/NG PCR assay; Roche); testing was restricted to swabs dating from 2007 onward, owing to the potential for DNA degradation [23]. Vaginal smears were examined using wet-mount microscopy. Bacterial vaginosis was diagnosed on the basis of the Nugent score assigned to heat-fixed vaginal smears. The presence of sperm was detected using qualitative PCR to detect the Y chromosome [24]. The frequency of GUD and HSV-2 shedding and the quantity of HSV-2 DNA were assessed after stratification by ART status and ART duration. HIV-1 RNA and HSV-2 DNA loads in plasma and eCVL specimens were transformed to log 10 copies/ mL. Viral suppression was defined as achieving an undetectable HIV-1 RNA load in plasma (defined as a plasma viral load of < 2.50 log 10 copies/mL) within the first 12 months of ART, and immune reconstitution was defined as a CD4 + T-cell count increase of ≥100 cells/µL by 12 months after ART initiation [25]; data collected 18 months after ART initiation were evaluated if data collected at 12 months were missing. Logistic regression was used to estimate odds ratios (ORs) associated with (1) detectable shedding and (2) GUD, adjusting for within-woman correlation by using random-effects models. 
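As a concrete illustration of the approach just described, the sketch below shows how the detection-limit handling and a random-intercept logistic regression for shedding could look in Python. The study's analysis was actually run in Stata (see the statistical methods that follow); the file name, column names, and the choice of statsmodels here are illustrative assumptions, not the authors' code.

```python
# Hypothetical sketch: half-threshold imputation of undetectable loads
# (the convention described in the statistical methods below), log10
# transformation, and a random-intercept logistic model for shedding.
import numpy as np
import pandas as pd
from statsmodels.genmod.bayes_mixed_glm import BinomialBayesMixedGLM

LOD = 300.0  # assay lower limit of detection, copies/mL (2.50 log10)

def log10_load(copies: pd.Series, lod: float = LOD) -> pd.Series:
    """Assign half the detection threshold to undetectable visits, then log10."""
    return np.log10(copies.where(copies >= lod, lod / 2.0))

# Illustrative visit-level data; the file and column names are assumptions.
df = pd.read_csv("cohort_visits.csv")
df["log10_hsv2"] = log10_load(df["hsv2_copies_per_ml"])

# A random intercept per woman adjusts for within-woman correlation,
# analogous to the random-effects models described above.
model = BinomialBayesMixedGLM.from_formula(
    "shedding ~ on_art + C(age_group)",        # fixed effects
    vc_formulas={"woman": "0 + C(woman_id)"},  # random intercept per woman
    data=df,
)
result = model.fit_vb()   # variational Bayes fit
print(result.summary())   # exponentiate fixed-effect means to read odds ratios
```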
Multivariable logistic regression models were constructed using a hierarchical framework and included factors known to be associated with either GUD or detectable cervicovaginal HSV-2, namely age group [10,26] and immune reconstitution [27,28], or to be independently associated with GUD or HSV-2 DNA in univariable analysis, with a P value of <.10. Immune reconstitution and viral suppression were preferentially included in the final model owing to missing values for concurrent CD4 + T-cell counts and plasma viral load. For the quantitative analyses, visits with undetectable HIV-1 or HSV-2 were assigned half the threshold value [18]. Random-effects linear regression was used to assess factors associated with the quantity of cervicovaginal HSV-2 DNA, restricted to visits with detectable HSV-2. A multiple linear regression model was constructed in the same fashion as the logistic model. Statistical analyses were performed using Stata, version 12.0 (StataCorp). RESULTS Between 2003 and 2011, 317 women seropositive for HIV-1 and HSV-2 were enrolled, of whom 236 had data collected on cervicovaginal HSV-2, and 81 did not have any stored samples. The characteristics of women with shedding data are shown in the Supplementary Materials. Some women were already receiving ART at their first visit during which HSV-2 DNA was measured, 54% (128 of 236) initiated ART during the study period, and 4% (10 of 236) did not have any HSV-2 DNA measured after starting ART. The median CD4 + T-cell count was 357 cells/µL (IQR, 196-564 cells/μL) at the first visit with HSV-2 DNA sampling and 177 cells/µL (IQR, 116-233 cells/μL) at ART initiation. The most common ART regimen was zidovudine/lamivudine/efavirenz (42%); 85% (130 of 151) achieved plasma HIV-1 suppression by 12 months of treatment, and 69% (104 of 151) achieved immune reconstitution. Shedding was measured during 1896 cohort visits, with 1308 occurring during ART. There was a median of 11 visits (IQR, 1-16 visits) per woman during ART and 6 visits (IQR, 1-16 visits) per woman before ART initiation. The median follow-up time was 1.2 years (IQR, 0.2-1.7 years) before ART initiation and 6.2 years (IQR, 5.0-6.6 years) during ART; 48 visits at which women received valacyclovir were excluded from analyses. HSV-2 DNA was detected at least once in eCVL samples from 42% of women (99 of 236) at 8.2% of cohort visits (151 of 1848), with GUD detected concomitantly in 15% of shedding episodes (22 of 151). Of women with a measurement while not receiving ART, 33% (67 of 203) had detectable HSV-2 DNA at 15% of visits (84 of 551); after ART initiation, 32% (48 of 151) had detectable HSV-2 DNA at 5% of visits (67 of 1297; P < .001) (Table 1). The highest proportions of visits with shedding were observed in the 6 months before ART initiation (Figure 1) and in the first 3 months of ART (18% [7 of 38]), and the proportion significantly dropped after 12 months of ART (3% [36 of 1057]; P trend < .001). The proportion of visits during which GUD was detected also increased and decreased during the same periods. Across all visits, concurrent CD4 + T-cell counts were associated with a 30% decrease in HSV-2 DNA detected per 100 cells/µL increase (OR, 0.70; 95% CI, .61-.81), whereas concurrent plasma HIV-1 loads were associated with a 70% increase per log 10 copies/mL increase (OR, 1.70; 95% CI, 1.41-2.05). ART use was associated with a substantial reduction in HSV-2 shedding (OR, 0.26; 95% CI, .17-.39). There was a 25% decrease in the odds of shedding per year of ART after the first 12 months (OR, 0.74; 95% CI, .59-.92).
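As a back-of-envelope reading of that per-year estimate (point estimate only, ignoring the confidence interval), a per-year odds ratio compounds multiplicatively, so k additional years of ART imply the odds multiplied by 0.74^k:

```python
# Cumulative effect implied by the per-year odds ratio of 0.74
# (illustrative arithmetic, not a quantity reported by the authors).
per_year_or = 0.74
for years in (1, 2, 3, 5):
    print(years, round(per_year_or ** years, 2))  # 0.74, 0.55, 0.41, 0.22
```

On the point estimate alone, this would correspond to roughly a 59% reduction in the odds of shedding accumulating over 3 years of ART beyond the first 12 months.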
In the multivariable model, HIV suppression was associated with a reduced odds of HSV-2 shedding (adjusted OR, 0.32; 95% CI, .15-.67; Table 1). Shedding episodes decreased with age in the univariable analysis, although this relationship was less pronounced after adjustment for ART (older women being more likely to be receiving ART). There were 317 women with clinical data on GUD during the study period. GUD was present in 22% of women (65 of 292) with data prior to ART initiation and in 32% (92 of 192) with data after ART initiation; among visits, 7.0% (82 of 1176) before ART initiation and 5.7% (93 of 1633) after initiation revealed GUD (P = .17). There were concurrent vesicles at 43% of visits (23 of 124) with ulcers. HSV-2 DNA was detected at 20% of visits (22 of 108) with GUD present and 7.5% of visits (129 of 1722) without GUD present (P < .001). Overall, there was a decrease in frequency of GUD episodes with an increase in concurrent CD4 + T-cell count (OR, 0.79; 95% CI, .70-.89 per 100 cells/µL increase) and an increase with increasing plasma viral load (OR, 1.49; 95% CI, 1.23-1.79 per log 10 copies/mL increase). There was an increase in the odds of GUD during the first 3 months of ART (OR, 2.00; 95% CI, .96-4.20) but an overall reduction in the odds of GUD during ART (OR, 0.72; 95% CI, .51-1.03). In the multivariable model, there was weak evidence that GUD was inversely associated with HIV-1 suppression (adjusted OR, 0.53; 95% CI, .25-1.11) and with increasing CD4 + T-cell count (adjusted OR, 0.86; 95% CI, .74-1.01). DISCUSSION We describe the impact of ART on cervicovaginal HSV-2 and GUD presence over several years. The frequencies of HSV-2 shedding and GUD increased in the 6 months prior to ART initiation, were sustained at that level for the first 3 months of ART, and decreased thereafter. This differs from findings from a study in Uganda, where there was a rise in the frequencies of HSV-2 shedding and GUD during the first 3 months of treatment [13]. In our study, the most substantial decrease in shedding was seen after 12 months of ART, although we were limited by the small number of samples in the first 3 months. The effect of ART was associated with HIV-1 suppression and immune reconstitution, although the magnitude of the effect was larger for viral suppression. This further supports the synergistic interactions between HIV-1 and HSV-2 replication, where systemic HIV-1 replication might drive HSV-2 replication in the sacral ganglia, compounded by weak immune control [29][30][31]. This reduction was maintained over time and was independent of age; therefore, it is less likely to be due only to the natural history of HSV-2 [9]. Among women who shed, HSV-2 DNA quantities were correlated with quantities of genital HIV-1 RNA, providing additional proof of local direct viral interactions [32]. The effect of ART on GUD appears to be driven by systemic HIV-1 suppression, although there was a decrease in the frequency of GUD among women with higher CD4 + T-cell counts and a trend toward a reduction in the odds of GUD among women with immune reconstitution during ART. The slightly different dynamics for the effect of ART on GUD, compared with HSV-2 shedding, suggest that the clinical benefits might wane over time. This is one of the first studies to demonstrate prolonged suppression of HSV-2 shedding during ART; other studies have shown no change in shedding during ART but reductions in GUD [33,34]. 
The variations in results are likely due to smaller sample sizes and variable duration of follow-up, particularly if studies are limited to early periods after ART initiation. There are limitations to this study. The frequency of sampling was every 3-6 months, and therefore clinical and asymptomatic episodes of HSV-2 activation might have been missed. GUD was assumed to be caused mainly by HSV-2 in this population, based on studies from the region [35,36]. Our prior study in this population showed that 52% of GUD cases harbored lesional HSV-2 DNA [7]. Although we only detected HSV-2 DNA at 20% of visits with concurrent GUD, this is consistent with other studies that used more sensitive methods [10,37]. In conclusion, ART has a significant influence on HSV-2 shedding and GUD episodes, primarily associated with HIV-1 suppression. Following ART initiation, HSV-2 shedding is rapidly suppressed, and the influence of ART is sustained over time.
2016-05-04T20:20:58.661Z
2015-10-15T00:00:00.000
{ "year": 2015, "sha1": "34b0c104b59c71d1e7bed473f241caafbab7421b", "oa_license": "CCBY", "oa_url": "https://academic.oup.com/jid/article-pdf/213/5/731/17410873/jiv495.pdf", "oa_status": "HYBRID", "pdf_src": "PubMedCentral", "pdf_hash": "b9aacbdc667bf46e1b5c14bd9bfa1fa31d555fbf", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
221842694
pes2o/s2orc
v3-fos-license
Brain Activity during Different Throwing Games: EEG Exploratory Study The purpose of this study is to explore the differences in brain activity in various types of throwing games by making encephalographic records. Three conditions of throwing games were compared looking for significant differences (simple throwing, throwing to a goal, and simultaneous throwing with another player). After signal processing, power spectral densities were compared through variance analysis (p ≤ 0.001). Significant differences were found especially in high-beta oscillations (22–30 Hz). "Goal" and "Simultaneous" throwing conditions show significantly higher values than those shown for throws without an opponent. This can be explained by the higher demand for motor control and the higher arousal in competition situations. On the other hand, the high-beta records of the "Goal" condition are significantly higher than those of the "Simultaneous" throwing, which could be understood from the association of the beta waves with decision-making processes. These results support the difference in brain activity during similar games. This has several implications: opening up a path to study the effects of each specific game on brain activity and calling into question the transfer of research findings on animal play to all types of human play. Introduction The words "play" or "game" capture a wide range of meanings, activities, and behaviors, and their definition remains a controversial issue [1]. Several studies, mainly conducted with animals [2], have delved into the neurophysiological basis of play. Although neurophysiological studies abound in sports and e-gaming, there is still little research on children's physical games. This study is intended as an initial attempt to investigate the brain processes that occur in different types of games. This work expands our line of research on children's play [3,4] by analyzing electroencephalographic activity during play episodes. The objective of this research is to determine the differences in brain activity during different types of throwing games by taking encephalographic (EEG) records. Brain Wave Indicators Neural activity causes signals of different frequencies. By measuring electrical activity of neuronal assemblies with millisecond temporal resolution, EEG offers the possibility of studying brain function in real time. Unfortunately, the spatial resolution afforded by EEG is constrained by several factors [5]. Studies through electroencephalography (EEG) have associated different frequencies with types of brain activity. Existing research is extensive, and, in this section, we will focus on those contributions close to our subject of study. We will first summarize some characteristics associated with each type of frequency and then we will focus on studies on brain activity related to emotions and motor skills. 1.1.1. Delta Band (0.5-4 Hz) Delta rhythms reflect low-frequency activity (1-4 Hz) and are associated with stages of deep sleep [7]. The role of delta oscillations between frontal and parietal zones has been observed in decision-making processes [8], auditory attention, and memory updating [9]. The coupling between the beta band and the delta band has been related to the temporal prediction of events and the accuracy of the elaboration of a response [10].
Delta is also the predominant activity in infants during the first two years of life, and slow delta and theta activity diminish with increasing age, whereas the faster alpha and beta bands increase almost linearly across the life span [5]. 1.1.2. Theta Band (4-7 Hz) Theta activity refers to EEG activity within the 4-7 Hz range, prominently seen during sleep. During wakefulness, two different types of theta activity have been described in adults. The first shows a widespread scalp distribution and has been linked to decreased alertness (drowsiness) and impaired information processing [11]. The second, the so-called frontal midline theta activity, is characterized by a frontal midline distribution and has been associated with focused attention, mental effort, and effective stimulus processing [5]. In children it is common to find a greater presence of this band, which decreases over time until they reach adolescence [12]. Theta is believed to enable the coding and decoding of hippocampal learning in the neocortex, especially the frontal lobes [13]. Theta oscillations would have an important role in cognitive processing, memory performance, and learning mechanisms. Frontal midline theta has been associated with good cognitive control during planning [14,15], and as an indicator of optimal attentional engagement during skilled putting performance [16]. 1.1.3. Alpha Band (7-13 Hz) The alpha rhythm refers to EEG activity within the 7-13 Hz range and can be easily recorded during states of relaxed wakefulness. During normal development, an alpha frequency of 8 Hz appears at the age of three and remains stable for the rest of life [5]. The alpha rhythm has been considered as a means of communication between the thalamus and the cerebral cortex [17]. This oscillation between both structures appears with closed eyes, especially in occipital areas of the cortex. When the eyes are opened, the alpha activity disappears and is replaced by a much more unsynchronized activity within the bandwidth of beta, which is related to sensory or motor processing. Attentional processing or cognitive tasks attenuate the alpha waves [18]. In cognitive tasks, lower alpha (e.g., 8-10 Hz) desynchronization (suppression) has been associated with stimulus-unspecific and task-unspecific increases in attentional demands [5]. Klimesch [19] suggested that only slower alpha frequencies reflect attention characteristics such as alertness and expectancy. Gamma Band (30-50 Hz) Gamma waves (above 25 Hz) are fast oscillations and are usually found during conscious perception. Gamma oscillations have been associated with attention, arousal, object recognition, learning, preparing for a move, top-down modulation of sensory processes, and, in some cases, perceptual binding [21,22]. The work of Martini et al. [23] shows that in the face of unpleasant stimuli, gamma waves appear. It has been proposed that the low frequency bands (e.g., delta and theta) may reflect the activity of motivational and emotional systems, while the higher frequency bands (e.g., alpha and beta) have been more involved in inhibitory processes [24,25]. In any case, there are many contextual and personal variables.
For example, Tran, Craig, and McIsaac [26] showed how alpha values were higher in extroverted people than in introverted ones, and another study [27] points out that agreeableness is associated with theta activity in frontal and occipital lobes; neuroticism is detected with theta activity in parieto-temporal lobes; and extroversion is associated with alpha and theta frequency bands in frontal and temporal lobes. Motricity and Cortical Records Park, Fairweather, and Donalson [28] pointed out that EEG research within the sporting context has largely focused on alpha rhythms involved in the inhibition of unnecessary or conflicting processing in the cortex, global arousal, and attentional processes. Babiloni et al. [29] showed that in the eyes-closed resting state the alpha waves in the parietal and occipital areas were higher in expert athletes. Del Percio et al. [30] registered a reduced alpha desynchronization over the motor cortex during preparation of movements in expert athletes. High alpha activity in the left temporal lobe is also related to better performance, but the poorest shots during archery performance were associated with the highest levels of EEG alpha power in both temporal lobes [31]. A better performance was associated with an increase in upper alpha power at parietal electrodes, along with an increase in theta power at frontal electrodes [32]. Deeny et al. [33] showed that expert marksmen exhibited lower coherence between left temporal (T3) and mid-line frontal (Fz) regions for low-alpha and low-beta frequencies, lower coherence for high-alpha between all left hemisphere sites and Fz, and lower coherence between T3 and all midline sites for the low-beta band (experts engage in less cortico-cortical communication, which implies decreased involvement of cognition with motor processes). In general, higher levels of intelligence and superior performance have been associated with reduced cortical activation [34,35]. A recent review [36] concluded that among the electroencephalographic components examined, only the sensorimotor rhythm (an 8-13 Hz oscillation in the sensorimotor cortex, related to the regulation of cognitive-motor information processing in motor performance) demonstrated a consistent and causal relationship with superior precision motor performance. On the other hand, focusing on child populations, the relationship between brain development and the acquisition of motor skills is a developing field [37]. The structure of the brain changes rapidly throughout childhood, and it is a challenge to separate the contributions of specific neural changes from development and learning [38]. Electroencephalographic measurements of movement detected that up to the age of 8-9 years, there is no slow negative shift ("readiness potential") prior to movement [37]. Emotion and Cortical Records Theta is assigned an important role in human emotional processing and in general excitation processes [39]. Theta has also been explicitly implicated in emotional brain networks [40]. In their review, Shu et al. [41] point out that alpha values are high in the face of emotions of anger, anxiety, amusement, happiness, and joy, and low in cases of fear. Beta values are high in situations of amusement. Dzedzickis, Kaklauskas, and Bucinskas [42], in their review on emotion recognition through EEG, show that beta waves appear associated with an alert or anxious state. Also, negative emotions are related to increased beta responses in humans [43].
Some studies also have reported connections between high-frequency-band (>30 Hz) activities and emotions [44,45]. Matsumoto et al. [46] suggested that these high-frequency-band activities reflect emotional processing, playing an important role in the cognitive control of emotions [47]. Abhang et al. [48] point out that high beta waves are associated with significant stress, anxiety, paranoia, high energy, and high arousal. In any case, studies such as those by Wei et al. [49] warn of personal differences, given that the neural circuitry underlying emotional processing is influenced by personality. Children's Play As Koeners and Francis propose [1], defining play remains an open question and a matter of lively debate, but many authors, as Stevens [50] summarized, propose that play is an "altered state" related to fun and the state of flow proposed by Csikszentmihalyi, recognizing that this is the essence of the play experience; Huizinga's "intense and utter" absorption. Authors such as Moyá [51] associate that "altered state" with waves between 10 and 12 Hz (alpha oscillations). Probably, the core neural circuitry that motivates an animal to engage in playful social interactions is shared among mammalian species that engage in play [52]. Although play circuits reside predominantly in subcortical structures such as the hypothalamus, the striatum, or the amygdala [52], different empathic or tactical aspects are associated with different areas of the cortex, especially prefrontal areas [52]. Play is usually associated with emotions of joy, regulated by subcortical limbic networks, and associated with increases in dopamine levels [53] linked to mental shifting, creativity, and motivation, but may also produce stress, frustration, and addiction [23]. As Koeners and Francis propose [1], this ambivalence is consistent with Sutton Smith's approach that the exciting and rewarding aspects of the game are often found in the ambiguity between creation and destruction [54]. These emotional aspects would be reflected in an increase in theta waves during play [55,56], although with great personal differences possibly due to each player's different approach to play [3] or the play type [4]. Participants To calculate the sample size, the biomath application (http://biomath.info) was used, performing a t-test with a significance level of alpha = 0.05 and a power of 0.90, calculated from the mean and standard deviation of the study data. The following formula [57] was applied to confirm the result of the t-test: n = Z² × SD² / d², where "Z" is the standard normal variate (1.96 at a 5% type 1 error rate, p < 0.05); "SD" is the standard deviation of the variable; and "d" is the absolute error or precision (0.05 in this case); a worked numeric example appears below. The resulting sample size (<8) is common in preliminary EEG studies [58][59][60][61][62]. A total of eight child volunteers (four males and four females, mean age 7.20 years ± 0.19) participated in the experiment. All the participants were right-handed and healthy. All participants and their families gave written informed consent. The study was performed in accordance with the Declaration of Helsinki. The experiment was accompanied by an educational activity for participants on the functioning of the brain and the recording of brain signals. The participants and their families have been receiving reports on the results obtained from the different analyses of the data. Procedure The room that was set up for the sessions was isolated in order to avoid any kind of distraction or noise.
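A quick numeric check of the sample-size formula quoted above, as a minimal sketch. The SD value used here is a hypothetical placeholder; the paper derives it from the study data and does not report it in this passage.

```python
# Sample-size formula n = Z^2 * SD^2 / d^2, as quoted in the Participants
# section. The SD below is a hypothetical placeholder value.
def sample_size(sd: float, d: float = 0.05, z: float = 1.96) -> float:
    """Minimum n to estimate a mean with absolute error d at 5% alpha."""
    return (z ** 2) * (sd ** 2) / (d ** 2)

print(sample_size(sd=0.07))  # ~7.5 for SD = 0.07, i.e., a result below 8
```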
Participants sat in a comfortable chair with their arms resting on the launch table.
• First condition: "Throwing." The participant had to throw tennis balls at 10 wooden pieces from 2.5 m. In preliminary tests we had seen that it was an easy challenge for children of this age.
• Second condition: "Goal." The participant had to throw, from a distance of 2.5 m, tennis balls at a goal (80 cm) defended by a dummy handled by a friend of the participant. This challenge increased the complexity of the throw as the target became changeable and a relational variable was introduced into the game.
• Third condition: "Simultaneous." This consisted of a throw at 10 wooden blocks located 2.5 m away, simultaneously with another opponent who threw at the same targets. This challenge introduces a time factor (knocking down the blocks before the opponent) and therefore could increase arousal.
"Throwing" was proposed as the first activity to serve as a throwing test. The "simultaneous" throwing challenge was left for the end, since it was assumed that it would generate the highest excitement and it was intended that this possible state would not influence a later challenge. The experiments were carried out between 5 and 6 pm. In each game they were able to perform 15 throws. We did not allow more attempts, to avoid disinterest in the task and because in experiments with children it is recommended to keep the electrode application time under 30 minutes [63]. After a brief explanation of the procedure and instructions to minimize movement and speech during the recording, the EEG recording system was put in place. An Emotiv EPOC® headset with 16 electrodes, 14 EEG recording channels (AF3, AF4, F3, F4, F7, F8, FC5, FC6, P7, P8, T7, T8, O1, and O2) and 2 reference electrodes (P3 and P4), positioned according to the International 10-20 System, was used. The electrodes of this system are saline-based contact electrodes. The Emotiv Control Panel software provides visual monitoring of the electrode impedance, which was kept below 5 kΩ in order to obtain a good-quality signal. The recorded EEG signal, with a sampling frequency of 128 Hz, is sent wirelessly to a Bluetooth receiver placed on the computer. The Emotiv EPOC® has an artifact cancelation system on its reference electrodes and notch filters at 50 and 60 Hz. The Emotiv EPOC has been widely used in studies on emotion detection [64][65][66] or in movement situations [67,68]. Signal Pre-processing For a first inspection of the data, the Emotiv Brain Activity Map (v3.3.3) and Emotiv TestBench (v1.5.0.3) (Emotiv, San Francisco, CA, USA) applications were used. The Emotiv Brain Activity Map shows brain power activity maps at different frequencies obtained through a spectral analysis (Fast Fourier Transform, FFT) of each channel signal. The Emotiv TestBench displays the spectrum of the signals through an FFT (in decibels, dB). In this first inspection, brain maps were compared with the spectrum and video images of each participant's actions in order to identify events (Figure 2). Figure 2 shows examples of the Emotiv Brain Activity Map and TestBench application interfaces in conjunction with a video image simulation. Data pre-processing and analyses were carried out using the EEGLAB toolbox (v.2019.1) (Swartz Center for Computational Neuroscience, La Jolla, CA, USA) for Matlab (MathWorks, Natick, MA, USA). The baseline of the EEG signal for each channel was removed. A spatial filtering of Common Average Reference (CAR) was applied.
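As a rough, self-contained illustration of this pipeline, the sketch below reimplements in numpy/scipy the steps described here and in the following paragraphs: common average reference, 1 Hz high-pass filtering, band-wise power spectral density, and a per-frequency ANOVA across the three conditions. The study itself used EEGLAB in Matlab; the array shapes, function names, and parameter choices below are illustrative assumptions, not the authors' code.

```python
# Illustrative numpy/scipy version of the EEG pre-processing and spectral
# analysis pipeline (the study used EEGLAB/Matlab; names here are assumed).
import numpy as np
from scipy.signal import butter, filtfilt, welch
from scipy.stats import f_oneway

FS = 128  # Emotiv EPOC sampling rate, Hz
BANDS = {"theta": (4, 7), "alpha": (7, 13), "beta": (13, 30)}

def preprocess(eeg: np.ndarray) -> np.ndarray:
    """eeg: (n_channels, n_samples). Baseline removal, CAR, 1 Hz high-pass."""
    eeg = eeg - eeg.mean(axis=1, keepdims=True)   # per-channel baseline removal
    eeg = eeg - eeg.mean(axis=0, keepdims=True)   # common average reference
    b, a = butter(4, 1.0 / (FS / 2), btype="highpass")  # 1 Hz high-pass
    return filtfilt(b, a, eeg, axis=1)

def band_power(eeg: np.ndarray) -> dict:
    """Welch PSD averaged within each band and across channels."""
    freqs, psd = welch(eeg, fs=FS, nperseg=FS * 8, axis=1)  # 0.125 Hz bins
    return {name: psd[:, (freqs >= lo) & (freqs < hi)].mean()
            for name, (lo, hi) in BANDS.items()}

def anova_per_frequency(throwing, goal, simultaneous):
    """Each argument: (n_epochs, n_freqs) PSD arrays on a shared frequency
    grid. Returns one ANOVA p-value per frequency bin, analogous to the
    EEGLAB comparison of the three conditions."""
    return np.array([f_oneway(t, g, s).pvalue
                     for t, g, s in zip(np.asarray(throwing).T,
                                        np.asarray(goal).T,
                                        np.asarray(simultaneous).T)])
```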
For frequential filtering, data were high-pass filtered at 1 Hz to remove slow drifts. Artefacts were visually identified and rejected from the channel data (Figure 3). Data were decomposed by Independent Component Analysis (ICA). Components that did not account for brain activity were visually identified and removed. For this purpose, the ICLabel tool (an electroencephalographic independent component classifier) was used (Figure 4). This is a plugin that, among other things, shows the probability that a component picks up brain activity or other artefacts (muscles, blinking, heart, etc.). Statistical Analysis The frequency domain analysis was performed using the Fast Fourier Transform (FFT) algorithm (with a resolution of 0.125 Hz) to calculate absolute (µV²/Hz) power spectral density within the theta (4-7 Hz), alpha (7-13 Hz), and beta (13-30 Hz) bands (a power measure based on the microvolt (µV) measurement over time, calculated for each frequency band). Channel and component measurements were pre-computed. Power spectral density metrics for each condition were calculated. EEGLAB allows users to use either parametric or non-parametric statistics to compute and estimate the reliability of these differences across conditions ("throwing," "goal," and "simultaneous"). The toolbox also allows obtaining different spectrum parameters such as the maximum and minimum, mean, median, mode, standard deviation, and range. EEGLAB allows performing analysis of variance on power spectra. For mean power spectra, the p-values are computed at every frequency. In this case, an analysis of variance (ANOVA) test was performed in order to detect statistical differences between the three conditions. A specific time-frequency point was considered significant at p ≤ 0.001. The EEGLAB designers recommend that while parametric statistics might be adequate for exploring data, it is better to use permutation-based statistics to plot final results. Preliminary Data Inspection For a first inspection of the data, the Emotiv Brain Activity Map (v3.3.3) and Emotiv TestBench (v1.5.0.3) applications were used. The maps are generated by performing a spectral analysis for each of the 14 sensors, then dividing the sensor readings into delta, theta, alpha, and beta oscillations. The Emotiv TestBench shows the spectrum averaged topographies (based on FFT) of the signals. The brain maps and the FFT were viewed simultaneously with the video recordings of the participants' actions in order to identify events. In the condition of throwing tennis balls at 10 wooden pieces from 2.5 m ("throwing"), an increase in theta and beta waves was seen in six of the participants after knocking down some pieces (especially after a successful shot following several failed attempts). Increases in theta (75 dB) and mid-beta (20 dB) waves when throwing at the last piece of wood were also noticed. In general, high fluctuating theta values (41 to 78 dB), moderate alpha values (especially associated with frontal and right lobe areas), and contained beta wave values (−30 to 15 dB) were observed (Figure 5). In the condition of throwing tennis balls (from a distance of 2.5 m) at a goal (80 cm) defended by a dummy handled by a friend of the participant ("goal"), a progressive increase of theta values was observed throughout the experiment, as well as a constant fluctuation of the alpha oscillations in the frontal areas (-9 to 11 dB) and a high value of the beta waves (up to 15 dB).
In three cases, an increase in theta was observed after the failures. In four cases, increases in theta and beta were observed after scoring goals. Five children paused their play and attempted throwing feints. In those cases, decreases in theta activity and records of beta oscillations in occipital areas were observed (Figure 6). The third condition ("simultaneous") consisted of a throw at 10 wooden blocks located 2.5 m away, while an opponent threw a ball at the same targets simultaneously. High theta values (73-81 dB) were seen in all participants throughout the game (Figure 7). Alpha showed constant synchronizations and desynchronizations, and beta exhibited medium (9 to 14 dB) and high values (up to 15 dB). In three cases a decrease in theta and beta activation was observed in the last throws. Comparison of the Three Conditions The results of the analysis of variance show significant differences (p < 0.001) between the power spectral densities (PSD) in the beta oscillations, especially in the high beta, as summarized in Figure 8. Theta frequencies did not differ significantly (Figure 9: spectrum plots of the three condition components ("throwing," "goal," "simultaneous"), with their averaged topographies over the frequency range and the p-value plot for frequencies from 4 to 7 Hz). Figure 10 shows the corresponding plots for frequencies from 7 to 13 Hz (alpha). The range of densities for the "throwing" condition was from 47. In the spectral densities for low beta (13-16 Hz), the "goal" and "simultaneous" conditions showed their most significant differences (p < 0.001) with respect to the "throwing" condition between 14.38 and 16 Hz (Figure 12); Figure 13 shows the corresponding plots for frequencies from 16 to 22 Hz (mid-beta). As mentioned above, the most significant differences between the three experimental conditions were shown in the spectrum range of the high-beta oscillations between 22 and 30 Hz (Figure 14). Discussion EEG signals have been widely used for studying different cognitive functions. In this study we explore the differences in brain activity in three types of throwing games (simple throwing, throwing at a goal, and simultaneous throwing with another player) by taking encephalographic records. The differences, especially in high-beta power, were supported by the results. Furthermore, the "throwing" condition presented low values in the beta spectrum compared to the conditions of "goal" and "simultaneous" throwing. Since the "throwing" condition presented an uncomplicated task for the participants, and given the association of theta waves with states of emotional excitement [39], a significant difference was expected in this frequency band between the "throwing" condition and those of "goal" and "simultaneous" (in principle, with greater emotional demand given the added uncertainty of the throws and the interaction with an opponent). The preliminary data inspection of the spectrum averaged topographies showed some difference, but the subsequent statistical analysis confirmed that this difference was not significant. Perhaps this can be due to the short duration of each game (less than 5 min), as indicated by studies based on video games [69].
In the alpha spectrum, between 9 and 11 Hz, there is an increase in the values in the "simultaneous" condition compared to the other two conditions that could be explained by higher happiness and joy, following the analysis of Shu et al. [41], although the difference is not significant. On the other hand, according to Reuderink et al. [70], a significant increase in the alpha power range, associated with increasing emotional arousal, could be expected, but this is not reflected in this study (there is a non-significant difference in favor of the "simultaneous" condition in the range of mid-alpha, 9-10.5 Hz). The results show a difference (not significant) in the low-alpha range (7-8 Hz) that may be associated with the higher socio-cognitive processing demanded in the "goal" condition [71]. A larger sample would be needed to confirm this trend. Beta oscillations are related to demands on the motor and somatosensory cortex [72], as well as to top-down control [73]. Since the conditions of "goal" throwing (in which the target is changing) and "simultaneous" throwing (in which there is a rush to throw) are more demanding, the difference in the beta spectrum versus the "throwing" condition could be explained in this sense. On the other hand, Abhang et al. [48] point out that high beta waves are associated with significant stress, anxiety, high energy, and high arousal, and Dzedzickis, Kaklauskas, and Bucinskas [42] associated them with an alert or anxious state. According to these studies, it is plausible to find less arousal in the less perceptually and relationally demanding "throwing" condition. This record of "controlled stress" in play situations would be in line with studies on the neurophysiology of play such as those by Wang and Aamodt [74]. According to these authors, play activates the brain's reward circuitry but not negative stress responses, which can facilitate attention and action. The high values of high-beta in the "goal" condition versus the "simultaneous" throwing condition could be understood from the association of the beta waves with decision-making processes [75,76]. Spitzer and Haegens [77] report that lateralized beta activity during decision-making tasks reflects a dynamic process of accumulatively updating a motor plan. In addition, it appears that beta oscillations are also involved in timing [10]. Compared to the "goal" condition, the "simultaneous" throwing condition, in theory, involves less decision making (the challenge does not involve the evaluation of the opponent's reaction) and less timing (the "goal" throwing challenge involves an adjustment to the goalkeeper's movements). In contrast to video game studies, which indicate the low-beta frequency as the most informative band for discriminating among gaming conditions [78], in this study of physical games it is the high-beta frequency that shows the differences between variants of the throwing games. Beta oscillations are also associated with unexpected positive rewards [79], but in this study it is not clear if this indicator can play a role in the results of the different conditions. As many studies in the field of motor praxeology point out [80], each game introduces players to a particular logic and each modification in the game structure results in different experiences. Although these results need to be confirmed with larger samples and other types of games, they reveal important implications within the field of physical education and, in general, in the study of human play.
In the field of physical education, a line of research is opening up that could identify the brain demands of the different tasks proposed. As for the field of human play, it is questionable to transfer the conclusions of studies on animal play to human play (given that these studies are frequently used to make this type of extrapolation [53]), and it raises the question of whether speaking about "play" in general can be meaningful. This opens up many questions for future research. Further studies would be needed to determine the role of the different explanations given for high beta (motor demand of the task, arousal, decision making, or reward). If each game causes different states, can we talk about "play" in general? To get closer to the answer to this question, we can compare brain activity in different situations (relaxation, reading, arithmetic calculation) with records of different play activities such as those proposed in this work. In addition to the fact that each type of game causes different experiences, are the differences in brain states between players within the same game significant? To solve this problem, we should increase the sample size and try to find different clusters. Conclusions The results show the differences in brain activity between games belonging to the same family (throwing games) but with different characteristics. The main differences between the three compared game situations are seen in the high-beta frequency (20-30 Hz). "Goal" and "simultaneous" throwing conditions show significantly higher values than those shown for throws without an opponent. This can be explained by the higher demand for motor control and the higher arousal in competition situations. On the other hand, the high-beta records of the "goal" condition are significantly higher than those of the "simultaneous" throwing, which could be understood from the association of the beta waves with decision-making processes. There are also differences (although not significant) in the theta and alpha waves that would need future studies with larger samples to be confirmed.
2020-09-23T13:06:06.968Z
2020-09-01T00:00:00.000
{ "year": 2020, "sha1": "612350a9f5f4309fc05bfee09f4dbd89cb1ac3a6", "oa_license": "CCBY", "oa_url": "https://doi.org/10.3390/ijerph17186796", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "d8399b57a40a7f0286e1d35bad73eeaf4d18d9da", "s2fieldsofstudy": [ "Psychology" ], "extfieldsofstudy": [ "Medicine", "Psychology" ] }
12259249
pes2o/s2orc
v3-fos-license
Treatment of cognitive impairment in Alzheimer's disease Alistair Burns, MD, FRCP, FRCPsych In Alzheimer's disease, cognition now responds to several drugs. Anticholinesterases target the acetylcholine deficit. In mild-to-moderate Alzheimer's disease, they all provide significant benefit versus placebo on the Alzheimer's Disease Assessment Schedule-Cognitive Section (ADAS-Cog). Side effects, in 5% to 15% of cases, include nausea, vomiting, diarrhea, anorexia, and dizziness. Tacrine, the leading anticholinesterase, caused frequent hepatic enzyme elevation and was withdrawn; once-daily donepezil spares the liver and improves global measures of change in severe dementia; rivastigmine is indicated in comorbid vascular disease; while galantamine modulates the cerebral nicotinic acetylcholine receptors that potentiate the response to acetylcholine. Alternative agents include the N-methyl-D-aspartate (NMDA) receptor antagonist, memantine, licensed in Europe for moderately severe to severe Alzheimer's disease; it acts on a different neurotransmitter system present in 70% of neurons, protecting against pathologic glutamatergic activation while preserving or even restoring physiologic glutamatergic activation. The clinician's armamentarium in AD has never been greater. Alzheimer's disease is the commonest cause of dementia and describes a clinical syndrome made up of three domains. First, a neuropsychological domain encompassing those deficits of cognitive function such as amnesia (memory loss), aphasia (language disturbance), apraxia (the inability to carry out motor tasks despite intact motor functions), and agnosia (the inability to recognize people or objects despite intact sensory functions). Second, a group of psychiatric symptoms and behavioral disturbances, which have been termed neuropsychiatric features, 1 noncognitive phenomena, or behavioral and psychological symptoms of dementia (BPSD). 2 These consist of psychiatric symptoms (such as delusions, hallucinations, depression, paranoid ideas, and misidentifications) and behavioral disturbances (such as aggression, wandering, and sexual disinhibition). Third, problems with activities of daily living (ADL), which include instrumental ADL in the early stages of dementia, when the person is unable to carry out complex tasks such as shopping, driving, and using the telephone, and basic ADL in the later stages of dementia, when a person is unable to go to the toilet or feed, dress, and wash themselves. Causes of dementia The relative frequency of the causes of dementia varies depending on the population under study. Alzheimer's disease is probably the commonest form (about 50%), followed by vascular dementia (about 25%) and dementia with Lewy bodies (about 20%), with the other 5% being made up of reversible dementias and rarer forms of dementia, such as frontal lobe dementia or Creutzfeldt-Jakob disease.
There is increasing evidence that there is a significant overlap between the two commonest causes: Alzheimer's disease and vascular disease. Clinically, it is common for individuals to have features of both disorders. Epidemiological studies suggest that the risk factors for vascular disease are also associated with the development of Alzheimer's disease. 3 Histological studies have shown that in many patients there is a coexistence of vascular and Alzheimer's changes and that, even in the presence of Alzheimer's disease histologically, vascular changes significantly influence the clinical picture in terms of the presence of dementia. 4 Assessment of dementia There are now a number of established standardized tools for the assessment of features of dementia and measurement of change. Cognitive function Cognitive function is at the core of the assessment of Alzheimer's disease. The most widely used assessment is the Alzheimer's Disease Assessment Schedule-Cognitive Section (ADAS-Cog 5 ), which assesses a number of domains in addition to memory and is sensitive to change. Scores range from zero (no impairment) to 70 (severe impairment). Generally speaking, patients with mild-to-moderate Alzheimer's disease show an increase in ADAS-Cog scores of between 6 and 12 points a year (the ADAS-Cog is scored in the same way as the original Blessed Scale, 6 which measures the number of errors rather than the number of correct answers, hence a higher score indicates poorer cognitive function, in distinction to most other tests). In the later stages of dementia, the Severe Impairment Battery 7 is able to measure cognitive function with a score from zero to 100. 8 The Mini-Mental State Examination (MMSE) 9 is also used as both a measure of change and a descriptor of the severity of the illness (scores of less than 10 out of 30 equate with severe dementia, 10-18 with moderate dementia, and 18-23 with mild dementia; scores of 24 and above indicate normality). Neuropsychiatric features Neuropsychiatric features have been included in studies more recently as recognition of their importance grows. One of the most popular assessments is the Neuropsychiatric Inventory (NPI), 1 which is a 12-item scale that measures a range of noncognitive features. Ratings of frequency and severity are included, giving a total score of up to 144. Activities of daily living Several scales have been developed to measure what many regard as the most important feature of Alzheimer's disease and where improvement will have a major positive impact on the life of the patient and their carer.
Scales that measure ADL include the Progressive Deterioration Scale (PDS, a 29-item assessment with a score of 1-100), 10 the Interview for Deterioration in Daily living activities in Dementia (IDDD), 11 and the Alzheimer's Disease Cooperative Study Activities of Daily Living Scale (ADCS/ADL). 12 Measures of ADL need to be sufficiently sensitive to assess activities over a range of severities, as well as being a sensitive measure of change. Global function There are two types of global function scales. First, there are those that capture the severity and stage of the disease (ie, mild, moderate, and severe) and, second, those that assess changes over the course of the illness. The Clinical Dementia Rating (CDR) 13,14 measures the stage of dementia over six domains (memory; orientation; judgment and problem solving; community affairs; home and hobbies; and personal care, which can also be summed as a "sum of boxes" score) and gives a rating of questionable dementia (0.5), mild dementia (1), moderate dementia (2), and severe dementia (3). The Global Deterioration Scale (GDS) 15 gives a similar rating of severity, but with an emphasis on the more severe forms of disease. The concept of a global assessment of change was developed to overcome the criticism that clinical trials that only measured cognitive function were failing to capture (in a global sense) the changes that were the most important to patients and their families. There are a number of measures that have been developed, all of which are based on the premise that if a clinician is able to detect a change, then that change in itself is significant. The basic format of the assessments is the same: a 7-point scale with an anchor point in the middle for no change, three measures of improvement, and three measures of deterioration (Clinical Global Impression of Change 16 ). Some standardization has been introduced, which has tended to improve the reliability of the measures (Clinicians' Interview-based Impression of Change [CIBIC] 17 ), but part of the validity is that the score reflects the view of the individual rater, rather than being a scale where answers are simply recorded onto a form. A development is the introduction of information from the caregiver, which allows the independent clinician marking the scale to reflect changes that impinge on the patient and their carer in a global sense (CIBIC+, which includes information from the carer). Cholinesterase inhibitors These drugs were introduced on the basis of ample neurochemical evidence that there is a significant acetylcholine deficit in Alzheimer's disease. One of the drugs' main actions is to inhibit the enzyme acetylcholinesterase, which breaks down acetylcholine, thus effectively raising the level of the neurotransmitter. Four drugs of this type have been established in Alzheimer's disease: tacrine, donepezil, rivastigmine, and galantamine. They vary in their pharmacological action. Tacrine is an acridine-based compound (its liver toxicity probably results from this), and donepezil is piperidine based and a selective acetylcholinesterase inhibitor, whereas tacrine and rivastigmine have significant activity on butyrylcholinesterase. Rivastigmine is a carbamate-based compound and is relatively free of drug interactions, and galantamine is an alkaloid.
Donepezil has the longest plasma half-life at about 70 hours, compared with 6 hours for galantamine, 3 hours for tacrine, and 1.5 hours for rivastigmine (the short half-life of rivastigmine has the practical advantage that the drug is excreted quickly from the body, so relief from side effects is much speedier than with the longer-acting compounds). The half-life also has implications for the daily dosing regimen: the advantage of donepezil is that it only needs to be given once a day. Tacrine This was the first drug to be introduced and, in many ways, was the gold standard by which the others were measured. The drug has positive effects on cognitive function at dosages of 160 mg/day, and benefits have been seen in terms of ADL and global function. 18,19 Unfortunately, almost half of all patients experience liver side effects, usually a rise in transaminases, and so a search began for an agent as effective as tacrine, but without such side effects. Donepezil As a piperidine-based compound, the introduction of donepezil was important because of its lack of liver side effects and the convenience of once-daily dosing. One multinational study 20 involved patients in Australia, Belgium, Canada, France, Germany, Ireland, New Zealand, South Africa, and the UK. Eight hundred and eighteen (818) patients were randomized to receive placebo (n=274), 5 mg/day donepezil (n=271), or 10 mg/day donepezil (n=273). The mean age of patients was just over 70, and they all satisfied the NINCDS/ADRDA (National Institute of Neurological and Communicative Disorders and Stroke and Alzheimer Disease and Related Disorders Association) 21 criteria for probable Alzheimer's disease. Patients with mild-to-moderate impairment were included, as assessed by an MMSE score of between 10 and 26 and a CDR of 1 (mild) or 2 (moderate). The study lasted 30 weeks: a 24-week double-blind, placebo-controlled phase followed by a single-blind placebo washout over 6 weeks. Patients started with 5 mg/day donepezil for 7 days, followed by 10 mg/day. The positive effects on the ADAS-Cog are shown in Figure 1. The percentage of patients rated as improved was 21% for 5 mg/day, 25% for 10 mg/day, and 14% for placebo. The pattern of side effects (mostly related to the digestive system, eg, diarrhea, nausea, and vomiting, understandable in terms of the physiological effect of a cholinergic drug) was the same (10%) for placebo and 5 mg/day of the drug, and double that in those taking the higher dose. The IDDD was used to assess ADL, and the drug showed a protective effect against the decline in activity that occurred with placebo. A similar USA-based study 22 was in accordance with these findings, and there was evidence that the 10 mg/day dosage was superior to the 5 mg/day dosage. The longer-term efficacy and safety of donepezil has been shown by an analysis of the continuation of the US study. 23 In total, 133 patients completed the trial, which lasted nearly 5 years and showed that the rate of deterioration in those taking the active drug was less than that of placebo, that adverse events were mild and transient, and that there was no evidence of liver toxicity. Winblad et al reported a 12-month study in 286 patients in Nordic countries in Europe. 24 Two thirds of the patients in the donepezil and placebo groups completed the study (patients took 5 mg/day donepezil for 28 days, followed by 10 mg/day). Another study, also of a year's duration, examined the effects of donepezil in preserving function over time. 25
A predetermined definition of a decline in functional status was operationalized, and it was found that those on the active drug were 5 months slower at reaching this end point than those on placebo. This was quantified as showing that the drug reduced the risk of functional decline by 38% compared with placebo. The effects of the drug have also been examined in people with more severe Alzheimer's disease, 26 with 144 patients randomized to donepezil and 146 to placebo over 24 weeks. Despite the severity of the illness, benefits were seen in terms of global measures of change, cognitive function, ADL, and psychiatric symptoms; 86% of placebo patients completed the trial, with 6% withdrawing because of adverse events, compared with 84% and 8%, respectively, in those on active drug. Rivastigmine The effect of rivastigmine has been described in a US-based study over 26 weeks in 699 patients with mild-to-moderate Alzheimer's disease. 27 Significant improvements on the ADAS-Cog compared with placebo were seen, and these were particularly marked in those taking a higher dosage (6-12 mg/day). An analysis of patients with moderate and severe Alzheimer's disease has shown that the effects are as marked in this group of subjects, and it has been suggested that patients with comorbid vascular disease gain a particular benefit. 28,29 Improvements have been seen in patients with advanced dementia and behavioral disturbances using the NPI, with at least 50% of subjects improving by a third on the scale and 44% being able to reduce or stop concurrent psychotropic medication. There were also significant benefits in ADL. A European study assessed the safety and efficacy of two dosages of rivastigmine (up to 4 mg/day and up to 12 mg/day) over 26 weeks. 30 In the rivastigmine group, 24% had improved by at least 4 points on the ADAS-Cog, compared with 16% in the placebo group; 37% of people on rivastigmine compared with 20% on placebo showed evidence of a global improvement. Figure 2 shows these changes. The effects of rivastigmine have also been demonstrated in patients with dementia of the Lewy body type. 31 Patients with this disorder appear to have a particularly profound deficit in cholinergic function, and the symptoms are characterized by significant psychiatric symptoms and behavioral disturbances. One hundred and twenty (120) patients who satisfied standard criteria for Lewy body dementia (the vast majority having fluctuating cognitive function and recurrent visual hallucinations) were recruited in the UK, Spain, and Italy (92 completed the study). Treatment started with 1.5 mg rivastigmine or placebo twice a day, increasing by 1.5 mg twice a day, for 2 weeks, until 12 mg/day or a maximum well-tolerated maintenance dosage was reached. Galantamine Galantamine has a somewhat novel, dual mode of action in that, in addition to its anticholinesterase activity, it has a modulating effect on nicotinic acetylcholine receptors in the brain, which seem to have a role in potentiating the response to acetylcholine. In Europe and Canada, Wilcock 32 reported a 6-month study of 653 patients with mild-to-moderate Alzheimer's disease, who were randomly assigned to either placebo or a maintenance dosage of galantamine of 24 or 32 mg/day. At 6 months, improvements in ADL and on the CIBIC+ were recorded. Raskind et al 33 reported on a 6-month, randomized, placebo-controlled trial followed by a 6-month extension.
Patients with mild-to-moderate Alzheimer's disease (n=636) were assigned to either placebo or an escalating dosage of 24 or 32 mg/day galantamine, followed by a 6-month, open-label study with 24 mg/day. The conclusion was that, at 24 mg/day, the drug is effective and safe in improving cognitive function and global function (Figure 3) over 6 months, and in maintaining that improvement at 12 months. A total of 978 patients were enrolled in a relatively slow escalation study described by Tariot et al. 34 A 4-week placebo run-in concluded with patients being randomized to receive placebo or 8, 16, or 24 mg/day galantamine. After 5 months, those on galantamine showed improvement on the ADAS-Cog, the CIBIC+, a number of psychiatric symptoms, and ADL. Adverse events resulting in discontinuation from the trial were found in 10% of the galantamine group and 7% of the placebo group. Coyle and Kershaw 35 carried out an analysis of the extension studies of galantamine and found that patients who had been treated with 24 mg/day throughout the trials had better cognitive function compared with those on placebo. The suggestion that stabilization occurs would be in keeping with the additional nicotinic receptor modulation activity of the drug.

What is the difference between the anticholinesterase drugs?
Direct comparison (head-to-head) trials are currently taking place to compare the three anticholinesterase drugs (tacrine is no longer marketed). Each drug has its own advantages and disadvantages, but these are often only theoretical differences reflected in the marketing of individual drugs, or represent a particular interest or scale used by investigators. For example, because of its long half-life, donepezil can be prescribed once a day. It has been suggested that galantamine delays the onset of behavioral problems and psychiatric symptoms in dementia. Rivastigmine seems to have fewer drug interactions 36 and has been shown to be effective in dementia with Lewy bodies. With regard to improvement in cognitive function, comparison of the trial results shows that the difference in ADAS-Cog scores between drug and placebo is 4.1 points for tacrine; 2.5 and 2.9 points for 5 and 10 mg/day donepezil, respectively; 4.9 points for rivastigmine (8.0 points for patients taking between 6 and 12 mg/day with moderately severe to severe Alzheimer's disease; 6.2 points for those with Alzheimer's disease and comorbid vascular risk factors); and 3.8 and 3.9 points, respectively, for 32 and 24 mg/day galantamine. The commonest adverse events are nausea, vomiting, diarrhea, anorexia, and dizziness, with rates between 5% and 15%. There is evidence to suggest that rivastigmine and galantamine (particularly at higher doses) are more likely to induce nausea, vomiting, and diarrhea, as well as dizziness, although, generally speaking, the longer the titration time, the fewer the side effects (something that agrees with clinical practice 37 ).

Glutamatergic antagonists
Glutamate is a hitherto relatively neglected excitatory neurotransmitter in the brain and is probably present in 70% of neurones. A number of different receptor types are involved, one of particular relevance to Alzheimer's disease being the N-methyl-D-aspartate (NMDA) receptor. These receptors appear to have a specific role in the plasticity of neurones and therefore a specific function in the formation of memories and learning.
In excess, glutamate is excitotoxic through its activation of NMDA receptors. There is evidence that glutamate may be involved in the pathological process of Alzheimer's disease, and its presence seems to stimulate the deposition of β-amyloid. Drugs that have a high affinity for the NMDA receptor produce side effects including schizophreniform psychoses, but those with lower receptor antagonist affinity seem only to have an influence in pathological conditions. The most widely studied of these drugs is memantine. Several double-blind studies in the early 1990s suggested a role for the drug in dementia, but more recently a European-based study and a USA-based study have shown the drug to be effective in people with moderately severe to severe Alzheimer's disease. Two studies suggest that it is effective in people with vascular dementia. The drug currently has a license under European regulations for the treatment of moderately severe to severe Alzheimer's disease, making it stand apart from the cholinesterase drugs. Significant improvements in global ratings of dementia, ADL, and cognitive function (as assessed by the Severe Impairment Battery) have been demonstrated for dosages of 10 or 20 mg/day (escalating from 5 mg/day over 1 week). The results of the clinical global impression ratings appear in Figure 4. 38 Open-label studies at the end of the double-blind phases have demonstrated that improvements can still occur when there is a delay in the initiation of treatment. The side effects of the drug tend to be quite minor, the commonest being dizziness, although confusion and hallucinations are commoner in the group taking the active drug; agitation is much commoner in people on placebo. Memantine has been used in Germany for many years, and so a significant body of safety data is available. 38 Whether the drug will be suitable for people with mild-to-moderate dementia, whether it will have a significant action against vascular dementia, and whether treatment in combination with cholinesterase drugs is an effective strategy all remain to be evaluated.

Estrogen
Estrogen has positive and beneficial effects on the brain in a number of areas. 39 There is good evidence from epidemiological work that postmenopausal women are protected against the development of Alzheimer's disease if they are taking estrogen. The evidence so far that estrogen itself helps the symptoms of Alzheimer's disease is less clear cut. The results from different studies appear to be contradictory: while some studies suggest that there is no benefit, 40-42 Asthana et al 43 have reported that estradiol may produce improvements. In a prospective study, Zandi et al 44 found that women who used hormone replacement therapy (HRT) had a lower incidence of Alzheimer's disease over 3 years' follow-up than nonusers. The distinct relationship between Alzheimer's disease risk and duration of HRT observed in this study highlights the need for continued research into the optimal regimen, dosage, and timing of HRT for possible neuroprotection. Although the combined estrogen-progestin arm of the Women's Health Initiative randomized trial was terminated because of the risk-benefit profile of that specific therapeutic regimen, the risk-benefit profile may well change if new studies confirm these results.

Statins
Epidemiological studies have suggested that people on statins have a lower rate of Alzheimer's disease compared with those not taking the drugs. 45
There is good biochemical evidence to postulate why statins may be of benefit, not least their role in reducing the influence of vascular risk factors on the degree of cognitive impairment. It may be that statins are of some benefit even if the serum cholesterol is normal. 46,47 There is insufficient evidence at this stage for the prescription of these drugs solely for an anti-Alzheimer's effect.

Ginkgo biloba
One published study 48 suggested a beneficial effect of Ginkgo biloba over placebo in people with dementia. However, the effects, while significant, are marginal and not as persuasive as those of the anticholinesterase drugs. Because Ginkgo biloba can be bought over the counter, it remains something that patients, sometimes encouraged by their carers, will take to alleviate the symptoms of dementia. Individuals often report a beneficial effect.

Other approaches
There is good evidence that oxidative damage occurs in Alzheimer's disease, and so intervention with an antioxidant may prove to be of benefit in people with Alzheimer's disease. One study has suggested that vitamin E delays the progression of Alzheimer's disease, 49 and several reports have now documented that high levels of homocysteine (probably reflecting poor intake of vitamin B12 and folate) are associated with Alzheimer's disease. 50,51 Vitamin C may also have some benefit in protection against Alzheimer's disease. Antioxidant vitamin intake does seem to reduce the incidence of the disease, particularly regimens including vitamin E. 52,53 There was much publicity recently when a vaccine was introduced for Alzheimer's disease, which potentially had an antiamyloid effect. 54 However, clinical studies have been suspended because some patients in these trials developed inflammation of the central nervous system. Recent negative publicity in the UK surrounding the combined measles, mumps, and rubella (MMR) vaccine has probably had the effect of directing public enthusiasm away from vaccinations. As case-control studies become more popular and epidemiological databases documenting risk factors can easily be interrogated, the number of other risk or protective factors described for Alzheimer's disease has increased. Mental and physical exercise is protective, 55 red wine is protective, 56 moderate alcohol intake of any type seems to be of benefit, and, most recently, drinking coffee appears to reduce the rate of Alzheimer's disease. 57 One specific study looked at the control of blood pressure and showed that rates of dementia can be significantly reduced in this way. 58 Chelation of metals may also have a beneficial effect. Translating these epidemiological findings into changes in people's lifestyles, or even into a treatment strategy, is still a long way off.
Poroid hidradenoma of the scalp

Poroid hidradenoma has features of both hidradenoma and poroma: the histological framework of a hidradenoma, consisting of solid and cystic components, and the poroid and cuticular cells of a poroid neoplasm. Although it transforms into a malignant neoplasm in < 1% of cases, its histological characteristics may resemble those of malignant neoplasms. Although the risk of malignant transformation is very low, surgical excision is recommended to prevent growth and/or recurrence. To date, very few cases of poroid hidradenoma have been reported in the literature. Herein, we present a case of poroid hidradenoma on the scalp of a 74-year-old woman.

INTRODUCTION
Poroid hidradenoma is a benign neoplasm with eccrine differentiation, first described by Abenoza and Ackerman in 1990 [1]. These usually benign and asymptomatic neoplasms rarely become malignant. The age at onset ranges from 20 to 70 years, with a peak incidence in the 7th decade and no observed sex difference. The neoplasm is typically well circumscribed within the dermis, with a diameter between 1 and 2 cm and a slightly reddish, round appearance [2]. Although it is most commonly found on the trunk, it can also occur on the extremities, scalp, and face. There are few reported cases of poroid hidradenoma on the scalp. This study reports one such case in a woman in her seventies.

CASE REPORT
A 74-year-old woman presented to our hospital with a mass on her scalp, which she had had for 40 years. She had first noticed the mass about 40 years prior, at which time caustic soda was applied to it at a beauty shop; over the last 3 years the mass had grown rapidly and become ulcerated. The patient had a complex medical history of hypertension, arrhythmia, and thyroid cancer treated by total thyroidectomy, and she was maintained on oral nifedipine for hypertension. Clinical assessment revealed a firm, fixed, protruding mass measuring about 2.0 × 3.0 × 0.5 cm with ulceration and pigmented nodular lesions (Fig. 1). The clinical diagnosis was nevus with pyogenic granuloma, and the lesion was surgically excised. Gross examination suggested a fibrogranuloma. However, because caustic soda had previously been applied to the lesion and ulceration was observed throughout, we decided to perform a frozen biopsy during surgery. We performed duplicate tissue excision and frozen biopsy to assess cell malignancy and local invasion (Fig. 2). The first simple excision and intraoperative frozen biopsy revealed a poroid hidradenoma with atypia and abundant tumor cells at the anterior and posterior resection margins, but not at the right and left sides (Fig. 3A). Since this biopsy could not rule out basal cell carcinoma, we performed wide excision with a safety margin of 5 mm (Fig. 3B). The surgical wound was then closed directly without complications. The pathology report confirmed poroid hidradenoma with cellular atypia and clear resection margins. Histopathologically, there was a well-circumscribed neoplasm composed of small dark poroid cells and larger, paler cuticular cells with clear cytoplasm, with no connection to the overlying epidermis (Fig. 4). On immunohistochemistry, the tumor stained positive for high-molecular-weight cytokeratin and epithelial membrane antigen, and negative for membrane-bound carcinoembryonic antigen, low-molecular-weight cytokeratin, CK20, and CD34. At 1-year follow-up, she remains asymptomatic and without evidence of recurrence.
The surgical wound healed uneventfully, leaving only a scar and slight alopecia (Fig. 5).

DISCUSSION
This case report documents a rare presentation of poroid hidradenoma on the scalp. In 1990, Abenoza and Ackerman [1] described four poroid neoplasm variants according to the location of the neoplastic cells: poroid hidradenoma, eccrine poroma, dermal duct tumor, and hidroacanthoma simplex. These tumors derive from eccrine glands, and the variants are distinguished by the location of the tumor cells relative to the epidermis and dermis [3]. Poroid hidradenoma is a tumor with solid and cystic components in which the neoplastic poroid cells are located entirely within the dermis, without connection to the epidermis [4]. Eccrine poroma is a lesion with a clear margin between adjacent normal epidermal keratinocytes and a population of smaller cuboidal cells, usually with darker nuclei, protruding down into the underlying dermis [5]. The dermal duct tumor resembles hidroacanthoma simplex, but its tumor cells are located in the dermis, whereas hidroacanthoma simplex is characterized by nests of clearly discrete, small, rounded cells within the epidermis. Despite the rarity of reported malignancy with poroid hidradenoma, it can develop into eccrine porocarcinoma as a primary lesion, or be misdiagnosed as a malignant subcutaneous neoplasm and excised with a skin-sparing procedure [6]. Further, it is not yet known whether additional safety margins should be used during excision. We considered a high probability of pyogenic granuloma in the present patient because of the history of caustic soda use and the ulceration present at the time of the hospital visit. Therefore, during surgery, frozen biopsies were performed to evaluate the possibility of skin cancer; once the diagnosis was established, a treatment plan of additional wide excision and evaluation was implemented. As discussed above, the most common preoperative misdiagnoses for poroid hidradenoma are pyogenic granuloma and soft fibroma. Correct diagnosis is difficult on clinical grounds alone because poroid hidradenoma originates from dermal tissue; radical surgery is therefore recommended. As poroid hidradenoma is often clinically misdiagnosed, we recommend frozen biopsy and total mass excision, including the overlying skin and surrounding adipose tissue down to the superficial fascia, to prevent recurrence.
Organic Fertilizer Alleviates Salt Stress in Shallot by Modulating Plant Physiological Responses

Salinity is a major constraint on crop productivity as it reduces agricultural land area. This problem can be ameliorated by the application of organic materials such as manure, which plays an important role in supporting plant growth and reducing soil toxicity by binding toxic compounds. The purpose of this study is to analyse the effect of manure in overcoming the impact of salt stress on shallots. A randomised block design (RBD) consisting of 2 factors and 3 replications was used. The first factor is salinity level (0, 50, 100, and 150 mM), and the second is manure dose (0, 10, and 20 t·ha⁻¹). This study finds that the application of 20 t·ha⁻¹ of manure decreases the shallot's leaf tissue thickness, whereas 50 mM salinity significantly increases it. The application of 20 t·ha⁻¹ of manure increases the shallot's number of tillers and bulbs, while 100 mM salinity significantly decreases the number of tillers. The application of 10 t·ha⁻¹ of manure decreases the proline and flavonoid content of the leaves. In addition, plants both treated and not treated with manure have higher proline and flavonoid levels in their leaves under 50 mM salinity. Therefore, shallots can grow under saline conditions if manure is applied.

INTRODUCTION
Shallot (Allium ascalonicum) is a horticultural crop that is widely cultivated and used for food and medicine throughout the world (Solouki et al., 2023). According to Statistics Indonesia (2023), the shallot harvest area in Indonesia in 2022 was 184,386 ha, down from 194,575 ha in the previous year. Shallot production can be increased by expanding the planting area, as Indonesia has about 440,330 ha of saline land (Usnawiyah et al., 2021) that has not been well utilised. However, some special treatments are required. They involve correct plant species selection, soil property improvement through water management, mechanical soil management, chemical improvement with the addition of gypsum and sulphur, and the use of mulch and organic materials, not to mention improvements in farmer awareness (Karolinoerita and Annisa, 2020).
High salinity adversely affects the morphology, physiology, and yield of shallot plants (Shoaib et al., 2018): it prolongs shoot emergence, slows leaf growth, reduces height, and alters bulb shape while decreasing bulb size and weight (Alam et al., 2023). Salinity determines the plant's ability to grow because it damages cells. It inhibits plant growth by creating imbalances in nutrient ion balance and causing ion toxicity, in addition to diminishing water availability, respiration rate, mineral distribution, membrane stability, turgor pressure, growth rate, and yield (Golldack et al., 2014; Makhloufi et al., 2014). High salinity impairs water uptake by plants owing to the presence of salts around the roots, which also causes oxidative stress. The impact of salinity on the physiological condition of shallots includes changes in proline levels and decreases in phenolic compounds and pyruvic acid precursors (Hanci et al., 2016). Highly saline water impairs membrane stability and reduces relative water content, total chlorophyll content, and carotenoid content (Venâncio et al., 2022). The impact of soil salinity on plant physiology can be reduced by the addition of organic matter such as manure, as it can improve soil conditions to levels at which plants can grow. Organic matter added to a saline environment can improve osmotic regulation between plant root cells and the soil nutrient solution, thereby reducing the effects of saline stress (Adil Aydin, 2012; Morrissey et al., 2014). Organic amendments are known to reduce the effects of salinity with the help of soil microorganisms and to positively influence microbial activity and nutrient cycling (Wichern et al., 2020). The improved soil nutrition that follows the addition of manure results from microbial activity that breaks down organic matter into nutrients needed by plants. The use of compost, supplying especially K, P, Zn, and N, can increase soil nutrient content so that the number and weight of tubers increase (Yan et al., 2018; Showler, 2022).

The availability of nutrients in saline soils can be increased by adding organic matter, which greatly affects the growth of plants, especially shallots. The response of shallots to highly saline conditions is related to the activity of aquaporin genes, especially PIP2, which is related to Zn uptake (Solouki et al., 2023). Another study on shallot resistance to such conditions was conducted by Sanwal et al. (2022), who associated tolerance with higher antioxidant enzyme activity and lower H2O2 production and lipid peroxidation. Shallot tolerance to salinity can be increased by adding Si fertiliser: Venâncio et al. (2022) found that Si application increases the yield, bulb freshness, and bulb size of shallots (≥ 50 mm) and decreases salinity stress to 2.8 dS m⁻¹. Solouki et al. (2023) found that a significant decrease in shallot growth occurred only when plants were treated with salt concentrations of 50 mM. Saline land can be utilised for shallot cultivation by adding organic matter in the form of manure, which improves osmotic regulation.
MATERIALS AND METHODS
This research was conducted from August to November 2023 in Poncokusumo, Malang Regency, at an altitude of 500-600 m above sea level. The materials were shallot var. Tajuk, NaCl, goat manure, basic fertiliser (N, P, and K), and polybags with a diameter of 20 cm. The study used a randomised block design (RBD) consisting of 2 factors and 3 replications. The first factor was salinity level (0, 50, 100, and 150 mM), and the second was manure dose (0, 10, and 20 t·ha⁻¹).

The soil media were given NaCl dissolved according to the salinity treatment levels of 50, 100, and 150 mM and were then mixed with manure. Evenly mixed planting media were put into polybags with a diameter of 20 cm, weighing ± 5 kg, and given a bucket base. Healthy, fresh, dense, unwrinkled, and brightly coloured shallot seedlings were used to accelerate sprouting and ensure uniform growth; they were cut at the top third. Planting was done one week after filling the planting medium, with one shallot bulb per polybag. Plant care included watering; N, P, and K fertilising (starting 7 days after planting, applied 6 times at 7-day intervals); weeding; and pest management (starting right after planting).

Observations were made on leaf epidermal tissue thickness, proline content, leaf flavonoid content, tiller number per clump, and tuber number per clump. The thickness of the leaf epidermal tissue was measured using the paraffin method of Nakamura (1995) with semi-permanent preparations. Proline levels were measured using the method proposed by Bates et al. (1973). Leaf flavonoid levels were measured using the spectrophotometric (quercetin) method of Lindawati and Ma'ruf (2020). The number of tillers and tubers per clump was counted using the destructive method. The data were analysed using analysis of variance (F-test) at the 5% level to determine whether the treatments had a significant effect, followed by the honestly significant difference (HSD) test at the 5% level.
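To make the analysis pipeline concrete, the sketch below shows how such a two-factor randomised block design can be analysed with an F-test followed by Tukey's HSD. The variable names and the synthetic data are illustrative placeholders, not the study's measurements.

```python
# Illustrative sketch of the analysis described above: ANOVA for a randomised
# block design with two factors (salinity x manure, 3 blocks/replications),
# followed by Tukey's HSD at the 5% level. All data are synthetic placeholders.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm
from statsmodels.stats.multicomp import pairwise_tukeyhsd

rng = np.random.default_rng(1)
rows = []
for block in range(1, 4):                  # 3 replications (blocks)
    for salinity in (0, 50, 100, 150):     # mM NaCl
        for manure in (0, 10, 20):         # t/ha
            rows.append({
                "block": block, "salinity": salinity, "manure": manure,
                # placeholder response, e.g. leaf proline content
                "proline": 1.0 + 0.01 * salinity - 0.02 * manure
                           + rng.normal(0, 0.1),
            })
df = pd.DataFrame(rows)

# F-tests (5% level) for the block term, both main effects, and the interaction
model = smf.ols("proline ~ C(block) + C(salinity) * C(manure)", data=df).fit()
print(anova_lm(model))

# Tukey HSD among salinity levels (the same call works for manure doses)
print(pairwise_tukeyhsd(df["proline"], df["salinity"].astype(str), alpha=0.05))
```

In a full analysis, the same calls would be repeated for each observed trait (epidermal thickness, proline, flavonoids, and tiller and tuber numbers).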
Leaf epidermal thickness
Manure doses and salinity levels significantly affect the thickness of the shallot leaf epidermis. The addition of 20 t·ha⁻¹ of manure significantly reduced the thickness of the shallot leaf tissue (Table 1 and Figure 1). This is caused by increased plant growth resulting from the higher nutrient content supplied by manure (Agegnehu et al., 2017), as the cells formed tend to be larger but less dense (Nath et al., 2010). Salinity levels are directly proportional to the increase in shallot leaf epidermal thickness. Concentrations of 50, 100, and 150 mM significantly increased the thickness of the leaf epidermis compared with plants not exposed to salinity. The leaf epidermal thickness of shallots under 50 mM salinity was not significantly different from that under 100 mM but was significantly different from that under 150 mM; thickness under 100 and 150 mM salinity did not differ significantly. Salt-tolerant plants can initiate protective mechanisms that allow them to grow in saline environments (Zandalinas et al., 2017). Epidermal tissue can thicken as a form of structural adaptation to the environment, including saline conditions (Ozturk et al., 2021). Thick leaf epidermal tissue can help reduce water losses and increase the production of anti-stress compounds that protect plants from damage caused by salinity (Bhattacharya, 2021).

Leaf proline level
Manure doses and salinity levels have a significant interaction effect on the average proline content of shallot leaves (Table 2). The average proline content of the leaves decreased with the addition of manure. The addition of 10 t·ha⁻¹ of manure reduced the leaf proline content, but the reduction was not significantly different from that achieved with 20 t·ha⁻¹ in the treatment without salinity. Under 50, 100, and 150 mM salinity, the addition of 10 t·ha⁻¹ of manure also significantly reduced leaf proline levels compared with plants without manure. The addition of 20 t·ha⁻¹ of manure resulted in the lowest leaf proline levels, lower than with 10 t·ha⁻¹. These results are in line with the findings of Sanwal et al. (2022) that salinity stress is one of the factors affecting proline levels, since levels of soluble salts and sodium decrease with the addition of manure and gypsum (Foronda and Colinet, 2022). The use of organic matter can maintain good soil structure, increase cation exchange capacity, serve as a soil nitrogen reservoir, increase water retention, and enhance mineralisation (Havlin and Heiniger, 2020).

In contrast to the effect of manure, increasing salinity raised the average proline content of shallot leaves. Salinity of 50, 100, and 150 mM significantly increased proline levels in leaves compared with plants not exposed to salinity. Salinity of 150 mM produced higher proline levels than 50 or 100 mM, with or without manure. This shows that salinity increases the amount of proline in shallot leaves. In addition to salinity, proline levels in Allium cepa L.
shoots increased under alkaline soil conditions (Sivasamy et al., 2022). The impact of salinity on the physiological condition of Allium ascalonicum includes increased proline levels and decreased phenolic compounds and pyruvic acid precursors (Mohamed and Aly, 2008; Hanci et al., 2016). The increase in proline levels in Allium cepa shoots reflects plant regulation to maintain cellular osmotic balance and survive the oxidative damage caused by salinity stress (Solouki et al., 2023). Sanwal et al. (2022) also revealed that salinity stress is one of the factors affecting proline levels.

Leaf flavonoid level
Manure doses and salinity levels have an interaction effect on the flavonoid levels in shallot leaves (Table 3). Leaf flavonoid levels decreased with the addition of manure. The addition of 10 and 20 t·ha⁻¹ of manure to plants not exposed to salinity, and to plants under 50 and 100 mM salinity, significantly reduced leaf flavonoid levels. For plants under 150 mM salinity, the addition of 10 t·ha⁻¹ of manure reduced flavonoid levels, but only the addition of 20 t·ha⁻¹ did so significantly (Table 3). Flavonoid levels in shallot leaves can be influenced by various other factors such as plant species, environmental conditions, and interactions with microorganisms (Anh et al., 2023). The type of soil or place of growth also affects the content of substances formed in plants (Hawari et al., 2022). The addition of manure can affect flavonoid production in plants. Tissues containing flavonoids produce orange, pink, and red spots (Riyana et al., 2018). Environmental stress caused by biotic and abiotic factors affects the production of secondary metabolites and generally increases it (Mazid et al., 2011). The formation of secondary metabolites is a protective response of plants to environmental stress (Ramakrishna and Ravishankar, 2011).

In contrast to the effect of manure, salinity increased flavonoid levels. Plants that received no manure or 10 t·ha⁻¹ of manure produced more flavonoids in their leaves under 50 mM salinity than plants not exposed to salinity stress. Plants under 100 mM salinity without manure produced more flavonoids, at a level significantly different from plants under 50 mM salinity but not from plants under 150 mM. Plants under 50 mM salinity that received 20 t·ha⁻¹ of manure had higher flavonoid levels than those not exposed to salinity. Meanwhile, plants receiving 10 to 20 t·ha⁻¹ of manure under 150 mM salinity had the highest flavonoid levels. Abdelrahman et al.
(2020) found that shallots reprogramme their metabolism towards high accumulation of amino acids and flavonoids as an adaptive response. Flavonoids not only provide protection against harmful abiotic factors but also facilitate interactions with other plants and microorganisms owing to their physical and biochemical properties (Khalid et al., 2019). A systematic review of the therapeutic uses of shallots highlights their high flavonoid content and antioxidant activities (Moldovan et al., 2022). Flavonoids have a role in frost hardiness and drought resistance and play a functional role in the heat acclimatisation of plants. Flavonoids in plants act as antioxidants, antimicrobials, photoreceptors, visual attractants, food repellents, and light filters (Panche et al., 2016).

Number of tillers
Manure doses and salinity levels have a significant effect on the number of tillers in shallots. The average number of shallot tillers per clump increased with manure application at all plant ages (Table 4). The addition of 20 t·ha⁻¹ of manure significantly increased the number of tillers. Onion bulbs increase with increasing application of manure (Díaz-Pérez et al., 2018). Manure added to soil has a positive effect on shallot growth (Ikrarwati et al., 2021; Bijay-Singh and Sapkota, 2022). Manure application can also increase soil biological activity, which facilitates nutrient cycling and particle aggregation, resulting in better soil health and plant growth (Hoffland et al., 2020). Increasing salinity caused a decrease in the number of shallot tillers per clump at 4 to 6 weeks after planting (Table 4). The number of tillers of shallots treated with 100 mM salinity was lower than that of plants not exposed to salinity, as salinity inhibits plant growth. Inhibited growth is an adaptive mechanism for survival that allows plants to resist salt stress (Munns, 2002). A decrease in the number of tillers can lead to a decrease in the number of tubers per plant, thus negatively affecting yield (Venâncio et al., 2022). When Allium tuberosum plants are stressed with NaCl, their growth is inhibited and their yield decreases (Liu et al., 2022). Na⁺ is particularly detrimental at high concentrations in the cytosol of leaf cells because it interferes with metabolic processes such as photosynthesis (Schmöckel and Jarvis, 2016). High salinity can reduce crop production and further growth, as well as cause physiological abnormalities, ultimately threatening global food security (Balasubramaniam et al., 2023).

Bulb weight per clump
Manure doses and salinity levels have a significant interaction effect on the weight of shallot bulbs per clump. The average weight of shallot bulbs per clump increased with the addition of manure (Table 5). Shallot plants, whether unexposed to salinity or treated with salinity levels of 50, 100, or 150 mM, had higher bulb weights when given 20 t·ha⁻¹ of manure than when given none. Köninger et al. (2021) found that manure significantly increases the nutrients needed by plants for growth and tuber formation. In addition, manure can also increase the photosynthetic capacity of plants, which contributes to the formation of larger tubers (Lasmini et al., 2022). Increased salinity caused a decrease in shallot bulb weight per clump. Shallot plants, both those receiving no manure and those receiving 10 t·ha⁻¹ of manure, had lower bulb weight under 50 mM salinity than those not exposed to salinity stress. Plants under 100 mM salinity that received 20 t·ha⁻¹ of manure had lower bulb weight than those not exposed to salinity stress. The addition of organic matter to saline soils can increase the availability of nutrients in the soil, which greatly affects the growth of plants, especially shallots (Alam et al., 2023). Increased salinity of irrigation water has a negative impact on the physiology of 'Rio das Antas' shallot plants, namely decreased bulb fresh weight, bulb production, bulb yield, and water-use efficiency (Venâncio et al., 2022). Salinity can damage plant roots, resulting in a reduced ability to absorb water and nutrients (Zhou et al., 2023). It also causes dehydration, so the plants become weak and unable to produce large tubers. Salinity can affect plant growth, as indicated by a decrease in plant dry weight (Suharjo et al., 2021). A significant decrease in shallot growth occurs when the plant is treated with salt at a concentration of 50 mM (Solouki et al., 2023); such conditions can inhibit plant growth and yield, including bulb diameter. Salinity negatively affects the growth and yield of shallot plants, including bulb weight per clump (Venâncio et al., 2022), and thereby impacts the growth of shallot plants (Solouki et al., 2023).

CONCLUSIONS
Shallots can grow well under saline conditions if they are treated with manure. Plants under 50 mM salinity treated with 20 t·ha⁻¹ of manure had lower proline and flavonoid levels than those under 100 and 150 mM salinity. Plants under 150 mM salinity produced higher proline and flavonoid levels and smaller tubers. Based on these findings, further research on manure doses and salinity levels is needed.
Figure 1. Leaf epidermal thickness of shallot leaves by manure dose and salinity level.
Table 1. Average leaf epidermal thickness due to manure dose. Note: numbers accompanied by the same letter in each row are not significantly different based on the HSD test at the 5% level.
Table 2. Interaction between manure and salinity on average proline content in shallot leaves. Note: numbers accompanied by the same lowercase letter in the same row are not significantly different based on the HSD test at the 5% level; numbers accompanied by the same capital letter in the same column are not significantly different based on the HSD test at the 5% level.
Table 3. Interaction between manure and salinity on average flavonoid content in shallot leaves. Note: as for Table 2.
Table 4. Average number of shallot tillers per clump due to manure dose and salinity level.
Table 5. Interaction between manure and salinity on the average weight of shallot bulbs per clump. Note: as for Table 2.
High honeybee abundances reduce wild bee abundances on flowers in the city of Munich

The increase in managed honeybees (Apis mellifera) in many European cities has unknown effects on the densities of wild bees through competition. To investigate this, we monitored honeybees and non-honeybees from 01 April to 31 July 2019 and 2020 at 29 species of plants representing diverse taxonomic and floral-functional types in a large urban garden in the city of Munich in which the same plant species were cultivated in both years. No bee hives were present in the focal garden, and all bee hives in the adjacent area were closely monitored by interviewing the relevant bee keepers in both 2019 and 2020. Honeybee numbers were similar in April of both years, but increased from May to July 2020 compared with 2019. The higher densities correlated with a significant increase in shifts from wild bee to honeybee visits in May/June/July, while visitor spectra in April 2019 and 2020 remained the same. Most of the species that experienced a shift to honeybee visits in 2020 were visited mostly or exclusively for their nectar. There were no shifts towards increased wild bee visits in any species. These results from a flower-rich garden have implications for the discussion of whether urban bee keeping might negatively impact wild bees. We found clear support that high honeybee densities result in exploitative competition at numerous types of flowers.

Introduction
It is notoriously difficult to provide unambiguous evidence of competition, particularly in mobile organisms (Goulson 2003). Because of this, there is no clear agreement on whether increased honeybee densities have a negative impact on wild bee diversity or abundance via exploitative competition for nectar and pollen (Gunnarsson and Federsel 2014; Lindström et al. 2016; Geslin 2017; Mallinger et al. 2017; Wojcik et al. 2018). Most studies so far have focused on agricultural settings to address the question of resource overlap and competition between honeybees and wild bees. In Central Europe, however, cities are now a refuge for several species of wild bees (Sirohi et al. 2015; Banaszak-Cibicka et al. 2018; Hofmann et al. 2019), and some have higher bee diversities than similarly sized arable areas or forest, probably because of high plant diversity, a longer-lasting flowering season, and the near-absence of pesticides and herbicides. The new role of cities as refugia for wild bees raises the question of whether the current increase in urban honeybee keeping (Lorenz and Stark 2015) might negatively impact wild bees in cities by depleting their nectar and/or pollen resources. The question is difficult to answer, because the European dark honeybee (Apis mellifera mellifera) is a native European species that has coexisted with European wild bee species for thousands of years (Dams 1978), during which time both groups simultaneously had to cope with numerous changes in flower abundances and local climate. To detect significant ongoing changes in foraging competition between honeybees and wild bees, data are required from settings in which the abundances of honeybees change, but those of floral resources and wild bee nesting sites do not. Here we report such data from two flowering seasons in a botanical garden in an urban setting in which it was possible to monitor wild bee and honeybee visits in a wide range of plant species.
The plants were studied at the same locations and with the same methods in both years (during short intervals distributed over numerous sunny days, for a total of about 9 h/species), and honeybee numbers were estimated by monitoring all hives in the surrounding area and interviewing their owners. The expectation was that, under food competition, increased honeybee densities at a particular flower species would shift the relative proportions of wild bees at that plant and time. The bee-rich garden in which our study was conducted contains no bee hives, so that all foraging honeybees come from the surrounding area. This experimental set-up captures the situation in many European cities in which bees from hives on roofs and balconies forage in nearby parks, private gardens, or allotment gardens (Beckedorf 2015; Wojcik et al. 2018). Given the lack of data on the effects of urban bee keeping on wild bees (Geslin 2017; Wojcik et al. 2018), we designed this study to help inform conservation and management measures in cities.

Materials and methods
The study took place in the Munich Botanic Garden from 01 April to 31 July 2019 and 2020. The garden opened in May 1914, covers about 21 hectares, and borders on the 210-hectare Nymphenburg Palace Park at 48°09′45″ N and 11°30′06″ E, at 500 m above sea level. It is currently home to 106 bee species (including the honeybee), whose abundances were scored in 2015-2017 by repeated monitoring walks. Several cavity nest boxes for solitary bees are located in the garden, but no honeybee hives have ever been placed there. The botanical garden provides a flower-rich habitat with thousands of native and cultivated species and varieties in flower beds and near-natural meadows throughout the year. Its layout of paths and beds is protected as a cultural monument, and all beds are watered and cared for by 44 gardeners, whose professional task and goal is to maintain a beautiful display of healthy plants all year long. Since 1795, 324 species of bees have been recorded from Munich, and 123 from the Botanical Garden from 1997-2017 (79 species in 1997-1999 and 106 in 2015-2017, with an overlap of 62 species; Bembé et al. 2001). From 01 April until 31 July 2019 and 2020, we counted bees that alighted and foraged on the flowers of 14 species in April and May, and 15 species in June and July; one plant species (Nepeta mussinii) was observed in both April and May. Plants were observed at the same sites in both years and had the same distances to the surrounding honeybee hives in both years. Bees were counted during many 5-min intervals on 15-50 flowers or inflorescences, for a total of about 9 h per species, with the number of flowers chosen so that all bees could be seen and counted with precision. Observations were made only on dry, sunny, or at most slightly overcast days. Herbarium vouchers were made of each species and deposited in the Munich herbarium. In both years, all four bee keepers in the Nymphenburg Palace Park (S. Fritz, M. Högner, A. Kromer, and Mr. Kostrow) were interviewed about the health and size of their bee hives.

Results
In total, we observed 9,328 honeybees and 6,460 wild bees over 172 h in 2019, and 18,630 honeybees and 6,281 wild bees over 264 h at the same 29 plant species in 2020 (Table 1). The focal plants represented different taxonomic and floral-functional types (Fig. 1),
including native species and horticultural forms, species adapted to bee pollination (e.g., the Lamiaceae Lavandula angustifolia, Leonurus cardiaca, and Stachys byzantina, and Asteraceae such as Taraxacum), species pollinated by other insects, such as flies and butterflies, in their native habitats (e.g., Hyacinthus), and species from areas naturally devoid of honeybees (e.g., the New World Dahlia, Echinacea, and Mahonia aquifolium). The species and densities of other flowering plants (not monitored for this study) present in the botanical garden were similar in both years. Honeybees were observed at all plant species (Table 1) and at all distances from the hives (Fig. 2). The resource overlap within the habitat, i.e., the percentage of plant species used by both honeybees and wild bees, was almost complete, suggesting food competition. Two species, Helianthemum and Cotoneaster, were visited in 2019 by both wild bees and honeybees, but in 2020 only by honeybees (Table 1). Twelve (41%) of the 29 species were visited as pollen and nectar sources; five (17%) were only pollen sources; and 11 (38%) were only nectar sources. The only plant used differently in the two study years was Narcissus pseudonarcissus, which in 2019 was visited for both pollen and nectar, but in 2020 only for pollen. Two of the nine species (22%) that experienced a shift in their visitor spectra in 2020 compared with 2019 were pollen-only sources, six (67%) were nectar-only sources, and one (11%) was exploited for both its pollen and nectar, implying that seven (78%) of the species that experienced a shift to fewer wild bees in 2020 were visited for their nectar. In April 2019 and 2020, honeybee densities remained identical (Table 2), but they increased from May to July 2020 compared with 2019 (Table 2). This increase correlated with significantly fewer wild bee visits in nine of the 20 May/June/July-flowering species, while visitor spectra did not change in the ten April-flowering species (Table 1, which shows 30 observations because N. mussinii was observed in both April and May; χ² = 6.43, df = 1, P < 0.05). All observed shifts in visitor spectra were in the direction of increased honeybee numbers (Table 1).
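The reported test statistic can be reproduced from the counts given above (shifts in 9 of 20 May/June/July-flowering species versus 0 of 10 April-flowering species); a minimal sketch, assuming a 2 × 2 contingency test without Yates' correction.

```python
# Reproducing the chi-square test from the counts reported in the text:
# 9 of 20 May/June/July-flowering species shifted towards honeybee visits,
# while 0 of 10 April-flowering species did (30 observations in total).
from scipy.stats import chi2_contingency

table = [[9, 11],   # May-July species: shifted, not shifted
         [0, 10]]   # April species:    shifted, not shifted
chi2, p, dof, expected = chi2_contingency(table, correction=False)
print(f"chi2 = {chi2:.2f}, df = {dof}, p = {p:.3f}")
# -> chi2 = 6.43, df = 1, p = 0.011 (significant at the 0.05 level)
```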
Discussion
Despite the large diversity and abundance of flowers available at our study site, a 21-hectare botanical garden, we found a significant negative relationship between the densities of honeybees and those of flower-visiting wild bees, almost regardless of flower type (Fig. 1; Tables 1, 2). That the higher resource depletion by foraging honeybees in May, June, and July 2020 compared with 2019 negatively affected the abundances of foraging wild bees matches evidence that the experimental addition of honeybee colonies negatively impacts bumblebees that overlap with honeybees in resource use (Wojcik et al. 2018). Per year, a honeybee colony harvests 10-60 kg of pollen and 20-150 kg of honey, which translates to 5-9,000 kg of pollen and 10-22,500 kg of honey per km² per year (Goulson 2003). These numbers suggest that honeybees must use a substantial proportion of floral resources at any one time and place, and, as our data show (Table 1), food competition occurred not only at flowers providing both nectar (sugars) and pollen (protein) but also at flowers that provide only pollen. Bees are often more taxonomically restricted in their pollen collection than in their nectar collection (Cane and Sipes 2006); however, only 23% of the 445 wild bees that occur in Germany (and for which data on pollen preferences are available) are pollen specialists (Hofmann et al. 2019). Most European wild bees are also much smaller than honeybees and have short average flight distances, which further decreases their ability to avoid competition by foraging at more distant plant populations.

Although our study demonstrates the depressing effect of increased honeybee densities on the simultaneous proportions of wild bees at flowers of the same species, we lack data on the fitness consequences of this observation. It is plausible that in the summer of 2020 wild bees had to travel further and/or use less profitable flowers than in 2019, but determining whether this had non-trivial effects on their fitness would require competitive exclusion experiments combined with longer-term studies of wild bee populations. To our knowledge, no such study has been carried out (Steffan-Dewenter and Tscharntke 2000; Goulson 2003; Wojcik et al. 2018). That the visitor shifts observed in 2020 might instead have been due to lower abundances of wild bee species, or to higher or lower flower densities, seems implausible given the complete consistency of the direction of shifts (from wild bees to honeybees) throughout all three months with higher honeybee densities (Tables 1, 2) and the rich flower diversity and abundance in the botanical garden. Based on the present results from a resource-rich urban garden, caution should be used when introducing high densities of Apis mellifera to cities. The city of Paris in 2018 harboured 7 hives/km², Berlin in 2014 had 6 hives/km², and Hamburg in the same year 5-6 hives/km² (Beckedorf 2015), with an increase in the latter two cities of 125% between 2007 and 2014. In our study area, the hives present in 2019 and 2020 are documented in Table 2.

[Fig. 1 panel assignments: plant nos. 5, 6, 10, 11, 16, 22, 24, 26 and 28; D: plant nos. 3, 4, 7, 9, 12, 17, 18, 19, 20, 21, 23, 27 and 29; E: plant no. 8; and F: plant nos. 1, 13 and 15]
Table 2. Honeybee hives near the botanical garden Munich in 2019 and 2020 (see Fig. 2 for the location of hives). The numbers of bees per hive were estimated by the four bee keepers who owned the hives ("Materials and methods"). The increase in June 2020 is due to natural reproduction (swarming of bees).
Manual restrictions on Palaeolithic technological behaviours

The causes of technological innovation in the Palaeolithic archaeological record are central to understanding Plio-Pleistocene hominin behaviour and temporal trends in artefact variation. Palaeolithic archaeologists frequently investigate the Oldowan-Acheulean transition and technological developments during the subsequent million years of the Acheulean technocomplex. Here, we approach the question of why innovative stone tool production techniques occur in the Lower Palaeolithic archaeological record from an experimental biomechanical and evolutionary perspective. Nine experienced flintknappers reproduced Oldowan flake tools, 'early Acheulean' handaxes, and 'late Acheulean' handaxes while pressure data were collected from their non-dominant (core-holding) hands. For each flake removal or platform preparation event performed, the percussor used, the stage of reduction, the core securing technique utilised, and the relative success of flake removals were recorded. Results indicate that more heavily reduced, intensively shaped handaxes with greater volumetric controls do not necessarily require significantly greater manual pressure than Oldowan flake tools or earlier 'rougher' handaxe forms. Platform preparation events do, however, require significantly greater pressure relative to either soft or hard hammer flake detachments. No significant relationships were identified between flaking success and pressure variation. Our results suggest that the preparation of flake platforms, a technological behaviour associated with the production of late Acheulean handaxes, could plausibly have been restricted prior to the emergence of more forceful precision-manipulative capabilities than those required for earlier lithic technologies.

INTRODUCTION
The production and use of flaked stone tools were likely important to the survival of Palaeolithic hominins. The potential influence of these manually demanding behaviours on the evolution of the human hand has long been recognised (Napier, 1962; Marzke, 1983; Marzke, 1997; Marzke, 2013; Williams, Gordon & Richmond, 2010; Rolian, Lieberman & Zermeno, 2011; Key & Lycett, 2011; Kivell, 2015; Almécija & Sherwood, 2017; although see Almécija & Alba, 2014). Recent research has also demonstrated how the manual anatomy and associated biomechanical capabilities of different hominin species may have influenced the nature of the Palaeolithic archaeological record (Marzke & Shackley, 1986; Rolian, Lieberman & Zermeno, 2011; Domalain, Bertin & Daver, 2017; Key et al., 2017; Patiño et al., 2017; Key & Lycett, in press). That is, the types, forms, and technological strategies of stone tool artefacts may have been limited by, or preferentially selected for as a result of, how effectively hominins could use the hand when manipulating or securing lithic objects. Research concerning how the evolution of the hominin hand may have been influenced by stone tool production and use has been reviewed in detail elsewhere (Marzke, 1997; Marzke, 2013; Kivell, 2015; Almécija & Sherwood, 2017). The present article reciprocally focuses on the influence that hominin manual capabilities may have had on the types and forms of stone tools produced during the Lower Palaeolithic.
Relationships between technological or morphological aspects of Lower Palaeolithic stone tools and hominin manual capabilities are often mentioned, but rarely tested, in the archaeological literature (e.g., Crompton & Gowlett, 1993; Delagnes & Roche, 2005; Machin, 2009; Lycett & von Cramon-Taubadel, 2015). Although paleoanthropologists frequently debate whether fossil hominin hand anatomy could facilitate stone-tool-related precision grips, it is rarely the case that specific technological or morphological aspects of these tools are discussed (although see Tocheri et al. (2008) for an example). Therefore, there are only a few instances where hypothesised relationships between technological or morphological features of Lower Palaeolithic stone tools and hominin manual capabilities have actually been investigated. Regarding the origin of the first flaked stone tools, Rolian, Lieberman & Zermeno (2011) used a metal 'simulated flake tool' to calculate the external moments, internal flexion moments, and joint stresses of tool users. Their data suggested that efficient flake tool use with low biomechanical stresses may not have been possible prior to the evolution of the derived pollical anatomy observed in later Homo (Rolian, Lieberman & Zermeno, 2011). Recently, Key & Lycett (in press) demonstrated the significant impact that tool-user biometric variation can have on stone tool-use efficiency across the Lower Palaeolithic, revealing that relationships between biometric parameters and tool-use efficiency depend on the type of tool being used and the biometric variable under consideration. Their results suggest that the effective use of flakes and handaxes is not only dependent on hominins displaying relatively strong hands, but that the onset of Acheulean handaxes may have been linked to the evolution of more anatomically modern manual dimensions (Key & Lycett, in press). Williams-Hatala et al.'s (2018) investigation of manual pressure variation during flake and handaxe use also indicates that grip loading levels may differ depending on the size of the tool gripped. These results are, in part, due to the variable grips required when securing different Lower Palaeolithic tools, as described by Marzke & Shackley (1986). Manual demands and grip choices have also been demonstrated to vary between different stone tool production sequences (Marzke & Shackley, 1986). Comparisons between flake and handaxe production, for example, identified differences in the motion of the dominant arm, with the latter requiring smaller, more precise flaking actions. The authors also suggest that a 'lighter grip' could be used to secure an Oldowan flake core, relative to a handaxe or pick, when detaching flakes (Marzke & Shackley, 1986). As cores become smaller over a reduction sequence, Marzke and colleagues (Marzke & Shackley, 1986; Marzke et al., 1998) describe how the distal aspects of the digits are increasingly heavily recruited and the palm is used less. An early experiment also suggested that low thumb-to-finger length ratios may have precluded early hominins' ability to firmly secure handaxes during production, in turn resulting in "very crude handaxes" during the early Acheulean (Krantz, 1960: 116). Key et al. (2017) found that experienced knappers gripped hammerstones with high pressure when detaching particularly large flakes.
In turn, large stone flakes within Lower Palaeolithic archaeological sequences (Sharon, 2010; Shipton et al., 2014) plausibly indicate that hominins were capable of exerting and resisting high manual pressures during precision (hammerstone) manipulation. Aside from manual requirements, other studies emphasise the increased cognitive demands of handaxe production relative to Oldowan flakes (Stout et al., 2008; Muller, Clarkson & Shipton, 2017), while Mateos, Terradillos-Bernal & Rodriguez (in press) have recently experimentally compared the energetic cost of soft and hard hammer handaxe production. Only Faisal et al. (2010), however, have empirically examined Lower Palaeolithic technological transitions from a manipulative perspective. Joint angles on, and abduction angles between, the digits of the non-dominant hand of a skilled flint knapper indicated that, at least for the individual under investigation, Acheulean and Oldowan stone tool production are ''indistinguishable'' in terms of manipulative complexity (Faisal et al., 2010: 6). Faisal et al.'s (2010) study also highlights key manipulative differences between these two reduction sequences, including the unique need to properly and securely brace a handaxe as it becomes increasingly thin relative to its width. Together, these studies emphasise the distinct manual demands required by the type and form of stone tool being used or produced. These demands must be facilitated by effective grips, which are, in turn, facilitated by anatomical adaptations. Without this anatomy it is unlikely that the respective tool forms would be found in associated archaeological deposits. Yet, there is still relatively little known about hand recruitment during the production of different types and forms of stone tool. Further, there is limited information about the effect that biomechanical variation in a tool producer's hand has on the efficacy of different stone tool production behaviours. Certainly, the onset and adoption of certain technological or morphological features in the Palaeolithic archaeological record could have been restricted by biomechanical capabilities, including the forceful precision grip capabilities of the hominin upper limb. The non-dominant hand is known to experience high loading levels and perform complex manipulative tasks during the production of stone tools (Marzke & Shackley, 1986; Faisal et al., 2010; Key & Dunmore, 2015), perhaps to a greater extent than the dominant hand. Differences in manipulative requirements between stone tool production behaviours might, then, be more readily detected in this hand relative to the dominant hand. Here, we test the null hypothesis that the pressures experienced across the non-dominant hand of stone tool producers during a series of Lower Palaeolithic technological activities, including a range of tool types produced and percussors used, are not significantly different. Further, we assess how flake removal success is related to the pressure used to secure cores, and whether manual pressures vary according to the stage of a core's reduction or the technique used to support a core against hammerstone impact reaction forces. Reduction strategies and technological differences Three Lower Palaeolithic reduction strategies are examined here: (1) the production of replica Oldowan flake tools ('flake'), (2) bifacial flake removals while shaping an 'Early Acheulean' handaxe (EAH), and (3) bifacial flake removals while shaping a 'Late Acheulean' handaxe (LAH) (Figs. 1 and 2).
Both flake and EAH tools were produced via hard hammer percussion, while LAH were produced with soft hammer percussion as well. The latter strategy also employed specialist grinding stones during the preparation of flake platforms. The terms EAH and LAH used here refer to general increases in flaking extent, shaping, volume control, symmetry, the use of intentional 'thinning' flakes, soft-hammer percussion and prepared flake platforms in later Acheulean handaxes (Saragusti et al., 1998; Schick & Clark, 2003; Grosman, Goldsmith & Smilansky, 2011; Diez-Martín et al., 2014; Stout et al., 2014; Gallotti & Mussi, 2017; Iovita et al., 2017; Shimelmitz et al., 2017). While these differences are often clearest when tools produced >1 Mya are compared to those produced after ∼0.5 Mya, we do not mean to imply uniform linear progression of forms across regional records (Vaughan, 2001; Gowlett, 2013; Moncel et al., 2015; McNabb & Cole, 2015). Rather, we seek to investigate whether handaxe forms produced using distinct techniques may be limited by biomechanical capabilities, as inferred from manual pressure records (see below). Although the translation and rotation of cores are manually demanding behaviours (Marzke et al., 1998; Key & Dunmore, 2015), the present analysis focuses only on manual pressure while securing cores during flake removals or platform preparation activities (edge grinding, retouching and trimming). As these behaviours remove mass from a core, they shape a lithic artefact and have the potential to be identified from the archaeological record. Nine skilled flint knappers, each with at least five years' experience, took part in the study. At a minimum, all individuals were capable of consistently producing replica Acheulean handaxes of predetermined form when required. Notably, some of the participants exceeded this lower skill threshold by a considerable margin (cf. Eren et al., 2014). All had previously knapped while connected to manual pressure sensors and were familiar with producing tools within other experimental conditions (Winton, 2005; Williams, Gordon & Richmond, 2010; Key & Dunmore, 2015; Key et al., 2017). Additionally, most knap frequently and on a professional basis (e.g., as academics or craftsmen) and likely provide the best possible sample available for providing natural, unfettered pressure data. For these reasons, we are confident in the use of a single trial per reduction strategy for each knapper (collected within a single day) and the repeatability of the data collected. Each individual undertook the flake reduction first, followed by the EAH and then LAH sequence (Fig. 3). British flint from Suffolk and Kent was used in all reductions. All tool production sequences were recorded using a HD video camera. Ethical approval was granted by the School of Anthropology and Conservation Ethics Committee (University of Kent; Ref. Ares 19065). All individuals gave informed consent. Each knapper used their own hammerstones and soft hammers, without restriction, although red deer (Cervus elaphus) and moose (Alces alces) billets were typically used. No wooden or copper billets were used. Knappers were free to use grinding stones during platform preparation events in the LAH reduction, although in many instances soft and hard hammers were also used for grinding and trimming (Figs. 3 and 4). Knappers produced flakes at their own pace and supported the core in whatever way they preferred (this varied between the core resting in the hand or on the leg).
Every attempted flake removal was coded as successful if the flake detached or unsuccessful if it did not. In instances where a fracture had clearly propagated through the core but required additional minor taps to remove it, the original hammer strike was considered successful and the small taps were not included in the study. Small (micro) flake removals undertaken when preparing platforms for the removal of larger flakes are considered as distinct from 'flake removals' in this study. Pressure sensors A wireless Novel Pliance sensor system was used to record the pressures (kPa) experienced across the non-dominant hand of knappers during all three reductions (Fig. 5). The system comprised ten 17 × 17 mm² sensors and two 10 × 10 mm² sensors. The larger sensors were attached to the distal and proximal phalanges of digits 1-4 as well as the intermediate phalanges of digits 2 and 3. The two smaller sensors were attached to the distal and proximal phalanges of digit 5 (Fig. 5). All sensors were attached to the palmar surfaces of digits using double-sided tape and Velcro straps. Latex finger cots were used to protect the sensors and help keep them in place. The sensors were 'zeroed out' prior to the start of data collection to account for any potential pressure caused by the finger cots. In all instances data were collected at a rate of 50 Hz. Data extraction Reduction sequences ranged from 5 to 34 min in duration. The number of individual data points collected from sensors ranged from ∼12,000 to ∼102,000. To identify individual behavioural instances within data streams it was necessary to align the pressure data output with the video records of each reduction sequence. Knappers were asked to free their non-dominant hand of any loads prior to the start of the reduction sequence and to forcefully pinch their thumb and index finger. This created a known behaviour that was clearly identifiable at the start of the pressure data and the video record, after which the two outputs could be accurately aligned. Every time one of the behaviours under investigation was performed, the peak pressure (kPa) experienced on each sensor was identified and recorded. For an attempted flake removal, peak pressures were identified from 2-second-long segments of the data stream (1 second either side of the point of impact; Fig. 6). Platform preparation behaviours could occur for substantially longer periods, therefore peak pressures were extracted from across their entire duration. Every manual activity recorded here, and therefore every peak pressure value, was assigned a technological strategy (flake, EAH, LAH), an indenter type (hard hammer, soft hammer, grinding stone), a removal type (successful flake, unsuccessful flake, platform preparation), a core-support position (leg, hand), and a sequence number. Pressure data from all 12 sensors were summed to produce a record of the digital peak pressures experienced at a whole-hand level during individual technological behaviours. For each statistical comparison the peak pressures from all nine participants were combined. Participant seven's distal sensor on the first digit became detached during his flake reduction sequence. To keep conditions comparable, no data for this sensor from this participant were included in the analyses.
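As an illustration of this extraction step, the following is a minimal sketch (not the authors' pipeline) of how the pinch-based alignment and windowed peak extraction could be implemented; the threshold, stream, and strike times are hypothetical stand-ins.

```python
import numpy as np

RATE_HZ = 50  # sensor sampling rate reported above

def align_offset(summed_stream, pinch_threshold=200.0):
    """Index of the pinch calibration event: first sample above threshold."""
    return int(np.argmax(summed_stream > pinch_threshold))

def peak_at_impact(summed_stream, impact_s, window_s=1.0):
    """Peak summed pressure within +/- window_s seconds of an impact."""
    centre = int(impact_s * RATE_HZ)
    half = int(window_s * RATE_HZ)
    segment = summed_stream[max(0, centre - half):centre + half + 1]
    return float(segment.max())

# Hypothetical usage: a summed 12-sensor stream and video-derived strike times
stream = np.abs(np.random.default_rng(0).normal(100, 40, RATE_HZ * 600))
strikes_s = [35.2, 41.8, 55.0]  # seconds after the pinch event
offset = align_offset(stream)
peaks = [peak_at_impact(stream[offset:], t) for t in strikes_s]
```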
To control for inter-knapper differences in pressure, records were normalised to a 0-1 scale by dividing the difference between each peak pressure record and the minimum record of that reduction sequence by the range of values in that reduction. Since all reductions begin in a similar manner, this scaling should not preclude the identification of significant differences between groups. Statistical analyses Pressure differences between flake, EAH, and LAH reduction strategies Both successful and unsuccessful flake removal data were used to investigate how pressure varies between the three core reduction strategies. Shapiro-Wilk tests revealed that normalised peak pressure data were not normally distributed in any of the three reduction strategies (p ≤ .0001). As reduction sequence lengths varied between knappers, each was sub-sampled to n peak pressure records evenly spaced over that sequence length, where n was the minimum length of sequence data analysed (n = 30) (File S1). This step ensured that knappers who produced longer sequences were not over-represented in the data, while still yielding reasonable statistical power with a sample of 270 peak pressure records in each reduction type. A Friedman test and post-hoc pairwise Wilcoxon signed rank tests were used to test for significant differences in normalised median pressure values across the three reduction types. Significant values were identified at p < .017 as a Bonferroni correction was applied. Pressure according to flake removal success Average pressure was compared between flake removals depending on whether they were successful or not, within each reduction strategy. Hard and soft hammer percussion were included in the LAH analyses, but platform preparation events were not. Shapiro-Wilk tests confirmed that all three data sets were not normally distributed (p ≤ .040). Mann-Whitney U tests were repeated individually for the flake, EAH and LAH reductions as these data were not repeated measures. Significance was assumed in line with the Bonferroni correction (p ≤ .017). Pressure differences between core support strategies As the present investigation is one of the few to consider core securing events with the non-dominant hand, we also analysed how different core support strategies may influence manual pressures. Two methods of core support were naturally used by knappers during reductions. Cores were either secured and supported solely in the hand, with the palm and fingers working to support their weight, or by the hand bracing tools against the leg. Pressure differences between these two core support strategies were compared individually within the three reduction strategies using Mann-Whitney U tests, as Shapiro-Wilk tests identified that all data sets were not normally distributed (p ≤ .0003). Significant values were identified at p < .017 as a Bonferroni correction was applied. The LAH data set does include platform preparation events using both core support strategies. Pressure according to mass removal method Only the LAH reduction displayed multiple mass removal (core shaping) methods; namely, hard and soft hammer flake removals, and platform preparation events. To examine how pressure varies between each of these three mass removal strategies, LAH data were separated and then compared by technique used. Shapiro-Wilk tests confirmed that the three data sets were not normally distributed (p ≤ .0001).
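The normalisation, sub-sampling, and test sequence just described can be expressed compactly; the sketch below is an illustrative reconstruction with invented data, not the authors' code, using SciPy's implementations of the named tests.

```python
import numpy as np
from scipy.stats import friedmanchisquare, wilcoxon

def normalise(seq):
    """Min-max scale one knapper's peak-pressure sequence to 0-1."""
    seq = np.asarray(seq, dtype=float)
    return (seq - seq.min()) / (seq.max() - seq.min())

def subsample(seq, n=30):
    """n records evenly spaced over the full sequence length."""
    idx = np.linspace(0, len(seq) - 1, n).round().astype(int)
    return np.asarray(seq)[idx]

# Stand-in data: one knapper's three reduction sequences of unequal length
rng = np.random.default_rng(1)
flake, eah, lah = (subsample(normalise(rng.random(m))) for m in (56, 86, 167))

stat, p = friedmanchisquare(flake, eah, lah)
if p < .05:  # follow up pairwise at the Bonferroni-corrected alpha
    for a, b in ((flake, eah), (flake, lah), (eah, lah)):
        print(wilcoxon(a, b).pvalue < .017)
```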
In turn, peak pressures were statistically compared between the three strategies using sub-sampled data, as for the testing of differences between reduction strategies; here, however, the lowest number of mass removals in a sequence of a given type was 11, so each removal-type pressure sample comprised 99 records (11 per knapper) evenly spaced over reduction sequences (File S1). A Friedman test and post-hoc pairwise Wilcoxon signed rank tests were used to test for significant differences in normalised median pressure values between each mass removal type. Pressure differences dependent on reduction stage To examine whether core reduction stage significantly influences the pressure exerted and resisted by the non-dominant hand, summed peak pressure data were regressed on flake sequence numbers for each respective reduction type. This analysis of the influence of a core's stage of reduction, as defined by the number of flakes removed, on manual pressure does not use normalised or sub-sampled data, since it is the covariance of these variables that is under investigation. Due to the influence that core form, knapping mistakes, raw material inclusions, and participant enthusiasm could have on the duration of tool production sequences, there is potential for later trends within shorter sequences to be concurrent with earlier stages of longer reduction sequences. In turn, if there is only an increase in pressure during the final stages of a handaxe's production, for example, then this trend in the shorter sequences may go undetected. Hence, we performed another regression using flake removal sequence numbers of equal range that were proportionally normalised to the shortest sequence length (out of the nine) for each reduction type. This allows assessment of manual pressure from the start of a reduction sequence relative to its end (as determined by the tool producer), irrespective of any variation in the number of flake removals. Both sets of regressions were performed with all nine participants' data. Regressions were repeated individually for each of the three reduction strategies. Only hard and soft hammer flake removals were included in these first analyses for the LAH data. Pressure data from platform preparation event sequences were independently investigated using both types of regression. Significance was assumed in line with the Bonferroni correction (p ≤ .0125) in each instance. RESULTS Descriptive data for the pressure values used in each analysis are detailed in Tables 1-5. Between the three types of tool production sequence there were substantially more mass removal events when producing LAHs (n = 1,503) relative to flakes and EAHs (n = 506 and 777, respectively; Table 1). Around twice as many flake removals were required during the production of LAHs relative to EAHs. Mean summed peak pressure records across the non-dominant hand during the production of LAHs were also greater than in the flake and EAH sequences by ∼50 kPa (Table 1; Fig. 7). The Friedman test did not reveal significant differences between median pressures used in the three types of reduction (p = .221) and so post-hoc tests were not conducted. Although the production of 'Late Acheulean Handaxes' required greater mean pressures to be exerted and resisted by the non-dominant hand across all data collected, compared to the production of Oldowan flake tools or 'early Acheulean handaxes', these differences were not significant.
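For the second set of regressions, the 'proportional normalisation' simply rescales every knapper's removal numbers onto the range of the shortest sequence before pooling. A hypothetical sketch, again with invented sequence lengths and pressures, might look like this.

```python
import numpy as np
from scipy.stats import linregress

rng = np.random.default_rng(2)
seqs = [np.arange(1, n + 1) for n in (40, 55, 70)]  # three knappers' removal counts
pressures = np.concatenate([150 + 0.5 * s + rng.normal(0, 25, s.size) for s in seqs])

shortest = min(len(s) for s in seqs)
# Rescale each sequence so its removal numbers span 1..shortest
norm_seq = np.concatenate([1 + (s - 1) * (shortest - 1) / (len(s) - 1) for s in seqs])

res = linregress(norm_seq, pressures)
print(res.rvalue ** 2, res.pvalue)  # compare p against the Bonferroni alpha of .0125
```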
Ratios of successful to unsuccessful flake removals varied only slightly between the three reduction strategies (ranging between 7:2 and 9:2) (Table 2). Table 1 presents descriptive data outlining the differences in combined peak pressure data recorded on the non-dominant hand during flake and core, 'early Acheulean handaxe', and 'late Acheulean handaxe' stone tool production sequences. In each strategy, successful flake removals recorded marginally lower pressure values than unsuccessful removals (Table 2). Mann-Whitney U tests identified that these differences were not significant in any of the three sequences (p = .069-.249). In turn, the success of flake removals does not seem to be a consequence of variation in pressure exerted by the non-dominant hand during stone tool production, although there is consistency in successful flake removals recording marginally lower pressure values. Core support strategies varied between the leg and hand in all three reductions. In terms of data frequency there is a split between flake production, which reports greater use of hand support, the EAH reductions, which are broadly equal between the two, and the LAH reductions, where there were clear preferences for cores being supported by the leg (Table 3). While no significant pressure difference is recorded between the hand and leg support techniques during flake production (p = .060), both of the handaxe sequences report significant differences (p ≤ .001; Table 3). However, during the EAH reduction greater pressure values are reported during leg support, while LAHs report greater values during hand support (Table 3). The technique used to support a stone core therefore appears related to the pressures required to secure it during flake removals and platform preparation events; however, differences appear dependent on the type of tool being produced. It was only possible to compare hard hammer flake removals, soft hammer flake removals, and platform preparation events during the LAH reduction sequence. Across the nine participants there were equal numbers of hard and soft hammer flake removals (n = 617 for each removal type), suggesting that both types of percussor are equally important during LAH production sequences (Table 4). There were, however, 4.6 times as many flake removals relative to platform preparation events, indicating that only ∼one in five flakes required its platform to be prepared prior to its removal. When only soft hammer percussion is considered, where platform preparation may more normally be expected, every other flake was removed without its platform being prepared (i.e., one in two flakes had its platform prepared). Soft hammer percussion returned, on average, the lowest peak pressure records across the hand (Table 4; Fig. 7). Hard hammer percussion required an additional 33 kPa of pressure to be exerted and resisted by the non-dominant hand. An additional 59 and 92 kPa were recorded, on average, across the non-dominant hand of knappers during platform preparation events compared to hard and soft hammer percussion, respectively (Table 4; Fig. 7). The Friedman test between normalised median pressures used in the three types of mass removal was significant (p = .0001). Subsequent pairwise Wilcoxon signed rank tests indicated that platform preparation events required significantly more pressure than both hard (p = .0002) and soft hammer (p = .0043) removals, while there was no significant difference between the latter two mass removal types.
Figure 7: Boxplots depicting peak pressure data for the nine knappers during the three types of stone tool production strategies (n = 506, 777, and 1,503 for the Oldowan flake, EAH, and LAH data, respectively) and the three mass removal strategies utilised in the late Acheulean handaxe reduction sequence (n = 617, 617, and 269 for the hard hammer, soft hammer, and platform preparation data, respectively), with all manual behaviours combined (DOI: 10.7717/peerj.5399/fig-7). Platform preparation events do, therefore, appear to require significantly greater pressure to be exerted and resisted by the non-dominant hand compared to both hard and soft hammer flake removals. The LAH data values used during the regression analyses were, on average, greater than both the flake and EAH reductions (by 34 and 45 kPa, respectively) despite the absence of platform preparation events (Table 5), demonstrating that even in the absence of this uniquely late Acheulean behaviour, the production of LAH forms requires greater manual pressures. Of the eight linear regressions undertaken, all identified significant relationships between flake removal sequence numbers and manual pressure (Table 6). Flake and EAH reduction sequences displayed negative relationships, whereby pressure decreased as reduction sequences progressed. LAH sequences and LAH platform preparation events displayed positive relationships, indicating that later mass removal events required greater manual pressures (Table 6). In all but one instance R² values were ≤ .090, indicating that limited (≤9%) pressure variation could be attributed to a core's stage of reduction. The single exception was the regression between LAH platform preparation sequence numbers and their respective pressure values, where 42% of the observed pressure variation could be attributed to the stage of a handaxe's production (Table 6; Fig. 8). This indicates that as late Acheulean handaxes progress further through production sequences (i.e., as they become smaller, increasingly shaped, and thin relative to their width) the pressure required to stabilise them during platform preparation events increases significantly. The fact that this relationship is not similarly repeated in the normalised flake removal sequence numbers indicates that it is unlikely to be driven by how close a handaxe is to being considered finished by the knapper, but rather by how long the sequence goes on for, how many flakes have been removed, and how 'refined' a biface becomes. DISCUSSION The present work investigates the origin of technological innovation during the Lower Palaeolithic from a biomechanical and evolutionary perspective, and asks whether the onset of new stone tool forms and production techniques may have been restricted by hominin manual capabilities. Our results demonstrate that although later Acheulean handaxes (LAH) required the exertion and resistance of greater manual pressure during their production relative to either Oldowan flake and core tools or early 'rougher' Acheulean handaxes (EAH) (by an average of 22% and 29%, respectively, when all data were considered), these differences were not found to be significant and may have been driven by a few individuals. It is, therefore, not possible to state that manual pressure requirements during flake detachments vary significantly between the three tools examined here.
However, the preparation of LAH flake platforms, through retouching and edge grinding, elicited the greatest loads in this study. Indeed, the action of preparing a flake's platform prior to its removal required significantly (22-40%) more pressure than soft or hard hammer flake removals in the same reduction sequences (Table 4; Fig. 7). Compared to Oldowan or EAH flake removals, mean pressures are 55-59% (>110 kPa) greater during LAH platform preparation events (Tables 1 and 4; Fig. 7). This result suggests that platform preparation techniques may only have been possible for hominins capable of performing particularly forceful precision grips, which would have required greater force than those needed for earlier stone tool types. Arguably, only once hominins had evolved enhanced manipulative capabilities, in response to selective pressures exerted by earlier manual behaviours, would the innovation of later Acheulean handaxe forms, produced using the preparation of flake platforms, have been possible. Such behaviours include flake tool use, hammerstone use, and Oldowan/EAH core manipulation (Marzke, 1997; Marzke, 2013; Kivell, 2015; Key & Dunmore, 2015; Williams-Hatala et al., 2018). As highlighted by Tocheri et al. (2008), fossil hand anatomy indicates the continued derivation of hominin manual capabilities subsequent to the onset of the Acheulean, which may have facilitated the forceful grips used for securing the core during platform preparation events, required for LAH production. During platform preparation events edges are modified either via the removal of very small flakes, which isolate and reshape platforms or alter their angles, or through forceful grinding actions, which reduce, bevel, reshape and isolate them. In each case, these actions require the precise but forceful application of stone or antler against the handaxe's edge. In turn, it is essential for handaxes to remain stable throughout this process so that the percussor or grinding stone is applied only to the specific area being shaped (for refined bifaces flake platforms are often <10 × 5 mm). Regarding small flake removals, it is the highly precise nature of the removals that necessitates a particularly firm and steady grip on the handaxe. The act of grinding a handaxe's edge in preparation for a flake removal, however, also requires the input of substantial and prolonged forces through an abrasive stone onto the biface's edge. In addition to their extended duration, it is likely that the dominant hand at times creates forces in excess of those observed during flake detachments. Certainly, during edge grinding the palm contributes substantially to the loads transferred onto a core, something that is impossible during most hammerstone strikes (and therefore flake detachments). While previous biomechanical studies of the dominant hand have tended to overlook edge grinding events (although see: Marzke & Shackley, 1986), and thus these claims cannot yet be substantiated, our pressure data clearly identify a requirement to oppose substantial reaction forces during platform preparation events. More specifically, these pressures are significantly greater than those observed during flake removals. When LAHs are secured during platform preparation events, up to 42% of the pressure variation recorded here can be attributed to the stage of a handaxe's production, demonstrating that proportionally greater force is required to prepare platforms for progressively refined flake removals (Fig. 8).
This relationship cannot be straightforwardly attributed to participant fatigue, as platform preparation events and flake removals were undertaken throughout reductions and no fatiguing was reported or observed. Rather, the form of the handaxe (core) being supported and secured is likely responsible for this result. As any reduction sequence progresses, cores become smaller (Clarkson, 2013; Douglass et al., 2018), and handaxe size has been experimentally demonstrated to have a strong negative relationship with reduction intensity (Shipton & Clarkson, 2015a; Shipton & Clarkson, 2015b). Marzke & Shackley (1986) found that as reduction sequences progress the thumb and distal aspects of the fingers are increasingly used in isolation when gripping the core to secure it against hammerstone strikes (see also: Pouydebat et al., 2009). As a corollary, both the greater surface area of the palm and the most ulnar digits (fourth and fifth) are used progressively less (Marzke et al., 1998), which concentrates manual forces on the radial three digits. This concentration of force thereby increases the pressures required to produce, typically smaller, LAHs. The stage of a handaxe's reduction also has potential to impact its volumetric distribution and shape (Crompton & Gowlett, 1993). Archer & Braun (2010) demonstrated that as reduction sequences progress, a handaxe's centre of mass moves first to the centre of the tool, and subsequent thinning flakes move it to the tool's base. As highlighted by Faisal et al. (2010), this results in an increased requirement to properly secure and brace the tool during flake removals and platform preparation events. Certainly, during the latter stages of LAH production there is increased risk that a biface will break (i.e., fracture in an unintended way) when flake removals are instigated. This may be through the intended fracture 'diving' through the biface when searching for the route of least resistance, or by reaction forces propagating through the tool and creating enough stress to fracture it in additional locations (often the tip). In both cases, the principal means for a knapper to prevent these mistakes (other than choosing suitable flakes to remove) is by forcefully bracing the length of the biface. While most effect sizes were small, the other regression analyses support this idea, as flake and EAH regressions display negative relationships with pressure but LAH sequences show positive relationships. During flake and EAH reductions the reducing core mass requires less support and stabilisation, resulting in lower manual pressure. While this may also characterise early stages of LAH production, as sequences progress, pressure increases substantially. It is likely that the production of bifacially flaked tools with even lower thickness to width ratios, such as Solutrean or Clovis points (e.g., Smallwood, 2010; Eren et al., 2013), would require even greater pressures. Stout and colleagues (Stout et al., 2008; Stout et al., 2015), and more recently Muller, Clarkson & Shipton (2017), have demonstrated that Acheulean handaxe production requires increased visuomotor coordination and hierarchical organisation, and is more cognitively demanding than Oldowan flake tool production. Wynn (2002) and Muller, Clarkson & Shipton (2017) further suggest late Acheulean handaxe production sequences to be more complex than those required for early Acheulean handaxes.
When combined with the present study, the production of later Acheulean handaxes could, therefore, also be considered a biomechanically and cognitively more demanding behaviour than earlier types of stone tool production. Although earlier research hinted at how manually demanding later handaxes were to produce (e.g., Krantz, 1960; Marzke & Shackley, 1986), it is only now that there are empirical data in support of this conclusion. Earlier work by Faisal et al. (2010), which investigated the manipulative complexity (variation) of Oldowan and late Acheulean handaxe reduction strategies, did not find any notable differences in digit joint or abduction angles. Our platform preparation results may, at first, appear in contrast to those reported by Faisal et al. (2010) insofar as we did find significant manual differences between Oldowan flake and late Acheulean handaxe production. Each study, however, investigates or infers a distinct biomechanical element of stone tool production. That is, the manual demands associated with joint angle complexity are not tantamount to demands associated with loading levels. So while the complexity of these behaviours has not been demonstrated to differ (Faisal et al., 2010), the production of late Acheulean handaxes is still a more demanding manual behaviour, but only in terms of the manual pressure levels resisted and exerted. Wider implications Key and colleagues (Key et al., 2017; Key & Lycett, in press) have argued that the production of large flakes (e.g., >10 cm) via hard hammer percussion and the effective use of handaxes, which are both characteristic features of early Acheulean tool assemblages (De la Torre & Mora, 2014), required manual biomechanical prerequisites prior to their widespread adoption by hominin populations. The present study suggests that the removal of bifacial flakes from a core when shaping an EAH is no more demanding, in terms of loading on the non-dominant hand, than the removal of flakes from a core during more straightforward Oldowan core reduction strategies. So, while there may be other manual prerequisites to the adoption of early Acheulean technologies (Key et al., 2017; Key & Lycett, in press), the loads required to secure cores do not appear to be one. As far as the present study demonstrates, the specific technological development of core shaping through bifacial flake removals (n.b. not large flake production or the effective use of these tools (Key et al., 2017; Key & Lycett, in press)) is more likely linked to changes in hominin cognitive, cultural, or linguistic capabilities (Wynn, 2002; Uomini & Meyer, 2013; Stout et al., 2014; Morgan et al., 2015; Schillinger, Mesoudi & Lycett, 2015; Stout et al., 2015; Lycett et al., 2016), or to increased functional and ecological demands for large tools with scalloped cutting edges (Jones, 1980; Key & Lycett, 2017a; Key & Lycett, 2017b; Wynn & Gowlett, 2018), than to biomechanical restrictions. Further technological considerations Our finding that flaking success cannot be attributed to pressure levels when securing cores demonstrates that, for skilled knappers at least, other factors are more important in determining flake detachment success. We are not suggesting that a secure and forceful grip on stone cores is not essential to the successful removal of flakes.
Neither do we mean to imply that the loads required to secure a core do not change in response to different morphological or technological aspects of a tool production sequence (e.g., flake and core size, platform angle, percussor type). The high but variable loads exhibited here attest to these requirements, as do results reported in previous studies (Marzke et al., 1998; Key & Dunmore, 2015). Rather, our results demonstrate that the visuomotor control of skilled flint knappers during stone tool production is such that they can appropriately judge manual pressure requirements during flake detachments with equal success across the three types of reduction strategies examined here, although, of course, there is potential for considerable variation in appropriate or necessary pressure outputs (cf. Rein, Nonaka & Bril, 2014; Key et al., 2017). Given the experience of the knappers used in this study, indications of advanced motor skills during flake detachments are not surprising (Nonaka, Bril & Rein, 2010). Nonetheless, it is interesting that the success of flake removals by skilled flintknappers cannot be attributed to the use of higher or lower than required loading through the non-dominant, core securing, hand. It is beyond the scope of the present study to comment on whether the success of flake removals by novice knappers can, at least in part (Nonaka, Bril & Rein, 2010; Stout et al., 2015), be attributed to an inability to appropriately judge the loads required to secure a core. Interestingly, the ratio of ∼4:1 successful to unsuccessful flake removals (991 successful and 243 unsuccessful flakes) across the LAH reductions was repeated when only flake removals performed immediately after platform preparation events were considered (160 successful and 38 unsuccessful flaking attempts). This indicates that, at least for expert knappers, the preparation of flake platforms does not increase the success of flake removals. Both handaxe reduction sequences demonstrated significant pressure differences between the hand and leg core support strategies. The EAHs required greater values during the leg support technique while the LAHs required greater values during the hand condition. The cause of this difference may relate to the disproportionate use of each support strategy at different stages of a reduction sequence, changes in grip choice and pressure requirements as reductions progress, and the inclusion of platform preparation data in the LAH reduction. All reduction types used the leg support strategy more frequently during the earlier stages of a reduction sequence. This was likely because the most comfortable way to support a particularly heavy core's weight was by using the leg, with the hand chiefly being used to stabilise the core against hammerstone strikes. As sequences progressed cores became smaller, meaning that it was easier to support and secure cores using only the hand. A shift to the more frequent use of a hand support strategy also coincided with the already discussed need for greater pressure as cores become more 'refined' during platform preparation events. The greater duration of LAH reductions would have created increased opportunity for high loading. The greater frequency of the leg support technique during handaxe reductions, but most notably the LAH sequence, is likely due to the greater stability of this technique. As handaxes become thinner relative to their width they are more likely to break during flake removals.
The use of the leg as a supportive structure allows for greater areas of the biface to be firmly secured by the body, decreasing the likelihood of it breaking during flake removals. Such comprehensive support is rarely required during 'simple' flake production strategies; hence, the leg support technique was largely restricted to the early stages of flake production. Although soft hammer percussion was used more frequently during the later stages of LAH sequences, this percussive technique did not contribute to the greater pressure values during the hand support strategy, nor to the greater pressures recorded in the later stages of LAH reduction sequences. Indeed, soft hammer percussion required similar loads to hard hammer percussion. This is despite soft hammers being more frequently used to remove smaller flakes (in terms of mass, if not length), in turn requiring lower impact forces (Dibble & Rezek, 2009) and creating lower reaction forces to be resisted. Irrespective of the cause, our data indicate that the seemingly delayed onset and adoption of soft hammer percussion during the later stages of the Lower Palaeolithic (Copeland, 1991; Schick & Clark, 2003; Stout et al., 2014) cannot be attributed to biomechanical limitations in the non-dominant hand of hominins. Limitations It is important to note that the pressures recorded here are not likely representative of the total forces exerted and resisted by the non-dominant hand during stone tool production. As past research demonstrates (Marzke & Shackley, 1986; Key & Dunmore, 2015), the palm plays an important role in supporting cores during flake removals (e.g., Figs. 4A and 4B), and the sensor array used here did not take this into account. It is hard to say whether the inclusion of palmar pressure data would have altered any of the present results, but indications of an increased reliance on the distal aspects of digits during later stages of reductions highlight the need for future research to take this into consideration. Further, although the number of behaviours analysed here is, as far as we know, the largest yet recorded during an investigation into stone tool related manual loading (n = 2,786), only nine skilled flintknappers were able to take part in the study. In turn, and as already discussed, there is potential for our data to be significantly influenced by a few individuals. This includes differences caused by the variable grinding and core securing techniques learnt as each knapper first developed their knapping capabilities, and the possibility of an individual not providing 'natural' or repeatable data on the specific day data were collected. Although there does not appear to be any indication that this happened, we cannot rule out this possibility entirely. Hence, we would welcome the publication of similar studies in the future that are able to examine increased numbers of knappers. CONCLUSION The Lower Palaeolithic artefact record represents the largest and most detailed record of the minimum technological capabilities of hominins during the Plio-Pleistocene. As such, the Oldowan and Acheulean records track significant shifts in the behaviour of hominins, which have been investigated in terms of cognition, social transmission, environmental factors and others. Here we investigate these transitions from a biomechanical perspective, as inferred from manual loading data.
Our results demonstrate that the digital pressures required to forcefully secure later Acheulean handaxes during their production are not significantly greater than those required when knapping earlier Acheulean handaxe forms or Oldowan flakes. However, the novel LAH-associated behaviour of preparing flake platforms would have required significantly stronger grips in the non-dominant hand compared to earlier stone tool production behaviours. Therefore, we contend that the behavioural shift marked by the onset of platform preparation behaviours, as observed in later Acheulean handaxe forms, may be intrinsically linked to the biomechanical capabilities of hominins, among other factors, in a co-evolutionary manner.
2018-08-22T21:31:15.510Z
2018-08-16T00:00:00.000
{ "year": 2018, "sha1": "6f55b8a8294c623d77b3b9bb92a1e70119a317dc", "oa_license": "CCBY", "oa_url": "https://doi.org/10.7717/peerj.5399", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "6f55b8a8294c623d77b3b9bb92a1e70119a317dc", "s2fieldsofstudy": [ "History" ], "extfieldsofstudy": [ "Geology", "Medicine" ] }
136683849
pes2o/s2orc
v3-fos-license
Effect of severe plastic deformation on the structure and mechanical properties of Al-Cu-Mg alloy An aluminum Al-Cu-Mg alloy was subjected to high pressure torsion (HPT) and equal-channel angular pressing (ECAP) at various temperatures. An ultrafine-grained (UFG) structure thermally stable up to a temperature of 175 °C was produced in all the investigated samples. A simultaneous increase in strength and ductility was demonstrated in an ECAPed sample in comparison with a coarse-grained sample subjected to standard treatment. Introduction Al-based alloys attract considerable interest as structural materials in the world scientific community due to their light weight and low cost. Heat-resistant alloys of the Al-Cu-Mg system are of particular interest; they are employed to manufacture components operating at high temperatures. The ultimate tensile strength of these alloys in a coarse-grained state after standard treatment is about 400 MPa. Severe plastic deformation (SPD) can be an effective way to enhance strength and fatigue properties. SPD enables the fabrication of an ultrafine-grained (UFG) structure in various metallic materials [1][2][3]. It is known that formation of the UFG structure in pure metals can lead to enhancement of strength characteristics. However, grain refinement in aluminum alloys by SPD techniques has a number of important features because of additional precipitation of second phases and solid-solution hardening, which can result in lower ductility. In particular, in the temperature range 20-150 ºC quenched samples of aluminum alloys demonstrate decreased deformation ability in comparison with pure metals, and under SPD the samples and billets fail even at the initial stages of deformation [4]. All this creates additional possibilities for enhancing the properties of aluminum alloys, but requires optimization of technological regimes for the fabrication of bulk UFG billets to be used as structural materials. The aim of this work is to increase the strength characteristics of an Al-Cu-Mg alloy, while retaining sufficient ductility, via refinement of the grain structure by SPD techniques at different temperatures. The material and research methods An Al-based alloy of the Al-Cu-Mg system cast at USATU was chosen as the material for this study. The chemical composition of the alloy was analyzed by an optical emission technique and is presented in Table 1. Before SPD, the Al-Cu-Mg billets were subjected to solid solution treatment at 530 ºC for 5 hours followed by water quenching. Disks 20 mm in diameter and 1 mm thick were the initial samples to be processed by high pressure torsion (HPT). Cylinders 20 mm in diameter and 150 mm in length were subjected to equal channel angular pressing (ECAP). HPT was carried out on a set-up developed from the well-known Bridgman anvil design [5]. The sample was placed between the anvils and pressed under an applied pressure of 5 GPa. As a result of the surface friction force arising during rotation of the bottom anvil, the sample was deformed by shear under hydrostatic compression produced by the applied pressure. The disk-shaped samples were subjected to 3 HPT rotations at 20 °C. Several samples were subjected to a combined HPT treatment: 3 HPT rotations at 20 °C followed by an additional 2 HPT rotations at 150 ºC. The combined treatment was chosen on the basis of earlier studies of the HPT effect on the structure of the aluminum AK4-1 alloy [6].
It was established that more intensive refinement of the grain structure took place due to precipitation of strengthening phases during straining at temperatures close to aging temperatures. During ECAP a workpiece was repeatedly pressed in a special die-set through two channels with identical cross sections intersecting at 90°. ECAP was conducted via the Bc route (after each pass the workpiece was rotated around its longitudinal axis by 90° in a clockwise direction only) in the temperature range 120-200 °C. The number of passes was selected according to the criterion of maintaining the workpiece integrity. The processing regimes are given in Table 2; the equivalent strain was calculated by the following formulas: for HPT, ε = φr/(√3·h), where φ is the angle of rotation, and r and h are the disk radius and thickness, correspondingly; for ECAP, ε = (2N/√3)·cot(φ/2), where N is the number of passes and φ is the internal channel angle [3,7]. Qualitative and quantitative analysis of the microstructure of the initial coarse-grained alloy was performed using an OLYMPUS optical microscope. To create an optical contrast, the surface of mechanically polished objects was etched in Keller's reagent (1 ml HF + 1.5 ml HCl + 2.5 ml HNO3 + 95 ml H2O). The UFG structure was investigated using a JEM-2100 transmission electron microscope (TEM). The samples for TEM studies were prepared by twin jet electro-polishing on a Tenupol-5 in an electrolyte of the following composition: 400 ml of acetic acid, 300 ml of orthophosphoric acid, 200 ml of nitric acid and 100 ml of water. The average grain size, as well as the size of particles, was determined by the random linear intercept method on dark-field and bright-field images of the microstructure. The Vickers microhardness was measured on a Buehler Omnimet tester across a sample diameter with a step of 1 mm, a load of 0.1 kg, and a holding time of 10 s. Tensile tests were conducted on a machine for testing small samples [8] with a gage length of 4.0 mm, thickness of 0.5 mm and width of 1.5 mm. X-ray diffraction (XRD) analysis was performed on a Rigaku diffractometer with CuKα radiation. To determine the lattice parameter of the Al-Cu-Mg alloy, X-ray reflections with centroids in the 2θ angle range 34-47° were taken, and the Nelson-Riley extrapolation procedure was used. Results and discussion The structure of the Al-Cu-Mg alloy after standard treatment, which involved heating at 530 ºC for 1 hour, followed by quenching in water and aging at 180 °C for 14 hours, was characterized by equiaxed grains with an average size of 72 µm. Transmission electron microscopy showed that the coarse-grained (CG) structure of the initial alloy completely transformed into an ultrafine-grained (UFG) one after SPD treatment. In particular, after HPT at 20 °C grains with an average size of 140 nm were observed in the structure. After further annealing at 250 ºC for 1 hour, the average grain size in the UFG samples increased up to 250 nm (Figure 1a). After the combined HPT the grain structure was strongly refined to 66 nm, while after additional annealing at 250 °C the average grain size increased to 170 nm (Fig. 1b). In the structure of both samples after annealing at 250 ºC numerous particles with an average size of 50 and 30 nm, respectively, were observed, which obviously retarded the grain growth at elevated temperatures.
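For orientation, here is a minimal sketch (not from the paper) of how the equivalent strain expressions above evaluate for the regimes described; the 10 mm disk-edge radius follows from the stated 20 mm diameter, and the 90° channel angle from the die geometry.

```python
import math

def hpt_strain(n_rotations, r_mm, h_mm):
    """Equivalent strain for HPT: eps = phi * r / (sqrt(3) * h), phi in radians."""
    phi = 2 * math.pi * n_rotations
    return phi * r_mm / (math.sqrt(3) * h_mm)

def ecap_strain(n_passes, channel_angle_deg=90.0):
    """Equivalent strain for ECAP with a sharp corner: eps = (2N / sqrt(3)) * cot(phi / 2)."""
    half = math.radians(channel_angle_deg) / 2
    return (2 * n_passes / math.sqrt(3)) / math.tan(half)

# Regimes from the text: 3 HPT rotations on a 20 mm x 1 mm disk, and 6 ECAP passes
print(hpt_strain(3, r_mm=10, h_mm=1))  # ~109 at the disk edge
print(ecap_strain(6))                  # ~6.9
```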
In the samples subjected to 4 ECAP passes at 125 ºC, a band structure with an average band width of 250 nm was observed (Figure 2a), as well as separate areas in which an ultrafine-grained structure with an average size of 200 nm had formed (Figure 2b). After 6 ECAP passes at 150 ºC, an ultrafine-grained structure with an average grain size of 400 nm was observed in the aluminum alloy (Figure 3). Thickness extinction contours were observed at grain boundaries, which testified to a reduction of internal stresses and relaxation of crystalline lattice microdistortions (Figure 3a). Numerous precipitates with an average size of 75 nm were found too. The UFG structures after ECAP treatment at 175 ºC (Figure 4a) and 200 ºC (Figure 4b) were similar: grains with an average size of 500 nm and particles of 50-75 nm were observed. After HPT the peaks of the Al2Cu and Al2CuMg phases were not observed, which implied their complete dissolution during quenching and HPT. However, the peaks of these phases were observed in the samples after ECAP, indicating that the applied strain was insufficient for their dissolution. It should be noted that the X-ray diffraction patterns of the samples after HPT are substantially different from those after standard treatment. In particular, changes in the integral background radiation intensity and in the width and intensity of X-ray peaks were observed on the diffraction patterns. The lattice parameter changes significantly depending on the processing regime used (Table 3), which is obviously due to the precipitation and dissolution of strengthening particles. The minimum size of the coherent scattering domain was observed after HPT at 20 ºC and after ECAP at the lowest temperature of 125 ºC. The greatest root-mean-square values of the lattice micro-distortions were detected after HPT at 20 ºC. In order to investigate the thermal stability of the structure, the UFG samples were subjected to annealing for 30 minutes in the temperature range of 100-300 ºC. The most thermally stable HPT samples were those fabricated by the combined regime. As for the ECAP-processed samples, those ECAPed at 125 ºC and 150 °C exhibited increased microhardness values in the temperature range up to 175 ºC. Table 4 presents the results of tensile tests on the Al-Cu-Mg samples before and after treatment by SPD techniques. The highest tensile strength was observed in UFG samples produced by ECAP at 125 ºC; they also retained a sufficiently good ductility of about 11%, while the HPT samples were very brittle despite their high tensile strength of up to 800 MPa. It should be noted that the increased strength values observed in ECAP and HPT samples are due to the strong refinement of the grain structure and the presence of fine particles of the strengthening Al2Cu and Al2CuMg phases. 4. Conclusions In this paper the effect of severe plastic deformation on the structure and mechanical properties of ultrafine-grained Al-Cu-Mg samples was studied, which allowed drawing the following conclusions: 1. During combined treatment by HPT at two different temperatures, grain refinement to 66 nm and a microhardness of 2790 MPa were achieved in the Al-Cu-Mg alloy. 2. Four ECAP passes at 125 ºC are insufficient to produce a homogeneous ultrafine-grained structure, while 6 ECAP passes at 150 ºC result in the fabrication of ductile UFG samples with high ultimate tensile strength and microhardness. 3.
The analysis of the temperature dependence of microhardness shows that the microstructure of the UFG samples is thermally stable up to a temperature of 175 ºC, above which a considerable decrease in microhardness was observed due to the onset of grain growth and coarsening of precipitates. Thus, enhanced strength characteristics of the Al-Cu-Mg alloy were achieved via the application of SPD techniques. The original ductility was retained due to the application of the ECAP technique. Further optimization of SPD regimes can improve the mechanical properties.
2019-04-28T13:12:54.869Z
2014-08-08T00:00:00.000
{ "year": 2014, "sha1": "463aa157fe2bf3d18452e6d72ac3a2aa1267c40f", "oa_license": null, "oa_url": "https://doi.org/10.1088/1757-899x/63/1/012081", "oa_status": "GOLD", "pdf_src": "IOP", "pdf_hash": "bf1fd8308529868ac82868ac23af3ec1fd742575", "s2fieldsofstudy": [ "Materials Science" ], "extfieldsofstudy": [ "Materials Science" ] }
119351884
pes2o/s2orc
v3-fos-license
Renormalization group analysis of the spin-gap phase in the one-dimensional t-J model We study the spin-gap phase in the one-dimensional t-J model, assuming that it is caused by the backward scattering process. Based on the renormalization group analysis and symmetry, we can determine the transition point between the Tomonaga-Luttinger liquid and the spin-gap phases by the level crossing of the singlet and the triplet excitations. In contrast to the previous works, the obtained spin-gap region is unexpectedly large. We also check that the universality class of the transition belongs to the $k=1$ SU(2) Wess-Zumino-Witten model. The existence of a gap in the spin excitation has been considered to be a key to understanding high-$T_c$ superconductivity. This stimulated the study of one-dimensional (1D) electron systems some years ago. Recently, possibilities of superconductivity in quasi-1D systems have been suggested [1], which increases the importance of understanding the spin-gap phase in (quasi-)1D systems. Here we reconsider this problem in the 1D t-J model, which is the simplest such model but is not fully understood. The Hamiltonian of the 1D t-J model is written as $$H = -t\sum_{i,\sigma}\left(c^{\dagger}_{i\sigma}c_{i+1\sigma} + \mathrm{h.c.}\right) + J\sum_{i}\left(\mathbf{S}_i\cdot\mathbf{S}_{i+1} - \tfrac{1}{4}n_i n_{i+1}\right) \tag{1}$$ in the subspace without double occupancy. Generally, 1D electron systems belong to the universality class of the Tomonaga-Luttinger (TL) liquid [2,3], which is characterized by gapless charge and spin excitations and power-law decay of correlation functions. The phase diagram of the 1D t-J model was obtained by Ogata et al. using exact diagonalization [4]. They found the enhancement of the superconducting correlation ($K_c > 1$) and the phase separation ($K_c \to \infty$) for the large $J/t$ region. They also found a phase of singlet bound electron pairs in the very low density region, but could obtain no evidence for a spin-gap phase by using a finite-size scaling method at 1/3 filling. Hellberg and Mele studied this phase by using a Jastrow-type variational wave function [5]. In their approach, the variational parameter $\nu$ is related to $K_c$ as $K_c = 1/(2\nu + 1)$. They found that there exists a finite region where the optimized parameter takes the constant value $\nu = -1/2$ between the TL phase and the phase-separated state, and they interpreted this region as the spin-gap phase. Another variational wave function was proposed by Chen and Lee [6]. However, these authors did not discuss the detailed mechanism of the spin gap generation. One candidate mechanism of the spin gap generation is the attractive backward scattering (scattering between electrons with opposite momenta $(k_F, -k_F)$ and spins) [7,3]. In this case, the universality class of the transition is the $k=1$ SU(2) Wess-Zumino-Witten (WZW) model [8]. On the basis of this assumption, we determine the transition point with the singlet-triplet level crossing method [9,8,10] and we obtain the phase diagram (FIG. 1). Then we verify the consistency of our method, considering the ratio of the logarithmic correction terms. In general, the low-energy behavior of a 1D electron system is described by the U(1) Gaussian model (charge part) and the SU(2) sine-Gordon model (spin part) [3,11], $$H = H_c + H_s + \frac{2g_1}{(2\pi\alpha)^2}\int \mathrm{d}x\,\cos\sqrt{8}\phi_s. \tag{2}$$ Here $\alpha$ is a short-distance cutoff, $g_1$ is the backward scattering amplitude, and $$H_{\nu} = \frac{1}{2\pi}\int \mathrm{d}x\left[v_{\nu}K_{\nu}(\pi\Pi_{\nu})^2 + \frac{v_{\nu}}{K_{\nu}}(\partial_x\phi_{\nu})^2\right] \tag{3}$$ for $\nu = c, s$, where $\Pi_{\nu}$ is the momentum density conjugate to $\phi_{\nu}$, and $v_c$ and $v_s$ are the charge and spin velocities, respectively. The primary field of this model is $\exp[i\sqrt{2}(m_{\nu}\phi_{\nu} + n_{\nu}\theta_{\nu})]$, where the dual field is defined by $\partial_x\theta_{\nu} = \pi\Pi_{\nu}$.
In the TL phase ($g_1 > 0$), the parameters $K_s$ and $g_1$ are renormalized to $K_s^* = 1$ and $g_1^* = 0$, reflecting the SU(2) symmetry. First, let us consider the case without renormalization, $g_1 = 0$. The finite-size corrections to the energy and the momentum of (3) are described by conformal field theory (CFT) [12,13] with $c = 1$, where the central charge $c$ characterizes the universality class of the model. For the t-J model, $c = 1$, as shown rigorously at $J/t = 2$ [17] and numerically [4]. The combined use of CFT and the Bethe ansatz results gives a description of 1D electron systems [14,15,16,17]. The ground-state energy of the system under periodic boundary conditions is given by
$$E_0 = \epsilon_{\infty} L - \frac{\pi (v_c + v_s)}{6L},$$
where $L$ is the system size and $\epsilon_{\infty}$ is the bulk energy density. The excitation energy and momentum are related to the exponents as
$$\Delta E = \frac{2\pi}{L}\left(v_c x_c + v_s x_s\right), \quad (5)$$
$$\Delta P = \frac{2\pi}{L}\left(s_c + s_s\right) + 2k_F\left(2D_c + D_s\right), \quad (6)$$
where $k_F = \pi N/2L$ with electron number $N$, and the scaling dimensions and the conformal spins are defined by
$$x_{\nu} = \frac{1}{2}\left(K_{\nu} m_{\nu}^2 + \frac{n_{\nu}^2}{K_{\nu}}\right) + N_{\nu}^{+} + N_{\nu}^{-}, \qquad s_{\nu} = m_{\nu} n_{\nu} + N_{\nu}^{+} - N_{\nu}^{-}.$$
The variables $m_{\nu}$ and $n_{\nu}$ are fixed by the electron quantum numbers $\Delta N_c$, $\Delta N_s$, $D_c$, and $D_s$ [14]. Here $\Delta N_c$ is the change of the total number of electrons, and $\Delta N_s$ is the change of the number of down spins. $D_c$ ($D_s$) denotes the number of particles moved from the left charge (spin) Fermi point to the right one. $N_{\nu}^{\pm}$ characterizes simple particle-hole excitations near the right or left charge (spin) Fermi points. These quantum numbers are restricted by the selection rules under periodic boundary conditions [14],
$$D_c \equiv \frac{\Delta N_c + \Delta N_s}{2} \ (\mathrm{mod}\ 1), \qquad D_s \equiv \frac{\Delta N_c}{2} \ (\mathrm{mod}\ 1).$$
In the case of twisted boundary conditions, $c^{\dagger}_{j+L,\sigma} = e^{i\Phi} c^{\dagger}_{j\sigma}$, which is equivalent to a system where the flux $\Phi$ penetrates the ring [18], $D_c$ is modified to $D_c + \Phi/2\pi$. For the ground state $E_0$, we choose periodic boundary conditions ($\Phi = 0$) for $N = 4m + 2$ electrons and antiperiodic boundary conditions ($\Phi = \pi$) for $N = 4m$ electrons, with $m$ an integer. With this choice of boundary conditions, the ground state is always a singlet with zero momentum ($P_0 = 0$) [19,4]. In order to eliminate the contribution of the charge part and extract the singlet and the triplet excitations in the spin part ($x_s = 1/2$), we turn our attention to the following states: $(\Delta N_c, \Delta N_s, D_c, D_s) = (0, \pm 1, 0, 0)$ and $(0, 0, \mp 1/2, \pm 1)$ under twisted boundary conditions ($\Phi = \pi$ for $N = 4m + 2$, $\Phi = 0$ for $N = 4m$). We can identify these excitation spectra by using (5) and (6), but the momentum $P$ and the wave number $p$ are not always identical: there is a relation $P = p - \Phi N/L$ between them [20]. Next, we consider the renormalization ($g_1 \neq 0$). Under the change of the cutoff $\alpha \to e^{dl}\alpha$, the coupling constant $g_1$ and $K_s$ are renormalized as [21]
$$\frac{dy_0(l)}{dl} = -y_1^2(l), \quad (9a) \qquad \frac{dy_1(l)}{dl} = -y_0(l)\, y_1(l), \quad (9b)$$
where $y_1(l) = g_1/\pi v_s$ and $K_s = 1 + y_0(l)/2$. For the SU(2) symmetric case, $y_0(l) = y_1(l)$, with $y_0(l) > 0$, the scaling dimensions of the operators for the singlet and triplet excitations, $\sqrt{2}\cos\sqrt{2}\phi_s$ ($x_{ss}$) and $\sqrt{2}\sin\sqrt{2}\phi_s$, $\exp(\mp i\sqrt{2}\theta_s)$ ($x_{st}$), split logarithmically due to the marginally irrelevant coupling as [22]
$$x_{ss} = \frac{1}{2} + \frac{3}{4}\,\frac{y_0}{y_0 \log L + 1}, \quad (10a) \qquad x_{st} = \frac{1}{2} - \frac{1}{4}\,\frac{y_0}{y_0 \log L + 1}, \quad (10b)$$
where $y_0$ is the bare coupling and we have set $l = \log L$. This result is equivalent to that of the $k=1$ SU(2) WZW model [8]. Note that the ratio of the logarithmic corrections is given by Clebsch-Gordan coefficients. When $y_0 < 0$, $y_0(l)$ is renormalized to $y_0(l) \to -\infty$, and a spin gap appears. At the critical point ($y_0 = 0$) there are no logarithmic corrections in the excitation gaps. The physical meaning of this point is that the backward scattering coupling changes from repulsive to attractive.
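The cancellation exploited later in the consistency check can be verified directly from (10a) and (10b) as reconstructed above:
\[
\frac{x_{ss} + 3x_{st}}{4} = \frac{1}{4}\left[\left(\frac{1}{2} + \frac{3}{4}\,\frac{y_0}{y_0\log L + 1}\right) + 3\left(\frac{1}{2} - \frac{1}{4}\,\frac{y_0}{y_0\log L + 1}\right)\right] = \frac{1}{2},
\]
so the logarithmic corrections drop out of this particular average, which is why its extrapolation to 1/2 serves as a size-independent check of the universality class.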
And the SU(2) symmetry is enhanced at the critical point to the chiral SU(2)×SU(2) symmetry [8], since the spin degrees of freedom of the right and the left Fermi points become independent. Therefore, the critical point is obtained from the intersection of the singlet and the triplet excitation spectra [9,8,10]. Using this method, we can determine the critical point with high precision [10], since the remaining corrections come only from irrelevant fields with $x_s = 4$ [23,24]. Here we analyze the numerical results for the t-J model (1) with the method explained above. We diagonalize $L = 8$-$30$ systems using the Lanczos and Householder methods. An example of the data ($L = 16$, $n \equiv N/L = 1/2$) is shown in FIG. 2. Since the critical point is almost independent of the system size, as shown in FIG. 3, the phase diagram can be constructed without extrapolation. Our result is similar to Hellberg and Mele's in the low density region, but the spin-gap phase spreads extensively toward the high density region. We are not able to answer whether the spin gap survives in the $n \to 1$ limit or not, because the numerical results become unstable in the high density region, where the phase boundary is close to the phase-separated state. In the TL phase, the singlet and triplet superconducting correlations (SS, TS) have the same critical exponent $1/K_c + 1$ [3], while with a spin gap, TS decays exponentially and SS is enhanced, decaying with the exponent $1/K_c$, so that SS is dominant in the spin-gap region. In order to check the consistency of our argument, we calculate the ratios of the logarithmic corrections and the scaling dimensions for the singlet and the triplet excitations from (5) and (10). Here the spin wave velocity is given by [25]
$$v_s = \lim_{L \to \infty} \frac{E(S=1,\, k = 2\pi/L) - E_0}{2\pi/L},$$
which is extrapolated by the function $v_s(L) = v_s(\infty) + A/L^2 + B/L^4$. These corrections are explained by the irrelevant fields. The average of the renormalized scaling dimensions, $(x_{ss} + 3x_{st})/4$, which eliminates the logarithmic corrections, and its finite-size effect are shown in FIG. 4 and FIG. 5, respectively. The extrapolated data become 1/2 with an error of less than 0.2 %. Finally, we discuss the reason why the previous studies have estimated the spin-gap region to be much narrower than the real one. From the two-loop renormalization group equation of the $k=1$ SU(2) WZW model [26,27,28], the spin gap $\Delta E$ grows singularly as
$$\Delta E \propto \sqrt{|y_0|}\, \exp\left(-\frac{1}{|y_0|}\right), \quad (13)$$
where $y_0 \propto J_c - J$; therefore it is very difficult to find the critical point using the conventional finite-size scaling method. Note that (13) is the same asymptotic behavior as the spin gap of the negative-U Hubbard model at half-filling, which can be obtained from the charge gap at positive U [29] and the transformation between the charge and the spin degrees of freedom [30]. In conclusion, we studied the spin-gap phase in the 1D t-J model, considering the backward scattering effect in the TL liquid by the renormalization group analysis. Using the twisted boundary conditions, we can extract the spin excitation spectra and determine the critical point as in spin systems. The phase boundary is determined by the point where the backward scattering changes from repulsive to attractive. The spin-gap phase obtained in this way is unexpectedly large, and the consistency of the argument is also checked. This method can be applied to other models of 1D electron systems if the SU(2) symmetry is assured. This work is partially supported by a Grant-in-Aid for Scientific Research (C), No. 09740308, from the Ministry of Education, Science and Culture, Japan. A.K.
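The level-crossing determination of the critical point lends itself to a few lines of post-processing once the singlet and triplet excitation gaps have been computed (e.g., by Lanczos diagonalization) on a grid of J/t values. A minimal Python sketch; the gap data below are an invented toy model, not results from this work:

import numpy as np

def crossing_point(j_grid, gap_singlet, gap_triplet):
    # The transition point is taken as the zero of
    # d(J) = gap_singlet(J) - gap_triplet(J), located by linear
    # interpolation between the grid points where d changes sign.
    d = np.asarray(gap_singlet) - np.asarray(gap_triplet)
    sign_change = np.where(np.diff(np.sign(d)) != 0)[0]
    if sign_change.size == 0:
        raise ValueError("no singlet-triplet crossing in this J range")
    i = sign_change[0]
    t = d[i] / (d[i] - d[i + 1])
    return j_grid[i] + t * (j_grid[i + 1] - j_grid[i])

# Invented toy gaps in units of 2*pi*v_s/L.
j_grid = np.linspace(2.0, 3.5, 16)
gap_s = 0.5 + 0.10 * (j_grid - 2.8)   # singlet gap
gap_t = 0.5 - 0.05 * (j_grid - 2.8)   # triplet gap
print(f"estimated J_c/t = {crossing_point(j_grid, gap_s, gap_t):.3f}")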
is supported by JSPS Research Fellowships for Young Scientists. The computation in this work was done using the facilities of the Supercomputer Center, Institute for Solid State Physics, University of Tokyo. [Endnote fragment: the compressibility was calculated in a way that has less size dependence near the phase separation; see M. Nakamura and K. Nomura, cond-mat/9702126, to be published in Phys. Rev. B.]
2019-04-14T02:16:48.752Z
1997-08-27T00:00:00.000
{ "year": 1997, "sha1": "e82a8d84b849a7fbe3a21e67ed71131331f74fda", "oa_license": null, "oa_url": "http://arxiv.org/pdf/cond-mat/9708204", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "e82a8d84b849a7fbe3a21e67ed71131331f74fda", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics" ] }
11213256
pes2o/s2orc
v3-fos-license
A Code Level Based Programmer Assessment and Selection Criterion Using Metric Tools This study presents a code level measurement of computer programs developed by computer programmers using a Chidamber and Kemerer Java Metric (CKJM) tool and the Myers Briggs Type Indicator (MBTI) tool. The identification of potential computer programmers using personality trait factors alone does not seem to be the best approach without a code level measurement of the quality of their programs. Hence the need to evolve a metric approach which measures both the personality traits of programmers and the code level quality of the programs they develop. This is the focus of this study. In this experiment, a set of Java based programming tasks was given to 33 student programmers who could confidently use the Java programming language. The codes developed by these students were analyzed for quality using a CKJM tool. Cohesion, coupling, and number of public methods (NPM) metrics were used in the study. These three metrics were chosen from the CKJM suite because they are useful in measuring well designed code. By examining the cohesion values of classes, high cohesion (in the range [0,1]) together with low coupling implies well designed code. Also, the number of public methods (NPM) in a well designed class is always less than 5 when cohesion is in the range [0,1]. Results from this study show that 19 of the 33 programmers developed good and cohesive programs while 14 did not. Further analysis revealed the personality traits of the programmers and the number of good programs written by them. Programmers with Introverted Sensing Thinking Judging (ISTJ) traits produced the highest number of good programs, followed by Introverted iNtuitive Thinking Perceiving (INTP), Introverted iNtuitive Feeling Perceiving (INFP), and Extroverted Sensing Thinking Judging (ESTJ) types. Keywords: computer programs; program quality; class cohesion; programmers; personality traits
INTRODUCTION Programming is a challenging task, which requires appropriate skills as well as appropriate temperamental suitability. Among the skills often demonstrated by professional and successful programmers are logical and analytical thinking, problem understanding and interpretation, detailed understanding of a programming language's syntax, and good communication ability. Capretz and Ahmed [1] identified some of the skills required in computer programming to include strong analytical and problem solving skills, communication skills, interpersonal skills, the ability to work independently, active listening skills, innovative skills, organizational skills, openness and adaptability skills, fast learning skills, and team playing skills. Apart from possessing these skills, success as a computer programmer may also be influenced by personality types such as Extroversion (E), Introversion (I), Sensing (S), iNtuition (N), Thinking (T), Feeling (F), Judging (J), and Perceiving (P) (Okike and Olanrewaju [2]; Capretz and Ahmed [1]; Capretz [5]; Da Cunha and Greathead [3]; Tueley and Bieman [6]; Bentley [4]). Furthermore, the reliability of computer software depends on the code level quality of the program, which in turn results from the programmer's coding skill. For this reason, it becomes necessary to evolve a code level measurement of program quality alongside individual programmer personality traits in the selection process of career computer programmers. Matching coding skill with personality traits will then enable the identification and selection of good computer programmers. This is the motivation for this paper.
A. Problem Statement Programmers are widely perceived as Introverts, Sensors, and Thinkers (Capretz and Ahmed [1]), as Sensors and iNtuitives (Da Cunha and Greathead [3]), or as Introverted iNtuitive Thinking Judging (INTJ) types (Tieger [20]). These assessments are purely based on personality trait factors, without recourse to the code level quality of the programs or resulting software. The present study seeks to bridge the gap between programmer personality traits and the quality of programs written by programmers by making use of a two level metric based on both personality traits and code level quality to assess and select competent programmers who create quality software programs. The Quality of a Program (QoP) in this study is measured in terms of the Cohesiveness of the Program module (CoPm), Coupling Between Object classes (CBO), and Number of Public Methods (NPM). In software development, high cohesion (range [0,1]) and low coupling imply good design. In addition, the number of public methods (NPM) in a well designed class is always less than 5 when cohesion is high (range [0,1]) and coupling is low [17]. The cohesion degree of a component is high if it implements a single logical function, and cohesive components tend to have high maintainability and reusability (Okike [7], Badri [8], Bieman and Kang [9]).
B. Study Objectives The main objective of this study is to create a two level metric which is based on programmers' personality traits and the code level quality of program modules. This instrument should be useful in selecting programmers who create quality programs. Specifically, the objectives of this study are to:
• investigate the personality traits of skilled programmers using the Myers Briggs Type Indicator (MBTI)
• investigate the code level quality of programs written by programmers using the Chidamber and Kemerer Java Metric tool (CKJM)
• suggest the personality type indicator(s) of competent programmers.
C. Research Questions The following research questions are investigated in this study:
• What are the personality traits of good computer programmers?
• Which personality traits designed quality (cohesive) programs?
D. Research Hypotheses The following hypotheses are tested in this study:
• H1: Introverts design better codes than extroverts in terms of class cohesion. H0: Introverts do not design better codes than extroverts.
• H1: Sensors design better codes than intuitives in terms of class cohesion. H0: Sensors do not design better codes than intuitives.
• H1: Thinkers design better codes than feelers in terms of class cohesion. H0: Thinkers do not design better codes than feelers.
• H1: Judges design better codes than perceivers in terms of class cohesion. H0: Judges do not design better codes than perceivers.
• H1: There is significant correlation between personality traits and code quality. H0: There is no correlation between personality traits and code quality.
The rest of this paper is divided into seven sections. Section 2 is a presentation of the conceptual model of the study. Section 3 is the literature review. Section 4 explains the research methodology. Section 5 presents the results of this study with appropriate discussion. Section 6 is the conclusion, while Section 7 is the list of references.
II. CONCEPTUAL FRAMEWORK The framework for this study is based on the Capretz and Ahmed [1] model (mapping programmers and skills to personality type, as shown in Figure 1) and the Okike [7] metric calculation process using the Chidamber and Kemerer metric tool (as shown in Figure 2). Arising from these two models is a hybrid adapted from the two to achieve the objectives stated in section 1.3. III.
LITERATURE REVIEW Code level measurement of program quality has been studied using class cohesion, coupling, and other metrics from the Chidamber and Kemerer metric suite [7,8,9,16,17,19]. High cohesion (range [0,1]) and low coupling imply good design. The term cohesion is defined as the "intramodular functional relatedness" in software [22]. Chidamber and Kemerer [19] first defined a cohesion measure for object oriented software: the Lack of Cohesion in Methods (LCOM) metric. Okike [7] studied class cohesion measurement in object oriented systems using the Chidamber and Kemerer metric suite with Java as a case study. The study involved 6 different types of Java based industrial systems with over 3000 classes. The result of the study showed that the Lack of Cohesion in Methods (LCOM) metric defined by Chidamber and Kemerer was suitable for measuring class cohesion in the studied systems. In addition, the study showed that the LCOM metric satisfies measurement theory conditions, and although the metric is prone to outliers, a new metric was defined which normalizes the LCOM metric such that outliers are eliminated. Furthermore, a pedagogical evaluation and discussion of the Lack of Cohesion in Methods metric using field experiments is presented in Okike [16], while a normalized Lack of Cohesion in Methods metric is presented in Okike [17]. In both studies, the usefulness of the LCOM metric alongside Coupling Between Object classes (CBO) and Number of Public Methods (NPM) in the evaluation of well designed classes was clearly established. Hence, by measuring cohesion using the LCOM, CBO, and NPM metrics in this study, well designed codes by individual programmers were identified. Furthermore, the Myers Briggs Type Indicator (MBTI) has been widely used by researchers to measure the personality traits of individuals in various capacities and dimensions. Okike and Olanrewaju [2] investigated the problem solving and decision making skills of 30 student programmers using the MBTI tool. A decision problem representing a programming task was given to the students. The students were expected to produce computer programs which solve the given problem. The MBTI, an automated personality traits questionnaire based tool, was administered to the students. The responses from the students were automatically analyzed in order to identify the personality traits of each student. The program code or codes written by each student were also analyzed using a Chidamber and Kemerer Java Metric (CKJM) tool, and the results were matched with the corresponding MBTI to determine the problem solving and decision making skill of each programmer by looking at the quality of the resulting program code. The study concluded that, among the various personality traits, the Introverted Sensing Thinking Judging (ISTJ) type appears to have the best problem solving and decision making skills, followed by Introverted Intuitive Feeling Judging (INFJ), compared to other personality traits. Okike [10] investigated the role of personality traits in students' achievements in computing science. Results from the study suggest that the strongest motivator for a choice of career in the computing sciences is the desire to become a computing professional rather than a student's inherent temperamental ability (personality traits). Equally, students' achievements in the computing sciences do not depend only on personality traits, motivation for choice of course of study, and reading habits, but also on the use of Internet based sources more than on going to the university library
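For concreteness, the original Chidamber and Kemerer LCOM can be computed directly from the sets of instance variables each method uses: LCOM = |P| - |Q| when positive, where P is the number of method pairs sharing no instance variable and Q the number of pairs sharing at least one, and 0 otherwise. A minimal Python sketch (the toy class data are invented for illustration, not taken from the studies cited above):

from itertools import combinations

def lcom(method_attrs):
    # method_attrs maps each method name to the set of instance
    # variables it uses; compare every pair of methods.
    p = q = 0
    for a, b in combinations(method_attrs.values(), 2):
        if set(a) & set(b):
            q += 1
        else:
            p += 1
    return max(p - q, 0)

# Toy class: two methods share 'balance'; one touches only 'log'.
toy = {"deposit": {"balance"}, "withdraw": {"balance"}, "audit": {"log"}}
print(lcom(toy))  # P = 2, Q = 1 -> LCOM = 1 (some lack of cohesion)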
to use book materials available in all areas. Okike [11] studied the bipolar factor and systems analysis skills of 60 student analysts at the University of Botswana. The study evolved a new approach to construct a type matrix from a personality type frequency matrix. This approach was used to select the best systems analysts based on personality trait factors. Bentley [4] reviewed personality traits and programmer characteristics and presented some of the traits that can be indicators of success or failure in computer programming. Weinberg [13] explored the psychology of computer programming and noted that there could be variations in individual productivity due to personality type factors. Capretz [5] investigated the personality types of software engineers based on the combined Jung and Myers Briggs bipolar factors. The study suggested that there were more Introverted Sensing Thinking Judging (ISTJ) software engineers than other types in his data. Chung [15] studied the cognitive abilities involved in computer programming using 523 Form Four secondary school students in Hong Kong. Tests administered to the students included mathematics, space, symbols, hidden figures, and programming ability. The results of this study suggested that performance in the mathematics and spatial tests were significant predictors of programming ability. Similarly, Bishop-Clark and Wheeler [14] investigated the Myers-Briggs personality type and its relationship to computer programming. Using 114 students, the study sought to know if college students with certain personality types performed better than others in an introductory programming course. In this study, results suggested that sensing students performed significantly better than intuition students on programming assignments, while judging students performed better than perception students on computer programs, although the results were not statistically significant.
IV. STUDY METHODOLOGY A set of Java based programming tasks was given to 33 student programmers who could use the Java programming language confidently. A Chidamber and Kemerer Java Metric tool (CKJM) [18] was used to analyse the quality of the program codes written by each participating programmer. In addition, the Myers Briggs Type Indicator (MBTI) was used to measure the personality traits of each participating programmer. In this way, a two level metrics based approach was evolved, namely:
Level 1: Human metric tool (MBTI)
Level 2: Code level metric tool (CKJM)
The Human metric tool is based on the Myers Briggs Type Indicator tool. Each participating programmer completed and submitted the automated MBTI questionnaire and was subsequently scored by the tool as to the appropriate personality trait.
At level 2, the programmers were given the same programming task, and each of them developed appropriate Java codes. The codes were evaluated automatically by applying the CKJM tool. The CKJM tool calculates the following metrics for each program class when used in any experiment [18]:
• WMC: Weighted Methods per Class
• DIT: Depth of Inheritance Tree
• NOC: Number of Children
• CBO: Coupling Between Object classes
• RFC: Response For a Class
• LCOM: Lack of Cohesion in Methods
• Ca: Afferent coupling
• NPM: Number of Public Methods for a class
For the purpose of this paper, the LCOM, CBO, and NPM metrics are mainly considered in the assessment of code quality. This follows from earlier research, as shown in [7,16,17]. High cohesion (range [0,1]) and low coupling imply good design. Also, the number of methods n in a well designed class should be less than 5 [17, p. 22]. Using the MBTI tool, the personality characteristics of the programmers were established as shown in column 2, while the corresponding program quality characteristics of the program codes written by the programmers are shown in columns 3-11, as measured by the Chidamber and Kemerer Java Metric (CKJM) tool [18]. A comprehensive discussion of the Chidamber and Kemerer suite of metrics is presented in [19]. Furthermore, a pedagogical evaluation and discussion of the usefulness of Chidamber and Kemerer's metric suite, particularly the Lack of Cohesion in Methods (LCOM) metric, is presented in [7,16,17]. Details about each of these metrics have been discussed in [7,18,19].
A. Bipolar Factor Characteristics of Candidates Table 2 below presents the personality frequency matrix of the participating programmers [11]. From this table, the dominant personality traits are Thinking (T) = 21, Judging (J) = 21, Sensing (S) = 20, and iNtuition (N) = 19. Arising from Table 2, a type matrix table is presented in Table 3. Diagonals of type matrix tables must sum up to the total number of participants [11]. Considering the bipolar factors (Extroversion (E), Introversion (I), Sensing (S), iNtuition (N), Thinking (T), Feeling (F), Judging (J), and Perceiving (P)), the numbers of well designed program codes are shown in Table 5 below:
Table 5. Good programs by bipolar factor
Sensing (S): 10
iNtuition (N): 8
Thinking (T): 10
Feeling (F): 8
Judging (J): 12
Perceiving (P): 6
Table 5 also provides answers to the research questions (bullets 1 and 2) of this study. From this study, introverts do not appear to have better code design ability than extroverts. In fact, extroverts could be better programmers than introverts (hypothesis bullet 1). Sensors could design better codes than iNtuitives (hypothesis bullet 2). Thinkers could design better codes than feelers (hypothesis bullet 3). Judges could be better code designers than perceivers (hypothesis bullet 4). The study suggests that there is a significant relationship between personality traits and code quality (hypothesis bullet 5). This result is also supported in [2].
VII. CONCLUSION In this study, a model for measuring both the personality traits of individual programmers and the quality of the programs developed by these programmers at two levels has been presented. The model could be used when selecting competent computer programmers, since the quality of a well designed computer program can be measured by the level of cohesiveness of the program module or class [7,8,9,16,18,19]. In addition, good computer programmers appear to have strong personality traits such as Judging, Extroversion, Sensing, Thinking, iNtuition, and Feeling, and could have Introversion and Perceiving abilities. This conclusion supports previous studies as presented in [1,2,3,20]. Further details about the peculiarities of these traits are fully discussed in [22].
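The design rule used above (high cohesion, low coupling, fewer than 5 public methods) is straightforward to apply to CKJM output programmatically. A Python sketch; it assumes ckjm's usual one-line-per-class, whitespace-separated output with the columns in the order listed above (class name, then WMC, DIT, NOC, CBO, RFC, LCOM, Ca, NPM), which should be verified against the ckjm build in use, and the CBO cut-off of 5 is an illustrative choice rather than a value stated in this paper:

def parse_ckjm_line(line):
    # Assumed column order: class WMC DIT NOC CBO RFC LCOM Ca NPM.
    fields = line.split()
    keys = ("wmc", "dit", "noc", "cbo", "rfc", "lcom", "ca", "npm")
    return fields[0], dict(zip(keys, (int(v) for v in fields[1:9])))

def is_well_designed(m, max_cbo=5):
    # Paper's rule of thumb: LCOM in [0, 1] (high cohesion), low
    # coupling, and fewer than 5 public methods.
    return m["lcom"] <= 1 and m["cbo"] <= max_cbo and m["npm"] < 5

name, metrics = parse_ckjm_line("BankAccount 4 1 0 2 11 0 1 3")
print(name, "well designed?", is_well_designed(metrics))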
Using the CKJM tool, the following metrics were computed for each class or program module written by a programmer: Weighted Methods per Class (WMC), Depth of Inheritance Tree (DIT), Number of Children (NOC), Coupling Between Object classes (CBO), Response For a Class (RFC), Lack of Cohesion in Methods (LCOM), Afferent Coupling (Ca), and Number of Public Methods (NPM).
Individuals with Introverted Sensing Thinking Perceiving (ISTP) and Introverted iNtuitive Feeling Perceiving (INFP) traits are likely to have average problem solving and decision making skills, while individuals with Extroverted Sensing Feeling Perceiving (ESFP) and Extroverted Sensing Thinking Perceiving (ESTP) traits appear to have poor problem solving and decision making skills.
Table 1 below shows the results of the experiment described above in Section 3. The Myers Briggs Type Indicator (MBTI) of each programmer and the Lack of Cohesion in Methods (LCOM) metric of the program classes are considered together.
[Table I. Design and programming ability of students using the Chidamber and Kemerer metric suite and MBTI; adapted from [21].]
[Table IV. Personality type and good program design.]
[Table V. Good programs by bipolar factor.]
2017-05-03T10:42:01.645Z
2014-01-01T00:00:00.000
{ "year": 2014, "sha1": "d619c696ab2a5ee707cc5d6ecafec0fb6f83df84", "oa_license": "CCBY", "oa_url": "http://thesai.org/Downloads/Volume5No11/Paper_16-A_Code_Level_Based_Programmer_Assessment_and_Selection.pdf", "oa_status": "HYBRID", "pdf_src": "Anansi", "pdf_hash": "d619c696ab2a5ee707cc5d6ecafec0fb6f83df84", "s2fieldsofstudy": [ "Computer Science" ], "extfieldsofstudy": [ "Computer Science" ] }
271204901
pes2o/s2orc
v3-fos-license
Toxicokinetics of a developmental toxicity test in zebrafish embryos and larvae: Relationship with drug exposure in humans and other mammals Graphical abstract Introduction Zebrafish (Danio rerio) embryos and larvae have been increasingly explored as alternative model organisms for in vivo toxicity screening in the early drug discovery process because of their low cost, the small amounts of drug required, and the high throughput (Chng et al., 2012; Eimon and Rubinstein, 2009; Gibert et al., 2013; MacRae and Peterson, 2015). They enable the continuous monitoring of developmental morphological alterations and exhibit rapid organogenesis within 72 h post fertilization (hpf), making them suitable for screens for developmental toxicity (Stallman Brown et al., 2012). Zebrafish have high genetic homology to humans (70 % of their genes have identifiable human orthologs) and also have important similarities in organogenesis and functional mechanisms (Howe et al., 2013; McGonnell and Fowkes, 2006). In addition, international momentum to eliminate animal testing in chemical risk assessment is growing every year. In particular, in the field of cosmetics, animal testing has been banned in the European Union (EU) since 2013 (EU, 2009), and this ban has spread to more and more countries each year (Burbank et al., 2023). The use of zebrafish embryos and larvae is in line with the 3Rs (reduction, refinement, and replacement) approach to animal use for scientific purposes, because in Europe they are considered non-protected animals until the stage of independent feeding at 120 hpf, on the basis of a directive on the protection of animals used for scientific purposes (EU Directive 2010/63/EU) (European Commission, 2010). Therefore, the use of zebrafish embryos is highly attractive from the perspective of animal welfare, which has become particularly important in recent years. In 2020, the International Council for Harmonization of Technical Requirements for Pharmaceuticals for Human Use (ICH) revised the guideline for reproductive and developmental toxicity studies (S5) to allow the use of alternatives to animal studies in mammals, such as rats and rabbits, in studies of the effects of drugs on embryo/fetal development (EFD) (International Conference on Harmonization, 2020). ICH S5 lists 29 positive-control reference drugs that have been shown to induce morphological abnormalities or embryo-fetal lethality in nonclinical studies under conditions of no apparent maternal toxicity, or in humans. The positive-control reference drugs are used for corroboration to confirm the eligibility of an EFD test alternative. Several validations for the detection of developmental toxicity in zebrafish embryos have been published and have shown high concordance rates (81 %-90 %) with the results from in vivo studies in mammals (Brannen et al., 2010; Selderslaghs et al., 2009, 2012; Song et al., 2021; Weiner et al., 2024; Yamashita et al., 2014), so their use in drug discovery is accepted.
However, several problems remain in more accurately predicting the developmental toxicity of various drugs to humans and other mammals. Most developmental toxicity assessments that use phenotype-based methods are based on the drug concentration in aqueous solution (Cw) at a nominal concentration, without measurement of the actual concentration (Brannen et al., 2010; Selderslaghs et al., 2009, 2012; Yamashita et al., 2014). In general, the more lipophilic a drug is (i.e., the higher the logarithm of its partition coefficient, logP), the more it accumulates in fish (Arnot and Gobas, 2006). Therefore, Cw does not accurately reflect the drug concentrations directly related to the onset of developmental toxicity. To date, few studies have measured the drug concentration in whole embryos or larvae (Ce) to explain the relationship between drug concentration and toxicity, and few reports have investigated the relationship between Ce and Cw at each timepoint according to the degree of lipophilicity (Weiner et al., 2024; Ball et al., 2014; Diekmann and Hill, 2013; Huang et al., 2010). A further problem concerns bioaccumulation, which refers to the accumulation of chemicals in an organism via any route, including inhalation, ingestion, and direct contact (Sanz-Landaluze et al., 2015). The bioaccumulation of chemicals in typical model fish, including zebrafish, can be explained by two first-order kinetics processes, uptake and excretion, based on a one-compartment model (OECD, 2012; a minimal sketch of this model is given after the next subsection), and the kinetics parameters and bioconcentration factors for zebrafish embryos or larvae calculated on the basis of a first-order model have been reported (Sanz-Landaluze et al., 2015). Although permeability, bioaccumulation, and the temporal changes in Ce during zebrafish developmental toxicity testing may differ depending on the logP value of the drug, there are no reported experimental Ce data for each timepoint under varying logP values and under the same exposure conditions as in the test. Another problem with using zebrafish embryos and larvae for alternative developmental toxicity testing is that the relationship between the amount of drug exposure required to cause developmental toxicity in zebrafish embryos or larvae and that required in mammals is unknown. The ICH S5 guideline states the need to clarify the relationship between the concentrations used in the alternative methods and the amount of drug exposure at which toxicity occurs in the animal species in which the predictions are made (ICH, 2020). Toxicokinetics studies in mammals examine the dose and systemic exposure of a drug in animals and their relationship to the time course of exposure; they then relate the exposure information to the toxicity findings to help assess clinical safety in humans (Hood, 2012). Generally, in toxicokinetics studies in mammals or in human clinical studies, the blood concentrations of a drug are measured over time to determine the systemic exposure, and the maximum plasma concentration (Cmax) or area under the curve (AUC) is calculated as a parameter that is related to systemic exposure and can be used to evaluate toxicity (Hood, 2012). However, zebrafish embryos and larvae are so small that blood concentration measurement is technically difficult, and morphological observations must be made under the microscope. For these reasons, there
have been limited reports of drug exposure in zebrafish embryos and larvae and its relationship with exposure in humans and other mammals (Weiner et al., 2024). Therefore, our aim here was to investigate the relationship between the Cw and Ce of developmental toxicity agents, and between drug exposure in zebrafish and in humans or other mammals, to probe the applicability of the zebrafish developmental toxicity test as an alternative method of EFD testing. First, we used developmental toxicity studies to confirm the concentration-response relationships of various drugs in zebrafish embryos and larvae. After that, to examine the temporal changes in drug concentrations in zebrafish embryos and larvae, we measured Ce every 24 hpf and examined the relationship among Ce, Cw, and lipophilicity at each time point of exposure. On the basis of that relationship, the Ce values of the 21 ICH S5 positive-control reference drugs at the no observed effect concentration (NOEC) were estimated, and the area under the Ce-time curve in zebrafish (zAUC) for each drug was calculated and compared with the AUC at the no observed adverse effect level (NOAEL) in rats and rabbits and at the effective dose in humans, as described in ICH S5. Test organisms and collection of fertilized eggs Wild-type zebrafish (Danio rerio, NIES-R strain) were obtained from the National Institute for Environmental Studies (Tsukuba, Japan) and were bred at the breeding facility of the Chemicals Evaluation and Research Institute, Japan (Kurume, Japan). All experiments using the zebrafish were done according to EU Directive 2010/63/EU for animal experiments (European Commission, 2010). Parent zebrafish were bred in dechlorinated tap water (tap water passed through a cylinder filled with activated carbon to remove chlorine and for aeration) at a temperature of 26 ± 1 °C with a 16-h light/8-h dark lighting cycle. The zebrafish were fed with recently hatched (<24 h old) brine shrimp (Artemia) from Great Salt Lake (EGGS-90, Kitamura, Kyoto, Japan) two or three times per day. For the experiment using fertilized eggs, adult male fish (two per container) were placed in glass containers, each of which contained an individual adult female fish, and mated to obtain fertilized eggs. After the fertilized eggs were collected, those at a normal developmental stage with no morphological abnormalities were selected under a stereomicroscope SMZ800 (Nikon, Tokyo, Japan) and used in the experiments.
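For reference, the one-compartment bioconcentration model mentioned in the Introduction (OECD, 2012) has a simple closed form under first-order uptake and elimination. A minimal Python sketch; the rate constants are illustrative placeholders, not values fitted to the present data:

import math

def ce_one_compartment(t_h, cw, k1, k2):
    # dCe/dt = k1*Cw - k2*Ce with Ce(0) = 0 gives
    # Ce(t) = (k1/k2)*Cw*(1 - exp(-k2*t)); the steady-state
    # bioconcentration factor is BCF = k1/k2.
    return (k1 / k2) * cw * (1.0 - math.exp(-k2 * t_h))

# Illustrative: Cw = 10 mg/L, k1 = 0.5 L/kg/h, k2 = 0.05 1/h.
for t in (24, 48, 72, 96, 120):
    print(t, round(ce_one_compartment(t, 10.0, 0.5, 0.05), 1))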
Developmental toxicity test We did not test all 29 positive-control reference drugs suggested by the ICH S5 guideline, only 22, plus five other well-known teratogenic drugs in zebrafish. Table 1 lists the logarithm of the n-octanol-water distribution coefficient at pH 7 (logD) and the nominal concentrations of each drug. Preliminary studies were conducted to determine the nominal test concentrations under the same exposure conditions as those used in the other tests in the study. The upper limit of concentration was set at 10,000 µM. The number of concentration levels and the geometric ratio of concentrations were not fixed, but the concentrations were set to obtain 100 % or 0 % developmental toxicity as quickly as possible. Aqueous drug solutions at each concentration were prepared by using reconstituted water (ISO 6341-1982) (OECD, 1992) to contain a final concentration of 0.5 % (v/v) dimethyl sulfoxide (DMSO, Nacalai Tesque, Kyoto, Japan). The prepared aqueous drug solution was added to polystyrene 24-well plates (Sumitomo Bakelite, Tokyo, Japan) at 2 mL per well, and fertilized zebrafish eggs were exposed in 12 replicates (1 egg/well) between 5 and 120 hpf. During the exposure period, the well plates were kept in an incubator (MIW-450 V, AS ONE CORPORATION, Osaka, Japan), without renewal of the aqueous solution, under a 14-h light/10-h dark lighting cycle at 28 ± 1 °C. After exposure, tricaine methane sulfonic acid (MS-222, Sigma-Aldrich, St. Louis, MO, USA) was added at a final concentration of 0.03 %, and the anesthetized larvae were observed under an inverted microscope (CKX53, Olympus, Tokyo, Japan) to assess the items shown in Table 2, according to the criteria reported by Yamashita et al. (2014). The NOEC was defined as the maximum concentration at which no morphological or functional abnormality was observed in any individual organism. If no morphological or functional abnormalities were observed at the concentrations set near the upper limit concentration or the water solubility limit, those concentrations were used as the NOEC. Evaluation of developmental toxicity in zebrafish For the ICH S5 positive-control reference drugs listed in Table 1, the presence or absence of developmental toxicity was classified on the basis of findings observed in developmental toxicity studies using zebrafish embryos and larvae. The methods of scoring the toxicity and determining a drug as positive or negative were the same as those reported by Yamashita et al. (2014). For each parameter in each individual, a morphology or function listed in Table 2 without abnormality was assigned a score of zero, and an observed abnormality was assigned a score of 10. The mean total morphological score at the highest concentration at which the survival rate was 50 % or higher was regarded as MS50. When the MS50 value of a tested drug was 10 or higher, the drug was judged as "positive" for developmental toxicity in zebrafish. The results of the determination were confirmed to be consistent with the developmental toxicity in humans and mammals.
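The MS50 determination just described reduces to a short calculation over the per-individual scores. A Python sketch, with invented sample data (abnormality scores are multiples of 10, one abnormality = 10 points):

def ms50(results):
    # results: {concentration: (survival_rate, [total morphological
    # score per surviving individual])}. MS50 is the mean total score
    # at the highest concentration with survival >= 50 %; the drug is
    # called positive when MS50 >= 10.
    eligible = [c for c, (surv, _) in results.items() if surv >= 0.5]
    top = max(eligible)
    scores = results[top][1]
    return sum(scores) / len(scores)

results = {10.0: (1.0, [0, 0, 10]), 30.0: (0.75, [10, 20, 10]), 100.0: (0.2, [40])}
value = ms50(results)
print(round(value, 1), "positive" if value >= 10 else "negative")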
Exposure to test drugs and preparation of analytical samples for measuring drug concentration in zebrafish embryos and larvae The drugs used for measuring drug concentration in zebrafish embryos and larvae, and their nominal concentrations in aqueous solution, are shown in Table S1. Collection of fertilized zebrafish eggs and preparation of aqueous drug solutions were performed in the same manner as in the developmental toxicity test. The prepared aqueous drug solutions were added to 500-mL or 1000-mL glass containers, into which the required number of 5-hpf fertilized zebrafish eggs (density: ≤1 egg/mL) was placed to start exposure. During the exposure period, the glass containers were kept in an incubator (MIW-450 V) under a 14-h light/10-h dark lighting cycle at 28 ± 1 °C. To maintain the drug concentration in aqueous solution as closely as possible, the aqueous solution was freshly prepared and renewed at 48 hpf for testosterone and every 24 hpf for diethylstilbestrol. For the other drugs, the aqueous solutions were not renewed because no decrease in concentration was observed in our preliminary study. Exposed embryos or larvae (three to 100 embryos or larvae per replicate, three replicates per level per time point) were sampled and analyzed every 24 hpf up to a maximum of 120 hpf. The chorions of embryos were removed with forceps before measurement of the Ce. The dechorionated embryos (24 hpf) or the larvae (from 48 hpf onwards) were individually transferred through four beakers (for about 10 s per beaker), each containing 500 mL of fresh dechlorinated tap water, to remove chemical residues on the body surface. After that, the embryos or larvae were homogenized with a silicone pestle in a 1.5-mL sampling tube containing a mixed solvent of the same composition as the eluent used to extract the test compound in liquid chromatography. The mixture was centrifuged at 20,000×g for 10 min at 10 °C in a refrigerated centrifuge (CR21N, Hitachi Koki, Tokyo, Japan). The supernatant was collected in a volumetric flask. The extraction procedure was performed twice, and the supernatants from the two batches were mixed and brought up to a volume of 1 mL or 2 mL with a mixed solvent of the same composition as the eluent. They were then filtered through a membrane filter with a 0.2-μm pore size (Millex-LG, Merck KGaA, Darmstadt, Germany) to prepare the analytical samples. In these experiments, the Cw was also measured. For testosterone, samples of the fresh aqueous solutions were taken at the start of exposure and at the time of renewal at 48 hpf, and samples of the old solution were taken before renewal at 48 hpf and at the end of exposure. For diethylstilbestrol, samples of the newly prepared aqueous solution and of the old aqueous solution before renewal were collected, two aliquots of each. For drugs other than testosterone and diethylstilbestrol, aqueous solutions were collected at the start and end of exposure. The aqueous drug solutions were diluted with a solution of the same composition as the eluent for each drug to be within the concentration range of the calibration curve. They were then used as analytical samples.
Quantification of drug concentration Preliminary studies confirmed matrix effects (a loss in analytical response) derived from zebrafish embryos or larvae in the analysis of some drugs. For drugs for which a matrix effect was confirmed in the preliminary studies, a calibration curve was made by preparing standard solutions in control or vehicle control extract that had been prepared in the same way, and had the same matrix content, as the analytical samples. For drugs for which it was confirmed that the matrix had no effect on the analytical results, a calibration curve was made by using a standard solution that did not contain the matrix. A calibration curve was made for each target substance (regression equation by the least-squares method: Y = aX + b, where Y is the analytical response and X is the concentration of the target substance) by using four or more concentrations of standard solution. When the calibration curve met the following criteria: (i) the correlation coefficient (r) was > 0.995; and (ii) the absolute value of the intercept (b) was within 5 % of the maximum analytical response, the linear regression line was treated as a straight line through the origin, and the concentration of the target substance was quantified by using the absolute calibration curve method with one concentration of standard solution. When the calibration curve did not meet criterion (ii), the concentration of the target substance was quantified by substituting the analytical response into the regression equation (Y = aX + b). Analytical samples and standard solutions were analyzed by high-performance liquid chromatography or liquid chromatography-tandem mass spectrometry. The analytical methods and conditions for each drug are shown in Tables S2 and S3. For high-performance liquid chromatography, each drug was detected by an SPD-20AV UV-VIS (ultraviolet-visible light) detector (Shimadzu, Kyoto, Japan) using an LC-20AD solvent delivery system (Shimadzu) equipped with an L-column ODS (octadecyl-silica) column (length, 150 mm; inner diameter, 4.6 mm; particle size, 5 µm; Chemicals Evaluation and Research Institute, Japan, Tokyo, Japan) or an L-column2 ODS column (length, 150 mm; inner diameter, 2.1 mm; particle size, 5 µm; Chemicals Evaluation and Research Institute, Japan). For liquid chromatography-tandem mass spectrometry, an LCMS-8060 triple quadrupole mass spectrometer (Shimadzu) and a Nexera X2 ultra-high-performance liquid chromatograph (Shimadzu) equipped with an ACQUITY UPLC BEH C18 column (length, 50 mm; inner diameter, 2.1 mm; particle size, 1.7 µm; Nihon Waters, Tokyo, Japan) were used. Calculation of Ce The wet weights of five embryos or larvae of the control or vehicle control were individually measured every 24 hpf up to 120 hpf by using a Cubis MSU 6.6S-DM microbalance (Sartorius Japan, Tokyo, Japan). Ce was then calculated by dividing the measured amount of drug in the whole embryo or larva by the average wet weight.
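The calibration-curve acceptance logic described above maps onto a few lines of least-squares fitting. A Python sketch; the standard-series data are invented:

import numpy as np

def check_calibration(conc, response):
    # Fit Y = a*X + b and apply the criteria from the text:
    # (i) correlation coefficient r > 0.995; (ii) |intercept b| within
    # 5 % of the maximum analytical response, in which case the line
    # is treated as passing through the origin.
    a, b = np.polyfit(conc, response, 1)
    r = np.corrcoef(conc, response)[0, 1]
    return {"slope": a, "intercept": b, "r": r,
            "linear": r > 0.995,
            "through_origin": abs(b) <= 0.05 * max(response)}

conc = np.array([0.5, 1.0, 2.0, 4.0])        # mg/L, invented
resp = np.array([51.0, 99.0, 201.0, 398.0])  # peak area, invented
print(check_calibration(conc, resp))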
Values calculated by using the obtained regression equation, and comparison with experimental Ce The regression equation obtained from the plot of log [Ce/Cw] versus logD at each time point was used to produce calculated values (Ce(cal)) for the nine drugs employed for the plots (Table S1, excluding caffeine). The logD values shown in Table 1 were substituted into the regression equation for each drug, and the NOECs determined from the results of the zebrafish developmental toxicity test shown in Table 3 were used as the Cw values. The NOECs are shown in Table 4. The estimated Ce(cal) for each drug was compared with the experimental Ce, and the coefficient of determination (R²) of the log [Ce(cal)] vs. log [Ce] plot was calculated to confirm the accuracy of the prediction. Ce(cal) was then calculated for 21 ICH S5 positive-control reference drugs (Table 1, excluding methotrexate, for which no human/mammalian AUC data are listed in the ICH S5 guideline). Comparison of levels of exposure to ICH S5 positive-control drugs in zebrafish and humans/other mammals The estimated Ce(cal) values at each time point (24, 48, 72, 96, and 120 hpf) for the 21 ICH S5 positive-control reference drugs were used to obtain a calculated value of zAUC (zAUC(cal)) at 120 hpf by using Equation (1), based on the trapezoidal rule method (Chiou, 1978):
zAUC = Σ [(Ce_t + Ce_{t+24}) / 2] × 24, summed over t = 24, 48, 72, and 96 hpf,   (1)
where Ce_t is the Ce at t hpf. The zAUC(cal) values for each of the drugs that we identified as positive (14 drugs) and negative (7 drugs) were then compared with the AUCs for rats, rabbits, and humans, as listed in the ICH S5 guideline (ICH, 2020). The AUCs used for comparison were those of rats and rabbits at the NOAEL and those of humans at the effective doses. In this comparison, the experimental zAUC calculated by Equation (1) using the measured values of Ce was also plotted to check the deviation from zAUC(cal). Concentration-response relationships of zebrafish embryos and larvae exposed to developmental toxicity agents The results of the developmental toxicity studies for each drug are shown in Table 3, and the maximum NOECs for each drug are shown in Table 4. Additionally, examples of the morphological and functional abnormalities observed in the zebrafish embryos and larvae at the end of exposure to the developmental toxicity agents are shown in Fig. 1. Reduced heart size, abdominal or facial edema, kinked tail, and notochord anomalies were observed as typical abnormalities under all-trans-retinoic acid, ibuprofen, carbamazepine, and/or methotrexate. In contrast, cytarabine and trimethadione had no developmental toxicity at the upper limit concentration of 10,000 μM. No developmental toxicity was observed for phenytoin and thalidomide at 150 μM and 400 μM, respectively, likely because of their low aqueous solubility. For the other drugs, concentration ranges inducing developmental toxicity or lethality were obtained, and a concentration dependence was observed for the incidence of effects and mortality. The results of our judgment of the ICH S5 positive-control reference drugs as "positive" or "negative" for human/mammalian developmental toxicity on the basis of the observed toxicity findings are shown in Table 5. Of the 22 positive-control reference drugs, 15 were correctly judged to be positive, whereas the remaining seven drugs were judged to be negative. Calculated from these results, the concordance of true positives (sensitivity) was 68 % (15/22) and the false negative rate was 32 % (7/22).
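Putting the two calculation steps above together: given per-time-point regression coefficients of log10[Ce/Cw] = a*logD + b and a drug's logD and NOEC (as Cw), the trapezoidal zAUC of Equation (1) follows directly. A Python sketch; the coefficients below are placeholders to be replaced by the fitted values from Fig. 4, and the example drug is invented:

import numpy as np

# Placeholder (a, b) of log10[Ce/Cw] = a*logD + b per sampling time.
COEFFS = {24: (0.50, -1.2), 48: (0.55, -1.0), 72: (0.58, -0.9),
          96: (0.60, -0.9), 120: (0.62, -1.0)}

def ce_cal(logd, cw):
    # Estimated whole-body concentration at each time point,
    # in the same units as cw.
    return {t: cw * 10 ** (a * logd + b) for t, (a, b) in COEFFS.items()}

def zauc(ce_by_time):
    # Equation (1): trapezoidal area under the Ce-time curve,
    # 24-120 hpf, with 24-h spacing.
    times = sorted(ce_by_time)
    return np.trapz([ce_by_time[t] for t in times], x=times)

ce = ce_cal(logd=2.3, cw=5.0)  # invented drug: logD 2.3, NOEC 5 mg/L
print(round(zauc(ce), 1))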
Drug concentration in zebrafish embryos or larvae and in aqueous solution The experimentally measured values of Ce every 24 hpf for drugs with relatively low liposolubility (logD < 1) are shown in Fig. 2; they increased linearly over the time period. In contrast, for drugs with relatively high liposolubility (logD > 1), Ce peaked between 48 and 96 hpf, after which it decreased over time (Fig. 3). In addition, for 10 drugs (cytarabine, sumatriptan succinate, caffeine, fluconazole, imatinib, diclofenac sodium, carbamazepine, phenytoin, testosterone, and diethylstilbestrol), the obtained Ce results at each time point are plotted in Fig. 4, with log [Ce/Cw] on the vertical axis and the logD of the drug on the horizontal axis. For all drugs except caffeine, plotting log [Ce/Cw] against logD gave a linear approximation. In a regression analysis of the plots, R² in the regression equation ranged from 0.87 to 0.96. Note that only the caffeine plot deviated clearly from the regression line, lying above it at all time points. Although the reason for this is unclear, it could have been caused by some property of caffeine, so caffeine was excluded from the calculation of the regression equation. The results for the time-weighted mean of Cw in each experiment and for each drug are shown in Table S1. The Cw of each drug ranged from 86.9 % to 110 % of the nominal concentration and mostly maintained the nominal concentration. Results of comparison of Ce(cal) with the measured values of the nine drugs that displayed a linear approximation in the regression analysis From the regression equation obtained from the plot of log [Ce/Cw] versus logD at each time point (Fig. 4), the calculated values of Ce(cal) at each time point for the nine drugs included in the analysis are shown in Table S4, and the changes over time in the calculated and measured Ce for each drug are shown in Fig. 5. For most drugs, the pattern of change over time in Ce(cal) was generally similar to the measured Ce; however, there was poor agreement with the measured data for imatinib (Fig. 5C) and diclofenac sodium (Fig. 5G). To confirm the prediction accuracy of Ce(cal), the results of plotting log [Ce(cal)] vs. log [Ce] at each time point are shown in Fig. S1. At all time points, the relationship between the two could be regressed to a linear equation. Overall, R² of the regression equation ranged from 0.72 to 0.89. In addition, the results of Ce(cal) for 14 of the 21 ICH S5 positive-control reference drugs that were judged to be positive in the zebrafish developmental toxicity test are shown in Table S5 (as mentioned in Section 2.7, methotrexate was removed from the calculations), and the calculated results of Ce(cal) for the seven drugs that were judged to be negative are shown in Table S6. The pattern of change in the calculated Ce(cal) over time was similar to the characteristics of the measured Ce (Fig. 3): the Ce for drugs with logD < 1 increased over time up to 120 hpf, whereas that for drugs with relatively high fat solubility (logD > 1) peaked between 48 and 96 hpf and decreased over time thereafter (Tables S5 and S6). Calculation of zAUC and comparison with human/mammalian AUC The zAUC(cal), measured zAUC, and human/mammalian AUC values for the ICH S5 positive-control reference drugs are shown in Tables S7 and S8, and the relationship between zAUC (calculated and measured values) and the logarithm of the human/mammalian AUC is shown in Fig. 6.
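The R² values quoted next come from ordinary least-squares regression of the log-transformed exposure values against each other. A minimal Python sketch with invented AUC pairs:

import numpy as np
from scipy.stats import linregress

def loglog_r2(zauc_values, mammal_auc):
    # R^2 of the line through (log10 zAUC, log10 AUC) pairs,
    # as in the Fig. 6-style comparisons.
    fit = linregress(np.log10(zauc_values), np.log10(mammal_auc))
    return fit.rvalue ** 2

zauc_values = [12.0, 150.0, 900.0, 4.0, 60.0]   # invented, mg*h/L
rat_auc = [8.0, 210.0, 650.0, 6.5, 40.0]        # invented, mg*h/L
print(round(loglog_r2(zauc_values, rat_auc), 2))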
The R² values for log [zAUC(cal)] of the 14 drugs judged to be positive in zebrafish compared with the log [AUC] values for rat, rabbit, and human were 0.73, 0.92, and 0.74, respectively (Fig. 6A, B, C), whereas those of the seven drugs judged negative in zebrafish were 0.49, 0.032, and 0.31, respectively (Fig. 6D, E, F). Measured Ce values were obtained for five ICH S5 positive-control reference drugs (fluconazole, carbamazepine, cytarabine, imatinib, and phenytoin), and the plots of the measured values were close to those of the zAUC(cal) values (Fig. 6). Furthermore, regression analysis of the plots of log [zAUC(cal)] vs. log [zAUC] for these five drugs showed that R² for the regression equation was 0.92 (Fig. S2).
[Fig. 2 caption: Temporal changes in the concentrations of drugs with low liposolubility in whole embryos or larvae. Zebrafish embryos or larvae were exposed to each drug (cytarabine, sumatriptan succinate, caffeine, fluconazole, or imatinib) from 5 h post fertilization (hpf) until 120 hpf, and the drug concentration in whole embryos or larvae (Ce) was determined every 24 h from 24 hpf. (A) cytarabine (2430 mg/L), (B) sumatriptan succinate (2070 mg/L), (C) caffeine (solid line: 10.0 mg/L; dotted line: 5.00 mg/L), (D) fluconazole (1530 mg/L), (E) imatinib (24.7 mg/L). Each data point is the mean of three replicates; each error bar shows the standard deviation of three replicates.]
Discussion As a first step in this study, developmental toxicity studies using zebrafish embryos and larvae were conducted on a total of 27 drugs (22 ICH S5 positive-control reference drugs and five other developmental toxicity agents) to confirm their concentration-response relationships. Concentration ranges inducing developmental toxicity or lethality were obtained for most of the drugs, and the incidences of effects and mortality were concentration dependent, resulting in experimental NOECs. As exceptions, cytarabine and sumatriptan succinate had no effect at the upper concentration limit used (10,000 μM), and thalidomide and phenytoin had no effect at concentrations near the solubility limit, so 10,000 μM, or concentrations near the solubility limit, were used as NOECs. It may be possible to detect developmental toxicity for these four drugs by dechorionation using protease, by raising the upper concentration limit, or by forced exposure (e.g., by microinjection or electroporation) rather than the exposure by immersion used in this study (Mikami et al., 2019; Nishiyama et al., 2021). In addition, Liu et al. (2020) reported that craniofacial defects caused by imatinib, phenytoin, and busulfan in zebrafish were detected by Alcian blue staining. Combining multiple methods such as these may be an effective means of reducing false negatives.
We investigated the developmental toxicity of the ICH S5 positive-control reference drugs and determined each to be positive or negative, resulting in a positive agreement rate of 68 % (15/22) and a false negative rate of 32 % (7/22). The 15 drugs identified as positive had previously been reported as positive (Table 5) (Inoue et al., 2016; Yamashita et al., 2014; Weiner et al., 2024). Of the seven drugs identified as negative, the four drugs other than imatinib, trimethadione, and busulfan were also identified as negative (or inconclusive) in previous papers (Table 5) (Inoue et al., 2016; Yamashita et al., 2014; Weiner et al., 2024). For phenytoin and thalidomide, which we identified as negative, no developmental toxicity was observed at concentrations considered to be at the water solubility limit (phenytoin: 150 μM; thalidomide: 400 μM), suggesting that low water solubility may have been the reason for the false negative results. It is well known that thalidomide induces malformations in the pectoral fins and other organs of wild-type zebrafish (Ito et al., 2010; Siamwala et al., 2012; Gao et al., 2014); however, it has also been reported that the effects of thalidomide by simple soaking in a thalidomide solution are very weak (Mikami et al., 2019; Dong et al., 2023).
[Fig. 3 caption: Temporal changes in the concentrations of drugs with liposolubility in whole embryos or larvae. Zebrafish embryos or larvae were exposed to each drug from 5 h post fertilization (hpf) until 120 hpf, and the drug concentration in whole embryos or larvae (Ce) was determined every 24 h from 24 hpf. (A) diclofenac sodium (3.2 mg/L), (B) carbamazepine (47.3 mg/L), (C) phenytoin (solid line: 37.8 mg/L; dotted line: 18.9 mg/L), (D) testosterone (3.0 mg/L), (E) diethylstilbestrol (24.7 mg/L). Each data point is the mean of three replicates; each error bar shows the standard deviation of the three replicates.]
Although cytarabine and aspirin did not have low solubility, few morphological abnormalities were observed at about the concentration range where lethality was observed, consistent with previous reports (Inoue et al., 2016; Yamashita et al., 2014; Weiner et al., 2024). Although these drugs were identified as negative in zebrafish, there was no trend toward lower zAUC values than those of the drugs identified as positive, suggesting that interspecies differences in toxicity sensitivity between zebrafish and human or mammalian species may be the cause of these findings. Imatinib, trimethadione, and busulfan, which we identified as negative, were consistent with the reports in previous papers by Inoue et al. (2016) and Yamashita et al. (2014), whereas Weiner et al. (2024) reported that they were correctly identified as positive. For imatinib and busulfan, Weiner et al. (2024) increased the DMSO content to 1 % in their preparations, which may have allowed them to detect developmental toxicity because of the increased solubility. For trimethadione, it is unclear why our results differ from those of Weiner et al. (2024), but differences in test conditions or observation items, or differences in the strain of zebrafish, may be factors. In the results for Ce measured every 24 h from 24 hpf onwards, Ce increased over time for highly water-soluble drugs with logD < 1 (Fig. 2), whereas highly lipid-soluble drugs with logD > 1 showed a peak in Ce between 48 and 96 hpf (Fig. 3).
These temporal behaviors were almost the same as the previously reported Ce of drugs with logD < 1 (caffeine and valproate sodium salt) and drugs with logD > 1 (diethylstilbestrol, diclofenac sodium salt, and testosterone) at aqueous solution concentrations that caused developmental abnormalities or no abnormalities (Nawaji et al., 2018; Nawaji et al., 2020). In our previous report, we showed that the gradual decline in the Ce of fat-soluble compounds is caused by a decline in the total lipid concentration in whole embryos or larvae with aging, because of the energetic costs of growth and development (Nawaji et al., 2018). It appears that the declines observed here had the same cause. Furthermore, no peaks other than those of the tested drugs were observed on the chromatograms, so it is unlikely that the declines in the Ce levels of those drugs were caused by biotransformation.

We found a high correlation (R²: 0.87-0.96) between log [Ce/Cw] and logD at all time points every 24 h up to 120 hpf (Fig. 4).

Fig. 4. Relationship between lipophilicity and drug concentration in zebrafish at each time point of exposure to various drugs. The influence of logD on the drug concentration in whole embryos and larvae (Ce) or in aqueous solution (Cw) for 10 drugs (cytarabine, sumatriptan succinate, caffeine, fluconazole, imatinib, diclofenac sodium, carbamazepine, phenytoin, testosterone, and diethylstilbestrol) is shown. Regression lines and equations were obtained from the black plots. White data points (obtained with caffeine) were excluded from the calculation. Each data point is the mean of three replicates with the standard deviation. hpf: h post fertilization.

The reason for this high correlation may be that the degree to which a drug concentrates in embryos and larvae generally depends strongly on the total fat concentration (lipid content) (OECD, 2012); moreover, the total fat concentration in zebrafish embryos and larvae was nearly the same at each developmental stage. At all time points, the caffeine plot deviated well above the regression line, and caffeine was more highly concentrated than the other drugs in zebrafish embryos and larvae. In the caffeine exposure experiments, fish were exposed to two concentrations of caffeine in aqueous solution (5.00 mg/L and 10.0 mg/L); there was little variation in the Ce of the three samples taken at each time point (Fig. 2C), and the Ce/Cw ratio at both concentrations was almost the same (Fig. 4), so it is unlikely that the deviation from the regression line was caused by errors in experimental manipulation or other factors. The physicochemical properties of caffeine, or some intervening mechanism in the body, were likely the main causes of the deviation from the regression line, and caffeine was excluded from the calculations using the regression equation (Fig. 4).
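A minimal sketch of the per-time-point regression just described (slope, intercept, and data below are hypothetical; the study fits one line per 24-h time point), including its later use as a concentration estimator:

import numpy as np

# Hypothetical per-time-point data, not the measured values behind Fig. 4.
logD = np.array([-2.0, -1.0, 0.0, 1.0, 2.0, 3.0])
log_ce_over_cw = np.array([-1.6, -1.1, -0.4, 0.3, 1.0, 1.8])

# Least-squares line log10(Ce/Cw) = a*logD + b at one exposure time point.
a, b = np.polyfit(logD, log_ce_over_cw, 1)

def estimate_ce(cw_mg_per_l, logd_drug):
    """Estimate the whole-embryo/larva concentration Ce from the water
    concentration Cw and the drug's logD, via the fitted line."""
    return cw_mg_per_l * 10 ** (a * logd_drug + b)

print(estimate_ce(10.0, 1.5))  # Ce estimate for a hypothetical drug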
No findings have been reported indicating that caffeine tends to accumulate more in animal tissues, or specifically in zebrafish, so it is unclear why caffeine was more highly concentrated than the other drugs in this study. Future analysis of several drugs with logD values similar to caffeine's, or of drugs with bioactive effects similar to caffeine's, may help to identify the cause. By using the regression equation obtained for the correlation between log [Ce/Cw] and logD at each time point in this study, it should be possible to estimate the drug concentration in zebrafish embryos and larvae if the logD of the developmentally toxic drug and the concentration of the drug in the aqueous solution used for exposure are known. In addition, because the drugs used in the analysis exert pharmacological effects through various mechanisms of action, and the results were obtained in a concentration range at which no developmental toxicity was observed, it is highly likely that this correlation between log [Ce/Cw] and logD will hold for a wide variety of drugs. Further data should be obtained using various categories of drugs in order to verify this.

To check the prediction accuracy of Ce(cal), we compared Ce(cal) with the measured temporal behaviors of Ce for the nine drugs that displayed a linear approximation in the regression analysis (Fig. 4). Five of the drugs were among the 21 ICH S5 positive-control reference drugs.

Fig. 5. Comparison of calculated and measured values of drug concentration in whole embryos or larvae. Drug concentration in whole embryos or larvae (Ce) after exposure to each of the nine drugs used in the regression analysis is shown. The average value of three replicates is shown by black data points for calculated Ce and white data points for measured Ce. hpf: h post fertilization.

The pattern of temporal behavior of Ce(cal) for each drug was similar to that of the measured Ce, and the coefficients obtained from the log [Ce(cal)] vs. log [Ce] regression analyses ranged from 0.72 to 0.89, indicating a relatively high precision of prediction. Some of the patterns of temporal behavior in Ce(cal) for imatinib and diclofenac sodium salt did not match the measured values (Fig. 5). This lack of agreement in the plots may have been caused by residuals from the log [Ce/Cw] vs. logD regression equation that were larger than the others.

The zAUC(cal) was determined on the basis of Ce(cal) at every 24 hpf for each of the 21 ICH S5 positive-control reference drugs and compared with the AUC at the NOAEL in rats and rabbits and at the effective dose in humans. Log [zAUC(cal)] for the 14 drugs identified as positive in this developmental toxicity study showed a relatively high positive correlation with that of rats, rabbits, and humans (R²: 0.73-0.92; Fig. 6A, B, and C). For reference, the AUC for humans and the AUCs for rats or rabbits were analyzed in the same way as in Fig. 6, and the results are shown in Fig. S3. The R² values of the regression equations between the logarithm of the human AUC and the logarithms of the rat and rabbit AUCs were 0.79 and 0.66, respectively. Although the R² values from the different comparisons cannot be compared directly, the R² values obtained in this study (Fig. 6A, B, and C) were equal to or higher than the R² values between the rat or rabbit AUC and the human AUC (Fig. S3).
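For reference, the zAUC(cal) underlying these comparisons is an area under the Ce(cal)-time curve sampled every 24 h; a minimal sketch follows (the trapezoidal rule is an assumption here, since the excerpt does not spell out the integration rule, and the values are hypothetical):

import numpy as np

# Hypothetical Ce(cal) values (mg/L) every 24 h from 24 to 120 hpf.
t_h = np.array([24.0, 48.0, 72.0, 96.0, 120.0])
ce_cal = np.array([3.1, 4.0, 4.4, 4.2, 3.9])

# zAUC(cal) as the trapezoidal area under the Ce(cal)-time curve (mg*h/L).
zauc_cal = np.trapz(ce_cal, t_h)
print(f"zAUC(cal) = {zauc_cal:.1f} mg*h/L")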
In the analysis of Fig. 6A, B, and C, two plots were excluded as outliers: ribavirin in the comparison with the rat AUC, and valproic acid in the comparison with the human AUC. Ribavirin, a nucleic acid analog, had a much lower rat AUC (0.00828 μg·h/mL) than the other drugs, indicating a very high sensitivity to its toxicity in rat compared with the sensitivity in zebrafish. Yamashita et al. (2014) indicated that prolonged exposure to ribavirin due to mammalian-specific erythrocyte accumulation of ribavirin triphosphate (RTP) may have increased the susceptibility to ribavirin toxicity in humans and other mammals, causing the difference from the susceptibility seen in zebrafish. In mammalian erythrocytes almost all ribavirin is phosphorylated and converted to RTP, but the absence of 5′-nucleotidase and alkaline phosphatase, which hydrolyze RTP back to ribavirin, in mammalian enucleated erythrocytes leads to long-term accumulation of RTP within the erythrocytes (Page and Connor, 1990). On the other hand, because the erythrocytes of teleost fish, including zebrafish, retain nuclei even in their mature form, zebrafish erythrocytes may retain dephosphorylation activity, and thus RTP accumulation in the erythrocytes is unlikely to occur. Therefore, it is understandable that ribavirin deviates markedly from the regression line in the comparison between zebrafish and rats. (For rabbits and humans, AUC data for ribavirin are not available, because the ICH S5 guideline does not contain AUC data for ribavirin.) Valproic acid, an antiepileptic drug, is a well-known representative developmentally toxic drug that causes severe developmental abnormalities in humans, mammals, and zebrafish (Alsdorf and Wyszynski, 2005). The data point for valproic acid lies outside the plotted range in Fig. 6B and C because its value was so high, but it is unclear why the blood levels of valproic acid are higher in humans. The accumulation may be higher because of binding affinity to certain tissues, or other factors, but further studies are needed to determine the cause of the deviation of valproic acid from the plot. As mentioned above, except for the drugs that deviated from the plot, the zAUC could be used as an indicator of the exposure of zebrafish to drugs that were judged positive for developmental toxicity in both zebrafish and humans/mammals. The dose-response relationship in zebrafish may therefore be similar to that in rats, rabbits, and humans for developmentally toxic drugs, confirming the importance of zebrafish as an alternative to mammalian test methods. The R² value of the regression equation from the log [zAUC(cal)] vs. log [zAUC] regression analysis was high, at 0.92 (Fig. S2), suggesting that, at least for the five ICH S5 positive-control reference drugs examined, the influence of the discrepancy between Ce(cal) and the measured Ce values on the comparison of zAUC with the AUCs of other species was small.
Fig. 6. Comparison of systemic exposure to drugs in zebrafish and in humans/mammals. Comparison of the area under the time curve (AUC) in zebrafish (zAUC) with the AUCs in (A) rat, (B) rabbit, and (C) human for positive-control reference drugs that were judged positive in zebrafish in this study, and with the AUCs in (D) rat, (E) rabbit, and (F) human for positive-control reference drugs that were judged negative in zebrafish in this study. Black dots and crosses indicate zAUC based on calculated concentrations in whole embryos and larvae (Ce), and white dots indicate zAUC based on the mean values of measured Ce. Regression lines and equations were obtained from the black dots; crosses were excluded from the calculations. The cross in A represents data for ribavirin, and the crosses in B and C represent data for valproic acid.

On the other hand, for the seven drugs that we identified as negative, the logarithm of zAUC(cal) was poorly correlated with the logarithm of the AUC for rats, rabbits, and humans (Fig. 6D, E, F). From our experimental results and those in previously published reports, the likely reasons for the negative results are low water solubility, interspecies differences in toxicity sensitivity between zebrafish and humans/mammals, and low drug uptake in embryos and larvae. Because zAUC represents the exposure at the NOEC, and the NOEC of each of these drugs deviated from its true value owing to these factors (low water solubility, interspecies differences in toxicity sensitivity, and low drug uptake), the correlation with the human/mammal AUC was low. In addition, only seven drugs were used in this part of the study and the range of the plots was narrow; therefore, further experiments are needed to fully understand the correlation in exposure levels between zebrafish and mammals/humans.

Conclusion

In conclusion, the logarithm of zAUC(cal) for the 14 ICH S5 positive-control reference drugs identified as positive in zebrafish showed a relatively high positive correlation with that in rats, rabbits, and humans. This suggests that zAUC may be useful as an indicator of exposure to developmentally toxic drugs in zebrafish embryos and larvae. Furthermore, our results suggest that zebrafish may have an exposure-response relationship similar to that of rats, rabbits, and humans. As far as we know, our study is the first report of a relationship in drug exposure between zebrafish and mammals using an original exposure index in a developmental toxicity test with zebrafish embryos and larvae. The findings obtained here provide important information on the relationship between the concentration used in the predictive method and the exposure level at which an adverse outcome occurs in the species for which exposure levels are being predicted. This information is required for use of this method as an alternative EFD test under the ICH S5 guidelines. However, because zAUC(cal) is a value calculated on the basis of a regression equation, it will be necessary to collect measured Ce data and zAUC data for many drugs in the future to verify whether a similar correlation can be established.

Declaration of competing interest

The authors declare the following financial interests/personal relationships which may be considered as potential competing interests: Tasuku Nawaji, Naohiro Mizoguchi, and Ryuta Adachi are currently employed by CERI.

Table 1 Drug name and nominal concentration.
a Benet et al., 2011; b Giannoudis et al., 2007; c Hansen et al., 2010; d Jagodinsky et al., 2015; e Lombardo et al., 2001; f Kyowa Hakko Kirin Co., Ltd. (2022): Drug interview form of Topina®; g Fujimoto Pharmaceutical Corporation (2021): Drug interview form of THALED® CAPSULE; h Bristol-Myers Squibb Company (2022): Drug interview form of HYDREA® CAPSULE. * n-Octanol/water partition coefficient (logP) from DrugBank (https://go.drugbank.com). The underlined values in bold indicate the concentrations that were also set in the experiment for measuring Ce. Values in parentheses represent concentrations set only in the experiment for measuring Ce.

Table 2 Developmental toxicity observation points.

Table 3 Results of the developmental toxicity test.

Table 4 NOECs of tested drugs.
2024-07-16T15:07:22.652Z
2024-07-01T00:00:00.000
{ "year": 2024, "sha1": "1660b68b3d8ca59cc7ebca622b4a2472bf2787a3", "oa_license": "CCBYNCND", "oa_url": "https://doi.org/10.1016/j.crtox.2024.100187", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "3ba26603b40667c30efb06c051a0c6edb8aa0b43", "s2fieldsofstudy": [ "Medicine", "Biology", "Environmental Science" ], "extfieldsofstudy": [ "Medicine" ] }
258678555
pes2o/s2orc
v3-fos-license
The importance of biomarker development for monitoring type 1 diabetes progression rate and therapeutic responsiveness

Type 1 diabetes (T1D) is an autoimmune condition of children and adults in which immune cells target insulin-producing pancreatic β-cells for destruction. This results in a chronic inability to regulate blood glucose levels. The natural history of T1D is well characterized in childhood. Evidence of two or more autoantibodies to the islet antigens insulin, GAD, IA-2, or ZnT8 in early childhood is associated with a high risk of developing T1D in the future. Prediction of risk is less clear in adults and, overall, the factors controlling the progression rate from multiple islet autoantibody positivity to onset of symptoms are not fully understood. An anti-CD3 antibody, teplizumab, was recently shown to delay clinical progression to T1D in high-risk individuals, including adults and older children. This represents an important proof of concept for those at risk of future T1D. Given their role in risk assessment, islet autoantibodies might appear to be the most obvious biomarkers to monitor efficacy. However, monitoring islet autoantibodies in clinical trials has shown only limited effects, although antibodies to the most recently identified autoantigen, tetraspanin-7, have not yet been studied in this context. Measurements of beta cell function remain fundamental to assessing efficacy, and different models have been proposed, but improved biomarkers are required both for progression studies before the onset of diabetes and for therapeutic monitoring. In this mini-review, we consider some established and emerging predictive and prognostic biomarkers, including markers of pancreatic function that could be integrated with metabolic markers to generate improved strategies to measure outcomes of therapeutic intervention.

Introduction

Type 1 diabetes (T1D) results from autoimmune destruction of insulin-producing pancreatic β-cells (1). The condition has a variable incidence rate of between 3.9 and 57.4 per 100,000 depending on the country, and annual incidence rates are increasing at approximately 3-4% worldwide (2). Increased incidence was originally reported in those diagnosed under 5 years of age (3), and this was shown to result from a shift to lower age at onset, not an overall increased incidence across all age groups (4, 5). However, the Centers for Disease Control and Prevention report covering 2002-2015 shows a sharp increase in the incidence of T1D in those diagnosed over age 5, most significantly in Black, Hispanic, Asian, and Pacific Islander populations (6).

The natural history of T1D is increasingly well understood, particularly in children, making it possible to accurately identify individuals "at risk" of future T1D through islet autoantibody screening. This has facilitated clinical trials to delay the onset of T1D, which recently resulted in the regulatory approval in the USA of teplizumab, an anti-CD3 monoclonal antibody. Teplizumab treatment was shown to delay the onset of T1D by more than 2 years on average in "at-risk" individuals (7). There is now increased focus on the optimal strategies to:

1. Identify those at risk, both children and adults, including relatives of individuals with T1D and those in the general population, for additional clinical trials.
2. Monitor the effectiveness of new therapies.

Biomarkers for a disease can be either predictive, prognostic, or both.
In T1D, predictive biomarkers, usually islet autoantibodies, are used to assess the risk of clinical diagnosis, while prognostic biomarkers, for instance measures of beta cell function, are used to monitor disease progression rate. This mini-review provides a brief "snapshot" of the current status of prediction and highlights the need for improved prognostic biomarkers.

Crucial to monitoring the outcomes of immunomodulatory agents is the phase in the natural history at which therapeutic intervention occurs. Primary prevention trials rely on identifying those at genetic risk before the autoimmune process has begun. Current examples are trials within the Global Platform for the Prevention of Autoimmune Diabetes (GPPAD), launched in 2015 (8). GPPAD brings together several centres in Europe where neonates are screened for genetic risk of T1D prior to entry into primary prevention trials including POInT (9) and SINT1A (10). In addition, multiple efforts are ongoing in the USA, Australia, Europe, and the UK to screen for risk of ongoing autoimmunity in children and, more recently, in adults. Much has been learned about risk calculations and screening approaches from studies of first-degree relatives including BABYDIAB (11); DIPP (12); DAISY (13); TrialNet (14); the Belgian Diabetes Registry (15); the Bart's Oxford (BOX) study (16); and INNODIA (17). It is not clear, however, whether risk assessment in families where one individual has already been diagnosed with T1D will reflect the general population. Here we examine the strategies used to identify individuals "at risk" of future autoimmune diabetes and consider some of the key established predictive biomarkers, together with emerging biomarkers which may, in the future, add to predictive and prognostic models (Figure 1).

FIGURE 1. A simplified schematic diagram (created in Biorender.com) of the markers discussed with regard to T1D prediction and prognosis.

2 Identifying risk of future T1D for clinical trial recruitment

Genetic Risk

The importance of genetics in susceptibility to T1D has long been recognized; studies of monozygotic twins discordant for diabetes demonstrated that approximately half of the risk is attributable to genetic factors and half to unidentified environmental factors (18). Human Leukocyte Antigen (HLA) associations were initially described in the 1970s (19, 20). There are three particularly important HLA haplotypes (22). The majority of individuals who develop T1D are positive for one or both susceptibility haplotypes (23) and negative for the protective DR15-DQ6 haplotype (24). However, the frequency of the high-risk DR3/DR4 combination has been shown to be decreasing over time (25, 26), which suggests an increase in environmental pressure for developing T1D; the environmental determinants of T1D nevertheless remain poorly defined.

Genome-wide association studies (GWAS) have identified more than 60 non-HLA variants associated with T1D (27). These include variants in several genes already identified through case-control studies [including INS (28); CTLA-4 (29); and PTPN22 (30, 31)]. Over the last decade there has been a move away from traditional HLA genetic risk assessment towards the cheaper, high-throughput strategy of using tagged SNPs to impute HLA risk, combining data from HLA and non-HLA variants to generate genetic risk scores (GRS). These scores are proving particularly important in precision medicine approaches to help classify diabetes at diagnosis (32, 33).
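In its simplest form, a GRS of this kind is a weighted sum of risk-allele dosages; the sketch below uses hypothetical SNP names and weights, not those of any published T1D score:

# Minimal sketch of a SNP-based genetic risk score: a weighted sum of
# risk-allele dosages. SNP names and weights are hypothetical and are not
# those of any published T1D GRS.
weights = {"rs0001": 0.8, "rs0002": 0.3, "rs0003": 1.1}  # e.g. per-allele log odds ratios
dosages = {"rs0001": 2, "rs0002": 1, "rs0003": 0}        # 0, 1, or 2 risk alleles

grs = sum(weights[snp] * dosages[snp] for snp in weights)
print(f"GRS = {grs:.2f}")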
In terms of identifying risk of future T1D, a GRS has the potential to identify infants in the general population at increased genetic risk of type 1 diabetes through Guthrie spot screening, and the GPPAD platform has paved the way for the roll-out of this approach (8).

Humoral risk factors: islet autoantibodies

Despite the heterogeneity of T1D, current consensus classifies the prodrome to T1D as having three stages (34), with Stages 1 and 2 being presymptomatic. Stage 1 is defined by the presence of multiple islet autoantibodies in the blood without dysglycemia; Stage 2 by the presence of multiple islet autoantibodies in the blood with dysglycemia; and Stage 3 represents the onset of symptomatic disease. The natural history of type 1 diabetes has been studied intensively since the identification of islet autoantibodies, with a combined study of three birth cohorts showing that children with two or more islet autoantibodies before the age of 5 years have a >80% risk of developing T1D by the age of 20 (35).

The power of islet autoantibodies to predict T1D was first described in the 1970s (36), when it was shown that Islet Cell Antibodies (ICA) could be detected in the blood before the onset of symptoms. This test involves incubating serum on pancreas sections, is operator dependent, and lacks specificity. Although still carried out, it has largely been superseded by individual tests for autoantibodies to the four major autoantigens in T1D: insulin (IAA) (37), glutamic acid decarboxylase (GADA) (38), insulinoma-associated protein 2 (IA-2A) (39), and zinc transporter 8 (ZnT8A) (40). Tetraspanin-7 (Tspan7A) is a more recently identified autoantigen for T1D (41), although its utility in predicting T1D is not established (42) and initial data suggest that Tspan7A do not provide much added value for T1D prediction (43). More studies are, however, needed across the age range of T1D to confirm whether or not Tspan7A will be useful as a biomarker for T1D.

IAA are often the first islet autoantibody to appear in young children (44) and are more prevalent in this group (10, 45). However, they are often present at lower levels in the blood, which makes them the most difficult islet autoantibody to measure. Autoimmunity to insulin cannot be distinguished from antibodies to exogenous insulin, which appear roughly two weeks after the first insulin injection in T1D cases; samples therefore need to be collected within this window to be useful for diabetes classification or baseline monitoring in trials. Some children develop GADA first (44), while IA-2A and ZnT8A autoantibodies are rarely the first to appear and are usually seen as evidence of epitope spreading of the autoimmune response.

The gold-standard islet autoantibody tests are considered to be radiobinding assays (RBAs): liquid-phase assays which use a radiolabeled antigen to capture and measure antibodies. They are highly sensitive, and risk data from large international longitudinal research studies such as TEDDY (44) and TrialNet (14) are based on RBAs. However, there are significant cost and safety issues associated with RBAs, and they are being replaced by other methods including ELISA, LIPS, and ADAP (outlined in more detail in Table 1). The performance of these assays is measured through testing of blinded samples in the islet autoantibody standardization performance (IASP) workshops associated with the Immunology of Diabetes Society (46).
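Workshop performance of this kind is usually summarized as sensitivity and specificity on the blinded panel; a minimal sketch of that bookkeeping, with hypothetical counts:

# Hypothetical blinded-panel results for one islet autoantibody assay.
tp, fn = 42, 8   # autoantibody-positive samples: detected / missed
tn, fp = 95, 5   # autoantibody-negative samples: correct / false positives

sensitivity = tp / (tp + fn)
specificity = tn / (tn + fp)
print(f"sensitivity = {sensitivity:.2f}, specificity = {specificity:.2f}")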
Overall, to facilitate general population screening strategies and future clinical trials, both in those "at risk" and in individuals with diabetes, high-throughput and cheap sample collection and islet autoantibody tests are required.

Islet autoantibodies vs. markers of metabolic function for clinical trial monitoring

Different approaches are currently used to measure β-cell function, and they provide different readouts about the health of insulin-producing cells (47). The methods most commonly used in research studies to monitor progression rate in individuals with multiple islet autoantibodies are stimulated tests: oral glucose tolerance tests (OGTT), intravenous glucose tolerance tests (IVGTT), and mixed-meal tolerance tests (44). These are carried out by skilled staff, usually in a hospital setting, and form the basis for primary outcomes in most T1D clinical trials. Modeling metabolic data to inform progression rates is becoming increasingly sophisticated (48, 49).

While islet autoantibodies are crucial to identify individuals "at risk" of T1D for trials to prevent or delay the onset of the condition, few data suggest that they represent useful biomarkers to monitor efficacy in the way that models of beta cell function, including C-peptide and immune cell compartments, can be used (48-51). Firstly, some tests use a positive/negative readout for islet autoantibodies, and only recently has there been a focus on the potential usefulness of islet autoantibody level in studies of type 1 diabetes (52, 53). Interestingly, however, in a TrialNet study blocking the CD28/CD80/CD86 costimulatory axis with CTLA4-Ig (abatacept) in individuals with diabetes, participants with a poor response (resistance, measured by modeling the rate of decline of C-peptide) had a transient increase in activated B cell reprogrammed costimulatory ligand gene expression and reduced inhibition of anti-insulin antibodies (54). Similarly, in the teplizumab trial (7), the absence of ZnT8A identified the individuals most likely to respond to the therapy. This shows that autoantibodies at baseline may be predictive of responses to immunotherapy and substantiates the inclusion of islet autoantibodies in monitoring.

Emerging biomarkers

3.2.1 The exocrine pancreas

The pancreas performs both endocrine and exocrine functions. Most biomarker studies have focused on the endocrine compartment, but broader pancreatic abnormalities have long been detected in T1D; a reduction in pancreatic size after diagnosis is well described (55, 56), and pancreas weight is reduced in T1D patients compared with healthy controls (57). In 2012, using magnetic resonance imaging (MRI), Williams and colleagues showed that the pancreas is already reduced in size by 25% at diagnosis (58). An Australian study of very young "at-diagnosis" cases (median age 5.5 years) confirmed pancreatic shrinkage in early-onset T1D (59). This suggests that pancreatic shrinkage is already ongoing in pre-diabetes. In a study of 85 children participating in the ENDIA study, levels of fecal elastase-1, another marker of pancreatic function, were shown to decrease over time in 28 progressors compared with non-progressors (60). A study of TrialNet participants at diagnosis, and of those with islet autoantibodies, showed that lower levels of circulating P-amylase and lipase (both exocrine enzymes) can be detected before the onset of clinical symptoms in at-risk adult individuals, but not in children (61).
Further evidence for the importance of pancreatic enzymes comes from a recent Mendelian randomisation study to identify circulating proteins influencing type 1 diabetes susceptibility, which showed that increased levels of serum chymotrypsinogen were associated with a decreased risk of T1D (62). Such changes in volume are surprising, since beta cells represent only 2-3% of the pancreas, but reduced pancreas size is thought to reflect loss of the trophic effects of insulin. Recent studies have shown that the exocrine compartment may provide an important source of robust and straightforward biomarkers to monitor the effects of therapeutic intervention.

Enzymes of the exocrine pancreas as biomarkers in T1D

Trypsinogen is the proenzyme precursor of trypsin and is stored in the pancreas to be released as required for protein digestion. Immunoreactive trypsinogen (IRT) is a term used to describe the two main isoforms of trypsinogen, the cationic trypsinogen-1 and the anionic trypsinogen-2, both of which are produced by pancreatic acinar cells (63). IRT is released into the circulation in small amounts and can therefore be detected in the blood/plasma. Serum IRT is the most studied indirect test of pancreatic function. It was developed to diagnose chronic pancreatitis (64) and is used to aid diagnosis of exocrine atrophy in T1D (65-67). An IRT test is also currently used worldwide in neonatal screening for cystic fibrosis (68). There is reduced exocrine pancreatic function in T1D; IRT concentrations have been shown to be significantly reduced in T1D patients compared with healthy matched controls (54).

Table 1 (fragment). Placement of the Nluc reporter in the antigen sequence may influence antigen conformation and subsequent autoantibody-antigen binding. Antibody detection by agglutination PCR (ADAP): offers increased sensitivity compared with RBA; low serum volumes required (1-2 µl); can be multiplexed; PCR-based, with potential for very high throughput; predictive utility yet to be fully evaluated in at-risk populations.

In 2017, Li and colleagues showed that serum trypsinogen levels were significantly reduced in T1D patients compared with controls, and that this was also the case for multiple islet autoantibody-positive subjects compared with those with single islet autoantibodies and healthy controls (69). Further studies have built on these findings to demonstrate the potential of trypsinogen as a predictive biomarker for T1D. In 2021, the same team of investigators expanded their studies to trypsinogen, lipase, and amylase in a larger cohort. They showed that trypsinogen and lipase are significantly reduced in subjects with established and recent-onset diabetes, and in individuals with multiple islet autoantibodies compared with single islet autoantibody-positive and control subjects (70). In contrast, amylase levels were reduced only in patients with established T1D. They concluded that serum lipase and trypsinogen levels together provide the most sensitive serological biomarker of BMI-normalised relative pancreas volume (RPV_BMI), and this could improve disease staging in pre-T1D, although validation in longitudinal samples from "at-risk" individuals is required. More recently, a proteomics screen of serum from monozygotic twins discordant for T1D unexpectedly identified exocrine proteins as the top five hits compared with co-twins without diabetes (71).
Decreased levels were observed for all five proteins, and this was subsequently validated for trypsinogen in a large cohort of individuals with T1D, in whom levels were shown to be significantly lower than in healthy control individuals. They also found that trypsinogen levels were lower in recently diagnosed cases than in controls across a broad age range, and multiple linear regression in recently diagnosed participants showed that trypsinogen levels were associated with insulin dose and diabetic ketoacidosis. Age and BMI were important confounders. Trypsinogen levels <15 ng/ml were associated with an increased risk of progression in "at-risk" relatives. Together, these results further validate the potential of trypsinogen, and possibly other exocrine enzymes, as novel and cost-effective biomarkers to monitor efficacy in clinical trials. However, age and BMI need to be incorporated into all models, and longitudinal measures will be essential if the outcomes of interventions are to be monitored. MRI of the pancreas in combination with measures of exocrine enzymes is also potentially a powerful, if expensive, tool to monitor direct effects of immunomodulation on the pancreas.

MicroRNAs

MicroRNAs (miRNAs) are emerging as a potential area deserving further study in T1D research and may prove to be future biomarkers for the disease (72). miRNAs are small, non-coding RNAs approximately 20 nucleotides long (73) which have been identified in biological samples relevant to T1D, with some studies reporting their role in T1D pathogenesis. They act as gene expression regulators, primarily by inhibiting translation or by causing mRNA degradation, which obstructs protein synthesis at the post-transcriptional stage. miRNAs can be isolated from most biological specimens and are durable, being protected by microsomes and exosomes, which form a protective outer shell around the miRNAs. However, miRNA testing could be challenging to implement in a clinical trial setting, because sample analyses currently need to be carried out within 8 hours of collection for an accurate assessment of miRNA species in the plasma, and most multi-centre trials will not be able to deliver samples to central laboratories within this time span.

MiRNA-21 is a specific miRNA that has been shown to disrupt β-cell development in animal models of T1D when overexpressed (74). MiRNA-21 also targets bcl-2 gene translation, which results in increased β-cell apoptosis during diabetes development (75, 76). Other specific miRNAs that dysregulate pancreatic function include miRNA-29, which impairs glucose-induced insulin secretion when increased in mouse and human pancreatic islets (77). More recently, plasma levels of five miRNAs were shown to be downregulated in diabetic vs. normoglycaemic mice (78). miR-409-3p was also downregulated in immune islet infiltrates of diabetic mice, and its expression correlated with the severity of insulitis. Interestingly, CD8+ central memory T cells were enriched in miR-409-3p. Plasma levels of this microRNA gradually decreased during diabetes development in mice and improved with disease remission after anti-CD3 antibody therapy. However, these results do not necessarily mean that these miRNAs will be similarly relevant to human T1D, because miRNA data are not fully translatable from rodent to human samples. In human plasma samples, miR-409-3p levels were lower in individuals with recently diagnosed T1D compared with controls, and levels were inversely correlated with HbA1c levels (78).
Studies such as these suggest the potential of microRNAs for monitoring therapeutic intervention in T1D, but they have not yet been studied in individuals at risk of the condition, so much work remains to be carried out to fully validate these molecules as tools for prediction or prognosis.

Insulin-like Growth Factors

Insulin-like growth factors (IGFs) promote glucose metabolism. The availability of IGF1 and IGF2 is regulated by IGF-binding proteins (IGFBPs). A recent study by Shapiro et al. found that IGF1 and IGF2 levels were significantly lower in islet autoantibody-positive than in islet autoantibody-negative relatives of individuals with T1D, and that IGF1 levels decreased over time in subjects with multiple islet autoantibodies and in those who progressed to T1D, in parallel with decreasing β-cell function (79). This study also found that high-affinity IGFBPs remain unchanged in individuals with pre-T1D, which suggests that total IGF levels may reflect bioactivity. These results indicate that IGF dysregulation occurs both before and after T1D diagnosis; IGFs could therefore be novel biomarkers for disease prediction and for monitoring the effects of therapy in secondary prevention trials. Importantly, IGFs could act as metabolic biomarkers in that they reflect metabolic dysregulation and could therefore inform T1D staging.

Cell-Free DNA

Cell-free (cf)DNA here refers to β-cell-specific DNA fragments that are released into the periphery as β-cells are killed by immune cells. These β-cell-specific cfDNA fragments can be measured, should correlate with β-cell death, and could therefore potentially be the most direct biomarker of β-cell death in T1D. Several years ago, multiple studies focused on methylation-specific cfDNA targets, particularly in the insulin gene, to measure β-cell-specific cell death (80-86). However, the methodology fell out of favour when it was reported that an ultrasensitive assay for the detection of a β-cell-specific DNA methylation signature failed to observe increases in β-cell-derived cfDNA in a blinded study of 32 autoantibody-positive subjects at risk of type 1 diabetes, 92 individuals with recent-onset type 1 diabetes, and 38 individuals with long-standing disease (87). In the meantime, however, cfDNA has increasingly become an exciting biomarker in cancer studies; in 2021 the National Health Service in the UK launched a research study to examine cfDNA in 140,000 volunteers, aiming to detect 50 types of cancer before symptoms appear. Sample collection systems for cfDNA have also improved significantly, with collection of plasma samples in dedicated cfDNA tubes that stabilize the cfDNA fragments now the standard. Therefore, while cfDNA studies in T1D require further optimization, particularly using multiplex approaches, cfDNA has the potential to become a monitoring biomarker of the future. The exquisite specificity of a biomarker capable of directly measuring beta cell death has to be the ambition when monitoring drug efficacy in T1D.

Conclusions

The outcome of the teplizumab trial in individuals "at risk" of future T1D has energized researchers to broaden strategies to identify single- and multiple-islet-autoantibody-positive children and adults in the general population. These strategies would aim to help prevent diagnosis in diabetic ketoacidosis and to offer participation in intervention trials and monitoring.
Here, we have reflected on some existing and possible future biomarkers that could be used to determine the efficacy of interventions, to enroll and stratify individuals and, hopefully, to match the right patient to the right drug at the right time.

Author contributions

MF and KG designed the review. MF performed the literature searches and wrote the manuscript with oversight by KG. KG is responsible for the integrity of the work as a whole. All authors contributed to the article and approved the submitted version.

Funding

This work was funded by grants to KG from Diabetes UK (grant reference 20/0006300) and The Helmsley Charitable Trust.

Conflict of interest

The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

Publisher's note

All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.
2023-05-15T13:10:33.087Z
2023-05-15T00:00:00.000
{ "year": 2023, "sha1": "f3d94017a00352c21855a107a17abaabddf9671e", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Frontier", "pdf_hash": "f3d94017a00352c21855a107a17abaabddf9671e", "s2fieldsofstudy": [ "Biology", "Medicine" ], "extfieldsofstudy": [] }
18004804
pes2o/s2orc
v3-fos-license
Top mass effects in Higgs production at next-to-next-to-leading order QCD: virtual corrections

Top quark mass suppressed terms are calculated for the virtual amplitude for Higgs production in gluon fusion at three-loop level, i.e. O(\alpha_s^3). The method of asymptotic expansions in its automated form is used to evaluate the first three non-vanishing orders in terms of M_H^2/M_t^2, where the first order corresponds to the known results of the effective Lagrangian approach.

Introduction

Radiative corrections to Higgs production through gluon fusion are known to be unusually large [1,2,3,4]. The inclusive next-to-next-to-leading order (NNLO) cross section σ(pp/p̄p → H + X) exceeds the LO prediction by roughly a factor of two at LHC energies, and even by up to a factor of three at the Tevatron [5,6,7]. Recent compilations of the currently available contributions to the production cross section can be found in Refs. [8,9].

The current NNLO prediction is based on the assumption that the top mass dependence is largely determined by the LO expression, while the higher-order terms can be evaluated in the limit of infinitely heavy top mass M_t [10,3,4,11]. At NLO, where a comparison with the full mass dependence of the cross section is possible, the heavy-top approximation is valid at the 2-3% level for Higgs masses M_H < 2M_t (see, e.g., Ref. [12]). Even at M_H ≈ 1 TeV, the deviation from the full NLO result amounts to only about 10%.

The fact that the heavy-top limit works so well is at first sight surprising, because it assumes that M_t is larger than any other scale in the process. This is certainly not the case at the LHC, with a projected hadronic center-of-mass energy of √s = 14 TeV. However, one can argue that, since the cross section is dominated by soft gluon radiation, parton scatterings with energies √ŝ much larger than 2M_t are strongly suppressed. It is indeed observed that an expansion of the partonic cross section σ̂ in powers of (1 − z), where z = M_H²/ŝ, converges rather quickly to the exact result [5]. On the other hand, resummation of the soft terms does not lead to a big effect at any of the three lowest orders in perturbation theory [13]. It has also been argued that the bulk of the radiative corrections can be obtained by resumming the leading π²-terms that arise from the analytic continuation from space-like to time-like kinematics [14].

These unresolved issues leave one with a certain amount of doubt as to the use of the heavy-top limit at NNLO. There is, however, surprisingly little activity in the field that addresses the validity of this approximation. Besides the NLO calculations for the inclusive cross section mentioned before [10,3,4,11], there are studies concerning the mass effects on differential distributions [15,16,17] which allow one to derive validity ranges for the kinematical variables. Furthermore, in Ref. [18], the effects of the partonic high-energy region on the total cross section have been studied by deriving the leading behaviour in this limit.

A rather direct way to check the heavy-top limit is to evaluate formally subleading terms. In this paper, we consider them for the purely virtual corrections at NNLO. While they do not correspond to a physical quantity, they constitute an important gauge-invariant ingredient of the full inclusive cross section. Note that at NLO, the virtual corrections are known in closed analytical form for arbitrary values of M_t [19,20,21]. Our approach is very similar to the calculation of the top mass suppressed terms in the Higgs decay rate into gluons, described in Ref. [22].

One might be tempted to use that result, obtained for the decay rate, as an estimate of the effects for the gluon fusion process. However, one should recall that the kinematics of the two processes are very different. In particular, the top quark mass is indeed the largest scale for the decay, so that the expansion in M_H/M_t remains within the radius of convergence. This is not the case for the higher-order corrections to the gluon fusion production process involving real radiation of gluons and quarks: the partonic center-of-mass energy √ŝ can well exceed the threshold value of 2M_t, and a series expansion in the limit of large top mass becomes questionable. For the purely virtual effects, though, which are the subject of this paper, the partonic center-of-mass energy is fixed to M_H which, according to the limits derived from electroweak precision fits, can safely be assumed to be lighter than twice the top mass. They will therefore be a useful ingredient for any possible treatment of the full hadronic cross section, be it inclusive or exclusive.

Method

Sample diagrams that contribute to the virtual corrections to gluon fusion at LO, NLO, and NNLO are shown in Fig. 1. An efficient and algorithmic procedure for evaluating them in terms of a consistent expansion in M_H/M_t is the well-known method of asymptotic expansions (see, e.g., Ref. [23]). In our case, it expresses the original diagrams as a sum of convolutions of massive vacuum integrals with massless vertex integrals. The diagrammatic representation of this procedure is shown for two particular diagrams in Figs. 2 and 3. We generate the diagrams with the help of qgraf [24] and pass them to q2e/exp [25,26], which automatically carries out the expansion. The resulting 1-, 2-, and 3-loop vacuum integrals are evaluated by MATAD [27]. For the 1- and 2-loop vertex integrals we use the method of Ref. [28], applying the relevant modifications [29] to MINCER [30].

Figs. 2 and 3. The diagrams left of ⊗ represent subdiagrams of the original diagram that are to be expanded in the momenta corresponding to the dotted external lines before the loop integration. In this way, it is apparent that the original integral, depending on M_H² and M_t², is decomposed into products of "tadpole" integrals with vanishing external momenta and massless vertex integrals. The shaded blob in the diagrams right of ⊗ represents an effective vertex given by the result of the diagram left of ⊗ (for details of asymptotic expansions, see Ref. [23], for example). The three terms right of "→" are proportional to N_t³, N_t²N_h, and N_tN_h², respectively (cf. Eq. (3) below). Subdiagrams without external mass scales are not shown.

The colour and Lorentz structure of the physical amplitude is fixed by the external gluon momenta q_1^µ and q_2^ν and the corresponding colour indices a and b. We contract the amplitude with the projector P^{ab}_{µν}(q_1, q_2) in order to arrive at a scalar expression in Lorentz and colour space. Before the massive two- and three-loop integrals are passed to MATAD, we need to eliminate any external momenta in their numerators by appropriate decompositions into invariants, expressing each scalar product involving the loop momentum l in terms of invariants and factors that are independent of l. There are two diagrams at one-loop level, 23 at two-loop level, and 657 at three-loop level, and the calculation of the 1/M_t²-suppressed terms takes about 5·10⁴ s, with the computationally most expensive diagram shown in Fig. 3.
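To illustrate the kind of M_H²/M_t² expansion involved (a sketch only, using the well-known one-loop form factor rather than anything from the three-loop calculation), the LO ggH amplitude can be expanded symbolically in τ = M_H²/(4M_t²); its leading term reproduces the effective-Lagrangian limit:

import sympy as sp

x = sp.symbols('x', positive=True)   # x**2 = tau = M_H^2 / (4*M_t^2)
# Well-known one-loop ggH form factor, normalized to 1 in the heavy-top limit.
A = sp.Rational(3, 2) / x**2 * (1 + (1 - 1/x**2) * sp.asin(x)**2)

series_x = sp.series(A, x, 0, 8).removeO()
tau = sp.symbols('tau', positive=True)
print(sp.expand(series_x.subs(x, sp.sqrt(tau))))
# -> 1 + 7*tau/30 + ...: the effective-Lagrangian value plus the first
#    mass-suppressed corrections.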
One might be tempted to use this result obtained for the decay rate as an estimate of the effects for the gluon fusion process. However, one should recall that the kinematics of the two processes are very different. In particular, the top quark mass is indeed the largest scale for the decay, so that the expansion in M H /M t remains within the radius of convergence. This is not the case for the higher order corrections to the gluon fusion production process involving real radiation of gluons and quarks. The partonic center-of-mass energy √ŝ can well exceed the threshold value of 2M t , and a series expansion in the limit of large top mass becomes questionable. For the purely virtual effects though, which are the subject of this paper, the partonic center-of-mass energy is fixed to M H which, according to the limits derived from electroweak precision fits, can safely be assumed to be lighter than twice the top mass. They will therefore be a useful ingredient for any possible treatment of the full hadronic cross section, be it inclusive or exclusive. Method Sample diagrams that contribute to the virtual corrections to gluon fusion at LO, NLO, and NNLO are shown in Fig. 1. An efficient and algorithmic procedure for evaluating them in terms of a consistent expansion in M H /M t is the well-known method of asymptotic expansions (see, e.g., Ref. [23]). In our case, it expresses the original diagrams as a sum of convolutions of massive vacuum with massless vertex integrals. The diagrammatic representation of this procedure is shown for two particular diagrams in Figs. 2 and 3. We generate the diagrams with the help of qgraf [24] and pass them to q2e/exp [25,26], which automatically carries out the expansion. The resulting 1-, 2-, and 3-loop vacuum integrals are evaluated by MATAD [27]. For the 1-and 2-loop vertex integrals we use the method of Ref. [28] by applying the relevant modifications [29] to MINCER [30]. The colour and Lorentz structure of the physical amplitude is given by The diagrams left of ⊗ represent subdiagrams of the original diagram that are to be expanded in the momenta corresponding to the dotted external lines before the loop integration. In this way, it is apparent that the original integral, depending on M 2 H and M 2 t , is decomposed into products of "tadpole" integrals with vanishing external momenta and massless vertex integrals. The shaded blob in the diagrams right of ⊗ represents an effective vertex given by the result of the diagram left of ⊗ (for details of asymptotic expansions, see Ref. [23], for example). The three terms right of "→" are proportional to N 3 t , N 2 t N h , and N t N 2 h , respectively (cf. Eq. (3) below). Subdiagrams without external mass scales are not shown. where q µ 1 and q ν 2 are the external gluon momenta, and a and b are the corresponding colour indices. We contract the amplitude with P ab µν (q 1 , q 2 ) in order to arrive at a scalar expression in Lorentz and colour space. Before the massive two-and three-loop integrals are passed to MATAD, we need to eliminate any external momenta in their numerators by appropriate decompositions into invariants, e.g. where the dots represent factors that are independent of l. There are two diagrams at one-loop level, 23 at two-loop level, and 657 at three-loop level, and the calculation of the 1/M 2 t -suppressed terms takes about 5 · 10 4 s, with the computationally most expensive one shown in Fig. 3. Results Before we present the results, let us introduce some useful notation. 
The renormalization scale µ appears in our calculation only through the factors with Euler's constant γ E ≈ 0.577216 . These expressions are understood as their Laurent series in ǫ = (4 − D)/2, where D is the number of space-time dimensions used in the calculation. The perturbative coefficients typically contain the transcendental numbers ζ n ≡ ζ(n), where ζ is Riemann's zeta function. The particular values occurring here are Throughout this paper, bare quantities are labeled by a superscript "B". Note that since the diagrams are evaluated with a spectrum of six quark flavours, renormalization has to be performed accordingly. To perform on-shell mass renormalization or conversion of α s from the six-to the five-flavour scheme one must keep the proper number of higher order terms in ǫ due to the presence of infra-red poles. Furthermore, in order to arrive at a physical result, the external gluons must be renormalized on-shell. For convenience, the number of light flavours n l is kept as a free parameter; the physical case corresponds to n l = 5. The virtual cross section for the process gg → H can be written as where (1 + ǫ) The amplitude is expanded in terms of a perturbative series: where the coefficients h (n) are functions of M H , M B t , and the renormalization scale µ. In our approach, they take the form The leading terms have been calculated in the framework of an effective Lagrangian. However, for consistency, we present them here in a form that is directly compatible with the mass suppressed terms to be presented below: where are the perturbative coefficients of the effective Higgs-gluon vertex as presented in Ref. [29] which we quote here for the sake of completeness. Furthermore, we find It may be worth noting that c (1) and c (2) correspond to the one-and two-loop results for the bare coefficient function of the effective Lagrangian: 1 Finally, is the 1-loop term of the bare decoupling constant for α s for the transition from n f = 6 to n l = 5 flavour QCD (see, e.g., Ref. [32]). The origin of the term involving ζ (1),B g in Eq. (9) is the fact that the coefficients a (n) in Ref. [29] were evaluated in 5-flavour QCD, while the h (n) of Eq. (9) are based on 6-flavour QCD. Therefore, diagrams like the right-most one in Fig. 3 do not have a correspondence in the effective theory calculation of Ref. [29]. The expressions presented so far correspond to known results and have been included in this paper only for the sake of the reader's convenience. They should facilitate any implementation of the newly calculated terms to be presented below. Besides that, they serve as a useful check of our setup. It should be noted that in our approach, we directly calculate the coefficients h (n) m , and the decomposition into a (n) and c (n) is just for comparison to the literature. The new results of this paper are the contributions to the virtual 3-loop amplitude that are formally suppressed by powers of M H /M t . In the notation of Eqs. (7) and (8), the first two subleading orders read 2 h for the inclusive NNLO QCD cross section. We have subjected the result to various checks and found full confirmation. The next steps towards the top mass effects in the Higgs production cross section will be the evaluation of the mass suppressed terms in the real radiation amplitudes. We defer this problem to a forthcoming publication. Upon completion of this paper, we became aware of a similar calculation [40]. We have compared our results and found full agreement.
2009-07-17T09:59:42.000Z
2009-07-17T00:00:00.000
{ "year": 2009, "sha1": "14ef55ca832ecb303242594bc6de6051fa8b5688", "oa_license": null, "oa_url": "http://arxiv.org/pdf/0907.2997v1.pdf", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "14ef55ca832ecb303242594bc6de6051fa8b5688", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics" ] }
46965640
pes2o/s2orc
v3-fos-license
A distributed voltage stability margin for power distribution networks

We consider the problem of characterizing and assessing voltage stability in power distribution networks. Different from previous formulations, we consider the branch-flow parametrization of the power system state, which is particularly effective for radial networks. Our approach to the voltage stability problem is based on a local, approximate, yet highly accurate characterization of the determinant of the power flow Jacobian. Our determinant approximation allows us to construct a voltage stability index that can be computed in a fully scalable and distributed fashion. We provide an upper bound on the approximation error, and we show how the proposed index outperforms other voltage indices that have been recently proposed in the literature.

INTRODUCTION

Operators of power distribution grids are facing unprecedented challenges caused by higher and intermittent consumer demand, driven, among other things, by the penetration of electric mobility (Clement-Nyns et al., 2010; Lopes et al., 2011). Grid congestion is expected as the demand gets closer to the hosting capacity of the network. One of the main phenomena that determines the finite power transfer capacity of a distribution grid is voltage instability (see the recent discussion in Simpson-Porco et al. 2016). The amount of power that can be transferred to the loads via a distribution feeder is inherently limited by the non-linear physics of the system. In practice, as the grid load approaches this limit, increasingly lower voltages are typically observed along the feeder, followed by voltage collapse.

From the operational point of view, it is important to be able to identify operating conditions of the grid that are close to voltage collapse, in order to take appropriate remedial actions. Although undervoltage conditions and voltage instability are related phenomena, it has been shown in Todescato et al. (2016) that it is not possible to identify the latter by simply looking at the feeder voltage levels. Instead, many different indices have been proposed to quantify the distance of the grid from voltage collapse. Most of them are based on the observation that the Jacobian of the power flow equations becomes singular at the steady-state voltage stability limit (see the seminal work by Tamura et al. 1988 and, even before, Venikov and Rozonov 1961). For a review of indices based on this approach, we refer to Chebbo et al. (1992) and to Gao et al. (1992). A geometric interpretation of the phenomenon has been developed by Chiang et al. (1990), and starting from Tamura et al. (1983), voltage collapse has been related to the appearance of bifurcations in the solutions of the nonlinear power flow equations. More recently, semidefinite programming has been proposed as a tool to identify the region where voltage stability is guaranteed (Dvijotham and Turitsyn, 2015). The same region has also been characterized via applications of fixed-point theorems (see Bolognani and Zampieri 2016 and references therein, and the extensions proposed in Yu et al. 2015 and Wang et al. 2016). Additionally, convex optimization tools have been used to determine sufficient conditions for unsolvability (and thus voltage collapse) in Molzahn et al. (2013).

(This research is supported by ETH funds and the SNF Assistant Professor Energy Grant #160573.)
All these works propose global indices, in the sense that knowledge of the entire system state is required at some central location, where the computation is performed. Such a computation typically scales poorly with the grid size, hindering the practical applicability of these methods. A few exceptions are heuristic indices such as the one proposed in Vu et al. (1999), which can be evaluated by each load based on local measurements.

The methodology that we propose in this paper builds on the aforementioned approach based on the singularity of the power flow Jacobian. Differently from other works, however, we adopt a branch flow model for the power flow equations (Baran and Wu, 1989a,b; Farivar and Low, 2013). This choice gives us a specific advantage, towards three results: first, we can reduce the dimensionality of the problem via algebraic manipulation of the Jacobian of such equations; second, we can propose an approximation of the Jacobian-based voltage stability margin that is a function of only the diagonal elements of the manipulated Jacobian, and is therefore computationally very tractable; finally, we can show how such an index can be computed in a completely distributed way, based on purely local measurements at the buses. We derive an explicit bound for the approximation error, which is extremely small across the entire voltage stability region. Based on that, we discuss how the proposed voltage stability index can be used in practice, and we show in numerical experiments how it outperforms other indices recently proposed in the literature.

The paper is structured in the following way. In Section 2 we recall the branch flow model, while in Section 3 we explain how voltage stability can be assessed based on that model. In Section 4 we propose an approximate voltage stability index and we analyze the quality of the approximation. Finally, in Section 5, we illustrate the result in simulations and we discuss the applicability of this approach to practical grid operation.

POWER DISTRIBUTION NETWORK MODEL

Let G = (N, E) be a directed tree representing a radial distribution network, where each node in N = {0, 1, ..., n} represents a bus, and each edge in E represents a line. Note that |E| = n. A directed edge in E is denoted by (i, j) and means that i is the parent of j. For each node i, let δ(i) ⊆ N denote the set of all its children. Node 0 represents the root of the tree and corresponds to the distribution grid substation. For each node i other than the root 0, let π(i) ∈ N be its unique parent.

We now define the basic variables of interest. For each (i, j) ∈ E, let ℓ_ij be the magnitude squared of the complex current from bus i to bus j, and s_ij = p_ij + jq_ij be the sending-end complex power from bus i to bus j. Let z_ij = r_ij + jx_ij be the complex impedance of the line (i, j). For each node i, let v_i be the magnitude squared of the complex voltage at bus i, and s_i = p_i + jq_i be the net complex power demand (load minus generation) at bus i. Finally, we use the notation 1 and 0 for the vectors of all 1's and 0's, respectively.

Relaxed branch flow model

To model the power distribution network we use the relaxed branch flow equations proposed in Baran and Wu (1989a,b) and Farivar and Low (2013): for every branch (i, j) ∈ E,

  p_ij = p_j + Σ_{k ∈ δ(j)} p_jk + r_ij ℓ_ij,
  q_ij = q_j + Σ_{k ∈ δ(j)} q_jk + x_ij ℓ_ij,
  v_j = v_i − 2(r_ij p_ij + x_ij q_ij) + (r_ij² + x_ij²) ℓ_ij,
  ℓ_ij v_i = p_ij² + q_ij².
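As a numerical illustration of these equations (not part of the paper's development; data are illustrative), a minimal backward/forward sweep solves them on a two-line feeder 0 → 1 → 2 in per-unit:

import numpy as np

r = {(0, 1): 0.01, (1, 2): 0.02}
x = {(0, 1): 0.02, (1, 2): 0.04}
p_load = {1: 0.1, 2: 0.2}       # net active demands at the PQ buses
q_load = {1: 0.05, 2: 0.1}
v = {0: 1.0, 1: 1.0, 2: 1.0}    # squared voltage magnitudes, flat start

for _ in range(20):
    # Backward sweep: accumulate sending-end flows, deepest line first.
    P, Q, L = {}, {}, {}
    for (i, j) in [(1, 2), (0, 1)]:
        pj = p_load[j] + sum(P[e] for e in P if e[0] == j)
        qj = q_load[j] + sum(Q[e] for e in Q if e[0] == j)
        # |I_ij|^2 from receiving-end power and voltage (equivalent to
        # ell_ij * v_i = p_ij^2 + q_ij^2).
        L[(i, j)] = (pj**2 + qj**2) / v[j]
        P[(i, j)] = pj + r[(i, j)] * L[(i, j)]
        Q[(i, j)] = qj + x[(i, j)] * L[(i, j)]
    # Forward sweep: update squared voltages from the root.
    for (i, j) in [(0, 1), (1, 2)]:
        z2 = r[(i, j)]**2 + x[(i, j)]**2
        v[j] = v[i] - 2*(r[(i, j)]*P[(i, j)] + x[(i, j)]*Q[(i, j)]) + z2*L[(i, j)]

print({k: round(val, 5) for k, val in v.items()})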
Similarly, we define p̄, q̄, ℓ, r, and x as the vectors obtained by stacking the scalars p_ij, q_ij, ℓ_ij, r_ij, and x_ij, respectively, for (i, j) ∈ E. In the following, we make use of the compact notation [x], where x ∈ R^n, to indicate the n × n matrix that has the elements of x on the diagonal, and zeros everywhere else. Finally, we define two (0,1)-matrices A_i and A_o, where A_i ∈ R^((n+1)×n) is the matrix which selects for each row j the branch (i, j), where i = π(j), and A_o ∈ R^((n+1)×n) is the matrix which selects for each row i the branches (i, j), where j ∈ δ(i). Notice that A := A_o − A_i is the incidence matrix of the graph. The relaxed branch flow equations in vector form are: A_i(p̄ − [r]ℓ) − A_o p̄ = p, A_i(q̄ − [x]ℓ) − A_o q̄ = q, A^T v = 2([r]p̄ + [x]q̄) − ([r]^2 + [x]^2)ℓ, [A_o^T v]ℓ = [p̄]p̄ + [q̄]q̄. (1) We model node 0 as a slack bus, in which v_0 is imposed (v_0 = 1 p.u.), and all the other nodes as PQ buses, in which the complex power demand (active and reactive power) is imposed and does not depend on the bus voltage. Therefore, the quantities (v_0, p_1,...,n, q_1,...,n) are to be interpreted as system parameters, and the relaxed branch flow model specifies 4n + 2 equations in the 4n + 2 variables (p̄, q̄, ℓ, v_1,...,n, p_0, q_0). CHARACTERIZATION OF VOLTAGE STABILITY A loadability limit of the power system is a critical operating point (as determined by the nodal power injections) of the grid, where the power transfer reaches a maximum value, after which the relaxed branch flow equations have no solution. There are infinitely many loadability limits, corresponding to different demand configurations. Ideally, the power system will operate far away from these points, with a sufficient safety margin. On the other hand, the flat voltage solution (of the power flow equations) is the operating point of the grid where v = 1 and p = q = p̄ = q̄ = ℓ = 0. This point is voltage stable and the power system typically operates relatively close to it. In the following, we recall and formalize the standard reasoning that allows one to characterize loadability limits via conditions on the Jacobian of the power flow equations, and we specialize those results for the branch flow model that we have adopted. Jacobian of the power flow equations Based on the discussion at the end of Section 2, consider the two vectors u = (p̄, q̄, ℓ, v_1,...,n, p_0, q_0) and ξ = (v_0, p_1,...,n, q_1,...,n), corresponding to the system variables and the system parameters, respectively. Then, the relaxed branch flow model (1) can be expressed in the implicit form ϕ(u, ξ) = 0. From a mathematical point of view, a loadability limit corresponds to the maximum of a scalar function γ(ξ) (to be interpreted as a measure of the total power transferred to the loads), constrained to the set ϕ(u, ξ) = 0 (the physical grid constraints): max_{u,ξ} γ(ξ) subject to ϕ(u, ξ) = 0. From direct application of the KKT optimality conditions, it results that at a loadability limit the power flow Jacobian ϕ_u = ∂ϕ/∂u becomes singular, i.e., det(ϕ_u) = 0 (for details, see Cutsem and Vournas 1998, Chapter 7). Based on this, we adopt the standard characterization for voltage stability of the grid, which we present in the following definition. Definition. (Voltage stability region). The voltage stability region of a power distribution network with one slack bus and n PQ buses, described by the relaxed branch flow model, is the open region surrounding the flat voltage solution where the power flow solutions satisfy det(ϕ_u) > 0. (2) Although there might be other feasible regions, where the determinant of the power flow Jacobian is negative, the region characterized by (2) corresponds to the operating points of practical interest for the operation of the power system.
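As a concrete aside on the network model above, the following minimal Python sketch (not from the paper) builds the selection matrices A_i and A_o and the incidence matrix A = A_o − A_i for a small hypothetical feeder given as a parent array; the example tree and variable names are illustrative assumptions.

```python
import numpy as np

# Toy radial feeder: node 0 is the substation; parent[j-1] = pi(j) for j = 1..n.
# Edge e corresponds to the branch (parent[e], e + 1).
parent = [0, 0, 1, 1, 2]    # n = 5 buses plus the root

n = len(parent)
A_i = np.zeros((n + 1, n))  # selects, for each row j, the branch (pi(j), j)
A_o = np.zeros((n + 1, n))  # selects, for each row i, the branches (i, j), j in delta(i)

for e, i in enumerate(parent):
    j = e + 1
    A_i[j, e] = 1.0
    A_o[i, e] = 1.0

A = A_o - A_i               # signed incidence matrix of the directed tree

# Each column of A has exactly one +1 (the parent) and one -1 (the child).
assert np.all(A.sum(axis=0) == 0)
print(A)
```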
When the branch flow model is adopted, ϕ_u takes the form given in (3), where A_o2 and A_2 are the matrices obtained by removing the first row from A_o and A, respectively, and where e_1 is the first canonical base vector. Observe that the first three row blocks of ϕ_u are constant, while the last row block depends linearly on the variables p̄, q̄, v and ℓ. Reduced power flow Jacobian We define an n × n matrix ϕ̃_u, given in (4), that we denote as the reduced power flow Jacobian. In the following, we provide a key theorem that shows the merits of the reduced power flow Jacobian. Theorem 1. Consider the power flow Jacobian (3) and the reduced power flow Jacobian (4) of a power distribution network with one slack bus and n PQ buses, described by the relaxed branch flow model. The following statements hold. Proof. For i) and iii) only a sketch of the proof is provided. The full details are available in Aolaritei (2016). i) Observe that the last two columns of ϕ_u are the canonical vectors e_1 and e_(n+2) of R^(4n+2). Thus, if we eliminate these columns together with the 1st and (n+2)nd rows, we obtain a new matrix ϕ*_u of dimensions 4n × 4n whose determinant is equal to (−1)^n det(ϕ_u). We next prove that det(ϕ̃_u) = (−1)^n det(ϕ*_u). To do so, we apply the Schur complement twice on the matrix ϕ*_u. After some very basic matrix manipulation the result is obtained. ii) In the flat voltage solution we have that ϕ̃_u = [A_o^T v] = [A_o^T 1] = I, and therefore det(ϕ̃_u) = 1. Thus there exists a solution in the voltage stability region where the determinant of the power flow Jacobian is positive. Moreover, we know that at a loadability limit, det(ϕ̃_u) = 0, and that the determinant is a continuous function of the grid variables. Therefore, in order to remain in the voltage stability region, the determinant needs to remain positive. iii) By re-indexing the nodes of the network, ϕ̃_u can be transformed into a block-diagonal matrix, where each block depends only on node 0 and the subtree rooted at one child of node 0. Theorem 1 shows that the reduced power flow Jacobian ϕ̃_u is an effective tool for the characterization of the voltage stability region, and for the voltage stability analysis of a distribution grid. In particular, i) shows that studying the reduced power flow Jacobian is completely equivalent to studying the original power flow Jacobian, when we are interested in its singularity. ii) provides a more precise characterization of the region where the grid voltages are stable. Finally, iii) explains how the dimensionality of the problem of computing the determinant of the power flow Jacobian can be further reduced, if the root (node 0) has more than one child. VOLTAGE STABILITY ANALYSIS In this section we first propose an approximation of the determinant of the reduced power flow Jacobian that is amenable to scalable and distributed computation, when measurements of the grid variables are available. Then, based on this approximation, we propose a voltage stability index to quantify the distance of the power system from voltage collapse. Mathematical preliminaries on matrix theory Given A ∈ R^(n×n), we denote by A_diag and A_off the matrices that contain only the diagonal and off-diagonal elements of A, respectively. We denote by ρ = ρ(A) its spectral radius, i.e., the maximum modulus of its eigenvalues. Definition. A matrix A ∈ R^(n×n) is a Z-matrix if it can be written as A = αI − B, where α is a real number and B is a nonnegative matrix. The set of all n × n Z-matrices is denoted by Z⟨n⟩. Definition.
A matrix A ∈ R^(n×n) is an ω-matrix if: (1) each principal submatrix of A has at least one real eigenvalue; (2) if S_1 is a principal submatrix of A and S_11 a principal submatrix of S_1, then λ_min(S_1) ≤ λ_min(S_11), where λ_min denotes the smallest real eigenvalue. The set of all n × n ω-matrices is denoted by ω⟨n⟩. Definition. A matrix A ∈ R^(n×n) is a τ-matrix if it is an ω-matrix and λ_min(A) ≥ 0. The set of all n × n τ-matrices is denoted by τ⟨n⟩. Determinant approximation Direct inspection of the reduced power flow Jacobian ϕ̃_u shows that, for realistic parameter values and operating conditions, its off-diagonal elements (and in particular its lower-diagonal elements) are significantly smaller than the diagonal elements. The approximation proposed in this paper consists in ignoring them, and requires the following assumption. Assumption 5. All PQ buses in the network have positive active and reactive power demand. This assumption ensures that p_ij, q_ij ≥ 0 for all (i, j) ∈ E, although it is not a necessary condition for that to hold. In practical terms, having positive power demands everywhere corresponds to the most unfavorable case for voltage stability, and there is little loss of generality in assuming that in this analysis. Based on this assumption, in the remainder of this paper we will refer to the nodes 1, ..., n as PQ loads. In Fig. 1 we represent the numerical value of ϕ̃_u for two levels of loadability of a 56-bus distribution grid (described in detail in Section 5). In the left panel, the operating point of the system is close to the flat voltage solution, while in the right panel, the grid is operated close to a loadability limit. The diagonal elements of ϕ̃_u are given in (6), where i = π(j) and r_0i is the sum of the resistances of the lines connecting node 0 to node i (and similarly for x_0i). By ignoring the off-diagonal elements, an approximation of det(ϕ̃_u) is obtained as the product of the elements on the diagonal defined in (6): det_approx := Π_{(i,j)∈E} ϕ̃_u,jj. (7) In the next Lemma, we prove that the approximation is an upper bound for the true determinant. Lemma 6. For a power distribution network with one slack bus and n PQ loads described by the relaxed branch flow model, in the voltage stability region the determinant of the reduced power flow Jacobian satisfies det(ϕ̃_u) ≤ det_approx. Proof. Having p_ij, q_ij ≥ 0 for all (i, j) ∈ E ensures that the off-diagonal elements of ϕ̃_u are nonpositive. Thus, ϕ̃_u is a Z-matrix and, from Theorem 2, ϕ̃_u is also an ω-matrix. Recall that an ω-matrix is a nonsingular τ-matrix if and only if its smallest real eigenvalue is positive. But this is exactly what we require for voltage stability. To see this, notice that in the flat voltage solution all the eigenvalues of ϕ̃_u are real and equal to 1. Thus the determinant becomes zero for the first time when the smallest real eigenvalue becomes zero. Notice that there is always at least one real eigenvalue, since ϕ̃_u is an ω-matrix. Therefore, in the voltage stability region, ϕ̃_u is a τ-matrix. The result follows from Theorem 3. Numerical experiments show that the approximation is exact only in the flat voltage solution, though the approximation error is almost negligible (see Section 5). With positive power demands, det(ϕ̃_u) < det_approx. Voltage stability index Based on Theorem 1, the voltage stability region is defined as the region where det(ϕ̃_u) > 0. In practical terms, the grid operator has to identify a threshold β > 0 and impose that det(ϕ̃_u) ≥ β as a practical voltage stability measure.
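Before turning to the index itself, here is a small numerical illustration of the mechanism behind Lemma 6, run on a synthetic Z-matrix rather than the actual grid Jacobian: with a positive diagonal and small nonpositive off-diagonal entries, the product of the diagonal elements upper-bounds the true determinant. All values are made up.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for the reduced Jacobian: positive diagonal, small
# nonpositive off-diagonal entries (a Z-matrix), as under Assumption 5.
n = 8
off = -0.02 * rng.random((n, n))
np.fill_diagonal(off, 0.0)
J = np.diag(0.9 + 0.1 * rng.random(n)) + off

det_true = np.linalg.det(J)
det_approx = np.prod(np.diag(J))   # product of diagonal elements, as in (7)

print(det_true, det_approx)
assert det_true <= det_approx      # upper-bound property of Lemma 6
```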
In order to make full use of the capacity of the grid, the value β needs to be chosen such that, when det(ϕ̃_u) = β, the operating point of the grid is very close to a loadability limit. From numerical experiments, it is evident that a proper choice of β intrinsically depends on the size of the network. To gain some intuition about this, recall that the determinant of a matrix is equal to the product of its eigenvalues. In the flat voltage solution, all the eigenvalues of ϕ̃_u are equal to 1. As soon as the power demands increase, the eigenvalues start moving towards the origin. Since the number of eigenvalues is equal to the size of ϕ̃_u, and thus to the size of the network, it is clear that bigger networks are associated with exponentially smaller determinants. Based on this intuition, we propose VSI := ln(det(ϕ̃_u))/n as a voltage stability index. Thus, for some threshold β > 0, the practical voltage stability measure becomes VSI ≥ ln(β)/n =: VSI_min. (8) Following the determinant approximation proposed in (7), we then define the voltage stability index approximation VSIA := ln(det_approx)/n. In the following remark we point out an interesting and useful property of this voltage stability index approximation. Remark 7. (Distributed computation of the VSIA). Notice that ϕ̃_u,jj is a function only of the local state variables relative to the edge (i, j), where i = π(j). More precisely, ϕ̃_u,jj can be computed in a distributed way from measurements performed at bus i and on the power lines that leave the same bus: v_i, p_ij, q_ij and ℓ_ij. Once each node i has computed ϕ̃_u,jj for each child j ∈ δ(i), the computation of the VSIA amounts to simply evaluating the arithmetic mean of the terms ln ϕ̃_u,jj for all (i, j) ∈ E. The arithmetic mean of these nodal quantities can then be computed via scalable, fully distributed algorithms such as consensus algorithms (Olfati-Saber and Murray, 2004). Approximation error In this section we study the approximation error between VSI and VSIA. To do so, we need the following lemma. Lemma 8. In a power distribution network with one slack bus and n PQ loads described by the relaxed branch flow model, in the voltage stability region the reduced power flow Jacobian satisfies the following: i) the diagonal elements of ϕ̃_u remain positive; ii) ρ(ϕ̃_u,diag^(−1) ϕ̃_u,off) < 1. Proof. i) The two facts, ϕ̃_u,diag = I in the flat voltage solution and det(ϕ̃_u,diag) > 0 in the voltage stability region (via Lemma 6), ensure that the elements on the diagonal remain positive. ii) Since ϕ̃_u = ϕ̃_u,diag (I + ϕ̃_u,diag^(−1) ϕ̃_u,off), we have that det(ϕ̃_u) = det(ϕ̃_u,diag) det(I + ϕ̃_u,diag^(−1) ϕ̃_u,off). In the flat voltage solution, ϕ̃_u,diag^(−1) ϕ̃_u,off = 0_(n×n), and at a loadability limit, det(I + ϕ̃_u,diag^(−1) ϕ̃_u,off) = 0. Thus, the power grid becomes unstable when an eigenvalue of ϕ̃_u,diag^(−1) ϕ̃_u,off reaches −1. Now, since −ϕ̃_u,diag^(−1) ϕ̃_u,off is non-negative, it has a positive real eigenvalue equal to the spectral radius ρ(−ϕ̃_u,diag^(−1) ϕ̃_u,off) (Perron-Frobenius Theorem). Therefore, ϕ̃_u,diag^(−1) ϕ̃_u,off has a negative real eigenvalue with magnitude equal to ρ(ϕ̃_u,diag^(−1) ϕ̃_u,off). Hence, this is the eigenvalue that first reaches −1. This implies that in the voltage stability region, ρ(ϕ̃_u,diag^(−1) ϕ̃_u,off) < 1. In the following Lemma we give an exact expression for the approximation error. Lemma 9. In a power distribution network with one slack bus and n PQ loads described by the relaxed branch flow model, in the voltage stability region we have: VSIA − VSI = (1/n) Σ_{k=2}^∞ (1/k) Trace((−ϕ̃_u,diag^(−1) ϕ̃_u,off)^k). (9) Proof. We have that ln(det(ϕ̃_u)) = ln(det(ϕ̃_u,diag)) + ln(det(I + ϕ̃_u,diag^(−1) ϕ̃_u,off)).
As ρ(ϕ̃_u,diag^(−1) ϕ̃_u,off) < 1, we know that ln(det(I + ϕ̃_u,diag^(−1) ϕ̃_u,off)) = Trace(ln(I + ϕ̃_u,diag^(−1) ϕ̃_u,off)) = Σ_{k=1}^∞ ((−1)^(k+1)/k) Trace((ϕ̃_u,diag^(−1) ϕ̃_u,off)^k). To conclude, notice that Trace(ϕ̃_u,diag^(−1) ϕ̃_u,off) = 0. In Section 5 we show that the approximation is almost exact in the voltage stability region. Since the terms in the above sum are all positive, they are very small and they decay quickly to zero. The numerical value of the right-hand side of (9) has been plotted in Fig. 3, for the test distribution feeder described in Section 5, and for different load levels. In the following theorem, we present the main result on the quality of the proposed voltage stability index approximation. Theorem 10. In a power distribution network with one slack bus and n PQ loads described by the relaxed branch flow model, in the voltage stability region we have: 0 ≤ VSIA − VSI ≤ −ln(1 − ρ) − ρ, (10) where ρ = ρ(ϕ̃_u,diag^(−1) ϕ̃_u,off). Proof. The first inequality descends from Lemma 6. The second inequality is proved by applying Theorem 4, using what we proved in Lemma 8. We conclude this section by presenting the following conjecture. Conjecture 11. In Ipsen and Lee (2011), the authors illustrate that the pessimistic factor in the approximation bound of Theorem 4 is given by the factor n that appears in (5). They found that replacing n by the number of eigenvalues whose magnitude is close to the spectral radius makes the bound tight. In our simulations we found that there is generally only one eigenvalue with magnitude close to the spectral radius. This would imply that the result that we presented in Theorem 10 can be tightened to 0 ≤ VSIA − VSI ≤ (−ln(1 − ρ) − ρ)/n. (11) This tighter bound on the approximation error has always proved true in our simulations, as illustrated in the next section. Numerical validation of the VSI approximation In this section we assess the quality of the proposed voltage stability index approximation via numerical simulations. We consider a 56-bus distribution network, obtained from the three-phase backbone of the IEEE123 test feeder. The details of the testbed are available in Bolognani (2014). Power flow equations have been solved via MATPOWER (Zimmerman et al., 2011). In Fig. 2 we represent the voltage stability index (VSI) and the voltage stability index approximation (VSIA) when the system is operated at a series of increasing power demands. We start from an operating point very close to the flat voltage solution, and we increase the active and reactive power demand at four different buses in the grid until the Jacobian becomes singular and the Newton's method employed for the solution of the power flow equations cannot proceed. Observe that the proposed VSI approximation is almost exact up to very close to the loadability limit. In Fig. 3 we represent the VSI approximation error, together with the bounds presented in Theorem 10 and Conjecture 11. Observe that the approximation error is quite small in either case, and it follows the conjectured bound (11) rather than the bound (10). More simulations can be found in Aolaritei (2016), and show how the quality of the approximation is consistently good across different power demand configurations. Comparison of practical voltage stability indices Recall from Section 4.3 that we propose VSI ≥ VSI_min as a voltage stability measure, where VSI_min has to be decided in order to characterize an operating condition close to the loadability limit of the grid. It can be seen in Fig. 2 that when VSI = −1, its negative slope is already extremely steep, meaning that for a very small increase in power demand the system would become unstable.
Preliminary numerical investigation has shown that this threshold for VSI is valid for a diverse range of grid sizes and topologies. Notice that such a limit corresponds to an exponentially decreasing threshold for the determinant of the power flow Jacobian, i.e., det(ϕ̃_u) ≥ e^(−n). In practical terms, however, when the VSI is to be used as a tool for the assessment of the distance from voltage collapse, a more conservative value of VSI_min is to be chosen. In the following, we choose a slightly more conservative limit (VSI_min = −0.8) in order to present a comparison between the proposed VSI and three other indices that have been recently proposed in the literature. Observe from Fig. 2 that the approximation is extremely precise when VSI is larger than −0.8. This suggests that the VSIA can be safely used instead, enabling a fast, scalable, and distributed assessment of the voltage stability of the grid. The first two indices that we consider have been proposed in Bolognani and Zampieri (2016) and in Simpson-Porco et al. (2016), and they involve open-circuit load voltages, the grid impedance matrix (or a specific norm of it), and nodal power injections. The third index has been presented in Wang et al. (to appear), and it requires the knowledge of the impedance matrix of the grid and of phasorial measurements of the bus voltages. For each criterion we evaluated the proposed voltage stability index in a low-load operating point (very close to the flat voltage profile) and in the operating point in which our VSIA becomes equal to −0.8 (corresponding to what we defined as the practical voltage stability limit). Since the method proposed in Simpson-Porco et al. (2016) is based on the decoupled reactive power flow equations, for the comparison with their method we used only reactive power demands. The values we obtained show that the proposed voltage stability index approximation is in fact an effective tool for the precise assessment of the distance of the system from voltage collapse. The other indices reach their threshold value before our index does, showing that they are more conservative, and therefore result in a less efficient use of the given distribution grid. CONCLUSIONS In this paper we have presented a voltage stability index for power distribution networks, for which an accurate approximation is available. Bounds on the quality of this approximation have been mathematically derived, and the accuracy has been validated in simulations. Notably, the approximate voltage stability index can be computed in a scalable and distributed way by agents that can measure local variables at each bus. Based on this observation, we envision three possible applications for which the proposed approach can bring a significant contribution.
• As an online voltage stability monitoring tool: when the necessary quantities are measurable at the buses, the VSIA can be computed asynchronously via standard tools from multi-agent average consensus (see the sketch after this list).
• In optimal power flow programming: whenever the problem is expressed via the branch flow model, the VSIA can be used as a computationally efficient barrier function to maintain the solution of the problem inside the region of voltage stability.
• In numerical algorithms for the construction of power flow feasibility sets that are based on the nonsingularity of the power flow Jacobian (as in Dvijotham and Turitsyn 2015), the proposed approximation can be used to avoid expensive determinant computations and improve scalability to larger networks.
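As a minimal illustration of the consensus-based evaluation mentioned in the first bullet above (and in Remark 7), the following sketch runs synchronous average consensus with Metropolis weights on a toy path graph; the local terms ϕ̃_u,jj are invented for the example.

```python
import numpy as np

# Average consensus on ln(phi_jj): each edge-owning agent holds one local term,
# and repeated neighbor averaging converges to the network-wide mean, the VSIA.
phi_jj = np.array([0.97, 0.91, 0.88, 0.95, 0.90])   # made-up local diagonals
x = np.log(phi_jj)                                  # local quantities ln(phi_jj)

n = len(x)
W = np.zeros((n, n))                 # Metropolis weights for a path graph
for a in range(n - 1):
    W[a, a + 1] = W[a + 1, a] = 1.0 / 3.0
np.fill_diagonal(W, 1.0 - W.sum(axis=1))

for _ in range(200):                 # synchronous consensus iterations
    x = W @ x

vsia = np.log(phi_jj).mean()
print(x, vsia)                       # every agent approaches the VSIA
```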
Preliminary numerical investigation shows that the voltage stability index approximation remains extremely accurate even in the presence of generators (i.e., positive power injections, which violate Assumption 5). An extension to this more general case is currently under development, together with a numerical assessment of the effectiveness of the proposed index on various distribution test feeders.
2017-03-31T10:48:43.000Z
2016-12-01T00:00:00.000
{ "year": 2016, "sha1": "0863fa374476cb21a831cc2800440f18d0b5e223", "oa_license": null, "oa_url": "https://doi.org/10.1016/j.ifacol.2017.08.1959", "oa_status": "GOLD", "pdf_src": "Arxiv", "pdf_hash": "fb8e562debf5cd45c4f724d9964793fd84dd382f", "s2fieldsofstudy": [ "Mathematics" ], "extfieldsofstudy": [ "Mathematics", "Computer Science" ] }
210176350
pes2o/s2orc
v3-fos-license
Professional appraisal of online information about children's footwear measurement and fit: readability, usability and quality Background Parents increasingly use the internet to seek health information, to share information and to purchase textiles and footwear. This shift in footwear purchasing habits raises concern about how (and if) parents are getting their children's feet measured, and what strategies are in place to support the fit of footwear. In response to this, some companies and healthcare organisations have developed resources to support home measurement of foot size, and link these measures to footwear selection, measurement and fitting. The aim of this research was to undertake an appraisal of web-based resources about measurement and fit of children's footwear, focussing specifically on readability, usability and quality. Methods Search terms relating to children's foot measurement were compiled and online searching was undertaken. Search results were saved and screened for relevance. Existing resources were categorised based on their source, e.g. a footwear company or a health website. The 15 most commonly identified resources were reviewed by a professional panel for readability, content, usability and validity. One researcher also assessed the accessibility and reading ease of the resources. Results Online resources were predominantly from commercial footwear companies (54%). Health information sources from professional bodies made up 4.2% of the resources identified. The top 15 resources had appropriate reading ease scores for parents (SMOG Index 4.3-8.2). Accessibility scores (the product of the number of times a resource appeared in search results and its ranking in the results) were highest for commercial footwear companies. The panel scores for readability ranged from 2.7 to 9 out of 10, with a similar range for content, usability and validity. Conclusions Information for parents seeking to purchase footwear for their children is readily available online, but it is largely dominated by commercial footwear companies. The quality and usability of this information is of a moderate standard; notable improvements could be made to the validity of the task the child is asked to undertake and the measures being taken. Improvements in these resources would improve the data input to the selection of footwear and therefore have a beneficial impact on footwear fit in children. Background Parents are known to seek health information online [1] and increased use of technology has supported information sharing through websites, forums and social media [2,3]. In the United Kingdom in 2018, 54% of parents described using the internet to look for health-related information [4]. In relation to footwear purchases, the internet has supported a considerable shift from in-store to online purchasing, which accounted for over 19% of total sales in the British footwear, clothing and textile industry in 2018 [5]. Recent work exploring parents' knowledge, practices and perceptions of children's feet identified that parents wanted accurate, clear and consistent foot health information [6]. This work also highlighted the challenges with footwear choices in early childhood and identified the influence of footwear retailers in promoting information about foot development and footwear choices.
Online purchasing of footwear is increasingly common and may pose challenges with ensuring that children have their feet measured prior to purchase, as it negates the opportunity to try the footwear before buying. To offset the expense to the companies associated with return of unsuitable purchases, many offer online fitting tools and advice such as size guides and printable charts to aid purchasing choices. For parents to make informed footwear choices this information needs to be accessible and, from a professional perspective, credible, as this could have implications for promoting foot health in children [7]. Previous research advises that parents are unsure of how to evaluate the reliability of online health resources for their children [1] and that less than 10% of parents 'greatly trust' health information that they have identified through internet search engines [2]. These concerns are supported by the results of a systematic review which identified the quality of online health information as low [8], a similar outcome to a review of scientific information on the internet [9]. Considering footwear information specifically, parents' desire to purchase footwear online [4], their access to online health information [4] and the prevalence of such information [6] highlight that online resources can influence parents. The quality and accessibility of this type of information is key for parents to be able to find, digest, understand and implement the tools to assist their purchasing behaviours; this ultimately affects the fit, and therefore the appropriateness, of the footwear their children will wear, which has wider implications for promoting foot health in children [7]. Inappropriate footwear choices in childhood could impact on foot development and health [10-13], although the mechanisms for these effects are not clear. Immediate effects of ill-fitting footwear are evident in the gait of infants and children, in that shoes that are too large have been shown to affect spatial and temporal gait parameters [14], leading to greater instability during walking. Furthermore, shoes that are too big have been reported to impact on hip, knee and ankle kinematics during walking [15], and parents commonly believe footwear to be causal in the development of foot complaints [7]. Unlike footwear designed for adults, the footwear design for children needs to consider appropriate dimensions for growth. Foot length will increase by 2 mm per month up to three years of age, and from five to 12 years of age this decreases to 0.8-1 cm per year [13,16]. This requires adjustments to the last and the fit process of children's footwear. It also means that some sources recommend children having their feet measured every 6 weeks to 6 months, dependent on their age. Despite recommendations, a survey of children's foot size in a shoe store identified that 12.5% of the children were wearing shoes that were at least a size too small [17]. In more widespread surveys the number of children reportedly wearing poorly sized shoes is more than half [10,18], and this is even higher in children with disability [19]. With this high prevalence of incorrectly fitting footwear in children, and hence associated foot problems, it is essential to provide reliable and high-quality advice to parents through accessible sources to enable them to make informed footwear purchases.
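As a rough worked example of the growth rates cited above, the sketch below converts them into an expected foot-length change between fittings; the function name and the linear interpolation for ages three to five are illustrative assumptions, not taken from the cited sources.

```python
# Expected foot growth between fittings, using the rates cited above
# (2 mm/month up to age 3; 0.8-1 cm/year from age 5 to 12).

def monthly_growth_mm(age_years: float) -> float:
    """Approximate foot length growth rate in mm per month (assumption)."""
    if age_years < 3:
        return 2.0
    if age_years >= 5:
        return 9.0 / 12.0          # midpoint of 0.8-1 cm/year
    # Ages 3-5 are not specified in the sources; interpolate as an assumption.
    return 2.0 + (age_years - 3) / 2 * (9.0 / 12.0 - 2.0)

# A 2-year-old refitted every 3 months has grown roughly 6 mm.
print(3 * monthly_growth_mm(2.0))
```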
The aim of this research was to undertake an appraisal of web-based resources about measurement and fit of children's footwear (up to 12 years of age), focussing specifically on readability, usability and quality. Methods The methods adopted in this study are presented in Fig. 1. Data collection Search terms relating to children's foot measurement and fitting information were obtained from our existing work, where terms were reviewed and agreed with a panel of parents and clinicians [6]. For this study, search terms were compiled as a child or stage term plus a footwear term and a fit term (Table 1). The original keywords were expanded to include more nouns relating to children of different ages and developmental stages (e.g. Toddler, Infant); similarly, footwear terms for specific milestones were added (e.g. "First shoes", "School shoes"). Terms relating to fit were expanded to provide wider scope relating to footwear. These were agreed by the research team (Table 1). Searching was undertaken using the Google search engine by one researcher (CP) in a single week (beginning 29th April 2019). Cookies were turned off on the web browser prior to searches, and the cache was cleared at the beginning of searching and then after each cycle of child terms (after every 15 searches). The top 10 search results for each search term were output and saved using a search capture plug-in (Session Buddy, SessionBuddy.com, Colorado, USA). Search results returned as adverts were ignored. These searches resulted in a primary set of 9800 resources which were screened for relevance (e.g. webpages not related to footwear or function were excluded) (Fig. 1). Following this, 9156 resources remained, and these were categorised based on the source of the material (for example from a commercial footwear company, a health website or general, such as a newspaper). After categorisation, an appearance score was computed by rating the resources (10-1) for their rank in the search results and summing this number over the total number of appearances. For example, a resource which appeared in three searches, as the second result in two of these and ninth in one, would be scored 20 (the sum of 9, for being second rank, 9, for being second rank, and 2, for being ninth rank). This resulted in scores which were a function of both the number of times the resource appeared using the key words and how high the resource appeared in the Google search. The top 30 resources by appearance score were initially selected for screening and recorded by the researcher as PDF documents representing each resource, alongside the links for the associated webpages. Data refinement These 30 resources were screened for inclusion and exclusion criteria (Table 2) by two researchers (CP and MH). Disagreement in terms of inclusion criteria or resource source was to be decided by a third member of the research team (SM); however, this was not required. This resulted in 15 resources which passed the inclusion criteria. Assessment The professional panel was composed of 4 professionals working within footwear-related roles (SM - PhD paediatric podiatry, CN - PhD biomechanics, AW - PhD footwear, MH - PhD candidate footwear), all currently working on topic-related research projects. These roles included experience within clinical practice, research, footwear industry and academia.
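A minimal sketch of the appearance (accessibility) score described above, reproducing the worked example in which two second-rank appearances and one ninth-rank appearance score 20; function and variable names are illustrative, not from the paper.

```python
# Appearance score: rank 1 in a search earns 10 points, rank 10 earns 1,
# summed over every appearance of the resource.

def accessibility_score(ranks: list[int]) -> int:
    """ranks: the 1-10 position of one resource in each search it appeared in."""
    return sum(11 - r for r in ranks if 1 <= r <= 10)

# Worked example from the text: second, second and ninth -> 9 + 9 + 2 = 20.
print(accessibility_score([2, 2, 9]))   # 20
```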
All four offered a breadth of knowledge of the topic and were considered to be in a suitable position to comment on the resources, which previous literature has told us parents do not feel that they are in a position to do [1]. The professional panel rated the resources within two months of the original searches being completed. The professional panel received the resources in a web format, such that the full usability of the resource was available, as well as a PDF backup in case the website had been withdrawn. They also received criteria for assessment with the scoring guide (Appendix) and associated instructions to help them rate the resources. The criteria for assessment were defined by the research team, but predominantly by CP, who was not on the professional panel (Table 3). At the same time as the professional panel undertook their review of the resources, aspects relating to readability were quantified and recorded by CP (not a member of the professional panel) using the SMOG Index calculated with an online tool (www.readabilityformulas.com/smog-readability-formula.php). The SMOG index is a readability score [21] which estimates the years of education required to be able to understand a piece of written text. This scoring reflects US grade levels within school and therefore provides an approximate age in years of a reader who can fully understand the text [21]. This system is the preferred approach to determining the readability of healthcare material [22] and has broad application across healthcare research [23-25]. To assist in interpretation of this paper, the age relating to the US grades will be referred to, as school grading systems are not consistent. Once all scores were received, these were combined for all professional panel members for each resource and aspect. These were used to compute a median and interquartile range to describe the score for each aspect and resource from the panel. Resources are presented as resources 1-15 based on their overall accessibility score; however, resources were not provided to the professional panel in this manner, to prevent any bias associated with this value. Additional data outcomes included the source of the resources and appearance scores for each resource. Results The source of the screened foot measurement and footwear fitting resources was identified (Fig. 2), and resources were predominantly from commercial footwear companies (54%). A large percentage of these commercial footwear resources were returned in the top three positions of the search results, accounting for 18% of the total resources identified. Commercial websites from mixed stores such as department stores made up 22% of the resources, followed by parent advice websites (10%). Health information sources from professional bodies made up 4.2% of the overall resources. Forums (1%) resulted in a lower number of resources than parent advice sites (10%), and footwear association sites accounted for only 0.3% of the resources identified.
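For reference, the SMOG score used above follows McLaughlin's published formula, grade = 3.1291 + 1.0430·sqrt(polysyllables × 30 / sentences); the sketch below implements it with a crude vowel-group syllable counter, which is an assumption — the online tool cited in the text may count syllables differently, so outputs can differ slightly.

```python
import re

def count_syllables(word: str) -> int:
    # Rough heuristic: count groups of consecutive vowels (assumption).
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def smog_grade(text: str) -> float:
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    polysyllables = sum(1 for w in words if count_syllables(w) >= 3)
    return 3.1291 + 1.0430 * (polysyllables * 30 / len(sentences)) ** 0.5

sample = "Place the foot on the chart. Mark the longest toe. Measure both feet."
print(round(smog_grade(sample), 1))
```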
Within these accessibility scores the most commonly found resource to appear top of the search in google appeared 133 times, with two resources never appearing first in the searches undertaken. The reading ease scores computed using the SMOG index ranged from 4.3 to 8.2. This represents interpretation from age 8-9 years and 'easy to read' to age 13-14 years and 'fairly difficult to read'. Three of the resources required a reading age of 12 years or above (see Table 4 -resources 3,4 and 6). The professional panel rated the readability, usability and quality (validity and content) of the resources with quite wide ranges for the assessment criteria across all of those identified. For validity, the average task and measures scores tended to be relatively consistent for each resource, those scoring higher in one (e.g. resource 1) scored higher in the other and those which scored lower in task validity (e.g. resource 8) reflected this in measure validity too. Across all data, the lowest median score was 1 and the highest was 9. For readability, the median resources scores ranged from 2.7 to 9 out of 10 with a similar range in the other criteria for assessment. Notable resources were 8, 9 and 15 which scored particularly low, resource 8 did not have a median score above 2.7 for any of the assessment criteria. In contrast, resources 1 and 3 scored highly across all criteria with the lowest scores being 7.5 and 8.5 respectively, both for validity measures. However, resource 3 had a reading ease score of 8.2. Despite scoring high for validity, the content would only be accessible by an audience with an older reading age and may mean that this resource was less easy to access. Discussion There was a breadth of information to support foot measurement for footwear fitting online (and published) from various sources. The high number of search results for footwear companies (76%) compared to health care providers (4%) reflected the dominance of information presented to parents. This is consistent with previous literature exploring parents' knowledge, practices and health-related perceptions of children's feet [6]. This work described parents' behaviour as the outcome of longstanding familiarity with brands, including their own experiences as children. Contrary to this, the low return of healthcare sources is concerning as these resources are those which parents perceive to be providing accurate and reliable information [26], although many parents were unsure about how to assess this [1]. Web users typically access resources at the top of their search results [27], and this usage varies depending on the types of device(s) used for the search. Despite accounting for only 4.2% of the results returned from the searches, over a third of healthcare resources ranked within the top three of the results returned in each search. This would suggest that health resources were visible, but we acknowledge that access to these would depend on the terminology entered into the search. Search engine optimisation might be an important consideration for healthcare providers and footwear association(s) to enhance the visibility of impartial and credible resources. The dominance of commercial sources in the search results confirm that the professional panel had a focus on commercial footwear fit for the sites they reviewed; 10/15 were commercial footwear, 3/15 commercial mixed. No health, footwear association or forum sites appeared frequently enough that they were included for the final screening. 
The dominance of commercial sources was also reflected in high accessibility scores. The most returned resource was from a commercial footwear company and had a score of 3990, being the product of the number of times it was identified in the search and the position in which it was returned in the search results. For this resource 133 of these appearances were as the first item in the search, which was the highest by at least threefold. The lowest accessibility score was 63 with zero first position appearances, which was a commercial mixed resource. This demonstrated a difference in terms of how commonly a parent would identify each resource while searching. Again, this demonstrates that the foot measurement information is dominated by a few commercial footwear companies. The accessibility and interpretation of published information is important for parents to comprehend the information they require to inform their footwear habits. The reading ease scores were appropriate for most of the resources in terms of interpretation and understanding. The highest score was equivalent to a reading age of 13-14 years of age, which suggests a higher complexity of the text and extends beyond typical recommendations [24]. It is important that written (health) education materials are accessible, at the lowest reading level that conveys true and Data presented as median (inter-quartile range) of scores out of 10 apart from accessibility which is a total score of the searches undertaken [no. of times as first in searches] and reading ease which is a single SMOG Index valuerelating to US school grades-calculated by CP. Where footwear resources are categorised: CF commercial footwear, CM commercial mixed, GE general and PA parent advice accurate the information [28]. The majority of the resources align well to recommendations that resources with a reading age of more than 12 years should be rewritten to broaden the audience [29,30]. In this study, readability was also assessed by the professional panel with a highest score of 9.0 (2.3) for the 3rd most identified resource (a mixed commercial source). This means all steps were clearly and concisely worded and followed a sequential pattern. The lowest score was 2.7 for both resource 15 and 8, both from commercial sources which means that, despite reading ease being appropriate for parents, the wording may be confusing and unclear, and instructions did not flow /make sense. This could result in confusing messages and inconsistencies for the parents, which may result in a distrust the resources or difficulty in its interpretation. This could potentially lead to problems with inaccurate foot measurements or poor footwear fitting, which could have longer term implications. Foot measurement is a skill and instructions for an untrained parent to be able to undertake such measures accurately enough to select a shoe size must be precise and clear. Resource content including the use of quantitative measures, diagrams or images and a clear layout was the lowest rated aspect across all resources (median 3.7/10) Scores are impacted by website design and use of text, imagery and instructional videos. These relate to usability scores associated with use across multiple platforms and the need to print resources. The latter allow a child to stand and have their foot length/width marked and measured. 
Mobile applications were not included within the current search terms, which may have identified further approaches such as generating a 3D image of the foot from photogrammetry [30]. Whilst the accuracy of these limited measures may be adequate for sizing, whether these measures alone (e.g. just heel-to-toe length and forefoot width) can enable correct footwear size and style selection is unclear [31]. The professional panel was used because parents report being unsure about how to assess the reliability of online health resources [1,6]. Academics, clinicians and a footwear company employee were involved as they were experienced enough to address the validity of the task and measurement being undertaken and determine whether it was appropriate for measuring footwear fit (Table 3). These data encompassed a large range of values; however, the two aspects of the validity being assessed (task and measures) tended to score relatively consistently across each resource. This identified that resources that had a suitable task for assessment (e.g. standing still and weight bearing) then undertook appropriate measures while the child was in this position (e.g. measure of multiple foot aspects such as length, width and girth). Resources which quantified only unidimensional features of the foot, such as length, were scored lower, as were resources which measured the foot in a non-weightbearing position. In addition to this, the translation of these values to a shoe size is integral to the child receiving footwear of the correct size. The interpretation of foot measures and conversion to a shoe size occurs within the footwear company, based on the measurements provided by the parent. This process requires further investigation to understand the association with fit. The consensus within the industry would be that, for appropriate footwear fit, feet should be measured by an experienced shoe fitter who has been appropriately trained. The transition from in-store purchasing to online purchasing will mean that parents will move towards online fitting solutions as opposed to visiting store staff. Identifying a consensus approach for the footwear industry to employ, to improve accuracy and reduce errors that result in ill-fitting footwear, would reduce confusion for parents, as each website would suggest the same task and the same measures to fit footwear. Some limitations to this research include using Google as the sole search engine; however, more than 87% of UK users chose this as their primary search engine, therefore this covers a significant number of searches that are undertaken in the UK [32]. These resources are aimed at parents yet have been reviewed for usability by academic researchers and clinicians. These professional panel members were in a position to comment on the validity; however, further work exploring how parents rank the usability, accessibility and credibility of the information would help to progress the findings from this study. Also, a measure of which resources are being used and implemented by parents would help the translation of the findings from the current research to improve the tools which parents are utilising. Conclusions Parents are increasingly using the internet to search for information about their children's feet and to purchase footwear. Information is available to parents seeking to purchase footwear, but this is largely dominated by resources from commercial footwear companies.
The quality and usability of this information is of a moderate standard, often of low quality, and whilst readability was appropriate, content was inconsistent in terms of value in assisting footwear fit. Improvements are needed to help parents make informed decisions.
Appendix: scoring guide (excerpt).
Usability: Guide is fully functioning as an online tool and does not require any printing. How easy is the website to navigate? Are the buttons clear and is the layout easy to read? Low score: website is unclear to navigate; multiple mouse clicks are required to access all information. Mid score: website is relatively clear to navigate and some secondary pages are easy to find; some secondary information is readily available, but not all. High score: website is very clear and easy to navigate and secondary pages are found easily; information and content of resources is accessible with 1 mouse click.
Validity - task: Is the task or process described appropriate for measuring the feet of children? Low score: the task or process described is not what I consider essential for foot measurement in children. Mid score: some of the task or process described is what I consider essential for foot measurement in children, but not all. High score: the task or process described is all of what I consider essential for foot measurement in children.
Validity - measures: Which aspects of the foot does it quantify? E.g. length only, width measures, instep, whole foot with an app. Low score: few of the key measures I associate as essential for foot measurement are included. Mid score: most of the measures I associate as essential for foot measurement are included. High score: all key measures I associate as essential with foot measurement are included.
2020-01-15T07:33:31.103Z
2020-01-14T00:00:00.000
{ "year": 2020, "sha1": "32654fb0f9b8f5d1d080a5b89481b48f74645f6f", "oa_license": "CCBY", "oa_url": "https://jfootankleres.biomedcentral.com/track/pdf/10.1186/s13047-020-0370-x", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "32654fb0f9b8f5d1d080a5b89481b48f74645f6f", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
260061975
pes2o/s2orc
v3-fos-license
No association between genetic markers and hypertension control in multiple cross-sectional studies We aimed to assess whether genetic markers are associated with hypertension control using two cross-sectional surveys conducted in Lausanne, Switzerland. Management of hypertension was assessed as per ESC guidelines using the 140/90 or the 130/80 mm Hg thresholds. One genetic risk score (GRS) for hypertension (18 SNPs) and 133 individual SNPs related to response to specific antihypertensive drugs were tested. We included 1073 (first) and 1157 (second survey) participants treated for hypertension. The prevalence of controlled participants using the 140/90 threshold was 58.8% and 63.6% in the first and second follow-up, respectively. On multivariable analysis, only older age was consistently and negatively associated with hypertension control. No consistent associations were found between GRS and hypertension control (140/90 threshold) for both surveys: odds ratio (95% confidence interval) for the highest vs. the lowest quartile of the GRS: 1.06 (0.71-1.58), p = 0.788, and 1.11 (0.71-1.72), p = 0.657, in the first and second survey, respectively. Similar findings were obtained using the 130/80 threshold: 1.23 (0.79-1.90), p = 0.360, and 1.09 (0.69-1.73), p = 0.717, in the first and second survey, respectively. No association between individual SNPs and hypertension control was found. We conclude that control of hypertension is poor in Switzerland. No association between GRS or SNPs and hypertension control was found. Hypertension, a major cardiovascular risk factor, is the leading cause of premature morbidity and disability-adjusted life years worldwide, and a primary risk factor for coronary artery disease, stroke, heart failure, chronic kidney disease and dementia 1. Several randomized controlled trials have shown that reduction of blood pressure levels reduces fatal and non-fatal CVD events 2,3. Those findings prompted international societies to issue guidelines for the adequate management of hypertension 4,5. Still, one-fifth to one-half of patients treated for hypertension fail to reach target levels 6-10. In Switzerland, one in five men and one in six women present with hypertension 10, and control of hypertension is far from optimal, as only half of treated patients achieve adequate blood pressure levels 11. Increased age, lower educational level or being male are associated with lower control rates 9. In recent years, several genetic variants related to hypertension 12 have been identified, leading to the constitution of genetic risk scores (GRS) 13 or polygenic risk scores (PRS) 14 associated with the risk of developing the disease. A list of SNPs associated with treatment-resistant hypertension has also been published 15, and some genetic variants have been suggested to interact with specific antihypertensive drugs. For instance, a genetic variant in the catechol-O-methyl transferase (COMT) gene was significantly associated with a lower systolic blood pressure (SBP) level among subjects treated with calcium channel blockers 16, while SNP rs2106809 of the ACE2 gene was associated with response to ACE inhibitors in women 17. Indeed, it has been suggested that genotyping might improve hypertension management 18, but in a previous study we failed to find any association between a GRS made of 362 SNPs and hypertension management 19. Still, whether GRS or specific genetic variants might influence hypertension control has been little studied.
Hence, our study aimed to assess the prevalence of poor control of hypertension and the possible effect of genetic markers on it in the Swiss population. Methods Study population. The CoLaus|PsyCoLaus study (www.colaus-psycolaus.ch) is a prospective cohort study established in 2003, following every 5 years a sample of the inhabitants of the city of Lausanne (Switzerland), aged 35-75 years at baseline 20. At each survey, participants underwent a physical examination, and blood samples were drawn for analyses. As information regarding the type of antihypertensive drug treatment was incomplete in the baseline survey, data from the first (2009-2012) and second (2014-2017) follow-ups were used. Genetic analysis and genetic score. Genome-wide genotyping was performed using the Affymetrix 500K SNP array. Subjects were excluded from the analysis in case of inconsistency between sex and genetic data, a genotype call rate of less than 90%, or inconsistencies of genotyping results in duplicate samples 20. Quality control for SNPs was performed using the following criteria: monomorphic (or with minor allele frequency (MAF) < 1%), call rates less than 90%, deviation from the Hardy-Weinberg equilibrium (HWE) (p < 1 × 10^-6). Phased haplotypes were generated using SHAPEIT2 22. Imputation was performed using minimac3 and the Haplotype Reference Consortium version r1.1. A genetic risk score (GRS) related to treatment-resistant hypertension, consisting of 20 SNPs 15, 18 of which were available in our database (Supplementary Table 1), was selected. The GRS was computed as a weighted sum of the different SNPs, and values range between 0 and 17. Further, 133 individual SNPs related to response to specific antihypertensive drugs were included in the analysis (Supplementary Table 2). Other covariates. Socio-demographic and lifestyle data were collected by questionnaire and included gender, age, educational level (low/middle/high), marital status (alone/couple), personal and family history of CVD, family history of hypertension, smoking (never/former/current) and alcohol consumption (yes/no). Total number of drugs (including or excluding non-prescribed, over-the-counter [OTC] drugs) was considered as a proxy for the number of comorbidities, including hypertension. Body weight and height were measured with participants barefoot and in light indoor clothes. Body weight was measured in kilograms to the nearest 100 g using a Seca® scale (Hamburg, Germany). Height was measured to the nearest 5 mm using a Seca® (Hamburg, Germany) height gauge 20. Body mass index (BMI) was calculated and categorized into normal (< 25 kg/m²), overweight (25 ≤ BMI < 30 kg/m²) and obese (BMI ≥ 30 kg/m²). Inclusion and exclusion criteria. For the genetic analyses, only participants of Caucasian origin were considered eligible. Caucasian origin was defined as having both parents and grandparents born in a restricted list of countries (available from the authors) 20. A detailed description of the genetic background of the CoLaus sample is provided elsewhere 23. Participants were included if they received any type of antihypertensive drug treatment. Participants were excluded if they lacked information regarding BP levels, genetic data or covariates. Statistical analysis. Statistical analyses were conducted using Stata v.16.1 (Stata Corp, College Station, TX, USA) separately for each survey. Results were expressed as number of participants (percentage) for categorical variables and as average (± standard deviation) for continuous variables.
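To illustrate how a weighted GRS of this kind is typically computed from genotype dosages, here is a minimal sketch; the dosages and weights are invented for the example and are not the SNPs or weights of Supplementary Table 1.

```python
import numpy as np

# Weighted GRS as a sum over risk-allele dosages, mirroring the "weighted sum
# of the different SNPs" described above.
dosages = np.array([            # rows: participants; columns: SNPs (0, 1 or 2 risk alleles)
    [0, 1, 2, 1],
    [2, 2, 1, 0],
    [1, 0, 0, 1],
])
weights = np.array([0.8, 1.1, 0.9, 1.2])   # per-SNP effect weights (illustrative)

grs = dosages @ weights                     # one score per participant
quartile = np.searchsorted(np.quantile(grs, [0.25, 0.5, 0.75]), grs) + 1

print(grs, quartile)                        # scores and their quartile (1-4)
```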
Bivariate comparisons between controlled and uncontrolled participants were performed using chi-square for categorical variables and Student's t-test or the Kruskal-Wallis nonparametric test for continuous variables. Multivariable analyses were conducted using logistic regression for categorical variables, and results were expressed as multivariable-adjusted odds ratio (OR) and 95% confidence interval (CI). For multivariable analyses, two models were applied: models 1 and 2 used the GRS for resistant hypertension either as a continuous variable (model 1) or categorized in quartiles (model 2). Adjustment was performed on age (continuous), gender (women/men), education (high/middle/low), marital status (in couple/other), BMI categories (normal/overweight/obese), smoking categories (never/former/current), alcohol consumption (yes/no), hypolipidemic drug treatment (yes/no), antidiabetic drug treatment (yes/no), parental history of hypertension (yes/no), sedentary behavior (yes/no) and number of drugs, including OTC. The associations between individual SNPs and hypertension control were assessed by comparing the distribution of the genotypes between controlled and uncontrolled participants taking specific antihypertensive drugs. Chi-square or Fisher's exact tests were conducted as appropriate. Statistical significance was considered for a two-sided test with p < 0.05. Ethical approval was obtained for the study, and the approval was renewed for the first and the second follow-ups. The study was performed in agreement with the Helsinki declaration and its former amendments, and in accordance with the applicable Swiss legislation. All participants gave their signed informed consent before entering the study. Results Participants. Of the initial 5064 participants of the first follow-up, 2103 (41.5%) reported taking antihypertensive drugs and were considered as eligible. Of those, 955 (45.4%) were excluded due to lack of genetic data, 71 (3.4%) due to missing covariates, and 4 (0.2%) due to missing BP data. Of the initial 4881 participants in the second follow-up, 2330 (47.7%) reported taking antihypertensive drugs and were considered as eligible. Of those, 912 (39.0%) were excluded due to lack of genetic data, 110 (4.7%) due to missing covariates, and 151 (6.5%) due to missing BP data. Overall, 1073 and 1157 participants were included in the analyses from the first and second surveys, respectively. Of the 1073 participants in the first follow-up, 693 (64.6%) also participated in the second follow-up, while 464 participants untreated for hypertension in the first follow-up were included in the second follow-up (40.1% of the sample). The characteristics of the included and excluded participants are summarized in Supplementary Table 3. Included participants were older, of a lower educational level, had a higher BMI, were more frequently former smokers or drinkers, and more frequently treated for dyslipidemia and diabetes. Controlled participants were significantly younger and received a higher number of drugs (including and excluding OTC), while no difference was found for the GRS related to resistant hypertension. Controlled participants were more frequently women or current smokers in the first but not in the second follow-up (Table 1). The results of the multivariable analysis are provided in Table 2 for the first and the second follow-ups.
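As a minimal sketch of the quartile-based model (model 2) described above — on simulated data, since the actual analyses were run in Stata with the full covariate list — the following shows how odds ratios and 95% confidence intervals per GRS quartile are obtained from a logistic regression.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(42)
n = 1000
age = rng.normal(60, 10, n)                # one covariate, for illustration
quart = rng.integers(1, 5, n)              # GRS quartile, 1-4
controlled = rng.binomial(1, 0.6, n)       # simulated outcome, no true GRS effect

# Indicator coding for quartiles 2-4; quartile 1 is the reference category.
X = np.column_stack([age] + [(quart == q).astype(float) for q in (2, 3, 4)])
X = sm.add_constant(X)
fit = sm.Logit(controlled, X).fit(disp=0)

or_ = np.exp(fit.params)                   # odds ratios
ci = np.exp(fit.conf_int())                # 95% confidence intervals
print(np.column_stack([or_, ci]))
```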
In the first follow-up, increasing age, being married and presenting with obesity were associated with a lower likelihood of BP control, while the total number of drugs (including OTC) was positively associated with BP control. No association was found for the GRS for resistant hypertension (Table 2). In the second follow-up, increasing age, being a man and having a lower educational level were associated with a lower likelihood of BP control, while hypolipidemic drug treatment was positively associated with BP control. No association was found with the GRS for resistant hypertension (Table 2).

Table 2. Multivariable analysis of the associations between clinical and genetic factors with blood pressure control, first (2009-2012) and second (2014-2017) follow-ups of the CoLaus|PsyCoLaus study, Lausanne, Switzerland. Control defined as a systolic blood pressure < 140 mmHg and a diastolic blood pressure < 90 mmHg. HT hypertension, OTC over the counter, ttt treatment, "-" not included in the model. Results are expressed as odds ratio (95% confidence interval). Statistical analyses performed using logistic regression.

Using the 130/80 mmHg threshold as the definition of control (SBP/DBP < 130/80 mmHg), the characteristics of the participants are summarized in Supplementary Table 4. The results of the corresponding multivariable analysis are provided in Supplementary Table 5 for the first and the second follow-ups. In the first follow-up, increasing age, increasing BMI and alcohol consumption were associated with a lower likelihood of BP control, while antidiabetic drug treatment and the total number of drugs (including OTC) were positively associated with BP control. No association was found with the GRS related to resistant hypertension (Supplementary Table 5). In the second follow-up, increasing age, increasing BMI and being a man were associated with a lower likelihood of BP control, while the total number of drugs (including OTC) was positively associated with BP control. No association was found with the GRS related to resistant hypertension (Supplementary Table 5).

Association between drug-specific SNPs and blood pressure control. The results of the associations between drug-specific SNPs and BP control according to the presence of the drug are summarized in Supplementary Fig. 1. Overall, very few significant (p < 0.05) associations were found, and only one SNP (rs675388 of KCNJ1) showed consistent associations with diuretic treatment in three of the four analyses performed.

Discussion

Our results suggest that genetic markers are associated neither with hypertension control, nor with response to antihypertensive drugs, in a sample of community-dwelling people.

Characteristics of the participants. Included participants were older, more frequently male, and had a higher prevalence of other cardiovascular risk factors than excluded participants. This was expected, as our study focused on participants with hypertension, and hypertension rates increase with age and are frequently associated with other comorbidities.

Prevalence of controlled hypertension. The prevalence of controlled hypertension was below 60% when using the 140/90 threshold and decreased to less than one third when using the 130/80 threshold. Those values are close to those reported in Germany 24, where 54% of participants treated for hypertension had a BP level below 140/90 mmHg. Those values are also comparable to a Swedish study 25, where 59% of women and 48% of men treated for hypertension were controlled, or to a Greek study, which found a control rate among treated participants of 56% in women and 43% in men 26.
Conversely, our control rates were higher than those reported in France (50%) 27, in a European study (47%) 28, and in a study conducted in the UK and Ireland (38%) 29. Overall, our results suggest that the management of hypertension in Switzerland is comparable to or slightly better than in other European countries. Nevertheless, control rates remain suboptimal, as at least four out of ten patients failed to achieve adequate BP levels.

Factors associated with blood pressure control. Increasing age was negatively associated with hypertension control. Our findings are in agreement with studies conducted in Germany 30 and Iran 31, but not with other German 24 or Swedish 25 studies, where no association was found. Possible explanations include the use of a higher threshold for BP control among the elderly 32,33, or the avoidance by Swiss GPs of deleterious side effects due to low BP levels in elderly people. Still, recent data indicate that BP lowering in elderly people is safe and reduces CVD events 34. Hence, BP lowering should be applied to elderly people to the same extent as to younger people, as stated in the current ESC guidelines 5.

Increased BMI levels were negatively associated with hypertension control using the 130/80 threshold, but less so using the 140/90 threshold. Our findings replicate those of a prospective study conducted in the UK, where hypertension prevalence increased and hypertension control decreased with increasing BMI 35. Hypertension in obese patients is mainly due to increased cardiac output with "inadequately normal" peripheral resistance, caused by dysfunction of the renin-angiotensin-aldosterone system and the cardiac natriuretic peptide system 36. This dysfunctional state could make BP control harder in obese people. Overall, our results indicate that obese people could benefit from stronger lifestyle and antihypertensive treatment than normal-weight people.

The total number of drugs, including OTCs, was positively associated with BP control. The increasing number of drugs could be related to an increased number of antihypertensive drugs 37. Still, no association between the number of antihypertensive drugs and hypertension control was found, a finding in agreement with a German study 30, but not with a Greek study 26, where the number of antihypertensive drugs was negatively associated with hypertension control. Interestingly, the presence of hypolipidemic drug treatment was associated with a better control of hypertension, suggesting that participants with multiple risk factors might be more health-conscious or more closely monitored.

Genetics and hypertension control. No association was found between the GRS and hypertension control in either survey or for either threshold. Our results are in line with a previous paper from our group, where no association between a 362-SNP GRS and BP control was found 19, and with a recent Finnish study, where no clear association between a 793-SNP PRS and BP control was found 38. A likely explanation is that the effect of those GRS is too small to be detected with the current sample size. For instance, a genome-wide association study identified over 500 loci associated with BP traits 39, but no BP score was derived, and together these loci only explained between 3.5% 40 and 13% 41 of the trait variance. Hence, the effect of GRS on BP levels might be too small to be clinically relevant in general practice. Another possibility is that antihypertensive drug treatment was stronger among participants with higher GRS.
Still, no association was found between the number of antihypertensive classes and the GRS (Supplementary Fig. 2). Similarly, no significant association was found between individual SNPs and hypertension control according to the antihypertensive drug used, the most consistent association being found between rs675388 of KCNJ1 and diuretic treatment. Our findings do not replicate those of a previous review 42, but are in line with a Finnish study, where a higher PRS for hypertension tended to be associated with a lower response to diuretic treatment 38 and with hypertension onset 14. Overall, our results are in line with current recommendations 4,5 and do not support the use of GRS or individual SNPs to manage hypertension.

Importance for clinical practice. When managing patients with hypertension, doctors should focus on clinical factors such as age, increased BMI, and possibly gender and polypharmacy. The use of a GRS or individual SNPs to direct treatment is not recommended.

Study limitations. This study has several limitations worth acknowledging. Firstly, the sample size was relatively small, and our study was likely underpowered to detect the minute associations between the GRS and hypertension control. Still, based on our findings, it is unlikely that the effects of the GRS, if any, could be of interest in clinical practice. Secondly, it was not possible to adequately collect the posology of the antihypertensive treatment. Hence, we could not determine if the participants were receiving the maximal dose. Thirdly, included participants presented with more comorbidities than excluded ones, which might have blurred the association between GRS and hypertension control. Hence, it would be important to replicate our study in a larger sample including participants with hypertension but devoid of other comorbidities. Finally, the SNPs used to compute the GRS for resistant hypertension were not independent, as indicated in Supplementary Table 7; hence, the weight of some genes on the GRS was overestimated. Still, restricting the GRS to one single SNP per gene (rs17035646 for CASZ1 and rs77270397 for EEF1DP3, FRY-AS1) led to similar findings, i.e., the lack of association between the short GRS and hypertension control (Supplementary Tables 8 to 11).

Conclusion

Control of hypertension is poor in Switzerland, notably among older adults and possibly among overweight or obese subjects. No association between GRS or individual SNPs and hypertension control could be found.

Data availability

The CoLaus|PsyCoLaus cohort data used in this study cannot be fully shared as they contain potentially sensitive patient information. As discussed with the competent authority, the Research Ethics Committee of the Canton of Vaud, transferring or directly sharing these data would be a violation of the Swiss legislation aiming to protect the personal rights of participants. Non-identifiable, individual-level data are available to interested researchers who meet the criteria for access to confidential data, from the CoLaus Datacenter (CHUV, Lausanne, Switzerland). Instructions for gaining access to the CoLaus data used in this study are available at https://www.colaus-psycolaus.ch/professionals/how-to-collaborate/.
2023-07-23T06:17:14.023Z
2023-07-21T00:00:00.000
{ "year": 2023, "sha1": "75663c08e2c9192df6e09a3a91c1488acfd0c9a2", "oa_license": "CCBY", "oa_url": null, "oa_status": null, "pdf_src": "PubMedCentral", "pdf_hash": "c2116584dc084b1e56fc7d6a88a36b7241c8ea57", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
237796716
pes2o/s2orc
v3-fos-license
What are the Boundaries of Public Engagement in a More Connected World? The proposed paper discusses how the relationships between researchers and the 'field' in the social sciences have been transformed during the last decades. It explores the concept of 'public engagement', its ethical and conceptual boundaries, and the criteria of its 'successfulness' – in relation to academic research, researchers, and the local communities where the research is conducted.

Gavrilova, Modern Languages Open, DOI: 10.3828/mlo.v0i0.352

"What?" is a question that might be read and answered in various ways in the context of academic "public engagement". What are we trying to achieve by engaging with a particular community? What are the methods of this engagement? What are we expecting as a result, and what is the level of our control over the engagement project's legacy? Who determines what is produced? How do we package our findings? What responsibilities do we have towards the people we engage?

In the context of UK academia, the reach and significance of our work's impact is determined by evidence of the changes it brings about. This is what funding bodies expect to learn from our research proposals, and it is one of the essential criteria for measuring their success. When squeezed to conform to funding guidelines, the outcomes of public engagement work can sometimes narrow down to secure clichés: "broadening understanding", "stimulating creativity", "challenging preconceptions", "providing new perspectives." However, in many cases the influence of research on communities or policy-makers is far more complex than these labels suggest and cannot be measured in terms of easily evidenced benefits. Let us be honest: most Slavic Studies scholars do not build roads in settlements in the Global South, nor are we developing new vaccines or medicine. We do something else, and our impact is harder to define. Public engagement work in the humanities and social sciences might not be able to present physical changes within a community; we talk instead about more abstract notions: shifts in values, understanding, attitudes. Quite often our control over a project's impact is limited.

I myself work in places that appeared as a result of Soviet repressions and within the communities that were created by them. With my Gulag Maps project, I develop a spatial understanding of the Stalinist Terror and a language to name these often still nameless locations. I try to challenge the normalization of these places by reinforcing the connection between people's spatial identity and the repressions. But that is often precisely what people are trying to escape from: they exclude these events from the history of their families, from their identity, in order to leave those events in the past. They exclude the repressions not only from their own stories, however, but from local history, and in so doing sever the connections between features of the built and natural environment and their origins in the lives of the exiled and repressed citizens. For example, you are unlikely to encounter anyone happy to trace the history of their village to forced resettlement, or to connect the history of the factory they worked in with the forced labour of gulag inmates.

Figure: The system of the Ussol'lag camp, 1945. Copyright: 'Gulagmaps' project (gulagmaps.org).
In this way, the repressions come to be erased from people's identities and their relationship to the spaces they inhabit, at both local and regional levels.

The idea of a researcher who comes to a community to "change the way people think" raises some serious ethical questions. How can we manage our liberal will to make progressive changes while recognizing the rights of a community to reject our proposals and perspectives? How can we ensure that the changes or the new gaze that we are proposing are desired, appropriate, and create no harm? In the field of contested memories, traumatic pasts, and local histories to which I belong, we often think we know better than locals what they should be commemorating or contesting, but is it really our responsibility to "shift their understanding" and influence their identities, spatial or otherwise? What are the power relations underpinning such projects? There are always ideologies behind naïve good intentions, even in such delicate forms of public engagement as creating a "safe environment" for dialogue between conflicting parties or, as in my case, in proposing an alternative spatial history of a village.

In liberal Western society, a researcher has the power to name things, to define new subject areas ("Soviet Studies", "The Global East", "Donbas Studies", and so on), to provide definitions and boundaries, and to identify new directions for academic and creative attention. But the researcher's control over the result is limited, and we are unable to trace all the outcomes and impacts of our research. We do not write the headlines, and we cannot influence the way our findings will be read and interpreted by the people we initially engaged with, or reproduced in the media, or implemented by policy-makers. Interpretations and misinterpretations of our research can cause harm and sometimes even have negative consequences for the communities in question. The people I work with can be seriously moved, or even upset, when I ask why there is no memorialization of the Stalinist repressions in their village; the reactions are even stronger when I discuss the fact that their homes appeared because there was a camp there first. They want to live in a "normal" place. Should I continue my work if I know that the local community does not feel that it is needed?

These problems are even more acute when research projects are conducted in conflict areas or war zones and attract significant media coverage. The more abstract and delicate our impact plans, the more difficult it is to follow their traces and therefore to predict their outcomes in research proposals. Research projects conducted in heavily politicized environments present further uncertainty. How do you engage people who endorse views that you oppose? How should I respond to the Stalinists I meet among the descendants of the victims of repressions, or when encountering active Vladimir Putin supporters? How does one maintain professional relationships with these people, and what are our ethical responsibilities to them? Do we bear the same responsibilities to different groups with whom we engage (e.g. socially deprived communities vs. oligarch politicians)? When "engaging" with people for research, not just "observing" them or "collecting data", one starts to develop relationships with them, which tend not to be talked about. Ethical questions about impact can seem abstract until the moment people start calling you on your birthday, or want to stay connected on Facebook, or ask you for help.
A "classical" Western ethnographer, anthropologist, or geographer until the mid-twentieth century lived in two parallel worlds-"the field" and "academia", worlds that would have been much further apart from each other than they are today. But now, in part because of technological developments like social media and smartphones, the rules governing our work in the social sciences and humanities have also changed, as have the relationships with the people we work with "in the field". This has in turn blurred the boundary between "the field" and the everyday life of the researcher. The research object has moved closer to us and to our everyday lives, and we have developed different relationships with the people we study. Today we do research differently, and with that comes more responsibilities: we have to be responsible for delivering quality research outputs beyond academia, promoting public engagement, and we must be sure that this work is desired and will do no harm. What is more, we have to be able to defend our findings and the boundaries of our expertise, to be honest and to maintain healthy relations with the people we engage.
2021-08-27T17:20:14.244Z
2021-07-14T00:00:00.000
{ "year": 2021, "sha1": "9727d6d698a0bd6cd8f7e04be2609372e625b918", "oa_license": "CCBY", "oa_url": "http://www.modernlanguagesopen.org/articles/10.3828/mlo.v0i0.352/galley/475/download/", "oa_status": "GOLD", "pdf_src": "Adhoc", "pdf_hash": "177ae609422eaa1f02e887d42d391122999e99bf", "s2fieldsofstudy": [ "Sociology", "Political Science" ], "extfieldsofstudy": [ "Sociology" ] }
269722791
pes2o/s2orc
v3-fos-license
Catalyzing sustainable development: Exploring the interplay between access to clean water, sanitation, renewable energy and electricity services in shaping China's energy, economic growth, and environmental landscape

The Sustainable Development Goals (SDGs) reflect the shift in the global economic conversation toward inclusive growth. Growth can promote inclusivity and the widespread sharing of its advancements by concentrating on four key dimensions: (a) equality of opportunity, (b) shared prosperity, (c) environmental sustainability/climate adaptation, and (d) macroeconomic stability. We used the Kao cointegration test to study how the selected variables are connected over the long run. CO2 emissions and GDP per capita, renewable energy and tourism, improved water and sanitation, and access to power all exhibit positive feedback effects on each other. Based on the FMOLS findings, a 1 % increase in inclusive growth leads to a 0.342 % (model 1) and 0.258 % (model 3) increase in CO2 emissions. An increase of 1 percent in energy consumption per person resulted in a rise of 1.343 % in CO2 emissions in case 1, 0.524 % in case 2, and 0.618 % in case 3. Increasing the tourism sector's proportion of total exports by just one percent will reduce CO2 emissions by 0.221 % (case 1) and 0.234 % (case 3). Based on the CCR findings, a 1 % improvement in inclusive growth leads to a 0.403 % decrease in per capita CO2 emissions (case 2).

Introduction

A person's living conditions and ability to absorb climate stress are particularly shaped by access to basic public services like electricity, water, sanitation, transport, and good institutions [1]. This access varies across countries [2]. In Asian developing countries, about 800 million and 600 million people face difficulty accessing electricity and safe drinking water, respectively [3]. Deepening inequality and climate vulnerability have become important challenges for policymakers. Thus, the concept of "inclusive growth" has been introduced by the Asian Development Bank (ADB) to cope with this situation [4]. Inclusive economic growth can eliminate poverty by providing the means for better living standards, increasing productivity, generating quality jobs, and ensuring equitable access to opportunities for the entire society [5]. The Sustainable Development agenda also recognizes that expanding infrastructure (SDGs 6, 7, and 9) and adapting to climate change (SDGs 13 and 15) are essential for sustained and inclusive economic growth (SDG 8) to reduce poverty (SDG 1). Therefore, sustainable development is possible through inclusive growth [6].
Growth can spur inclusion, that is, broadly share its improvements, by focusing on four main dimensions: (a) equality of opportunity, (b) shared prosperity, (c) environmental sustainability/climate adaptation, and (d) macroeconomic stability [7]. Over the last decades, a frequent claim has been that traditional growth patterns must be transformed to address environmental concerns and climate change. However, the emphasis has been placed on rapid growth that ignores the socio-economic benefits of that growth [8]. On this point, the current attempt explores the nature of the linkages between inclusive growth and development. Thus, the motivation behind the present research is to study the dimensions of inclusive growth. This paradigm promotes broad-based economic growth, which is sustained across different income groups and provides equal and equitable opportunities to the people via productive employment generation (increase in income), access to better infrastructural services, shared prosperity, climate adaptation, price stability, and the like [9,10]. This goes beyond the primary objective of eradicating poverty and inequality [10]. Equality of opportunity to access basic public services is central to inclusion. Infrastructure development is an integral part of a country or region's progress in terms of productivity and growth. It expresses fundamental and universal aspirations driving inclusive growth [11]. By providing services to households and industries, infrastructure plays an important role in the economy and society [12]. The availability of transportation, energy, clean water supplies, sewage management facilities, and other essential amenities significantly impacts the quality of life, particularly for the poor. Investment in physical infrastructure (water, electricity, clean fuels, and sanitation) is needed for development, employment, equity, and security [12]. Infrastructure services aid businesses with production, transportation, and transactions that promote growth, improving income distribution between the poor and rich and reducing unemployment [13][14][15][16]. In addition, infrastructure facilitates the physical mobility of people and products and eliminates productivity constraints. Access to infrastructure leads to a better quality of life by promoting trade and sustainable growth [17][18][19]. Conversely, a lack of basic infrastructural services constrains growth and causes inequalities; therefore, the main task is to develop effective and high-quality infrastructure that may encourage inclusive growth through increased job opportunities and ease of doing business [20][21][22].
Shared prosperity

One of the objectives of inclusive growth is 'shared prosperity.' International trade can significantly increase prosperity and lower poverty in emerging nations [23][24][25]. Trade openness facilitates better resource allocation by enabling countries to benefit from comparative advantages, positively impacting growth. Specialization enhances productivity in several industries by increasing output, resulting in learning by doing [26,27]. Proponents of free trade claim that it is required for long-term economic growth and shared prosperity for all [28]. Trade openness coupled with FDI plays a beneficial role in knowledge spillovers in human capital [29,30]. Moreover, both trade openness and FDI help developing nations by transferring technology (via the import of high-tech items) [31] and increasing firm productivity [32]. In reality, the influx of FDI and trade openness have been viewed as opportunities for job creation that can result in inclusive growth and the development of economies worldwide [33,34].

Environmental sustainability

Inclusive growth is the main target of development policy, but it is also crucial to consider climate resilience in the development agenda [35]. Preparedness against climate change is a key adaptation strategy [36]. Adaptation should focus on increasing resilience across different countries and all sectors [37]. Fostering climate resilience is a central goal of the 2015 Paris Agreement approved at the United Nations Framework Convention on Climate Change [38]. Frequent extreme weather events have occurred due to climate change, negatively impacting food and livelihood security and threatening socio-economic development [39,40].

The global system for producing food and the management of ecosystem-related benefits influence several of the Sustainable Development Goals (SDGs), including eradicating hunger (SDG 2), ensuring universal water and adequate sanitation facilities (SDG 6), taking action on climate change (SDG 13), and preserving life underwater and on land (SDGs 14 & 15). Therefore, the key challenges of feeding a growing population while minimizing environmental damage and safeguarding natural resources for the next generations must be addressed together to achieve these objectives [41]. However, climate change has become a pervasive threat to biodiversity, affecting individual species' interactions with the physical environment and with each other, altering ecosystem structure [42]. At the same time, agriculture contributes significantly to environmental issues, such as the climate crisis, the degradation of ecosystem services, and the pollution of soil, air, and water [43]. Ecological systems, crop cultivation, and climate change are all interconnected global processes, and their links to one another have become more important as environmental challenges mount [44,45]. In light of this background, this study aims to identify, quantify and assess the interactive link between a changing climate, ecosystem services, and agricultural production. Balancing the increasing consumer demand for nutritious meals with environmental threats has become a complex sustainability issue [46].
Fuel prices are rising as a result of limited supply and rising demand for fossil fuels. Fossil fuels are important for economic activities, but they lead to greenhouse gas (GHG) emissions and climate change; additionally, importing fossil fuels requires a sizable quantity of foreign reserves [47]. Oil price volatility is responsible for financial and macroeconomic risks, which have adversely affected world markets [48]. Risks associated with climate change pose a threat to humankind and require substantial resources; mitigating these losses through renewable energy is required [49]. It is beneficial to develop cheap and environment-friendly renewable energy sources, such as biomass, hydropower, solar, and wind [50]. These could address a number of issues, including reliance on fossil fuels, energy shortages, and rising import costs [50].

A fast-growing economy in transition and an increasing population require more energy. Over the past few years, China's economic growth and greenhouse gas emissions have increased together. Examining the dynamic link between environment, growth, and energy has numerous policy consequences connected with these variables [50].

The dominant discourse is headlined by the necessity to shift from traditional growth models in response to climate change; however, rapid growth is currently taking precedence over considerations of socio-economic inclusiveness. It is against this background that this study embarks on a journey to probe the delicate interrelations between inclusive growth and development, focusing on the dimensions of inclusive growth in order to observe paths that go beyond mere economic expansion and reach for all-round socio-economic development [51].

Research gap and contribution

Most research has employed GDP per capita in empirical analyses of the relationship between environment and growth. Nonetheless, the idea of inclusive growth must be discussed in relation to the environment. First, in the context of the growth-energy-environment nexus, this study considers the variable of inclusive growth for the first time, rather than standard growth measurements.
Research questions

The research questions that guide this paper are as follows.

Table 1. Summary of related studies.

[59] Variables: energy consumption, real GDP, tourism and trade, CO2. Method: panel cointegration analysis. Findings: Within the OECD countries, this study revealed a nuanced environmental landscape: energy consumption and tourism were identified as contributors to greenhouse gas emissions, while increased trade held the potential for environmental improvements. Nonetheless, the Environmental Kuznets Curve (EKC) hypothesis faltered, with GDP and GDP² coefficients showing conflicting signs, prompting a call for multifaceted policy approaches centered around bolstering energy efficiency, implementing robust environmental measures for tourism, and incentivizing trade to achieve a harmonious equilibrium between economic expansion and ecological preservation.

[60] Variables: clean technologies, energy, finance, and food. Method: quantitative. Findings: This research showed that Sub-Saharan Africa has a complicated natural landscape, with implications for sustainable development. It found that an inverted U-shaped Environmental Kuznets Curve (EKC) with a tipping point at US$5540 GDP per capita was not supported for CO2 emissions but was supported for PM2.5 emissions, and it supported the "pollution haven hypothesis" for CO2 emissions but not for PM2.5 emissions. Carbon pollution in SSA nations was analyzed from the perspectives of technological advancement, FDI, and food insecurity. Population was found to be the primary driver of CO2 emissions in the following decade, while high income per capita, trade openness, and technology adoption were the most significant variables for PM2.5 emissions, even though the IPAT hypothesis did not hold for either type of emission. To realize a green growth agenda consistent with the UN's sustainable development objectives, it is crucial that SSA member states cooperate to solve these issues.

[61] Variables: greenhouse gases, sustainable sanitation. Method: panel data models. Findings: Important findings about rural Sichuan, China, were uncovered in this research. The average GWC for homes with anaerobic digesters was 54 % lower than that of homes without biogas; GWC decreased by 48 % in biogas dwellings even after accounting for methane loss. The value of a biogas plant was calculated at US$28.30 per ton of CO2-equivalent based only on reduced GHG emissions over a decade, which might offset part of the initial construction expenses. These findings demonstrate the potential importance of biogas facilities in the fight against GHG emissions, as well as the potential for synergy between policies that promote improved stoves, sustainable biomass collection, and energy initiatives that have positive effects on public health.

[62] Variables: CO2 emissions, electricity. Method: quantitative analysis. Findings: This research provided crucial evidence for the validity of the Environmental Kuznets Curve (EKC) hypothesis in the context of Italy's economy and environment by demonstrating that rising prosperity was associated with declining pollution levels over time. In addition, renewable electricity generation per person proved effective in lowering CO2 emissions per person in both the short and long run, while international commerce was beneficial in the long term. The research also showed a Granger relationship between the two types of electricity generation, non-renewable and renewable. These findings highlight the importance of renewable energy generation in the long-term effort to lessen environmental damage, and legislators should give this serious consideration.

H2. An increase in tourist arrivals leads to a decrease in CO2 emissions.

H3. An improvement in access to water and sanitation leads to a decrease in CO2 emissions.

H4. An increase in inclusive growth leads to an improvement in the environment.

Literature review

Scholars have extensively studied the link between energy, the environment, and the economy [52]. Some of these studies have expanded their focus to include additional factors, such as the use of renewable energy and tourism, to examine how their effects on this connection are linked [53]. For example, scholars have looked at the role of renewable energy sources and the effects of tourism at the same time. Table 1 shows how CO2 emissions, economic expansion, energy use, the integration of renewable energy into the energy mix, trade, tourism, social infrastructure, financial development, and a few other factors are connected [54]. The table delineates the environmental impacts of tourism, good and bad, whether through waste or through renewable energy, as well as the roles of sanitation and trade [55]. On the positive side, tourism promotes recycling, reforestation, and gentle touring, and encourages involvement in environmental activities. The literature has reported contrasting cases, such as an increase in pollution from tourism in Malaysia versus a decrease in Tunisia and Turkey. Moreover, renewable energy can diminish the environmental impact not only of tourism but also of economic development more broadly, involving governments, businesses, and manufacturing firms. Authors have highlighted that enhanced and easy access to clean water and sanitary amenities can bring about healthier lives for people and improve their surroundings [56]. Researchers have observed that in the most sustainable countries there is a proper equilibrium, with growth and energy consumption kept in balance over the long term. Hence, renewable energy, access to clean water and sewage services, and sustainable tourism should be regarded as key components of ventures that pursue environmental sustainability and economic development together [57].
The energy-growth-environment nexus is, in essence, a complicated and mutually reinforcing trilateral relationship that links energy production and consumption, industrial productivity, and environmental health [63]. Unraveling this connection is a central part of modern development, especially in the case of China, and progress cannot occur without it. This section sets out the general framework used here and the major relational aspects between energy, growth, and environment. Economic growth brings more production, income, and job opportunities [64]. Energy plays a vital role in the production of commodities, the building of infrastructure, and the creation of good living standards. Environmental sustainability relates to the wise use of natural resources, the minimization of pollution, and the reduction of environmental impacts. Energy generation and consumption can harm nature: air and water pollution, the release of greenhouse gases, and habitat loss are examples of the disadvantages of energy production and consumption. The supply of clean water, sanitation facilities, and electricity, being essential for both personal health and economic productivity, should be given priority [65]. The interrelation between government regulations, energy supply, and economic activity varies, together with the goal of protecting the environment; efficient legislation can promote green energy while striking a balance between sustainable economic growth and the environment [66]. Government policies and regulations shape the energy landscape, economic activities, and environmental protection measures. Effective policies can promote sustainable energy use, economic growth, and environmental conservation. Recognizing the dynamic nature of the nexus, feedback loops represent the consequences of actions taken within the system. For example, increased energy consumption may lead to environmental degradation, which, in turn, necessitates environmental policies that influence energy production [67].
Within the energy-growth-environment nexus, several key interactions and linkages shape the outcomes. A robust supply of affordable and reliable energy is essential for economic growth: industrial processes, transportation, and modern conveniences all rely on energy. With economic expansion, demand for power is usually higher, which results in higher generation and consumption and, consequently, a greater negative impact on the environment. The use of fossil fuels as an energy source is a chief factor in environmental degradation; GHG emissions, air pollutants, and the depletion of finite natural resources are important environmental obstacles. On the other hand, moving to power derived from the sun and wind contributes to the reduction of these impacts. Increased access to clean water, plumbing, and electricity upgrades living conditions and forms a foundation of economic success [68]. Basic services play a huge role in making human lives easier by decreasing risks and sustaining productive life, while also enabling people to join the formal economy. As an element of economic empowerment, electricity provision supports many aspects of essential service supply: it powers water pumping and sanitation, aids healthcare services, and supports educational institutions. Conversely, an insufficiency of energy can disrupt the supply and delivery of the most basic services such as lighting, heating, health, and communications. The formulation of government policy is one of the crucial elements that determines energy sector trends and influences environmental results. Through the adoption of rules that sustain renewable energy uptake, regulate emissions, and support the creation of a green economy, environmental sustainability and economic growth are positively affected [69]. Clarifying the contribution of such links and associations to the overall formulation of strategies and policies on economic development, and to environmentally sound methodologies for the equitable provision of social amenities, is also of paramount importance [70].

Water shortage is one of the most destructive elements for agricultural production and a major factor in food security issues. Water-using industries, for instance manufacturing and textile production, may sometimes face production disruptions, affecting a country's economy. The water-energy connection is two-way, as the use of one affects the other. A large amount of water is drawn for cooling in power plants and for fuel extraction and processing, among other energy production activities. Whereas, on the one hand,
the supply and treatment of water require energy, on the other hand water availability is a paramount constraint on energy production [71]. The nexus therefore shows the need for comprehensive planning for the mutual security not only of the water but also of the energy sector. Water scarcity can become a great challenge, especially in places where the resources in use are already limited; in China, this inseparable aspect emphasizes the sustainable management of resources going forward. Water is a pivotal ingredient for a number of economic activities, ranging from agriculture to manufacturing to the services industry. Sufficient water serves economic growth mechanisms by raising agricultural production, sustaining industries, and enabling the creation of urban settlements. China, whose agricultural activities depend heavily on irrigation, is a main player in the water issue [72]. Overall, the provision of water for irrigation is the main source of crop growth and food security. Growing cities require increased storage, transportation, treatment and distribution of water, and a lack of access to clean water and sanitation in urban areas can be detrimental to overall economic growth and living standards. Human activities are responsible for the unsustainability of water ecosystems and the environment: pollution, overfishing and the destruction of marine ecosystems can endanger biodiversity and even the viability of particular aquatic species [73]. In addition, water-related hazards, for instance flooding and drought, can have significant environmental impacts. Healthy water ecosystems support biodiversity and many ecosystem services, e.g., clean water supply and flood regulation. The impacts of climate change, represented by altered precipitation patterns and ever more frequent heat waves, can turn water-related environmental challenges for the worse. Understanding the water-energy-growth-environment link is a key pillar of China's sustainable development. The availability of good-quality water, the careful and efficient use of water, and the safeguarding of water sources are among the priorities [74] for dealing with water scarcity, economic advancement, and environmental security issues. China's policymakers must take these complicated interdependencies into account as they work on their strategies for natural resource management, energy production, economic growth, and environmental preservation [75].

This study extends the literature (Table 1) as the first study evaluating the possible long-run dynamics of socio-economic development on environmental degradation in China. SDG Goal 6, "Ensuring availability of water and sanitation for all", has seen much improvement: up to 2015, over 90 and 66 percent of the world's population had access to improved drinking water sources and sanitation facilities, respectively. At the same time, under SDG Goal 7, "Ensure access to affordable, reliable, sustainable and modern energy for all", access to electricity increased from 78 percent to 87 percent globally between 2000 and 2016, but 43 percent of the world's population still uses polluting sources for cooking, with the highest shares in Asian and African countries.
The data. From 1990 to 2021, this study used annual data on China from the World Development Indicators [76]. The variables chosen for the analytical framework were per capita CO2 emissions, per capita energy consumption, per capita GDP, the share of renewable energy use [77], the share of tourism in exports, the level of improved access to water, the level of improved access to sanitation, and the level of access to electricity [78].

Descriptive analysis. Table 2 gives a full picture of the summary statistics for the variables examined in this study, with a focus on China [79]. The table reports the mean, median, minimum, maximum, kurtosis, skewness, and the Jarque-Bera normality test, all of which are important statistical measures. The average amount of CO2 emitted per person was 0.91 metric tons, with a reported minimum of 0.68 metric tons [80] and a maximum of 0.72 metric tons. For GDP per person, the mean was 722.12, with a reported minimum of 741.80 and maximum of 1041.31. On average, each person used 475.35 kg of oil equivalent worth of energy. The data also showed that renewable energy made up an average of 48.72 % of all the energy used. The study further examined access to improved water, improved sanitation, and electricity: on average, 78.95 % of people had access to improved water sources, 78.16 % had access to electricity, and 42.38 % had access to improved sanitation [81]. Notably, the share of people with access to improved sanitation stayed below 50 % throughout the whole study period. The variables in Table 2 were also tested for normality using kurtosis, skewness, and the Jarque-Bera test; all variables followed a normal distribution, which facilitates the subsequent analysis [82,83].

Econometric methodology. First, it was important to establish whether the variables move together over time and whether a stable relationship exists between them. We therefore began by checking whether the data were stationary, using unit root tests.

Stationarity analysis. Four approaches were used to check for stationarity, both to obtain more robust estimates and because of the statistical limitations associated with each individual test. This study used the test of Ref. [85], along with tests such as the ADF and PP methods.
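As a sketch of the stationarity check, the snippet below runs an ADF test on a series in levels and in first differences using statsmodels; the series itself is simulated, and lag selection via the Bayesian (Schwarz) information criterion mirrors the automatic lag choice described in the results below.

```python
import numpy as np
from statsmodels.tsa.stattools import adfuller

rng = np.random.default_rng(1)
co2 = np.cumsum(rng.normal(0.02, 0.1, 32))  # simulated I(1) series, 32 years

for name, series in [("level", co2), ("first difference", np.diff(co2))]:
    # H0: the series has a unit root; a small p-value implies stationarity.
    stat, pvalue, *_ = adfuller(series, regression="c", autolag="BIC")
    print(f"{name}: ADF stat = {stat:.3f}, p = {pvalue:.3f}")
```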
Cointegration analysis. We used the Kao cointegration test to study whether the variables are connected over the long run. Two I(1) variables are cointegrated when a linear combination of them is stationary even though each series is non-stationary on its own; when cointegration is present, causality runs in at least one direction between the variables [86]. To check whether cointegration holds within the error correction model [87], a test based on ADF statistics was suggested by Aïssa and others in 2014; [88] used the same basic ideas in their test. Kao (1999) assumes homogeneous slope coefficients and individual-specific intercepts in the first-step regression [89], and applies DF and ADF statistics under the null hypothesis of no cointegration. For a two-variable model, the Kao setup is given by Eqs. (7)-(9):

$$y_{it} = \alpha_i + \beta x_{it} + e_{it} \quad (7)$$
$$y_{it} = y_{i,t-1} + u_{it} \quad (8)$$
$$x_{it} = x_{i,t-1} + \varepsilon_{it} \quad (9)$$

where $\alpha_i$ is an intercept that may vary across units but is constant over time, and $\beta$ is the common slope. Under Eqs. (8) and (9), $y_{it}$ and $x_{it}$ are random walks for each unit $i$. The residual-based DF and ADF regressions are described in Eqs. (10) and (11), respectively:

$$\hat{e}_{it} = \rho\, \hat{e}_{i,t-1} + v_{it} \quad (10)$$
$$\hat{e}_{it} = \tilde{\rho}\, \hat{e}_{i,t-1} + \sum_{j=1}^{p} \varphi_j\, \Delta \hat{e}_{i,t-j} + v_{it} \quad (11)$$

Under the null hypothesis of no cointegration, the estimated ADF statistic is

$$ADF = \frac{t_{\tilde{\rho}} + \sqrt{6N}\,\hat{\sigma}_v / (2\hat{\sigma}_{0v})}{\sqrt{\hat{\sigma}_{0v}^2 / (2\hat{\sigma}_v^2) + 3\hat{\sigma}_v^2 / (10\hat{\sigma}_{0v}^2)}}$$

The variance $\hat{\sigma}_v^2 = \hat{\Sigma}_u - \hat{\Sigma}_{u\varepsilon}\hat{\Sigma}_\varepsilon^{-1}\hat{\Sigma}_{u\varepsilon}$ and the long-run variance $\hat{\sigma}_{0v}^2 = \hat{\Omega}_u - \hat{\Omega}_{u\varepsilon}\hat{\Omega}_\varepsilon^{-1}\hat{\Omega}_{u\varepsilon}$ are estimated in the form of Eq. (12). Estimation of the long-run covariance is done through the kernel estimator in the form of Eq. (13):

$$\hat{\Omega} = \frac{1}{N}\sum_{i=1}^{N}\left[\frac{1}{T}\sum_{t=1}^{T}\hat{w}_{it}\hat{w}_{it}' + \frac{1}{T}\sum_{\tau=1}^{T-1}\kappa\!\left(\frac{\tau}{b}\right)\sum_{t=1}^{T-\tau}\left(\hat{w}_{it}\hat{w}_{i,t+\tau}' + \hat{w}_{i,t+\tau}\hat{w}_{it}'\right)\right] \quad (13)$$

where $\kappa$ denotes the kernel function and $b$ the bandwidth.

Regression analysis. Kao and Chiang (2001), along with [90], suggest panel Dynamic Ordinary Least Squares (DOLS) as a potential method, because the ordinary least squares estimator of long-run regressions on cointegrated data suffers from poor efficiency and inconsistency. Fully Modified Ordinary Least Squares (FMOLS), a non-parametric method for working out long-run elasticities, was also suggested; this plays an important part in how certain variables react to changes over time. FMOLS requires the variables to be integrated of the same order and corrects the OLS estimator for serial correlation and endogeneity. DOLS, a panel version of single-equation time-series regressions created by Ref. [91], is represented in Eq. (14):

$$y_{it} = \alpha_i + x_{it}'\beta + \sum_{j=-q}^{q} c_{ij}\,\Delta x_{i,t+j} + \mu_{it} \quad (14)$$

where the error term $\mu_{it}$ must follow an I(0) process. The DOLS coefficient is estimated as in Eq. (15):

$$\hat{\beta}_{DOLS} = \sum_{i=1}^{N}\left(\sum_{t=1}^{T} z_{it} z_{it}'\right)^{-1}\left(\sum_{t=1}^{T} z_{it}\,\tilde{y}_{it}\right) \quad (15)$$

where $z_{it} = \left[x_{it}-\bar{x}_i,\ \Delta x_{i,t-q},\ldots,\Delta x_{i,t+q}\right]$ is a $2(q+1) \times 1$ vector of regressors. DOLS thus uses leads and lags of the differenced regressors, which corrects for endogeneity and serial correlation and deals with biases caused by small sample sizes.

Unit root test results. Three alternative specifications were used: level (intercept only), level (intercept and trend), and first difference (intercept only). The empirical results of the three tests are displayed in Table 3. In levels, the tests conclude at both the 1 % and 5 % significance levels that a unit root exists in all of the chosen variables, rejecting the stationarity hypothesis. However, stationarity in all variables was confirmed by all three tests when expressed in first-difference form, indicating that the correct order of integration is one, i.e., I(1). In these evaluations, the lag length was specified automatically based on the Schwarz criterion.

Cointegration test results. The Johansen cointegration test results are presented in Table 4. Both long-term elasticity and Granger causality rely on the presence of cointegration. The test provides two indicators: the trace statistic and the maximum eigenvalue statistic. For cointegration to exist between the chosen variables, the null hypothesis of no cointegration must be rejected. In the current study, both the maximum eigenvalue test and the trace test indicate two cointegrating relationships for China. Given the presence of cointegration, Granger causality can be investigated; if cointegration were absent, a VAR causality model would be the most suitable.
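To make the residual-based logic and the DOLS correction described above concrete, the sketch below runs an Engle-Granger-style cointegration test (the same ADF-on-residuals idea that underlies Kao's statistic, although statsmodels does not ship a panel Kao test) and a single-series DOLS regression with one lead and lag of the differenced regressor; all data are simulated, so only the mechanics, not the study's estimates, are reproduced.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.tsa.stattools import coint

rng = np.random.default_rng(2)
T = 200
x = np.cumsum(rng.normal(size=T))             # I(1) regressor
y = 0.8 * x + rng.normal(scale=0.5, size=T)   # cointegrated with x

# Engle-Granger test: ADF on the residuals of y ~ x (H0: no cointegration).
stat, pvalue, _ = coint(y, x)
print(f"EG stat = {stat:.3f}, p = {pvalue:.3f}")

# DOLS: augment the levels regression with leads/lags of the differenced
# regressor to absorb endogeneity and serial correlation (q = 1 here).
df = pd.DataFrame({"y": y, "x": x})
df["dx"] = df["x"].diff()
df["dx_lead"] = df["dx"].shift(-1)
df["dx_lag"] = df["dx"].shift(1)
df = df.dropna()

X = sm.add_constant(df[["x", "dx_lead", "dx", "dx_lag"]])
fit = sm.OLS(df["y"], X).fit()
print(f"long-run coefficient on x: {fit.params['x']:.3f}")
```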
Granger causality analysis. Table 5 reports the VECM results for the causality between carbon dioxide emissions, energy use, GDP, renewable energy, the percentage of exports attributable to tourism, and improved access to electricity, sanitary facilities, and water. The Granger causality analysis examines whether a variable x causes a variable y: x is said to Granger-cause y if the present value of y can be predicted using past values of x. A negative and significant ECM coefficient is required to establish long-run causality running from all other variables to a specific variable. In Table 5, the values in parentheses are p-values; a, b and c denote significance at the 1 %, 5 % and 10 % levels, respectively. The Granger causality analysis treated the availability of improved water, sanitation, and power as separate factors.

Long-term carbon dioxide emissions in China were found to be causally related to the availability of improved water, sanitation, and electricity, as indicated by a negative and statistically significant ECM coefficient. The selected variables were found to have a causal effect on long-term CO2 emissions. The final case examined the long-run drivers of switching to renewable energy sources.

The feedback hypothesis linking CO2 emissions to GDP per capita (cases 1 and 3) was supported by the short-run causality results, as were the hypotheses linking tourism to the consumption of renewable energy (all cases) and to increased availability of potable water (case 1), sanitary facilities (case 2), and electricity (case 3). The feedback hypothesis implies that these elements influence one another through a feedback loop. In the short run, using more energy leads to higher CO2 levels (in all cases), and tourism boosts GDP per person (in all cases as well). Meanwhile, renewable energy is linked with lower CO2 emissions in one case, while more travel results in increased carbon dioxide production. Access to electricity was likewise linked with emissions in the three cases, respectively.
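A minimal pairwise check of short-run causality can be done with statsmodels, as below; the two simulated series stand in for, say, energy use and CO2 emissions, and a full replication would instead estimate a VECM and test the significance of the lagged ECM term.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.stattools import grangercausalitytests

rng = np.random.default_rng(3)
T = 200
energy = rng.normal(size=T)
co2 = np.empty(T)
co2[0] = 0.0
for t in range(1, T):
    # CO2 responds to lagged energy use, so energy should
    # Granger-cause CO2 in this toy setup.
    co2[t] = 0.3 * co2[t - 1] + 0.6 * energy[t - 1] + rng.normal(scale=0.2)

data = pd.DataFrame({"co2": co2, "energy": energy})
# Tests H0: the second column does NOT Granger-cause the first column.
results = grangercausalitytests(data[["co2", "energy"]], maxlag=2)
```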
Granger causal relationship analysis supports the inclusion of these factors in the analysis. Access to clean water, modern sewage systems, and reliable electricity were all shown to be critically important within the energy-growth-environment framework. Improved water accessibility boosted economic productivity, tourism, renewable energy, and energy consumption, highlighting the urgency of upgrading the country's water infrastructure. An increase in sanitary facilities has a similar multiplicative impact on tourism, renewable energy, energy use, and GDP. Access to electricity contributed to increases in CO2 emissions, GDP, energy use, renewable energy, and tourism. In the context of the environment-growth nexus, the findings of the causality analysis highlighted the significance of renewable energy, tourism, and improved access to power, water, and sanitation.

The long-run elasticity of CEt. The long-run elasticity coefficients of CO2 emissions with respect to inclusive growth, energy use, renewable energy, the tourism share of exports, and access to improved sanitation, water resources, and power are explored in Table 6. Two cointegration regression methods, FMOLS and CCR, were used to estimate the regression coefficients for more accurate estimates. Three separate specifications were estimated, with improved water, sanitation, and electricity availability serving in turn as independent variables.

Based on the FMOLS findings, a 1 % increase in inclusive growth leads to a 0.342 % (model 1) and 0.258 % (model 3) increase in CO2 emissions per person. Economic development pushes carbon dioxide emissions upward: the more is spent, the more emissions arise, and the bigger the economy, the higher the consumption of energy [92]. An increase of 1 percent in energy consumption per person resulted in a rise of 1.343 % in CO2 emissions in case 1, 0.524 % in case 2, and 0.618 % in case 3. CO2 emissions, which are mainly attributed to the development of urban areas, have a positive long-run relationship with urbanization, with reported estimates of 0.865833 % and 0.681991 %, and of 0.571683 % and 0.883922 %, respectively; increases in urbanization and CO2 emissions are also accompanied by rising energy consumption [93]. Increasing the tourism sector's proportion of total exports by just one percent will reduce CO2 emissions by 0.221 % (case 1) and 0.234 % (case 3). Certainly, tourism and travelling are among the sources of greenhouse gases such as CO2 and CH4; however, NOx, HFCs, PFCs, and SF6 are also large contributors to greenhouse gas emissions [94]. For every 1 % rise in the utilization of renewable energy, CO2 emissions were reduced by 1.117 % (case 2) and 1.093 % (case 3). If only one percent more people had access to clean water and reliable energy, CO2 emissions could be reduced by 1.553 % and 0.233 %, respectively.
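Since the models are estimated in logarithms, the reported coefficients read directly as elasticities; the toy calculation below shows how a coefficient of 0.342 maps a 1 % rise in the regressor into roughly a 0.342 % rise in CO2 emissions, with the exact multiplicative effect implied by the log-log form alongside the linear approximation.

```python
beta = 0.342          # long-run elasticity from a log-log model (FMOLS, model 1)
pct_change_x = 1.0    # a 1 % increase in the regressor (inclusive growth)

# In a log-log specification, %change in y ~= beta * %change in x (small changes).
approx = beta * pct_change_x

# Exact multiplicative effect implied by the log-log form.
exact = ((1 + pct_change_x / 100) ** beta - 1) * 100

print(f"approximate: {approx:.3f} %  exact: {exact:.3f} %")
```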
Based on the CCR findings, a 1 % improvement in inclusive growth leads to a 0.403 % decrease in per capita CO2 emissions (case 2) and a 0.123 % decrease (case 3). A 1 % increase in energy consumption per person raised CO2 emissions per person by 1.556 % in case 1, 1.252 % in case 2, and 1.321 % in case 3. A 1 % rise in the tourism sector's proportion of exports produced drops in CO2 emissions per person of 0.201 % in case 1, 0.222 % in case 2, and 0.271 % in case 3. In cases 2 and 3, a 1 % increase in the use of renewable energy led to a 0.810 % and 0.734 % reduction in CO2 emissions per person. A 1 % rise in the population's access to improved sanitation, water, and power resulted in a 2.262 %, 0.234 %, and 0.254 % drop in CO2 per capita, respectively.

The regression results make it abundantly evident that the rise in CO2 emissions per capita owing to rising energy use and GDP can be readily counteracted by growth in renewable energy consumption, tourism, and population access to improved water, sanitation, and power. Thus, the government of China should boost the percentage of renewable energy in the country's overall energy mix. The government should also take action to boost the tourism industry's contribution to total exports: an uptick in visitors means more income from out of country and more work for those already employed in the sector. The government's responsibility in attracting international tourists is twofold: maintaining popular destinations and finding new ones.

Conclusion and policy implications

Extreme weather events, such as floods, together with pollution, are among the calamities that China has experienced because of environmental deterioration. China is a developing nation that has recognized the importance of energy to its development goal of raising GDP per capita. It is also crucial to boost the tourism industry's contribution to total exports. Citizens have a right to expect that their
fundamental needs, such as access to clean water, adequate sanitation, and reliable energy, will be met. The current study therefore synthesizes these topics and describes the interplay between carbon dioxide emissions, inclusive growth, per capita energy use, renewable energy sources, tourism, and the availability of improved water, sanitation, and electricity infrastructure in China between 1990 and 2021. Three unit root tests were used to establish the proper order of integration before any empirical analysis was performed. The Johansen cointegration test demonstrated long-run cointegration, and the vector error correction model confirmed the long-run relationship between CO2 emissions and the explanatory variables. The short-run causality results supported a feedback hypothesis between carbon dioxide emissions and GDP per capita, tourism and renewable energy, tourism and improved water and sanitation, and tourism and access to electricity; according to the feedback hypothesis, causality between these factors runs in both directions. Economic output, energy consumption, renewable energy, and tourism all benefited from easier access to improved water, underlining the need for better water infrastructure throughout the country. In a similar vein, GDP, energy consumption, renewable energy, and tourism were all influenced by the availability of improved sanitation. Electricity availability also had a significant causal effect on carbon dioxide emissions, GDP, energy consumption, renewable energy consumption, and travel and tourism. The causality results thus highlighted renewable energy, tourism, and access to improved water, sanitation, and electricity as having a significant impact on the environment-growth nexus. Increasing the tourism sector's share of total exports by one percent reduces CO2 emissions by 0.221 % (Model 1) and 0.234 % (Model 3). For every 1 % increase in the utilization of renewable energy, CO2 emissions fell by 1.117 % (Model 2) and 1.093 % (Model 3). If one percent more of the population had access to clean water and reliable electricity, CO2 emissions could be reduced by 1.553 % and 0.233 %, respectively. According to the long-run elasticity coefficients, the rise in CO2 emissions per person owing to rising energy use and GDP is readily offset by rising use of renewable energy, growth in tourism, and wider access to improved water, sanitation, and electricity. The government of China must take action to boost tourism and the use of renewable energy sources. Its responsibility in attracting international tourists is twofold: maintaining popular destinations and uncovering new ones that will entice the world's travelers. To boost tourism in the north, the government should work to strengthen the region's security. The government should also take the necessary steps to give every citizen access to modern utilities such as running water, flush toilets, and electricity. Improving people's standard of living is a top priority for governments everywhere, and a person's standard of living is largely determined by access to basic infrastructure such as running water, proper sanitation, and power. Water, sanitation, and energy availability are all indicators of economic health. Therefore, the government should take the required steps to provide modern water, sanitation, and energy services to the entire population.
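The empirical sequence summarized above (unit root tests, Johansen cointegration, and causality testing ahead of a VECM) can be sketched in a few lines with statsmodels. The data below are synthetic and the variable names hypothetical; this is an illustration of the workflow, not the paper's estimation code.

```python
# Sketch of the pre-estimation diagnostics described above: unit root tests,
# the Johansen cointegration test, and pairwise Granger causality.
import numpy as np
import pandas as pd
from statsmodels.tsa.stattools import adfuller, grangercausalitytests
from statsmodels.tsa.vector_ar.vecm import coint_johansen

rng = np.random.default_rng(1)
n = 32
trend = np.cumsum(rng.normal(0.02, 0.02, n))   # shared stochastic trend
df = pd.DataFrame({
    "ln_co2":    trend + rng.normal(0, 0.02, n),
    "ln_gdp":    trend + rng.normal(0, 0.02, n),
    "ln_energy": trend + rng.normal(0, 0.02, n),
})

# 1) Order of integration: ADF on levels versus first differences
for col in df:
    p_level = adfuller(df[col])[1]
    p_diff = adfuller(df[col].diff().dropna())[1]
    print(f"{col}: ADF p-value level={p_level:.2f}, first difference={p_diff:.2f}")

# 2) Johansen trace test (constant term, one lagged difference)
jo = coint_johansen(df, det_order=0, k_ar_diff=1)
print("trace statistics:", jo.lr1.round(2))   # compare against jo.cvt critical values

# 3) Does ln_gdp Granger-cause ln_co2? (second column tested as cause of the first)
res = grangercausalitytests(df[["ln_co2", "ln_gdp"]], maxlag=2)
print("lag-1 Granger F-test p-value:", round(res[1][0]["ssr_ftest"][1], 3))
```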
Caveats and limitations

Due to time and financial constraints, this study has some limitations. First, data limitations: the paper uses data from 1990, when the Cold War ended, to 2021, a period over which there may have been considerable changes in policies, technology, and the world economy that the data cannot fully capture. Using more current data would give a clearer picture of the present situation and shed light on the possible future of the issue. Second, this study used CO2 emissions as the dependent variable; future studies could use the ecological footprint as the indicator of environmental quality. Third, future studies could extend the analysis to cross-country comparisons and panel data settings for the variables under consideration.

Availability of data and material

Data will be available on request.

Funding source

Name 1: Guilin Tourism University 2023 Professional Core Course Construction Project "Materials and Construction", Number: 2023zyhx011. Name 2: Guilin Tourism University 2023 Specialized Innovation Integration Demonstration Course Construction Project "Homestay Design", Number: 2023ZCRH004. Name 3: Guilin Tourism University's 2023 Education and Teaching Reform Research Project "Innovative Research and Practice of the Training Model for Applied Talents in the Environmental Design Major of School-Enterprise Joint Training and Collaborative Education", Number: 2023XJJG023. Name 4: Guilin Tourism College 2022 Specialized Innovation Integration Demonstration Course Construction Project "Furniture and Soft Decoration Design", Number: 2022ZCRH005.

Table 3. Unit root test results. Table 4. Results of the Johansen cointegration test ("a" denotes rejection of the hypothesis at the 0.05 level). Table 5. VECM Granger F-test results. Table 6. Long-run elasticity of CE_t.
Changes in functional outcome over five years after stroke

Abstract. Objectives: Data on the long-term time course of poststroke functional outcome are limited. We investigated changes in functional outcome over 5 years after stroke in a hospital-based cohort. Materials and Methods: Consecutive patients who were independent in activities of daily living (ADL) and admitted to a Stroke Unit at Skaraborg Hospital, Sweden for a first acute stroke from 2007 to 2009 (n = 1,421) were followed up after 3 months and thereafter annually over 5 years using a postal questionnaire. Clinical variables at acute stroke and 3 months post stroke were obtained from the Swedish Stroke Register. ADL dependency was defined as dependence in dressing, toileting or indoor mobility. Results: The proportion of survivors who reported ADL dependency remained stable throughout follow-up (19%–22%). However, among survivors who were ADL independent at 3 months, about 3% deteriorated to dependency each year. Deterioration was predicted by age (HR 1.11; 95% CI 1.08–1.13), diabetes (HR 1.65; 95% CI 1.12–2.44), NIHSS score (HR 1.07; 95% CI 1.04–1.10), and self-perceived unmet care needs one year post stroke (HR 2.01; 95% CI 1.44–2.81). Transitions from ADL dependency to independence occurred mainly during the first year post stroke. Improvement was negatively predicted by living alone before stroke (HR 0.41; 95% CI 0.19–0.91), NIHSS score (HR 0.90; 95% CI 0.86–0.95) and ischemic stroke (vs. hemorrhagic stroke; HR 0.39; 95% CI 0.17–0.89). Conclusion: Transitions between ADL independence and dependency occur up to 5 years after stroke. Some of the factors predicting these transitions are potentially modifiable.

The knowledge gap regarding the long-term course of functional outcomes after stroke is an obstacle to delivering adequate care and support to stroke survivors. If disability during the chronic phase after stroke is not stable, it may also be modifiable. Thus, better knowledge of the long-term course of disability after stroke is crucial for the development of timely interventions that can better meet the needs of people afflicted by stroke. The aim of this study was to contribute descriptive data on the time course of disability over 5 years after stroke, and to identify predictors of improvement as well as deterioration beyond the first three months after stroke.

| Materials

Data were obtained from two quality registers assessing stroke care: the Swedish Stroke Register (Riksstroke) (Asplund et al., 2011) and the Skaraborg Longitudinal Stroke Register (SLAG). The SLAG register is a local register containing data from annual follow-ups over 5 years for all patients with an acute stroke treated at the Stroke Units at the Skaraborg Hospitals. The register was established in collaboration with the Swedish Stroke Register in order to assess quality aspects of long-term stroke care. The Skaraborg Hospital has two collaborating Stroke Units located at two county hospitals, serving approximately 285,000 inhabitants of the entire Skaraborg County in southwestern Sweden. According to local and national guidelines, all patients presenting with a suspected acute stroke occurring in the Skaraborg County, regardless of severity and age, should be admitted to one of the two Stroke Units at the Skaraborg Hospital. Assessments from the Swedish Stroke Register (Riks-Stroke, 2011) show that a high proportion (>90%) of acute stroke patients were cared for at Stroke Units at the Skaraborg Hospitals during the study period.
For the present study, we included all patients who were independent in ADL before stroke and were admitted to one of the Stroke Units at the Skaraborg Hospitals with a first-ever stroke between 1 January 2007 and 31 December 2009.

| Ethical considerations

Written information about the SLAG register, the voluntary nature of participation, and the possibility to actively decline participation was given to the participants during their hospital stay. As data were collected primarily for quality assessment, formal written informed consent was not obtained. Approval to use the collected data in the quality register for a retrospective analysis was received from The Regional Ethics Board in Gothenburg (ref. nr 270-14).

| Clinical variables at baseline and at three months post stroke

Data on age at acute stroke, sex, previous strokes, ability in ADL before and 3 months post stroke, living situation before and post stroke, type of stroke, stroke severity at admission measured as National Institutes of Health Stroke Scale (NIHSS) score, presence of vascular risk factors, and secondary preventive treatments at discharge were obtained from the Swedish Stroke Register. In this register, ability in ADL 3 months post stroke is assessed by three questions with the following response alternatives: (a) "How is your mobility now? Independent/independent indoors/need help", (b) "Do you need help from someone to visit the toilet? Yes/No", and (c) "Do you need help getting dressed and undressed? Yes/No". ADL dependency was defined as needing help with indoor mobility, dressing or toileting, whereas those who were independent in indoor mobility, dressing and toileting were regarded as ADL independent.

| Clinical variables one to five years post stroke

Data were collected by postal questionnaires sent annually to all stroke survivors one to five years post stroke. Information on vital status was obtained by linkage to the Swedish Population Register. The annual postal questionnaire was developed in collaboration with the Swedish Stroke Register. In order to enable comparison with functional outcome at 3 months, the postal questionnaire was based on the Swedish Stroke Register's survey for the three-month follow-up of stroke survivors and included identical questions and response alternatives about ADL. Both questionnaires included instructions to the respondents to use help from a relative or a caregiver if they were unable to complete the questionnaire on their own. The annual follow-up questionnaires also included questions about self-perceived unmet care needs. The latter included the following five items: "Have your needs of home care service been met with respect to (a) Health care (described as help with medication, wound dressing or catheter care), (b) Service (described as help with cleaning or grocery shopping), (c) Personal care (described as help with dressing, hygiene, or toileting)?", "Have your needs of disability aids been met?", and "Have your needs of rehabilitation or training after stroke been met?". The response alternatives for the questions about unmet care needs were "No needs", "Fulfilled needs", "Partly unmet needs", "Completely unmet needs" and "Does not know". Self-perceived unmet care needs were defined as perceiving one of these items as partly or completely unmet. The other response alternatives were regarded as perceiving the needs for care as fulfilled.
| Patients lost to follow-up

Patients who did not return the questionnaire received one postal reminder, and the study nurse made attempts to contact them by telephone. The group "patients lost to follow-up" comprised those patients who did not return the questionnaire after one reminder, were not available by telephone, or had no valid address. Declining patients were not invited for further follow-ups.

| Statistical analyses

Descriptive statistics are presented as means and standard deviations for continuous data and as frequencies and percentages for categorical data. Change in ADL ability from 3 months to 5 years post stroke was analyzed separately in survivors with respect to ADL ability 3 months post stroke using the Kaplan-Meier method. For both groups this analysis was stratified with respect to age (up to and above 75 years). Factors associated with improvement or deterioration in ADL ability were investigated separately using univariate and multivariable Cox proportional hazards regression models (a code sketch of this survival workflow is given after this section). Variables associated with improvement or deterioration in ADL ability in univariate Cox regression analyses (p < 0.1) were selected for the multivariable models. As patients did not report on their perception of met or unmet care needs at the 3-month follow-up, we used data from the follow-up at one year after stroke for this variable. To investigate to what extent the results were influenced by those who deteriorated or improved between three months and one year, i.e., before the perception of met or unmet care needs was collected, a sensitivity analysis excluding individuals deteriorating from ADL independency to dependency between three months and one year was performed. p-values <0.05 were considered statistically significant. IBM SPSS v.22 was used to perform all statistical analyses.

| RESULTS

In total, 2,167 stroke events were recorded during the study period. Of these, 1,421 were first-ever strokes in previously ADL-independent individuals. Baseline characteristics for the 1,421 participants and perceived unmet care needs at the first annual follow-up are given in Table 1. The mean age was 75.9 years, 643 (45%) were females, mean NIHSS was 6.4, and 90% of the strokes were ischemic. At one year after stroke, 324 (31%) reported self-perceived unmet care needs. The most commonly reported unmet care need was rehabilitation (22%), followed by disability aids (10%), service (8%), health care (6%) and personal care (5%). Missing data were low for all variables (<8%), except for the NIHSS score, for which data were missing in 22% at baseline and in 18% among survivors at 3 months post stroke. A study flow chart showing the annual follow-up over the 5 years is given in Figure 1. At each time point, the response rate was >90%. The proportion of the returned questionnaires that were answered by the stroke survivors themselves was stable during follow-up, between 57% and 63% at the different time points. The corresponding proportions of returned questionnaires answered by the stroke survivor with help from a relative or caregiver were 24%-31%, and 11%-13% for questionnaires completely answered by relatives or caregivers. Among those returning the questionnaires, information about ADL status was missing in 5%, 7%, 9%, 7% and 8% at years one, two, three, four and five, respectively. The cumulative survival was 85% at 3 months, 79% at year one, 72% at year two, 66% at year three, 59% at year four and 54% at year five post stroke.
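As referenced in the statistical analyses, the workflow combines Kaplan-Meier curves with Cox proportional hazards models. The minimal sketch below reproduces that workflow in Python with the lifelines package on synthetic data; the column names, effect sizes, and censoring scheme are hypothetical and chosen only to mimic the study design (deterioration to ADL dependency tracked from the 3-month follow-up, with administrative censoring at 5 years).

```python
# Minimal sketch of the survival workflow: Kaplan-Meier estimate plus a
# Cox proportional hazards model, using synthetic data and lifelines.
import numpy as np
import pandas as pd
from lifelines import KaplanMeierFitter, CoxPHFitter

rng = np.random.default_rng(2)
n = 900
df = pd.DataFrame({
    "age":         rng.normal(76, 10, n).clip(30, 100),
    "diabetes":    rng.binomial(1, 0.2, n),
    "nihss":       rng.poisson(6, n),
    "unmet_needs": rng.binomial(1, 0.3, n),
})
# Event = deterioration to ADL dependency; time in years from 3 months
risk = 0.02 * np.exp(0.05 * (df["age"] - 76) + 0.5 * df["diabetes"]
                     + 0.07 * df["nihss"] + 0.7 * df["unmet_needs"])
t = rng.exponential(1 / risk)
df["time"] = np.minimum(t, 4.75)            # administrative censoring at 5 years
df["event"] = (t <= 4.75).astype(int)

kmf = KaplanMeierFitter()
kmf.fit(df["time"], event_observed=df["event"])
print(kmf.survival_function_.tail())        # cumulative proportion event-free

cph = CoxPHFitter()
cph.fit(df, duration_col="time", event_col="event")
cph.print_summary()                          # hazard ratios = exp(coef)
```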
Survival was initially poorer among those with hemorrhagic stroke, with cumulative survival of 76% at year one, 73% at year two, 65% at year three, 62% at year four, and 57% at year five. At 3 months, 908 survivors reported ADL independency, 272 reported ADL dependency, 211 were dead, and data on ADL status were missing in 30. As shown in Figure 2, the absolute number of ADL-dependent stroke survivors decreased over time. However, the proportion of survivors reporting ADL dependency remained stable (22% at 3 months, 20% at one, 20% at two, 21% at three, 19% at four, and 20% at five years post stroke). As shown in Figure 3, Panel A, ADL-independent survivors at 3 months after stroke deteriorated to ADL dependency at a slow but constant rate throughout follow-up. In total 192 individuals deteriorated, with cumulative proportions of 3% at one, 6% at two, 11% at three, 14% at four, and 18% at five years post stroke. Similar patterns were observed in subjects above and below 75 years of age, although those <75 years deteriorated at a slower rate. Cox regression multivariable analysis identified age (hazard ratio [HR] 1.11, 95% confidence interval [CI] 1.08-1.13), diabetes (HR 1.65; 95% CI 1.12-2.44), NIHSS score (HR 1.07, 95% CI 1.04-1.10), and self-perceived unmet care needs (HR 2.01; 95% CI 1.44-2.81) as independent predictors of deterioration to ADL dependency (Table 2). An analysis in which we excluded those who deteriorated to ADL dependence between the three-month and the one-year follow-up showed similar results (data not shown). Among the 192 individuals censored for a first deterioration during follow-up, 31 (16%) returned to ADL independency during follow-up. Improvement to ADL independency occurred in 44 of those 272 reporting ADL dependency 3 months after stroke. This transition occurred mainly during the first year, but was observed throughout the follow-up, especially in those <75 years (Figure 3, Panel B). In this group, one third of those improving to ADL independency during follow-up did so during the second and third year after stroke. Cox regression multivariable analysis (Table 3) identified living alone before stroke (HR 0.41; 95% CI 0.19-0.91), NIHSS score (HR 0.90; 95% CI 0.86-0.95), and ischemic stroke (vs. hemorrhagic stroke; HR 0.39; 95% CI 0.17-0.89) as negative predictors of improvement to ADL independency. Among the 44 individuals who were censored for a first transition to independency, 24 (54%) remained independent throughout follow-up.

| DISCUSSION

Our results show that the proportion of survivors who are dependent on others for ADL remained relatively constant over 5 years post stroke. However, when investigating the trajectories of disability stratified by ADL dependency at 3 months after stroke, we found that transitions between dependency and independency occurred throughout the study period. Deterioration from ADL independence to dependency from 3 months onwards was predicted by age, NIHSS score at baseline, diabetes and self-perceived unmet care needs, while improvement from ADL dependency to independence was predicted by living alone before stroke, ischemic stroke, and NIHSS score at baseline (all inverse associations). Also, among those with ADL dependency at 3 months post stroke, the functional status was not static, and transitions to ADL independency occurred up to 5 years after stroke, especially in those <75 years of age at acute stroke. Significant improvement in disability in the chronic phase after stroke has been reported before (Hankey et al., 2002; Magalhaes et al., 2014; van de Port et al., 2006), but has so far received little attention.
We found that improvement to ADL independence was less likely to occur among those living alone before stroke. The observation of a nonstatic long-term course warrants further studies on how improvements in those with ADL dependency could be supported in the chronic phase after stroke. Recently, a substantial increase in ADL dependency among stroke survivors between 3 and 12 months after stroke was reported from the Swedish Stroke Register by Ullberg et al. (2015). In that study, the ADL dependency rate among stroke survivors increased from 16.2% at 3 months to 28.3% at 12 months. Deterioration to ADL dependency from 3 to 12 months was predicted by female sex, diabetes, stroke severity, previous stroke, stroke type and atrial fibrillation. Some of the methods used in the study by Ullberg et al. (2015) are similar to ours. Both studies are hospital based, were conducted in Sweden, and used the same method for measuring ADL dependency. However, although the studies were partly conducted during the same time period, the study populations do not overlap, as the Skaraborg Hospitals only joined the one-year follow-up in the Swedish Stroke Register after 2010. We did not replicate the dramatic loss of ADL function during the first year. The explanation for the different findings is not clear, but may partly be attributed to some methodological differences. In the study by Ullberg et al. (2015), a relatively large proportion of the population was lost to follow-up at 12 months, and their analysis was not restricted to first-ever stroke. Although highly speculative, it is also possible that local variations in access to rehabilitation and care services may contribute, as we found that deterioration to ADL dependency was associated with a perception of unmet care needs.

| CONCLUSIONS

In conclusion, in this study we show that despite stable proportions of ADL dependency among stroke survivors at different time points after stroke, transitions between ADL independence and dependency occur up to 5 years after stroke, indicating that the chronic phase after stroke is not static. We also found potentially modifiable factors predicting these transitions, emphasizing the need for interventions and support also during the chronic phase after stroke.

ACKNOWLEDGMENTS

The authors thank Inger Nordin and Eva Åkerhage for their excellent work with data collection and registration. The authors would also like to convey their gratitude to all the informants, without whose participation this study could not have been carried out.

CONFLICT OF INTEREST

The authors declare that there is no conflict of interest.

DATA AVAILABILITY STATEMENT

The data that support the findings of this study are available on request from the corresponding author. The data are not publicly available due to privacy or ethical restrictions.
Analysis of the Mismatch between Tanzania Household Budget Survey and National Panel Survey Data in Poverty and Inequality Levels and Trends

This study carries out a thorough investigation of the potential sources of mismatch in poverty and inequality levels and trends between the Tanzania National Panel Survey and Household Budget Survey. The main findings of the study include the following. First, the difference in poverty levels between the Household Budget Survey and the National Panel Survey is essentially explained by the differences in the methods of estimating the poverty line. Second, the discrepancy in poverty trends can be mainly attributed to the difference in inter-year temporal price deflators, and, to a lesser extent, spatial price deflators. The use of the consumer price index for adjusting consumption variation across years would show a decline in poverty during the past five years for the Household Budget Survey and the National Panel Survey. Third, despite noticeable differences in the methods of household consumption data collection, the Household Budget Survey and National Panel Survey show close mean household consumption levels in the last rounds, when using the consumer price index to adjust for inter-year price variations. Mean household consumption levels in the Household Budget Survey 2011/12 and National Panel Survey 2010/11 are comparable, and the mean consumption level in the National Panel Survey 2012/13 is around 10 percent higher. The difference is driven by higher levels of aggregate and food consumption by the better-off groups in the National Panel Survey. Fourth, the mismatch in inequality trends and pro-poor growth patterns between the two surveys could not be resolved and is a subject for further analysis.

Policy Research Working Paper 8361. This paper is a product of the Poverty and Equity Global Practice. It is part of a larger effort by the World Bank to provide open access to its research and make a contribution to development policy discussions around the world. Policy Research Working Papers are also posted on the Web at http://econ.worldbank.org. The authors may be contacted at nbelghith@worldbank.org.
I. Background and Main Objectives

The official poverty figures announced by the Government of Tanzania in November 2013 revealed a decline in the basic needs poverty rate from around 34 percent to 28.2 percent between 2007 and 2012, the first significant decline the country has experienced during the last 20 years. This reduction in poverty has been confirmed in the recently published Poverty Assessment report for Mainland Tanzania. The report examined the recent trends in poverty and inequality and their determinants and explored the responsiveness of poverty reduction to economic growth using the Household Budget Survey (HBS) collected in 2011/12. The report shows that poverty dropped by approximately 1 percentage point per year between 2007 and 2011/12, and that inequality, measured by the Gini coefficient of real per capita monthly consumption, declined from around 39 to 36 during the same period. The report also found emerging signs of pro-poor growth, despite a persistently high number of people living in poverty. However, the declining trend in poverty revealed by the HBS data is in contrast to the increasing trend that is observed using the National Panel Survey (NPS) data, which show an increase in poverty from 14.6 percent to 18.1 percent and subsequently to 21.2 percent between 2008/09, 2010/11 and 2012/13. The data also show a slight increase in inequality across the three rounds and do not support the pro-poor growth pattern revealed by HBS data. Although the HBS is the source of the official poverty numbers, this mismatch in poverty levels and trends between the two surveys is puzzling. The NPS is a national-level longitudinal survey designed to collect data from the same households over time in an attempt to better track the progress of the National Strategy for Growth and Reduction of Poverty (MKUKUTA), understand the poverty dynamics and evaluate policy impacts. This study aims to carry out a thorough investigation of the potential sources of mismatch in poverty and inequality levels and trends between the NPS and HBS. The investigation will focus on the key candidate sources of the divergence between the two surveys. These include the methodological differences in the construction of the consumption aggregates and the estimation of the poverty lines, the adjustments for temporal and spatial price variations, and the differences in the methods of consumption data collection.

Source: HBS 2007 and 2011/12 and NPS 2008/09, 2010/11 and 2012/13.

Discrepancies in poverty incidence and trends are observed at the regional level. As shown in Table 1, the poverty level in the rural areas that was estimated using the NPS data is two times lower than that estimated using the HBS data, while the poverty incidence in urban areas measured using the NPS is over three times lower than that estimated using the HBS. In addition, the HBS suggests a decline in poverty in all regions, while the NPS indicates a decline in poverty only in Dar es Salaam (and Zanzibar). At the regional level, both NPS and HBS reveal that inequality is higher in Dar es Salaam and secondary cities than in rural areas. HBS data suggest that the distribution of consumption equalized over time in all the regions, with the most substantial improvement occurring in the rural areas, as can be seen from the changing shape of the Lorenz curves in Figure 3.
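As a computational aside, the Gini coefficient and the quintile consumption shares that underlie Lorenz comparisons like the one in Figure 3 can be computed in a few lines. The sketch below uses synthetic consumption data, not the survey values.

```python
# Sketch: Gini coefficient and quintile consumption shares from a vector of
# per capita consumption (synthetic data; not the actual survey values).
import numpy as np

def gini(x):
    """Gini coefficient via the sorted-rank formula."""
    x = np.sort(np.asarray(x, dtype=float))
    n = x.size
    ranks = np.arange(1, n + 1)
    return (2 * np.sum(ranks * x) / (n * x.sum())) - (n + 1) / n

rng = np.random.default_rng(3)
cons = rng.lognormal(mean=10, sigma=0.7, size=5000)   # monthly consumption

print(f"Gini: {gini(cons):.3f}")

# Share of total consumption accruing to each quintile (Lorenz ordinates)
q = np.quantile(cons, [0.2, 0.4, 0.6, 0.8])
groups = np.digitize(cons, q)                          # 0 = poorest quintile
shares = np.bincount(groups, weights=cons) / cons.sum()
print("quintile shares:", shares.round(3))
```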
Much of the reduction in inequality seems to have been driven by an increase in the welfare share accruing to the poorest segment of the population, as the consumption share of the poorest quintile grew by more than 16 percent between 2007 and 2011/12, with an increase exceeding 20 percent in the rural areas. In contrast, NPS data suggest a deterioration of the distribution of welfare, particularly in the rural areas, where inequality seems to have significantly increased between 2008 and 2013. As shown in Figure 4, the increase in inequality seems to have been driven by a deterioration of the consumption share accruing to the poorest population groups, which declined by around 12 percent at the national level and by 11 percent in rural areas. The consumption share of the poor seems, however, to have improved in the urban sector, particularly in Dar es Salaam, where it increased by over 20 percent.

C. Distributional pattern of growth

Using changes in household consumption as the measure of growth, this section examines whether both NPS and HBS support the emerging signs of "pro-poor" growth in recent years. The Growth Incidence Curve (GIC) for HBS 2007-2011/12, which shows the percent change in average consumption for each percentile of the distribution, is downwardly sloped, indicating higher growth among the poorest (Figure 5; a computational sketch of the GIC follows at the end of this passage). However, the pattern of real consumption growth using HBS differs from that using NPS, as indicated by the upwardly sloped GIC for NPS 2008/09-2012/13, which suggests that the richer groups were the main beneficiaries of growth.

D. Comparison of HBS and NPS density functions

To further explore the differences in expenditures between NPS and HBS, we plot the kernel density functions of consumption expenditure of both surveys in Figure 6.a and Figure 6.b. The differences between the HBS and NPS data sets in poverty and inequality levels and trends, as well as in the distributional pattern of growth, are puzzling and deserve an in-depth investigation of the sources of such differences. The following sections will explore whether these discrepancies are due to differences in survey methods or to differences in the approaches used for the construction of the consumption aggregates, measurement of the poverty lines or estimation of the price deflators. We start by reviewing the survey methods, focusing on the method of data capture, length of reference period for reporting consumption and the level of commodity detail. We then examine the approaches used to estimate poverty and consumption in both surveys and investigate how the estimates vary if the same approach is used in both data sets.

III. Comparison of HBS and NPS Survey Methods

This section reviews the differences between the HBS and NPS data sets in survey characteristics and methods of collecting consumption data.

A. Survey characteristics

Table 2 presents some general characteristics of both surveys, including the date and duration of the fieldwork, sample size, and total (weighted) estimated population. The HBS has a larger sample size and is restricted to the mainland, while the NPS has a smaller sample size but covers the Zanzibar archipelago. However, eliminating Zanzibar from the comparative analysis does not induce significant differences in the results. The sample sizes in both the HBS and NPS are considered large enough to give reasonably precise estimates of poverty at the national level and by geographic domain (rural, other urban, and Dar es Salaam).
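As referenced above, a growth incidence curve can be computed directly from two rounds of consumption data: the percentage change in consumption at each percentile. The minimal sketch below uses synthetic distributions, not the survey values.

```python
# Sketch: a growth incidence curve, i.e. the percent change in consumption
# at each percentile between two survey rounds (synthetic data).
import numpy as np

rng = np.random.default_rng(4)
cons_2007 = rng.lognormal(10.0, 0.75, 4000)
cons_2012 = rng.lognormal(10.1, 0.70, 4000)   # slightly higher mean, more equal

pct = np.arange(1, 100)
q07 = np.percentile(cons_2007, pct)
q12 = np.percentile(cons_2012, pct)
gic = 100 * (q12 / q07 - 1)                   # growth rate at each percentile

# A downward-sloping GIC (higher growth at low percentiles) indicates
# pro-poor growth; an upward slope indicates growth favouring the better-off.
for p in (10, 50, 90):
    print(f"p{p}: {gic[p - 1]:+.1f} %")
```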
While the population increase of approximately 25 percent shown by the NPS data appears to be an overestimate, adjusting the NPS weights using a linear interpolation of population between the 2002 and 2012 censuses and re-scaling the weights accordingly revealed only slight changes in the NPS poverty and inequality levels and no changes in the trends. This points to the need to further investigate this issue. Also, there may be concerns about the potential effect of attrition in the panel surveys, since nonrandom attrition can cause the survey samples to become unrepresentative of the general population over time. However, the attrition rates of 3 percent in NPS 2010/11 and 4 percent in NPS 2012/13 are too low to significantly bias poverty and inequality estimates and to affect their trend. It should be noted that the NPS used propensity score matching to address attrition by compensating for the lost households. The HBS and NPS data were collected over a period of approximately 12 months, which excludes seasonality as a potential source of the mismatch in poverty and inequality indicators. While the survey years are relatively close, the advent of the financial crisis in 2008 might have induced substantial changes in household consumption patterns that may have been captured by NPS 2008/09 and NPS 2010/11, which could explain the increase in poverty between the two rounds. Table 3 compares the methods of consumption data collection between HBS and NPS. The two surveys diverge in several aspects, which induce fundamental differences in survey design, explained below, that are difficult to adjust ex post and will remain present when comparing the two surveys' results. Households were asked to report the total expenditures for women's education and for men's education. Households reported their expenditures on school fees, books and materials, uniforms, transport, extra tuition, other contributions, and the cost of meals.

B. Consumption data collection methods

The main differences between the HBS and NPS in consumption data collection methods can be summarized as follows:

i. "Recall" versus "Diary" method

Both the HBS 2007 and 2011/12 used a 28-day diary to collect data on food consumption. In 2007 the diary was administered for the whole month, but at the analytical step this was adjusted to create expense values for 28 days. Diaries in the HBS 2007 always started at the beginning of the month, while diaries in the HBS 2011/12 were staggered across the months. In both surveys, each household member aged five years and above was asked to fill out a 'booklet' to record his/her daily transactions for consumption purposes, including consumption of own-produced items. Enumerators were then instructed to transcribe the data from these booklets into the main diary form (every other day). However, in practice this might not have happened in this manner, since enumerators were expected to have worked with one household member every other day to fill in the main diary form directly (rather than transcribing the information from the booklets). In contrast, the NPS uses a seven-day recall method to collect data on food consumption, asking the head of the household or their spouse to recall how much they consumed of various food items in the past seven days. According to a study by Beegle et al.
(2012), the diary method produces lower food and total consumption aggregates, higher poverty levels and lower inequality levels, though the variations reported in the study are not as large as those observed between the HBS and NPS data sets. Food consumed outside the household is captured in the HBS through an additional diary filled in only by adult household members, while it is collected in the NPS by way of a seven-day recall asked of all household members. The HBS 2007 does include the data on food consumed outside. Non-food items were collected using both the diary and recall methods in the HBS, while in the NPS they were collected using the recall method only. To avoid the duplication of expenditures under the diary and recall modules in the HBS, an effort was made in both years to exclude non-food items already recorded in the diary from the recall module. In 2011/12, interviewers were asked to carefully check potential duplication between non-food items reported in the recall and those recorded in the diary. Potential duplication was also carefully checked at the analytical stage during the evaluation of the welfare aggregates and poverty estimates. In the HBS 2007, interviewers were asked to "request details of irregular purchases of consumer durables and costs of other services during previous twelve months excluding the survey month". Excluding the survey months from the recall was most likely also intended to reduce double-counting. Potential duplication was also checked at the analytical stage, following the same procedure as in 2011/12. The HBS 2007 used a 12-month recall period for the collection of non-food items, with the exception of rent, while the HBS 2011/12 used recall periods of 1, 3 and 12 months depending on the item (see Table 3). The NPS used a 12-month recall for some non-food items and recall periods of 1 and 4 weeks for others, such as transportation, health and education. While changes in the recall period can affect the welfare and poverty estimates, the induced variations would not be expected to be as large as those observed above in Section II (see Beegle et al., 2012).

ii. Evaluation of home-produced food

For many Tanzanians, particularly those living in the rural areas, most of their caloric intake comes from food that they produced themselves. The value of own-produced food is difficult to evaluate, as its market price cannot be directly observed. Different methods have been suggested in the literature to estimate own-produced food values, each with its own pros and cons. In the HBS, the value of own-produced food is reported directly by the household: respondents are asked to report a shilling value for all food consumed, whether it is purchased or produced at home. In the NPS, the valuation of own-produced food is based on the prices paid by the household for similar items in the same geographic stratum (a computational sketch of this type of imputation follows below).

iii. Degree of commodity detail

Another key difference between the two surveys arises from the degree of commodity detail. The list of non-food items collected in the HBS is more extensive than in the NPS. This is particularly true for HBS 2011/12, where households were provided a very detailed list of the items to be reported in the recall module. For example, HBS 2011/12 solicited information for over 300 non-food items compared to the 52 non-food items solicited in NPS 2012/13. According to Beegle et al. (2012), a more detailed commodity list is expected to lead to higher consumption aggregates and lower poverty levels; the difference between HBS and NPS poverty measures is, however, contrary to this expectation, as HBS poverty indicators are significantly higher than NPS indicators.
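As referenced above, the NPS-style valuation of own-produced food imputes a price from the unit values (value divided by quantity) of purchases of the same item within the same geographic stratum. The sketch below illustrates that logic in pandas on made-up values; the column names are hypothetical.

```python
# Sketch of median-unit-value imputation for own-produced food: value
# own-production at the median purchase unit value of the same item in the
# same geographic stratum. All values are illustrative.
import pandas as pd

purchases = pd.DataFrame({
    "stratum": ["rural", "rural", "urban", "urban", "rural"],
    "item":    ["maize", "maize", "maize", "rice",  "rice"],
    "value":   [2000, 2200, 2600, 3100, 2800],   # TZS
    "qty_kg":  [2.0,  2.1,  2.0,  2.0,  2.0],
})
purchases["unit_value"] = purchases["value"] / purchases["qty_kg"]

# Median unit value per item within each stratum
median_uv = (purchases.groupby(["stratum", "item"])["unit_value"]
                      .median().rename("median_uv").reset_index())

own_prod = pd.DataFrame({
    "stratum": ["rural", "urban"],
    "item":    ["maize", "maize"],
    "qty_kg":  [5.0, 1.5],
})
own_prod = own_prod.merge(median_uv, on=["stratum", "item"], how="left")
own_prod["imputed_value"] = own_prod["qty_kg"] * own_prod["median_uv"]
print(own_prod)
```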
Further, it is worth noting that the HBS survey instruments have improved significantly over time, while there were no substantial changes in survey methods between the NPS waves, except for a few additions to the questionnaire for the third round. Great attention is generally devoted to the supervision of the NPS. To ensure strict control over data quality during fieldwork, the NPS survey uses a smaller and more closely supervised group of enumerators. The survey uses mobile teams, each consisting of seven people (1 supervisor, 4 enumerators, 1 data entry operator and 1 driver). Since the decline in poverty between 2007 and 2011/12 using the HBS data may be due to the changes in survey design, different imputation methods were used to address this issue and check whether the reduction in poverty is real. The different prediction approaches supported the decline in poverty between 2007 and 2011/12, although they revealed a slightly lower pace of poverty reduction, suggesting that the improvements in survey methods in the HBS are not the cause of the difference in poverty trends between HBS and NPS data. Both the HBS and NPS used the Fisher price index to adjust for spatial and intra-year differences in the cost of living (the construction of such an index is sketched in the code example below). In the HBS, separate food and non-food Fisher price indices are estimated based on unit values (value/quantity) from the survey data. The overall (food and non-food) price deflator was computed using the weighted average of the food and non-food indices, where the weights were the average budget shares of total nominal food and non-food consumption. Price indices were calculated by geographic stratum and quarter. The NPS data used a similar method to adjust for spatial and intra-year price differences, but the Fisher price index was based on food unit values only. Table 6 compares the values of the spatial price deflators by survey quarter and stratum and shows no substantial differences between the two surveys. To better understand the potential effect of the differences in (survey-based) inflation rates on the consumption trends in HBS and NPS, we also use the CPI to adjust for inter-year temporal price variations. As most of the difference comes from the significantly higher consumption level of the richest quintiles, we would expect either higher underreporting in the HBS or significant differences in the sampling between the two surveys. This point will be discussed below.

D. Aggregate consumption

When adjusting for inter-year price variations using the survey price deflators, mean household consumption levels appear to be higher in the HBS 2011/12 than in NPS 2012/13. The difference seems to be due to much lower consumption levels of the poorest quintiles in the NPS than in the HBS. While there is almost no difference between HBS and NPS consumption levels of the richest population groups, the difference for the poorest groups reaches around 40 percent. Also, the HBS shows an increase in mean household consumption levels over time, mainly driven by an increase in the consumption of the poorest groups, while the NPS shows a decline in mean household consumption levels over time, mainly driven by a reduction in the consumption of the poorest quintiles.
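As referenced above, a Fisher price index is the geometric mean of the Laspeyres and Paasche indices. The sketch below computes one from item-level prices and quantities for a base and a comparison stratum; the values are illustrative, not the survey deflators.

```python
# Sketch: a Fisher price index from item-level prices and quantities,
# computed as the geometric mean of the Laspeyres and Paasche indices.
import numpy as np

p0 = np.array([1000.0, 800.0, 1500.0])   # base-stratum prices (TZS/unit)
q0 = np.array([10.0, 5.0, 2.0])          # base-stratum quantities
p1 = np.array([1200.0, 820.0, 1650.0])   # comparison-stratum prices
q1 = np.array([9.0, 6.0, 2.2])           # comparison-stratum quantities

laspeyres = (p1 @ q0) / (p0 @ q0)        # new prices at base quantities
paasche   = (p1 @ q1) / (p0 @ q1)        # new prices at current quantities
fisher    = np.sqrt(laspeyres * paasche)

print(f"Laspeyres={laspeyres:.3f}  Paasche={paasche:.3f}  Fisher={fisher:.3f}")
# Dividing nominal consumption in the comparison stratum by `fisher`
# expresses it in base-stratum prices.
```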
As these figures are difficult to compare due to the differences in the inter-year price deflators and base year, we also use the CPI to adjust for inter-year price variations and take HBS 2011/12 as the base year for all HBS and NPS rounds. This reduces the discrepancy between HBS and NPS mean household consumption levels and shows a similar upward trend in mean household consumption over time for both surveys. While we continue to observe a larger increase in mean household consumption in the HBS than in the NPS, and a larger increase in the consumption levels of the better-off in the NPS compared to the HBS, both surveys now display improvements over time in mean household welfare, particularly for the better-off groups. The main differences that stand out can be summarized as follows: i) There is a decline in aggregate consumption as well as in food consumption between NPS 2008/09 and NPS 2010/11, regardless of the inter-year price deflator used. This can be explained by the advent of the financial crisis and food price hikes in 2008, whose effects may have started to appear after 2009. The decline in food consumption levels while non-food consumption remained stable lends support to this presumption. ii) There is a decline in overall consumption between NPS 2010/11 and NPS 2012/13 (using the survey inter-year deflators) for the three poorest quintiles of the population, with the decline being more substantial for the poorest quintile. This decline seems to be driven by a reduction in food consumption accompanied by an even greater reduction in non-food consumption. However, this decline vanishes when the CPI is used for the inter-year price adjustment, as we then observe an increase in consumption levels for all population groups, including the poorest quintiles, even though the improvements remain more substantial for the better-off groups. iii) In contrast with the NPS, the HBS data show a significant increase in the food and total consumption levels of the poorest segments of the population and a slight reduction in the food consumption levels of the richest group. The following sections will explore other potential sources of the mismatch between the two surveys.

The item-by-item differences in how specific non-food categories were collected can be summarized as follows (HBS versus NPS):

Education. HBS: education expenses did not come from the Education section, where questions were asked for each household member; they came from Form II, which had a very detailed section specific to education expenditures, divided into private, public, formal and informal. Unlike in the NPS, the structure of the questions was very similar to that of the other non-food items. NPS: education expenses were included in household expenditures but were collected separately (Section C). Information was collected for each household member over 5 years old with a recall period of 12 months (question 14 in 2008/09, question 28 in 2010/11 and 2012/13). Total expenses were calculated by the enumerator by adding up individual expenses. Note that these expenses included some clothing (uniforms) and footwear (shoes).

Health. HBS: health expenses did not come from the Health section, where questions were asked for each household member; there was a separate section with 15 questions for health expenditures. Unlike in the NPS, the structure of the questions was very similar to that of the other non-food items. Health expenditures were included in household total expenditures but were collected separately. NPS: information was collected for each household member aged 12 years and above (Section D).
Some questions referred to the past four weeks and some to the past 12 months (question 13 onwards), but they appear to be properly harmonized in the do-files. Health expenditures included expenses related to visits to a health practitioner, health treatments, hospitalization and medications.

Transport and communications. HBS: there is a section for vehicle expenses and a separate section for public transport expenses. Communications included telephone landlines, mobile phones, personal computers and satellite decoders. NPS: transport expenses included public transport (7-day recall) and "petrol or diesel", "motor vehicle service, repair, or parts" and "bicycle service, repair, or parts" (30-day recall). Communication expenses had a 30-day recall period (Section L) and included "cellphone vouchers" and "phone, internet and postal services".

Recreation. HBS: the recreation and spare time section was much more complete; it solicited information about sport and camping equipment; swimming pool, gym and tennis court expenditures; tickets to sporting shows, concerts, theater, museums, etc.; and lottery tickets, photographic equipment, musical instruments, amusement items, etc. NPS: recreation expenditures were collected using a recall period of 12 months (Section M) and included "sports and hobby equipment, musical instruments, toys" and "film, film processing, camera". There were only two questions in this category, and the reported values were low compared to the HBS.

Dwelling and miscellaneous items. HBS: detailed questions about the main dwelling included electric power; fixed telephones; mobile phones; TV subscriptions; internet subscriptions; water; common expenditures such as lighting and cleaning of primary and secondary buildings; and gas, charcoal, kerosene, coal and firewood. Miscellaneous non-food expenditures included around 25 detailed questions about furniture and furnishings, tools and appliances for household maintenance, small electric household appliances, dishes, utensils and domestic workers. NPS: household expenses had a 30-day recall period and included milling fees and grain; household cleaning products (dish soap); wages paid to servants; repairs to household and personal items; carpets, rugs, drapes and curtains; linens, towels, sheets and blankets; mats for sleeping or for drying maize flour; mosquito nets and mattresses; and repairs to consumer durables. Miscellaneous non-food expenditures had a 30-day recall period and included bar soap (body soap or clothes soap); laundry soap (powder); toothpaste and toothbrushes; toilet paper; glycerin, Vaseline and skin creams; other personal products (shampoo, razors); insurance (health or auto); and other costs not stated anywhere.

Restaurants and alcohol. HBS: there is a section on travel, holidays and hotels outside and inside Tanzania and another one on restaurants. NPS: non-food consumption did NOT include expenses incurred at restaurants (unlike in the HBS); however, the NPS collected detailed information on food consumed outside the home, including full dinners, with a 7-day recall period (Section F). Alcohol expenditures included information from the non-food questions as well as from Section F (food outside the household), which covered beer, wine or hard drinks consumed outside the household in the past 7 days.

E. Food consumption

Table 8 presents average levels of food consumption per adult per month for different sub-groups of food commodities.
All values are presented in nominal terms, in real terms adjusted for the spatial and seasonal differences in the cost of living, and adjusted for inter-year price variations using the CPI, taking HBS 2011/12 and NPS 2010/11 as the base periods for the HBS and NPS, respectively. The number of food items is much higher in the HBS than in the NPS. Consumption expenditure seems to be higher on food items such as meat, milk and cheese, fruits, sugar, and coffee, tea and soft drinks in the NPS than in the HBS. Both surveys show similar trends in consumption for most items, except for bread and cereals, fish, milk, cheese and eggs, and vegetables, with the variations being much larger in the HBS than in the NPS. Figure 9 presents the shares of food groups in total food consumption. It shows that Tanzanians tend to consume bread and cereals the most, which make up about one-third of food consumption. This is consistent across the HBS and NPS surveys. The second most consumed food group is vegetables for the HBS and meat for the NPS.

F. Non-food consumption

Table 9 shows non-food consumption per adult per month, as well as non-food consumption separated by groups of goods and services. All values are presented in nominal terms, in real terms, and adjusted for inter-year price variation using the CPI. In general, non-food expenditures are larger in the HBS than in the NPS. Among the sources of these differences is the fact that the first two waves of the NPS did not include "clothing and footwear". The NPS also does not include "restaurants and hotels". Moreover, it is worth noting that the value of "housing, water, electricity, gas and other fuels" in the HBS 2011/12 is almost 3 times larger than the value in the NPS 2010/11, and that "recreation and culture" expenses are over 25 times larger in the HBS; however, expenses on education and miscellaneous items are around 3 times larger in the NPS. Both surveys show similar trends in consumption at the sub-aggregate levels, except for education and miscellaneous expenditures, but with the variations being much larger in the HBS. This can be partly explained by the changes in the survey design in the HBS. It is worth mentioning that when addressing the changes in the HBS design through imputation methods, we still observe a significant increase in the consumption of food and non-food items.

IV. Comparison of NPS and HBS Methods for Poverty Line and Poverty Indicators Measurement

This section examines the differences in the measurement of the poverty line and estimation of the poverty indicators between HBS and NPS. It also investigates the potential sources of differences in the inequality indicators. More specifically, the analysis re-evaluates the poverty line of the NPS using the methodology of the HBS 2011/12, to separate differences in the poverty indicators resulting from the estimation approach (which can be addressed) from those stemming from the survey methods, which are more difficult to adjust. We also estimate the poverty numbers using the US$1.90 international poverty line to explore how the levels and trends of poverty compare between the two surveys. We finally examine the potential sources of discrepancy between the two surveys in the consumption distribution patterns, using decomposition methods applied to the NPS and HBS data.
A. Poverty line estimation

The poverty line in the NPS is not directly comparable with the poverty line in the HBS due to the differences in the consumption measurement methods listed in the sections above, as well as differences in the reference period, reference population, measurement of the cost per calorie, and adjustment for inter-year price variations. Most of these differences affect the food line and are explained in more detail below.

i. Food line

Both surveys derive the food line from the cost per calorie in a reference population (sketched in the code example at the end of this section). In the HBS, the cost per calorie is estimated as

$$c = \frac{\sum_k p_{0k}\, q_k}{\sum_k cal_k\, q_k},$$

where $q_k$ is the total quantity of item $k$ consumed in the reference population, $p_{0k}$ is the national median price of item $k$, and $cal_k$ is the corresponding caloric conversion factor of each item established by the Tanzania National Bureau of Statistics; the caloric consumption of household $h$ is $\sum_k cal_k\, q_{hk}$, where $q_{hk}$ is the quantity of item $k$ consumed by household $h$. Median prices $p_{0k}$ are based on the most frequent unit of consumption for each item, with all units being converted to the most frequent unit when possible. If a household consumed a food item in a unit that has no metric conversion to the most frequent unit (e.g., piece to kg), the respective item is dropped. The reference group in the NPS includes the bottom 50 percent of the population ranked in terms of real per adult equivalent consumption, as opposed to nominal per adult equivalent consumption. Real consumption is obtained by adjusting nominal consumption for temporal and spatial cost-of-living differences. Temporal price differences are associated with seasonal differences (quarters), while spatial differences are associated with the location of a household (geographic stratum: Dar es Salaam, other urban, rural, Zanzibar). In the NPS, the cost per calorie is computed as

$$c = \frac{\sum_k p_k\, \bar{q}_k}{\bar{C}},$$

where $p_k$ is the median price of item $k$ in the reference population, $\bar{q}_k$ is the average per adult equivalent consumption of item $k$, and $\bar{C}$ is the average of $C_h = \sum_k cal_k\, q_{hk}$, the total caloric consumption per adult equivalent by household. Applying all the adjustments simultaneously yields the food line and extreme poverty rates shown in line (e): using the HBS method to estimate the NPS food line leads to close values of both lines; however, the proportion of the extreme poor substantially increases to 11.8 percent in the NPS. When the food line is adjusted for inter-year differences in the cost of living using the CPI, extreme poverty initially increases and then stagnates in 2012/13.

iii. Basic needs poverty line

In both surveys, the non-food component of the basic needs poverty line is based on the average non-food consumption of households whose total consumption is close to the food poverty line. In the HBS 2011/12, the households in the reference group are those whose total consumption lies within a narrow interval just above the food line. Estimates based on the US$1.90 international poverty line show a slight increase in poverty between the first two rounds. It is worth noting that the estimation of the international poverty rates follows the Povcalnet method, which does not adjust the consumption values for spatial cost-of-living differences and which seems to partly resolve the mismatch in poverty trends between the two surveys.

iv. Comparison of the basic needs poverty lines across the survey rounds

Based on these findings, it seems that despite the differences between the HBS and NPS in survey design and methods of consumption data collection, the discrepancies in poverty levels and trends between the two surveys mainly result from the differences in: 1) the methods of calculation of the food and basic needs poverty lines; 2) the inter-year price deflators; and 3) to a lesser extent, the spatial price deflators.
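As referenced above, the food line construction reduces to valuing a reference food basket at median prices, dividing by its caloric content, and scaling the resulting cost per calorie by a daily caloric requirement. The sketch below illustrates this arithmetic; the item values, the 2,200 kcal requirement, and the 28-day month are illustrative assumptions, not the official parameters.

```python
# Sketch of a cost-of-basic-needs food line: value the reference basket at
# median prices, divide by its caloric content, then scale by an assumed
# daily caloric requirement. All numbers are illustrative.
import numpy as np

qty         = np.array([30.0, 10.0, 5.0])       # kg per adult per month
median_p    = np.array([900.0, 2500.0, 1400.0]) # median prices, TZS per kg
kcal_per_kg = np.array([3500.0, 1300.0, 3300.0])# caloric conversion factors

cost     = (median_p * qty).sum()               # basket cost, TZS per month
calories = (kcal_per_kg * qty).sum()            # basket calories per month
cost_per_kcal = cost / calories

# Assumed requirement: 2,200 kcal per adult per day over a 28-day month
food_line = cost_per_kcal * 2200 * 28
print(f"cost per kcal: {cost_per_kcal:.4f} TZS")
print(f"food poverty line: {food_line:,.0f} TZS per adult per 28 days")
```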
V. Comparison of Inequality and Distributional Patterns between HBS and NPS

This section compares the inequality indicators between the HBS and NPS surveys and performs an unconditional quantile decomposition to examine the specific household attributes that contribute to the changes of consumption over time in both the HBS and the NPS.

A. Inequality indicators

As stated in the first section, the HBS and NPS show different inequality trends, declining in the HBS and increasing across the NPS waves. The adjustments above are relevant for the poverty estimates only and cannot help in addressing the inequality discrepancies.

B. Unconditional quantile decomposition

This section investigates the basic factors that might explain the discrepancy in inequality (and pro-poor growth patterns) between the HBS and NPS surveys by performing the unconditional quantile regression decomposition technique. The method decomposes the changes in consumption over time into two components: one that is due to improvements in personal characteristics or endowments (better education, increased ownership of land and other assets, access to employment opportunities, local infrastructure, and so forth) and one attributable to changes in the returns to those characteristics (returns to education, land productivity, returns to business, and so forth). These components are then further decomposed to identify the specific attributes that contribute to the changes of consumption. The decomposition is applied at each decile group of the consumption distribution to understand the patterns of the changes for the different welfare groups (a minimal numerical sketch of the technique is given after the concluding remarks).

We start by examining the factors contributing to the variation of consumption between 2007 and 2011/12 using HBS data. The results are reported in Figure 10 and indicate that the increase of poor households' consumption is due mainly to improvements in households' endowments. Returns also improved, but to a lesser extent and only for the 20 percent poorest groups. One can observe from Figure 10 an improvement of households' endowments for all the population groups, but the improvements are more marked for the 30 percent poorest segments. The increase of the endowments is driven by a significant expansion of asset ownership, mainly transportation and communication means, and to a lesser extent agricultural land. Educational attainment of household heads has improved as well, but less significantly. Access to local infrastructure has deteriorated in general, but access to local roads seems to have slightly improved for the poor. The decomposition also indicates a decline in households' engagement in business activities, particularly among the poorest groups.

The improvements in households' endowments were coupled with an increase in the returns to those endowments, but only for the poorest quintile group. Except for the first two deciles, returns appear to have declined over time. But this decline masks divergent trends across the different attributes. As observed from the table in Figure 10, the gains from household businesses, essentially non-farm activity, increased quite significantly between 2007 and 2011/12, particularly for the three bottom deciles. Returns to land also seem to have improved over time, though less significantly for the poor. The returns to community infrastructure also improved, indicating a higher positive influence of access to local markets and roads on needy households' living standards.
Large household size and number of children seem to be a continuing constraint on household wellbeing, although their negative impact appears to have diminished somewhat, as is apparent from the positive change in the returns to demographic structure. However, the observed improvements in the returns to some household attributes have been offset by a significant decline in the returns to assets, followed by a decline in returns to education, inducing a loss of returns for the moderately poor and better-off households.

We also apply the unconditional quantile regression decomposition technique to analyze the factors contributing to the variation of consumption between 2008/09 and 2012/13 using NPS data. The results are presented in Figure 11. Similar to the HBS data, the NPS reveals a quite significant improvement in household endowments over time; however, in contrast to the HBS results, the improvement of endowments is more marked for the richest population groups. We also observe a quite significant deterioration of returns, particularly for the poorest groups, which has offset the endowments' improvements, inducing a decline in total consumption. It is important to note that in this decomposition procedure, consumption is adjusted by the (inter-year) survey price deflators. The use of the CPI for adjusting consumption would have shown a less sharp decline in returns and a slight increase in overall consumption over time; however, the variations across the different deciles would have remained unchanged.

As for the HBS, the results in Figure 11 indicate that the increase of the endowments is driven by a significant expansion of asset ownership, mainly transportation and communication means. Educational attainment of household heads has improved as well, but to a lesser extent. Access to local markets seems to have improved, but only for better-off households. The NPS findings also indicate a potential decline in households' engagement in business activities, but the results are not significant. As for the HBS, the NPS data indicate a decline in returns to households' endowments, but contrary to the HBS findings, the decline seems more marked for the poorest groups. Here again, this decline masks divergent trends across the different attributes. Returns to education and assets seem to have improved, while returns to access to markets appear to have declined. These results deserve further investigation and confirmation.

VI. Some Concluding Remarks

This study attempts to investigate the underlying causes of the mismatch in poverty and inequality levels and trends between the NPS and HBS surveys. The analysis has focused on the key candidates for the divergence between the two surveys. These include the differences in methods of consumption data collection, methodological differences in the construction of the consumption aggregates and estimation of the poverty lines, adjustments for temporal and spatial price variations, and the consistency of within-household spending and asset ownership trends with poverty trends. The main findings can be summarized as follows:

I. Despite noticeable differences in the methods of household consumption data collection, both HBS and NPS show close consumption levels when using comparable inter-year price deflators. The comparison of the levels and distribution of consumption between the two surveys, when adjusted by the CPI, shows that total consumption and food consumption are broadly similar.
III. The discrepancy in poverty trends can be mainly attributed to the difference in temporal price deflators and, to a lesser extent, spatial price deflators. The use of the CPI for adjusting consumption variation over time would show a decline in poverty during the last five years for both HBS and NPS. However, the decline in poverty revealed by the HBS data would remain much higher than that observed with the NPS data. Given the greater degree of commodity detail in the consumption module that was added to the HBS 2011/12 questionnaire (which would suggest a better capture of consumption information), it is possible that the HBS under-estimated consumption in 2007 and hence overestimated poverty then and its subsequent decline. Also, the increase in poverty between NPS 2008/09 and NPS 2010/11 continues to be observed and could potentially be explained by the financial crisis and international food price variations.

IV. The mismatch in inequality trends between HBS and NPS could not be resolved. The analysis of the variation of the consumption distribution over time using HBS and NPS data shows that both surveys indicate significant improvements in households' endowments over time. However, while the HBS reveals that endowments increased faster for the poorest groups, the NPS shows that the richest groups experienced higher improvements in their endowments. Also, while the HBS shows a slight increase in returns for the poorest groups, the NPS reveals a deterioration. This might be partly driven by the inter-year deflator but would need further investigation and confirmation. All these results point to the importance of examining the sampling design.

Based on these findings, we would suggest the following recommendations: 1) foster closer collaboration inside the NBS between the teams working on HBS data and those processing NPS surveys, and harmonize the methodologies for the evaluation of the poverty lines and price indicators; 2) attempt as much as possible to harmonize the HBS and NPS design, particularly the methods for household consumption data collection; 3) examine further the sampling procedure and the potential differences resulting from sampling design; and 4) further explore the underlying causes of the divergence between both surveys in growth and distributional patterns.
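Since the unconditional quantile decomposition of Section V is central to finding IV, the following minimal sketch shows the mechanics of a recentered influence function (RIF) regression decomposition in the spirit of Firpo, Fortin and Lemieux, applied to synthetic data. The covariates, coefficients and the simple two-fold Oaxaca-Blinder split are illustrative assumptions, not the exact estimator used in this study.

```python
import numpy as np
from scipy.stats import gaussian_kde

def rif_quantile(y, tau):
    """Recentered influence function of the tau-th quantile of y."""
    q = np.quantile(y, tau)
    f = gaussian_kde(y)(q)[0]          # density at the quantile
    return q + (tau - (y <= q)) / f

def ols(X, y):
    Xc = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(Xc, y, rcond=None)
    return beta

rng = np.random.default_rng(0)
# Two synthetic survey rounds: log consumption driven by schooling and assets.
X0 = rng.normal([6.0, 1.0], 1.0, size=(4000, 2))
y0 = 0.08 * X0[:, 0] + 0.20 * X0[:, 1] + rng.normal(0, 0.5, 4000)
X1 = rng.normal([7.0, 1.2], 1.0, size=(4000, 2))
y1 = 0.07 * X1[:, 0] + 0.25 * X1[:, 1] + rng.normal(0, 0.5, 4000)

tau = 0.1                              # decile of interest
b0 = ols(X0, rif_quantile(y0, tau))
b1 = ols(X1, rif_quantile(y1, tau))
m0 = np.append(1.0, X0.mean(axis=0))
m1 = np.append(1.0, X1.mean(axis=0))

endow = (m1 - m0) @ b0                 # change explained by endowments
returns = m1 @ (b1 - b0)               # change explained by returns
print(f"total={m1 @ b1 - m0 @ b0:+.3f}  endowments={endow:+.3f}  returns={returns:+.3f}")
```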
2019-01-01T21:31:34.805Z
2018-03-07T00:00:00.000
{ "year": 2018, "sha1": "00f0d8947926e890f1470c740d0d41507d5bdd2f", "oa_license": "CCBY", "oa_url": "https://openknowledge.worldbank.org/bitstream/10986/29455/1/WPS8361.pdf", "oa_status": "GREEN", "pdf_src": "ElsevierPush", "pdf_hash": "00f0d8947926e890f1470c740d0d41507d5bdd2f", "s2fieldsofstudy": [ "Economics" ], "extfieldsofstudy": [ "Geography" ] }
119103335
pes2o/s2orc
v3-fos-license
The Quark-Gluon Vertex and the QCD Infrared Dynamics

The Dyson-Schwinger quark equation is solved for the quark-gluon vertex using the most recent lattice data available in the Landau gauge for the quark, gluon and ghost propagators, the full set of longitudinal tensor structures in the Ball-Chiu vertex, taking into account a recently derived normalisation for the quark-ghost kernel form factors and the gluon contribution to the tree level quark-gluon vertex identified in a recent study of the lattice soft gluon limit. A solution for the inverse problem is computed after the Tikhonov linear regularisation of the integral equation, which implies solving a modified Dyson-Schwinger equation. We obtain longitudinal form factors that are strongly enhanced in the infrared region, deviate significantly from the tree level results for quark and gluon momenta below 2 GeV, and at higher momenta approach their perturbative values. The computed quark-gluon vertex favours kinematical configurations where the quark momentum $p$ and the gluon momentum $q$ are small and parallel. Further, the quark-gluon vertex is dominated by the form factors associated to the tree level vertex $\gamma_\mu$ and to the operator $2 \, p_\mu + q_\mu$. The higher rank tensor structures provide small contributions to the vertex.

Introduction

The interaction of quarks and gluons is described by Quantum Chromodynamics [1][2][3][4], a renormalizable gauge theory associated to the color gauge group SU(3). The quark-gluon vertex plays a fundamental role in the description of hadron phenomenology, in the understanding of the chiral symmetry breaking mechanism and, eventually, in the realisation of confinement. Despite its relevance for strong interactions, our knowledge of the quark-gluon vertex from first principles calculations is relatively poor. At the perturbative level, only recently was a full calculation of the twelve form factors associated to this vertex published [5]. The twelve form factors needed to describe the quark-gluon vertex have been computed for various kinematical configurations, namely the symmetric configuration (equal incoming, outgoing quark and gluon squared momenta), the on-shell configuration (quarks on-shell with vanishing gluon momentum) and the asymptotic limit (all momenta much larger than the current quark mass). The study of the asymptotic limit of the vertex was then used to investigate various ansätze that can be found in the literature [6][7][8][9][10][11] and, in particular, to test their description of the ultraviolet regime.

At the non-perturbative level, the quark-gluon vertex has been studied within continuum approaches to QCD by several authors [11][12][13][14][15][16][17][18][19][20][21][22][23]. Typically, the computation is performed after writing the vertex in terms of other QCD vertices and propagators, preferably taking into account the perturbative tail, in order to simplify its calculation. Most of the computations also include only a fraction of the twelve form factors required to describe the quark-gluon vertex. In the calculation performed within massive QCD, i.e. using the Curci-Ferrari model [18], the vertex was computed in perturbation theory and all its (perturbative) tensor structures were accessed. In [17,22,23] the authors solved the theory at the non-perturbative level, gathering information on the vertex from QCD symmetries and relying on one-loop dressed perturbation theory.
The quark-gluon vertex has also been computed using lattice simulations, both for quenched QCD [24][25][26] and for full QCD [27]. In lattice simulations, typically only a limited set of kinematical configurations is accessed, with the soft gluon limit, defined by a vanishing gluon momentum, being the most explored. In particular, for full QCD only a single form factor, the one associated with the tree level tensor structure, has so far been measured in the soft gluon limit. Lattice calculations need a proper estimation of the lattice artefacts, so that these can be subtracted from the results of the simulations in order to report the corresponding continuum functions.

One can also find in the literature attempts to combine continuum non-perturbative QCD equations with results from lattice simulations to study the quark-gluon vertex. In [28] a generalized Ball-Chiu vertex was used in the quark gap equation together with lattice results for the quark, gluon and ghost propagators to address the quark-gluon vertex. In [29], the full QCD lattice data for λ₁ was investigated relying on continuum information about the vertex. The use of continuum equations with results coming from lattice simulations requires high quality lattice data to feed the continuum equations that are to be solved for the vertex. In this approach, the computation of a solution of the continuum equation requires assuming some type of functional dependence for various functions. In recent years, there has been an effort to improve the quality of the lattice data, in the sense of being closer to the continuum and producing simulations with large statistical ensembles, both for propagators and for vertex functions.

For the practitioner it is oftentimes sufficient to have a good model of the vertex that incorporates the perturbative tail to describe the ultraviolet regime, some "guessing" for the infrared region and, hopefully, complies with perturbative renormalization [5-8,31]. A popular and quite successful model was introduced in [32], named the Maris-Tandy model. It assumes a single tensor structure for the vertex, its tree level tensor structure, and simplifies its functional form. Further, this model assumes that the vertex depends only on the gluon momentum and accommodates the results from perturbation theory at high momentum transfer with a considerable enhancement of the vertex at low momenta. Such a vertex, as it appears in the Dyson-Schwinger and Bethe-Salpeter equations, can be seen as a reinterpretation of the full vertex tensor structure, after rewriting its main components in a form that can formally be associated with the gluon propagator. Although the Maris-Tandy model is quite successful for phenomenology, it is not able to describe the full set of hadronic properties: to point out some known limitations, it fails to explain the mass splittings of the ρ and a₁ parity partners, underestimates the weak decay constants of heavy-light mesons, and cannot reproduce simultaneously the mass spectrum and decay constants of radially excited vector mesons. For a more complete description see, for example, [33,34] and references therein.

The goal of the present work is to explore further the quark-gluon vertex in the non-perturbative regime from first principles calculations. Our approach follows the spirit of the calculations initiated in [28], which combine continuum methods with results from lattice simulations to solve the quark Dyson-Schwinger equation for the vertex.
While in that work the quark-gluon vertex was computed using a generalised Ball-Chiu vertex, herein we go well beyond it and include other tensor structures, which implies taking into account the dependence of the vertex on the quark momentum and on the angle between the quark and gluon momenta. As in the above cited work, the calculations are performed in the Landau gauge, to profit from the recent high quality lattice data for the quark, the gluon and the ghost propagators, and take into consideration only the longitudinal components of the vertex. Moreover, the computation also incorporates the recent analysis of the full QCD lattice simulation for the quark-gluon vertex in the soft gluon limit, which identifies an important contribution associated to the gluon propagator [29]. We use a Slavnov-Taylor identity to write the longitudinal components of the vertex as a function of the quark wave function, the running quark mass, the quark-ghost kernel form factors and the ghost propagator. We also incorporate the normalisation of the quark-ghost kernel form factor X₀, see below for definitions, derived in [17] for the soft gluon limit. The normalisation of X₀ played an important role in the analysis of the full QCD lattice data for λ₁, the form factor associated with the tree level tensor structure (see below for definitions), performed in [29].

Our solution for the quark-gluon vertex returns an X₀ that deviates only slightly from the normalisation condition referred to above, and longitudinal quark-gluon vertex form factors that are strongly enhanced in the infrared region. The enhancement of the four longitudinal form factors occurs for quark and gluon momenta below 2 GeV, and at high momenta the form factors approach their perturbative values. The computed quark-gluon vertex is also a function of the angle between the quark four-momentum p and the gluon four-momentum q and clearly favours kinematical configurations where p and q are of the order of 1 GeV or below. Furthermore, we find that the vertex is enhanced when the momenta entering the vertex (see Fig. 1) tend to be pairwise parallel, subject to the compromise that the momenta are restricted to a region around Λ_QCD. Within our solution for the quark-gluon vertex, the dominant form factors are those associated to the tree level vertex γ_µ and to the operator 2p_µ + q_µ, with the higher rank tensor structures giving subleading contributions.

The paper is organised as follows. In Sec. 2 we introduce the notation for the propagators, the Dyson-Schwinger equations and the quark-gluon vertex. Moreover, we use a Slavnov-Taylor identity to rewrite the vertex in terms of the quark propagator functions and the quark-ghost kernel.

Figure 1: The quark-gluon vertex.

The parametrization of the quark-ghost kernel is also discussed. In Sec. 3 the scalar and vector components of the DSE in Minkowski space are given together with the corresponding kernels, and we start to set up the ansatz used to solve the integral equations. In Sec. 4 the DSE are rewritten in Euclidean space and, after performing a scaling analysis of the integral equations, we introduce the ansatz for the vertex. In Sec. 5 we give the details of the lattice data used in the current work for the various propagators and of the functions that parametrize the lattice data. The kernels of the Euclidean space DSE are discussed in Sec. 6, together with the solutions for the vertex of the gap equation. The quark-gluon vertex form factors are reported in Sec. 7 for several kinematical configurations. Finally, in Sec. 8 we summarise and conclude.
2 The Quark Gap Equation and the Quark-Gluon Vertex

In this section the notation used throughout the article is defined. The equations discussed below and in the first part of this work are written in Minkowski space with the diagonal metric g = (1, −1, −1, −1). Following the notation of [35], in the quark-gluon vertex represented in Fig. 1 all momenta are incoming and, therefore, verify

p₁ + p₂ + p₃ = 0 .   (2.1)

The one-particle irreducible Green's function associated to the vertex reads

Γᵃ_µ(p₁, p₂, p₃) = g tᵃ Γ_µ(p₁, p₂, p₃) ,   (2.2)

where g is the strong coupling constant and tᵃ are the color matrices in the fundamental representation. The quark propagator is diagonal in color and its spin-Lorentz structure is given by

S(p) = Z(p²) / ( p̸ − M(p²) ) ,   (2.3)

where Z(p²) = 1/A(p²) stands for the quark wave function and M(p²) = B(p²)/A(p²) is the renormalization group invariant running quark mass. The inverse quark propagator reads

S⁻¹(p) = A(p²) p̸ − B(p²) .   (2.4)

Figure 2: The Dyson-Schwinger equation for the quark. The solid blobs denote dressed propagators and vertices.

The Dyson-Schwinger equation for the quark propagator, also named the quark gap equation, is represented in Fig. 2 and can be written as

S⁻¹(p) = Z₂ ( p̸ − m_bm ) − Σ(p) ,   (2.5)

where Z₂ is the quark renormalization constant, m_bm the bare current quark mass, and the quark self-energy is given by

Σ(p) = Z₁ g² ∫ d⁴q/(2π)⁴ Δᵃᵇ_µν(q) γ^µ tᵃ S(p − q) Γᵇ^ν ,   (2.6)

with the quark-gluon vertex evaluated at the corresponding momenta, where Z₁ is a combination of several renormalization constants and Δᵃᵇ_µν(q) is the gluon propagator which, in the Landau gauge, is given by

Δᵃᵇ_µν(q) = −i δᵃᵇ ( g_µν − q_µ q_ν / q² ) Δ(q²) .   (2.7)

In the following, both Δᵃᵇ_µν(q) and the form factor Δ(q²) will be referred to as the gluon propagator. A key ingredient in the gap equation (2.5) is the quark-gluon vertex. Indeed, it is only after knowing Γᵃ_µ that one can compute Z(p²) and M(p²). The Lorentz structure of the quark-gluon vertex Γ_µ, see Eq. (2.2), can be decomposed into longitudinal Γ^(L) and transverse Γ^(T) components relative to the gluon momentum, i.e. one writes

Γ_µ = Γ^(L)_µ + Γ^(T)_µ ,   (2.8)

where, by definition, q^µ Γ^(T)_µ = 0. By choosing a suitable tensor basis in the spinor-Lorentz space, Γ_µ can be written as a sum of scalar form factors that multiply each of the elements of the basis. The full vertex Γ_µ requires twelve independent form factors and, using the Ball and Chiu basis [6], it becomes a sum of four longitudinal operators L^(i)_µ, weighted by form factors λᵢ, and eight transverse operators T^(i)_µ, weighted by form factors τᵢ. The operators L^(i)_µ associated to the longitudinal vertex are given in Eq. (2.12), while the operators T^(i)_µ associated to the transverse part of the vertex, given in Eq. (2.13), are orthogonal to the gluon momentum.

QCD Symmetries and the Quark-Gluon Vertex

The global and local symmetries of QCD constrain the full vertex Γ_µ and connect several of the Green's functions of the theory. For example, the global symmetries of QCD require the form factors λᵢ and τᵢ to be either symmetric or anti-symmetric under exchange of the first two momenta; see, e.g., ref. [35] and references therein. On the other hand, gauge symmetry implies that the Green's functions also satisfy the Slavnov-Taylor identities (STI) [36,37]. These identities play a major role in our understanding of QCD and, in particular, the longitudinal part of the quark-gluon vertex is constrained by the identity (2.14), which relates the contraction of Γ_µ with the gluon momentum to the inverse quark propagator, dressed by the ghost-dressing function F(q²) and the quark-ghost kernels H and H̄. The ghost-dressing function F(q²) is related to the ghost two-point correlation function as in Eq. (2.15), and H and H̄ are associated to the so-called quark-ghost kernel. As discussed in [35], these functions can be parametrized in terms of four form factors as

H(p₁, p₂, p₃) = X₀ 𝟙 + X₁ p̸₁ + X₂ p̸₂ + X₃ σ_µν p₁^µ p₂^ν ,   (2.16)

with an analogous decomposition for H̄ in terms of form factors X̄ᵢ.
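As a bookkeeping aid, the following minimal sketch shows the dictionary between the (A, B) and (Z, M) parametrizations of the quark propagator used throughout. The dressing functions below are made-up placeholders, not fits to lattice data.

```python
import numpy as np

# Toy momentum grid and illustrative dressing functions A(p^2), B(p^2);
# the functional forms are placeholders, not fits to lattice data.
p2 = np.logspace(-2, 3, 200)                      # GeV^2
A = 1.0 + 0.3 / (1.0 + p2)                        # vector dressing
B = 0.4 / (1.0 + p2 / 0.6)**1.4 + 0.008           # scalar dressing (GeV)

Z = 1.0 / A            # quark wave function  Z(p^2) = 1/A(p^2)
M = B / A              # running quark mass   M(p^2) = B(p^2)/A(p^2)

print(f"M at lowest p^2 = {M[0]:.3f} GeV, Z at largest p^2 = {Z[-1]:.3f}")
```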
The STI given in Eq. (2.14) can be solved with respect to the vertex [13] and, in this way, the longitudinal form factors λᵢ(p₁, p₂, p₃) can be written in terms of the quark propagator functions A(p²), B(p²) and the quark-ghost kernel functions Xᵢ and X̄ᵢ, as given in Eqs. (2.17)-(2.20). A nice feature of this solution for the various form factors, which can be checked by direct inspection, is that the symmetry requirements on the λᵢ due to charge conjugation are automatically satisfied independently of the functions A, B, Xᵢ and X̄ᵢ. This is a particularly important point if one aims to model the vertex.

3 Decomposing the Dyson-Schwinger Equation into its Scalar and Vector Components

The Dyson-Schwinger equation for the quark propagator is written in (2.5), with the quark self-energy given by (2.6). This equation can be projected into its scalar and vector components by taking appropriate traces. The scalar part is given by the trace of (2.5) which, after insertion of the vertex decomposition given in (2.8) and taking into account only its longitudinal part, reduces after some algebra to Eq. (3.1), where k = p − q and C_F = 4/3 is the Casimir invariant associated to the SU(3) fundamental representation. The vector component of (2.5) is obtained after multiplication by p̸ and then taking the trace of the resulting equation, arriving at Eq. (3.3).

The two equations (3.1) and (3.3) can be simplified further by modelling the quark-gluon vertex. For example, in [13,28] the vertex was parametrized using the solution of the Slavnov-Taylor identity (2.17)-(2.20) and setting X₁ = X₂ = X₃ = 0. The rationale for such a choice comes from perturbation theory which gives, at tree level, X₀ = 1 and X₁ = X₂ = X₃ = 0. This ansatz, which ignores all form factors associated to the quark-ghost kernel but X₀, assumes that at the non-perturbative level the hierarchy of the form factors follows the relative importance observed at high momentum. Furthermore, in order to compute a solution of the Dyson-Schwinger equations, a further restriction on X₀ was assumed, namely that it depends only on the incoming gluon momentum, i.e. that X₀ = X₀(q²).

In order to proceed with the analysis of the Dyson-Schwinger equations for the vertex, it will be assumed that, in what concerns the momentum dependence, the form factors associated to the quark-ghost kernel factorize as

Xᵢ(p₁, p₂, p₃) = gᵢ(p₁², p₂²) Xᵢ(p₃²) ,   (3.4)

where gᵢ(p₁², p₂²) = gᵢ(p₂², p₁²) are symmetric functions of their arguments. This type of factorisation is compatible, for example, with the Maris-Tandy quark-gluon vertex [32] and simplifies considerably the analysis of the solutions of the Dyson-Schwinger equations. In [17] it was proved to all orders that the quark-ghost kernel form factors obey the soft gluon normalisation conditions (3.5) which, for the ansatz considered above, imply

g(p², p²) X₀(0) = 1   and   g₁(p², p²) X₁(0) = g₂(p², p²) X₂(0) .   (3.6)

In order to comply with the second relation given in Eq. (3.5), it will be assumed from now on that X₁ = X₂ for any kinematical configuration. Note also that by choosing the gᵢ to be symmetric functions of their arguments, the form factors Xᵢ and X̄ᵢ become identical. Taking the ansatz for the quark-gluon vertex just set up into the solutions of the Slavnov-Taylor identity (2.17)-(2.20), the longitudinal form factors take the form of Eqs. (3.7)-(3.10), where k = p − q; we call the reader's attention to the fact that we use Xᵢ to represent X̄ᵢ.
Then the scalar component of the Dyson-Schwinger equations becomes Eq. (3.12), with the corresponding kernels given by Eq. (3.16).

4 The Dyson-Schwinger Equations in Euclidean Space

Our aim is to solve the Dyson-Schwinger equations for the quark-ghost kernel, said otherwise for the quark-gluon vertex, and this requires the knowledge of the quark, gluon and ghost propagators. For that we will rely on lattice inputs that provide first principles non-perturbative results, which also demands that the above expressions be rewritten in Euclidean space. The Wick rotation from Minkowski to Euclidean space is achieved by applying the standard substitutions to Eqs. (3.12) and (3.16). For completeness, below we refer to all expressions in Euclidean space: the scalar component of the Dyson-Schwinger equations is given by Eq. (4.2), its vector component by Eq. (4.3), and the kernels appearing in Eqs. (4.2) and (4.3) are listed in Eqs. (4.4)-(4.9).

The study of the solutions of the above equations using lattice inputs requires the use of the renormalized Dyson-Schwinger equations and, therefore, all quantities appearing in these equations should be finite. This requirement constrains the integrand functions gᵢ(p², (p − q)²) Xᵢ(q²) and, in particular, their possible behaviour in the limits q → 0 and p → +∞.

Let us start by considering the ultraviolet limit of the integrand functions appearing in Eqs. (4.2) and (4.3). In the large-q limit the propagators approach their perturbative forms, up to logarithmic corrections. In this limit, the requirement of having a finite integral in the scalar equation (4.2) demands that, at large q, the functions gᵢ Xᵢ fall off as in Eq. (4.12), or are proportional to a higher negative power of q. The logarithmic corrections, not taken into account in this naive analysis, are sufficient to avoid the UV logarithmic divergence suggested by the naive power counting. Indeed, these logarithmic corrections, introduced by the renormalization group improvement, are, for large momenta, of the type [log(q²/Λ²)]^γ, with γ being one of the anomalous dimensions. Our large-q analysis should take into account the logarithmic corrections coming from the gluon, the ghost and the quark propagators, which for N_f = 2 result in γ = γ_glue + γ_ghost + γ_quark = −137/116. Then, assuming a large-q behaviour as in (4.12) times the log correction, the integrand at high momenta is suppressed strongly enough to result in a finite value for the integral. The difference between the naive power counting and taking into account the log corrections is illustrated in Fig. 3. As seen in this figure, the log corrections further suppress the integrand function at high momenta. In what concerns the quark-ghost kernel form factor X₀(q²) at high energies, the power counting analysis is compatible with X₀(q²) = 1 at large momenta, as required by perturbation theory and by the all-orders result summarised in Eq. (3.6).

The same analysis for the vector component (4.3) gives, up to logarithmic corrections, analogous conditions, where the omitted prefactors {···} stand for finite expressions involving A(p²), A(q²), B(p²) and B(q²). The conditions given in (4.12) are sufficient to ensure a finite result for the UV integration over q in the vector component of the Dyson-Schwinger equations. The QCD dynamics generates infrared mass scales for the quark and gluon propagators that are sufficient to eliminate possible infinities associated to the low momentum limit of the integral in the quark gap equation.
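The role of the anomalous-dimension damping can be checked numerically. The sketch below compares the naively log-divergent integrand 1/q with the same integrand damped by [log(q²/Λ²)]^γ for γ = −137/116; the scale Λ and the integration window are arbitrary choices made for the illustration.

```python
import numpy as np
from scipy.integrate import quad

gamma = -137.0 / 116.0      # gluon + ghost + quark anomalous dimensions, N_f = 2
Lam = 0.3                   # an assumed Lambda_QCD-like scale in GeV

# Naive power counting: integrand ~ 1/q at large q (log-divergent).
naive = lambda q: 1.0 / q
# RG-improved: the same integrand damped by [log(q^2/Lam^2)]^gamma.
improved = lambda q: (np.log(q**2 / Lam**2))**gamma / q

for cut in (1e2, 1e4, 1e6):
    I_naive, _ = quad(naive, 2.0, cut)
    I_imp, _ = quad(improved, 2.0, cut, limit=200)
    print(f"cutoff={cut:8.0e}  naive={I_naive:8.3f}  improved={I_imp:8.5f}")
# The naive integral grows like log(cutoff); the damped one grows ever more
# slowly and converges when the cutoff is removed, since |gamma| > 1 makes
# 1/(q [log q]^|gamma|) integrable at infinity.
```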
For full QCD, the λ₁ form factor of the quark-gluon vertex was computed in the soft gluon limit, i.e. vanishing gluon momentum, using lattice simulations in [27]. The analysis of the lattice data performed in [29] shows that the data is well described by a two-parameter functional form involving the gluon propagator, where a and b are constants, which in terms of X₁ and X₃ translates into Eq. (4.16). This result suggests writing X₁ as a gluon propagator term times a function X̃₁(q²), see Eq. (4.17), which in the high-q limit gives X₁ ∼ X̃₁(q²)/q² and regularizes the ultraviolet behaviour in agreement with the discussion summarised in (4.12). Similarly, Eq. (4.16) also suggests writing X₃ as a gluon propagator term times X̃₃(q²), see Eq. (4.18), so that at high q momenta X₃ ∼ X̃₃(q²)/q² and, in this way, the ultraviolet problems referred to in (4.12) are solved. Furthermore, for large quark momentum the ansätze (4.17) and (4.18) give vanishing contributions, implying the vanishing of the kernels (4.4)-(4.9) for sufficiently large p. In summary, our ansatz for the quark-gluon vertex used to solve the Dyson-Schwinger equations is given by Eqs. (4.20)-(4.22). The quark gap equation should be solved taking into account the constraint (3.6), which demands X₀(0) = X₀(q → +∞) = 1.

Our goal is to use the gap equation to study the quark-gluon vertex and, therefore, the knowledge of the various propagators over the full range of momenta appearing in the integral equation is required. This is achieved by fitting the Landau gauge lattice propagators using model functions that are compatible with 1-loop renormalization group improved perturbation theory. In this way, it is ensured that the perturbative tails are properly taken into account in the parameterization of the various functions used to solve the Dyson-Schwinger equations. In order to compare the present work with the results of [28], in App. B we compare the new fits, discussed below, with those previously used. Note the differences between the two sets of curves, which necessarily change quantitatively, but not qualitatively, the solutions presented herein and in [28].

Landau gauge lattice gluon and ghost propagators

The lattice gluon propagator has been computed in the Landau gauge both for full QCD and for pure Yang-Mills theory. Nowadays, the gluon propagator is well known for the pure Yang-Mills theory; it was calculated in [42] for large statistical ensembles and for large physical volumes of ∼ (6.6 fm)⁴ and ∼ (8.2 fm)⁴. Furthermore, in [42] the authors provide global fits to the lattice data that reproduce the 1-loop renormalisation group summation of the leading logarithmic behaviour. Of the various expressions, to solve the Dyson-Schwinger equations we will use the fit to the (6.6 fm)⁴ volume, Eq. (5.1), selected on the basis of its χ²/d.o.f. Given that the level of precision achieved in lattice simulations for the quark propagator is considerably smaller than for the gluon propagator, one should distinguish between the various fitting functions provided in [42]; our option was to use the simplest functional form given in that work. The lattice data for the Landau gauge gluon dressing function p²Δ(p²), renormalized in the MOM scheme at the mass scale µ = 3 GeV, together with the fit associated to Eq. (5.1), can be seen on the left part of Fig. 4. For the ghost propagator we take the data reported in [41] for the 80⁴ lattice simulation and fit the lattice data to the functional form of Eq. (5.2).
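The kind of global fit just described can be sketched as follows. The rational ansatz, the parameter values and the synthetic data points are assumptions made for the illustration, and the logarithmic RG-improvement factor of the actual fits is omitted.

```python
import numpy as np
from scipy.optimize import curve_fit

def gluon_prop(p2, z, m1sq, m2sq, m4):
    """Rational ansatz for Delta(p^2); an illustrative stand-in for the
    RG-improved global fits used for the lattice gluon propagator."""
    return z * (p2 + m1sq) / (p2**2 + m2sq * p2 + m4)

# Synthetic "lattice" points: the true curve plus 2% multiplicative noise.
rng = np.random.default_rng(1)
p2 = np.linspace(0.01, 16.0, 60)
data = gluon_prop(p2, 8.0, 2.2, 0.7, 0.35) * (1 + 0.02 * rng.normal(size=p2.size))

popt, pcov = curve_fit(gluon_prop, p2, data, p0=[5.0, 1.0, 1.0, 1.0])
print("fitted parameters:", np.round(popt, 3))
```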
Lattice Quark Propagator

For the quark propagator we consider the N_f = 2 full QCD simulation in the Landau gauge of [27] for β = 5.29, κ = 0.13632 and a 32³ × 64 lattice. For this particular lattice setup, the corresponding bare quark mass is 8 MeV and the pion mass reads M_π = 295 MeV. Our fits to the lattice data, see below, take into account that the lattice data is not free of lattice artefacts; see [27] for further details. At high momenta the lattice quark wave function Z(p²) is a decreasing function of momentum, a behaviour that is not compatible with perturbation theory, which predicts a constant Z(p²) in the Landau gauge. As reported in [27], the analysis of the lattice artefacts relying on the H4 method suggests that, indeed, Z(p²) is constant at high p. In order to be compatible with perturbation theory, we identify the region of momenta where Z(p²) is constant and, for momenta above this plateau, we replace the lattice estimates of Z(p²) by a constant value, namely the highest value of the quark wave function belonging to the plateau. The original lattice data and the ultraviolet corrected lattice data can be seen on the left of Fig. 5. The UV corrected lattice data is then fitted to a rational function of p², Eq. (5.3).

The removal of the lattice artefacts for the running quark mass is more delicate than the corresponding treatment of the quark wave function [24,38]. The lattice data published in [27] and reported in Fig. 5 (right) was obtained using the so-called hybrid corrections to reduce the lattice effects [24]. The hybrid method results in a smoother mass function when compared to the one obtained by applying the multiplicative corrections. The differences in the corrected running mass between the two methods occur for momenta above 1 GeV, with the multiplicatively corrected running mass being larger than the corresponding hybrid estimation. The running mass provided by the two methods, corrected for the lattice artefacts, seems to converge to the same values at large momentum. The running mass reported in Fig. 5 (right) is not smooth enough to be fitted directly. To model the lattice running mass in a way that reproduces the ultraviolet and infrared lattice data and is compatible with the perturbative behaviour at high momenta, we remove some of the lattice data at intermediate momenta. In Fig. 5, the data in the region with an orange background was not taken into account in the global fit to the running quark mass. The remaining lattice data was fitted to the functional form of Eq. (5.4), where γ_m = 12/29 is the quark anomalous dimension for N_f = 2. The fitted parameters include M_q = 349 ± 10 MeV.

The momentum integration will be performed as described in App. A, i.e. by introducing a hard cutoff Λ, with all integrations performed with Gauss-Legendre quadrature. For the angular integration we consider 500 Gauss-Legendre points as in [28].
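A minimal sketch of this quadrature setup is given below, assuming a 4D Euclidean measure with one nontrivial angle; the normalization convention and the test integrand are choices made for the sketch and are verified against a closed form.

```python
import numpy as np

Lambda = 20.0                     # hard UV cutoff in GeV
nq, nth = 64, 500                 # radial and angular Gauss-Legendre points

# Radial nodes/weights mapped from [-1, 1] onto [0, Lambda].
xq, wq = np.polynomial.legendre.leggauss(nq)
q = 0.5 * Lambda * (xq + 1.0)
wq = 0.5 * Lambda * wq

# Angular nodes/weights mapped onto [0, pi]; in 4D Euclidean space the
# measure carries a sin^2(theta) factor.
xt, wt = np.polynomial.legendre.leggauss(nth)
th = 0.5 * np.pi * (xt + 1.0)
wt = 0.5 * np.pi * wt

def loop_integral(f):
    """Approximate (1/(2 pi)^3) int_0^Lambda dq q^3 int_0^pi dth sin^2(th) f(q, th)."""
    Q, T = np.meshgrid(q, th, indexing="ij")
    vals = f(Q, T) * Q**3 * np.sin(T)**2
    return (wq @ vals @ wt) / (2.0 * np.pi)**3

# Sanity check against an integrand with a known closed form:
# f = 1 gives (pi/2) * Lambda^4 / (4 * (2 pi)^3).
approx = loop_integral(lambda Q, T: np.ones_like(Q))
exact = (np.pi / 2) * Lambda**4 / (4 * (2 * np.pi)**3)
print(f"quadrature: {approx:.6f}   exact: {exact:.6f}")
```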
After the angular integration, one is left with the kernels shown in Figs. 6, 7 and 8, there without taking into account the Gauss-Legendre weights associated to the integration over the gluon momentum. The inclusion of the Gauss-Legendre weights associated to the q momentum integration does not change the outcome reported in Figs. 6, 7 and 8, the main difference being that the associated numerical values are considerably smaller. The major contributions of the N_A kernels reproduce the behaviour observed in [28] and, therefore, the momentum integration in Eqs. (4.2) and (4.3) associated to the kernels coupled to X₀(q²) is finite. The function N_A(p, q) displays a similar pattern and, again, the integration over the gluon momentum associated with N_A^(1) is expected to be well behaved. On the other hand, the remaining kernels, i.e. the N^(1) family, are all increasing functions of q. The requirement of a finite integration over q demands that X₁ and X₃ approach zero fast enough to compensate the increase with q of these kernel functions; see the discussion of the kernels' ultraviolet limit in Sec. 4. The ansatz (4.20)-(4.22) adds a multiplicative gluon propagator term that is just enough to regularize the high momentum behaviour of the kernels, i.e. their main contribution to the integral equations comes from p ≲ 2 GeV and q ≲ 2 GeV, and, therefore, the inclusion of the gluon propagator in the kernels makes the integration over q finite.

The Dyson-Schwinger equations are solved using a hard cutoff set to Λ = 20 GeV, and all quantities are renormalized in the MOM scheme using the same renormalization scale as in [28], i.e. µ = 4.3 GeV, so that one can compare the results of the two works. The renormalized quantities then satisfy the corresponding MOM-scheme identities at the renormalization point. The bare quark mass quoted in the lattice simulation for the ensemble used here reads m_bm = 8 MeV [27]. In the following we set Z₁ = 1, take the value for Z₂ from the vector component of the gap equation at the cutoff, and "measure" the bare quark mass using the scalar component of the gap equation at the cutoff momentum.

Figure 7: The kernels N^(1)(p, q)/p² including the gluon propagator term as defined in (4.21). See also the caption of Fig. 6.

Figure 8: The kernels N^(1)(p, q)/p² including the gluon propagator term as defined in (4.22). See also the caption of Fig. 6.

In this way m_b.m. does not coincide with the value quoted in the simulation but, as can be seen below, its value is close to the 8 MeV quoted above. The results shown in Secs. 6.1, 6.2, 6.3 and 6.4 were computed using the same value α_s(µ) = 0.295 as in [28]. In Sec. 6.5 we allow α_s(µ) to change and provide a best value; from Sec. 6.5 onwards, the reported results use the optimal value for the strong coupling constant.

6.1 One-Loop Dressed Perturbation Theory for X₀

The ansatz considered here demands the knowledge of the three form factors X₀, X₁ and X₃. However, the quark gap equation provides only two independent equations and, therefore, it is not possible to compute all the form factors at once over the full range of momenta. A first look at the quark-ghost kernel form factors is possible if one computes X₀ within one-loop dressed perturbation theory with a simplified version of the quark-ghost kernel, where one sets X₁ = X₃ = 0, and then solves the gap equation to estimate X₁ and X₃. The way the solutions of the Dyson-Schwinger equations for X₁ and X₃ are built also illustrates the numerical procedure used to solve the integral equations.

Figure 11: One-loop dressed perturbation theory for the quark-ghost kernel.

The one-loop dressed approximation to the quark-ghost kernel is represented in Fig. 11 and, in the simplified version of the kernel, translates into the integral equation (6.3), where H₁(q²) stands for the ghost-gluon vertex. When solving this equation we consider two versions of the ghost-gluon vertex, namely its tree level version, where H₁(q²) = 1, and an enhanced dressed vertex as given in [43]. The numerical treatment reduces Eq. (6.3) to a linear system of equations, solved using the QR decomposition of the matrix appearing in the linear system. The numerical solutions for X₀ can be seen in Fig. 12 and are, essentially, those reported in [28]. According to one-loop dressed perturbation theory, the deviations of X₀(q²) from its tree level value are, at most, of the order of 15%. We have also looked at iterative solutions of (6.3).
No convergence was observed, and the solution produced by a single iteration results in an X₀(q²) that is enhanced relative to the solutions of Fig. 12.

Figure 12: Simplified one-loop dressed perturbation theory estimation for X₀(q²).

The estimation of X₀ allows one to solve the gap equation for X₁ and X₃. In order to compute a solution of the Dyson-Schwinger equations, after performing the angular integration, the scalar and vector components of the equations are rewritten in the form of a larger linear system; from now on we adopt the short name B = N X to refer to this linear system of equations. Note that X₁ has mass dimensions, while X₀ and X₃ are dimensionless. We could have rescaled X₁ to make it dimensionless. The only natural scale in QCD is provided by Λ_QCD or, alternatively, by the propagators at some mass scale. However, given that we have no indication of which mass scale to consider, we did not introduce any scale; said otherwise, the results reported here can be viewed as taking this mass scale to be 1 GeV. This is also true for the full solution of the Dyson-Schwinger equations discussed in the next section.

A direct solution of B = N X produces a meaningless result, with the components of X oscillating over very large values due to the presence of very small eigenvalues of the matrix N, which reflects the ill-defined problem at hand. The linear system can be solved using Tikhonov regularization [44], which is equivalent to minimizing ||B − N X||² + ε ||X||², where ε is a small parameter to be determined in the inversion. In this way we look for solutions that solve the linear system but whose norm is small. For real symmetric matrices, Tikhonov regularization replaces the original system by Nᵀ B = (Nᵀ N + ε 𝟙) X. In our case N is not a symmetric matrix and we solve the system in its normal form. In order to determine the regularization parameter, we solve Nᵀ B = (Nᵀ N + ε 𝟙) X for various values of ε and look at how ||B − N X||² and ||X||² change with ε. The outcome of the inversions for different ε can be seen in Fig. 13. In this figure, smaller values of ε are closer to the original ill-defined problem and correspond to solutions with larger norms for X₁ and X₃; larger values of ε are associated to solutions of the modified linear system with smaller X₁ and X₃ norms. The optimal value of ε is given by the solution whose residuum, i.e. the difference between the lhs and the rhs of the original equations, is among the smallest values, just before the norms of X₁ and X₃ start to grow without changing the residuum.

Figure 13: Residuum versus norm for the scalar and vector equations when solving the gap equation for X₁ and X₃ with X₀ as given by one-loop dressed perturbation theory. The left plot refers to the inversion using H₁(q²) = 1, while the right plot shows the results for the inversion using the improved gluon-ghost vertex. Smaller values of the regularizing parameter ε are associated to solutions with larger norms, while larger values of ε produce X₁ and X₃ with smaller norms. Recall that X₁ has mass dimensions, while X₃ is dimensionless.

In the above figure we point out three solutions in the region where ε takes approximately its optimal value.
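The regularized inversion just described can be condensed into a short sketch. The kernel matrix below is a synthetic ill-conditioned stand-in, not the actual DSE kernel, and the ε grid is an arbitrary choice.

```python
import numpy as np

def tikhonov_scan(N, B, eps_list):
    """Solve the normal equations (N^T N + eps * I) X = N^T B for a range of
    regularization parameters and report residuum and solution norm."""
    NtN, NtB = N.T @ N, N.T @ B
    I = np.eye(N.shape[1])
    out = []
    for eps in eps_list:
        X = np.linalg.solve(NtN + eps * I, NtB)
        out.append((eps, np.linalg.norm(B - N @ X), np.linalg.norm(X)))
    return out

# An ill-conditioned toy system standing in for the discretized DSE kernel.
rng = np.random.default_rng(2)
U, _ = np.linalg.qr(rng.normal(size=(80, 80)))
V, _ = np.linalg.qr(rng.normal(size=(60, 60)))
s = np.logspace(0, -12, 60)                   # rapidly decaying singular values
N = U[:, :60] * s @ V.T
X_true = rng.normal(size=60)
B = N @ X_true + 1e-6 * rng.normal(size=80)   # data with a little noise

for eps, res, norm in tikhonov_scan(N, B, [1e-2, 1e-6, 1e-10, 1e-14]):
    print(f"eps={eps:7.0e}  ||B-NX||={res:10.3e}  ||X||={norm:10.3e}")
# Tiny eps reproduces the ill-posed problem (exploding ||X||); large eps
# over-damps the solution; the optimal eps sits at the corner of the trade-off.
```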
Our first comment on Fig. 13 is that both the scalar and vector components of the Dyson-Schwinger equations can be resolved with the ansatz considered, i.e. setting X₀(p²) to its one-loop dressed perturbative result and obtaining X₁(p²) and X₃(p²) from solving the gap equations, provided we let the norms of X₁ and X₃ be large enough. Of course, for large norms X₁ and X₃ are free to vary over a large range of values and, therefore, solutions with smaller norms are preferred. From Fig. 13, three typical solutions close to the optimal solution, as defined previously, are identified. For the perturbative X₀ computed with the tree level (TL) ghost-gluon vertex we refer to them as I(TL)-III(TL); the corresponding solutions for the enhanced vertex, I(Enh)-III(Enh), have the same m_b.m. and Z₂ as the previous ones and ||X₀ − 1||² = 0.45283. The norms of X₀, X₁ and X₃ are the norms of the vectors that appear in the linear systems.

The quality of the solutions can be appreciated in Fig. 14, where we show the l.h.s. of the scalar and vector components of the gap equation, together with the difference between the l.h.s. and the computed r.h.s. using the X₀ from one-loop perturbation theory and the X₁ and X₃ that solve the modified linear system. The relative errors for both the scalar and vector components of the Dyson-Schwinger equations are shown in Fig. 15. In the figures, ||∆Sca||² and ||∆Vec||² should be understood as the sums of the squares of the components of the scalar and vector linear systems, respectively, over the Gauss-Legendre points. As Fig. 15 shows, the relative error on the DSE is below 10% for the scalar equation and below 8% for the vector equation. Surprisingly, despite the larger values of ||∆Vec||² relative to ||∆Sca||², the vector component of the gap equation is better resolved. This is also due to the fact that A(p²) spans a narrower range of values than B(p²). We have tried to rescale the linear system by 1/A(p²) for the vector equation and by 1/B(p²) for the scalar equation to try to improve the quality of the solutions, especially at large momentum. However, the numerical solutions of the rescaled linear systems produced X₁ and X₃ that do not seem reasonable and, for example, result in an X₁ at the cutoff that is far from zero. Further, for the rescaled systems the ∆Sca and ∆Vec are larger than those obtained without rescaling. For all these reasons we disregard the rescaled linear system solutions.

The quark-ghost kernel form factors X₁ and X₃ computed for the various values of ε associated to the solutions I(TL)-III(TL) and I(Enh)-III(Enh) can be seen in Fig. 16. For X₁, resolving the integral equations using either the tree level or the enhanced ghost-gluon vertex results in essentially the same function. Further, the various solutions provide essentially the same X₁(p²), with the exception of III(TL) and III(Enh), which return a suppressed form factor relative to the other solutions. For X₃ the situation is similar, with the form factor associated to the solutions I(TL) and I(Enh) being enhanced at momenta above 2 GeV. Looking at Fig. 15, one can observe that solutions II(TL) and II(Enh) are those with the smallest relative errors over the full range of momenta considered. So, from now on we will take these solutions as our best solutions associated to the perturbative X₀ form factor. Note that the scalar equation is solved with a relative error ≲ 8% and the vector equation with a relative error ≲ 6%.
6.2 Solving the Dyson-Schwinger Equations

Let us now discuss the simultaneous computation of X₀, X₁ and X₃ from the modified linear system of equations that replaces the original Dyson-Schwinger integral equations. The procedure to build the linear system, as well as the regularization of the corresponding system of equations, follows the steps described in the previous section.

Figure 16: The quark-ghost kernel form factors X₁ and X₃ computed for the tree level ghost-gluon vertex (left) and the enhanced gluon-ghost vertex (right).

The scalar and vector components are assembled into a large linear system, Eq. (6.8), that again we refer to by the short name B = N X (a structural sketch of this construction is given below). The uppermost components of the large vector X contain the form factor X₀ defined on the full set of Gauss-Legendre points used in the integration over the loop momenta. The remaining components of the large vector X are the form factors X₁ and X₃, defined on the lower half of the set of Gauss-Legendre points used in the momentum integration. This means that the solution of the linear system (6.8) returns X₀(p²) for p ∈ [0, Λ], and X₁(p²) and X₃(p²) for p ∈ [0, Λ/2]. For X₁ and X₃ and for p > Λ/2, the form factors are assumed to vanish. In order to fulfil the boundary conditions for X₀(q²) we write X₀(q²) = 1 + X̃₀(q²) and solve the linear system for X̃₀(q²), rebuilding X₀(q²) at the end. The resulting linear system is then regularized using the Tikhonov regularisation scheme, and the corresponding Nᵀ B = (Nᵀ N + ε 𝟙) X linear system is solved for various ε. The choice of the optimal regularization parameter is made following the criteria discussed in Sec. 6.1. We have checked that interchanging the roles of X₀, X₁ and X₃ when building the large linear system leaves the solutions unchanged; see more on that below. The differences only occur for those functions calculated only for q ∈ [0, Λ/2], compared to the version of the linear system where they are computed in the range q ∈ [0, Λ]. In the first case, i.e. for the solutions computed only for q ∈ [0, Λ/2], a discontinuity appears at q = Λ/2 (recall that the form factors are set to zero for momenta above Λ/2), but for smaller q the form factors of all versions of the linear system are indistinguishable.

In Fig. 17 we identify four solutions associated to an ε around its optimal value. The relative errors for the solutions I-IV of the regularized linear system are shown in Fig. 18. In general the solution of the vector component of the equation is satisfactory, with the scalar component being more demanding: not all of the solutions I-IV resolve the scalar part of the gap equation with a relative error below 10%. Only solutions III and IV resolve the DSE with a relative error below 8%. In particular, for these solutions the value of ||X₀ − 1|| is of the order of 10⁻², suggesting that the non-perturbative solution prefers having X₀ ≈ 1 and, in this sense and for this form factor, is close to the result from perturbation theory discussed in Sec. 6.1. The observed growth of the relative error for p ≳ 10 GeV is probably also related to the missing components of X₁ and X₃, which are set to zero in this range of momenta. The form factors X₀, X₁ and X₃ associated to the solutions I-IV can be seen in Figs. 19-21.
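The structural sketch referred to above shows one way the extended system can be laid out, with the X₀ = 1 + X̃₀ substitution moving the tree-level contribution to the data side. The kernel blocks and grid size are random placeholders, and the minimum-norm least-squares solve stands in for the Tikhonov-regularized inversion.

```python
import numpy as np

# Hypothetical kernel blocks on a Gauss-Legendre grid of n points: K0 acts on
# X0 over the full grid, K1 and K3 act on X1, X3 over the lower half only.
n = 32
rng = np.random.default_rng(3)
K0, K1, K3 = (rng.normal(size=(n, n)) / n for _ in range(3))
lhs = rng.normal(size=n)                       # discretized l.h.s. of the DSE

half = n // 2
# Enforce X0 = 1 + X0tilde: the tree-level part K0 @ 1 moves to the data side.
B = lhs - K0 @ np.ones(n)
# Columns: X0tilde on all n points, X1 and X3 on the first n/2 points
# (both are set to zero above Lambda/2, so those columns are dropped).
N = np.hstack([K0, K1[:, :half], K3[:, :half]])

X, *_ = np.linalg.lstsq(N, B, rcond=None)      # minimum-norm solution
X0 = 1.0 + X[:n]                               # rebuild X0 at the end
X1, X3 = X[n:n + half], X[n + half:]
print(X0.shape, X1.shape, X3.shape)
```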
In Fig. 19, besides solutions I-IV, we also show the perturbative X₀(q²) computed using one-loop dressed perturbation theory with both the tree level ghost-gluon vertex and its enhanced version. The perturbative solutions and those obtained by solving the Dyson-Schwinger equations have rather different structures, with perturbation theory providing a larger X₀(p²) and predicting a relatively long tail. Indeed, the solutions of the regularized linear system recover the tree level value X₀(p²) = 1 from p ≈ 10 GeV onwards, while the perturbative solution only reproduces its tree level value at much larger momenta. The momentum scale associated to the absolute maximum of X₀(p²) occurs at essentially the same p ≈ 400 MeV for all solutions, while the perturbative results point to a maximum of X₀(p²) at momenta slightly above the GeV scale. Qualitatively, the non-perturbative solutions all have the same pattern for this form factor, the exception being Sol. I, which clearly overestimates |X₀| for p ≲ 1 GeV. The solutions III and IV are those which resolve the gap equation with the smallest relative errors, below 8%; see Fig. 18. The non-perturbative solution of the DSE gives an X₀(p²) that differs from its tree level value by less than 5%, with values above unity for momenta p ≲ 1 GeV. Above this momentum scale the form factor takes values below one, reaching a minimum for p just above 1 GeV, and approaches its tree level value at high momentum from below. The differences between the non-perturbative X₀ and its tree level value for p ≳ 10 GeV are rather small.

Our non-perturbative estimates for X₁(p²) can be viewed in Fig. 20. All the solutions I-IV reproduce the same pattern for this form factor, with a positive maximum around p ≈ 400 MeV and with X₁ becoming small for p ≳ 1.5 GeV. In particular, for the solutions III and IV, X₁(p²) is particularly small (≲ 0.4 GeV) for p ≳ 1.5 GeV. One should not forget that the form factor appearing in the quark-ghost kernel is not X₁ but this function times the gluon propagator; see Eq. (4.21). The same applies to X₃.

Figure 19: X₀(p²) from inverting the Dyson-Schwinger equations, together with its estimation from one-loop dressed perturbation theory obtained by solving exactly Eq. (6.3).

The form factor X₃(p²) is reported in Fig. 21. It turns out that this function is positive for p ≲ 400 MeV and for p ≳ 1 GeV, takes negative values in between, takes its maximum value at p ≈ 1.5 GeV, and then slowly approaches its tree level value from above. Given the way the solutions are computed, in Fig. 21 X₃(p²) shows a jump at p ≈ 10 GeV, which corresponds to Λ/2. A similar behaviour can be seen in Fig. 20 for X₁(p²); however, given that for p ≳ 10 GeV X₁(p²) ≈ 0, this sudden jump is not so easily observed.

Finally, in Fig. 22 we provide the various solutions for X₀(p²), X₁(p²) and X₃(p²) after permuting the roles of the form factors when writing the extended vector X. For the version called X₀X₁X₃, the extended vector includes X₀ over the full set of Gauss-Legendre points, with X₁ and X₃ being obtained only in the range q ∈ [0, Λ/2]. For the version called X₁X₀X₃, the extended vector includes X₁ over the full set of Gauss-Legendre points, with X₀ and X₃ being obtained only in the range q ∈ [0, Λ/2]. For the version called X₃X₀X₁, the extended vector includes X₃ over the full set of Gauss-Legendre points, with X₀ and X₁ being obtained only in the range q ∈ [0, Λ/2].
We call the reader's attention to the stability of the solutions of the various linear systems. Furthermore, the comparison of Fig. 16 from Sec. 6.1 with Fig. 22 shows quite similar X₁ and X₃, suggesting, once again, that X₀ almost does not deviate from its tree level value.

6.3 Solving the DSE for X₀ = 1

The non-perturbative solutions of the Dyson-Schwinger equations discussed in the previous paragraphs suggest that X₀(p²) ≈ 1. Therefore, herein we present the results found when solving the DSE setting X₀(p²) = 1. The residua of the scalar and vector equations against the norms of the two remaining form factors X₁ and X₃ can be seen in Fig. 23. The characteristics of the solutions highlighted in the figure, associated to an ε close to its optimal value, can be appreciated from Fig. 24, which shows that solution I resolves the DSE up to p ≈ 10 GeV with an error smaller than 3% for the scalar equation and of about 1% for the vector equation. However, the form factor X₃ associated to solution I does not seem to be converged for momenta above 2 GeV. The solution named IV solves the scalar equation with a relative error below 10% and the vector equation with a relative error below 6%. The form factors X₁(p²) and X₃(p²) associated to the solutions I-IV are reported in Fig. 25 and reproduce the same patterns as those of the solutions computed in the previous sections.

Figure 25: The solutions of the DSE for X₁ and X₃ when X₀ = 1.

6.4 Full Form Factors and Comparison of Solutions

In Secs. 6.1, 6.2 and 6.3 we have solved the Dyson-Schwinger equations assuming that the quark-ghost kernel form factors are given by Eqs. (4.20)-(4.22). So, besides X₀, the full form factors appearing in H and H̄, see Eq. (2.16), should be multiplied by the gluon propagator at the proper kinematical configuration. Herein, we aim to compare the various solutions found in the previous sections and, in this way, provide an estimation of the systematics associated to our ansatz, and also to provide the form of the full functions appearing in H and H̄. Looking at the relative errors on the Dyson-Schwinger equations and at the convergence of the form factors at higher momenta, the comparison will be done using Sol. II computed with the perturbative X₀ and the tree level ghost-gluon vertex, Sol. III computed when the gap equation is solved for the full set of form factors, and Sol. IV when the gap equation is solved for X₀ = 1.

Let us start with X₀, which we have assumed to be only a function of the gluon momentum. The comparison of the perturbative solutions with the solution obtained by inverting the Dyson-Schwinger equations for the full set of form factors used in our ansatz can be seen in Fig. 26. This figure partially repeats Fig. 19, providing a clearer view of the solutions. All solutions show an X₀ that is essentially close to its tree level value, i.e. X₀ = 1, with the perturbative solutions having the largest deviation from unity.

The form factor X₁(p²) can be seen in Fig. 27 for all the solutions. Note that all solutions reproduce essentially the same function of the gluon momentum, with X₁(p²) being small for p ≳ 1.5 GeV and showing a sharp peak at p ≈ 400 MeV. X₁(p²) is positive except for a small range of momenta, p ∈ [0.75, 1.4] GeV, where it takes small negative values. The form factor X₃(p²) can be seen in Fig. 28 for all the solutions. Surprisingly, X₃(p²) seems to have a relatively long tail that appears for all the solutions. Up to momenta p ≈ 3 GeV the solutions reproduce essentially the same function.
However, for p ≳ 3 GeV the solution associated with X_0 = 1 is enhanced relative to all the others, with the solutions associated with the one-loop perturbative X_0 being slightly enhanced relative to the non-perturbative solution obtained from inverting the gap equation. X_3(p²) shows a local maximum at p ≈ 200 MeV, an absolute maximum at p ≈ 1.4 GeV and an absolute minimum at p ≈ 650 MeV. This form factor is positive definite at infrared momenta p ≲ 350 MeV and at high momenta p ≳ 900 MeV, taking negative values in p ∈ [0.35, 0.9] GeV.

Tuning α_s

The results for the relative errors of the scalar and vector components of the Dyson-Schwinger equation, seen in Figs. 15, 18 and 24, show a relative error that for p ≳ 10 GeV grows with p and takes its maximum value ∼10% at the cutoff. This can be viewed in many ways, one of them being that our choice for the strong coupling constant is not the best one.

Figure 28: The quark-ghost kernel form factor X_3(p²).

In our approach we mix quenched lattice results with dynamical simulations and, in order to be able to solve the gap equation for the quark-ghost kernel, the renormalization constant Z_1, see Eq. (2.6), is set to the identity. Although the original integral equation is linear in the form factors X_0, X_1 and X_3, the regularized system that is actually solved introduces an extra parameter that needs to be fixed in the way described above; therefore, changing the strong coupling constant changes the balance between the regularizing parameter and the various form factors, allowing for adjustments of the solutions. Hence the relative errors of the integral equations can be tuned by changing the strong coupling constant.

In this section, we report the results of solving the regularized linear system of equations that replaces the original equations in the way described in Sec. 6. As the figures show, lowering the value of α_s(μ) solves the problem of the increase of the relative error observed in Sec. 6.2. Moreover, of the various solutions considered, for α_s(μ) = 0.22 one observes solutions whose relative error is of the order of ∼1% for the scalar equation and ∼3-4% for the vector components, the solutions named Sol. I and II in Fig. 30. The relative errors associated with the remaining solutions shown in Figs. 24, 29, 30 and 31 are larger and, therefore, we take α_s(μ) = 0.22 as the optimal value of the strong coupling constant within our approach. The corresponding quark-ghost kernels can be seen in Figs. 32, 33 and 34, together with the corresponding solution computed using α_s(μ) = 0.295. The solutions for the two values of α_s are similar, although those associated with the smaller value of α_s achieve higher values. If at momenta p ≳ 1 GeV Sol. I takes absolute values that are higher than those of Sol. II, at lower momenta the difference between the two solutions is marginal.

The Quark-Gluon Vertex Form Factors

In the previous section we have computed the quark-ghost kernel form factors X_0(q²), X_1(q²) and X_3(q²) that, together with the gluon propagator, define the full form factors as given in Eqs. (4.20), (4.21) and (4.22). Once the full quark-ghost kernel form factors are known, the longitudinal quark-gluon form factors can be computed using Eqs.
(3.7)–(3.10), after performing the rotation to Euclidean space and identifying the g_i(p₁², p₂²) functions as g_0(p₁², p₂²) = 1 and g_1(p₁², p₂²) = g_2(p₁², p₂²) = Δ((p₁² + p₂²)/2). For completeness, the full expressions for the longitudinal form factors in Euclidean space are given in Eqs. (7.2)–(7.5).

Figure 34: The quark-ghost kernel form factor X_3(p²) computed using α_s(μ) = 0.22.

Note that by taking into account structures of the quark-ghost kernel other than X_0, the quark-gluon vertex deviates considerably from a Ball-Chiu type and is now a function of p, q and of the angle between the quark and gluon momenta. The angular dependence appears associated with the scalar product (pq) and also in the argument of the gluon propagator, Δ((p² + k²)/2). For the calculation of λ_1–λ_4 we will use Sol. II computed using α_s(μ) = 0.22; see Sec. 6.5 for details. We remind the reader that the calculation performed here considers only the longitudinal form factors and that the ansatz for the vertex takes into account the dependence on the angle between the incoming quark momentum and the incoming gluon momentum.

The overall picture of the various form factors when the angle between the incoming quark momentum p and the incoming gluon momentum q is θ = 0 can be seen in Fig. 35. In Fig. 36, λ_1 to λ_4 are given for θ = 2π/3. The form factors λ_1 to λ_4 are finite for all p and q and approach their perturbative values asymptotically. Further, for our definition of the operators L^μ, see Eqs. (2.12) for their definition in Minkowski space, the corresponding form factors are essentially positive definite. The exception is λ_4, which takes both positive and negative values and whose largest value in magnitude is negative and appears at small p and q. The relative magnitudes of the λ_i suggest that the quark-gluon vertex is essentially saturated by λ_1 and λ_3, with λ_2 and λ_4 playing minor roles, i.e. the remaining tensor structures of the longitudinal part of the vertex seem to play a subleading role; see also the discussion of the soft quark limit, defined by a vanishing quark momentum, and of the symmetric limit below. Our result differs significantly from the perturbative estimation of the form factors [5], where all the strength appears associated with λ_1. For example, for the kinematical configuration defined by p² = (p − q)², at vanishing p we have λ_1 ≈ 1.1, λ_2 ≈ 0.12 GeV⁻² and λ_3 ≈ 0.18 GeV⁻¹ for a current mass m_q = 115 MeV, a renormalization scale μ = 2 GeV and α_s = 0.118. Of course, one should look at the relative values of the various λ's and not at their absolute values. For the comparison of the contributions from the various form factors one can use the non-perturbative momentum scale of 1 GeV to build dimensionless quantities. Then, as seen in Figs. 35 and 36, the scales for λ_1 and λ_3 are similar, while the maximum of λ_2 is about 10% of the maxima of λ_1 and λ_3, and the maximum of λ_4 is about half of that of λ_2. The comparison of our results with those reported in [17,22,23] is difficult to perform, but in these works λ_1 clearly dominates. In [17], λ_2 reaches at most 16% of the maximum value of λ_1, while λ_3 seems to have the possibility of taking large values. In [22,23], λ_2 and λ_3 take, at most, numerical values that are about 23% of the maxima of λ_1, with λ_4 being essentially negligible. Our solution shows a vertex dominated by λ_1 and λ_3, with these form factors reaching numerical values of the same order of magnitude; see also Fig. 41.
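To make the dimensionless comparison just described concrete, one can rescale the perturbative values quoted above with the reference scale μ_0 = 1 GeV (the numbers are those given in the text; the rescaling convention is our illustration):

$$
\bar\lambda_1 = \lambda_1 \approx 1.1\,, \qquad
\bar\lambda_2 = \lambda_2\,\mu_0^2 \approx 0.12\,, \qquad
\bar\lambda_3 = \lambda_3\,\mu_0 \approx 0.18\,,
$$

so that, perturbatively, the dimensionless λ_2 and λ_3 reach only about 11% and 16% of λ_1, respectively, consistent with the dominance of λ_1 in the perturbative estimate.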
As seen in Figs. 35 and 36, the quark-gluon form factors are significantly enhanced for low values of p and q. The momentum region where one observes the enhancement of λ_1 to λ_4 is p ≲ 1 GeV and q ≲ 1 GeV, with the maximum values showing up for p ≈ q ≈ Λ_QCD; see also the discussion below on the angular dependence. The infrared enhancement of λ_1 to λ_4 with the gluon momentum is a direct consequence of using the Slavnov-Taylor identity (2.14) to rewrite the form factors. Indeed, as can be seen in Eqs. (7.2)–(7.5), all the form factors have, as a global factor, the ghost dressing function F(q²). The ghost dressing function is enhanced in the infrared, see Fig. 4, implying the increase of the λ_i as q → 0. The infrared enhancement of the form factors with the incoming quark momentum is more subtle. It is linked to the ansatz considered herein, which relies on the analysis of the soft gluon limit of the Landau-gauge lattice data for λ_1 performed in [29]. Indeed, that work identified a dependence of λ_1 on the gluon propagator, which was incorporated into our ansatz when the quark-ghost kernel form factors X_1 and X_3 were made proportional to Δ((p² + (p − q)²)/2). This term is crucial to have well-behaved kernels in the integral equations, i.e. to ensure that the Dyson-Schwinger equations are finite, and it introduces an additional dependence on the angle between the quark and gluon momenta. The gluon propagator is a decreasing function of its argument and, therefore, for given q and given angle between the quark and gluon momenta, the terms proportional to X_1 and X_3 increase as p decreases. This explains, in part, the observed enhancement of the quark-gluon form factors.

The quark-gluon form factors are functions of p, q and of the angle between the two vectors. Their dependence on the angle can be seen in Figs. 37–40. These figures also provide a clear picture of the maxima of the various form factors as functions of the gluon momentum. For λ_1 and λ_3 the maxima occur at q ≈ 300 MeV, while for λ_2 the maximum is at q ≈ 600 MeV. λ_4 seems to be a more complicated function of p, q and θ. Indeed, this form factor shows various maxima of the same order of magnitude for different p, q and θ values. All the form factors appear to be monotonically decreasing functions of the angle θ between the incoming quark and incoming gluon momenta. While the pattern of the q dependence of λ_1, λ_2 and λ_3 seems to be independent of θ, λ_4 seems to reverse its behaviour relative to the q-axis for θ ≳ π/3. Clearly, the maximum values for all the form factors occur for θ = 0, i.e. the quark-gluon vertex favours the kinematical configurations with small values of p and q and also of the angle between the quark and gluon momenta.¹ It follows that the quark-gluon vertex seems to favour low values of the quark and gluon momenta, with p and q preferably parallel vectors. From the point of view of the momentum dependence, our solution for the quark-gluon vertex is closer to that of the Maris-Tandy model [32] than to those computed in [17,22,23]. Recall that the Maris-Tandy model considers a single form factor, which would be equivalent to our λ_1, and ignores the dependence of the vertex on the quark momentum. In particular, for this model we also checked that the region where the quark-gluon form factors computed here are enhanced occurs essentially within the same range of momenta as for the corresponding form factor of the Maris-Tandy model.
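The kinematical origin of this preference can be made explicit with an elementary Euclidean identity (our own check, in the notation of the text, with k = p − q the outgoing quark momentum and θ the angle between p and q): the argument of the gluon propagator entering the ansatz is

$$
\frac{p^2 + (p-q)^2}{2} \;=\; p^2 + \frac{q^2}{2} - p\,q\cos\theta\,,
$$

which is smallest, and hence the decreasing function Δ largest, for small p, small q and cos θ = 1, i.e. precisely for the configurations with small momenta and parallel p and q favoured by the computed form factors.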
Note also that the maxima of the form factors computed in the present work occur for momenta where the kernels appearing in the original equations take their maximum values; see Figs. 6, 9 and 10.

It is difficult to measure the relative importance of the contributions of the longitudinal form factors λ_1–λ_4 to the quark-gluon vertex. However, an idea of their relative importance can be "measured" by looking at particular kinematical configurations. Herein we consider the soft quark limit, where the incoming quark momentum vanishes, and the totally symmetric limit, where p² = q² = k² and θ = 2π/3. The corresponding form factors, multiplied by appropriate powers of momenta to build dimensionless functions, can be seen in Fig. 41 (computed using the θ = 2π/3 data). If for the symmetric configuration the dominant form factor seems to be λ_1, for the soft quark limit that role is played by p λ_3. Note that the maximum of the latter is about 1.3 times larger than the maximum of the former. Curiously, the maxima of λ_1 and p λ_3 occur at exactly the same momentum scale p = 310 MeV. As the figure shows, the quark-gluon vertex seems to be dominated by λ_1 and λ_3, with the tensor structures associated with λ_2 and λ_4 playing a minor role.

Finally, let us consider the soft gluon limit, whose λ_1 form factor has recently been computed using lattice simulations [27]. The data were investigated in [29], revealing an important contribution to λ_1 linked with the gluon propagator. Our estimate of λ_1 in the soft gluon limit can be seen in Fig. 42; see the full curve in black. This curve was (arbitrarily) normalized to reproduce, at 1 GeV, the lattice data of the β = 5.29, M_π = 295 MeV simulation.² Clearly, our ansatz underestimates λ_1 in the infrared region. As discussed in [29], in the soft gluon limit, where p is the incoming quark momentum, λ_1 depends, in our notation, on the values of X_1(0) and X_3(0). Our solution has X_1(0) ≈ 0 GeV and X_3(0) ≈ 0 and, therefore, underestimates λ_1(p²) in the infrared region. Note that herein X_1 and X_3 are assumed to be functions of the gluon momentum only and, due to the integration over the gluon momentum q in the Dyson-Schwinger equations, these form factors are multiplied by q³; therefore, the inversion is probably not able to resolve X_1(q²) and X_3(q²) correctly in the deep infrared region. If in the calculation of the soft gluon limit one assumes that X_1(0) deviates from zero by a small quantity, the agreement with the lattice data is considerably improved both in the infrared and in the ultraviolet. This is represented by the two full curves in colour in Fig. 42, where X_1(0) is set to a small value. The coloured curves suggest X_1(0) ∼ 0.5–0.7 GeV. Further, the agreement in the ultraviolet region can also be improved if X_3(0) assumes small positive values; recall that X_3(q²) approaches zero from above when q² → 0, as can be seen in Fig. 34.

Summary and Conclusions

In this work we investigate the non-perturbative regime of the Landau-gauge quark-gluon vertex (QGV), taking into account only its longitudinal components and relying on lattice results for the quark, gluon and ghost propagators, together with exact continuum relations, namely a Slavnov-Taylor identity and the quark propagator Dyson-Schwinger equation. Furthermore, we incorporate the exact normalisation condition for the quark-ghost kernel form factor X_0 [17].
In addition, we take into account an empirical relation, checked against full-QCD lattice simulations [29], that links the gluon propagator and the soft gluon limit of the form factor λ_1. The full set of quark-ghost kernel tensor structures is taken into account to build an ansatz for the longitudinal quark-gluon vertex that is a function of the incoming quark momentum p, the incoming gluon momentum q, and the angle between p and q. The quark-ghost kernel requires four scalar form factors X_0, X_1, X_2, X_3 [35]. For the construction of the quark-ghost kernel, a perfect symmetry between incoming and outgoing quark momenta is assumed, which simplifies the description of the QGV in terms of X_0, X_1 = X_2 and X_3. Charge conjugation demands that in the soft gluon limit, defined by q = 0, λ_4 = 0, and our construction implements this constraint. It is noteworthy that our ansatz goes beyond the Ball-Chiu type of vertex [6] and includes it as a particular case, when X_1 = X_3 = 0 and X_0 = 1.

The Dyson-Schwinger equations are solved for the quark-gluon vertex written in terms of the unknown functions X_0, X_1 and X_3. From the point of view of the quark-ghost kernel form factors, these are linear integral equations. The corresponding mathematical problem is ill defined and requires regularization in order to obtain a meaningful solution. The original integral equations for the scalar and vector components of the quark gap equation are transformed into a linear system by performing the angular integrations and using Gauss-Legendre quadratures for the remaining integrations. In our approach we rely on Tikhonov linear regularization, which is equivalent to minimizing ||B − N X||² + ε||X||². The solutions are found numerically after writing the regularized linear system in its normal form. The small parameter ε is set by looking at the balance between the associated error in the Dyson-Schwinger equations, i.e. the difference between the l.h.s. and the r.h.s., ||B − N X||², and the norm of the corresponding quark-ghost form factors, i.e. ||X||², for each solution of the regularized linear system.

The resulting quark-gluon vertex form factors λ_1–λ_4 show a strong enhancement in the infrared region and deviate significantly from their tree-level values for quark and gluon momenta below ∼2 GeV. At high momentum the form factors approach their perturbative values. As far as the gluon momentum is concerned, the observed infrared enhancement of the QGV form factors can be traced back to the multiplicative contribution of the ghost dressing function introduced through the Slavnov-Taylor identity. Recall that this function is enhanced at q = 0 and, therefore, favours configurations in which the incoming and outgoing quark momenta are parallel. On the other hand, the infrared enhancement associated with the quark momentum is linked to the gluon-propagator dependence observed in the analysis of the soft gluon limit of the QGV and clearly favours small quark momentum p ∼ 0 and also p parallel to q; see Eqs. (4.21), (4.22) and (7.2)–(7.5) and, in particular, the argument appearing in the gluon propagator term. The maxima of the computed form factors are essentially at the maxima of X_0, X_1 and X_3 and appear for momenta p, q ∼ Λ_QCD, which again seems to set the appropriate non-perturbative momentum scale. Recall that the momentum scale comes from the use of lattice data for the propagators.
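The regularization procedure recapitulated above can be sketched compactly in code. The fragment below is a minimal illustration under toy assumptions: the kernel matrix N and the inhomogeneous term B are random placeholders rather than the actual angular-integrated gap-equation kernels. It builds a Gauss-Legendre momentum grid and solves the Tikhonov normal equations (NᵀN + εI)X = NᵀB for several ε, printing the residual/norm balance used to select the regularization parameter.

```python
import numpy as np

# Gauss-Legendre nodes and weights for the momentum integration on [0, cutoff]
npts, cutoff = 64, 10.0                       # toy grid size and UV cutoff (GeV)
x, w = np.polynomial.legendre.leggauss(npts)
q = 0.5 * cutoff * (x + 1.0)                  # map nodes from [-1, 1] to [0, cutoff]
wq = 0.5 * cutoff * w                         # rescaled quadrature weights

# Toy stand-ins for the discretized kernel and inhomogeneous term; in the
# actual problem these follow from the angular-integrated DSE kernels.
rng = np.random.default_rng(0)
N = rng.normal(size=(npts, npts)) * wq        # kernel values times weights
B = rng.normal(size=npts)

# Tikhonov regularization: minimize ||B - N X||^2 + eps ||X||^2 by solving
# the normal equations (N^T N + eps I) X = N^T B.
for eps in (1e-6, 1e-4, 1e-2):
    X = np.linalg.solve(N.T @ N + eps * np.eye(npts), N.T @ B)
    residual = np.linalg.norm(B - N @ X)      # error in the original equation
    norm = np.linalg.norm(X)                  # size of the solution
    print(f"eps={eps:.0e}  residual={residual:.3e}  ||X||={norm:.3e}")
```

In practice ε is chosen where further decreasing the residual starts to inflate ||X||, mirroring the balance between the two terms described above.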
Further, we find that the quark-gluon vertex is dominated by the form factors associated with the tree-level vertex γ^μ and with the scalar structure 2p^μ + q^μ, with the higher-rank tensor structures giving small contributions. Overall, our findings are in qualitative agreement with previous works, both with phenomenological approaches, as in the case of the Maris-Tandy vertex [32], and with those based on first-principles ab initio continuum methods; see e.g. [23] and references therein.

The high-momentum behaviour of the quark-gluon vertex form factors reproduces their perturbative values. However, the matching between the computed form factors and their perturbative tails is not yet implemented. In addition, we verified that in the soft gluon limit λ_1 is not able to reproduce quantitatively the lattice data from full-QCD simulations, apart from the qualitative momentum behaviour. This can be traced back to the poor resolution of the kernel in the deep infrared region, due to the q³ factor coming from the momentum integration. As we have verified, a small tuning of X_1 and X_3 at q = 0 is enough to reproduce the soft gluon limit lattice data within the present framework. These two challenging problems, together with the inclusion of the transverse part of the vertex, call for an improvement of the approach devised herein and are to be tackled in future work. Despite that, we expect that the present results can help in understanding the non-perturbative dynamics of quarks and gluons in the infrared region and can motivate further applications to the study of hadron phenomenology based on quantum field theoretical approaches, such as those using Bethe-Salpeter and/or Faddeev equations; see e.g. [45] and references therein.

A 4D Spherical Coordinates and integration over momentum

In 4D the spherical coordinates are related to the cartesian coordinates in the standard way,

q_1 = q sinθ sinφ cosψ ,  q_2 = q sinθ sinφ sinψ ,  q_3 = q sinθ cosφ ,  q_4 = q cosθ ,

so that the integration over momentum reads

∫ d⁴q = ∫₀^Λ dq q³ ∫₀^π dθ sin²θ ∫₀^π dφ sinφ ∫₀^{2π} dψ ,

where Λ stands for the cutoff introduced to regulate the theory.

B Comparing Propagator Fits with Previous Works

For completeness, and in order to allow for a better comparison of the results of the current work with those reported in [28], we provide the fits used in both works for the gluon propagator, the ghost propagator and the quark wave function, with the curves renormalized at μ = 4.3 GeV within the MOM scheme. In Fig. 43 the curves referred to as JHEP are those of [28], while those designated as NEW are the curves mentioned in Secs. 5.1 and 5.2. As the figure shows, there are differences between the two sets of curves, not only in the infrared region but also in the running at high momentum.
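As a quick numerical cross-check of the measure in Appendix A (our own verification, not part of the paper), integrating the constant function 1 with this measure must reproduce the volume π²Λ⁴/2 of a 4-ball of radius Λ:

```python
import numpy as np
from scipy.integrate import quad

Lam = 1.0
radial, _ = quad(lambda qq: qq**3, 0.0, Lam)        # ∫ dq q^3 = Λ^4/4
ang1, _ = quad(lambda t: np.sin(t)**2, 0.0, np.pi)  # ∫ dθ sin²θ = π/2
ang2, _ = quad(np.sin, 0.0, np.pi)                  # ∫ dφ sinφ = 2
volume = radial * ang1 * ang2 * 2.0 * np.pi         # ψ integration gives 2π
print(volume, np.pi**2 * Lam**4 / 2)                # both ≈ 4.9348
```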
2019-01-29T09:07:34.000Z
2018-07-26T00:00:00.000
{ "year": 2019, "sha1": "421540ab958b0ad22d9ef5a0e041dfe7c0a548d0", "oa_license": "CCBY", "oa_url": "https://link.springer.com/content/pdf/10.1140/epjc/s10052-019-6617-7.pdf", "oa_status": "GOLD", "pdf_src": "Arxiv", "pdf_hash": "89240b8818d90c0befc2bcd104a635f77dd18d68", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics" ] }
267966922
pes2o/s2orc
v3-fos-license
Discontinuation of affirmative action: Consequences for black educational equity, neurosurgical residency, and medical diversity, with consideration of potential adversity as a new path forward

Background The underrepresentation of the Black community in neurosurgery is concerning, especially given projections that racial minorities will become the majority in the U.S. by 2044. Yet, despite these forecasts, Black candidates make up less than 4% of those in neurosurgical training programs. The recent Supreme Court decision to end Affirmative Action underscores the urgency of addressing this disparity. This research delves into the implications of eliminating Affirmative Action for neurosurgery admissions and residencies.

Methods A comprehensive literature search was performed using PubMed, OVID Embase, and OVID Medline, employing the keywords "Black", "Neurosurgery", and "Residency". The Maslow Adversity Index (MAI) was created to integrate adversity as a factor in neurosurgery residency evaluation.

Results After Affirmative Action, Black college enrollment increased, peaking at 36% by 2020. However, Black medical students remain underrepresented in neurosurgery residencies. ALDC (Athletes, Legacies, Dean's List, Children of faculty/staff) admissions criteria favor White students. Furthermore, studies have highlighted the beneficial impacts of racial concordance on patient outcomes. The end of Affirmative Action necessitates new diversity strategies in admissions. A points-based assessment, inspired by Maslow's hierarchy, recognizes adversities faced by underrepresented applicants and could help residency programs enhance diversity, inclusivity, and equity in selection.

Conclusion Despite the growth in Black college attendance, disparities persist in specialized medical fields like neurosurgery. The end of Affirmative Action policies might exacerbate these disparities. Embracing holistic admission approaches rooted in Maslow's hierarchy is key for inclusive representation, with impacts on education, the professions, and health outcomes.

Introduction

The Black community has consistently grappled with challenges of inequality, particularly the lack of equitable representation in medical subspecialties such as neurosurgery [18]. This deficiency in representation has persisted over time, emphasizing the pressing need for universities to address and rectify this disparity. In the wake of the 2020 Black Lives Matter movement, academic and medical institutions globally initiated introspective evaluations, leading to the restructuring of their practices to better embody principles of equity and anti-racism, as epitomized by recent initiatives [7]. However, despite this progressive global stance, a pivotal decision on June 29, 2023, by the Supreme Court signaled the conclusion of Affirmative Action: policies and legal measures designed to redress past and prevent future discrimination, ensuring that individuals are not denied opportunities based on race, ethnicity, religion, nationality, age, gender, or disability. This verdict emerged from lawsuits filed against Harvard University and the University of North Carolina challenging their race-conscious admission strategies [13]. Historically, institutions have judiciously weighed various factors, including race, in their admissions processes [13].
The intent of this paper is to elucidate the broader implications of this decision and offer a strategic framework to guide medical school admissions and neurosurgical residency programs in supporting historically marginalized populations.

Looking ahead to 2044, projections suggest that racial minority groups will account for a majority of the U.S. population [12]. Despite these demographic changes, disparities persist, especially in specialized medical fields such as neurosurgery. Studies indicate a notable interest in neurosurgery among Black medical students [3]. However, these Black medical student candidates secure neurosurgical residencies at rates lower than their non-Black peers [14]. Furthermore, Black residents constitute less than 4% of participants in neurosurgical training programs [3]. Concerns arise that removing or reducing the emphasis on race and ethnicity in admissions could impact the efforts of medical institutions to maintain diverse student cohorts and might influence the pool of minority candidates considering advanced medical training [13].

This study is grounded in an exhaustive literature review examining the disparities faced by Black individuals in the field of neurosurgery. The findings reveal inequality that disproportionately affects Black medical school candidates interested in this specialization. Prompted by the discussions surrounding the elimination of Affirmative Action, this research endeavors to assess its potential impact. A comprehensive analysis of relevant sources underscores the existing challenges encountered by Black medical students aspiring to pursue neurosurgery. As such, the primary aim of this investigation is to harness insights from the literature to guide neurosurgery residency committees and medical school admission committees in ways to effectively support Black students in their academic and professional pursuits.

Methods

A literature review was conducted to investigate the current corpus of research pertaining to Black residents in neurosurgery. This thorough investigation involved searching the PubMed, OVID Embase, and OVID Medline databases. The search duration encompassed the period from the establishment of each individual database up until September 2023. The primary search strategy employed the keywords "Black", "Neurosurgery" and "Residency" to retrieve relevant studies and articles. The term "Black" was selected to inclusively represent individuals with darker skin tones, acknowledging the diversity within this demographic, which encompasses a global population beyond the singular identity of African American. The central aim of this study is to critically analyze the experiences and the systemic challenges faced by the Black community, specifically pertaining to their underrepresentation in neurosurgery residency programs.

Additional articles were selected outside the database search criteria, and supplementary searches were carried out on governmental and nonprofit websites. These sources included The White House, the Postsecondary National Policy Institute, the National Bureau of Economic Research, and the Association of American Medical Colleges.

Moreover, a computational psychological model was developed based on Maslow's Hierarchy of Needs. This model incorporates an equation, shown in Fig. 1, specifically designed to measure diversity scores.
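The exact expression is the one displayed in the paper's Fig. 1 and is not reproduced here; a plausible weighted-sum form consistent with the surrounding description (the structure below is our illustration, not the authors' published formula) is

$$
\mathrm{MAI} \;=\; \sum_{\ell=1}^{5} w_\ell\, s_\ell\,,
$$

where ℓ runs over the five Maslow levels (physiological, safety, love and belonging, esteem, self-actualization), s_ℓ ∈ [0, 1] quantifies the adversity reported at level ℓ, and the weights w_ℓ sum to the 100-point total used in the point-based assessment described later.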
Results

Comprehensive residency adversity score

The MAI equation quantitatively evaluates the fulfillment of each level of needs, assigning weighted values to different aspects of Maslow's hierarchy. The model thus provides a numerical score that reflects the diversity in the satisfaction of these needs, enabling a more nuanced analysis of psychological states in varying contexts.

Table 1 compiles data from various academic studies focusing on Black academic inequality across college, medical school and residency. The initial reference derives from "AFFIRMATIVE ACTION: HISTORY AND RATIONALE" and "Black Students in Higher Education" [1,5]. As depicted in Fig. 2, following the inception of affirmative action, there was a noticeable surge in Black college enrollment from the 1970s, peaking at 36% by 2020 [1,5]. Arcidiacono et al's research points out that White ALDC students enjoy specific benefits during the admissions process: legacy students see a fivefold spike in admission rates, dean's list students experience a sevenfold rise, and recruited athletes are highly likely to be accepted [2]. Concurrently, Bhutta et al's 2019 SCF emphasizes that significant racial wealth gaps persist, with White households holding roughly eight times the wealth of Black households [4].

Furthermore, Maqsood et al, 2021 also provided insights. In the broader context, US Census Bureau data underscore that minority groups, like the Black community, are undergoing swift population growth [16]. As of 2015, nearly half of American children below 18 years are part of ethnic minorities, and the foreign-born population more than doubled from 1990, constituting a third of the overall population increase [16]. Gabriel et al, 2021, elaborate on changes projected for 2044 in the United States, when racial minorities are expected to become the majority [12]. Figs. 3 and 4, adapted from the work of Gabriel et al, present an analysis of racial disparities in neurosurgery applications and residencies over a nine-year period from 2009 to 2018. Fig. 3 illustrates a significant disproportion in the number of White versus Black medical students applying for neurosurgery. Specifically, the data reveal a consistently higher number of White applicants compared to their Black counterparts. Furthermore, there is a noticeable downward trend in Black medical student applicants: from 9% in 2009 to 7% in 2015, and further declining to 5% in 2018. In contrast, the proportion of White applicants shows more fluctuation: 53% in 2009, decreasing to 46% in 2015, and then to 43% in 2018. Fig. 4 focuses on the racial composition of neurosurgery residencies. Over the same nine-year span, the percentage of Black residents in neurosurgery programs remained relatively constant, hovering around 5%, indicating a persistent lack of growth in representation in this specialty.

Moreover, Table 1 provides a comprehensive overview of studies examining inequalities faced by Black medical students. Persad-Paisley et al 2022 observed that from 2012 to 2020, applications to neurosurgery programs from Black medical students decreased, while the rates from White students remained unchanged [18]. Building on this, Barrie et al 2022 underscore a pronounced disparity: while Black medical graduates frequently express interest in specialized fields like neurosurgery, their actual representation in these programs is starkly lower. This suggests potential systemic barriers hindering minority students in medicine [3].
Fig. 1. Illustration of the 'Maslow Adversity Index (MAI)', a quantitative measure designed for evaluating adversity in neurosurgery applicants.

Table 1 (excerpts). Firebaugh et al, (2016): The study shows a decrease in neighborhood poverty disparity among Black Americans from 1980 to 2010 compared to other racial groups, but highlights that Black Americans still face higher overall rates of poverty. Lee et al, (2018): The main idea of the research is that discrimination exposure, particularly in predominantly White neighborhoods, leads to altered cortisol levels in African American emerging adults, indicating a link between neighborhood racial composition and stress-related health impacts.

Furthering this narrative, Kabangu et al 2023 reported that Black medical students applying for residency experience a lower match rate in neurosurgery compared to their non-Black counterparts [14]. Addressing a broader context, Charles et al 2023 noted that even though the percentage of Black neurosurgical residents in the U.S. was minimal in 2019, the subsequent rise of the Black Lives Matter movement in 2020 influenced universities to prioritize diversity. This shift is rooted in the understanding that diverse medical teams enhance both patient care and research [7].

In a 2023 study, Hamilton and colleagues delved into the Supreme Court's examination of race-conscious admissions policies at notable institutions like Harvard and the University of North Carolina back in October 2022. The Court's potential rulings at that time were seen as jeopardizing these policies, with broader ramifications for diversity across different sectors [13]. Following this examination, the Supreme Court, by June 2023, decided to terminate Affirmative Action.

The next articles in Table 1 discuss the benefits of diversity and strategies to further diversify the educational system. The paper titled "Do Black patients fare better with Black doctors?" highlights the influence of racial concordance between doctors and patients on patient satisfaction, comprehension, adherence to medical advice, and overall health outcomes [8]. For instance, in 2018, Duff-Brown discusses how African American doctors can reduce cardiovascular mortality among Black men by 19% and how Black patients are 29% more inclined to discuss health concerns and consent to advanced screenings when treated by Black doctors [9]. Moreover, in a 2022 study by Nelson and colleagues, the findings of the Kaiser Family Foundation's survey reveal a perception gap regarding racial bias in healthcare. While 29% of physicians acknowledge the existence of racial bias, the general population perceives race as affecting health care at a rate of 47%. This discrepancy is stark between White and Black physicians, with a mere 4% of White physicians recognizing frequent racial bias compared to 41% of Black physicians. The consequences of this limited racial diversity in healthcare professions are significant and profound [17].

Building on these insights, the concluding article, FACT SHEET: The Current Administration Announces Actions to Promote Educational Opportunity and Diversity in Colleges and Universities, details how the White House is striving to ensure lawful, diverse, and inclusive college admissions practices [10].
The White House's Administration is doing this by promoting lawful admissions practices and by valuing students' resilience in the face of adversity, aiming to sustain a diverse workforce. Such diversity is essential for culturally competent care in health care and improves health outcomes by ensuring that healthcare providers can effectively address the varied needs of all patient populations.

The persistence of a racial gap in healthcare perceptions and practices can be further compounded by systemic racism, leading to discrimination and bias that negatively impact patient experiences and health outcomes. Williams et al, (2019) in Table 1 found that the critical cultural competence that racial concordance provides goes beyond a mere soft skill; it becomes a significant determinant of health outcomes. The lack of diversity among healthcare providers not only restricts the scope for culturally sensitive care but also perpetuates the structural barriers that foster health disparities. Implicit biases in clinical settings can lead to substandard medical care and poorer communication, while stereotype threats and internalized racism may degrade patient trust and adherence to medical advice, worsening health outcomes. It is imperative to enhance diversity in the medical workforce to counteract the entrenched biases and systemic obstacles that compromise minority health [19].

The recent literature on socioeconomic barriers and disparities faced by Black Americans highlights several critical issues. Research conducted in 2015 by Braveman and colleagues draws attention to the persistently higher rates of preterm birth among Black populations compared to White Americans. This disparity is not just a medical concern but is deeply intertwined with broader socioeconomic factors. Factors such as income, wealth, education, and neighborhood characteristics, including poverty rates, unemployment, segregation, and crime, are identified as significant contributors to this health issue, underscoring the complex interplay between socioeconomic status and health outcomes [6]. Additionally, in a 2016 study by Firebaugh et al, the focus shifts to the spatial dimensions of racial disparities. Their findings reveal that in metropolitan areas, Black residents are disproportionately likely to live in neighborhoods with extreme poverty, defined as areas where the poverty rate exceeds 40%. This concentration in high-poverty neighborhoods has far-reaching consequences, limiting access to quality education, healthcare, employment opportunities, and robust social networks, all of which are crucial for economic and social mobility [11]. Furthermore, the research by Lee and colleagues adds another dimension to this discussion by examining the physiological impacts of these socioeconomic disparities. They report that Black individuals living in predominantly White communities exhibit higher levels of cortisol, a stress hormone. This finding is significant because it links the experience of living in a racially incongruent community to tangible health impacts, suggesting that the stress of such environments may contribute to the overall health disparities observed in the Black population [15].

The prevailing research highlights the complex interplay of racial disparities within the United States, delineating a web of socioeconomic, spatial, and physiological elements that shape the health and welfare of Black communities.
Figs. 4 and 5 integrate the MAI with Maslow's Hierarchy of Needs to guide neurosurgery residency interview committees. This framework helps in recognizing the disparities that disproportionately impact marginalized groups. According to Maslow's theory, satisfying fundamental needs is essential before addressing higher-level aspirations. The hierarchy is structured from the base upwards, including physiological, safety, love and belonging, esteem, and self-actualization needs.

The NEURO-ASCEND (Neurological Applicant Scoring Criteria Embracing Neurosurgery Diversity) Framework, which is part of the broader MAI (Fig. 1), can be utilized by residency committees by formulating questions that delve into each tier of needs during interviews, as illustrated in Fig. 5. Moreover, a weighted score is attributed to each level, reflecting its importance in the individual's development, as depicted in Fig. 6. This methodology empowers committees to quantify adversity through a cumulative score, as demonstrated in Fig. 7, thereby enabling a more equitable evaluation of candidates from diverse backgrounds.

Discussion

The United States holds the distinction of being the world's third most populous nation. Intriguingly, recent data suggest rapid demographic shifts, with ethnic minority children now representing half of the under-18 population [16]. In this context, the influence of Affirmative Action policies in expanding educational access for groups that have historically faced marginalization becomes critical to explore. This impact is vividly illustrated by the significant uptick in college enrollments among Black students. As a testament to these policies, there has been a transformative increase in their college attendance rates: starting from 4.9% in 1955 and reaching 36% by 2020, as seen in Fig. 2 [1,5]. While college attendance among Black students has risen since the implementation of affirmative action in the 1960s, they still trail behind their White counterparts, particularly in specialized fields where challenges persist. For instance, over the last decade, efforts to achieve equitable representation in academic neurosurgery have seen only moderate success [3]. African-Americans remain underrepresented, making up less than 4% of neurosurgery training programs [3]. Interestingly, data indicate that Black medical students in 2012 were 18% more likely to apply to neurosurgery residencies than their White counterparts, though their absolute numbers remain low [18]. Given the projected demographic shift by 2044, where racial minorities are anticipated to become the majority, addressing these academic disparities is of paramount importance [12].

The cessation of Affirmative Action policies poses a significant threat to the already fragile representation of diverse backgrounds in medical institutions. Such a step could have far-reaching consequences, detrimentally affecting Black pre-medical students and amplifying health disparities for communities of color across the nation [13]. Additionally, Fig. 3 shows that between 2012 and 2018, there was a notable decline in the number of Black medical students applying for neurosurgery residencies, whereas the rate for White medical students remained disproportionately much higher [18]. Over the past nine years, even with more residency slots created, the percentage of Black neurosurgery residents has not increased, suggesting persistent systemic barriers to racial equity, as shown in Fig. 4 [12]. Without Affirmative Action in place, these systemic biases may intensify, widening the gap in university admissions.
Fig. 6. Point-Based Assessment Criteria for Neurosurgery Residency Applicants Across Maslow's Hierarchy. Outlines a structured scoring system, assigning weighted points to each level of Maslow's hierarchy, to objectively quantify the diverse challenges encountered by neurosurgery residency candidates.

This decline in university admissions for Black students, exacerbated by the absence of Affirmative Action, could directly contribute to the dwindling pool of Black applicants for neurosurgery residencies, further aggravating the discrepancy in match rates. Such a trend not only diminishes the diversity of the neurosurgical workforce but also indirectly leads to inferior patient care outcomes, as diverse medical teams have been shown to better understand and address the unique needs of a multicultural patient base.

The lawsuit "Students For Fair Admissions v. Harvard University" offers insightful revelations about the pivotal role of Affirmative Action in university admissions. The lawsuit illuminated the embedded biases in Harvard's admissions criteria. It was discovered that ALDCs received a disproportionate advantage: over 43% of the admitted White students fell into one of these ALDC categories, in stark contrast to less than 16% of admitted African American, Asian American, and Hispanic students [2]. Furthermore, studies have indicated that removing preferences for athletes and legacies would lead to a marked change in the racial makeup of the admitted cohort: the share of White students would dwindle, while figures for other racial groups would either ascend or stay consistent [13].

These findings underscore the importance of considering factors like Affirmative Action in promoting diversity in academia. Such policies aim to provide opportunities for traditionally marginalized racial and ethnic communities to access prestigious institutions and specialized programs like neurosurgery. It is important to acknowledge the significant underrepresentation of Black individuals among practicing neurosurgeons, a phenomenon that may be influenced, at least in part, by the disproportionate impact of the social determinants of health on access to medical education and training. This imbalance becomes even more apparent when we consider that the average White family possesses approximately eight times the wealth of a typical Black family [4].

Significant financial disparities present substantial challenges for aspiring doctors from economically disadvantaged backgrounds, and Black individuals often encounter particularly formidable obstacles. These challenges can begin early in life, with Black infants experiencing higher rates of mortality, lower birth weights, and preterm births [6]. Moreover, Black Americans are more likely than their counterparts to reside in poverty-stricken neighborhoods, where they face inadequate access to quality education, healthcare, job opportunities, and social networks [11]. Systemic racism has also been associated with increased cortisol levels in African Americans, further highlighting the enduring effects of these disparities. It is important to note that these facts do not imply racial inferiority but rather point to systemic issues that need to be addressed [15].
The enduring disparities in starting points, affecting individuals from various backgrounds, continue to shape their life trajectories, resulting in an uneven playing field that presents greater challenges for those aspiring to pursue a career in neurosurgery. The hurdles faced by medical students, regardless of their racial background, extend into critical aspects of neurosurgical training. Affordability becomes a concern when considering away rotations and the associated living expenses, which are crucial for gaining exposure to the field. Additionally, the cost of traveling for interviews can be a significant barrier. These economic challenges are exacerbated for students who may seek to enhance their residency applications with a research gap year, a strategy often out of reach for those without adequate financial support.

Maslow's Hierarchy of Needs offers a valuable framework for understanding how these disparities impact the journeys of aspiring neurosurgeons. A medical student in this context may be contending with unmet physiological and safety needs, while also grappling with issues such as food insecurity, living in disadvantaged neighborhoods, facing transportation challenges, and coping with discrimination. They may not have the support of a loving family and may even be using their student loans to provide for their own families. In contrast, a student from a more privileged background may not face the same physiological needs, enjoying food security, residing in a more favorable neighborhood, and benefiting from a supportive family that can provide financial assistance for study materials and access to additional resources for exams.

To address these disparities and foster diversity in the field of neurosurgery, our team has devised a point-based assessment model for neurosurgery residency committees to utilize during candidate evaluations and interviews. As depicted in Fig. 5, this model involves querying candidates about their fulfillment across various levels of Maslow's Hierarchy of Needs, commencing with physiological and safety needs and progressing through love and belonging, esteem, and self-actualization. Fig. 6 illustrates the alignment of each hierarchy level with a set of questions, weighted using the MAI, where point allocations range up to 100.

Fig. 7. Adversity Score Classification for Neurosurgery Residency Applicants. This table categorizes applicants based on adversity scores, detailing the extent of challenges faced within Maslow's hierarchy of needs, from no adversity to extreme adversity, to provide a nuanced perspective on each candidate's journey.

Fig. 7 underscores that a higher Maslow score serves as an indicator prompting the admission committee to consider the candidate's life circumstances and the challenges they have encountered when rendering their judgment. This approach seeks to level the playing field and provide equitable opportunities for diverse students aspiring to pursue careers in fields like neurosurgery.
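As a concrete illustration of how such a point-based assessment could be operationalized, the sketch below computes a weighted 100-point Maslow score and maps it to an adversity band. The weights, example responses and band thresholds are hypothetical placeholders; the paper's actual allocations and classification appear in its Figs. 6 and 7.

```python
# Hypothetical weights per Maslow level, summing to 100 points in total;
# the paper's actual allocations are given in its Fig. 6.
WEIGHTS = {
    "physiological": 30,
    "safety": 25,
    "love_belonging": 20,
    "esteem": 15,
    "self_actualization": 10,
}

# Hypothetical adversity bands; the paper's classification appears in its Fig. 7.
BANDS = [(0, "no adversity"), (20, "mild"), (40, "moderate"),
         (60, "significant"), (80, "extreme")]

def mai_score(level_scores: dict[str, float]) -> float:
    """Weighted sum of per-level adversity scores, each clipped to [0, 1]."""
    return sum(WEIGHTS[level] * min(max(s, 0.0), 1.0)
               for level, s in level_scores.items())

def adversity_band(score: float) -> str:
    """Map a 0-100 MAI score to a qualitative adversity band."""
    label = BANDS[0][1]
    for threshold, name in BANDS:
        if score >= threshold:
            label = name
    return label

# Example applicant reporting high unmet physiological and safety needs
applicant = {"physiological": 0.9, "safety": 0.8, "love_belonging": 0.4,
             "esteem": 0.3, "self_actualization": 0.2}
score = mai_score(applicant)
print(f"MAI = {score:.1f} -> {adversity_band(score)}")  # MAI = 61.5 -> significant
```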
The validation of the MAI in both single-center and multicenter studies is proposed in Fig. 1. This initiative aims to develop the NEURO-ASCEND (Neurological Applicant Scoring Criteria Embracing Neurosurgery Diversity) Framework, a potential tool for establishing an objective measurement system that could supplant the need for affirmative action in fostering equity. It is imperative for faculty, staff, and the wider scientific community to endorse a holistic approach in the assessment of candidates during admissions processes. Admissions committees should broaden their evaluation criteria beyond traditional metrics such as grades and standardized test scores, placing greater emphasis on personal attributes and the challenges applicants have overcome [13].

This expanded approach necessitates an acknowledgment of prevalent health disparities and societal inequities. For example, consideration of an applicant's ZIP code could be integral, as those from economically disadvantaged areas often confront more significant educational, environmental, and health-related hurdles than those from affluent locales. Additional factors worth considering might include an applicant's dependence on public aid programs like the Supplemental Nutrition Assistance Program or Medicaid/Medicare, or their uninsured status. This enriched perspective reshapes the conventional understanding of 'excellence' [13]. While these strategies do not directly replace Affirmative Action, they pave a novel pathway towards preserving diversity and broadening the scope of excellence in higher education. Recognizing and valuing these socioeconomic barriers is a crucial step in further diversifying the academic landscape.

Additionally, a failure to prioritize diversity within the healthcare system could lead to suboptimal patient care and satisfaction. Studies have shown that patients tend to give higher ratings to physicians of the same racial or ethnic background [13]. Furthermore, research indicates that Black patients often feel more comfortable discussing health concerns and are more likely to consent to advanced screenings when cared for by Black physicians [9]. This enhanced willingness to engage in healthcare discussions and procedures may stem from shared cultural experiences and a deeper sense of relatability between the patient and the doctor. In contrast, while 29% of doctors acknowledge the existence of racial bias in healthcare, patients perceive the influence of race on health outcomes at a higher rate [17]. Implicit biases in healthcare, which may not always be overtly recognized by providers, can lead to inferior medical care and impede effective communication. This discrepancy suggests that patients might be hesitant to fully engage, or might feel judged, in healthcare settings that lack racial and cultural representation, potentially leading to a reluctance to share vital health information or needs and perpetuating the cycle of health care disparities [19]. In counties with fewer Black primary care doctors, the life expectancy of Black inhabitants may not be as long as in areas with more Black primary care doctors [8]. A lack of diversity in medical school student bodies might limit the comfort and effectiveness of future physicians when treating diverse patient demographics [13].
In light of the recent Supreme Court verdict abolishing Affirmative Action in higher education, it is imperative that a new course be charted by the White House that integrates consideration of adversity into the admissions processes of higher education institutions. This forward-thinking method is aimed at providing guidance on legally permissible ways to sustain a diverse student body. In support of this vision, a National Summit on Educational Opportunity is to be launched [10].

To fortify this initiative, the Department of Education, in collaboration with the Department of Justice, is preparing a comprehensive report that will outline best practices and policy guidelines. This report will pinpoint strategies to boost diversity and enrich educational opportunities in tertiary institutions. At the heart of these strategies is the intent to thoroughly weave adversity considerations into the admissions matrix. Moreover, there is a push for greater transparency in college admissions and enrollment procedures. The overarching objective is for states to be enabled to harness data in crafting programs that effectively reach out to historically underrepresented groups. It is essential for these processes to reflect upon socioeconomic challenges, especially in light of the persistent racial and ethnic wealth divides in the United States [10].

In a nation where demographic shifts highlight a burgeoning ethnic minority, the decisions that shape our educational landscape have never been more critical. The abolishment of Affirmative Action has set into motion a series of contemplative ripples, urging stakeholders to evaluate not only the spirit of diversity but also its tangible effects in our institutions. The intricacies of these decisions extend beyond mere enrollment, reaching into the very heart of the medical world, where representation can directly influence patient care, satisfaction, and even life expectancy. As America stands at this educational and societal crossroads, it becomes evident that diversity isn't just a policy checkbox but a critical facet of our shared progress. The steps taken by administrations, institutions, and residency programs in weaving adversity considerations, championing holistic methodologies, and reshaping traditional notions of excellence will signal a commitment to a more inclusive, comprehensive, and equitable future. While the tools may evolve, the enduring goal remains unchanged: creating an academic and professional realm where every background finds voice, representation, and opportunity.

Fig. 2. Evolution of Black Student College Enrollment from 1955 to 2020. A chart depicting the percentage of Black students entering college over the decades. This graph was modified from "Affirmative Action: History and Rationale" and "Black Students in Higher Education" [1,5].

Fig. 4. Racial Composition in Neurosurgery Residency (2009-2018). A graph showing the consistent underrepresentation of Black residents in neurosurgery residencies compared to White residents across a decade. This graph was modified from "Diversity in Neurosurgery: Trends in Gender and Racial/Ethnic Representation Among Applicants and Residents from U.S. Neurological Surgery Residency Programs" [12].
Fig. 5. Assessment of Neurosurgery Candidates Based on Maslow's Hierarchy of Needs. This model illustrates a tiered assessment framework based on Maslow's Hierarchy of Needs, designed to evaluate neurosurgery residency applicants on personal growth, community contribution, and adversities faced.

Table 1 (continued). Black inequalities in neurosurgery.

Arcidiacono et al: Legacy applicants fare about five times better than the standard 10% admission rate. If they are on the dean's special list, their probability surpasses seven times that baseline rate. And if they are sought-after athletes, they are almost assured an admission. Yet, interestingly, when looking at those who were admitted because of their ALDC status, only one-fourth of them would have been admitted if evaluated without the ALDC advantage.

Bhutta et al, (2020): The 2019 Survey of Consumer Finances (SCF) indicates persistent wealth disparities among racial and ethnic groups, consistent with 2016 findings. Specifically, the median wealth of White families is approximately eight times that of Black families.

Maqsood et al, (2021): US Census Bureau data show that minority groups are experiencing rapid population growth, with half of American children under 18 being from ethnic minorities. From 1990 to 2015, the number of foreign-born residents more than doubled, accounting for a third of the total population growth.

Gabriel et al, (2021): By 2044, racial minorities will become the majority in the U.S., and there is a growing emphasis on promoting diversity in the medical field, covering aspects of gender, race, and ethnicity.

Charles et al, (2023): In 2019, only 4.95% of neurosurgical residents in the U.S. were Black. However, in 2020, the Black Lives Matter movement prompted universities to emphasize fairness, given that diverse medical groups lead to better patient outcomes and research advancements.
2024-02-27T16:03:00.128Z
2024-02-01T00:00:00.000
{ "year": 2024, "sha1": "50445fbccf5dc1384c9ab6b00f53218f433c69f6", "oa_license": "CCBY", "oa_url": "https://doi.org/10.1016/j.wnsx.2024.100339", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "76386fe37cba72fb121d3b36e889e217e47f3ead", "s2fieldsofstudy": [ "Medicine", "Political Science", "Sociology", "Education" ], "extfieldsofstudy": [ "Medicine" ] }
256274918
pes2o/s2orc
v3-fos-license
Extended geometry of magical supergravities

We provide, through the framework of extended geometry, a geometrisation of the duality symmetries appearing in magical supergravities. A new ingredient is the general formulation of extended geometry with a structure group of non-split real form. A simple diagrammatic rule for solving the section constraint by inspection of the Satake diagram is derived.

Introduction and summary

There exists a special class of supergravity theories in d = 3, 4, 5, 6, known as magical supergravities [1-3], whose symmetries are associated with the remarkable geometries of the magic square of Freudenthal, Rozenfeld and Tits [4,5]. The scalar manifolds arising in all magical supergravities are displayed in Table 1. The magical theories in d = 6 are parent theories from which all magical supergravities in d = 3, 4, 5 can be obtained by dimensional reduction. The geometries arising in d = 3, 4, 5 [1] were later referred to as very special quaternionic Kähler, very special Kähler and very special real, respectively. See ref. [6] for a review.

The use of extended geometry as a means to provide a geometric origin of duality symmetries in string theory and M-theory is well established; see e.g. the refs. on exceptional geometry and refs. [31-35] for the general framework. The duality symmetries, traditionally arising as an enhancement after dimensional reduction, then become present in the unreduced models, not as global symmetries, but as structure groups of generalised diffeomorphisms. The present letter aims to fill a gap in the formalism, namely to deal with and interpret duality groups/structure groups of non-split real form. Our main application will be the bosonic sector of the (ungauged) magical supergravities, but the method is generic and can be applied to other models. We will thus provide a "geometrisation" of the duality symmetries appearing in the magic square. The groups appear as structure groups of extended geometries for different splits of the 6 dimensions into n "internal" and d "external" directions, without dimensional reduction.

A brief recapitulation of magical supergravities is given in Section 2. In Section 3, we recall some basic properties of real forms and Satake diagrams, and also discuss real forms of tensor hierarchy algebras. The latter are used to identify the bosonic fields. Section 4 is devoted to the actual construction of the extended geometry, which mimics the formulation of exceptional geometry for D = 11 supergravity, and to the solution of the section constraint.

When a magical supergravity is dimensionally reduced to d < 6 dimensions, the symmetry is further enhanced, leading to the groups in Table 1, forming a magic square of Lie groups [4,5]. The table may in principle be continued with infinite-dimensional algebras to the right, with a d = 2 column containing affine extensions of the algebras in the d = 3 column, over-extended Kac-Moody algebras in a d = 1 column, etc. The d = 5 groups are the structure groups of the Jordan algebras J_3(K_ν) of hermitean 3 × 3 matrices, and the d = 4 groups the conformal groups of the same algebras. Note that the group Spin(1,9) occurring in the octonionic magical supergravity is another real form of Spin(10) than Spin(5,5), the U-duality group for D = 11 supergravity reduced to d = 6, and that the modules of the 1-form and 2-form potentials also are "the same" in the two cases.

In Section 3.2, we will see how these real algebras and modules appear in level decompositions of real forms of the same tensor hierarchy algebra over C. Already the magical supergravities in d = 6 will be formulated as extended geometry, where the scalar coset is parametrised as a generalised vielbein on an internal space, however with a section constraint whose solution is a point; the structure group then becomes the R-symmetry.

3 Real forms and Satake diagrams

3.1 Satake diagrams
In Section 3.2, we will see how these real algebras and modules appear in level decompositions of real forms of the same tensor hierarchy algebra over C. Already the magical supergravities in d = 6 will be formulated as extended geometry, where the scalar coset is parametrised as a generalised vielbein on an internal space, however with a section constraint whose solution is a point; the structure group then becomes R-symmetry.

3 Real forms and Satake diagrams

3.1 Satake diagrams

We do not aim to give a complete account of Satake diagrams [36-38] and real forms of semi-simple Lie algebras. Rather, some essential features that turn out to be relevant to the present work are described. There are essentially two alternative ways to characterise real forms diagrammatically, Satake diagrams and Vogan diagrams [39]. Roughly speaking, while the Satake diagram describes the deviation from the split real form, the Vogan diagram relates the real form to the compact one. The Satake diagrams have the advantage that they are in 1-1 correspondence with the real forms. See also the presentations in refs. [40,41] and in ref. [42], which contains examples relevant to the present paper. Exceptional extended geometry has so far exclusively used structure algebras of split real form. It will become clear, in particular in Section 4.2, where we solve the section constraint diagrammatically, that the classification using Satake diagrams is much better suited to our purposes.

Let the complex semi-simple Lie algebra gC have a Dynkin diagram ∆(gC). A real form g of gC is a subalgebra over R, whose complexification is gC. The complex conjugation of an element za, where z ∈ C and a ∈ g, is (of course) defined by complex conjugation of z, za → z̄a. This defines an (anti-linear) involution σ on gC. Conversely, the fixed points of this involution define the real form g ⊂ gC. The Satake diagram ∆(g) for the real form g encodes the involution σ, and is a decorated version of ∆(gC).

As a preparation, consider A1 = sl(2). This complex Lie algebra has two real forms, the compact su(2) and the split (maximally non-compact) sl(2,R). The involution σ defining the split real form is the identity involution, and the one defining the compact real form is the Chevalley involution σ: e → −f, f → −e, h → −h. In the split case, the node remains undecorated (white), and in the compact case, the node is colored black. Any simple Lie algebra has a split real form, defined by the identity involution, whose Satake diagram is identical in appearance to the Dynkin diagram, and a compact real form, defined by the Chevalley involution, whose Satake diagram consists of only black nodes.

There is yet another type of decoration appearing in Satake diagrams, namely arrows. To understand their meaning, consider the Lie algebra sl(2,C) as a real Lie algebra. Write an element as a + ib, where a, b ∈ sl(2,R). In the complexification sl(2,C) ⊗ C, where we use another imaginary element i′ for the factor C, we can choose elements of the form a± = P±a = ½(1 ± i ⊗ i′)a, projecting on the two parts of sl(2) ⊕ sl(2). The involution corresponding to the real form sl(2,C) maps i′ → −i′, so it interchanges the same basis elements in the two sl(2)'s. Such an involution is denoted by an arrow between the nodes of the two algebras, resulting in the Satake diagram of Figure 1. Arrows may also appear in a connected diagram.
The general rules are as follows. For a black (compact) node, the involution acts as the Chevalley involution of the corresponding sl(2) subalgebra. For two nodes i, i′ connected by an arrow, and unconnected to black nodes, the involution interchanges the generators associated to the two nodes. For a white (non-compact) node which is not connected to a black node, nor has an attached arrow, the involution acts as the identity on the corresponding sl(2) subalgebra.

The only complication, and the only action of the involution that can not be immediately read off from the Satake diagram, is the behaviour of the generators associated to a white node, say number i, connected to black nodes (which in turn can be connected to further black nodes). The action of the involution σ on the sl(2) generators is then more complicated. In terms of the induced action of σ on the roots, a simple root αi corresponding to an undecorated white node maps to αi + Σj cj αj, where the range of the index j is over the group of compact nodes connected (not necessarily directly, but via black nodes) to node i. The numbers cj are positive integers. They must be chosen so that the Cartan matrix is invariant (which is obviously impossible if they are zero), and of course so that σ² = 1. If two white nodes (number i and i′) are connected with arrows, and in addition both connected via a number of black nodes, labelled by an index j, one analogously has αi → αi′ + Σj cj αj and αi′ → αi + Σj c′j αj.

We illustrate with two examples, namely e6(−26), which appears as one of the structure algebras in magical supergravity, and e6(−14). The Satake diagrams and the convention for numbering of nodes are given in Figure 2. The Cartan matrix A and the action of the two involutions on the simple roots then follow. The diagonal elements of the σ's are given by the rules (+1 for white, −1 for black, 0 when connected by an arrow). Nodes i, j connected by an arrow have σij = 1. The remaining non-zero numbers (only present for white nodes connected to black ones, in the first example nodes 1 and 5, in the second nodes 1, 5 and 6) are not immediately visible in the diagrams, but they are completely determined by the conditions σ² = 1 and σAσᵗ = A.

Only certain arrangements of black/white nodes and arrows are admitted in a Satake diagram. We will not give a full list, nor try to argue for it. It follows from the rules that extending a Satake diagram ∆(g) by attaching white nodes to white nodes leads to a Satake diagram for an extended real Lie algebra with g as a subalgebra. The diagrams relevant for the magical supergravities are listed in Figure 3.

Figure 3: Satake diagrams of the duality groups of the magical supergravities with n = 6 − d physical internal dimensions. The coordinate module corresponds to the leftmost node. The line of n − 1 nodes is the gravity line, giving a solution to the section constraint. n = 0 corresponds to deleting the "GL(1,R) node(s)" immediately connected to the gravity line, which reveals the GL(n,R) × Spin(1, ν + 1) subgroups, accompanied by the SU(2) "R-symmetry" for ν = 4. The U(1) for ν = 2 is the compact Cartan element of the leftmost pair connected by arrows.
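The two algebraic conditions stated in Section 3.1, σ² = 1 and σAσᵗ = A, are easy to verify mechanically. The sketch below is illustrative only; the toy matrices are not the e6 conventions of Figure 2, which we do not reproduce here.

```python
import numpy as np

def is_satake_involution(sigma, A):
    """Check the two conditions quoted in the text: the induced action on
    simple roots must square to the identity and preserve the Cartan matrix."""
    sigma, A = np.asarray(sigma), np.asarray(A)
    squares_to_one = np.array_equal(sigma @ sigma, np.eye(len(A), dtype=sigma.dtype))
    preserves_cartan = np.array_equal(sigma @ A @ sigma.T, A)
    return squares_to_one and preserves_cartan

# sl(2,C) as a real algebra: two A1 nodes joined by an arrow,
# the involution interchanges the two nodes (Figure 1).
A_two_A1 = np.array([[2, 0], [0, 2]])
sigma_arrow = np.array([[0, 1], [1, 0]])
print(is_satake_involution(sigma_arrow, A_two_A1))      # True

# Compact real form of a2: all nodes black, sigma acts as minus the identity.
A_a2 = np.array([[2, -1], [-1, 2]])
print(is_satake_involution(-np.eye(2, dtype=int), A_a2))  # True

# A wrong guess fails the invariance of the Cartan matrix.
sigma_bad = np.array([[1, 0], [0, -1]])
print(is_satake_involution(sigma_bad, A_a2))            # False
```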
3.2 Tensor hierarchy algebras and real forms

Tensor hierarchy algebras [43] are Lie superalgebras, typically infinite-dimensional, that encode the field (and ghost) content of extended geometry (see Section 4). Given a Lie algebra gC (possibly an infinite-dimensional Kac-Moody algebra, but for our purposes a finite-dimensional semi-simple Lie algebra) and a dominant integral weight λ, tensor hierarchy algebras S(gC, λ) and W(gC, λ) over C are constructed in the usual way [43-47]. They are associated with a Dynkin diagram where a "grey" (fermionic) node is attached to the Dynkin diagram of gC, ∆(gC), according to the decomposition of λ in terms of fundamental weights. In the examples relevant to us, λ is a fundamental weight dual to a simple root at one end of ∆(gC), and we will simply write S(gC) and W(gC). Though the tensor hierarchy algebras are infinite-dimensional, each degree in a grading with respect to the fermionic root is a finite-dimensional module of gC. Both S(gC) and W(gC) contain the lowest weight module R1 = R(−λ) at degree 1 and R2 = ∨²R(−λ) ⊖ R(−2λ) at degree 2. In S(gC), degree 0 consists of g, while degree −1 contains all modules that "automatically" respect the ideal R(−2λ) at degree 2. In W(gC), also a grading element is present at degree 0 and a module R(λ) at degree −1.

In refs. [44-46], generators and relations analogous to the Chevalley-Serre construction were used to define tensor hierarchy algebras. Taking these generators as generators of a real superalgebra leads to a real form S(g, λ) or W(g, λ) which we call the split real form. At degree 0, the split real form g is found, at level 1 the real module R(−λ), etc. In order to define a real form of a tensor hierarchy algebra we need to specify a real form g of gC, with the condition that R(−λ) is a real representation. We are then guaranteed that the modules appearing at all degrees are real g-modules.

The real tensor hierarchy algebras relevant to the magical supergravities can be described by Satake diagrams obtained by first extending diagrams of the types in Figure 3 with a white node 0 to the left, resulting in a Satake diagram for a real form of g+, the next diagram in the series, and then with a grey node (⊗), numbered −1, to the left. The resulting Satake diagrams associated with real forms S(g+) of the tensor hierarchy algebras are listed in Figure 4. From the diagram one can then define the involution σ on the corresponding complex tensor hierarchy algebra, which in turn defines the real form, in the same way as for gC. The involution acts trivially on the generators associated to the white node first added to the Satake diagram of g, but not on all generators associated to the grey node. This is due to a fundamental difference between the tensor hierarchy algebras and the contragredient Lie superalgebras B(g+) of Borcherds-Kac-Moody type that are described by the same diagrams, where there is only one generator f−1 at degree −1. On the other hand, in S(g+), there is one generator f−1,i for each node i in the Satake diagram of g. Under the involution σ, these generators transform in the same way as the corresponding Cartan generators hi.

Considering the contragredient Lie superalgebra, an equivalent diagram is obtained with an alternative choice of the extension; the corresponding two algebras are isomorphic. By removing the left grey node one then sees that B(g) is a subalgebra of B(g+). The corresponding embedding also holds for the tensor hierarchy algebras. The relevance for the identification of the fields in extended geometry is further detailed in Section 4.
The simplified presentation above holds for tensor hierarchy algebras corresponding to d ≥ 3 (finite-dimensional g). Tensor hierarchy algebras corresponding to a lower number of external dimensions exhibit more complicated/interesting behaviour, with interesting extra modules appearing [29,46,48].

4 Extended geometry

4.1 Generalities

The (real) structure group G with Lie algebra g is the continuous version of the duality group. Let generalised vectors transform in the (real) coordinate representation R1 = R(−λ) of g, which is a lowest weight representation with lowest weight −λ. This representation is read off from the sequential extensions, i.e., stepwise increment of n, of the Satake diagrams of Figure 3. Concretely, the line(s) connecting the leftmost node in the diagram for g+, the algebra obtained by increasing n by 1, give(s) the Dynkin index for the integral dominant weight λ. In tensor notation, we write such a vector V^M.

Generalised diffeomorphisms take the usual form of the "Dorfman bracket" [14], with Z the invariant tensor of refs. [26,32] (σ is the permutation operator and η the inverse Killing metric; normalisation of roots and weights is chosen such that a long root α has (α, α) = 2). The commutator of two generalised diffeomorphisms closes on the "Courant bracket" [[·,·]], the antisymmetrised Dorfman bracket, together with an ancillary transformation Σξ,η, a section-restricted local g-transformation. For the purposes of the present letter, the latter is present only when the number of external dimensions is d ≤ 3. This provides the beginning of the L∞ gauge structure of extended geometry [33,34]. The section constraint is formulated in terms of the tensor Y = Z + 1. Concretely, the section constraint expresses the vanishing of all subleading symmetric and antisymmetric modules in the product of two derivatives, reflecting the property of the fundamental module of a GL group.

4.2 Solution of the section constraint

A section is a linear subspace of the minimal G-orbit of R(λ) where all vectors p, q satisfy Y(p ⊗ q) = 0. It is well established [26,32] that representatives of such subspaces are obtained by starting from the highest weight state in R(λ) (which is a representative in the minimal orbit), and from it sequentially acting with lowering operators associated to negative simple roots along a "gravity line" of nodes in the Dynkin diagram. The section then becomes a fundamental gl module. The explicit form of the Y tensor states the corresponding property of the fundamental gl module, that the tensor product of it with itself contains a single irreducible module both in the symmetric and antisymmetric parts.
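As a minimal check of this statement, in our own rephrasing of the argument of refs. [26,32]:

```latex
% Derivatives along a section span a fundamental gl(n) module v. For gl(n),
v \otimes v' \;=\; \vee^2 v \,\oplus\, \wedge^2 v ,
% with a single irreducible module in each of the symmetric and antisymmetric
% parts. Since Y projects on the subleading modules of R(\lambda)\otimes R(\lambda),
Y(\partial \otimes \partial') = 0
% holds identically for any two derivatives restricted to the section.
```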
Now it will also be necessary to determine how such solutions behave when the diagram does not consist only of simply laced white nodes. Naïvely, the gravity line must stop, for example since a compact node does not contribute an sl(2,R) subalgebra. Precise rules are needed for the three cases:

• One or two black nodes are encountered;
• A node corresponding to a shorter root is encountered;
• Nodes connected with arrows are encountered.

Given the procedure for solving the section constraint, it is enough to consider subdiagrams of the Satake diagrams containing the different situations. When one or two black nodes are encountered, there is always a Satake subdiagram for so(1, 2m − 1), with one white node. The section is an isotropic (light-like) subspace of the light cone, which is a light ray. The white node is not part of the gravity line (but its Cartan generator provides the scalings). The gravity line thus ends one step before encountering the black node(s), as in the last two diagrams of Figure 3. When a shorter node is encountered, there is an sp(4,R) ≃ so(2,3) subdiagram. When a pair of nodes connected by arrows is encountered, there is an su(2,2) ≃ so(2,4) subdiagram. In both cases, the maximal isotropic spaces of vectors are 2-dimensional, so the "rightmost" ordinary white node is included in the gravity line, as in the first two diagrams of Figure 3. The scaling is provided by the node(s) connected to it. This accounts for the identifications of the gravity lines in Figure 3. In all cases, this is of course consistent with the 6-dimensional origin of the models. It should also be noted that in all cases there is a single G-orbit of sections, since no branchings are encountered in the solution of the section constraint. Similar statements about gravity lines in diagrams for real algebras are found in ref. [41].

The above statement, that the gravity line runs along any line of simply laced undecorated white nodes, unconnected to black nodes, may also straightforwardly be derived [49] with the methods of refs. [26,32]. Then one sequentially finds the weights of R(λ), starting with the highest one, that span a solution to the section constraint. The concrete reason a white node connected to black nodes can not be included in the gravity line is the mixture of the corresponding root with roots of compact su(2)'s under the involution dictating the reality condition.

4.3 Coset dynamics

A generalised metric GMN is a symmetric matrix which defines an involution τ on the Lie algebra through the "transpose" of the representation matrices. The involution τ is in the same conjugacy class as the Cartan involution θ of (the real form) g. It has the eigenvalue 1 on a (locally defined) maximal compact subalgebra k ⊂ g. The "coset 1-form" has eigenvalue −1 under τ, and thus takes values in the orthogonal complement (with respect to the Killing metric) to k in g, k⊥ = g ⊖ k. Note that a scale is included in the metric; we are considering the structure group G × R+. The scalings are included in k⊥.

When considering only the internal extended geometry, it is convenient to let G have weight 1 − 2(λ, λ). Then the internal (pseudo-)Lagrangian density (the "potential"), invariant under generalised diffeomorphisms, takes a generic form (for n ≤ 3) whose third term contains the invariant tensor ℓ appearing among the structure constants for the tensor hierarchy algebra S(g+), where g+ is the extension of g in the sequences of duality algebras. It appears only for d = 3, where g+ is an affine algebra. Then, R(−λ) is the adjoint representation and ℓ_{αβ γδ} = η_{αβ} η_{γδ}.

When also external directions are considered, it is convenient to let the determinant e of the external vielbein e_m^a assume the rôle of the scaling degree of freedom of G, so that dG G⁻¹ = Π^α t_α. The first two terms of eq. (4.9) can then equivalently be rewritten as proportional to an expression involving a constant k that will be specified later.

An alternative (equivalent) approach to formulating the dynamics is to use a teleparallel formalism [35]. This method is well adapted to the tensor hierarchy algebra, and should be ideal for gauging. One uses the torsion T of the Weitzenböck connection, taking values in the embedding tensor modules, as a field strength, and the Lagrangian contains T².
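For completeness, the transpose construction at the beginning of this subsection can be written out; the explicit formulas were displayed equations in the source and are not reproduced here, so the following is a sketch in one plausible convention (the sign and placement of G are our assumption):

```latex
% Assumed form of the involution defined by a generalised metric G = G^{\mathsf T}:
\tau(a) = -\,G^{-1} a^{\mathsf T} G , \qquad a \in \mathfrak{g} .
% With the coset 1-form taken to be X = G^{-1}\mathrm{d}G, symmetry of G gives
% (G^{-1}\mathrm{d}G)^{\mathsf T} = \mathrm{d}G\,G^{-1}, and hence
\tau(X) = -\,G^{-1}\,\mathrm{d}G\,G^{-1}\,G = -X ,
% confirming that X has eigenvalue -1 under \tau and takes values in k^\perp.
```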
4.4 Fields from S(g+)

Tensor hierarchies [50] are an important ingredient in supergravity, as they organise the form gauge fields and their transformations. In ref. [43], a class of infinite-dimensional non-contragredient superalgebras, the tensor hierarchy algebras, was constructed that in a level expansion contains the modules of the form fields, as well as the embedding tensor module. The properties of such algebras were further examined in refs. [44-47], and their relation to the symmetries in extended geometry investigated in refs. [31,33-35].

The content of fields, as well as gauge parameters (ghosts), is thus dictated by the tensor hierarchy algebra S(g+) [46]. One introduces a double grading with respect to the two "leftmost" nodes, so that the generators at a given bidegree form a module of g (which is at bidegree (0, 0)). We choose to label the bidegree as (p, q), where p is the level with respect to the second node and −q with respect to the leftmost node in the extension of Section 3.2. The subalgebra g+ is found at the line p = q. The degree ℓ of the single grading in Section 3.2 is ℓ = p − q. One of the advantages of the use of an underlying tensor hierarchy algebra is that it reduces the problem of finding fields, gauge transformations etc. to the mathematically more clearly defined problem of constructing a certain superalgebra, also in cases where g is infinite-dimensional.

In the following tables, we list the content of a few levels in the tensor hierarchy algebras S(g+) relevant for the magical supergravities with d = 6, 5, 4, 3. Note the symmetries under (p, q) → (d − 2 − p, 1 − q), signalling the presence of a non-degenerate bilinear form of the superalgebra, and relevant to dualisation in the external dimensions. Note that more standard orientations of the Dynkin diagrams are used in these tables, rather than the one where λ is associated to the leftmost node.

Table 2: Some basis elements of S(g+) for the magical supergravities with d = 6.
Table 3: Some representations in the tensor hierarchy algebras for the d = 6 models.
Table 4: Some basis elements of S(g+) for the magical supergravities with d = 5.
Table 6: Some basis elements of S(g+) for the magical supergravities with d = 4.
Table 9: Some representations in the tensor hierarchy algebras for the d = 3 models. The embedding tensor module is Θ = Θ′ ⊕ 1.

4.5 Extended geometry for magical supergravities

Note that the Satake diagram for the tensor hierarchy algebras in the O series, the last diagram in Figure 4, is a decorated version of the diagram for S(e_{n+6}), relevant for the extended geometry description of D = 11 supergravity with n + 5 physical internal dimensions. The two real tensor hierarchy algebras are thus different real forms of the same complex one. This implies that the dynamics, formulated in terms of a pseudo-action ("pseudo-" referring to the fact that the section constraint has to be imposed manually, as well as to the self-duality relations occurring for even d), takes the same formal expression in the two cases. The extended geometry formulation for D = 11 supergravity with d external dimensions is well known for d = 6 [21], d = 5 [22], d = 4 [23], d = 3 [24] and d = 2 [26-28], and of course for d > 6 [51-53]. Partial results exist for d = 0 [29].
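The (p, q) bookkeeping of this subsection is easily tabulated. The following sketch (bidegrees only; the module content of the tables is not reproduced) prints the pairing (p, q) → (d − 2 − p, 1 − q) responsible for the symmetry of the tables, here for d = 4:

```python
# Pairing of bidegrees under the non-degenerate bilinear form of S(g+),
# (p, q) -> (d - 2 - p, 1 - q), as quoted in the text.
d = 4

def dual_bidegree(p, q):
    return (d - 2 - p, 1 - q)

for p in range(0, d - 1):
    for q in (0, 1):
        print((p, q), "<->", dual_bidegree(p, q))

# The single grading of Section 3.2 is l = p - q, and the subalgebra g+
# sits on the fixed line p = q; e.g. (0, 0) pairs with (2, 1).
```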
However, even if the actions formally look the same, they describe quite different systems, due to the difference in the solutions to the section constraint, which for all versions of the magical supergravities give total physical dimension 6 (the sum of the number of external dimensions and the dimension of a section).

The extended geometries for the lower Kν series, ν = 1, 2, 4, are constructed analogously to the ones in the O series. Let us call the algebras appearing in d = 6 − n algebras of type e_{5+n}. They have similar sets of invariant tensors, originating in the fact that they are constructed from Jordan algebras over Kν. Constructing the analogous actions for the lower series, one needs to identify these invariant tensors and the relations they obey, including proper normalisation. We will give a concrete example for d = 4 and algebras of type e7.

The only numerical constant that enters the generalised diffeomorphisms is (λ, λ), the squared length of the lowest weight in the coordinate representation (the representation of a generalised vector). It turns out to take the same value for all algebras of the same type, and is thus independent of ν: (λ, λ) = (d − 1)/(d − 2).

These algebras all display the same behaviour, indeed the one expected for models with d = 4. In all cases, ∨²R(−λ) = R(−2λ) ⊕ adj. A few degrees are listed in Table 6. The g-modules appearing in Table 6 are listed in Table 7. R1 is the coordinate module, which is self-conjugate in these cases. Θ is the embedding tensor module. The presence of a singlet in R_{(2,1)} signals the presence of an ancillary 2-form. The ν = 8 case for split structure group E7(7) is formulated in ref. [23].¹

For the magical models in d = 4, the structure groups are the conformal groups of the Jordan algebras J3(Kν) of hermitean 3 × 3 matrices with elements in Kν. They are Sp(6,R), SU(3,3), SO*(12) and E7(−25), with coordinate modules R1 as in Table 7. Thus dim R1 = 6ν + 8. In all cases, (λ, λ) = 3/2. The coordinate module is self-conjugate and symplectic; there is an invariant tensor Ω_{MN}, which is used to raise fundamental indices by left multiplication. We use the convention Ω^{MP} Ω_{NP} = δ^M_N. There is also an invariant symmetric 4-index tensor, which can be chosen as c_{MNPQ} = P_{(MN,PQ)}, where P is the projector on the adjoint. The second Casimir operator in the representation R1, C₂(R(λ)) = ½(λ, λ + 2ϱ), takes the value C₂(R1) = ¾(2ν + 3). The projector on the adjoint is expressed in terms of these invariant tensors in eq. (4.13),² where the constant k takes the value given in eq. (4.14). For d = 5, k = 2/(ν + 4), and for d = 3, k = 1/(2g∨) = 1/(6(ν + 2)). The section constraint contains the adjoint in the symmetric part of the tensor product and the singlet in the antisymmetric part. Notice the relation to eq. (4.6) with (λ, λ) − 1 = 1/2.

The fields needed, in addition to the coset element, are read from the content of the tensor hierarchy algebra S(g+), Tables 6 and 7. They are: gauge connections A_m^M, 2-forms B_mn^α, and also ancillary 2-forms B_mn^M.

¹ There is a difference in normalisation of the Killing metric compared to ref. [23]. We use canonical conventions where the quadratic Casimir operator is Ĉ₂ = ½ η^{αβ} t_α t_β, t_α being representation matrices, with η normalised so that in the adjoint representation ½ η^{γδ} f_{γα}{}^ε f_{δε}{}^β = g∨ δ_α^β, i.e., C₂(adj) = g∨, the dual Coxeter number.
² The first equality in eq. (4.13) holds also in other dimensions, as long as the structure algebra is simple; otherwise more than one constant is needed to form a projection.
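Two quick consistency checks on these numbers (our arithmetic):

```latex
% dim R_1 = 6\nu + 8 reproduces the module dimensions for the groups listed:
\nu = 1:\ 14\ \big(Sp(6,\mathbb{R})\big),\quad
\nu = 2:\ 20\ \big(SU(3,3)\big),\quad
\nu = 4:\ 32\ \big(SO^*(12)\big),\quad
\nu = 8:\ 56\ \big(E_{7(-25)}\big).
% The value (\lambda,\lambda) = 3/2 is consistent with the coefficient quoted
% for the section constraint:
(\lambda,\lambda) - 1 = \tfrac12 .
```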
The calculation copies the one in ref. [23]; one only needs to keep track of the constant k appearing in various places. We therefore only summarise the results briefly. The covariant 2-form field strength follows the standard tensor hierarchy construction, with F constructed through the Courant bracket. The field strengths are demanded to be self-dual. There are also field strengths H_mnp^α and H_mnp^M for the 2-form fields; they appear in the Bianchi identity for the 2-form field strength. The improved Riemann tensor (the improvement is needed for Lorentz invariance in the external directions) is constructed with the spin connection obtained from the vierbein using the covariant derivative (4.21).

The full pseudo-Lagrangian density then consists of a covariantised Einstein-Hilbert term L_EH, a kinetic term for the coset L_sc, a Yang-Mills kinetic term L_YM, a potential term V and a topological term L_top. In the non-topological terms, the last line in V replaces the terms in eq. (4.9) containing the scale connection π_M. The topological term is most conveniently written in terms of integration over a 5-dimensional manifold with the external 4-manifold as boundary. The internal integration "[dY]" should be seen as purely formal. It is not an integral over the (6ν + 8)-dimensional internal space, rather over a solution to the section constraint. This is a pseudo-action whose purpose is to serve as a book-keeping device for the equations of motion.

All essential calculations needed to show full invariance of the pseudo-action under internal generalised diffeomorphisms as well as external diffeomorphisms (depending both on external and internal coordinates) have been performed in ref. [23]. They require, as usual, cancellations between all terms, and fix the pseudo-action completely. The same holds for other values of d. For d = 5, for example, the construction mimics the one in ref. [22]. In addition to the constant k of eq. (4.13), one will also need to keep track of the normalisation of the invariant symmetric 3-index tensor d_{MNP}. All fields and algebraic structures are otherwise identical.

5 Outlook

We have demonstrated how extended geometry is formulated for structure groups of arbitrary real forms, with real coordinate modules. The underlying real tensor hierarchy algebra is defined by these data (together with some normalisation when λ is not a fundamental weight dual to a long root [47]). The procedure for solving the section constraint has been explained, resulting in a diagrammatic rule.

Coupling to hypermultiplet scalars in the framework of extended geometry presents no further problem, since they are singlets under the structure group. They contribute terms to the Lagrangian density. This is relevant for cancellation of anomalies. In particular, it is noteworthy that only in the (ungauged) ν = 8 model coupled to 28 hypermultiplets do the gravitational anomalies vanish identically. It may be interesting to understand how an anomalous 6-dimensional theory is encoded in an extended field theory in which the external dimensions are, say, 3 or 5, where the anomalies must arise from some interplay between external and internal directions.

Our construction only involves the bosonic degrees of freedom. A full supersymmetric version is of course desirable. It could use a component field version, with explicit check of the local supersymmetry transformations as in ref. [30], or a superfield formulation as in ref. [56].
A true extended supergeometry will demand an extension of the structure group itself to a supergroup [57]. The method can be used to obtain extended geometry formulations of other models, with other homogeneous spaces as scalar cosets. Just to pick one example without working out the details, let us choose the structure group as G = SU(1,5) (Figure 6), and the coordinate module as a 3-form. This leads to an extended geometry similar to the d = 4, ν = 2 magical supergravity, but with a 0-dimensional section, i.e., with SU(1,5) as R-symmetry. Since the coordinate module of g+ is the adjoint of E6(−14), it should correspond to a 4-dimensional theory. Then, the presence of 20 self-dual gauge fields tells us that this is D = 4, N = 5 supergravity [58]. In a 3 + 1 split, the structure group becomes E6(−14) (Figure 2).
2023-01-27T06:42:46.676Z
2023-01-26T00:00:00.000
{ "year": 2023, "sha1": "01fae85fce31609c38b21a88bf88293b363c871b", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Arxiv", "pdf_hash": "01fae85fce31609c38b21a88bf88293b363c871b", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics" ] }
236392027
pes2o/s2orc
v3-fos-license
Factors influencing surface carbon contamination in ambient-pressure x-ray photoelectron spectroscopy experiments

Carbon contamination is a notorious issue that has an enormous influence on surface science experiments, especially in near-atmospheric conditions. While it is often mentioned in publications when affecting an experiment's results, it is more rarely analyzed in detail. We performed ambient-pressure x-ray photoelectron spectroscopy experiments toward examining the build-up of adventitious carbon species (both inorganic and hydrocarbons) on a clean and well-prepared surface using large-scale (50 × 10 mm²) rutile TiO₂(110) single crystals exposed to water vapor and liquid water. Our results highlight how various factors and environmental conditions, such as beam illumination, residual gas pressure and composition, and interaction with liquid water, could play roles in the build-up of carbon on the surface. It became evident that beam-induced effects locally increase the amount of carbon in the irradiated area. Starting conditions that are independent of light irradiation determine the initial overall contamination level. Surprisingly, the rate of beam-induced carbon build-up does not vary significantly for different starting experimental conditions. The introduction of molecular oxygen on the order of 10 mbar allows for fast surface cleaning during x-ray illumination. The surface carbon contamination can be completely removed when the oxygen partial pressure is comparable to the partial pressure of water vapor in the millibar pressure range, as was tested by exposing the TiO₂(110) surface to 15 mbar of water vapor and 15 mbar of molecular O₂ simultaneously. Furthermore, our data support the hypothesis that the progressive removal of carbon species from the chamber walls by competitive adsorption of water molecules takes place following repeated exposure to water vapor. We believe that our findings will be useful for future studies of liquid-solid interfaces using tender x rays, where carbon contamination plays a significant role.

I. INTRODUCTION

Ambient-pressure x-ray photoelectron spectroscopy (APXPS) is a powerful chemical analysis technique for surfaces and interfaces under close to operando conditions. It is based on recent technical innovations that lift the pressure limitations of conventional XPS analysis, traditionally an ultrahigh vacuum (UHV) technique,1 through the use of differentially pumped electrostatic lens and hemispherical energy analyzer systems.2-4 This development has been one of the many branches of photoelectron spectroscopy that has been strongly influenced by Charles S. Fadley. It makes it possible to perform XPS in the presence of a gas atmosphere typically up to a few tens of mbar, or to study interfaces between solid surfaces and liquids of sufficiently low vapor pressure when an appropriate experimental approach is taken.5-7

In surface science investigations, the results of an experiment depend critically on the conditions of the sample and its environment. The situation is aggravated for studies under near-atmospheric pressures (1-30 mbar), as there is limited control over partial pressures of residual gases or contaminants within liquids that are brought in contact with the sample surface. The build-up of carbon contamination is a typical and well-known effect in APXPS and often plays a role in surface chemistry when exposed to synchrotron radiation,8 but this role is rarely discussed in the literature.9,10
Studies have observed how the incident radiation and emitted photoelectrons and secondary electrons decompose precursor molecules (typically CO and hydrocarbons) present in the residual gas even under typical UHV conditions, forming a solid phase that is often termed "adventitious carbon."11-13 Dosing small quantities of O₂ at elevated temperatures is often done to induce the oxidation of these deposited carbon species and to minimize carbon contamination, for example, as maintenance for optical elements in synchrotron beamlines.14-16 When working at pressures of a few millibar, this approach is clearly not possible. Most of the current APXPS studies are performed using soft x rays, where the strong attenuation in the gas phase limits pressures in practice to a maximum of around 1 mbar. Relatively few APXPS studies utilize pressures above 10 mbar, mainly because of the limited number of tender x-ray beamlines with photon energies of 3-5 keV that can operate under such conditions.7,17,18 Here, we will consider both of these pressure regimes.

This work aims to illustrate how APXPS experimental conditions can influence the observed level of contamination and how contamination can be minimized by choosing the proper experimental procedure. The rutile TiO₂(110) surface was chosen as a case in point after witnessing how carbon contamination increased substantially after contact with liquid water or water vapor and significantly affected our in situ study of a well-prepared and previously clean surface. TiO₂(110) is one of the best-known metal-oxide surfaces, with adsorption studies for most simple molecules reported in the literature.19-21 The adsorption-induced chemistry of water,22-27 oxygen,28 and organic contaminants29-32 on this surface has been previously investigated, and the surface cleanliness was found to correlate with the hydrophobic versus hydrophilic behavior of TiO₂.27,33,34 At the same time, it has been reported how ex situ preparation of TiO₂(110) through chemical etching and rinsing in pure water can lead to clean, atomically flat surfaces.35 Inspired by this approach, some insight was achieved on how ex situ preparation can produce surfaces equivalent to those obtained by classical in situ methods such as sputter-anneal cycles. To our knowledge, this is the first study of a solid-liquid interface on a well-defined metal-oxide single-crystal sample probed by the dip-and-pull method,5 and we believe that our findings will be useful for future studies of well-defined solid-liquid interfaces using synchrotron tender x rays.

II. EXPERIMENTAL SETUP AND METHODOLOGY

All experiments were performed at the Swiss Light Source, using the recently commissioned solid-liquid interface chamber endstation connected either to the PHOENIX I tender x-ray beamline or to the soft x-ray In Situ Spectroscopy (ISS) beamline. A detailed description of the experimental setup is included in Ref. 36; therefore, we only mention experimental details specific to the experiments reported in this paper. The photon flux on the sample was calibrated with a photodiode (AXUV20HS1) in UHV: at the PHOENIX I beamline, the photon flux is 3.1 × 10¹¹ photons/s at 4000 eV, and at the ISS beamline, the flux is 3.9 × 10¹¹ photons/s at 1000 eV. Rutile TiO₂(110) single crystals (floating zone material, one-side polished) with dimensions of 50 × 10 × 1 mm³ were acquired from SurfaceNet GmbH.
Because of the large dimensions, which are needed for dip-and-pull experiments, these samples are denoted as "flag-type" samples throughout the manuscript. They were cleaned by Ar⁺ sputtering with a mean kinetic energy of ∼780 eV.37 The sample was moved every 5 min by 8 mm to achieve uniform Ar⁺ ion bombardment, and subsequently vacuum annealed in a custom-built oven to 550 °C for 30 min [see Figs. 1(a) and 1(b), and a detailed description of the oven in Fig. S1].38 This preparation produced a uniform (1 × 1) surface structure, as verified by low-energy electron diffraction (LEED) in the annealed area [see Fig. 1].

Other experiments (data shown in Fig. 5 and Figs. S2 and S4)38 have been performed using a similar 7 × 7 × 0.5 mm³ rutile TiO₂(110) single crystal purchased from PI-KEM Ltd. For dip-and-pull experiments, MilliQ (type 3) water was used as a source and further filtered toward type 1 using a Millipore Direct-Q 3 UV-R system. A significant difference between type 3 and type 1 water is that the former is treated only by reverse osmosis, while the latter undergoes treatment with a high-intensity UV lamp and additional filtering with activated carbon. Such water was outgassed using a single freeze-pump-thaw cycle in a dedicated exsiccator before inserting it into the analysis chamber through a short exposure to air (see SI of Ref. 36 for details). Results shown in Fig. 2 used filtered type 3 water with a resistivity of 18.2 MΩ cm, using de-ionized water as a source. Experiments at the ISS beamline (data shown in Fig. 5 and Fig. S2)38 used MilliQ water from the source described above, purified by four freeze-pump-thaw cycles using liquid nitrogen for freezing and a turbomolecular pump for pumping. At ISS, a custom-made round-bottom flask connected to a glass-to-metal adapter was used as a water reservoir attached to the analysis chamber via a high-precision leak valve. Oxygen dosing was performed by backfilling the experimental chamber from a miniCan (99.999%, PanGas) using a second high-precision leak valve.

Complementary to standard sputter-anneal cycles, for some experiments the samples have been cleaned ex vacuo using a combination of oxygen, water vapor, and UV irradiation, which has proven to be effective in other efforts to obtain clean surfaces.15 LEED images were acquired using a low incident electron beam current of nominally 10 nA to minimize any possible electron-induced processes and then processed with LEEDCal 2013 (Version 4.1) to reduce the distortion induced by the nonspherical field caused by the planar microchannel plates. This is achieved by associating the visible LEED spots to the surface lattice parameters and running an iterative algorithm to compensate for geometric errors, such as radial "pincushion" distortion for MCP-LEED and asymmetric distortions unique to each instrument.39

APXPS data were acquired in a vacuum chamber that was not baked and has a base pressure of 10⁻⁹ mbar. All XPS spectra were acquired with 30° photon incidence and 60° electron emission geometry using linearly polarized light. The electron analyzer (Scienta R4000 HiPP-2) was equipped with an entrance cone aperture of 300 μm diameter and was used at a working distance (l) of 600 μm. Unless stated otherwise, all spectra were recorded using 4000 eV photon energy, a pass energy of 100 eV, and analyzer slits nearly fully open (1.5 × 30 mm²). The binding energy (BE) scale was calibrated using a polycrystalline gold sample (Au 4f7/2, BE = 84.0 eV).
The energy resolution of the Au 4f7/2 peak with the experimental settings given above yielded a full width at half maximum of 0.77 eV when fitted with a GL(70) function. Fitting of spectra has been performed using CasaXPS V.2.3.19 (see fitting details in SI),38 while data integration and plot production were done using Igor Pro 6.37; error evaluation is estimated according to Poissonian statistics for peak intensity and background area variation.40

The coverage of carbon is referenced with respect to the number of coordinatively unsaturated metal cations on the surface, where 1 monolayer (ML, 5.2 × 10¹⁴ atoms/cm²) corresponds to one carbon atom per surface (1 × 1) unit cell [see Fig. 1(c)]. Cross sections and asymmetry parameters used here have been calculated by Trzhaskovskaya and Yarzhemsky,41 utilizing the angular cross section for horizontal, linearly polarized light of the following equation:

dσ/dΩ = (σ/4π)[1 + βP₂(cos θ) + (δ + γ cos²θ) sin θ cos w],   (1)

where σ is the cross section, β is a dipole parameter, γ and δ are nondipole parameters, P₂ is the second-order Legendre polynomial, θ is the angle between the photoelectron emission direction and the polarization of the incoming photons, and w is the angle between the photon momentum vector and the plane containing the photoelectron emission direction and photon polarization.

To calculate the carbon coverage, a thin-film approximation approach is adopted,42,43 as the mean free path of the relevant photoelectrons through glassy carbon for photon energies of 4000 eV is approximately 8 nm,44 according to a TPP-2M model.45 The coverage of carbon referenced to the TiO₂ surface unit cell is given by the following equation:

Θ_C = (I_C/I_Ti) × (dσ_Ti/dΩ)/(dσ_C/dΩ) × (λ_Ti cos ψ)/d⊥.   (2)

Here, I_C and I_Ti are given by the total integrated counts for each core level, λ_Ti is the inelastic mean free path (IMFP) of electrons in the TiO₂(110) substrate, dσ_C/dΩ and dσ_Ti/dΩ are the differential cross sections, ψ is the polar emission angle, and d⊥ is the interplanar distance for the TiO₂(110) lattice. An additional attenuation term could be included to account for attenuation in gas-phase water and liquid water; however, due to the high kinetic energy of photoelectrons from the C 1s and Ti 2p core levels, we obtain similar values for the respective IMFPs when travelling through gas-phase water. Because of this, the gas-phase attenuation has only a negligible influence on Eq. (2), since we compare the intensity ratio of two elements. In the presence of a condensed liquid water layer on the surface (in most cases evaluated to be in the range of a few angstrom), an additional attenuation term is added for the emission from the substrate,7 with the water layer thickness "d" estimated from the attenuation of the substrate component of the O 1s peak using the following equation:

d = λ_O^water cos ψ ln[1 + (I_H2O ρ_O λ_O^sub)/(I_O ρ_H2O λ_O^water)].   (3)

Here, λ_O^water is the IMFP of electrons passing through liquid water,46 λ_O^sub is the IMFP of electrons in the TiO₂ substrate, ρ_H2O is the atomic density of oxygen in liquid water, ρ_O is the density of O atoms in TiO₂, and I represents the intensity of the O 1s peak from liquid water (I_H2O) or TiO₂ (I_O). The attenuation due to adventitious carbon is not considered in Eqs. (2) and (3).
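To illustrate how Eqs. (2) and (3) are applied, the sketch below implements them directly. All numerical inputs are hypothetical placeholders; in practice the cross sections, IMFPs, and intensities come from the sources cited above.

```python
import math

def carbon_coverage_ML(I_C, I_Ti, dsig_C, dsig_Ti, lam_Ti_nm, d_perp_nm, psi_deg):
    """Eq. (2): carbon atoms per TiO2(110) (1 x 1) unit cell in the thin-film
    approximation. I_* are integrated intensities, dsig_* the differential
    cross sections, lam_Ti the substrate IMFP, d_perp the (110) interplanar
    distance, psi the polar emission angle."""
    psi = math.radians(psi_deg)
    return (I_C / I_Ti) * (dsig_Ti / dsig_C) * lam_Ti_nm * math.cos(psi) / d_perp_nm

def water_thickness_nm(I_H2O, I_O, lam_water_nm, lam_sub_nm, rho_H2O, rho_O, psi_deg):
    """Eq. (3): liquid-water layer thickness from the attenuation of the
    substrate component of the O 1s peak."""
    psi = math.radians(psi_deg)
    ratio = (I_H2O * rho_O * lam_sub_nm) / (I_O * rho_H2O * lam_water_nm)
    return lam_water_nm * math.cos(psi) * math.log(1.0 + ratio)

# Hypothetical numbers, 60 deg emission as stated in Sec. II:
theta = carbon_coverage_ML(I_C=1.0e4, I_Ti=2.0e5, dsig_C=1.0, dsig_Ti=8.0,
                           lam_Ti_nm=5.0, d_perp_nm=0.325, psi_deg=60.0)
d = water_thickness_nm(I_H2O=5.0e3, I_O=1.0e5, lam_water_nm=6.0,
                       lam_sub_nm=5.0, rho_H2O=0.7, rho_O=1.0, psi_deg=60.0)
print(f"carbon coverage ~ {theta:.2f} ML, water layer ~ {d:.3f} nm")
```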
III. RESULTS

A. Beam effect on surface carbon contamination

To distinguish beam-induced effects from environmental effects in surface carbon contamination, time-lapsed measurements were performed under fixed experimental conditions; they are summarized in Fig. 2. Here, a clean TiO₂(110) sample was exposed to 24 mbar of water vapor in equilibrium with a liquid-water-filled glass container present inside the chamber. The sample was not dipped into the liquid, and the experiment represented the starting conditions of a typical dip-and-pull experiment.5 The measurement was started immediately after moving a UHV-prepared sample via a UHV transfer chamber to the analysis chamber. After a long measurement sequence in spot "A," which shows a progressive increase over time in the amount of carbon contaminants, the sample was moved to illuminate spot "B" at a distance (3 mm) greater than the beam size (400 μm). Resuming the measurement on this new area shows a significantly lower amount of carbon, comparable to the beginning of the previous sequence, followed by a similar increase over time. The initial carbon coverage is quite significant, as there are over ten carbon atoms per (1 × 1) unit cell of the TiO₂(110) surface, which corresponds to about 2 monolayers of condensed graphitic carbon.

It should be noted that these spectra were acquired after installing a new component (a gate valve placed between the chamber and the turbomolecular pump) onto the transfer chamber that was used as a buffer chamber when transferring samples from the UHV preparation chamber to the analysis chamber with liquid water inside (25 mbar). While this component was clean based on UHV standards, it was never previously exposed to water vapor dosed under vacuum conditions. We speculate that this can contribute to the high initial carbon contamination. Another possible cause might be the use of MilliQ water that was not treated with UV light (see Sec. II for details). Other well-known sources of contamination, such as gas-phase water reacting with a hot filament,9 can be excluded in the setup used here.36

B. Pressure effect on surface contamination

To investigate the effect of sample environment and history, we monitored the amount of carbon on the surface after exposure to different environmental conditions. As a starting point, a sample underwent the same UHV preparation procedure each time. For most of the measurements, the chamber walls had already been exposed to water vapor for 3 days and had thus been effectively "rinsed" (see below for a further discussion of this effect). In these experiments, we used type 1 MilliQ water that was properly filtered and UV-treated, as opposed to the experiment presented in Fig. 2. The results are summarized in Fig. 3, where the initial contamination level correlates strongly with the environmental conditions. Higher water vapor pressures lead to higher initial levels of carbon contamination, and considerably higher contamination is observed for a dipped surface compared to a sample exposed only to water vapor. In addition to the immediate carbon build-up, the carbon signal increases further over time. This behavior is generally observed for solid/liquid interface experiments using the dip-and-pull method and APXPS.8

FIG. 2. Calculated carbon coverage build-up over time at 24 mbar H₂O pressure: while on spot "A" (black) the amount of carbon increases over time, moving to a spot "B" (orange) at a distance (3 mm) larger than the synchrotron beam size (400 μm) shows a coverage comparable to the initial coverage at the previous spot. If we assume that the contamination has a density equal to that of graphite (2.26 g cm⁻³), the coverage observed at the beginning of the experiment corresponds to approximately two layers of graphitic carbon.
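The estimate in the Fig. 2 caption is a one-line calculation. A sketch of the arithmetic (the graphite interlayer spacing is our assumed input; the density and the ML definition are taken from the text):

```python
# How many graphite-like layers correspond to the observed initial coverage?
N_A = 6.022e23            # atoms/mol
M_C = 12.011              # g/mol
rho_graphite = 2.26       # g/cm^3, density used in the Fig. 2 caption
d_interlayer = 3.35e-8    # cm, graphite interlayer spacing (assumed value)

# Areal density of carbon atoms in one graphite-like layer:
n_layer = rho_graphite * d_interlayer / M_C * N_A   # ~3.8e15 atoms/cm^2

# 1 ML as defined in Sec. II, and "over ten" C atoms per unit cell:
n_ML = 5.2e14             # atoms/cm^2
coverage = 10 * n_ML      # observed initial coverage, lower bound

print(f"{coverage / n_layer:.1f} graphite-like layers")
# ~1.4 layers for exactly ten atoms per cell, i.e. of the order of the
# "approximately two layers" quoted in the caption.
```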
Considering also the results shown in Fig. 2, we attribute the additional carbon build-up to synchrotron beam-induced effects. Maintaining weak and constant pumping balanced by continuous liquid evaporation (resulting in a pressure of 14 mbar) creates a dynamic condition for the water vapor present in the chamber, which was able to significantly reduce the amount of initial carbon, even for a sample immersed in liquid water (first data points in the green and blue curves in Fig. 3). This is attributed to continued pumping of carbon-containing molecules in the gas phase resulting from adventitious carbon displaced from the chamber walls by the water vapor.

Finally, the "history" of the vacuum chamber appears to be a significant factor. "Rinsing" the chamber walls by repeated exposure to water vapor over an extended period of time (3 days) while still pumping seems to reduce the carbon presence in the environment. The detected amount of contamination is visibly lower than upon introduction of the sample into the chamber right after HV conditions have been established following nitrogen venting. This is demonstrated in Fig. 3 by comparing data points for the rinsed chamber (pink) and the nonrinsed chamber (brown), measured at an equilibrium water vapor pressure of 24 mbar under otherwise identical conditions.

FIG. 3. Carbon coverage for the different environmental conditions; each data point is averaged over 20 consecutive spectra. A higher water vapor pressure leads to a higher carbon coverage, and similarly dipping in liquid water increases the amount of carbon compared to vapor exposure. However, if the chamber has been exposed to vapor for a long time under pumping, the level of contamination decreases (compare data points shown in brown and pink: the former experiment was performed after the chamber had been vented and right after HV conditions were established, while in the latter case the chamber walls were rinsed by repeated exposure to water vapor over several days).

C. Effect of oxygen presence during APXPS experiments

The effect of adding molecular oxygen to an environment otherwise in equilibrium with liquid water was also investigated. On a TiO₂(110) surface previously contaminated with carbon, no signal above the noise level was detected in the C 1s region once 15 mbar of O₂ was added to a partial pressure of 15 mbar H₂O (total pressure of 30 mbar) [see Figs. S1(a) and S1(b)].38
This cleaning effect seems to be due to a combined effect of oxygen partial pressure and x-ray irradiation. After observing a completely clean surface from the addition of 15 mbar O 2 to 15 mbar H 2 O [see Fig. S1(b) 38 ), the oxygen partial pressure was reduced to below the detection limit of the quadrupole mass spectrometer, located in the second differentially pumped stage of the electron analyzer. When moving the sample to a new, non-irradiated spot (distance > 1 mm), the C 1s region showed a similar intensity to the one observed before dosing oxygen [see Fig. S1(c) 38 ]. A similar experiment was performed at lower pressures by simultaneously co-dosing 1 mbar H 2 O and 1 mbar O 2 at the In Situ Spectroscopy beamline of the Swiss Light Source at a lower photon energy (1000 eV), using the same analysis chamber as in the previous experiments. The design of this experiment is shown schematically in Fig. 5(b). Following UHV preparation and the transfer of the sample to the HV analysis chamber (P < 4.2 × 10 −7 mbar), the quantification of the C 1s core-level spectra revealed that the initial carbon coverage was 1.7 ML as measured on spot "1" of the sample surface [black spectrum in Fig. 5(a)]. The high base pressure was a result of continuous exposure of the chamber to water vapor (up to 24 mbar) for two consecutive days before these measurements. After exposure to 2 mbar of water vapor for a total time of 15 min (without irradiation), HV conditions were restored (P < 2.6 × 10 −6 mbar). In spot "2," the carbon coverage increased to 2.5 ML as a result of this exposure [blue spectrum in Fig. 5(a)]. Then, spot "1" was irradiated with a synchrotron beam for 1 h while the chamber was backfilled with 1 mbar H 2 O and 1 mbar O 2 , to allow for possible beam-induced cleaning. In the end, HV was restored (P < 5 × 10 −6 mbar) and both spots "1" and "2" were measured again. Clearly, on spot "1," the effect of beam-induced cleaning became apparent [green spectrum in Fig. 5(a)] as the carbon coverage is reduced from 1.7 to 1.2 ML, while on spot "2," it has further increased to 2.8 ML. A similar experiment was performed also at 14 mbar H 2 O plus 14 mbar O 2 (total pressure of 28 mbar), again with soft x-ray illumination (1000 eV, data in Fig. S5 38 ). The results are consistent with the 1 mbar exposure shown in Fig. 5, where in contrast to what is observed when using tender x rays [see Fig. S1(b) 38 ], the carbon contamination is not completely removed. D. In situ versus ex situ sample preparation The method of preparation for a sample will undoubtedly have an influence on the surface conditions. This could be particularly important when comparing in situ and ex situ preparation. As an alternative method to the standard sputter-anneal cycles for surface cleaning, ex situ irradiation of TiO 2 (110) with UV light was performed in the presence of de-ionized water and a slight oxygen flow. This method was observed to produce clean surfaces 15 and to achieve a superhydrophilic TiO 2 (110) surface as verified by visual inspection of the water contact angle in air. Using this method, a sample was prepared and transferred through air to the experimental chamber, which was pumped to HV before performing XPS measurements. The time of exposure to air was less than half an hour. The results in Fig. 6 show that there is no significant difference in terms of carbon contamination between in situ and ex situ preparations (4.4 vs 4.2 ML). The comparison of LEED patterns for the two preparations (see Fig. 
S4)38 finds them identical and leads to the same conclusion. We note, however, that the ex situ UV-irradiated sample had been previously prepared by means of UHV sputter-anneal cycles before exposure to air. The UV irradiation under humid and oxygen-rich conditions15 on a previously cleaned TiO₂(110) surface resulted mainly in the removal of carbon contamination on an otherwise well-defined surface. The similar level of contamination observed after the two different cleaning procedures could be attributed to a very thin coating of clean water formed on the titanium dioxide surface during ozone cleaning, associated with the superhydrophilicity exhibited by clean TiO₂. We speculate that this thin water film protects the surface from air-induced contamination during transfer to the UHV chamber. It is known that even at very low relative humidity, a monolayer of water forms on clean TiO₂(110).22 Once the sample is reintroduced into the UHV system, this liquid layer is quickly desorbed, leaving a relatively clean surface behind.

IV. DISCUSSION

The removal of organic contaminants from a TiO₂(110) surface was studied by Zubkov et al.,34 finding the need for exposure to both oxygen and water vapor to achieve a clean, hydrophilic surface. It was also found that the UV irradiation time needed to achieve a clean surface depended on the amount of contaminants. However, this view may be too simple based on the recent studies of clean TiO₂(110) exposed to ultrapure water, where no hydroxylation of the surface was observed,27 although the hydroxyl groups are generally agreed to be the binding sites for molecular water on metal-oxide surfaces.9,23,47 The discrepancy might be caused by the formation of oxidized carbonaceous species that are often inherently hydrophilic.9

In our experiments, we can observe the role of oxygen in a water vapor environment toward reducing the amount of surface carbon under x-ray illumination. As the data shown in Fig. 4 and Fig. S138 differ based on the pressure of oxygen in the chamber, it follows that the relative abundance of O₂ and H₂O in the gas phase sets the limit for the carbon-removing processes. The incident radiation then appears responsible both for a build-up of carbon over time, from the decomposition of residual carbon species originating from the gas phase,11-13 and for its removal in the presence of water and oxygen by the creation of active oxygen radical species, as determined in Fig. 5.

While the environmental conditions in the experiments shown in Figs. S1 and S538 are nominally identical, the difference in beam energy might play a key role. In vacuum, the beam fluxes for the PHOENIX I and In Situ Spectroscopy beamlines are comparable, but the transmission of photons at these pressures depends on the photon energy. In 30 mbar water vapor, the transmission for 4 keV photons is approximately 96% over a distance of 20 cm, while it is only 36% at 1 keV, with stronger attenuation for longer distances. This will result in a significantly lower flux of x rays impinging on the surface and thus in fewer secondary electrons.16 The generation of oxygen radicals in the gas phase in close proximity to the surface will then be significantly reduced. Using the more bulk-sensitive x rays, the presence of 30 mbar of the gas phase leads to a lower sensitivity for low levels of carbon contamination.
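The two transmission values quoted above fix effective attenuation coefficients, from which the stated distance dependence follows. A sketch (Beer-Lambert behaviour assumed; the coefficients are back-calculated from the quoted numbers rather than from tabulated cross sections):

```python
import math

# Quoted transmissions through 30 mbar of water vapor over 20 cm:
T_4keV, T_1keV, L_ref_cm = 0.96, 0.36, 20.0

# Effective linear attenuation coefficients:
mu_4keV = -math.log(T_4keV) / L_ref_cm   # ~2.0e-3 per cm
mu_1keV = -math.log(T_1keV) / L_ref_cm   # ~5.1e-2 per cm

for L in (10.0, 20.0, 40.0):
    print(f"L = {L:4.0f} cm: T(4 keV) = {math.exp(-mu_4keV * L):.2f}, "
          f"T(1 keV) = {math.exp(-mu_1keV * L):.2f}")
# At 40 cm the 1 keV transmission drops to ~0.13 while 4 keV stays near 0.92,
# illustrating the "stronger attenuation for longer distances" noted above.
```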
We estimate that for the given background noise in Fig. S1(b),38 any coverage below 2 ML would not be visible for a region measurement lasting approximately half an hour. Another possibility is that the number of secondary electrons resulting from the more energetic photoelectrons leads to a more efficient oxygen radical formation and thus removal of carbon from the surface. In this case, one should expect a difference in the reaction pathway between UV and x-ray illumination. While UV light will create ozone by photolysis of oxygen, the cleaning effect that follows x-ray irradiation will likely come from the extracted photoelectrons and secondary electrons entering the gas phase.48 Since tender x rays can excite higher-energy photoelectrons compared to soft x rays, this could reasonably lead to a higher number of energetic secondary electrons originating from ionization cascades moving into the gas phase and the liquid phase, similarly to what happens in the gas cascade amplification known in the field of environmental scanning electron microscopy.48 In the absence of gas-phase oxygen, the dissociation of water under x-ray illumination appears not to provide any significant cleaning effect: this can be attributed to the high reactivity of ozone and oxygen radicals with the double and triple C-C bonds of the adventitious contamination.49

FIG. 6. XPS data in the C 1s region measured in HV for TiO₂(110) as prepared by UHV sputter-anneal cycles (red) and by ex situ ozone cleaning (black) using the setup described in Ref. 15. The C 1s signal is normalized to the Ti 2p intensity.

Experiments with pure water previously performed by Balajka et al. demonstrated that clean TiO₂(110) can be exposed to liquid water with no carbon contamination present on the surface.27
X-ray illumination has a significant impact on the build-up of carbon over time, while the initial level of contamination is determined by the environmental conditions. The impact of beam-induced carbon build-up can be minimized by periodically moving the sample to illuminate a fresh surface area. It is also possible to introduce molecular oxygen to remove surface carbon under x-ray illumination, provided the sample is not sensitive to oxidizing conditions. The effectiveness of this process appears strongly related to the impinging photon energy, photon fluence, and ambient gas pressure, while the kinetics of the process are fast on XPS data-acquisition timescales (as in Fig. 4). These experiments, while not spanning all possible environmental and experimental conditions, provide insight into processes that can influence near-atmospheric-pressure experiments and suggest mitigation scenarios for some of these phenomena.
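The periodic-relocation mitigation named above lends itself to simple automation. The following is a minimal, hypothetical sketch of planning a serpentine grid of fresh sample spots with a capped dwell time per spot; the grid dimensions, pitch, and any motor-control layer are assumptions for illustration, not part of the original experimental setup.

```python
def serpentine_grid(nx, ny, pitch_mm):
    """Yield (x, y) offsets in mm covering an nx-by-ny grid in serpentine order."""
    for j in range(ny):
        xs = range(nx) if j % 2 == 0 else range(nx - 1, -1, -1)
        for i in xs:
            yield (i * pitch_mm, j * pitch_mm)

def plan_measurement(total_time_s, max_dwell_s, nx=5, ny=5, pitch_mm=0.5):
    """Split a measurement into dwell intervals, each acquired on a fresh spot."""
    spots = list(serpentine_grid(nx, ny, pitch_mm))
    n_moves = -(-int(total_time_s) // int(max_dwell_s))  # ceiling division
    if n_moves > len(spots):
        raise ValueError("grid too small for the requested beam-damage budget")
    return [(spots[k], min(max_dwell_s, total_time_s - k * max_dwell_s))
            for k in range(n_moves)]

# Example: a 30 min region scan, limiting the dose to 5 min per spot.
for (x, y), dwell in plan_measurement(total_time_s=1800, max_dwell_s=300):
    print(f"move to ({x:.1f}, {y:.1f}) mm, acquire for {dwell:.0f} s")
```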
Development of chicken lymphoid system. I. Synthesis and secretion of immunoglobulins by chicken lymphoid cells. Synthesis and secretion of Ig by chicken lymphoid cells was studied. Both spleen and bursa cells synthesize and secrete IgM and IgG whereas Ig was not detected in thymus cells. In contrast to the spleen cells which synthesize H and L chains in balanced quantities, the bursa cells synthesize and secrete free L chains. In addition to the lymphoid cells which secrete IgM or IgG, the bursa appears to contain a cell population which synthesizes nonsecretory Ig. The structure of this Ig was studied by specific serological precipitation and by SDS-acrylamide gel electrophoresis. The H chains of this nonsecretory Ig are serologically related to μ-chains and exhibit a smaller molecular weight (i.e., approximately 50,000) in SDS-acrylamide gel electrophoresis than the H chains of IgG and IgM synthesized by the spleen cells (i.e., approximately 70,000). Wis.). The age of the chickens used for the experiments was 6-12 wk. For immunization, chickens were injected subcutaneously with 5 mg of bovine gamma globulin (BGG, Sigma Chemical Co., St. Louis, Mo.) in 0.5 ml of complete Freund's adjuvant every 2 wk. A total of three to four doses was given to each animal. Lymphoid cells were obtained from the chicken 3-5 days after the last dose.

Incorporation of Radioisotopes.--Experiments on the kinetics of Ig synthesis and secretion were performed with cell suspensions prepared from the lymphoid tissues as previously described in the experiments with mouse myeloma cells (4). It is essential for defining precursor-product relations that the cells maintain a constant rate of protein synthesis during the length of time needed to establish a state of equilibrium. When the bursa cells were incubated at a concentration of 2 × 10⁷ cells/ml in a leucine-less Eagle's medium (5), containing 5.0% fetal calf serum and 20 μCi/ml L-leucine-(4,5-³H) (40 Ci/mmole; Schwarz Bio Research Inc., Orangeburg, N.Y.), the rate of protein synthesis was constant for at least 6 hr. Incubations were performed in a humid tissue culture incubator at 37°C in 15% CO₂-85% air.

For continuous-labeling experiments, the cell suspensions were prewarmed to 37°C before adding radioactive amino acids. Aliquots were then distributed into individual Petri dishes and placed in the incubator. At each sample time the incubation mixture was transferred to a centrifuge tube, quickly chilled in an ice-water bath, and centrifuged at 3000 g for 10 min to separate the cells from the supernatant. The cell pellet was suspended in 0.05 M tris(hydroxymethyl)aminomethane (Tris)-HCl (pH 7.6, 4°C)-0.025 M KCl-0.005 M MgCl₂ (TKM), lysed by adding Nonidet P-40 (NP-40; Shell Chemical Co., New York) to a final concentration of 0.5%, and the nuclei and ribosomes were removed by centrifuging at 105,000 g for 120 min at 4°C (6). Both the cell lysate and supernatant fractions were then divided into three aliquots. Two aliquots were assayed serologically and the remaining one was used for determining trichloroacetic acid (TCA)-precipitable radioactivity.
The lymphoid cells from the spleen were prepared differently. The cell suspensions prepared by mechanical disintegration of the tissue and filtration were brought to a concentration of ~10⁸ cells/ml and allowed to stand at 37°C for 45-60 min to sediment erythrocytes. The cells which did not sediment with the erythrocytes were used for the incorporation experiment. The concentration of the spleen cells used for the kinetic experiment was 2 × 10⁷ cells/ml. Preliminary experiments showed that this concentration of the cell suspension exhibited the most efficient incorporation of radioactive amino acids with a linear rate of protein synthesis.

Preparation of Antisera and Serological Assay.--Antisera against chicken Ig were prepared in rabbits by injecting purified chicken IgG and IgM. For immunization, chicken IgM and IgG were prepared as follows: adult chickens were stimulated by injections of DNP-Brucella (about 10¹⁰ organisms per chicken) in complete Freund's adjuvant (7). Immune chicken serum was absorbed with DNP-Sepharose and eluted with 0.2 M glycine-HCl buffer (pH 2.8) (8,9). IgM was separated from IgG by gel filtration through a Sephadex G-200 column (2.5 × 100 cm, Pharmacia Fine Chemicals Inc., Uppsala, Sweden) in 0.015 M Tris-HCl buffer (pH 7.4) with 0.14 M NaCl (19). Chicken Ig thus purified exhibited a single band in immunoelectrophoresis against a rabbit antiserum prepared against whole chicken serum. The purity of IgM and IgG was further verified by demonstrating a single band with rabbit antisera prepared against these purified antigens.

Anti-μ antiserum was prepared from anti-IgM by a solid immune adsorption technique with polymerized chicken IgG by the method of Avrameas and Ternynck (10). Anti-γ antiserum was also prepared by the above method.

Anti-KLH (keyhole limpet hemocyanin) antiserum was prepared by injecting rabbits with KLH prepared as previously described by Campbell et al. (11). This antiserum did not react with any chicken serum component in double-diffusion agar.

Antiserum to rabbit IgG was prepared in goats. The rabbit IgG used as antigen was prepared by the method of Fleischman et al. (12).

Chicken Ig labeled with radioactive amino acids were complexed with an excess of rabbit anti-chicken IgM serum and the complexes were precipitated by goat anti-rabbit IgG. This indirect precipitation technique was used throughout. Titrations of the rabbit anti-chicken Ig and of the goat anti-rabbit IgG were performed using leucine-³H-labeled Ig that had been secreted by the chicken spleen cells. The detailed method of preparation of serological precipitates for quantifying radioactive antigen was described elsewhere (13).
Acrylamide Gel Analysis.--The method of sodium dodecyl sulfate (SDS) acrylamide gel electrophoresis of serological precipitates was described in detail elsewhere (13). The immune precipitates were collected by centrifugation, washed three times with phosphate-buffered saline at 4°C, and dissolved in 0.3 ml of 10 M urea-1% SDS-0.5 M Tris (pH 8.5). For reduction of precipitates, 2-mercaptoethanol was added to a final concentration of 0.2 M and the mixture was incubated for 3 hr at 37°C. Iodoacetamide, 0.5 M in 2 M Tris, pH 8.5, was added for alkylation to a final concentration of 0.25 M and incubation was continued for 60 min at 37°C. The samples were dialyzed overnight at room temperature against 0.01 M phosphate buffer, pH 7.2, containing 0.1% SDS and 0.5 M urea. Reduced and alkylated samples were dialyzed against the same buffer with 0.2 M 2-mercaptoethanol. Dialyzed samples of 0.05-0.1 ml were electrophoresed in acrylamide gels at 8 mA/gel for 3.5 hr and fractionated, and the radioactivity was counted as described previously (14,15).

Serological Precipitation of Leucine-³H-Labeled Cytoplasmic Extract and Acrylamide Gel Electrophoresis.--The serological assay using indirect precipitation was shown to be very useful in quantifying Ig synthesized by mouse myeloma cells (13,16,17). In contrast to the myeloma cells, which synthesize and secrete relatively large quantities of monoclonal Ig, nonspecific radioactivity precipitated by anti-KLH was significantly high when the serology was performed with radioactive proteins of the normal lymphoid cells. Hence, the specificity of the serological assay was carefully studied by quantifying the radioactivity precipitated by specific and nonspecific antisera and by analyzing the serologically precipitated proteins by acrylamide gel electrophoresis as described below.

An aliquot of the cell suspension prepared from the spleen was labeled with leucine-³H for 3 hr and then centrifuged at 3000 g for 10 min to separate the cells from the incubation media. The cell pellet was suspended in TKM buffer, solubilized in 0.5% NP-40 TKM, and the nuclei were separated from the cytoplasm by centrifuging at 3000 g for 10 min (6). The cytoplasm was further centrifuged at 105,000 g for 2 hr to sediment the ribosomes. The resultant cell extract and the incubation media containing the secreted proteins were both subjected to serological precipitation and acrylamide gel electrophoresis. As shown in Fig. 1 a, no distinct peak was observed in the gel of nonspecific precipitates, whereas the specific precipitates showed a distinct peak of chicken Ig as well as background radioactivity. To eliminate the background radioactivity, nonspecific precipitation with anti-KLH serum was first performed, and then the supernatant of this nonspecific serology was subjected to specific precipitation with anti-chicken IgM. Acrylamide gel electrophoresis of this specific precipitate exhibited more remarkable peaks of IgM and IgG (Fig. 1 c).

FIG. 1. Specificity of serological precipitation assay analyzed by acrylamide gel electrophoresis. Intracellular proteins were labeled with leucine-³H by incubating the spleen cells for 3 hr as described in Materials and Methods. The detergent-soluble fraction of the cells was subjected to serological precipitation and analyzed by SDS-acrylamide gel electrophoresis. Fractions were numbered from the negative to the positive electrode. Specific precipitates with anti-IgM; nonspecific control precipitates with anti-KLH. (a) comparison of specific and nonspecific precipitates; (b) reduction (R) and alkylation (A) of the specific precipitate from (c); (c) specific precipitate of the supernatant fraction obtained after nonspecific precipitation with anti-KLH.
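The specific-minus-nonspecific subtraction used in this assay is plain arithmetic, but an uncertainty estimate helps when counts are low. Below is a minimal sketch assuming Poisson counting statistics on the raw counts; the numbers are hypothetical.

```python
import math

def net_ig_counts(specific_counts, nonspecific_counts):
    """Background-corrected Ig radioactivity with a Poisson error estimate.

    Assumes raw counts dominate the uncertainty, so sigma(N) ~ sqrt(N),
    and the errors of the two tubes add in quadrature.
    """
    net = specific_counts - nonspecific_counts
    sigma = math.sqrt(specific_counts + nonspecific_counts)
    return net, sigma

# Hypothetical counts from paired specific (anti-IgM) and control (anti-KLH) tubes.
net, sigma = net_ig_counts(specific_counts=5200, nonspecific_counts=800)
print(f"Ig-attributable counts: {net:.0f} +/- {sigma:.0f}")
```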
This "clean-up" procedure did not result in loss of specific radioactivity attributable to loss of Ig. The Ig peaks were further verified by acrylamide gel electrophoresis after reduction and alkylation of the same serological precipitates, which showed that more than half of the radioactivity found in IgM and IgG was recovered in the heavy (H) and light (L) chain peaks (Fig. 1 b).

The molecular weights of IgM, IgG, H, and L chains were also determined on SDS-acrylamide gels (Fig. 2) (18). The molecular weights thus determined were as follows: IgG, ~170,000; H chain, ~70,000; L chain, ~23,000. IgM did not migrate into the gel. These results agree with the values previously reported for chicken Ig purified from serum (19).

Based upon the above experimental results, leucine-³H-labeled Ig were quantified by a two-step serological assay as follows. An aliquot of leucine-³H-labeled cytoplasm which had been solubilized with NP-40 was first subjected to nonspecific precipitation. The supernatant from the above was divided into two aliquots for specific and nonspecific serology. The difference in radioactivity between them was assumed to represent the Ig. Nonspecific radioactivities were found to be very small in the secreted proteins, which contained more than 75% of the leucine-³H-labeled proteins as Ig.

FIG. 3. Kinetics of incorporation of leucine-³H into trichloroacetic acid-precipitable material and Ig of the spleen cells. A cell suspension (2 × 10⁷ cells/ml) in leucine-less Eagle's medium containing 5% fetal calf serum was incubated at 37°C with leucine-³H (16 μCi/ml). Aliquots of 1 ml each were distributed into Petri dishes and the experiment was performed as described in Materials and Methods. (a) trichloroacetic acid-precipitable material; (b) serologically precipitable Ig.

Synthesis and Secretion of Ig by the Spleen Cells.--The kinetics of synthesis and secretion of Ig were studied by incubating a cell suspension with leucine-³H to label the newly synthesized proteins. As shown in Fig. 3 a, the incorporation of leucine-³H into trichloroacetic acid-precipitable material proceeds at a constant rate for 5 hr. The conditions of incorporation were chosen after preliminary experiments varying the cell concentration and fetal calf serum content in the incubation mixture in order to ensure a linear synthesis of cellular proteins. The kinetics of incorporation of leucine-³H into Ig were different from incorporation into total trichloroacetic acid-precipitable material (Fig. 3 b). The amount of labeled Ig inside the cell increased without lag for about 3 hr and then remained constant, indicating saturation of the intracellular pool. Labeled Ig were detectable in the medium outside the cell after a lag of 30 min. The rate of secretion increased and became constant after 2 hr. After 3 hr, the amount of secreted Ig became larger than the intracellular pool. In the medium outside the cells, Ig accounted for 75-80% of the total trichloroacetic acid-precipitable radioactivity.

Table I shows the relative amounts of Ig and total proteins synthesized and secreted by the spleen and bursa cells at 4 hr of incubation. In the spleen of the chicken stimulated with BGG, 74% of the secreted proteins and 25% of the intracellular proteins were serologically precipitable Ig, which is comparable to that of mouse myeloma cells (13). When the chicken was not stimulated with BGG, the amounts of Ig in the secreted and intracellular proteins were lower (i.e., 50 and 5%). Such antigenic stimulation, however, had no demonstrable effect on the capacity of the bursa cells to synthesize and secrete Ig (Table I). Synthesis of Ig by the bursa cells will be further discussed below. The high specific activity of Ig in the secreted proteins was also verified by analyzing the secreted material on SDS-acrylamide gel (Fig. 4 b). A protein peak of the molecular size of IgG (fractions No. 12-14) is the major component of the secreted material.
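The molecular-weight estimates quoted above (Fig. 2) rest on the standard SDS-gel calibration in which log(MW) is approximately linear in relative electrophoretic mobility. A minimal sketch with hypothetical marker proteins follows; the marker values are illustrative, not those used in the paper.

```python
import math

def fit_log_mw(mobilities, mol_weights):
    """Least-squares line log10(MW) = a*mobility + b from marker proteins."""
    n = len(mobilities)
    ys = [math.log10(mw) for mw in mol_weights]
    mx, my = sum(mobilities) / n, sum(ys) / n
    a = sum((x - mx) * (y - my) for x, y in zip(mobilities, ys)) \
        / sum((x - mx) ** 2 for x in mobilities)
    return a, my - a * mx

def estimate_mw(mobility, a, b):
    """Read an unknown's molecular weight off the calibration line."""
    return 10 ** (a * mobility + b)

# Hypothetical marker set: (relative mobility, molecular weight).
markers = [(0.20, 130_000), (0.40, 68_000), (0.60, 36_000), (0.80, 18_000)]
a, b = fit_log_mw([m for m, _ in markers], [w for _, w in markers])
for rf in (0.33, 0.72):
    print(f"Rf = {rf:.2f} -> MW ~ {estimate_mw(rf, a, b):,.0f}")
```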
This peak was shown to be IgG by acrylamide gel electrophoresis of serological precipitates with anti-Ig (Fig. 4 a).

Synthesis and Secretion of Ig by the Bursa Cells.--Fig. 5 shows the kinetics of synthesis and secretion of Ig by the bursa cells, which differ from those of the spleen cells (Fig. 3 b). The intracellular Ig pool of the bursa cells was not saturated with leucine-³H-labeled Ig and continued to increase throughout the incubation period. The amount of secreted Ig did not become greater than that synthesized inside the cells, suggesting that part of the Ig synthesized by the bursa cells was not secreted. The small amounts of specific radioactivity which were not secreted are Ig (Table I). The specific radioactivity could always be reduced to the level of that precipitated with nonspecific antisera by adding excess purified IgM to the serological reaction, verifying the specificity of the serological assay in lymphoid cells where very small amounts of Ig are synthesized. As shown in the following section, the Ig which is not secreted from the bursa cells has a different structure from that which is secreted.

Structural Characteristics of Ig Synthesized by the Bursa Cells.--Ig synthesized by the bursa cells were studied by SDS-acrylamide gel electrophoresis and compared with Ig synthesized by the spleen cells using ³H/¹⁴C double labeling. The lymphoid cells from the bursa were incubated with leucine-³H for 3-4 hr to label all intracellular Ig, and the spleen cells were labeled with L-leucine-¹⁴C(U) (262 mCi/mmole; New England Nuclear Corp., Boston, Mass.). The radioactive proteins synthesized and secreted by the two populations of lymphoid cells were mixed in an adequate ³H/¹⁴C ratio, subjected to serological precipitation with anti-IgM, and analyzed by SDS-acrylamide gel electrophoresis.

FIG. 5. Kinetics of incorporation of leucine-³H into trichloroacetic acid-precipitable material and Ig of the bursa cells. Incorporation conditions are the same as those used for the spleen cell suspension (Fig. 3). (a) trichloroacetic acid-precipitable material; (b) serologically precipitable Ig.

Fig. 6 compares the acrylamide gel electrophoresis of the Ig secreted by the spleen and bursa cells. Compared to the spleen cells, which secrete Ig with a 19S/7S ratio of 0.24 (Table II), the bursa cells apparently secrete Ig with a 19S/7S ratio of 0.36, as well as a significant amount of free L chain. After reduction and alkylation of the secreted Ig, the H/L ratio of the spleen cells was 3.0 and that of the bursa cells 1.3, suggesting that the bursa cells secrete twice as many light chains per unit of Ig as do the spleen cells. A similar result was also observed with the intracellular Ig synthesized by these lymphoid cells (Table II). Fig. 7 shows a similar analysis of the intracellular Ig. Compared to the spleen cells, which synthesize and secrete only 7S and 19S Ig, the bursa cells synthesize small amounts of Ig which migrate between 7S and 19S, as well as free L chains. No free H chains were detected inside the cell. The peak next to the L chains was found to be L chain dimer by serology with anti-L chain antiserum. The absence of detectable free L chains in the cytoplasm and secreted material of the spleen cells suggests that the synthesis of H and L chains is balanced in the spleen cells but not in the bursa cells (20). When the serological precipitates were reduced and alkylated, the bursa Ig exhibited extra subunits of molecular size between H and L chains, as seen by the three peaks in Fig. 7 a.
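H/L ratios of this kind come from integrating the per-fraction radioactivity over the H- and L-chain peak windows of the gel profile. A minimal sketch of that bookkeeping, with a hypothetical 30-fraction profile and window positions:

```python
def peak_ratio(counts, h_window, l_window):
    """Ratio of summed radioactivity in H- vs L-chain gel fractions.

    counts: per-fraction counts (negative -> positive electrode);
    windows are (start, stop) fraction indices, stop exclusive.
    """
    h = sum(counts[h_window[0]:h_window[1]])
    l = sum(counts[l_window[0]:l_window[1]])
    return h / l

# Hypothetical gel profile: 30 fractions, H peak near 10-14, L peak near 22-25.
profile = ([50] * 10 + [900, 1500, 1800, 1200, 600] + [60] * 7
           + [400, 700, 650, 300] + [40] * 4)
print(f"H/L ratio ~ {peak_ratio(profile, (10, 15), (22, 26)):.1f}")
```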
This is not believed to be the result of incomplete reduction, because the spleen intracellular Ig used as an internal standard was completely reduced and alkylated. Furthermore, this component was not detected in the Ig secreted by the bursa cells (Fig. 6 a). Such a structural difference between the Ig inside and outside the bursa cells was not observed in the spleen cells, which exhibited an identical pattern of Ig, as shown in Fig. 8. In the spleen, both 19S (fractions No. 1-2) and 7S Ig (fractions No. 10-12) can be completely reduced and alkylated into H and L chain peaks without a sign of a third peak between them.

We looked for the origin of this middle peak by determining whether it is derived from IgM or IgG, as described below. A batch of intracellular Ig of the bursa cells was labeled for 3 hr with leucine-³H, serologically precipitated with anti-γ or anti-μ, and analyzed on SDS-acrylamide gel. The specificity of the H chain-specific antisera used in this experiment was monitored by introducing leucine-¹⁴C-labeled Ig secreted by the spleen cells into the reaction mixture. Fig. 9 a shows the acrylamide gel electrophoresis analysis of serological precipitates with anti-γ, which revealed only IgG in the 7S region and no significant amount of radioactivity at the top of the gel or in the region of L chains. When reduced and alkylated (Fig. 9 b), H and L chain peaks were produced with an H/L ratio of 2.3, which is almost identical to that of the spleen Ig (Table II).

With anti-μ, we were able to detect not only 19S IgM (gel fractions No. 1-2) but also 7S IgM (gel fractions No. 10-13) in acrylamide gel electrophoresis (Fig. 9 c). In contrast to the reduction product of IgG, these IgM peaks produced a third peak (i.e., H₀) migrating between H and L chains when fully reduced and alkylated (Fig. 9 d).

We tried to determine whether this middle peak of IgM is derived from H or L chains by estimating the H/L ratio. Since no free L chains were precipitated with these antisera, we can assume that all L chains recovered after reduction were derived from those bound to H chains in Ig. As summarized in Table III, the H/L ratio of the bursa IgG is 2.7, which is very close to the H/L ratio of the spleen Ig (Table II). In contrast to IgG, the H/L ratio of the bursa IgM is 1.5. The (H + H₀)/L ratio of the bursa IgM is, however, 2.3, which is almost identical to that of the bursa IgG or the spleen Ig. From this calculation, we conclude that this middle peak, the H₀ component with a molecular weight of 50,000, is related to μ-chains rather than to L chains. At the present moment, we have not determined whether it is derived from 7S or 19S IgM.

A similar incorporation experiment with the thymus cells failed to show any significant radioactivity serologically precipitable with anti-IgM. No evidence of H and L chain synthesis was detected in the thymus by acrylamide gel electrophoresis of the serological precipitates.

DISCUSSION

Biosynthesis and secretion of Ig by normal lymphoid cells was studied in the chicken, which has anatomically distinct lymphoid tissues (1,21). From the observations made in this study the following conclusions may be drawn. (a) The majority of the proteins secreted by the spleen cells (i.e., more than 70%) were serologically precipitable Ig (Fig. 4 b);
(b) the relative amount of Ig produced by the spleen cells varies depending upon the state of immunization, while immunization does not affect the capacity of the bursa cells to synthesize Ig (Table I); (c) a population of the bursa cells apparently accumulates Ig inside the cells which may not be secreted (Fig. 5); (d) the Ig which is not secreted from the bursa cells is shown to be IgM by serological precipitation and SDS-acrylamide gel electrophoresis (Fig. 7); (e) the H chains of this IgM appear to be faster-migrating H₀ chains which lie between H and L chains (Fig. 9).

We came to the conclusion that the subunit (H₀) is a faster-migrating H chain rather than an L chain, based upon the result of calculating the H/L ratio of IgM precipitated by anti-μ serum (Fig. 9 and Table III). It is possible that the difference in migration in acrylamide gel electrophoresis between the two H chains is a consequence of a lack of carbohydrate residues rather than of significant differences in protein size (22). H₀ chains may have less protein-bound carbohydrate than H chains of IgM or IgG. The carbohydrate composition of these H chains is currently being examined. The presence of two forms of intracellular heavy chain with different carbohydrate compositions has been reported in IgM-producing mouse myeloma cells (16,22). Ig which were precipitable with anti-μ serum and which migrated between 8S and 19S have been reported in malignant human lymphocytes (23). 7S IgM may be the intracellular precursor of the fully assembled 19S protein before secretion (24). We doubt, however, that 7S IgM composed of H₀ and L chains is also such a precursor, because the H₀ chain was not detected in the secreted proteins when fully reduced. It appears to us that the bursa may contain three different clones of lymphoid cells synthesizing Ig: one with nonsecretory IgM, the second with 19S IgM, and the third with 7S IgG. During development of lymphoid cells in the bursa it was shown that IgG-producing cells arise from a clone of IgM-producing cells (25,26). We rather believe that IgM containing H₀ chains as H chains is synthesized by the clone of cells which may be the precursor of the 19S IgM-producing cells. It could even be suggested that the cells synthesizing nonsecretory Ig may represent a precursor common to both the lymphoid cells secreting IgM and those secreting IgG. The previous studies by Kincade et al. (26) could not distinguish between IgM with H and IgM with H₀. The ontogenetic relation among the three clones mentioned above will be further studied by analyzing the relative proportion of the two kinds of H chains of IgM at various ages of chick embryo.³ Nonsecretory Ig has been reported in Burkitt lymphoma cell lines (27,28).
It has been previously suggested by Klein et al. (27) that these malignant cell lines may have arisen from normal lymphocytes which do not secrete Ig. Our experiments clearly indicate that such cell lines may be derived from bursa-dependent or bone marrow-dependent B cells, which are the precursors of the plasma cells secreting Ig, rather than from thymus-dependent T cells (1,29). Among the several classes of Ig detected on the surface of antigen-binding cells, μ-chains are reported to be the predominant H chains (29-34). We found that, when the chicken was sensitized with BGG, the bursa cells do not synthesize antigen-binding antibody, while the spleen cells of the same chicken synthesize and secrete 19S IgM and 7S IgG antibodies.³

SUMMARY

Synthesis and secretion of Ig by chicken lymphoid cells was studied. Both spleen and bursa cells synthesize and secrete IgM and IgG, whereas Ig was not detected in thymus cells. In contrast to the spleen cells, which synthesize H and L chains in balanced quantities, the bursa cells synthesize and secrete free L chains. In addition to the lymphoid cells which secrete IgM or IgG, the bursa appears to contain a cell population which synthesizes nonsecretory Ig. The structure of this Ig was studied by specific serological precipitation and by SDS-acrylamide gel electrophoresis.

² Choi, Y. S., and R. A. Good. Manuscript in preparation.
³ Choi, Y. S., and R. A. Good. Manuscript in preparation.

FIG. 6. Acrylamide gel electrophoresis of Ig secreted by the spleen and bursa cells. Leucine-³H-labeled Ig secreted by the bursa cells was mixed in an adequate ³H/¹⁴C ratio with leucine-¹⁴C-labeled Ig secreted by the spleen cells, serologically precipitated with anti-IgM, and analyzed by SDS-acrylamide gel electrophoresis. (a) reduced and alkylated Ig; (b) total Ig.

FIG. 9. Acrylamide gel electrophoresis of leucine-³H-labeled intracellular Ig of the bursa cells. ¹⁴C-labeled Ig secreted by the spleen cells was used as an internal control for serological specificity and acrylamide gel electrophoresis. (a) total IgG precipitated by anti-γ; (b) reduced and alkylated IgG; (c) total IgM precipitated by anti-μ; (d) reduced and alkylated IgM.

TABLE I. Leucine-³H incorporation into immunoglobulins and total proteins. * Per 1.5 × 10⁷ cells. ‡ Spleen cells from the chicken which had been sensitized by bovine gamma globulins.

TABLE II. Immunoglobulins synthesized and secreted by chicken lymphoid cells.

TABLE III. Subunits of immunoglobulins synthesized by the bursa cells. * Prepared from the data of Fig. 9.
Meta-analysis of the association between emphysematous change on thoracic computerized tomography scan and recurrent pneumothorax Summary Objectives At least a third of patients go on to suffer a recurrence following a first spontaneous pneumothorax. Surgical intervention reduces the risk of recurrence and has been advocated as a primary treatment for pneumothorax. But surgery exposes patients to the risks of anaesthesia and in some cases can cause chronic pain. Risk stratification of patients to identify those most at risk of recurrence would help direct the most appropriate patients to early intervention. Many studies have addressed the role of thoracic computerized tomography (CT) in identifying those individuals at increased risk of recurrence, but a consensus is lacking. Aim Our objective was to clarify whether CT provides valuable prognostic information for recurrent pneumothorax. Design Meta-analysis. Methods We conducted an exhaustive search of the literature for thoracic CT imaging and pneumothorax, and then performed a meta-analysis using a random effects model to estimate the common odds ratio and standard error. Results Here, we show by meta-analysis of data from 2475 individuals that emphysematous change on CT scan is associated with a significantly increased odds ratio for recurrent pneumothorax ipsilateral to the radiological abnormality (odds ratio 2.49, 95% confidence interval 1.51-4.13). Conclusions The association holds true for primary spontaneous pneumothorax when considering emphysematous changes including blebs and bullae. Features such as bullae at the azygoesophageal recess or an increased Goddard score similarly predicted recurrent secondary pneumothorax, as shown by subgroup analysis. Our meta-analysis suggests that CT scanning has value in risk stratifying patients considering surgery for pneumothorax.

Introduction

Spontaneous pneumothorax is a frequent presentation to respiratory services. In some cases, little or no intervention is required, but individuals with recurrent pneumothorax benefit from surgical intervention. Predicting who will suffer recurrences and therefore require pre-emptive surgery remains challenging. The majority of spontaneous pneumothoraces appear to occur when cystic air spaces beneath the visceral pleura rupture. 1 When smaller than 1-2 cm in diameter these lesions are called blebs, while larger subpleural cysts are called bullae. Emphysematous lesions can occur in the otherwise healthy lungs of tall, thin individuals (the asthenic habitus) and can rupture to cause primary spontaneous pneumothoraces (PSPs). 2 By thoracic computerized tomography (CT), emphysematous lesions are seen in 80% of patients with PSP compared with 30% of healthy controls. 3 When blebs and bullae form owing to an underlying pulmonary pathology, most frequently smoking-induced pulmonary emphysema, they can give rise to secondary spontaneous pneumothoraces (SSPs). 4 The recurrence rate at 5 years following pneumothorax has been estimated to be around 30% overall, but 39% for those with underlying chronic lung disease. 5-7 Although there are only limited data concerning the risk of recurrence following a second pneumothorax, 5 treatment guidelines recommend that procedures to reduce the likelihood of further recurrences be considered. 8,9 In paediatric practice, this tends to involve surgical removal of blebs and bullae, while in adults this is combined with surgical obliteration of the pleural space by pleurectomy or talc poudrage. 8
In those considered unfit for surgery, chemical pleurodesis with sclerosants is an option, most commonly using graded talc, although autologous blood is sometimes used. Some authors have advocated the use of pleurodesis as a primary intervention even following a first pneumothorax, 10 but owing to the generally benign course of PSPs and the potential for operative complications including chronic pain, others recommend deferring surgery until a recurrence has occurred or if the initial air leak fails to resolve within a week. 8,9 If recurrence could be accurately predicted, then primary surgery would be a more attractive option for selected patients. Underweight individuals appear to be at an increased risk of recurrence, but this is not highly discriminating in a population with typically low body mass indices. 11 Whether radiological appearance can predict recurrence has remained controversial, with reports suggesting a variety of potential prognostic features, while other reports suggest none exist. 12-15 Cross-sectional imaging by CT can identify emphysematous lesions including blebs and bullae even when the plain chest X-ray appears normal. 3,16 It is also noteworthy that blebs and bullae seem not to account for all PSPs, with increased 'pleural porosity' having been proposed as an alternative cause. 1,17 Current British Thoracic Society (BTS) guidance is that surgical intervention be offered following the first recurrence of a pneumothorax. 8 Surgery following a first pneumothorax tends to be restricted to those who have suffered a tension pneumothorax or a persistent air leak, or individuals in high-risk occupations, such as pilots and divers. However, at present, prognostic features, e.g. CT appearances, are not part of these guidelines. We performed a systematic review and meta-analysis of the published literature in order to clarify the association between the presence of blebs or bullae at the time of diagnosis of pneumothorax and recurrent pneumothorax. Our aim was to test the hypothesis that the presence of emphysematous changes on thoracic CT imaging is associated with an increased likelihood of recurrent pneumothorax. We also wished to determine whether the laterality of recurrence, ipsilateral or contralateral, is associated with the laterality of the CT abnormalities.

Search strategy and selection criteria

We aimed to identify all peer-reviewed studies reporting on risk factors for recurrent pneumothorax in individuals who have had an initial pneumothorax. Preprints and conference proceedings were not included. EMBASE, MEDLINE and the Cochrane Library databases were interrogated up to 5 May 2020 using a formal search strategy. Initial Medical Subject Heading (MeSH) and other relevant terms were kept broad to retain any potentially relevant studies [CT OR 'computerised tomography' OR 'computer assisted tomography' OR 'computed tomography' OR 'x ray computer assisted tomography' OR 'x ray computer assisted tomography' (MeSH) AND pneumothorax OR pneumothoraces OR pneumothorax (MeSH) AND recurrence OR reoccurrence OR recurrent OR repeat OR predictor OR predictive]. Authors were not contacted directly. Reference lists of publications were then scrutinized to identify additional relevant studies. Duplicates were removed. Publications examining CT findings after an initial pneumothorax and recording pneumothorax recurrence as it related to these CT findings were considered.
Case reports, qualitative reviews, articles lacking patient data, studies lacking information on recurrence and studies using investigations other than CT were excluded. We included studies that reported on the prevalence of blebs and bullae in patients with either PSP or SSP with follow-up data for recurrent pneumothorax. A total of 18 such PSP and 3 SSP studies were identified. From these, we extracted: patient demographics; therapeutic interventions for the pneumothorax; number and laterality of pneumothoraces; number and laterality of recurrences; numbers of patients with CT scan evidence of emphysematous changes with reference to the side of the initial pneumothorax and any recurrence; and follow-up durations. Where articles gave figures separately for ipsilateral and contralateral pneumothorax recurrence and the presence of bullae, these were recorded separately. Where the odds ratio (OR) for recurrence was not reported, we calculated the OR and 95% confidence intervals (95% CIs) from the 2 × 2 contingency table of recurrence against the presence of emphysematous lesions. Following the Haldane-Anscombe correction, zero values in cells were replaced with 0.5 to prevent zero or infinite values of the OR. 18 The primary outcome was the degree of association between pneumothorax recurrence and the presence of CT abnormality. Secondary outcomes included the association between recurrence laterality and CT abnormalities.

Data analysis

Data were analysed and plotted using the Tidyverse 19 and Meta 20 packages in R 21 and RStudio. 22 There was evidence of heterogeneity of effects between studies, and so a random effects model was used to estimate the common OR and standard error. A total of 21 studies reported on recurrence, of which 17 reported on recurrence in patients with PSP and 4 reported on recurrence in patients with SSP. The total sample size was 2475 (2334 PSP and 141 SSP) (Table 2). Follow-up durations varied between 1.7 and 188.3 months. 40 As anticipated, patients presenting with PSP were younger than those with SSP: mean ages 24 years (range 14-98) vs. 70 years (range 45-85). Most patients were male, with a total male:female ratio of 7.7:1 (1516:197); for PSP the ratio was 7.5:1 (1462:196), while all but one patient with SSP was male. Of those individuals with SSP, 88% had emphysema and 12% had pulmonary fibrosis. 25,26,28 For those individuals for whom data were available, 7.5% of PSPs were treated initially conservatively, 33.5% were treated with an intercostal chest drain and 50.4% underwent primary surgery. 11-13,15,16,24,27,29-34,36-38,40 For SSPs, no patients were treated conservatively; 5% had drainage alone while 88% underwent early surgery. 25,26,28 'Positive CT findings' typically equated to the presence of blebs or bullae. Four studies formulated unique 'dystrophic severity scores' by assigning scores to the type/size, number and distribution of blebs/bullae. 13,27,37,40 Two other studies used the established Goddard scale. 28,35,41 One study generated a score for the degree of emphysematous change in patients with idiopathic pulmonary fibrosis. 26 Although, by definition, SSP requires lung pathology, three SSP studies separated cases between those with or without specific CT findings. 25,26,28 For example, positive CT findings were defined as having 'bullae at the azygoesophageal recess' 25 or a Goddard classification score ≥7 (see Discussion). 28
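The per-study odds ratios were derived as described; the sketch below mirrors the 2 × 2 calculation with the Haldane-Anscombe correction, using hypothetical counts (the original analysis was performed in R with the meta package).

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """OR and 95% CI from a 2x2 table:
         a = recurrence with CT lesions,   b = no recurrence with lesions,
         c = recurrence without lesions,   d = no recurrence without lesions.
    Applies the Haldane-Anscombe correction (add 0.5) if any cell is zero.
    """
    if 0 in (a, b, c, d):
        a, b, c, d = (x + 0.5 for x in (a, b, c, d))
    or_ = (a * d) / (b * c)
    se_log = math.sqrt(1/a + 1/b + 1/c + 1/d)  # SE of log(OR), Woolf's method
    lo = math.exp(math.log(or_) - z * se_log)
    hi = math.exp(math.log(or_) + z * se_log)
    return or_, (lo, hi)

# Hypothetical study counts for illustration only.
or_, (lo, hi) = odds_ratio_ci(a=30, b=70, c=10, d=90)
print(f"OR = {or_:.2f} (95% CI {lo:.2f}-{hi:.2f})")
```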
In pooled data from 2475 individuals, recurrence of pneumothorax was observed more often in those individuals reported to have 'positive CT findings' (OR 2.49, 95% CI 1.51-4.13; P<0.01) (Figure 2). We considered excluding the patients from the three studies of SSP, but doing so had little effect on the positive association between CT changes and recurrence (OR 2.27, 95% CI 1.36-3.8; P<0.01; Supplementary Figure S1). A number of studies included children in their analysis, potentially increasing heterogeneity. 15,27,29-31,33,34 After exclusion of these seven studies, a positive relationship between CT changes and recurrence remained, increasing our confidence in the association (OR 1.90, 95% CI 1.08-3.33; Supplementary Figure S2A). It is plausible that patients undergoing surgery will differ from those not having surgery, either in terms of ongoing air leak, history of pneumothorax recurrence or frailty. Several studies included a mixture of patients treated surgically and non-surgically. When we analysed the studies that allowed us to separate the two therapeutic approaches, the association between CT changes and recurrence appeared to persist for the non-surgery group (OR 1.65, 95% CI 1.13-2.41), but was no longer apparent for the publications involving surgery, which showed more heterogeneity (OR 2.07, 95% CI 0.43-9.95; Supplementary Figure S2B). 12,14,16,24,25,28,31,38 In individual lungs with CT abnormalities the recurrence rate was 27%, but only 12% in lungs that were normal by CT scan (Table 2). It should be noted that per-lung recurrence rates will be lower than per-patient recurrence rates because their denominators differ. Overall, 31% of patients suffered a recurrent pneumothorax. While this is lower than in some previous reports, 5-7 it might reflect the relatively high level of primary surgery (surgery after a first pneumothorax) in several published CT studies. When we focussed only on studies in which primary surgery was never performed, the overall recurrence was 41%. 12,14,31,40 By contrast, the recurrence rate fell to 10% in studies in which all patients underwent primary surgery for pneumothorax. 16,24,25,28,38 Overall, 66% of lungs from patients had CT evidence of emphysematous change, including blebs and bullae; 83% of these changes were ipsilateral to the initial pneumothorax and 51% were contralateral (Table 2). Ipsilateral recurrence of pneumothorax (27%) was twice as likely as contralateral recurrence (13%), and in both instances CT abnormalities were seen more often in cases that recurred: 30% suffered ipsilateral recurrence if CT changes were present vs. 21% without CT changes; 21% suffered contralateral recurrence if CT changes were present vs. 4% without CT changes. In the absence of positive CT findings, recurrence was three times more likely for PSP cases than for SSP cases (12% vs. 4%) (Table 2). However, unlike PSP, SSP is associated with a high rate of subsequent mortality, 42 and so the lower rate of recurrence in the SSP group may reflect loss of patients owing to death. Nevertheless, when CT findings were present, the rate of recurrence was similar between PSP and SSP (27% and 29%, respectively). The association held true for PSP when considering emphysematous changes including bullae and blebs. Bullae at the azygoesophageal recess or an increased Goddard score were associated with recurrent SSP, as shown by subgroup analysis. The incidence of reported positive CT findings was higher in PSP than in SSP, at 67% vs. 46%, but the differing definitions of 'positive CT finding' in SSP and PSP make direct comparison difficult, and so this should be treated with caution.
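The pooled estimates above come from a random effects model. A compact sketch of one standard approach, DerSimonian-Laird pooling on the log-OR scale, is given below with illustrative inputs; the meta package used in the paper may differ in details such as confidence-interval methods.

```python
import math

def dersimonian_laird(log_ors, variances):
    """Random-effects pooled log-OR via DerSimonian-Laird, plus I^2 (%)."""
    k = len(log_ors)
    w = [1 / v for v in variances]                       # fixed-effect weights
    y_fixed = sum(wi * yi for wi, yi in zip(w, log_ors)) / sum(w)
    q = sum(wi * (yi - y_fixed) ** 2 for wi, yi in zip(w, log_ors))
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - (k - 1)) / c)                   # between-study variance
    w_re = [1 / (v + tau2) for v in variances]
    pooled = sum(wi * yi for wi, yi in zip(w_re, log_ors)) / sum(w_re)
    se = 1 / math.sqrt(sum(w_re))
    i2 = max(0.0, (q - (k - 1)) / q) * 100 if q > 0 else 0.0
    return pooled, se, i2

# Illustrative per-study log-ORs and variances (not the actual study data).
logs = [math.log(x) for x in (2.0, 3.5, 1.2, 6.0, 2.8)]
vars_ = [0.20, 0.35, 0.15, 0.50, 0.25]
pooled, se, i2 = dersimonian_laird(logs, vars_)
print(f"pooled OR = {math.exp(pooled):.2f} "
      f"(95% CI {math.exp(pooled - 1.96*se):.2f}-{math.exp(pooled + 1.96*se):.2f}), "
      f"I^2 = {i2:.0f}%")
```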
We noted substantial between-study heterogeneity (I² = 72%) and so we grouped studies according to the laterality of the bullae and the laterality of the recurrence relative to the original pneumothorax. There was good evidence that the effect size differed between groups (Supplementary Figure S3). The effect size for ipsilateral recurrence was significantly different from that for contralateral recurrence (P<0.05). Contralateral recurrence was strongly associated with CT change (OR 6.21, 95% CI 2.54-15.14), although ipsilateral recurrence was still positively associated with CT changes (OR 1.84, 95% CI 1.05-3.20) (Figure 2). Four studies (Supplementary Figure S3) reported ipsilateral recurrence in association with ipsilateral bullae, with bullae in either lung, or without information on bullae laterality. A 'leave-one-out' analysis identified two articles that contributed substantial heterogeneity. Within the articles that reported contralateral recurrence (Figure 2), omitting Young Choi et al. reduced heterogeneity (I²) from 51% to <1% and increased the magnitude of association for contralateral recurrence from an OR of 6.21 (95% CI 2.54-15.14) to 8.12 (95% CI 4.58-14.42) (Figure 2). This may be because Young Choi et al. is the only article reporting contralateral recurrence that restricted its cohort to children. For articles reporting ipsilateral recurrence, omitting Casali et al. reduced heterogeneity (I²) from 71% to 45%, decreasing the magnitude of association for ipsilateral recurrence from an OR of 1.84 (95% CI 1.05-3.20) to 1.50 (95% CI 1.13-1.97). We were not able to identify study characteristics to explain this observation (Figure 2). Some authors reported that specific features of the CT abnormalities were associated with recurrence. 13,14,23,25,31,40 Bleb size, 23,31,40 number 23,31 and distribution were positively associated with recurrence in some reports, 13,14,25,31,40 but these associations were not replicated in other studies of bleb size, 12-15,40 number 13-15,40 and distribution. 12,15 Too few articles provided data in a format that would have permitted their inclusion in our analysis. Overall, 66% of the lungs of patients who had suffered a pneumothorax had CT evidence of emphysematous change. When such CT changes were present, they were more likely to be ipsilateral to the presenting pneumothorax: 83% ipsilateral vs. 51% contralateral. Ipsilateral recurrence was 2-fold more likely than contralateral recurrence and was associated with the side of emphysematous change on CT. This supports a mechanistic link between emphysematous change and pneumothorax. The observed recurrence rate of 10% following primary surgery is surprisingly high. 16,24,25,28,38 In recent work, recurrence following video-assisted thoracoscopic surgery (VATS) has been reported as low as 5%. 10 In our meta-analysis, only five studies reported surgery in isolation with associated recurrence data and CT findings. 16,24,25,28,38 These studies account for 380 individuals, with 37 recurrences reported (9.7%). The reason for this unexpectedly high recurrence rate is unclear but might reflect the reporting of contralateral pneumothoraces, which should be unaffected by unilateral surgery, 16 or the exclusion of patients lost to follow-up, which was performed in one study in an effort to avoid attrition bias. 24
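The leave-one-out analysis described above simply repeats the pooling with each study omitted in turn. A sketch, generic over any pooling function (e.g. the dersimonian_laird helper from the previous block; inputs again illustrative):

```python
import math

def leave_one_out(names, log_ors, variances, pool):
    """Influence check: re-pool the meta-analysis with each study omitted.

    `pool` is any function (log_ors, variances) -> (pooled, se, i2),
    such as the dersimonian_laird sketch shown earlier.
    """
    for i, name in enumerate(names):
        ys = log_ors[:i] + log_ors[i + 1:]
        vs = variances[:i] + variances[i + 1:]
        pooled, _se, i2 = pool(ys, vs)
        print(f"omit {name}: pooled OR = {math.exp(pooled):.2f}, I^2 = {i2:.0f}%")

# Illustrative call, reusing the inputs and helper from the previous sketch:
# leave_one_out(["A", "B", "C", "D", "E"], logs, vars_, dersimonian_laird)
```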
The low level of conservative management of pneumothoraces was striking in the publications qualifying for this meta-analysis. In the UK, many patients with PSP are treated conservatively according to BTS guidelines. 8 In one recent series, 30% (50/168) of PSPs were treated conservatively, 43 contrasting with only 8% of the 2306 PSP cases reporting intervention in this meta-analysis. This may reflect national differences, since UK and USA guidelines differ in their thresholds for intervention, with American guidance being more interventionist than that of the UK and other European countries. 8,44 Biases may also have arisen from the relatively large number of series from surgical centres. Indeed, surgical intervention formed an inclusion criterion for four studies, 16,24,25,38 although it was an exclusion criterion for only two. 14,40 The low level of conservative management of SSP was less surprising, since failure to intervene in cases of SSP can have fatal consequences. That VATS was employed in 74% of interventions across the SSP studies that reported surgical intervention 25,28 is, however, at odds with typical UK experience, where talc or blood patch pleurodesis is commonly used, and suggests that national differences and a degree of publication bias may skew the SSP literature. Comparisons of radiological changes between studies are challenging. Publications varied in the detail supplied, especially whether pneumothoraces were ipsilateral to CT abnormalities. 12,14,29,30,33 The definitions used for 'positive CT findings' also differed, e.g. the cut-off size between blebs and bullae being variously 1 cm, 11,13,16,27,31,37,40 2 cm 15,34 or not reported. 14,25,29,30,32,33,38,39 A proportion of studies employed unique definitions of 'positive CT findings', occasionally developing bespoke severity scores, 13,26,27,40 while some used the more established Goddard scale. 28,35 In conclusion, we observe by meta-analysis a significant association between the recurrence of pneumothorax and features of emphysematous change on thoracic CT imaging. This association is particularly striking for contralateral pneumothoraces, although these are less common than ipsilateral recurrences overall. This meta-analysis has relevance for guidelines of pneumothorax management. Insufficient data are available to link specific CT features, such as lesion size or number, with recurrence; these require further study.

Supplementary material

Supplementary material is available at QJMED online.

Conflict of interest: None declared.
Measurement of range-of-motion in infants with indications of upper cervical dysfunction using the Flexion-Rotation-Test and Lateral-Flexion-Test: a blinded inter-rater reliability study in a clinical practice setting ABSTRACT Background: In infants with indications of upper cervical dysfunction, the Flexion-Rotation-Test and Lateral-Flexion-Test are used to indicate reduced upper cervical range-of-motion (ROM). In infants, the inter-rater reliability of these tests is unknown. Objective: To assess the inter-rater reliability of subjectively and objectively measured ROM using the Flexion-Rotation-Test and Lateral-Flexion-Test. Methods: 36 infants (<6 months) and three manual therapists participated in this cross-sectional observational study. Pairs of two manual therapists independently assessed infants' upper cervical ROM using the Flexion-Rotation-Test and Lateral-Flexion-Test, blinded to each other's outcomes. Two inertial motion sensors objectively measured cervical ROM. Inter-rater reliability was determined between each pair of manual therapists. For subjective outcomes, Cohen's kappa (ĸ) and the proportion of agreement (Pra) were calculated. For objectively measured ROM, Bland-Altman plots were constructed and Limits of Agreement and Intraclass Correlation Coefficients (ICC) were calculated. Results: The inter-rater reliability of the Flexion-Rotation-Test and Lateral-Flexion-Test for subjective (ĸ: 0.077-0.727; Pra: 0.46-0.86) and objective outcomes (ICC: 0.019-0.496) varied between pairs of manual therapists. Conclusion: Assessed ROM largely depends on the performance of the assessment and its interpretation by manual therapists, leading to high variation in outcomes. Therefore, the Flexion-Rotation-Test and Lateral-Flexion-Test cannot on their own be used as a reliable outcome measure in clinical practice or research contexts.

Introduction

In current clinical practice, many children and infants are treated with manual therapy for various musculoskeletal and non-musculoskeletal conditions [1-4]. In the Netherlands, upper cervical dysfunction (UCD) is considered the primary treatment indication in infants [5]. Persistent UCD could induce the maintenance of postural asymmetry and lead to a reduced active and passive cervical range of motion (ROM), resulting in a fixed asymmetric position of the infant's head toward lateral flexion and contralateral rotation [6-8]. Infants with persistent positional preference and indications of UCD seem to have more signs of skull deformation, excessive crying, and restlessness [5,6,9]. In manual therapy practice, the clinical diagnosis of UCD is based on the assessment of upper cervical ROM using the Flexion-Rotation-Test (FRT) and the Lateral-Flexion-Test (LFT) [5]. These tests assess whether upper cervical passive mobility toward rotation in full flexion and toward lateral flexion is either normal or reduced. When at least one of these tests indicates reduced passive ROM, UCD is clinically diagnosed and, depending on the direction of reduced mobility, treated with specific techniques [5]. To date, research has established good reliability of the FRT in adults [10-13] and children [14], while in infants only one study has examined intra-rater reliability; a single rater examined infants with torticollis and found high intra-rater reliability (ICC: ≥0.77) [15].
Even though the validity and reliability of the FRT and LFT in infants are still largely unknown, manual therapists currently use these tests in their diagnostic clinical decision-making [5]. Therefore, there is a strong need to determine the reliability of the FRT and LFT in clinical practice. Our study aims to examine the inter-rater reliability of (1) subjectively reported outcomes by manual therapists on the FRT and LFT and the related decision-making, and (2) objectively measured ROM by inertial motion sensors during the FRT and LFT, in infants with indications of UCD. Additionally, we aimed to verify the subjectively reported outcomes against the objectively measured ROM.

Methods

The Guidelines for Reporting Reliability and Agreement Studies (GRRAS) were used to report our study [16]. Our study was approved by the Medical Ethical Committee for Research Involving Human Subjects of the Radboud university medical center, Nijmegen (CMO, NL.58488.091.16).

Study design

In a cross-sectional observational study, the inter-rater reliability of the FRT and LFT in infants with indications of the presence of UCD was determined. Three manual therapists participated in the study. Pairs of two manual therapists independently assessed upper cervical mobility by performing the FRT and LFT in each infant. Simultaneously, two light-weight inertial motion sensors with a sampling rate of 100 Hz (MTw, Xsens BV, Enschede, the Netherlands) were used to objectively measure ROM in three dimensions during the mobility assessment. A schedule (Figures A1 and A2) was used to ensure that both the order of manual therapists and the order of measurements (tests) were counterbalanced. In total there were three pairs of manual therapists: therapists A-B, B-C, and A-C. Moreover, the therapists were blinded to each other's outcomes on the FRT and LFT and to the objectively measured ROM. The assessment was executed in the practices of the participating therapists.

Study population

Three expert manual therapists registered in the Dutch registry of pediatric manual therapists [17] were invited to participate in the study and gave written informed consent. These qualified manual therapists had 10 to 17 years of experience in the treatment of infants with UCD, treated at least four infants per month, and were able and willing to recruit parents and infants for the study and to travel between the participating practices during the study period. All three manual therapists worked independently of each other in private practices in the Netherlands. Infants (<6 months) visiting these practices because of an indication of UCD (indicated by a referrer or the infant's parents) were eligible to participate in the study. Previous or ongoing treatment with pediatric physical therapy was allowed because this is usual care in the Netherlands. Infants who had previously been treated by one of the participating manual therapists for the same treatment indication were excluded. Both of the infant's parents had to provide written informed consent for study participation.

Study procedure

Manual therapists were instructed to inform parents about the study when they registered their infant for treatment. If interested in participation, parents received an extensive information letter and were contacted by the primary author (FD) to explain the study procedure, including informed consent. In each infant, the mobility assessment was performed by one pair of manual therapists.
The therapist working at the practice where the infant was registered was considered the primary therapist and was therefore always one of the assessing manual therapists. The recruitment period was between June and December 2017. Further information about the study procedure is given in the Appendix.

Mobility assessment

The mobility assessment consisted of an intake and a passive mobility assessment (Figure 1). First, while the manual therapists were in another room, parents were requested by FD to complete a questionnaire regarding the infant's demographics, complaints and symptoms, and pregnancy and delivery, because of their potential relation with UCD [5,6,18]. Manual therapists were blinded to this questionnaire and parents were instructed not to share any details about their child with them. Meanwhile, the motion sensors were attached to the infant's forehead and trunk by FD. Before the mobility assessment, signs of asymmetry of the head, face, and trunk were observed by the therapists. To minimize potential distress in the infant, only the first assessing manual therapist assessed active cervical mobility and verified that there were no contra-indications preventing further study participation. The side-tilt-test was used to provoke active lateral flexion of the head and to test whether the provoked response was comparable bilaterally. Active rotation was facilitated using sounds, toys, or the presence of the parents. Both tests are part of the normal clinical screening in infants [5,19,20]. For the passive upper cervical mobility assessment, the FRT and LFT were performed in supine position with low amplitude and low velocity. For the FRT, the infant's cervical spine was passively maximally flexed and carefully rotated. For the LFT, the infant's head was passively laterally flexed. Both tests were performed on both sides (see Table A1). If resistance was encountered or ROM was limited before the expected end-point, reduced upper cervical mobility was indicated [21,22]. When at least one of these tests indicated reduced mobility, UCD and an indication for further treatment were assumed [5]. On a standardized form (see Appendix), the manual therapists reported on reduced mobility (yes/no), the presence of UCD (yes/no), the indication for further treatment (yes/no), and other observations, such as resistance or side effects. Manual therapists were blinded to each other's assessment performance and reported outcomes until both therapists had completed the assessment. FD was always present during the mobility assessment for study coordination and sensor registration. FD was not one of the participating manual therapists. After the study procedures were completed, FD shared the parents' questionnaire with the primary therapist to provide anamnestic information, and checked whether the manual therapists disagreed on the reported 'presence of UCD' and 'indication for further treatment'. If so, the discrepancy was discussed between the therapists. Thereafter, the primary therapist informed the parents of the assessment's findings and of further possible steps in treatment, if indicated. These procedures fell outside the study's scope.

Measures

ROM was assessed by the manual therapists (subjective, dichotomous outcomes) and measured by the inertial motion sensors (objective outcomes) simultaneously. Each infant was assessed by two manual therapists, bilaterally, leading to a total of 72 measurements for both the FRT and the LFT. Information from the intake was used to describe the study population. The primary clinical outcome measures were the FRT and the LFT.
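For context on how head-relative-to-trunk orientation is typically derived from two sensors: one composes the inverse of the trunk orientation with the head orientation. The quaternion sketch below uses hypothetical readings; in practice the orientations would come from the Xsens sensor-fusion output.

```python
import math

def q_conj(q):
    """Conjugate (inverse, for unit quaternions) of q = (w, x, y, z)."""
    w, x, y, z = q
    return (w, -x, -y, -z)

def q_mul(a, b):
    """Hamilton product of two quaternions."""
    aw, ax, ay, az = a
    bw, bx, by, bz = b
    return (aw*bw - ax*bx - ay*by - az*bz,
            aw*bx + ax*bw + ay*bz - az*by,
            aw*by - ax*bz + ay*bw + az*bx,
            aw*bz + ax*by - ay*bx + az*bw)

def head_relative_to_trunk(q_head, q_trunk):
    """Orientation of the head expressed in the trunk frame (unit quaternions)."""
    return q_mul(q_conj(q_trunk), q_head)

# Hypothetical readings: trunk level, head rotated 30 deg about the vertical axis.
half = math.radians(30) / 2
q_head = (math.cos(half), 0.0, 0.0, math.sin(half))
q_trunk = (1.0, 0.0, 0.0, 0.0)
q_rel = head_relative_to_trunk(q_head, q_trunk)
yaw = 2 * math.atan2(q_rel[3], q_rel[0])  # rotation about z for this simple case
print(f"head rotation relative to trunk: {math.degrees(yaw):.1f} deg")
```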
The reported outcomes (reduced mobility yes/no) were used to determine agreement and inter-rater reliability between manual therapists. Moreover, based on the outcomes of these subjective tests, manual therapists decided on the presence of UCD (diagnosis) and treatment indication (yes/no). To assess ROM, two wireless sensors were placed on the infant's forehead and trunk (sternum) using soft bands (Image A.1). ROM was recorded in three dimensions: the sagittal plane (e.g. flexion), the frontal plane (e.g. lateral flexion), and the transverse plane (e.g. rotation). The sensors were connected to a laptop, which simultaneously recorded and registered all 3D outcomes of the head relative to the trunk. The primary author (FD) checked the recording of the sensors during testing to ensure adequate detection of motion. Both manual therapists reported to FD when they started (start-point) and ended (end-point) a movement and to which side it was performed. This information was necessary for data verification and analysis, to confirm the position of the head and trunk at the reported time-points and to calculate ROM. Manual therapists were blinded to all sensor outcomes. The objective sensor data were used to determine the degree of agreement on measured ROM between two manual therapists (inter-rater reliability).

Kinematic analysis

After all subjective measurements of mobility by the manual therapists were completed, the sensor data were visually checked by FD for the reported time-points in MT Manager 4.6 (Xsens Technologies BV). Data were converted and analyzed in MATLAB (version 2017b, The MathWorks BV, Natick, USA) by the second author (NK) to allow additional analysis of objective ROM and angles in degrees. First, degrees of mobility were defined based on the reported time-points and motion analysis around those time-points. This resulted in the assessment of ROM at four time-points: the starting position (start-point) before the execution of a test to the right side (T1), the end position (end-point) of the movement to the right (T2), the starting position before the execution of a test to the left side (T3), and the end position of the movement to the left (T4). Second, for each infant in each measurement, the mean start-point was calculated by averaging T1 and T3, because of possible displacement of the sensor during the assessment due to movement of the infant. Third, ROM to both sides was determined by calculating the difference between the mean start-point of a movement and the maximum ROM to a particular side around the time-point of the end-point. For the Flexion-Rotation-Test, data were extracted from both the sagittal and the transverse plane. For the Lateral-Flexion-Test, only ROM measured in the frontal plane was extracted.

Statistical analysis

Characteristics of the study population were analyzed using descriptive statistics. To exclude a potential order effect of measurements, objectively measured ROM was compared between the first and second measurement within an infant, for each test and each side, using a paired t-test. Because no order effect was found, data were grouped together per measurement. All analyses were performed using SPSS Statistics version 25 (SPSS, Chicago, IL, USA). Additional information about the kinematic analysis is presented in the Appendix.

Objective outcomes

To determine the inter-rater reliability of objectively measured ROM, the absolute mean differences in ROM between manual therapists were calculated.
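Before turning to the agreement statistics, the following is a minimal sketch of the ROM computation described in the kinematic analysis above. The function name, the search window around each end-point, and the sign convention (rightward excursions positive) are assumptions made for illustration; the actual MATLAB analysis scripts are not given in the text.

```python
import numpy as np

def rom_from_timepoints(angle, t1, t2, t3, t4, window=50):
    """ROM to the right and left from a 100 Hz joint-angle trace (degrees).

    angle  : head-relative-to-trunk angle in the plane of interest
    t1..t4 : sample indices of the reported time-points (start/end of the
             movement to the right, start/end of the movement to the left)
    window : samples (~0.5 s at 100 Hz) around each end-point in which the
             maximum excursion is sought
    """
    angle = np.asarray(angle, dtype=float)
    # Mean start-point: average of T1 and T3, to compensate for possible
    # sensor displacement caused by movement of the infant.
    start = 0.5 * (angle[t1] + angle[t3])
    # Maximum excursion around each reported end-point (right positive).
    right_peak = angle[max(0, t2 - window):t2 + window].max()
    left_peak = angle[max(0, t4 - window):t4 + window].min()
    return right_peak - start, start - left_peak
```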
Additionally, the intraclass correlation coefficient (ICC) and its 95% confidence interval were calculated per test using a two-way random-effects consistency model. These analyses were performed pair-wise between manual therapists (i.e. A vs B, B vs C, and A vs C). To examine whether there were systematic differences in ROM between manual therapists, a one-sample t-test was performed; in addition, Bland-Altman plots were created for both the FRT and LFT, in which the mean differences in ROM between two manual therapists were plotted against the means in ROM of these two therapists, and limits of agreement (LOA) were calculated [25].

Relationship between subjective and objective outcomes

To determine the relationship between subjectively reported outcomes and objectively measured ROM, the data reported by the manual therapists and the ROM measured by the sensors were compared. First, for each therapist and for both the FRT and LFT, the mean ROM of measurements indicated as 'reduced' mobility and the mean ROM of measurements indicated as 'not-reduced' mobility were calculated. Per therapist, the differences in ROM between 'reduced' and 'not-reduced', including the standard error of the difference, were calculated using a paired t-test. Second, the outcomes per manual therapist indicated as 'reduced' or 'not-reduced' mobility were plotted in figures to gain more insight into differences between manual therapists, and between mobility reported as 'reduced' and as 'not-reduced'.

Results

During the recruitment period, 95 potentially eligible infants were registered at the three participating practices, of which 36 infants (38%) participated in the study. Reasons for exclusion were the infant's age, parents who did not want to participate, or previous treatment of the infant. Characteristics of the 36 included infants are shown in Table 1. (Table 1 notes: *multiple answers were allowed; **the side-tilt-test could not be adequately performed because of the age of the infants (<3 months); active rotation could not be tested when infants cried too much before the test was performed, could not be provoked into following and turning their head, or were asleep.) All infants were referred (44%) or admitted through direct access (56%) to the practice for manual therapy with indications of asymmetry and presence of UCD. The most reported complaints or symptoms by parents, besides asymmetry, were restlessness/anxiety (42%) and excessive crying (31%). The majority of parents (61%) reported more than one complaint or symptom. In most infants, multiple signs of asymmetry were observed by the manual therapists, with positional preference of the head (61%), asymmetrical shape of the head (47%), and an asymmetric or hyperextended trunk (64%) observed most frequently. Active lateral flexion and rotation of the head were frequently reported by the manual therapists as reduced. The majority of parents reported complications during delivery (58%). No contraindications for study participation were reported by parents or by the manual therapists.

Inter-rater reliability

Passive mobility assessment was performed in all 36 infants. Due to distress during the assessment, the FRT could not be tested consistently in two infants, leading to data availability for 34 infants and a total of 68 measurements. The LFT was performed in all 36 infants, leading to a total of 72 measurements. No side effects or harms during the mobility assessment, besides crying, were reported.

Reported outcomes

Inter-rater reliability and agreement between pairs of manual therapists are presented in Table 2. For the FRT, inter-rater reliability between pairs of manual therapists ranged from slight to substantial (κ = 0.195 to 0.657). The proportion of agreement between pairs of manual therapists ranged from 0.57 to 0.86. For the LFT, inter-rater reliability between pairs of manual therapists ranged from poor to substantial (κ = −0.077 to 0.727).
The proportion of agreement between pairs of manual therapists ranged from 0.46 to 0.86. Inter-rater reliability on the reported diagnosis and treatment indication was high; agreement was found in, respectively, 34 (94%) and 35 (97%) infants. If passive mobility was reduced in at least one direction or to one side, a diagnosis of UCD and an indication for further treatment were reported by the manual therapists.

Objectively measured ROM

The inter-rater reliability of objectively measured ROM toward both flexion-rotation and lateral flexion varied between poor and moderate (Table 3). Measurements between manual therapists showed large variation and the LOA were wide (Figure 2). Absolute mean differences within pairs of manual therapists were minor, while the range in mean differences was wide (Table 3). No systematic differences between manual therapists in measured ROM were found. All Bland-Altman plots are shown in Figure A3. [Figure 2. Bland-Altman plot in which the mean differences in ROM between manual therapists A and B are plotted against the means in ROM of these two manual therapists. The red line indicates the mean difference of objectively measured ROM (in degrees) between manual therapists A and B. The blue dashed lines indicate the upper and lower limits of agreement (LOA). The small mean difference indicates no systematic difference in measured ROM between the manual therapists; the wide LOA indicate large discrepancies in ROM between the manual therapists.]

Relationship between subjectively reported outcomes and objectively measured ROM

The mean ROM was significantly smaller in measurements indicated as 'reduced' mobility by the manual therapists than in measurements indicated as 'not-reduced' mobility (Table 4). As shown in Figure 3, there is an overlap between outcomes indicated as 'reduced' and 'not-reduced' mobility.

Discussion

Our study is the first to assess the inter-rater reliability of the FRT and LFT in infants in a clinical practice setting. The inter-rater reliability of the FRT and LFT on outcomes reported by the manual therapists varied between poor and substantial among pairs of manual therapists. The inter-rater reliability of objectively measured ROM varied between poor and moderate among pairs of manual therapists. The assessed ROM varied widely within and between infants. Furthermore, we verified that ROM was statistically significantly smaller toward the side reported by the manual therapists to be reduced in mobility, as compared to the side with not-reduced mobility. This suggests that in infants with indications of UCD passive upper cervical mobility restrictions are present but variably measured. [Table 2. Inter-rater reliability and agreement of subjectively reported outcomes between pairs of manual therapists.] Previous research on cervical mobility in infants with torticollis demonstrated a measurement error between raters of 5–10° [22]. In our study, in every test and in every pair of manual therapists, the LOA were larger than the measurement error of 10° (see Table 3), indicating a substantial discrepancy between manual therapists.
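As an illustration of the agreement analysis referred to above, the following sketch computes the Bland-Altman bias and 95% limits of agreement for a pair of raters. The ROM values are made up for the example; the LOA convention (bias ± 1.96 SD of the paired differences) is the standard one cited in [25].

```python
import numpy as np

def bland_altman(rom_a, rom_b):
    """Bias and 95% limits of agreement between two raters' ROM values."""
    rom_a = np.asarray(rom_a, dtype=float)
    rom_b = np.asarray(rom_b, dtype=float)
    diff = rom_a - rom_b                        # per-infant difference
    means = 0.5 * (rom_a + rom_b)               # x-axis of the plot
    bias = diff.mean()                          # systematic difference
    sd = diff.std(ddof=1)
    loa = (bias - 1.96 * sd, bias + 1.96 * sd)  # limits of agreement
    return means, diff, bias, loa

# Illustrative (made-up) ROM values in degrees for six infants:
_, _, bias, (lo, hi) = bland_altman([40, 55, 32, 48, 60, 38],
                                    [48, 50, 41, 43, 72, 30])
print(f"bias = {bias:.1f} deg, LOA = [{lo:.1f}, {hi:.1f}] deg")
```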
Although manual therapists were instructed to move the infant's head toward the end-point of the ROM, the large variation and disagreement between manual therapists within infants could indicate that the absolute end-point of the ROM was not always reached. Furthermore, as shown in Figure 3, the degrees of ROM used as a potential cutoff point to conclude on either normal or reduced mobility differed between infants within manual therapists. The manual therapists emphasized that they do not rely solely on the ROM to indicate reduced mobility, but also on the perceived feeling at the end-point, the infant's reaction, and bilateral differences. Agreement on the reported outcomes of the FRT and LFT between manual therapists A and C was much lower, and differences in ROM were larger, compared to the other pairs of manual therapists. Further analysis of the subgroup assessed by this particular pair showed that the mean age of these infants was significantly lower (8.3 weeks) than that of the infants assessed by the other pairs (12.6 and 10.7 weeks). Moreover, during the mobility assessment the manual therapists reported observations that may have limited the assessment in 12 infants, 7 (58%) of whom were assessed by manual therapists A and C (Table A2). Given this, these infants seemed to be more resistant, which could have led to increased muscle tension and therefore inadequate mobility assessment. We suggest that lower age and stronger reactions to and resistance against the assessment by infants make it harder for manual therapists to (1) perform the mobility assessment, and (2) interpret the test outcomes and draw conclusions; similarity of assessment is a precondition for reaching agreement. Therefore, the assessed ROM largely depends on the performance of the assessment and its interpretation, and on the resistance of the infant. Hence, mobility assessment in these infants is difficult and needs special expertise. In line with our observations, recently published studies also highlight the challenges of reliability studies in infants, because repeated measures could result in distress [26] and because therapists experience difficulty in interpreting outcomes in infants [27]. Therapists could use visual inspection to assess cervical mobility in infants instead of measurement instruments [28], but they show no consistency and clarity in the measurement and interpretation of outcomes regarding ROM [27]. Moreover, reliable measurement instruments to assess cervical ROM in infants are limited [26]. A previous study showed that the intra-rater reliability of the FRT and LFT in infants with torticollis was high (ICC 0.77 and 0.99, respectively) [15]. This could indicate that therapists have their own way of performing a test and can do so reliably by themselves, but that agreement becomes more difficult when the performance is compared with that of another therapist. On the other hand, infantile torticollis is a condition of the sternocleidomastoid muscle leading to reduced active mobility, whereas infants with UCD have reduced passive mobility. In addition, studies assessing the validity of the FRT and LFT in the pediatric population are lacking. Validity of the FRT has only been indicated in adults; Takasaki et al. showed that the FRT predominantly and validly assesses upper cervical rotation in adults [13]. Whether these tests also validly assess upper cervical ROM in infants is, however, still unknown. In our study, the range of ROM outcomes reported as 'reduced' and 'not-reduced' mobility was wide and showed overlap.
Possibly, individual cutoff points vary between manual therapists, and manual therapists interpret the ROM at different moments in the movement. During the assessment, manual therapists were instructed to move the infant's head back to the start-point between the measurements to the right and to the left side. In the sensor data analysis, we found that this start-point differed between manual therapists within an infant. Moreover, the calculations of ROM were based on this start-point, so the objective measurements are related to the position of the infant's head at the start of the movement. Hence, differences in start-points between manual therapists and not returning to the start-point could have influenced the measured ROM and therefore the mobility outcomes. This means that the interpretation of outcomes and the process of decision-making based on these tests are still unclear. In contrast, agreement on the diagnosis of UCD and the treatment indication was high between manual therapists. However, the presence of indications of UCD was an inclusion criterion for our study participants, and this agreement could therefore be influenced by selection bias. In addition, the participating manual therapists in our study reported difficulties in clinical reasoning and in forming a complete picture of the infant, because they were limited to executing a small number of tests and were not informed about the infant's characteristics and parent-reported symptoms prior to the assessment. In clinical practice, manual therapists do have this information and pay more attention to the development and neuromuscular functions [19]. These reports indicate that performing only the FRT and LFT is not enough for manual therapists to interpret the outcomes and make clinical decisions. Hence, background information and more insight into the infant's neuromuscular functions are needed to optimize the value of performing the FRT and LFT and of their interpretation.

Strengths and limitations

Strengths of this study were the clinical practice setting, the use of motion sensors to assess ROM objectively to support the reported outcomes of the FRT and LFT, and the blinding of manual therapists to each other's reported outcomes during the mobility assessment and to the motion analysis outcomes. In contrast to the two-dimensional measures used in previous studies [26], we assessed mobility in three dimensions. Due to the use of two sensors, we were able to subtract movements made by the infant's trunk and measure solely the cervical ROM. At the same time, a potential limitation was the possible measurement error due to movement of the sensors if infants were restless, crying, or moving during the assessment. We did not make video recordings, which limited our ability to draw conclusions on the execution of the tests by the manual therapists. Another important limitation was that both the subjective and the objective outcomes were based on the same assessment by the same manual therapist. This could have resulted in work-up bias. Moreover, because the Medical Ethical Committee did not approve the inclusion of infants without UCD, all infants included in our study had indications of UCD (selection bias). Furthermore, in the Netherlands, the use of imaging techniques, such as MRI, in infants without a life-threatening indication is not permitted. This prevented us from further validating the FRT and LFT in infants.
Conclusion

Inter-rater reliability of the FRT and LFT in infants with indications of UCD varied between poor and substantial, and agreement on decision-making between manual therapists was high. The assessed ROM largely depends on the performance of the assessment and its interpretation by the manual therapists, leading to high variation between therapists. Because of this high variation, the FRT and LFT cannot reliably assess reduced upper cervical mobility in infants with indications of UCD. Therefore, these tests should not be used as the sole outcome measure in clinical practice or in a research context.

Notes on contributors

Femke Driehuis is a physiotherapist and health scientist. She is currently completing her PhD on manual therapy in infants at the Radboud university medical center, Research Institute for Health Sciences, Scientific center for Quality in healthcare, Nijmegen. Besides finishing her PhD she works as a policy advisor at the Guideline Department of the Royal Dutch Society for Physical Therapy (KNGF), where she focuses on evidence-based physiotherapy practice, guidelines, and professional expertise and competences.

Knowledge of the underlying mechanisms of motor learning is used to increase the effect of remedial therapy within physiotherapy, speech therapy and occupational therapy. Important themes are: the patient as an active participant in the rehabilitation process, behavioral change of professionals to use evidence-based reasoning while coping with patient preferences at the same time, and the development of reliable outcome measurements.

Rob A. De Bie is a professor of Physiotherapy Research at Maastricht University. His expertise is in systematic review methodologies and professional guidelines, and he is known for his contributions to the Cochrane Back Review Group, which coordinates international literature reviews of primary and secondary prevention and treatment of neck and back pain and other spinal disorders. He has also provided support to the OTseeker and PEDro projects, databases that contain abstracts of systematic reviews and randomized controlled trials relevant to occupational and physical therapy. His research focuses on musculoskeletal disorders and relevant co-morbidities, while he teaches mainly clinical epidemiology at the Faculty of Health, Medicine and Life Sciences, Maastricht University. He is coordinator for academic skills in the medicine curriculum.

J. Bart Staal graduated as a human movement scientist in 1996, after which he worked as a physiotherapist both in Germany and the Netherlands. In 1998, he started working as a researcher at the Institute for Research in Extramural Medicine (EMGO-Institute) of the Vrije Universiteit in Amsterdam, where he conducted a PhD study on the effectiveness of a graded activity intervention for workers who were sick-listed due to low back pain. In 2003 he obtained his PhD degree with a thesis entitled 'Low back pain, graded activity and return to work'. Besides his research activities, he also attended statistical and methodological courses in the Netherlands and the USA. Subsequently, he started working as a lecturer and senior researcher at the Department of Epidemiology and the Center of Evidence Based Physiotherapy at Maastricht University. He taught evidence-based medicine, statistics and implementation research in several bachelor and master courses to both medical and health sciences students.
Currently he works as an assistant professor at the Radboud university medical center and the HAN University of Applied Sciences, Research group Musculoskeletal Rehabilitation. His research themes include musculoskeletal disorders, guideline development, cardiac rehabilitation, cancer rehabilitation, sports injuries, and physiotherapy in general.

... and Radboudumc (2015-present). As a physiotherapist and biomedical scientist, he is working with his team to study the merit of physiotherapy for people with chronic conditions. He and his team proceed from the assumption that to do clinically relevant and impactful research, researchers need to embrace the complexity of daily (physiotherapy) practice and need to study personalized, rather than protocolized, interventions.
Why Is the Ratio of Reflectivity Effective for Chlorophyll Estimation in the Lake Water?

The reasons why it is effective to estimate the chlorophyll-a concentration with the ratio of spectral radiance reflectance at the red light region and near infrared regions were shown in theory using a two-flow model. It was found that all of the backscattering coefficients can consequently be ignored by using the ratio of spectral radiance reflectance, which is the ratio of the upward radiance to the downward irradiance, at the red light and near infrared regions. In other words, the ratio can be expressed by using only absorption coefficients, which are more stable for measurement than backscattering coefficients. In addition, the band selection is crucial for producing the band ratio when the chlorophyll-a concentration is estimated without the effects of backscattering. I conclude that the two wavelengths selected must be close, but one must be within the absorption range of chlorophyll-a, and the other must be outside of the absorption range of chlorophyll-a, in order to accurately estimate the chlorophyll-a concentration.

Introduction

A number of spectral vegetation indices have been developed [1,2]. General vegetation indices such as the Simple Ratio Index (SR) can be calculated using the ratio of spectral reflectance at two different wavelengths, with a red light region as the denominator and a near infrared region as the numerator. The Normalized Difference Vegetation Index (NDVI), which is the most common vegetation index, is calculated from the normalized difference of the spectral reflectance at two different wavelengths. These indices are very simple and applicable for estimating chlorophyll-a in leaves or the amount of vegetation. The NDVI is more sensitive to biophysical than biochemical properties and is known to become saturated at a relatively low biomass. Also, it has been reported that the ratio of spectral reflectance at two different wavelengths correlates well with the chlorophyll concentration in lakes or inland seas polluted by many particles [3-7]. Using data measured at 29 points of Lake Kasumigaura in Japan, Oki et al. [8,9] analyzed the distribution of correlation coefficients greater than 0.9 in the three-dimensional space spanned by the denominator and numerator wavelengths of the ratio of spectral radiance reflectance R_w, which is the ratio of the upward radiance to the downward irradiance, at two different wavelengths. The relationship between the ratio (R_w725/R_w675), calculated from the spectral radiance reflectance at 675 nm and 725 nm, and the chlorophyll-a concentration showed the best correlation of all patterns within the range of 400 nm through 850 nm. The correlation coefficient was 0.958 under various conditions in which each point was measured on a different date. It has generally been considered effective to use the wavelength ranges of chlorophyll-a-absorbing light (red light region) and strongly reflected light (near infrared region) in leaves. However, it has not been clearly explained why the chlorophyll-a concentration in leaves or in bodies of water can be precisely estimated this way.

In this study, I attempted to determine why the ratio of reflectivity is effective for the chlorophyll-a estimation. Lake Kasumigaura in Japan was selected as the test site because a lot of data on the lake, such as the chlorophyll-a concentration and spectral signature, has already been obtained.
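For reference, the two indices mentioned above are computed from the same pair of bands. The sketch below uses the standard definitions; the reflectance values are illustrative only.

```python
def simple_ratio(r_nir, r_red):
    """SR: near-infrared reflectance divided by red reflectance."""
    return r_nir / r_red

def ndvi(r_nir, r_red):
    """NDVI: normalized difference of the same two bands."""
    return (r_nir - r_red) / (r_nir + r_red)

# Illustrative reflectance values for a vegetated target:
r_red, r_nir = 0.05, 0.45
print(simple_ratio(r_nir, r_red))  # 9.0
print(ndvi(r_nir, r_red))          # 0.8
```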
Radiative Transfer Model at the Water Surface

The underwater light field is determined by the inherent optical properties, such as the absorption coefficient and scattering coefficient, of the various components in the water body. Thus, the upward radiance of light from the water body contains information on these components. Figure 1 shows a schematic diagram of an optical system above and below a water surface. The spectral radiance reflectance R, the ratio of radiance to irradiance at the water surface, is defined by

R = L_t / E_d    (1)

where L_t is the upward spectral radiance from the water surface, E_d is the downward irradiance onto the water surface from total solar radiation, and L_w is the upward spectral radiance just above the water surface. L_r is the upward spectral radiance reflected from the water surface due to total solar radiance. E_i and E_s are the downward irradiance onto the water surface due to the direct sun and the diffused skylight, respectively, so that

L_t = L_w + L_r,  E_d = E_i + E_s.    (2)

In this study, it was assumed that the water body is a Lambertian reflector and that the angular distribution of radiance in the lower hemisphere of the water surface is uniform for radiance traveling upward. In this case, L_w can be expressed as

L_w = (t/n²) L_u(0)    (3)

where t and n are the transmittance and refractive index from water to air, respectively. L_u(0) is the upward spectral radiance just below the water surface, which cannot be measured directly. Therefore, L_u(0) was estimated from the upward radiance at depth Z as

L_u(0) = L_u(Z) exp(kZ)    (4)

where k is the extinction coefficient for the upward irradiance of the water [3,10,11], expressed as

k = ln[L_u(Z1)/L_u(Z2)] / (Z2 − Z1).    (5)

In this study, by measuring the upward spectral radiance underwater at depths Z1 and Z2, we defined the spectral radiance reflectance R_w by the following equation instead of Equation (1):

R_w = (t/n²) L_u(0) / E_d.    (7)

Equation (7) removes the effect of the specular reflection L_r at the water surface contained in Equation (2), which gives a more accurate measurement for the chlorophyll-a concentration in the water body.

Measurement of the Spectral Signature

The upward radiance of Lake Kasumigaura, selected as the test site for this study, was measured using a field spectroradiometer developed by Miyazaki et al. [12]. Measurements were taken at the water surface and at several depth layers at each point (Pt1 to Pt10) shown in Figure 2. There were 29 sample points in total: 10 points at Pt1 to Pt10, six points at Pt1 to Pt6, two points at Pt1 to Pt2, eight points at Pt1 to Pt8, and three points at Pt1 to Pt3, measured on 10 September 1993, 22 April 1994, 27 July 1994, 4 September 1996, and 5 September 1996, respectively. Figure 2 shows the locations of Pt1 to Pt10 at Lake Kasumigaura. The measurements covered the range from 400 nm to 850 nm with a resolution of 2 nm. At each measurement point, the following were measured:

・Upward spectral radiance underwater at depths of 10 cm and 40 cm from the water surface (L_u(0.1), L_u(0.4))
・Spectral irradiance of the white board (E_d)

The obtained spectral signature of the water was determined by taking an average of 10 scans. The underwater upward spectral radiance (L_u(0.1), L_u(0.4)) was measured once (average of 10 scans). Then, two liters of surface water of the lake were collected at each point to measure the chlorophyll-a concentration by a methanol extraction method, in which 10 ml methanol was added to a glass filter on which the suspended substances had been filtered, and the soluble substances were then extracted by keeping the mixture at a temperature of less than 3 degrees Celsius for 12 hours [13].
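A minimal sketch of the R_w computation, following Equations (3)-(5) and (7) as reconstructed above: the extinction coefficient k is obtained from the radiances at the two measurement depths, L_u(0) is extrapolated to just below the surface, and R_w is formed from the downward irradiance. The values of t and n are typical literature placeholders, not necessarily those used in the study.

```python
import numpy as np

def radiance_reflectance(lu_z1, lu_z2, e_d, z1=0.1, z2=0.4, t=0.98, n=1.33):
    """Spectral radiance reflectance R_w from radiance at two depths.

    lu_z1, lu_z2 : upward radiance spectra at depths z1 and z2 (m)
    e_d          : downward irradiance spectrum at the surface
    t, n         : water-to-air transmittance and refractive index
    """
    lu_z1 = np.asarray(lu_z1, dtype=float)
    lu_z2 = np.asarray(lu_z2, dtype=float)
    k = np.log(lu_z1 / lu_z2) / (z2 - z1)  # extinction coefficient, Eq. (5)
    lu_0 = lu_z1 * np.exp(k * z1)          # extrapolation to depth 0, Eq. (4)
    return (t / n**2) * lu_0 / np.asarray(e_d, dtype=float)  # Eq. (7)
```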
Chlorophyll-a Estimation Using the Ratio of the Reflectivity

Various methods for estimating the chlorophyll concentration have been developed [3-7,14,15]. In many of these methods, the ratio of spectral reflectance at two different wavelengths is well correlated with the chlorophyll concentration in lakes or inland seas polluted by many particles [3-7]. Oki et al. [8,9] analyzed the distribution of correlation coefficients greater than 0.9 at Lake Kasumigaura in the three-dimensional space spanned by the denominator and numerator wavelengths of the ratio of spectral radiance reflectance at two different wavelengths, shown in Figure 4. In that study, the spectral radiance reflectance of Equation (7) was calculated at 5 nm intervals from Equation (4) using anteroposterior values of the spectral radiance data of the 29 samples shown in this study. To estimate the chlorophyll-a concentration in water like that in Lake Kasumigaura, it is better to use the ratio of spectral radiance reflectance at two different wavelengths, with the red light region around 675 nm as the denominator and the near infrared region within the range of 700 nm through 730 nm as the numerator, than to use any other ratio of spectral radiance reflectance shown in Figure 4. The estimation results for the chlorophyll-a concentration using these wavelengths were similar to those obtained by Hoogenboom et al. [7], who developed a remote sensing algorithm for estimating the chlorophyll-a concentration in eutrophic inland waters. The red light region around 675 nm can be explained by the absorption of chlorophyll-a. However, the effectiveness of using the range of 700 nm through 730 nm has not been confirmed.

Why It Is Effective to Estimate the Chlorophyll Concentration with the Ratio of Spectral Radiance Reflectance

The spectral radiance reflectance of a water body derived from the two-flow model by Morel et al. [14] has been used by many researchers. The spectral radiance reflectance R_λ at wavelength λ can be expressed as

R_λ = b_λ / (a_λ + b_λ)    (8)

where a_λ and b_λ are the absorption coefficient and the backscattering coefficient at wavelength λ, respectively. The a_λ and b_λ can be expressed as

a_λ = a_water,λ + a_other,λ + a_chl,λ,  b_λ = b_water,λ + b_other,λ + b_chl,λ,

where the subscripts water, other, and chl indicate the coefficients for pure water, suspended solids except for chlorophyll-a, and chlorophyll-a. In this study, the effectiveness of using the ratio of the spectral radiance reflectances at 675 nm and 700 nm, chosen as the red light region and near infrared region from the ratios of spectral radiance reflectance shown in Figure 4, for estimating the chlorophyll-a concentration is shown in theory using the two-flow model. From Equation (8), this ratio reads

R_700/R_675 = [b_700 (a_675 + b_675)] / [b_675 (a_700 + b_700)].    (9)

The following was assumed. Assumption 1: in water polluted by many particles, the denominators of Equation (8) are dominated by absorption, which can be expressed as a_λ + b_λ ≈ a_λ. Assumption 2: it was assumed that if the two wavelengths are closer, then the total backscattering coefficients are more nearly equal, which can be expressed as

b_water,675 + b_other,675 + b_chl,675 ≈ b_water,700 + b_other,700 + b_chl,700.

From Assumptions 1 and 2, Equation (9) can be rewritten as

R_700/R_675 ≈ a_675/a_700 = (a_water,675 + a_other,675 + a_chl,675) / (a_water,700 + a_other,700 + a_chl,700).    (10)

The absorption coefficient a_chl,λ for chlorophyll-a in Equation (10) is considered to be linearly related to the chlorophyll-a concentration C, that is,

a_chl,λ = a'_chl,λ C    (12)

where a'_chl,λ is the absorption coefficient per unit of chlorophyll-a concentration. Using Equations (8) through (12), the chlorophyll estimation model can be expressed as

R_700/R_675 = (a_water,675 + a_other,675 + a'_chl,675 C) / (a_water,700 + a_other,700 + a'_chl,700 C).    (13)

From Equation (13), it can be seen that all of the backscattering coefficients can consequently be ignored by using the ratio of the spectral radiance reflectances at 675 nm and 700 nm. In other words, Equation (13) can be expressed by using only absorption coefficients, which are more stable for measurement in comparison with backscattering coefficients.
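The point of Equation (13) can be checked numerically: with the two-flow form of Equation (8), the band ratio changes only weakly when the common backscattering level is varied, as long as backscattering is small compared to absorption. The coefficient values below are placeholders chosen only to illustrate this, not measured values.

```python
# Illustrative total absorption coefficients (1/m) at the two bands:
a_675, a_700 = 1.2, 0.8
b_base = 0.15              # common backscattering level (Assumption 2)

for scale in (0.5, 1.0, 2.0):          # vary backscattering fourfold
    b = b_base * scale
    r_675 = b / (a_675 + b)            # two-flow reflectance, Equation (8)
    r_700 = b / (a_700 + b)
    print(f"b = {b:.3f}: R700/R675 = {r_700 / r_675:.3f}  "
          f"(a675/a700 = {a_675 / a_700:.3f})")
```

With these numbers, the ratio moves by only a few percent while b varies by a factor of four, staying close to the pure absorption ratio a_675/a_700, which is the qualitative content of Assumptions 1 and 2.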
In Equation (13), the values of a_other,675 and a_other,700 were estimated from the suspended solids concentration SS, which was measured at the same points as the chlorophyll-a concentration in this study. The values used for a'_chl,675, a'_chl,700, and a'_chl,650 were as reported by Morel et al. [14], and the values for a_water,675 and a_water,700 were adopted from Smith [16]. Figure 5 shows a comparison, as a function of the chlorophyll-a concentration, between the R_700/R_675 value calculated from Equation (13) and the measured R_700/R_675 value. It was found that the calculated R_700/R_675 was in accord with the measured R_700/R_675 when the chlorophyll-a concentration was less than 60 μg/l. However, the values were not in accordance with each other when the chlorophyll-a concentration was greater than 90 μg/l. One reason might be that Assumption 2 does not hold well, because the value of b_chl,675 decreases relative to the value of b_chl,700 as the chlorophyll-a concentration increases. As a result, the band ratio can no longer be expressed by Equation (13), which is composed only of absorption coefficients, with no backscattering coefficients used. In other words, band selection is crucial for producing the band ratio when the chlorophyll-a concentration is to be estimated without the effects of backscattering. In general, we can infer that Assumption 2 holds well when the two selected wavelengths are close, but one lies within the absorption range of chlorophyll-a, as does the spectral radiance reflectance at 675 nm, and the other lies outside of the absorption range of chlorophyll-a, as does the spectral radiance reflectance at 700 nm, so that the chlorophyll-a concentration can be estimated accurately. Also, the effective estimation range of the chlorophyll-a concentration can be lower when the two selected wavelengths are more distant than those of the test case in this study.

Conclusions

In this study, spectral measurements were carried out using Lake Kasumigaura as a test site to find out why the ratio of the reflectivity is effective for making a chlorophyll-a estimation. It was found that the chlorophyll-a concentration could be accurately estimated by using the ratio of spectral radiance reflectance at two different wavelengths, with a red light region around 675 nm as the denominator and a near infrared region within the range of 700 nm through 730 nm as the numerator.

The reasons why it is effective to estimate the chlorophyll-a concentration with the ratio of spectral radiance reflectance at the red light region and near infrared regions were shown in theory using a two-flow model. It was found that all of the backscattering coefficients can consequently be ignored by using the ratio of spectral radiance reflectance at the red light and near infrared regions. In other words, the ratio can be expressed by using only absorption coefficients, which are more stable for measurement than backscattering coefficients. In addition, the band selection is crucial for producing the band ratio when the chlorophyll-a concentration is estimated without the effects of backscattering. In general, we can conclude that the two wavelengths selected must be close, but one must be within the absorption range of chlorophyll-a, as is the spectral radiance reflectance at 675 nm, and the other must be outside of the absorption range of chlorophyll-a, as is the spectral radiance reflectance at 700 nm, to accurately estimate the chlorophyll-a concentration.
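Equation (13) is linear in C and can be inverted directly, which is how the band ratio would be used in practice. The coefficients below are placeholders (not the Morel [14] and Smith [16] values), included only so that the round-trip check runs.

```python
def chl_from_ratio(r, a_w675=0.45, a_w700=0.62, a_o675=0.05, a_o700=0.05,
                   ap675=0.020, ap700=0.006):
    """Invert Equation (13), r = (a_w675 + a_o675 + ap675*C) /
    (a_w700 + a_o700 + ap700*C), for the chlorophyll-a concentration C."""
    return ((a_w675 + a_o675) - r * (a_w700 + a_o700)) / (r * ap700 - ap675)

# Round-trip check with C = 50 ug/l:
C = 50.0
r = (0.45 + 0.05 + 0.020 * C) / (0.62 + 0.05 + 0.006 * C)
print(chl_from_ratio(r))   # ~50.0
```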
Figure 1.Schematic diagram of an optical system above and below the water surface. Figure 3 Figure 3 . Figure 3 shows examples of spectral reflectance calculated just above the water surface of Lake Kasumigaura at Pt1 measured on 10 September 1993 and Pt6 measured on 22 April 1994. Figure 4 . Figure 4. Distribution of correlation coefficient in the three dimensional space. b are the absorption coefficient and backscattering coefficient with wavelength λ , respectively.The λ a and λ b can be expressed as Figure 5 . Figure 5.Comparison between chlorophyll-a concentration and R 700 /R 675 value calculated from Equation (13) and R 700 /R 675 value measured.
Transport by molecular motors in the presence of static defects

The transport by molecular motors along cytoskeletal filaments is studied theoretically in the presence of static defects. The movements of single motors are described as biased random walks along the filament as well as binding to and unbinding from the filament. Three basic types of defects are distinguished, which differ from normal filament sites only in one of the motors' transition probabilities. Both stepping defects with a reduced probability for forward steps and unbinding defects with an increased probability for motor unbinding strongly reduce the velocities and the run lengths of the motors with increasing defect density. For transport by single motors, binding defects with a reduced probability for motor binding have a relatively small effect on the transport properties. For cargo transport by motor teams, binding defects also change the effective unbinding rate of the cargo particles and are expected to have a stronger effect.

Introduction

The interior of living cells is characterized by highly organized complex structures. To build and maintain these internal structures, cells rely on directed active transport of various types of cargoes to different destinations within the cell. This transport is driven by molecular motors which use the energy derived from the hydrolysis of adenosine triphosphate (ATP) to move along cytoskeletal filaments [15,45]. There are three large families of cytoskeletal motors: kinesins and dyneins, which move along microtubules, and myosins, which move along actin filaments [15,45]. Since cells provide crowded environments, motors moving along filaments encounter a variety of other molecules bound to the same filaments, which may hinder their movement. These obstacles may represent other motors of the same type, and the traffic phenomena that arise in such systems with many motors have been studied extensively in recent years. Many theoretical studies have explored the formation of traffic jams and non-equilibrium phase transitions [8,17,19,32,39,40], and traffic jams of molecular motors have recently been observed in several experimental studies [26,30,39]. A system with two different species of motors that move in opposite directions has also been studied theoretically and is predicted to exhibit spontaneous symmetry breaking and the formation of separate traffic lanes for the two directions [21]. In addition to molecular motors, a variety of other molecules can bind to filaments and affect the movement of these motors. An important example is given by microtubule-associated proteins (MAPs), which bind to microtubules to control their structure and stability. In addition, MAPs can modulate the movement of the motors along the microtubules. When overexpressed in vivo [2,7,44] or added to microtubule gliding assays with kinesin or dynein motors in vitro [11,12,33,41], MAPs decrease or completely inhibit the motility of motors. More recent experiments using lower concentrations of MAPs and tracking the movements of individual motors show that most MAPs studied so far affect motor movements by modulating the binding of the motors to microtubules. For example, the tau protein, a MAP specific for neurons, has been shown to decrease the binding of kinesin and dynein motors to microtubules [5,48,52].
Its effect depends on the tau isoform [52], is more pronounced for kinesin than for dynein motors [5,7,53], and has a stronger effect on cargoes pulled by several motors than on individual motors, see Refs. [23,48,52] and the discussion below. These subtle and highly specific effects seen at low tau concentrations [5,48,52] suggest that tau (and other MAPs) may play important roles as regulators of transport in cells, and may function as general transport inhibitors only under pathological conditions [34]. For example, the differential effects on kinesins and dynein suggest that tau can control the direction of motion of cargoes that are carried by both types of motors, as discussed in Ref. [37]. When modeling large-scale transport by molecular motors, static molecules bound to the filaments can be considered as local properties of the filament. They represent static or quenched defects of the filament that affect the motor dynamics locally. The same theoretical description may then be used for other types of defects that cause local effects on motor transport. Such defects may for example be local modifications of the filaments themselves, such as microtubule lattice defects or a variety of post-translational modifications of tubulin, the subunit of microtubules. Some of these modifications have been shown to affect the microtubule binding or the movement of motors [28,29,43]. Finally, in addition to these naturally occurring defects, artificial 'roadblocks' such as inactive motor mutants have been used in several experiments to perturb the movement of active motors in order to study the mechanisms of motor function [3,49]. In this paper, we study the effects of various types of defects on the movements of molecular motors using the lattice model introduced in Ref. [32]. Here we use the simple description of the dynamics of motor stepping provided by the lattice model to distinguish three basic types of static defects and to study their effects on single motors as well as on the motor traffic in many-motor systems. The three basic types of defects are given by filament sites that differ from the other filament sites in one of three motor parameters: (i) stepping defects have an altered forward stepping probability, (ii) unbinding defects have an altered unbinding probability, and (iii) binding defects have an altered binding probability. Some cases that have been studied previously can be considered as special cases of this general approach. For example, a single stepping defect has been studied in Ref. [42], and a single unbinding defect without unbinding from non-defect sites in Ref. [35]. Very recently, binding defects have been studied in Ref. [10]. Stepping defects have also been investigated extensively for one-dimensional exclusion processes [16,20,25,51], which, in our model, correspond to the movement of motors along a filament without binding and unbinding. We also note that in the statistical mechanics literature such defects are classified as 'sitewise' disorder, since the anomalous properties are related to a fraction of the lattice sites, as opposed to the case of 'particlewise' disorder, for which some of the moving particles exhibit anomalous properties [27]. The paper is organized as follows: In section 2, we introduce the lattice model and the system geometry used in this study, as well as the description and classification of defects. We discuss the modeling of known biological defects such as MAPs within this model.
We then study stepping defects in section 3, unbinding defects in section 4 and binding defects in section 5. We conclude with a few general remarks on the use of defects in transport.

Lattice model for the traffic of molecular motors

To study the effects of various types of defects on the transport by molecular motors, we extend the lattice model introduced in Ref. [32], which we have previously used to describe both the movement of single motors [22,32,38] and the traffic in many-motor systems [19,32]. This model describes the movements of a single molecular motor along a filament as a random walk on a (generally three-dimensional) lattice, which contains one or several lines of lattice sites that represent filaments. The lattice constant ℓ is given by the step size of a motor moving actively along a filament. Per unit time τ, a motor at a filament site makes a forward step along the filament with probability α, unbinds to each of the four neighboring non-filament sites with probability ε/6, and remains at the same site with probability γ = 1 − α − 4ε/6. Motors at non-filament sites perform symmetric random walks and move to each nearest neighbor site with probability 1/6. The choice of this probability implies that the time scale τ is given by the diffusion coefficient of unbound motors, D_ub, as τ = ℓ²/D_ub. If an unbound motor moves to a filament site, it binds to it with the sticking probability π_ad. For π_ad < 1, this condition modifies the probability for the movement from a non-filament site to a filament site to π_ad/6. In general, we can model both freely suspended filaments, for which each filament site is connected to four neighboring non-filament sites, and immobilized filaments, for which the number of such nearest neighbors is at most three. In the simulations reported below, we focused on freely suspended filaments.

In addition to the dynamics of single motors, the lattice model can also describe systems with many interacting motors. In the simplest case, these motors interact only through their mutual exclusion from lattice sites, which is implemented in the model by not allowing any steps to sites that are occupied by another motor. Typically, the density of motors at non-filament sites is much lower than at filament sites, so that the exclusion rule affects mainly the binding to the filament and the movement along it. By virtue of this exclusion rule, our model is a variant of driven lattice gas models or exclusion processes, which have been studied extensively as model systems for transport processes and non-equilibrium phase transitions [46,47]. Throughout this article, we will study systems that have a tube-like geometry as shown in Fig. 1. In these systems, a single filament is located on the axis of a cylindrical tube with length L and radius R. [Fig. 1. Molecular motors inside a cylindrical tube with a filament aligned along its axis. This tube system mimics the geometry of elongated cellular structures such as axons. The tube has the length L and the radius R. Motors bound to the filament move actively along the filament in a directed fashion, while unbound motors perform diffusive movements. The boundary condition is periodic along the x-axis.] This geometry mimics the structure of some types of cells, such as axons of nerve cells or hyphae of fungi, which are approximately tubular and have a unidirectional microtubule cytoskeleton [9]. Similar tube-like systems have previously been studied with various types of boundary conditions [19,24,32,36].
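The single-motor dynamics described above can be written down directly as a Monte Carlo update rule. The sketch below is a minimal implementation for a tube with the filament on its axis and periodic boundary conditions along x; the site-based treatment of the tube wall (moves that would leave the tube are rejected) is an assumption, and all parameter values are illustrative rather than those used in the paper.

```python
import random

alpha, eps, pi_ad = 0.5, 0.01, 1.0   # illustrative motor parameters
L, R = 200, 5                        # tube length and radius in units of l

def step(x, y, z):
    """One time step tau of a single motor; the filament is at y = z = 0."""
    if y == 0 and z == 0:                          # bound motor
        u = random.random()
        if u < alpha:                              # forward step
            return (x + 1) % L, y, z
        if u < alpha + 4 * eps / 6:                # unbind to a neighbor
            dy, dz = random.choice([(1, 0), (-1, 0), (0, 1), (0, -1)])
            return x, y + dy, z + dz
        return x, y, z                             # dwell
    dx, dy, dz = random.choice([(1, 0, 0), (-1, 0, 0), (0, 1, 0),
                                (0, -1, 0), (0, 0, 1), (0, 0, -1)])
    ny, nz = y + dy, z + dz
    if ny * ny + nz * nz > R * R:                  # reject moves past the wall
        return x, y, z
    if ny == 0 and nz == 0 and random.random() > pi_ad:
        return x, y, z                             # failed binding attempt
    return (x + dx) % L, ny, nz

x = y = z = 0
bound, disp, T = 0, 0, 500_000
for _ in range(T):
    x0 = x
    x, y, z = step(x, y, z)
    disp += (x - x0 + L // 2) % L - L // 2         # unwrap periodic steps
    bound += (y == 0 and z == 0)
print("P_b ~", bound / T, "  v_eff ~", disp / T, "l per tau")
```

The measured bound-state fraction and mean displacement per time step can be compared with the analytic expressions for P_b and v_eff quoted below for periodic boundary conditions.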
In order to keep the discussion simple, we will focus on periodic boundary conditions in the following. For periodic boundary conditions, the case without defects is particularly simple and has been solved exactly [19]. In this case, the densities of bound and unbound motors, ρ_b and ρ_ub, respectively, are spatially homogeneous and satisfy a binding-unbinding balance condition, and the motor current J along the filament is given by

J = α ρ_b (1 − ρ_b)/τ.    (1)

In the case of a single motor, the balance equation is

ε ρ_b = π_ad ρ_ub,    (2)

from which one can derive the steady-state probability that the motor is bound to the filament, P_b = Σ_x ρ_b = ρ_b L/ℓ, which is given by

P_b = π_ad/(π_ad + ε N_ch),    (3)

where N_ch is the number of unbound channels, i.e., the number of lines of lattice sites parallel to the filament in a discretized tube with cross section φ = (1 + N_ch) ≈ πR² for sufficiently large radius R. The effective motor velocity, averaged over the bound and unbound states of the motor, is then obtained as

v_eff = v_b P_b = v_b π_ad/(π_ad + ε N_ch),    (4)

where v_b = αℓ/τ is the velocity of the bound motor.

Lattice model with different types of defects

Inhomogeneities of the filament such as those mentioned in the introduction may affect one or several of the motor properties. This can be described within the lattice model by modifying one or several of the hopping probabilities compared to the homogeneous situation. In the following, we distinguish three basic types of defects, which are characterized by a single parameter that differs from the homogeneous case, as shown in Fig. 2: (i) Stepping defects have a changed probability α_def for forward movement, but unchanged binding probability π_ad and unbinding probability ε; (ii) unbinding defects have a changed unbinding probability ε_def; and (iii) binding defects have a modified sticking probability π_def. In all three cases, the dwell probability γ_def also needs to be adapted, so that the sum of all probabilities is again equal to one. More complicated types of defects can be considered as combinations of these basic defects. For example, an inaccessible site due to a large immobile protein bound to the filament, such as an inactive mutant motor, can be described by a combination of a stepping defect and a binding defect with α_def = π_def = 0.

Table 1 lists examples of defects that have been characterized experimentally and summarizes their effects on molecular motors. (Table 1 notes: (a) Refs. [5,52] also report shorter run lengths, i.e. increased unbinding, not observed in Ref. [48]; in addition, Ref. [5] also reports a substantial fraction of immobile motors bound to microtubules. (b) Under the conditions of this experiment, motors have rather long run lengths in the absence of tau, longer than in the experiments of Refs. [48,52,53]. (c) The effect depends on which isoform of MAP4 is used: a 5-repeat isoform exhibits a strong effect, while the other isoforms studied showed only small effects [50]. (d) Refs. [3,49] report conflicting results for the effect on unbinding.) MAPs such as the tau proteins essentially represent binding defects (an exception is MAP4 [50], see Table 1). They reduce the binding rate of kinesin to microtubules and have no or only a weak effect on the velocity of bound kinesins as well as on the unbinding rate or the run length, the distance moved along the filament before unbinding [48,52]. Similar effects have been observed for dynein motors [5], which in general are less affected by MAPs than kinesins. In vivo, tau has also been shown to reduce the run length for vesicular cargoes.
The increase in unbinding rate for these cargoes is most likely a consequence of the fact that these cargoes are pulled by several motors rather than a single motor, since for cargoes pulled by several motors, the unbinding rate is a function of the single-motor binding rate [23,48]. This effect has been demonstrated in vitro for tau and beads pulled by several kinesins [52]. Therefore, defects that are binding defects for individual motors can be both binding and unbinding defects for cargoes pulled by multiple motors. The effects of post-translational modifications of microtubules on motor movements have not been characterized in much mechanistic detail. One case for which the effect is known is the acetylation/deacetylation of a particular lysine residue of α-tubulin, for which it has been shown that kinesin binds more strongly to the acetylated form than to the deacetylated form [43]. Microtubules containing deacetylated tubulin subunits therefore provide another example of binding defects. Microtubule lattice defects are believed to cause unbinding of motors, see, e.g., Ref. [4], and would thus represent unbinding defects. While this scenario is plausible, it has not been studied systematically and there is no direct experimental evidence for it. Finally, the artificial 'roadblock' motor mutants used in Refs. [3,49] represent blocked sites, i.e. combinations of a stepping defect with very low, essentially zero, stepping rate and a binding defect. Whether they also affect unbinding is unclear, since the two experiments in Refs. [3,49] reported conflicting results; for a discussion see also Ref. [18]. In another recent experiment, some motors were inactivated by irreversible crosslinking to a microtubule to obtain blocked sites [6].

Single motor with stepping defects

We start by considering stepping defects. At a stepping defect, the motor has the forward stepping probability α_def, while the binding and unbinding parameters are the same as at the other filament sites. We note that the probability to remain at the site is also changed compared to other sites and is given by γ_def = 1 − α_def − 4ε/6. We consider a single filament in a tube as shown in Fig. 1 with a density ρ_def of stepping defect sites. To keep the discussion simple, we study the case where the defect sites are arranged periodically on the filament. This situation is then equivalent to a system with a single defect, length L = ℓ/ρ_def, and periodic boundary conditions. First, we consider the effect of stepping defects on a single motor. As the stepping defects do not affect the binding and unbinding probabilities of the motor, one may expect that the binding probability is the same as in the absence of defects. However, relation (2), which describes a local balance of binding and unbinding, is not valid in the presence of stepping defects, since the motor densities are not constant along the filament because of the prolonged waiting times of the motor at the defect sites. Thus relation (2) has to be replaced by the global balance of binding and unbinding, which remains valid if the densities are not constant and is given by

ε Σ_x ρ_b(x) = π_ad Σ_x ρ_ub(x, y_nn, z_nn),    (5)

where y_nn and z_nn are the perpendicular coordinates of a single channel of non-filament sites that are the nearest neighbors of the filament sites, e.g. y_nn = ℓ and z_nn = 0.
The inhomogeneity of the unbound density is relatively small, because the fast motor diffusion tends to smooth the unbound density profile, and taking the unbound density to be independent of the coordinates perpendicular to the filament, i.e. ρ_ub(x, y, z) ≃ ρ_ub(x), is usually a very good approximation [19,24]. Within this 'two-state' approximation, the probabilities P_b and P_ub to find the motor in a bound or an unbound state, respectively, are given by

P_b = Σ_x ρ_b(x) and P_ub = N_ch Σ_x ρ_ub(x),    (6)

and satisfy the normalization condition

P_b + P_ub = 1.    (7)

The flux balance relation (5) then becomes

ε P_b = (π_ad/N_ch) P_ub,    (8)

which leads to the same expression as for the case without stepping defects,

P_b = π_ad/(π_ad + ε N_ch).    (9)

To obtain the effective velocity of the motor, we introduce an effective passing time to describe the movement of the bound motor. We assume that the motor spends the time τ_0 = τ/α at a normal site and the time τ_def at a defect site. Since there are L/ℓ − 1 normal sites and only one defect site on a filament segment of length L, the total time to move through such a segment is t_tot = (L − ℓ)τ/(ℓα) + τ_def, provided the motor typically remains bound to the filament during such a run. The velocity of a bound motor can then be estimated by

v̄_b = L/t_tot = L/[(L − ℓ)τ/(ℓα) + τ_def].    (10)

The effective velocity, which characterizes the motor movement including the diffusive excursions upon unbinding, is then given by

v_eff = v̄_b P_b.    (11)

The time τ_def to pass a defect remains to be specified. In the limit of a sufficiently weak defect and sufficiently processive motors, as assumed so far, this time is given by the inverse of the defect stepping probability, i.e. τ_def = τ/α_def. In general, however, there are two ways in which a motor can pass a stepping defect: the motor can either slowly step through the defect along the filament, or it may unbind from the filament and rebind to it after diffusing around the defect. The relative importance of these two pathways depends on their relative probabilities: when α_def ≫ ε, the direct path through the defect dominates, while unbinding and diffusion will be the dominant pathway for α_def ≪ ε. If the stepping probability α_def at the defect is not large compared to the unbinding parameter ε, the probability for the motor to take the diffusion channel is comparable to the probability to move forward along the filament. To estimate the contribution of the diffusion channel, we start with the limiting case α_def = 0, for which the motor can only take the diffusive channel to pass defect sites. We make the ansatz that the probability for taking the diffusive pathway is proportional to the unbinding probability ε. For α_def = 0, the time it takes the motor to pass the defect is then given by τ_def = τ/(qε), and the effective motor velocity follows from Eqs. (10) and (11). In these expressions, q is an unknown free parameter that should depend on the geometry of the system. For the parameters used in our simulations, we have determined this parameter by fitting the expression for v_eff to the simulation data for α_def = 0, see Fig. 3(a), which leads to q ≃ 0.25. For intermediate values of the stepping probability α_def, both pathways contribute and the total probability to pass the defect is given by the sum of the probabilities for the two channels. The effective passing time τ_def for the motor is then proportional to 1/(α_def + qε). This expression implies that the effective bound motor velocity (or effective stepping rate) is given by

α_eff = [(1 − ρ_def)/α + ρ_def/(α_def + qε)]⁻¹,    (12)

with ρ_def = ℓ/L, and leads to the effective motor velocity

v_eff = (α_eff ℓ/τ) π_ad/(π_ad + ε N_ch).    (13)
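For orientation, Equations (12) and (13) as reconstructed above are easy to evaluate. The parameter values below, including q = 0.25 from the fit quoted above, are illustrative choices rather than the simulation parameters of the paper; note that with α close to one, a defect density of 1 percent with α_def = 0 already reduces the velocity to roughly 20 percent, consistent with the estimate given in the next paragraph.

```python
alpha, eps, pi_ad, q, N_ch = 0.99, 0.01, 1.0, 0.25, 80   # illustrative

def v_eff(rho_def, alpha_def):
    """Effective velocity (in l per tau) from Eqs. (12) and (13)."""
    alpha_eff = 1.0 / ((1 - rho_def) / alpha
                       + rho_def / (alpha_def + q * eps))
    return alpha_eff * pi_ad / (pi_ad + eps * N_ch)

for rho in (0.0, 0.01, 0.1):
    print(rho, [round(v_eff(rho, a_d), 4)
                for a_d in (0.0, 0.1 * alpha, 0.5 * alpha)])
```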
The curves obtained from expression (13) agree very well with the simulation data. We note, however, that this expression is not strictly valid in the limit of very weak defects with α_def ≈ α. Putting α_def = α in Eq. (12) leads to α_eff = α + O(ε/α), i.e. to a discrepancy of order ε/α, which is very small for processive motors.

Fig. 3(a) shows that the velocity is reduced compared to the case without defects. As one might expect, this reduction is larger for stronger defects and/or for higher defect densities. If the defects are sufficiently strong, even very small defect densities lead to a substantial reduction of the velocity. For example, if 1 percent of the sites on the filament are stepping defects with α_def = 0, the velocity of the single motor is reduced to about 20 percent of its value without defects.

Another important property of the motors that is affected by stepping defects is their run length Δx, i.e. the distance a motor moves along a filament before unbinding from it. In the absence of defect sites on the filament, the average run length is given by

$$\langle \Delta x \rangle_0 = \frac{3\, \alpha\, \ell}{2\, \varepsilon} \qquad (14)$$

and the distribution of run lengths decays exponentially as exp(−Δx/⟨Δx⟩_0). The run length distribution for the case without defects is shown in Fig. 3(c), see the straight line. This exponential decay is modified by the presence of stepping defects, as also shown by the simulation data points in Fig. 3(c). In the presence of a low density of defects, the run length distribution decays slightly faster for large run lengths and, in addition, develops a pronounced peak at short run lengths. This peak corresponds to short runs that start close to a defect and end at that defect. Both effects lead to a reduction of the average run length.

As the unbinding probability of the motors is not affected by stepping defects, the time a motor remains bound to a filament is the same with and without defects. However, the distance moved during this time is reduced if the motor encounters a defect. We can therefore estimate the average run length using the effective velocity of a bound motor, v_b, which leads to

$$\langle \Delta x \rangle \simeq \frac{3\, \tau}{2\, \varepsilon}\, v_b . \qquad (15)$$

The dependence of the mean run length on the density and the strength of the defects is shown in Fig. 3(b). For strong defects with α_def ≪ ε, the precise value of α_def is irrelevant, as motors at the defect site typically unbind before passing through the defect. For weaker defects, i.e. larger α_def, the reduction of the run length is shifted towards larger defect densities. For α_def > ε, an approximately two-fold reduction of the run length is obtained when the defect density ρ_def and the ratio α_def/α have the same order of magnitude.

Many motors with stepping defects

We now consider the effect of stepping defects on the traffic of many motors, which interact through mutual exclusion from filament sites. For the traffic of many motors in a tube with length L, we are interested in the following quantities [31]: (i) the bound density ρ_b as a function of the spatial coordinate x along the tube axis; (ii) the bound current J_b(x), which gives the number of motors that pass through a lattice site on the filament with coordinate x per unit time; and (iii) the average bound current $\bar J_b \equiv \int_0^L \mathrm{d}x\, J_b(x)/L$, which characterizes the overall transport along the filament.
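These many-motor observables can be illustrated with a stripped-down exclusion-process simulation, sketched below. The unbound motors are lumped into a well-mixed reservoir instead of the explicit diffusive channels of the tube, and all parameters are illustrative, so the sketch only mimics the qualitative behavior discussed next.

```python
import random

# Random-sequential exclusion process on a ring of L filament sites with one
# stepping defect at site 0. Unbound motors form a well-mixed reservoir (a
# simplification of the tube's diffusive channels); parameters are illustrative.

L = 200
ALPHA, ALPHA_DEF = 0.9, 0.45    # stepping probabilities (defect at site 0)
EPS, PI_BIND = 0.01, 0.0005     # unbinding / per-site binding probability

def average_bound_current(n_total, sweeps=20_000, seed=4):
    rng = random.Random(seed)
    bound = [False] * L
    n_ub = n_total                 # motors currently in the reservoir
    hops = 0                       # forward hops, summed over all sites
    for _ in range(sweeps):
        for i in rng.sample(range(L), L):    # random-sequential update
            if bound[i]:
                a = ALPHA_DEF if i == 0 else ALPHA
                j = (i + 1) % L
                r = rng.random()
                if r < a and not bound[j]:   # exclusion: step only if empty
                    bound[i], bound[j] = False, True
                    hops += 1
                elif r < a + EPS:            # unbind into the reservoir
                    bound[i] = False
                    n_ub += 1
            elif rng.random() < PI_BIND * n_ub:
                bound[i] = True              # bind from the reservoir
                n_ub -= 1
    return hops / (L * sweeps)               # average bound current

for n in (20, 100, 400, 1200):
    print(n, average_bound_current(n))
# Expect the current to first grow with the motor number and then drop
# again as the filament jams, cf. the discussion of Fig. 4(a) below.
```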
In general, as one increases the total number of motors in the tube, or equivalently the concentration of these motors, the average bound current of motors on the filament first increases, but eventually reaches a maximal value and then starts to decrease as the traffic becomes jammed [19], see Fig. 4(a). The presence of stepping defects decreases the average bound current compared to the case without defects for all choices of the total number of motors within the tube. The stronger the defects, the lower the value of the average bound current. In addition, the curve for the average bound current as a function of the overall motor number becomes broader as the strength of the defects increases, and the maximum of the average bound current is shifted towards larger values of the overall motor number, see Fig. 4(a).

Density profiles of bound motors along the filament in the presence of stepping defects are shown in Fig. 4(b) for the limiting case α_def = 0, for which motors can only pass the defects by unbinding, diffusion, and rebinding to the filament. These profiles show that stepping defects induce local traffic jams in front of the defect and a depletion zone behind it. These profiles are very similar to those found in earlier studies on closed and half-open tube systems [24,32,36], with the defect playing the role of the boundary of the tube. The spatial extension of the jammed region increases with the overall motor concentration. The end of the jammed region distal to the defect is marked by a rather sharp shock, i.e. a sudden change in density. The corresponding profiles of the current of bound motors are shown in Fig. 4(c). Weaker stepping defects with α_def > 0 cause a smaller perturbation of the bound density and bound current profiles; as shown in Fig. 4(d) for the case α_def = 0.5α, the effect of the defects is then confined to a small region around the defect.

Single motor with unbinding defects

The second type of defects that we investigate is provided by unbinding defects. Motors at an unbinding defect site move forward with probability α, unbind with probability ε_def/6 to each neighboring non-filament site, and remain at the same position with probability γ_def = 1 − α − 4ε_def/6. We again study the case where the defects are regularly distributed on the filament with density ρ_def = ℓ/L, and we start by considering the effect of the defects on the movement of a single motor.

Since all sites have the same stepping parameter α, the velocity of a bound motor, v_b = αℓ/τ, is not affected by the unbinding defects, and the effective velocity, which is averaged over the bound and unbound states of the motor, is given by

$$v_{eff} = P_b\, v_b = P_b\, \frac{\alpha\, \ell}{\tau} . \qquad (16)$$

Since the unbinding defects break the translational invariance of the system, they lead to inhomogeneous bound and unbound density profiles, so that the binding-unbinding balance is again not valid locally. As in the case of stepping defects, the bound and unbound motor densities satisfy a global balance of binding and unbinding,

$$\pi_{ad} \sum_x \rho_{ub}(x, y_{nn}, z_{nn}) = \sum_x \varepsilon(x)\, \rho_b(x) , \qquad (17)$$

where ε(x) = ε_def at the defect sites and ε(x) = ε at all other filament sites, and where y_nn and z_nn are again the perpendicular coordinates of a single channel of non-filament sites that are the nearest neighbors of the filament sites. As a global property of motor unbinding, we introduce an effective unbinding probability ε_eff, which is defined via

$$\varepsilon_{eff} \sum_x \rho_b(x) \equiv \sum_x \varepsilon(x)\, \rho_b(x) . \qquad (18)$$

Using this relation in (17), together with the replacement of ρ_ub(x, y_nn, z_nn) by ρ_ub(x) and the normalization condition (7), the probability P_b as defined by Eq. (6) is now given by the analog (19) of relation (9) for stepping defects, in which ε is replaced by ε_eff.
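In practice, the definition (18) is just a density-weighted average of the local unbinding rates, as the following snippet illustrates with a made-up bound-density profile:

```python
# eps_eff as the density-weighted average of the local unbinding rates,
# following definition (18). The bound-density profile used here is a
# made-up illustration, not simulation output.

def eps_effective(rho_b, eps_local):
    """Return sum_x eps(x) rho_b(x) / sum_x rho_b(x)."""
    weighted = sum(e * r for e, r in zip(eps_local, rho_b))
    return weighted / sum(rho_b)

EPS, EPS_DEF, L = 0.01, 1.28, 100          # eps_def = 128 * eps, one defect
eps_local = [EPS_DEF if x == 0 else EPS for x in range(L)]
rho_b = [0.2 if x == 0 else 0.5 for x in range(L)]   # density dips at the defect
print(eps_effective(rho_b, eps_local))     # ~0.015, between eps and the plain
                                           # site average of ~0.023
```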
Furthermore, in terms of ε_eff, the flux balance relation (17) becomes

$$\pi_{ad} \sum_x \rho_{ub}(x) = \varepsilon_{eff} \sum_x \rho_b(x) . \qquad (20)$$

It follows from Eqs. (16) and (19), the latter replacing relation (9) for stepping defects, that the effective unbinding parameter ε_eff and the effective velocity v_eff satisfy the same relation as the unbinding parameter and the velocity in the absence of defects, with ε replaced by ε_eff.

So far, we have only rewritten the motor properties in terms of the new parameter ε_eff. In the following, we consider several analytical approximations to determine the effective unbinding parameter ε_eff, which then lead to estimates for the bound-state probability P_b and the effective velocity v_eff. The simplest ansatz for ε_eff is to take the average of the unbinding probability along the filament, which leads to

$$\varepsilon_{eff} = \varepsilon + \rho_{def}\, (\varepsilon_{def} - \varepsilon) \qquad (22)$$

with the defect density ρ_def = ℓ/L. This approximation is valid if the bound motor density along the filament is approximately constant, which is the case if the motor is fast, with α ≫ ε and α ≫ ε_def.

In the opposite limit of a small stepping parameter α, the flux balance arising from binding and unbinding events is approximately valid locally,

$$\pi_{ad}\, \rho_{ub}(x) \simeq \varepsilon(x)\, \rho_b(x) . \qquad (23)$$

This relation is exact in the equilibrium case with α = 0. Furthermore, for small α, the unbound density varies very little and can be approximated by a constant ρ_ub, which again becomes exact in the equilibrium case with α = 0. It then follows from (23) that the bound density behaves as

$$\rho_b(x) \simeq \pi_{ad}\, \rho_{ub}/\varepsilon(x) \qquad (24)$$

for small stepping probability α. The probability P_ub for an unbound motor state and the probability P_b for a bound motor state then follow from Eq. (6) with this density profile, see Eqs. (25) and (26). Inserting the two expressions (25) and (26) into Eq. (20), one obtains the relation

$$\frac{1}{\varepsilon_{eff}} = \frac{1 - \rho_{def}}{\varepsilon} + \frac{\rho_{def}}{\varepsilon_{def}} \qquad (27)$$

for the effective unbinding probability ε_eff in the limit of a small stepping parameter α. Note that in this limit of small α, one has to average the inverse of the local unbinding parameter rather than the unbinding parameter itself, as in the limit of large α. Expanding the relation (27) in powers of the defect density ρ_def = ℓ/L leads to

$$\varepsilon_{eff} \simeq \varepsilon + \rho_{def}\, \varepsilon\, \left(1 - \frac{\varepsilon}{\varepsilon_{def}}\right) . \qquad (28)$$

Comparison of this result for small α with Eq. (22), which is valid for large α, shows that in both cases (ε_eff − ε) ∼ ℓ/L = ρ_def, but with different prefactors. These relations are confirmed by simulations, see Fig. 5(a). Furthermore, the simulation data show how intermediate values of α interpolate between these limiting cases. For these intermediate values, which are typical for motors with finite processivity, the effective unbinding probability exhibits a weak dependence on α. The simulation data are described rather well by an interpolation formula, expression (29), which contains a free parameter q′, determined to be q′ ≃ 4.0 by fitting the simulation data. The expression (29) interpolates between Eq. (22) for large α and Eq. (28) for small α. Using Eq. (29) also leads to a rather accurate description of the motor velocity as obtained from simulations, see Fig. 5(b), where the motor velocity is shown as a function of the defect density and the defect strength, i.e. the defect unbinding probability. We note that small defect densities can have a rather strong effect if the unbinding probability at the defect is of the same order of magnitude as the stepping probability α. For example, in the case ε_def = 128ε, a defect density of about 4 percent reduces the effective velocity two-fold, and a 10 percent defect density reduces it three-fold.

Unbinding defects also reduce the run length of the motor. In the presence of unbinding defects, the mean run length can be expressed in terms of the effective unbinding probability as

$$\langle \Delta x \rangle = \frac{3\, \alpha\, \ell}{2\, \varepsilon_{eff}} ,$$

as obtained from Eq. (14) when ε is replaced by ε_eff.
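Both limiting averages, and the crossover densities quoted here, can be checked with a few lines of arithmetic. The script below works in the respective limits and ignores the interpolation correction, so its numbers are order-of-magnitude checks only.

```python
# Numerical check of the two limiting averages for eps_eff and of the
# crossover defect density at which the run length halves. All numbers are
# order-of-magnitude checks in the stated limits, not fitted values.

def eps_eff_fast(eps, eps_def, rho_def):
    # large alpha: bound density nearly uniform, average the rates (Eq. 22)
    return eps + rho_def * (eps_def - eps)

def eps_eff_slow(eps, eps_def, rho_def):
    # small alpha: rho_b ~ 1/eps(x), average the inverse rates (Eq. 27)
    return 1.0 / ((1.0 - rho_def) / eps + rho_def / eps_def)

EPS = 0.01
print(eps_eff_fast(EPS, 128 * EPS, 0.04))   # ~0.061: strong increase
print(eps_eff_slow(EPS, 128 * EPS, 0.04))   # ~0.010: barely changed

# Run length ~ 1/eps_eff: in the fast-motor limit, a two-fold reduction
# needs eps_eff = 2*eps, i.e. rho_def = eps/(eps_def - eps).
print(EPS / (256 * EPS - EPS))   # ~0.004, same order as the quoted 0.007
```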
The dependence of the mean run length on the defect strength and the defect density is shown in Fig. 5(c). As in the case of the stepping defects studied above, see Fig. 3, small defect densities have only a weak effect on the run length, while large defect densities shorten the runs strongly. The crossover density at which the effect of the defects becomes notable depends strongly on the defect strength, i.e. on ε_def, and can be quite small for strong defects with large ε_def. For example, for ε_def = 256ε, a two-fold reduction of the mean run length is obtained for ρ_def ≃ 0.007, that is, if less than one percent of the filament sites are unbinding defects.

The effect of unbinding defects on the run length is very similar to the corresponding effect of stepping defects, see Fig. 3. This similarity reflects the fact that the effects of strong stepping and unbinding defects have some common aspects: when the motor encounters the defect, it has a rather high probability to unbind from the filament, either because of the high unbinding probability at an unbinding defect or because of the prolonged sojourn time at a stepping defect. As a consequence, unbinding and stepping defects also have a similar effect on the run length distributions, compare Figs. 5(d) and 3(c). Fig. 5(d) shows run length distributions for a rather low density of unbinding defects. As in the case of stepping defects, the length scale that governs the exponential decay of the distribution is slightly reduced, and the distributions exhibit a peak at small run lengths.

Many motors with unbinding defects

Now let us consider the effect of unbinding defects on the traffic of many motors which interact through mutual exclusion. Fig. 6(a) shows the average bound current as a function of the overall number of motors within the tube for a low defect density. It can be seen from these plots that the unbinding defects do not always reduce the current, in contrast to what one might expect and to what was found for the stepping defects discussed above, see Fig. 4(a). For small motor numbers, the average bound current is indeed slightly reduced by the presence of the defects, but for large motor numbers, the current is slightly increased. This observation can be understood as follows: for small motor concentration, the decrease of the bound motor density arising from the unbinding defects leads to a reduction of the average bound current. If, however, the concentration of motors is larger than the concentration for which the average bound current attains its maximal value, a reduction of the bound motor density leads to an increase of the average bound current, because the increased unbinding probability relieves the traffic jams appearing at high motor densities.

Profiles of the bound motor densities on the filament are shown in Fig. 6(b). The profiles are rather flat away from the defect, but have a minimum at the defect site. This is what one would expect, since motors unbind from the filament at this site. For small overall motor concentration (or total motor number), the profiles also exhibit a maximum in front of the defect. This maximum arises from the locally increased density of unbound motors, which leads to increased rebinding of motors to the filament. Since this maximum requires a locally increased motor density, it is only present if the diffusion of unbound motors is not too fast. When the overall motor concentration is increased, this maximum disappears. The corresponding bound current profiles are shown in Fig. 6(c) for a strong defect with ε_def = 128ε.
The current exhibits a peak in front of the defect and a depletion zone behind it. The depletion zone behind the defect follows the bound density profile closely, which indicates that it reflects the reduced motor density due to unbinding at the defect. On the other hand, the peak in front of the defect is present both when the bound density exhibits a peak and at high motor concentration, when no density peak occurs.

Single motor with binding defects

The third class of defects we investigate are binding defects. This type of defect appears to be the most common one in biological systems, as shown in Table 1. At a binding defect site, bound motors have the same hopping probabilities as at any other filament site, but unbound motors that approach the binding defect site have a reduced sticking probability π_def. Similar to the case of unbinding defects studied in the previous section, binding defects do not affect the movement of bound motors, but rather change the balance of binding and unbinding. The stronger the binding defects are, or the higher the density of binding defects on the filament is, the less likely it becomes for motors to bind to the filament. A balance of binding and unbinding similar to Eq. (17) is also valid in this case, but now with a site-dependent binding probability.

Simulation results for the effective motor velocity as a function of the density of binding defects are shown in Fig. 7(a). This figure also includes results obtained from a mean-field approximation using an effective (site-independent) binding parameter π_eff ≃ ⟨π_ad(x)⟩ = π_ad + ρ_def (π_def − π_ad). As can be seen in Fig. 7(a), this approximation leads to good agreement with the simulation data. The most noticeable feature of Fig. 7(a) is that the effect of binding defects is rather weak: binding defects only have a notable effect at high defect densities. Even if binding is completely suppressed at every second filament site, i.e. for ρ_def = 0.5 and π_def = 0, the effect remains weak, since the effective motor velocity is only decreased by 14 percent. This result is in striking contrast to the strong effects of the other two defect types. As the binding defects do not affect the movement of bound motors, the run length is not changed compared to the case without defects.

Many motors with binding defects

Finally, we investigate the traffic of many motors in the presence of binding defects. Fig. 7(b) shows the average bound current as a function of the number of motors in the tube for binding defects with π_def = 0. As in the case of unbinding defects shown in Fig. 6(a), binding defects can both increase and decrease the bound motor current. For low motor concentrations, binding defects reduce the current by reducing the probability that a motor is bound. For large motor concentrations, the binding defects increase the current compared to the case without defects, because they reduce the motor traffic jams on the filament. For weaker binding defects with π_def > 0, the effect is similar, but even smaller. The maximal value of the average bound current is not changed by the presence of binding defects; the defects rather shift the current maximum to larger motor numbers, see Figs. 7(b) and 7(c). Again, the effect of binding defects is much weaker than that of stepping or unbinding defects.
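The weak sensitivity to π_def can be made explicit with a generic two-state estimate. In the snippet below, the form P_b = 1/(1 + c ε/π) with a single geometry-dependent constant c is used as an illustrative ansatz rather than the exact expression (9), and all parameter values are assumptions:

```python
# Why binding defects act weakly on single motors: only the mean-field
# parameter pi_eff = pi_ad + rho_def*(pi_def - pi_ad) enters, and P_b
# depends on it only through the small ratio eps/pi. The two-state form
# P_b = 1/(1 + c*eps/pi) with geometry constant c is a generic ansatz,
# not the exact expression; all values are illustrative.

def P_b(pi, eps=0.01, c=10.0):
    return 1.0 / (1.0 + c * eps / pi)

PI_AD, PI_DEF = 1.0, 0.0          # binding fully suppressed at defect sites
for rho_def in (0.0, 0.1, 0.5):
    pi_eff = PI_AD + rho_def * (PI_DEF - PI_AD)
    print(rho_def, round(P_b(pi_eff), 3))
# Halving pi_eff (rho_def = 0.5) only lowers P_b from ~0.91 to ~0.83, so
# v_eff = P_b * v_b drops by ~9% for this illustrative c, showing the same
# weak sensitivity as the ~14% quoted above for the simulated geometry.
```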
Binding defects only have a notable effect on the traffic of many motors when the defect density is sufficiently large.

Binding defects and cooperative transport by several motors

We have noted above that binding defects account for most of the biologically relevant defects. In striking contrast to their importance in biological systems, our analysis shows that binding defects have very small effects both on the movements of individual motors and on the traffic of many motors. It is important to note, however, that our conclusions about binding defects apply only to the traffic of individual motor molecules or to the traffic of cargo particles that are pulled by single motors. Binding defects are expected to have a much stronger effect on cargo particles that are pulled by teams of several motors, the typical situation for in vivo transport [1,23,37].

Thus, let us consider a cargo particle that is pulled by N identical motors such as kinesin. The effective unbinding rate ε_eff of such a cargo particle is proportional to (ε/π_ad)^(N−1) for strongly binding motors with ε/π_ad ≪ 1 [23], where ε and π_ad are the previously defined unbinding and sticking probabilities of a single motor. Thus, for a cargo particle pulled by N strongly binding motors, the effective unbinding rate scales as ε_eff ∼ (1/π_ad)^(N−1) and is, therefore, strongly affected by the value of the sticking probability π_ad of a single motor. This implies that binding defects for single motors will act as unbinding defects for cargo particles pulled by several motors. This effect has indeed been demonstrated experimentally for tau proteins: as expected for binding defects, tau proteins do not affect the run length of individual motors [48,52]. However, tau proteins strongly reduce the run length of cargoes pulled by several motors [52] and, thus, act as effective unbinding defects. A quantitative description of this latter effect can be obtained by an extension of the models studied here and in Refs. [23,37].
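The steepness of this team effect is easy to quantify under the stated proportionality, ignoring all combinatorial prefactors:

```python
# Team-level consequence of a single-motor binding defect: with
# eps_eff ~ eps * (eps/pi_ad)**(N-1) (prefactors ignored) and run length
# ~ 1/eps_eff, lowering pi_ad by a factor f shortens cargo runs by f**(N-1).

def relative_run_length(n_motors, pi_ratio):
    """Run length on the decorated filament relative to the bare one.

    pi_ratio = pi_ad(decorated) / pi_ad(bare); valid for eps/pi_ad << 1.
    """
    return pi_ratio ** (n_motors - 1)

for n in (1, 2, 3, 4):
    print(n, relative_run_length(n, 0.5))
# n = 1: 1.0   -> single motors are unaffected, as observed for tau;
# n = 4: 0.125 -> an eight-fold reduction for a team of four motors.
```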
Summary

In this article, we have studied the traffic of molecular motors in the presence of different types of static defects on the filament. We have determined several properties that characterize the movement of single motors as well as the traffic behavior in many-motor systems, such as motor velocities, motor run lengths, and motor density and current profiles. We have considered three basic types of static defects, namely stepping, unbinding, and binding defects. At the defect sites, the dynamics of the motors differs in only one transition probability from the dynamics at the other filament sites. While stepping defects and unbinding defects have rather strong effects on the motor behavior and severely reduce the velocity, run length, and currents, the effect of binding defects on individual motors is much weaker and becomes notable only if the density of the defects is sufficiently large. The run length is not affected at all by binding defects.

At first sight, these results appear to be at odds with the experimental observation that most biologically relevant defects, such as MAPs, represent binding defects, as summarized in Table 1. It is, however, plausible that MAPs mainly regulate the movement of larger cargoes, which are pulled by several motors. For such cargo particles, the effective unbinding rate ε_eff depends rather strongly on the binding probability π_ad of individual motors, as discussed in subsection 5.3. Thus, in order to describe the regulation of cargo particles by, e.g., MAPs, one should extend the models discussed here to cooperative transport by teams of motors.

In general, localized inhomogeneities on filaments, which may be modifications of the filament itself or other molecules bound to the filament, can modulate the patterns of molecular motor transport in various ways. While most of the defect types we considered impede motor movement, we note that unbinding defects can increase the motor current if the local motor density is high. In addition, it is easy to imagine binding defects that enhance motor binding and function as loading stations that initiate filament transport, although we are not aware of any biological system with this function. In addition to their functions in intracellular transport, regulatory mechanisms based on filament defects or inhomogeneities may also be of interest for the development of artificial biomimetic transport systems based on molecular motors [13,14].