Effectiveness and cost-effectiveness of a group-based pain self-management intervention for patients undergoing total hip replacement: feasibility study for a randomized controlled trial
Background Total hip replacement (THR) is a common elective surgical procedure and can be effective for reducing chronic pain. However, waiting times can be considerable. A pain self-management intervention may provide patients with skills to more effectively manage their pain and its impact during their wait for surgery. This study aimed to evaluate the feasibility of conducting a randomized controlled trial to assess the effectiveness and cost-effectiveness of a group-based pain self-management course for patients undergoing THR. Methods Patients listed for a THR at one orthopedic center were posted a study invitation pack. Participants were randomized to attend a pain self-management course plus standard care or standard care only. The lay-led course was delivered by Arthritis Care and consisted of two half-day sessions prior to surgery and one full-day session after surgery. Participants provided outcome and resource-use data using a diary and postal questionnaires prior to surgery and one month, three months and six months after surgery. Brief telephone interviews were conducted with non-participants to explore barriers to participation. Results Invitations were sent to 385 eligible patients and 88 patients (23%) consented to participate. Interviews with 57 non-participants revealed the most common reasons for non-participation were views about the course and transport difficulties. Of the 43 patients randomized to the intervention group, 28 attended the pre-operative pain self-management sessions and 11 attended the post-operative sessions. Participant satisfaction with the course was high, and feedback highlighted that patients enjoyed the group format. Retention of participants was acceptable (83% of recruited patients completed follow-up) and questionnaire return rates were high (72% to 93%), with the exception of the pre-operative resource-use diary (35% return rate). Resource-use completion rates allowed for an economic evaluation from the health and social care payer perspective. Conclusions This study highlights the importance of feasibility work prior to a randomized controlled trial to assess recruitment methods and rates, barriers to participation, logistics of scheduling group-based interventions, acceptability of the intervention and piloting resource use questionnaires to improve data available for economic evaluations. This information is of value to researchers and funders in the design and commissioning of future research. Trial registration Current Controlled Trials ISRCTN52305381.
Background
Primary total hip replacement (THR) is one of the most commonly performed elective surgical procedures in the UK, with 76,448 operations recorded in the National Joint Registry for England and Wales in 2012 [1]. The operation is often successful at relieving pain, which is most commonly caused by osteoarthritis; however, approximately 10% of patients experience chronic pain in their replaced hip [2]. Patients often wait months or even years for THR surgery despite targets aimed at reducing National Health Service (NHS) waiting times [3]. In this lead-up to surgery, patients report high levels of intrusive pain impacting on their lives, a lack of information about managing pain, and uncertainty about where to seek advice or support [4,5].
Interventions to support patients with self-management of arthritis can improve pain, self-efficacy, symptom management and psychological well-being [6][7][8][9][10]. Trials of these interventions with patients waiting for joint replacement report beneficial effects on pain and skills acquisition [11,12], but the effectiveness and cost-effectiveness of a pain self-management intervention have not yet been evaluated [13]. Prior to conducting such an evaluation it is important to conduct feasibility work, because previous studies of self-management programs for patients with arthritis have faced challenges through low recruitment rates, poor uptake of the intervention and high attrition rates [8,[14][15][16][17].
Feasibility and pilot work to explore trial processes can include testing trial procedures and data collection methods, randomization processes, recruitment rates, and attrition rates [18,19]. This preliminary work can often highlight unanticipated issues with trial design and conduct [20,21], which can then be addressed to maximize the success of intervention evaluation in a full-scale randomized controlled trial (RCT). This can increase the efficiency of research funding by evaluating the likely success of processes before undertaking a definitive trial. The importance of feasibility work to evaluate trial processes has been highlighted in a systematic review of cluster RCTs in primary care, which concluded that a number of reported issues with recruitment, adherence to trial protocol and data collection methods could have been pre-emptively identified and addressed through feasibility work [22]. In addition to testing trial processes, another objective of preliminary work prior to a full-scale RCT can be to test the acceptability of an intervention, particularly if the intervention is complex in nature [23]. Preliminary work to develop, refine and pilot complex interventions is recommended by the Medical Research Council [24]. Early evaluation of the acceptability of a complex intervention can highlight aspects of the intervention that can then be modified prior to a definitive trial [25][26][27].
The aims of this study were two-fold: first to evaluate the feasibility of conducting an RCT to assess the effectiveness and cost-effectiveness of a group-based pain self-management course for patients undergoing THR, and second to assess the acceptability of the intervention. Specific objectives were to assess the feasibility of trial design and procedures, ascertain recruitment and retention rates, identify barriers to participation, develop resource-use data collection methods, assess questionnaire completion rates, and evaluate uptake and patient satisfaction with the course.
Design and ethics
The study was a single-center feasibility study of an RCT. The study was approved by the South West Central Bristol Research Ethics Committee (reference 11/SW/0056) and all participants provided their informed, written consent to participate. The trial was registered on the National Institute for Health Research Clinical Research Network Portfolio (UKCRN ID 11270) and the ISRCTN register (ISRCTN52305381) on 28 June 2013. A CONSORT checklist for the reporting of this study can be found in Additional file 1.
Participant recruitment
Between June 2011 and June 2012, patients listed for THR surgery at one elective orthopedic center were posted a study invitation pack. The patient information booklet was designed in collaboration with a patient and public involvement group [28]. Patients interested in participating returned a signed consent form and reply slip to the research team. The inclusion criterion was being listed for a primary THR because of osteoarthritis. Exclusion criteria comprised lack of capacity or unwillingness to provide informed consent, or inability to complete English language questionnaires. To explore whether patients enrolled in the study were representative of those undergoing THR, the age and gender of all eligible patients were recorded.
Telephone interviews with non-participants
Brief telephone interviews were conducted with patients who declined to participate in the study but gave permission to be contacted by a researcher to discuss non-participation. Reasons for non-participation were recorded by the researcher in notes on a standardized form.
Randomization
Participants were allocated to the intervention or standard care group in a 1:1 ratio using a computer-generated randomization system (Minim) [29]. Allocation was minimized by age and gender to ensure equal distribution between groups. Participants were allocated to their treatment group after recruitment. Blinding of researchers and patients was not possible because the intervention involved attending a course. Participants were informed of the results of randomization via letter, and those randomized to the intervention group were telephoned to discuss course arrangements.
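To make the allocation procedure concrete, here is a minimal, hypothetical sketch of minimization on age group and gender in the spirit of the Pocock-Simon method; the Minim program's exact algorithm and settings may differ, and all names here are illustrative.

```python
import random

# Sketch of minimization on two factors (age group, gender) for two arms.
ARMS = ("intervention", "standard_care")
FACTORS = ("age_group", "gender")
counts = {arm: {f: {} for f in FACTORS} for arm in ARMS}  # allocations so far

def imbalance(arm, participant):
    """Prior participants in `arm` sharing this participant's factor levels;
    a lower score means assigning here preserves balance better."""
    return sum(counts[arm][f].get(participant[f], 0) for f in FACTORS)

def allocate(participant, p_best=0.8):
    scores = {arm: imbalance(arm, participant) for arm in ARMS}
    lowest = min(scores.values())
    best = [a for a in ARMS if scores[a] == lowest]
    if len(best) == len(ARMS):            # tie: allocate purely at random
        arm = random.choice(ARMS)
    elif random.random() < p_best:        # usually favor the balancing arm
        arm = best[0]
    else:
        arm = random.choice([a for a in ARMS if a not in best])
    for f in FACTORS:
        counts[arm][f][participant[f]] = counts[arm][f].get(participant[f], 0) + 1
    return arm

print(allocate({"age_group": "60-69", "gender": "F"}))
```

Keeping a random element (here, choosing the balance-preserving arm with probability 0.8 rather than deterministically) preserves the unpredictability of allocation.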
Assessment
Participants completed postal questionnaires at baseline (after recruitment), before surgery, and one month, three months and six months after surgery. If no reply was received after two weeks, a single reminder was sent. The questionnaires included the Western Ontario and McMaster Universities Osteoarthritis Index [30], Pain Self-Efficacy questionnaire [31], Brief COPE [32], Beliefs about Medicines Questionnaire [33], EQ-5D [34] and Functional Co-morbidity Index [35]. Patients also completed questions about socioeconomic status, pain in other joints, fatigue, pain distress, activity levels and pain medication usage. As this was a feasibility study, the results of these questionnaires are not the focus of this article.
Resource use
Economic evaluations alongside RCTs are increasingly pertinent to evaluate the cost-effectiveness of a novel intervention in a context of scarce NHS resources [36]. The three-month and six-month post-operative questionnaires included a full resource-use questionnaire to identify and measure NHS resources used, including community-based doctor and nurse visits, physiotherapy and occupational therapy visits, secondary care inpatient and outpatient visits and medication, use of social services, patient expenses, informal care, and productivity losses incurred in the period. Participants were given a pre-operative resource-use diary to record any resources used from randomization until surgery. They were asked to return the completed pre-operative diary with their one-month post-operative questionnaire. One month and three months post-operation, patients were given a resource-use log to prospectively record their use of resources in the following period in order to aid them in the completion of the resource-use questions in the three-month and six-month questionnaires [37]. The aim of these questionnaires was not to formally evaluate the differences in costs and consequences of delivering the intervention, but to refine resource-use data collection methods. Therefore, analyses focused on rates of missing data, which is a common issue with resource-use questionnaires [37].
Intervention
The Challenging Pain and Keep Challenging Pain courses were delivered by two lay trainers from Arthritis Care, a registered UK charity that has been delivering self-management courses since 1994 [38]. The courses were held at the hospital from which participants were recruited. Reimbursement of travel costs (mileage and parking fees) or a pre-paid taxi was offered to all participants who attended the courses.
Challenging Pain course
The pre-operative Challenging Pain course consisted of two sessions running over consecutive weeks, with each session lasting two and a half hours [39]. The course emphasized pain management and introduced a variety of cognitive pain management techniques, with the aim of providing coping skills to enable patients to manage their pain and its impact more effectively. Delivery involved a combination of presentations, group work, pair work, demonstrations and practical sessions. The first session included introductions to conscious breathing, full body relaxation, exercise, goal setting and managing stress. The second session reviewed these topics and introduced pacing, medications and other therapies, guided imagery, managing negative thoughts, and effective communication.
Post-operative Keep Challenging Pain course
The five-hour Keep Challenging Pain course was designed by Arthritis Care, in conjunction with a physiotherapist, to be delivered specifically to post-operative THR patients. The course reviewed pain management strategies introduced in the Challenging Pain course, provided advice on recovery after THR, reviewed goal setting and problem solving, and included a practical exercise session led by a registered physiotherapist.
Course evaluation
A short structured feedback questionnaire about the course was completed by participants at the end of both the Challenging Pain and Keep Challenging Pain courses.
Sample size
No formal sample size calculation can be performed for a feasibility study. The average sample size for feasibility studies assessing trial design and the acceptability of interventions is around 60 patients [40]. A minimum of 80 patients (40 per arm) was deemed an appropriate sample size for this trial to allow an estimate of recruitment and retention rates and explore the acceptability of the intervention.
Analysis
In line with recommendations about good practice in the analysis of feasibility studies [18], analysis was descriptive and no comparisons of the outcomes between the two arms of the trial were conducted. Descriptive statistics on recruitment rates, baseline patient characteristics, retention of participants and questionnaire return rates are presented as means and standard deviations (SD) or 95% confidence intervals (CI), medians and interquartile ranges (IQR), or percentages. Resource-use data were considered complete when the patient recorded enough data to allow for costing using a national tariff. Completion rates were reported per question and aggregated for two economic perspectives: the NHS and Personal Social Services (PSS) perspective, and a broader societal perspective. Data on reasons for non-participation were collated and coded into themes by one researcher (VW) and these themes were then discussed and agreed with a second researcher (RGH) [41].
Recruitment rate and participants
Postal invitations were sent to 385 eligible patients and 88 consented to participate, giving a recruitment rate of 23% (Figure 1). A total of 297 patients did not return a reply slip and consent form to the research team. Participants' baseline characteristics are displayed in Table 1. Participants underwent THR surgery at a median of 12 weeks (IQR 8 to 15) after recruitment into the study. Non-participants had a similar median age (67 years, SD 13) to participants but were more likely to be male (46% male).
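For planning a definitive trial, it can help to attach an uncertainty interval to the observed recruitment rate; the study reports the rate without one, so the following is purely illustrative. A Wilson 95% confidence interval can be computed with the Python standard library:

```python
from math import sqrt

def wilson_ci(successes, n, z=1.96):
    """Wilson score interval for a binomial proportion (z=1.96 gives 95%)."""
    p = successes / n
    denom = 1 + z**2 / n
    centre = (p + z**2 / (2 * n)) / denom
    half = z * sqrt(p * (1 - p) / n + z**2 / (4 * n**2)) / denom
    return centre - half, centre + half

consented, invited = 88, 385
lo, hi = wilson_ci(consented, invited)
print(f"Recruitment rate: {consented/invited:.1%} (95% CI {lo:.1%} to {hi:.1%})")
# Recruitment rate: 22.9% (95% CI 18.9% to 27.3%)
```

An interval of roughly 19% to 27% would bound the recruitment assumptions used when sizing a full-scale trial.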
Reasons for non-participation
Brief telephone interviews were conducted with 57 non-participants (19%). These patients had a mean age of 71 years (SD 10) and 37 (65%) were female. Patients gave 91 reasons for non-participation, most frequently relating to perceptions and views about the pain self-management course (Table 2). These reasons included previously attending pain self-management courses and finding them unhelpful; a perceived lack of need because pain was adequately managed; a dislike of group formats; and concerns over difficulty in attending the course because of pain, age and/or other health conditions. The second most frequently given reason for non-participation concerned issues around traveling to the hospital to attend the course.
Retention of participants
Fifteen patients (17%) were withdrawn from the study: seven from the intervention group and eight from the standard care group (Figure 1). In the intervention group, three patients self-withdrew, three patients did not undergo surgery during the study period, and one patient was withdrawn because they were recruited into another trial whose protocol precluded participation in two trials. In the standard care group, two patients self-withdrew and six patients did not undergo surgery during the study period.
Outcomes assessment and economic evaluation
The questionnaire return rates at each assessment time were high, ranging from 72% to 93% (Table 3). The rate of questionnaire return was similar between trial arms, with less than 10% difference in return rates at each time point, except the three-month post-operative questionnaire, which was returned by more patients in the standard care arm than the intervention arm (91% versus 72%, respectively). Return rates for the pre-operative resource-use diaries were low, with only 35% of patients returning their diary. Table 4 presents the completion rates of resource-use data in the three-month and six-month post-operative questionnaires. For those who returned a questionnaire, completion rates for NHS resource-use questions were high for secondary care resource use (over 90%) and medication use (over 80%), and lower for community-based resources (65% for the intervention arm and 66% for the standard care arm). PSS data also had high completion rates (over 86%), particularly in the intervention group. When accounting for non-returners, completion rates were lower, with community-based resources being the least-completed category. Overall, data for an economic evaluation from an NHS and PSS perspective were available for 33% of patients in the intervention group and 43% of patients in the standard care group. When considering other categories of resource use beyond health and social care, travel costs were the least-completed category. As a result, for an economic evaluation from a societal perspective, complete data were only available for 17% of patients in the intervention group and 19% of patients in the standard care group.
Acceptability of the intervention
Pre-operative Challenging Pain course
Four pre-operative Challenging Pain courses were held, with four to nine participants attending each course. Of the 43 participants randomized to the intervention group, 28 attended the pre-operative course (17 attended both sessions, 11 attended one session) at a median of five weeks prior to surgery (IQR 2 to 8). Reasons for non-attendance are presented in Figure 1. Results from the course evaluation questionnaire are presented in Table 5. Free text comments on the evaluation questionnaires frequently gave positive feedback on the group format of the course as this provided the opportunity to meet other people undergoing THR.
Post-operative Keep Challenging Pain course
Three post-operative Keep Challenging Pain courses were held but were poorly attended, with two to five participants on each course. The courses were attended by 11 patients at a median of nine weeks post-operatively (IQR 5 to 14). Reasons for non-attendance are presented in Figure 1. Results of the course evaluation questionnaire are presented in Table 5. Free text comments on the evaluation questionnaires most frequently gave positive feedback on the physiotherapy session and the group format of the course.

[Table fragment (baseline scores by group): Mean WOMAC Pain score (SD): 38 (18), 37 (17), 38 (20); Mean WOMAC Function score (SD): 37 (18), 39 (18), 35 (18); Mean Pain Self-Efficacy score (SD): 32 (14), 35 (13), 30 (14). WOMAC Pain and Function scores range from 0 to 100 (worst to best); Pain Self-Efficacy questionnaire scores range from 0 to 60 (low to high self-efficacy). SD, standard deviation; WOMAC, Western Ontario and McMaster Universities Osteoarthritis Index.]
Discussion
This study looked at the feasibility of an RCT to evaluate the effectiveness and cost-effectiveness of a group-based pain self-management intervention for patients undergoing THR and the acceptability of this intervention. Although feasibility studies are conducted to address trial design and methodology, a systematic review found that articles often include only a minimal discussion of the methodological findings and implications [42]. This feasibility study highlighted several methodological considerations that warrant further discussion.
Barriers to participation
It is important to explore barriers to participation during feasibility work because unforeseen challenges with recruitment can and do lead to the early termination of definitive trials [14]. Despite this, a recent systematic review found that only 8% of published pilot and feasibility studies provided detailed coverage of findings related to recruitment [42]. Within our feasibility study, we used brief interviews with non-participants to identify barriers to recruitment. Brief interviews were chosen over a structured questionnaire or open-text boxes to gain insight into and explore the reasons behind non-participation. Although the data collected via these brief telephone interviews were not as rich as with in-depth interviews, the use of brief, structured interviews allowed a greater number of non-respondents to be contacted and the data to be analyzed within the time and financial constraints of the feasibility study. These interviews identified that the most frequent reasons for non-participation were views and perceptions of the pain management course. These findings are in line with previous research, which identified that perceptions of the course and satisfaction with current self-management were reasons for non-participation in a trial of an arthritis self-management program [15]. Difficulty in getting to the hospital was the second most frequent reason for non-participation, despite the offer of reimbursement of travel costs or a pre-paid taxi. Travel issues and the burden of additional appointments are commonly reported barriers to trial participation [15,43]. Future trials of group-based interventions may benefit from consideration of the location of the intervention. For example, interventions held in the community may have greater uptake than those delivered in a hospital, although trials of community-based group interventions have also found that difficulty with travel is a common reason for non-participation [15]. Conducting these short interviews with non-participants identified a number of barriers to participation that could be addressed in further refinement work, highlighting the importance and value of conducting research with non-participants in feasibility studies. Based on our findings, we would advocate that brief interviews with non-participants should form a core component of pilot and feasibility studies.
Recruitment, retention and outcomes assessment
The recruitment rate for this trial was 23%, which is lower than the 42% to 79% recruitment rates reported in previous trials of pain self-management interventions for patients undergoing joint replacement [11,12]. However, other feasibility and pilot studies using a postal recruitment method have reported similarly low response rates [25,44,45]. Despite the low recruitment rate, retention of participants and questionnaire completion were high and similar between the trial arms, suggesting that randomization and outcomes assessment were acceptable. Recruitment into trials is known to be challenging and considerable research has been conducted into improving trial recruitment. Methods such as telephone reminders to non-responders, 'opt-out' recruitment strategies and financial incentives have been found to improve recruitment rates [46]. However, potential issues around coercion and undue influence can pose challenges to the implementation of these strategies. Financial incentives for research participation are a debated issue, and ambiguities remain around what level of incentive constitutes undue influence, with little standardized guidance for ethics committees [47]. For example, based on feedback from our patient and public involvement group, we planned to offer participants free one-year membership to Arthritis Care, but the ethics committee perceived this as potentially coercive and asked for this offer to be removed from the study protocol. This demonstrates the challenges researchers can face in implementing measures to maximize recruitment into trials while remaining in keeping with the preferences of the NHS research ethics committee.

[Table 5 fragment (course evaluation): Mean satisfaction with delivery (95% CI): 8.4 (7.7 to 9.0), 9.0 (8.2 to 9.8). Usefulness and satisfaction questions rated on a 0 to 10 scale (worst to best). CI, confidence interval. NB: One patient attended the Challenging Pain course but did not complete an evaluation questionnaire.]
Economic evaluation
Economic evaluations within clinical trials are prone to missing data and therefore we explored whether it was feasible to collect resource-use data using self-complete questionnaires [48]. The economic evaluation work highlighted the difficulty of collecting resource-use data from randomization until surgery for this patient group. However, the average waiting time for surgery in this patient group was three months, and we would not expect the intervention to lead to behavior change that would produce differences in cost drivers in the shorter term. In comparison to the pre-operative diaries, the post-operative resource-use questionnaires achieved good completion rates, allowing a health and social care payer evaluation perspective to be taken. The completion rates could be further improved after imputation of community-based resource data. Although completion rates for a societal perspective were low, categories on productivity losses and informal carer time were well completed and can be of added value to a sensitivity analysis in a definitive economic evaluation.
Acceptability of the intervention
In addition to assessing trial processes, this study evaluated the acceptability of the intervention. Feedback on the course was positive, suggesting that the course was acceptable and well-received by those who attended. In particular, positive feedback was received on the group-based format, with patients commenting that they appreciated the opportunity to meet other people undergoing THR surgery. Studies evaluating group-based interventions in other clinical settings have also reported positive feedback on this format of intervention delivery [27,44,49]. Therefore, although the group format was a reason for non-participation for some patients, those who attended the course enjoyed the format and engagement with other patients. This highlights an issue affecting many trials: a potentially biased sample because of the self-selection of participants with a preference for the intervention. Differences in the characteristics of participants and non-participants are well known, with an under-representation of older people, women and ethnic minorities in clinical research [50]. Addressing willingness to participate due to the nature of the intervention in feasibility work has the potential to lead to refinements in the intervention for a definitive trial, and this knowledge has implications for the roll-out and uptake of interventions if subsequently implemented in clinical practice.
The Challenging Pain and Keep Challenging Pain courses were highly rated by participants; however, attendance at the post-operative course was lower than at the pre-operative course. The reasons given for non-attendance were predominantly that people were unavailable on the dates set for the course. The logistics of scheduling group-based interventions are challenging, as many patients have limited availability due to other commitments [15,27]. Increasing flexibility in the scheduling of group-based interventions can be challenging, particularly within the financial constraints of a trial, but having the flexibility to run multiple courses is an important factor to consider when costing a trial. Our short interviews with non-participants also highlighted the importance of offering courses outside of working hours to avoid disadvantaging patients in employment.
Conclusions
Undertaking feasibility work for an RCT and evaluating the acceptability of an intervention can be a labor-intensive exercise. However, this study highlights the importance of conducting such work prior to undertaking a full-scale RCT to assess the effectiveness and cost-effectiveness of an intervention. In particular, interviews with non-participants provided valuable information about barriers to participation. The low recruitment rate and poor attendance at the intervention suggest that rolling out the feasibility study to a definitive trial in its current design at our center would not be feasible. Further research would be necessary to evaluate strategies to improve recruitment rates and increase flexibility in the scheduling of the group-based intervention. However, questionnaire completion rates, retention of participants and satisfaction ratings with the intervention were all high, suggesting that further methodological work could lead to a feasible trial design.
Although this study was limited to a single orthopedic center, several key messages can be taken from our experience. First, conducting brief telephone interviews with non-participants is an efficient method of collecting data on barriers to participation, and we recommend that it should be a core component of feasibility studies. These data can also provide insight into whether unwillingness to participate is due to the nature of the intervention, thereby providing early indications of potential issues in a definitive trial and with uptake of the intervention if implemented into clinical practice. Second, attempts to implement methods to improve patient recruitment need to be carefully designed in light of ethical considerations, such as the potential for inducements to be seen as coercion. Third, the logistical difficulties in scheduling groups and ensuring high attendance should not be underestimated, and the potential to increase flexibility by running multiple courses should be considered when designing a budget for a trial. Fourth, piloting resource-use questionnaires is a major advantage for improving the quality of resource-use data available in the definitive economic evaluation. Finally, given the need to ensure that research is efficient and provides value for money, our study highlights that feasibility studies are able to identify areas that should be considered in the design or commissioning of research addressing similar interventions or populations.
"year": 2014,
"sha1": "de4b8a2d994dbbfdd2d972742e9a879b860a3c9d",
"oa_license": "CCBY",
"oa_url": "https://trialsjournal.biomedcentral.com/track/pdf/10.1186/1745-6215-15-176",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "624dc078cf43e149a250c83e33895df11bb67687",
"s2fieldsofstudy": [
"Medicine",
"Psychology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Industrial policy against pandemics
Abstract The coronavirus disease 2019 (COVID-19) pandemic illustrated the inability of the market to meet the needed production scale and speed of essential medical products. The state should adopt a risk-based approach, allowing for experimentation with various technological solutions such as vaccines and tests, while ramping up their production. The intervention should resolve uncertainty, combine resources, coordinate technological choices, lift barriers to entry, ensure knowledge sharing, and support the value chain. The cost of this strategy is dwarfed by the economic fallout of a pandemic. Universal testing, an overlooked solution, is a key component of an infrastructure against future pandemics.
Introduction
The coronavirus disease 2019 (COVID-19) pandemic, which afflicted the world in early 2020, took an enormous toll on human life and economic activity: trillions of dollars of lost income and more than 2 million deceased in about a year (Johns Hopkins University, 2021). The development of successful vaccines to defeat the pandemic, although unprecedented in history in terms of speed, took almost a year, and scaling up vaccine production and distribution has been taking months at a huge cost. Even with the ongoing effort and global collaboration, there is huge uncertainty about the time it would take, whether months or years, to inoculate most of the global population to achieve herd immunity and end the pandemic, and the longer it takes, the greater the risk of the emergence of more dangerous virus mutations.
The colossal cost of the pandemic calls for an urgent appraisal of the appropriate industrial policy response to gear up the world's preparedness against future pandemics. Whether a safe and effective vaccine or a successful cure can be developed and produced at scale fast enough when the next pandemic strikes is highly uncertain. Even a few months of raging pandemic, lockdowns, and disruptions to economic and social life are very costly in terms of income and lives lost.
We argue that a risk-based approach to industrial policy in the face of pandemics consists in funding and facilitating the experimentation with a wide range of plausible technological solutions while simultaneously intervening to scale up their production. The portfolio of possible solutions would naturally include multiple vaccine candidates and involve potentially novel technologies. For instance, in the early phase of the COVID-19 pandemic, there were several vaccine candidate technologies, including the novel mRNA vaccines, many of which turned out to be successful. Strong government intervention such as Operation Warp Speed in the USA, granting billions of dollars to fund several firms and "untested" technologies in vaccine production, would be justified ex ante even knowing that most would likely fail ex post. The same logic calls for the inclusion of other candidate technological solutions in the industrial policy response such as possible cures and, as we argue below, rapid tests. Meanwhile, as shown by the slow ramp up of production of vaccines worldwide in 2020-2021, there is a need to simultaneously tackle the bottlenecks and market failures facing the manufacturing of these solutions at large scale.
In the absence of a vaccine or a cure or while their production is being ramped up, we argue that the most viable way to squash a pandemic rapidly and reopen economies safely is universal testing and isolation policy, which requires an industrial policy intervention to ramp up the production of test kits quickly. Developing and/or producing vaccines or cures could take a long time while designing and producing rapid test kits could be done relatively fast, including in many developing countries. This approach will buy the necessary time needed to develop vaccines and cures and even fight off potential virus mutations. With a testing infrastructure in place, the pandemic can be squashed relatively quickly, in a matter of months rather than years. The major components of this testing infrastructure are the development of rapid test kits and a scale up of manufacturing production. The existence of both research and manufacturing facilities for testing during normal times would help minimize the impact when a pandemic strikes.
We focus on testing to illustrate several key aspects of the risk-based approach to industrial policy. First, there is evidence that it is a plausible, economical, and relatively fast solution to safely reopen economies, although potentially temporary while vaccines are being developed. As an untested solution, it was largely overlooked throughout 2020 during the COVID-19 pandemic. Second, the development and deployment of rapid test kits would have been much faster had they been explicitly included early in the portfolio of solutions targeted by industrial policy to tackle regulatory hurdles and facilitate experimentation to show a proof of concept. Third, the needed massive scaling up of production of test kits faces numerous market failures requiring appropriate state intervention, drawing lessons for industrial policy modalities for vaccines, cures, and other medical goods such as personal protective equipment.
The feasibility of a rapid scale up in the production of test kits or other essential medical goods is akin to war mobilization efforts during WWII, when the USA and the Soviet Union drastically increased their production of military equipment and machinery on an unprecedented scale and scope and in a record amount of time, in some instances building ammunition factories from scratch in three months. The potential existential threat of war motivated policymakers to spring into action. Although not trivial, the current task is minuscule in comparison, while the danger of an endemic or a "whack-a-mole" pandemic is real, which would entail huge costs.
The state intervention can be done using the principles of a True Industrial Policy (TIP) (Cherif and Hasanov, 2019). TIP's key principles, such as creating capabilities in sophisticated products with (domestic and international) competition and accountability for the support received, are key ingredients to resolve existing hurdles. To ramp up production fast, the state can facilitate the organization and coordination of resources. Production along the whole value chain needs to be supported, while intellectual property (IP) and knowledge-sharing should be tackled to prevent bottlenecks and create synergies. Moreover, the state needs to coordinate technological choices, assume risks and provide enough financing, and support the redesign of manufacturing facilities or their construction from scratch. Pooling both public and private resources would achieve the needed economies of scale, creating a market for test kits and bringing the costs down substantially.
Universal testing: an overlooked solution
A large and growing number of studies show that frequent testing and isolation would be an effective strategy to halt a pandemic. The susceptible, infected, and recovered model, a workhorse model in epidemiology, predicts that continuous testing and isolation of the infected, at a rate of about 5%-15% of the population per day, would lead to a rapid reopening of the economy even if test kits are relatively imprecise and isolation imperfect (Cherif and Hasanov, 2020; Larremore et al., 2020; Romer, 2020a,b; Siddarth and Weyl, 2020). (Random testing would help minimize the number of test kits needed; otherwise, the daily testing rate required could be as high as 20%-30% of the population to halt the pandemic if transmission rates are relatively high.) Moreover, large-scale testing was shown to drastically decrease the number of COVID-19 cases when it was implemented, including in China, Slovakia, and the UK, while many smaller scale experiments throughout the world helped schools, universities, and senior housing facilities to reopen while preventing outbreaks.
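To make the mechanism concrete, the sketch below adds a daily testing-and-isolation flow to a discrete-time SIR model. All parameter values (an R0 of 2, a 10-day infectious period, 90% test sensitivity) are illustrative assumptions, not numbers taken from the cited studies; with these values the critical daily testing rate is about 11%, and higher transmission pushes it toward the 20%-30% caveat above.

```python
# Discrete-time SIR with daily random testing and isolation (illustrative).
def simulate(test_rate, days=365, n=1_000_000, r0=2.0, recovery=0.1, sens=0.9):
    beta = r0 * recovery                     # daily transmission rate
    s, i, q, r = n - 100.0, 100.0, 0.0, 0.0  # susceptible, infectious, isolated, recovered
    peak = 0.0
    for _ in range(days):
        new_inf = beta * s * i / n           # isolated cases do not transmit
        detected = test_rate * sens * i      # infectious found and isolated today
        rec_i, rec_q = recovery * i, recovery * q
        s -= new_inf
        i += new_inf - rec_i - detected
        q += detected - rec_q
        r += rec_i + rec_q
        peak = max(peak, i + q)
    return peak / n, r / n

# Suppression requires R_eff = beta / (recovery + sens * test_rate) < 1,
# i.e., test_rate > (beta - recovery) / sens.
print(f"critical daily testing rate at R0=2: {(0.2 - 0.1) / 0.9:.0%}")
for rate in (0.0, 0.05, 0.10, 0.15):
    peak, ever = simulate(rate)
    print(f"testing {rate:.0%}/day: peak active {peak:.1%}, ever infected {ever:.1%}")
```

Raising r0 in the call shows how quickly the required testing rate climbs with transmission, which is the point of the parenthetical above on random versus blanket testing.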
However, as an untested solution, universal testing was largely ignored in 2020 for three main reasons, all related to the lack of an industrial policy to face pandemics. First, contrary to vaccines, which have been used since at least the 19th century to fight infections (and lockdowns, used since the Middle Ages), universal testing was never tried at scale to halt a pandemic, generating skepticism and debate, including among experts. Second, it requires a potential change in the regulatory framework for epidemiological purposes since test kits are usually designed for clinical diagnosis rather than for epidemic surveillance. Third, to be feasible, universal testing depends on a massive and quick ramp up of production of test kits, which faces numerous market failures that can only be resolved through an appropriate state intervention.
For universal testing to work, an "epidemiological" rather than a "clinical" approach to testing is needed, sacrificing the precision of test kits for scalability, convenience, and speed; the goal is to identify enough of the potentially infected rather than to diagnose them (Cherif and Hasanov, 2020; Larremore et al., 2020; Mina et al., 2020). Standard regulation assesses tests for clinical purposes, that is, tests used to decide on a treatment course by a doctor. Consequently, tolerance for false negatives and positives is usually small. Universal testing requires frequent and rapid testing to identify the infected and can thus sacrifice some precision of test kits, making them cheaper as well. This approach requires a change in the regulatory framework to distinguish between clinical and epidemiological purposes. This change may be hard to make in the absence of a coherent and high-level industrial policy tackling all the hurdles.
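A rough Bayes calculation shows why clinical precision standards need not govern surveillance; the sensitivity, specificity, and prevalence figures here are assumed purely for illustration. With 85% sensitivity, 97% specificity, and 1% prevalence, the positive predictive value is

$$\mathrm{PPV}=\frac{0.85\times 0.01}{0.85\times 0.01+(1-0.97)\times 0.99}\approx 0.22,$$

which would be unacceptable for clinical diagnosis but workable for surveillance, where a positive result merely triggers isolation or a confirmatory test and frequent repetition compensates for missed cases.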
The perception of "infeasibility" of universal testing also stems from the adoption of a "laissez faire" approach to production. Indeed, the sheer number of test kits needed compared to current production in each country or globally is huge. In addition, the difficulty of approving and scaling up do-it-at-home test kits, or of collecting and processing a large number of samples, seems to imply that universal testing is nearly impossible (e.g., Kofler and Baylis, 2020; Rose, 2020). Even if regulation is changed to achieve a greater scale and convenience while minimizing costs, market failures would preclude the unprecedented increase in the production of test kits needed in a short period of time. The market for test kits during a raging pandemic is laden with market failures stemming from uncertainty, capacity constraints, coordination failures, externalities (e.g., positive externalities related to massive testing akin to network effects, and resilience rather than efficiency in production decisions), and market power. In addition, the market would not internalize the long-run positive spillovers and would underprovide compared to the socially optimal quantity. Similar obstacles were observed in the market for medical equipment, as many countries, including advanced ones, faced huge shortages in the early phase of the pandemic (e.g., Azmeh, 2020; Bradley, 2020). However, the rapid ramp up of their production in several countries also shows that state intervention can radically change the situation.
Ramping up the production of test kits is feasible in most countries, and its cost is dwarfed by the cost of the pandemic. To put it in perspective, the annual cost of production of test kits would amount to less than two months of the projected global economic losses and fiscal stimulus packages induced by the pandemic in 2020 (IMF, 2020). As an illustration, the number of test kits needed is less than half of the equivalent number of soft drink cans consumed globally (about a trillion per year). If enough firms pool their resources, combined with substantial public funding, support, and coordination, many countries could meet the demand for test kits in a matter of a few months, and eventually, global demand could be met as well.
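A back-of-envelope check of that comparison, assuming a world population of roughly 7.8 billion and the upper-end testing rate of 10% a day:

$$0.10\times 7.8\times 10^{9}\ \text{people}\times 365\ \text{days}\approx 2.8\times 10^{11}\ \text{tests per year}<\tfrac{1}{2}\times 10^{12}\ \text{cans per year}.$$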
A risk-based industrial policy to face COVID-19 or future pandemics would start by including all plausible solutions to end the pandemic. These would include untested ones such as universal testing, which was called for by prominent epidemiologists and economists early in the crisis. Given the colossal cost of the pandemic, in addition to supporting vaccine development and production, a sizable amount could have also been invested in experimenting and providing a proof of concept for universal testing, even if the odds of success were low. For a risk-neutral policymaker, undertaking such a policy would be worthwhile even if the odds of success were about 1% (accounting for economic and social costs of the pandemic). In addition, the diversification of plausible solutions would fit the classical approach to reducing risk while expected returns stay relatively high. This would result in a higher Sharpe ratio, or higher expected return per unit of risk, than that of pursuing each individual solution (Lo, 2021). However, pursuing all solutions requires much more financing and resources, further justifying a state intervention.
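The 1% figure follows from a simple expected-value argument. Taking illustrative orders of magnitude, say a program cost of C of about $0.1 trillion against an avoided pandemic loss L of about $10 trillion, a risk-neutral policymaker should fund any solution whose success probability p satisfies

$$p\,L\ \ge\ C\quad\Longrightarrow\quad p\ \ge\ \frac{C}{L}=\frac{0.1}{10}=1\%.$$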
Market failures in testing times
The provision of test kits and other critical medical products should be easily met in the context of a standard supply-demand model with perfect competition, perfect information, immediate and costless adjustment, and no capacity constraints. In this theoretical case, the market would provide all the needed products at an equilibrium price reflecting both technological constraints and preferences. In normal times, the market for critical medical products such as test kits would be broadly in equilibrium in most countries. However, amid the COVID-19 pandemic, severe shortages of medical goods have appeared, and the lack of test kits has become the critical bottleneck toward a decisive defeat of the virus. Even with the development of successful vaccines, the production has not been fast enough to ensure quick inoculation of the majority of the global population. We argue that during a pandemic the market could cease to provide the quantity of test kits and other critical medical goods that society would require, leading to severe rationing and welfare losses. We outline the policies and institutional apparatus needed to tackle these market failures using the example of test kits.
From the invisible hand of the market to the leading hand of the state
The inefficiency of the market during a pandemic
The lack of competition is an exacerbating feature but not the binding constraint in the context of a pandemic. Monopolists or oligopolists would choose a smaller supply and higher price compared to a market supplied by price takers with the equilibrium quantity most likely lower than the quantity needed. Yet, the main challenge is the capacity constraints faced rather than the market structure. If the capacity to produce all the needed test kits existed, issuing regulation (e.g., Defense Production Act in the USA) for monopolies to increase production to the needed level or the ramping up of production by competitive firms, irrespective of the initial market structure, would potentially solve the shortage of medical goods. Even if some consumers (e.g., hospitals) are being rationed because of higher prices, various support schemes could be designed to meet the needed production levels at existing prices (e.g., government subsidies to firms or consumers).
However, leaving existing firms in the driver's seat of the market certainly raises the question of the feasibility of a rapid ramp up of production. The procurement of a huge quantity of medical goods at a price set by the state, as we argue below, would not necessarily maximize their profits. More important, when asked if it were "feasible" to attain a certain production target within a short period of time, firms would answer by reference to their standard market objective of maximizing returns (most likely answering no). Even with state subsidies, it might take longer than desired to scale up the production. In a world of maximizing efficiency to minimize costs ("just in time" delivery) rather than factoring in redundancy and resilience, value chains are even harder to scale up relatively quickly. Resilience may not be much factored into the production decision of a firm despite being valued at an industry level in the case of unexpected large shocks. The market logic of tackling uncertainty, maximizing earnings, minimizing costs, and taking constraints on inputs and logistics as given is certainly a guide in "peace" times. The choice of technology could also be influenced by a profit-maximizing motive and run contrary to the need for a simple design and ease of production and operation. For instance, this was a major concern in the ability to stockpile and ramp up the production of ventilators in the early days of the COVID-19 pandemic (Azmeh, 2020). In the market for test kits, the test technology to be used, the constraints in the value chain, and the lack of manufacturing facilities indicate the challenges of a complete reliance on the market.
Facing such an urgent crisis by completely relying on the workings of the market is imprudent, as the market is riddled with failures and cannot resolve many of the hurdles faced. Even if the market attempted to ramp up production relatively fast, capacity constraints would most likely be hit due to the huge demand shock. Only with the large resources and coordinating ability of the state could both public and private resources be combined for a common goal. There is a need for coordinating among different actors of the production value chain, overcoming administrative and regulatory hurdles, considering the social benefits rather than the narrow profits of firms, enforcing accountability for the support received, deciding on the best production technology, and lifting any other constraints. This critical moment requires the leading hand of the state (Cherif et al., 2016).
The fog of uncertainty
In the face of uncertainty, firms may not invest enough in the capacity needed to meet all the demand. One of the reasons for the shortage of masks, ventilators, and test kits in many countries is the fact that firms could not have predicted the scale of the increase in demand. Even if they could have somehow anticipated it, they might not have invested enough ex ante to meet the demand of a tail event. Many firms are scaling up their production in response to the spike in demand. However, investment is costly and the prospects of demand over the months ahead remain largely uncertain, especially in terms of test kits. There are many factors that are difficult to predict such as how long it would take for the virus to disappear, how many people would be infected, when vaccines would be available, and which technology would be picked for mass testing.
Given the asymmetry in the cost-benefit tradeoff, firms would always prefer to err on the conservative side, preferring to take the risk of rationing the market rather than flooding it with extra supply. Firms may still remember the 2009 H1N1 flu pandemic when major pharmaceutical firms ended up with excess capacity to produce vaccines as the virus faded, resulting in large losses (The Economist, 2020). A heightened uncertainty increases the likelihood of underprovision by the market.
For the nascent test kit market, there is an additional layer of uncertainty related to the technological choice. Not only are firms unable to predict the extent of the market, they also run the risk of betting on the wrong type of test, especially in the context of mass testing, where the state might choose a limited set of technological solutions to be scaled up. Moreover, as we argued earlier, the lack of distinction between tests for clinical and epidemiological objectives may discourage investment in the most scalable technologies, hampering the effort to halt the epidemic. For instance, rapid do-it-yourself (DIY) test kits could be less precise than rapid point-of-care (POC) test kits but are cheaper (under $5) and more scalable. The standard polymerase chain reaction (PCR) test kits are very precise but are more costly (above $20), are not scalable enough (unless combined with group or pooled testing when the prevalence rate is low), and require expensive equipment, which could delay the reporting of test results.
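The pooled-testing caveat can be quantified with the classic two-stage (Dorfman) scheme: test pools of k samples, then retest individuals only in positive pools. The sketch below assumes a perfectly accurate test and independent infections, both simplifications for illustration.

```python
# Expected tests per person under two-stage (Dorfman) pooled testing.
def tests_per_person(pool_size, prevalence):
    p_pool_positive = 1 - (1 - prevalence) ** pool_size
    return 1 / pool_size + p_pool_positive  # one pooled test + retests if positive

for prev in (0.001, 0.01, 0.05):
    k = min(range(2, 101), key=lambda size: tests_per_person(size, prev))
    e = tests_per_person(k, prev)
    print(f"prevalence {prev:.1%}: optimal pool size {k}, "
          f"{e:.3f} tests/person ({1/e:.1f}-fold saving vs individual tests)")
```

At 1% prevalence the optimal pool is about 10-11 samples, cutting the tests needed roughly five-fold; the saving shrinks quickly as prevalence rises, which is why pooling helps PCR mainly when prevalence is low.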
Race to the swift
In theory, if the demand increases, because test kits have become critical products, supply would eventually increase to reach a new equilibrium at a higher quantity and price. Even if we assume that the new equilibrium could be met with all the tests needed to defeat the virus, the key issue is how long it would take for supply to increase to meet the necessary demand. There are many constraints that would delay a swift ramping up of production.
One of the key constraints to substantially increasing the production of test kits is ramping up the production of the whole value chain. The shortage in critical inputs due to uncertainties, capacity constraints, and other hurdles could potentially derail the effort. There is a need for foresight and coordination at every level of the value chain (e.g., chemical reagents, swabs, assays, and logistics) to quickly add the needed production lines, equipment, and workers (which is harder during a pandemic). And without the resources and coordinating role of the state, many of these supply constraints may not be overcome.
Not only the standard long regulatory approvals for medical goods production but also other business regulations need to be expedited. This is particularly important for test approval: regulatory agencies need regulations for emergency approvals of "epidemiological" tests, whose purpose is to detect the infected (e.g., rapid DIY tests) rather than to diagnose the disease (e.g., PCR tests), as is usually required of test kits. Precision could be sacrificed to some extent to allow for cheaper, more scalable, and more convenient options. In addition, expediting covers a wide array of business activities such as hiring workers, acquiring licenses and land, expanding existing facilities or building new ones, and importing critical machinery or inputs. A one-stop shop with the power to expedite and resolve all these challenges is needed.
Lifting barriers to entry or encouraging "forced" entry could be necessary to increase production quickly. Involving new entrants could be needed alongside the expansion of existing firms. A main barrier to entry consists in IP rights and knowledge of production processes. In this regard, IP and production process knowledge related to test kits should be provided to all the firms producing the product and its inputs. The state could design various mechanisms such as patent pooling to compensate the IP holders and reward innovation adequately while reining in patent trolls (Stiglitz et al., 2020). If firms do not comply, invoking compulsory licensing (allowed for pharmaceuticals by the World Trade Organization, especially during emergencies) may provide a credible threat for firms to cooperate. In addition, taking advantage of existing production capabilities and trained staff in related industries, by requiring existing firms to re-orient some of their production capacity toward the production of test kits, could be necessary. Both large firms, with their enormous ability to plan and execute complex logistical chains, and small firms, with their agility and entrepreneurial spirit, would be called to action.
There is a strong economic case for "forced" entry. If production is not ramped up fast enough, the whole economy suffers not only from temporary output losses and larger unemployment but also from a greater risk of a persistent depression and potential civil turmoil. There is a positive externality of contributing to the universal testing effort, which cannot be captured by an individual firm. By mandating firms to participate, the state can tackle this type of market failure.
The ramp up in production would not necessarily result in a sunk cost if the virus disappears thanks to a cure, a vaccine, or a "miracle." Even if the virus disappears by "miracle" in the short run, as a result of mutation, for example, the world would still need to urgently develop and maintain a massive production capacity for test kits as part of epidemic preparedness. This is the key component of building a testing infrastructure for pandemics. In the face of future pandemics, deploying test kits would be the best line of defense: it is much faster than creating a vaccine or an anti-viral drug, and it would avoid long and costly lockdowns. In addition, in a pandemic, potential "overshooting" of production of test kits should not be an issue. Since the successful mitigation of the virus in other countries would help lower the risks at home, there is very little likelihood of unused production, as there is limited production capacity in many low-income countries. There is also a good case to subsidize or donate test kits to lower-income countries, especially by neighboring countries or those with strong ties to a home country (e.g., through trade and immigration).
Demand for test kits: free and mandatory for all
Even if the constraints on supply were lifted, market demand might not necessarily result in universal testing, even if testing were free. As discussed above, the amount of testing needed per day implies a large ramp up of production. Let us assume that the constraints on supply were all tackled and that supply can meet the necessary demand. If everyone simultaneously internalized the benefits of universal testing and were willing to pay a relatively high price for the test, then there would be no need for an intervention. A person paying for a test finds it beneficial only if essentially everyone else does the same, so that people can safely interact with each other: an externality of testing. The more affordable the test is, the more likely it is that more people get tested. Yet even if the test were free, it would still require consumers and others to voluntarily get tested, and demand for testing may be lower than required. Internalizing the benefits of other people getting tested may not happen fully in the market, and coordinating voluntary testing may be complicated even in a repeated setting. Ultimately, the most reliable solution consists in imposing a compulsory and free test for all to resolve the coordination failure, positive externality, and price rationing. Some compliance mechanism to monitor enforcement is needed: a proof of testing for jobs, building entry, test passports, etc.
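A stylized two-player game illustrates the coordination failure; B denotes the benefit of safe interaction, realized only when both people test, and c < B the cost or inconvenience of testing (the structure, not the numbers, is the point):

$$\begin{array}{c|cc} & \text{Test} & \text{Don't test}\\\hline \text{Test} & (B-c,\ B-c) & (-c,\ 0)\\ \text{Don't test} & (0,\ -c) & (0,\ 0) \end{array}$$

Both testing is the efficient equilibrium, but neither testing is also an equilibrium, so voluntary uptake can stall at zero; a free and mandatory test selects the good equilibrium directly.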
Sketching a strategy
The challenges and constraints discussed, which even a highly competitive market would fail to address, point to the state policies needed. The strategy is based on the TIP principles of setting ambitious goals, building capabilities and adapting fast, engaging the private sector, and providing necessary support while ensuring accountability (Cherif and Hasanov, 2019). Many features of the strategy sketched below are followed in advanced economies to some extent, albeit without the same focus and speed. However, these principles are barely applied in developing economies. The strategy can be summarized along the following lines: • Objective: In addition to identifying a portfolio of plausible solutions, a clear and ambitious production objective is needed (e.g., select a scalable epidemiological test, such as a rapid DIY or POC test, to be administered free and mandatorily to 5%-15% of the population a day), with numerical targets (e.g., produce test kits on the order of 5%-15% of the population a day), deadlines (e.g., 2-3 months), and an endgame (e.g., virus free within a month and an early warning system thereafter); the arithmetic sketch after this list illustrates the scale such targets imply.
• Institutions: The state needs to set up a task force responsible for ramping up production, reporting directly to a high-level council in charge of applying the strategy and involving major actors across government agencies and levels (e.g., central and regional) as well as the private sector, with regular meetings and communications to the public. Key agencies such as science, treasury, central bank, development bank, and others would be part of the council.
• Incentives and accountability: The task force should have the authority to change incentives (e.g., moral suasion, tax breaks, and financing) and enforce accountability (e.g., on quality and quantity) for firms once clear objectives have been agreed with them (e.g., new lines in existing factories, new plants, production targets, etc.). It would run the operation and coordinate across firms, the value chain, and government agencies. Access to financing would be provided (e.g., via a development bank).
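A back-of-the-envelope calculation of what the numerical targets above imply; the population size and per-kit cost are assumptions chosen only for illustration.

```python
# Scale of the testing objective for an assumed country of 100 million
# people, testing 10% of the population a day (midpoint of 5%-15%).

population = 100e6
daily_share = 0.10
cost_per_kit = 5.0   # assumed "few dollars" per kit

kits_per_day = population * daily_share
kits_first_year = kits_per_day * 365
spend_per_day = kits_per_day * cost_per_kit

print(f"kits needed per day:   {kits_per_day:,.0f}")      # 10,000,000
print(f"kits in first year:    {kits_first_year:,.0f}")   # 3,650,000,000
print(f"daily spend at $5/kit: ${spend_per_day:,.0f}")    # $50,000,000
# Billions of kits per year is why production, not just procurement,
# has to be planned within the 2-3 month deadline.
```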
Dealing with all these challenges calls for collaboration among firms and policymakers to reduce coordination and informational frictions and gain speed. While the main mechanism of the market is competition, in crisis times there is a need to shift toward collaboration. Information-sharing among firms concerning production processes, technology, and resources would help combine efforts to solve common bottlenecks and learn from each other. It would particularly help new entrants learn from incumbents. Setting up informal and fast information-sharing forums at different levels of the firm (e.g., research and development (R&D) personnel, engineers, and technicians), using industry associations and public-private industry alliances, would contribute to knowledge flows, coordination, and collaboration. The government could put forward incentives to encourage collaboration. The SEMATECH alliance of US semiconductor companies in the 1980s is an example of a public-private industry alliance in support of the US semiconductor industry. Others have proposed a more direct intervention to create a "Pandemic Testing Board" that takes its name and function from WWII's War Production Board (WPB) (Maier and Kumekawa, 2020). Yet another approach could be what the Federal Reserve Board of the USA did during the 2008 financial crisis, when it used its crisis powers to coordinate among banks, bring them into one room, and organize bailouts and liquidity support. A high-level policymaking agency could take on a similar role in fighting the pandemic crisis. This type of collaboration or agency would also be needed during normal times, as part of a testing infrastructure that could be used for preparedness and for scaling up production during crisis times.
Producing a few hundred billion test kits a year globally may seem like a staggering number, but the world has been producing billions of units of various medicines and consumer goods. For instance, in the USA in 2012-2013, about 39 million people used statins against high cholesterol levels, amounting to 221 million prescriptions and probably tens of billions of pills a year (Salami et al., 2017). Johnson & Johnson produces about 5 billion contact lenses a year (Johnson and Johnson, 2019). In 2018-2019, about 170 million flu vaccines were distributed in the USA and many more globally (CDC, 2019). In the consumer goods markets, in 2019, about 128 billion equivalent cans (in terms of volume) of soft drinks were produced in the USA and about a trillion equivalent cans globally (Statista, 2020). About 2.1 billion smartphones, tablets, and personal computers (including 1.5 billion smartphones alone) were shipped globally; these are much more complex products requiring many complex inputs (Lunden, 2020). In 2020, about a trillion semiconductor units were expected to be shipped globally (Statista, 2021). At a cost of a few dollars per test kit, the value of test kit production would be roughly equal to the global pharmaceutical market of about 1.4 trillion dollars: not a trivial increase, but still a small fraction, about one and a half percent, of global output (The IQVIA Institute, 2020).
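A quick sanity check of this claim; the world population, per-kit cost, and world GDP figures below are assumed round numbers, not values taken from the cited sources.

```python
# Our arithmetic behind "a few hundred billion kits a year, roughly the
# size of the global pharmaceutical market, ~1.5% of world output."

world_population = 7.8e9
daily_share = 0.10      # midpoint of the 5%-15% target
cost_per_kit = 5.0      # "a few dollars per test kit"
world_gdp = 87e12       # assumed world GDP in dollars, ~2019

kits_per_year = world_population * daily_share * 365
annual_cost = kits_per_year * cost_per_kit

print(f"kits per year: {kits_per_year:.2e}")                 # ~2.8e11
print(f"annual cost:   ${annual_cost / 1e12:.1f} trillion")  # ~$1.4 trillion
print(f"share of GDP:  {annual_cost / world_gdp:.1%}")       # ~1.6%
```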
A good example of what could be achieved is seen in the efforts of advanced countries to develop and produce vaccines in the race against COVID-19. Pharma companies have ramped up production to produce hundreds of millions of doses at manageable cost. "Operation Warp Speed" of the US Government set a goal of 300 million vaccine doses to be ready by early 2021. Government funds have been flowing to biotech and pharma companies to expand production. The Boston-based biotech company Moderna received about $500 million to expand its facilities to produce tens of millions of vaccine doses a month by 2021. Manufacturing hundreds of millions of doses would cost about $50 million for firms with existing facilities and personnel, reaching $700 million where new facilities are needed, according to Gavi, a vaccine alliance (Miller and Kuchler, 2020). On a global level, much larger production is needed for both vaccines and test kits.
To achieve success, incentives must be aligned and accountability has to be enforced. The objectives and accountability of all the relevant actors should be clearly set. The relevant agencies in charge of regulation and administrative issues (e.g., agencies regulating medical products) need to switch to an emergency mode of operation. They should have the responsibility not only of doing quality control but also of helping firms meet the needed requirements within the shortest time possible, and should act as information disseminators on how to reach the quality standards. The same applies to the firms involved in the production chain of test kits. If the production target, such as the number of test kits, the amounts of inputs needed, or the specific infrastructure required, is clearly specified, incentives would be aligned and accountability could be enforced. A mechanism to share the burden among firms, and potential incentive mechanisms to compete and collaborate (e.g., prizes for development and various financial incentives such as tax breaks and loan guarantees), could also be considered, as the success of the firms involved would benefit the whole economy. A high-level government task force would coordinate the production orders and information flows.
In addition to production, the whole testing infrastructure needs to be planned out. Deploying tests en masse requires logistical support and potentially quarantine facilities (and financial support for the quarantined) and may face bottlenecks depending on the selected test technology. For example, if a test requires face-to-face interaction to collect samples (e.g., a rapid POC test), enough protective gear should be made available for the testing centers. Enforcing the isolation of infected people in quarantine would require similar planning. Similar to voting, testing a large part of a city's or country's population daily can be done using the facilities and parking lots of schools and community centers, making the task manageable.
Learning from WWII's production ramp up
Although a huge ramp up in the global production of medical goods, including test kits and their inputs, is urgently needed during the pandemic, it is a fraction of the production ramp up during the WWII mobilization in the USA. In its 20 August 1945 issue, Time reported: "In the five years since the fall of France, U.S. industry and labor had turned out: 299,000 combat planes (96,000 last year); 3,600,000 trucks; 100,000 tanks; 87,620 warships (including landing craft), 5,200 merchant vessels; 44 billion rounds of ammunition; 434 million tons of steel; and 36 billion yards of cotton textiles for war" (Waxman, 2020). In those few years, new technologies were invented, new industries were started from scratch, hundreds of factories were built and expanded, productivity skyrocketed, and the labor force grew. Government spending reached about 40% of gross domestic product, from less than 10% in the 1930s (Bossie and Mason, 2020).
Like Kennedy's call for a moonshot a couple of decades later, the goal President Roosevelt put before the nation was ambitious and seemed insurmountable. In his fireside chat on 26 May 1940, he said that the USA needed to produce 50,000 combat airplanes in the next year, when it barely had 3,000 mostly obsolete planes and had not produced that amount even cumulatively since the first flight of the Wright brothers in 1903 (Trainor, 2019). Three years later, the USA was producing more than 50,000 combat airplanes a year, a 30-fold increase from the 1940 level. And airplanes were only part of what was needed for the mobilization effort. The construction time of Liberty ships went from about a year (from keel laying to delivery) to less than two weeks (and even a few days in some cases) within a couple of years at Kaiser's shipyards (Tassava, 2003).
Pushing for ambitious targets and inflating the requirements on the production needed, Roosevelt famously quipped to an advisor questioning the numbers: "Oh, the production people can do it if they really try" (Klein, 2013; Zeitlin, 2020). William Knudsen, the president of General Motors who became Roosevelt's force for organizing and coordinating mass production as the director of what later became known as the War Production Board, said: "We won because we smothered the enemy in an avalanche of production, the like of which he had never seen, nor dreamed possible" (This is Capitalism, 2020).
Meeting Roosevelt's call to ramp up production required a different approach from the market-driven approach tried, and failed, during WWI. Then, war mobilization was essentially driven by the private sector: only 10% of spending on new plants and equipment in 1917-1918 was provided by the government. Although the War Industries Board, overseen by the Wall Street financier Baruch, managed to mobilize production, it catered to major corporations, and war profiteering was extensive (Rosenblatt, 2018a). Decentralized purchasing led to bidding wars among military units, production delays, and hiked prices (Brunet, 2020). At the same time, risk was still largely borne by the private sector, as many contractors were left with unwanted goods when the government canceled orders after the sudden end of the conflict in late 1918 (Wilson, 2020).
The approach Roosevelt took in the wake of the war was for the state to take the lead in the mobilization effort. Roosevelt knew he needed industrialists at his side to meet the gargantuan increase in demand at each stage of the production chain. He called Knudsen and asked him to lead the effort and bring industrialists on board. Knudsen came to Washington, went to his hotel room, and two days later produced a plan to turn the USA into the global manufacturing powerhouse within 18 months (This is Capitalism, 2020). Roosevelt created several agencies to oversee various functions of production and finance, with limited and overlapping powers and responsibilities (giving him brokering and decision powers), put capable leaders in charge, and relied on them and private industry to do the job. When agencies or leaders faltered, they were quickly replaced with others to carry on (Hone, 1991).
The agencies were instrumental in achieving the ambitious goals set. One of the key agencies was the WPB, which managed and coordinated the production chain. The WPB matched production orders with the interests and capabilities of firms and tasked large established firms with the more complex orders. Another key agency was the Reconstruction Finance Corporation (RFC), which financed operations. It was a Hamiltonian-style national bank and had been instrumental in directing credit during the New Deal (Rosenblatt, 2018a). There were also specialized agencies tasked, for instance, with developing the synthetic rubber industry. The National Defense Advisory Council, established by Roosevelt, served as a coordinating body across all the agencies.
The state used various tools and incentives to have the private sector step up production substantially and quickly. Initially, tax credits and incentives were tried but had limited success, as the projects undertaken were mostly of a safe nature. So did the Emergency Plant Facilities program, which reimbursed firms only in the future for building plants and thus required a large initial investment from the private sector; despite the promised reimbursement, the private sector was reluctant to take on huge upfront costs. Loan guarantees (the V Loan Program) worked relatively well, tripling bank lending to war industries to about 18% of bank loans in 1943. However, they accounted for a small share of total war financing.
The Defense Plant Corporation (DPC), a subsidiary of the RFC, began directly investing in building factories and financing industries. It would then lease the built factories to firms for a notional one dollar per year and cap profits at a fair and reasonable amount (after the war, firms had an option to buy the factories back, but the state retained production rights when needed). These government-owned, contractor-operated (GOCO) plants were a key mechanism of expanded production. Through the war, the federal government directly contributed two-thirds of the total invested, ending up owning large, and in many cases majority, shares of U.S. heavy industry (Bossie and Mason, 2020).
Securing demand for orders allowed firms to ramp up production. The ramp up of the machine tools industry, made up of many small specialized firms, illustrates this point. The tools were a key input in the production of aircraft, tanks, trucks, and other equipment, and each factory required tens of thousands of tools. A huge shortage of machine tools prompted the DPC to create a pool of guaranteed machine tool orders and finance it, spending about $2 billion during the war (about $28 billion in 2020 dollars). Production increased tenfold, to about 300,000 machine tools per year, from 1938 to 1942 (Rosenblatt, 2018b). More important, the DPC placed orders in the pool even before the specific buyers were known, to expedite the production process (Bossie and Mason, 2020).
The "leading hand of the state" played a crucial role in creating new industries such as synthetic rubber industry and increasing supply of raw materials. As the supply of natural rubber from Asia, a key input in many industries, was disrupted by the war the DPC invested $700 million (about $10 billion in 2020 dollars) to build 51 plants to produce 700,000 to a million tons of synthetic rubber a year. The production increased by 3000% between 1941and 1945(Bossie and Mason, 2020. In addition, the federal government also provided funds for R&D for both basic and applied research. Even in the initial stages of development, the DPC provided seed money to chemical companies to develop synthetic rubber, and the licenses were shared with other producers (Rosenblatt, 2018c). The state essentially owned the industry well into the mid-1950s. Similarly, the large shortages of raw materials such as aluminum, copper, and other metals that were needed for the production effort were addressed by the Defense Supply Corporation, another subsidiary of the RFC. For instance, aluminum production increased from about 400 million pounds a year in 1940 to about 2.25 billion pounds a year in 1943 with over half of the output produced in the facilities built by DPC (Rosenblatt, 2018d).
The transformation of the auto industry in shifting production to military needs was also remarkable. While more than 3 million cars were produced in the USA in 1941, only 139 were manufactured during the whole war (PBS, 2020). The task was enormous, even though three-quarters of the financing for airplane development came from the DPC (Rosenblatt, 2018c). For instance, Chrysler discovered that a prototype of a tank with 3,500 parts required about 200 pounds of blueprints (Rosenblatt, 2018b). When Ford was tasked with producing B-24 bombers, the car of the day had about 15,000 parts and weighed 3,000 pounds, while the B-24 had 450,000 parts and 360,000 rivets in 550 sizes and weighed 18 tons. Many doubted Ford could build the whole airplane, but Ford proved them wrong. The famous Willow Run plant at its peak produced a B-24 bomber every hour, day and night. At the beginning of the venture, Ford's production chief designed, overnight, an assembly line that emphasized standardized interchangeable parts and orderly continuous flow like that of auto assembly. His team disassembled the two planes flown in and came up with the blueprints needed. Some 42,500 employees were working at the plant, but mass assembly did not begin until the year after the factory opened, as bottlenecks such as housing, essential input specifications and delivery, and labor relations had to be fixed. To deal with continuous modifications to the plane and avoid costly factory shutdowns, many parts were outsourced to about 1,000 Ford factories and independent suppliers so that the Willow Run factory could operate under more predictable conditions (Trainor, 2019).
The enormous and fast ramp up of the production of a large number of sophisticated goods required for the war mobilization suggests a few key lessons. First, the effort has to combine the coordinating and financing role of the "leading hand of the state" with the production capabilities of the private sector. Second, a high-level council with key state agencies needs to be set up to drive the agenda: it has to establish ambitious and clear targets; specify an accountability framework covering deliverables, profit margins, and labor relations; coordinate information flow across agencies and firms; provide for the sharing of designs and IP among firms; engage all capable firms to allow for competition and potential failure; and clear up bottlenecks in the supply chain and regulatory regime. The competency and talent of the leaders in charge cannot be overemphasized. Third, to reduce uncertainty and risk for firms, demand has to be guaranteed and financing has to be sufficient; in many cases this may involve direct ownership of facilities, such as GOCO plants, to ensure the provision of critical inputs in the value chain. Lastly, it was the continuous effort and ingenuity of many firms and workers, including civil servants in government agencies, working together, that reached the goal in front of them while removing the obstacles on their path.
The speedy WWII development of vaccines against various diseases also holds lessons for collaboration and institutional structures for the research and development of test kits, vaccines, and cures. The close collaboration among academic, industry, and military scientists in targeted R&D programs was instrumental in harnessing existing knowledge and applying it to the development of vaccines. The institutional arrangement in the Office of Scientific Research and Development (OSRD) featured project managers with clear objectives to develop, test, scale up, and manufacture vaccines. This governance structure, combined with collaboration with the military as a lead user of vaccines providing the needed feedback, improvements, and future demand, produced many innovations in a short time (Hoyt, 2006). As James Conant, the president of Harvard University and a member of the National Defense Research Committee under Vannevar Bush, wrote in 1945: "There is only one proven method of assisting the advancement of pure science-that of picking men of genius, backing them heavily, and leaving them to direct themselves. There is only one proven method of getting results in applied science-picking men of genius, backing them heavily, and keeping their aim on the target chosen. OSRD…had achieved its results by the second procedure…because…its objective was not to advance science but to devise and improve instrumentalities of war" (Hoyt, 2006).
Based on the lessons we drew from the WWII mobilization effort, we provide a blueprint for industrial policy to ramp up test kit production, clarifying several aspects of the policy sketch outlined in the previous section and putting forth specific policy instruments. A task force akin to the WPB (e.g., a Pandemic Testing Board) would employ competent and experienced people, put forth ambitious and clear objectives for the development and production of test kits, study and solve challenges such as value chain bottlenecks and regulatory approvals of rapid test kits, and coordinate among the private sector and government agencies. It would coordinate among academic and industry scientists and experts on developing test kits under targeted R&D programs, with relevant feedback from the research community and industry. It would invite new and existing firms to partake in the effort, resolving demand uncertainty by ensuring steady demand on the part of the government. It would facilitate building new plants and expanding existing lines to produce both inputs (e.g., assays and swabs) and final products such as rapid test kits at the scale needed. It would coordinate financing (e.g., loans, guarantees, and grants) and, if needed, facilitate building GOCO plants. It would ensure accountability for the support provided, as well as IP and knowledge sharing, by capping the rate of return earned by firms.
Why most developing countries should follow industrial policy now
There are several reasons why most countries, including developing ones, should start working on developing productive capacity in test kits. There is a huge gap between the quantity needed to achieve universal testing and current production in advanced countries. It may take a long time for developing countries to access imported test kits, and there is no guarantee that advanced economies would follow the above strategy and ramp up their production sufficiently. This could lead to a dystopian situation in which some parts of the world, mostly advanced, would be open for business, resuming relatively free movement among themselves, while the rest would be fighting for the limited supply of test kits. Most of these countries might not succeed at joining in and could face repeated lockdowns and potentially huge loss of life and suffering. Defeating the virus through testing depends critically on the ability of each country to quickly access the needed test kits, and the only effective way to achieve this is by developing its own production capacity. In fact, this scenario has played out in global vaccine production and distribution as a few successful vaccines were developed against COVID-19. More important, the know-how of producing test kits would lay the ground for ramping up vaccine production and for building pharmaceutical and manufacturing capabilities.
Emerging economies have a relatively higher chance of succeeding at this endeavor than low-income countries, as they already have the needed human capital, industrial knowledge, and financing. But even smaller emerging market and lower-income economies could coordinate regionally to share the costs and human capital to produce the needed test kits, with technical and financial assistance from other economies. In addition to regional cooperation, international organizations could provide further technical and financial support. The good news is that the universal testing and isolation strategy is easier to implement for a small population.
As to whether countries could do it, or whether it would be too costly and time-consuming, cheap and fast test kits already exist. Test kits are being developed that are technically simple, akin to pregnancy test kits, cheap, and do not require complex equipment or even electricity to provide results. In terms of industrial scale, for example, soft drink companies already produce more than 100 million drinks a day on the African continent.
Even innovation could come from developing economies with experience in fighting past pandemics such as Ebola. For example, a test invented by the Senegalese National Institute for Health and a British biotechnology company (Bradley, 2020) could cost as little as a dollar per test. Not only can low-income countries in Africa and elsewhere produce at an industrial scale, they can also innovate on adapted technologies, laying the ground for a recovery and a manufacturing renaissance and paving the way for sustained long-run growth.
Finally, one could argue that for many low-income countries with low capabilities and urgent needs across a wide spectrum, engaging in industrial policy to produce test kits could be a luxury they cannot afford, although it may eventually more than repay the spent resources by creating new industries. The answer would depend on the true cost of such a policy, which could be relatively low even for low-income countries that need to diversify their exports and economies, and whether the prospect of a severe and resurgent pandemic with devastating effects is taken seriously enough.
Conclusion
A risk-based approach to industrial policy to fight pandemics is key to reopening economies faster and saving lives. In the face of a pandemic, this approach would simultaneously pursue key technological solutions like vaccines, cures, and test kits, invest substantial amounts, and coordinate resources and efforts with the private sector. This diversification of solutions would provide the largest bang for the buck, with a high Sharpe ratio, that is, a high expected return per unit of risk. Experimentation and provision of a proof of concept, followed by a ramp up in production and distribution, are key elements of this strategy.
Among plausible solutions, we argue that a viable strategy to end the pandemic is a universal testing and isolation policy coupled with industrial policy to increase production. For future pandemics, while a vaccine or a cure is being developed, a testing policy could be the first line of defense, and building a testing infrastructure is key to preparedness. Countries following this strategy would need to achieve a rapid increase in the number of test kits produced during the pandemic. As cheap test kits are developed, and with economies of scale, the cost of the production ramp up would be negligible compared with the economic fallout and lives lost due to repeated lockdowns or the spread of the pandemic.
The production ramp up of test kits is possible with the support of the state, that is, with the application of industrial policy against the pandemic. An epidemiological approach to testing, sacrificing precision to provide cheap, fast, and scalable test kits, needs to be accommodated by the regulator. Market failures stemming from market power, externalities, short-term profit motives, capacity constraints, and coordination challenges would not be solved by the private sector alone. Instead, it is the combination of the state and the market, with coordination mechanisms, large resources, and the harnessing of market forces, that could support a rapid ramp up of the production and distribution of test kits and critical medical products. This approach would constitute a key feature of the testing infrastructure for future pandemics. Both the institutional structure and the research and productive capabilities to quickly develop and then scale up the production and distribution of test kits would need to be in place as part of preparedness for future pandemics.
The principles of a TIP strategy, especially in developing countries, and the WWII mobilization effort provide a blueprint for such an industrial policy. The TIP strategy needs to set an ambitious goal in terms of the number of test kits produced. A task force at the highest level of government needs to coordinate among all the key stakeholders in the public and private sectors to tackle bottlenecks (e.g., regulation, supply chain, distribution, etc.). Large firms as well as innovative small firms need to contribute to this endeavor, and sharing information and knowledge would expedite the process. The state needs to provide support to firms but must hold them accountable for the goals agreed upon. The resources spent on this endeavor cannot compare to the costs of high unemployment, potential social unrest, and even starvation of the poorest in developing countries. State intervention in this context can draw lessons from war mobilization efforts, albeit at a minuscule scale in comparison.
Ramping up the production of test kits and other medical gear in developing countries is even more pressing. Doing so would not only stimulate production and growth in the short run when major service sectors (e.g., tourism) and commodity markets are suffering, but it would also pave the way for the manufacturing of vaccines (Okonjo-Iweala, 2020). The acquired capabilities would prepare developing countries for future pandemics. More important, this production mobilization would also be an opportunity for developing countries to refocus their resources from non-tradable services back to manufacturing, creating manufacturing capabilities, reversing "premature deindustrialization," and paving the way for sustained growth.
As US President Franklin D. Roosevelt said, "Powerful enemies must be out-fought and out-produced." The strategy outlined in this paper is necessary and urgent not only to fight the COVID-19 pandemic but also to provide an insurance policy against future pandemics. Building a testing infrastructure is key to a country's pandemic preparedness program. Even in an optimistic scenario where a vaccine becomes available in a few months, countries should not miss this opportunity to build a testing infrastructure as a bridge to mass vaccinations. It is equally plausible, however, that when another pandemic strikes, perhaps more lethal than COVID-19, no vaccine will be found quickly, and humanity should not have to look back and regret a missed opportunity.
"year": 2021,
"sha1": "751b51d00585deb40ade46f0ad7271c873c61248",
"oa_license": null,
"oa_url": null,
"oa_status": "CLOSED",
"pdf_src": "PubMedCentral",
"pdf_hash": "751b51d00585deb40ade46f0ad7271c873c61248",
"s2fieldsofstudy": [
"Business"
],
"extfieldsofstudy": [
"Medicine"
]
} |
AC Josephson Effect Induced by Spin Injection
Pure spin currents can be injected and detected in conductors via ferromagnetic contacts. We consider the case when the conductors become superconducting. A DC pure spin current flowing in one superconducting wire towards another superconductor via a ferromagnet contact induces AC voltage oscillations caused by Josephson tunneling of condensate electrons. Quasiparticles simultaneously counterflow resulting in zero total electric current through the contact. The Josephson oscillations can be accompanied by Carlson-Goldman collective modes leading to a resonance in the voltage oscillation amplitude.
I. INTRODUCTION
Electric and spin transport near ferromagnet-paramagnet interfaces received a large attention boost with the discovery of the giant magnetoresistance effect [1] and the subsequent developments in magnetoelectronics and spintronics. Aronov [2], and later Johnson and Silsbee [3], theoretically predicted that an electric current through such an interface leads to an accumulation of nonequilibrium spin polarization with an accompanying spin current in the paramagnetic metal. A reverse effect also takes place: a pure spin current from the normal metal gives rise to an electric potential difference in the ferromagnet. The physics of these phenomena is quite simple. A sufficiently large difference between the conductivities of spin-up and spin-down electrons in ferromagnets induces spin polarization of the electric current therein. Spin-polarized currents passing through ferromagnet-paramagnet boundaries result in an accumulation of nonequilibrium magnetization near the interface. Both the spin injection and the detection of this spin polarization have been experimentally demonstrated in Refs. [4-6] in systems containing two or more junctions of thin normal metal wires with ferromagnets. One of them acts as a spin injector, while the other is a detector, where the voltage created by diffusing spins can be measured. Related spin-polarized transport phenomena have been investigated in many spintronic applications, such as giant magnetoresistance [1], spin Hall effects [7], current-induced magnetization dynamics [8], spin pumping [9], and spin caloritronics [10].
In the case of superconducting systems, spin injection and detection within a nonlocal setup similar to the one studied in Refs. [4-6] was investigated both theoretically [11-13] and experimentally [14]. These studies have focused on DC transport. They revealed a strong renormalization of spin-related transport parameters as compared with normal systems. These changes were mostly caused by the modified density of states in a superconductor. Beyond such quasiparticle transport properties, the macroscopically coherent state of the superconducting condensate can give rise to a quite different transport phenomenon associated with spin-polarized transport.
Below we will consider an AC effect produced by a DC spin current towards a thin ferromagnetic contact. The DC potential induced by this polarization flux gives rise to an AC electric current of condensate electrons. Since in the considered experimental setup the total current of the superconducting and normal components must be zero, the AC condensate oscillations result in an AC potential difference between the opposite sides of the contact. A schematic of a possible experimental setup is shown in Fig. 1. A current is passed from a ferromagnet to a normal metal, generating an associated spin accumulation and spin current therein. In the non-local geometry, this spin accumulation also diffuses transversely in the contacted normal metal towards another normal metal reservoir via a ferromagnet contact. The non-local potential $V$ increases with the injected DC current $I$, and the non-local resistance $R_{nl} = V/I$ describes the spin transport properties of the device. We will demonstrate that when the normal metals become superconducting, $R_{nl}$ acquires an AC component in addition to the DC component. The outline of this paper is as follows. A model system used in our calculation is described in Sec. II, where we also present a simple calculation of the AC voltage oscillations assuming a local thermodynamic equilibrium between quasiparticles and the condensate. A microscopic analysis based on coupled kinetic equations for the superconducting order parameter and the quasiparticle distribution function will be given in Sec. III. A discussion of results will be presented in Sec. IV.
II. AC VOLTAGE OSCILLATIONS IN A LOCAL THERMODYNAMIC EQUILIBRIUM
Our model system consists of two superconducting wires in contact via a spin-active barrier. We consider this contact to be weak, in the form of a thin ferromagnetic layer with, if necessary, additional insulating layers. Such a barrier can be characterized by two resistances, $R_\uparrow$ and $R_\downarrow$, corresponding to the two spin eigenstates. We assume that a nonequilibrium spin polarization is created in the left wire (see Fig. 1), either by spin injection, as shown in Fig. 1, or by other means. Moreover, we assume that the electrons' energy relaxation is faster than their spin relaxation, so that the up- and down-spin distributions can be characterized by the respective chemical potentials $\mu^L_\uparrow$ and $\mu^L_\downarrow$, resulting in the spin accumulation potential $\delta\mu_s = \mu^L_\uparrow - \mu^L_\downarrow$. In the right (R) wire $\delta\mu_s$ is much smaller if the spin relaxation is faster than the influx of polarization from the left reservoir through the ferromagnetic contact. This is satisfied when the contact resistance is much larger than the resistance of a wire segment of length $l_s = \sqrt{D\tau_s}$, where $D$ is the diffusion constant and $\tau_s$ is the spin relaxation time. This is true in many practical cases, in particular in the systems studied in Refs. [5,6]. We therefore simplify our model by assuming $\mu^L_{\uparrow/\downarrow} = \mu + eV/2 \pm \delta\mu_s/2$ and $\mu^R_{\uparrow/\downarrow} = \mu - eV/2$, where $\mu$ is the equilibrium chemical potential and $V$ is the charge potential difference between the two wires.
With these definitions, the electric current through the contact in the normal state is

$$I = \frac{V}{R_c} + \frac{\delta\mu_s}{2eR_s}, \qquad (1)$$

where the inverse charge and spin resistances are given by $R_c^{-1} = R_\uparrow^{-1} + R_\downarrow^{-1}$ and $R_s^{-1} = R_\uparrow^{-1} - R_\downarrow^{-1}$. In an open circuit, electro-neutrality requires $I = 0$, so that $V = -P\,\delta\mu_s/2e$, where $P = R_c/R_s$ is the spin current polarization of the contact. This is just the voltage induced by the spin current through the contact, as has been experimentally demonstrated in Refs. [4-6].
Let us now consider this situation for superconducting wires. We assume that $2\delta\mu_s \ll k_B T_c$, so that the nonequilibrium spin polarization does not cause depairing [12]. The difference between Cooper pair energies on opposite sides of the contact is $2eV$. This potential difference gives rise to the AC Josephson current $I_J$ of condensed electrons. In addition, electro-neutrality causes an oppositely directed current $I_n$ of quasiparticles, so that the total electric current is zero. This results in DC and AC voltage differences between the left and right superconductors that we will now compute.
The simplest approach to this problem is based on the assumption that in the vicinity of the critical temperature, $T_c - T \ll T_c$, the quasiparticle current remains given by Eq. (1). We will discuss in the next section in which regime this approach is valid. Denoting the phase difference of the order parameters between the left and right wires as $\varphi$, and taking into account that $d\varphi/dt = -2eV/\hbar$, electro-neutrality, $I_n + I_J = 0$, together with Eq. (1) dictates

$$\frac{\hbar}{2eR_c}\frac{d\varphi}{dt} = \frac{\delta\mu_s}{2eR_s} + I_c\sin\varphi, \qquad (2)$$

where $I_c$ is the critical Josephson current. It is easy to see that when $I_c \ll \delta\mu_s/2eR_s$ the Josephson current is dominated by harmonic oscillations with the frequency $\omega = P\,\delta\mu_s/\hbar \equiv 2eV_0/\hbar$. Hence, the voltage induced by the spin current is

$$V(t) = -V_0 - I_cR_c\sin(\omega t). \qquad (3)$$

The DC component $-V_0$ of this voltage is exactly the same as in the case of normal metals. Additionally, $V(t)$ contains a term that oscillates with a frequency determined by the DC (normal state) contribution to the nonlocal signal. The magnitude of the oscillating voltage can be estimated by noting that at temperatures close to $T_c$

$$I_cR_c = \zeta\,\frac{\pi\Delta^2}{4ek_BT_c}, \qquad (4)$$

where $\Delta$ is the superconducting gap and $\zeta$ is a dimensionless coefficient that takes into account the depairing effect inside the ferromagnetic contact layer [15], leading to exponential suppression of the Josephson current and, consequently, to small values of $\zeta$. It should be noted that, according to Eqs. (3) and (4), the oscillation amplitude $I_cR_c$ does not explicitly depend on the transmission coefficient, apart from a weak dependence through the depairing factor $\zeta$.
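As a quick numerical check of the near-harmonic regime of Eq. (2), the sketch below (our illustration; it works in units with $\hbar = e = 1$, and all parameter values are arbitrary assumptions) integrates the phase equation and recovers a DC level of $-V_0$ with an AC amplitude close to $I_cR_c$, as in Eq. (3).

```python
import numpy as np
from scipy.integrate import solve_ivp

# Units with hbar = e = 1; parameter values are illustrative assumptions.
omega = 1.0             # P * dmu_s / hbar, the spin-injection frequency
Ic_Rc = 0.1 * omega / 2 # I_c R_c << dmu_s/(2 R_s): near-harmonic regime

def dphi_dt(t, phi):
    # From I_n + I_J = 0 and V = -(1/2) dphi/dt:
    # dphi/dt = omega + 2 * Ic_Rc * sin(phi)
    return omega + 2 * Ic_Rc * np.sin(phi)

t = np.linspace(0, 40, 2000)
sol = solve_ivp(dphi_dt, (t[0], t[-1]), [0.0], t_eval=t, rtol=1e-8)
V = -0.5 * dphi_dt(t, sol.y[0])   # V(t) = -(1/2) dphi/dt

print("mean V :", V.mean())                  # ~ -omega/2 = -V0 (DC part)
print("AC amp :", (V.max() - V.min()) / 2)   # ~ Ic_Rc, cf. Eq. (3)
```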
III. NONEQUILIBRIUM EFFECTS AND COLLECTIVE MODES
The above analysis was based on the assumption that, near the critical temperature, the current carried by quasiparticles can be represented by the normal-state expression, Eq. (1), ignoring small corrections associated with the gap in the quasiparticle spectrum. The small gap alone, however, does not justify this assumption. In particular, when quasiparticles are transmitted between the left and right wires they may not be in local thermal equilibrium with the respective condensates, which was assumed when deriving Eq. (2). To take into account nonequilibrium effects, one needs to consider time-dependent transport and relaxation of the quasiparticles. Two physical effects determine the kinetics of quasiparticles in the superconducting wires. The first one is the so-called charge (or branch) imbalance of electron and hole excitations [16-18]. It is produced by quasiparticle tunneling between superconducting electrodes, leading to a quasiparticle distribution with a local chemical potential different from that of the condensate. This difference relaxes over a time much longer than the electron-phonon scattering time. The other effect is related to condensate space-time oscillations. It dominates over the charge imbalance relaxation when $\omega$ is large enough. We will demonstrate that spin injection then enables the detection of collective condensate-quasiparticle modes, the Carlson-Goldman modes [19], which are characterized by oppositely directed oscillations of the condensate and normal fluids. There is an important difference with respect to the usual Josephson effect, since our device requires zero net current, $I = 0$. The usual Josephson effect does not couple to Carlson-Goldman modes and is not reduced at low temperatures, $T \to 0$. In contrast, to provide a counterflow we need excitations, which vanish at low temperatures. The coupling to the collective modes is enabled by a spin-driven battery effect induced by the spin injection.
Let us now detail the calculations. Assuming a small deviation from equilibrium, we employ the linearized time-dependent kinetic and Ginzburg-Landau equations in the diffusive regime [18], when the elastic mean free path is much less than the superconductor's coherence length, as well as other relevant length scales. In this case, the isotropic quasiparticle distribution function $f_\sigma(E,t)$, where $\sigma$ is the spin projection, depends only on the energy $E$ and time $t$. Within the linear theory the singlet condensate couples to the spin-independent part $f(E,t) \equiv (f_\uparrow(E,t) + f_\downarrow(E,t))/2$ of the distribution function. Therefore, after ignoring small terms $\sim(\delta\mu_s/k_BT)^2$, the unperturbed spin-independent distributions take the form of Fermi equilibrium functions $f^{L/R}_0(E,t)$ of the left and right wires with the respective electrochemical potentials $eV/2 + \mu$ and $-eV/2 + \mu$. In turn, the corresponding gap functions of the unperturbed condensates are $\Delta\exp(i\varphi/2 - 2i\mu t/\hbar)$ and $\Delta\exp(-i\varphi/2 - 2i\mu t/\hbar)$. It is easy to see that in this unperturbed state the spin-independent contribution to the quasiparticle current through the contact is given by the first term on the right-hand side of Eq. (1). Taking into account the above condensate functions one can easily obtain Eqs. (2) and (3). In the perturbed state we have $f(E,r,t) = f_0(E,t) + \delta f(E,r,t)$ (we will skip here and below the labels $L$ and $R$). Since the perturbation violates the electron-hole symmetry, it gives rise to a spatially dependent potential $\phi(r,t)$ near the contact. Also, a correction to the order parameter $\delta\Delta(r,t)$ appears. In order to simplify the further analysis, we assume that $\hbar\omega \ll \Delta$ and $\hbar/\tau_E \ll \Delta$, where $1/\tau_E$ is the electron-phonon relaxation rate. Besides that, the critical supercurrent $I_c$ is taken small enough that the time dependence of all functions is dominated by harmonic oscillations. Accordingly, we introduce the time Fourier components $\delta f_\omega(r,E)$, $\delta\Delta_\omega(r)$ and $\phi_\omega(r)$. From Refs. [18,21] it follows that $f_\omega$ obeys the kinetic equation (5), in which $f_0 = 1/4k_BT\cosh^2(E/2k_BT)$ and $I_{st}$ is the electron-phonon scattering integral, whose explicit form can be found in Refs. [18,21]. Furthermore, $\tilde{D} = D(N_1^2 + N_2^2)$, where $N_1$ and $N_2$ are the spectral functions defined in Eq. (6), and the linearized Ginzburg-Landau equation takes the form of Eq. (7). We will employ the above equations for the analysis of our model in the two limiting cases of weak and strong energy relaxation relative to the Josephson frequency, $\tau_E\omega \ll 1$ and $\tau_E\omega \gg 1$, corresponding to very different physical situations. In the former case slow time variations of $\delta f$ may be ignored, so that the quasiparticle kinetics is dominated by the charge imbalance of electron and hole excitations. The deviation from thermodynamic equilibrium decreases with increasing distance from the contact on the characteristic length scale $\sqrt{D\tau_R}$, where $\tau_R = 4k_BT_c\tau_E/\pi\Delta$ is the charge imbalance relaxation time, which is much longer than $\tau_E$. In the opposite, high-frequency regime inelastic collision processes are not important, because the quasiparticle distribution oscillates fast. Therefore, one can neglect $1/\tau_E$ and $I_{st}$ in Eqs. (5) and (7). In this case, since the Josephson oscillations of the condensate take place at zero total current, they strongly couple to Carlson-Goldman modes. Therefore, one can expect such modes to be excited near the contact and to propagate along the left and right wires.
In both the low-frequency and high-frequency regimes, using Eqs. (5) and (7) with a reduced form of $I_{st}$ from Refs. [21] and [18], and taking into account the zero electric current condition, one arrives at an equation for the potential,

$$\nabla^2\phi_\omega = \kappa^2(\omega)\,\phi_\omega, \qquad (8)$$

where $\kappa^2(\omega) = 1/D\tau_R$ at $\tau_E\omega \ll 1$, while at $\tau_E\omega \gg 1$ it takes the form of Eq. (9), involving the sound velocity $c_s = \sqrt{2D\Delta}$. Eq. (8) is well known. At $\tau_E\omega \ll 1$ it describes the charge imbalance relaxation [17,18], while in the opposite limit it gives the dispersion of Carlson-Goldman modes [20].
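To visualize the two regimes of Eq. (8), the sketch below (our illustration; the $\kappa$ values and the cosh-profile ansatz with a zero-gradient condition at $x = L$ are assumptions, not values derived from material parameters) evaluates solutions decaying away from the contact: a real $\kappa$ gives monotonic decay, while a complex $\kappa$ gives decaying oscillations.

```python
import numpy as np

# Solutions of phi'' = kappa^2 * phi on a half-wire 0 <= x <= L with
# phi'(L) = 0, normalized to 1 at the contact (x = 0).

L = 10.0
x = np.linspace(0.0, L, 7)

def profile(kappa):
    """Profile ~ cosh(kappa*(x - L)), normalized to 1 at the contact."""
    return np.cosh(kappa * (x - L)) / np.cosh(kappa * L)

low_freq = profile(1.0)            # tau_E*omega << 1: real kappa
high_freq = profile(0.3 + 1.0j)    # tau_E*omega >> 1: complex kappa

for xi, lo, hi in zip(x, low_freq, high_freq):
    print(f"x={xi:5.2f}  decay={lo.real:+.3f}  oscillating={hi.real:+.3f}")
# Real kappa: monotonic decay (charge imbalance relaxation); complex
# kappa: decaying oscillations (Carlson-Goldman modes along the wire).
```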
For our geometry, when $\kappa^{-1}$ is much larger than the width and thickness of the wire, $\phi_\omega$ depends only on the coordinate $x$ along the wire. Then, at $\tau_E\omega \ll 1$, $\phi_\omega$ decreases exponentially with increasing distance from the contact, while at $\tau_E\omega \gg 1$ it shows decaying oscillations. We assume that the left and right wires are of the same length $L$. Since the system is symmetric with respect to $x \to -x$, the oscillating part of the electrochemical potential is $-V_\omega/2 + \phi_\omega(x)$ at $x > 0$ and $V_\omega/2 - \phi_\omega(-x)$ at $x < 0$, where $V_\omega$ denotes the Fourier component of $V(t)$. The solution of Eq. (8) has the form of Eq. (10), with the boundary conditions $\nabla_x\phi_\omega(\pm L) = 0$ and Eq. (11), where $A$ is the wire cross-section area and $\sigma$ is the normal-state conductivity. These boundary conditions provide a zero electric current of quasiparticles at the wire ends and a current equal to the injected one at $x = 0$. From Eqs. (8) and (10)-(11) one obtains the periodic part of the injected current, Eq. (12), in which the resistance $R_w$ of Eq. (13) enters.
Hence, the result is a renormalization of $R_c$ in Eq. (1), such that $R_c \to R_c + 2R_w$. To find the voltage $V_\omega$, the quasiparticle current (12) must be equated with the Josephson current. In this way we obtain a new expression, Eq. (14), for the time-dependent part of $V$, replacing the second term on the right-hand side of Eq. (3).

IV. DISCUSSION

Let us analyse the above results in some limiting cases. Since $\kappa(\omega) \to \infty$ both at high frequencies, $\omega \to \infty$, and at strong energy relaxation, $\tau_E \to 0$, it follows from Eq. (13) that $R_w \to 0$. We thus recover Eq. (3), an expected result, because in these limits the deviation from equilibrium is small. On the other hand, the nonequilibrium effect of quasiparticle kinetics becomes strong when $R_c \lesssim 2R_w$. The assumed linearization condition, however, restricts this inequality; it can be expressed in the form $\zeta|R_w/R_c| \ll 1$. Therefore, the linear theory allows $R_c \lesssim R_w$ only at small $\zeta$. It should be noted that, according to Eq. (13), $R_w$ can be enhanced due to resonances of the Josephson oscillations with collective modes at $\mathrm{Im}\,\kappa L = \pi n$, if they are not overdamped (if $\mathrm{Re}\,\kappa L \ll 1$). $R_w$ also increases at small enough $L$, when $2|\kappa|L \ll 1$. In practice $R_w$ may be varied over quite a wide range. In the Al wires of Ref. [6], $V_0 = 10^{-6}$ V, resulting in $\omega^{-1} \simeq 0.3\cdot10^{-9}$ s. Since $\omega^{-1} \sim \tau_E \sim 10^{-9}$ s [22], a regime intermediate between charge imbalance relaxation and generation of Carlson-Goldman modes will be realized, with $\kappa^{-1}$ of about several $\mu$m. Therefore, strong resonances in $R_w$ are not expected. One can evaluate $R_w \simeq 50\,\Omega$, which is much less than $R_c = 600\,\Omega$. Hence, in the considered parameter range Eq. (3) remains valid. In samples with higher polarizations $P$ and at larger spin currents through the contact, the Josephson oscillation frequency is expected to be large enough to produce noticeable collective resonances of $R_{eff}$ in Eq. (14).
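The frequency estimate quoted above follows directly from $\omega = 2eV_0/\hbar$; the short script below is our arithmetic check of that number.

```python
# Check of the frequency estimate for the Al wires of Ref. 6:
# omega = 2 e V0 / hbar with V0 = 1 microvolt.

e = 1.602e-19      # elementary charge, C
hbar = 1.055e-34   # reduced Planck constant, J*s
V0 = 1e-6          # V

omega = 2 * e * V0 / hbar
print(f"omega   = {omega:.2e} 1/s")
print(f"1/omega = {1 / omega:.1e} s")   # ~0.3 ns, as stated in the text
# With tau_E ~ 1 ns this gives omega*tau_E ~ 1: neither the pure charge
# imbalance limit nor the clean Carlson-Goldman limit applies.
```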
The above calculations of the Josephson voltage oscillations have been restricted to $\Delta \ll k_BT_c$. At larger gaps the oscillation amplitude is expected to decrease, because fewer excitations are available to compensate the supercurrent through the contact. On the other hand, in this range one should take into account that, besides quasiparticles, the spin transport through the contact can be associated with triplet components of Cooper pair states, which appear due to spin-dependent tunneling and the nonequilibrium spin polarization of the superconducting wires. Further studies are needed to understand the effect of such transport.
In conclusion, we considered an AC Josephson effect induced by a DC spin current through a contact whose transmittance depends on the spin orientation of the tunneling electrons. The oscillations of the voltage across the contact at zero electric current have, in a certain parameter range, a harmonic time dependence with a frequency proportional to the spin current. The amplitude and phase of these oscillations depend on the coupled kinetics of quasiparticles and condensate in the superconducting wires. The corresponding calculations have been performed within linearized kinetic equations at temperatures close to $T_c$. We predict that at high enough frequencies the measured AC voltage will show a resonance structure associated with the excitation of Carlson-Goldman modes.
"year": 2010,
"sha1": "7b2da95ced441da5bde25b596aba4443cd48b65e",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/1009.5790",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "7b2da95ced441da5bde25b596aba4443cd48b65e",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
Development and validation of a tool to evaluate the quality of medical education websites in pathology
Background: The exponential use of the internet as a learning resource, coupled with the varied quality of many websites, has led to a need to identify suitable websites for teaching purposes. Aim: The aim of this study is to develop and validate a tool that evaluates the quality of undergraduate medical educational websites, and to apply it to the field of pathology. Methods: The tool was devised through several steps: item generation, reduction and weighting; pilot testing; post-pilot modification; and validation. Tool validation included measurement of inter-observer reliability and generation of criterion-related, construct-related and content-related validity. The validated tool was subsequently tested by applying it to a population of pathology websites. Results and Discussion: Reliability testing showed high internal consistency reliability (Cronbach's alpha = 0.92), high inter-observer reliability (Pearson's correlation r = 0.88), an intraclass correlation coefficient of 0.85 and κ = 0.75. The tool showed high criterion-related, construct-related and content-related validity, with moderately high concordance with the gold standard (κ = 0.61): 92.2% sensitivity, 67.8% specificity, 75.6% positive predictive value and 88.9% negative predictive value. The validated tool was applied to 278 websites; 29.9% were rated as recommended, 41.0% as recommended with caution and 29.1% as not recommended. Conclusion: A systematic tool was devised to evaluate the quality of websites for medical educational purposes. The tool was shown to yield reliable and valid inferences through its application to pathology websites.
INTRODUCTION
The number of medical information websites is increasing. Such websites are of highly variable quality, are difficult to assess, and are published by a variety of bodies such as government institutions, consumer and scientific organizations, patients' associations, personal sites, health provider institutions, commercial sites, etc. [1][2][3] Without tools and methodologies for evaluating their content, the web's potential as a universe of knowledge could be lost. [4][5][6][7][8][9] Moreover, no clear guidelines have yet been set for medical teaching websites. [10] A need therefore exists for the development of an evaluation procedure that assists teachers in assessing the value of such websites.
AIMS
• To develop and validate a rating tool that evaluates the quality of undergraduate medical educational websites, and apply it to the field of pathology. • To enable teachers to better evaluate online medical education materials and hence better select the most appropriate websites as learning resources for their students, particularly students in problem-based learning curricula. • By promoting the application of agreed quality guidelines by all medical schools, the overall quality of medical educational websites will improve to meet the demanded quality, and the web will ultimately become a reliable and integral part of undergraduate medical education.
MATERIALS AND METHODS
The methodology of this research involved a systematic review of the literature on available tools, tool development, a tool validation process and tool application.
Tool development and validation are described in several steps, as follows:
1. Developing a draft tool (encompassing the criteria to be used in evaluating medical education websites).
2. Pilot testing of the tool.
3. Revising the tool according to the pilot tests.
4. Validating the tool.
These are detailed as follows
Developing a Draft Tool
The Medical Educational Website Quality Evaluation Tool (MEWQET) was developed as described below, using the principles and methodology of systematic review as outlined by Hamdy et al. [11]

Item Generation

A comprehensive literature review was carried out. Such a review served to clarify the nature and range of the content of the target construct. Existing tools and criteria for evaluating websites pertaining to education, medical education, general health-related educational websites and website quality in general were obtained by searching peer-reviewed medical journal websites and other websites as follows.
Search was done using the following search strings: "Quality Rating Instruments AND medical education", "(evaluation OR guidelines OR criteria) AND (website OR internet OR online OR www) AND medical education" and "(evaluation methods website quality)", "(reliability OR validity) AND (evaluation method OR questionnaire OR tool)" and variations of the following: "quality," "Internet," "World Wide Web", "rating," "ranking," "evaluate," "award," and "assess" and combinations thereof.
Additional resources were obtained by investigating references to the obtained articles, connections to relevant articles, author links and hyperlinks from the initial results.
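A small sketch of how such boolean search strings can be assembled systematically; the term lists are the ones quoted above, while the combination logic is our illustrative assumption rather than the authors' exact procedure.

```python
# Build the (quality-term, web-term, target) query combinations used in a
# search strategy like the one above, so no pairing is missed.

quality_terms = ["quality", "rating", "ranking", "evaluate", "award", "assess"]
web_terms = ["website", "internet", "online", "www"]
target = "medical education"

queries = [f'"{q}" AND ({" OR ".join(web_terms)}) AND "{target}"'
           for q in quality_terms]
for query in queries:
    print(query)
```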
Data Extraction

Criteria were extracted and compiled into groups.
Item Reduction
Initially, items that were repetitive or not relevant to medical education were removed; raw items were generated, and further modifications were made according to the researchers' experience of undergraduate medical education in general and pathology education in particular. Following the pilot testing, further item reduction was carried out, whereby additional redundant items were removed and item scaling was adjusted.
Item Scaling

Items were scaled either on a dichotomous basis or on a multilevel scale. The former is a yes/no answer; examples are items 1.1 and 1.4 of the tool. The latter is exemplified by items 1.2, 1.3, 1.5 and 1.6 [Appendix I]; such items cannot be answered with a simple yes or no.
Item Weighting
This was carried out based on already weighted items of pre-validated tools in the literature. Items that were modified or devised were weighted according to the researcher's own experience of the importance of each item to undergraduate medical education.
Pilot Testing of the Tool
The tool was piloted using a sample (10%) of the population of websites upon which the tool was ultimately to be used.
Modification of the Tool
The preliminary tool was applied to those websites, and further item reduction and modifications of weightage were carried out according to the results of the pilot study.
The grand total score was categorized as recommended, recommended with caution and not recommended.
Validating the Tool
All pathology teaching websites were rated according to MEWQET by one pathologist (the main researcher, referred to as the first observer). A second pathologist (referred to as the second observer) was recruited to evaluate a random sample of the websites using the tool. A 30% randomly selected sample of websites was used.
Training the Second Observer to Use the MEWQET Tool
The second observer acted as the trainee and the main researcher as the trainer. The second observer was given one random website to rate independently. The main researcher and the second observer then discussed the MEWQET tool using this first website as an example, with discussion and a few clarifications. Subsequently, the second observer rated another five randomly selected websites. Another discussion session followed, with further clarifications. The second observer then rated the remainder of the websites using the MEWQET tool. The concordance rate was calculated, and websites with discordant ratings were re-examined and discussed by both the first and second observers in order to reach a consensus.
Reliability Measures
The reliability of the tool was evaluated by comparing the first and second observer's scores using the MEWQET tool and measuring the internal consistency reliability (Cronbach's alpha), Pearson correlation and intraclass correlation coefficient.
Further reliability of the tool was evaluated by comparing the first and second observer's categories using the MEWQET tool. This was done using kappa statistics.
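To make the reliability computations concrete, the sketch below implements Cronbach's alpha and Cohen's kappa from first principles; the observer-score matrix and the rating arrays are hypothetical placeholders, since the raw ratings are not reproduced in the text.

```python
import numpy as np

def cronbach_alpha(scores: np.ndarray) -> float:
    """Internal consistency; `scores` has shape (n_websites, n_items)."""
    k = scores.shape[1]
    item_var = scores.var(axis=0, ddof=1).sum()   # sum of per-item variances
    total_var = scores.sum(axis=1).var(ddof=1)    # variance of total scores
    return k / (k - 1) * (1 - item_var / total_var)

def cohen_kappa(rater1, rater2, labels) -> float:
    """Chance-corrected agreement between two raters over `labels`."""
    idx = {lab: i for i, lab in enumerate(labels)}
    conf = np.zeros((len(labels), len(labels)))
    for a, b in zip(rater1, rater2):
        conf[idx[a], idx[b]] += 1
    n = conf.sum()
    p_obs = np.trace(conf) / n                            # observed agreement
    p_exp = (conf.sum(axis=0) * conf.sum(axis=1)).sum() / n**2  # chance agreement
    return (p_obs - p_exp) / (1 - p_exp)

# e.g. kappa over the three MEWQET categories for the 72 double-rated sites:
# cohen_kappa(obs1, obs2, ["recommended", "recommended with caution",
#                          "not recommended"])
```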
Criterion Related Validity Measures
Testing the tool against a gold standard: Approximately 50% of the websites (140 websites) were randomly selected for review and were independently rated by two expert pathologists (each with more than 20 years of pathology teaching experience) using their general judgment rather than the tool. This is considered the gold standard, as no other gold standard exists for this particular area of study. Gold standard one and gold standard two (referred to as GS1 and GS2, respectively) were blinded to the details of the study, the MEWQET tool, the nature of the items within the tool and the methodology. Both gold standards independently ranked the websites as "recommended," "recommended with caution" or "not recommended" for educational purposes and then reached a consensus on the discordant cases. The GS consensus results were compared with the outcome of the MEWQET tool to determine its sensitivity, specificity, and positive and negative predictive values for identifying websites of good quality suitable for teaching purposes.
Content Validity Measures
Comparing the MEWQET tool with general website rating tools, namely Google PageRank and Alexa Traffic Rank.
Two common general website ranking tools, Google PageRank and Alexa Traffic Rank, were accessed via www.google.com and www.alexa.com, respectively. Their respective toolbars were used to automatically rank every website accessed for the study. The ranks from both Google PageRank and Alexa Traffic Rank were compared with the tool consensus categories using kappa statistics.
Gold Standard Rating of the Tool

The tool was revealed to both gold standards after they finished their blinded rating. GS1 and GS2 rated each item and sub-item as highly important (HI), important (I) or not important (NI). The weightage of each item and sub-item was also judged. In addition, the opinion of both gold standards was solicited verbally in a discussion session that followed the completion of the evaluation process.
Construct Validity Measures

a. The relationship of the gold standard consensus with the actual score of the tool: The scores of the tool were compared with the rating of the gold standard consensus. The mean score for each category was calculated and compared with the tool consensus score.
b. The relationship of the gold standard consensus with both Google PageRank and Alexa Traffic Rank: Ranks from both Google PageRank and Alexa Traffic Rank were compared with the gold standard consensus categories using kappa statistics.
Further validation of the tool was sought by applying it to one of the well-known, robust websites amongst pathologists, namely http://www.pathologyoutlines.com.
Application of the MEWQET Tool
This was applied to pathology websites according to the following eligibility criteria.
Inclusion Criteria
All free-of-charge, English-language websites for pathology education and teaching, online image banks and interactive tutorials were included.
Exclusion Criteria
Websites for other disciplines, websites for research or experimental pathology, websites of journals or periodicals of pathology, websites of databases and search engines, online manuals and textbooks, and online dictionaries and glossaries were all excluded.
Sampling Method
All pathology education websites found on the web using Google (the most widely used search engine) and all links from official pathology-related websites were used (virtually a 100% sample). The search through www.google.com was done using the following search string: "Pathology and education." The first 50 hits plus all related links were taken. The search ended on 6th June 2008.
Statistical Analysis Used
All statistical analyses were carried out using SPSS software, version 17.0.
Tool Development
Items were generated as per the methodology described above and then compiled into groups. Items were then reduced by removing those which were repetitive or not relevant to medical education. The raw items generated in this way comprised 19 items and a total of 124 sub-items, with a maximum global score of 312 points. Some items were modified, some were split into two separate items, and some were merged into one item depending on their relative importance. This modification resulted in 12 major items and a total of 74 sub-items, with a maximum global score of 127 points, categorized as poor, weak, fair, good, very good and excellent. Piloting was performed on a 10% random sample, which comprised 30 websites. The preliminary tool was applied to those websites, and further item reduction and modifications of weightage were carried out, as well as the addition of a few important clarifications termed "hints".
This resulted in the final version of the tool, which comprises 12 major items and 42 sub-items with a maximum global score of 100 points [Appendix I]. The global score was categorized arbitrarily into three bands: recommended (score >65), recommended with caution, and not recommended.
Categorization was changed from a six-tiered system to just three tiers for simplification, as the former was found cumbersome to apply during the pilot period and would have proven complicated for comparison purposes. The final version of the tool was then used for the study [Appendix I: MEWQET].
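A minimal sketch of how this final categorization could be coded; note that only the upper cut-off (>65) is stated in the text, so the lower boundary used here is a purely hypothetical placeholder.

```python
def categorize(score: float, lower_cutoff: float = 50.0) -> str:
    """Map a MEWQET global score (0-100) to a recommendation category.

    The >65 threshold comes from the text; `lower_cutoff`, separating
    'recommended with caution' from 'not recommended', is assumed.
    """
    if score > 65:
        return "recommended"
    if score > lower_cutoff:  # assumed boundary, not stated in the text
        return "recommended with caution"
    return "not recommended"
```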
Validating the Tool
A total of 278 websites were evaluated and categorized by the main researcher (first observer), after the exclusion of 17 websites due to inaccessibility at the time of application of the tool. Following categorization by the second observer and comparison of the categories, 61 were found to be concordant and 11 discordant [Figure 1]. Discordant websites were reviewed together by the first and second observers, and consensus was reached in all cases (100%).
Criterion Related Validity Measures
When the categories of tool scores were compared with the gold standard categories, the level of agreement was found to be substantial (κ = 0.61) [Table 1] (level of agreement as indicated by Landis and Koch, 1977).
In order to determine the sensitivity, specificity, positive predictive value and negative predictive value, the recommended-with-caution group was combined with the recommended group and compared with the not-recommended group. This showed that the tool has 92.2% sensitivity, 67.8% specificity, 75.6% positive predictive value and 88.9% negative predictive value [Tables 3 and 4].
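These four figures derive from a standard 2×2 comparison against the gold-standard consensus, with "recommended" and "recommended with caution" pooled as the positive class. A sketch of the computation, with the cell counts left as placeholders since the underlying table is not reproduced here:

```python
def diagnostic_accuracy(tp: int, fp: int, fn: int, tn: int) -> dict:
    """tp/tn: tool agrees with the gold standard (positive/negative class);
    fp/fn: tool disagrees. Positive class = recommended (with caution)."""
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "ppv": tp / (tp + fp),
        "npv": tn / (tn + fn),
    }
```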
In addition, both gold standards expressed the opinion verbally that the items in the tool can act as an aide-mémoire; had they not been put in a checklist format, they could easily have been overlooked. This opinion was obtained during a discussion session that followed the completion of the evaluation process.
Construct Validity Measures
The relationship of gold standard consensus with the actual score of the tool.
The rating of the gold standard consensus was compared with the scores of the tool, and the mean was calculated. The mean scores for each category were as follows: recommended, 67.8; recommended with caution, 59.9; and not recommended, 49.6.
The Relationship of Gold Standard Consensus with Both Google Pagerank and Alexa Traffic Rank
To examine whether this tool performs better than Google or Alexa, each of the latter was compared against the gold standard in the same way as the tool had been.
Comparing both Google PageRank and Alexa Traffic Rank with the gold standard categories using kappa statistics showed levels of agreement of κ = −0.007 (P = 0.92) and κ = 0.038 (P = 0.53), respectively.
Upon seeking further validation of the tool by applying it to one of the well-known, robust websites amongst pathologists, namely http://www.pathologyoutlines.com, the following was found: this website scored 79% and hence falls in the recommended category [Appendix II].
This gives further support to the validity of the MEWQET tool. Pathologyoutlines.com obtained the maximum score in 34 out of 42 sub-items (80.9% of the total sub-items). The website did not obtain the highest score in the remaining eight sub-items, which are described as follows.
In sub-item 1.3, the maximum score is 5 but the website scored only 3, because the target audience is physicians rather than medical students. In sub-item 1.6, it scored 0 out of a maximum of 3, as the domain of this website is .com (denoting a commercial website, as opposed to .net or .org). In sub-item 2.3, which relates to relevance to medical students with respect to their maturity and cognitive abilities, it scored 3 out of a maximum of 5. This is because the website is geared towards postgraduate professionals, with the information presented swiftly in a compact bulleted format. This may be perceived as "hard to understand" by medical students, as it does not give the elaborative explanatory text that the undergraduate medical student might need. In sub-item 4.1, it scored 1 out of a maximum of 3, because the date of last update of some chapters of the website was longer than a year ago. In sub-item 10.2, the website scored −2, as it did contain flashing, scrolling, or otherwise visually distracting graphic and text displays. In sub-items 11.2 and 11.3 (interactivity), the website scored 0 in each, against maximum scores of 3 and 4, respectively. This is because the website has neither activities that are challenging, interesting and appealing for the intended learner, with prompt feedback whenever needed, nor any provision for relevant action on the part of the learner. Lastly, in sub-item 12.2, it scored 1 out of a maximum of 4 because it did carry commercial advertisements.
Application of the Tool
The search methodology described above yielded 414 websites [ Figure 3].
The quality of pathology websites for undergraduate education was measured using the validated MEWQET tool. Out of a total of 278 websites, the tool identified 83 websites as recommended (29.9%), 114 as recommended with caution (41.0%) and 81 as not recommended (29.1%) [Figure 4]. In other words, the tool distinguished around two-thirds of the websites as suitable and one-third as not suitable. The full list of websites and their ratings is outlined in Appendix III.
The websites were ranked according to the actual scores of the tool within each category. The top 10 recommended websites and the bottom 10 not recommended websites are displayed in Tables 5 and 6 respectively.
DISCUSSION
The exponential advances of online technologies have led to significant enhancement of medical education.
The study of medicine depends on analysis and synthesis of a vast amount of information that includes highly visual and complex data. This is particularly true for a field like pathology, which is highly dependent on the interpretation of complex visual images, opening the way for the internet to emerge rapidly as an attractive method for learning. [12][13][14] With the growing popularity of the internet as a source of information in general and for educational purposes in particular, and the concern over lack of proper scrutiny for quality, a need has arisen to devise instruments whereby such material is systematically evaluated for its quality and potential use by students. [15]

In contrast to the ever-growing websites and internet usage around the world, literature pertaining to evaluating the quality of web-based materials has been scarce. [1,2] Most website-evaluating tools found in the literature pertain to evaluating health-related websites with the consumer, lay person or patient in mind. [9,[16][17][18][19][20] Other evaluating tools were devised by librarians concerned about the quality of published information on the web in general. [21][22][23] Tools for evaluating education-related websites are also available, [24,25] as well as those related to undergraduate medical education. [4,26] Despite many attempts to devise new tools for examining the quality of health information on the internet, there does not appear to be any universally accepted, reliable website rating tool. Of the attempts to devise education-related evaluation tools, the criteria lacked comprehensiveness and well-described development procedures. [27][28][29] Finding tools that had already been validated was challenging, as a number of tools were devised and applied without proper scrutiny of their reliability or validity. [2,[30][31][32] Studies have shown that finding reliable sources of information on the web can be confusing and difficult. [33,34]

Tool Development

The tool of this study was developed by pooling together "standards" or "items" from a variety of tools identified via extensive literature searches. Such tools encompassed those designed for health websites, purely educational websites, websites in general from a librarian perspective, or medical educational websites.
Items of the Tool
The standards or items (Authorship and Credibility, Aim, Scope and Intended Audience, Comprehensiveness, Currency of Information, Content, Navigability, Speed, Access, Hyper-links, Graphics and Design, Interactivity and Disclosures) that were found to be essential or important amongst all the tools searched were included in this tool.
Agencies and institutions concerned with health consumer issues on the net, in their publications regarding evaluating online health-related information, also considered aim, scope and intended audience, credibility, content, authority and reputation, relevance, coverage, accuracy, currency, accessibility, ethics, design and layout, disclosure, links, interactivity and ease of use in their guidelines. [9,[16][17][18]20,35] This was further supported by Bernstam, Shelton, Walji and Meric-Bernstam (2005), who reviewed 80 instruments available to assess the quality of health information on the web and found the above to be common elements in such instruments. [8] Other investigators had similar findings. [1,2,5,34,36,37] General and medical library resources considered authorship, accuracy, currency, coverage, design, referral to other resources, purpose, audience, value of content and navigation as essential items in their online publications regarding evaluating internet resources. [21][22][23] Tools assessing educational materials on the web were devised by several investigators, outlining authorship, accuracy, intended audience, clarity, aim, comprehensiveness, interactivity, navigability and scope in their evaluation checklists. [24,25,38] The importance of active learning, and hence of interactivity, as one of the standards for educational websites has also been emphasized; [10] interactivity was likewise included in our tool. In addition, similar standards were considered in guidelines set for medical and health information sites [39] as well as in medical educational website evaluations. [4,26] Some studies evaluated online educational materials for practicing pathologists and outlined accuracy, ease of navigation, relevance, updates and completeness among their standards. [40]
Lack of Existing Comprehensive Tool Pertaining to Pathology Education
Review of the literature did not reveal any study pertaining to pathology education that incorporated a comprehensive review of earlier tools, or systematic item generation with item listing and extraction, as was carried out in this study. In this study, all earlier tools pertaining to evaluating health education, general librarianship, education, practice of the discipline and medical education were thoroughly reviewed; items were generated from these tools, listed and categorized. This was followed by a process of eliminating repeated or redundant items, reducing the items to those most relevant to undergraduate medical education.
Item Weightage
Item weightage was not thoroughly discussed in the majority of studies encountered; [34] however, in the study of McInerney and Bird (2004), weightage was judged by the level of importance for website quality. [37] In this study, items and sub-items were assigned weightage according to the portions of the literature that mentioned item weightage, or from the researcher's own experience regarding pathology education and the importance of the various items to medical education. The Net Scoring® criteria to assess the quality of health internet information, for example, grouped 49 criteria into eight groups, each criterion carrying a weight: essential criteria rated from 0 to 9, important criteria from 0 to 6 and minor criteria from 0 to 3, according to the relevance of the criterion to the core item, which is the educational value of the resource. In this study, a similar approach was followed. [19] This was further refined following the pilot stage of the study. This approach strengthened the methodology of the study, as it followed accepted fundamentals of tool development technique. Such item weightage suffered from the inevitable disadvantage of relying on certain assumptions; this was overcome, however, by the systematic tool validation that followed.
Tool Validation
Very few tools have undergone rigorous validation. Of those that have, some show good validity and reliability, [1,8,9,34] while others show poor validation measures, including poor inter-rater agreement, across a wide range of tools. [2,30,31] Inter-observer agreement was found to improve after a period of training of the second rater, or the rater who did not develop the tool. For example, McInerney and Bird (2004) found that the Spearman's rho correlation improved from 0.775 to 0.985 after such training. [37] In this study, training of the second rater was incorporated into the methodology of tool validation from the beginning.
The results of this study show high reliability, suggested by statistically significant and quantitatively large values of kappa, the intraclass correlation coefficient, the Pearson correlation coefficient and Cronbach's alpha.
Analyzing the inter-observer agreement by category revealed that the not-recommended category showed 100% agreement and the recommended category 94% agreement; no website rated as recommended by one observer was rated as not recommended by the other, the disagreement being confined to the recommended and recommended-with-caution groups. In the recommended-with-caution group, the discordance was spread between both the recommended and not-recommended categories. In other words, there was no more than one category of difference in the discordant group [Table 1].
According to Bland and Altman (1997), the minimum requirement for satisfactory reliability as measured by Cronbach's alpha is 0.70. In our study, the Cronbach's alpha coefficient is 0.92, indicating the high reliability of our tool. [41] Comparison of the tool consensus score with the gold standard consensus showed high sensitivity, moderately high specificity and high positive and negative predictive values. In other words, the tool proved able to pick out the most suitable websites for medical education.
The MEWQET tool showed higher sensitivity than specificity. This was expected, as it was designed to pick out most of the suitable websites for medical education, even though some of the websites selected may not be suitable. The high negative predictive value denotes an added advantage of the tool, as it indicates that a high proportion of "not recommended" websites are correctly assigned as not recommended.
No similar comparison was found in the literature in which the sensitivity, specificity, and positive and negative predictive values of tools were measured. This may be because most tools reported their results in multi-tiered categorization systems (such as excellent, very good, good, poor), whereas in our system we were able to combine the two categories "recommended" and "recommended with caution" and measure them against the third category, "not recommended," in a binary fashion. This approach gave added depth to the meaningfulness of the statistical results in that it focused on what matters most: whether the tool is able to pick out the most suitable, most recommendable websites or not. In addition, no instrument used a gold standard in the manner carried out in this study.
Very few instruments used the gold standard approach as described in this study. Some used a "gold standard" for specific information on the web, such as information on the management of cough, or the reliability of information about miscarriage measured against set criteria established by the Royal College of Obstetricians and Gynaecologists, for example. The gold standard in this context consisted of established guidelines about a specific disorder, not a broad topic such as pathology education. [2,32]

Since Google PageRank and Alexa Traffic Rank are general rating tools designed basically to measure how popular a website is by tracking only traffic to and from it, it was anticipated that they would not correlate with the tool, which was applied to pathology educational websites; the general popularity ranking of these websites by Google and Alexa, out of the entire population of websites available on the web, was not expected to be favorable. The results of this study showed no correlation between the tool and Google or Alexa, which adds to the content validity of the tool. To support this further, the gold standard rating was compared with that of each of Google and Alexa in the same way as was done for the tool, and a similarly negative correlation was found between the gold standard rating and both the Google and Alexa ranking systems. It is also known that general website ranking tools can be subject to manipulation, spoofing and spamdexing, inflating the real popularity of websites. [42,43]

High content validity of the tool was further supported by the gold standard evaluation, in which all items were rated as either highly important or important and no item was rated as not important. In addition, up to 93% of sub-items were rated as either highly important or important, and the gold standards agreed with the weightage of all items.
Positive correlation was found between the gold standard consensus with the actual score of the tool while an inverse correlation was found between the gold standard consensus with both Google Page Rank and Alexa Traffic Rank thus supporting the high construct validity of the tool.
When Griffiths, Tang, Hawking and Christensen (2005) compared Google PageRank with a tool designed to find good web-based information about depression, poor correlation was found. [36] In addition, no correlation was found between high-scoring health-related websites and popularity ranking by either Google PageRank or Alexa Traffic Rank, as studied by Zeng and Parmanto (2004). [44]

Application of the Tool

The MEWQET tool was then applied to a population of pathology websites. The results followed a roughly normal distribution pattern, which is expected, as the majority of websites were recommended with caution, fewer websites were recommended, and an approximately equal number were deemed not recommended by the tool.
From the experience of the investigator, the websites picked out by the tool as recommended are websites already known to many reputable medical schools as highly suitable to give to students for further reading or as references. Moreover, evaluation of the website http://www.pathologyoutlines.com, chosen as an example of a well-reputed, trusted and robust website, resulted in a rating of "recommended" by the tool. This adds evidence in support of the validity of the tool.
In Parikh et al.'s study of validated websites on cosmetic surgery, 89% of the websites studied failed to reach an acceptable standard, while Frasca et al., who evaluated anatomy sites on the internet in 1998 and again in 1999, found a significant increase in the quality of anatomy websites. [33,29] It is therefore difficult to compare with other studies, as each uses a different tool with different validation methods.
Many studies devising rating tools apply their standards or criteria directly without subjecting their tools to rigorous validation. [27,38] This study, however, demonstrated a thorough and systematic procedure of tool development and validation and established a good example of developing an effective evaluation tool for medical educational websites.
It is of note that even though the tool was developed using pathology educational websites, it is applicable to any medical education website.
How to Use the MEWQET Tool
It is proposed that the tool be used by educators to vet websites, or sections/subsections of websites, for use by medical students to help fulfil specific learning objectives. Once a website is vetted, the educator should give its link as part of the reading material or post it on the intranet or virtual learning environment of the college. Students' feedback about the website should be encouraged. This exercise should be done at the beginning of each learning module. This is supported by other studies, which found that students, even though they enjoy the flexibility of online learning, [45,46] reported that the content was too much for the allotted time. Instructors are therefore expected to carefully scrutinize all content material before recommending it to students. [45,46] By promoting the application of agreed quality guidelines by all medical schools, the overall quality of medical educational websites could improve to meet the demanded quality, and the web might ultimately become a reliable and integral part of undergraduate medical education.
CONCLUSIONS
Many medical education websites are available on the web with unknown quality, and similarly many unvalidated website evaluation instruments are also available.
It is crucial not to rely uncritically on any material found on the internet for educational purposes, and instead to critically and systematically evaluate websites for medical education. The tool developed in this study fills a vacuum that exists in this area; further scrutiny is needed for this tool, and for other similar tools available now or in the future, to ensure optimum choice of the best websites for educational purposes.
The ultimate objective was to increase the awareness of website authors/owners not to publish on the net without enough scrutiny; otherwise, readers will simply stay away from their websites. Therefore, the results of this study could serve to increase the quality of materials published on the internet and ultimately to increase the reliability of the internet as an educational resource.
"year": 2013,
"sha1": "fc13ac563b0999880c7c25c28c29f3508e066c8e",
"oa_license": "CCBYNCSA",
"oa_url": "https://doi.org/10.4103/2153-3539.120729",
"oa_status": "GOLD",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "c05714720a75d5feefb42003bea2828475ad77a8",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Direct Effect of 10-Valent Conjugate Pneumococcal Vaccination on Pneumococcal Carriage in Children, Brazil
Background 10-valent conjugate pneumococcal vaccine/PCV10 was introduced in the Brazilian National Immunization Program during 2010. We assessed the direct effectiveness of PCV10 vaccination in preventing nasopharyngeal/NP pneumococcal carriage in infants. Methods A cross-sectional population-based household survey was conducted in Goiania, Brazil, from December/2010-February/2011, targeting children aged 7–11 m and 15–18 m. Participants were selected using systematic sampling. NP swabs, demographic data, and vaccination status were collected from 1,287 children during home visits. The main outcome and exposure of interest were PCV10 vaccine-type carriage and dosing schedules (3p+0, 2p+0, and one catch-up dose), respectively. Pneumococcal carriage was defined by a positive culture, and serotyping was performed by Quellung reaction. Rate ratio/RR was calculated as the ratio between the prevalence of vaccine-type carriage in children exposed to different schedules and in children unvaccinated for PCV10. Adjusted RR was estimated using Poisson regression. PCV10 effectiveness/VE on vaccine-type carriage was calculated as (1-RR)*100. Results The prevalence of pneumococcal carriage was 41.0% (95%CI: 38.4–43.7). Serotypes covered by PCV10 and PCV13 were 35.2% and 53.0%, respectively. Vaccine serotypes 6B (11.6%), 23F (7.8%), 14 (6.8%), and 19F (6.6%) were the most frequently observed. After adjustment for confounders, children who had received the 2p+0 or 3p+0 dosing schedule presented a significant reduction in pneumococcal vaccine-type carriage, with PCV10 VE equal to 35.9% (95%CI: 4.2–57.1; p = 0.030) and 44.0% (95%CI: 14.–63.5; p = 0.008), respectively, when compared with unvaccinated children. For children who received one catch-up dose, no significant VE was detected (p = 0.905). Conclusion PCV10 was associated with high protection against vaccine-type carriage with 2p+0 and 3p+0 doses for children vaccinated before the second semester of life. The continuous evaluation of carriage serotype distribution is likely to be useful for evaluating the long-term effectiveness and impact of pneumococcal vaccination on serotype reduction.
Introduction
Streptococcus pneumoniae (pneumococcus) remains a major cause of serious bacterial infection worldwide, especially in infants living in developing regions [1,2].
Children are the major reservoir of this pathogen. Pneumococcal carriage is highest during the first two years of life, and its frequency varies according to geographic location and socioeconomic pattern [3,4]. Individual nasopharyngeal (NP) carriage or colonization is a prerequisite for pneumococcal disease and the only source of transmission [5].
Several studies have reported the effectiveness of PCV vaccines in reducing both invasive and non-invasive disease in children and adults [9][10][11][12]. Following PCV7 introduction, a reduction in pneumococcal carriage of PCV7 serotypes was described, as well as an increase in colonization by non-PCV7 serotypes [13,14]. Evidence has indicated that overall pneumococcal carriage rates are unaltered after a period of PCV7 introduction, since decreases in vaccine serotypes are counteracted by increases in carriage of certain non-vaccine serotypes [14][15][16][17][18][19].
In 2010, PCV10 was introduced into Brazil's National Immunization Program (NIP), with universal access for all children [20]. The early impact of vaccination on the reduction of pneumonia hospitalization in children in Brazil was recently reported [21]; however, the effectiveness of PCV10 on nasopharyngeal carriage has not yet been evaluated. Data describing the impact of PCV10 vaccination on carriage are scarce and to date no significant effects of PCV10 on carriage have been described [22,23].
Although pneumococcal diseases are found in every age group [24], the primary reservoir of pneumococcus are children under five years old [2]; because of that, the effect of PCV10 vaccination on pediatric carriage and serotype distribution is critical.
The effect of PCV vaccination on carriage is of interest, since it could play a significant role in the overall vaccine impact on morbidity and mortality through minimizing vaccine serotype transmission to the non-vaccinated population.
Assessing PCV effect on carriage is not an easy task. Most studies have taken advantage of clinical trials to assess pneumococcal colonization as an outcome of interest. After vaccine introduction, one of the proposed methods to assess vaccine effectiveness on pneumococcal carriage is repeated sampling of the same individuals over time, which is expensive and invasive [24]. Recently, a cross-sectional study design was proposed to estimate vaccine effectiveness on carriage [24].
With the aim of assessing the direct effectiveness of PCV10 on pneumococcal carriage vaccine types, we conducted a crosssectional study, during the first year of vaccination. We also investigated whether there was any difference of vaccine effectiveness between the different dosing schedules used in Brazil's NIP during PCV10 introduction.
Study Design
A cross-sectional population-based household survey was conducted. We measured the PCV10 vaccination effect by comparing vaccinated and unvaccinated individuals, all of whom were covered by the same voluntary vaccination program [25,26]. The main outcome of interest was pneumococcal vaccine serotype carriage in children. The critical parameter of interest was PCV10 uptake, assessed as number of vaccine doses. The prevalence of pneumococcal carriage was estimated. Children with carriage of vaccine serotypes ("cases") were compared with children who were negative for vaccine-type carriage ("non-cases"); non-vaccine serotype and non-pneumococcal carriage were combined within the same category.
Study Location
The study was conducted in Goiania Municipality (approximately 1,300,000 inhabitants) in the Central-West region of Brazil. Goiania is a highly urbanized municipality, with a high rate of pneumococcal carriage and high vaccination coverage in infants [27]. The study period was from December 2010 to February 2011.
Schedules Recommended by the Brazilian Immunization Program
PCV10 was made available free of charge to all children in Brazil through its introduction into the Public Healthcare System (Sistema Único de Saúde, SUS) during March to October of 2010 in all municipalities. Prior to that, PCV7 was available through the private healthcare system. In addition, SUS provided the vaccine for selected high-risk individuals through its special immunization activities [20].
Three different schedules were used during PCV10 introduction in Brazil, according to the child's age at the first dose: a three-dose primary series plus a booster (3p+1) for infants starting vaccination at 6 months of age or younger, a two-dose primary series plus a booster (2p+1) for children aged 7–11 months, and a single catch-up dose for children aged 12–23 months [20]. For the country as a whole, the estimated vaccine coverage of a complete 3-dose series during the first year of the vaccine introduction was approximately 90% [28].
PCV10 Introduction and Schedules Evaluated at the Present Study
PCV10 vaccine was introduced in Goiania Municipality on June 14th, 2010. The present study was conducted early during the first year of PCV10 introduction as routine immunization. The following PCV10 schedules were evaluated: 3p+0 (for children aged ≤6 m); 2p+0 (for children aged 7–11 m); and one catch-up dose (for children aged 12–23 m). Booster dose schedules (3p+1 and 2p+1) were not evaluated, as the time period from vaccine introduction to case enrollment was short (6–8 months) and enrolled children had not had the opportunity to be exposed to the booster dose.
Study Population and Sample
For recruitment, the study targeted children of two age-groups: 7-11 months (7-11 m) and 15-18 months (15-18 m). The rationale behind that choice was to assess vaccine effectiveness considering the primary dose series as well as catch-up schedules used by NIP during the vaccine introduction period.
A list of all children aged 7-11 m and 15-18 m resident in Goiania Municipality was obtained from the National Information System of Live Births, with information on each child's gender, date of birth, address, and mother's name. That list was sorted by gender, district of residence (as obtained from the address) and age. A systematic sampling process was put in place, so that a proportional stratified random sample of children was obtained. Sampled children were assessed through home visits.
The sample size was estimated at 1,287, considering a cross-sectional design and an estimated percentage of pneumococcal carriage among non-vaccinated children of 58%. For the analysis of effectiveness, this sample size would provide the study with a power of 80% to detect as statistically significant (at the 5% level) a relative risk of 0.67 or lower, for a range of frequency of exposure among controls from 20% to 70% [29,30]. The prevalence of pneumococcal carriage was obtained from a previous survey conducted in Goiania [27]. We estimated that about 65% would meet the study's eligibility criteria; therefore a total of 1,906 children were initially sampled. The assumed relative risk was based on data reported from a PCV clinical trial among children 9 months of age [31].
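As an illustration (not necessarily the authors' exact calculation, whose parameterization is given only in outline), the standard two-proportion sample-size formula can be coded as follows, with the 58% baseline carriage and target relative risk of 0.67 plugged in:

```python
from scipy.stats import norm

def n_per_group(p_ref: float, rr: float, alpha: float = 0.05,
                power: float = 0.80) -> float:
    """Sample size per group to detect relative risk `rr` when the outcome
    prevalence in the reference (unvaccinated) group is `p_ref`
    (two-sided test, standard two-proportion formula)."""
    p1, p2 = p_ref * rr, p_ref
    p_bar = (p1 + p2) / 2
    z_a, z_b = norm.ppf(1 - alpha / 2), norm.ppf(power)
    num = (z_a * (2 * p_bar * (1 - p_bar)) ** 0.5
           + z_b * (p1 * (1 - p1) + p2 * (1 - p2)) ** 0.5) ** 2
    return num / (p1 - p2) ** 2

print(n_per_group(0.58, 0.67))  # carriage 58% among unvaccinated, RR 0.67
```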
Inclusion and Exclusion Criteria
Sampled children were eligible to participate in the study if they lived in Goiania, had no prior antibiotic use during the previous seven days, and whose parent or legal guardian approved their child's participation in the study with supplied informed consent signature.
Exclusion criteria included absence of the child's parent or legal guardian during the home visit, lack of vaccination card, failure to collect NP swabs, and receipt of pneumococcal vaccines from different manufacturers.
Ethical Considerations
Written informed consent was obtained from the child's parents/legal guardians. The protocol was approved by the Ethics Committee of the Federal University of Goias (protocol #145/2010).
Data Collection
During home visits, a standardized questionnaire was administered to the legal guardians of all children included in the study. Data on sociodemographic characteristics and factors potentially associated with PCV10 effectiveness on vaccine-type carriage, such as day-care attendance, number of children living in the same household, mother's schooling, and access to private health insurance, were collected. PCV10 vaccination dates were obtained from vaccination cards. For 9.1% of children, vaccination cards were not available; thus, vaccination dates were obtained from Goiania's Vaccination Online Database, managed by the Municipal Data Processing Company, which is a comprehensive database with high completeness, updated by vaccination workers at the moment of vaccine administration in vaccination services.
Nasopharyngeal Specimen Collection
Nasopharyngeal (NP) swabs were collected from all children included in the study by trained research assistants during home visits. Children were swabbed only once. Specimens were collected with flexible perinasal calcium alginate swabs (Fisherbrand, Fisher Scientific, Pittsburgh, PA), which were placed into Eppendorf tubes containing skim milk/tryptone/glucose/glycerol (STGG) transport medium [3]. The wired portion of the swab was cut at the top level of the tube, allowing the calcium alginate portion of the swab to drop into the vial. The samples were sent immediately after collection to the Bacteriology Laboratory of the Federal University of Goias, Brazil, vortexed to disperse the organisms from the swab, and frozen at −80°C.
Specimen Processing
Cultures for S. pneumoniae were performed for all STGG samples. The frozen vials containing the NP swabs in STGG were thawed at room temperature and then vigorously vortexed for 20-30 s. A 200 µL aliquot was inoculated into 6 mL of THY broth (5 mL Todd-Hewitt broth with 0.5% yeast extract plus 1 mL of rabbit serum) for enrichment culture and incubated for 6 h at 37°C in a 5% CO2 incubator before conventional culture on a blood agar plate [32]. Alpha-hemolytic colonies were tested for optochin susceptibility and bile solubility. Colonies with different morphologies from each sample were analyzed separately. Confirmed pneumococcal isolates were frozen and sent to the Streptococcus Laboratory, Centers for Disease Control and Prevention (CDC), for serotyping by Quellung reaction with CDC-prepared antisera. Non-typeable (NT) pneumococcal isolates were tested for the presence of 40 different capsular biosynthetic loci by conventional multiplex PCR (cmPCR), with 8 sequential reactions, which identify a total of 22 serotypes and 18 small serogroups (see http://www.cdc.gov/ncidod/biotech/strep/pcr.htm for latest updates), as previously described [32].
Definitions
Age-group at 1st PCV10 dose: we used the date of birth and the date at enrollment to assign children to each age-group. Children's ages were retrospectively calculated based on the date of the first received PCV10 dose. For unvaccinated children, ages were retrospectively calculated based on the date of PCV10 introduction in Goiania Municipality (June 14th, 2010). Children were then classified accordingly into three age-groups: ≤6 m, 7-11 m and 12-18 m, since each age-group had a different recommended number of doses (3p, 2p and one single catch-up dose) during the first year of PCV10 introduction.
PCV10 dosing schedules: Children were categorized according to the age-group at 1st PCV10 dose and the number of doses administered: a) unvaccinated: children who did not receive any dose, or those who received only one dose before 12 months of age (for whom two or three doses were recommended); the rationale behind this assumption is based on results of efficacy trials of Hib vaccine on carriage, which showed that high serum anti-capsular antibody levels are needed for prevention of mucosal colonization [33]; b) 2p+0: children who received two doses at any time during the first year of life; c) 3p+0: children who received three doses at any time during the first year of life; d) one catch-up dose: children who received one dose at or after 12 months of age.
In order to differentiate the booster dose from primary doses, we estimated the interval between doses; any dose administered with an interval of at least 6 months was considered a booster dose.
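These definitions translate directly into a classification rule. Below is a sketch (the date fields and the 183-day cut-off used for "at least 6 months" are illustrative choices, not taken from the paper):

```python
from datetime import date

def classify_schedule(dose_dates: list, birth: date) -> str:
    """Assign a child to 'unvaccinated', '2p+0', '3p+0' or 'catch-up'
    following the definitions in the text."""
    doses = sorted(dose_dates)
    # drop booster doses: any dose given >= 6 months after the previous one
    primary = doses[:1]
    for d in doses[1:]:
        if (d - primary[-1]).days < 183:
            primary.append(d)
    if not primary:
        return "unvaccinated"
    age_first_months = (primary[0] - birth).days / 30.44
    if age_first_months >= 12:
        return "catch-up"            # one dose at or after 12 months of age
    in_first_year = [d for d in primary if (d - birth).days / 30.44 < 12]
    if len(in_first_year) >= 3:
        return "3p+0"
    if len(in_first_year) == 2:
        return "2p+0"
    return "unvaccinated"            # a single dose before 12 m counts as unvaccinated
```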
Pneumococcal carriage: children who had a positive NP swab culture for S. pneumoniae. Children were classified into the following pneumococcal carriage categories: 1) PCV10 serotype pneumococcal carriage (vaccine-type): children who were culture-positive for any of the pneumococcal serotypes 1, 4, 5, 6B, 7F, 9V, 14, 18C, 19F, or 23F; 2) Non-vaccine serotype pneumococcal carriage (non-vaccine type): children culture-positive for any serotype other than the ones included in PCV10; 3) non-pneumococcal carriage: culture-negative for pneumococci (regardless of culture results for other bacteria).
Data analysis
Statistical analyses were performed using the STATA software, version 12.0 (Stata Corp, College Station, TX). The distribution of children according to age-group at recruitment and at 1st PCV10 dose was compared.
The prevalence of pneumococcal carriage among children was estimated considering the number of children colonized by S. pneumoniae in the numerator, and the number of children surveyed for whom NP swabs were collected and processed in the denominator.
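The reported prevalence and its interval correspond to a standard Wald confidence interval; a quick sketch (the carrier count of 528 is back-calculated from the reported 41.0% of 1,287 children and is therefore approximate):

```python
import math

def prevalence_ci(carriers: int, n: int, z: float = 1.96):
    """Point estimate and Wald 95% CI for a prevalence."""
    p = carriers / n
    half = z * math.sqrt(p * (1 - p) / n)
    return p, p - half, p + half

print(prevalence_ci(528, 1287))  # ~ (0.410, 0.383, 0.437), i.e. 41.0% (38.3-43.7)
```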
Pneumococcal serotypes identified by Quellung reaction were described (including those present in the PCV10 and PCV13 compositions). The main outcome of interest was pneumococcal carriage of PCV10 vaccine types; the main exposure variable was the PCV10 dosing schedule. Unvaccinated and vaccinated groups were compared regarding the following variables potentially associated with colonization: age-group (at enrollment), gender, number of children living in the same household, mother's schooling, and day-care attendance.
The rate ratio (RR) for NP pneumococcal vaccine-type carriage, and its respective 95% confidence interval, was estimated as the ratio between the prevalence of vaccine-type carriage in children exposed to the different dosing schedules (2p+0, 3p+0, and one catch-up dose) and that in unvaccinated children, the reference group. Children with missing isolate serotype results were excluded from the analysis.
Confounding variables related to both vaccine uptake and the outcome of interest (vaccine-type carriage) were included in the multiple regression model to estimate the adjusted RR. Poisson regression with a robust variance estimator was fitted to adjust the RR for confounding variables. For this investigation, variables associated with crowding (day-care attendance and number of children in the household) and mother's schooling were identified as confounders; in addition, age (as a continuous variable) was also entered into the model. PCV10 vaccine effectiveness (VE) on vaccine-type carriage was calculated using the adjusted RR and defined as the percentage reduction in the risk of carriage of pneumococcal vaccine types in vaccinated children as compared with unvaccinated children, as follows [24,34]:
VE = (1 − RR) × 100.
VE was reported with 95% confidence intervals (95% CIs). Statistical significance of VE was established if the lower limit of the 95% CI around VE was greater than 0 (zero). Table 1 displays the categories of interest used to assess the RR and VE on PCV10 vaccine-type carriage.
RR for vaccine types = (a/N1)/(c/N2).
Therefore, the vaccine effectiveness VE for vaccine types can be estimated as VE = [1 − (a/N1)/(c/N2)] × 100.
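In this notation (a vaccine-type carriers among the N1 children on a given schedule, c among the N2 unvaccinated children), the crude computation is a short function:

```python
def crude_ve(a: int, n1: int, c: int, n2: int) -> float:
    """Unadjusted vaccine effectiveness on vaccine-type carriage, in %."""
    rr = (a / n1) / (c / n2)   # rate ratio vs. the unvaccinated group
    return (1 - rr) * 100
```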
The characteristics of the 1,287 children according to vaccination status are shown in Table 3; unvaccinated and vaccinated groups were compared with respect to these characteristics. As shown in Table 4, there was a significant reduction in the rate ratio for pneumococcal nasopharyngeal vaccine-type carriage from unvaccinated children to those who received the 3p+0 dosing schedule (χ² for trend = 5.08; p = 0.024). Table 5 shows the results of PCV10 effectiveness against pneumococcal vaccine-type carriage in a multiple regression model. After adjustment for confounders, children who had received the 2p+0 or 3p+0 dosing schedules presented a significant reduction in pneumococcal vaccine-type carriage, with PCV10 effectiveness equal to 35.9% (p = 0.030) and 44.0% (p = 0.008), respectively, when compared with unvaccinated children. In contrast, for children who received one catch-up dose, no significant reduction in vaccine types was detected (p = 0.905).
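The adjusted estimates above come from a Poisson model with a robust variance estimator, which could be sketched as follows; the data frame `df`, its source file and the column names are hypothetical stand-ins for the study dataset, not the authors' actual code.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

df = pd.read_csv("carriage_survey.csv")  # hypothetical per-child dataset:
# vt_carriage (0/1), schedule ('unvaccinated'/'2p+0'/'3p+0'/'catch-up'),
# age_m, daycare, n_children, mother_schooling

model = smf.glm(
    "vt_carriage ~ C(schedule, Treatment('unvaccinated'))"
    " + age_m + daycare + n_children + mother_schooling",
    data=df,
    family=sm.families.Poisson(),
)
res = model.fit(cov_type="HC1")   # robust (sandwich) variance estimator
rr = np.exp(res.params)           # adjusted rate ratios per schedule
ve = (1 - rr) * 100               # vaccine effectiveness, in %
```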
Discussion
To the best of our knowledge, data describing the impact of PCV10 on carriage after its introduction into the routine schedules of NIPs have not yet been reported. Brazil was the only country that used multiple schedules during the period of PCV10 introduction, with and without booster doses. Therefore, in this study we were able to evaluate different dosing schedules before the administration of the booster dose (2p+0 and 3p+0, as well as one catch-up dose). Hence, this investigation represented a unique opportunity to gather evidence of PCV10 effectiveness during a transition period in which several dosing schedules were used for different age-groups, some of which were not and are not currently being used by other countries.
The present study adds to the body of evidence describing PCV10 effects on NP carriage following different dosing schedules. Our data indicate that a primary series of either 3p+0 or 2p+0 dosing is effective in reducing PCV10 vaccine-type carriage. The overall vaccine uptake in Goiania reached high rates during the first 8 months of vaccination [28], which surely contributed to the rapid reduction of vaccine serotypes within the age-groups targeted by the NIP for the primary schedules. Because no previous observational study has evaluated the effectiveness of 3p+0 or 2p+0 PCV10 dosing schedules on pneumococcal carriage, comparison of our findings is hampered. The majority of studies assessing PCV7 effectiveness against pneumococcal carriage were conducted at least 2 years after vaccine introduction; these studies also did not consider the number of doses [13,14,17,18,35]. In Colombia, Latin America, 2 years after PCV7 vaccination using a 2p+1 dosing schedule, an important reduction in vaccine-type carriage was observed among vaccinated children aged 12 to 18 months [36]. A recent systematic review of clinical trials assessing pneumococcal NP carriage provided evidence of decreased vaccine-type carriage for 2p+0, 2p+1, 3p+0 and 3p+1 schedules compared with no vaccine uptake, with the greatest reduction for the 3-dose primary schedules [37].
As reported in other studies, we also observed a high prevalence of NT carriage isolates [27,38,39]. In addition to Quellung reaction for serotyping isolates, all NT pneumococci were tested by cmPCR. Considering that NT pneumococci recovered from carriage are often of non-encapsulated lineages, or can arise by mutations within cps genes or reduced gene expression, molecular tools are necessary for further characterization of NT strains [40,41]. A considerable variety of serotype-specific cps loci was identified (n = 35) within NT isolates, with cmPCR types 19A, 19F, and serogroup 6 accounting for a large percentage of this isolate category (13% for 19A and 19F combined, 26.6% for serogroup 6) [42,43]. Considering that serotypes 6A and 19A are included within PCV13, the use of this vaccine could further decrease carriage of common disease-causing serotypes.

Some limitations should be taken into account when interpreting our results. Because this was an observational study, we measured associations between PCV10 introduction and changes in pneumococcal vaccine-type carriage. This does not reflect causality, although data on potential confounding variables were collected and considered in the analysis. We are also aware that some unmeasured factors such as viral infections, seasonal variations, vaccine coverage, and temporal trends could not be taken into consideration, as this was a short-term cross-sectional survey. Nevertheless, cross-sectional studies of pneumococcal carriage can be a feasible, rapid and timely approach, compared with follow-up studies, for monitoring post-vaccination effects on pneumococcal serotype distributions. This methodology would be useful mainly in settings where PCV has been recently introduced and/or vaccination coverage has not reached high rates [15,44,45].
Currently, the World Health Organization recommends the administration of 2 catch-up PCV10 doses, at an interval of at least 2 months, to unvaccinated children aged 12-24 months at the time of initial PCV10 vaccination [8]. Indeed, in our study, one single catch-up dose was not effective in preventing vaccine-type carriage in children 12 months or older. However, the possibility of lack of power should be considered, as the analysis was stratified by dosing schedule. This reduced the sample size, leading to a wide 95% CI for PCV10 effectiveness, which might have prevented the detection of an eventual reduction in vaccine-type carriage for this schedule.
Unfortunately, since data collection took place only once and very early after vaccine introduction, there was not enough time to enroll children who received booster doses, thus preventing the assessment of the 2p+1 and 3p+1 dosing schedules. Recent evidence has shown that the administration of a booster dose after two primary doses reduces pneumococcal spread from older children to other members of the community [46]. Further cross-sectional studies could address this area.
In conclusion, we found that PCV10 was associated with high protection against vaccine-type carriage with the 3p+0 and 2p+0 dosing schedules for children vaccinated before the first year of life, soon after vaccine introduction into the routine immunization schedule. Anticipating that further changes in serotype distributions at all ages are likely to occur as a consequence of continued vaccination combined with herd effects, continued monitoring of carriage serotype distributions is valuable for evaluating the long-term effectiveness and impact of pneumococcal vaccination on reducing vaccine serotypes and on the emergence of non-vaccine serotypes in Brazil.
"year": 2014,
"sha1": "ade0cdb2ec21ae0af0ba3dd51053c191b9704670",
"oa_license": "CCBY",
"oa_url": "https://journals.plos.org/plosone/article/file?id=10.1371/journal.pone.0098128&type=printable",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "c187c106078af36dce2dcc181bbfa39b2e1fba45",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Joint time-state generalized semiconcavity of the value function of a jump diffusion optimal control problem
We prove generalized semiconcavity results, jointly in time and state variables, for the value function of a stochastic finite horizon optimal control problem, where the evolution of the state variable is described by a general stochastic differential equation (SDE) of jump type. Assuming that the terms comprising the SDE are $C^1$-smooth, and that the running and terminal costs are semiconcave in a generalized sense, we show that the value function is also semiconcave in a generalized sense, estimating the semiconcavity modulus of the value function in terms of the smoothness and generalized semiconcavity moduli of the data. Of course, these translate into analogous regularity results for (viscosity) solutions of integro-differential Hamilton–Jacobi–Bellman equations, due to their controllistic interpretation. This paper may be seen as a sequel to Feleqi (Dyn Games Appl 3(4):523–536, 2013), where we dealt with the generalized semiconcavity of the value function only in the state variable.
Introduction
In this article we continue our work initiated in [26] on establishing generalized semiconcavity results for the value function of a finite horizon jump diffusion optimal control problem. While in [26] we dealt with the problem of obtaining generalized semiconcavity estimates for the value function in the state variable, uniformly in time, here we prove generalized semiconcavity results in time and state variables jointly.
Under appropriate assumptions on the data, which follow from those made in this paper, the value function can be interpreted as the unique viscosity solution of an integro-differential Hamilton–Jacobi–Bellman equation of the form (1.1).

The results of this paper are a (small) part of the vast regularity theory of PIDEs. First results on this subject were obtained assuming nondegenerate diffusions or elliptic second-order differential (local) terms, as in [7,28,30] (just to mention a few references, without any pretense of completeness) and the references therein. Recently, there has been a revival of interest in the theory of PIDEs, due on one hand to the work of Caffarelli et al. [13][14][15][16][17][18], and on the other to that of Barles, Chasseigne, Ciomaga and Imbert [4][5][6]. These authors, differently from the earlier ones, prove regularity results, such as Hölder, Lipschitz and $C^{1,\alpha}$ estimates, under a kind of ellipticity assumption which is no longer due to the second-order local terms (or to the presence of nonsingular diffusions), but comes either from the nonlocal terms or from the combined effect of both local and nonlocal terms. Related results have been obtained by other authors as well [10,25,32,34,36,44,45,49,50].
Our interest in the regularity theory of partial integro-differential equations of HJB type and related optimal control problems arose from the recent theory of Mean Field Games (abbr. MFG) developed by Lasry and Lions [46][47][48], which yields limiting models for symmetric, non-zero sum, non-cooperative N-player games with the interaction between the players being of mean-field type. It is of interest to study MFG models where the dynamics of an average or representative agent is a jump diffusion, because stochastic phenomena in Economics and Finance applications exhibit jumps and other deviations from pure diffusions. The MFG paradigm would lead in this case to PIDEs of HJB type for the optimal values of the average agents coupled with Fokker-Planck PIDEs for the probability distributions of their optimal dynamics. To our knowledge, the study of such systems of PIDEs remains largely to be done. In particular, we are interested in extending our results in [3,26] to these systems of PIDEs.
The proof is based on interpreting the said solution of (1.1) as the value function of a stochastic optimal control problem for jump diffusion processes, that is, processes which are solutions of appropriate stochastic differential equations of jump type (abbr. SDEs) driven by Brownian motions and Poisson random measures independent of each other; see, e.g., [54] and references therein. Furthermore, we rely on the method of affine time changes for Brownian motions as in [11,12] and for Poisson random measures as in [33]. While the corresponding change of variable formula for Wiener integrals is rather easy, for stochastic integrals with respect to Poisson random measures the formula is more involved and requires a change of probability on the underlying sample space via the so-called Kulik's transformation; see [33] for more details and references. Other tools are Burkholder type inequalities as stated for example in [43], and of course Gronwall's inequality.
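For concreteness, the "rather easy" change of variable formula for Wiener integrals under an affine time change can be written as follows; this is a sketch consistent with the scaling property of Brownian motion, with our own notation (τ, c, W̃), not a quotation from [11,12]:

```latex
% Affine time change \tau(u) = \tau(s) + c\,(u - s) with c > 0. The process
\[
\widetilde W_u := c^{-1/2}\bigl(W_{\tau(u)} - W_{\tau(s)}\bigr), \qquad u \ge s,
\]
% is again a standard Brownian motion, and, pathwise,
\[
\int_{\tau(s)}^{\tau(t)} \sigma(r)\, dW_r \;=\; c^{1/2} \int_s^t \sigma(\tau(u))\, d\widetilde W_u .
\]
```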
The paper is organized as follows. The main results (Theorems 2.2 and 2.11) are stated in the next section. The proofs of the technical lemmas are postponed to the "Appendix" (Sect. 3) in order to ensure a better readability of the paper.
Notation. In accordance with common practice, we usually use the same letter (here C_δ) to denote possibly different constants in a chain of estimates/inequalities, which however depend only on the same data; see, e.g., the proofs of Lemmas 2.5 and 2.7.
that satisfy the following conditions:
• (Ω, F, F = (F_t)_{0≤t≤T}, P) is a complete filtered probability space such that the filtration F satisfies the usual hypotheses (that is, F is right continuous and every sub-σ-algebra F_t, for 0 ≤ t ≤ T, is complete with respect to the probability measure P);
• N is an F-adapted Poisson random measure on R_+ × Z and on the probability space (Ω, F, P) with intensity measure ν on Z, and with associated compensator Ñ = Ñ(dt dz) = N(dt dz) − dt ν(dz);
• W and N are independent of each other and moreover have increments that are independent of the filtration F.²
Let the data be (measurable) maps, p ≥ 2, C_i, L_i ≥ 0 fixed constants and ω_i regularity moduli, for i = 1, . . . , 6. Assume that the following hold true:
² Which we could call probability references if we were to adopt a terminology analogous to the one adopted in [27].
(ii) (Lipschitz continuous costs) Since p ≥ 2 and ν(Z) < ∞, it follows that the estimates for H and K hold also for p = 2. We cannot handle arbitrary moduli; therefore we have to make assumptions on the moduli as well. However, these assumptions are not very restrictive and are verified by the moduli appearing in most applications of interest. We should notice that in many cases we can replace the regularity or semiconcavity modulus of a map by a larger one so that it satisfies our assumptions. For alternative assumptions on the moduli see Theorem 2.11 below.
To begin with, we make either one of the following assumptions on the moduli. (MP) (Power type moduli).
(i) (Moduli of the dynamics). We assume that, for given 0 < α_i (≤ 1) and k_i ≥ 0, the moduli are of power type, ω_i(ρ) = k_i ρ^{α_i}, for i = 1, . . . , 4. (ii) (Moduli of the costs). Furthermore, we assume that, for given 0 < α_i (≤ 1) and k_i ≥ 0, ω_i(ρ) = k_i ρ^{α_i} for i = 5, 6. Alternatively, we assume that the following hold true.
(MC) (Concave type moduli).
(i) (Moduli of the dynamics). The functions ω_1, . . . , ω_4 satisfy suitable concavity properties. (ii) (Moduli of the costs). Furthermore, the functions ω_5, ω_6 are concave and, if r_i ≥ 1 are such that r_i^{-1} + q_i^{-1} = 1 for all i = 5, 6, then we assume also a corresponding condition involving the exponents r_i, q_i.

Remark 2.1. The larger p is, the more restrictive these assumptions become, so we aim at proving results for p ≥ 2 as small as possible. In the case of (MP), by (2.3), (2.4), it suffices to assume that the above estimates (L), (S) hold true for p = 4, as is done in [33], where the case of classical semiconcavity estimates (that is, ω-semiconcavity estimates with linear ω's) is treated. Indeed, it is not reasonable to take α_i > 1 (i = 1, . . . , 6), that is, a superlinear modulus; otherwise, by [21, Theorem 2.1.9], an ω-semiconcave function would just be concave and a C^{1,ω} map would just be constant. In such a case one may just take ω_i = 0, that is, k_i = 0 and α_i = 0. Still, we cannot assume p = 2 unless our results trivialize, for this would force us to take α_i = 0 for all i = 1, . . . , 4.
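For orientation, the ω-semiconcavity notion used throughout is presumably the standard generalized one of Cannarsa and Sinestrari (reference [21] here): u is semiconcave with modulus ω when

```latex
\[
u\bigl(\lambda x + (1-\lambda) y\bigr) \;\ge\; \lambda\, u(x) + (1-\lambda)\, u(y)
\;-\; \lambda(1-\lambda)\,|x-y|\,\omega\bigl(|x-y|\bigr)
\]
% for all x, y and \lambda \in [0,1], with \omega nondecreasing and \omega(0^+) = 0.
% The power-type case (MP), \omega(\rho) = k\rho^{\alpha}, recovers classical
% (linear-modulus) semiconcavity for \alpha = 1.
```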
For any s ∈ [0, T] and R ∈ R_s as in (2.1), we consider the following optimal control problem: (admissible controls) we take as the set of admissible controls A_R(s, T) the set of R-predictable processes α(·): [0, T] → A; (controlled system) for any x_0 ∈ R^d and α(·) ∈ A_R(s, T) we consider the stochastic differential equation of jump type (2.5), whose solution x(t), for all s ≤ t ≤ T, is the controlled state; the cost is a functional of the running and terminal costs where, for each α(·) ∈ A_R(s, T), x(·) is the solution of Eq. (2.5); we consider also the value function V, obtained by minimizing the cost over the admissible controls and over R ∈ R_s, and V is actually the unique viscosity solution of (1.1) with polynomial growth [54,55]. Actually, in [33] it is proved that V is Lipschitz continuous on [0, T − δ] × R^d for any δ ∈ ]0, T].
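Since the displays of Eq. (2.5) and of the cost functional were lost in extraction, the following schematic shape, standard for jump-diffusion control problems, is our reconstruction of what is being minimized; the coefficient names b, σ, γ, ℓ, g are ours, not the paper's:

```latex
\[
dx(t) = b(t, x(t), \alpha(t))\,dt + \sigma(t, x(t), \alpha(t))\,dW_t
      + \int_Z \gamma(t, x(t^-), \alpha(t), z)\,\widetilde N(dt\,dz), \qquad x(s) = x_0,
\]
\[
J(s, x_0; \alpha(\cdot)) = \mathbb{E}\Bigl[\int_s^T \ell(t, x(t), \alpha(t))\,dt + g(x(T))\Bigr],
\qquad
V(s, x_0) = \inf_{R,\ \alpha(\cdot)} J(s, x_0; \alpha(\cdot)).
\]
```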
As we pointed out in the introduction, V is not in general locally Lipschitz continuous (and therefore not semiconcave in generalized sense). However, we prove that, for every 0 < δ ≤ T, V is ω-semiconcave on [0, T − δ] × R^d for some modulus ω which can be expressed in terms of the moduli of the data of the problem.
Thus we fix also a δ ∈ ]0, T]. We prove the following generalized semiconcavity estimates in time-space.

Theorem 2.2. Assume (B), (L), (S) and either (MP) or (MC). Then the value function V is ω-semiconcave on [0, T − δ] × R^d, for a modulus ω that can be expressed in terms of the moduli of the data.

In order to prove generalized semiconcavity estimates for V on [0, T − δ] × R^d (and in particular, Theorem 2.2 above), we take s_1, s_2 ∈ [0, T − δ], x_0^1, x_0^2 ∈ R^d and λ ∈ [0, 1], set s_λ = λ s_1 + (1 − λ) s_2 and x_0^λ = λ x_0^1 + (1 − λ) x_0^2, fix R ∈ R_{s_λ} and α(·) ∈ A_R(s_λ, T), and denote by τ_1, τ_2 the affine "time changes" that transform [s_1, T], respectively [s_2, T], into [s_λ, T], that is,
We take R_i and α_i(·) obtained from R and α(·) via the time changes τ_i, where F_i is defined as in (3.6) and Q_i as in (3.9); it is easy to see that these are admissible. Denoting by x_i(·) the solutions of Eq. (2.5) for R = R_i, α(·) = α_i(·) and initial conditions s = s_i, x_0 = x_0^i, for i = 1, 2, respectively, and by x_λ(·) the solution of (2.5) for the previously fixed R ∈ R_{s_λ}, α(·) ∈ A_R(s_λ, T) and initial conditions s = s_λ, x_0 = x_0^λ, we obtain, by Burkholder inequalities and change of variable formulas for stochastic integrals with respect to affine time changes (see the detailed proof in the "Appendix"), the following estimates:
Lemma 2.3. For some c ≥ 0 that depends only on d, m, T, p, ν(Z), and for every t ∈ [s_λ, T], the following estimates hold:
For a better readability of the paper, the proof of this lemma and the others stated in this section is postponed to the "Appendix".
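For reference, the Burkholder (Burkholder-Davis-Gundy) type inequality invoked in these estimates reads, in its standard form for exponents p ≥ 2 and a constant c_p depending only on p:

```latex
\[
\mathbb{E}\Bigl[\sup_{s \le u \le t}\Bigl|\int_s^u \sigma(r)\,dW_r\Bigr|^p\Bigr]
\;\le\; c_p\,\mathbb{E}\Bigl[\Bigl(\int_s^t |\sigma(r)|^2\,dr\Bigr)^{p/2}\Bigr].
\]
```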
We need the following simple technical lemma, which can be checked by straightforward computation; hence its proof, which in any case can be found in [11], is omitted. Its conclusion holds provided that γ, defined by setting γ(ρ) = ρ^β ω_2(ρ)^q for all ρ ≥ 0, satisfies the stated condition, where
By (L)-(ii), (S)-(ii), by Lemma 2.8 and Lemma 2.4, more specifically (2.14) and (2.18), for some constant C_δ ≥ 0 that depends only on d, m, p, δ, T, ν(Z), C_i, L_i, i = 1, . . . , 6. Under our assumptions on the moduli, (MP) or (MC), Lemma 2.7 holds true, which, in the case of assumptions (MC), we apply together with Lemma 2.8, in order to deduce, by using also Lemma 2.5 a final time, estimate (2.9). From this last estimate, since R ∈ R_{s_λ} and α(·) ∈ A_R(s_λ, T) are arbitrary, it follows that V is ω-semiconcave.

Remark 2.9. Up to estimate (2.2) in the proof above we do not use the assumptions on the particular form of the moduli. This is important to notice, for the general estimate (2.24) may be used to obtain generalized semiconcavity estimates for types of moduli other than those envisioned in Theorem 2.2.
It should now be rather straightforward to state results under the assumption that some of the moduli ω_i are of power type while the others satisfy suitable concavity properties (as stated in Lemmas 2.6 and 2.8).
In many cases of interest it is possible to choose the moduli ω_i concave, and by the growth assumptions contained in (B), (L), it is also possible to assume these moduli ω_i bounded as well. This remark can be used to derive ω-semiconcavity results by means of the following lemma.

Lemma 2.10. (Bounded concave moduli) Fix q, r ∈ [1, ∞] such that 1/q + 1/r = 1 and let ξ be as in Lemma 2.6 (or as in Lemma 2.8). Assume that ω^q is concave for some q > 0, and that ω is bounded by some constant k ≥ 0. Then the corresponding estimate holds.

This lemma can then be used to prove the following theorem in the same fashion as we did with Theorem 2.2.
Relying on the lemmas and techniques given above, one can obtain additional results on the time-space semiconcavity of the value function, estimating, if one so wishes, the semiconcavity modulus of the value function in terms of the moduli of the data (that is, results of the type of Theorems 2.2 and 2.11), when one assumes moduli of "mixed type", that is, some moduli of power type and the others having suitable concavity properties and/or being bounded. Since the resulting statements and the method of proof of these results should be clear by now, we are not providing them here. We just emphasize that, in obtaining such results, the starting point is estimate (2.24), which holds true for any moduli ω_i, i = 1, . . . , 6. Then one needs to apply Lemma 2.6 and/or the first part of Lemma 2.10, firstly, to obtain a new version of Lemma 2.7 (based on the assumptions on the type of the moduli ω_i, i = 1, . . . , 4), and finally, one concludes by using this new version of Lemma 2.7, estimate (2.24) and/or Lemma 2.8 and/or the second part of Lemma 2.10 (whether and which of the said lemmas is to be used depends on the assumptions on the ω_i's).
Appendix
Proof of Lemma 2.3. Fact 1. (Burkholder-Davis-Gundy inequalities [43]) The standard estimate holds for all predictable processes σ ∈ L²([s_i, τ_i^{-1}(t)] × Ω, dr ⊗ P; R^{m×d}), t ∈ [s_i, T]. Next, we use a transformation of a Poisson random measure with respect to affine time changes which is called Kulik's transformation. The reader interested in more information on this transformation is referred to the papers [41,42], or even to [33] for a quick and very readable introduction. We define
"year": 2019,
"sha1": "748ac163e404a117d3beb71dde5f18743cc0c4a6",
"oa_license": "CCBY",
"oa_url": "https://link.springer.com/content/pdf/10.1007/s00030-018-0550-6.pdf",
"oa_status": "HYBRID",
"pdf_src": "SpringerNature",
"pdf_hash": "748ac163e404a117d3beb71dde5f18743cc0c4a6",
"s2fieldsofstudy": [
"Mathematics"
],
"extfieldsofstudy": []
} |
Multiple Core Fibers versus Multiple Fibers Enabled Space Division Multiplexing Based Elastic Optical Networks: A Performance Comparison
— Space Division Multiplexing (SDM) based Elastic Optical Networks (EONs) (SDM-b-EONs) have been envisioned as a solution to both the required upgrade of the single-mode fiber network's capacity, which is constrained by the non-linear Shannon's limit, and the capacity provisioning which will be necessitated by future diverse Internet traffic. The current SDM-b-EON designs are based on the use of Multiple Fibers (MF) technology; however, recently the Multiple Core Fiber (MCF) technology has gained momentum after laboratory experiments conducted on MCF models established much lower inter-core crosstalk values. In the current work, we focus on the design of a MCF enabled SDM-b-EON, for which we formulate an Integer Linear Programming (ILP) model and then propose a heuristic algorithm to obtain solutions for large sized networks in reasonable execution times. We then proceed to the performance comparison of the MCF and the MF enabled SDM-b-EONs under the consideration of realistic parameters and network topologies which are characterized by different node numbers and link distances. The obtained results demonstrate that the performance of the MCF enabled SDM-b-EON is very close to that of a MF enabled SDM-b-EON, which implies that the network operators can benefit by utilizing the existing components for the MCFs rather than incurring any extra expense to provision the same traffic amount.
I. INTRODUCTION
The ever growing diverse traffic and its related bandwidth requests have rendered the optical networks (OTNs) capacity limited. To quench the demands for large capacity and heterogeneous granularity by the traffic of the next-generation OTNs, the flexi-grid elastic optical networks (EONs) have been thoroughly investigated [1,2]. However, owing to their use of only two multiplexing dimensions, in the near future the EONs will also lead to a crunch in the fiber capacity. The aforementioned scenario can be ameliorated by the adoption of space division multiplexing (SDM), which introduces a new dimension of "space" and in which many fibers are utilized in parallel, thus provisioning an increase in the spectral resources which can be utilized [3]. The SDM based EONs (SDM-b-EONs) will hence be able to efficiently serve the next generation's applications and the Internet, which have been anticipated to handle traffic growth at a rate much greater than the Petabit per second level [4], owing to their large bandwidth capacity and efficient accommodation of both the low bit-rate (BR) lightpaths and the high BR super-channels [5].
Currently, the SDM-b-EON solutions proposed by both the research community and the network operators are based on the extension of the already deployed networks, which have been converted into infrastructures that are enabled by multiple fibers (MFs) wherein each fiber link bundles many single mode fibers (SMFs) [6]. However, for the SDM-b-EONs to appeal economically, new fiber design solutions are required to ensure parallelization, which is a mandate in the SDM-b-EONs [7]. Amongst the many other existing solutions, SDM-b-EONs enabled by multiple core fibers (MCFs) are promising since such a technology (i) can provide higher spectrum usage flexibility in conjunction with bandwidth scalability far beyond the non-linear Shannon's limit, and (ii) makes it possible to benefit from system components which are integrated, cost-effective, and already existing for the MCFs [7,8]. Also, as already demonstrated in [5], owing to low levels of crosstalk, the MCFs are completely flexible and hence there occurs a possibility of resorting to the use of both spectral and spatial super-channel techniques. However, use of the MCF technology also implies (i) insertion of several single mode cores within a single fiber cladding, (ii) a constraint on the increase in core numbers, essentially due to space non-availability within the fiber cladding, and (iii) counteracting of degradations due to coupling among the cores, i.e., inter-core crosstalk (ICXT), which can degrade the optical signals' transmission reach (TR) to such an extent that it may be required to adopt a less efficient, however more robust, modulation format (MF).
Many existing studies have addressed the design of EONs under the assumption of single core fibers, a survey of which is presented in [2]. However, with SDM introducing "space" as a new degree of freedom, the SDM-b-EON scenario is very recent and hence only a few works exist on the design of SDM-b-EONs. The authors in [9] have experimentally evaluated a four node programmable multiple granularity SDM switching network using "two" seven core MCFs. The obtained results demonstrate an adequate end to end performance on all the channels. In [10], the authors have proposed an Integer Linear Programming (ILP) based optimal method, followed by the proposal of heuristic schemes, for the routing, MF, core and spectrum assignment (RMFCSA) problem in a SDM-b-EON with an aim to reduce the maximum spectrum slices amount required in any core of the MCF. The results, which are obtained under the assumption of a "three" core MCF considering estimations of ICXT, demonstrate a good approximation of the proposed heuristic to the optimal solution obtained from the formulated ILP model. The authors in [11] have proposed heuristic schemes for a similar problem as in [10] considering the optical white box and black box devices. The aim of the proposed schemes is to jointly optimize the switching and the spectrum resource efficiency during provisioning of the demands requiring diverse capacities. The results are obtained considering a "six" core MCF with the same ICXT estimations as in [10], and it is demonstrated that the ICXT aware schemes improve the provisioned traffic volume significantly for the SDM network based on the architecture-on-demand technology, which is a scalable and cost-efficient solution for future SDM networks. In [3], the authors have evaluated the benefits of using SDM for dynamic bandwidth allocation in an EON. The main aims of the study are to (i) compare the spectral and spatial super channel assignment policies in a SDM network which relies on SMF bundles, (ii) investigate the impact of MF choice on the performance, and (iii) study the performance of various SDM switching options. The results of the study show that (i) under the consideration of a multiple channel single MF system with 50 GHz sub channel spectrum occupation, for both spectral and spatial super channel assignment policies, the DP-8-QAM MF offers the best compromise between spectral efficiency (SE) and TR, (ii) with the consideration of a multiple channel MF system, network performance improvement can be obtained, and (iii) joint switching is able to offer similar performance as that provided by independent switching for particular network load profiles, while allowing a significant reduction in the number of wavelength selective switches. The authors in [7] have proposed a cost-effective Reconfigurable Optical Add/Drop Multiplexer (ROADM) architecture for SDM-b-EONs which are enabled by the MCFs and which also minimizes the technological requirements and associated costs in exchange for demanding core continuity along the end to end communication. The authors have also proposed a heuristic algorithm for solving the RMFCSA problem, which is compared to its ILP counterpart. The obtained results demonstrate that, in addition to a decrease in the network expenditures, in terms of the maximum throughput, when the proposed ROADM architecture is deployed, approximately similar performance is obtained compared to when the existing ROADM architectures are operational in the network.
In the current work, we focus on the design of a MCF enabled SDM-b-EON. Owing to the adoption of a flexi-grid technology, during heterogeneous BR lightpath assignment, the optimization problem aims to maximally use the spectrum of every core. Further, we consider an EON links system detailed in [5] which comprises a number of single core fibers that equals the number of cores within the MCFs. The aforementioned implies that in our current work (i) spatial super channels are not considered, and (ii) at every node, there occurs spatial demultiplexing of the incoming MCFs. As a key contribution which distinguishes our current work from the existing studies, we compare the performance of a MCF enabled SDM-b-EON with that of a MF enabled SDM-b-EON under the consideration of realistic parameters and network topologies which are characterized by different node numbers and link distances. To the best of our knowledge, there is no existing study in the literature which has focused on such a comparison. We also advocate for our current work since it is the first study which demonstrates that the performance of a MCF enabled SDM-b-EON is close to that of a MF enabled SDM-b-EON and hence shows that the network operators can indeed benefit by using the existing components for the MCFs rather than incurring any extra expense to provision the same traffic amount.
The rest of the paper is structured as follows: In Section II, we initially detail the SDM-b-EON design using an ILP formulation, followed by the proposal of a heuristic algorithm in view of obtaining solutions for large sized networks. In Section III, we detail the SDM-b-EON scenario and various simulation assumptions, followed by a discussion of the various obtained simulation results. Finally, Section IV concludes the study.
II. DESIGN OF SPACE DIVISION MULTIPLEXING BASED ELASTIC OPTICAL NETWORK
In this section we detail the problem statement of the SDM-b-EON design. Initially, an ILP model is formulated to obtain the optimal solutions followed by the proposal of a heuristic algorithm to obtain the solutions for large sized network topologies in reasonable execution times.
A. ILP Formulation
To formulate the ILP model for the SDM-b-EON design, we define the required sets, parameters and variables. With the above definitions, we address the RMFCSA design problem. Specifically, the aim is to find the candidate lightpaths which are to be assigned, with the objective to reduce (i) the FSs amount that is utilized in any core of any MCF in the network, and (ii) the aggregate FSs amount which is allocated within the network. The aforementioned is required to be achieved under the constraints of (i) successful assignment of demands, i.e., for each demand d ∈ D, a permissible candidate lightpath must be assigned from within CL_d, and (ii) the capacity of the multiple core fiber, i.e., in any MCF e, the provided FS fs ∈ FS can be utilized at most as many times as there are cores. For a given demand, the formulated ILP model allocates a candidate lightpath, hence making a decision on its route, MF and SA. Further, a decision on the core in every MCF which occurs on the candidate lightpath's route is made on the basis of the FSs which are occupied. To perform the aforementioned, the formulated ILP model uses binary variables for lightpath assignment and FS usage. The first term of the optimization objective function in (2) ensures the least FSs amount which is required, hence reducing the FSs amount utilized in any core of any MCF in the network. The aforementioned objective, however, is a common evaluation metric for the SDM-b-EON since, within the network, the FSs that are assigned in only a single core of an individual MCF may differ from the FSs amount assigned in the aggregate cores of all the MCFs. Keeping the aforementioned in view, the second term of the objective function in (2) targets the reduction of the aggregate FSs amount which is allocated in the network; its weight in (2) is a very small positive real-valued number. The following two points must also be highlighted in regard to our current study: (i) we have not considered grooming of the traffic, which implies that any F-TP is assigned to at most one demand and, further, every demand needs exactly a single lightpath assignment, and (ii) for any specific MCF link, we have not considered the allotment of a specific core owing to the assumption of the core switching flexibility provisioned by the F-OXC nodes. Specifically, in the current work, instead of allocating the candidate lightpaths to particular MCF cores, the candidate lightpaths are assigned in any MCF core along the route based on the spectral resources which are available.
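Since the variable definitions and constraint displays did not survive extraction, the following toy sketch is an illustration only (hypothetical data; PuLP as the solver front end; the names `CL`, `N_CORES` and the second-term weight `EPS` are our own, not the paper's) of how a lightpath-assignment ILP with the two-term objective described above can be written:

```python
import pulp

# Toy instance: demands, each with candidate lightpaths.
# A candidate lightpath is (links, first_slot, n_slots): it occupies
# slots [first_slot, first_slot + n_slots) on every link it traverses.
CL = {
    "d1": [((("A", "B"),), 0, 2), ((("A", "C"), ("C", "B")), 0, 3)],
    "d2": [((("A", "B"),), 2, 2), ((("A", "B"),), 0, 2)],
}
N_SLOTS, N_CORES, EPS = 8, 2, 1e-3

prob = pulp.LpProblem("RMFCSA", pulp.LpMinimize)
x = {(d, i): pulp.LpVariable(f"x_{d}_{i}", cat="Binary")
     for d in CL for i in range(len(CL[d]))}
fs_max = pulp.LpVariable("fs_max", lowBound=0)  # highest slot index used

# Objective: minimize the max used slot, then the total allocated slots.
total = pulp.lpSum(x[d, i] * CL[d][i][2] * len(CL[d][i][0])
                   for d in CL for i in range(len(CL[d])))
prob += fs_max + EPS * total

for d in CL:  # each demand gets exactly one candidate lightpath
    prob += pulp.lpSum(x[d, i] for i in range(len(CL[d]))) == 1
    for i, (links, s0, n) in enumerate(CL[d]):
        prob += fs_max >= (s0 + n) * x[d, i]  # track the highest slot used

# Capacity: on each link, each slot is usable at most N_CORES times.
links = {l for d in CL for (ls, _, _) in CL[d] for l in ls}
for l in links:
    for s in range(N_SLOTS):
        prob += pulp.lpSum(
            x[d, i] for d in CL for i, (ls, s0, n) in enumerate(CL[d])
            if l in ls and s0 <= s < s0 + n) <= N_CORES

prob.solve(pulp.PULP_CBC_CMD(msg=False))
print({k: v.value() for k, v in x.items()}, "fs_max =", fs_max.value())
```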
B. Heuristic Algorithm
Existing studies in the literature have already shown that ILP modelling for the design of EONs incurs NP-hard complexity and hence is ineffective when large sized networks are considered [1,2]. Therefore, for the design of SDM-b-EONs, which is an extension of the EONs design requiring a solution to the RMFCSA problem, the ILP model is certainly not an effective strategy.
In view of the aforementioned, in this sub-section we design a heuristic algorithm based on the Simulated Annealing (SA) strategy that solves a similar problem as detailed in the previous sub-section, however generating practical solutions in reasonable execution times. The proposed heuristic algorithm, named SA-relying-RMFCSA (SA-r-RMFCSA), is designed by modifying the Simulated Annealing Greedy Lightpath Allocation (SAGLA) heuristic algorithm from our previous study [1] such that the SA-r-RMFCSA algorithm is able to find those candidate lightpaths that provision all the demands in the set D under the consideration of the graph G, with the aim of satisfying the objective function given by (2). Further, the SA-r-RMFCSA algorithm uses a simple greedy-RMFCSA (g-RMFCSA) process to find an initial solution by provisioning fast solutions to the various demand-order instances of the RMFCSA problem.
In regard to the SA method, it is known that the SA process generally admits, with a certain probability, solutions which are of the non-improving type [12,13]. The starting temperature coefficient (T) is useful in allowing the SA process to admit, with a certain probability, such non-improving type solutions, which increase the value of the objective function of the final solution found thus far. Also, it must be noted that, even if admitted, a non-improving solution may not be the optimum solution; however, it may permit the SA process to escape a local optimum value. Lastly, with the evolution of the heuristic algorithm, the probability of accepting non-improving solutions diminishes, since T is reduced by a specific cooling rate value in every iteration.
In SA-r-RMFCSA, the SA procedure (i) aims to find a viable solution space so as to locally evaluate the solutions, and (ii) guides the g-RMFCSA process, which obtains fast solutions by processing the demands one after the other as per a specific demand order, followed by the assignment of lightpaths to the demands so as to reduce the objective function given by (2). The output of the SA-r-RMFCSA algorithm is a solution that provides the chosen candidate lightpaths set after the maximum iteration number is reached. Further, the following must be noted in regard to the SA-r-RMFCSA algorithm: (i) in the g-RMFCSA procedure, the shortest candidate lightpaths which have the lowest spectral parts are tried initially since the candidate lightpaths in the set are arranged increasingly in terms of both physical distance and spectral positioning, and (ii) in the SA algorithm's iterative procedure, before obtaining the "Next Solution" from the g-RMFCSA algorithm, we perform a swap of the demands. The aforementioned is conducted in view of the fact that a MCF enabled SDM-b-EON can provide large capacity and also may require the provisioning of large numbers of demands. Thus, if only an individual demands pair is swapped, it may result in solutions that only marginally vary from each other, in turn leading to the SA process not being able to move away from the local optimum value. Therefore, we swap two large ordered sub-sets of demands. A flow-chart of the SA-r-RMFCSA algorithm is shown in Fig. 2.
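The following is an illustrative skeleton only of the SA loop just described: the greedy allocator g-RMFCSA is abstracted into a stand-in `greedy_cost` function, and the sub-set swap width `block` is our assumption, not a value from the paper:

```python
import math
import random

def greedy_cost(order):
    """Stand-in for g-RMFCSA: allocate lightpaths demand-by-demand in the
    given order and return the objective of (2). Here it is faked with a
    deterministic toy score so the skeleton runs end to end."""
    return sum(i * (int(d[1:]) % 7) for i, d in enumerate(order)) / 100.0

def sa_r_rmfcsa(demands, t0=100.0, cooling=0.95, max_iter=10000, block=20):
    order = list(demands)                      # initial demand order
    best_order, best = list(order), greedy_cost(order)
    cur, t = best, t0
    for _ in range(max_iter):
        nxt = list(order)
        # Swap two large ordered sub-sets of demands (not a single pair),
        # so neighbouring solutions differ enough to escape local optima.
        n = len(nxt)
        if n >= 2 * block:
            i = random.randrange(0, n - 2 * block + 1)
            j = random.randrange(i + block, n - block + 1)  # disjoint blocks
            nxt[i:i + block], nxt[j:j + block] = nxt[j:j + block], nxt[i:i + block]
        cost = greedy_cost(nxt)
        # Accept improvements always; accept worse solutions with
        # probability exp(-delta / t), the classical SA criterion.
        if cost < cur or random.random() < math.exp(-(cost - cur) / t):
            order, cur = nxt, cost
            if cur < best:
                best_order, best = list(order), cur
        t *= cooling                           # cooling schedule
    return best_order, best

order, cost = sa_r_rmfcsa([f"d{i}" for i in range(200)])
print("best objective:", cost)
```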
III. SIMULATION RESULTS
For performance evaluation, we consider three realistic network topologies: the Deutsche Telekom (DT), the Telefónica (TID), and the GEANT. The details of these topologies with their various dimension values can be found in our previous study [1]. In the aforementioned considered networks, we assume that (i) at the start, in every core, there is an availability of the complete 4 THz of the C-Band, (ii) the spectrum which is available is split into 12.5 GHz FSs [14], hence resulting in 320 FSs in every core, and (iii) the F-TPs deployed at the network nodes can operate at either 40/100/400 Gbps, and can use any of the following MFs: BPSK, QPSK, 16-QAM or 64-QAM. Following the study in [7], we conduct the simulations considering "seven", "twelve" and "nineteen" core MCFs and, further, adopt the most limiting TR values, assuming that either nonlinear interference (NLI) or ICXT dominates as the prominent degradation. The TR values are used by the proposed SDM-b-EON design strategy (i.e., the ILP model or SA-r-RMFCSA), which always resorts to the use of the most efficient MF based on these TR values. Lastly, a 10 GHz spectral GB value is also considered between contiguous connections [15].
As for the demands set, we load the considered networks with demand sets consisting of unidirectional demands following an optimum distribution [16]. Further, we consider two profiles of traffic (PoT): (i) the multiple-rate PoT (MR-PoT), which considers connections to take the values corresponding to 40/100/400 Gbps, and (ii) the flexible PoT (F-PoT), which considers the size of a connection to range from "one" to the "maximum FSs" value. To consider realistic scenarios, we assume that MR-PoT is a near-term case wherein the 40/100/400 Gbps demands occur as 35%, 55%, and 10% of the total offered demands, whereas F-PoT is a long-term case wherein only 100 and 400 Gbps demands occur, as 45% and 55% of the total offered demands. A similar study can also be conducted considering alternative traffic models in view of investigating the PoT's impact on SDM-b-EON performance.
It must also be noted that the 400 Gbps demands have a very short TR, and this leads to such demands not reaching the destination irrespective of the MF which is adopted. In the current study, in the case when, even after resorting to the use of a less efficient MF, a 400 Gbps demand is found to be impermissible during the candidate lightpath pre-evaluation process, we construct its corresponding candidate lightpath considering "four" 100 Gbps lightpaths which follow both contiguous assignment and joint switching between source and destination. The aforementioned, however, incurs four times the required FSs amount, in addition to 40 GHz of GBs. The current study can, however, be extended by considering a translucent SDM-b-EON scenario in which the advanced MFs can be utilized in longer lightpaths, which will be the focus of our future work.
A. Tuning of SA-r-RMFCSA Algorithm and Choice of Number of Shortest Paths
The performance of the SA-r-RMFCSA algorithm is dependent on the SA process, which in turn requires tuning of the following parameters: (i) the rate of cooling for each iteration, (ii) the starting temperature coefficient (T), and (iii) the maximum iteration number. Therefore, before proceeding with the performance evaluations, we tuned the aforementioned SA parameters. We conducted extensive simulations in regard to the optimization objective function in (2), considering various demand values with both PoTs for the considered network topologies with "nineteen" core MCFs, accounting for the variations of the amount of FSs allocated, the maximum FSs amount, and the iteration numbers. From the obtained results, we report those values which demonstrated the "best" performance (see Table 1), and for the remaining simulation experiments we use the same values. In regard to the maximum iteration number, it is known that an increase in the SA iterations value does not necessarily result in an improvement of the objective function; rather, it only leads to increased execution times. Hence, in our simulations we fix the maximum iterations number to 10000. To generate CL_d for each d ∈ D, we resort to the use of the k-shortest paths (k-SPs) algorithm. However, to decide on the most appropriate value of "k" to be used in the subsequent experiments, we initially conducted simulation experiments considering the maximum number of cores or fibers (i.e., "nineteen") in the GEANT topology, which also presents the worst results (see sub-section 3.3). The obtained results are shown in Fig. 3. From Fig. 3(a) it can be observed that, in order to minimize the maximum FSs amount, the candidate lightpaths must be permitted to span substitute routes between the source and the destination, rather than traversing only the one (k = 1) SP. This occurs owing to the fact that when the candidate lightpath is constrained to only the one SP, compared to the border portion, where the FSs remain underused, link congestion in the network's middle portion is much higher, resulting in the requirement of a large FSs amount. However, this minimization lowers as the k value increases and, further, for k ≥ 5 this reduction ends. The aforementioned occurs since (i) a larger hops amount is spanned by the longer routes, which in turn requires that, to provision lightpaths over such routes, larger FSs are assigned, and (ii) traversing longer distances implies TR issues, leading to the usage of less efficient MFs. Contrary to the behaviour observed in Fig. 3(a), from Fig. 3(b) it can be seen that as the value of k increases, there also occurs an increase in the aggregate FSs allocated, since there occurs an availability of longer routes with larger k values. Also, the aforementioned effect occurs more in the MF cases since the long TR permits the SA-r-RMFCSA algorithm to use long routes, as the primary aim of the algorithm is a reduction of the maximum FSs amount; however, this simultaneously results in an increase in the aggregate FSs allocated within the network. Overall, following the obtained results in Fig. 3, we set the value of k = 5 in our simulations, also noting that larger values will only result in increased execution times for the SA-r-RMFCSA algorithm.
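For the candidate-lightpath generation step, a k-shortest-paths routine of the kind used here can be sketched with NetworkX; the topology below is a made-up example, not one of the DT/TID/GEANT graphs:

```python
from itertools import islice
import networkx as nx

def k_shortest_paths(g, src, dst, k, weight="length"):
    """Return the k loopless shortest paths by total link length
    (Yen-style), as used to build the candidate set CL_d per demand."""
    return list(islice(nx.shortest_simple_paths(g, src, dst, weight=weight), k))

g = nx.Graph()
g.add_edge("A", "B", length=300)   # link lengths in km (toy values)
g.add_edge("B", "C", length=400)
g.add_edge("A", "D", length=250)
g.add_edge("D", "C", length=500)
g.add_edge("A", "C", length=900)

for path in k_shortest_paths(g, "A", "C", k=5):
    km = sum(g[u][v]["length"] for u, v in zip(path, path[1:]))
    print(path, km, "km")
```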
B. Performance Comparison of ILP Model and SA-r-RMFCSA Algorithm
In this sub-section, we compare the performance of the ILP model with that of the SA-r-RMFCSA algorithm considering the DT network. To find solutions of the ILP model, we use the CPLEX optimization software [17]. For this set of simulations we assume randomly generated demand sets of sizes 200, 400, 600, 800 and 1000, all of which follow MR-PoT. Also, for every run of the ILP model, the CPLEX solver is set to execute for a maximum time of 24 hours. We show the obtained results in Table 2 in terms of the maximum FSs amount, the aggregate FSs amount allocated, the execution times, and the corresponding relative gaps, which indicate the performance gaps in terms of the maximum FSs and the total assigned FSs.
It can be observed from Table 2 that, irrespective of the demand size, both the ILP model and SA-r-RMFCSA are able to obtain approximately similar maximum FSs amounts. When the demand size is 800 and 1000, a difference of 2 and 3 FSs, respectively, is observed between the ILP model and SA-r-RMFCSA in terms of the FSs required in any core of any MCF of the DT network. This results in relative gaps of 2.98% and 2.77%, respectively. In regard to the aggregate FSs amount allocated, it can be observed that SA-r-RMFCSA is able to obtain performances close to those shown by the ILP model, with a maximum aggregate-FSs gap of 3.41% over all the executions. As for the execution times, with larger demand values the ILP model requires more than 24 hours to obtain the solutions, whereas the maximum execution time required by the SA-r-RMFCSA heuristic is 285 seconds, which demonstrates that the ILP model indeed has a limit on its scalability.
C. Performance Comparison of SDM-b-EON with MCF versus MF Technology
Having established the suitability of the SA-r-RMFCSA algorithm in the previous sub-section, in this sub-section we use it to compare the performance of MCF and MF enabled SDM-b-EONs. It must be noted that the major difference in the use of the MCF and MF technologies lies in the fact that in the MF technology the amplified spontaneous emission (ASE) noise is a major factor that limits the TR [3], whereas in the MCF technology the TR is limited substantially due to ICXT, which results in an inefficient use of resources and also renders the advanced MFs impermissible [7]. For this set of simulations we assume that a MCF enabled SDM-b-EON has "seven", "twelve" or "nineteen" cores, whereas a MF enabled SDM-b-EON has the same numbers of fibers in every link. We consider the TID and GEANT networks, which are loaded with 5000, 10000, and 15000 demands under F-PoT or MR-PoT. For both networks, in Fig. 4 we show the obtained results in regard to the FSs amount and the aggregate allocated FSs.
It can be observed from Fig. 4 (a) and (c) that when the GEANT network is considered, for the "seven" and "twelve" cores or fibers cases, in terms of both the FSs amount and the aggregate FSs allocated, there occurs approximately no difference between the results obtained with the two technologies. With an increase in the cores amount to "nineteen", ICXT starts to affect both the 40 Gbps and 100 Gbps lightpaths, which in turn has a detrimental effect on the obtained results. Specifically, between the results obtained for the MCF and MF technologies, it results in a performance difference of 12-13% and 11-12% in terms of the FSs amount and the aggregate FSs allocated, respectively. However, the aforementioned performance gaps also occur due to the assumption of a 400 Gbps lightpath being provisioned as "four" 100 Gbps lightpaths.
From Fig. 4 (b) and (d), similar result trends in terms of both the FSs amount and the aggregate FSs allocated can be observed for the TID network. However, compared to the GEANT network, the links in the TID network are much shorter, which results in the TID SDM-b-EON enabled by MCF obtaining much closer results to its MF counterpart as compared to the GEANT SDM-b-EON. Specifically, in the case of "nineteen" cores or fibers, the difference in the obtained results remains less than 6.67% and 9.09% for the FSs amount and the aggregate FSs allocated, respectively. After having established that the performance differences between the MCF and MF enabled SDM-b-EONs occur only at larger core counts (i.e., "nineteen"), we now aim to investigate the reason for the occurrence of such differences. Specifically, we focus on the values of the BRs and the usage of MFs by the F-TPs which are under operation in the "nineteen" cores or fibers SDM-b-EON case. The obtained results are shown in Fig. 5.
It can be observed from Fig. 5 (a) that when the GEANT network is considered with the "nineteen" cores MCF, use of the advanced MFs is either very limited (e.g. 16-QAM) or does not occur at all (e.g. 64-QAM). The aforementioned occurs owing to the fact that in the GEANT network, between the source and the destination, the lightpaths are required to traverse longer distances, in addition to the presence of ICXT over the MCFs, which tightens the limit on the TR. On the other hand, it can be observed that, irrespective of the BR, almost all the operational F-TPs resort to the use of the QPSK MF. From Fig. 5 (c) it can be observed that when the GEANT network is considered with "nineteen" fibers per MF link, there occurs more usage of advanced MFs since, in this scenario, ASE noise is the major TR limiting factor. Specifically, in both the MR-PoT and F-PoT cases, F-TPs operating at 100 Gbps mostly use the 16-QAM and 64-QAM MFs. It can also be observed that no F-TP resorts to the use of the BPSK MF since, compared to the QPSK MF, irrespective of the BR, it provides no benefit in the TR. Finally, with the results shown in Fig. 5 (a) and (c), it can be inferred that the assignment of lightpaths in the network can be conducted efficiently provided advanced MFs are utilized. Through the aforementioned we are able to justify the marked differences in results between the SDM-b-EONs using the "nineteen" MCFs/MFs technologies that were observed in Fig. 4. From Fig. 5 (b) it can be observed that when the TID network is considered with the "nineteen" cores MCF, compared to the case when the GEANT network was considered, the majority of the F-TPs are able to utilize advanced MFs such as 16-QAM and 64-QAM owing to the TID's shorter link distances. Due to the aforementioned, as observed in Fig. 4, the efficiency in resource usage is approximately equal to that obtained in the "nineteen" fibers/link MF case. Further, during the simulations we also found that in the TID network with MCFs, owing to impermissible TR, no 400 Gbps request had to be provisioned as "four" 100 Gbps lightpaths; however, the aforementioned occurred multiple times in the GEANT network with MCFs, which resulted in larger numbers of F-TPs being required for every demand. Further, when the MF case was considered, the aforementioned effect was even more prominent since, at various instants, the SA-r-RMFCSA heuristic resorted to the use of longer "four" 100 Gbps lightpaths rather than utilizing a single 400 Gbps lightpath in view of minimizing the network FSs amount, however needing more F-TPs. The aforementioned effect can be reduced by incorporating the minimization of F-TPs as an aim within the optimization function, which will be the topic of interest in our future study.
IV. CONCLUSION
In the current work, we have compared the performance of MCF and MF enabled SDM-b-EONs. Initially, we designed the SDM-b-EON using an ILP formulation, followed by the proposal of the SA-r-RMFCSA heuristic algorithm in view of obtaining solutions for large sized networks in reasonable execution times. For the simulations, we considered realistic parameters and network topologies which are characterized by different node numbers and link distances. The obtained performance comparison results demonstrated that the performance of the MCF enabled SDM-b-EON is very close to that of the MF enabled SDM-b-EON. Hence, our study establishes that benefits can be obtained from the use of existing components for MCFs rather than incurring any extra expense to provision the same traffic amount, and this information is helpful to the network operators.
"year": 2019,
"sha1": "2796fc0c4d3aae16424ffe9966170309d4e047b2",
"oa_license": null,
"oa_url": "http://www.mecs-press.org/ijcnis/ijcnis-v11-n8/IJCNIS-V11-N8-2.pdf",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "5761456779608aa4a60bea65f8f5bd295cc49aaf",
"s2fieldsofstudy": [
"Engineering",
"Physics",
"Computer Science"
],
"extfieldsofstudy": []
} |
SHELXT – Integrated space-group and crystal-structure determination
SHELXT automates routine small-molecule structure determination starting from single-crystal reflection data, the Laue group and a reasonable guess as to which elements might be present.
Introduction
Although crystal structure determination by means of X-ray diffraction has had a major scientific impact for the last 100 years, it still requires the solution of the crystallographic phase problem. This problem arises because, although methods for measuring the intensities of the diffracted X-rays have made considerable progress during that time, the direct experimental measurement of their relative phases is still only rarely practicable. Small-molecule crystal structures are usually solved by the use of probability relationships involving the phases of the stronger reflections, the so-called direct methods (Sheldrick et al., 2001; Giacovazzo, 2014), or more recently by the iterative use of Fourier transforms, e.g. dual-space methods such as charge flipping (Oszlányi & Sütő, 2004; Palatinus, 2013), in which the phases are constrained by the observed reflection intensities in reciprocal space and by the properties of the electron density in real space.
Before the phase problem can be solved, the usual procedure is to determine the space group of the crystal with the help of the Laue symmetry of the diffraction pattern, the presence or absence of certain reflections (the systematic absences) and statistical tests (e.g. to distinguish between centrosymmetric and non-centrosymmetric structures). This space-group determination may be upset by the presence of dominant heavy atoms or by pseudo-symmetry affecting the intensities of certain classes of reflections, and in some cases the space group is ambiguous. For example, the space groups I222 and I2₁2₁2₁ have the same systematic absences, as do Pmmn and two different orientations of Pmn2₁.
Many dual-space methods perform at least as well when the data are first expanded to the nominal space group P1 (Sheldrick & Gould, 1995). In this paper 'P1' will be used to cover the centred triclinic non-centrosymmetric space-group settings such as C1 as well; the data do not need to be reindexed for the primitive cell. After solving the phase problem in P1, the space group can be determined using the P1 phases (Burla et al., 2000; Palatinus & van der Lee, 2008) and this turns out to be a very robust general approach. SHELXT also employs this strategy. The systematic absences are not then used for the space-group determination, but all the weak reflections are still useful for identifying the best solution. Fig. 1 summarizes the course of structure determination using SHELXT. The individual stages will now be discussed in detail. The current version of SHELXT is intended for single-crystal X-ray data and is not suitable for neutron diffraction data.
2. Solving the phase problem for data expanded to space group P1

SHELXT reads standard SHELX format .ins and .hkl files. It extracts the unit cell, Laue group (but not the space group) and the elements that are expected to be present (but not how many atoms of each). A number of options, e.g. that all trigonal and hexagonal Laue groups should be considered (-L15), may be specified by command-line switches. A summary of the possible options is output when no filename is given on the SHELXT command line, and further details are available on the SHELX home page.
The data are first merged according to the specified Laue group and then expanded to P1. In theory, SHELXT could also have been programmed to determine the Laue group, e.g. by calculating the R values or correlation coefficients when the equivalent reflections are merged. However, the Laue group has to be known to scale the data, which is an essential step for the highly focused beams now common for synchrotrons and laboratory microsources, because the effective volume of the crystal irradiated is different for different reflections and needs to be corrected for. So in practice it is best to determine the Laue group first anyway. Even though programs such as XPREP (Bruker AXS, Madison, WI 53711, USA) are no longer required to determine the space group, it is still necessary to identify the correct unit cell and metric symmetry.
Dual-space iteration starting from a Patterson superposition
The P1 dual-space recycling in SHELXT may start with random phases, but the default option of starting from a Patterson superposition minimum function (Buerger, 1959; Sheldrick, 1997) is usually more effective. Two copies of the sharpened Patterson function, displaced from each other by a strong Patterson vector, are superimposed and the minimum value of the two is taken at each grid point. The resulting map is used as the initial electron density for the dual-space recycling. In an ideal case it is a double image of the structure consisting of 2N peaks, where N is the number of unique atoms, but the space-group symmetry has been lost. Since the dual-space recycling is being performed in P1 anyway, this is a good start, and 2N is a significant reduction from the N² peaks in the original Patterson. The subsequent dual-space recycling is performed using modified structure factors, where E is the normalized structure factor, and a new density map is calculated by a hybrid difference Fourier synthesis with phases φ_c and coefficients involving G_c, where φ_c and G_c are obtained by Fourier transformation of the current map. The default values for m and q are 3 and 0.5, respectively, but may be changed by the user. Based on experience with other structure-solution programs, q should probably be larger for large equal-atom structures and smaller for structures involving heavy atoms (to reduce Fourier ripples), but in practice it is rarely necessary to change the default values.
SHELXT adds unmeasured data above and below the resolution limit of the data in the .hkl file, similar to the free lunch method described by Caliandro et al. (2005). This enables structures to be solved at an earlier stage in the data collection and is particularly useful for data collected with diamond-anvil high-pressure cells, with which it is not always possible to collect complete data. It reduces the effects of series-termination errors in the Fourier syntheses, but tends to make the electron-density integration used to assign the element types less reliable.
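The exact modified-structure-factor and hybrid-synthesis formulas did not survive extraction, so the sketch below shows only the generic dual-space skeleton that SHELXT's recycling elaborates on (a 1D toy grid with numpy FFTs; the plain positivity constraint stands in for SHELXT's more sophisticated density modification):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 1D "structure": a few atoms on a 64-point grid.
n = 64
true_rho = np.zeros(n)
true_rho[[5, 20, 33, 50]] = [8.0, 6.0, 6.0, 7.0]
g_obs = np.abs(np.fft.fft(true_rho))          # observed amplitudes only

# Start from random phases (SHELXT would start from a Patterson
# superposition minimum function instead).
phases = rng.uniform(0, 2 * np.pi, n)
f = g_obs * np.exp(1j * phases)

for cycle in range(200):
    rho = np.fft.ifft(f).real                 # reciprocal -> real space
    rho[rho < 0] = 0.0                        # impose positivity/atomicity
    f_calc = np.fft.fft(rho)                  # real -> reciprocal space
    # Keep the calculated phases, reimpose the observed amplitudes.
    f = g_obs * np.exp(1j * np.angle(f_calc))

cc = np.corrcoef(np.abs(f_calc), g_obs)[0, 1]
print(f"correlation coefficient CC = {cc:.3f}")
```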
The random omit procedure
Omit maps are frequently used in macromolecular crystallography to reduce model bias. A small part of the structure is deleted and the rest is refined to reduce memory effects, then a new difference-density map is generated and interpreted. This concept plays an important role in SHELXT, but because no model is available at the P1 dual-space stage, it is implemented differently. The following density modification is performed unless otherwise specified by the user. A mask M(x) is constructed consisting of Gaussian-shaped peaks of unit volume at the positions of the maxima in the electron-density map. A small number of these Gaussian peaks are then deleted from the mask at random, usually every third dual-space cycle, and the new density is obtained by multiplying the original density ρ(x) with the mask: ρ′(x) = ρ(x)M(x) at each grid point x in the unit cell. This allows the random omit method to be implemented efficiently using fast Fourier transforms (FFTs) in both directions. Imposing a shape function in this way improves the atomicity of the map. Negative density is truncated to zero, a common theme in phase improvement by density modification (Shiono & Woolfson, 1992). Compared with charge flipping, the stronger imposition of atomicity probably allows the resolution requirements to be relaxed. On the other hand, charge flipping should be better for the solution of severely disordered or modulated structures, precisely because it is not atomistic! To decide which P1 solution is best, three criteria are considered: (a) The correlation coefficient CC between G_o and G_c, where G_c are the amplitudes obtained by Fourier back-transformation of the modified electron density. (b) The structure factors G_c are normalized to give E_c, and R_weak is calculated as the average value of E_c² for the 10% of unique reflections (including systematic absences) with the smallest observed normalized structure factors E (Burla et al., 2013). In this way, the weak reflections can still play a decisive role in the structure solution even though they were not used directly to determine the space group. (c) The chemical figure of merit CHEM is calculated by performing a peak search and calculating all bond angles involving two distances in the range 1.1 to 1.8 Å. CHEM is the fraction of these angles that lie between 95 and 135° (Langs & Hauptman, 2011). The combined figure of merit CFOM combines these three criteria, where X is 1.0 unless reset by the user. For organic or organometallic structures, especially for low resolution or incomplete data, an alternative weighting is sometimes better, but this is not the default option because it is not appropriate for inorganic and mineral structures. If CFOM is less than a preset threshold, the program refines further sets of starting phases, increasing the number of iterations each time this is done.
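A minimal numpy rendering of the first two figures of merit (CC and R_weak), assuming arrays of observed and back-transformed amplitudes are at hand; note that the normalization step here is simplified to scaling by the root-mean-square amplitude, which only approximates crystallographic E-value normalization:

```python
import numpy as np

def figures_of_merit(g_obs, g_calc, e_obs, weak_frac=0.10):
    """CC between observed and back-transformed amplitudes, and R_weak,
    the mean E_c^2 over the weakest `weak_frac` of reflections."""
    cc = np.corrcoef(g_obs, g_calc)[0, 1]
    e_calc = g_calc / np.sqrt(np.mean(g_calc ** 2))  # crude normalization
    weak = np.argsort(e_obs)[: max(1, int(weak_frac * len(e_obs)))]
    r_weak = np.mean(e_calc[weak] ** 2)
    return cc, r_weak

rng = np.random.default_rng(1)
g_obs = rng.gamma(2.0, 1.0, 500)          # fake amplitude data
g_calc = g_obs + rng.normal(0, 0.2, 500)  # a "good" trial solution
e_obs = g_obs / np.sqrt(np.mean(g_obs ** 2))
print(figures_of_merit(g_obs, g_calc, e_obs))
```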
Using phases to find the origin shift and space group
The idea of trying all possible space groups in a specified Laue group is also sometimes used in macromolecular crystal structure determination. For example, if the crystal is orthorhombic P, Laue group mmm, and only the Sohncke space groups need to be considered, a molecular-replacement program can be asked to test all eight possibilities. If only one of the eight gives a solution with good figures of merit, both the crystal structure and the space group have been determined! For chemical problems the situation is more interesting, because there are 30 possible orthorhombic P space groups and a total of 120 possibilities when different orientations of the axes are taken into account (as in SHELXT).
The procedure used in SHELXT to find space groups and origin shifts that are consistent with the P1 phases is based closely on the methods proposed by Burla et al. (2000) and Palatinus & van der Lee (2008), so it only needs to be summarized here. For a reflection h with P1 phase φ_h and its mth symmetry equivalent h_m = hR_m with P1 phase φ_m, where R_m is a 3 × 3 rotation matrix and t_m is the corresponding translation vector, we define a phase discrepancy which, for the correct space group and the correct origin shift Δx, should be close to zero. To facilitate comparisons, the figure of merit is defined as the F²-weighted sum of the squared discrepancies over all pairs of equivalents for all reflections, normalized so that it should be unity for random phases. It should be as small as possible for the correct combination of space group and origin shift.
SHELXT first calculates this figure of merit for the centrosymmetric triclinic space group P-1; this value will be referred to as the P-1 value. If the P-1 value is less than about 0.3, the space group is probably centrosymmetric. For centrosymmetric space groups, the P-1 origin shift may be used to place a centre of symmetry on the origin; however, SHELXT has to take into account that the space group may possess more than one non-equivalent centre of symmetry. For P-1, the figure of merit is calculated with a FFT, and for non-centrosymmetric, non-polar space groups a two-dimensional grid search followed by a one-dimensional search is performed to speed up the calculation. The space-group search is performed in parallel for all space groups that need to be tested. Although the solution with the lowest value is often the correct one, only unlikely solutions with a figure of merit greater than a specified value (default 0.3) are eliminated before going on to the next stage.
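As an equivalent real-space illustration (not SHELXT's reciprocal-space formula, which was elided above): a candidate symmetry operation x → Rx + t and origin shift Δx can be scored by how well the shifted density map agrees with its own symmetry image; the grid, the operation and the scoring below are all toy choices:

```python
import numpy as np

def symmetry_agreement(rho, R, t_frac, dx_frac):
    """Correlate a density grid with its image under x -> R x + t after
    applying an origin shift dx (all in fractional coordinates)."""
    n = np.array(rho.shape)
    shifted = np.roll(rho, np.round(dx_frac * n).astype(int), axis=(0, 1, 2))
    idx = np.indices(rho.shape).reshape(3, -1)          # all grid points
    mapped = (R @ idx + (t_frac * n)[:, None]).round().astype(int) % n[:, None]
    image = shifted[mapped[0], mapped[1], mapped[2]].reshape(rho.shape)
    return np.corrcoef(shifted.ravel(), image.ravel())[0, 1]

# Toy map with an exact twofold axis along z (x,y,z -> -x,-y,z), origin at 0.
n = 16
rho = np.zeros((n, n, n))
for (x, y, z) in [(2, 5, 7), (4, 1, 3)]:
    rho[x, y, z] = rho[(-x) % n, (-y) % n, z] = 1.0
R = np.array([[-1, 0, 0], [0, -1, 0], [0, 0, 1]])
t = np.zeros(3)

best = max(((symmetry_agreement(rho, R, t, np.array([i / n, j / n, 0.0])), i, j)
            for i in range(n) for j in range(n)))
print("best agreement and shift:", best)
```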
Assigning chemical elements to the electron-density peaks
Each solution with a reasonable figure-of-merit value is first subjected to ten cycles of density modification in the chosen space group after applying the origin shift. This density modification consists only of averaging the phases of equivalent reflections taking the space-group symmetry into account and resetting negative density to zero. A peak search is then performed, and the density inside a sphere (default radius 0.7 Å) about each peak is summed. It is better to use integrated densities rather than peak heights because the atoms may have different atomic displacement parameters. However, these integrated densities are not on an absolute scale, so the problem is how to set the scale so that they correspond to atomic numbers and the elements can be assigned. SHELXT attempts to set the scale as follows, going on to the next test only if the previous tests are negative: (a) If carbon is specified as one of the elements present, the program searches for peaks with similar integrated densities separated from each other by typical C-C distances (i.e. between 1.25 and 1.65 Å). If enough are found, the scale is set so that they will have average atomic numbers of 6.
(b) If boron is expected, boron cages with distances between 1.65 and 1.8 Å are searched for.
(c) A search is made for oxyanions. The oxygen atoms should have similar integrated densities to each other and similar distances to a central atom.
(d) If the above tests are negative, it is assumed that the heaviest atom expected corresponds to the peak with the highest integrated density. This can run into trouble if, for example, there is an unexpected bromide or iodide ion in the structure and it has not been possible to fix the scale by one of the above methods.
When the density scale has been found, it is used to assign elements to the remaining atoms. If it then appears that there are high-density peaks that cannot be assigned because only light atoms were expected, chlorine, bromine or iodine atoms are added. Some rudimentary checks are made to ensure that the element assignments are chemically reasonable.
Isotropic refinement and absolute structure determination
After the atoms have been assigned, an isotropic refinement is performed using a conjugate-gradient solution of the least-squares normal equations. This is similar to the CGLS refinement in SHELXL (Sheldrick, 2008, 2015) and is performed in parallel. For non-centrosymmetric space groups this is followed by the determination of the Flack parameter (Flack, 1983) by the quotient method (Parsons et al., 2013) and inversion of the structure if the value of the Flack parameter is greater than 0.5. It is thus very likely that the structure determined by SHELXT will correspond to the correct absolute structure (so far no examples to the contrary have been reported). If the P-1 value is below 0.3 and no atom heavier than scandium is expected, the program stops after finding a plausible centrosymmetric solution. The -a command-line switch may be used to force the program to test all space groups in the assumed Laue group.
Building the structure
The following algorithm used to assemble the structure is diabolically simple but almost always builds and clusters the molecules in a way that is instantly recognizable. No covalent radii etc. are used, so the algorithm is independent of the element assignments.
(a) Generate the SDM (shortest-distance matrix). This is a triangular matrix of the shortest distances between unique atoms, taking symmetry into account.
(b) Set a flag to −1 for each unique atom, then change it to +1 for one atom (it does not matter which).
(c) Search the SDM for the shortest distance for which the product of the two flags is −1. If none, exit.
(d) Symmetry transform the atom with flag −1 corresponding to this distance so that it is as near as possible to the atom with flag +1, then set its flag to +1.
(e) Go to (c).

The next stage is to centre the cluster of molecules optimally in the unit cell. This is complicated, but makes extensive use of the tables of alternative origins for the different space groups given in Chapter 3 of Giacovazzo (2014). For example, for space group I4̄m2 there are four alternative origins (0, 0, 0; 0, 0, ½; ½, 0, ¼; ½, 0, ¾), but for I4̄2m there are only two (0, 0, 0; 0, 0, ½). These are combined with the lattice centring (in this case 0, 0, 0; ½, ½, ½). For polar space groups, the optimal position along the polar direction(s) (e.g. along the body diagonal of the unit cell for space group R3 indexed on a primitive rhombohedral lattice) that minimizes the maximum distance of any atom from the centre of the unit cell is determined.
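Steps (a)-(e) above translate almost line-for-line into code. The sketch below is a simplified stand-in: it assumes an orthogonal unit cell of unit edge so that fractional-coordinate distances are metric, and the only symmetry it applies is translation by whole unit cells, whereas the real algorithm uses the full set of space-group operators; the function name is mine:

```python
import itertools
import numpy as np

def assemble(frac_coords):
    """Cluster atoms by the flag/shortest-distance rule of steps (a)-(e).
    For brevity the only 'symmetry' applied is lattice translation."""
    xyz = [np.asarray(a, float) for a in frac_coords]
    shifts = np.array(list(itertools.product((-1, 0, 1), repeat=3)))
    flag = [-1] * len(xyz)
    flag[0] = +1                                 # seed atom: choice is arbitrary
    while True:
        best = None                              # (distance, moved_index, new_xyz)
        for i, j in itertools.combinations(range(len(xyz)), 2):
            if flag[i] * flag[j] != -1:          # need one placed, one unplaced
                continue
            placed, moved = (i, j) if flag[i] == +1 else (j, i)
            images = xyz[moved] + shifts         # all nearby translated images
            d = np.linalg.norm(images - xyz[placed], axis=1)
            k = int(np.argmin(d))
            if best is None or d[k] < best[0]:
                best = (d[k], moved, images[k])
        if best is None:                         # step (c): no mixed pair left
            return xyz
        _, moved, new_xyz = best                 # steps (d)-(e): move and flag
        xyz[moved] = new_xyz
        flag[moved] = +1

print(assemble([(0.1, 0.1, 0.1), (0.95, 0.1, 0.1), (0.5, 0.9, 0.9)]))
```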
Examples
The first example is an organoselenium compound (Clegg et al., 1980) for which an extract from the .lxt listing file from SHELXT is shown in Fig. 2. Four different Patterson superposition vectors were used by default to start four dual-space structure solution attempts in parallel. This was a good choice because the computer had an Intel i7 processor with four cores. On the evidence of the combined figure of merit CFOM, one of the four (try 1) is a good P1 solution. The correlation coefficient CC and the chemical figure of merit CHEM clearly indicate the correct solution, but Rweak is less clear. N is the number of peaks used in the density modification, Sig(min) is the height of peak N divided by the r.m.s. (root-mean-square) Fourier map density, and Vol/N is the volume per peak in Å³.
The best phase set was then used to search for the space group, and three space groups are reported (Fig. 3); the other 11 space groups tested were rejected because one or more figures of merit were too high. The space group P2₁ is clearly indicated by the values of R1, Rweak and the Flack parameter, so there can be little doubt that it is correct, and in fact all the atoms are assigned to the correct elements. Note that although α₀ is less than 0.3, the non-centrosymmetric space groups were searched as well because an atom (Se) heavier than scandium was specified on the SFAC instruction.

Figure 2
An extract from the .lxt listing file for an organoselenium compound.

Figure 3
Possible space groups for the organoselenium compound.

The second example (Müller et al., 2006) involves a reorientation of the unit cell. Since two orientations of Pmn2₁ have the same systematic absences, both (and possibly also the centrosymmetric Pmmn) would have had to be tried for a conventional structure solution. SHELXT finds only one solution and all atoms are correct (Fig. 4). The Flack parameter is still rather approximate but is sufficient to indicate the correct absolute structure; it improves on anisotropic refinement including the hydrogen atoms.
The third example (Walker et al., 1999) contains a bromine atom, and so the non-centrosymmetric space group P1 is also tested, despite the good R1 and α values for the centrosymmetric solution (Fig. 5). In fact, this structure is pseudo-centrosymmetric and contains a mixture of diastereoisomers that imitates a centre of symmetry. The P1 solution is completely correct. Both solutions have similar figures of merit because the main difference is the position of one carbon atom that appears to be disordered in P1̄ but not in P1, but the Flack parameter strongly indicates P1.
The last example shows what can go wrong. This structure was published by Barkley et al. (2011) in the non-centrosymmetric space group P6̄2c, but there are two warning signs: checkCIF (Spek, 2009) detects an inversion centre (a B alert) and the Flack parameter is dubious: the current SHELXL (Sheldrick, 2015) gives a value of 0.46 (11). Often a value close to 0.5 indicates a centrosymmetric structure. At first glance, SHELXT appears to indicate P6̄m2 because of a significantly lower R1 value. Unfortunately, the Flack parameter cannot be determined by SHELXT for this space group because the deposited data had been merged in a different non-centrosymmetric point group (hence 'no Fp' in Fig. 6).
However, neither P6̄2c nor P6̄m2 is correct! Basically, all the solutions are the same structure, and the correct space group is the centrosymmetric P6₃/mmc, of which all the other space groups are subgroups. The cause of the debacle is that only for P6̄m2 were the elements assigned completely correctly, and hence this space group has a lower R1 value. For the correct space group P6₃/mmc the manganese atom had been incorrectly assigned as calcium. With the correct element assignments all the figures of merit would have been very similar for all the space groups. In such cases the highest-symmetry (centrosymmetric) space group is almost always correct.
Program development and distribution
SHELXT is compiled with the Intel ifort Fortran compiler using the statically linked MKL library and is particularly suitable for multi-CPU computers. It is available free to academics for the 32- or 64-bit Windows, 32- or 64-bit Linux and 64-bit Mac OS X operating systems. The program may be downloaded as part of the SHELX system via the SHELX home page (http://shelx.uni-ac.gwdg.de/SHELX/), which also provides documentation and other useful information. Users are recommended to view the 'recent changes' section on the home page from time to time.
The initial development of SHELXT was based on a test databank of about 650 structures, mostly determined in Göttingen, covering a wide range of problems. It has also been tested by more than 200 beta-testers for up to three years, in the course of which several thousand structures were solved (and a few not solved). It is difficult to generalize, but the correct space group was identified in about 97% of cases, and for about half of the structures every atom was located and assigned to the correct element. Most of the remaining structures were basically correct, the most common errors being carbon assigned as nitrogen or vice versa. Poor solutions were sometimes obtained when the heavy atoms corresponded to a centrosymmetric substructure but the full structure possessed a lower symmetry. It is always essential to check the element assignments, especially if the program has added extra elements, and also to check for the presence of disordered solvent molecules that may have been missed. The biggest danger is that inexperienced users may assume that the program is always right!

Figure 4
An example where reorientation of the unit cell occurs.

Figure 5
Results for a pseudo-centrosymmetric bromine compound containing a mixture of diastereoisomers.

Figure 6
An example showing difficulties that can be encountered when trying to determine the space group.

The author is very grateful to the many SHELXT beta-testers for patiently reporting bugs, suggesting improvements and providing interesting data sets for testing. He is particularly grateful to Bruker AXS for their help with the logistics of the three-year beta-test, and for the use of their email list for rapid communication with the beta-testers. He thanks the Volkswagen-Stiftung and the state of Niedersachsen for the award of a Niedersachsen (emeritus) Professorship. | 2016-05-12T22:15:10.714Z | 2015-01-01T00:00:00.000 | {
"year": 2015,
"sha1": "4ba113563011716788e947e34ac4c68520d2525c",
"oa_license": "CCBY",
"oa_url": "http://journals.iucr.org/a/issues/2015/01/00/sc5086/sc5086.pdf",
"oa_status": "HYBRID",
"pdf_src": "PubMedCentral",
"pdf_hash": "4ba113563011716788e947e34ac4c68520d2525c",
"s2fieldsofstudy": [
"Chemistry"
],
"extfieldsofstudy": [
"Mathematics",
"Medicine"
]
} |
216145466 | pes2o/s2orc | v3-fos-license | Longitudinal data reveal strong genetic and weak non-genetic components of ethnicity-dependent blood DNA methylation levels
ABSTRACT Epigenetic architecture is influenced by genetic and environmental factors, but little is known about their relative contributions or longitudinal dynamics. Here, we studied DNA methylation (DNAm) at over 750,000 CpG sites in mononuclear blood cells collected at birth and age 7 from 196 children of primarily self-reported Black and Hispanic ethnicities to study race-associated DNAm patterns. We developed a novel Bayesian method for high-dimensional longitudinal data and showed that race-associated DNAm patterns at birth and age 7 are nearly identical. Additionally, we estimated that up to 51% of all self-reported race-associated CpGs had race-dependent DNAm levels that were mediated through local genotype and, quite surprisingly, found that genetic factors explained an overwhelming majority of the variation in DNAm levels at other, previously identified, environmentally-associated CpGs. These results indicate that race-associated blood DNAm patterns in particular, and blood DNAm levels in general, are primarily driven by genetic factors, and are not as sensitive to environmental exposures as previously suggested, at least during the first 7 years of life.
Recently, results from cross-sectional studies have shown that DNAm in blood cells differs across racial and ethnic groups at birth [29,30] and later in life [31-34], suggesting that it might contribute to race/ethnicity-associated health disparities [30,31]. Because racial and ethnic group definitions reflect both common genetic ancestries and shared diet and exposure histories [35-38], it has been postulated that race/ethnicity-associated blood DNAm patterns are an amalgam of genetic and non-genetic components, and understanding the contribution of each can help inform the relative contribution of genetic and sociocultural diversity to variation in DNAm levels [31]. For example, a previous study [31] partitioned variation in DNAm levels into genetic and non-genetic sources, and concluded that non-genetic, sociocultural sources had a significant impact on race/ethnicity-associated blood DNAm levels. However, that study, and all previous studies that identified race/ethnicity-associated DNAm marks, relied on cross-sectional data and were therefore not able to assess the temporal stability of those marks. Understanding the stability of race/ethnicity-dependent DNAm present at young ages can help to determine the extent to which race/ethnicity-dependent properties of epigenetic-driven diseases can be attributed to the innate or acquired methylome [29], and identify CpGs whose DNAm is robust or sensitive to accumulated exposures. We therefore sought to fill this gap by first identifying the factors contributing to, and the temporal stability of, race/ethnicity-dependent blood DNAm levels, and consequently, determining the relative contributions of genetic and environmental factors to the variation in blood DNAm levels in general.
To do so, we studied global DNAm patterns at over 750,000 CpG sites on the Illumina EPIC array in cord blood mononuclear cells (CBMCs) collected at birth and in peripheral blood mononuclear cells (PBMCs) collected at 7 years of age from 196 children participating in the Urban Environment and Childhood Asthma (URECA) birth cohort study [39,40]. This cohort is part of the NIAID-funded Inner City Asthma Consortium and is comprised of children primarily of Black and Hispanic self-reported ethnicity, with a mother and/or father with a history of at least one allergic disease, and living in low socioeconomic urban areas (see O'Connor et al. [40] for details of enrolment criteria). Mothers of children in the URECA study were enrolled during pregnancy and children were followed from birth through at least 7 years of age.
The longitudinal design of the URECA study provided us with the resolution to partition genetic from non-genetic effects on race/ethnicity-associated DNAm patterns, and yielded new insight into the factors affecting DNAm patterns at CpG sites in mononuclear (immune) cells during formative developmental years in ethnically admixed children. Using a novel statistical method that provides a general framework for analysing longitudinal genetic and epigenetic data, we show that while DNAm levels vary with chronological age, race/ethnicity-dependent DNAm patterns are overwhelmingly conserved over the first 7 years of life and that these patterns are strongly associated, and often mediated, by local genotype. Relatedly, the variation in DNAm levels at previously reported robust exposure-associated CpGs was overwhelmingly dominated by genetic rather than environmental factors in these children.
Considering the results of our study and those of a recently published comprehensive review on environmental epigenetics research [41], we suggest that race/ethnicity-dependent blood DNAm levels in particular, and blood DNAm levels in general, are primarily driven by genetic factors, and are not as responsive to environmental exposures as previously suggested [31], at least during the first 7 years of life.
Results
Our study included 196 children participating in the URECA cohort who had high-quality DNA from both CBMCs and PBMCs, collected at birth and age 7, respectively, available for our study [39] (see Methods). The URECA children were classified by parent- or guardian-reported race into one of the following categories: Black, n = 147; Hispanic, n = 39; White, n = 1; Mixed race, n = 7; and Other, n = 2. A description of the study population is shown in Table 1. Genetic ancestry, assessed using principal component analysis (PCA), revealed varying proportions of African and European ancestry along PC1 (Figure 1). Because there was little separation along PC2, and no genome-wide significant correlation between PC2 through PC10 and DNAm levels at either age, we defined PC1 as inferred genetic ancestry. The reported races of the children are also shown in Figure 1. We included only the 186 self-reported Black and Hispanic children in subsequent analyses of reported race.
Reported race effects on DNA methylation patterns are conserved in magnitude and direction between birth and age 7
We first attempted to determine the temporal stability of reported race-associated DNAm patterns by addressing three questions. What is the effect of reported race on DNAm levels at individual CpG sites at birth and age 7? Are the directions and magnitudes of these effects conserved from birth to age 7? Do the effects at birth and age 7 differ significantly? While these questions are important in their own right, their answers can also help determine the nature of these reported race-associated patterns. For example, race-associated DNAm levels that differ at birth and age 7 might reflect race-dependent exposure histories, while race-associated DNAm patterns that are conserved may be genetic in nature, since genetically-dependent DNAm patterns are conserved from birth to later childhood [42]. Standard hypothesis testing can be used to answer the first question but is not appropriate for answering the second or third, because failure to reject the null hypothesis that the effects are equal at birth and age 7 does not imply the null hypothesis is true. Additionally, because our studies were conducted in CBMCs at birth and PBMCs at age 7, DNAm levels at birth and age 7 may differ slightly due to differences in cell composition [43]. To address these issues, we built a Bayesian model (see Model (1) in Methods) and let the data determine both the strength of the effect of reported race (based on self-report) on DNAm levels and how similar the effects are at birth and age 7. We then answered the above three questions by defining and estimating the conserved (con) and discordant (dis) sign rates for each CpG g = 1, …, 784,484:

con_g = posterior probability that CpG g's reported race effects at birth and age 7 were non-zero, had the same sign, and the sign was estimated correctly;

dis_g = posterior probability that the reported race effect for CpG g was non-zero at one age and zero or in the opposite direction at the other age.

For a given posterior probability threshold, these quantities partition the reported race-associated CpGs into two groups: those whose reported race effects were non-zero and conserved from birth to age 7, and those whose reported race effects were different at birth and age 7. Detailed descriptions of our model and estimation procedure are provided in Methods and in the Supplementary Material. Supplemental Figure S1 shows how the conserved sign rate and standard P values compare. After fitting the relevant parameters in the model to the data, we were able to estimate the fraction of CpGs with non-zero reported race effects at both ages and assign them to one of four possible bins: the two effects were completely unrelated (ρ = 0), moderately similar (ρ = 1/3), very similar (ρ = 2/3), or identical (ρ = 1). Note that if a non-trivial fraction of CpG sites had ancestry effects that were in opposite directions at birth and age 7, they would be assigned to the first bin (ρ = 0). In fact, we estimated that only 0.2% of the CpGs with non-zero reported effects at both ages had unrelated or moderately similar reported race effects, whereas 30.7% fell in the very similar bin and 69.1% had identical reported race effects at birth and age 7 (Supplemental Figure S2). These data indicate that when reported race effects on DNAm levels are present (i.e., non-zero) at both birth and age 7, they tend to be very similar or exactly the same at both ages with respect to both direction and magnitude.
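To make the definitions concrete, the sketch below estimates con_g and dis_g for a single CpG from joint posterior draws of its two effects. This is an illustrative approximation, not the authors' estimator (which evaluates the posterior analytically under the mixture prior; see Methods): the eps argument stands in for the prior's point mass at zero, and the simulated "posterior" is a toy:

```python
import numpy as np

def sign_rates(b0, b7, eps=0.0):
    """Estimate con_g and dis_g for one CpG from joint posterior draws of
    its reported-race effects at birth (b0) and age 7 (b7)."""
    b0, b7 = np.asarray(b0), np.asarray(b7)
    con = max(np.mean((b0 > eps) & (b7 > eps)),    # both effects positive ...
              np.mean((b0 < -eps) & (b7 < -eps)))  # ... or both negative
    zero0, zero7 = np.abs(b0) <= eps, np.abs(b7) <= eps
    opposite = b0 * b7 < 0
    # non-zero at one age while zero or opposite-signed at the other
    dis = np.mean((~zero0 & (zero7 | opposite)) | (~zero7 & (zero0 | opposite)))
    return float(con), float(dis)

rng = np.random.default_rng(0)
draws0 = rng.normal(0.40, 0.10, 5000)   # toy posterior: positive at both ages
draws7 = rng.normal(0.35, 0.10, 5000)
print(sign_rates(draws0, draws7))       # con near 1, dis near 0
```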
We then estimated the conserved and discordant sign rates for all 784,484 probes and classified a CpG as a reported race-associated CpG (RR-CpG) if its conserved or discordant sign rate was above 0.80 (i.e., con_g ≥ 0.8 or dis_g ≥ 0.8). At this threshold, we identified 2,162 RR-CpGs, 2,157 (99.8%) of which were conserved in sign (con_g ≥ 0.8). Compared to self-reported Hispanic children, self-reported Black children tended to have higher DNAm levels at 1,288 (60%) of the conserved RR-CpGs (P = 8.6 × 10⁻³⁸). This trend replicated when we substituted inferred genetic ancestry for reported race and is in accordance with previous observations [6,33], indicating individuals with more African ancestry tend to have overall more DNAm. Interestingly, there was an under-enrichment of RR-CpGs in CpG islands (P = 3.10 × 10⁻¹²), which mirrors the observation that CpGs whose DNAm is under genetic control typically lie outside of CpG islands [44]. The fact that only 5 of the 2,162 RR-CpGs had discordant reported race effects at birth and age 7 (dis_g ≥ 0.8) corroborates the observations made in the previous paragraph and answers the second question in the affirmative: if DNAm levels are associated with reported race at birth, the magnitude and direction of the effects are almost certainly conserved at age 7 (and vice versa).
Inferred genetic ancestry has a larger effect on DNA methylation than does self-reported race
The observed association between self-reported race and DNAm levels may reflect differences in environmental exposures [31,33], due to associations of race or ethnicity with socio-cultural, nutritional, and geographic exposures, among others [35-38]. In fact, a previous cross-sectional study suggested that self-reported ethnicity explained a substantial proportion of the variance of blood DNAm levels measured in Latino children of diverse ethnicities [31]. They concluded that ethnicity captured genetic, as well as socio-cultural and environmental, differences that influence DNAm levels. If this were the case in the URECA children, the effect of inferred genetic ancestry on DNAm levels should be comparable to that of reported race. To assess this possibility in the URECA children, we repeated the analyses described above but substituted inferred genetic ancestry for reported race. This analysis revealed 8,597 inferred genetic ancestry-associated CpGs (IGA-CpGs), of which 8,579 (99.8%) were conserved in sign (con_g ≥ 0.8). This was significantly more than the 2,162 RR-CpGs identified in the reported race analysis above (Figure 2(a,b)), and we show in the Supplement that this difference is robust to any differences between the powers of the reported race and inferred genetic ancestry analyses.
To further explore this finding, we examined the overlap between RR-CpGs and IGA-CpGs (Figure 2(c)). Because reported race is an estimate of inferred genetic ancestry, there is a substantial overlap between IGA-CpGs and RR-CpGs. Contrary to the results from the previous study [31], which estimated that only 35% of their ethnicity-associated CpGs were also genetic ancestry-associated CpGs (Figure 5(a) in [31]), 66% of the RR-CpGs in our study were also IGA-CpGs, and therefore represent only a subset of the IGA-CpGs. This indicates that while IGA-CpGs include most RR-CpGs, reported race does not capture most of the variation in DNAm levels attributable to genetic ancestry in these children.

Figure 2
Overlapping ancestry CpGs at birth and at age 7. (a) Self-reported race-associated CpGs (RR-CpGs) with con_g ≥ 0.8 (violet) or dis_g ≥ 0.8 (red or blue). A discordant RR-CpG was classified as significant at birth but not at age 7 (blue) if the marginal posterior probability that the effect was non-zero at birth was greater than that at age 7; discordant RR-CpGs that were significant at age 7 but not at birth (red) were defined analogously. (b) The same as (a), but for inferred genetic ancestry-associated CpGs (IGA-CpGs). (c) The overlap between RR-CpGs (con_g ≥ 0.8 or dis_g ≥ 0.8) and IGA-CpGs (con_g ≥ 0.8 or dis_g ≥ 0.8).
The differences between our results and those reported in the aforementioned study may be due to the fact that sample collection site explained 80% of the variance in Mexican versus Puerto Rican ethnicity in [31], but was not accounted for in their analyses. The fact that sample collection site was associated with the DNAm levels of 865 CpGs at birth or age 7 at a 5% FDR in our study suggests that sample collection site could have confounded the relationship between ethnicity and DNAm in the previous study (see pp 7-8 in the Supplement).
The association between DNA methylation and reported race is largely genetically driven
To further address the question of whether reported race effects on DNAm levels at either birth or age 7 were primarily due to genetic variation or to environmental exposures, we used local genetic variation (within 5 kb of a CpG site) and DNAm data at birth and age 7 in the 147 self-reported Black children in our study to map methylation quantitative trait loci (meQTLs). Of the 519,696 CpGs within 5 kb of a SNP, 65,068 and 70,898 had at least one meQTL in CBMCs at birth and in PBMCs at age 7, respectively, at an FDR of 5%. In addition, 51% of all RR-CpGs with at least one SNP in the ±5 kb window had at least one meQTL at birth or age 7 at an FDR of 5%, which was a significant enrichment when compared to the 17% observed for non-RR-CpGs (Figure 3(a,b)).
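The 51% vs 17% comparison is a standard two-by-two enrichment test. The sketch below reproduces the flavour of that comparison with hypothetical counts (the totals are invented; only the percentages come from the text), using Fisher's exact test rather than whatever test the authors used:

```python
from scipy.stats import fisher_exact

# Hypothetical counts in the spirit of the reported 51% vs 17% split:
# rows = RR-CpG / non-RR-CpG, cols = has meQTL / no meQTL.
table = [[510, 490],        # 51% of 1,000 RR-CpGs with an meQTL
         [17000, 83000]]    # 17% of 100,000 non-RR-CpGs
odds_ratio, p_value = fisher_exact(table, alternative="greater")
print(f"odds ratio = {odds_ratio:.2f}, enrichment P = {p_value:.3g}")
```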
To provide additional evidence that local genotype mediates the effect of reported race on DNAm levels, we used logistic regression with the genotype of each SNP within ±5 kb of a RR-CpG as the response. The goal was to determine the fraction of RR-CpGs at which the observed variation was mediated through local genotype, i.e., RR-CpGs with both edges a and c in Figure 3(a). Since genotype is highly correlated with race, most SNPs will possess edge c. Therefore, a reasonable upper bound for this quantity is 51%, the fraction of RR-CpGs with at least one meQTL in their ±5 kb window. To determine a lower bound, we used the results of the above-mentioned logistic regression to conservatively estimate that at least 26% of all RR-CpGs with at least one SNP in their ±5 kb windows had both edges a and c (pp 9-11 in the Supplement). Interestingly, substituting inferred genetic ancestry for self-reported race in the above analysis yielded nearly identical upper and lower bounds, providing evidence for local genotype mediating the effects of reported race on DNAm levels at RR-CpGs.

Figure 3
(a) … the DNAm (m) at a CpG site, the genotype (g) at the SNP within ±5 kb of the CpG that had the smallest meQTL P value, and self-reported race (RR); each graph corresponds to a unique CpG. (b) Plots of the meQTL P value for edge a in CBMCs at birth, where CpGs were stratified by whether or not each was an RR-CpG (con_g ≥ 0.8 or dis_g ≥ 0.8). The ten enlarged red circles are just for visual aid.
Genetic and biological factors explain most of the variation in blood DNA methylation levels
Given the suggested genetic nature of race/ethnicity-dependent blood cell DNAm levels, we next sought to determine the relative contributions of genetic variation, age and environmental factors to CBMC and PBMC DNAm levels in general at birth and age 7 in the URECA cohort. First, we identified 2,836 gestational age-related CpGs at birth and 16,172 age-related CpGs (CpGs whose DNAm levels changed from birth to age 7) at 5% FDRs. These two sets of CpGs were strongly enriched for CpGs used to predict gestational age in Knight et al. [21] and to predict chronological age in Horvath [18], as well as for CpGs whose blood DNAm levels changed from birth to age 5 in Pérez et al. [45] (Figure S3 in the Supplement). Moreover, the estimates of the age effects among age-related CpGs in our study showed the same direction of change as their corresponding estimated gestational age effects at birth in 97% of the 16,172 age-related CpGs. This included 14,186 gestational age-associated effects that were not significant at a 5% FDR threshold but showed the same direction of change. This concordance in direction of effect is unlikely to occur by chance (P < 10⁻¹¹⁹; pp 11-13 in the Supplement). Taken together with the enrichments for age-associated CpGs described above, we suggest that the majority of the changes in DNAm levels from birth to age 7 are due to ageing-related mechanisms rather than age-dependent environmental exposures.
We next attempted to determine the relative contributions of genetic and environmental factors to DNAm levels in blood. With the exception of maternal cotinine levels during pregnancy, which previously showed robust and reproducible associations with blood DNAm levels at birth [11-15] and in early childhood [10,13,16], none of the direct or indirect measures of exposures that were available in this cohort were associated with DNAm levels at either age after adjusting for multiple testing (p 2 in the Supplement). Therefore, in order to maximize our chances of identifying environmental variation in these data, we restricted our analyses to the 6,073 maternal smoking-related CpGs identified in Joubert et al. [15], who performed a meta-analysis of maternal smoking during pregnancy on 6,685 infants from 13 cohorts. In our data, DNAm levels at birth and age 7 at 505 (9.2%) and 407 (7.4%), respectively, of the 5,500 maternal smoking-related CpGs that passed QC in our study were nominally correlated (P ≤ 0.05) with maternal cotinine levels (enrichment P values = 7.08 × 10⁻³⁴ and 6.49 × 10⁻⁸). While this enrichment was not unexpected, we were surprised to observe that the maternal smoking-related CpGs were enriched for meQTLs (Figure 4). Additionally, there was a strong enrichment of the 8,579 conserved inferred genetic ancestry-associated CpGs among the 5,500 maternal smoking-related CpGs that passed QC in our study (fold enrichment = 2.53; P = 6.42 × 10⁻³³), indicating the maternal smoking-related CpGs were enriched for genetically regulated CpGs. Furthermore, genotype at the closest SNP for over 95% of the maternal smoking-related CpGs explained a greater proportion of the variance in DNAm levels at birth than did maternal cotinine levels (Figure 4; pp 13-15 in the Supplement). These results were nearly identical for DNAm measured at age 7, and showed that genetic, and not environmental, factors are responsible for the majority of the variation in DNAm levels at even the most robust and replicated environmentally-associated CpGs in these children.
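The genotype-versus-cotinine comparison can be phrased as two single-regressor R² values per CpG. The sketch below implements the ratio used for the x-axis of Figure 4 as recovered from its caption; the function names and the simulated data are mine, and the real analysis adjusts for covariates that this toy omits:

```python
import numpy as np

def r_squared(y, x):
    """Proportion of variance in y explained by a single regressor x."""
    x = np.column_stack([np.ones_like(x, dtype=float), x])
    beta, *_ = np.linalg.lstsq(x, y, rcond=None)
    resid = y - x @ beta
    return 1.0 - resid.var() / y.var()

def genotype_share(m_values, genotype, cotinine):
    """Figure-4-style ratio: genetic R^2 over (genetic R^2 + cotinine R^2).
    A value above 0.5 means local genotype explains more variance."""
    r2_g = r_squared(m_values, genotype)
    r2_c = r_squared(m_values, cotinine)
    return r2_g / (r2_g + r2_c)

rng = np.random.default_rng(1)
g = rng.integers(0, 3, 300).astype(float)       # additive genotype 0/1/2
smoke = rng.integers(0, 2, 300).astype(float)   # smoker vs non-smoker
m = 0.5 * g + 0.1 * smoke + rng.normal(0, 0.5, 300)
print(genotype_share(m, g, smoke))              # > 0.5 in this toy example
```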
Discussion
The relationships between DNAm, chronological age, and race/ethnicity have the potential to shed light on disease aetiology and may help determine the relative genetic and environmental contributions to the observed inter-individual variability of the epigenome [17-23,29-34]. While it has previously been shown that race/ethnicity is related to DNAm in cross-sectional studies [29-34] and that statistically significant meQTLs are conserved as individuals age [42], it has yet to be shown that race/ethnicity-dependent DNAm marks are conserved as children age, and relatedly, that exposure histories explain a comparatively small fraction of the variation in blood DNAm levels.
Exposure histories and other related non-genetic factors change substantially from birth to early childhood, including changes in diet, immune profile [46], the microbiome [47] and the metabolome [48], to name a few. The putative effect of these exposures on blood DNAm [49] and the notable differences in the levels of these exposures between children of different ethnic groups [36-38] have prompted researchers to suggest that genetics only partially explains the association between ethnicity and blood DNAm levels, and that non-genetic environmental factors make a significant contribution to ethnicity-dependent blood DNAm patterns in children [29,31]. We were therefore surprised to find that self-reported race effects on DNAm were overwhelmingly conserved in both direction and magnitude from birth to age 7. This result, as well as the novel Bayesian inference paradigm used to obtain it, is important in and of itself because it provides an example of, and a general method for, identifying DNAm patterns that are conserved over time and differentiating between environmentally responsive and temporally stable DNAm marks, which has been highlighted as both a gap in current knowledge and a critical area of future epigenetic research [49].
While the observation that reported race effects are conserved from birth to age 7 gives credence to the hypothesis that the effects are genetic in nature, it does not rule out the possibility of environmental components or gene-environment interactions that could result in race/ethnicity-associated DNAm patterns prior to birth that persist as the child ages. It was therefore interesting to find that there was a significant under-enrichment of RR-CpGs in CpG islands, which agrees with the under-enrichment previously observed for CpGs under genetic control [44]. To further explore this, we showed that the RR-CpGs were enriched among CpGs with meQTLs identified in our study, indicating that DNAm levels at many of the RR-CpGs are mediated by local genotype and that much of the reported race-DNAm association could be attributed to genetic variation. Moreover, the RR-CpGs were only a small subset of IGA-CpGs in our study. Contrary to previous cross-sectional studies in infants and children [29,31], our results provide evidence for genetics accounting for an overwhelming majority of the associations between blood DNAm levels and reported race, which suggests the non-genetic contribution to variability in blood DNAm levels may be smaller than previously thought.

Figure 4
… The x-axis of the latter was defined as the ratio of the proportion of variance in DNAm levels explained by the genotype of each CpG's closest SNP to the sum of the aforementioned genetic proportion and the proportion explained by maternal cotinine levels during pregnancy. A ratio > 0.5 indicates that local genotype explained more variance than maternal cotinine levels during pregnancy.
There were several other notable features in these data connoting that genetic, and not environmental, factors were most responsible for the variation in blood DNAm levels in these children. The first was that although average DNAm levels of 16,172 CpGs changed significantly from birth to age 7, the direction of the change in 97% of those CpGs matched the direction of the corresponding correlation between DNAm levels and gestational age at birth. This manifest concordance in the 'epigenetic clocks' present at birth and later in life, along with the observation that the 16,172 age-related CpGs were enriched for CpGs used to predict gestational and chronological age, suggests these age-related changes are coordinated by ageing-related mechanisms, and not due to age-dependent environmental exposures. Second, with the exception of maternal cotinine levels during pregnancy, none of the direct or indirect measures of exposure history were associated with DNAm levels at birth or age 7. This included measures of prenatal depression and anxiety that have ostensibly been shown to be associated with cord blood DNAm patterns in other studies [50-52]. These observations are congruent with the results of a recent comprehensive review on environmental epigenetics research, which suggested that the effects of many environmental exposures on DNAm in blood are probably too small to estimate with even large sample sizes [41]. It also coincides with the rather unfortunate finding that many of the previously reported associations between exposure histories and blood DNAm are based on erroneous statistics and therefore might be spurious [53] (see pp 3-7 in the Supplement).
The third, and possibly most surprising, observation in support of strong genetically- and weak environmentally-determined blood DNAm levels was that genetic factors, and not maternal cotinine levels, were most responsible for the variation in DNAm levels at over 95% of the maternal smoking-associated CpGs identified in Joubert et al. [15]. This is consistent with, and significantly extends, the results in Gonseth et al. [54], which identified genome-wide significant meQTLs for three of the top 10 most significant maternal smoking CpGs identified in the URECA study. It is also in line with Hannon et al. [55], which showed that genetic factors explained far more variation in the blood DNAm levels of BMI-associated CpGs than environmental factors did. One possible explanation for our observation, as demonstrated in the Gonseth et al. study, is that genotype confounds the relationship between maternal smoking and DNAm. While we did not have sufficient data to confirm this here, it remains an important area of future investigation.
Although the longitudinal features of this cohort add many strengths to our study, we must acknowledge some limitations. First, the majority of our data were derived from only two populations, self-reported Black and Hispanic children. While studying these groups makes important progress towards understanding the epigenetic architecture of underrepresented populations, it will be important to see if our conclusions replicate in other populations. Second, we only sampled DNAm through early childhood. It will be useful to assess the extent to which race/ethnicity-associated DNAm patterns persist through puberty and into adulthood.
In summary, the results of our study suggest that DNAm levels in blood cells are fairly robust to environmental exposures, including those that are associated with self-reported race. A better understanding of tissue-specific DNAm responses to environmental exposures could inform the design of future studies and provide insights into the mechanisms through which exposures and gene-environment interactions influence health and disease.
Sample composition
URECA is a birth cohort study initiated in 2005 in Baltimore, Boston, New York City and St. Louis under the NIAID-funded Inner City Asthma Consortium [39]. Pregnant women were recruited if either they or the father of their unborn child had a history of asthma, allergic rhinitis, or eczema; deliveries prior to 34 weeks gestation were excluded (see Gern et al. [39] for full entry criteria). Informed consent was obtained from the women at enrolment and from the parent or legal guardian of the infant after birth.
Maternal questionnaires were administered prenatally, and child health questionnaires were administered to a parent or caregiver every 3 months through age 7 years. Gestational age at birth and obstetric history were obtained from medical records. Additional details on study design are described in Gern et al. [39]. Frozen paired cord blood mononuclear cells (CBMCs) and peripheral blood mononuclear cells (PBMCs) at age 7 were available for 196 of the 560 URECA children after completing other studies. After QC, DNAm data were available for 194 children at birth, 195 children at age 7, and 193 children at both time points; genotype data were available for 193 children. The sample size for each analysis is given in Table 2.
Maternal cotinine levels were measured in the cord blood plasma at birth, and we categorized mothers as smokers (≥ 10 ng/mL; n = 31) or non-smokers (< 10 ng/mL; n = 150); cotinine levels were missing for 15 mothers. The 10 ng/mL threshold was the same as that used in Joubert et al. [15] to define a pregnant mother with a sustained smoking habit, and 147/150 (98%) of the non-smokers in our data had cotinine levels below 2 ng/mL, the detection limit of the assay.
DNA methylation
DNA for methylation studies was extracted from thawed CBMCs and PBMCs using the Qiagen AllPrep kit (QIAGEN, Valencia, CA). Genome-wide DNA methylation was assessed using the Illumina Infinium MethylationEPIC BeadChip (Illumina, San Diego, CA) at the University of Chicago Functional Genomics Facility (UC-FGF). Birth and 7-year samples from the same child were assayed on the same chip, and the data were processed using Minfi [56]; Infinium type I and type II probe bias were corrected using SWAN [57]. Raw probe values were corrected for colour imbalance and background by control normalization. Three of the 392 samples (two at birth and one at age 7) were removed as outliers following normalization. We removed 82,352 probes that mapped either to the sex chromosomes or to more than one location in a bisulphite-converted genome, had detection P values greater than 0.01% in 25% or more of the samples, or overlapped with known SNPs with minor allele frequency of at least 5% in African, American, or European populations. After processing, 784,484 probes were retained, and M-values were used for all downstream analyses, computed as M = log₂(methylated intensity + 100) − log₂(unmethylated intensity + 100). The offset of 100 was recommended in Du et al. [58].
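The M-value transform quoted above is one line of arithmetic; the sketch below simply restates it as a function (the function name is mine):

```python
import numpy as np

def m_values(meth, unmeth, offset=100.0):
    """M-value with the offset of 100 recommended by Du et al. [58]."""
    meth, unmeth = np.asarray(meth, float), np.asarray(unmeth, float)
    return np.log2(meth + offset) - np.log2(unmeth + offset)

print(m_values([5000, 300], [300, 5000]))  # roughly +3.7 and -3.7
```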
Genotyping
DNA from the 196 URECA children was genotyped with the Illumina Infinium CoreExome+Custom array. Of the 532,992 autosomal SNPs on the array, 531,755 passed quality control (QC) (excluding SNPs with call rate < 95%, Hardy-Weinberg P values < 10⁻⁵, and heterozygosity outliers). We conducted all analyses on the 293,696 autosomal SNPs with a minor allele frequency ≥ 5%. Genotypes for three children failed QC and were excluded from all subsequent analyses that involved genotypes, including methylation quantitative trait locus (meQTL) mapping and analyses that used inferred genetic ancestry or genetic ancestry PC1 as a covariate. These three children were included in all other analyses.
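The SNP-level filters in this paragraph can be expressed as a single predicate. The following is a hedged sketch — the function name, the nan-based no-call encoding and the test choice (a chi-square goodness-of-fit for Hardy-Weinberg, with the MAF filter folded in) are my assumptions, and the per-sample heterozygosity-outlier check is omitted:

```python
import numpy as np
from scipy.stats import chisquare

def passes_qc(genotypes, min_call=0.95, hwe_p=1e-5, min_maf=0.05):
    """Apply the call-rate, Hardy-Weinberg and MAF filters from the text.
    `genotypes` holds 0/1/2 minor-allele counts with np.nan for no-calls."""
    g = np.asarray(genotypes, float)
    called = g[~np.isnan(g)]
    if called.size / g.size < min_call:
        return False
    p = called.mean() / 2.0                        # allele frequency
    if min(p, 1 - p) < min_maf:
        return False
    n = called.size                                # HWE: observed vs expected
    obs = [np.sum(called == k) for k in (0, 1, 2)]
    exp = [n * (1 - p) ** 2, 2 * n * p * (1 - p), n * p ** 2]
    return chisquare(obs, exp, ddof=1).pvalue >= hwe_p

rng = np.random.default_rng(2)
snp = rng.binomial(2, 0.3, 500).astype(float)
print(passes_qc(snp))  # True for this well-behaved simulated SNP
```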
Estimating inferred genetic ancestry
Ancestral principal component analysis (PCA) was performed using a set of 801 ancestry informative markers (AIMs) from Tandon et al. [59] that were genotyped in both the URECA children and in HapMap [60] release 23.
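A minimal version of this step — PCA on a standardized genotype matrix over the AIMs, keeping PC1 as the ancestry axis — might look like the sketch below. The two simulated populations and their allele-frequency ranges are invented purely to show the separation; the real analysis anchors the PCs with HapMap reference samples, which this toy does not do:

```python
import numpy as np

def ancestry_pc1(genotypes):
    """First principal component of a (samples x AIMs) genotype matrix,
    used here as the inferred-genetic-ancestry axis. Columns are 0/1/2
    allele counts; each marker is mean-centred and variance-scaled."""
    g = np.asarray(genotypes, float)
    g = (g - g.mean(axis=0)) / g.std(axis=0)
    # PCA via SVD of the standardized matrix: scores = U * S
    u, s, _ = np.linalg.svd(g, full_matrices=False)
    return u[:, 0] * s[0]

rng = np.random.default_rng(3)
freqs_a, freqs_b = rng.uniform(0.1, 0.4, 801), rng.uniform(0.6, 0.9, 801)
pop_a = rng.binomial(2, freqs_a, size=(50, 801)).astype(float)
pop_b = rng.binomial(2, freqs_b, size=(50, 801)).astype(float)
pc1 = ancestry_pc1(np.vstack([pop_a, pop_b]))
print(pc1[:50].mean(), pc1[50:].mean())  # the two groups separate on PC1
```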
Univariate statistical methods
To determine the effect of gestational age and maternal cotinine levels (smokers vs. non-smokers) on DNAm levels in CBMCs at birth or PBMCs at age 7, we used standard linear regression models with the child's gender, sample collection site, inferred genetic ancestry and methylation plate number as covariates. We controlled for gestational age in the maternal cotinine analysis. We also estimated cell composition and other unobserved confounding factors using a method described in McKennan et al. [61]. We then computed P values for each CpG site and used q-values [62] to control the false discovery rate at a nominal level. We took the same approach to determine CpGs whose DNAm changed from birth to age 7, except that the response was the difference between DNAm at birth and at age 7. In this analysis, we included the child's gender, gestational age at birth, inferred genetic ancestry and sample collection site as covariates. Because all paired samples were on the same plate, we did not include plate number as a covariate in this analysis. We also estimated unobserved factors that influence differences in DNAm at birth and age 7 using McKennan et al. [61] and included these latent factors in our linear model.
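In outline, this per-CpG analysis is an ordinary least-squares scan followed by an FDR correction. The sketch below uses Benjamini-Hochberg adjusted P values as a stand-in for the q-values of [62], and simulated data in place of the real covariates; the function names are mine:

```python
import numpy as np
from scipy import stats

def cpg_pvalues(M, exposure, covariates):
    """Two-sided OLS P value for `exposure` at each CpG (rows of M),
    adjusting for a (samples x k) covariate matrix."""
    n = M.shape[1]
    X = np.column_stack([np.ones(n), exposure, covariates])
    H = np.linalg.pinv(X)
    df = n - X.shape[1]
    v_exposure = np.linalg.inv(X.T @ X)[1, 1]   # sampling-variance factor
    pvals = np.empty(M.shape[0])
    for i, y in enumerate(M):
        beta = H @ y
        resid = y - X @ beta
        se = np.sqrt((resid @ resid / df) * v_exposure)
        pvals[i] = 2 * stats.t.sf(abs(beta[1] / se), df)
    return pvals

def bh_fdr(pvals):
    """Benjamini-Hochberg adjusted P values (a stand-in for q-values)."""
    m = len(pvals)
    order = np.argsort(pvals)
    ranked = pvals[order] * m / (np.arange(m) + 1)
    adj = np.minimum.accumulate(ranked[::-1])[::-1]
    out = np.empty(m)
    out[order] = np.minimum(adj, 1.0)
    return out

rng = np.random.default_rng(4)
gest_age = rng.normal(39, 1.5, 120)
covs = rng.normal(size=(120, 3))         # stand-ins for gender, site, plate
M = rng.normal(size=(1000, 120))
M[:50] += 0.25 * gest_age                # 50 truly associated CpGs
p = cpg_pvalues(M, gest_age, covs)
print((bh_fdr(p) < 0.05).sum(), "CpGs at 5% FDR")
```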
Joint modelling of DNA methylation at birth and age 7
We used data from the self-reported Hispanic and Black individuals with DNAm measured at both time points to analyse the effect of ancestry on DNAm levels at CpGs g = 1, …, p = 784,484 using the following model:

y_g^(a) = X β_g^(a) + Z γ_g^(a) + C λ_g^(a) + ε_g^(a),  a ∈ {0, 7},

with a mixture prior on the effect pair (β_g^(0), β_g^(7)) built in part from δ₀ and δ_(0,0), the point masses at 0 ∈ ℝ and (0, 0) ∈ ℝ². The vector y_g^(a) ∈ ℝⁿ contained the DNAm levels at CpG g at age a, X ∈ ℝⁿ contained each child's inferred genetic ancestry or self-reported race, and β_g^(a) was the effect due to ancestry at age a. X was standardized to have variance 1 when X was inferred genetic ancestry. The nuisance covariates Z contained an intercept for the cord blood and PBMC samples, sample collection site, gender, gestational age at birth and plate number. Since gestational age was only correlated with cord blood DNAm, we assumed the effect of gestational age on DNAm at age 7 was zero for all CpG sites. We estimated the unobserved covariates C with McKennan et al. [63], which accounts for the correlation between samples from the same child.
The entries of the weight vector π = (π_(0,0), …) correspond to the mixture components; we ignored the proportion when k = 1 because τ₁ was too small to differentiate from zero. The estimated proportion of CpGs in the ρ_s = 2/3 or ρ_s = 1 bins was still over 98% when we included τ₁. To fit the model, we first regressed out Z and the estimated C from both y_g and X and used the residuals in the downstream analysis. We estimated σ_g² and δ_g² for each g = 1, …, p with restricted maximum likelihood (REML) and, following Stephens [65], estimated π by empirical Bayes via expectation maximization. Supplemental Figures S2 and S4 plot the estimate for π in the reported race analysis. We then defined con_g and dis_g for each CpG g = 1, …, p as

con_g = P{β_g^(0) > 0, β_g^(7) > 0 | y_g, π, σ_g², δ_g²} ∨ P{β_g^(0) < 0, β_g^(7) < 0 | y_g, π, σ_g², δ_g²},

with dis_g defined analogously as the posterior probability that the effect was non-zero at one age and zero or opposite in sign at the other.
Determining meQTLs
We performed meQTL mapping in the 145 genotyped, self-reported Black children using the set of 269,622 SNPs with a 100% genotype call rate in this subset. We restricted ourselves to this subset of samples to minimize heterogeneity in effect sizes. To identify CpG-SNP pairs, we considered SNPs within 5 kb of each CpG, as this region has been previously shown to contain the majority of genetic variability in DNAm [8] and is small enough to mitigate the multiple-testing burden, and computed a P value for the effect of the genotype at a single SNP on DNAm at the corresponding CpG with ordinary least squares. We then defined the meQTL for each CpG site as the SNP with the lowest P value. In addition to genotype, we included inferred genetic ancestry (i.e., ancestry PC1), gestational age at birth, gender, sample collection site and methylation plate number in the linear model, along with the first nine principal components of the residual DNAm data matrix after regressing out the intercept and the five additional covariates. We then tested the null hypothesis that a CpG did not have an meQTL in the 10 kb region by using the minimum marginal P value in the region as the test statistic and computed its significance via bootstrap. We used q-values to control the false discovery rate.
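The region-level test described here — take the minimum marginal P value over the window and calibrate it by resampling — can be sketched as follows. This toy substitutes a permutation null for the authors' bootstrap to keep the example short, and omits the covariates (ancestry PC1, gestational age, and so on) that the real model includes; the names and simulated data are mine:

```python
import numpy as np
from scipy import stats

def min_p(methylation, snps):
    """Smallest marginal OLS P value over the SNPs in a CpG's +/-5 kb window."""
    return min(stats.linregress(g, methylation).pvalue for g in snps)

def region_pvalue(methylation, snps, n_perm=500, rng=None):
    """Significance of the min-P statistic under the no-meQTL null,
    approximated here by permuting the methylation values."""
    rng = rng or np.random.default_rng()
    observed = min_p(methylation, snps)
    null = [min_p(rng.permutation(methylation), snps) for _ in range(n_perm)]
    return (1 + sum(p <= observed for p in null)) / (1 + n_perm)

rng = np.random.default_rng(5)
snps = rng.binomial(2, 0.3, size=(4, 145)).astype(float)   # 4 SNPs in window
m = 0.6 * snps[2] + rng.normal(0, 1, 145)                  # SNP 3 is an meQTL
print(region_pvalue(m, snps, rng=rng))                     # small P expected
```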
Ethical statement
We used de-identified single nucleotide polymorphism, DNA methylation and phenotype data from samples taken from human subjects as part of the Urban Environment and Childhood Asthma study. The WIRB approved human samples to be used in the Urban Environment and Childhood Asthma study (WIRB project number: 20142570).
Disclosure statement
No potential conflict of interest was reported by the authors. | 2020-04-27T14:14:15.945Z | 2018-06-06T00:00:00.000 | {
"year": 2020,
"sha1": "5508d5e13b4325d8fad26a21adaa090ef7ab0577",
"oa_license": "CCBYNCND",
"oa_url": "https://www.tandfonline.com/doi/pdf/10.1080/15592294.2020.1817290?needAccess=true",
"oa_status": "HYBRID",
"pdf_src": "PubMedCentral",
"pdf_hash": "57cdb6e59a6c7469e7475c98e1b6d9a998f11a5d",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
216251995 | pes2o/s2orc | v3-fos-license | Surgical Treatment of Primary Cardiac Tumor Associated with Malignant Arrhythmias
Objective: Pediatric primary cardiac tumor is an extremely rare disease. The tumor can extend into the conduction system and cause malignant arrhythmias. We retrospectively reviewed 6 consecutive cases of children with primary cardiac tumor that manifested as rhythm disturbance. Methods: In our center, 6 children were enrolled from October 2009 to August 2016. Detailed operative data and follow-up information were comprehensively collected and statistically analyzed. Results: The patients were ages 1 to 16 years and weighed 7.9 to 44.5 kg. Preoperative ventricular tachycardia was present in 3 patients, frequent ventricular ectopic beats in 1 patient, supraventricular tachycardia in 1 patient, and atrial flutter in 1 patient. All 6 patients underwent a complete tumor resection. The tumors were localized in the left ventricular free wall (3 patients), left ventricular outflow tract (1 patient), left atrium (1 patient), and right atrium (1 patient). One patient received 2 radiofrequency ablation procedures before tumor resection. Postoperative sick sinus syndrome occurred in 1 patient because the tumor infiltrated the sinoatrial node. Tumors from 2 patients were pathologically diagnosed as fibroma and 4 as rhabdomyoma. Reoperation of mitral valve repair was performed in 1 patient 1 year after tumor resection. The mean (± SD) follow-up time was 63.7 ± 31.4 months, and all children were well, with Ross functional classification I and no signs of recurrence or metastasis. Conclusions: In conclusion, cardiac tumor is a rare but nonneglectable reason for arrhythmia, and surgical resection is the optimal procedure, with satisfactory results.
INTRODUCTION
Pediatric primary cardiac tumors are extremely rare, with estimated prevalence between 0.027% and 0.08% [Nadas 1968; Isaacs 2004]. Clinical manifestations are variable, as the masses differ in terms of size, location, multifocality, and extent of invasion. Patients with tumors can be asymptomatic or present with outflow tract obstruction, congestive heart failure, arrhythmia, respiratory distress, pericardial effusion, syncope, and even sudden death [Xu 2017]. When tumors extend into the conduction system, they can cause arrhythmias. It has been reported that clinically significant arrhythmias (cardiac arrest, ventricular fibrillation, ventricular tachycardia, supraventricular tachycardia, etc.) occur in approximately one-quarter of pediatric tumors [Xu 2017]. Surgical resection of tumors is an effective option. In recent years, 6 consecutive pediatric patients with primary cardiac tumors and associated arrhythmias received total tumor resection surgery in our center. Herein, we describe our center's experience in treating these patients.
Figure 1. Echocardiography examination and operative and pathologic findings of a large left ventricle fibroma (patient 2, preoperative frequent ventricular ectopic beats). A, Echocardiography shows a well-defined hyperechoic mass with diameter ~51 × 58 mm on the left ventricular posterior wall. B, Perioperatively, the tumor was apparent on the left ventricular posterior wall. C, The surgeon sharply stripped and radically resected the tumor body. D, Pathologic results showed that tumor cells were arranged in fascicles; muscle tissue (dyed brown) and fibrous tissue (dyed red) were of mixed distribution (hematoxylin and eosin stain, 20 × 10).
METHODS
All subjects' parents gave their written informed consent, and the study protocol was approved by the institute's committee on human research (in accordance with the Declaration of Helsinki) and the ethics committee of our hospital. We retrospectively reviewed all 6 consecutive pediatric patients who presented to our institution from October 2009 to August 2016 with rhythm disturbances and were diagnosed as having a cardiac tumor. Records were collected for all information, including demographic characteristics, narrative history, electrocardiography (ECG), 24-h Holter monitoring, electrophysiological examinations, echocardiography, chest x-ray, chest computed tomography (CT), cardiac magnetic resonance imaging (MRI), operative details, postoperative data, and pathologic diagnoses. Clinically significant arrhythmias were defined according to Miyake [2011]. After discharge, follow-up data were collected through clinical consultations and regular telephone interviews. Statistical analysis was performed with SPSS software (version 22.0, IBM Corp., Armonk, NY). Normally distributed data are presented as mean ± SD, and non-normally distributed data are presented as median and range.
General Data
A total of 6 consecutive patients were selected for the surgical series (2 males and 2 females; mean age at operation, 5.8 ± 5.5 years; median 4.7; range 1 to 16). Their main clinical presentations were all rhythm disturbances. Preoperative ECG and 24-h Holter monitor showed that ventricular tachycardia (VT) was present in 3 patients, frequent ventricular ectopic beats in 1 patient, supraventricular tachycardia (SVT) in 1 patient, and atrial flutter/atrial fibrillation (AF) in 1 patient. The masses were detected by echocardiography (all 6 cases), CT (4 cases), and MRI (3 cases) for a complete diagnosis. The mass was localized in the left ventricular free wall (3 patients), left ventricular outflow tract (1 patient), left atrium (1 patient), and right atrium (1 patient). Details of patients' characteristics are shown in Table 1.
Operative Details
We followed the concept of radical excision of tumors whenever feasible; thus all 6 children underwent a complete resection. Surgery was performed through a median sternotomy under cardiopulmonary bypass (CPB) with bicaval and ascending aortic cannulation and moderate systemic hypothermia (30 to 32°C). Associated procedures included left ventricular patch reconstruction in a 7-year-old girl (patient 2), as the tumor had widely invaded the left ventricle (Figure 1). The mean CPB time was 80.0 ± 36.1 minutes (range 36 to 136). The mean aortic cross-clamping (ACC) time was 56.0 ± 38.3 minutes (range 10 to 121). The mean mechanical ventilation time was 10.8 ± 6.1 hours (range 6 to 22.6). The mean intensive care unit length of stay was 2.2 ± 1.7 days (range 1 to 9). The sizes of the masses were 5, 6, 10, 51, 35, and 59 mm. It was pathologically confirmed that 4 cases were rhabdomyoma and 2 cases were fibroma. Further operative details are shown in Table 2.
Early Outcome
All patients survived. There was no postoperative low cardiac output syndrome, pericardial or pleural effusion requiring drainage, multiple organ dysfunction syndrome, or thromboembolic event. One patient (patient 3) was diagnosed with sick sinus syndrome (SSS) postoperatively, because the mass was localized in the right atrium and infiltrated the sinoatrial node (Figure 2). The patient received oral sotalol (a nonselective β-blocker) without any obvious discomfort. The other 5 patients were discharged in good clinical condition.

Figure 2. Perioperative findings of a pediatric rhabdomyoma infiltrating the sinoatrial node. The child was diagnosed with sick sinus syndrome postoperatively (patient 3, preoperative atrial flutter). A, The tumor was located in the right atrium and widely extended to the superior vena cava. B, The left atrium and superior vena cava were incised, and the tumor was sharply stripped and radically resected.

Figure 3. Echocardiography examination and operative and pathologic findings of a left ventricular outflow tract (LVOT) rhabdomyoma that caused refractory VT (patient 6, preoperative VT). A, Echocardiography showed a well-defined hyperechoic mass with diameter 6.7 × 5.0 mm at the LVOT. B, The aorta was incised, and the tumor was exposed. C, The tumor was of extremely small size. D, Pathologic results show typical spider-like cells in the middle of rhabdomyoma cells (hematoxylin and eosin stain, 20 × 10).
Follow-Up
The mean duration of follow-up was 63.7 ± 31.4 months (range 26 to 112). Reoperation for mitral valve repair was performed in 1 patient with multiple masses (patient 4). In the initial operation, we found that the masses were close to the anterior mitral leaflet. Echocardiography early after surgery showed mild mitral valve regurgitation, but after 1 year, the patient had progressively severe mitral valve regurgitation. During follow-up, patient 3 felt no discomfort with medication therapy. The other 4 patients were all well, with Ross functional classification I, free from any adverse events, without taking any medication, and with no signs of recurrence or metastasis.
DISCUSSION
Pediatric primary cardiac tumor is an extremely rare heart disease. Neoplasms can grow anywhere within the cardiac chambers or in the myocardium. More than 90% of tumors are benign [Tzani 2017]. As tumors differ in term of size, location, multifocality, growth rate, and extent of invasion, clinical patterns vary greatly. Some children can remain asymptomatic until adulthood. Some may experience complications such as obstruction of outflow tracts, compression of coronary artery, thromboembolism, and refractory arrhythmias [Shi 2017]. With the emergence of new noninvasive imaging techniques such as echocardiography, CT, and MRI, earlier diagnoses and treatment of pediatric cardiac tumor have become available [Kwiatkowska 2017].
If a tumor invades the conduction system, it might cause various rhythm disturbances, which is an important manifestation in pediatric patients. Miyake [2011] defined clinically significant arrhythmias for pediatric patients as follows: (1) sudden cardiac arrest with documented or suspected ventricular fibrillation, (2) ventricular tachycardia, (3) manifest pre-excitation, and (4) supraventricular tachycardia. They confirmed that when these arrhythmias are present, surgical resection of masses is strongly advocated. According to those definitions, all of our patients had clinically significant arrhythmias, with surgery clearly indicated. Notably, our results demonstrated a close relationship between the location of masses and the episodes of arrhythmia. The reason may be that the masses encroach on the cardiac conduction system and interfere with electrical conduction to cause arrhythmia. For the diagnosis of tumors, echocardiography has obvious diagnostic value and can be the primary imaging method applied in children, because it can correctly show the location and size of tumors. Echocardiography is also very sensitive to hemodynamic changes and helps to determine the timing of surgery. All patients in our series received echocardiography to identify the location and size of tumors and sufficiently evaluate hemodynamic changes. In addition, CT and MRI are of significant importance in helping make a confirmed diagnosis and gain more information for the surgical strategy.
For the timing of surgery, the multicenter European Congenital Heart Surgeons Association Study confirmed that surgery is advocated with symptoms, ECG abnormalities, or apparent echocardiographic impairment [Padalino 2012]. A study by Delmo [2016] suggested that the indications for surgery included hemodynamic disturbances, respiratory distress, severe arrhythmia, and significant embolization risk. A Chinese study indicated that the resection of tumors should be undertaken as soon as possible after masses are found [Wang 2016]. In our series, all 6 children had symptomatic arrhythmias and apparent indication for surgery.
The strategy we follow is radical excision of tumors whenever feasible. Refined and gentle manipulation is suggested so as not to cause tumor fragmentation and embolism. On the premise of not damaging adjacent tissue structures, the masses should be completely excised to avoid the regrowth of tumors. In this series, all 6 pediatric patients received total resection. The results showed that surgery effectively terminated arrhythmias in all 6 cases. During follow-up, all patients had satisfactory outcomes. Our experience suggests that total resection is a safe and effective strategy for children with primary cardiac tumor and associated arrhythmias.
Our experience also indicates that for refractory and malignant arrhythmias, aggressive surgical intervention is the optimal option. Patient 6 received radiofrequency ablation (RFA) twice because of repeated episodes of VT; however, VT was still present. Echocardiography was rechecked, and a mass beneath the right coronary cusp was found (Figure 3). Although the mass was small, it was strongly correlated with the site marked by electrophysiological mapping. Finally, total resection of the mass was performed (Figure 3). The patient's VT was completely eliminated in the postoperative period. The experience of this case indicates that arrhythmias resulting from a cardiac mass can be refractory, with poor results from antiarrhythmic medication or RFA; for such patients, physicians should consider whether resection of the mass is indicated, regardless of tumor size.
In conclusion, pediatric primary cardiac tumor is a very rare disease. Some patients can manifest with symptomatic arrhythmias. Surgical resection of tumor is the optimal procedure to terminate arrhythmias, with satisfactory early and late results. The Heart Surgery Forum #2019-2823 | 2020-04-02T09:18:59.140Z | 2020-03-31T00:00:00.000 | {
"year": 2020,
"sha1": "5a68d4d13fd75b69381d362eb13602244f104027",
"oa_license": null,
"oa_url": "https://journal.hsforum.com/index.php/HSF/article/download/2823/4545",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "3f2b45aa4abceaecf61c1f11e21f718fa750b5a1",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
226282004 | pes2o/s2orc | v3-fos-license | Dynamics of two excitatory coupled neuron-like phase models
A simple model of a neuron-like ensemble is proposed. Within it, we consider the dynamics of two excitable neurons interacting via excitatory coupling. In the parameter space of the model, the regions of in-phase and anti-phase synchronous behaviour and of quiescence are determined. Bifurcation transitions between these states are studied in detail.
I. INTRODUCTION
Central pattern generators (CPGs) are circuits in self-contained integrative nervous systems able to generate and control basic repetitive patterns of coordinated motor behaviour without sensory feedback or peripheral input. They are responsible for such vital rhythmic motor behaviours as heartbeat, respiratory functions and locomotion 1 -5 . One of the best-known case studies in this field is that of locomotion in vertebrates: several decades of evidence (see e.g. 6 ) support the hypothesis that walking, flying, and swimming are largely governed by a small network of spinal neurons in all vertebrate species, from lampreys to humans. Recent evidence suggests that plasticity changes of some CPG elements may contribute to the development of specific pathophysiological conditions associated with impaired locomotion or spontaneous locomotor-like movements 7 . Despite the relevance of the topic and substantial progress in the field, including proposed pattern generation mechanisms 8 -15 , the genesis of the motor patterns is still not fully understood 16 .
One of the most widespread approaches in the numerical modelling of CPGs (as well as of other neuronal networks) uses the Hodgkin-Huxley equations 17 or different kinds of their reductions, such as the FitzHugh-Nagumo equations 15 , delivering a detailed description of the CPG.
Since the reproduction of temporal patterns, not the dynamics of an individual neuron, plays a crucial role 18 in the paradigm of CPG, one may use reduction to phase equations in order to lower the computational complexity. The patterns of motor activity are stable regimes of phase differences demonstrated by elements in the network, hence it looks logical to adopt a phase oscillator as a model of an individual neuron. This approach goes back to the early modelling of animal locomotor CPG, where coupled systems of ODE were reduced to phase models 19 -26 .
Our goal is a model of CPG based on simple neuron-like units that, on the one hand, can reproduce a number of CPG dynamical patterns observed in experiments and reproduced in biologically plausible models 27 -28 , and, on the other hand, allows for analytical study.
Biological experiments witness that most CPGs have some kind of universal constituent known as a half-center oscillator (HCO) 30 . To account for the generation of rhythmic patterns, Brown 31 first proposed the concept of the HCO, in which two mutually inhibitory coupled neurons burst in anti-phase. An HCO can consist of endogenously bursting neurons, intrinsically tonic spiking or even quiescent neurons that start to generate alternating activity when coupled. As shown in numerous theoretical studies 35 -39 , the formation of an anti-phase bursting rhythm is tightly connected to slow time scale dynamics associated with slow membrane currents. Simple HCOs can contribute to more complex modular CPG networks, such as the swimming CPGs of Melibe leonina and Dendronotus iris 13 .
To better understand the dynamical principles underlying the behaviour of larger networks, we introduce a simple model of HCO based on two coupled units. The individual element in this case is an active rotator described by equation (1), where φ corresponds to the phase of the individual element and γ is a control parameter.
Figure 1. Top row: the system (1) at γ < 1 describes an excitable neuron. Bottom row: phase space (unit circle) and time series of the system (1) at γ > 1; the single element is in the oscillatory state, its phase changes continuously in time (right lower panel), so the element generates spikes (left lower panel).
This model, introduced in 29 , is evidently similar to the classical theta-neuron equation 40 . Depending on γ, Eq. (1) reproduces excitable behaviour (γ < 1, see upper panel in Fig. 1) or self-oscillatory behaviour (γ > 1, see lower panel in Fig. 1). Below we consider the first case.
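A minimal sketch of Eq. (1), assuming the standard active-rotator form, which is consistent with the excitability threshold at γ = 1 and with the equilibria at φ = arcsin γ and φ = π − arcsin γ used in Section IV:

$$\dot{\varphi} = \gamma - \sin\varphi \qquad (1)$$

For γ < 1 this equation has a stable equilibrium φ = arcsin γ and a saddle φ = π − arcsin γ (excitable regime); for γ > 1 the phase grows monotonically and the element spikes.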
In the present study our aim is to reproduce the most relevant dynamics typical of a CPG and to gain more insight into the fundamental principles of HCO functioning by studying symmetries and bifurcations, which allow the CPG to be flexible and multifunctional [41][42][43] .
The paper is organized as follows. First, we propose a simple phenomenological model of HCO and describe the way we have constructed it. Secondly, we introduce several necessary definitions and discuss general properties of the introduced model. After that we focus on the main types of neuron-like activity typical of a biological HCO. Our study includes, but is not limited to, the properties of these states, as well as the bifurcation transitions which lead to their onset and disappearance. In conclusion, we summarize the results of our findings, discuss the pros and cons of the proposed model and the directions of future studies.
II. THE SIMPLE MODEL OF HCO
As a new simple model of HCO we propose a motif of two excitable neurons mutually interacting via excitatory coupling. Mathematically it is described by a system of two differential equations (2). Here, the parameter d regulates the strength of the symmetric excitatory coupling I(φ).
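A minimal sketch of a system of this type, assuming each unit follows the active-rotator equation (1) and that the excitatory input from the partner enters additively with strength d (an assumption that is consistent with the necessary condition γ + d ≥ 1 quoted in Section IV):

$$\dot{\varphi}_{1} = \gamma - \sin\varphi_{1} + d\,I(\varphi_{2}), \qquad \dot{\varphi}_{2} = \gamma - \sin\varphi_{2} + d\,I(\varphi_{1})$$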
In accordance with biological principles 44 , we model the excitatory coupling by the function (3). Coupling of this form, first introduced in 45 and tested in subsequent studies 46,47 , simulates the transmission of a signal from the presynaptic element to the postsynaptic one. When the phase φ of the active presynaptic element reaches α, a current of constant amplitude is applied to the postsynaptic element. The duration of this stimulus is defined by the difference δ. The dependence of the coupling function I(φ) on the phase of the presynaptic element φ is sketched in Fig. 2(a). The diagram in Fig. 2(b) shows the regions of the joint phase space where the elements are activated by each other. The system (2) with the coupling (3) is governed by five parameters: γ, d, k, α, δ. Of these, we fix below the values γ = 0.7 and k = 50.
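A minimal sketch of a coupling function with these properties, assuming I(φ) is a smoothed indicator of the activation window [α, α + δ] with sigmoidal steepness k (the exact expression used in (3) may differ in detail):

$$I(\varphi) \;=\; \frac{1}{1+e^{-k(\varphi-\alpha)}}\cdot\frac{1}{1+e^{\,k(\varphi-\alpha-\delta)}}$$

With this form, I(φ) ≈ 1 for α < φ < α + δ and I(φ) ≈ 0 otherwise, and larger k makes the window edges sharper.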
The coupling function (3) takes into account the basic principles of chemical synaptic coupling: (i) presence/absence of the activity in the postsynaptic element depends on the activity level in the presynaptic element; (ii) all interactions between neuron cells are inertial due to the fact that the transfer of neurotransmitter is not instantaneous. So, the form of the function I(φ) reflects the first principle. The parameters α and δ are responsible for inertia and duration effects, respectively; by adjusting them, we can simulate synapses with different neurotransmitters.
Formally, the period of the coupling function with respect to the parameter δ is 4π. In fact, δ takes values from the interval [0, 2π), since the activation range is the segment [α, α + δ], that is, at δ = 2π both elements always activate each other.
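As a numerical illustration of how trajectories of this motif can be explored, the following is a minimal Java sketch that integrates the system with an explicit Euler step. It assumes the reconstructed right-hand sides sketched above (active-rotator units, additive coupling of strength d, smoothed-window coupling); γ = 0.7 and k = 50 follow the text, while d, α and δ are purely illustrative values.

```java
// Minimal sketch: Euler integration of two excitatory coupled active rotators.
// Assumes phi_i' = gamma - sin(phi_i) + d * I(phi_j) with a smoothed-window coupling;
// the exact equations (2)-(3) of the paper may differ in detail.
public class CoupledRotators {
    static final double GAMMA = 0.7, K = 50.0;                   // fixed in the text
    static final double D = 0.5, ALPHA = 1.0, DELTA = Math.PI;   // illustrative values

    // Smoothed indicator of the activation window [alpha, alpha + delta].
    static double coupling(double phi) {
        double up = 1.0 / (1.0 + Math.exp(-K * (phi - ALPHA)));
        double down = 1.0 / (1.0 + Math.exp(K * (phi - ALPHA - DELTA)));
        return up * down;
    }

    public static void main(String[] args) {
        double phi1 = 0.1, phi2 = 2.0;                           // initial phases
        double dt = 1e-3;
        for (int step = 0; step < 200_000; step++) {
            double d1 = GAMMA - Math.sin(phi1) + D * coupling(phi2);
            double d2 = GAMMA - Math.sin(phi2) + D * coupling(phi1);
            phi1 = (phi1 + dt * d1) % (2 * Math.PI);
            phi2 = (phi2 + dt * d2) % (2 * Math.PI);
            if (step % 1000 == 0) {
                System.out.printf("t=%.2f phi1=%.3f phi2=%.3f%n", step * dt, phi1, phi2);
            }
        }
    }
}
```

Plotting φ₁(t) and φ₂(t) from such a run makes the quiescent, in-phase and anti-phase regimes discussed below easy to recognize.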
III. DEFINITIONS AND PROPERTIES OF THE PROPOSED MODEL
The phase space of the system (2) is a two-dimensional torus. As already mentioned, we focus both on various types of neuron-like activity, such as the in-phase and anti-phase spiking patterns, and on the bifurcation scenarios behind the onset and destruction of these patterns in the simple model of HCO (2). Below, the term in-phase limit cycle denotes a limit cycle in which the phases of both elements coincide: φ₁(t) = φ₂(t). Further, anti-phase limit cycle denotes a limit cycle of some period T in which the phases are shifted with respect to each other by half a period: φ₁(t) = φ₂(t + T/2). These definitions correspond to those in 45 .
Let us briefly discuss the basic features of the system (2) that will be used in the further analysis. We start with properties that hold regardless of the (continuous) coupling function I(φ).
Property 1. Since the system (2) is invariant under a permutation of the variables φ₁, φ₂, the phase portrait is symmetric with respect to the invariant diagonal φ₁ = φ₂.
Property 2. Suppose that an anti-phase cycle of the period T exists in the phase space of the system (2).
Then, for each of its points (φ₁*, φ₂*), the cycle also contains the symmetric counterpart (φ₂*, φ₁*), shifted in time by the half-period.
Property 3. Two or more anti-phase limit cycles cannot coexist in the phase space of the system.
We start the proof of this property with the remark that an anti-phase cycle, due to Property 2, cannot be entirely confined either to the triangle 0 < φ₁ < φ₂ < 2π or to the symmetric triangle 0 < φ₂ < φ₁ < 2π. Hence, the phase curve of the cycle should intersect the axes φ₁ = 0 and φ₂ = 0. Assume that there are two anti-phase limit cycles. Let the first one include a point with coordinates (0, a) (0 < a < 2π). Then (Property 2) it also contains a point with coordinates (a, 0), which on the 2-torus is identified with the point (a, 2π). Let the second anti-phase cycle pass through the points with coordinates (0, b) and (b, 2π) (0 < b < 2π), and let b exceed a. Two continuous curves crossing the triangle 0 < φ₁ < φ₂ < 2π, such that the first passes through the points (0, a) and (a, 2π) while the second contains the points (0, b) and (b, 2π), are obliged to intersect; this is impossible for two distinct trajectories of the autonomous system (2), which invalidates the assumption of the existence of more than one anti-phase cycle.
From the above it follows that the bifurcation diagram in the parameter space (α, δ) is symmetric with respect to the fixed set of this transformation: the lines δ = π − 2α and δ = 3π − 2α, on which the system becomes reversible.
Property 5. The system (2) has two types of equilibrium states: those with φ₁ = φ₂ (i.e., on the diagonal) and those with unequal coordinates (the off-diagonal ones). Due to the permutation symmetry, the off-diagonal equilibrium states appear in symmetric pairs. The existence of such a pair implies the presence of a steady state on the diagonal.
Further properties take into account the coupling function I(φ) as defined by (3).
Here the sign "+" is taken for the case δ = π − 2α. The set of points with respect to which the phase space is symmetric is the line φ₁ + φ₂ = π (mod 2π). The involution implementing this symmetry is the mapping R : (x, y) → (π − y, π − x).
IV. DYNAMICS OF THE SYSTEM
In this paper we have found that the system (2), depending on the values of the control parameters α and δ of the excitatory coupling, can generate all the main types of neuron-like activity typical of an HCO: the excitable regime and the regimes of in-phase and anti-phase oscillatory activity. Let us show how the described regimes arise and disappear in the system (2) as the coupling strength parameter d changes.
This Section is organized as follows. In the first subsection we present an overall dynamical sketch of the system for the case of strong coupling. It includes, first of all, a detailed description of the bi-parametric diagram. Then a description of the regions of multistability is presented. After that we give a detailed description of the phase space and of the regimes of neuron-like activity for parameters taken from each region of the bifurcation diagram. Next, the obtained regimes are considered in application to HCO modelling. In the last part of the first subsection we describe the bifurcation scenarios that lead to the appearance and destruction of all the obtained regimes of neuron-like activity. The second subsection is devoted to the evolution of the excitable regime with changes in the coupling strength d. The last subsection presents a rigorous analysis of the evolution of the tonic spiking regimes, namely the in-phase and anti-phase regimes, for changing coupling strength.
A. Overall dynamical sketch for fixed coupling strength
Using analytical and numerical methods, the map of neuron-like temporal patterns shown in Fig. 3 was constructed on the (α, δ) parameter plane. In Fig. 3(a) the coupling strength d is small but sufficient to produce all main types of neuron-like behaviour. Note that if the coupling strength d is less than some threshold value d_th (whose exact value depends on the other parameters of the system), the motif can exhibit only excitable behaviour, which closely resembles the dynamics of a single element. Increasing the coupling strength above d_th leads to the emergence of collective spiking dynamics in the system. In Fig. 3(b) one can see how the regions of different temporal patterns evolve with a further increase in the coupling strength up to d = 1. The main effect is the emergence of an additional, quite wide region D of bistability between regions B (excitable state) and C (anti-phase spiking). This phenomenon can be explained as follows: with increasing d, the borderlines of the stability regions for the steady state and for the anti-phase limit cycle start to overlap, so that two attracting sets coexist in the phase space of the system. The borderlines of the other regions of neuron-like temporal patterns also change with increasing d; namely, for some values of α and δ the excitable state is replaced by oscillatory activity (both in-phase and anti-phase).
Let us give a detailed description of the regimes of neuron-like activity that can be observed in the regions shown in Fig. 3.
Region A is the region of in-phase spiking activity (i.e. φ₁(t) = φ₂(t)); its mathematical image in the phase space of the system is a stable in-phase limit cycle. In region B only the excitable state exists. Although the dynamics in this region are simple, the region corresponds to different stable equilibria, each with its own basin of attraction. From the neuroscience point of view, the coexistence of different excitable states could describe different conditions of the membrane potential of neuron-like elements, including depolarization and hyperpolarization. In region C the system (2) demonstrates only anti-phase spiking activity, which is mathematically described by a stable anti-phase limit cycle. Region D is the only region of bistability, where anti-phase spiking patterns coexist with excitable behaviour.
In the framework of HCO modelling the most interesting and valuable regimes are the regimes of anti-phase activity, which allow the alternation of two usually opposing behaviours. In Fig. 4 time series of stable anti-phase limit cycles, as well as their images in the phase space, are given for different values of the governing parameters. First of all, let us describe the transition between regions B and A. To do this, we fix δ = π and decrease the governing parameter α from α = 0.885 to α = 0.875 to cross the borderline between these regions (see Fig. 5). As a result of a saddle-node bifurcation on the invariant curve, a stable in-phase limit cycle appears in the phase space of the system.
The transition from region C to region A is more sophisticated. To describe the corresponding bifurcation scenario we fix δ = 3π/2 and build phase portraits of the system for values of the parameter α taken from region C near the transition, on the borderline between the two regions, and in region A after the bifurcation takes place. Fig. 6 shows the bifurcation as a result of which the in-phase limit cycle becomes stable. As one can see in Fig. 6(a), an unstable in-phase and a stable anti-phase cycle are present. When the parameter α reaches its bifurcation value α = 7π/4 (see Fig. 6(b)), a closed trajectory passes through each point of the phase space. After the bifurcation, the in-phase cycle becomes stable and the anti-phase one becomes unstable (Fig. 6(c)).
The borderline between regions C and D is complex and contains several scenarios for the birth of bistability between the anti-phase spiking pattern and the excitable state. The first scenario is presented in Fig. 7. For α = π − 0.01, an unstable anti-phase limit cycle exists in the phase space, and the unstable saddle separatrices tend to a stable equilibrium. One stable separatrix of each saddle tends to an unstable equilibrium (in reverse time), and the other two tend to the unstable limit cycle. At the bifurcation (α = π), two homoclinic trajectories are formed, which bound the region of the phase space through each point of which closed trajectories pass. For α = π + 0.01 a stable anti-phase limit cycle exists in the phase space. The stable saddle separatrices now tend to an unstable equilibrium (in reverse time); one unstable separatrix of each saddle tends to a stable equilibrium, the other two tend to the stable limit cycle.
The second scenario of the birth of a stable anti-phase limit cycle during the transition from region C to region D is presented in Fig. 8 and involves the appearance of a heteroclinic cycle (Fig. 8(b)). In Fig. 8(a) one can see that all unstable separatrices of the saddles tend to a stable equilibrium. If we continue to increase the value of α up to α_bif = 4.1691, a pair of heteroclinic trajectories between the two saddles arises in the phase space of the system. These heteroclinic trajectories together with the saddles form the heteroclinic cycle presented in Fig. 8(b). After the bifurcation occurs, a stable anti-phase limit cycle, which attracts the unstable separatrices of the saddles, is formed on the basis of the described heteroclinic cycle, see Fig. 8(c).
The third scenario can be observed if we fix α = 1.026 and increase the value of δ from δ = 0.8 up to δ = 1. In this case a heteroclinic cycle between the saddles appears on the line φ₁ = φ₂ (Fig. 9(b)), which further gives birth to a stable anti-phase limit cycle.
The fourth scenario can be described as follows. Right before the bifurcation an invariant curve exists in the phase space of the system. It contains two saddle points, one stable equilibrium on the diagonal line, and the separatrices connecting them (Fig. 10(a)). On the described invariant curve a saddle-node bifurcation takes place, as a result of which a stable anti-phase limit cycle emerges (Fig. 10(b)).
The fifth scenario is also connected with the emergence of a heteroclinic cycle. At the first stage, a pair of heteroclinic trajectories appears, see Fig. 11(b), which together with the two saddles form a heteroclinic cycle (Fig. 11(c)). This heteroclinic cycle evolves into a stable anti-phase limit cycle with a further increase in the value of the parameter δ (Fig. 11(d)).
In the following subsections we study in detail how the presented main temporal patterns change as the coupling strength d increases or decreases. Let us now study the various bifurcation scenarios that lead to the appearance of the oscillatory regimes, including in-phase and anti-phase spiking.
An in-phase limit cycle appears as a result of a saddle-node bifurcation on the invariant curve once the corresponding condition on the parameters is met. The first scenario we describe takes place near the threshold value of the coupling strength d_th = 0.3 and is related to the appearance of in-phase spiking (see Fig. 12). As one can see in Fig. 12(a), for d = 0.29 two non-smooth closed invariant curves exist: the first consists of the unstable separatrices (red curves) of the saddles (blue points), the saddles themselves and the stable steady state (green point). The described curve passes through the stable equilibrium state twice and is non-smooth at this point. The second closed invariant curve is formed by the stable separatrices (green curves) of the saddles, the saddles themselves and an unstable equilibrium (red point). As d increases up to d = 0.299 the described equilibria approach each other, and they merge at d ≈ 0.3. After the bifurcation (Fig. 12(b)), for a coupling strength greater than the threshold value, e.g. d = 0.301, the equilibria have disappeared; instead, a stable in-phase limit cycle (green curve) and an unstable anti-phase cycle (red curve) have appeared. As a result, one can observe the in-phase tonic spiking regime in the system.
The condition for the birth of an anti-phase limit cycle can be found approximately. First of all, the necessary condition for the existence of limit cycles is γ + d ≥ 1. Replacing the coupling function I(φ) by a piecewise constant function implies that the cycle will exist if the time of motion of a phase point along the arc (α, α + δ) for the non-excited element is not less than the time of motion of a phase point along the arc (arcsin γ, π − arcsin γ) for the excited element; this condition can be rewritten as inequality (8). In Fig. 8(b) one can see the phase portrait of the system under study with (8) satisfied.
Figure 12. In (a) red curves correspond to unstable separatrices and green curves to stable ones. In (b) the red curve corresponds to the unstable anti-phase limit cycle, while the green curve corresponds to the stable in-phase limit cycle. Blue dots mark saddles, green dots stable equilibria, red dots unstable equilibria. See description in the text for more details.
Bifurcation scenarios related to the appearance of the anti-phase spiking pattern can be described as follows (see Fig. 13). For coupling strength near the threshold value d_th, e.g. for d = 0.29, a closed invariant curve exists, composed of unstable separatrices (red curves), two saddles (blue dots) and a stable equilibrium (green dot), see Fig. 13(a). This curve passes through the stable equilibrium twice and is not smooth at this point. With the increase of the coupling strength up to d = 0.299 the described stable equilibrium undergoes a pitchfork bifurcation, as a result of which all unstable separatrices enter one of two stable equilibria. In Fig. 13(b) one can see that two separatrices pass close to the saddle on the diagonal line. The closed invariant curve now consists of unstable separatrices, two stable nodes and two saddles, and becomes smooth. With a further increase in the coupling strength d (Fig. 13(c)), up to d = 0.2999, the stable equilibria (green dots) approach the saddles that do not belong to the diagonal line (blue dots). In Fig. 13(d), shortly after the bifurcation, for d = 0.301 one can observe a stable anti-phase limit cycle (bold green curve) as a result of two saddle-node bifurcations on the closed invariant curve. This cycle comes very close to the saddle on the diagonal line (blue dot), but, as we can see, does not pass through it. As a result, the system under study demonstrates an anti-phase spiking regime.
Figure 13. Red curves correspond to unstable separatrices and green curves to stable ones; the bold green curve corresponds to the stable anti-phase limit cycle. Blue dots mark saddles, green dots stable equilibria, red dots unstable equilibria. See description in the text for more details.
V. CONCLUSIONS
In this study we have proposed a new phenomenological single neuron-like model and built a model of HCO on its basis. On the one hand, this model of HCO is simple and allows an analytical study; on the other hand, it reflects the main properties of a biological HCO. It is constructed of two excitable neurons coupled by chemical excitatory synapses. Despite its simplicity, the proposed model demonstrates all temporal patterns typical of an HCO: the excitable state, and in-phase and anti-phase spiking. We have used bifurcation theory to provide a mathematical description of the main types of neuron-like activity under variation of the coupling parameters of this model. The described anti-phase and in-phase spiking patterns are crucial for motor pattern generation and, according to 48 , may be associated with the swimming and synchrony patterns of spiking activity, respectively, observed in the Xenopus tadpole CPG. From the point of view of nonlinear dynamics, each of these temporal patterns corresponds to a stable periodic motion of a certain type in the phase space of the system.
Moreover, detailed studies of the bifurcations leading to the appearance of these types of neuron-like activity have been carried out. On the parameter plane (α, δ), where α corresponds to the start time of the activation of the postsynaptic element and δ is responsible for the duration of the coupling's impact, the regions of different types of neuron-like activity have been determined, such as stable in-phase and anti-phase tonic spiking. Our analysis has also shown the presence of a wide region of the excitable state (quiescence), where the motif can generate activity only as a response to an external stimulus.
Our analysis has helped to reveal regions of bistability, for which the system can demonstrate both excitable and anti-phase spiking behavior, so the same pattern generator circuit can support several types of neuron-like activity.
For varying coupling strength d we have studied the transition from excitability to spiking, starting from the case of truly weak coupling. The obtained results, on the one hand, have helped us to study more precisely the origins of spiking behaviour near the excitability threshold and, on the other hand, to gain more insight into the functioning of the HCO.
In summary, our newly developed simple model can be used, first of all, as a building block of specific complex CPG networks in a wide range of studies of motor control, dynamic memory, information processing, and decision making in animals and humans. One possible application of such studies is the development of new, efficient treatments of neurological diseases related to CPG arrhythmia. Another area where the described results can help is more efficient robot locomotion, which requires further insight into CPG multistability 50 -55 .
This work was partially funded by Ministry of science and education project # № 14.Y26.31.0022 (the study of bifurcation scenarios) and RFBR grant # 18-29-10068 (the study of the neuronal temporal patterns). | 2020-11-10T02:00:59.044Z | 2020-11-09T00:00:00.000 | {
"year": 2020,
"sha1": "de6e6a0c65e9b144f1a20fc1c2f20d719be9411c",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "de6e6a0c65e9b144f1a20fc1c2f20d719be9411c",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Mathematics"
]
} |
58931730 | pes2o/s2orc | v3-fos-license | Travelling Salesman Problem Solution Based-on Grey Wolf Algorithm over Hypercube Interconnection Network
Travelling Salesman Problem (TSP) is one of the most popular NP-complete problems for researchers in the field of computer science focused on optimization. The goal of TSP is to find the minimum-cost tour through the cities, with the condition that each city must be visited exactly once by the salesman. Grey Wolf Optimizer (GWO) is a new swarm intelligence optimization mechanism that has succeeded in solving many optimization problems. In this paper, a parallel version of GWO for solving the TSP problem on a Hypercube Interconnection Network is presented. The algorithm has been compared to alternative algorithms. The algorithms have been evaluated analytically and by simulations in terms of execution time, optimal cost, parallel runtime, speedup and efficiency. The algorithms are tested on a number of benchmark problems, and the parallel Grey Wolf algorithm is found to be promising in terms of speed-up, efficiency and quality of solution in comparison with the alternative algorithms.
Introduction
Due to the large increase in the number of cities in the world, mobility between cities has become difficult because there are many different roads to reach the same city, each with a different travelling cost (Vukmirović and Pupavac, 2013): several places may all be directly connected to each other by roads of different lengths, and the passenger wants to make the shortest trip. Algorithms can be used to guide people using any transport mode (walking, train, car, or bus) to reach their destination by the shortest route (Zhan and Noon, 1996).
TSP arises in many different practical applications such as school bus routing, computer wiring, job-shop scheduling and many more (Matai, et al., 2010). Since TSP has many applications, applying new algorithms and architectures offers the opportunity to find solutions better than the existing ones, which means improvements for all of these applications.
TSP has received great interest from researchers and mathematicians, as it is easy to describe but difficult to solve. TSP belongs to the large class of problems known as NP-complete, as shown in Figure 1. In particular, if an efficient (polynomial-time) algorithm can be found for solving TSP, then efficient algorithms could be found for all other problems in the class (Karla, et al., 2016).
Figure 1. NP-Complete Problems (Al-Shaikh, et al., 2016)
TSP is the problem of finding the shortest route between cities or nodes and is classified as a minimization problem (Lam and Newman, 1985): the problem is to create the shortest tour that visits each node exactly once and then returns to the initial node (Kan and Shmoys, et al., 1985).
TSP is a permutation problem, which requires O(n!) time by exhaustive search (Kaempfer and Wolf, 2018). There exists an algorithm called the Held–Karp algorithm (Chekuri, 2017), based on dynamic programming, which reduces the computational time complexity to O(2^n n^2), but this is still too high for solving large real-world instances.
TSP Formulation
Miller (as cited in Sawik, 2016) shows that TSP can be defined as an integer linear program. For an n-city problem, the binary variable x_ij equals 1 if there is a route from city i to city j and 0 otherwise; c_ij denotes the travelling cost between cities i and j, and u_i is an auxiliary variable. Equations 2 and 3 ensure that each city on the route is arrived at from another city exactly once; equations 4 and 5 ensure that each city on the route has an exit to exactly one other city. Finally, equation 6 ensures that there is only one route covering all cities, i.e., that there cannot be multiple, simultaneous, unconnected subtours.
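For reference, a sketch of the standard Miller–Tucker–Zemlin formulation consistent with this description (the objective corresponds to the cost to be minimized, the assignment constraints to equations 2-5, and the subtour-elimination constraint to equation 6):

$$\min \sum_{i=1}^{n}\sum_{\substack{j=1\\ j\neq i}}^{n} c_{ij}\,x_{ij}$$
$$\text{s.t.}\quad \sum_{\substack{i=1\\ i\neq j}}^{n} x_{ij} = 1 \;\; (j=1,\dots,n), \qquad \sum_{\substack{j=1\\ j\neq i}}^{n} x_{ij} = 1 \;\; (i=1,\dots,n),$$
$$u_i - u_j + n\,x_{ij} \le n-1 \;\; (2 \le i \ne j \le n), \qquad x_{ij}\in\{0,1\}.$$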
Because of the importance of the TSP in numerous fields and because of its applications, many researchers have solved it using different approaches, aspiring to obtain better solutions than the existing ones, such as: Bee Colony Optimization (BCO) (Wong, et al., 2008), Ant Colony Optimization (ACO) (Yang, et al., 2008), the Firefly Algorithm (FA) (Kumbharana and Pandey, 2013) and heuristic algorithms (Hernández, et al., 2016). These methodologies do not generally locate the optimal solution; rather, they often find near-optimal solutions to the problem. Most of these algorithms are called Swarm Optimization (SO) (Poli, et al., 2007) or meta-heuristic optimization mechanisms (Blum, et al., 2007). Swarm intelligence (SI) is an approach to designing distributed problem-solving devices inspired by the collective behavior of social insect colonies and other animal societies (Raja, 2015), and it can be used to solve many types of optimization problems. Examples of swarm behaviour in natural systems include bacterial growth (Zwietering, et al., 1990), whales (Mirjalili and Lewis, 2016), bird flocking (Reynolds, 1987), fish schooling (Bastos, et al., 2008) and spiders (James and Li, 2015). Swarm intelligence techniques have become very popular and are commonly used for solving many types of optimization problems due to several advantages, some of which are (Kordon): • Easy to implement.
• Fewer parameters to adjust.
• Lower memory requirements (less space complexity).
• Good results obtained in reasonable time.
Grey Wolf Optimizer (GWO) is a recently established SI optimization mechanism that many researchers have used to solve optimization problems such as parameter estimation in surface waves (Song, et al., 2015), economic emission dispatch (Song, et al., 2015) and scheduling problems (Komaki and Kayvanfar, 2015).
GWO is inspired by the grey wolf (Canis lupus); it imitates its hunting methods and hierarchical pack structure, whose levels are referred to as Alpha (α), Beta (β), Delta (δ), and Omega (ω) and are used to imitate the chain of command, as shown in Figure 2.
Figure 2. Hierarchy of Grey Wolf (Mirjalili and Lewis, 2014)
As seen in Figure 2, dominance decreases from top to bottom. The first level is Alpha (α), the leader, which is not necessarily the strongest wolf but the one superior to the others in managing the pack; it is therefore responsible for decision making. The second level is Beta (β), which helps Alpha in decision making and acts as an adviser to Alpha and an educator of the pack. The third level is Delta (δ), which controls Omega (ω); this category includes scouts, sentinels, elders, hunters, and caretakers. Finally, the fourth level is Omega (ω), which acts as the scapegoat and submits to all dominant wolves.
In the real world, many complicated events occur simultaneously and in temporal sequence, such as weather and galaxy formation, whose modelling far exceeds the capabilities of single-processor architectures (Worboys, 2005).
Therefore, parallel computing has emerged to increase performance and reduce the computation time for solving such problems (Barney, 2010): parallel machines break a single problem, such as TSP, into parallel tasks that are performed simultaneously. Parallel computing is much more suitable for modelling and simulating complex problems (D'Angelo, 2011).
A hypercube is a multi-dimensional mesh of processors with exactly two processors in every dimension; a d-dimensional hypercube is thus made up of p = 2^d processors. For example, a zero-dimensional hypercube is a single processor. In general, a (d + 1)-dimensional hypercube is constructed by connecting the corresponding processors of two d-dimensional hypercubes, as shown in Figure 3 (Bhuyan and Agrawal, 1984). In this study, we decided to use the hypercube because it has been shown to perform better than other static network topologies (Kiasari, et al., 2008) in terms of diameter, bisection width, arc connectivity and cost, as shown in Table 1, and also because the hypercube topology has been implemented in many supercomputers, such as NASA's Endeavour supercomputer (Cathleen, 2011).
The main contributions of this paper are summarized as follows: • This study adapts GWO to solve the TSP problem; it is executed sequentially on World TSP benchmarks of different sizes, and the performance is measured in terms of execution time and optimal cost.
• To compare the results of GWO with other meta-heuristic algorithms, GA (Genetic Algorithm) and CRO (Chemical Reaction Optimization) are chosen and adapted to solve the TSP; the performance metrics are measured in terms of execution times and optimal costs.
• Development of the parallel GWO. The parallel GWO is developed on the basis of both data and computation distribution techniques over the hypercube interconnection network. The data distribution technique is designed by dividing the dataset map (cities) with the goal of achieving load balancing across the interconnection network. Computation distribution is provided by distributing GWO iterations across the interconnection network to reduce the computing time.
• A comparison between PGWO (Parallel Grey Wolf Optimizer), PCRO (Parallel Chemical Reaction Optimization) and PGA (Parallel Genetic Algorithm) in terms of execution time, parallel runtime, speedup, efficiency and optimal cost. PGWO shows better performance results than PCRO and PGA.
The remainder of this paper is structured as follows. In Section II, work related to this study is presented; Section III addresses our parallel model of GWO for solving TSP; in Section IV, the analytical evaluation of the sequential and parallel versions of TSP-GWO is presented; Section V discusses our experimental results; finally, conclusions and future work are given in Section VI.
Related Works
Several researchers have been conducting research on solving TSP and applying their algorithms on different topologies; below are some very recent studies.
A recent meta-heuristic algorithm was used to solve TSP in (Kumbharana and Pandey, 2013), where the authors used the Firefly Algorithm (FA); the experimental results were obtained on TSP instances of different sizes. The authors show that the proposed algorithm provides better results than Ant Colony Optimization (ACO), the Genetic Algorithm (GA) and Simulated Annealing (SA) on most of the instances. In (Bhardwaj and Pandey, 2014), the authors presented a parallel Ant Colony Optimization algorithm to solve TSP on a heterogeneous platform using the OpenCL framework. The control parameters of the ant system were tuned to their best values: α = 1 and β = 5, which determine the dependency of the transition probability on the pheromone content and on the heuristic, and ρ = 0.5, the evaporation rate. The parallel implementation is done on CPU and GPU using OpenCL, where the GPU gives better results. In (AbdulJabbar and Abdullah, 2016), the authors proposed a hybrid algorithm based on two meta-heuristic methods: Simulated Annealing (SA) and Tabu Search (TS). The goal of using tabu search is to resolve the long computation times of SA by keeping the best solution found in each SA iteration. Compared with the basic version of SA, the authors found that the proposed approach reduces the time complexity by finding the optimal path (best solution) within a few iterations. In another study (Anjaneyulu, et al., 2014), approximation algorithms were used to find near-optimal solutions; an approximation algorithm can target maximization or minimization depending on the problem, and for TSP it is minimization. The authors focused on a special case of TSP, Metric TSP (the distance between two cities is the same in both directions), and proposed a parallel 2-approximation algorithm for Metric TSP. They reported that the algorithm finds near-optimal solutions with a significant reduction in runtime. In (Razali and Geraghty, 2011), the authors used the Genetic Algorithm (GA) with different selection strategies to solve the TSP problem: tournament selection, a popular selection method in genetic algorithms; proportional roulette wheel selection; and rank-based roulette wheel selection. The algorithms are coded in MATLAB and tested on eight TSP instances. They found that GA with the rank-based roulette wheel selection always gives better solution quality than the other selection strategies.
Because of the importance of TSP and its applications (Applegate, et al., 2007), this study presents a solution to the TSP using GWO, a recently established meta-heuristic optimization mechanism that has succeeded in solving many types of optimization problems with good results. To compare the results of GWO with alternative meta-heuristic algorithms, GWO is compared with GA and CRO. GA, besides achieving great success in solving many optimization problems, is also used for comparison in most meta-heuristic optimization research, as in (Shaheen and Sleit, 2016), (Ross and Corne, 1995) and (Ingber and Rosen, 1992). CRO is also compared with GWO because it is one of the newest meta-heuristic algorithms and has obtained good results in solving NP problems, as in (Barham, et al., 2016), (Shaheen, et al., 2018) and (Sun, et al., 1990).
Proposed Approach
GWO is a new nature-inspired meta-heuristic (swarm intelligence) algorithm; this type of algorithm is inspired by natural systems (Mirjalili, et al., 2014). GWO gets its name from the social hierarchy of wolves, as well as their hunting behavior. The hunting behavior of grey wolves is split into four procedures: (1) chasing, (2) encircling, (3) hunting and (4) attacking the victim. In the chasing phase, the algorithm considers α to be the best solution, β the second-best solution and δ the third-best solution, while ω represents the remaining candidate solutions; thus the hunting is led by the dominant wolves (α, β, and δ). In other words, grey wolves recognize the position of the prey through an iterative process and surround it. In the encircling phase, grey wolves enclose the victim during the hunt (optimization) by calculating their distances from the location of the prey.
In the hunting phase, the hunt is generally led by the leader (α); however, sometimes β and δ contribute to the hunting.
On the other hand, the position of the prey, which represents the optimum, is unknown. Therefore, the algorithm assumes that α, β, and δ have better knowledge of the position of the prey; thus, the algorithm saves the three best solutions and updates the locations of the remaining wolves (ω) according to the positions of the dominant wolves (the best search agents). In the attacking phase, the final phase, the hunt ends when the prey stops moving, i.e., when the optimization has obtained its solution.
In this paper, we used the concept of GWO to solve TSP. The proposed solution is implemented in two approaches.
In the first approach, the algorithm is implemented sequentially using the standard Java programming language; the second approach uses Java multithreading and aims to make the most of the CPU processing power. The reason for implementing two versions of the algorithm is to demonstrate the feasibility of the parallel structure. The approaches are implemented in the most logical way possible.
Sequential ALGORITHM: "GWO-TSP"
In GWO, there are four types of wolves — Alpha (α), Beta (β), Delta (δ), and Omega (ω) — and the prey; the wolves apply their hunting methods to hunt the prey, implemented in a hierarchical way until the Alpha wolf takes the decision to attack. TSP consists of a number of cities with a travelling cost between each pair of cities, and the objective is to find the shortest path passing through all cities, i.e., a simple cycle tour which starts and ends at city 1. Figure 4 presents the pseudo-code for the proposed sequential "GWO_TSP" algorithm, and Table 1 shows the main attributes of "GWO_TSP" and their meaning, in comparison with their meaning in GWO.
Table 1 (excerpt). Fitness function — current optimal TSP solution.
The GWO-TSP algorithm proceeds in three stages, as shown in Figure 4 (lines 1, 6 and 23). First, the initialization stage, shown in Figure 4 (lines 1-5), assigns initial values to the algorithm parameters. Each individual in the population is an array representing a candidate solution, the population holds at most the maximum number of candidate solutions, and each individual encodes a full tour, as in Table 1. Next, three preys are initialized and assigned; this follows the GWO concept that the first three wolves (α, β, and δ) have better knowledge of the position of the prey, and, as in Table 1, each prey represents a city from the city map. Population (line 4) is an empty array used for constructing the candidate solutions. To start building solutions, three preys (cities) are selected randomly from the city map (dataset) and added as the initial population, and all towns surrounding them are considered the grey wolves; this is the final step of the initialization stage. The goal of the iteration stage is to generate and build the candidate solutions (tours) until the best solution is reached. The iteration stage is shown in Figure 4 (lines 6-22). After generating the three required populations as in the initialization stage, each of them contains a prey surrounded by a group of wolves. The function in line 9 calculates the distances of all wolves from the prey, returns the nearest three wolves and assigns them to Xα, Xβ, and Xδ respectively. Then, based on the positions of the three best wolves, Xα is added to the original population, and two new populations are created, each containing the prey plus Xβ or Xδ respectively. That means each population generates two new populations, each of which contains two preys; in other words, after the first iteration the number of populations equals nine and each of them contains two preys. Next, the algorithm calculates the fitness of each population, i.e., the cost of travelling between its cities. As the number of populations grows, the function in line 20 removes the most costly tours whenever the number of populations exceeds the allowed population size. The iteration stage keeps working until the stopping criteria are reached: the number of populations must be larger than the allowed population size, and each population should contain a full solution, i.e., as many cities as there are in the map; this is checked by the FullTour function. In the final stage, the best solution found is returned.
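A loose, illustrative Java sketch of the construction loop described above (not the Figure 4 pseudo-code itself); the class name, the nearest-wolf selection and the pruning rule are simplifications introduced for illustration:

```java
import java.util.*;

// Illustrative sketch of the GWO-TSP iteration: each "population" is a partial tour,
// the three cities nearest to the current prey play the roles of alpha, beta and delta,
// and the costliest partial tours are pruned when the population grows too large.
public class GwoTspSketch {
    static double dist(int[] a, int[] b) { return Math.hypot(a[0] - b[0], a[1] - b[1]); }

    static double cost(List<int[]> tour) {            // fitness: total travelling cost so far
        double c = 0;
        for (int i = 1; i < tour.size(); i++) c += dist(tour.get(i - 1), tour.get(i));
        return c;
    }

    static List<List<int[]>> step(List<List<int[]>> populations, List<int[]> cityMap, int maxPop) {
        List<List<int[]>> next = new ArrayList<>();
        for (List<int[]> tour : populations) {
            int[] prey = tour.get(tour.size() - 1);    // current prey = last city added
            List<int[]> wolves = new ArrayList<>(cityMap);
            wolves.removeAll(tour);                    // only unvisited cities can be wolves
            wolves.sort(Comparator.comparingDouble((int[] w) -> dist(w, prey)));
            for (int k = 0; k < Math.min(3, wolves.size()); k++) {
                List<int[]> extended = new ArrayList<>(tour);
                extended.add(wolves.get(k));           // extend the tour towards alpha/beta/delta
                next.add(extended);
            }
        }
        next.sort(Comparator.comparingDouble(GwoTspSketch::cost));
        return next.size() > maxPop ? new ArrayList<>(next.subList(0, maxPop)) : next;
    }
}
```

Calling step repeatedly, starting from a few randomly chosen single-city tours, mimics the iteration stage until full tours are built and the cheapest one is reported.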
To understand how this study uses GWO to optimize the TSP problem, consider Figure 5.a. It shows a TSP problem as an undirected graph of four cities and six edges; each edge has its own travelling cost, which is used in this example to represent the possible solutions. Based on Table 1, each element of GWO has its counterpart in GWO-TSP; for instance, the grey wolf population in GWO is represented in GWO-TSP by the candidate solutions of TSP. By adapting the elements of GWO to our proposed algorithm, the example displayed in Figure 5.a becomes that shown in Figure 5.b.
Parallel ALGORITHM: "GWO-TSP"
The large number of possible solutions, even with GWO, opens the possibility of parallelizing the solution of the TSP problem; the main objective of this paper is to study the possibility of parallelizing GWO to solve TSP. In our proposed model, we have created a parallel version of GWO (PGWO) that can be efficiently executed on the hypercube interconnection network.
For developing PGWO, the original TSP map must be divided to achieve load balancing among all processors. In this study, we divide the TSP problem into sub-problems by creating districts from the original map (dataset) during the partition operation; we then apply the GWO-TSP steps to each district generated by the partition operation. The partition operation consists of two steps: • Find the highest and lowest values (edges) of the map.
• Divide the map into multiple districts by subtracting the lowest values from the highest values and then dividing the difference by the desired value. Figures 6.1-6.3 show how districts are created from an original map. The number of districts depends on the number of processors used; for example, if the number of processors is 16, then the number of districts must be at least 16. Equation 7 is used to find the number of districts.
D = N (7)
where D is the number of districts and N is the number of processors. Figure 7 presents the districts algorithm. As discussed in the introductory section, the hypercube interconnection network contains 2^d nodes, where d is the hypercube dimension; we can find the number of dimensions of the hypercube from the number of nodes using equation 8, d = log2 N (8), where N is the number of nodes.
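A minimal sketch of the districting step, assuming the bounding box of the map is split into equal-width strips; the City record and the splitting rule are illustrative and may differ from the exact algorithm of Figure 7:

```java
import java.util.*;

// Illustrative sketch: split the map's bounding box into D vertical strips,
// one district per processor (D = N), and group cities into their district.
public class Districts {
    record City(double x, double y) {}

    static List<List<City>> createDistricts(List<City> map, int processors) {
        double minX = map.stream().mapToDouble(City::x).min().orElseThrow();
        double maxX = map.stream().mapToDouble(City::x).max().orElseThrow();
        double width = (maxX - minX) / processors;          // (highest - lowest) / desired value
        List<List<City>> districts = new ArrayList<>();
        for (int i = 0; i < processors; i++) districts.add(new ArrayList<>());
        for (City c : map) {
            int idx = (int) Math.min(processors - 1, (c.x() - minX) / width);
            districts.get(idx).add(c);                       // group each city into its strip
        }
        return districts;
    }
}
```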
To distribute districts over the hypercube, all nodes in the hypercube interconnection network are labelled, where the number of bits required to label all nodes equals the dimension value. For example, if the dimension is 3, all nodes are labelled with three bits (000, 001, 010, ..., 111), which gives eight nodes. To distribute the districts through the hypercube network, d is found using equation 8 and a loop over the dimensions is executed: if d = 3, the loop runs three times, and in each pass the districts held by a node are divided in half and one half is sent to the neighbouring node along dimension (d − 1). At each node the same operation is performed until dimension d = 0 is reached, which indicates that the distribution stops (see Figure 9). Figure 9 shows the communication mechanism for d = 2. The per-district population size is determined from the district size, where a is a factor in [0, 1], districtSize equals the number of cities in the district, and the population size is the maximum number of candidate solutions.
The data combination is performed by reversing the order of the steps of the distribution phase. After each node completes all the planned iterations, it sends its best solution to the corresponding node, which links the received route with the route it holds; in the end, the master node contains the final solution (full path). Figure 10 presents the pseudocode for the proposed parallel "GWO_TSP" algorithm.
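A minimal, illustrative sketch of the scatter/gather pattern described above, assuming node labels are d-bit integers; sendDistricts, receiveDistricts, sendTour, receiveTour and link are placeholder stubs standing in for the real message-passing layer:

```java
import java.util.*;

// Illustrative sketch (not the authors' code) of one-to-all scatter and all-to-one gather
// over a d-dimensional hypercube; node labels are d-bit integers and node 0 is the master.
public class HypercubeScatterGather {
    record Tour(List<int[]> cities, double cost) {}

    static void distribute(int myId, int d, List<List<int[]>> districts) {
        for (int k = d - 1; k >= 0; k--) {
            int lowMask = (1 << (k + 1)) - 1;
            int partner = myId ^ (1 << k);                 // neighbour along dimension k
            if ((myId & lowMask) == 0) {                   // I hold data: send my upper half
                int half = districts.size() / 2;
                sendDistricts(partner, new ArrayList<>(districts.subList(half, districts.size())));
                districts.subList(half, districts.size()).clear();
            } else if ((myId & lowMask) == (1 << k)) {     // my partner held data: receive my half
                districts.addAll(receiveDistricts(partner));
            }
        }
    }

    static Tour combine(int myId, int d, Tour myBest) {
        for (int k = 0; k < d; k++) {
            int lowMask = (1 << (k + 1)) - 1;
            int partner = myId ^ (1 << k);
            if ((myId & lowMask) == (1 << k)) {            // send my partial route upward and stop
                sendTour(partner, myBest);
                return null;
            } else if ((myId & lowMask) == 0) {            // link the received route with mine
                myBest = link(myBest, receiveTour(partner));
            }
        }
        return myBest;                                     // node 0 ends with the full path
    }

    // Message-passing and tour-linking stubs (placeholders for a real implementation).
    static void sendDistricts(int to, List<List<int[]>> d) {}
    static List<List<int[]>> receiveDistricts(int from) { return new ArrayList<>(); }
    static void sendTour(int to, Tour t) {}
    static Tour receiveTour(int from) { return new Tour(new ArrayList<>(), 0); }
    static Tour link(Tour a, Tour b) {
        List<int[]> joined = new ArrayList<>(a.cities());
        joined.addAll(b.cities());
        return new Tour(joined, a.cost() + b.cost());
    }
}
```

The distribution loop mirrors the d communication steps counted in the analytical evaluation below, and the combination loop is its exact reverse.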
Figure 10. PGWO-TSP algorithm on the hypercube interconnection network
Analytical Evaluation
This section provides the analytical evaluation of the sequential GWO-TSP algorithm in terms of time complexity, and of the parallel version on the hypercube interconnection network in terms of parallel time complexity, speedup, efficiency, and cost.
Analytical Evaluation of Sequential GWO-TSP Algorithm
As described before, GWO-TSP consists of multiple steps: it first creates the initial population and then creates new generations by calculating the distances between cities.
All the terms that precede the main loop (line 6) are constants. As shown in Figure 4 (lines 1-5), the outer while loop is expected to run until the population size is reached, where each population must contain a full solution (lines 2-8). In the worst case, the number of populations is equal to the number of cities, which means O(n). Inside the main loop, another loop runs over the population size (lines 8-18), and in each iteration three cities are picked and the population is updated, O(n). The function in line 9, which calculates the distances of all wolves, requires O(n), while lines 10 to 16 are constants. In line 17, the time complexity of the function calculating the fitness value of each population is O(n). The variables in lines 17 to 19 are constants.
The total time complexity of sequential GWO-TSP is shown in equation 10, where T is the time complexity, n is the number of populations and C denotes constants; equation 10 can be reduced to equation 11. The largest term of equation 11 is n^2; thus, the final time complexity is O(n^2).
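A hedged reconstruction of equations 10 and 11, assuming the loop structure described above (an outer O(n) loop whose body costs O(n) for the distance calculation plus O(n) for the fitness evaluation and constant work):

$$T(n) = C + n\,(n + n + C) = 2n^{2} + Cn + C \qquad (10)$$
$$T(n) \approx 2n^{2} + Cn \;\Rightarrow\; T(n) = O(n^{2}) \qquad (11)$$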
This section provides the analytical evaluation of GWO-TSP algorithm on Hypercube interconnection network in terms of parallel time complexity, speedup, and efficiency.
Parallel Time Complexity
The parallel execution time is equal to the total of computation time plus the total of communication time.The time required to apply the sequential GWO-TSP on a set of cities represents the computation time and the communication time is equal to the number of communication steps required in both phases, distribution and combination.
The analytical evaluation of the parallel execution time for all phases of the GWO-TSP algorithm over the hypercube interconnection network is obtained by tracing the algorithm in Fig. 10, as shown in Table 2.
The overall parallel execution time of phases 1-4 is shown in equation (12), T_P = O(n^2) + O(d) + O(n^3/p) + O(d + n), where T is the time complexity, n is the number of cities, p is the number of processors and d is the dimension of the hypercube.
Table 2. All phases of the GWO-TSP algorithm on the hypercube interconnection network.
Phase 1 (Load balancing phase): The root processor executes the Create Districts algorithm of Fig. 7, which splits the map into a number of districts. Lines 1-3 take O(5C+1), lines 4-10 take O(n), and grouping the cities into districts takes O(C × N^2), where C is the number of cities in the input data and N is the array of districts. The total time complexity is O(C + n + C × n^2) ≈ O(n^2).
Phase 2 (Data distribution phase): In the hypercube interconnection network, distributing the data through the whole hypercube requires d steps, where d is the dimension of the hypercube, which is log P; the execution time is O(d).
Phase 3 (Local repetitive phase): All processors run the sequential GWO-TSP on their own district. This requires N/P × N^2 time, where N is the number of cities, P is the number of processors and N^2 is the run time complexity of the sequential GWO-TSP.
Phase 4 (Data combining phase): All processors send their solutions to the root node; this is performed in log P steps, which equals d, and the root processor requires O(n) time to combine all solutions. The total time complexity is O(d + n).
Speedup
Speedup is an important measure for a parallel algorithm, defined as the ratio of the sequential computation time to the parallel time, S = T_S / T_P (14), where T_S is the time required by the sequential algorithm and T_P is the time required by the parallel algorithm.
The sequential time complexity of GWO-TSP is O(n^2) and the parallel time is given in equation (12); the speedup of GWO-TSP over the hypercube is therefore shown in equation 15: S = n^2 / (n^2 + d + n^3/p) (15)
Efficiency
One of the important factors for measuring parallel performance is parallel efficiency, which measures how well the processors of the interconnection network are utilized. It equals the ratio between the speedup and the number of processors, E = S / p (16), where E is the efficiency, S is the speedup of equation 15 and p is the number of processors.
Simulation Results
For our experiments, we used a computer with an Intel Core i5-3317U CPU at 1.70 GHz and 8 GB of RAM. The simulations of GWO, CRO, and GA were implemented in Java JDK 8. The algorithms were tested on six TSP problems of different sizes taken from the World TSP (TSP website, 2009): XQF131, XQG237, PMA343, PKA379, PBL395, and PBN423. The parameters are fixed as follows: the number of wolves equals the number of cities in each TSP instance, and the maximum number of solutions equals 70% of the number of cities in the dataset; this value is selected to make the algorithm more scalable and to reduce both the computation time and the required space. For fairness, the same specifications and the same stopping criteria are used in our simulations for all algorithms.
Sequential Results
Since GWO, CRO, and GA are meta-heuristic mechanisms, the results obtained in different executions can differ; because of this, we repeated the simulation 25 times and recorded the results, as shown in Table 3. In Table 3, the first column shows the name of the instance (the number in the name indicates the number of nodes of the instance), and the second column shows the best-known solution for each instance, taken from the World TSP [42].
For each algorithm there are four columns: the best column shows the best fitness value over all executions; the mean column shows the average quality of the 25 executions of the algorithm; the error rate column shows the relative difference between the fitness (minimum) of the best individual found by the algorithm and the optimal TSPLIB value, calculated as in equation (18); finally, the time column shows the time needed to execute the entire program, in seconds.
Error = (Best Solution − Optimal Solution) / Optimal Solution × 100%    (18)
where Error is the relative difference from the optimal tour, Best Solution is the tour length obtained in the experiment, and Optimal Solution is the tour length of the optimal solution. From Figure 11, it is clear that GWO always gives the highest solution quality (minimum travelling cost) for all TSP instances tested, followed by the CRO and GA algorithms. However, the quality of the solutions decreases as the instance size increases, along with an increase in execution time for all algorithms.
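As a quick illustration of equation (18), the following minimal Java sketch (not from the paper; the tour lengths are hypothetical) computes the relative error of a tour; the percentage form of the equation is assumed:

```java
// Relative error of a tour against the best-known optimum, per equation (18);
// the percentage form is assumed.
public class TourError {

    static double errorPercent(double bestSolution, double optimalSolution) {
        return (bestSolution - optimalSolution) / optimalSolution * 100.0;
    }

    public static void main(String[] args) {
        // Hypothetical example: a tour of length 580 against an optimum of 564.
        System.out.printf("Error = %.2f%%%n", errorPercent(580.0, 564.0)); // ~2.84%
    }
}
```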
From Figure 12, we can observe that the runtimes of all the algorithms are almost the same, with slight differences; the best runtime comes from CRO for some data instances, such as PMA343 and PBN423. It is also clear that the experimental and theoretical times converge.
Parallel Results
For the parallel results, we used a big TSP problem (IRW2802) taken from the World TSP, which contains 8423 cities and has a known optimal tour length of 5533.
Figure 13 shows that the speed-up increases in all the algorithms along with the increase in the number of nodes used, until at a certain number of nodes it begins to decrease. We can see that the speedup is almost linear for PGWO when it uses 32 to 256 processors, while it is sub-linear with 512 processors and then begins to decrease. This is due to the communication overhead in the 512- and 1024-processor scenarios, which is much greater than that with 256 processors or fewer. PGWO obtained a better speedup compared to PCRO and PGA. Comparing the error rates of the fitness values in Figure 14, we find that in most cases the quality of the solutions generated by PGWO is better than that of PCRO and PGA. This is because in GWO the populations (solutions) are built from scratch, whereas the CRO and GA populations are created randomly. The results obtained by CRO are better than those of GA because its four types of reactions improve the solutions more than GA does. Moreover, with the increase in the number of processors, the error rate increases. This is mainly due to the division of the data and of the number of iterations across the processors.
Figure 15. Relative efficiency for GWO, CRO, and GA.

Figure 15 shows the relative efficiency for PGWO, PCRO, and PGA. Efficiency decreases in all the algorithms as the number of nodes used increases: since the amount of data assigned to each node decreases, the difference between communication time and computation time is reduced, which affects the speedup. The speedup is not linear with the increase in the number of nodes used, which in turn affects the efficiency, but we can see that the efficiency of PGWO is better than that of PCRO and PGA, with the worst efficiency obtained from PGA.
Conclusions and Future Work
This study introduces a parallel model of the GWO algorithm, called "GWO-TSP", for solving the TSP on a hypercube interconnection network. An analytical evaluation of the sequential and parallel algorithms is presented. The parallel GWO is compared first with the sequential GWO and then with PCRO and PGA. The simulations were performed on TSP instances of different sizes; for fairness, the same stopping criteria were used for all algorithms.

The results show that the parallel GWO-TSP can improve the fitness value and reduce the computation time, with higher speed-up and better parallel efficiency.

For future work, we intend to compare GWO-TSP with other meta-heuristic algorithms, and to design and test a different interconnection network.
Figure 6. Splitting the TSP map into multiple districts.
Figure 11. Quality of solutions for GWO, CRO, and GA.
Table 3. The experimental results of sequential GWO, CRO, and GA in terms of fitness value, quality of solution, and execution time.
Table 4. Parallel time, speedup, and error rate for PGWO, PCRO, and PGA. The first column shows the number of processors; the time column shows the parallel time (best time), in seconds, taken to run the entire program on the nodes; the third column shows the speedup, the ratio of the sequential time to the parallel time, as in equation (14); and the error rate column shows the error value of the fitness function (minimum) of the best individual, as in equation (18).
"year": 2018,
"sha1": "43c586fbea33aead4b41ac7504f609c49929b839",
"oa_license": "CCBY",
"oa_url": "https://www.ccsenet.org/journal/index.php/mas/article/download/76080/42502",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "43c586fbea33aead4b41ac7504f609c49929b839",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": [
"Mathematics"
]
} |
Peptidic Inhibitors and a Fluorescent Probe for the Selective Inhibition and Labelling of Factor XIIIa Transglutaminase
Factor XIIIa (FXIIIa) is a transglutaminase of major therapeutic interest for the development of anticoagulants due to its essential role in the blood coagulation cascade. While numerous FXIIIa inhibitors have been reported, they failed to reach clinical evaluation due to their lack of metabolic stability and low selectivity over transglutaminase 2 (TG2). Furthermore, the chemical tools available for the study of FXIIIa activity and localization are extremely limited. To combat these shortcomings, we designed, synthesised, and evaluated a library of 21 novel FXIIIa inhibitors. Electrophilic warheads, linker lengths, and hydrophobic units were varied on small molecule and peptidic scaffolds to optimize isozyme selectivity and potency. A previously reported FXIIIa inhibitor was then adapted for the design of a probe bearing a rhodamine B moiety, producing the innovative KM93 as the first known fluorescent probe designed to selectively label active FXIIIa with high efficiency (kinact/KI = 127,300 M−1 min−1) and 6.5-fold selectivity over TG2. The probe KM93 facilitated fluorescent microscopy studies within bone marrow macrophages, labelling FXIIIa with high efficiency and selectivity in cell culture. The structure–activity trends with these novel inhibitors and probes will help in the future study of the activity, inhibition, and localization of FXIIIa.
Introduction
The transglutaminase (TGase) family of enzymes is comprised of eight calcium-dependent isozymes and the non-catalytically active erythrocyte membrane protein band 4.2 [1-3]. These enzymes carry out numerous functions in biological settings, with a primary role in crosslinking proteins through the formation of Nε(γ-glutaminyl)lysine bonds using a Cys-His-Asp catalytic triad [4-6]. Within the TGase family are two isozymes of current therapeutic interest, transglutaminase 2 (TG2) and Factor XIII (FXIII). Human TG2, also referred to as tissue transglutaminase, is ubiquitously expressed throughout virtually all tissues [7,8]. TG2 is noteworthy due to its transamidase activity involved in liver fibrosis, its deamidation role in celiac disease, and its intracellular G-protein activity [9-11], which has been implicated in numerous cancer models including the epithelial-mesenchymal transition of cancer stem cells [12-17]. The major role of FXIII is within the final step of the coagulation cascade, where it mediates the crosslinking of insoluble fibrin monomers to form a rigid 3D blood clot network. This makes it a viable therapeutic target for the development of novel anticoagulant drugs in the treatment of venous thrombosis [18]. Current drugs, including heparins and coumarins, target a multitude of upstream clotting factors, thus preventing soft clot formation and increasing the risk for severe bleeding [19-21]. Inhibition of the downstream FXIII is believed to provide a milder alternative.

Figure 1. Previously reported transglutaminase inhibitors: ZED1301 [33], ZED2360 [41], ZED3197 [22], Merck's imidazolium inhibitor 16 [39], and Keillor's TG2-targeted small molecule inhibitor NM72 [42].
While Factor XIIIa therapeutics remains an active area of research, the chemical tools currently available for studying this crucial clotting enzyme are comparatively limited. To our knowledge, only two probes for monitoring the activity of FXIIIa in biological systems have been reported to date [43,44]. Both probes serve as α2-antiplasmin substrate mimics and are tagged with either a near-IR fluorophore or Gd-chelating magnetically resonant contrast moiety. These compounds label blood clots through FXIIIa-mediated incorporation into fibrin, allowing for the indirect deduction of FXIII activity and localization. No known probes have been reported to specifically label the active form of the enzyme itself; the development of such a tool would aid in the study of the localization, migration, and activity of FXIII in cellulo.
In our work on TGases, we developed numerous activity assays, probes, and targeted covalent inhibitors that show high isozyme selectivity, mainly within the context of studying TG2 [42,45-50]. In the current work we design, synthesize, and evaluate inhibitors of FXIIIa, and use the optimised inhibitor design in the production of a fluorescent probe. In order to achieve the desired isozyme selectivity for FXIII over TG2, we relied upon previous kinetic data and the binding pocket differences between FXIIIa [33,41,51] and TG2 [42,45,46,52] to design several series of peptidic and small molecule inhibitors. More specifically, we explored structure-activity relationships (SAR) with respect to the linker length, electrophilic warhead functionality, hydrophobic moiety, and acidic group, while keeping the scaffold backbone constant. Evaluation of the peptidic and small molecule inhibitor series exposes inconsistent and scaffold-dependent trends, potentially due to conformational dynamism within the active site of the enzyme [41], akin to that observed recently in the TG2 field [45]. We then further modified the most potent scaffold to incorporate a fluorescent moiety, thereby creating a high-affinity fluorescent probe for the specific labelling of FXIIIa within biological settings. The fluorescent probe was assayed in cell culture to display its effectiveness at labelling FXIIIa in cellulo.

Although extensive SAR studies have been performed on the amino acid sequences of the ZED3197 and ZED1301 scaffolds, the crucial warhead residue remains less explored [51,53,54]. With respect to the electrophilic moiety itself, it is noteworthy that the α,β-unsaturated ester warhead is a common feature among Zedira's inhibitors, while other amide-based electrophiles, such as acrylamides and the α-chloroacetamide, had not apparently been tested on these peptidic scaffolds. This is surprising, as both these amide-based warheads have shown great success in achieving potent transglutaminase inhibition while remaining stable to degradation by glutathione in the cell [55]. Furthermore, the distance between the scaffold and electrophile, hereinafter referred to as the linker length, was not varied in the Zedira FXIII inhibitor studies available in the literature [51,53,54]. We found that the linker length has a profound impact on peptidomimetic TG2 inhibitor potency, with longer linkers being more efficient than shorter ones [46]. Combined with crystal structure evidence that the active site tunnel leading to the catalytic Cys thiolate is shallower in FXIII than in TG2 [33,52], we hypothesised that decreasing the linker length may be a viable method of increasing the potency of FXIIIa inhibition and the selectivity over TG2. Thus, a series of peptidic FXIIIa inhibitors with three different electrophilic warheads (α,β-unsaturated ester, acrylamide, and α-chloroacetamide) and four linker lengths (one through four methylene units) was designed in order to investigate the impact of these structural features on the potency of FXIIIa inhibition and selectivity over TG2 (Figure 2). The ZED1301 peptide scaffold was selected for this study due to its known affinity for FXIIIa and ease of synthesis. ZED1301 itself was also independently synthesised and evaluated in order to supplement its original kinetic characterization, a condition-dependent IC50 value [33].
Condition-independent kinact and KI values were acquired for more accurate comparisons with the irreversible FXIIIa inhibitors developed herein and in other works.
Design of Small Molecule Inhibitors of FXIIIa
In the hope of developing a FXIII inhibitor with better drug-like properties than the peptide-based inhibitors investigated herein, we also screened a series of small-molecule compounds. Stemming from our extensive research on small-molecule TG2-selective inhibitors, we commenced our search for small-molecule FXIIIa inhibitors by using a previously published scaffold that exhibits low potency against TG2, again noting that the short linker length produces poor TG2 inhibition [46]. Two key elements from the reported high-affinity peptidic sequences [46] for FXIIIa were adapted into the small molecule inhibitor design and retained across our series of compounds, specifically the negatively charged N-terminal moiety and hydrophobic C-terminus. Thus, our SAR work in this small molecule FXIIIa inhibitor investigation encompassed a variety of N-terminal acids and C-terminal hydrophobic units that have shown promise in previous TGase studies ( Figure 3) [46]. Small changes in the linker length were also investigated. The distance between the warhead and scaffold was reduced further through incorporation of a D-Dap warhead-bearing residue, allowing the coupling of the acid to the sidechain amine and the warhead to the α-amine. Given the broad scope of the structural features explored in this work, the known reactivity of the acrylamide towards TGases, and its superior presumed stability and selectivity compared to chloroacetamides and esters, the acrylamide warhead was mostly retained throughout the small-molecule FXIIIa SAR. A few derivatives bearing α,β-unsaturated methyl esters, akin to the Zedira peptides, were also synthesised and evaluated in order to provide a preliminary warhead comparison within this small molecule scaffold and between series with the peptidic compounds developed.
Figure 3. Design approach for small-molecule inhibitors of FXIIIa, based on a known low potency TG2 "inhibitor 17" and driving selectivity for FXIIIa with N-terminal acids and C-terminal hydrophobic groups [42,46].
Design of Fluorescent Probe of FXIIIa
After completion of the synthesis and evaluation of the peptidic and small-molecule FXIIIa inhibitors, the optimised linker, warhead, and scaffold were used to design a rhodamine B labelled probe for studying FXIIIa. Rhodamine B was attached at the N-terminus through a flexible 6-aminohexanoic acid linker to allow the bulky fluorophore to be held far away from the binding site. The rhodamine B fluorophore was chosen due to its desirable, bright red emission, and minimal overlap with background cellular autofluorescence. It was also linked through a proline residue since the tertiary amide linkage allows the rhodamine to preserve its intrinsic fluorescence ( Figure 4).
Synthesis
The general synthesis of the inhibitors disclosed herein was achieved through a combination of solid-phase peptide synthesis (SPPS) and in-solution chemistry. For assembly of the peptidic inhibitors bearing amide-based warheads, amino acid monomers with varying linker lengths leading to the appropriate warheads, or precursors thereof, were synthesised and subsequently incorporated into the linear octapeptides. The syntheses of the unsaturated ester-bearing peptides were carried out in a manner similar to that reported by Zedira [51,53]. The small molecule inhibitors were synthesised through careful in-solution manipulations of protecting groups, allowing for the sequential installation of first the various hydrophobic groups and next the electrophilic warhead. These key intermediates with free N-termini were accumulated, allowing for divergent functionalization to final inhibitors bearing different N-terminal acidic moieties. A detailed discussion of the inhibitor syntheses, as well as full experimental procedures and characterization data, is provided in the Supplementary Materials, along with Schemes S1-S21. Between the peptidic and small molecule scaffold series, a total of 22 inhibitors and 1 probe were synthesised and evaluated for inhibition of FXIIIa.
Kinetic Evaluation
All inhibitors synthesised herein were evaluated for inhibition against FXIIIa, using the fluorescence-quenched A101 isopeptidase assay [56,57], and against TG2, using the colorimetric AL5 transamidase assay [46,50]. The assay substrates A101 and AL5 both present labile bonds that are cleaved by the activities of their corresponding transglutaminases, resulting in the release of a fluorophore or chromophore moiety, respectively. The rate of consumption of these assay substrates, indicated by increases in fluorescence or absorbance over time, can be detected for measuring enzyme activity. Assays were run in duplicate at constant reporter substrate concentration with varying concentrations of inhibitor. Representative fluorescence-time plots are shown in Figure 5A,C. The type of inhibition was determined through visual inspection of the fluorescence-time curvature; termination of the enzymatic reaction with the substrate at a plateau lower than the no-inhibitor positive control is indicative of irreversible inhibition. On the other hand, convergence to a common plateau indicates reversible inhibition.

Irreversibility was tested further through substrate spike experiments. As shown in Supplementary Materials Figures S1-S3, an additional substrate was added to inhibited and uninhibited enzyme reactions after the completion of the positive control reaction. Further increases in fluorescence upon substrate spike, as in Figure S1, indicate reversibility of inhibition, while the lack of an additional increase, as in Figure S2, suggests that all enzyme has been irreversibly inhibited. Analysis was carried out through corrected Dixon modelling in the case of reversible kinetics, to obtain Ki values (Figure 5B), and hyperbolic saturation modelling was used to obtain kinact and KI for inhibitors showing irreversibility (Figure 5E) [58,59]. For cases in which the tested inhibitor concentrations were not high enough to reach saturation, a linear regression provided an estimate of the inhibitor's kinact/KI ratio (Figure 5D). Further details concerning data collection and analysis are provided in the Materials and Methods section.
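For reference, this analysis follows the standard two-step model of time-dependent covalent inhibition (the usual textbook treatment behind kinact and KI; the equations below are not restated explicitly in the source):

\[
\mathrm{E} + \mathrm{I} \;\overset{K_I}{\rightleftharpoons}\; \mathrm{E{\cdot}I} \;\xrightarrow{\,k_{\mathrm{inact}}\,}\; \mathrm{E\text{--}I},
\qquad
k_{\mathrm{obs}} = \frac{k_{\mathrm{inact}}\,[\mathrm{I}]}{K_I + [\mathrm{I}]}
\]

When [I] << KI, kobs ≈ (kinact/KI)[I], which is why a linear regression of kobs against [I] yields an estimate of the kinact/KI ratio for inhibitors tested below saturation.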
Kinetic Evaluation of Peptidic Inhibitors of FXIIIa
The structures of the 11 peptidic inhibitors synthesised in this work are shown in Table 1. All kinetic data from their evaluation are summarised in Table 2. Several interesting trends can be noted when investigating the impact of the warhead functionality and linker length on the potency of FXIIIa inhibition. While the acrylamide-bearing inhibitors 12-14 show reversible competitive FXIIIa inhibition, the analogous chloroacetamides 21-23 operate in an irreversible manner. Since both warheads are known to be capable of covalent bond formation in the presence of a suitable thiolate, the observed differences in inhibition mode with FXIIIa must be due to geometry rather than intrinsic reactivity. Perhaps binding of the warhead carbonyl into the enzyme's oxyanion pocket places the acrylamide's electrophilic carbon too distant from the thiolate, preventing S-C bond formation, while that of the chloroacetamide, being one position closer to the carbonyl, is just close enough to allow for covalent bond formation. It is also interesting to note that the acrylamide and chloroacetamide series show similar linker length trends. Potency is relatively insensitive to changes in the linker length between 2 through 4 methylene units, as the Ki values for acrylamides 12-14 are of the same order of magnitude, and the same can be said for the kinact/KI ratios of chloroacetamides 21-23. However, in both cases, reducing the linker down to 1 methylene, as in compounds 11 and 20, results in a complete loss of inhibition. Since the electrophilic carbon placement is different between compounds 11 and 20, and that of 11 matches 21, a known inhibitor with an f-position electrophilic carbon, it is clear that the lack of inhibition cannot be explained by improper electrophile placement. Instead, improper carbonyl carbon placement may be responsible for the lack of FXIIIa inhibition in compounds 11 and 20. Both place the carbonyl carbon at the d-position, which is apparently too close to the scaffold, and may not allow for appropriate binding in the enzyme's active site.
ZED1301, the lead compound (46), was found to be the most potent irreversible inhibitor of the series, with a kinact/KI ratio several orders of magnitude higher than that of any of the chloroacetamides. The superior potency of ZED1301 (46) can be attributed to its unique warhead structure, as the unsaturated ester was the only warhead studied in this work that places the electrophilic carbon closer to the scaffold than the more distal carbonyl. It is believed that this geometric arrangement allows this inhibitor to take full advantage of the non-covalent oxyanion hole interactions and covalent S-C bond formation, as shown in the crystal structure published by Zedira [33], while inhibitors with the acrylamide and chloroacetamide warheads may be unable to form these interactions due to the excessive induction of strain required in the enzyme or inhibitor. This work also represents the first report of condition-independent kinetic parameters, kinact and KI, for ZED1301 (46), which will allow for accurate comparisons with inhibitors developed in this work and other studies. Either decreasing or increasing the linker length from ZED1301 results in a loss of inhibitory potency and changes to the inhibition mode. The shorter-linker ester-bearing compound 45 shows weak reversible inhibition similar in potency to that seen with acrylamides 12-14, while the longer-linker ester-bearing inhibitor 47 displays highly potent reversible FXIIIa inhibition. These results further support the strict geometric requirements for binding and covalent bond formation in the active site of FXIIIa. As for the impact of linker and warhead on selectivity over TG2, no clear trends were noted. While none of the acrylamides result in any noticeable TG2 inhibition up to 350 µM, most of the chloroacetamides and ester-bearing inhibitors are irreversible TG2 inhibitors. The lack of inhibition seen with compounds 21 and 45, as well as the superior potency of the ester warhead, are attributed to geometrical factors governing the enzyme-inhibitor interaction. With the peptidic inhibitors fully evaluated and with ZED1301 (46) identified as the most potent of the series, we then evaluated our small molecules as potential selective FXIIIa inhibitors.

Table 1. Structural classification of peptidic FXIIIa inhibitors. The linker length (n methylene units) and warhead are presented along with an alphabetical nomenclature system for noting the distances of the electrophilic carbon (E+C) and carbonyl carbon (CO C) from the peptide backbone. For example, a designation of d implies that the feature is present at the δ carbon, the 3rd carbon atom away from the backbone's a/α-carbon.
Table 2. Kinetic parameters, inhibition modes, and isozyme selectivity of acrylamide-, α-chloroacetamide-, and α,β-unsaturated methyl ester-bearing peptidic inhibitors of FXIIIa with varying linker lengths. Data were determined using distinct continuous activity assays for FXIIIa and TG2, detailed in the Materials and Methods section.

Kinetic Evaluation of Small Molecule Inhibitors of FXIIIa

The dansylated inhibitor scaffold was desirable as it could act in a multi-faceted role, as a potent inhibitor scaffold as well as a fluorescent probe that would allow for FXIIIa localization studies. An investigation into the preferences of various alkyl acids revealed that sulfonyl 79 provided the greatest affinity, with an apparent Ki of 26.7 ± 1.3 µM (see Supplementary Materials). However, upon analysis of the similarity in the kinetic results (of both the acid variants and the methyl ester 73), some doubts were raised regarding the assay conditions. A substrate spike of the inhibition assays revealed that the FXIIIa was still active and the inhibitors were not acting in an irreversible manner, potentially implying a reversible mechanism (Supplementary Materials Figure S3). Spiking the 79-inhibition assay with another dose of inhibitor 79 caused a noticeable decrease in apparent RFU (Supplementary Materials Figure S4). Due to the potential fluorescent quenching by both the acrylamide [60] and the fluorescent dansyl moiety of the inhibitor scaffold, they were both evaluated in the inhibition assay. Dansyl amide resulted in nearly identical traces to those obtained using the dansylated inhibitor library (Supplementary Materials Figure S5). A further spectrophotometric analysis of both dansyl and anthranilic acid in the literature revealed that the dansyl absorption peak and the anthranilic acid excitation at 313 nm overlap perfectly. Since A101, as well as the similar substrate A138, are the standard continuous assays for FXIIIa activity [56,61] and were found to be incompatible with our fluorescent dansyl inhibitors, the inhibition data for these compounds were set aside, and the scaffold was optimised to eliminate the fluorescence interference effect.
Sulfonyl-naphthalene derivative 76 provided a Ki of 64.3 ± 5.1 µM (Supplementary Materials); however, this scaffold also produced an absorption band that overlaps with the excitation of A101. Further tailoring gave rise to the naphthoyl derivatives 72 and 77, which do not have photochemical properties that interfere with the fluorescent assay. Succinyl derivative 77 was the lead inhibitor from the L-Dap naphthoyl series, with a Ki of 69.0 ± 4.1 µM. An attempt to decrease the linker length to the warhead by one methylene unit, with the incorporation of the D-Dap residue 84, resulted in the Ki rising to a value of 107.6 ± 19.1 µM. This contradicted our initial intuition that shorter linkers would improve potency against FXIIIa. Since none of the acrylamide-bearing inhibitors in this series led to irreversible inhibition but were clearly binding to the enzyme's active site, two compounds bearing an unsaturated ester warhead (87 and 88) were evaluated. We hoped that the replacement of the acrylamide warhead with the Michael acceptor would result in an irreversible small molecule inhibitor of the enzyme. To our surprise, both the succinic derivative 87 and the phthalic acid derivative 88, with Ki values of 131.1 ± 7.9 µM and 265.0 ± 46.8 µM respectively, had reduced potency versus FXIIIa compared to the acrylamide-bearing compounds. A drastic drop in selectivity was also observed for these scaffolds, such that they were more selective for TG2, with irreversible KI values of 14.3 ± 7.0 µM and 21.2 ± 9.0 µM, respectively. This unexpected finding hints at some scaffold dependence for the ideal warhead and linker combination for achieving potent, irreversible FXIIIa inhibition. While it was found that a linker length of 1 methylene on the peptidic scaffold and an acrylamide warhead (compound 11) leads to no inhibition, the analogous small-molecule compound 77 shows reversible inhibition with Ki = 69 µM. Similarly, while 46 (ZED1301) shows the most potent irreversible FXIIIa inhibition in this work, the small molecule with the same two-methylene linker and unsaturated ester warhead in compound 87 shows only weak reversible inhibition with Ki = 131 µM.
This finding is mirrored perfectly by TG2 investigations in our group, in which we found that longer linkers bearing acrylamides on a peptidomimetic TG2 scaffold led to more potent inhibition than shorter ones [46]. However, more recently we showed that on an abbreviated small-molecule TG2 scaffold, the shortest linker achieves the most potent inhibition [45]. We hypothesize that a significant degree of conformational dynamism exists in the TG2 binding pocket, creating a scaffold dependence for the ideal linker length, based on the enzymatic conformation induced by the binding of different-sized substrates or inhibitors. We believe that a similar phenomenon is at play here with FXIIIa and our inhibitors disclosed herein. It is clear that the two-methylene linker and ester warhead, which are ideal on the peptide, are not as effective on the small molecule scaffold. This notion is further supported by Zedira's recent study revealing a transient hydrophobic pocket in FXIIIa's active site [41]. It appears that trial-and-error may be required to perfectly tailor the linker and warhead for each new scaffold size tested for FXIIIa and that further screening of linkers and warheads on this known binding small molecule scaffold may potentially produce an irreversible, drug-like, small molecule FXIIIa inhibitor.
Since no novel sub-10 µM KI small molecule inhibitors were developed and 46 (ZED1301) remains the most potent irreversible FXIIIa inhibitor evaluated in this work, we opted to adapt this peptidic scaffold for a potential role as a novel fluorescent probe research tool.
Kinetic Evaluation of Fluorescent Rhodamine B FXIIIa Probe
Fluorescent probe 93 (aka KM93, Figure 6), whose design incorporates the two-methylene linker, ester warhead, and peptide scaffold from ZED1301, retains irreversible FXIIIa inhibition with acceptable potency for biological applications. The kinact/KI ratio against FXIIIa (127,300 ± 2890 M−1 min−1) is approximately three-fold lower for the probe than for the parent ZED1301 (46), showing that some potency was lost with the incorporation of the rhodamine B moiety and flexible linker. Fortunately, selectivity was maintained, as the probe remains around 6.5-fold selective for FXIIIa over TG2 (kinact/KI = 19,520 ± 2180 M−1 min−1).
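For clarity, the quoted 6.5-fold selectivity is simply the ratio of the two second-order inactivation efficiencies:

\[
\text{selectivity} = \frac{(k_{\mathrm{inact}}/K_I)_{\mathrm{FXIIIa}}}{(k_{\mathrm{inact}}/K_I)_{\mathrm{TG2}}} = \frac{127{,}300~\mathrm{M^{-1}\,min^{-1}}}{19{,}520~\mathrm{M^{-1}\,min^{-1}}} \approx 6.5
\]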
Irreversible Labelling of Purified FXIIIa by SDS-PAGE
Labelling of commercially available FXIIIa with the rhodamine B fluorescent probe KM93 was then evaluated by SDS-PAGE analysis. FXIIIa was incubated in the presence of 30 µM KM93 and calcium; SDS-PAGE analysis of the resulting sample revealed a protein band that was red fluorescent ( Figure S6). Coomassie staining then showed that this band corresponds to a molecular weight of ~80 kDa-coinciding with that of the commercially available FXIIIa [30,62]. The appearance of this red fluorescent band, on a gel run under denaturing conditions, allowed us to conclude that KM93 successfully labelled FXIIIa in a robust, covalent, and irreversible manner.
Labelling of FXIIIa in Bone Marrow Macrophages
Covalent labelling of FXIIIa by KM93 was tested in cell culture in murine bone marrow macrophages (BMM). BMMs are 'gold standard' expressors of FXIIIa and are responsible for the production of circulating FXIIIa [63]. Labelling of FXIIIa was first performed in cell lysate using 0-20 µM KM93 and evaluated via SDS-PAGE, using fluoroimaging of the rhodamine B moiety. Supplementary Materials Figure S8A shows detectable, concentration-dependent covalent incorporation of KM93 to a band at 80 kDa, with strongest labelling at 8-µM and 20-µM concentrations of the probe over both 6-h and 48-h incubations. A background band was detected at ~100 kDa, in the control as well as the test group, but the identity of this protein is unknown. Coomassie staining of the gel confirms equal loading ( Figure S8B).
Fluorescent microscopy of the labelling showed rapid, clear incorporation of KM93 into BMM cells at 20 µM concentration after 48-h incubations (Supplementary Materials Figure S9 and Figure 7A). Labelling of Factor XIIIa was deemed to be intracellular (Figure 7A), as confirmed by co-staining with actin (Figure S9), and showed colocalization with an FXIII-A antibody (yellow arrows, Figure 7D). Not all FXIII-A antibody labelling was colocalized with the labelling by KM93 (white arrows, Figure 7C). Since labelling by KM93 is based on the enzyme activity of FXIIIa, this result suggests that Factor XIII may be present but not active in these specific macrophages.
Figure 7. Immunofluorescence visualization of the labelling of FXIIIa by KM93 in murine bone marrow macrophages (BMM). Cells were incubated with 20 µM KM93 for 48 h in microscopy chamber slides. At the endpoint, cells were fixed and stained or treated with sheep anti-human FXIII-A antibody, followed by a secondary antibody, donkey anti-sheep AlexaFluor®-488 detection (green, Panel (B)). Nuclei were visualised with DAPI staining (blue). KM93 was visualised at 568 nm (red, Panel (A)). All cells were positive for FXIIIa, incorporation of KM93 into cells was clear, and the probe colocalized with FXIII-A (Panel (C) and yellow arrows, inset Panel (D)). A subset of the macrophages did not incorporate the probe, despite the presence of FXIII-A (white arrows, Panel (C)), indicating that the enzyme may not be active in these specific macrophages. Some cells also showed strong red fluorescence but weak green fluorescence, giving the appearance of little colocalization, despite both probe and enzyme being present at these cellular compartments (red arrows, Panel (C)). White magnification bar represents 20 µm.
An additional experiment was performed in order to confirm the specificity of labelling by KM93. Cultured BMM cells were first treated for 2 h with ZED1301 (46), a known inhibitor of Factor XIIIa. The cells were then treated with fluorescent probe KM93 for an additional 4 h, which was followed by washing and imaging. Negligible fluorescent labelling was observed, relative to cells that were not blocked with ZED1301 prior to being treated similarly with KM93 (see Supplementary Materials, Figure S10). This strongly suggests that KM93 only reacts with the same cellular target as ZED1301 and increases our confidence in the specificity of KM93 for labelling FXIIIa.
Conclusions
In this work, we set out to explore structure-activity relationships for the linker length and warhead functionality on a peptidic scaffold, as well as for the linker length, hydrophobic group, and acidic group on a small-molecule scaffold. While none of our compounds display improved potency over the lead inhibitor 46 (ZED1301), we showed that the mode and potency of FXIIIa inhibition are highly dependent on both the linker length and warhead functionality, and that the optimal combination of these features may be scaffold-dependent due to the conformational dynamism of the enzyme. This work also represents the first report of the condition-independent kinetic parameters for ZED1301 (46). These findings set the stage for further developments in FXIIIa inhibitors as potential anticoagulants, as it is believed that the optimization of linkers and warheads on a small-molecule scaffold may provide a drug-like inhibitor. The structure of ZED1301 was also adapted for the design of the rhodamine B-labelled fluorescent probe 93 (aka KM93), which was shown to retain irreversible inhibitory activity against the enzyme and to be effective at labelling FXIIIa in cellulo, demonstrating its applicability for localization studies in the further study of the biological roles of FXIIIa.
Kinetic Evaluation of FXIIIa Inhibition
A previously reported fluorescence-quenched isopeptidase assay using the commercially available substrate A101 (Zedira GmbH, Darmstadt, Germany) was employed for the determination of FXIIIa inhibition kinetics [56,57]. All assays were performed at 37 °C on 96-well plates, with fluorescence (excitation at 313 nm, emission at 418 nm) measured continuously using a BioTek Synergy 4 plate reader. For each inhibitor, duplicate trials were run in parallel at 4 inhibitor concentrations, along with duplicate positive controls (no inhibitor) and negative controls (no enzyme and no inhibitor). First, 125 µL of aqueous buffer 1 (111 mM Tris, 16.7 mM CaCl2, 333 mM NaCl, pH 7.5) and 18 µL of aqueous buffer 2 (55.6 mM TCEP, 149 mM H-Gly-OMe, pH 7.5) were added to each well, along with 5 µL of A101 stock solution (4000 µM in DMSO). Next, the appropriate amount of the desired inhibitor, dissolved in DMSO, was added, in addition to the amount of distilled water necessary to reach a total volume of 180 µL in each well. Negative controls were diluted to a final volume of 200 µL. Thrombin-activated human FXIIIa (T070, Zedira GmbH, Darmstadt, Germany), stored at −80 °C in 20-µL aliquots (1 mg/mL enzyme) of storage buffer (50.0 mM Tris, 1.00 mM TCEP, 150 mM NaCl, pH 7.5), was warmed to room temperature and diluted to 0.0741 mg/mL through the addition of 250 µL of storage buffer. The diluted enzyme solution was then aliquoted into 10 separate PCR tubes. The assay plate and enzyme tubes were subsequently incubated at 37 °C for 10 min. After the completion of the incubation period, 20 µL of diluted FXIIIa solution was added from each PCR tube to all wells in the plate excluding the negative controls using a multi-channel pipette. This produced a final volume of 200 µL per well, with final reaction conditions of pH 7.5, 74.4 mM Tris, 10.4 mM CaCl2, 223 mM NaCl, 5.10 mM TCEP, 13.4 mM H-Gly-OMe, 100 µM A101, and 7.41 µg/mL (94 nM) FXIIIa. The range of inhibitor concentrations tested spanned from 1 to 200 µM, and the final DMSO concentration in the reaction mixture was kept below 10% v/v. Monitoring of fluorescence emission was commenced after quick stirring by aspiration. Assays were terminated once the fluorescence had plateaued.
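The long list of volumes and stocks above can be cross-checked mechanically; the short sketch below recomputes the final well concentrations (including the contribution of the enzyme storage buffer) and reproduces the stated final reaction conditions.

```python
# Sanity-check of the final FXIIIa assay concentrations from the pipetted
# volumes and stock concentrations stated above. Volumes in µL,
# concentrations in mM unless noted. Total well volume: 200 µL.
V_total = 200.0
components = {
    # name: list of (stock_conc_mM, volume_uL) contributions
    "Tris":      [(111.0, 125.0), (50.0, 20.0)],  # buffer 1 + enzyme storage buffer
    "CaCl2":     [(16.7, 125.0)],
    "NaCl":      [(333.0, 125.0), (150.0, 20.0)],
    "TCEP":      [(55.6, 18.0), (1.0, 20.0)],
    "H-Gly-OMe": [(149.0, 18.0)],
}
for name, parts in components.items():
    final = sum(c * v for c, v in parts) / V_total
    print(f"{name:>10}: {final:6.1f} mM")

# A101 substrate: 5 µL of a 4000 µM DMSO stock into 200 µL
print(f"{'A101':>10}: {4000 * 5 / V_total:6.1f} µM")
# Reproduces the stated finals: 74.4 mM Tris, 10.4 mM CaCl2, 223 mM NaCl,
# 5.1 mM TCEP, 13.4 mM H-Gly-OMe, 100 µM A101.
```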
Kinetic Evaluation of TG2 Inhibition
A previously reported chromogenic assay using the substrate AL5 (Cbz-Glu(γ-p-nitrophenyl ester)Gly-OH) was used to determine TG2 inhibition kinetics as described previously [42,45,46,50]. In brief, assays were conducted at 25 °C in 96-well plates, and absorbance at 405 nm was monitored continuously using a BioTek Synergy 4 plate reader. For each compound, 6 inhibitor concentrations were tested, along with positive (no inhibitor) and negative (no enzyme, no inhibitor) controls. All assays were run in duplicate as separate independent experiments. Human TG2, expressed and purified as described previously [64] and stored in aqueous buffer at −80 °C, was first thawed and diluted to a working concentration of 50 mU/mL in the buffer. The diluted enzyme solution was stored on ice. The appropriate amount of distilled water, 125 µL of aqueous buffer (111 mM MOPS, 15.6 mM CaCl2, pH 6.9), the desired amount of inhibitor (from a DMSO stock solution), and 5.0 µL of AL5 (from a 5.56 mM DMSO solution) were added in that order to 7 Eppendorf tubes to reach final volumes of 250 µL per tube. After mixing thoroughly, 180 µL of the mixture from each tube was transferred to plate wells. To initiate the assay, 20 µL of the diluted TG2 solution was added using a multichannel pipette to each well except the negative control, which was treated with water. The final reaction conditions were pH 6.9, 50.0 mM MOPS, 7.0 mM CaCl2, 100 µM AL5, and 5 mU/mL TG2. The range of inhibitor concentrations tested spanned from 1 to 350 µM, and levels of DMSO were kept below 10% v/v. Data collection was then initiated after mixing by quick aspiration, and the assay was allowed to run for 20 min.
Determination of Type of Inhibition
The type of inhibition for both TG2 and FXIIIa was primarily determined by visual inspection of kinetic traces. Inhibitors producing fluorescence-time or absorbance-time kinetic curves that either reached plateaus at the same level as that of the positive control or did not plateau at all throughout the time course of the experiment, even at high inhibitor concentrations, were deemed to be operating through a reversible mechanism. This assumption was tested and validated with inhibitor 47 through an A101 substrate spike experiment. Additional A101 (5 µL of 4000 µM stock in DMSO) was added to enzymatic reactions lacking inhibitor (positive control) and pre-treated with inhibitor (50, 100, 150, 200 µM) after the initial reactions were complete and fluorescence plateaus had been reached. Data collection at 37 °C was re-commenced immediately after the addition, and fluorescence was monitored over time as previously described. On the other hand, inhibitors producing kinetic curves in which earlier and lower plateaus in fluorescence or absorbance were observed as inhibitor concentration was increased were assumed to be operating through irreversible time-dependent covalent inactivation. This assumption was tested and validated with inhibitor 23 through an A101 substrate spike experiment in which an additional 5 µL of A101 substrate stock solution (4000 µM in DMSO) was added to the positive control and to the highest inhibitor concentration (200 µM) reaction. In the case of the small molecule inhibitors, either an A101 spike experiment was conducted as outlined above, or an inhibitor spike experiment was conducted with 79, in which the substrate spike was replaced with the highest concentration of inhibitor being assayed. Analysis was carried out as described above.
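The visual criteria above can be illustrated with simulated progress curves. The sketch below uses arbitrary, made-up rate parameters (it is not assay data): a reversible inhibitor merely slows the approach to the control plateau, while a covalent inactivator plateaus earlier and lower as k_obs grows with inhibitor concentration.

```python
import numpy as np

t = np.linspace(0.0, 60.0, 301)          # min
S0 = 1.0                                 # normalised substrate = max signal

def reversible(v0):
    # first-order substrate depletion at a reduced rate: same final plateau
    return S0 * (1.0 - np.exp(-v0 * t))

def irreversible(v0, k_obs):
    # the rate itself decays as exp(-k_obs * t); the signal approaches
    # v0 / k_obs, which can sit well below the full-conversion plateau
    return np.minimum(S0, (v0 / k_obs) * (1.0 - np.exp(-k_obs * t)))

print(f"reversible trace plateau: {reversible(0.2)[-1]:.2f} (control level)")
for k_obs in (0.1, 0.3, 0.9):   # k_obs grows with [I] for a covalent inhibitor
    print(f"k_obs = {k_obs:.1f} /min -> plateau at {irreversible(0.05, k_obs)[-1]:.2f}")
```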
Analysis of In Vitro Kinetic Data
Data analysis for the determination of kinetic inhibition parameters for both FXIIIa and TG2 was performed using Microsoft Excel and GraphPad Prism. Absorbance-time and fluorescence-time plots from all positive controls and inhibitor treatments were corrected for background substrate hydrolysis through subtraction of the negative control and were then set to an initial fluorescence or absorbance of zero through subtraction of y-intercepts at time zero from all points.
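In code, this correction amounts to two subtractions; a minimal numpy version is sketched below (the function name, array layout, and example numbers are ours).

```python
import numpy as np

def correct(trace, negative_control):
    # subtract background substrate hydrolysis, then zero the signal at t = 0
    corrected = np.asarray(trace, dtype=float) - np.asarray(negative_control, dtype=float)
    return corrected - corrected[0]

raw = np.array([0.30, 0.55, 0.78, 0.98])   # illustrative raw trace
bg  = np.array([0.10, 0.12, 0.14, 0.16])   # illustrative negative control
print(correct(raw, bg))                    # -> [0.   0.23 0.44 0.62]
```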
For inhibitors displaying reversible competitive kinetics, the first 10% conversion, taken as the time point at which the relative fluorescence or absorbance reached 10% of the maximum at the plateau, was used to obtain the initial rates from linear regressions of fluorescence or absorbance over time. The means and standard deviations from duplicate trials of the ratios of uninhibited (positive control) to inhibited initial rates (ν_un/ν_in) were then plotted against the corrected inhibitor concentrations ([I]/α), producing a normalised Dixon plot defined by the relationship

ν_un/ν_in = 1 + ([I]/α)/K_i.

Inhibitor concentrations were corrected by division by the parameter α, defined as the constant correcting for competition between the substrate and inhibitor for the enzyme's active site. Based on the known K_M of 8 µM for A101 [42,46] and its final concentration of 100 µM under these reaction conditions, α for FXIIIa kinetics was determined to be 13.5 through the equation

α = 1 + [S]/K_M.

The corresponding α value for TG2 inhibition kinetics was determined to be 11, based on the AL5 K_M of 10 µM [42,46] and its 100 µM concentration. Apparent K_i values for inhibitor potency against FXIIIa or TG2 were taken from the reciprocal slopes of linear regressions forced through (0, 1) of each normalised Dixon plot and are reported with their graphically determined standard errors.
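Both the α correction and the forced-through-(0, 1) Dixon fit are easy to make concrete. In the sketch below, the α values follow from the K_M values quoted above, while the rate ratios are synthetic placeholders rather than measured data.

```python
import numpy as np

# Competition correction factor, alpha = 1 + [S]/K_M.
alpha_fxiiia = 1 + 100 / 8    # A101: K_M = 8 µM at [S] = 100 µM  -> 13.5
alpha_tg2    = 1 + 100 / 10   # AL5:  K_M = 10 µM at [S] = 100 µM -> 11.0

# Normalised Dixon plot: v_un/v_in = 1 + ([I]/alpha)/K_i, i.e. a line of
# slope 1/K_i forced through (0, 1). Synthetic rate ratios for illustration:
I = np.array([25.0, 50.0, 100.0, 200.0])       # µM
ratio = np.array([1.09, 1.19, 1.37, 1.74])     # v_un/v_in (illustrative)
x = I / alpha_fxiiia
# least-squares slope of (ratio - 1) vs x, forced through the origin:
slope = np.dot(x, ratio - 1) / np.dot(x, x)
print(f"apparent K_i = {1 / slope:.1f} µM")    # -> ~20 µM for these numbers
```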
Inhibitors deemed to be operating irreversibly were evaluated under Kitz & Wilson conditions [58,59]. Observed pseudo-first-order rate constants (k_obs) for enzyme inactivation were extracted from the fluorescence-time or absorbance-time data at each inhibitor concentration through non-linear fitting using a mono-exponential model of the form F(t) = F_max(1 − e^(−k_obs·t)). The k_obs values from duplicate trials were then averaged and used to determine the first-order half-lives. All fluorescence-time and absorbance-time data sets were treated over 3 half-lives, and the observed rate constants were calculated as described. The means and standard deviations of the observed rate constants were then plotted against the inhibitor concentrations after correction by division by the appropriate α parameter, producing hyperbolic saturation plots. For inhibitors where a clear plateau in the observed rate constant was observed at high inhibitor concentrations in the saturation plot, non-linear hyperbolic fitting was performed according to Equation (4), k_obs = k_inact([I]/α)/(K_I + [I]/α).
This fitting was used to determine k_inact and K_I values, along with their standard errors. These extracted values were then used in the calculation of k_inact/K_I, with errors carried forward appropriately. For inhibitors that did not reach an obvious saturation in the observed rate constant, a linear regression (forced through the origin) was performed on the plots of k_obs vs. [I]/α using the lowest inhibitor concentrations. The ratio k_inact/K_I was in these cases taken from the slope of the linear fit and its corresponding standard error.
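A minimal version of this two-stage fitting workflow is sketched below with synthetic data; the mono-exponential and the Equation (4) hyperbola are encoded directly, and scipy's curve_fit stands in for whatever fitting software was actually used.

```python
import numpy as np
from scipy.optimize import curve_fit

def mono_exp(t, F_max, k_obs):
    # plateau-forming mono-exponential used to extract k_obs per trace
    return F_max * (1.0 - np.exp(-k_obs * t))

def kitz_wilson(I_corr, k_inact, K_I):
    # Equation (4): hyperbolic saturation of k_obs in [I]/alpha
    return k_inact * I_corr / (K_I + I_corr)

# stage 1: recover k_obs from a synthetic corrected trace
t = np.linspace(0.0, 30.0, 61)                       # min
(_, k_obs_fit), _ = curve_fit(mono_exp, t, mono_exp(t, 1.0, 0.25), p0=(0.5, 0.1))
print(f"recovered k_obs = {k_obs_fit:.3f} /min")

# stage 2: k_inact and K_I from k_obs vs the alpha-corrected concentration
alpha = 13.5                                         # for 100 µM A101 (K_M = 8 µM)
I_corr = np.array([25.0, 50.0, 100.0, 200.0]) / alpha
k_obs = kitz_wilson(I_corr, 0.5, 5.0)                # illustrative "data", µM units
(k_inact, K_I), _ = curve_fit(kitz_wilson, I_corr, k_obs, p0=(0.1, 1.0))
print(f"k_inact = {k_inact:.2f} /min, K_I = {K_I:.2f} µM")
```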
Fluorescent Labelling of FXIIIa by SDS-PAGE
To a 1.5-mL Eppendorf tube was added 10 µg of thrombin-activated FXIIIa (T070, Zedira GmbH, Darmstadt, Germany) as a solution in 10 µL storage buffer (50 mM TRIS, 1 mM TCEP, 150 mM NaCl, pH 7.5). A 5-µL aliquot of FXIII kinetic assay buffer (111 mM TRIS, 16 mM CaCl2, 333 mM NaCl, pH 7.5) was added to the tube and gently vortexed. The rhodamine B-derivatized probe 93 (aka KM93), 15 µL of a 60 µM aqueous solution, was subsequently added to the tube, the tube was gently vortexed, and enzyme labelling was allowed to occur for 20 min at room temperature. A 30-µL aliquot of Bio-Rad 2X Laemmli Sample Buffer (5% β-mercaptoethanol) was added to the sample and the solution was boiled for 5 min at 100 °C to ensure denaturation. Once the sample had cooled, 30 µL was loaded into a well of a Bio-Rad Mini-PROTEAN TGX precast 4-20% acrylamide gel. A 10-µL aliquot of Bio-Rad Precision Plus Unstained Protein Standards was loaded and electrophoresis was performed at 120 V for 1 h. The gel was first visualised using a Bio-Rad ChemiDoc MP Imager for fluorescent bands (green epifluorescence illumination, 605/50 nm filter). The gel was then stained with Coomassie and visualised again.
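For orientation, the stoichiometry implied by this protocol can be back-calculated from the stated volumes; the ~80 kDa molecular weight is taken from the gel result above, and the computed 30 µM final probe concentration matches the value quoted earlier.

```python
# Back-of-envelope stoichiometry for the gel labelling reaction above.
mass_g = 10e-6                   # 10 µg thrombin-activated FXIIIa
MW     = 80_000.0                # ~80 kDa, from the Coomassie-stained band
V_L    = (10 + 5 + 15) * 1e-6    # final reaction volume: 30 µL in litres

enzyme_uM = mass_g / MW / V_L * 1e6    # mol/L -> µM
probe_uM  = 60.0 * 15 / 30             # 15 µL of 60 µM KM93 into 30 µL
print(f"FXIIIa ≈ {enzyme_uM:.1f} µM, KM93 = {probe_uM:.0f} µM "
      f"(~{probe_uM / enzyme_uM:.0f}-fold probe excess)")
```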
"year": 2023,
"sha1": "991ead1160e6877c89b5e76c69b83a2a13141b9f",
"oa_license": "CCBY",
"oa_url": null,
"oa_status": null,
"pdf_src": "PubMedCentral",
"pdf_hash": "57e1f02581202eb5c2e3d7bf05f53533f899790c",
"s2fieldsofstudy": [
"Chemistry",
"Biology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
The effect of interference on the trident process in a constant crossed field
We perform a complete calculation of electron-seeded pair-creation (the trident process) in a constant crossed electromagnetic background. Unlike earlier treatments, we include the interference between exchange diagrams. We find this exchange interference can be written as a contribution solely to the one-step process, and for small quantum nonlinearity parameter is of the same order as other one-step terms. We find the exchange interference further suppresses the one-step process in this parameter regime. Our findings further support the crucial assumption made in laser-plasma simulation codes that at high intensities, the trident process can be well-approximated by repeated iteration of the single-vertex subprocesses. The applicability of this assumption to higher-vertex processes has fundamental importance to the development of simulation capabilities.
When an electron propagates in an intense EM field, there is a finite probability that the radiation it produces will decay into an electron-positron pair. If the EM field is weak, in that the effect is perturbative in the charge-field interaction, it corresponds to the linear Breit-Wheeler process [1], where one photon from the background collides with the photon radiated by the electron to produce a pair. Although important in astrophysical contexts [2,3], this linear process has still to be measured in a terrestrial experiment [4]. If the laser pulse intensity is strong, in that all orders of the charge-field interaction must be included in calculations, the photon decay into a pair corresponds to the nonlinear Breit-Wheeler process. A quarter of a century after electron-seeded pair-creation was first calculated theoretically in constant magnetic [5] and crossed [6] backgrounds, the combination of nonlinear Compton scattering followed by the nonlinear Breit-Wheeler process was measured in the landmark E144 experiment performed at the Stanford Linear Accelerator Center (SLAC) [7,8]. The importance of this experiment to the laser strong-field QED community can be understood in light of continued interpretation and analysis of the E144 results in the literature [9][10][11]. In addition to also having astrophysical importance, a measurement of electron-seeded pair-creation in a terrestrial laser-particle collision would allow the study of nonperturbative quantum field theory. As the intensity of the laser pulse increases, for a fixed frequency and seed particle energy, the process moves from the perturbative, to the multi-photon and finally to a tunneling regime [9], in which dependency on the charge-field coupling takes a non-perturbative form.
To aid experimental design and analysis, there is an interest in including electron-seeded pair-creation in traditional plasma Particle-In-Cell (PIC) codes, using Monte Carlo techniques. Lowest order processes such as nonlinear Compton scattering [12,13] and photon-seeded pair-creation [14][15][16] are included in various simulation codes [17][18][19] and their combination in laser-driven electromagnetic pair-creation cascades is a topic of study [20][21][22][23][24][25][26][27][28]. Interest has also grown in including higher-order processes such as photon-photon scattering [29] in simulation, in which low-energy [30][31][32][33] and high-energy [34] (with respect to the electron rest mass) solvers are being implemented. However, a general framework for including second and higher-order processes is still under development. A key assumption of including quantum effects in classical PIC codes is the locally-constant-field approximation (LCFA). This assumes the formation region of the process is much smaller than the field inhomogeneity scale (typically, the wavelength), such that a good approximation is acquired by assuming the background to be "locally constant" [35,36] by using rates for a constant crossed field (CCF). However, a crucial issue to be addressed when going beyond single-vertex processes is the nature of interference between those channels where intermediate states remain on-shell and those where they remain virtual, as occurs in electron-seeded pair-creation [5,6,37] and double nonlinear Compton scattering [38][39][40][41]. (Reviews of laser-based strong-field QED can be found in [42][43][44][45][46].) Past understanding of the trident process in a CCF has been based on considering just the sum of probabilities of each of the exchange terms, whilst neglecting the "exchange interference" between these diagrams. Unless the seed electron's quantum nonlinearity parameter was very high, the "step-interference" occurring in calculating the probability of a single diagram between the purely one-step and two-step processes had the consequence that the one-step process was suppressed [37]. However, recent calculations of the full process in a plane wave pulse indicate that this step-interference can, in some parameter regime, be negligible [47]. The need for clarification of this point in a CCF background motivates the present study. Until now, the reason given for explicitly neglecting this exchange interference is the appearance of a rapidly-oscillating phase occurring at the level of the probability, which is absent in the probability of just a single diagram [37,48]. (We mention a recent analysis of the total trident process in a plane-wave background that appeared during preparation of this work, which discusses the locally constant field limit [49].) In the current paper, we calculate the effect on the total and differential rate of the one-step process due to this exchange interference in order to make a final conclusion about the occurrence of the one-step process in a CCF. This is part of a much more general question, of how to correctly include off-shell processes in numerical codes simulating high-intensity laser-plasma interactions. Indeed, it has already been assumed by some simulation models [50] that one can include off-shell pair-creation channels whilst simultaneously assuming the background is locally constant.
The applicability of this approximation to higher-vertex processes is therefore of fundamental importance to the further development of simulation capabilities. (The interference effects also prevent one from using the Weizsäcker-Williams [51,52] approximation to include the off-shell contribution.) The paper is organised as follows. In Sec. I the objects to be calculated, terminology and notation are defined. Sec. II gives an overview of the derivation, highlighting parts specific to the exchange-interference terms. Sec. III gives the expressions for the total probability of exchange-interference that are to be numerically evaluated. In Sec. IV, the differential rates are presented and some notes are made on the numerical integration strategy used. In Sec. V, the total one-step probability is presented and low-χ behaviour highlighted. In Sec. VI, the implication of the results and the weak-field limit are discussed, and in Sec. VII, the paper is concluded. Appendix A contains a more detailed derivation of the exchange interference contribution and Appendix B gives some specific formulas for expressions used in the main text.
I. INTRODUCTION AND DEFINITIONS
The trident process can apply to both positron and electron seeds of pair-creation. Since the total rates in a CCF are identical for a positron, in this paper we just consider electron-seeded pair-creation in a laser background, e⁻ → e⁻ + (e⁺e⁻), which is the leading-order pair-creation process in dressed vertices. By "dressed vertex" we refer to vertices attached to fermionic states in a classical electromagnetic (EM) plane-wave background, described by well-known "Volkov states" [53]. We use electron Volkov states

ψ_{r,p}(x) = [1 + /κ /a(ϕ)/(2 κ·p)] (u_r(p)/√(2p⁰V)) e^{iS(p)},    (1)

and positron Volkov states

ψ⁺_{r,p}(x) = [1 − /κ /a(ϕ)/(2 κ·p)] (v_r(p)/√(2p⁰V)) e^{iS(−p)},    (2)

in a plane wave of scaled vector potential a_µ(ϕ) = eA_µ(ϕ) (e denotes the electron charge) with phase ϕ = κ·x (κ·κ = κ·a = 0), where the semiclassical action S(p) of an electron is given by

S(p) = −p·x − ∫^{ϕ} dϕ′ [p·a(ϕ′)/(κ·p) − a(ϕ′)·a(ϕ′)/(2 κ·p)],

and the Feynman slash notation /κ = γ^µ κ_µ has been employed, where γ^µ are the gamma matrices and u_r (v_r) are free-electron (positron) spinors satisfying (/p − m)u_r(p) = 0 and (/p + m)v_r(p) = 0. Further symbols are defined in Tab. I. Since there are two identical outgoing particles, electron-seeded pair-creation comprises two decay channels at the amplitude level, as shown in Fig. 1, with a relative minus sign due to exchange symmetry. We write the scattering amplitude S_fi as

S_fi = →S_fi − ←S_fi,

where →S_fi and ←S_fi denote the direct and exchange channels (we suppress the spin labels in the definitions Eqs. (1) and (2) and hereafter use the notation ψ_j to signify a fermion with momentum p_j), and the photon propagator is

D^{µν}(k) = −i [g^{µν} − (1 − λ) k^µ k^ν/k²] / (k² + i0),

where λ is a gauge-fixing parameter, and we take g^{µν} = diag(1, −1, −1, −1) to be the metric.
Although all quantities can be written in a covariant way, we choose the standard "lab frame" depicted in Fig. 2, aligning the Cartesian axes with the background electric field, magnetic field and wavevector, respectively.
Figure 1. The two electron-seeded pair-creation reaction channels.
To form the probability, the scattering amplitude must be mod-squared:

|S_fi|² = |→S_fi|² + |←S_fi|² − →S_fi(←S_fi)* − ←S_fi(→S_fi)*.    (8)

The final two terms on the right-hand side of Eq. (8) comprise what we refer to as the exchange interference, i.e. interference at the probability level between the direct diagram described by →S_fi and the exchange diagram ←S_fi. The exchange interference has hitherto not been calculated directly in a CCF (but see the recent locally constant field limit in [49]).
As the background can contribute energy and momentum to each vertex, processes with one dressed vertex are permitted, unlike in standard perturbative QED. We refer to the chain of processes e⁻ → e⁻ + γ*, followed by γ* → e⁻ + e⁺, where γ* refers to a real, on-shell photon, as the two-step process, which is distinguished from the one-step process, in which the intermediate photon remains virtual. By cutting the photon propagator with the Sokhotsky-Plemelj formula [54],

1/(x + i0⁺) = P(1/x) − iπδ(x),

where P refers to taking the principal value of the corresponding integral, we are able to write the direct amplitude as a sum of one-step and two-step contributions [37], where →S^(j) scales as L^j for background field spatiotemporal extent L, and →S^(1)_s is the interference between one-step and two-step channels, which we refer to as total step interference. It is unlikely that this decomposition can be performed for a general pulsed plane-wave background, but for a CCF, it appears to be unambiguous. The separation in Eq. (9) is also independent of gauge choice. At the level of the fermion trace, we found the dependence on the gauge-fixing terms in Eq. (7) completely disappeared. Therefore, the decomposition in Eq. (10) is also gauge-invariant.

TABLE I. Definitions of symbols.
p_1 - seed electron momentum
p_2, p_3 - outgoing electron momenta
p_4 - positron momentum
κ - background momentum
k - photon momentum
a = eA - scaled vector potential
ε - primary background polarisation vector
ε̃ - secondary background polarisation vector
ϕ_x = κ·x - external-field phase at x
x - position of first vertex (NLC)
y - position of second vertex (pair-creation)
ξ - defined for a constant crossed field by the electric field amplitude mξκ⁰/√α
ξ̄ - intensity parameter in a plane wave

Comparing Eq. (8) and Eq. (10), we see that contributions to the total probability can be split into i) no interference; ii) exchange-interference; iii) step-interference; and iv) step-and-exchange-interference. To the best of our knowledge, only the step-interference terms have been evaluated directly in a CCF (but the constant field limit of the plane-wave calculation has recently been taken in [49]): first by Baier, Katkov and Strakhovenko [5] and Ritus [6] by cutting the two-loop diagram in Fig. 3a, and more recently in a direct calculation [37]. However, a second two-loop diagram, shown in Fig. 3b, must also be cut in order that the exchange and step-exchange interference terms are included, and this has not yet been calculated. (It should be mentioned that Fig. 3a also contains a one-loop correction to one-photon nonlinear Compton scattering (NLC) and Fig. 3b contains two-photon NLC.) The goal of the current paper is to investigate how the total probability for electron-seeded pair-creation P is related to the purely one-step probability P^(1) and the purely two-step probability P^(2) by evaluating the interference contribution X:

P = P^(1) + P^(2) + X.
Comparing Eq. (8) and Eq. (10), we see that contributions to the total probability can be split into i) no interference; ii) exchange-interference; iii) step-interference and iv) step-and-exchange-interference. To the best of our knowledge, only the step-interference terms have been evaluated directly in a CCF (but the constant field limit of the plane-wave calculation has recently been taken in [49]). First by Baier, Katkov and Strakhovenko [5] and Ritus [6] by cutting the two-loop diagram in Fig. 3a, as well as more recently in a direct calculation [37]. However, a second, two-loop diagram must also be cut in order that the exchange and step-exchange interference terms are included, shown in Fig. 3b, which has not yet been calculated. (It should be mentioned, Fig. 3a also contains a one-loop correction to one-photon nonlinear Compton scattering (NLC) and Fig. 3b contains two-photon NLC.) The goal of the current paper is to investigate how the total probability for electron-seeded pair-creation P is related to the purely one-step probability P (1) and a purely two-step probability P (2) by evaluating the interference contribution X: Two covariant and gauge-invariant parameters will be particularly important in quantifying the total probability. First, the classical nonlinearity parameter ξ, which for a plane wave vector potential A µ with pulse envelope g(ϕ), can be written eA µ = mξε µ g(ϕ), for ε · ε = −1, and is sometimes [55] referred to as "a 0 " or the "intensity parameter". The parameter ξ can be defined through the electric field strength E = (mξκ 0 / √ α)g ′ (ϕ) for |g ′ (ϕ)| ≤ 1. (Some definitions use the root-meansquare integrated value instead of the peak value as given here.) Second, the quantum nonlinearity parameter for the seed electron χ 1 , where, for a particle with momentum p j , χ j = ξ (p j · κ)/m 2 [42].
II. EXCHANGE INTERFERENCE DERIVATION OUTLINE
The derivation is based on the Nikishov-Ritus method of performing phase integrals at the level of the amplitude. A more detailed version can be found in Appendix A.
A. Mod-square of scattering amplitude
We begin by calculating →S_fi from Eq. (5) (the calculation of ←S_fi follows analogously), and reproduce some of the main steps of [37]. First, one notices: where →f^µ_x, →f^ν_y are some spinor-valued functions that depend only on the external-field phase at x and y. Fourier transforming: and inserting into Eq. (5), one arrives at: where r* = δp²/(2κ·δp) is related to the momentum contributed by the field at the first vertex (this contribution being r*κ if the photon is produced on-shell, i.e. for NLC), ∆P = ∆p + (r + s)κ is the total change in momentum, ∆p = p_1 − (p_2 + p_3 + p_4), and δp = p_1 − p_2 is the change in momentum at the first vertex. We refer to →Γ and →∆ as "vertex functions" for the NLC and pair-creation vertices respectively. When this matrix element is squared, one has to deal with: which can be written as: and simplified to (more details can be found in Appendix A): for spatial three-volume V. By evaluating the s and s′ integrals, one finds: where the substitution t = r + r* has been made. By analogy, we see:
B. Previously proposed justification for neglecting exchange interference in a CCF
Specifying the plane-wave background to a CCF by choosing: where ε·κ = 0 and ε·ε = −1, we see that the nonlinear phase of the Volkov wavefunctions takes the form of a cubic polynomial in ϕ_{x,y}. Bearing in mind that the pre-exponents in →f^µ_{x,y}(ϕ_{x,y}) are quadratic polynomials in a(ϕ_{x,y}) and hence in ϕ_{x,y}, we note that each vertex function can be written as a sum of integrals of the form: for n ∈ {0, 1, 2}. Here we recall results from [37]: and c_2, c_3 are obtained from Eq. (26) with the replacement p_2 ⇆ p_3. How a process scales with field extent in a CCF can be extracted with the Nikishov-Ritus method by calculating how many outgoing momentum integrations parallel to the (electric) field the integrand is independent of. Since the complicated phase in the exchange interference terms depends nonlinearly on these outgoing momentum integrations parallel to the field, it was argued in [37] that the contribution is subleading and can be neglected. In [6] it was argued that "at high energies the main contribution is made by . . . Fig. 3a since the exchange effects described by the diagram Fig. 3b are very small". We will show that exchange effects are present at the one-step level and can be just as large as the one-step terms originating from cutting Fig. 3a, albeit for low values of χ ≲ 0.5, which, however, are the most accessible in future experiments (the SLAC E144 experiment reached χ ≲ 0.3 [7]).
C. Justification for including exchange interference in a CCF
Let us define the total probability P as: where the prefactor of 1/4 comprises 1/2 from averaging over initial electron spins and 1/2 to take into account identical final particles.
At the amplitude level, the integration over the phase at each vertex, Eq. (19), has stationary points at: (z is given in Eq. (23)). Since the contribution from the Airy functions is strongly peaked around z = 0, one argues that the two stationary points effectively merge to a single stationary point on the real axis at ϕ = ϕ*. This is the part of the external-field phase where the process at that vertex is assumed to take place. To illustrate this point, let us consider Eq. (15) and its exponent of the form Eq. (24). Then: where ϕ*_± = ϕ*_x ± ϕ*_y and →F(t) has been defined using Eq. (15) to simplify discussion of the integration. There is a linear one-to-one map between the stationary phase of a vertex and the component of one (either can be chosen) of the emitted particle's momentum at that vertex, parallel to the background vector potential. For example, here: As the pre-exponent is independent of p_2·ε and p_3·ε, one can write: Using the decomposition in Eq. (9), one arrives at: where: where h.c. refers to taking the Hermitian conjugate and θ(·) is the Heaviside step function. These refer to the two-step, one-step and step-interference terms respectively. The contribution from |←S_fi|² is analogous. Crucially, both parts of the photon propagator (on-shell and off-shell) contribute to the two-step term, providing the causality-preserving θ-function that ensures pair-creation from a photon occurs after NLC production of that photon.
For the exchange-interference term, let us write Eq. (17) as: From Eq. (28), we notice that the difference of stationary phases is a key quantity. For the exchange term, however, the differences of phase for each of the two diagrams become mixed, and we note: where the remaining terms in . . . originate from the 2c₂³/27c₃² terms and the r* terms in Eq. (23), and are independent of the virtuality variables t and t′. A key observation is that ϕ*₋ is almost antisymmetric in the exchange p_2 ⇆ p_3, apart from the denominator. In other words, by writing: we see that the factor ψ is common to both exchange terms. Then one can make the substitution: Using this substitution, from Eq. (25) we can see that the form of the exponent will be: where γ_1 and γ_3 are coefficients independent of ϕ*₊. For context, they are given by: In other words, this substitution casts the complicated nonlinear exponent in the exchange interference term, η(→, ←), in exactly the form of an Airy exponent, with one integration direction dropping out and disappearing from the integrand. So the exchange interference only ostensibly depends on p_2·ε and p_3·ε independently, but there is in fact a linear combination of these variables on which the integration does not depend. Using the decomposition in Eq. (9), one arrives at: where: The Scorer function Gi(·) (an inhomogeneous Airy function [56]) occurs when evaluating the integral

∫₀^∞ sin(t³/3 + xt) dt = π Gi(x),

and also occurs in the calculation of the one-loop polarisation operator in a CCF [57]. Eq. (36) demonstrates the difference between the exchange and non-exchange interference, namely, the appearance of a cubic term in the exponential, generating an extra Airy function that reflects the exchange interference. We see, therefore, that the contribution from the exchange interference is divergent with the same factor (∫dϕ*₊) as the one-step channel from previous studies of the trident process in a CCF (correcting the suggestion in Eq. (31) of [37] of "zero-step" behaviour). The imaginary part of the integration is exactly cancelled by the Hermitian conjugate of this exchange term, but we have written it here for completeness. The same steps that led to Eq. (30) yield also in this case a two-step exchange-interference term: which, however, has zero support and so does not contribute to the probability. The reason for this is somewhat intuitive. In Fig. 1a, the vertex with p_3 (pair-creation) must occur after the vertex with p_2 (nonlinear Compton scattering), but for Fig. 1b this is reversed. The contribution from having both at the same time is identically zero.
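As a quick numerical sanity check on the Scorer function introduced here, the snippet below verifies (with mpmath) that Gi satisfies the inhomogeneous Airy equation Gi'' − xGi = −1/π, which is what distinguishes it from the homogeneous Ai appearing in the non-exchange terms.

```python
import mpmath as mp

# Gi is the inhomogeneous Airy function: Gi''(x) - x*Gi(x) = -1/pi,
# whereas Ai solves the homogeneous equation Ai''(x) - x*Ai(x) = 0.
for x in (-2.0, 0.0, 1.5):
    residual = mp.diff(mp.scorergi, x, 2) - x * mp.scorergi(x)
    print(f"x = {x:+.1f}: Gi'' - x Gi = {mp.nstr(residual, 6)} "
          f"(expect {mp.nstr(-1 / mp.pi, 6)})")
```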
We see therefore, that the contribution from the exchange interference is divergent with the same factor ( d ϕ * + ) as the one-step channel from previous studies of the trident process in a CCF (correcting the suggestion in Eq. (31) of [37] of "zero-step" behaviour). The imaginary part of the integration is exactly cancelled by the Hermitian conjugate of this exchange term, but we have written it here for completeness. The same steps that led to Eq. (30) yield also in this case a two-step exchange-interference term: which, however, has zero support and so does not contribute to the probability. The reason for this is somehow intuitive. In Fig. 1a, the vertex with p 3 (pair-creation) must occur after the vertex with p 2 (nonlinear Compton scattering), but for Fig. 1b this is reversed. The contribution from having both at the same time is identically zero.
A. Total exchange interference contribution
The explicit expressions for the exchange interference contribution to the total probability are lengthy and more specific formulas are relegated to Appendix B.
Here we give the general form.
Let us write the probability from Eq. (27) in terms of the interference decomposition: However, at the same time, we can use the splitting of the total probability into different steps (Eq. (11)) to write: To aid discussion, and to make a comparison with the literature, it will be useful to separate terms in the interference: which refer to the step-interference, exchange-interference and step-and-exchange-interference terms respectively. Then the "one-step" results in [5,6,37] refer to P^(1) + X_s, and the new results from this work lead to the total exchange-interference X_e + X_se. Therefore we have: Then, from each of the four phase integrals (over ϕ_x, ϕ_y, ϕ_x′, ϕ_y′), we have an Airy function, so we note that the form of f(t, t′) is: where ⇄c_j are functions of the particle momenta and A_{l,j} is either Ai or Ai′ (the specific combinations are given in Eq. (B1)). Before defining the functions z_j, it turns out that, for the purposes of evaluating the integral, it is useful to rescale the virtuality variables t and t′ in the following way (so that we may compare to previous results in [37]): This ensures the integrand depends on ξ and κ⁰ only as the product ξκ⁰, so that the constant-field limit of κ⁰ → 0 is well-defined. Then, in the lab system (one can write expressions in a covariant way, as shown in Eq. (B2)), we have, defining p_jy = −p_j·ε for j ∈ {1, 2, 3, 4} and recalling: (The arguments z_3(v) and z_4(v) are acquired from z_1 and z_2 respectively by making the substitution p_2 ⇆ p_3.) The arguments z_j of the Airy functions are identical to those in the non-exchange terms, with the first two Airy function arguments here being identical to the two different arguments in P^(1)(→, →) (identical in form to [37]) and the second two being from P^(1)(←, ←). Eq. (43) has been written using →y_γ and →y_e as they are exactly the Airy-function arguments for NLC and pair-creation from the →S_fi term. We note two points: i) the two-step limit is immediately apparent: if z_3 and z_4 were replaced with z_1 and z_2 respectively, and v = 0 were set, then the integration in p_2y and p_3y could be easily performed and the resulting Airy functions would have arguments exactly equal to those for NLC and pair-creation; ii) it can be shown that v = (k²/m²)(χ_1/→χ_k), which means that z_1(v) is exactly the form of NLC emitting an off-shell photon [58].
The exchange interference terms can then be written: where: and for brevity of notation we defined: We note the positive coefficient of the integrals. This derives from the integration at the probability level of the nonlinear phases particular to the exchange-interference term. Although the coefficient from this integration is negative (specifically, the minus sign from Eq. (34) that occurs premultiplying the integrals in e.g. Eq. (35)), the exchange probability acquires another negative sign from the definition of exchange interference (e.g. in Eq. (8)). We also note the proportionality to ξ∫dϕ*₊. This term is divergent because the integration is unbounded. However, this allows one to define a rate for the expression by dividing through by this factor, which is used in calculations in the LCFA.
IV. NUMERICAL EVALUATION AND DIFFERENTIAL RATES
The numerical evaluation of the exchange probability Eq. (45) involves at least one integral in a virtuality variable (v or v′), an integration over the remaining transverse outgoing momenta (p_2·ε, p_3·ε) and an integration over minus-component momenta in (χ_2, χ_3) for the scattered and created electrons. Different strategies were used for each of these three types of integrals, which are summarised here.
A. Transverse momenta integrals
Since the transverse momenta are unbounded, we make the conformal transformation p_{2,3}·ε/m → tan u_{2,3}, so that: This works well because the parts of the Airy arguments containing p_{2,3}·ε are always positive, which makes the integration smooth. When all other variables are integrated out, the integrand also does not change sign.
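The transformation can be tested on any rapidly decaying integrand; the sketch below applies it to a Gaussian, for which the exact answer is √π (the grid size is arbitrary).

```python
import numpy as np

# Map the unbounded momentum integral onto a finite interval: with
# p = tan(u), dp = du / cos(u)^2, so the infinite p-range becomes
# u in (-pi/2, pi/2). Checked on a Gaussian, whose integral is sqrt(pi).
u = np.linspace(-np.pi / 2 + 1e-6, np.pi / 2 - 1e-6, 20001)
p = np.tan(u)
f = np.exp(-p**2) / np.cos(u) ** 2        # f(tan u) times the Jacobian
integral = float(((f[:-1] + f[1:]) / 2 * np.diff(u)).sum())   # trapezoid rule
print(integral, np.sqrt(np.pi))           # both ≈ 1.7724539
```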
For low values of the seed-particle χ-parameter, χ_1 ≲ 1, the integrand is centred at the origin and symmetric along p_2·ε = ±p_3·ε. As χ_1 is increased, the shape changes slightly, but the extrema remain within |u_{2,3}| ≲ 1.
In Fig. 4, we have plotted the differential rate of the one-step contribution in the emitted electrons' remaining transverse momentum components. These are significant for two reasons: i) even when the total probability is negative, there are regions of phase space for the one-step process that are positive, as shown for χ_1 = 10 in Fig. 4; ii) in these positive regions of phase space, the one-step process integrals can be significantly larger than the two-step process, again shown for χ_1 = 10 in Fig. 4. The possible implications of using the transverse momentum distribution for measuring the one-step process in experiment were investigated and commented on in [37].
B. Minus-component momenta integrals

The integration over the minus-component momenta (χ_2, χ_3) is straightforward to discretise. For v, v′ = 0, all Airy arguments are positive, so the integrand in (a, b) is smooth. If v, v′ ≠ 0, then for every Airy argument with a negative coefficient multiplying v or v′, there is a symmetrically opposite one with a positive coefficient multiplying v or v′. This essentially prevents any nonlinear oscillations arising in the (a, b) integration plane when v and v′ are held constant. When χ_1 ≪ 1, the integration region for all terms R^(1), X_s, X_se, X_e becomes peaked around χ_2 = χ_3 = χ_4 = χ_1/3. As χ_1 is increased, positive regions appear around χ_2 → χ_1 and χ_3 → χ_1 in all terms. In Figs. 5 and 6, we plot the triangular region given by the Mandelstam-like variables: for χ_2, χ_3 ∈ [0, χ_1], choosing the physical region u < 0, and note that s + t + u = 2. In the triangular plots, the horizontal axis, the diagonal axis with positive gradient and the diagonal axis with negative gradient correspond to the contours u = 0, s = 2 and t = 2 respectively. The symmetry around the line χ_2 = χ_3 is evident from the plots.

FIG. 5: Plots of the differential rates ∂R^(1)/∂χ_2∂χ_3 and ∂X_s/∂χ_2∂χ_3 of the non-exchange parts of the "one-step" contribution to the trident process in a CCF for χ_1 = 1, 10 (left-to-right).
C. Virtuality integrals
We recall the asymptotic behaviour of the Airy functions [56] as x → ∞:

Ai(x) ~ e^(−(2/3)x^(3/2)) / (2√π x^(1/4)),  Ai(−x) ~ sin((2/3)x^(3/2) + π/4) / (√π x^(1/4)).

The non-exchange probability is an integration over sums of products of four homogeneous Airy functions, which are highly oscillating and decay only slowly for negative argument. However, as can be seen from the arguments of the Airy functions given in Eq. (43), each negative argument is balanced by a positive one, ensuring the integrands are relatively well-behaved in the virtuality variables v and v′. One major difference in the exchange probability integrands is the appearance of an extra Airy function in v and v′. When this Airy function multiplies the zero-virtuality F_j(0, 0) terms in Eq. (45), it is not balanced by decay in v or v′ from another function (for example, the amplitude of Gi(−x) ~ x^(−1/4) for large argument [60]). Therefore the integration over the virtuality variables in the exchange interference oscillates nonlinearly and decays only slowly.
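These asymptotic forms are easy to check against a library implementation; the snippet below compares scipy's Ai(x) with the decaying asymptote for a few arguments.

```python
import numpy as np
from scipy.special import airy

# Large-argument check: Ai(x) ~ exp(-(2/3) x^{3/2}) / (2 sqrt(pi) x^{1/4}),
# which is why each positive Airy argument tames an oscillatory negative
# one in the virtuality integrands.
for x in (2.0, 5.0, 10.0):
    ai = airy(x)[0]   # scipy's airy() returns the tuple (Ai, Ai', Bi, Bi')
    asym = np.exp(-2.0 / 3.0 * x**1.5) / (2.0 * np.sqrt(np.pi) * x**0.25)
    print(f"x = {x:4.1f}:  Ai = {ai:.3e},  asymptotic = {asym:.3e}")
```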
The differential rate dX_se(v)/dv is plotted in Fig. 7. Although we see some nontrivial oscillation for negative argument, at all values of χ_1 the asymptotic 1/v² tails are important in the convergence of the integral and a larger range of virtuality v must be integrated over.
A particular feature of the integration over the two virtuality variables in the X_e calculation is that the oscillation due to exchange interference depends on the sum of virtualities v + v′. Rotating the v, v′ integration plane to a dependency on v_± = v ± v′, half of the integration plane is characterised by a nonlinear oscillation, as demonstrated in Fig. 7. The numerical integration for this term was carried out adaptively, increasing the density of points and the interval of integration until the result converged.
V. TOTAL ONE-STEP PROBABILITY
In plotting the total one-step probability, it is instructive to study the individual contributions, as shown in Fig. 8. As reported in [5] and [6], the total non-exchange one-step probability becomes negative for small enough χ_1. (This is not a contradiction, since the two-step probability contains a divergent multiplicative factor and is always positive.) In [37] it was calculated that for χ_1 ≲ 20, the "one-step" terms were negative. We find that the contribution from exchange terms does not change this conclusion. In the region where the one-step process was negative, the exchange terms bring more negativity, whereas in the region where the one-step process was positive, they contribute more positivity (but are much suppressed in this regime of high χ_1). In general, for small seed-particle χ-parameter, the exchange-interference terms are as large as the non-exchange-interference terms, whereas this interference then drops off considerably as χ_1 is increased above 0.1 (where NLC becomes probable).
A. Low-field behaviour
In the limit of low field, the trident process in a plane wave should be well-approximated by the one-step process, because the two-step process would be of a higher perturbative order in the expansion parameter ξ. One may pose the question whether this behaviour can be seen in the CCF results. Suppose we write the divergent multiplicative factors as: where Ê = E/E_cr is a dimensionless field strength and E_cr = m²/√α is the so-called "Schwinger limit", and where L_± are phase lengths normalised by the Compton phase length: The L_± factors are formally divergent, but only insofar as the process can occur anywhere in the constant field.
The scaling of the various contributions to the rate can then be written: and we define the rate per unit phase formation length R = P/ÊL_+. From [5] we know: In other words, the step-interference term R_s dominates the one-step part of electron-seeded pair-creation R^(1) when exchange interference is neglected.
If one compares the exchange interference integrals Eq. (45) with the integrals from mod-squaring a single diagram Eq. (46), one notes the appearance of homogeneous (Ai(·)) and inhomogeneous (Gi(·)) Airy functions of the first-kind representing the exchange interference. We recall from Eq. (35) that the argument of these terms is: As χ 1 → 0, the seed particle's χ-factor is shared equally among the three product particles. So if one extracts the χ 1 -dependency by writing, χ 2 = χ 1 a and χ 3 = χ 1 b(1−a), then a → 1/3, b → 1/2 as χ 1 → 0 and the limit of χ 1 → 0 corresponds to large Airy argument. Using the result [56]: we can write: which is almost identical in form to the calculation of R (1) , apart that Eq. (53) is symmetric in the interchange of p 2 ⇆ p 3 . However, as χ 1 → 0, as already remarked, the integration region becomes centred around p 2 = p 3 , so that also in the integration for R (1) , the integration region becomes symmetric around p 2 = p 3 . Therefore, one expects Eq. (53) to be very similar in magnitude to R (1) as χ → 0. In Fig. 9, this is indeed what we find.
FIG. 9: As χ_1 → 0, the contribution from −X_e tends to that from the one-step process R^(1).

A further test of our expression was to derive X_s by rewriting X_se so that the arguments all come from the same diagram. This was achieved by replacing the exchange term trace: with the non-exchange term trace: replacing the dp_2x dp_3x → dϕ*₊ dϕ*₋ Jacobian by its non-exchange version, as well as another prefactor originating from the photon propagator. Otherwise, an identical
derivation was performed, and it was found that the integrand tended exactly to the one used in [37], which compared favourably with the asymptotic limits in [5,6]. The numerical integration of the integrand was then performed and found to agree with those from [37].
VI. DISCUSSION
In a CCF, the divergent factor that differentiates the one-step from the two-step process is ξϕ₋, where ϕ₋ = ϕ_y − ϕ_x is the difference in the external-field phase at which the electron is initially scattered and at which the pair is produced. It is divergent because the integral ∫ dϕ₋ θ(−ϕ₋) is unbounded. The parameter ξ is also poorly defined. Two common definitions are through the root-mean-square of the electric field (which is finite if the instantaneous value is taken, but defining ξ then requires invoking a vanishing constant-field frequency κ⁰), or through the vector potential eA = mξεg(ϕ) for |g(ϕ)| ≤ 1, which is not the case in a CCF since g(ϕ) = ϕ. However, the combinations that appear, ξϕ₋ and ξϕ₊, are suggestive, because they are independent of the limit κ⁰ → 0, were one to assign physical meaning to these parameters. The one-step process scales linearly and the two-step process scales quadratically with a divergent phase factor: Therefore, even though the result of the integration over final particle momenta may be negative and larger for the one-step process than for the two-step process, it is completely consistent with the total probability being a positive quantity, since the two-step process has an extra power of this (divergent) factor. Our finding, that even when one includes the interference between direct and exchange channels missing in earlier treatments [5,6,37] the probability for the one-step process remains negative for χ ≲ 20, has now been firmly established. A conservative interpretation of electron-seeded pair-creation in a CCF would be to completely neglect the one-step process, because formally it is infinitely less probable than the two-step process. To be consistent, this would imply that even when χ ≳ 20 and the probability of the one-step process is positive, it should also be neglected, which is, in itself, not problematic. However, the motivation for calculating the trident process in a CCF is the locally-constant-field approximation (LCFA) employed in numerical codes that simulate strong-field QED effects occurring in intense laser-plasma experiments. In the LCFA, it is assumed to be a good approximation to replace rates for nonlinear Compton scattering and photon-stimulated pair-creation in an arbitrary intense EM background with ξ ≫ 1 with those in a CCF, and then to integrate these constant-field processes over the arbitrary background. The LCFA has been shown to be applicable at large values of the ξ parameter for single-vertex processes [42], although the spectrum of nonlinear Compton scattering at small lightfront parameter κ·k/κ·p has recently been shown to be misrepresented [36,61]. To the best of our knowledge, the applicability of the LCFA to higher-vertex processes such as trident has not yet been formalised. A natural question to ask for higher-vertex processes is then: above what value of the intensity parameter can one safely use the LCFA?
In the context of the current work, it is manifestly clear that the low-ξ behaviour of the trident process cannot be reproduced by the standard LCFA prescription of replacing instantaneous rates by those in a CCF. (This is not a surprise, as the LCFA is not presumed to be accurate for low-ξ phenomena, but it is an issue we highlight here.) The issue is that in the low-field limit of the trident process, the one-step process must be dominant, as it is of a lower perturbative order in the intensity parameter of a plane wave, ξ̄ (we choose to distinguish the physical plane-wave parameter ξ̄ from the CCF parameter ξ), than the two-step process, as illustrated in Fig. 10. However, in a CCF, an expansion in small ξ∆ϕ would violate the assumption used in the derivation that this is a large parameter, and an expansion in small χ gives a negative value for the one-step process, as shown by the numerical results in Fig. 8.
Even without calculating the trident process in an oscillating background, one can ascertain approximate limits on when the one-step process will be dominant, by simply considering the kinematics of the intermediate photon. Suppose one regards the diagram given by →S_fi in a circularly-polarised monochromatic background. Then in this case, the photon momentum is given by: where s_y is the integer number of photons absorbed from the field at the pair-creation vertex, y (the ξ̄²-term contributes to the effective mass squared m*² = m²(1 + ξ̄²) of an electron in an oscillating background [62]). After some rearrangement, we find: where we recall →y_e > 0 is the argument of the Airy functions for pair-creation in a CCF. As is clear from Fig. 10, in the one-step process, pair-creation can take place even when s_y < 1, whereas the two-step process requires s_y ≥ 1. Suppose we look at the region of phase space where the pair is created on-axis and set →y_e = 1 (for an integral over the positive argument of homogeneous Airy functions such as Ai(x), most of the contribution comes from the range 0 < x < 1). Then from Eq. (54), we see that in order that s_y ≥ 1 in the two-step process, one requires at least ξ̄ ≥ 1. However, we also notice that in the one-step process, where k² > 0, the s_y < 1 channels are accessible when ξ̄ < 1, as plotted in Fig. 11. This seems to suggest that when ξ̄ < 1, the one-step process should dominate. This channel-closing behaviour that occurs in the weak-field regime is obviously beyond the LCFA, but of relevance to current parameter regimes available in experiment. (Channel-closing behaviour has been suggested as a mechanism to distinguish between the one-step and two-step processes in experiment [9].) In particular, in the SLAC E144 experiment, where the nonlinear Breit-Wheeler process was observed for the first time, ξ̄ peaked at around ξ̄ ≈ 0.36 [7].
VII. CONCLUSION
We have performed the first calculation of the exchange interference contribution to the trident process in a constant crossed field background, which had been neglected in previous analyses [5,6,37], thereby obtaining the complete probability. The total probability has been shown to split into a "two-step" part, which involves an integration over each of the subprocesses of nonlinear Compton scattering and photon-stimulated pair-creation, and a "one-step" part, which includes all contributions where the intermediate photon is off-shell. This split was found to be gauge-invariant and unambiguous. It was already known that the rate for the one-step part was negative when exchange interference is neglected, and we have shown that this conclusion remains unchanged when it is included. Only when the quantum parameter of the seed electron is around χ ≈ 20 or above does the rate for the one-step process become positive.
Numerical simulations of experimental set-ups often rely upon the locally-constant-field approximation, in which the rates for quantum processes are assumed to be well-approximated by defining an "instantaneous rate" equal to the rate of the process in a constant crossed field. When this approximation is valid (believed to be when the intensity parameter is much greater than unity), our results suggest that the contribution to trident from the one-step process is negligible. However, it is also clear that as the intensity parameter is reduced, at some point the one-step process should dominate. Exactly when this occurs is a subject for future work.
"year": 2018,
"sha1": "b6556542e4bf5fec9ba491256b76b874ba4731d1",
"oa_license": "CCBY",
"oa_url": "http://link.aps.org/pdf/10.1103/PhysRevD.98.016005",
"oa_status": "HYBRID",
"pdf_src": "Adhoc",
"pdf_hash": "da729edbb8e95d7a403c352825b1a4ababbc4eb8",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
The application of failure mode and effects analysis (FMEA) for the risk assessment of changes in the maintenance system of railway vehicles
Abstract This paper presents the application of failure mode and effects analysis (FMEA) for the risk assessment of changes in the maintenance system of railway vehicles, based on the example of the 6Dg type shunting locomotive. The application example is preceded by an introduction to the methodological basis of FMEA, as specified in the literature and in standards. In order to ensure the comparability of the analysis results with vehicles of a similar type and to quantify the risk components (the probability of hazard occurrence, the consequences of the occurrence of a hazard and the possibilities of hazard detection), the classification which applies to shunting locomotives was used. Based on the conducted analysis, the possibility of making changes to the maintenance plan for 6Dg locomotives which would not breach the acceptable safety level was demonstrated, and preventive safety measures were determined.
Abbreviations ALARP - as low as reasonably practicable; CSM - common safety method; ETA - event tree analysis; FMEA - failure mode and effects analysis; FMECA - failure mode, effects and criticality analysis; FTA - fault tree analysis; HAZOP - hazard and operability study; MDBHF - mean distance between hazardous failures (km); MSD - maintenance system documentation; MTBHF - mean time between hazardous failures (hr); PHA - preliminary hazard analysis; RAMS - reliability, availability, maintainability, safety; RPN - risk priority number (-); VSC - vehicle safety controls
Introduction
The prevailing formal document for the assessment of safety in rail transport is Directive 2004/49/EC of the European Parliament and the Council of 29 April 2004 on safety on the Community's railways. The currently applicable version was amended by Directive 2008/110/EC of the European Parliament and the Council of 16 December 2008 and Commission Directive 2014/88/EU of 9 July 2014. The principles for the common safety method (CSM) concerning the risk analysis are described in Commission Implementing Regulation (EU) No. 402/2013 [2].
A detailed algorithm for the process of risk management is presented in the appendix to the aforementioned regulation entitled Risk management process and independent assessment. The procedure of risk qualification in the case of technical, operational or organisational changes in rail transport requires an analysis of the significance of the proposed changes. The procedure is not required to be applied where the proposed change does not have an effect on the safety of the railway system or if, after the application of the criteria specified in Article 4(2) of the appendix, it is certain that the risk involved therein falls within the permitted level. If there is no such certainty, the change should be subjected to the risk qualification procedure [13].
The aim of the risk qualification is to demonstrate the conformity of the change with the safety requirements. To begin, the system needs to be defined with regard to its scope, functions and interfaces, which is then followed by a risk analysis comprising the identification and classification of hazards and the choice and application of the risk acceptance principle. This forms the basis for performing risk analysis and identifying the relevant safety requirements or measures to be implemented as the ultimate effect of the risk qualification process.
If it is demonstrated during the identification and classification of the hazards that the risk concerning the changes under analysis is essentially permitted, then the process which has been commenced is stopped and the decision taken need only be substantiated and documented; if this is not the case, the process is continued. In accordance with the regulation, at least one of three risk acceptance methods needs to be chosen; these are as follows: ▶ application of the codes of practice, ▶ application of a reference system, ▶ explicit risk estimation. The last principle requires the choice of specific safety criteria; these may be either qualitative or quantitative. The quantitative criteria are defined in the regulation and include the estimated frequency of 'accidents and incidents resulting in harm caused by a hazard' and the estimated 'degree of severity of the harm'. Appendix E of the standard PKN-CLC/TR 50126-2 [9] presents a comparison of a dozen or so methods of estimating explicit risk used in analysing railway systems, including rail vehicles; these methods are as follows: ▶ FMEA (failure mode and effects analysis); ▶ HAZOP (hazard and operability study); ▶ PHA (preliminary hazard analysis); ▶ FTA (fault tree analysis); ▶ ETA (event tree analysis); ▶ the matrix method; ▶ index-based methods (e.g. risk score), and others. Depending on the acceptance principle which has been adopted, it should be decided at the risk assessment stage whether the risk under analysis is permissible compared with the existing criteria. The standards for the assessment of safety in railway systems [3-5, 9-11] present general guidance which enables a reduction of the occurrence of hazards to the minimum acceptable level in accordance with the ALARP (as low as reasonably practicable) principle, which is based on the division of risk into the following three areas: 1. an upper limit, where it is mandatory to take up measures to reduce the risk; 2. a tolerable risk (so-called ALARP) area, where appropriate remedial measures and risk control measures should be undertaken; 3. a lower risk limit, where the risk level is acceptable and further measures are not required. The distinctions between acceptable, tolerable and non-acceptable risks are set by acts of law on railways (directives, regulations, standards, internal procedures of the safety management system of railway carriers) - these are blurred dividing lines which, in qualitative terms, relate to the applicable requirements set for objects. If a vehicle meets these requirements, it is considered safe for humans and for the environment. This paper presents a method of estimating explicit risk through the application of FMEA (failure mode and effects analysis), which is amongst the methods most frequently applied by Polish railway carriers.
Methodological basis of failure mode and effects analysis
As stated in the introduction, FMEA is one of many methods of explicit risk estimation. The aim of FMEA is to assess the risk involved in the occurrence of hazards and to undertake measures to control or eliminate it, primarily with regard to hazards relevant for the railway system. The FMEA method, with reference to various technical systems and facilities, is widely described in the literature [1,7,8,11,12,14-18].
Application of the FMEA method for risk qualification
As an application example of FMEA for risk qualification, changes in the maintenance system of 6Dg diesel locomotives (Fig. 2) are presented. FMEA is required by the procedure Identification of hazards and risk assessment of the Safety Management System of the railway carrier operating the locomotives.
Risk of hazard occurrence
FMEA is a quantitative method in which the risk of occurrence of any identified type of hazard is expressed using the RPN (risk priority number). According to the standard EN 60812:2018 Failure modes and effects analysis (FMEA and FMECA), the RPN may be obtained using the following expression [6]:

RPN(z_k) = r_1(z_k) · r_2(z_k) · r_3(z_k),

where: r_1(z_k) - risk component corresponding to the criterion of the probability of hazard occurrence 'O'; r_2(z_k) - risk component corresponding to the criterion of the consequences of the occurrence of a hazard 'S'; r_3(z_k) - risk component corresponding to the criterion of the possibilities of hazard detection 'D'; k - cause of the hazard. These elements are assessed on a scale of 1 to 10 based on the classification criteria which were adopted; the risk assessment ratio RPN therefore takes values between 1 and 1000. Various techniques for categorising the risk components are proposed in standards and in the literature. The number of categories, their scale and their description should match the particular object of study in order to ensure comparability with vehicles of a similar type operating in similar conditions. In the case of a 6Dg locomotive, the divisions which apply to shunting locomotives are used to quantify the frequency of the occurrence of hazard O (Table 1).

Table 1. Categories of the frequency of hazard occurrence
1-2: The probability of the occurrence of a hazard is marginal and it will likely not occur.
3-4 (rather unlikely; 10⁻⁶ < H ≤ 10⁻⁵ per hour, 10⁻⁷ < H ≤ 10⁻⁶ per km): The probability of the occurrence of a hazard is low. The causes of the hazard are very rare.
9-10: The probability of hazard occurrence is very high. It is nearly certain that the hazard will occur.

The scale of losses involved in the occurrence of hazard S was referred to human losses, estimated by means of equivalent fatalities, and to financial losses. The classifications of the consequences of the occurrence of a hazard are presented in Table 2.

Table 2. Categories of the consequences of hazard occurrence
7-8: The effects of the hazard may be serious and cause a considerable reduction in the safety level (railway accident, seriously injured persons, fatality).
9-10 (catastrophic; many fatalities, c > 1; losses of more than 2 million): The effects of the hazard may be very serious and lead to a dramatic reduction in the safety level (serious railway accident, fatalities).
The parameter of the potential of identification of hazard D defines the possibility of diagnosing a potential hazard (Table 3). The inclusion of this characteristic makes FMEA different from other risk acceptance methods. The possibility of earlier hazard detection by advanced systems of on-board diagnostics, or by the application of advanced tools and test methods during checks or maintenance, has a material effect on ensuring a high level of safety in the operation of the vehicle.

Table 3. Categories of the possibilities of hazard detection
1-2 (very high): The probability of hazard detection is very high. Identification of the cause of the error is certain.
3-4 (high): The probability of hazard detection is high. The control measures which are applied enable the identification of the cause of the error. Symptoms of the occurrence of the cause are noticeable.
5-6 (average): There is an average probability of hazard detection. The control measures may enable the identification of the cause of the error. Symptoms may be established and identified which indicate the possibility of hazard occurrence.
7-8 (low): There is a low probability of hazard detection. It is very likely that the control measures which are applied will not make it possible to identify the cause of the error. It is very difficult to identify the cause of the error.
9-10 (very low): Minimal probability of hazard detection. It is practically impossible to identify the cause of the error.
In accordance with the guidelines for the procedure of the identification of hazards and the technical risk assessment applied by the carrier, the FMEA method identifies three risk levels on the basis of the so-called risk matrix (Table 4). Depending on the calculated RPN, an assessment is performed of which hazards involve the highest risk. Hazards with an RPN figure higher than 120 are relevant; the higher the RPN figure, the more relevant the hazard for the railway system. RPN figures above 150 relate to events which pose a direct threat to the safety of the railway system and are classified as unacceptable. Where the risk R is in class 3, process control measures should be undertaken to eliminate the hazard or limit its effects. Preventive, corrective measures should be addressed in the first instance to items with a high RPN figure. Means and/or measures eliminating the hazard and reducing the risk should be identified.

Table 5 presents the mean times to failure and the mean times between hazardous failures for selected systems and elements of a 6Dg locomotive having an impact on the safety of railway transport. Based on these figures and the assessment criteria above, Table 6 presents an FMEA sheet with the results of the estimated risk for the hazards identified as relevant for the safety of the railway transport of a 6Dg locomotive.
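The RPN calculation and the three-level classification described above can be sketched as follows; this is an illustrative rendering with class boundaries read off the text (RPN ≤ 120 acceptable, 120 < RPN ≤ 150 tolerable, RPN > 150 unacceptable), and the function names are hypothetical, not taken from the carrier's procedure.

```python
def rpn(o: int, s: int, d: int) -> int:
    """Risk priority number per EN 60812: the product of the three
    risk components, each scored on a 1-10 scale."""
    for score in (o, s, d):
        if not 1 <= score <= 10:
            raise ValueError("O, S and D must each be between 1 and 10")
    return o * s * d  # RPN ranges from 1 to 1000

def risk_class(value: int) -> str:
    """Three-level classification assumed from the carrier's risk matrix."""
    if value > 150:
        return "unacceptable"  # direct threat to railway system safety
    if value > 120:
        return "tolerable"     # relevant hazard: control measures required
    return "acceptable"

# Example from the analysis: failure of the vehicle safety controls,
# O = 7, S = 5, D = 2 gives RPN = 70 (acceptable).
print(rpn(7, 5, 2), risk_class(rpn(7, 5, 2)))
```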
Analysis of the results and preventive safety measures
The analysis demonstrated that the highest frequencies of the occurrence of threats (parameter O) relate to failures of the vehicle movement safety controls. Detailed identification of the recorded occurrences showed that the measuring devices and the radiotelephone are the weakest elements in this structural group.
The highest figures of losses involved in the occurrence of a threat (parameter S), and the highest chances of detecting the threat (parameter D), were estimated for the threats which are not currently present and which are linked to the possibility of fatigue-related cracks in the structural nodes of the vehicle frame (support) and the bogie support. Analysis of the results demonstrated that the permitted risk level of RPN ≤ 120 was not exceeded for any of the hazards. The highest risk of hazard was noted for failures of the automated vehicle safety controls, the checking apparatus or the radiotelephone: RPN 9 = 70 (O = 7, S = 5, D = 2).

Table 6. Risk estimation sheet using the FMEA method for a 6Dg locomotive

In most cases, the risk level reaches RPN = 20 (Fig. 3, 4). A higher figure was found for: ▶ failures of brake elements (RPN 3). Based on the conducted analysis, the possibility of making changes to the maintenance plan for 6Dg locomotives which would not breach the acceptable safety level was shown. Nonetheless, changes to the locomotive maintenance plan require particular attention during the performance of operation and repair work with regard to the assemblies and subassemblies which have a major effect on the safety of railway transport. These assemblies and subassemblies are: ▶ the wheel sets, ▶ the brake system, ▶ the bogie support and frame. Due to the considerable age of the locomotives' support structure, special attention should be paid to the visual inspection and checking of the structural nodes of the body's support and the bogie frame. The following preventive safety measures were proposed: ▶ introduction, at the P2/1 maintenance level, of visual checks of the structural nodes of the vehicle frame and bogie frame; ▶ at the P3 level, the conducting of simplified flaw-detection tests of the wheel sets; ▶ the performance of penetration tests of the structural nodes on the bogie support and frame during repairs at the P4 maintenance level; ▶ in the IT system supporting the management of the carrier's transport capacity, the possibility of ongoing monitoring of the technical condition of the locomotives should be taken into account.
Conclusion
FMEA is one of the many explicit risk estimation methods mentioned in Commission Implementing Regulation (EU) No. 402/2013. It establishes a systematic approach requiring knowledge of all types of failure that are either registered during operation or are anticipated. This paper has presented an example of its application based on changes in the maintenance system of the 6Dg type locomotive. Changes in the maintenance plan require maintenance system documentation to be updated for the operations and processes allocated to particular maintenance levels. The changes were the subject of an analysis of the applicable maintenance system documentation.
In accordance with Commission Regulation (EU) No. 1078/2012 of 16 November 2012 on a common safety method for monitoring, the effectiveness of the control or preventive measures taken should be monitored and supervised, and their effects should be verified. The regulation obliges railway undertakings and entities in charge of maintenance to ensure the exchange of relevant safety information identified in the monitoring process. After a specified period of operation of the control measures, the process should be evaluated and the new RPN risk indicator should be calculated. The preventive actions proposed during hazard identification and risk assessment by the FMEA method should be used as input data to the safety improvement programme.
The next stage of work related to the change of the maintenance strategy of the analysed locomotive should be an assessment of the effectiveness of the proposed changes using life cycle cost (LCC) analysis. It can be particularly useful to compare the maintenance costs across the full maintenance plan of the locomotive and to compare the unit maintenance costs before and after the proposed changes.
"year": 2019,
"sha1": "97643f29f2e22ecede41bf530ca2c5f265d3ae8d",
"oa_license": null,
"oa_url": "https://www.ejournals.eu/pliki/art/14663/",
"oa_status": "GOLD",
"pdf_src": "DeGruyter",
"pdf_hash": "16f3448efa8a0ecb0ded9d77a8b281df45251d86",
"s2fieldsofstudy": [
"Engineering"
],
"extfieldsofstudy": [
"Computer Science"
]
} |
The Threat of Impending Pandemics: A Proactive Approach
The incessant occurrence of devastating health-related events, either on a large scale, such as pandemics, or in a local community in the form of sporadic outbreaks due to infectious agents, warrants a rapid, target-oriented, well-organized response team to combat their dire consequences. While the world has been recovering from the clutches of the recent disastrous COVID-19 pandemic, the struggles against novel emerging and re-emerging pathogens such as monkeypox (mpox), newer evolving strains of influenza, Ebola, Zika, and the yellow fever virus continue to date. Therefore, a multisectoral, intercontinental, collaborative, interdisciplinary, and highly dedicated approach should always be implemented to achieve optimal health and avert future threats.
Introduction And Background
Since time immemorial, the human race has been exposed to a plethora of microbes that cause deadly outbreaks, epidemics, and pandemics that continue to challenge us in ways that can cripple even the most advanced healthcare systems. Emerging and re-emerging infectious pathogens and their spread have placed global health in serious jeopardy and led to an increased incidence of outbreaks and pandemics over the last decade. Studies document the emergence of a new human infectious disease approximately every eight months, with more than 35 emerging infectious diseases infecting humans having surfaced since the 1980s [1]. In contrast, the prediction of future pandemics, their control, and outbreak investigations have been largely ignored and underfunded. Details are illustrated in Figure 1.
FIGURE 1: The depiction of the emergence of pandemics with an initial circulation of pathogens in wildlife followed by a spillover to humans leading to its global spread
The image has been created by the authors.
The past few decades have seen the emergence of many novel agents that have caused pandemics, like severe acute respiratory syndrome (SARS), Middle East respiratory syndrome (MERS), Ebola, flu viruses, and the most recent severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2). The emergence of these novel pandemic agents has never been predicted before their first appearance [2]. However, the patterns of their origins and spread need to be studied as part of the surveillance strategy. The occurrence of pandemics has substantially increased over time and is dominated by zoonoses (60%), of which almost 72% originate in wildlife. Thus, the threat posed by zoonoses, infectious diseases that jump from animals to humans, is rising, and the risk of a new pandemic is higher now than ever before. The most likely scenario for the next pandemic is a new strain of influenza, like the avian influenza A (H7N9) "bird flu" virus, or a newly identified virus such as another novel coronavirus, all of which are zoonotic.
These lethal novel zoonotic agents have high pandemic potential and continue to threaten global health security [3]. Additionally, the use of a traditional model of the epidemiological triad also explains the occurrence and transmissibility of infections with pandemic potential. For example, in yellow fever, the triad of the yellow fever virus of the Flaviviridae family, the human host, and the expanding breeding places of Aedes mosquitoes due to deforestation, urbanization, and increasing air travel across continents have led to escalating opportunities for yellow fever in nonendemic countries. Similarly, in the case of Ebola virus infection, the interference in the epidemiological triad has resulted in a recent increase in the frequency of Ebola virus outbreaks in both endemic and nonendemic regions.
Major reasons for the occurrence of these pandemics include new infectious organisms crossing the species barrier from animals to humans, prolonged survival in debilitated and immunosuppressed susceptible cohorts, evolving and mutating microbes, mass population emigration, enhanced livestock production, an upsurge in the wildlife trade, deforestation, expanding cities with exploding population statistics, increased travel, climate change, and escalated human-animal interactions [4]. In addition, the chain of transmission plays a pivotal role in disease transmissibility. For instance, in the case of the SARS-CoV-2 virus and other coronaviruses like MERS, animals living in close proximity to humans, such as camels, civets and bats, act as reservoirs, while the mode of transmission is via droplets or aerosols, and they more severely infect those at the extremes of age and persons with comorbidities like diabetes. Similarly, in the recent outbreaks of monkeypox in 2022, the unusual rise of cases, particularly in men who have sex with men (MSM) communities, predominantly in nonindigenous geographic locations, implicates possible evolution in the nature of the virus, its transmissibility route, a newer spectrum of clinical presentations, and the susceptible host.
Review
In the last three years, the novel coronavirus disease 2019 (COVID-19) pandemic has wreaked havoc globally and ravaged health and economic growth worldwide. The lingering pandemic began with the emergence of the novel virus, followed by the unfurling of numerous new variants. The World Health Organization (WHO), in collaboration with its international network of experts and researchers, has been assessing and monitoring the changes in the virus and the evolution of SARS-CoV-2 since January 2020. In late 2020, the emergence of variants that posed an increased risk to global public health prompted the characterization of specific "variants under monitoring" (VUMs), "variants of interest" (VOIs), and "variants of concern" (VOCs) for the prioritization of monitoring and research globally, as well as for the direct response to the pandemic. The newly emerged variants gain mutations that increase viral infectivity and transmissibility, affecting the ability of the virus to evade the protective immune response, therapeutic options, or the efficacy of currently licensed vaccines. AZ.5, C.1.2, B.1.617, B.1.630, and B.1.640 are VUMs; the Lambda and Mu variants are classified as VOIs; and Alpha, Beta, Gamma, and Delta are defined as VOCs. Further, another emergent variant, Omicron, also designated a VOC, shows more than thirty mutations in the spike protein, enhanced interaction with the angiotensin-converting enzyme 2 (ACE2) receptors, higher viral infectivity and transmissibility, immune resistance, and decreased lung infectivity, and hence lower pathogenicity compared with the Delta variant [5,6]. The continued evolution of the virus demands the strengthening of surveillance and sequencing abilities for a better approach to studying the extent of transmission of circulating and mutating SARS-CoV-2 variants and detecting unusual epidemiological events.
Apart from the COVID-19 pandemic, the 2009 swine flu (H1N1) influenza pandemic, Ebola, MERS, the Zika virus, and monkeypox disease have been declared public health emergencies of international concern (PHEIC) [7]. In this context, with a steady rise in spillover events from wildlife to human hosts, the major anticipated causative agents of pandemic outbreaks must be analyzed, and the triggering changes that lead to the emergence of virulent, pathogenic strains, as well as their capacity to infect humans, need to be assessed in order to tackle them in the pre-pandemic phase itself. Further, a well-articulated action plan is required to counter these health problems, along with the application of the principles of the one-health approach. Broadly, as shown in Figure 2, the impending diseases that might explode into pandemics, and should never be ignored, might be: an existing virus with an evolving new variant (the SARS-CoV-2 virus); a new novel virus, presumably of zoonotic origin (the SARS-CoV-2 virus); a known virus in a new geographical region (the yellow fever virus); a future threat due to increased human interactions and travel (monkeypox); or an existing pathogen with a newer spectrum of disease manifestations (the Zika virus). There is little doubt that the human race may be on the brink of another influenza pandemic before we can fully recover from the clutches of the COVID-19 pandemic. Human influenza is primarily transmitted by large respiratory droplets. Annual vaccination campaigns, the recommended standard contact precautions, and antiviral therapy can help keep these agents in check. However, there is uncertainty regarding the exact modes of human-to-human transmission of avian influenza, and there is a pressing need for additional contact precautions due to the continued evolution of a virus that might become capable of sustained human-to-human transmission. The lack of vaccination against these evolving clades and subtypes, as well as high mortality rates of greater than 50%, further increases the complexity. The southern parts of China remain the hypothetical epicentre for the emergence of H5N1 clades and subclades. Despite the universal vaccination of domestic poultry, H5N1 viruses are perpetuated among the domestic birds prevalent in the region. Furthermore, the rising incidence of clusters of infections in Indonesia and Turkey indicates consistently sustained human-to-human transmission. Analogous circumstances that bring together all of these situations, along with obscurity in diagnosis and the unavailability of vaccines and therapeutic options, can create devastation [8-10].
Middle East Respiratory Syndrome Coronavirus (MERS-CoV): Doomed by the Dromedaries
The MERS-CoV virus, also known as the camel flu, was first isolated from a patient with severe pneumonia in 2012 and can cause severe respiratory disease, marked by life-threatening pneumonia along with renal failure [11]. Since 2012, 27 countries have reported more than 2600 cases, with 935 known deaths due to the infection and its complications, for a case fatality rate of 36% [12]. The zoonosis, with dromedary camels as the source of infection, has been reported in the Arabian Peninsula. Outbreaks of non-sustained human-tohuman transmission affecting more than 100 individuals occur, with occasional importations recurrent and outbreaks in healthcare settings in the Arabian Peninsula and the Republic of Korea [13,14]. Currently, there are numerous knowledge gaps regarding the transmission of MERS-CoV, its evolution, probable disease pathogenesis, the absence of any efficacious therapeutic options, and vaccine prospects. These sporadic and fatal MERS outbreaks, especially in hospital settings, call for a continued international collaborative approach to gain a better understanding of the virus, more effective control of animal-to-human transmission, and to prioritize research toward the development of an effective human antiviral agent and dromedary vaccine [15].
Ebola: The Massacre of Mankind
The Ebola virus, belonging to the Filoviridae family, has five species: Sudan, Zaire, Tai Forest, Bundibugyo, and Reston. The Bombali virus (BOMV), a novel ebolavirus that belongs to the proposed new species BOMV, has recently been detected in bats in Kenya and Sierra Leone [16]. Secondary transmission, resulting from close contact between infected individuals or with corpses through exposure to infectious body fluids, follows the initial zoonotic transmission from fruit bats. Ebola virus disease (EVD), a frequently fatal zoonosis, causes fever, chills, and hemorrhagic manifestations like petechiae, ecchymoses, and internal bleeding [17,18]. Deforestation, followed by increased interaction between humans and infected reservoirs, has previously been linked to EVD outbreaks [16]. Although there has been significant progress in research regarding the Ebola virus, various gaps remain concerning the virus's ecology and its ever-expanding outbreaks. The complex viral interactions of disease pathogenesis, surveillance, limited diagnostics, therapeutic options, and outbreak control dictate the transmission dynamics, along with the case fatality rate, of an Ebola outbreak.
Monkeypox (Now "Mpox"): An Unusual Turmoil Beyond Traversed Paths
Yet another zoonosis, monkeypox disease, was characteristically limited to Africa, causing an average of a few thousand cases every year and occasional outbreaks disseminated by travel to an endemic area. The unusually rapid escalation of monkeypox cases beyond Africa since May 2022, as the world was recovering from the COVID-19 pandemic, has put scientists on high alert. With more than 85,000 cases and 93 deaths reported in 110 countries as of February 15, 2023, this outbreak continues to constitute a public health emergency of international concern [19]. An alteration in the virulence pattern and the identification of unusual transmission modes of the virus, i.e., among men who have sex with men (MSM), could be the reason for the current outbreaks, indicating the emergence of strains with better transmission dynamics, though this is a rare phenomenon, especially in a large DNA virus. Atypical clinical presentations such as proctitis, urethritis, severe pain, myocarditis, and encephalitis are further perplexing [20]. The sudden upsurge of cases and its rapid global spread might have been accelerated by viral adaptation aiding the human-to-human transmission of the monkeypox virus, which might have remained undiscovered for a certain period and has now redesigned itself with better specificity in the human host, as seen in the SARS-CoV-2 VOCs. As scientists continue to unravel the social and epidemiological links of the mysterious outbreak, the origin and the genetic and transmission determinants are being investigated, while active case detection and isolation, contact tracing, and post-exposure vaccination are being performed simultaneously for the prompt containment of the disease [21].
Yellow Fever's Next Destination: Asia?
The yellow fever virus (YF), a flavivirus, and Aedes aegypti, its vector, were introduced to the Western hemisphere from Africa through the slave trade in the 1600s, resulting in major epidemics that killed thousands over 350 years in the region. Although effectively controlled by the mid-nineteenth century, the unprecedented rise of air travel in the following years, coupled with unhindered population growth and urbanization, created an ideal epidemiologic ambience and socio-demographic conditions for the resurgence of the dreaded virus and its spread to newer geographical areas through infected travellers incubating the virus. The last decade has evidenced the highest number of unvaccinated travellers being infected and exporting the virus to non-endemic nations. Although there has been no secondary transmission, the risk of importation of the virus is very high. Owing to the millions of unvaccinated travellers moving between non-endemic and endemic countries, secondary transmission remains likely to occur in the near future. Deprived of any immunity and prior exposure to the virus, the vastly populated Asia-Pacific region remains highly vulnerable to infection [22]. The hypothesis exists that exposure to other flaviviruses before YF infection confers cross-protection; however, there is a possibility that the case fatality rate in those without such prior exposure could be even higher. The absence of YF in Asia has also been explained by reports that the vector competence of Asian strains of Ae. aegypti is much lower than that of their African or South American counterparts, even amounting to a complete lack of transmission capacity; but these theories are being refuted, as certain Asian vector strains have been found to be more competent than their Western counterparts [23]. The combination of increased international travel from endemic to non-endemic regions, lax vaccination laws, non-immune populations, poor health infrastructure, and inadequate vaccine manufacturing capacity in at-risk nations in the face of an epidemic remains a difficult proposition for mounting an effective emergency response.
Pandemic preparedness
Pandemic preparedness focuses predominantly on research on viruses that can cause pandemics and on high-priority pathogens that are most likely to threaten human health, the prediction of spillover into the human race, the occurrence of disease outside known geographic areas, and surveillance and monitoring.
Prediction and Prevention of the Pandemic
There are several programs established for strengthening public health surveillance to issue early warnings. The WHO Global Influenza Surveillance and Response System (GISRS) works with the primary aim of protecting people from the threat of influenza through continuous worldwide surveillance, preparedness, and response for seasonal, pandemic, and zoonotic influenza. It acts as a global platform for monitoring influenza epidemiology and disease and provides global alerts for novel and emerging influenza viruses such as H7N7, H7N9, H9N2, and other respiratory pathogens [24].
Similarly, the Global EYE strategy aims to eliminate epidemics of YF by protecting at-risk populations, preventing international spread, and ensuring rapid outbreak containment [25]. The "stop monkeypox outbreak" mission of the WHO focuses on three aspects: minimizing zoonotic transmission, interrupting human-to-human transmission for the population at greater risk of mpox exposure, and protecting the vulnerable group at risk of developing the severe disease [20].
Targeted Genomic Surveillance in Hotspots
Regions where human movement occurs amid abundant wildlife biodiversity, intermingled with a vast array of microbial biodiversity, are designated as hotspots. Targeted surveillance in hotspots is essential for the detection of emerging infectious diseases and their control [2].
The PREDICT component of the Emerging Pandemic Threats programme, under the United States Agency for International Development (USAID), has developed predictive modelling approaches to identify hotspot regions. Experts from different specialities, such as epidemiology, virology, genetics, informatics, and veterinary medicine, work in active collaboration in hotspots in 20 developing countries, focusing on surveillance and the building of diagnostic capacities at human-animal interfaces where cross-species transmission is common [2], as illustrated in Figure 3.
The following aspects are covered via surveillance.
Surveillance of animals (wild animals, domestic animals, migratory birds, and poultry); surveillance of the sentinel population (wet market workers, butchers, vets, and hunters) for known viruses of pandemic potential; surveillance of countries with inadequate sanitation and hygiene, the lack of infrastructure to deliver an intervention, and limited resources for control of zoonoses and vector-borne disease.
Rapid Detection of the Pandemic
As we proceed towards a real-time PCR (polymerase chain reaction)-based pathogen detection system, the establishment of genomic-based diagnostic laboratories that can detect microbes in the field itself with a lower turnaround time of a few hours and can perform sequencing directly from the sample is the need of the hour. Hence, a portable "lab-in-a-suitcase" sequencing platform rather than a bench-top instrument is a pressing necessity [26]. The development of newer diagnostics should focus on meeting the "ASSURED" (Affordable, Sensitive, Specific, User-Friendly, Rapid/Robust, Equipment-Free, and Deliverable) criteria of the WHO [27].
Tracking of pathogen strains to comprehend the emergence of various variants like VOIs and VOCs is pertinent for the control of the pandemics.
Regarding the role of digital technology in pandemic preparedness and response, "digital disease detection" is the current approach to disease surveillance synonymous with digital epidemiology. At least 50 digital disease detection systems are currently in place that retrieve information from a variety of sources, including newswires, digital media, official reports, and crowdsourcing; translate, process, and analyze their trends; and then disseminate this information to the community via websites, emails, media, and mobile alerts [26].
Controlling the Pandemics
The close interplay and balance between animals, humans, and their environments call for a one-health collaborative approach to controlling pandemics. Therefore, preparing for future pandemics needs special emphasis as well as national and international level funding for multidisciplinary collaboration of different scientific streams under the one-health strategy. Such a multi-level combination of research approaches will accelerate pandemic control and prevention with knowledge of pathogen origin and adaptation, strengthen infrastructure and networks for diagnostics, and enhance vaccine and therapeutic research capabilities, allowing the mitigation of emerging pandemics.
Conclusions
Incessant land-use changes, exploding population statistics, continuous genetic evolution at the pathogen level, and extravagant human-flora-fauna interaction in ecological niches induce unavoidable zoonotic spillovers, which require a hawk-eyed vigil to seize them at an early phase in order to prevent socio-economic and health chaos. Thus, a highly dedicated one-health approach that is collaborative, interdisciplinary, multi-sectoral, and implemented across international borders is the ultimate need of the hour for the prevention of future threats. A workable multi-sectoral accountability framework and programme reforms are also needed for the prevention of futuristic threats, in addition to carefully following the steps of an outbreak or pandemic investigation.
Conflicts of interest:
In compliance with the ICMJE uniform disclosure form, all authors declare the following: Payment/services info: All authors have declared that no financial support was received from any organization for the submitted work. Financial relationships: All authors have declared that they have no financial relationships at present or within the previous three years with any organizations that might have an interest in the submitted work. Other relationships: All authors have declared that there are no other relationships or activities that could appear to have influenced the submitted work.
"year": 2023,
"sha1": "d321055c6e2a72da81366b50ae93077f3f0c66cf",
"oa_license": "CCBY",
"oa_url": "https://assets.cureus.com/uploads/review_article/pdf/143491/20230327-17048-1gtba0e.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "fb6423e3f5f357df67dcbb2cfc990531119ed60b",
"s2fieldsofstudy": [
"Economics"
],
"extfieldsofstudy": []
} |
THE OPERATIONAL RISK MANAGEMENT IN THE ROMANIAN SMES
Roxana Mironescu
The prerequisite for the smooth functioning of business risk management is correct risk identification. Basically, identifying business risks is fundamental to determining the optimal level of protection for a given activity. If the risk is underestimated, the protection will be insufficient to cover losses; if it is overstated, the cost of protection will reduce the excess proceeds of the activity. The objective of this research was to identify the major internal and external factors which influence the present activities and the future of the SMEs from the North East region. The method used in this paper is based on a questionnaire applied to a sample of 120 top managers coming from various fields of activity, operating on the market for more than two years. The management of information and the data collection were conducted in late 2013. In the first stage, a total of 35 risks was identified. The further evaluation of the results used the probability and the impact of the identified risks, based on the questionnaires and on information processed from the documentation provided by the managers of the SMEs under analysis.
Introduction
The study of risks was born, as a science, in the sixteenth century, a period of renaissance for challenges and discoveries. Today, risk management has penetrated the entire spectrum of activities, as a discipline that measures the degree to which failures, irregularities or mistakes lead to huge depreciation and disaster, damaging the rights and objectives of the relevant stakeholders. As part of investment risk control, specialists have identified the full range of risks, grouped into three main areas: credit risk, market risk and operational risk. Operational risk refers to the exposure to financial losses due to downtime or a lack of correlation between the internal activities of an entity; due to events, trends or external changes that could not be known and prevented; or due to the internal organization, the control system and the ethical standards set, excluding those events that are specific to market risk, credit risk and strategic risk. The elements of operational risk are determined by the existence of personnel risks, procedural risks, technological risks and transaction control risks. The participants in investment and operational processes all agree that operational risk cannot be completely eliminated and that work must be conducted in such a way that its negative results are as small as possible. Starting from the fact that any economic entity is created in order to profit from the production and sale of goods or services, its results are based on the organization and management of the entire process of capitalizing on its available, borrowed or attracted assets. The operation of business assets, reflected by the net income indicator, will fluctuate depending on the entity's ability to market the goods and services it produces. In the development of its business, the company executes multiple processes that carry their own risks, and during these operations it is subject to operational risk, caused by defects, errors, or operating or technical failures. The definition of operational risks has preoccupied, in recent years, the financial specialists who, meeting in the Basel Committee in 1998, finally defined and framed operational risk. The purposes of protection against this risk are: • avoiding unpredictable huge losses; • avoiding a multitude of small and dangerous losses; • improving operational efficiency; • improving the profit per unit of invested capital; • reducing the volatility of earnings; • a more efficient allocation of capital; • improving customer satisfaction; • improving the managers' preoccupation with operational risks; • a better usage of the intellectual capital of the firm; • assuring the company's managers and shareholders that these risks are properly assumed. In a business environment, risk management comprises the entire combination of strategies, processes, infrastructure and institutions able to furnish appropriate models that help control risk as strictly as possible, depending on its purpose, namely: • to correctly identify all the potential risks the company may encounter; • to elaborate and develop adequate measurement and control procedures, also establishing the upper limit of the risk bearable by the company (the risk appetite); • to use in the internal processes the most advanced methods and techniques employed by modern economic entities in the same field of activity; • to continuously adapt the operational system to the market conditions, in a transparent manner; • to pay attention to the existence of favorable clauses, or to propose such favorable specific clauses, for securities transactions; • to apply and propose particular models conforming to those of the economic entity. From the analysis of the risk management process, it appears that three fundamental components are required: • risk identification and assessment; • the development of a strategy for responding to risk factors; and • risk control. The work of identifying business risk involves identifying the risks that may arise during the conduct of an activity (still to be covered) and determining their characteristics. Risk identification targets both the exposure of property, rights and human resources and the potential hazards that may cause these exposures. Risk identification is accomplished in two stages: risk perception, namely the awareness that a risk threatens the business project, and the identification of the risk itself. The most commonly used risk categories are: • technical risks: quality or performance, dependence on homologated technology, the requirement to obtain a certain performance; • project management risks: the misallocation of time and resources, an inadequate quality of the project plan, unrealistic or incomplete estimations, supply problems, poor communication techniques; • internal risks: costs, time and goals are inconsistent, a lack of prioritization between the organizational projects, inadequate or discontinuous funding, conflicts in funding or the wrong allocation of resources to different projects of the organization; • external risks: changes in legislation, market trends, labor disputes, country risks.
Study on the main environmental risk factors which influence the SMEs from the North East region, Romania. Case study on the industrial SMEs
The business risk assessment can be done by estimating two dimensions (Böcker, K., 2008). Impact: this measures the impact of each risk and refers to the importance of the probable loss; values are given on a scale from 1 (negligible impact) to 4 (very high impact). Probability: this establishes the possibility of occurrence of the event, on a scale from 0 to 1.
In the evaluation process, there are four categories of business risks: • Strategic risks: these are risks that affect the company's values and can lead to bankruptcy, stagnation or a fall in activity as a result of the inability of the organization to adapt itself to a specific competitive, constantly changing environment; these specific risks include changes in customer priorities, threats from traditional competitors, emerging changes in brand perception, changes in access to financial and human capital, new developments in technology, the global movement of economic and geo-political factors, and legal and regulatory changes, quite numerous in the current economy; • Economic risks: these risks include changes in interest rates, exchange rates, commodities, shares and other property, as well as credit and other liquidity risks; • Operational risks: these are risks related to key people and their career planning, the composition and orientation of the Board of Directors, the orientation of human resources and employment, information technology systems, accounting, auditing and control systems, regulatory compliance, design errors, productivity, and disruptions in operations and the supply chain; • Hazard risks: the risk of a decrease in non-financial assets because of natural phenomena, physical damage to real assets, employees' actions, events that affect liability, product recall and integrity, as well as business interruption. In order to assess and accomplish efficient risk management, both the small and medium organization and its managers must learn to approach risks from a holistic point of view and, subsequently, to reduce the internal and/or external financing risks (Terry et al., 2001). The methods were based on the interpretation of the opinions and attitudes resulting from a questionnaire survey, systematic observation of the concrete phenomena and a synthesis of previous studies in this area. The analysis of the level of preparedness of small and medium companies for risk management was performed using, as a basis, a questionnaire for gathering information, with 110 questions in a bimodal format (Yes/No), addressed to key people (manager/responsible). Using dedicated computer applications, we obtained compliance rates for each category of risk and a general risk rate. The performance evaluation of the obtained results aimed to identify the following risks: financing risks; branding and good reputation risks; corporate governance; behavioral risks and safety; ergonomics and the management of accidents/losses/employer responsibility; absence management; environmental management; producer's responsibility; property conservation; business continuity; computer dependence; e-risks and the Internet; human resources; key employees; political and credit risks. Finally, we calculated an overall score (an average weighted by the importance given to each category of risk) to see how well the companies are able to manage the risks they face.
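As an illustration of the scoring just described, the following is a minimal sketch; the weights and answers are hypothetical, not the study's actual data. Each yes/no answer is weighted by the importance given to the question, and a chapter's compliance rate is the weighted share of 'Yes' answers.

```python
def chapter_compliance(answers, weights):
    """Weighted compliance rate for one risk chapter.

    answers : list of bool, True for a 'Yes' to each bimodal question
    weights : list of float, importance given to each question
    Returns the compliance rate as a percentage.
    """
    if len(answers) != len(weights):
        raise ValueError("one weight per question is required")
    total = sum(weights)
    achieved = sum(w for a, w in zip(answers, weights) if a)
    return 100.0 * achieved / total

# Hypothetical five-question chapter with unequal question weights
print(chapter_compliance([True, False, True, True, False],
                         [3.0, 2.0, 1.0, 1.0, 3.0]))  # -> 50.0
```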
To calculate the risk score of each topic, there were used different weights, given to each question in the survey, according to the extent considered appropriate to each question for the study in question.The method used in this paper work is based on a questionnaire to a sample of 120 top managers of the SMEs in the NE region, coming from the industrial activity, operating on the market for more than two years.The management of information and the data collection was conducted in the early 2013.The main objectives of the present study were: • What are the operational risks the managers coming from the analyzed SMES recognize; • Which is the evolution of the turnover of these companies, influenced by the identified risks; • What are the strategies for the future in these SMEs; The major objective of this research was to identify the internal and external major factors which influence the present activities and the future of the SMEs from the North East region in Romania.Both in Romania and the EU's countries, the SMEs are defined to be those companies that have up to 250 employees and generate a net turnover of 50 million Euros.At the EU level, there are over 23 million SMEs (a figure recorded in 2011), which represents over 98% of all the European enterprises (http://ec.europa.eu/enterprise/policies/sme/facts-figuresanalysis/performance-review/files/countries-sheets/2010 2011/romania_en.pdf.).In terms of their contribution to employment, the SMEs bring 67.4% of the existing jobs in the non-financial economy, inside the European Union, in 2012, virtually maintaining the same level as in 2011 (67.4% ), but greater than 2010 (66.9%).The share of gross value added of the SMEs suffered a slight decline in the last two years, falling to an average of 58.1% of the aggregate E.U.'s economy.Romania is well below the European average, concerning the development of the SMEs sector.If the European average amounts to 42 SMEs/1000 inhabitants, in Romania there are around 24 SMEs/1000 inhabitants.According to a recent report made up by the World Bank (Doing Business Report, 2013), Romania is positioned on the 68th place in the world range of countries, in terms of the facilities for setting up a company (a business start-up).Even in such difficult conditions, between 2003 and 2008, there was a significant increase of the number of the Romanian SMEs.Then, a period of decline followed, between 2008 and 2012 and the number of such enterprises began to decrease.The economic crisis overlapped with an unstable economic framework and generated a cessation of activity, in less than two years, for about 250,000 SMEs in our country.Looking at the data from the graphic 1, it can be concluded that a substantial fraction of SMEs in Romania (52.30%) have faced some difficult problems in 2008-2012, because of the national and European economic decline.These firms are more vulnerable to contextual challenges than the large firms and only a percentage of 34.51% reported a substantial increase in their activity.
Graphic 1. Dynamics of the surveyed SMEs between 2008 and 2012.
Source: adapted from the data in the SMEs White Book, 2012, Sigma House of Publishing, Bucharest.

In 2010, 13,846 small and medium enterprises offered jobs to 53,120 employees. In 2008-2010, suffering from the impact of the global economic and financial downturn, the SMEs in this region experienced a downward trend. The largest decline was registered in the construction industry and in trade. In 2010, the turnover of these companies decreased by approximately 30% compared to 2008. The unemployment rate in 2009 stood at around 9%, almost double the rate registered in 2007.
Results and discussions
The economic and financial performances of these SMEs, in the present and for the future, must be analyzed taking into account certain uncertainties and business risks, and by means of the activities that these companies intend to put into practice to fight them. According to a survey conducted in 2012 by the National Council of the SMEs in Romania, about 54% of the respondents, Romanian small entrepreneurs, considered the current business environment unfavorable to business development. This study revealed that the main difficulties faced by the Romanian SMEs in 2011 were the following: decreased domestic demand, excessive taxation, excessive bureaucracy, inflation, corruption etc. It should be noted that the conditions hampering the activity and the economic performance of these companies are as follows: • a decline in the domestic demand; • excessive taxation; • bureaucracy; • excessive controls; • high borrowing costs; • delays in cashing invoices from private companies; • corruption; • inflation; • difficult access to credit; • recruiting, training and motivating staff; • the relative instability of the national currency; • invoices unpaid by the state institutions; • increasing wage costs; • the poor quality of the infrastructure; • competition from imported products; • a decline in the export demand; • obtaining the necessary consultancy and company training; • knowledge and adoption of the acquis communautaire. Another survey, conducted by the National Council of the SMEs among Romanian small entrepreneurs, underlines that about half of them do not forecast any activity, and rather do not elaborate short-, medium- or long-term business strategies. However, the small and medium firms in Romania rank second in the central and eastern European region in terms of optimism linked to the effects of the financial crisis, after the Austrian ones. Thus, most small entrepreneurs in Romania intend to expand their businesses, in a moderate manner, in the near future (C.N.I.P.M.M.R., 2012, Cartea Albă a IMM-urilor din România, The SMEs' White Book). In the first stage of our inquiry, a total of 35 risks was identified. The evaluation results, obtained using the probability and the impact of the identified risks, based on the questionnaires and on information processed from the documentation provided by the managers of the SMEs under analysis, are synthesized in a probability-impact risk map. Risks in the red zone should be resolved first, because they are the ones that can create great difficulties and are critical for the small firm, while those in the green zone have the lowest priority, as they are minor risks. The interviewed managers appreciated that the occurrence of unforeseen risks in the general strategy of the company, in the absence of a risk strategy, may be a critical risk, with a probability of 0.80 and an impact factor of 2, while some significant risks coming from the changing environment, such as the economic environment risk, the prices risk, the funding risk, the credit risk and the competition risk, represent a low risk for our managers, with probabilities between 0.8 and 1.2 and an impact factor of 2. Some comments should be made about the managers' economic education and their capacity to understand and face business risks.
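A probability-impact map of the kind discussed above can be sketched as follows; the zone boundaries used here are hypothetical placeholders (the paper does not state them), chosen only to illustrate how a (probability, impact) pair on the 0-1 probability and 1-4 impact scales is mapped to a red/yellow/green zone.

```python
def risk_zone(probability: float, impact: int) -> str:
    """Map a risk to a zone of the probability-impact matrix.

    probability : likelihood of occurrence, nominally on a 0-1 scale
    impact      : severity of the probable loss, on a 1-4 scale
    The zone thresholds below are illustrative assumptions only.
    """
    exposure = probability * impact   # simple exposure score
    if exposure >= 1.5:
        return "red"      # critical risks: resolve first
    if exposure >= 0.8:
        return "yellow"   # significant risks: monitor and mitigate
    return "green"        # minor risks: lowest priority

# Example from the survey: lack of a risk strategy, P = 0.80, impact = 2
print(risk_zone(0.80, 2))   # -> 'red' under these illustrative thresholds
```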
The compliance rate for each chapter of risks identified by the questionnaire is shown in Table 2. Chapters such as "Environmental management" and "Producer's responsibility" had the highest rate (100%), while the chapters "Key employees" and "Human Resources" had the lowest rates (30% and 33.33%, respectively). The overall score, calculated as a weighted average based on the importance given by the evaluator to each chapter in question, is 67.7%, which is considered a good value denoting an above-average capacity for managing the risks faced by the analyzed companies. The score of each risk chapter is colored according to the score-comparison system. It must be remembered that this rate is based only on a percentage score, so the criticality of each category should be evaluated based on its financial impact on business operations. However, it is recommended that all the questions in each category, regardless of their importance, be considered as potential mitigation recommendations for the future.
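A minimal sketch of the overall-score calculation described above: a weighted average of chapter scores, with weights reflecting the importance the evaluator assigns to each chapter. The chapter weights below are illustrative assumptions; only the chapter scores quoted in the text are taken from the paper.

```python
# Weighted-average overall score across risk chapters.
def overall_score(scores: dict, weights: dict) -> float:
    """Weighted average of chapter scores (in %), weights = evaluator importance."""
    total_weight = sum(weights.values())
    return sum(scores[ch] * weights[ch] for ch in scores) / total_weight

scores = {"Financing risk": 37.0, "Key employees": 30.0,
          "Human Resources": 33.33, "Environmental management": 100.0}
weights = {"Financing risk": 3, "Key employees": 2,      # assumed importance
           "Human Resources": 2, "Environmental management": 1}
print(round(overall_score(scores, weights), 1))  # overall score in %
```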
Colors for each chapter (type) of risks
Financing risk
The score for this risk chapter is 37%. This is a fairly low rate and serves as a warning, indicating that financing risk is a subject that requires more attention from the managers of the SMEs. The main advantage in terms of risk financing that results from the questionnaire is that the company has a proactive insurance policy, which is reviewed at relatively short intervals, allowing a rapid reaction to changes coming from the internal and external environment of the firm. However, this strategy lacks a longer time horizon, which leaves it weak against future risks distant in time. Fluctuations in the prices and quantities of commodities and in the currency have a significant, though not very large, impact on business activity, which increases the need for risk financing. The adoption of financial procedures to minimize exposure to such fluctuations would significantly reduce the size of these types of risks. At the same time, not adopting a total-cost-of-risk system leads to over- or under-sizing the costs allocated to long-term financial risk. Most of these weaknesses have as their primary source the lack of a budget allocated to risk management.
Branding and reputation risk
The achieved score in this risk chapter is 67%. Reputational risks are very important to a company which, while working in the private sector, is listed on the stock market or has penetrated a new market. If the company's reputation is in danger, the consequences can cause a disaster through loss of the customers' confidence, and protection by means of insurance is difficult for this type of risk. The business operations require close attention and a factual analysis in order to estimate the potential consequences. This risk chapter has a relatively high score due to the attention paid to the company's own image, which is reflected in the activities of the marketing department and its financial support, including copyright and media monitoring of direct relevance to the company. Making a financial assessment of the company's main official brand product, alongside the adoption of different ways of measuring brand performance, would increase the score of this risk chapter and reduce the reputational risk.
Corporate Governance
The score in this section is 42.86%, primarily due to the lack of training programs for stakeholders regarding risk management and to the poorly developed information system for identifying the key risks and their causes. The companies maintain relative control of risk at the level of senior managers, although there is not yet a full and comprehensive implementation of the risk management process. As the implementation of integrated risk management starts and continues, the effects will be beneficial in the long term. It enhances communication within the company on key risks and possible losses with financial consequences, and facilitates alignment with the company goals and the overall vision, an aspect we consider deficient. Within the organization, the managers want to maximize and ensure long-term profit by taking account of risk. This objective is the major responsibility of the General Manager of the organization. Because risk plays an important role in achieving this objective, the General Manager or the Executive Director may be labeled as the "final" risk manager, and therefore it is not unusual that those responsible for risk inside the organization report directly to the General Manager.
Behavioral Risks and Safety
The score for this risk chapter is 82.50%. Even if the score is high, there are some important risks in this regard.
The behavioral risk analysis is very important because in recent years the companies have been involved in reorganizations without explicitly taking into account the impact on employees' behavior. The factors that lead to behavioral risks that may affect the companies' performance were only partially identified. There are few methods of evaluating the effectiveness of operational procedures.
Certain elements of workplace risk awareness and of the risk reduction program are:
• the review and analysis of the causes of accidents;
• the review of the potential risks at work;
• the identification and elimination of risks;
• an analysis of the probability and severity of risks;
• advising.
Ergonomics
The score for this chapter is 91.67%, which is among the largest registered by the companies. An effective ergonomics program should be able to demonstrate improvements in manual handling (or documented ergonomic improvements in other fields of activity) and a reduction in accidents caused by physical demands. This can be demonstrated by changing instruments and tools, changing workplaces, etc., leading to fewer accidents than before the implementation. One field where progress is needed is the correct management of employees' complaints. The importance of this chapter is justified given the serious challenges facing the Northeastern region in finding and retaining qualified staff because of the massive migration of labor to the more developed countries of the European Union.
Absences management
The score is 92.86%; this high rate reflects the existence of developed programs for recording absences inside the companies. The lack of any strategy for managing absences caused by health reasons, however, leads to a risk factor in this area that the companies would have to eliminate.
Environmental management
The score in this chapter is 100%, which demonstrates that environmental issues are considered an important priority in the companies' vision, as they seek sustainable development. The existence of an official environmental policy, managed by responsible managers, provides a high capacity to handle uncertainty and fight risk in case of environmental accidents that might occur during current activities. In the context of the high environmental concerns arising from Romania's integration into the European Union, this topic can be an advantage when the SMEs try to access the structural and cohesion funds for business development and economic competitiveness.
Producer's responsibility
The score for this risk chapter is 100%. Most of the points of this chapter are mandatory responsibilities, assumed by the companies to comply with the European rules introduced when Romania was accepted as a member of the European Union. Continuing the present way of doing things will maintain a low risk in this field. Companies' management should pay continuous attention, however, as this risk chapter is subject to frequent changes in the regulations.
Risk of property protection
The score for this risk sector is 86.36%, as a result of the nine responses to the questions in this section. The available property protection measures are vital to support the activity. Considering this aspect, the properties are considered critical to the company, which is why it has a complex property protection system, periodically evaluated and remodeled for the new risks arising in this critical area. The risks that the company may encounter in this area are listed below:
• fire and natural hazards such as earthquakes, floods, storms, lightning, etc.;
• theft and intentional acts, such as arson, terrorism, bomb threats, etc.;
• failure of machinery and electronic equipment;
• accidents of employees and third parties, either visitors or contractors.
Business discontinuity and counteracting the risk of discontinuity
Considering the current approach to business in the analyzed companies, the existence of a business continuity plan becomes essential. The score for this risk chapter is 80%, which is above the average, and it requires special attention. A business continuity plan should include emergency responses, crisis management procedures, and communication and recovery strategies. Preparing a business continuity plan is mainly aimed at reducing the impact of service interruption, regardless of the source generating the interruption, and at resuming the main functions of the organization, which is planned by identifying the period necessary to recover the critical functions.
Computer dependence risk, E-risks and Internet
The score under this risk chapter is 68.18%. Improving this score is considered necessary due to the rapid expansion of virtual trading of various goods and services at the national and European level. Special attention should be paid to such risk assessment, to IT training for the personnel regarding emergency situations in the field, and, when using the Internet as a way of selling products and exploiting these opportunities, to implementing safety procedures.
Human Resources
The score is 33.33% for this risk chapter. This value is below the average, so it requires special attention to the management of human resources. There are many risks in this section, the most important of which are generated by the events of the last period in the activity of SMEs, including a large number of layoffs and reduced periods of activity. The need to counter these types of risks should be analyzed as soon as possible so that the human resources involved can continue their work in optimal conditions.
Key employees
The rate for this risk section is 30%. According to the Human Resources Department, the key people are identified, but a strategy to cope with the loss of key employees is nonexistent. Another important issue in solving the problems related to the financing of key employees is recruiting a substitute. The funds for headhunting a successor must be in the managers' attention. In addition, managers may have to pay higher wages if it becomes necessary to recruit a new employee in the case of a loss of key personnel.
Political and credit risk
The score is 50% for this risk chapter, which is mainly due to one of the critical risks of the companies, namely counterparty risk, where the companies have a revenue collection problem. Credit insurance is insufficient to fight the very high degree of risk in this area, so more radical and complex measures are necessary to minimize the risk. In recent years, problems with exports have contributed to reducing this score. One of the most important challenges for the establishment, growth and survival of SMEs remains their access to funding. Though they provide 50% of the jobs in most developed countries, the SMEs attract only a very small percentage of total investment. Investment in SMEs in the G20 amounts to 574 billion, representing 6% of the total of 9,250 billion in all forms of investment, and the vast majority of the funds come from bank lending. Only 17.5% of Romanians consider initiating a business on their own within 6 months. This percentage puts our country in the 56th position out of 59 evaluated countries and ranks it last among the countries with efficiency-oriented economies. Almost half (45.99%) of the adult population of Romania avoids starting up a new company for fear of a possible financial failure. According to certain analyses and reports, the SMEs sector in Bacau County recorded a slight recovery in 2011 and 2012. Thus, although in 2011 the number of SMEs declined by over 4% compared to 2010, the turnover of these SMEs increased by 17% compared to 2010, while their gross profit grew by over 40% compared to 2010. The number of employees working in the SMEs sector in this part of the country also increased, by almost 6% compared to the same period of 2010. So, the vulnerabilities of the SMEs are mainly linked to the limitation of the necessary resources, to the entrepreneurs' financial decisions, to the changing business environment and to their fragile training for facing the various business risks. The productivity and profitability of SMEs reveal the precarious situation of the SMEs in Romania in terms of efficiency and competitiveness.
Conclusions
As a final conclusion, the SME sector in Romania is far from being a solid and competitive segment of the national economy. In order to consolidate the SMEs sector in Romania, a more rigorous approach is needed, based on the implementation of business and risk management strategies. Risk management is not an exact science; it is an art of approximation, whose quality increases with time and the capital spent. Small and medium companies do not have their own history of risk management, so it is recommended to introduce integrated future investment in information systems for risk management and to train the employees in the integration of risk management to enhance their work efficiency.
Fig. 1. The calculated scores for the topics in the questionnaire.
Table 1. Risk Assessment. | 2018-12-06T20:59:40.587Z | 2015-07-19T00:00:00.000 | {
"year": 2015,
"sha1": "7e824f83f906f72fb45d637a454a57a24c11e73e",
"oa_license": "CCBY",
"oa_url": "http://sceco.ub.ro/index.php/SCECO/article/download/311/290",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "7e824f83f906f72fb45d637a454a57a24c11e73e",
"s2fieldsofstudy": [
"Business"
],
"extfieldsofstudy": [
"Economics"
]
} |
76681858 | pes2o/s2orc | v3-fos-license | Role of ayurveda in planning strategies of "AIDS" control in India.
"AIDS" is a new clinical entity identified around 1980. There is a marked increase in the incidence of the disease in the USA. Since our country has a lot of interaction around the world, including the areas of occurrence of the disease and also that the risk factors and high risk group exist in our country there is a scare about the syndrome.
The areas of Ayurvedic interest in the problem have been identified, and a protocol of the action plan has been proposed and discussed in this paper.
THE PROBLEM
A marked disturbance in the immune functions, manifesting as a decrease of helper T cells, is termed AIDS. In addition to this there is a reversed ratio of helper T cells to suppressor T cells. It is the first disease in which an increase in suppressor T cells is found. This makes it somewhat easier to understand the clinical course, with continuous and overwhelming infections with agents such as Pneumocystis carinii, Toxoplasma, cryptococci, Mycobacterium avium and large viruses such as herpes. Kaposi's sarcoma is an important part of the syndrome, though not universal.
The syndrome commences with an ill-defined malaise, weight loss, lymphadenopathy and possibly fever. The next and most characteristic phase is one or more opportunistic infections and rare malignancies such as Kaposi's sarcoma. This clinical entity was first recognized around 1980. Up to 1985, 2,000 cases had been diagnosed in the world. On 1 August 1983, the USA had a total of 1,972 precisely diagnosed cases. Of these, 80% were from New York. By mid-1986 there were 22,000 diagnosed cases in the USA - 200 cases in children below the age of 13 years.
The etiological factor has been precisely identified as the "Human T-Lymphotropic Virus Type III" (HTLV-III), also commonly known as the "Human Immunodeficiency Virus" (HIV) or the Lymphadenopathy-Associated Virus (LAV). Given the present status of the problem in the world and looking to the Indian context, we have to first understand the modus operandi of the HTLV-III virus in causing the disease. Here one of the most important factors has to be identified, as the principles of Ayurveda in the prevention and management of disease are very clear: "the simplest way of treating a disease is preventing the cause"**
ICMR
The following body fluids have yielded, on isolation, some amount of virus (HIV) in infected patients (Friedland & Klein, 1987), which is the precisely known cause of the disease:
- Blood
- Semen
- Vaginal secretions
- Saliva
- Breast milk
- Tears
- Urine
- Serum
- Cerebrospinal fluid
- Alveolar fluid
However, it has been concluded on the basis of observations that the transmission of HIV occurs only through blood, sexual activity and perinatal events (Friedland & Klein, 1987). It is important to mention here that present knowledge about AIDS has specifically identified that it is most commonly seen in homosexual and bisexual males.
If we put two and two together, it is very clear that our problem in India is, at this stage, more of a scare, and the strategy of our nation, which has been very clearly defined, is more one of identifying and preventing. More so, since the disease has not been successfully treated so far anywhere in the world, every system of medicine must strive hard to provide very successful preventive measures.
Here is where Ayurveda can offer very valuable measures which may possibly save our country in particular and the world at large. The areas, needless to mention, are blood transfusion, perinatal services and sexual behavioral education.
AYURVEDA AND ITS CONCEPTS IN PREVENTING AIDS
Instead of entering into controversies about how AIDS should be placed within Ayurveda, we will embark upon the established practices of Ayurveda which may directly help in increasing the specific resistance of a person against HIV.
In this context the following points are relevant:
1. The concept of Vyadhiksamatwa in Ayurveda and the role of Oja in preventing diseases.
2. The concept of Achar Rasayan in the creation of a healthy society.
Hence there is little doubt that Oja has a definite bearing on the immunological functioning of the body, but it will be an oversimplification if we try to opine that "AIDS" is nothing but Ojaksaya and that the treatment or management described for Ojaksaya will be able to combat the process. However, even if it were so, Ojaksaya - which has been taken as synonymous with Sukraksaya and is sometimes indicated by laksanas similar even to those of Rasaksaya - is, under certain circumstances, Nispratikriya. We simply want to make this point because we vaidyas are in the habit of claiming things, sometimes things which do not match well with our sastrajnana, and that in turn maligns the whole science of Ayurveda. We should now exercise a definite amount of self-restraint before we fit "AIDS" into a framework, as even those who have identified the clinical entity are not yet sure what it is. They are trying to precisely understand the problem.
What to Do?
Now the major problem in this regard, for Ayurvedists and the world alike, is to plan the strategies which are likely to help humanity overcome this situation. For this, only the knowledge of Vyadhiksamatwa, Oja, Rasayana & Vajikara and the concepts of Dhatu Parinamana described in Ayurveda may be proposed as a help. Some concrete suggestions would be:
1. Ayurveda and Ayurvedists should voluntarily take up the challenge and widely propagate the knowledge in Caraka regarding Achara rasayana, which usually remains in talks and is usually not practised. As charity begins at home, the best way to propagate a thing is to practice it. Hence, let us search our hearts as to how many of us are practicing Achara rasayana - if we are not doing it ourselves, we must still educate all the people to do it.
2. Ayurvedists must create a social order which provides sexual education in an authentic way to society and build up pressure on institutions of basic education to do so from the very beginning.
3. The forums of Ayurveda must be utilized to highlight the importance of stiff legislation for tourists, immigrants and other such people who come to our country, as we Ayurvedists are also a big chunk of society which has a two-fold responsibility - one to ourselves as subjects exposed to that risk, and the other to society as an important instrument of the health delivery system of the country.
4. Last but not least is to ensure a very high scientific standard not only of our talks and evaluations but also of our handling of the stigma of AIDS, because we must realize the importance of the psychological impact that it would have on the personality of the person. For this I would suggest that a coordination group be created at the capital level, which should voluntarily provide an Ayurvedic think tank to the government and render information on Ayurvedic knowledge to the proper machinery responsible for fighting the menace.
5. The Ayurvedic authorities in the government and research institutions should leave behind their inertia and provide regular feedback, free of cost, to persons like us in the field on such newer clinical entities. I doubt very much that anything has been actively performed till now; if so, it is again lying dormant in files.
DISCUSSION
The latest update about AIDS appeared in Vol. 36 of MMWR of January 1988, which mentions that "The Public Health Service of the USA has emphasised that an individual be considered to have serologic evidence of HIV infection only after Enzyme Immunoassay (EIA) screening tests are repeatedly reactive and another test, such as Western blot (WB) or immunofluorescence assay, has been performed to validate the results". I quote this here simply to raise the issue that the information available to us is still doubtful even at the place of its origin. Under such circumstances, its relevance in the particular context of our country is even more doubtful. The cells and the people who claim to be all-powerful in planning health strategies have miserably failed in planning even strategies for simpler and well-established clinical entities. Therefore, instead of waiting for orders and grants, the Ayurvedic community should rise to save the country from this menace. The first among the priorities is to educate the Ayurvedic educationists regarding this. They are the torch bearers, and it is needless to mention their continuing education programme. Probably the people sitting at the helm of affairs feel that they only have to pass their time. This is a dangerous situation, and the Ayurvedic community is going to inherit a grossly damaged building of Ayurveda if this inertia is not diligently removed. In this regard, the authorities should liberally encourage people of Ayurveda to participate in the national strategy of combating AIDS.
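The two-step serologic criterion quoted above from MMWR can be expressed as a simple decision rule. The following is a minimal sketch for illustration only; function and field names are hypothetical, and it encodes only the logic stated in the quotation: repeatedly reactive EIA screening plus a positive supplemental test (WB or immunofluorescence assay).

```python
# Minimal sketch of the MMWR criterion: serologic evidence of HIV
# infection requires (a) repeatedly reactive EIA screening and
# (b) a validating supplemental test (Western blot or IFA).
def serologic_evidence(eia_results: list, supplemental_positive: bool) -> bool:
    """eia_results: reactivity of each EIA run (True = reactive)."""
    repeatedly_reactive = len(eia_results) >= 2 and all(eia_results)
    return repeatedly_reactive and supplemental_positive

print(serologic_evidence([True, True], supplemental_positive=True))   # True
print(serologic_evidence([True, False], supplemental_positive=True))  # False: not repeatedly reactive
```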
The antivirals being tried for AIDS, e.g. Suramin, HPA 23, Ribavirin, BW A509U (3-azido-3-deoxythymidine) and Foscarnet, have been reported to inhibit the replication of the virus to varying degrees, but so far the virus has resumed its activity as soon as the therapy has been stopped.
Toxic side effects preclude the long-term use of these drugs, so researchers are also emphasizing the restoration of immunity by bone marrow transplants, immune enhancers such as Interleukin-2, Interferon Gamma and thymic hormones, and a variety of synthetic chemical substances. In addition to this, the molecular biology of AIDS has been studied and the proteins involved have been characterized. All this may be made use of in creating a bridge between the knowledge of Ayurveda and the incapacity of modern medicine to combat the disease; but again, the apologetic attitude of the Ayurvedic community is undesirable. Ayurveda has got an opportunity, after a very long time, to move in an aggressive manner; the only thing is to identify the proper people in Ayurveda who can do so. If only chairs remain important, we are sure to miss that boat once again.
SUMMARY & CONCLUSIONS
A brief introduction to the disease has been given and the Indian context has been highlighted on the basis of statistical data. The Ayurvedic concepts which may be made use of have been briefly touched upon. An action plan has been suggested and discussed.
The above citations simply indicate the mechanism by which Ayurveda believes that the body is able to combat diseases. This is no different from the present-day understanding about the extension of this very knowledge in the form of immunology, and we Ayurvedists have an equal right to share this. Of course, it is high time that we shed any inhibitions and incorporate the relevant portions instead of only trying to search for ways and means to escape. Pranayatnamuttaman ....................... Susruta, SU 15/22. Susruta clearly mentions that "Oja" is the main Pranayata, which is Somatmaka and snigdha. | 2014-10-01T00:00:00.000Z | 1989-07-01T00:00:00.000 | {
"year": 1989,
"sha1": "38021a36660a068d2c02ed77c5bc8615d1604aba",
"oa_license": "CCBYNCSA",
"oa_url": null,
"oa_status": null,
"pdf_src": "PubMedCentral",
"pdf_hash": "38021a36660a068d2c02ed77c5bc8615d1604aba",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
233842033 | pes2o/s2orc | v3-fos-license | Zoonotic Vaccinia Viruses Belonging to Different Genetic Clades Exhibit Immunomodulation Abilities That Are Proportional to Their Pathogenic Potential
Abstract

Background
The Vaccinia virus (VACV) isolates, Guarani P1 virus (GP1V) and Passatempo virus (PSTV), were isolated from zoonotic outbreaks in Brazil and belong to two different VACV clades, as defined by biological aspects that include virulence in mice and phylogenetic analysis. Considering that information about how vaccinia viruses from different groups elicit immune responses in animals is scarce, we investigated such responses in mice infected by GP1V (group 2) or PSTV (group 1), using the VACV Western Reserve strain (WR) as control.
Methods
The severity of the infections was evaluated in BALB/c mice considering diverse clinical signs and defined scores, and the immune responses triggered by GP1V and PSTV infections were analysed by immune cell phenotyping and intracytoplasmic cytokine detection.
Results
Infected mice showed significant weight loss and developed spleen lesions as well as liver and lung damage. Mice infected with PSTV, however, developed only moderate clinical signs. We detected a reduction of total lymphocytes (CD3+), macrophages (CD14+), and NK cells (CD3-CD49+) in animals infected with VACV-WR or GP1V. VACV-WR was able to significantly downmodulate cell immune responses upon mice infection, and GP1V-infected animals also showed intense downmodulation of cell responses. In contrast, PSTV presented little ability to downmodulate mice immune responses.
Conclusions
Our results suggest that VACV immunomodulation in vivo is clade-related and is proportional to the strain virulence upon infection. Our data corroborate the classification of the different Brazilian VACV isolates into clades 1 and 2, taking into account not only phylogenetic criteria, but also clinical and immunological data.
Background
Members of the Poxviridae family present a double-stranded DNA genome that varies from 140 kbp to 300 kbp in size, depending on the virus species. Their genomes encode more than 150 genes, including some involved in immunomodulatory mechanisms [1][2][3]. In Brazil, Vaccinia virus (VACV) has been circulating in rural and wild environments for decades [4]. Brazilian VACV isolates are classified into two groups: group 1 (less virulent in a murine model of infection) and group 2 (more virulent in a murine model of infection) (Fig. 1). In addition to virulence in mice, this division reflects biological properties of the isolates such as plaque phenotype in BSC-40 cells and genetic differences, including the presence or absence of an 18-nucleotide deletion in the gene encoding the viral hemagglutinin protein (HA A56R) (see inset on Fig. 1) [5].
The Guarani P1 virus (GP1V) and Passatempo virus (PSTV), used in this study, were isolated in Minas Gerais, Brazil, in 2001 and 2003 respectively, from outbreaks in rural properties involving cattle and humans [6,7]. The infection by these viruses caused lesions on the udder, teats, snout and oral mucosa of cows and calves. Milkers usually develop hand lesions after unprotected contact with sick cows.
Clinical signs such as high fever, severe headache, back pain and lymphadenopathy were also reported by individuals handling infected cows [6,7]. Much of the replicative success of poxviruses is related to their capacity to obstruct, escape or subvert essential elements of their host's antiviral response. It has been proposed that the poxviruses' ability to downmodulate the host's immune response is directly proportional to their virulence in vivo [8]. Previous studies have already shown a difference in the modulation of the immune response in mice infected with non-replicating (Modified Vaccinia virus Ankara), attenuated (Vaccinia virus Lister), and replicative (Vaccinia virus Western Reserve) viruses [8,9].
The evaluation of infections caused by the naturally circulating VACV strains GP1V (group 2) and PSTV (group 1) represents an opportunity to analyse the immune responses triggered in the host as a result of infections with viruses presenting different virulence patterns. Such studies can then be compared to those including either laboratory strains of VACV or vaccine strains. Here we investigated the immune responses triggered in BALB/c mice infected with the GP1V or PSTV VACV strains, originating from zoonotic outbreaks in Brazil, compared to infections by the prototype of the genus Orthopoxvirus, Vaccinia virus Western Reserve (VACV-WR). Our results are important not only to better define patterns of immunomodulation in vivo caused by zoonotic vaccinia viruses, but also to corroborate the genetic classification of feral VACVs into two separate clades, which has lately been the subject of criticism.
Viruses
Samples of the GP1V and PSTV strains were kindly provided by Dr. Erna Kroon (Universidade Federal de Minas Gerais, Belo Horizonte, Brazil), and VACV-WR was gently provided by Dr. Bernard Moss (NIAID/NIH, Bethesda, USA). The three viral samples were grown and titrated (PFU/ml) in BSC-40 cells through plaque assay and purified on sucrose cushions, as described before [10].
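The PFU/ml titration mentioned above follows the standard plaque-assay calculation. The following is a minimal sketch for illustration; the plaque count, dilution and inoculum volume below are illustrative values, not data from the paper.

```python
# Standard plaque-assay titre: plaques divided by (dilution x volume).
def titer_pfu_per_ml(plaque_count: int, dilution_factor: float,
                     inoculum_volume_ml: float) -> float:
    """Return viral titre in PFU/ml from a countable plaque well."""
    return plaque_count / (dilution_factor * inoculum_volume_ml)

# e.g. 46 plaques from a 10^-6 dilution, 0.1 ml inoculated per well
print(f"{titer_pfu_per_ml(46, 1e-6, 0.1):.2e} PFU/ml")  # -> 4.60e+08
```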
Infection of mice and clinical signs

BALB/c mice used in this study were obtained from UFMG's central animal facility (Belo Horizonte, Brazil) and maintained in our experimental animal facility (Departamento de Microbiologia, Belo Horizonte, Brazil). Animals were kept in ventilated cages with food and water ad libitum. All in vivo procedures were approved by the Committee of Ethics for Animal Experimentation (CETEA) from UFMG, under permission 9/2019. Six-week-old male mice were separated into groups infected with GP1V, PSTV, VACV-WR or mock-infected (control). Animals were anaesthetized by intra-peritoneal injection of ketamine and xylazine (70 mg/kg and 12 mg/kg of body weight in phosphate-buffered saline [PBS], respectively). The intranasal (i.n.) route was used to inoculate PBS or 10 µL of purified viruses diluted in PBS in subgroups of five animals each (for clinical signs and weight loss evaluation) or subgroups of seven animals each (histopathological analysis, splenocyte preparation, immunophenotyping and detection of intracellular cytokines).
In order to monitor the infection from a clinical perspective, groups of five animals each were inspected daily, starting from the inoculation day. Inoculums of 10⁴, 10⁵ or 10⁶ PFU of PSTV, VACV-WR or GP1V were used to infect the animals, whilst the control group was inoculated with sterile PBS. Weight loss and clinical signs were evaluated for 10 days and registered.
Histopathological analysis
Seven days post inoculation of 10⁶ PFU of GP1V, PSTV, VACV-WR or PBS, animals were euthanized and had spleens, lungs and livers harvested for histopathological analysis. Fragments of organs were fixed with formalin for 24 hours and dehydrated with increasing concentrations (from 70 to 100%) of ethanol. Tissue fragments were diaphanized in xylol and embedded in paraffin. The segments were sectioned in a microtome (5 µm) and stained using Hematoxylin and Eosin. A pathological characterization of these slide preparations was performed considering the presence and distribution of inflammation, edema and pulmonary hemorrhage, along with hepatic and splenic reactive degeneration, through the attribution of clinical scores [11][12][13].
Splenocyte preparation, immunophenotyping and detection of intracellular cytokines
In order to evaluate the production of IFN-γ and TNF-α by CD4+ and CD8+ T lymphocytes, mice were inoculated with 10⁶ PFU of PSTV, GP1V or VACV-WR through the intranasal route. Splenocytes were obtained through maceration of their spleens. For erythrocyte lysis, cell extracts were resuspended in deionized water and incubated on ice; 10X PBS was used to stop the lysis process. Cell proliferation assays were performed through splenocyte labelling with Bromodeoxyuridine (BrdU). 96-well plates containing 2×10⁵ spleen cells per well received a stimulus of 10⁴ PFU of purified UV-inactivated VACV-WR, concanavalin A (ConA) or just RPMI medium (Gibco, Carlsbad, USA). Plates were incubated at 37°C in a 5% CO₂ atmosphere for 72 hours. The evaluation of cell proliferation was performed according to the manufacturer's instructions. For the purpose of detecting intracellular cytokines, 10⁷ cells extracted from the macerated spleens were stimulated overnight with UV-inactivated VACV-WR (10⁴ PFU) and incubated for 4 h at 37°C with Brefeldin A (Sigma, MO, USA) at 1 mg/ml. Then, cells were washed in FACS buffer and stained with anti-CD4 and anti-CD8 antibodies (BD Pharmingen, NJ, USA) for 30 min at 4°C in the absence of light. Cells were permeabilized with FACS buffer containing 0.5% saponin, and then stained with mouse anti-TNF-α, anti-IFN-γ, anti-IL-4 and anti-IL-10 antibodies (BD Pharmingen, NJ, USA) for a further 30 min at room temperature. A new washing step with FACS buffer containing 0.5% saponin was followed by two steps with FACS buffer only. Cell preparations were stored at 4°C in the absence of light after fixation using FACS fix solution. A FACSCalibur cytometer (Becton, Dickinson, NJ, USA) was used for flow cytometry, and further analyses were performed using FlowJo software (TreeStar Inc., OR, USA), gating on granularity (SSC) versus size (FSC) parameters.
Statistical analysis
The data were compared by analysis of variance (ANOVA) using the Tukey post-test and the parametric Student's t-test. P values under 0.05 were considered significantly different. Statistical analyses were performed using Prism 8 software (GraphPad Software).
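The same pipeline (one-way ANOVA with Tukey post-test and Student's t-test at α = 0.05) can be reproduced outside Prism. The following is a minimal sketch using SciPy and statsmodels; the weight-loss values are fabricated placeholders for illustration only and are not data from the paper.

```python
import numpy as np
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# Placeholder % weight-loss values per group (n = 5), for illustration only.
mock = np.array([1.0, 0.5, 1.2, 0.8, 1.1])
pstv = np.array([8.0, 9.5, 7.2, 8.8, 9.1])
gp1v = np.array([27.0, 29.5, 28.2, 26.8, 30.1])

# One-way ANOVA across the three groups.
f_stat, p_value = stats.f_oneway(mock, pstv, gp1v)
print(f"ANOVA: F = {f_stat:.2f}, p = {p_value:.4g}")

# Tukey HSD post-test for all pairwise comparisons, alpha = 0.05.
values = np.concatenate([mock, pstv, gp1v])
groups = ["mock"] * 5 + ["PSTV"] * 5 + ["GP1V"] * 5
print(pairwise_tukeyhsd(values, groups, alpha=0.05))

# Parametric Student's t-test for a single two-group comparison.
t_stat, t_p = stats.ttest_ind(pstv, gp1v)
print(f"t-test PSTV vs GP1V: t = {t_stat:.2f}, p = {t_p:.4g}")
```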
Clinical signs in infected mice
All infected mice showed dose-dependent clinical signs typical of VACV infection, including piloerection and weight loss (Fig. 2). The severity of clinical signs in animals infected with 10⁶ PFU of PSTV was considered moderate, and the maximum weight loss was less than 10% of the animals' original weight throughout the experiment (Fig. 2A). Animals infected with 10⁶ PFU of GP1V lost up to 28.84% of their initial weight (Fig. 2B). Weight loss in animals infected with 10⁶ PFU of VACV-WR was close to 30% (Fig. 2C). In addition to piloerection and weight loss, mice infected with VACV-WR developed the most severe clinical signs when compared to animals infected with the other VACV strains in this study. Clinical signs included accentuated arched backs, swelling of the face, and conjunctivitis.
Histopathological analysis of liver, spleen and lung
The pathological characterization of the lungs, livers and spleens of mice uninfected or infected with 10⁶ PFU of GP1V, PSTV or VACV-WR was performed through the attribution of clinical scores. Scores were given in relation to the severity and distribution of the evaluated parameter in the tissue, and these values were converted to a total score. For severity and distribution, the scale adopted scores from 0 to 5 or from 0 to 4, respectively, in which the maximum value corresponds to greater severity and distribution of the pathological alteration in the studied tissue. Animals infected with GP1V or VACV-WR developed more intense liver inflammation and degeneration when compared to animals of the control group and to those infected with PSTV (Fig. 3). All infected mice showed higher levels of splenic reactivity (hyperplasia of the splenic white pulp) and pulmonary inflammation when compared to mock-infected animals. Greater levels of edema and pulmonary haemorrhage were found in the GP1V- and VACV-WR-infected animals compared to the uninfected controls and PSTV-infected animals. The histological findings are consistent with the observed clinical signs.
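The severity/distribution score conversion described above can be sketched as follows. This is a minimal illustration: the text states the two scales (severity 0-5, distribution 0-4) but not the exact combination rule, so their product is an assumption here, and the lesion values shown are hypothetical.

```python
# Minimal sketch of converting severity (0-5) and distribution (0-4)
# scores into a total score; the product rule is an assumption, as
# the paper does not state the exact formula.
def total_score(severity: int, distribution: int) -> int:
    assert 0 <= severity <= 5 and 0 <= distribution <= 4
    return severity * distribution  # assumed rule: higher = worse

# Hypothetical per-parameter (severity, distribution) scores for one lung.
lesion_scores = {"inflammation": (4, 3), "edema": (2, 2), "hemorrhage": (1, 1)}
for parameter, (sev, dist) in lesion_scores.items():
    print(parameter, total_score(sev, dist))
```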
Splenocyte proliferation after VACV stimulation and subpopulation characterization of immune cells elicited during VACV infection
Spleen cells' proliferation rates were higher in samples of infected mice stimulated with ConA compared to those stimulated with the UV-inactivated VACV-WR (Fig. 4). This increase was similar for splenocytes from GP1V- and PSTV-infected animals and less pronounced for VACV-WR-infected animals.
Analyses of the splenic immune cell subpopulations, including T-helper cells, B lymphocytes, NK cells and monocytes, from uninfected mice or animals infected with 10⁶ PFU of GP1V, PSTV or VACV-WR were performed (Fig. 5). A reduction in the frequency of CD3+ cells was observed in the animals from the groups infected with VACV-WR and GP1V in comparison to the group infected with PSTV and uninfected controls (Fig. 5A). A higher statistical significance was found when PSTV-infected or uninfected animals were compared to VACV-WR than when compared to GP1V. Although no differences were detected in the frequency of CD4+ cells in the evaluated groups (Fig. 5B), CD8+ cells were more frequent in the groups infected with GP1V and VACV-WR, respectively, in comparison to those infected with PSTV or uninfected controls (Fig. 5C). There was a decrease in the frequency of CD14+ and CD3-CD49+ cells in the groups infected with VACV-WR and GP1V when compared to the PSTV and uninfected control groups (Fig. 5D-E). The CD14+ subpopulation was even less frequent in animals infected with VACV-WR than in animals infected with GP1V. The unique ability of VACV strains to downmodulate subsets of the host immune cells has been described [9], including in human infections by zoonotic samples of Brazilian VACV [8].
The activation profile of CD8+ T cells, observed through analysis of the expression of the CD28 molecule, suggested that only animals infected with VACV-WR had a significant reduction in the activation profile of these cells (Fig. 6A). The modulation of CD8+CD28+ cells after infection by the VACV-WR virus in mice has been demonstrated previously, as opposed to infections by the VACV Lister and Modified Vaccinia virus Ankara strains [8]. By analysing B lymphocytes expressing CD80, we observed that the frequency of this cell subset decreased similarly in mice infected with VACV-WR and GP1V in comparison to animals inoculated with PSTV (Fig. 6B). On the other hand, animals infected with PSTV presented more CD80+ B cells than uninfected control animals (Fig. 6B). Compared to the control and PSTV groups, the frequency of CD19+CD86+ cells in animals infected with VACV-WR was lower.
Similarly, the GP1V group showed a significant reduction in these cells compared to the PSTV group (Fig. 6C). This reduction in the frequency of CD80+ and CD86+ B lymphocytes was also reported for individuals affected by bovine vaccinia in Brazil [9].
Production of lymphocytic cytokines during infection by different VACV
We observed a general decrease in IFN-γ-producing CD4+ T lymphocytes in the infected groups (PSTV, GP1V and VACV-WR) compared to the uninfected controls, in cells stimulated or not with UV-inactivated VACV-WR (Fig. 7A). The same trend was not observed when the IFN-γ-producing CD8+ T cells were analysed (Fig. 7B). The production of TNF-α by CD4+ T lymphocytes was similar for all groups evaluated, whether or not they received virus-antigen stimulation (Fig. 7C). Although the levels of TNF-α-producing CD8+ lymphocytes were slightly higher in animals infected with PSTV compared to all other groups (Fig. 7D), it was clear that the effect of VACV infection on TNF production by T cells is much more subtle than on IFN-γ production.
Discussion
The importance of innate, cellular and humoral immunity components in fighting Orthopoxvirus infections has been demonstrated in several studies. The depletion of macrophages in mice results in their inability to control infection by vaccinia virus [14]. Likewise, the decline of NK cell levels in C57BL/6 mice culminates in increased ectromelia virus (ECTV) titres and disease severity [15,16]. Complement-deficient mice developed more severe disease when infected with cowpox virus [17]. Evaluation of cytokines such as IFN (types I and II) and TNF also confirmed the key role of these molecules in the innate immune response against orthopoxviruses [18][19][20][21]. Both cellular and humoral responses are highly coordinated and require the combined activity of B and T lymphocytes. The primary infection of mice by ECTV cannot be controlled exclusively by CD8+ T lymphocytes [22], and the production of antibodies by B lymphocytes is also essential in disease control, reinforcing the functional complementarity of the immune response to poxvirus infections. This interaction between B and T cells is also crucial in subsequent exposures to these viruses [23]. Nonetheless, poxviruses are capable of encoding several proteins that are related to the evasion of the immune response [24,25]. Indeed, it has been demonstrated that poxviruses infecting humans are able to significantly modulate components of the host-specific immune response [26,27]. Likewise, many studies have demonstrated the immunomodulatory ability of poxviruses in animal infections [8,16,25,28]. Therefore, the viruses' ability to block, escape or subvert the essential elements of the antiviral response is essential for their replicative success in the host [24].
The Brazilian VACV isolates have been divided into two distinct groups. This classification considers characteristics such as the virulence of these isolates in a murine model, which in turn is linked to intrinsic genetic differences in their respective genomes. Analyses of how different zoonotic VACV isolates interact with their hosts, as well as other virological and biological aspects, could reinforce and support their segregation and classification into different genetic groups.
In this study, we showed how VACVs that belong to genetically different groups are able to modulate the immune response in mice in distinct patterns. Infections with VACV can lead to the appearance of clinical signs such as piloerection, weight loss, back arching, and facial edema. Nonetheless, animals infected with different VACV isolates show these signs differently [8,29]. Ferreira and colleagues have demonstrated that infection by VACV-WR and GP1V in mice led to the appearance of signs such as piloerection, back arching, periocular alopecia and 25% weight loss. In contrast, the same study showed that animals infected with PSTV and other VACVs belonging to group 1, such as the Araçatuba virus and GP2V samples, did not exhibit typical clinical signs of the infection and did not experience marked weight loss. We replicated these experiments and observed that animals infected with GP1V and WR presented the typical symptoms of infection by VACVs belonging to group 2. On the other hand, mice infected with PSTV did not manifest significant symptoms after virus inoculation.
Poxviruses have an extensive capacity to infect different hosts. However, viral multiplication rates vary according to the host species, considering that they depend on host-specific antiviral mechanisms [30]. The acute infection initiated in the lung after VACV intranasal inoculation can spread to other organs of the host [29]. One hundred percent of the animals inoculated with PSTV, GP1V or VACV-WR showed chronic interstitial pneumonia. The liver and spleen were also compromised by infection with the viral samples, indicating that PSTV, GP1V and VACV-WR multiply initially in the lungs, spreading to other organs and causing systemic disease. We also found that only VACV-WR was able to cause pulmonary haemorrhage in the animals. The histopathological evaluation of the samples showed that PSTV is associated with a lower degree of liver and splenic damage when compared to the other studied viruses (as shown in Fig. 3), similarly to what was described by Ferreira and collaborators [29].
As previously reported, both cellular and humoral immunity are important for controlling infections triggered by Orthopoxvirus. Cell proliferation analysis is a way to detect the presence of antigen-specific lymphocytes, in order to obtain information about the cellular response induced by the infection. Gomes and collaborators [9] performed cell proliferation experiments with human peripheral blood mononuclear cells (PBMCs) from individuals naturally infected with zoonotic VACV. They observed that, after mitogenic and antigenic stimulation, individuals naturally infected with VACV showed a significant proliferative cell response compared to uninfected individuals. Similarly, our results showed increased levels of cell proliferation after stimulation with VACV-WR in cells from animals infected with the WR and PSTV samples (Fig. 4).
To deceive the cellular and humoral immune response, poxviruses encode several proteins capable of modulating their hosts' immune systems. Gomes and collaborators [9] also showed a lower frequency of CD14+ cells and an increase in CD8+ cells in humans infected with zoonotic VACVs. The immunomodulation of these cell subsets suggests that such cells are important in controlling primary infection, preventing viral multiplication in infected cells. Furthermore, several studies have shown that the depletion of CD4+ T lymphocytes, macrophages and NK cells leads to greater disease severity in mice inoculated with VACV [16,18,31]. Some authors suggest that primary VACV infection does not appear to be controlled solely by the activity of CD8+ T lymphocytes [9,22]. Overall, these viruses have developed specific downmodulation mechanisms for most immune cells that are important to counter the infection. Our data reflect the differences in the patterns of immune responses triggered by different VACV strains and their different abilities to downmodulate such responses, culminating in distinct patterns of virulence. Infections by the GP1V and WR viruses (VACV group 2) resulted in a robust CD8+ T response, unlike in the animals infected with the PSTV sample (VACV group 1), which presented immune patterns similar to those observed in the mock-infected group. In addition, a reduction in total lymphocytes, NK cells and macrophages was observed in the group infected with VACV-WR. However, once again, the group infected with the sample belonging to the phylogenetic group whose virulence characteristics in mice are milder or non-existent did not show variation in these cell groups, presenting a global profile similar to that of the mock-infected animals. The cell activation patterns were also different when different VACV strains were inoculated into mice. VACV-WR- and GP1V-infected animals showed a tendency towards CD19+CD80+ cell downmodulation when compared to the uninfected controls. This was also observed in the study of VACV infections in humans [21]. Antibody production by B lymphocytes is essential to control infections caused by VACV [24]; therefore, it is not surprising that these viruses have developed countermeasures that inhibit the activation of such cells. Mice infected with VACV-WR showed lower expression of CD28 in CD8+ T lymphocytes when compared to uninfected controls. Similarly, it has been suggested that this molecule is responsible for enhancing the activation of T cells after infection of mice with this VACV isolate [8].
Cytokines are secreted water-soluble proteins that act as mediators of immune responses, with autocrine and/or paracrine action. VACVs produce virokines and viroceptors that mimic molecules of the host's immune system, mainly affecting IFN, TNF and other cytokines [32,33]. We also observed a reduction in IFN-γ production by CD4+ T lymphocytes in animals infected with GP1V and VACV-WR after antigen stimulation (Fig. 7A). This cytokine participates in the activation of macrophages, the stimulation of inflammation and the mounting of Th1-type responses, all essential for the control of viral infections [34].
The reduction of IFN-γ production by these cells in GP1V- and VACV-WR-infected mice emphasizes the immunomodulatory capacity of these viruses as opposed to the PSTV strain. It corroborates the different virulence patterns found in the literature and observed in a murine model in this work [29].
Our data support a model in which the primary immune response to acute Orthopoxvirus infection involves macrophages/monocytes and possibly CD4+ T cells, whereas the CD8+ lymphocyte-mediated response would have a secondary role in infection control. The observation of modulation of those compartments in both humans [9] and mice reinforces this hypothesis [11]. Finally, we demonstrated here that zoonotic vaccinia viruses belonging to different clades exhibit immunomodulation properties that are proportional to their pathogenic potential. These observations reinforce the idea that the segregation of zoonotic VACVs into two distinct clades/groups reflects not only genetic differences, but distinct virological and biological aspects as well.
Conclusions
Our data suggest that zoonotic VACVs belonging to different clades exhibit immunomodulation properties that are proportional to their pathogenic potential.
List Of Abbreviations
BrdU -Bromodeoxyuridine | 2021-05-07T00:03:57.079Z | 2021-03-01T00:00:00.000 | {
"year": 2021,
"sha1": "56411b9c5065d7942d26965efab6808f7c3e0e00",
"oa_license": "CCBY",
"oa_url": "https://www.researchsquare.com/article/rs-251883/v1.pdf?c=1631891812000",
"oa_status": "GREEN",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "2bb968183db88e1946df3b9061df8c6503f8d995",
"s2fieldsofstudy": [
"Biology",
"Medicine"
],
"extfieldsofstudy": [
"Biology"
]
} |
118416593 | pes2o/s2orc | v3-fos-license | Determination of refractive index of various materials at the Brewster angle
We experimentally studied the origin of the non-zero reflection of p-polarized (TM) radiation at Brewster's angle. The results have shown that the residual reflected light in the vicinity of the Brewster angle occurs because a 100% degree of polarization of the incident linearly polarized radiation and an exactly zero azimuthal angle are unattainable. These factors indeed create an s-component in the radiation reflected from the examined surface. The smooth change of the reflected light polarization in the vicinity of the Brewster angle in the sequence p-s-p appears due to the changing power proportion of the reflected p- and s-components, and is not the result of an atomically thin transitional layer at the material/environment border according to the Drude model. Metrological aspects of refractive index measurement at the Brewster angle are investigated: errors arise from the above-mentioned factors, as well as from the contribution of reflected scattered light caused by residual roughness of the optical surface. Advantages of Brewster refractometry for any materials and films, without restrictions on the topology of samples or their light scattering and absorption, are demonstrated.
Introduction
In works [1,2], carried out within the framework of Fresnel's theory, measurements of the refractive index (RI) at normal incidence and at the Brewster angle of laser radiation were compared. It was noted that Brewster refractometry has the advantage of a single systematic error in the absence of restrictions on sample topology and RI magnitude. The related method of ellipsometry [3] is inferior to the Brewster method because of the need to measure three parameters: the reflected power of s- and p-polarized light and the selected angle of their registration.
The error of Brewster angle determination has, in turn, two different sources. The first is connected with setting the zero reference angle relative to the sample surface. This task is solved by collimation-based superposition of the direct and return laser beams with a typically small (<1') angular divergence. The second source is connected with setting the zero azimuthal angle (α=0), which demands superposition of the polarization plane and the plane of incidence of the radiation on the tested surface. At the same time, the degree of polarization (DP) of the radiation contributes to this error: it relaxes the requirement on the setting of the zero azimuthal angle to the level set by the DP. Thus, both factors reduce the accuracy of determining the Brewster angle from the minimum of the residual reflected power.
It is also appropriate to note that the registered residual radiation, and the change of its polarization in the narrow vicinity of the Brewster angle in the sequence "p-s-p", was considered a violation of the classical Fresnel theory. This violation was removed by introducing the hypothesis of the existence and influence of a super-thin (<<λ) boundary layer on any material, with the corresponding Drude addition l to the Fresnel theory [4]. Such a monolayer cannot be attributed to a depth-homogeneous RI transition from one medium to another.
Drude's model was tested by Rayleigh on a number of liquids whose surfaces were considered more optically homogeneous than those of rigid bodies. The experiment with the change of polarization at reflection in the small vicinity of the Brewster angle was treated within Drude's addition to the Fresnel theory [4]: (1) In formula (1), the ellipticity of the reflected light is determined by the ratio of the reflectivities Rp,s for radiation with p- and s-polarization, depending on the RI=n of the material, the wavelength of measurement λ, and differences of certain combinations of RI for the given film. Rayleigh measured the ellipticity value and found the thickness of the transitional layer l for water: Rp/Rs = 42×10⁻⁵, l = 0.00057λ. Thus, in Drude's model, the ellipticity of radiation and the s-polarization component in the small vicinity of the Brewster angle arise upon reflection of the incident p-polarized radiation from the super-thin transitional layer, which does not influence the value of the Brewster angle of the main material.
The results of the present work, taking into account the unavoidable contribution of s-polarized radiation due to a degree of polarization of the radiation below 100% and an azimuthal angle that is never exactly zero, allow the seeming deviation from Fresnel's theory to be explained without appealing to a transitional layer and the Drude model based on it.
In light of the remarks made, we will estimate the influence on the measured RI data of the non-zero azimuthal angle and of the less-than-100% DP of the laser radiation used in the measurements. Indeed, even CW lasers with Brewster elements inside the cavity never provide radiation with DP = 100%.
The normalized power of the reflected light as a function of the incidence angle ϕ, the azimuthal angle α and the RI=n is given within Fresnel's model by the following expression: (2) At values α = 90° or 0°, the reflected power of linearly polarized light of s- or p-orientation is described by the first or the second member of (2), respectively. The reflected power of depolarized light, or of light with circular polarization, contains equal contributions from both members of expression (2). The condition under which the sum of the angles of incidence and refraction is ϕ+β=π/2 leads, according to Fresnel (2) (at α=0), to the reflected power of p-orientation being reduced to zero owing to the transverse structure of the electromagnetic field. After the discoverer of the phenomenon, this angle bears Brewster's name.
At α ≠ 0, the first member of (2) contributes to the reflection an s-polarized component derived from the incident p-polarized radiation, even at DP = 100%. While the reflection of p-polarization tends to zero at the Brewster angle, the reflection of s-polarized radiation grows continuously with the incidence angle toward 100%, so the total reflected power forms a minimum shifted towards smaller angles. The magnitude of this shift is proportional to the contribution of the s-polarized components [1,2]. Similarly, the angular position of the minimum of reflected power (MRP) depends on the DP of the p-oriented component of the testing beam even in the case α = 0.
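To make this concrete, the following short Python sketch — our own illustration, not the authors' code — evaluates expression (2) numerically for the K8 index n = 1.5145 quoted later in the text and for several assumed azimuthal misalignments, and locates the resulting MRP; it reproduces the predicted shift of the minimum below arctan(n).

import numpy as np

def fresnel_Rp_Rs(phi, n):
    # Power reflectivities for p- and s-polarization at incidence angle phi
    # (radians), for reflection from air onto a transparent medium of index n.
    cos_t = np.sqrt(1.0 - (np.sin(phi) / n) ** 2)  # cosine of refraction angle
    rp = (n * np.cos(phi) - cos_t) / (n * np.cos(phi) + cos_t)
    rs = (np.cos(phi) - n * cos_t) / (np.cos(phi) + n * cos_t)
    return rp ** 2, rs ** 2

n = 1.5145                                   # K8 glass, state-standard value
phi = np.radians(np.linspace(50.0, 62.0, 120001))
Rp, Rs = fresnel_Rp_Rs(phi, n)
for alpha_deg in (0.0, 0.5, 1.0, 2.0):       # assumed azimuthal misalignments
    a = np.radians(alpha_deg)
    total = Rs * np.sin(a) ** 2 + Rp * np.cos(a) ** 2   # expression (2)
    mrp = np.degrees(phi[np.argmin(total)])
    print("alpha = %.1f deg -> MRP at %.3f deg (Brewster %.3f deg)"
          % (alpha_deg, mrp, np.degrees(np.arctan(n))))

At α = 0 the minimum sits exactly at arctan(n); as α grows, the growing s-contribution pulls the minimum to smaller angles, as described above.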
The way out of this difficulty, which limits the accuracy of RI measurement from the angular MRP of p-polarized radiation, consists in placing a polarization analyzer in front of the reflected-power receiver and tuning it to suppress the arising s-polarized component. This decision was validated in all subsequent RI measurements of different materials, in particular materials with a rough tested surface.
Following the standard definition of DP = p, from relation (2) at α ≠ 0 one obtains an expression for the DP of the reflected beam:

p = |Rp cos²α − Rs sin²α| / (Rp cos²α + Rs sin²α),   (3)

where Rs and Rp are the Fresnel reflectivities from formula (2). From (3) it follows that at α ≠ 0 the DP of the reflected beam always changes with the incidence angle. At the same time, at the Brewster angle the reflected beam is fully s-polarized (p = 1), irrespective of the magnitude of the azimuthal angle α. At α = 0 the reflected radiation at the Brewster angle is absent altogether only if the incident radiation is characterized by an ideally complete degree of polarization.
In what follows, results of RI measurements and their analysis are presented for materials of different topology with absent or weak absorption, when the real part of the RI exceeds the attenuation index by many orders (n² ≫ ϰ²).
Determination of Brewster angle at one reflecting surface
Determination of the Brewster angle for materials of different topology (volume samples with one optical surface; plates and films with two optical surfaces) was carried out using CW lasers of different wavelengths, with minute divergence and power at the (1–20) mW level. The angular step of sample rotation was set by a step motor and reducer and was equal to 0.024 arc-minutes. Measurement of radiation power (in relative units — volts) was performed by means of a handmade photometer on silicon photodiodes PhD-24 in the short-circuit mode, in which the photocurrent is directly proportional to the luminous flux over up to 3 orders of magnitude [5]. Magnitudes of the analog signal (0.01–4.5 V) were digitized by a 10-bit ADC and entered into the computer for processing and graphical representation. The dark-noise level was below 1 mV and partially depended on the electronic amplification of the photometer. The receiving photodiode, together with the studied sample, was mounted on a goniometer head with a reference scale of 1 angular minute. The goniometer allowed the position of the reflected beam to be kept on the photodetector at all angular changes of the incident beam. A reference signal from a similar photodetector was detected and used to correct the indications of the signal channel when power fluctuations of the measuring laser took place. When the changes of the reflected signal power reached 3 orders of magnitude, attenuating light filters were used to reduce the nonlinear response of the photometer to the input light power. Starting values of the DP and laser power were measured with a dichroic polaroid in both channels by the above-mentioned dual-channel photometer. The change of the linear beam polarization state between s- and p-orientations with respect to the plane of incidence was performed by rotating the diode laser around its optical axis by π/2. Typical results of the measurement of the MRP angle for p-polarized radiation from K8 glass, obtained with the described registration procedure, are shown in fig. 1. From the aforesaid it follows that the residual power at the MRP depends on the emission DP and on the azimuthal angle α differing from zero. Besides, the power at the MRP also depends on the optical roughness of the surface, which causes light scattering with some depolarization [6].
At incidence angles up to ≈20° the decrease of DP is practically absent, owing to the small difference of the reflectivities (2) under a change of the azimuthal angle. For example, the DP of incident He-Ne radiation (632.8 nm) equals 99.92%, but after reflection from an optically polished surface of K8 glass at an azimuthal angle of 70° it reduces to 99.62% (a distinction of 0.33%). The unattenuated part of the depolarized components of light grows with the incidence angle and is registered in the vicinity of the Brewster angle. The procedure of RI measurement at the Brewster angle from the MRP of p-polarized radiation, on the example of K8 glass and λ = 632.8 nm, is presented in fig. 1. Without the mentioned polarization filter (analyzer) on the photodetector, a small difference of the measured RI value from that given by the Russian state standard is observed: ϕ₁ = 56.333°, RI = 1.5013 versus ϕ₂ = 56.566°, n = 1.5145, respectively. The difference was eliminated by proper fitting of the polarization analyzer suppressing the s-component of the reflected radiation.
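As a quick consistency check (our own illustration, not part of the original measurement procedure), the Brewster relation n = tan(ϕB) can be applied directly to the angle/RI pairs quoted above:

import math

pairs = [(56.333, 1.5013),   # MRP without the analyzer
         (56.566, 1.5145),   # state-standard value
         (56.545, 1.5134)]   # MRP with the s-suppressing analyzer (see below)
for phi_deg, n_quoted in pairs:
    n_from_angle = math.tan(math.radians(phi_deg))
    print("phi = %.3f deg -> tan(phi) = %.4f (quoted RI %.4f)"
          % (phi_deg, n_from_angle, n_quoted))

Each quoted RI agrees with tan(ϕ) to the fourth decimal place, confirming that the reported values follow the simple Brewster relation.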
Let us turn to the analysis of the residual radiation in the vicinity of the Brewster angle, important both for understanding the physical phenomenon and for justifying the procedure of RI measurement at the MRP. Today's understanding of the phenomenon was given in the introduction. It is based on the hypothesis of an ultrathin transitional layer existing at the border with the environment (air). The RI of such a two-dimensional film can change smoothly from the RI value of air to that of the tested material. In some approximation, the state of an intermediate layer can be compared to a bulk material with homogeneously distributed pores whose diameters are smaller than the wavelength of the measuring light. With an increase of pore concentration, the effective RI of such a material changes from the RI of the continuous material to the RI of air [11]. It follows that Brewster-type reflection from such a film would be present over a wide sector of incidence angles, since the RI of the film would be "smeared" over a wide range between the values of the bounding homogeneous materials, so that registering a single MRP for such films would not be possible.
To study the behavior of the reflected light of p-polarization and of the irremovable s-polarized component, their reflection was registered over a wide sector of incidence angles from 10° to 70°. The polarized radiation of an injection diode laser, λ = 660 nm with DP = 99.9%, fitted to an azimuthal angle α ≅ 0 in the plane of incidence, on an optical-quality surface of K8 glass was used. The analyzer on the photodetector was fitted in turn in two orthogonal orientations, passing the p- or the s-component of polarization of the reflected light. Both dependences (fig. 2) contain the information needed for understanding the behavior of light in the Brewster minimum and its influence on RI measurement accuracy. The ratio of the reflected radiation powers of p- and s-polarization versus angle shows how the initially high DP of the total radiation at small angles (99.66%) decreases, down to a change of sign in the vicinity of the Brewster angle. At the intersection points of the 2S and 1P dependences of fig. 2, in the vicinity of the Brewster angle, the polarization of the total radiation disappears altogether; within the crossing of the curves the polarization becomes elliptic, and outside this area the DP of the total radiation recovers its initial value. Fig. 2. Angular dependence of the reflected radiation power for K8 glass: the DP after reflection at an angle of 10° is 99.66%; the 1P dependence is the power of the p-polarized part, 2S the power of the s-polarized component; points a, b, c, d on curve 1P correspond to stepwise increases of the input power: 70.73×24.4×2.9×1 = 5004.85 times. The behavior of the total power (1P+2S), if considered as p-polarized radiation with DP = 100%, would lead to the conclusion that in the vicinity of the Brewster angle the radiation experiences a phase jump of π radians, accompanied by the emergence of ellipticity and of a nonzero reflected p-polarized power at the Brewster angle. Such behavior of radiation with DP = 100% and α = 0 contradicts the basic Fresnel model, in which the polarization jump of π at the Brewster angle is not accompanied by any reflected radiation at all.
In our measurements of the reflected p-polarized radiation (curve 1P, fig. 2) no additional changes of its power due to a polarization jump are revealed, while the total power dependence 1P+2S in fig. 2 has all the signs of such behavior, i.e. a change of polarization in the narrow vicinity of the Brewster angle and the appearance of some small reflected power. For experimental verification of our measurements and, respectively, our conclusions, we measured the phase jump of p-polarized radiation with azimuthal angle α ≅ 0 at a single total internal reflection (TIR) on a rectangular prism of K8 glass. In Fresnel's theory the magnitude of the phase jump for this case is given by expression (4) [4]. The manifestation of the phase jump for p-polarized radiation in this example was registered through the power change of the arising s-polarized component in a narrow sector of incidence angles between the onset of TIR and normal reflection without phase change. The analyzer in front of the photodetector was installed to transmit the arising s-polarized light component. The multiple increase of the power of the s-polarized part of the reflected beam, arising due to the phase shift at TIR in a narrow transitional sector of 1°, is shown in fig. 3. For RI = 1.5 and ψ = 45° the estimated phase jump reaches ≅28.07°. (We do not provide similar calculations for the s-components and their differences (δp − δs) in view of the small amplitude of the latter.)
Thus, an arbitrarily small power fraction of s-polarized radiation at reflection always remains, both owing to the unattainability of the absolute values DP = 100% and α = 0 for the input light, and owing to its depolarization, caused by light scattering on irremovable atomic- and molecular-scale roughness of the surface. Below we show the presence of very small light scattering at reflection even from the atomically clean surface of mica plates. In the context of this problem, light scattering from the liquid surface could also have contributed to the measurements taken by Rayleigh (made without laser beams!).
In our example of fig. 2, the level of the reflected s-polarization power at small incidence angles was below the power level of p-polarization by a factor of ≈30000 (more than 4 orders). S-polarized parts of the initial radiation could arise both from the limited DP of the laser source and the azimuthal orientation of its linear polarization (1 − p ≠ 0, α ≠ 0), and from light scattering on the multiple optical surfaces of the setup. The addition of the radiation powers presented by curves 1P and 2S of fig. 2 leads to a small shift (units of minutes) of the MRP relative to the Brewster angle and, respectively, to an error in the determination of RI using Fresnel's formulas. Therefore, the application of the best analyzers (with a transmission ratio of the orthogonal components ≥1000) for suppression of the s-polarized component of the reflected light is a necessary condition for obtaining correct measurements of the Brewster angle from the MRP angular position. The power level at the MRP in fig. 1 without polarizing filtering of the s-component was in this case equal to ≈33 mV. With the installation of the analyzer, the power level at the MRP fell to ≈4 mV and its angular position shifted to 56.545°, which improved the measured RI value to 1.5134, in agreement with the state standard specification to the 4th digit.
Determination of Brewster angles of thin-layer materials
The formation of the MRP in the reflected light for a p-polarized incident beam in the case of thin plates and films is accompanied by interference of two beams of almost equal intensity. Though the reflection from the back surface can be suppressed with an antireflection coating, or by roughening or blackening the back surface to attenuate the second beam, this case must be considered in more detail because of the whole set of various possibilities it presents.
Thin-layer materials comprise a huge family: film layers on substrates, various free films, and thin plates made of absorptive and transparent materials. The specifics of finding the MRP and identifying it with the Brewster angle in the presence of two beams, reflected from the plane-parallel front and back surfaces, appear in the general case as two-beam interference in parallel beams. If the testing wavelength suffers noticeable light absorption or scattering in the film volume or at its back surface, the second beam is suppressed and the problem of RI determination reduces to the procedure described above. In the presence of a wedge angle between the surfaces, the reflected beams leave at angles differing by the wedge angle, and therefore the physically equal MRP angles of this film will be displaced by the magnitude of this angle. No interference in parallel beams can be registered in this case, even at a small wedge of ∼1′.
Similar results for a wedge plate of K8 glass are shown in fig. 4. The equal-thickness interference for a wedge plate lies in the plane of the film, but with separate beam development it does not influence the MRP position on each of the beams. Registration of the total radiation of the two beams would in such a case produce a mixed MRP from two Brewster minima. However, for a wedge plate it is always possible to isolate the beam reflected by the front plane and to determine the true Brewster angle, even if the wedge magnitude is unknown. With the reduction of film thickness to several microns the parallelism of the planes improves, and the interference in the parallel beams (at an equal angle of incline) shows higher visibility, because visibility is strongly sensitive to the wedge-angle value [11].
In general, the two-beam interference in the light reflected from a plane-parallel plate is characterized by a spatial period depending on the wavelength λ, the plate thickness T and RI, and the incidence/refraction angles ϕ/ψ [7]. The angular period, as the difference of the refraction angles for alternating maxima, is defined by dependence (5). Formula (5a) expresses the difference of the angular positions of the interference maxima through the refraction/incidence angles ψ/ϕ and the optical thickness nT of the plate; the approximate formula for the angular size of the period is given by (5b); a numerical sketch of this relation follows below. The spatial period depends on the angular period and on the distance between the tested plate and the photodetector. It is clear that the film thickness T, RI ≡ n and the angles ψ/ϕ determine the angular period of the interference and its relation to the divergence of the testing beam. Other details of the manifestation of interference in parallel reflected beams have been considered in our work on special coherence measurements [10]. In the context of this study — the influence of interference on RI measurement by Brewster refractometry — consider measurements of the MRP and, respectively, RI in the presence of the mentioned two-beam interference for plates of fused silica and microscope coverslip glass. The spatial period of the interference on plates with T = 1 mm is ≈0.25′, while the rotation step of our setup is one order smaller. Therefore, if the plate were plane-parallel to better than 1′, interference minima and maxima should be observed and registered; however, that does not happen (see fig. 5). The registered MRP corresponds to fused quartz: ϕbrw = 55.5° and RI(660 nm) = 1.455.
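Since the explicit forms of (5a)–(5b) were lost in extraction, the following sketch reconstructs the angular fringe period from the standard plane-parallel-plate maximum condition 2nT cos ψm = mλ; the thickness, index and angles below are illustrative values taken from the examples in the text, and the exact form of the authors' (5b) is our assumption, not a verified reproduction.

import numpy as np

lam = 660e-9      # wavelength, m
n = 1.455         # fused silica at 660 nm (value quoted in the text)
T = 150e-6        # plate thickness, m (the 150 um coverslip case below)

# Adjacent maxima satisfy 2*T*sqrt(n**2 - sin(phi)**2) = m*lam; differencing
# in the order m gives the angular period between fringes at incidence phi:
phi = np.radians(np.array([15.0, 30.0, 45.0, 55.5]))
d_phi = lam * np.sqrt(n ** 2 - np.sin(phi) ** 2) / (T * np.sin(2 * phi))
for p_deg, d in zip(np.degrees(phi), np.degrees(d_phi) * 60.0):
    print("phi = %5.1f deg -> fringe period = %.1f arcmin" % (p_deg, d))

Under this assumption the period shrinks from tens of arc-minutes at small angles toward the Brewster region, consistent with the qualitative behavior described for the thin coverslip below.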
As the thickness reduces to 150 µm (and the wedge decreases), the interference pattern appears and is registered: its amplitude and angular period decrease with rotation to larger incidence angles, in accordance with expression (5), and at last it disappears entirely in the vicinity of the Brewster angle (fig. 6), posing no obstacle to its correct determination. The insert to fig. 6 shows the increase of the registered interference visibility for this sample at somewhat higher spatial resolution of the photodetector, obtained with a 0.5 mm entrance slit. Owing to the lower quality of the coverslip's optical surfaces and, respectively, stronger light scattering, the power of the s-polarized components on such plates increases. However, its cut-off with the polarizing filter improves the MRP angular position toward the real Brewster angle for the RI of K8 glass: ϕbrw = 56.39° and RI = 1.504.
Thus, the main conclusion following from the last results — with a manifest two-beam interference and relatively small light scattering on both sample surfaces — is the feasibility of successful Brewster refractometry of such films and plates. These successful RI measurements allowed Brewster refractometry to be applied to films without any optical preparation of the surfaces at all, and even to materials absolutely nontransparent due to light scattering, like milk glass or fluoroplastic. Such nontransparent materials remove the two-beam interference totally, owing to internal light scattering; instead, they need only a single small part of the surface to be optically polished for the Brewster angle measurement. Determination of the MRP and RI of a polyethylene film with T = 32 µm provided the following data: ϕ = 56.333° and RI = 1.501, which is typical for low-pressure polyethylene (fig. 7). Curve A in fig. 7 corresponds to the MRP measurement at increased power of the testing beam. The small waviness on the curve arises from residual two-beam interference; its low visibility results from the above-mentioned poor boundary conditions for interference, caused in particular by the attenuation of the power of the second beam, reflected from the back surface of the film, owing to immersion by the glue layer of the substrate. Brewster measurement of RI for samples of submicron thickness (including absorbing ones) is of interest in general, and particularly for silicate structurally arranged films prepared by sol-gel technology with the use of a polymeric surfactant. The arranged pores of such films were filled with polymer and stained with R6G dye. The thickness of the films, measured by atomic force microscopy, was determined within the limits (180–200) nm [8]. Brewster RI measurements were taken at the wavelengths 532 nm and 660 nm, so that the smaller one matched the absorption maximum of R6G while the bigger one lay outside its absorption band.
At very high molar concentration of R6G in a film sample of nanoscale thickness, the absorption coefficient becomes so high [8] that the real and imaginary parts of the complex RI become commensurable. Measurements of the complex RI via the Brewster angle are unambiguous only in the opposite case, when n ≫ ϰ = λα/2π. In fig. 8a, results of the measurement of the angular position of the MRP for the specified samples are given. The sample absorption, relevant for MRP measurements at λ = 532 nm, removed the second beam and the interference; measurement of the MRP outside the absorption band of the dye matched the real Brewster angle of the silicate matrix and was accompanied by two-beam interference at λ = 660 nm (fig. 8b): the main beam and the beam reflected from the back surface of the substrate on which the "nanometric" layer was grown (the back surface of this layer is index-matched by the quartz substrate).
The angular position of the MRP for wavelength 532 nm, under the combined contribution of n and ϰ, is described by the exact but bulky Fresnel expression. If one uses its approximate solution [9], the real part of the RI can be determined from the angle of the MRP independently of the measured absorption coefficient α. Indeed, when the inequality A = n²(1+ϰ²) > 1 holds (together with the condition on B from [9]), the angular dependence of the power reflectivity for the p-polarized (TM) radiation can be given by expression (6a), and the extremum of the derivative of function (6a) with respect to the angle ϕ allows the angular position of the MRP to be found (6b). The data for the MRP angle in fig. 8a then permit finding the real part, RI ≡ n = 1.437, at ϰ = 0.7. For further understanding, the origin of the nonzero reflected power at the Brewster angle for p-polarized (TM) incident radiation on a material with an atomically clean surface (no light scattering) is of interest. It is known that the surface of a mica plane is a natural cleavage of the crystal structure. We used muscovite plates with a thickness of 80 µm. Muscovite belongs to the monoclinic syngony and is an optically biaxial material. The plane of the optical axes and the plane of cleavage are orthogonal, with an angle between the optical axes of ≈40°. The RIs along the optical axes differ slightly: ng = 1.613–1.596, nm = 1.607–1.596, np = 1.569–1.561, and, respectively, the birefringence is small, 0.038–0.045 [10]. The value of the Brewster angle for an optically anisotropic material depends nontrivially on the mutual orientation of the crystal's optical axis and the incidence angle of the testing beam. The registration scheme with the analyzer on the photoreceiver, selecting the p-polarized (TM) emission, gave hope of finding the MRP connected with the ordinary beam.
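Because expressions (6a)–(6b) were lost in extraction, the following sketch takes an alternative, exact numerical route — minimizing the complex-index Fresnel reflectivity |rp|² — to locate the MRP for the quoted values n = 1.437 and ϰ = 0.7. It illustrates the principle only and is not the authors' approximate solution [9]; the convention N = n(1 + iϰ) is our assumption.

import numpy as np

def Rp_complex(phi, N):
    # |r_p|**2 for incidence from air onto a medium with complex index N.
    cos_t = np.sqrt(1.0 - (np.sin(phi) / N) ** 2 + 0j)
    rp = (N * np.cos(phi) - cos_t) / (N * np.cos(phi) + cos_t)
    return np.abs(rp) ** 2

n, kappa = 1.437, 0.7
N = n * (1.0 + 1j * kappa)          # quoted values for the dyed film
phi = np.radians(np.linspace(30.0, 80.0, 500001))
mrp = np.degrees(phi[np.argmin(Rp_complex(phi, N))])
print("pseudo-Brewster (MRP) angle: %.2f deg" % mrp)
print("arctan(n) for n alone      : %.2f deg" % np.degrees(np.arctan(n)))

The comparison with arctan(n) makes visible how strongly the absorption shifts the MRP away from the transparent-medium Brewster angle, which is why an independent measurement of ϰ is needed to extract n.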
The results of the measurement are presented in fig. 9. The MRP angle occurs at 58.166° for λ = 660 nm, corresponding to an RI value n = 1.610, which lies within the limits ng = 1.596–1.613. Fig. 9. Angular dependence of the reflected power of the p-polarized wave, 660 nm, for the atomically clean surface of muscovite, T = 80 µm. At the Brewster angle the power of the incident radiation is increased 2.9⁵ times in order to improve the signal/noise ratio in the MRP vicinity.
The dependence in fig. 9 contains the following varied information: a) An interference with a visibility contrast close to 100% is superimposed on the angular dependence of the reflected radiation, which is linearly polarized in the p-configuration (TM), and indicates the presence of the Brewster angle.
b) The angular period of the interference is described by the above ratio (5b), derived from the interference model in parallel beams for equal slope angles: the change of the angular period is due to the change of the incidence angle of the paraxial light beam on the plate.
c) The high contrast of the interference visibility is connected only with the high optical quality of the plate's surfaces and its degree of plane-parallelism, i.e. the good quality inherent to an interferometer [12]. d) This rotary interferometer is capable of measuring the width of the spatial coherence of a measuring beam, owing to the superposition of copies of the measuring beam arising in reflection, which move apart with the growth of the plate's turning angle [12]. Therefore, the visibility contrast typically decreases with the growth of the incidence angle.
The first minimum in fig. 9, at the incidence angle 32.26°, is not connected with the above-mentioned effect of spatial coherence of the 660 nm diode laser used; it can be related to an accidental realization of phase shifts between the interfering beams with the formation of an s-polarized radiation component, which is cut off by the analyzer (on turning the analyzer by 90° this is confirmed by the emergence of a power maximum at this angle). The registered MRP parameters for mica differ strikingly from those for plates of even optical quality (figs. 4, 5, 6) in the extremely high visibility of the interference, which is connected with the lowest level of light scattering on its cleavage planes and with the very small wedge angle of the plate. The s-component power of the reflected radiation in the narrow vicinity of the Brewster angle decreased to the noise level even at the maximum power of the applied source of p-polarized radiation (more than 10 mW). The difference in the power of the reflected radiation between 10° and the Brewster angle is not less than 3 orders!
Final provisions and conclusions
The conducted research contains two connected parts: a metrological part — a study of the possibilities of determining the RI of materials of different kinds and topologies by Brewster-angle refractometry — and a fundamental part, devoted to the origin of the residual radiation at light reflection at the Brewster angle. It is shown that the highest requirements on the measuring light — the limiting extent of linear polarization and the setting of a zero azimuthal angle — needed for a precision finding of the Brewster angle are practically met by installing, in front of the light receiver, a p-polarization analyzer (a filter with a cutoff of the s-polarized components in the reflected beam). Comparison of the reflected radiation integral over polarization with the polarization-resolved one indicates the following: the nonzero azimuthal angle and the finite degree of polarization influence the angular position of the MRP in the same way. A similar comparison of the reflected power of incident p-polarized radiation for the s- and p-polarized components indicates the origin of the s-polarized component registered in the vicinity of the Brewster angle: it is caused by optical heterogeneity of the surface, irremovable by optical polishing and present even on an atomically clean crystal surface. The change of the polarization degree of the reflected radiation in the vicinity of the Brewster angle is not accompanied by a phase jump of π radians, which testifies to the monotonic change of the residual radiation power. The experimental observation of a similar phase jump at TIR, accompanied by a growth of power in the transitional area, confirms the correctness of the used measurement technique and of the conclusions concerning the nature of the residual radiation at the Brewster angle. Thus, the hypothesis of an intermediate ultrathin layer, and the additions Drude's model made to the basic Fresnel theory to explain the origin of the residual radiation, lose their relevance.
The RI measurements performed on thin plates and films, even with imperfect surfaces and anisotropy, showed that interference under these conditions does not prevent finding the Brewster minimum, because in the corresponding angular area the interference visibility decreases to zero. The level of light scattering on imperfect surfaces, as long as a mirror component of the reflection still remains, does not interfere with the determination of RI from the MRP, thanks to the registration scheme with the polarizing light filter.
RI measurement of thin and ultrathin films with T ≤ 200 nm, including films with a complex RI, is also possible: the registered MRP determines the real part of the RI, but an independent measurement of the absorption (the imaginary part of the RI) is necessary. Unexpectedly successful were the RI measurements of anisotropic films and plates with technological (tension-induced) or natural birefringence, such as polyethylene and mica. In particular, the determined RI values need some theoretical justification of their binding to one of the two optical axes of the biaxial crystal. The appeal to mica offered a possibility of comparatively assessing the very low light scattering on the atomically clean cleavage surface against the light scattering on surfaces with optical polishing (or without it), as regards the determination of the true Brewster angle from the MRP. The extremely high visibility of the interference pattern in the reflected light, together with the extremely low power of the s-polarized component at the Brewster angle in comparison with all other samples, indicates the importance of the light-scattering level in such measurements. The main conclusions of the presented work are the following: the analysis of the angular dependences of the reflected beam power, integral and resolved over the s- and p-polarization components, indicates the absence of the polarization change predicted by the Drude model (the p–s–p transition) in the small vicinity of the Brewster angle. This allows an alternative explanation of the observed change of polarization in the specified sector, at the expense of the contribution of a nonvanishing s-polarization component arising in the reflected beam.
The values of the angular MRP position closest to the Brewster angle — under every possible decrease in the extent of linear polarization of the testing radiation and deviation of its azimuthal angle from zero — are reached with the installation of a polarizing filter (analyzer) providing the strongest suppression of the s-components in the reflected radiation. | 2019-04-13T13:06:25.731Z | 2015-10-23T00:00:00.000 | {
"year": 2015,
"sha1": "696d734a266d17c8aabb26d7fd21d3924b49eda2",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "696d734a266d17c8aabb26d7fd21d3924b49eda2",
"s2fieldsofstudy": [
"Materials Science",
"Physics"
],
"extfieldsofstudy": [
"Physics",
"Materials Science"
]
} |
236248113 | pes2o/s2orc | v3-fos-license | Evaluation and Comparison of Three Classes of Central Composite Designs
Three classes of Central Composite Design — the Central Composite Circumscribed Design (CCCD), the Central Composite Inscribed Design (CCID) and the Central Composite Face-Centered Design (CCFD) — in Response Surface Methodology (RSM) were evaluated and compared using the A-, D- and G-efficiencies for factors, k, ranging from 3 to 10, with 0–5 centre points, in order to determine the performances of the designs under consideration. The results show that the CCDs (CCCD, CCFD and CCID) are at their best when the G-efficiency is employed for all the factors considered, while the CCID in particular behaves poorly under the A- and D-efficiencies.
Introduction
Experiments are performed by researchers in every field of inquiry so as to study and model the effects of several design variables on the responses of interest. The foundation for response surface methodology (RSM) was laid by Box and Wilson [1]. Response surface methodology consists of statistical and mathematical techniques for empirical model building and model exploitation. It seeks to relate a response or output variable to the levels of a number of predictors or input variables that affect it. The form of such a relationship is usually unknown, but can be approximated by a low-order polynomial such as the second-order response surface model, y = β0 + Σ βi xi + Σ βii xi² + ΣΣ βij xi xj + ε (with i < j in the cross-product sum). Most second-order designs, especially the central composite designs, utilize this stated model. A second-order response surface design is often chosen based on consideration of several criteria, such as those identified by Myers and Montgomery [2]. Among the most important of these is the stability of the prediction variance over the region of interest. The Central Composite Design, being the most popular of the many classes of response surface designs, has been studied and used by many researchers. Box and Draper [3] suggested several criteria which can be used in the selection of a design. Several second-order model designs exist in the literature; they include the CCD, Box-Behnken Designs (BBD), Hoke Designs, Small Composite Designs (SCD), Hybrid Designs, etc.: see, for example, Box and Wilson [1], Myers and Montgomery [2], pp. 541-546, and Zahran et al. [4].
It is also worth noting that a design superior to other designs by a given optimality criterion may not be superior when evaluated by another optimality criterion. Therefore, the choice of a design may depend on the choice of an evaluation criterion. Four common design evaluation criteria in the literature are the alphabetic G-, D-, A- and E-optimality criteria. By condensing a design's properties to a single value, however, much information is lost regarding a design's potential performance. For an overview of optimality criteria, see Atkinson and Donev [5]. The Central Composite Design (CCD) is one of the most popular response surface designs. Since it was introduced by Box and Wilson [1], the CCD has been studied and used by many researchers in fitting the second-order model. Box and Draper [3] suggested several criteria which can be used in the selection of the design. These criteria include that the design should: (a) allow a check on the representational adequacy of the polynomial; (b) not contain an excessively large number of experimental runs; (c) lend itself to blocking; etc. Dykstra [6] studied the partial duplication of the factorial portion, as well as the partial duplication of the star portion, of the CCDs (Rotatable and Orthogonal Central Composite Designs) for factors k = 2, 3,…, 8. The results showed that the designs with the star portion duplicated seem to have more potential than the designs with their factorial portions duplicated or partially duplicated. Lucas [7] evaluated four types of optimum composite design in different regions of interest. The optimum designs evaluated are: a symmetric composite design, a symmetric smallest composite design (a saturated composite design), an asymmetric composite design with a larger star-point distance, and an asymmetric smallest composite design. The result shows that symmetric composite designs are nearly optimum for experiments in a hypercube. Myers [8] suggested optimal CCDs under several design criteria (orthogonality and rotatability). Xianfeng and Zhang [9] evaluated and compared three CCDs (CCCD, CCID and CCFD) from the viewpoint of the region of interest, through simulation of a motor assembly. The results show that an effective design of experiments cannot be obtained without correct selection among these designs. Oyejola and Nwanya [10] used the D-, A-, G- and I-optimality criteria and the Fraction of Design Space graph to evaluate five varieties of CCD — the Spherical Central Composite Design (SCCD), Rotatable Central Composite Design (RCCD), Orthogonal Central Composite Design (OCCD), Slope Rotatable Central Composite Design (Slope-R) and Face Centre Cube (FCC) — for 3-6 factors, with replicated star portions and increased centre points. Their results show that replicating the star points tends to reduce the D- and G-optimality criteria of the CCDs for all the factors considered, while this is not so for the A-optimality criterion. In I-optimality, the CCDs are relatively the same, both when the centre points and when the axial points are increased. The FDS plots indicate that the CCDs maintain a relatively low and stable Scaled Prediction Variance (SPV) when the star points are replicated with increased centre points.
Chigbu and Ohaegbulem [11] used the D-optimality criterion to compare partially replicated cube and star portions of the rotatable and orthogonal CCD. Their results indicate that replicating the cube portion enhances the D-optimal performance of the CCD more than replicating the star portion. Lucas [12] compared the performances of several types of quadratic response surface designs in a symmetric region. In that study, the CCD, BBD, Hoke designs and Pesotchinsky designs were compared using the D- and G-efficiencies, and the result showed that the CCD performs better than the other designs, though all of the designs compared have high D- and G-efficiencies.
Methodology
We present the three Central Composite Designs as well as the optimality criteria that will be used to assess the designs under consideration.
Designs for Comparison
The three CCDs that will be examined and compared based on the A-, D- and G-efficiencies are the Central Composite Circumscribed Design (CCCD), the Central Composite Inscribed Design (CCID) and the Central Composite Face-centered Design (CCFD).
Central Composite Design
The Central Composite Design (CCD) was introduced by Box and Wilson [1], and it is perhaps the most popular class of second-order designs. For k design variables, the CCD consists of the following: (1) a two-level factorial (or resolution V fractional factorial) portion of 2^k runs with levels coded ±1; (2) 2k axial (star) points at distance α from the centre; and (3) n0 centre points. The α considered in this work is the rotatable α for the CCCD and CCID. For the CCFD, the star points are at the centre of each face of the factorial space, so α = 1. The structure of the CCD matrix, X, for any two design variables xi and xj with one centre point thus consists of the four factorial rows (±1, ±1), the four axial rows (±α, 0) and (0, ±α), and the centre row (0, 0). The three classes of CCD evaluated in this study are: the central composite circumscribed design, the central composite inscribed design and the central composite face-centered design.
Central Composite Circumscribed Design
The central composite circumscribed design (CCCD) is the original form of the CCD, with the star points located at some distance, α, from the centre. The star points establish extremes for the low and high settings for all factors. These designs require five levels for each factor. Augmenting a two-level factorial, or a resolution V fraction, with 2k axial or star points and centre points produces this design. The matrix structure of the central composite circumscribed design for k = 3 follows the layout described above, with star points at ±α.
Central composite inscribed design
The CCID is a scaled-down CCCD, with each factor level of the CCCD divided by α to generate the CCID. This design, being a scaled-down CCCD, also requires five levels of each factor, because the star points lie within the space of the factorial design. The matrix structure of the central composite inscribed design for k = 3 is obtained from the CCCD matrix by dividing every entry by α.
Central composite face-centered design
The CCFD is a special case of the CCD in which α = 1. As a result, the CCFD becomes a three-level design, because each star point is located at the centre of a face of the cube, requiring three levels for each factor. The axial and factorial points of the face-centered CCD fall onto the surface of the cube. The matrix structure of the central composite face-centered design for k = 3 with n0 = 1 follows the same layout with α = 1; a construction of all three matrices is sketched below. Two of these designs, the CCCD and CCID, have a common characteristic: they are rotatable. A design is rotatable if the estimated response, ŷ, has a constant variance at all points that are the same distance from the centre of the design. See, for example, Box and Hunter [14]. For a CCD to be rotatable, α = f^(1/4), where f is the number of runs in the factorial portion of the CCD.
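As a concrete illustration (our own sketch, not code from the paper), the three design matrices can be generated for any k and n0 as follows; the rotatable α = f^(1/4) is used for the CCCD and CCID, α = 1 for the CCFD, and the function names are ours.

from itertools import product
import numpy as np

def ccd(k, n0, kind="cccd"):
    factorial = np.array(list(product([-1.0, 1.0], repeat=k)))  # 2**k cube points
    alpha = len(factorial) ** 0.25 if kind in ("cccd", "ccid") else 1.0
    axial = np.zeros((2 * k, k))                                # 2k star points
    for i in range(k):
        axial[2 * i, i], axial[2 * i + 1, i] = -alpha, alpha
    centre = np.zeros((n0, k))
    design = np.vstack([factorial, axial, centre])
    if kind == "ccid":
        design = design / alpha   # inscribe: every level scaled by 1/alpha
    return design

for kind in ("cccd", "ccid", "ccfd"):
    d = ccd(k=3, n0=1, kind=kind)
    levels = sorted(set(np.round(d[:, 0], 4)))
    print(kind.upper(), "runs:", len(d), "levels of x1:", levels)

For k = 3 this prints 15 runs per design, with five distinct levels for the CCCD and CCID and three for the CCFD, matching the descriptions above.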
Optimality criteria
An optimal design is an experimental design that is based on a particular optimality criterion. Kiefer [15] detailed the theory behind optimum designs, which states that if the design region is a compact space on which the regression functions are continuous and linearly independent, then a probability measure ξ is D-optimum for the unknown parameter vector (θ1, …, θm) if and only if it is G-optimal. Design optimality criteria are called alphabetic because they are denoted by the first letters of the names of the criteria. The three commonly used design optimality criteria are the A-, D- and G-optimality criteria. Based on these optimality criteria, design efficiencies can be calculated and used to compare designs. The A-, D- and G-efficiencies are used in this work.
A-optimality criterion
This criterion, introduced by Chernoff [16], seeks to minimize the trace of the inverse of the information matrix, X'X. This criterion also results in minimizing the average variance of the estimates of the regression coefficients, and the corresponding efficiency is given by A-efficiency = 100·p / trace(N(X'X)⁻¹), where N is the design size and p is the number of model parameters.
D-optimality criterion
The D-optimality criterion, developed by Wald [17], was the first alphabetic optimality criterion developed. It is the most widely used criterion because of its computational ease. The D-optimality criterion focuses on the estimation of the model parameters through the good attributes of the moment matrix, M, which is defined as M = X'X/N, where X and N are as defined above; the corresponding D-efficiency is 100·|X'X|^(1/p)/N.
G-optimality criterion
This criterion is concerned with the prediction variance. It may be that the aim of the practitioner is to have good prediction at a particular location in the design space, or throughout the design region. To quantify this, Box and Hunter [14] defined a variance function, the Scaled Prediction Variance (SPV), as SPV(x) = N·Var(ŷ(x))/σ² = N·f'(x)(X'X)⁻¹f(x), where f(x) is the vector of model terms evaluated at a point x in the region of interest, N is the total sample size, X is the design matrix and σ² is the process variance of the design. A G-optimal design is one that minimizes the maximum SPV over the experimental design region; symbolically, it minimizes max over x of SPV(x). The G-efficiency = 100·p / max SPV(x).
Comparison of the Designs
In this section, the three classes of CCD (CCCD, CCID and CCFD) for factors 3 ≤ k ≤ 10 are compared using the optimality criteria.
Design comparison using optimality criteria
In this section, the A-, D- and G-efficiencies of the three designs considered will be compared, and the results will also be shown graphically. Let n0 indicate the number of centre points and N the number of design runs.
The expanded design matrix for the CCCD for k = 3 with n0 = 0 is constructed as described above, with its columns expanded to the full second-order model (a computational sketch follows). Using the same procedure, the results presented in Tables 1 to 8 are obtained for the CCCD, CCID and CCFD for 3 to 10 factors with 0 to 5 centre points for each factor.
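A sketch of the efficiency computation follows; it uses the ccd() builder above, the standard second-order model expansion, and the A-, D- and G-efficiency formulas as reconstructed in the previous section. Evaluating the maximum SPV over a cuboidal grid is our simplifying assumption about the region of interest, not a choice stated by the authors.

from itertools import product, combinations
import numpy as np

def model_matrix(design):
    # Second-order model columns: intercept, linear, interactions, quadratics.
    n, k = design.shape
    cols = [np.ones(n)] + [design[:, i] for i in range(k)]
    cols += [design[:, i] * design[:, j] for i, j in combinations(range(k), 2)]
    cols += [design[:, i] ** 2 for i in range(k)]
    return np.column_stack(cols)

def efficiencies(design, grid_pts=5):
    X = model_matrix(design)
    N, p = X.shape
    XtX_inv = np.linalg.inv(X.T @ X)
    a_eff = 100.0 * p / np.trace(N * XtX_inv)
    d_eff = 100.0 * np.linalg.det(X.T @ X) ** (1.0 / p) / N
    k = design.shape[1]
    grid = np.array(list(product(np.linspace(-1.0, 1.0, grid_pts), repeat=k)))
    F = model_matrix(grid)
    spv = N * np.einsum("ij,jk,ik->i", F, XtX_inv, F)   # SPV at each grid point
    return a_eff, d_eff, 100.0 * p / spv.max()

d = ccd(k=3, n0=1, kind="cccd")   # builder from the previous sketch
print("A = %.1f, D = %.1f, G = %.1f" % efficiencies(d))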
Graphical presentation of results and discussion
The graphical presentation of the results is given in Fig. 1.
Findings
Three classes of central composite design, namely the Central Composite Circumscribed Design, the Central Composite Inscribed Design and the Central Composite Face-Centered Design, are compared for factors, k, ranging from 3 to 10 with 0–5 centre points, respectively, using the D-, G- and A-efficiencies. The results show that the CCDs perform better when the G-efficiency is employed for all the factors considered. Also, increasing the centre points tends to reduce the D-, G- and A-efficiency values of the CCFD. The CCCD and CCID behave alike in terms of the G-efficiency criterion; the CCCD performs better than the CCID and CCFD when the D- and A-efficiency criteria are employed, but with centre points greater than zero.
Conclusion
From the foregoing, it can be seen that for factors k = 3, 4, 5, 6 and 8, the G-efficiency performs better than the D- and A-efficiencies for the design sizes N and numbers of centre points n0 considered. For factors k = 7, 9 and 10 the D-efficiency performs better than the G- and A-efficiencies for the CCCD, while the G-efficiency performs better than the D- and A-efficiencies for the CCID and CCFD, for the design sizes N and numbers of centre points n0 considered.
In general the CCDs give high efficiency values when the G-efficiency is employed, and it can also be seen that the CCID and CCFD have low efficiency values under the D- and A-efficiencies, respectively, for the design sizes N and numbers of centre points n0 considered. Finally, the CCCD performs better than the CCID and CCFD when the D- and A-efficiency criteria are employed, but with centre points greater than zero, which implies that the CCCD is the better CCD; the inclusion of centre points is recommended. | 2021-07-26T00:06:23.545Z | 2021-06-05T00:00:00.000 | {
"year": 2021,
"sha1": "d7dad553eab565ee4126bb51680396841ef09a1f",
"oa_license": null,
"oa_url": "https://www.journalajpas.com/index.php/AJPAS/article/download/30304/56865",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "2fb0bdfc2c78b7bb8e4a18fd8a48186e0db805cc",
"s2fieldsofstudy": [
"Engineering"
],
"extfieldsofstudy": [
"Computer Science"
]
} |
206583001 | pes2o/s2orc | v3-fos-license | Comparing extreme programming and Waterfall project results
Waterfall and Extreme Programming are two software project methods used for project management. Although there are a number of opinions comparing the two methods regarding how they should be applied, none have used project data to clearly conclude which one is better. In this paper, we present the results of a controlled empirical study conducted at Carnegie Mellon University in Silicon Valley to learn about the effective transition from traditional development to agile development. We conducted a comparative study of these two approaches. Multiple teams were assigned a project; some used Waterfall development, others used Extreme Programming. The purpose of this research is to look at advantages and disadvantages based upon the outcomes, generated artifacts, and metrics produced by the teams.
Agile vs Traditional
Since the early 1970s, numerous software managers have explored different software development methods (such as the Waterfall model, evolutionary model, spiral model, etc.) that have been developed to accomplish these goals and have been widely used by the software industry [1]. Methodologists often describe the Waterfall method as the stereotypical traditional method, whereas they describe Extreme Programming as the stereotypical agile method. The Waterfall model, the oldest traditional software development method, was described by Winston W. Royce in 1970 [2]. He divided the software development lifecycle into seven sequential and linear stages: Conception, Initiation, Analysis, Design, Construction, Testing, and Maintenance. The Waterfall model is especially used for large and complex engineering projects. Waterfall's lasting impression upon software engineering is seen even in the Guide to the Software Engineering Body of Knowledge, which introduces the first five knowledge areas based upon their sequence in the Waterfall lifecycle, even though the Guide does not recommend any particular lifecycle [3].
Although the Waterfall model has been adopted in many large and complex projects, it still has some inherent drawbacks, like inflexibility in the face of changing requirements [1]. If large amounts of project resources have been invested in requirements and design activities, then changes can be very costly later. High-ceremony documentation is not necessary in all projects. Agile methods deal well with unstable and volatile requirements by using a number of techniques, of which the most notable are: low-ceremony documents, short iterations, early testing, and customer collaboration. Kent Beck and Cynthia Andres define Extreme Programming 2.0 with many practices [4], like Pair Programming, Test First Programming, Continuous Integration and so on. These characteristics enable agile methods to deliver the smallest workable piece of functionality for early business value and to continually improve it while adding further functionality throughout the life of the project [5].
PET project background
Carnegie Mellon University Silicon Valley students start their master's program with the Foundations of Software Engineering course. This course is team-based, project-based, and mentored. Each team builds the Process Enactment Tool (PET). The user personas are software developers and managers. The tool helps users plan, estimate, and execute a project plan while analyzing historical data. The tool's domain encourages students to learn about software lifecycles and methods while understanding the benefit of metrics and reflection.
1.2.1. PET 1.0: In 2001, Carnegie Mellon had one of the largest outsourcing firms in the world develop PET 1.0. Later the student teams were brought in to do the next release. The initial offerings of the course had the teams follow a Waterfall lifecycle. The faculty decided to use Extreme Programming as the method for the Foundations course because it was an agile method, it had good engineering practices, and it was a safe sandbox environment for engineers to try pair programming, since many managers in industry were initially skeptical about its benefits. In 2005, the faculty had three of the sixteen teams try the new curriculum to see whether there were any serious issues in the switch, while the other thirteen teams continued to follow the Waterfall curriculum in place since 2004. The feedback was extremely positive, so in 2006 all teams followed Extreme Programming. For the project plan duration, Waterfall teams needed fifteen weeks to finish their tasks, whereas Extreme Programming teams were given only thirteen weeks, a 13% reduction in time.
PET 1.1:
In 2005, the VP of Engineering advised the three teams that rewriting the code from scratch would be easier than working with the existing code base. Team 30:1 decided to use the latest Java technologies, including Swing and Hibernate. PET 1.1, the team's product, became the starting point for the students in the following year.
PET 1.2:
In 2008, the faculty switched the core technology from Java to Ruby on Rails. Ruby on Rails' convention over configuration afforded a lower learning curve for students. For PET 1.2, students would build their projects from scratch.
Related work
Much research has been done on when to use an agile method and when to use a traditional method. For example, Boehm and Turner's home grounds analysis looks at several characteristics: criticality, culture, and dynamism [6]. Our paper aims to address these limitations to some degree by evaluating Waterfall and XP in an academic case study, which provides substantive grounds for researchers before replicating their ideas in industry.
Basili [7] presented a framework for analyzing most of the experimental work performed in software engineering, from which we learned how to conduct a controlled experiment. Andrew and Nachiappan [8] reported the results of an empirical study conducted at Microsoft using an anonymous web-based survey. They found that one third of the study respondents use agile methodologies to varying degrees, and most view them favorably due to improved communication between team members, quick releases and the increased flexibility of agile designs. One of their findings that we will consider in our future work is that developers are most worried about scaling agile to larger projects and coordinating agile and traditional teams. Our work is closely related to the work by Ming Huo et al. [9], who compared the Waterfall model with agile processes to show how agile methods achieve software quality. They also showed how agile methods attain quality under time pressure and in an unstable requirements environment, and presented a detailed Waterfall model showing its software quality support processes. Other work has illustrated only one or a few agile practices, such as pair programming [10].
Experimental methodology
Our research was conducted primarily using Glaser's steps [11] in the constant comparison method of analysis. Step 1: Begin collecting data. We collected detailed data from more than 50 teams during a five-year period, as Table 1 shows. Step 2: Look for key issues, recurrent events, or activities in the data that become categories for focus. The approach to software design led us to categorize the data into two distinctive software development methods, namely Waterfall and Extreme Programming.
Step 3: Collect data that provides many incidents of the categories of focus, with an eye to seeing the diversity of the dimensions under the categories. Following Basili [7], we defined metrics to compare these two categories, Waterfall and XP. Step 5: Work with the data and the emerging model to discover basic social processes and relationships.
Requirements Metrics
Step 6: Engage in sampling, coding, and writing as the analysis focuses on the core categories. During 2005, there were 13 teams following Waterfall and 3 teams following XP during the same period of time. These three teams — Absorb, GT11 and 30:1 — are interesting to examine, as we can compare their data against the Waterfall teams doing the exact same project.
UI screens (M1) and Story cards (M2) comparison
These wide ranges can be seen in Table 2 and Table 3, where the standard deviation of the UI mockups is often half the document size. Comparing use cases to story cards in Table 3, we see that the standard deviation for use cases is much lower than the standard deviation for story cards. This is expected, since a use case is a higher-ceremony document than a story card. Teams might give little consideration to how to represent each feature on a story card, whereas a team writing a use case, stepping through how a user will use the system, will spend much more time thinking about the coupling and cohesion of each use case.
Requirement documents (M3&M4)
Starting with PET 1.0, Waterfall teams on average added 1.7 use cases and modified 2.0 use cases. Teams were given a 28-page System Requirements Specification (SRS) and on average finished with a 34-page SRS. XP teams starting with PET 1.0 were given the same starting documents. Instead of modifying them, the teams created story cards that represented each new feature. Instead of spending time on writing use cases, XP teams started coding sooner. Because XP has an emphasis on low-ceremony documents, they had more time to code, resulting in an effort savings for the teams.
Comparing the size of the detail design documents (M5)
There are some insights from Table 4. Waterfall teams using PET 1.0 started with a 21-page Detailed Design Document (DDD), which they altered to reflect their new use cases. Waterfall teams typically did not update their design documents at the end of the project. Given the scope of the project, the Waterfall teams' final code matched the original design with respect to new classes. XP teams increased their design documents with each iteration. Because the XP teams followed Test-Driven Development, they wrote their code and had an emergent design. At the end of each iteration, the teams were asked to update the design document to reflect important design decisions they had made during that iteration. Therefore, the design document serves a different purpose in XP. It is not a template or blueprint for future construction. Instead, it can be a guide for understanding why certain decisions were made. In this regard, it is a biography of the development, not a plan of action. Table 5 shows that Waterfall teams starting with PET 1.0 produced lines of code with a wide variance. The two XP teams starting with PET 1.0 fell right within the middle of the average. Because the XP teams spent a longer time coding instead of producing some documents up front, one would expect them to produce more lines of code. The research results also show that XP teams had a higher percentage of comments in their source code.
Submitted lines of test codes and ratio of test code to program code
The observation of these two metrics in Table 5 shows that the amount of test code written by the Waterfall teams equals the amount of test code written by the XP teams. Initially the faculty thought that Test-Driven Development would increase the amount of test code; however, given the slow adoption rate of Test-Driven Development, programmers resorted to what was familiar and thus produced similar results. | 2015-09-23T00:31:53.000Z | 2011-05-22T00:00:00.000 | {
"year": 2011,
"sha1": "a264234dd6713b69f7bc83b927cb0353f761bfae",
"oa_license": "CCBY",
"oa_url": "https://figshare.com/articles/journal_contribution/Comparing_Extreme_Programming_and_Waterfall_Project_Results/6709808/1/files/12240668.pdf",
"oa_status": "GREEN",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "5ea2b59b348a24549b828a534ecdc80c0951baf1",
"s2fieldsofstudy": [
"Engineering"
],
"extfieldsofstudy": [
"Engineering",
"Computer Science"
]
} |
26908101 | pes2o/s2orc | v3-fos-license | Drugs for the Prevention and Treatment of Cardiac Allograft Vasculopathy
Cardiac allograft vasculopathy (CAV) of heart transplants is responsible for up to one-third of deaths at 5 years following cardiac transplantation. Risk factors for CAV include both traditional risk factors and immune factors. Drugs used for prevention and treatment of CAV include statins, calcium channel blockers and immunosuppressive agents. This review discusses the currently available drugs for CAV, the evidence behind their use, and future targets of therapy.
Introduction
Cardiac allograft vasculopathy (CAV) is a specific form of coronary artery disease that affects heart transplanted patients and is characterized by an early, diffuse intimal proliferation of both the epicardial and microvascular vessels, resulting in epicardial coronary artery stenosis and small vessel occlusion [1]. Intimal hyperplasia and infiltration of inflammatory cells are confined to the graft vasculature, with sparing of the recipient's own arteries, suggesting an immunemediated local process.
According to the 29 th Official Adult Heart Transplant Report, CAV affects 8% of heart transplant recipients by year 1, 30% by year 5 and 50% by year 10 after the transplant [2]. Once CAV develops, treatment is challenging and often frustrating; therefore strategies to prevent its development need to be implemented right from the time of cardiac transplantation. This article provides an overview of the various drugs available for prevention and treatment of CAV, the evidence behind their use, and clinical framework for their use.
Risk Factors and Pathophysiology of CAV
Risk factors for CAV include both traditional risk factors such as hyperlipidemia (exacerbated by calcineurin inhibitors), hypertension and diabetes mellitus (worsened by steroids), transplant-related factors such as donor factors (explosive mode of brain death, intracranial hemorrhage), and immune factors. The latter include increased levels of cytotoxic B-cell antibodies, anti-human leukocyte antigen (HLA) antibodies, expression of non-HLA antibodies such as anti-vimentin antibodies, more acute cellular and humoral (antibody-mediated) rejection, cytomegalovirus (CMV) infection, and sensitization to the monoclonal antibody OKT3 [3,4] .
Brain death itself induces an immune response. After brain death, neurohumoral and molecular changes result in cellular stress and an inflammatory response, this induces the expression of endotheliumderived major histocompatibility complex (MHC) molecules and costimulatory signals [5].
Alloimmune injury is initiated when donor MHC antigens expressed on the surface of graft endothelial cells interact with recipient dendritic cells, resulting in a chronic immune response. Recipient CD4+ T lymphocytes recognize donor MHC class II antigens on the cell surface (HLA-DR, DP and DQ) and are activated, leading to a cascade of cytokines that further stimulate the donor endothelial cells to secrete growth and chemotactic factors [6]. These factors recruit mononuclear cells, which then secrete cytokines that activate normally quiescent vascular smooth muscle cells (VSMCs). These VSMCs then transform from contractile cells to de-differentiated synthetic cells. Activated VSMCs migrate from the media to the intima, where they proliferate and cause extracellular matrix deposition, leading to a reduction in luminal diameter and loss of vascular contractility (Figure 1). This process is responsible for most of the obliterative arterial intimal thickening present in CAV, and occurs diffusely [7].
Due to the predominant role of immunologic factors, CAV was long regarded as a form of chronic rejection. However, evidence of significant contribution of other metabolic factors to the development of CAV has led to the "response to injury" concept, according to which chronic endothelial injury from a combination of immune and non-immune factors leads to vascular cell proliferation, fibrosis, and vascular remodeling [8]. Recent evidence based on virtual histology intravascular ultrasound (VH-IVUS) suggests that ischemic etiology of cardiomyopathy prior to heart transplant may be independently associated with development and progression of plaques and higher cardiac event rate after transplant, highlighting the contribution of atherosclerosis to the pathogenesis of CAV [9]. In this study, VH-IVUS performed on 2 separate occasions after transplant revealed that patients with ischemic cardiomyopathy had significantly higher necrotic core, dense calcium, and fibrous and borderline high fibrofatty components in the plaques, similar to vulnerable plaques in atherosclerotic coronary artery disease.
The immune- and inflammation-mediated endothelial injury also leads to endothelial dysfunction [10]. Using serial studies with Doppler flow-wire measurements, decrements in coronary endothelial function have been demonstrated to be associated with progressive intimal thickening and subsequent CAV development [11].
Even though the mechanisms of CAV and atherosclerotic coronary artery disease (CAD) were initially considered to be completely different, recent research has narrowed the difference between the two. While both atherosclerotic CAD and CAV are driven by adaptive immune responses to antigen, the antigens are different. The principal antigens driving atherosclerosis are altered (oxidized) low-density lipoproteins that are taken up by macrophages that become foam cells [12], whereas the principal antigens in the case of CAV are non-self MHC molecules, especially HLA-DR, expressed most abundantly on the luminal endothelial cells [13].
Pathological manifestations and diagnosis of CAV
CAV manifests as diffuse intimal hyperplasia with progressive luminal narrowing. The expanded intima comprises smooth muscle cells (SMCs), microvessels and an infiltrate formed largely of host T cells and macrophages, the majority of T cells being memory cells that express interferon-γ (IFN-γ) and transforming growth factor-β (TGF-β) [14]. The SMCs are mostly graft-derived, but recipient-derived SMCs are also found as a result of seeding of graft vessels by recipient endothelial precursor cells that subsequently differentiate into SMCs [15]. Nodular aggregates of host B cells, T cells, and myeloid cells are found in the adventitia, but the media is unaffected.
The diffuse nature of CAV makes it harder to diagnose by coronary angiography, particularly in the earlier stages. On angiography, observed luminal narrowing is compared to a reference vessel diameter for detection of significant stenosis. However, early in CAV development there is vascular remodeling with compensatory enlargement of the coronary vessel in the presence of a plaque. Only in the advanced stages of CAV does luminal narrowing occur, making angiographic detection possible. Intravascular ultrasound (IVUS) is able to detect the extent of intimal thickening by imaging the vessel wall structure (including the presence and nature of the plaque) instead of relying simply on the diameter of the lumen, making it a sensitive tool for the early identification and diagnosis of CAV [16]. In a multicenter IVUS study, progression of intimal thickening of 0.5 mm or more in the first year after cardiac transplantation was found to be a reliable surrogate marker for subsequent mortality, nonfatal major adverse cardiac events, and development of angiographic CAV through 5 years after transplant [17].
Drugs for the prevention and treatment of CAV
Calcium channel blockers
One of the first reported drugs for prevention of CAV included calcium channel blockers (CCB) such as diltiazem. In one of the earliest studies, 106 consecutive heart transplant recipients were randomized to receive either diltiazem (n=52) or no CCB (n=54). On follow-up coronary angiography, the average change in the diameter of coronary artery segments at the end of two years differed significantly between the two treatment groups (P<0.001), even after adjustment for other relevant clinical variables. New angiographic evidence of CAV developed in 14 patients not given CCB, as compared with 5 diltiazem-treated patients. Significant coronary stenoses (>50% luminal diameter) developed in fewer patients given diltiazem; death due to CAV or re-transplantation occurred in five patients in the group that did not receive CCB and in none of those who received diltiazem [18]. At 5-year follow-up, a significant difference was noted in freedom from both death and angiographic CAV (56% in the diltiazem group versus 30% in the control group) [19]. However, a major limitation of this study was the use of angiography, for the reasons described above. In an intravascular ultrasound (IVUS) study of 32 patients by Mehra et al. [20], treatment of cardiac transplant recipients with either angiotensin-converting enzyme inhibitors or CCBs was associated with a decrease in the degree of vascular intimal hyperplasia at 1 year after transplantation.
In vitro studies have tried to elucidate the mechanism of action of CCBs in reducing CAV. Diltiazem was shown to enhance the effect of IL-1 beta and reduce IL-6 production in mixed lymphocyte cultures [21]. Thus diltiazem modulates monokine production and may affect antigen expression, thereby decreasing immune-mediated intimal hyperplasia.
After the initial large studies in the 1990s, not much research has been done on the use of CCBs for prevention of CAV. Nevertheless, diltiazem is relatively well tolerated and has additional antihypertensive properties; therefore it continues to be used widely.
Statins
Statins inhibit HMG-CoA reductase, the enzyme that catalyzes the conversion of 3-hydroxy-3-methylglutaryl-CoA to mevalonate, and thereby also reduce the downstream products of mevalonate in the cholesterol synthesis pathway. The downstream products, farnesyl pyrophosphate and geranyl pyrophosphate, are lipid moieties that can modulate the function of certain essential signaling proteins that influence smooth muscle cells and the generation of nitric oxide (NO) [22]. Statins reduce matrix metalloproteinase secretion and SMC migration and proliferation, and the effect on SMCs may be the major mechanism by which statins decrease the development of CAV [23,24]. Statins also block activation of T-cells and natural killer (NK) cells by repressing interferon-gamma-induced MHC-II expression [25].
In a study of cardiac transplant recipients randomized to pravastatin (47 patients) versus no pravastatin (50 patients), at 12 months the pravastatin group had significantly lower mean cholesterol levels than the control group, less frequent hemodynamically significant allograft rejections (3 vs. 14 patients, p=0.005), better survival (94% vs. 78%, p=0.025) and a lower incidence of transplant vasculopathy on angiography or autopsy (3 vs. 10 patients). In a subgroup of patients, the cytotoxicity of natural killer cells was significantly lower in the pravastatin group compared to the control group [26]. In a serial intravascular ultrasound (IVUS) study performed in 93 transplant recipients, although conventional atherosclerosis risk factors did not affect the development of CAV, a greater change in serum LDL cholesterol level during the first year after transplant was associated with more severe vasculopathy, thus indicating the benefits of treating all cardiac transplant patients with statins [27]. Subsequently, the benefit of pravastatin in reducing CAV was demonstrated even at 5 years (Figure 2) [28].
[Figure 2: Five-year incidence of coronary artery disease after heart transplantation in patients receiving pravastatin versus controls. Stojanovic et al. [18].]
A prospective, randomized, unmasked study initiated in 1991 compared the efficacy of simvastatin, started on the fourth postoperative day (n=35), with that of dietary therapy alone (n=37). At 4 years, significantly reduced low-density lipoprotein (LDL) cholesterol, improved survival and a reduced incidence of CAV were seen [29]. After 4 years, patients in both groups received statins as open-label prescriptions. After 8 years, the Kaplan-Meier survival rate was 88.6% in the simvastatin group versus 59.5% in the control group (P<0.006) [30].
Subsequently, in a 12-month observational study comparing pravastatin 40 mg with simvastatin 20 mg after heart transplantation, rhabdomyolysis or myositis occurred only in patients on simvastatin, with no episodes for patients on pravastatin, despite similar survival and LDL-cholesterol reductions between the two groups. There was a trend towards an increased incidence of immunosuppression-related deaths in the simvastatin group. These effects may be due to differences in the pharmacokinetic profiles of the two drugs. Pravastatin is not metabolized by the cytochrome P450 3A4 isoenzyme, and is excreted largely unchanged, while simvastatin competes with cyclosporine and other drugs for metabolism by cytochrome CYP3A4 in the liver and small intestine [31].
Som et al. [32] conducted a systematic review of the role of statin therapy in graft vessel disease following cardiac transplantation and found consistent benefit in reducing CAV, whether the assessment was by angiography, IVUS or post-mortem. A survival benefit of statins was also noted, as was a decrease in the number of serious rejections. The post-transplantation timing of the introduction of statin therapy appeared important, and benefit was seen only in studies where statin therapy was initiated within 30 days of transplant. Furthermore, the rate of adverse events in published studies was low, with only one study showing a significantly higher incidence of myositis in statin-treated patients; while rhabdomyolysis and hepatic derangement were rare [32].
Among all the drugs investigated for the prevention of CAV, statins are the only group of drugs to be included as a class I recommendation for all heart transplant recipients by the International Society of Heart and Lung Transplantation (ISHLT) [33].
Treatment of cytomegalovirus (CMV) infection
The most common infection post-heart transplant, CMV, affects allograft endothelial function both directly (by affecting the nitric oxide pathway) and indirectly (by activating cytokines) [34]. In a study using IVUS, the 1-year change in maximal intimal thickening (MIT) assessed at 1 and 12 months after heart transplantation was compared between groups of patients routinely assigned to a preemptive strategy for treatment of CMV (i.e., anti-viral drug administration restricted to patients with laboratory indicators of CMV infection) or receiving valganciclovir prophylaxis (irrespective of CMV infection). The 1-year increase in MIT was significantly lower in patients receiving prophylaxis compared with those managed preemptively, even after adjustment for metabolic risk factors, thus suggesting a role for CMV prophylaxis in CAV prevention [35]. In another study conducted in cardiac transplant recipients who were CMV-antibody positive pre-transplant, a CMV-specific CD4 T-cell immune response in the first month after transplantation was associated with a reduction in CMV viral load and with less transplant arteriopathy. Thus, methods to enhance CMV-specific T-cell immunity may represent a therapeutic strategy for prevention of CAV [36]. Strategies to prevent CMV infection post-transplant carry a class I recommendation for the prevention of CAV in the ISHLT guidelines [33].
Mycophenolate mofetil (MMF)
In the MMF multicenter trial of 650 heart transplant patients at 28 centers, patients received either MMF or azathioprine (AZA) in addition to cyclosporine and corticosteroids. In the IVUS sub-study, the AZA group had significantly more patients with first-year MIT ≥ 0.3 mm and a significantly lower mean luminal area than the MMF group, suggesting a greater protective effect of MMF against CAV [37]. This beneficial effect of MMF may be due to its suppression of both T- and B-lymphocyte function and its reduction of arterial smooth muscle cell migration and proliferation [38]. Patients treated with MMF developed lower anti-vimentin antibody titers due to its effect on B lymphocytes, and this correlated with a lower incidence of CAV by IVUS [39]. In addition, MMF decreases activation of T-lymphocytes and HLA-DR-expressing NK cells [40]. MMF may also decrease systemic inflammatory activity in heart transplant patients, as indicated by reduced levels of high-sensitivity C-reactive protein [41].
As with other immunosuppressive agents, MMF has a significant adverse effect profile, including diarrhea, cytopenias (anemia and leukopenia) and increased risk of bacterial, pneumocystis and CMV infections [42]. MMF is used as part of the standard immunosuppressive regimen along with calcineurin inhibitors and steroids as an antiproliferative agent [33].
Proliferation Signal Inhibitors (PSI)
Proliferation signal or mammalian target of rapamycin (mTOR) inhibitors were first identified in 1970, when rapamycin was isolated from a strain of Streptomyces hygroscopicus in soil at Easter Island (Rapa Nui). It was found to have antifungal and immunosuppressive properties. Two PSIs are currently available commercially: sirolimus (SRL) (previously known as rapamycin) and its derivative everolimus [43]. PSIs form a complex with the intracellular binding protein FKBP-12 and inhibit the activity of mTOR, a serine/threonine kinase that functions within the cell as a transducer of information from growth factors and energy sensors [44]. This causes upregulation of the cyclin-dependent kinase inhibitor p27Kip1, leading to inhibition of cell cycle progression at the G1 to S phase. Everolimus also blocks interleukin-2 (IL-2)- and IL-15-driven proliferation of hematopoietic stem cells and vascular smooth muscle cells by inhibiting the activation of p70 S6 kinase [45,46].
In a multi-center trial involving 634 patients, Eisen et al. [47] showed decreased progression of intimal thickness by IVUS at 12 months in patients treated with everolimus when compared with azathioprine, when given along with cyclosporine and steroids [47]. In a study by Mancini et al. [48], cardiac transplant patients were randomly assigned to SRL or MMF/azathioprine at their annual cardiac catheterization and followed annually thereafter. As compared to the control group, the SRL group showed a significant reduction in the primary end-point of death, need for angioplasty or bypass surgery, myocardial infarction, and a >25% worsening of the catheterization score [48]. In de novo cardiac transplant recipients, Keogh et al. [49] compared SRL with azathioprine in a randomized open-label study and demonstrated that SRL-treated patients had significantly reduced intimal thickness and increased coronary lumen diameters by IVUS at 6 months and 2 years after transplantation when compared to azathioprine-treated patients [49]. Finally, in a recent multicenter randomized trial comparing MMF to everolimus after heart transplantation, the incidence of CAV (defined as an increase in MIT from baseline to month 12 of >0.5 mm) was 12.5% with everolimus versus 26.7% with MMF, and the difference remained significant irrespective of sex, age, diabetic status, donor disease, and across lipid categories [50].
In an observational study of 29 cardiac transplant recipients who were switched from calcineurin inhibitors (CNI) to SRL for renal dysfunction, compared to 40 patients who were continued on CNI, an increase in mean plaque volume and plaque index was seen with three-dimensional IVUS after a year in patients receiving CNI, but not in those who were switched to SRL [51]. This appears to be an attractive strategy, especially because CNIs stimulate fibrogenic growth factors and cause endothelial dysfunction [52]. However, concern exists regarding an increase in acute rejection with CNI discontinuation [53]. Thus, this strategy may only be applicable to patients who are further out from their transplant, have had no significant rejections, and have significant vasculopathy.
In addition to the effect of SRL on coronary anatomy, favorable effects on coronary physiology have also been demonstrated. In a small study of 27 patients, SRL therapy was associated with improved coronary artery physiology at the level of both the epicardial artery and the microvasculature, early after cardiac transplantation. There was a significant improvement in coronary flow reserve (CFR) and index of microcirculatory resistance (IMR) in the SRL group at 1 year after transplantation, but no change in the MMF group; while fractional flow reserve (FFR) declined in the MMF group but remained unchanged in the SRL group. The changes in epicardial coronary physiology may result from SRL's effect on plaque progression, while the improved microvascular function may be due to its effects on vascular remodeling and reactivity [54].
Interestingly, data analyzed from >1000 patients in 3 trials of de novo cardiac transplant recipients revealed that everolimus was associated with a lower incidence of CMV infection compared with azathioprine and MMF [55]. This may be an indirect mechanism of reduction of CAV by everolimus.
In an IVUS-based study of early (from 3-6 weeks up to 1 year post-transplant) and late (from 1 to 5 years post-transplant) CAV, both everolimus and statins were associated with a lower risk of developing markers of early CAV (increase in maximal intimal thickness). While statins were protective against late CAV development, everolimus lost its protective effect on CAV 1 to 5 years after transplant, suggesting that immune-mediated injury plays a greater role in the development of CAV early after transplant, while metabolic factors predominate later [56]. This was further explored in other studies. Arora et al. [57] compared the morphologic progression of CAV using virtual histology (VH) in patients receiving maintenance immunosuppression with everolimus versus a calcineurin inhibitor (CNI). VH analysis revealed a significant increase in the calcified and necrotic components among everolimus patients compared to controls. This increase was most prominent in patients who were >5 years post-heart transplant and was accompanied by a significant increase in levels of von Willebrand factor and vascular cell adhesion molecule [57]. In a similar study, compared with continued CNI therapy, SRL attenuated plaque progression in recipients with early conversion from CNI to SRL (<2 years post-transplant), but contributed to increases in necrotic core and dense calcium volume in those with late conversion (>6 years post-transplant) [58]. These studies suggest that the maximum benefit of PSIs lies in prevention rather than treatment of CAV.
Unfortunately, PSIs are associated with significant side-effects, which may necessitate their discontinuation in many patients. In a large cohort of maintenance heart transplant recipients taking a PSI, 16% withdrew treatment in the first year, and 25% had stopped the PSI due to severe adverse events by the fourth year [59]. These adverse effects [60] include, but are not limited to, peripheral lymphedema [61], debilitating aphthous ulceration [62], wound dehiscence and impaired wound healing [63], hyperlipidemia [64], pneumonitis [65], and anemia [66].
ISHLT guidelines give a class IIa recommendation to substituting MMF or azathioprine with a PSI in patients with established CAV [33].
Future Trends
While several pharmacologic strategies are available for prevention of CAV, treatment strategies are limited. Focal coronary lesions can be treated percutaneously with stenting, but the ultimate treatment for diffuse CAV is re-transplantation, which is neither the most feasible nor the safest option for most patients. Thus, there is a need to expand the drug armamentarium for prevention and treatment of CAV. Some of the pharmacologic strategies explored in animal models are discussed below.
Memory T (Tmem) cells are activated T cells that persist after the initial T cell response and provide continual immune protection to the host. Most infiltrating T cells in coronary arteries from patients with CAV express the phenotype of Tmem cells, suggesting that these cells may play an important role in the development of CAV [67]. OX40 (CD134) is a member of the tumor necrosis factor receptor (TNFR) superfamily. The OX40-OX40L signaling pathway has been found to play a key role in the survival and homeostasis of Tmem cells [68]. Wang et al. [69] demonstrated that CD40L-deficient Tmem cells induce CAV in cardiac allografts, and that blockade of the OX40 signaling pathway using an anti-OX40L mAb reduces Tmem cell development and prevents CAV in a mouse cardiac transplantation model. Thus, the OX40 pathway may have potential for prevention of CAV in cardiac transplant recipients [69].
In animal studies, cholesterol-rich nanoemulsions (LDE) resembling LDL combined with paclitaxel (LDE-paclitaxel) injected intravenously were demonstrated to reduce intimal width and reduce destruction of the media [70]. This may be a promising strategy for further exploration in clinical studies.
The oxidative stress associated with ischemia-reperfusion of cardiac allografts leads to cytokine production and expression of proinflammatory adhesion molecules. This is one of the most important alloantigen-independent factors associated with CAV, and various strategies to ameliorate this oxidative stress have been studied. Antioxidants such as riboflavin [71] and superoxide dismutase mimetics [72] have been found to decrease oxidative stress and reduce the incidence of CAV in murine models of cardiac transplantation. Peroxisome proliferator-activated receptor-γ (PPAR-γ) agonists such as pioglitazone also reduce oxidative stress and have been shown to reduce CAV [73].
Despite the evidence from animal studies, none of these pharmacologic strategies has made it to clinical trials. Hopefully some of these strategies will eventually be added to the clinical armamentarium for tackling CAV.
Conclusion
CAV remains a vexing problem in cardiac transplantation, with prevention being better than treatment. Drug therapy for CAV has modest efficacy and is limited by toxicity. Further research is needed in this area to tackle CAV and prolong graft survival.
"year": 2014,
"sha1": "17aad4051458baa233bb9e4c75b5672a26ba487a",
"oa_license": "CCBY",
"oa_url": "https://www.omicsonline.org/open-access/drugs-for-the-prevention-and-treatment-of-cardiac-allograft-vasculopathy-2329-6607.1000123.pdf",
"oa_status": "HYBRID",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "f05541b112175035c7b7e8919e4a3b4208e165b2",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Hailstorm Formation Enhanced by Meso-γ Vortices along a Low-Level Convergence Line
During a hailstorm event, near-surface meso-γ vortices along a convergence line interacted with hail cells. Herein we investigate this interaction by using observational data and a high-resolution simulation of a hailstorm that occurred over Taizhou (Zhejiang, China) on 19 March 2014. The 10-m surface wind data from automatic weather stations show that several meso-γ vortices or vortex-like disturbances existed over the convergence zone and played a vital role in the evolution of the hailstorm and the location of the hail. The model results agree with the observations and show an even closer correlation between the hail and the low-level meso-γ vortices than the observations do. The model simulation indicates that such low-level meso-γ vortices can be used to predict the hail fallout zone over the following 10 minutes. The low-level meso-γ vortices originated over the convergence zone, then fed back into the convergence field and provoked a stronger updraft. Vorticity was generated primarily by stretching and extended upward by tilting. A three-dimensional (3-D) flow analysis shows that the existence of the low-level meso-γ vortices could help enhance a local updraft. Furthermore, the simulation reveals that the low-level meso-γ vortices formed in the bounded weak echo region (WER) at the front of the hail cell, enhancing convergence and strengthening updrafts. Graupel was broadly located between the 0°C isothermal line and the top of the clouds, roughly between the 0 and −20°C isothermal lines. Accordingly, the hailstones grew rapidly. The suitable environment and the positive effect of the meso-γ vortices on the updrafts enabled hailstorm formation.
Introduction
Hailstorms adversely affect agriculture and society. In the last few years, several studies have reported an increasing incidence of hail damage and increasing hailstorm durations (e.g., Niall and Walsh, 2005; Zhang et al., 2008; Kunz et al., 2009; Botzen et al., 2010). Consequently, hailstorm studies have attracted increasing interest. Various advanced observational techniques and numerical weather prediction (NWP) models have been used to study hail (e.g., Witt and Nelson, 1991; Hong and Fan, 1999; Li et al., 2002; Fang et al., 2005; Donavon and Jungbluth, 2007; Zhang and Li, 2019). However, it is still difficult to observe and predict the evolution of a hailstorm at small spatial and temporal scales (Zhou et al., 2019; Zhang et al., 2020).
Hail is a significant deep convection phenomenon. Deep convection initiation and the organization of convective storms are highly relevant to boundary layer convergence lines (Wilson and Schreiber, 1986). Fankhauser et al. (1995) found that deep convection was initiated along a low-level convergence line because the vertical velocity maximum was located over the associated convergence zone. The interactions between different convergence zones (i.e., sea-breeze fronts, gust fronts, drylines, and cold fronts) create preferred locations for deep convection (e.g., Lee et al., 1991; Harrison et al., 2009; Sun and Fang, 2013). A low-level convergence zone with misocyclones (small-scale vertical vortices with diameters smaller than 4 km; Markowski and Richardson, 2010) or mesoscale vortices can enhance updrafts so as to induce deep convection (e.g., Wilson et al., 1992; Atkins et al., 2004, 2005; Buban et al., 2012; Xu et al., 2015; Zhai et al., 2015). Several studies have demonstrated that strong updrafts are common at the initial stage of hail formation (e.g., Browning and Foote, 1976; Heymsfield et al., 1980; Kennedy et al., 2014). Nelson (1983) used multiple-Doppler data and a numerical hail model to analyze the influence of the storm flow structure on hail growth, and found that the storm updrafts were sufficiently strong to lift embryos before a full-scale hailstorm occurred. Knight and Knight (2001) demonstrated that strong updrafts played an important role in sustaining large hail. In addition, cold fronts have been associated with significant updrafts that could enable the occurrence of hailstorms through convergence and wind shear (e.g., Locatelli et al., 2002; Schemm et al., 2016).
Pulse-type storms refer to weakly forced storms associated with severe weather (Miller and Mote, 2017). These storms are generally not tornado producers but often produce large hail and/or damaging winds. Such storms are generally characterized by slow movement, weak flow and shear environments, and an elevated core of high reflectivity; they are short lived (typically lasting from 30 min to 2 h), appear randomly, and are not triggered by any organized dynamic feature (Cerniglia and Snyder, 2002). Pulse-type storms, especially pulse-type hailstorms, are often difficult to provide warnings for. Numerous NWP models have been specifically designed for predicting and simulating hailstorms. Even though these models have several disadvantages, they are currently the best tools available for analyzing the structure of hail cells, hail formation, and hail growth mechanisms (e.g., Orville, 1977; Farley, 1987; Speer et al., 2004; García-Ortega et al., 2007). Orville and Kopp (1977) used a two-dimensional cloud model to simulate the evolution of hail cells and hailstorms; they revealed the structure of hail cells and the life history of a hailstorm. Guo and Huang (2002) used a three-dimensional (3-D) cloud model with hail-bin microphysics to successfully simulate a multicellular hailstorm and found that the formation of a feeder cell with a weaker updraft along the side of a main cell was important for the evolution of a hailstorm. Chevuturi et al. (2014) used the Weather Research and Forecasting (WRF) model to study a winter hailstorm event and found that deep instability in the atmospheric column led to hailstorm formation.
In this study, we focus on a pulse-type hailstorm (Luo et al., 2017) that occurred in Taizhou (Zhejiang, China) during 0700-1000 UTC 19 March 2014, and we identify a series of low-level meso-γ vortices along the low-level convergence lines. The horizontal dimension of this type of low-level vortex, for a vertical vorticity magnitude of over 1 × 10⁻⁴ s⁻¹, varies from 1 to 20 km. These low-level vertical vortices are similar to misocyclones but differ in their horizontal scales; therefore, we define them as meso-γ vortices (Orlanski, 1975). This study aims to demonstrate the existence of meso-γ vortices along convergence lines and to investigate the relationship between these meso-γ vortices and the hail location based on the output of a high-resolution model. Furthermore, the influence of the meso-γ vortices on hail cells (e.g., on the updraft) is examined.
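To make the detection criterion above concrete, the following is a minimal sketch (not the authors' actual procedure) of how candidate meso-γ vortex points could be flagged from gridded near-surface winds; the function name, grid orientation, and threshold handling are illustrative assumptions.

```python
import numpy as np

def flag_meso_gamma_vortex_points(u, v, dx, zeta_thresh=1e-4):
    """Flag grid points whose vertical relative vorticity magnitude exceeds
    the paper's meso-gamma criterion (|zeta| > 1e-4 s^-1).

    u, v : 2-D wind components (m s^-1) on a uniform grid, with rows assumed
           to run south->north and columns west->east.
    dx   : grid spacing (m).
    Returns the vorticity field zeta = dv/dx - du/dy and a boolean mask."""
    dudy = np.gradient(u, dx, axis=0)  # du/dy along the south-north axis
    dvdx = np.gradient(v, dx, axis=1)  # dv/dx along the west-east axis
    zeta = dvdx - dudy
    return zeta, np.abs(zeta) > zeta_thresh
```

Contiguous flagged regions roughly 1-20 km across would then correspond to the meso-γ scale used here.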
The rest of this paper is organized as follows. Section 2 presents an overview of the hailstorm event, including the damages, the synoptic background, and the surface observations. The design of the model experiment and a detailed comparison of the model results with the observational data are presented in Section 3. An analysis of the relationship between the meso-γ vortices and the hail based on the numerical simulation is presented in Section 4, and conclusions are drawn in Section 5.
Case overview
Severe convective weather phenomena occurred in most parts of Zhejiang Province on 19 March 2014. Thunderstorms covered nearly the entire province, and the average 1-h accumulated rainfall during this event was approximately 10-20 mm. This severe convective system moved southeastward, and a hailstorm hit southeastern Zhejiang Province (i.e., Taizhou and Wenzhou). Damage from this hailstorm, one of the greatest in Taizhou City history, was estimated at around 70 million Chinese Yuan. According to the observational data from the Taizhou Weather Bureau, on the afternoon of 19 March 2014, thunderstorms and severe winds formed and rapidly moved southeastward between 0810 [1610 local standard time (LST)] and 1020 UTC (1820 LST). During the passage of this severe convection, gusts of force 10-12 on the Beaufort scale and large surface hail were recorded at several automatic meteorological stations (mesonets) in Taizhou City. The maximum diameter of the hailstones was 3.3 cm, observed at the Hongjia synoptic station (Huang and Gao, 2016). This event was mainly caused by a cold front. Taizhou City was situated ahead of the cold front and under the influence of a warm temperature ridge, which brought warm and wet air (Luo et al., 2017).
Surface feature evolution
Surface data from 0740 to 0850 UTC, obtained from conventional and automatic weather stations, were provided by the Hangzhou Weather Bureau. The locations of the stations are presented in Fig. 1. A Cressman objective analysis was used to interpolate the station data onto a grid (Gilchrist and Cressman, 1954).
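For readers unfamiliar with the Cressman scheme, the sketch below shows a single successive-correction pass with the classic (R² − r²)/(R² + r²) weights. It is a simplified illustration, not the operational analysis used here; in particular, the station-mean first guess is an assumption.

```python
import numpy as np

def cressman_pass(xs, ys, obs, gx, gy, R):
    """One Cressman correction pass onto a grid.

    xs, ys, obs : 1-D station coordinates (m) and observed values.
    gx, gy      : 2-D grid-point coordinate arrays.
    R           : radius of influence (m).
    Starting from a station-mean first guess, each grid point is corrected
    by a distance-weighted mean of the observation increments within radius
    R, with weights w = (R^2 - r^2) / (R^2 + r^2)."""
    guess = obs.mean()
    analysis = np.full(gx.shape, guess)
    for j in range(gx.shape[0]):
        for i in range(gx.shape[1]):
            r2 = (xs - gx[j, i]) ** 2 + (ys - gy[j, i]) ** 2
            w = np.where(r2 < R ** 2, (R ** 2 - r2) / (R ** 2 + r2), 0.0)
            if w.sum() > 0.0:  # keep the first guess where no stations fall
                analysis[j, i] += np.sum(w * (obs - guess)) / w.sum()
    return analysis
```

In practice the pass is typically repeated with a decreasing radius R, so large-scale structure is captured first and finer detail is added later.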
In Fig. 2, the yellow-shaded region indicates the terrain height, which reveals the main geomorphological features of Taizhou City. These data show that the combination of a cold front and a surface convergence line was the dominant factor in this hailstorm. Taizhou City extends northeast-southwest along a valley, which coincides with the convergence line (the southern red dashed line in Fig. 2a). A meso-γ vortex (the red arrow in Fig. 2a, with a horizontal scale of approximately 20-30 km) existed at the southern end of the convergence line. The southern convergence line and the meso-γ vortex remained essentially unchanged for about 1 h (from 0640 to 0740 UTC; figure omitted). This convergence line and the meso-γ vortex formed because of the distinctive terrain in the area: the southeast airflow was blocked by the terrain and formed a convergence line along the foothill, and the terrain forced a meso-γ vortex to form at the bottom of the valley. Studies (e.g., Levinson and Banta, 1995; Aebischer and Schär, 1998) have demonstrated that terrain-forced vortices can be found in foothill areas and that topographic effects can provide a low-level source of vorticity. The cold front (the northern red dashed line in Fig. 2a), with convergence centers of −1 × 10⁻⁴ s⁻¹ (blue thin dashed lines in Fig. 2b), existed between the rainfall region (color-shaded region in Fig. 2a) and the northern foot of the Tiantai Mountain (marked TTM in Fig. 2). Twenty minutes later (Figs. 2c, d), the cold front and the rain belt were located over the TTM, and the cold air provided by the northwesterly flow started to come into contact with the southern convergence line. The convergence then increased significantly, reaching a maximum of −2 × 10⁻⁴ s⁻¹. At that time, the intensity of the meso-γ vortex increased, and a vorticity center of 1 × 10⁻⁴ s⁻¹ (green-shaded in Fig. 2d) was established near the convergence center, which indicated a vortex distribution in the surface wind field. The 10-min accumulated precipitation over the next 10 minutes also increased significantly, and a precipitation center appeared with a value of over 10 mm. In addition, a hailstorm occurred near the meso-γ vortex at 0800 UTC (red triangles in Fig. 2c). Figure 2e shows the surface wind at 0820 UTC and the 10-min accumulated precipitation between 0820 and 0830 UTC. At that time, the southern convergence line had completely merged with the cold front (which was ahead of the rain belt), crossed the TTM, and caused a series of hailstorms in the valley along the convergence line. The wind convergence increased further, and the meso-γ vortex temporarily disappeared. After 30 min, the cold front moved to Linhai (marked LH in Fig. 2). An anticyclonic vortex center, associated with a vorticity center of −1 × 10⁻⁴ s⁻¹, emerged at the convergence center. According to the disaster report made by the Taizhou Weather Bureau, a severe convective storm outburst was reported at 0850 UTC in LH. This phenomenon might indicate that the meso-γ vortices (both cyclonic and anticyclonic) could help enhance the near-surface convergence. The maximum hail diameter at the Hongjia weather station was 25 mm, falling to the ground 10 min later (0900 UTC), after the anticyclonic and cyclonic vortex distribution formed.
From the evolution of the surface wind field and the development of the severe storms, it appeared that the interaction between the cold air ahead of the cold front and the convergence line located along the foot of the TTM strengthened the severe convective storm. Moreover, the vortex distribution along the convergence line enhanced the convective storm development and somewhat reflected the hail fallout zone.
With regard to the local geomorphology, TT also lies in a valley; therefore, we zoomed in on the TT region to check whether this valley exhibited a similar phenomenon. Figure 3 shows the evolution of the wind field and the severe storm in the TT region (the black box in Figs. 1 and 2a) at 0740-0810 UTC. To present the mesoscale information in more detail, the Shuman-Shapiro filter (e.g., Shuman, 1957; Shapiro, 1970; Wang et al., 2007) was used to process the station data. As shown in Fig. 3, there is a meso-γ vortex in this region (marked "C"), and a convergence line existed along the TTM. At 0710 UTC (Fig. 3a), a meso-γ vortex circulation ("C", with a horizontal scale of about 10 km) was located at the bottom of the valley. At that time, "C" had not formed a closed vortex and was still a vortex circulation. This vortex should also be a terrain-forced vortex. Half an hour later (0740 UTC; Fig. 3b), the convergence line was still along the TTM, with the convergence increased. The vortex "C" formed and was combined with a convergence center. After 20 min, a hailstorm occurred at the position of the vortex "C"; then, "C" disappeared. After 30 min, a strong reflectivity center (over 60 dBZ) was observed in the vortex region. The reflectivity at the meso-γ vortex location was stronger than elsewhere. Reflectivity data were obtained from the Taizhou station; the radar wavelength is 10 cm, the beam width is 0.99°, and the pulse repetition frequency (PRF) is between 322 and 1282 Hz. We plotted the cross-section (Fig. 4b) along A-B (Fig. 4a). The meso-γ vortex was located to the east of the strong echo column, and a strong reflectivity center was located over the meso-γ vortex. The strong echo column was tilted toward its top. A weak echo region (WER) existed in the vortex region.
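The Shuman-Shapiro filtering mentioned above is, in essence, a smoothing pass followed by a desmoothing pass that selectively damps two-grid-interval noise. A minimal 2-D version is sketched below; the coefficients (+0.5, −0.5) and the periodic boundary treatment are simplifying assumptions, not the exact operator used in the paper.

```python
import numpy as np

def shuman_shapiro_2d(field, passes=1):
    """Smoother-desmoother in the spirit of the Shuman (1957) / Shapiro
    (1970) filter: a 3-point operator applied along each axis with
    coefficient +0.5 (smooth) and then -0.5 (desmooth), which strongly
    damps 2-grid-length noise while retaining longer wavelengths.
    Boundaries are treated as periodic here for brevity."""
    def one_pass(g, nu, axis):
        gp = np.roll(g, -1, axis=axis)
        gm = np.roll(g, +1, axis=axis)
        return g + 0.5 * nu * (gp + gm - 2.0 * g)

    out = np.asarray(field, dtype=float).copy()
    for _ in range(passes):
        for nu in (0.5, -0.5):       # smooth, then desmooth
            out = one_pass(out, nu, axis=0)
            out = one_pass(out, nu, axis=1)
    return out
```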
As the scale of these vortices was small and the spatial and temporal resolutions of the observational data were limited, we could not ascertain their effects. Therefore, a high-resolution simulation was used to conduct a detailed analysis regarding the presence and effects of the surface or near-surface meso-γ vortices.
Model configuration and verification
In this section, we introduce the model configuration and verify the model results using the observational data from the conventional and automatic weather stations, the sounding data from the Hongjia observation station, and the disaster report from the Taizhou Weather Bureau.
Model configuration
The Advanced Research version of the WRF model (ARW; Skamarock et al., 2005; Klemp et al., 2007) was used to reproduce this case. The numerical experiment was performed using WRF v3.7.1 with four domains and two-way nesting (Fig. 5). The model domains consisted of a 27-km grid with a mesh size of 274 × 203, a 9-km grid with a mesh size of 448 × 319, a 3-km grid with a mesh size of 580 × 421, and a 1-km grid with a mesh size of 400 × 400. There were 45 vertical levels, with 12 levels below 2 km. The time step was 90 s in Domain 1, with a time-step ratio of 1 : 3 : 3 : 3. All four domains were integrated for 24 h, from 1200 UTC 18 to 1200 UTC 19 March 2014. The initial and outermost lateral boundary conditions were provided by the 1°-resolution Final Operational Global Analysis data (FNL; https://rda.ucar.edu/datasets/ds083.2/) at 6-h intervals. The following model physics schemes were used: (1) the Milbrandt-Yau two-moment microphysics scheme (Milbrandt and Yau, 2005a, b), a multi-moment bulk microphysics parameterization comprising six distinct hydrometeors, with two liquid- and four ice-phase categories; (2) the Rapid Radiative Transfer Model longwave radiation scheme (Mlawer et al., 1997); (3) the Dudhia (1989) shortwave radiation scheme; (4) the Yonsei University planetary boundary layer scheme (Hong et al., 2006); and (5) the Grell 3D cumulus scheme (Grell and Dévényi, 2002) in Domains 1 and 2, with no cumulus scheme in Domains 3 and 4.
Model verification
3.2.1 Rainfall
A comparison of the simulated and observed 1-h accumulated precipitation shows that the WRF model reproduces the evolution of the precipitation systems well. Although this cold-front event did not produce much precipitation, the evolution of the precipitation systems likely reflected the evolution of the cold front well. Figure 6 indicates that the rain belt moved from northwestern to southeastern Zhejiang Province. The simulated precipitation (shaded in Fig. 6) was larger than the observed precipitation (contours in Fig. 6), especially in the western section of the rain belt. However, the location of the simulated rain belt and its movement were similar to the observations, particularly in the eastern section of the rain belt. Therefore, despite the overprediction of precipitation, the model exhibited good skill in predicting the movement of the rain belt and the cold-front process. In addition, focusing on the period when the hailstorm occurred in Taizhou (shown in Fig. 6d), an obvious increase in the 1-h accumulated precipitation was noted in both the simulation and the observations. During the hailstorm period (0800 UTC), the location and intensity of the simulated precipitation center in the coastal areas of Zhejiang Province closely resembled the observations, so this was taken as the key period and area for further analysis. Therefore, in terms of the evolution of the simulated 1-h accumulated precipitation, the WRF-simulated path and movement speed of the rain belt are in good agreement with the observations, particularly for Taizhou City. Figure 7 compares the simulated hail fallout zone to the observed hail fallout zone. The observed hail fallout zone was derived from the disaster report drawn up by the Taizhou Weather Bureau. The simulated hail precipitation amounts [an approach also used by Chevuturi et al. (2014)] were accumulated from 0750 to 0900 UTC for the model domain at 1-km horizontal resolution. As shown in Fig. 7, the model simulation generally agreed with the observations. For instance, the WRF model reproduced the main hail fallout zones in TT, XJ, and LH well. In addition, the two directions of hailstorm propagation (the red arrows in Fig. 7) were simulated well. Although the hail fallout zone to the south of XJ was not simulated by the WRF model, the model still demonstrated a good ability to simulate this hailstorm when the main hail fallout zone is considered.
Sounding
To demonstrate the capacity of the model to reproduce the atmospheric fields in the regional-scale environment, we compared the simulated and observed sounding data at Hongjia station (available at http://weather.uwyo.edu/upperair/sounding.html), the only sounding station in Taizhou City, on 19 March 2014. The simulated and observed soundings at 0000 UTC (Fig. 8a), before the hailstorm occurred, show that the model reproduced the observed profiles of temperature and horizontal winds on the morning of 19 March 2014 reasonably well. However, there were some differences between the simulated and observed moisture profiles in the middle troposphere. There was strong wind shear at low levels and an inversion layer below 850 hPa. Although the simulated and observed moisture profiles had similar trends (i.e., the same trend below 850 hPa, a relatively dry layer between 400 and 850 hPa, and a slight increasing trend above 400 hPa), the simulated moisture in the 400-850-hPa layer was higher than observed. This may be caused by an overestimation of the middle-tropospheric southwesterly wind speed, which always carries plenty of water vapor. Furthermore, both the simulation and observation had warm low-level advection, which helped increase the low-level moisture and instability. Several studies have shown that a dry layer over a warm moist layer close to the ground is favorable for the occurrence of hailstorms (e.g., Craven et al., 2002; Yu and Zheng, 2020). Therefore, the environment was suitable for a hailstorm to occur. In addition, the observed wind at 700 hPa was stronger than the simulated wind. This error was likely caused by insufficient information in the initial and boundary conditions of the model, particularly mesoscale information. The observed wind at 700 hPa increased quickly with height; therefore, this strong wind could help form strong turbulence and instability. The model could not capture this rapid increase in the wind at 700 hPa. However, the model also had vertical wind shear, the 600-hPa wind speed demonstrated an obvious increase, and the middle troposphere had more moisture than observed; therefore, convection could be triggered. After the hailstorm (1200 UTC 19 March 2014; Fig. 8b), the simulation also reproduced the temperature and horizontal wind well, and the moisture profile was reproduced reasonably well. In summary, despite some deficiencies in the detailed features, these results imply that the simulation reproduces the evolution of the rain belt, the hail fallout zone, and the regional environment reasonably well. Based on the general agreement between the simulation and the observations, we can use the model output in further analysis.
Characteristics of meso-γ vortices along the convergence line
4.1 Evolution of the meso-γ vortices
To examine the influence of the local geomorphology in Taizhou City, we chose σ = 0.954 (a height of about 400-500 m) as a representative level for the near-surface horizontal flow. The results show that the vortices correspond well with the hail fallout zone over the following 10 minutes. Figure 9 shows the streamlines and the convergence together with the 10-min accumulated hail precipitation at TT (the northern part of Taizhou City). The hail fallout zone moved as the meso-γ vortices shifted along the convergence line. At 0720 UTC (Fig. 9a), a persistent southwesterly flow existed to the south of the TT region, and a convergence line (the red dashed line) was located to the north of the TT region. In addition, a flow confluence had just appeared at the generation location of vortex "V1" (the red circle in Fig. 9a). Vortex "V1" first occurred to the west of Longxi at 0730 UTC (Fig. 9b). The convergence line (the red dashed line) at that time was still located to the north of the TT region. The simulated vortex and convergence-line pattern are similar to the observed (Fig. 3a). At 0740 UTC, the convergence line moved southward and merged with the meso-γ vortex, and the convergence strength to the west of Longxi increased. "V1" was still stabilized to the west of Longxi (Fig. 9c), which was also at the bottom of the valley, similar to the observed meso-γ vortex, and was accompanied by a convergence center. The hail precipitation accumulated over the next 10 minutes showed that the hail fallout zone was strongly correlated with "V1." With the movement of the convective systems, the convergence also continued to shift southeastward. At 0750 UTC, "V1" moved to Longxi (Fig. 9d), accompanied by an increase in convergence, and was followed by the 10-min hail precipitation center. Then, "V1" started to fade, and another vortex, "V2", formed in the strong convergence zone to its east at 0800 UTC (Fig. 9e). The vorticity intensity of "V2" exceeded 1 × 10⁻³ s⁻¹. The hail fallout over the 10 minutes after 0800 UTC showed that the second hail precipitation center was located at the position of "V2." Subsequently, "V2" moved northeastward, in accordance with the northeastward movement of the hail fallout zone (figure not shown).
To better examine the evolution of the near-surface vortices, we chose the "V1" vortex as a representative sample. The time-height series of "V1" (Fig. 10) describes the 9-point (3 km × 3 km) average around the center of the vortex (we defined the vortex center according to the streamlines at σ = 0.954). Figure 10a shows the change of vorticity and divergence in the initiation region. The above analysis showed that "V1" initially formed at 0730 UTC. The result suggests that positive vorticity existed before convergence: although there was no obvious vortex in the streamline field, the positive vorticity already existed. At 0640 UTC, there was no storm occurring near the TT region. The positive vorticity center existed during 0700-0710 UTC, and after its occurrence a convergence center was generated. Figure 10b shows the fields at the vortex center, which indicate the evolution of the vortex. At the onset stage of the vortex, positive vorticity originated at the near surface, which indicates that the near-surface vortex was derived from the near-surface flow. Gradually, the positive vorticity center extended upward, and negative vorticity appeared after 0800 UTC. As shown in Fig. 9, "V1" started to fade at 0810 UTC at the level of σ = 0.954 (the third model level), in agreement with this time-height plot. The convergence field shows that the vortex formed in the convergence zone. After "V1" appeared, the convergence increased significantly, and the maximum convergence reached −3.6 × 10⁻³ s⁻¹. During the fading period of "V1", the convergence decreased. Evidently, the relationship between the vorticity and divergence fields reflects a feedback mechanism between vorticity and convergence.
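The time-height series above rests on two simple diagnostics: vorticity and divergence computed from the model winds, then averaged over a 9-point box centered on the vortex. A minimal sketch of those two steps follows; the array layout and function names are illustrative assumptions.

```python
import numpy as np

def vorticity_divergence(u, v, dx, dy):
    """Relative vertical vorticity (dv/dx - du/dy) and horizontal
    divergence (du/dx + dv/dy) by centered differences on a uniform grid;
    u and v are 2-D (y, x) arrays of wind components (m s^-1)."""
    dudx = np.gradient(u, dx, axis=1)
    dudy = np.gradient(u, dy, axis=0)
    dvdx = np.gradient(v, dx, axis=1)
    dvdy = np.gradient(v, dy, axis=0)
    return dvdx - dudy, dudx + dvdy

def nine_point_mean(field, j, i):
    """3 x 3 average (i.e., 3 km x 3 km at 1-km grid spacing) around the
    vortex-center grid point (j, i), as used for the time-height series."""
    return field[j - 1:j + 2, i - 1:i + 2].mean()
```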
To further examine the physical processes responsible for the development of the vortex, a budget of the vertical vorticity is a favored approach (e.g., Zhang, 1992; Knievel and Johnson, 2003; Wang et al., 2016). The vertical vorticity equation can be written as follows:
\[
\frac{\partial \zeta}{\partial t} = -\left(u\frac{\partial \zeta}{\partial x}+v\frac{\partial \zeta}{\partial y}\right) - w\frac{\partial \zeta}{\partial z} - \zeta\left(\frac{\partial u}{\partial x}+\frac{\partial v}{\partial y}\right) + \left(\frac{\partial w}{\partial y}\frac{\partial u}{\partial z}-\frac{\partial w}{\partial x}\frac{\partial v}{\partial z}\right), \qquad (1)
\]
where u, v, and w are the wind components, and ζ is the vertical vorticity (ζ = ∂v/∂x − ∂u/∂y). The terms on the right-hand side of Eq. (1) represent vorticity changes due to horizontal advection, vertical advection, stretching, and tilting, respectively. The left-hand side of Eq. (1) indicates the change in the vertical vorticity. Based on the foregoing analysis, "V1" originated along the convergence line; therefore, the vertical stretching of the vorticity must have played an important role in its genesis. Moreover, the conjunction of the outflow of the severe convective storm and the topographic convergence line may have acted to tilt the vortex lines, so the tilting generation of vorticity was also important. The results of the vorticity budget (Fig. 11) also show that the stretching (Fig. 11c) and tilting (Fig. 11d) terms were more important than the others (Figs. 11a, b), particularly at low levels. The accuracy of the vorticity budget was examined by comparing the sum of the terms on the right-hand side of Eq. (1) (Fig. 11e) with the tendency of the local relative vorticity (Fig. 11f); the two were similar. The residual of the local relative vorticity tendency and the sum of the right-hand-side terms (Fig. 11g) shows two weak positive centers, at 0750 and 0800 UTC. The residual term is complex; it includes the numerical error, the frictional effect, and so on. Xu et al. (2015) treated the residual term as the frictional effect term and found that surface drag was important for mesovortex genesis. In this case, the center at 0750 UTC probably contained the frictional effect, while the other center was more likely numerical error: as the position of this center was high, the effect of surface drag should be very weak, and it is also reasonable that the numerical error increased with integration time. However, the residual term was small compared with the stretching and tilting terms, so it was not given considerable attention. The analysis of the vorticity budget also indicated that the stretching and tilting effects were dominant in our case (compared to the contributions of horizontal and vertical advection). At the onset stage (0730 UTC), stretching was the major factor in the generation of the low-level vortex, whereas tilting had a weak negative effect. "V1" formed in a convergence region whose depth exceeded 2 km; therefore, it is plausible that stretching played an important role at the onset stage of the meso-γ vortex. The skew-T plot (Fig. 8a) indicates that the near-surface wind and vertical wind shear were very weak, and the low-level updraft near the surface (Fig. 12a) was also weak, consistent with the small tilting contribution at this stage.
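A sketch of how the four right-hand-side terms of Eq. (1) could be evaluated on gridded model output with finite differences follows; for simplicity it assumes a uniform Cartesian grid rather than the model's stretched sigma levels, and the names are illustrative.

```python
import numpy as np

def vorticity_budget_terms(u, v, w, dx, dy, dz):
    """Evaluate the right-hand-side terms of Eq. (1) on (z, y, x) arrays:
    horizontal advection, vertical advection, stretching, and tilting.
    A uniform grid is assumed for simplicity."""
    d = lambda f, s, ax: np.gradient(f, s, axis=ax)
    zeta = d(v, dx, 2) - d(u, dy, 1)                     # zeta = dv/dx - du/dy
    hadv = -(u * d(zeta, dx, 2) + v * d(zeta, dy, 1))    # horizontal advection
    vadv = -w * d(zeta, dz, 0)                           # vertical advection
    stretch = -zeta * (d(u, dx, 2) + d(v, dy, 1))        # stretching
    tilt = d(w, dy, 1) * d(u, dz, 0) - d(w, dx, 2) * d(v, dz, 0)  # tilting
    return hadv, vadv, stretch, tilt
```

The residual between the local tendency ∂ζ/∂t and the sum of these four terms then collects friction and numerical error, as discussed above.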
3-D flow analysis of the meso-γ vortices
The evolution of the 3-D flow is shown in Fig. 12, which we used to indicate the variations of u, v, and w. The display region of this 3-D flow is depicted by the black box in Fig. 9b. Figure 12a shows the 3-D flow at the center of vortex "V1" in TT at 0730 UTC. According to the horizontal streamline analysis at that time (Fig. 9), "V1" was at the bottom of the valley and had not merged with the convergence line caused by the severe convective system. As shown in Fig. 12a, there was obvious turbulence near the surface; however, the updraft was not strong. After 10 min, there was a significant updraft of all the parcels at the center of "V1", which indicates that the existence of the low-level vortex enhanced the local updraft. At that time, "V1" and the northern convergence line merged, and the vorticity of "V1" increased. The 3-D flow indicates that these parcels were initiated by over-mountain flows and by the outflow from the storm along the near-surface convergence line. The low-level vortex then formed more completely. At 0750 UTC (Fig. 12b), the flow rotated and ascended simultaneously, and the updraft showed a pronounced enhancement. In the meantime, accompanying the hail fallout, there was a downdraft at the hail-shooting zone (the red-shaded region).
Vertical structure
To analyze the vertical structure of "V1" and the relevant convective cell, we plotted the cross-section along M-N (Fig. 9c). As shown in Fig. 13a, the near-surface meso-γ vortex "V1" was located near the strong echo wall, the near-surface flows (the cold outflow from the convective system and the warm easterly flow) converged over "V1", and a strong updraft, tilted toward the convective system, reached its top. A WER existed in association with "V1". According to the abovementioned analysis, the existence of "V1" may help enhance the updraft. The maximum reflectivity to the west of the WER exceeded 50 dBZ and extended to the ground. Another strong echo center of 42 dBZ appeared aloft between 6- and 12-km altitude and extended into the overhang. The overhang was prominent, and a roll circulation occurred in the overhang region, allowing the external airflow to be well mixed with the airflow in the convective system. The divergence field along the cross-section (Fig. 13b) shows an obvious convergence region over "V1"; the maximum convergence exceeded −2 × 10⁻³ s⁻¹. Moreover, a divergence region was observed at the top of the convective system. This pattern of low-level convergence and upper-level divergence could help promote the development of the storm system. The potential temperature cross-section (Fig. 13c) shows that "V1" was located in the high-potential-temperature region, which contained high levels of unstable energy. This high-potential-temperature region corresponded to the strong convergence region and the strong echo region. In addition, the heights of the 0 and −20°C isotherms were appropriate for hail. Promoted by the powerful updraft, graupel was broadly and horizontally widespread between the 0°C isotherm and the top of the convective cloud. The hail mixing ratio was mainly located in the strong updraft region in front of the strong echo column. This strong updraft contributed to maintaining the growth of the hailstones.
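A vertical cross-section like the one along M-N can be extracted from gridded output by sampling each model level along the line between two grid points. The sketch below uses bilinear horizontal interpolation; the function and argument names are illustrative assumptions.

```python
import numpy as np
from scipy.ndimage import map_coordinates

def vertical_cross_section(field3d, j0, i0, j1, i1, npts=200):
    """Sample a (z, y, x) field along the straight line from grid point
    (j0, i0) to (j1, i1), returning a (nz, npts) section suitable for
    plotting reflectivity, divergence, or potential temperature."""
    jj = np.linspace(j0, j1, npts)
    ii = np.linspace(i0, i1, npts)
    section = np.empty((field3d.shape[0], npts))
    for k in range(field3d.shape[0]):
        # order=1 -> bilinear interpolation in the horizontal plane
        section[k] = map_coordinates(field3d[k], [jj, ii], order=1)
    return section
```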
Conclusions
Observational data from the mesonet and high-resolution output from a WRF model were used to study a hailstorm that occurred on 19 March 2014 in Taizhou City, Zhejiang Province, China. In particular, the mesonet data were used to examine the evolution of the surface flow and the time series of surface pressure and wind, while the WRF model was used to reproduce this case and decipher the genesis of the hail and the impact of the low-level meso-γ vortices. A series of low-level meso-γ vortices was observed by mesonets in China. These low-level meso-γ vortices formed along the near-surface convergence line and showed a strong correlation with the location of severe convective weather (i.e., torrential rainfall and hail). The streamline field suggests the formation of positive and negative vorticity centers, which could be associated with the maximum convergence centers. The 10-min accumulated precipitation was closely related to the convergence and vorticity centers. Accordingly, a high-resolution simulation using the WRF model was performed.
The model near-surface wind field clearly showed that several meso-γ vortices existed along the convergence line and exhibited good correlation with the hail fallout zone. The simulated streamline field showed that hail occurred 10 min after the occurrence of the meso-γ vortices. The evolution of the vorticity and divergence at the center of the meso-γ vortices showed that the vortices formed in a low-level strong convergence region. The presence of the meso-γ vortices enhanced the low-level convergence. The analysis of the vorticity budget indicated that stretching vorticity generation mainly affected vortex generation at the onset stage, whereas tilting vorticity generation occurred with the upward development of positive vorticity. At the onset stage, the increase of the vorticity was almost entirely caused by stretching, because the southwesterly flow experienced confluence due to the terrain; after the vortex formed, the existence of the meso-γ vortex could help enhance the low-level convergence and updraft, so that the stretching contribution remained large and the tilting contribution increased.
The low-level meso-γ vortices formed along the convergence line could help strengthen the low-level convergence, which promoted the ascending motion and the development of severe convective cloud. The strong echo column corresponded with the high-θe region, and high convective energy was beneficial to the convective cloud. The vortex located ahead of the strong echo column, below the WER, could enhance the near-surface convergence and updraft flow. Moreover, graupel was broadly and horizontally widespread between the 0°C isotherm and the top of the convective clouds, and the strong ascending motion contributed to the growth of abundant hailstones. The 3-D flow analysis indicated that the vortex was formed by the environmental wind (i.e., over-mountain flow and southerly flow) and the outflow from the storm cloud, and that the existence of the meso-γ vortex could help in enhancing the updraft.

Fig. 13 (cross-section along M-N; see Fig. 9c). (a) Radar reflectivity (dBZ; shaded) and winds (arrows denote the composition of the along-cross-section wind and the vertical wind); (b) divergence (× 10⁻⁴ s⁻¹; shaded), velocity (m s⁻¹; contours), and winds (arrows as in (a)); and (c) potential temperature (K; shaded), graupel mixing ratio (g kg⁻¹; red contours), and hail-mixing ratio (g kg⁻¹; green contours), with thin black lines representing the −20 and 0°C isotherms. The red star indicates the location of the meso-γ vortex.
"year": 2020,
"sha1": "f943fd602f0b63506f145213125c484e48376f8d",
"oa_license": "CCBY",
"oa_url": "http://jmr.cmsjournal.net/article/doi/10.1007/s13351-020-0030-x",
"oa_status": "HYBRID",
"pdf_src": "Springer",
"pdf_hash": "a2067eb8c5fe523549a6f7821d9fe788166f0c38",
"s2fieldsofstudy": [
"Environmental Science",
"Physics"
],
"extfieldsofstudy": [
"Geology"
]
} |
COVID-19 Sepsis: Pathogenesis and Endothelial Molecular Mechanisms Based on "Two-Path Unifying Theory" of Hemostasis and Endotheliopathy-Associated Vascular Microthrombotic Disease, and Proposed Therapeutic Approach with Antimicrothrombotic Therapy
Abstract

COVID-19 sepsis is characterized by acute respiratory distress syndrome (ARDS) as a consequence of the pulmonary tropism of the virus and the endothelial heterogeneity of the host. ARDS is a phenotype among patients with multiorgan dysfunction syndrome (MODS) due to disseminated vascular microthrombotic disease (VMTD). In response to the viral septicemia, the host activates the complement system, which produces the terminal complement complex C5b-9 to neutralize the pathogen. C5b-9 causes pore formation on the membrane of host endothelial cells (ECs) if CD59 is underexpressed. In addition, binding of the viral S protein to the endothelial ACE2 receptor damages ECs. Both processes affect ECs and provoke endotheliopathy. Disseminated endotheliopathy activates two molecular pathways: inflammatory and microthrombotic. The former releases inflammatory cytokines from ECs, which lead to inflammation. The latter initiates endothelial exocytosis of unusually large von Willebrand factor (ULVWF) multimers and FVIII from Weibel-Palade bodies. If ADAMTS13 is insufficient, ULVWF multimers activate the intravascular hemostasis of the ULVWF path. In the activated ULVWF path, ULVWF multimers anchored to damaged endothelial cells recruit circulating platelets and trigger microthrombogenesis. This process produces "microthrombi strings" composed of platelet-ULVWF complexes, leading to endotheliopathy-associated VMTD (EA-VMTD). In COVID-19, microthrombosis initially affects the lungs per tropism, causing ARDS, but EA-VMTD may orchestrate more complex clinical phenotypes, including thrombotic thrombocytopenic purpura (TTP)-like syndrome, hepatic coagulopathy, MODS and combined micro-macrothrombotic syndrome. In this pandemic, ARDS and pulmonary thromboembolism (PTE) have often coexisted. Analysis based on the two hemostatic theories supports the view that ARDS, caused by the activated ULVWF path, is EA-VMTD, whereas PTE, caused by the activated ULVWF and TF paths, is macrothrombosis. The thrombotic disorder of COVID-19 sepsis is consistent with the notion that ARDS is virus-induced disseminated EA-VMTD and PTE is in-hospital vascular injury-related macrothrombosis not directly related to viral pathogenesis. A pathogenesis-based therapeutic approach is discussed for the treatment of EA-VMTD with an antimicrothrombotic regimen, together with the potential need for anticoagulation therapy for coinciding macrothrombosis in comprehensive COVID-19 care.
Introduction
Coronaviruses are a large family of viruses that usually cause mild to moderate upper-respiratory tract illnesses. Three major coronaviruses have emerged from animal reservoirs over the past two decades and have caused serious and widespread illness and claimed human lives, according to the National Institute of Allergy and Infectious Diseases. 1 The coronavirus causing severe acute respiratory syndrome (SARS-CoV) produced a significant outbreak in China in 2002, and the Middle East respiratory syndrome coronavirus (MERS-CoV) did so in South Korea in 2015. Now, COVID-19, due to SARS-CoV-2, has sparked a pandemic since its outbreak in Wuhan, China in late 2019. The origin of the viral spread to humans is undetermined at this time. 2,3 The COVID-19 pandemic has created unprecedented political, economic and societal dislocation worldwide and claimed over two million lives as of the early months of 2021.
Coronaviral infection typically begins with constitutional flu-like and mild respiratory symptoms and, in severe cases, progresses to inflammation, pneumonia and acute respiratory distress syndrome (ARDS). [4][5][6] The clinical manifestations of COVID-19 have been similar to those of the previous outbreaks of SARS and MERS; however, transmission has been faster and the clinical manifestations more extensive, and the outcome has been worse, with higher morbidity and mortality. [7][8][9] In the beginning, a particular concern was the incompletely understood pathogenesis of ARDS. Later, complex hemostatic phenotypes of micro- and macrothrombotic disorders and multiorgan dysfunction syndrome (MODS) became apparent, including inexplicable gangrene involving the extremities, and the digits in particular. [10][11][12][13][14] As with other pathogens, severe COVID-19 sepsis was found to be associated with complement activation, [15][16][17] endotheliopathy, 18,19 microvascular injury and thrombosis, 16 and pathologic hemostasis. 20,21 Previously, these mechanisms were affirmed to be involved in other pathogen-induced sepsis and identified as the pathogenesis causing ARDS, MODS, thrombotic disease and coagulopathy via endotheliopathy-associated vascular microthrombotic disease (EA-VMTD). 6,22 In this focused review, the hemostatic nature of COVID-19 sepsis, its hematologic phenotypes, and its thrombotic and coagulation findings will be analyzed from the published clinical literature. The pathogenesis of the sepsis will be constructed utilizing two novel hemostatic mechanisms: the "two-path unifying theory" and the "two-activation theory of the endothelium." These theories have already established the unique concept of EA-VMTD, which is associated with the activated unusually large von Willebrand factor (ULVWF) path of hemostasis and is different from the macrothrombosis and coagulopathy occurring as a result of combined activation of the ULVWF and tissue factor (TF) paths. 23 The pathogenetic mechanism of macrothrombosis, typified by pulmonary thromboembolism (PTE) and deep vein thrombosis (DVT) coexisting with ARDS in COVID-19, will be discussed from the concept of the hemostatic fundamentals. In the end, a theory-based therapeutic approach will be proposed. Further, the management of coexisting ARDS and macrothrombotic disorder (i.e., PTE) will be addressed separately, since this is a very important practical issue in COVID-19 sepsis.
Perspective on New Therapeutic Direction Based on Theory
The clinical course of each viral sepsis is influenced by the combined expression of the infectivity and virulence of the pathogen and the immune competence and response mechanism of the host. In the early stage of the pandemic, the management of COVID-19 centered on efforts to identify effective antiviral agents to eradicate the pathogen, with modest success. After one year of the pandemic, preventive measures have been prioritized to confer adaptive immunity on the world's populace via virus-specific vaccines; a coordinated vaccination program has been activated and is in progress.
When a pathogen intrudes into the bloodstream, two nature-endowed biological mechanisms are activated to protect and maintain proper homeostasis of the body, to overcome septicemia and to prevent sepsis. One is the defensive physiological response through the activated innate and adaptive immune systems to neutralize the pathogen; the other is the healthy hemostatic system that protects endothelial integrity and prevents destructive endothelial responses leading to sepsis, 22 as summarized in Figure 1. Since COVID-19 sepsis is characterized by complement-activated endotheliopathy, therapy can target the pathogenetic mechanism involving endothelial dysfunction.
The main pathology of COVID-19 sepsis is microthrombosis, and its clinical phenotype is EA-VMTD, whose primary manifestation is ARDS. Since ARDS is the organ phenotype of disseminated microthrombosis in the pulmonary vasculature, treatment can be directed to counteract the formation of microthrombi. 6 Microthrombi are produced by the activated ULVWF path of hemostasis after release of the multimers from endothelial cells (ECs) and activation of platelets, as shown in Figure 2. 22 Theoretically, antimicrothrombotic therapy can prevent microthrombogenesis and also resolve microthrombi formed from the molecular endothelial pathogenesis, as illustrated in Figure 3.
Thrombotic Disorders and Thrombotic Syndromes
Soon after the declaration of the pandemic, ARDS was found to be caused by disseminated microthrombosis in association with endotheliopathy, entirely consistent with EA-VMTD, the previously predicted pathogenesis of ARDS in every sepsis and critical illness. 6 Later, in some patients, ARDS coincided with macrothrombosis such as PTE and DVT. Therefore, COVID-19 is now recognized to be a complex hemostatic disease, culminating in a composite of variable thrombotic phenotypes.
The contemporary concept of thrombotic disorder is based on the theory that all thrombotic disorders are caused by the same coagulation process, initiated by the activated TF pathway following intravascular injury. To date, ARDS due to microthrombosis and PTE due to macrothrombosis have been considered the same disease with variable expression due to involvement of vessels of different caliber and size. I questioned this simplified concept of activated TF pathway-induced thrombogenesis. I was convinced that microthrombosis and macrothrombosis must be two different thrombotic disorders originating from two different sub-hemostatic paths as a result of different levels (depths) of intravascular injury. 6,22 Microthrombosis in sepsis occurs due to ECs injury, but macrothrombosis in vascular trauma (e.g., surgery) develops due to combined ECs and subendothelial tissue (SET) injury of the blood vessel wall. These hemostatic fundamental principles are detailed in Table 1, and the vascular wall physiology-based hemostatic mechanism is illustrated in Figure 2. Succinctly, the damage from virus-induced endotheliopathy initiating ARDS is confined to ECs and is disseminated, but the damage from in-hospital vascular injury initiating PTE extends from ECs to the SET of the blood vessel wall and is localized at the injury site.

Figure 1 Physiological and pathological response mechanisms in sepsis. Notes: In sepsis, the host response is characterized by two biological mechanisms. One is the physiologic defensive mechanism through the immune system, and the other is the pathologic destructive mechanism through the endothelial system. The mechanism of physiologic and pathologic responses is summarized. It is known that the complement system, protecting the host through the innate immune system, could trigger harmful endothelial molecular pathogenesis. This dual role of the complement system must be nature's rule, just like normal hemostasis, which protects human lives in external bodily injury but may also harm human lives in intravascular injury through thrombogenesis. Reproduced from Chang JC. Sepsis and septic shock: endothelial molecular pathogenesis associated with vascular microthrombotic disease. Thromb J. 2019; 17:10. 22 Abbreviations: APC, antigen presenting cell; DIC, disseminated intravascular coagulation; DIT, disseminated intravascular microthrombosis; EA-VMTD, endotheliopathy-associated vascular microthrombotic disease; MAHA, microangiopathic hemolytic anemia; MODS, multiorgan dysfunction syndrome; MOF, multiorgan failure; NO, nitric oxide; IF, interferon; IL, interleukin; LPS, lipopolysaccharide; TNF, tumor necrosis factor; TTP, thrombotic thrombocytopenic purpura.
In COVID-19, some patients had coexisting microthrombosis and macrothrombosis, which suggested that two different thrombogenetic mechanisms were involved and produced two entirely different thrombi via different levels of vascular wall injury. The genesis and pathobiological features of ARDS and PTE are summarized in Table 2. It is important to understand that the two phenotypes of thrombotic disorders require different therapeutic approaches.

Figure 2 Normal in vivo hemostasis based on the "two-path unifying theory." Notes: Following a vascular injury, the in vivo hemostatic system triggers the activation of two independent sub-hemostatic paths: microthrombotic (ULVWF) and fibrinogenetic (TF). Both are initiated by the damage of ECs and SET/EVT due to external bodily injury or intravascular injury. In the activated ULVWF path arising from ECs damage, released ULVWF multimers recruit platelets and produce microthrombi strings via microthrombogenesis, whereas in the activated TF path arising from SET/EVT damage, released TF activates FVII and produces fibrin meshes via the extrinsic coagulation cascade. The final path of in vivo hemostasis is macrothrombogenesis, in which microthrombi strings and fibrin meshes become unified together with incorporation of NETs, including red blood cells, neutrophils, DNAs and histones. This unifying event, called macrothrombogenesis, promotes the "hemostatic plug" and wound healing in external bodily injury and produces a "macrothrombus" in intravascular injury. Reproduced from Chang JC. Sepsis and septic shock: endothelial molecular pathogenesis associated with vascular microthrombotic disease.

VMTD includes every microthrombosis-associated disease occurring in any vascular system, which could be generalized, regional, or local/focal; hereditary or acquired; and arterial or venous. Microthrombi are formed of multiple "microthrombi strings" composed of platelet-ULVWF multimer complexes and tend to anchor to the endothelial membrane, tethering in the direction of blood flow in the circulation, typically at the terminal microvascular tree. The microthrombi strings partially obstruct the vascular lumen and slow down the blood flow within arterioles and capillaries, exposing the organ and tissue to hypoxia. The classification of VMTD is presented in Table 3 to assist the reader in comprehending VMTD in the understanding of COVID-19 sepsis. Microthrombi are the underlying pathology of disseminated EA-VMTD and can affect the microvasculature of every organ, which may lead to TTP-like syndrome and hypoxic MODS.
Acute Respiratory Distress Syndrome
The lungs are the most common organ affected by VMTD in coronaviral sepsis. ARDS is the major prototype of organ dysfunction syndrome affected by EA-VMTD. In COVID-19, two main manifestations are inflammation and respiratory distress due to disseminated microthrombosis in the pulmonary vasculature. 16,18 The character term of "microthrombi strings" representing platelet-ULVWF multimer complexes has been designated 6 after the concept derived from the insightful works of the group of scientists who demonstrated the endothelial cell-bound ULVWF multimers aggregate platelets to form platelet-ULVWF complexes ex
Figure 3
Proposed endothelial pathogenesis of SARS-CoV-2 viral sepsis based on "two-activation theory of the endothelium." Notes: Endothelial molecular pathogenesis of ARDS as one organ phenotype among MODS is illustrated. The underlying pathologic nature of ARDS is ahemostatic disease caused by endotheliopathy due to complement activation and viral Sprotein-endothelial receptor ACE2 interaction that promotes the activation of two molecular pathways. One is inflammatory pathway, which releases cytokines and provokes inflammation, including inflammatory fever, malaise and myalgia. The other is microthrombotic pathway, which promotes exocytosis of ULVWF and platelet activation and triggers much more deadly DIT via microthrombogenesis, leading to EA-VMTD. It orchestrates consumptive thrombocytopenia, MAHA, TTP-like syndrome and MODS. Abbreviations: ACE2, angiotensin converting enzyme 2; ARDS, acute respiratory distress syndrome; DIT, disseminated intravascular thrombosis; ECs, endothelial cells; IL, interleukin; MODS, multiorgan dysfunction syndrome; SARS, severe acute respiratory syndrome; TCIP, thrombocytopenia in critically ill patients; TNF, tumor necrosis factor; TTP, thrombotic thrombocytopenic purpura; EA-VMTD, endotheliopathy-associated vascular microthrombotic disease; ULVWF, unusually large von Willebrand factor. vivo. 24 These complexes are the same to the theoretical microthrombi strings proposed for ULVWF path in "two path unifying theory" of hemostasis 22 shown in Figure 2. The mechanism of microthrombogenesis will be discussed after introduction of "two-activation theory of the endothelium" (Figure 3).
The spike (S) protein of SARS-CoV-2 is attracted to the angiotensin converting enzyme 2 (ACE2) on ECs 25 and compromises endothelial function. In addition, similar to other pathogens, COVID-19 sepsis activates complement system leading to formation of terminal complement complex C5b-9. 15−17 C5b-9 neutralizes virus, but also may cause the channel (pore) formation on the endothelial membrane of the host if CD59 known as a glycoprotein protecting ECs membrane is underexpressed, 26,27 Subsequent endothelial activation and dysfunction result in inflammation and exocytosis of ULVWF multimers from damaged ECs. The inflammation is due to released inflammatory cytokines, and pulmonary vascular microthrombosis is due to ULVWF multimers forming microthrombi with platelets.
The molecular mechanism of how the tropism and endothelial heterogeneity work in ARDS is a partially understood mystery beyond the interaction between S protein and the receptor ACE2. The characteristic feature of susceptibility of the lungs producing ARDS in COVID-19 should encourage the research on the mechanisms of interfacing between human and nature as well as gene and environment that could explain other organ syndromes such as adrenal insufficiency in meningococcus, hemolytic-uremic syndrome due to Shiga toxin of (2) Hemostasis must be activated through ULVWF path and/or TF path.
(3) Hemostasis is the same process in both hemorrhage and thrombosis. (4) Hemostasis is the same process in both arterial thrombosis and venous thrombosis.
TTP-Like Syndrome
Thrombotic thrombocytopenic purpura (TTP) is a classical hematologic disease representing VMTD. It is caused by severe deficiency of ULVWF cleaving protease ADAMTS13 and is characterized by triad of 1) thrombocytopenia; 2) microangiopathic hemolytic anemia (MAHA); and 3) one or more organ dysfunction syndrome, commonly in the brain and kidneys. On the other hand, TTP-like syndrome caused by EA-VMTD is a hemostatic disease with exactly the same triad and is often associated with mild to moderate deficiency of ADAMTS13, about 25-75% of normal, and occurs with critical illnesses such as sepsis. The analysis of clinical features of TTP-like syndrome identified it was associated with sepsis and other illnesses that are promoting complement activation and endotheliopathy, 28 TTP-like syndrome typically is accompanied by thrombocytopenia as a result of consumption of platelets, and characterized by two endothelial markers, which are overexpression of von Willebrand factor (VWF) antigen and increased FVIII activity. All of the disseminated microthrombosis-associated disorders belong to the umbrella group of VMTD, which includes GA-VMTD for gene mutation-associated VMTD, AA-VMTD for antibody-associated VMTD, and EA-VMTD. 28 The critical difference between TTP and TTP-like syndrome is the fact that TTP occurs due to severe ADAMTS13 deficiency, but TTP-like syndrome occurs as a hemostatic disorder due to endotheliopathy. When COVID-19 pandemic was declared in early 2020, one of the serious concerns was a life-threatening ARDS could further progress to severe hematologic and coagulation disorders, including severe thrombocytopenia, MAHA, TTP-like syndrome, and thrombo-hemorrhagic syndrome that, in the past, claimed very high morbidity and mortality in bacterial sepsis. As presaged, ARDS was characterized by microthrombotic disease primarily affecting the lungs, sometimes with MODS. Fortunately, even in critically ill patients, thrombocytopenia has been mild to modest. Schistocytosis and MAHA were uncommonly
279
reported, and TTP-like syndrome was diagnosed rarely although well-documented cases were described. [29][30][31][32] Deadly thrombo-hemorrhagic coagulopathy, which has been known as acute disseminated intravascular coagulation ("DIC") has not been a significant issue in COVID-19 to date. The fear perceived at the beginning of the pandemic has been eased.
Our understanding of EA-VMTD was conceptualized from clinical cases of TTP-like syndrome which retrospectively was recognized as a hemostatic disease initiated by complement activation and endotheliopathy. 28 Thus, I initially assumed a good number of patients with disseminated VMTD would be associated with some degree of TTP-like syndrome. However, this pandemic has enlightened us that, depending upon the pathogenicity of each pathogen and host response, EA-VMTD can be manifested as a wide spectrum of clinical phenotypes from uncomplicated EA-VMTD to complicated EA-VMTD associated with combined micro-macrothrombotic syndrome, including severe My interpretation is the mildness of thrombocytopenia could have been due to coinciding reactive thrombocytosis when the lungs were involved by microthrombosis leading to ARDS. The extramedullary megakaryopoiesis in the lungs has been well-known mechanism. [33][34][35] Also, uncommon MAHA and fewer cases with schistocytes might have been related to less shear stress of blood flow in pulmonary circulation because arterial blood pressure in the pulmonary vasculature is normally a lot lower than systemic blood pressure at 8-20 mm Hg at rest. 36 Therefore, EA-VMTD in organ dysfunction involving primarily in the lungs was less vulnerable to intravascular hemolysis. Additionally, at molecular level in ARDS, lower degree complement activation of C3a, C3c and C5b-9 was apparent when compared to pathogen-induced sepsis, 15 which also could have contributed to less hemolytic complication.
Despite of uncommon hematologic and hemostatic complications, ARDS has been a life-threatening clinical phenotype of EA-VMTD. A retrospective propensity matched control study showed improved survival of serious COVID-19 patients with therapeutic plasma exchange (TPE), 37 which supports the benefit from indirect supply of the protease ADAMTS13 even without TTP-like syndrome. Considering these findings and data in COVID-19, "EA-VMTD" should stand as the diagnostic term of choice representing endotheliopathy with or without TTP-like syndrome.
Multiorgan Dysfunction Syndrome (MODS)
Sepsis-associated MODS can be defined as organ specific dysfunction of two or more organs due to hypoxia-induced physical and/or biochemical abnormalities caused by EA-VMTD in the patient with complement-activated endotheliopathy. 28 It may be simultaneous or sequential in organ involvement. The medical literatures of COVID-19 have recorded many cases of MODS involving in the lungs, liver, heart, brain, kidneys, muscle, pancreas, adrenals, and nerve, and others, [38][39][40][41][42][43] which phenotypes are summarized in Figure 4 with the mechanism of its pathogenesis. MODS often occurs with inflammation. 15,18,37,44,45 Sometimes it is called cytokine storm in severe case. This additional activation of inflammatory pathway had been predicted to occur according to "two-activation theory of the endothelium" as shown in Figure 3. 22
COVID-19 Sepsis
Endothelial dysfunction due to complement activation and ACE2 receptor -viral S protein interaction
281
The clinical organ syndromes, in addition to ARDS, have been termed: acute liver failure/fulminant hepatic failure, diffuse myocardial ischemia, encephalopathy/diffuse encephalopathic stroke, acute renal failure/hemolytic-uremic syndrome, acute necrotizing pancreatitis/ hypoxic pancreatitis, rhabdomyolysis, adrenal insufficiency, and peripheral neuropathy. All of these clinical syndromes have been associated with TTP-like syndrome. Any organ syndrome in sepsis should alert the potential of underlying EA-VMTD, especially when associated with thrombocytopenia. Therefore, proper diagnostic surveillance for additional hemolytic anemia is recommended in every organ dysfunction syndrome of critically ill patients. Some case reports of organ dysfunction syndrome due to sepsis and now COVID-19 interpreted that; acute pancreatitis triggered TTP-like syndrome, acute liver failure was extrapulmonary manifestation of ARDS, or hepatic encephalopathy was the result of acute liver failure causing metabolic encephalopathy. Also, some reports have thought coexisting organ syndromes were the result of cross-talk between or among organs. Now, we can understand that the concept of underling EA-VMTD has placed every organ in equal footing for the potential of causing organ dysfunction syndrome rather than one causing the other(s).
To date, the prevailing pathogenic mechanism for organ syndrome has been direct invasion of pathogen and/or toxin into the organs, but pathological findings of microthrombosis within the organs and laboratory changes of increased expression of VWF antigen and increased activity of FVIII supporting endothelial pathogenesis have further confirmed in COVID-19 that the major mechanism causing organ syndrome is disseminated intravascular microthrombosis. Since microthrombi strings partially obstruct the microvasculature causing hypoxia of the affected organ, organ dysfunction is reversible if EA-VMTD is diagnosed in a timely manner and treated with TPE. Even septic patient with delirium and coma still can recover from hypoxic encephalopathy. In general, the development of MODS indicates advancing disease and portends poor prognosis.
Macrothrombosis
Although the primary disorder of COVID-19 was ARDS, serious macrothrombosis, especially PTE and DVT, was commonly superimposed and coexisted in the same patient. [46][47][48][49][50][51][52][53] Localized acute ischemic stroke and acute myocardial infarction were also observed, but could not be blamed to microthrombosis of EA-VMTD due to their macrothrombotic nature. Because ARDS coexistence with PTE in the lungs, COVID-19 has been considered to cause a complex thrombotic disorder represented by both microthrombosis and macrothrombosis. In this pandemic, the majority of physicians has managed this complex thrombosis with traditional TF path inhibiting anticoagulation because ARDS and PTE/DVT were different expression of the same disease. In past several decades, numerous therapeutic trials for sepsis-associated coagulopathy (e.g., "DIC") had failed to show any benefit from anticoagulants relied on the pathogenesis of activated TF path. In retrospect, the failure can be recognized as the result of distinctively different two pathogeneses between microthrombosis of ARDS and macrothrombosis of PTE/VTE. This conception of the sameness of all thrombi has been persisted in medical community so long, and some pathologists have been persuaded to call microthrombi in pathologic specimens as platelet-fibrin thrombi, fibrin rich microthrombi, or fibrin deposits 35,54,55 even though microthrombi strings contain no fibrin components. This issue of the different character between microthrombi and macrothrombus should be clarified through an appropriate discussion forum. The answer should come from the understanding of true mechanism of hemostasis in vivo.
Interesting questions are; why do microthrombi and macrothrombus each occur in different-sized vessels? how can two different thrombi be produced from the same TF-FVIIa complex activated coagulation cascade mechanism? what are the differences in their character between microthrombi and macrothrombus? It is logical to conclude microthrombi of ARDS and macrothrombus of PTE are two different blood clots not only in size, location and genesis, but also in their intrinsic character of the thrombi. We know microthrombi are composed mostly of platelet-ULVWF complexes, 24 and macrothrombus is partly made of platelet, fibrin clot and extracellular traps, Assuredly, microthrombi have to be formed from another path different from TF pathinitiated hemostasis. No wonder, why anticoagulation therapy failed for the treatment of EA-VMTD of sepsis-associated coagulopathy. I have wrestled with this conceptual mystery of two different thromboses and "fibrin clot disease" occurring in acute promyelocytic leukemia. Finally, a novel "two-path unifying theory" of hemostasis was proposed and updated as shown in Figure 2 combined ECs and SET damage. 56 Disseminated endotheliopathy limited to ECs damage (e.g., sepsis) activates lone ULVWF path of hemostasis that produces microthrombosis in the microvasculature, but localized vascular injury (e.g., surgery) in the large vessel activates combined ULVWF path and TF path that produces macrothrombosis at the damage site. The difference is summarized in Table 2. Indeed, microthrombosis of ARDS is unrelated to coexisting macrothrombosis occurring in some cases of COVID-19. Since without vascular injury hemostasis cannot not be initiated and without hemostasis thrombus cannot be formed, 57 a localized vascular injury leading to macrothrombosis must have occurred as a result of vascular trauma from the risk factor such as "hospitalization".
Macrothrombosis is common complication from vascular injury after admission to the hospital, especially in intensive care unit which is a high-risk environment due to numerous vascular interventions in septic and non-septic conditions. 49,50,[58][59][60][61][62] The simple fact is the vascular events (e.g., surgery, accesses, procedures, devices and ventilators) may cause local vascular wall damage which releases ULVWF and TF and initiate thrombogenesis, leading to macrothrombosis at the injury site and spreading to other sites. In addition, distinguished from this macrothrombosis, inexplicable macrothrombotic disorder in COVID-19 characterized by "multiple" small macrothromboses mixed with microthrombotic disorder was observed in the lungs in the same patient, 52,53,63,64 and also occurred in the digits with gangrene 12-14 and brain with acute ischemic stroke The blood vessel wall is the site of hemostasis (coagulation) to produce the hemostatic plug in external bodily vascular injury and to stop hemorrhage. It is also the site of hemostasis (thrombogenesis) to produce intravascular blood clots (thrombus) in intravascular injury to cause thrombosis. Its histologic components are divided into the endothelium, tunica intima, tunica media and tunica externa, and each component has its function contributing to molecular hemostasis. As shown in the illustration, ECs damage triggers exocytosis of ULVWF and SET damage promotes the release of sTF from tunica intima, tunica media and tunica externa. EVT damage releases eTF from the outside of blood vessel wall. This depth of blood vessel injury contributes to the genesis of different thrombotic disorders such as microthrombosis, macrothrombosis, fibrin clot disease, hematoma and thrombo-hemorrhagic clots. This concept is especially important in the understanding of different phenotypes of stroke and heart attack. and cerebral venous sinus thrombosis (CVST), which pathogenesis is unidentified yet. This "multiple" small macrothrombosis typically tends to occur in severe ARDS patients.
Because the reports in the literature were not from cooperative studies providing the patient care, it was difficult to know how commonly macrothrombosis coincided with COVID-19. My impression is that the significance of macrothrombosis was overemphasized in patients with ARDS. The incidence of macrothrombosis is estimated between 5% and 10% of critically ill patients with significantly higher rate in the intensive care unit. However, general perception has been the macrothrombotic complication occurred more commonly in COVID-19 than other pathogen-associated ARDS. Could this have been due to overzealous vascular intervention for monitoring and more aggressive usage of controlled respiratory therapy in this hyped pandemic? The prevalent reports of coexisting PTE and relatively mild thrombocytopenia with evidence of active megakaryopoiesis in the lungs of COVID-19 35,65,66 may suggest locally produced platelets from reactive thrombocytosis could also have contributed to the formation of "multiple" small macrothrombi in the compromised vasculature of the lungs. Further, could it have been due to combined micro-macrothrombotic syndrome in the lungs similar to multiple peripheral digital gangrene?
VMTD with Coagulopathy
In early stage of COVID-19, initial evidence of potential coagulopathy was elevated fibrinogen, increased FVIII activity, and overexpressed ULVWF/VWF/VWF antigen with thrombocytopenia. Since fibrinogen is synthesized in the liver, hyperfibrinogenemia could have been caused by mild dysfunction of the highly vascularized liver in early Notes: In COVID-19 viral sepsis, disseminated endotheliopathy promotes inflammatory and microthrombotic pathways. The latter pathway is identical to ULVWF path due to Level 1 (ECs) damage-induced hemostasis in localized vascular injury. Disseminated endothelial molecular pathogenesis of activated ULVWF path triggers the formation of microthrombi and causes EA-VMTD, which orchestrates several hemostatic phenotypes of ARDS and MODS. If microthrombosis involves in the liver and leads to hepatic necrosis, especially in an underling disease such as liver cirrhosis. This unique coagulopathy can be termed "EA-VMTD with HC" (it can be recognized as acute "DIC"). 23 Also, ICU admission increases the risk of additional serious complication of macrothrombosis from in-hospital vascular injury, ventilator therapy with intubation and perhaps also from MODS. The Level 2 vascular wall damage of SET/EVT from vascular injury unrelated to viral sepsis could release of TF that activates TF path and produce macrothrombosis such as PTE and DVT (Table 2). Further, the interaction of microthrombi strings of EA-VMTD formed from microthrombogenesis and fibrin meshes formed by thrombin following activated TF path from in-hospital vascular damage could cause "combined micro-macrothrombotic syndrome" presenting with peripheral or limb gangrene (see text for discussion). This proposed mechanism is derived from "two-path unifying theory" of hemostasis and endothelial molecular pathogenesis. stage of microthrombosis, especially with preexisting liver disease. However, in advanced stage of COVID-19, hypofibrinogenemia was common, which was attributed to decreased synthesis of fibrinogen due to liver failure. Overexpression of ULVWF/VWF/VWF antigen and increased FVIII activity were due to the release from Weibel-Palade bodies in endotheliopathy, which have become the best endothelial markers that can be used along with mild to moderate decrease of ADAMTS13 activity in early diagnosis of endothelial dysfunction. 6 Unlike in viral hemorrhagic fever occurring in Eboli sepsis, coagulopathy presenting with hemorrhagic disease has been uncommon in COVID-19 even though mildly prolonged prothrombin time (PT) and activated partial thromboplastin time (aPTT), and increased fibrin degradation products (FDPs)/D-dimer have been encountered commonly. 67,68 Significant hepatic coagulopathy resulting from complication of EA-VMTD, which is similar to contemporary concept of acute "DIC," was uncommon. 68,69 It could have been due to relatively lower levels of activated complement factors 15 and/or possibly insignificant tropism of the viral molecules to the liver. Undetermined factors have lowered the incidence of serious thrombo-hemorrhagic phenotype, and by chance spared tragic loss of many lives in pandemic.
Combined Micro-Macrothrombotic Syndrome with Gangrene
This inexplicable thrombotic syndrome has rarely occurred but commonly been reported in the COVID-19 literature because of its oddity. 14,70-76 However, this gangrene had been described enough in sepsis of the non-COVID-19 literature 22,77-80 because of its mysterious nature and serious consequences with high morbidity and sometimes death. The diagnosis of combined micro-macrothrombotic syndrome is affirmed logically from two facts in every patient: 1) underlying disseminated microthrombosis (i.e., EA-VMTD); and 2) gangrene formation which never occurs without macrothrombosis. In the past, the pathogenesis could not be identified because of our shortcomings on the complete picture of hemostasis. Now, armed with novel "two-path unifying theory" of hemostasis ( Figure 2) and "two-activation theory of the endothelium" (Figure 3) as well as the "three essentials of hemostasis" (Table 1), the pathogenesis of this serious life-threatening disorder can be established as shown in (Figure 6).
Combined micro-macrothrombotic syndrome presenting with several different phenotypes is characterized by the gangrene presenting with single or multiple, small or large, often symmetrical and peripheral, sometimes as ischemic (acrocyanosis) or hemorrhagic, with either a large isolated or disseminated lesion(s), involving commonly digits or sometimes exposed areas of the limb, skin and subcutaneous tissue as well-demarcated gangrene. Often, it occurs as "symmetrical peripheral gangrene" involving the terminal parts of digits. These gangrene phenotypes typically occur in association with sepsis in hospitalized patients coinciding with underlying disseminated microthrombosis (i.e., EA-VMTD) following surgery or other vascular events.
It should be emphasized that "gangrene" is the dead and altered tissue lesion beyond necrosis or infarction. It is caused by "multiple" small arterial macrothrombi due to complete cutoff of blood supply to the distal parts without collateral circulation. Unlike "necrosis" or" infarction," "gangrene" is the dead, dried and shrunken tissue lesion discolored to black without much pain or inflammation, indicating the all of the microvasculature in the area is out of service supplying of oxygen and destroying the entire tissue. Gangrene is created by denatured hemoglobin molecules. Previously, I theorized that peripheral gangrene was triggered by "multiple" small macrothrombi formed from ongoing microthrombi of activated ULVWF path in sepsis unifying -according to " twopath unifying theory" -with "fibrin meshes" from activated TF path following additional vascular injury due to surgery or vascular devices. 22,77 These "multiple" small macrothrombi are suspected to be "microthrombi strings-fibrin meshes" complexes, completely shutting off blood supply to the distal part of circulation. This combined micro-macrothrombotic syndrome can explain the pathogenesis of purpura fulminans 73,80 that is associated with protein C deficiency and sepsis complicated by vascular injury, and also diabetic gangrene, Fournier's gangrene, necrotizing fasciitis, and other gangrene disorders. Some have called this syndrome was a gangrene type of "DIC." 75 In COVID-19 sepsis, for example, "microthrombi strings" are formed from activated ULVWF path due to endotheliopathy (i.e., sepsis) and "fibrin meshes" are produced from activated TF path due to vascular injury (e.g., femoral or subclavian artery device). Fibrin meshes in circulation travel to downstream arterial microvascular trees and encounter microthrombi strings localized in the microvasculature of every involved digits. These two different thrombi would unify to form "multiple" small macrothrombi DovePress composed of "microthrombi strings-fibrin meshes" complexes based on the hemostatic theory. These "multiple" small (minute) macrothrombi could completely block circulation at every similar-sized branches of small vascular trees of the digits via macrothrombogenesis. This pathogenesis is similar to "vascular access steal syndrome" seen kidney dialysis patients. The hemoglobin of red blood cells trapped within the small arteries distally would be completely deprived of oxygen supply to distal areas of the tissue without any collateral circulation. The hemoglobin would be denatured into dark organic compounds and/or inorganic molecules such as methemoglobin and/or ferric disulfide, which be deposited into surrounding dead tissues to produce dry black gangrene. This interaction between COVID-19 septic endotheliopathy and vascular access-induced injury explain the unique "combined micro-macrothrombotic syndrome." 22 Symmetrical peripheral gangrene, 79 peripheral digit ischemic syndrome, 77 limb hemorrhagic gangrene, 70 limb ischemia, 71 acrocyanosis, 14 purpura fulminans, 73,75,76,80 and perhaps coumadin-induced gangrene and diabetic leg gangrene have been associated with sepsis-associated microthrombosis, sometimes with underlying thrombophilia. These gangrene syndromes represent variable phenotypes of combined micro-macrothrombotic syndrome observed in COVID-19. 
The experience in COVID-19 pandemic has offered an opportunity for us to reexamine the complexity of microthrombogenesis, fibrinogenesis and macrothrombogenesis, and their interactions as well as true hemostasis in vivo.
Cytokine Storm
Endotheliopathy is manifested by both inflammation and EA-VMTD. Inflammation is common in COVID-19 due to endothelial release of various cytokines, including interleukin (IL)-1, IL-2, Il-6, tumor necrosis factors and interferons. If inflammation is severe, it is called cytokine storm. The cross-talk mechanism between inflammation and coagulation was proposed to explain frequent association of two conditions in sepsis-associated coagulopathy, but cytokine syndrome is neither the cause nor result of microthrombotic disorder. Some immunologists and coagulation specialists have thought that cytokine storm would have a major impact on the morbidity and mortality of COVID-19. However, in ARDS with severe COVID-19, circulating cytokine levels were significantly lower compared to those with bacterial sepsis, which suggests cytokine storm was not so serious feature of COVID-19. 81 The finding of relatively low level of cytokines in spite of severe ARDS tends to support EA-VMTD and inflammation are independent processes occurring in endotheliopathy, which implies anticytokine therapy would not be an effective regimen treating COVID-19.
Further, even though inflammatory response may cause toxic state with fever, headache, malaise, myalgia, and gastrointestinal symptoms, cytokine storm is not a disease, but is only a symptom complex which is transient and reversible. On the contrary, disseminated EA-VMTD is a structural disease that may cause TTPlike syndrome and lead to organ hypoxia and MODS. Eventually, organ dysfunction, if prolonged and not reversed, organ failure directly contributes to the demise of the patient. TPE was employed for the treatment of COVID-19 and had shown improved overall survival in a good retrospective case control study. 37 The benefit was claimed to be the result of removal of cytokines, but the decreased mortality was more likely from the result of replenished ADAMTS13 that had counteracted microthrombogenesis.
As anticipated from endothelial molecular mechanism, decreased activity of ADAMTS13 was very important findings in the tested patients. 82,83,90 The result is consistent with key role of the enzyme in the pathogenesis of microthrombogenesis as proposed. 6 Hemostasis in vivo can be explained best by "two-path unifying theory" initiated by activated ULVWF path and TF path. The hemostatic mechanism was derived from the physiologic syllogism of vascular wall injury as illustrated in Figure 2. This theory represents true hemostasis in vivo- 99 and has been updated. 6,22,23 The hemostatic components of blood vessel walls are consisted of two major coagulation factors: ULVWF multimers from ECs and TF from SET/extravascular tissue (EVT) shown in Figure 5. A vascular wall injury to ECs releases ULVWF multimers and the injury to SET/EVT releases TF. In external bodily injury (e.g., physical assault), bleeding occurs externally with release of ULVWF and TF, but internal vascular injury (e.g., dissecting aneurysm) releases ULVWF multimers and TF into circulation, and sometimes bleeding into EVT. ULVWF multimers recruit platelet to activate ULVWF path, and TF activates FVII to initiate TF path. The former produces "microthrombi strings" composed of platelet-ULVWF complexes via microthrombogenesis. The latter produces "fibrin meshes" via fibrinogenesis. In the last step, microthrombi strings and fibrin meshes unify together via macrothrombogenesis to form "hemostatic plug" in external bodily injury to stop bleeding, and to form "macrothrombus" in intravascular injury to cause macrothrombosis.
Sepsis is an intravascular disease involving the endothelium breached by septicemia. It is mediated by endotheliopathy which damage is limited ECs. Endotheliopathy in sepsis leads to exocytosis of ULVWF multimers and activates platelets. The released ULVWF multimers into intravascular space should be cleaved by the protease ADAMTS13 in normal person. However, if mild to moderate insufficiency of ADAMTS13 is present due to heterozygous mutation of the gene or excessive release of ULVWF multimers over the capacity handling of ADAMTS13, uncleaved ULVWF multimers become anchored to the damaged endothelial membrane and recruit platelets to form microthrombi strings via microthrombogenesis. These strings are microthrombi that partially obstruct the microvasculature.
Apart from normal hemostasis, released ULVWF in sepsis-induced endotheliopathy activates only partial hemostasis, which is called ULVWF (microthrombotic) path of hemostasis, but is disseminated in the entire microvascular system and leads to EA-VMTD. Endotheliopathy does not activate TF (fibrinogenetic) path because SET of vessel wall is intact in sepsis. Thus, coagulation factors (ie, FVII, FV, FX, FII and fibrinogen) are not involved in the formation of microthrombosis, and macrothrombus is not formed. Succinctly speaking, sepsis is caused by lone activation of ULVWF path in disseminated endotheliopathy, which results in partial hemostasis that causes pathologic microthrombosis.
ULVWF multimers are very large multimeric glycoproteins synthesized in the endothelium and stored within the Weibel-Palade bodies with FVIII. 100 Because of their close relationship each other, the exocytosis of ULVWF simultaneously occurs with release of FVIII in endotheliopathy. Therefore, increased level of VWF in plasma is always associated with increased activity of FVIII. These increased VWF and FVIII are the best diagnostic endothelial markers for EA-VMTD along with thrombocytopenia and decreased activity of ADAMTS13 (approximately 25-75% of normal). 28 The theory of hemostasis was derived from the same character of microthrombi composed of platelet-ULVWF complexes in TTP. 99,101 Severe deficiency of the protease ADAMTS13 typically less than 5% of normal occurs in TTP due to either gene mutation or due to autoantibody production. However, endotheliopathy leads to EA-VMTD/TTP-like syndrome when it is associated with mild to modest ADAMTS13 insufficiency. 28,56 The formation of microthrombi takes place on the endothelium. Some cases of EA-VMTD could cause the triad of thrombocytopenia, MAHA and organ dysfunction syndrome. In such case, it is called TTP-like syndrome. The theory of in vivo hemostasis is consistent with the works of the coagulation scientists who observed the characteristic platelet-ULVWF strings anchored to the ECs ex vivo and in vivo. 102−105 This vascular model of hemostasis can easily define the concept of phenotypes of variable thrombotic disorders via two important elements in vascular injury, which are: 1) depth of vascular wall damage; and 2) extent of involvement of vascular tree system. Now, the different clinical phenotypes of thrombotic disorders (e.g., ARDS and PTE) in COVID-19 can be understood from their different causes and pathogeneses. 56 How Does Endotheliopathy Orchestrate Molecular Pathogenesis?
In intravascular injury, when ULVWF path is activated due to focal or local detachment of a small atherosclerotic ECs plaque(s), some microthrombi strings could be produced, which clinical phenotype is focal microthrombotic syndrome such as transient ischemic attack or angina pectoris without systemic implication. 56 In contrast to this focal phenotype of endothelial injury, disseminated endotheliopathy produces a disseminated phenotype of EA-VMTD with a spectrum of clinical syndromes from consumptive thrombocytopenia to severe combined micro-macrothrombotic syndrome and diverse microthrombotic syndromes such as thrombocytopenia in critically ill patients, TTP-like syndrome, MODS, and hepatic coagulopathy in-between. This complexity cannot be reconciled by "two-path unifying theory" of hemostasis alone. Figure 3 showing "two-activation theory of the endothelium" answers the rest of pathophysiological mechanism. 57 Therefore, microthrombotic pathway displayed in endothelial molecular pathogenesis is the extended version of ULVWF path of hemostasis applicable to the pathogenesis of disseminated endotheliopathy.
In COVID-19 sepsis, when terminal complement complex C5b-9 and S protein of SARS-CoV-2 attack ECs and promote endotheliopathy, 19 similar to other pathogen, two important molecular events occur: 1) release of inflammatory cytokines, including various interleukins, tumor necrosis factors, and interferons ;44,81 and 2) platelet activation and exocytosis of ULVWF multimers. 9,19,90,92 The former promotes inflammation, which mechanism is called "activation of inflammatory pathway," and the latter mediates microthrombogenesis, which is triggered by "activation of microthrombotic pathway." These two independent pathways were proposed in the framework of "two-activation theory of the endothelium." 57 Many scientists have opined inflammatory pathway plays a major role on the pathogenesis of COVID-19. But the experience from COVID-19 pandemic, activated microthrombotic pathway contributes to more deadly pathogenesis leading to ARDS of EA-VMTD, which orchestrates consumptive thrombocytopenia, MAHA, TTP-like syndrome, MODS, thrombocoagulopathic syndrome, and combined micro-macrothrombotic syndrome.
Role of ADAMTS13 as a Modulator of ULVWF Path
ADAMTS13, which is a zinc containing metalloproteinase cleaving ULVWF multimers to smaller VWF and modulating of thrombogenesis in intravascular injury, was predicted to be decreased in ARDS based on two hemostatic theories. 6 This protease was found to be insufficient when tested in severe COVID-19 patients. 82,83,90,96,98 This important proteolytic enzyme is needed to cleave the excess ULVWF multimers that are released from damaged ECs and to prevent microthrombogenesis.
The role of ADAMTS13 preventing TTP and TTP-like syndrome (i.e., EA-VMTD) is well established. 101 This enzyme also contributes to downregulating macrothrombogenesis of stroke, 106-108 myocardial infarction, 109,110 and DVT 111 via ULVWF path perhaps before unifying mechanism prior to form macrothrombus. Therefore, ADAMTS13 deficiency is a "thrombophilia" modulating ULVWF path of hemostasis in contrast to protein C deficiency which is a "thrombophilia" modulating TF path. It is predicted "purpura fulminans" occurring in sepsis is likely associated with combined deficiency of ADAMTS13 of ULVWF path and protein C deficiency of TF path, leading to disseminated form of combined micro-macrothrombotic syndrome according to two hemostatic theories. Over the last few months, cases of purpura fulminans were reported in COVID-19. 73,76 Both ADAMTS13 and ABO blood group genes are closely linked at the same chromosome 9q34.2 location, and non-O blood group population has been associated with increased susceptibility and poorer prognosis compared to O blood group in COVID-19. 112 non-O blood group individuals are expected to have decreased ADAMTS13 activity that can cause more severe phenotypes of EA-VMTD and poorer prognosis. The relationship in the triangle of ADAMTS13 activity, ABO blood group antigen expression and severity of EA-VMTD in patient with COVID-19 and other sepsis would be an interesting epidemiologic study, which could identify the more vulnerable population to sepsis and septic progression.
Interpretation for Laboratory Findings and Diagnostic Approach
The followings are the hematologic and coagulation abnormalities observed in COVID-19. Their interpretation is summarized based on endothelial pathogenesis.
• Mild to moderate thrombocytopenia → likely due to consumption during microthrombogenesis, but with partial compensation from extramedullary megakaryopoiesis in the lungs 35,65,66 • Rare cases with schistocytes in blood film and hemolysis → likely due to: 1) uncommon hemolysis in ARDS of COVID-19 secondary to less production of C5b-9 than in viral sepsis; 15 and 2) less shear stress of blood flow at the pulmonary vasculature 35 • Prolonged PT, if present → due to decreased FVII, FX, FV, FII and fibrinogen in hepatic coagulopathy • Prolonged aPTT, if present → due to decreased FX, FV, FIX, FII and fibrinogen in hepatic coagulopathy • Overexpression of ULVWF/VWF/VWF antigen and increased FVIII activity → due to endothelial exocytosis • Decreased ADAMTS13 activity → due to: 1) heterozygous gene mutation or polymorphism; and/or 2) excessive release of ULVWF multimers creating an imbalance between the enzyme and substrate multimers • Abnormal fibrinogen levels → increased likely due to early transient liver dysfunction and decreased due to advanced hepatic necrosis resulting from microthrombosis • D-dimer → positive due to "fibrinolysis" in MODS and coexisting macrothrombosis with on-going EA-VMTD, but negative in "fibrinogenolysis" without MODS even with on-going EA-VMTD • Soluble fibrin monomer (SFM) → positive due to increased "fibrinogenesis" • FDP(s) → positive due to fibrinolysis and/or fibrinogenolysis In EA-VMTD, if PT and aPTT are prolonged, it would be helpful to determine the activity of FVII, perhaps with other liver dependent factors (FII, FV, FIX, and FX) to confirm hepatic coagulopathy. If combined micro-macrothrombotic syndrome with gangrene occurs in sepsis, in addition to ADAMTS13 activity, protein C and protein S activities, and the test for FV-Leiden and others would be needed to exclude potential underlying congenital or acquired thrombophilia. Previously, insufficient coagulation tests had precluded identifying the conceptual difference between acute "DIC" and hepatic coagulopathy in sepsis-associated coagulopathy, which is now summarized in Table 4. If hypofibrinogenemia, and prolonged PT and aPTT occur with thrombocytopenia, increased activity of VWF and FVIII, and decreased FVII in sepsis, the diagnosis of EA-VMTD associated hepatic coagulopathy can be established. Since serious hemorrhagic syndrome has not been a major issue in this pandemic, COVID19 sepsis is not considered to be a hemorrhagic disorder.
In regard to interpreting D-dimer, SFM and FDP(s): when increased fibrinogen is cleaved by plasmin, fragments X, Y, D and E, but no D-dimer, are produced; however, when cross-linked fibrin clots are cleaved by plasmin, several FDP(s) and D-dimer are produced. 115 If fibrinogen is catalyzed by thrombin following TF path activation, SFM is formed. Thus, in EA-VMTD, D-dimer should be negative unless organ damage leading to TF expression has occurred due to MODS, or macrothrombosis coexists, whereas SFM becomes positive following fibrinogenesis and fibrinolysis according to the hemostatic theory. For this reason, D-dimer is negative in EA-VMTD without organ damage but positive in EA-VMTD with organ damage, 116 including hepatic coagulopathy. Macrothrombosis, such as multiple PTE in COVID-19 with EA-VMTD, causes strongly positive D-dimer because vascular injury releasing TF due to SET/EVT damage of the organ leads to fibrinogenesis and fibrinolysis. Generally, in sepsis, a significantly positive D-dimer is an important marker indicative of advancing MODS 116,117 or concurrent macrothrombosis.
The theoretical diagnostic utility of D-dimer and FDP(s) in thrombotic disorders is approximated in Table 5. D-dimer is a complex marker to interpret when microthrombosis, macrothrombosis, coagulopathy and MODS coexist, because its value varies depending not only upon microthrombosis with/without liver involvement, with/without MODS, and with/without additional macrothrombosis, but also upon the severity of the thrombotic disorders. Theoretically, a negative D-dimer implies uncomplicated EA-VMTD in the absence of an activated TF path, whereas a positive D-dimer in EA-VMTD may suggest damage to the SET of the vessel wall leading to an activated TF path (i.e., MODS) and portends a poorer prognostic outcome.
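As a compact restatement of the interpretive logic above, the short sketch below encodes the theoretical marker patterns as a small lookup. It is an illustrative simplification of the rules stated in the text, not a validated diagnostic tool; the scenario labels and the helper function name are introduced here for the example only.

```python
# Illustrative sketch: theoretical D-dimer/SFM patterns in EA-VMTD,
# following the interpretive rules stated in the text (not a clinical tool).
MARKER_PATTERNS = {
    # EA-VMTD alone: fibrinogenolysis without TF-path activation -> no D-dimer
    "EA-VMTD, no organ damage":              {"d_dimer": "negative", "sfm": "positive"},
    # Organ damage (MODS, hepatic involvement) activates the TF path
    "EA-VMTD with MODS/hepatic involvement": {"d_dimer": "positive", "sfm": "positive"},
    # Coexisting macrothrombosis (e.g., PTE) adds fibrinogenesis + fibrinolysis
    "EA-VMTD + macrothrombosis":             {"d_dimer": "strongly positive", "sfm": "positive"},
}

def expected_markers(scenario: str) -> dict:
    """Return the theoretically expected D-dimer/SFM pattern for a scenario."""
    return MARKER_PATTERNS[scenario]

if __name__ == "__main__":
    for scenario, pattern in MARKER_PATTERNS.items():
        print(f"{scenario}: D-dimer {pattern['d_dimer']}, SFM {pattern['sfm']}")
```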
Treatment Options Based on in vivo Hemostatic Theory
Since the beginning of the pandemic, the use of anticoagulation and thromboprophylaxis with drugs such as TF path inhibitors (e.g., low molecular weight heparin), antiplatelet agents, and fibrinolytic therapy 118-120 was advocated for "comprehensive" treatment and prevention of both microthrombosis and macrothrombosis. Anticytokine therapy has also been attempted and debated. [121][122][123][124] Anticoagulation and anti-inflammatory regimens have shown no measurable benefit on the mortality of ARDS and an uncertain effect on established macrothrombotic disorders. This lack of success could have been presaged, because traditional anticoagulants counteracting the TF path are theoretically ineffective for microthrombosis (e.g., ARDS) 6 and were not beneficial in numerous clinical trials for sepsis-associated coagulopathy. Anticytokine therapy suppressed inflammation but did not reduce mortality. 121 The proposition is that the demise of the patient occurs not through the activated inflammatory pathway but through the activated ULVWF (microthrombotic) pathway. The main culprit of therapeutic failure is our lack of comprehension of the different pathogeneses of ARDS in sepsis and of PTE in virus-unrelated vascular injury.

Targeting EA-VMTD

The primary goal in treating ARDS and the associated MODS caused by EA-VMTD is resolving and preventing the pathologic microthrombosis that results from activation of the ULVWF path. The logical approach to counteracting this pathogenesis is to prevent and remove the excess of ULVWF multimers. The following are theoretically effective antimicrothrombotic regimens.
Recombinant ADAMTS13
Recombinant (r) ADAMTS13 has been available for clinical trials in GA-VMTD due to severe ADAMTS13 deficiency (i.e., hereditary TTP). In an animal model, its prophylactic administration protected ADAMTS13-knockout mice from developing TTP-like syndrome, and a therapeutic dose reduced the incidence and severity of TTP findings. 125 It has not yet been used for a conceptually established diagnosis of EA-VMTD in the clinical setting. Recent studies and a review showed a correlation between decreased ADAMTS13 levels and poor outcomes of sepsis in mice and in humans. 126,127 Furthermore, ADAMTS13-deficient mice were partly rescued from widespread microthrombosis in Staphylococcus aureus sepsis by the administration of rADAMTS13. 128 Theoretically, rADAMTS13 is the best regimen for the treatment of VMTD, but it has not been utilized in EA-VMTD or TTP-like syndrome. The hemostatic nature of VMTD caused by the endothelial molecular pathogenesis should encourage its evaluation in clinical trials.
Complement Inhibitors
Since the endotheliopathy in COVID-19 sepsis is promoted by complement-mediated damage to the endothelium of the host, 9,[15][16][17] which triggers microthrombogenesis, 28 the complement inhibitor eculizumab has been suggested 135 for use as an indirect antimicrothrombotic agent to suppress the endotheliopathy. Eculizumab is a long-acting monoclonal antibody approved for the treatment of paroxysmal nocturnal hemoglobinuria. This antibody targets complement component C5 and prevents its cleavage into C5a and C5b, thereby inhibiting the formation of C5b-9.
Considering that eculizumab downregulates C5b-9, it could inhibit ongoing endothelial dysfunction and prevent both the exocytosis of ULVWF and inflammation. In view of this theoretical ground 6,22,135,136 and initial promising clinical experience with complement inhibition, [137][138][139] several clinical trials are currently underway to evaluate anticomplement agents for the treatment of COVID-19. The complement inhibitors seem to have a potential role, but they should be employed with special care, and only in an immunologically safer stage of sepsis, 140 since they might interfere with the innate immune system that protects the host from septicemia and undefined pneumonia.
Disulfide Bond Reducing Mucolytic Therapy
N-acetylcysteine (NAC) is the N-acetyl derivative of the amino acid L-cysteine. It is a precursor in the formation of the antioxidant glutathione in the body and is a disulfide bond-reducing agent with mucolytic activity. The sulfhydryl group confers antioxidant effects and is able to reduce free radicals. This drug has been used in acetaminophen overdose and toxicity, in cystic fibrosis and chronic obstructive lung disease, and in symptomatic inhalation treatment as a mucolytic agent for thick mucus. Its oral form is an over-the-counter medicine with very few side effects. It has been found to possess potential antimicrothrombotic activity, 141 inhibiting the ULVWF path of hemostasis. The hemostatic function of human ULVWF depends on the normal assembly of disulfide-linked multimers from approximately 250-kDa subunits. Subunits initially form dimers through disulfide bonds near the COOH terminus; dimers then form multimers through disulfide bonds near the NH2 terminus of each subunit. 142 ULVWF multimers released from ECs are intrinsically active in binding platelets 102,103 and are suspected to be the essential component promoting microthrombogenesis in the MODS of sepsis. 22 ADAMTS13 exerts a disulfide bond-reducing activity that primarily targets the bonds located in the A2 domain of plasma ULVWF multimers. 143 Therefore, it is theorized that rADAMTS13 inhibits microthrombogenesis in the intravascular space by cleaving ULVWF in the patient with TTP, and also proteolyzes the ULVWF bound to microthrombi strings anchored to the damaged endothelium in EA-VMTD.
Similar to ADAMTS13, NAC exerts an analogous proteolytic effect on ULVWF by reducing disulfide bonds. In addition to being involved in the antioxidant mechanism, NAC has disulfide-breaking activity on the intrachain bond located in the platelet-binding A1 domain of ULVWF. It also inhibits VWF-dependent platelet aggregation and collagen binding. 141 The same process may explain the mucolytic action of NAC, which is due to its effect in reducing heavily cross-linked mucoproteins. 144,145 A few clinical reports and animal models have suggested a potential benefit of NAC in disseminated intravascular microthrombosis. [146][147][148] NAC is an inexpensive and readily available drug with a very high safety profile for oral use.
NAC has been investigated in ARDS as an antioxidant agent, without knowledge of the underlying pathophysiological mechanism caused by the molecular endothelial pathogenesis. [149][150][151][152] A benefit as an antioxidant agent was not consistently demonstrated in the limited clinical trials, but some positive results in a meta-analysis encouraged further study. 153 In my opinion, first, the case numbers in the clinical trials were insufficient for statistical analysis. Second, the dosage of NAC might have been too low to allow a valid conclusion on efficacy. Third, the inclusion of patients on ventilator support with tracheal intubation might have influenced the outcome, due to additional hemostatic complications such as PTE related to vascular injury, and skewed its interpretation. [154][155][156] These limitations could have precluded a fair assessment of the NAC response. In one limited but controlled study, Suter et al 149 concluded that NAC improved oxygenation but had no beneficial effect on the mortality of their study patients. However, a second look at the study suggests that NAC had a beneficial effect on ARDS. The benefit on oxygenation could be interpreted as a positive effect on pulmonary vascular microthrombosis, even though the study used a low dose of 40 mg/kg/day given for a short duration of 3 days.
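To put the dosing argument in numbers, the sketch below compares the 40 mg/kg/day ARDS trial dose with the standard 21-hour intravenous NAC regimen used for acetaminophen poisoning (150 mg/kg loading, then 50 mg/kg over 4 h, then 100 mg/kg over 16 h, about 300 mg/kg in total). The regimen figures come from routine toxicology practice rather than from this paper, and the 70-kg patient is an arbitrary illustration.

```python
# Rough comparison of NAC exposure: ARDS trial dose vs. the standard
# 21-hour IV regimen for acetaminophen poisoning (assumed figures).
weight_kg = 70.0  # hypothetical patient

ards_trial_dose = 40.0 * weight_kg                      # mg/day (40 mg/kg/day)
poisoning_regimen = (150.0 + 50.0 + 100.0) * weight_kg  # mg over ~21 h

print(f"ARDS trial dose:       {ards_trial_dose / 1000:.1f} g/day")
print(f"Acetaminophen regimen: {poisoning_regimen / 1000:.1f} g over ~21 h")
print(f"Ratio: ~{poisoning_regimen / ards_trial_dose:.0f}x higher in the poisoning regimen")
```

On these assumptions the poisoning regimen delivers roughly an order of magnitude more NAC per day than the ARDS trials did, which is the quantitative core of the "dose too low" critique above.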
Currently, following an evidence review, clinical therapeutic trials of NAC for the prevention and treatment of ARDS in COVID-19 are in progress. 157 Clinical trials utilizing the targeted agent rADAMTS13, based on the "two-path unifying theory" of hemostasis and the "two-activation theory of the endothelium", should be able to determine its potential benefit not only for COVID-19 but also for the entire spectrum of pathogen-induced sepsis, including ARDS, MODS and TTP-like syndrome as well as EA-VMTD. Theoretically and for practicality, if rADAMTS13 is not available for clinical trials for one reason or another, NAC may become an acceptable therapeutic agent in a pandemic. The treatment may be given parenterally to allow an effective dose comparable to that used in acetaminophen poisoning, depending upon the severity of EA-VMTD. If parenteral NAC therapy is found to be effective, the therapeutic dosage may be estimated and converted to an oral regimen.
Targeting Macrothrombosis in the Intensive Care Setting
In addition to the ARDS caused by microthrombosis, macrothrombosis presenting with the clinical phenotypes of multiple DVT, PTE, cerebral venous sinus thrombosis, acute ischemic stroke and acute myocardial infarction is an important issue, because it should be managed differently.
Considering that the former arises from endotheliopathy and the latter from localized vascular damage, rational management is as follows.
• Prevention by limiting in-hospital vascular accesses as much as possible
• Thromboprophylaxis prior to significant vascular intervention
• Anticoagulation for developed macrothrombosis

For the prevention of macrothrombosis, a two-pronged approach should be considered: one is to limit the risk factors by minimizing vascular damage from vascular accesses, procedures, endovascular devices and surgery, together with prudent decisions on ventilator support guided by practice guidelines and surveillance; the other is a rational decision for short-term thromboprophylaxis when needed. For the treatment of developed macrothrombosis, therapeutic anticoagulation counteracting the TF path of hemostasis, such as low molecular weight heparin, should be the standard treatment. Although anticoagulant therapy superimposed on an antimicrothrombotic regimen is anticipated to be safe and effective, close monitoring for potential bleeding complications is warranted.
Conclusion
The COVID-19 pandemic is a 21st-century challenge to humanity and civilization. With the organized efforts of the medical community worldwide, the pathophysiological mechanisms of this viral sepsis are being uncovered. The underlying pathogenesis is generalized endotheliopathy triggered by complement activation and subsequent endothelial molecular dysfunction, leading to inflammatory syndrome and VMTD. The endothelial molecular pathogenesis has been unequivocally established based on hemostatic evaluation and on clinical and pathologic findings. Theoretically, a targeted antimicrothrombotic regimen is expected to be effective against EA-VMTD and its complications, including TTP-like syndrome and MODS. Additional anticoagulant therapy would be needed if macrothrombosis due to virus-unrelated in-hospital events coincides with ARDS in COVID-19.
Data Sharing Statement
Data sharing is not applicable to this article, as no datasets were generated or analyzed during the current study.
Acknowledgments
The author expresses sincere appreciation to Miss Emma Nichole Zebrowski for her drawing of the structure of the blood vessel wall in relation to hemostasis and for the illustrative artwork of the Figures
Funding
There was no funding support for the research, preparation or publication of this article.
Disclosure
The author reports no conflicts of interest in this work. | 2021-06-09T07:02:25.694Z | 2021-06-01T00:00:00.000 | {
"year": 2021,
"sha1": "63dbae457df92bfd4f1f63690d47d2f51fd198ab",
"oa_license": "CCBYNC",
"oa_url": "https://www.dovepress.com/getfile.php?fileID=70077",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "63dbae457df92bfd4f1f63690d47d2f51fd198ab",
"s2fieldsofstudy": [
"Biology",
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
22415680 | pes2o/s2orc | v3-fos-license | Gradual sucrose gastric loading test: A method for the prediction of nonsuccess gastric enteral feeding in critically ill surgical patients
Background and Aims: Intolerance of gastric enteral feeding (GEN) commonly occurs in the surgical Intensive Care Unit (SICU). A liquid containing sugar can prolong gastric emptying time. This study proposes a method for the prediction of nonsuccessful GEN using the gastric volume after loading (GVAL) following gradual sucrose gastric loading. Materials and Methods: Mechanically ventilated and hemodynamically stable patients in the SICU were enrolled. About 180–240 min before GEN was started, a sucrose solution (12.5%; 450 mosmol/kg, 800 mL) was administered via a gastric feeding tube over 30 min with the head in a 45° upright position. GVAL was measured at 30, 60, 90, and 120 min after loading. GEN success status, using clinical criteria, was assessed at 72 h after the start of the GEN protocol. Receiver operating characteristic (ROC) curves and the c statistic were used for discrimination at each GVAL time point. Results: A total of 32 patients were enrolled and completed the protocol. Fourteen patients (43.7%) had nonsuccessful GEN. The nonsuccess group had significantly greater GVAL than the success group at all time points during the test (P < 0.05). The most discriminating GVAL cut-off for the prediction of nonsuccess was 150 mL at 120 min after loading, with a sensitivity of 92.3%, specificity of 88.9%, positive predictive value of 85.7%, negative predictive value of 94.1%, and ROC area of 0.97 (95% confidence interval 0.91–1.00). Conclusion: A high GVAL following a sucrose gastric loading test might be a method to predict nonsuccessful GEN in critically ill surgical patients.
Introduction
Critically ill patients need an appropriate energy supply, particularly patients who were previously malnourished or elderly patients with lower body reserves. [1] Postoperative patients are at high risk of caloric deficit. Energy deficit during a period of critical illness can lead to worse outcomes and also increase infective complications; in addition, later energy provision does not alleviate these effects. [2] Although gastric enteral feeding (GEN) is the preferred option in Intensive Care Units (ICU), many mechanisms can potentially result in gastric dysmotility. [3] There are many proposed accurate methods for the measurement of gastric emptying time, including scintigraphy, the paracetamol absorption test, breath tests, refractometry, ultrasound, and gastric impedance monitoring. [4] However, because of methodological difficulties, one of the most popular and inexpensive tests ("poor man's test") for gastric emptying is the measurement of gastric residual volume (GRV). [5] In critically ill patients, a high GRV level is also related to disease severity and worse outcomes. [6] Even in a high-risk situation for aspiration of gastric contents, in which intubation is required during anesthetic induction, preoperative drinking of 400 mL of an oral carbohydrate treatment 150 min before the estimated time of surgery is safe. [8] In addition, gastric retention can be estimated by a saline gastric load test in surgical practice. [9,10] A positive result was defined as a gastric volume after loading (GVAL) >400 mL of saline 30 min after rapid administration of 750-800 mL. [9,10] Therefore, the use of carbohydrate loading in critically ill patients might be a method allowing estimation of gastric emptying time and prediction of GEN success, as well as being a relatively harmless procedure.
To achieve the energy target, particularly in malnourished patients, and to decrease the energy deficit, the European Society of Parenteral and Enteral Nutrition suggested that parenteral nutrition should be considered within 24-48 h in all patients who are not expected to be on normal nutrition within 3 days, or if enteral feeding is contraindicated or not tolerated. [11] On the other hand, routine regular GRV checking during enteral feeding is currently a controversial issue. [12] A method for predicting nonsuccessful GEN might serve as a triage parameter for selecting patients who need close monitoring. However, the prediction of common GEN nonsuccess is not well established in clinical practice. Therefore, the objective of this study was to propose a method to predict GEN nonsuccess at 72 h after GEN initiation, using GVAL measurements after 12.5% sucrose gastric loading.
Study design and population
The study design was a delayed cross-sectional diagnostic study in the surgical Intensive Care Unit (SICU) of a university-based hospital in Thailand. Patients enrolled were those who needed enteral feeding via a nasogastric (NG) or orogastric (OG) tube and required mechanical ventilator support for >3 days between January 2011 and November 2012. An OG tube was inserted only if an NG tube was contraindicated, as in skull-base fracture and rhinorrhea. Patients excluded from the study were those who had unstable vital signs, needed inotropic or vasopressor drugs, or had a prior gastrectomy. As there was no prior similar study, 33 patients were initially enrolled for a pilot study (one was excluded). The study flow is shown in Figure 1. The Institute Ethics Committee approved this study (study code SUR110701A13X).
Study protocol
This test was a prefeeding procedure. After informed patient consent had been secured, the patient's head was raised to 45° upright. An NG or OG tube (14 French) was inserted, and its position was checked by listening for an air-blowing sound in the stomach. All remaining gastric contents were withdrawn. SICU nurses gradually fed 800 mL of 12.5% sucrose (12.5 g of sucrose per 100 mL; 450 mosmol/L) over 30 min via the NG or OG tube by feeding pump. For safety, the head was kept elevated during the procedure. Abdominal symptoms, including abdominal tenderness, distention, nausea, vomiting, patient discomfort, and aspiration, were observed during the administration period. If these signs and symptoms occurred, the test was discontinued and all gastric contents were withdrawn. The total GVAL was measured at 30, 60, 90, and 120 min, respectively, and all viscous contents of the GVAL were returned after each time point. The test was performed by the same trained physician. All patients were fed a hospital-prepared formula (1 calorie/mL concentration; protein:fat:carbohydrate = 55:30:15%, respectively; nonprotein calories to grams of nitrogen ratio = 133:1). Feeding was started between 180 and 240 min after the loading test; this period was the time needed for enteral feeding preparation and transfer from the hospital dietetic unit to the ICU. Blood glucose, blood urea nitrogen (BUN), creatinine (Cr), sodium, and potassium levels were tested before and after loading.
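For concreteness, the small sketch below works out the pump rate and the carbohydrate load implied by the protocol figures above (800 mL of 12.5% sucrose over 30 min). The 4 kcal/g energy value for sucrose is a standard nutritional constant, not a figure from the paper.

```python
# Worked numbers for the loading protocol described above.
volume_ml = 800.0           # total test volume
duration_min = 30.0         # infusion time
concentration = 12.5 / 100  # 12.5 g sucrose per 100 mL

pump_rate_ml_h = volume_ml / (duration_min / 60.0)  # -> 1600 mL/h
sucrose_g = volume_ml * concentration               # -> 100 g
energy_kcal = sucrose_g * 4.0                       # ~400 kcal (assumed 4 kcal/g)

print(f"Pump rate: {pump_rate_ml_h:.0f} mL/h")
print(f"Sucrose load: {sucrose_g:.0f} g (~{energy_kcal:.0f} kcal)")
```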
Feeding method and outcome measurement
All patients received the same standard hospital enteral formula and enteral feeding protocol. Prokinetic drugs were prohibited during the study period to prevent confounding. The enteral feeding target was set according to the recommended energy expenditure estimate (25-30 kcal/kg/day). [13] This estimate depended on patient status, disease severity, and the attending intensivist's decision. Initially, the feeding rate was started at 40 mL/h continuously and increased progressively by 20-40 mL every 4 h if there were no signs and symptoms of feeding intolerance. Although this starting rate was slightly higher than the traditional feeding protocol of the algorithms for critical-care enteral and parenteral therapy study (25 mL/h), the recent volume-based PEP uP protocol in critically ill patients allows a maximum rate of up to 150 mL/h. [14,15] The feeding rate was increased until the energy target was achieved. The details of the feeding protocol and decision guideline used in this study are shown in Figure 2. The hospital enteral formula was changed to a peptide-based or fiber-containing formula if the patient developed diarrhea. [16] Feeding success at 72 h after starting GEN was defined as the patient receiving ≥80% of the estimated required calories, or the target rate, via the gastric route without abdominal symptoms. [11,14]
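As an illustration of this escalation scheme, the sketch below simulates the ramp-up for a hypothetical 60-kg patient fed the 1 kcal/mL formula, starting at 40 mL/h and increasing by 20 mL/h every 4 h (the lower bound of the stated 20-40 mL step, read here as a rate increment). The patient weight and the choice of step size are assumptions for the example, not study data.

```python
# Simulated feeding ramp-up under the protocol described above.
weight_kg = 60.0                          # hypothetical patient
target_rate = 25.0 * weight_kg / 24.0     # mL/h for 25 kcal/kg/day at 1 kcal/mL

rate, hours = 40.0, 0.0                   # start at 40 mL/h
while rate < target_rate:
    hours += 4.0                          # reassess tolerance every 4 h
    rate = min(rate + 20.0, 150.0)        # +20 mL/h step, capped at 150 mL/h

print(f"Target rate: {target_rate:.1f} mL/h")
print(f"Reached after ~{hours:.0f} h of tolerated feeding (final rate {rate:.0f} mL/h)")
```

Under these assumptions the target of 62.5 mL/h is reached within about 8 h of tolerated feeding, well inside the 72-h success window.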
Data collection and statistical analysis
Demographic data, ICU admission details, Acute Physiology and Chronic Health Evaluation II (APACHE II) score, number of starving days before feeding, GVAL after the sucrose loading test at each time point, and the calories per day that patients received at 72 h after starting feeding were collected. The data were analyzed with STATA software (version 11.0, STATA Inc., College Station, TX). Differences in continuous variables were tested using Student's t-test for normally distributed data, reported as mean ± standard deviation, or using the Mann-Whitney U-test for nonparametric distributions, reported as median (25-75 interquartile range [IQR]). For categorical variables, Pearson's chi-square and Fisher's exact tests were used. Differences between pre- and post-test laboratory values were tested using a paired t-test or Wilcoxon's signed-rank test, based on their distribution. The authors used receiver operating characteristic (ROC) plots and the ROC area (c statistic) to assess the test's discriminative ability and to determine the optimal cut-off point of the independent variable (GVAL) for the prediction of GEN success. The Hosmer-Lemeshow goodness-of-fit test was used for calibration between the observations and the model at each time point. Sensitivity, specificity, positive predictive value (PPV), negative predictive value (NPV), and likelihood ratios are reported. Statistical significance was set at P < 0.05.
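As a reminder of how these diagnostic metrics relate to a 2x2 table, the sketch below computes sensitivity, specificity, PPV, NPV, and likelihood ratios from raw counts. The example counts are hypothetical, chosen only to roughly resemble the reported accuracy; they are not the study's actual 2x2 table.

```python
def diagnostic_metrics(tp: int, fp: int, fn: int, tn: int) -> dict:
    """Standard 2x2-table accuracy measures for a dichotomized test."""
    sens = tp / (tp + fn)
    spec = tn / (tn + fp)
    return {
        "sensitivity": sens,
        "specificity": spec,
        "ppv": tp / (tp + fp),
        "npv": tn / (tn + fn),
        "lr_pos": sens / (1 - spec),
        "lr_neg": (1 - sens) / spec,
    }

# Hypothetical counts for GVAL >= 150 mL at 120 min (illustration only).
for name, value in diagnostic_metrics(tp=12, fp=2, fn=1, tn=16).items():
    print(f"{name}: {value:.3f}")
```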
Results
Thirty-three SICU patients were enrolled and underwent gastric loading. One patient vomited during the loading test and was excluded. The test was performed at a median (IQR) of day 3 (2-4) after ICU admission. The remaining patients did not develop adverse events during the test, and all protocol measurements were completed, except that one patient felt mild abdominal discomfort. No pulmonary complications, including ventilator-associated pneumonia, occurred after the procedure in any enrolled patient. No alterations of hemodynamic parameters were observed during testing. Regarding patient demographic data and admission details (Table 1), 14 patients (43.7%) were unsuccessful with GEN according to the study definitions. Three patients developed diarrhea, and the enteral formula was modified. There were no statistically significant differences (P > 0.05) between the nonsuccessful and successful groups regarding patient characteristics, including age, gender, body weight, height, body mass index, site of surgery, underlying diseases, SICU admission causes, nil-by-mouth reasons, energy requirement, and APACHE II score on the day of testing. Regarding the basic laboratory testing (Table 2), there was no difference in blood glucose level (P = 0.73), BUN (P = 0.40), creatinine (P = 0.57), sodium (P = 0.70), or potassium (P = 0.27) before and after sucrose loading. The median amounts of GVAL were statistically significantly different between the successful and nonsuccessful groups (Table 3). The predictive model of nonsuccess fitted at every time point when tested by the Hosmer-Lemeshow goodness of fit. The ROC area increased over time, and the highest value was for the measurement at 120 min after the sucrose gastric loading test (ROC area 0.98). This means that the most accurate time for nonsuccessful GEN prediction using GVAL was the measurement at 120 min (Figure 3).
Regarding discrimination by ROC analysis (Table 4 and Figure 1), the most appropriate cut-off points for nonsuccess GVAL after sucrose loading at 30, 60, 90, and 120 min were 400, 300, 200, and 150 mL, respectively. Although the median GVAL differed significantly between groups at all time points, 120 min after sucrose loading with a cut-off point of at least 150 mL yielded the highest likelihood ratio (9.5), sensitivity (92.3%), specificity (88.9%), PPV (85.7%), and NPV (94.1%). Regarding hospital outcomes, one patient died 3 weeks after an emergency abdominal aortic aneurysm repair, and this condition was not associated with the study protocol.
Discussion
This study proposed a novel method for predicting GEN nonsuccess in critically ill patients. Although feeding via the jejunal route (percutaneous jejunostomy or nasojejunal tube) might increase the success rate of early enteral feeding, because jejunal peristalsis resumes earlier than gastric peristalsis, and could decrease septic complications after injury, these methods are not widely available, especially in limited-resource ICUs, and also need skilled intervention in nonabdominal surgical patients. [17] Regarding the testing method, although no previous method used carbohydrate loading as detailed in this setting, there are some points of discussion on the details of loading the fluid. The concentration of the testing substance in this study was 12.5% sucrose solution (450 mosmol/kg). This was selected for the following reasons: (1) this concentration is recommended for preoperative oral carbohydrate loading in abdominal surgery for enhanced patient recovery; and (2) a high-osmolarity, disaccharide-containing fructose-hexose solution could increase gastric emptying time. [7,18,19] For the testing volume, the authors selected 800 mL of fluid because routine surgical gastric retention diagnostic testing with the rapid saline load test utilizes 750-800 mL, with the measurement of GVAL performed 30 min later. [9,10] However, rapid administration of a saline load test might cause harm in a critically ill patient; therefore, gradual feeding might alleviate complications, and a slow load over 30 min was established in our protocol. Although this protocol was followed, one patient vomited during the test. For this reason, the authors recommend that all patients be closely observed during the test, especially patients who are heavily sedated or paralyzed, as well as those who have defective airway protective mechanisms.
The prevalence of nonsuccessful GEN was 43.7%, which was slightly lower than in a previous study that found 56% in trauma ICU patients. [20] The standard clinical criteria for nonsuccessful enteral feeding are inconsistent. Although objective measurement of the myoelectric activity of the bowel wall might be a better parameter for feeding success, it was unavailable and difficult to perform in a clinical study. Therefore, this study defined nonsuccessful GEN based on the attending intensivist's decision, which depended on a combination of abdominal symptoms and received energy compared with targeted energy, following the feeding protocol. [14,15] The routine measurement of GRV is controversial, and this regular checking did not produce differences in the occurrence of pulmonary complications in critically ill patients. [12,21] However, disregarding GRV checking in intolerant patients, particularly in an ICU setting, might lead to patient discomfort and suffering, especially in the surgical ICU. In addition, increased GRV correlates with disease severity and patient outcomes. [6] Sucrose loading might be an alternative method for screening and triaging patients who need close monitoring if GEN is initiated. The cut-off point of 150 mL at 120 min after sucrose loading showed the most appropriate sensitivity and specificity in this study. In addition, supplemental parenteral nutrition might be started early, particularly in previously malnourished patients, or small bowel feeding might be considered early if there is a high probability of unsuccessful GEN. This strategy might decrease the energy deficit in these patients and result in decreased complications. [2] The strength of this study is the newly proposed method for screening for feeding nonsuccess in critically ill patients. However, there were some inevitable limitations. First, despite slightly higher blood glucose levels in the postloading period, the blood sugar levels before and after sucrose loading did not show statistically significant differences; this phenomenon might have occurred because of poor absorption of sucrose in nonsuccessful GEN patients. However, extrapolation of these results should be done cautiously, particularly in patients with underlying diabetes mellitus (DM), because only 6% of all enrolled patients had a previous history of DM. Second, all participating patients were surgical and hemodynamically stable patients; using this method in medical ICU patients, as well as in patients needing high-dose vasopressors, requires further validation. Third, although there were no vital sign alterations in any tested patient, including elderly patients, gastric distention might induce hypotension via a vagal response; therefore, this method should be used with caution, and the testing volume might need to be reduced in frail patients. Fourth, although there was no statistically significant difference in the site of surgery, the distribution of nonsuccessful GEN was not equal across all surgical types; further validation in each subgroup should be performed in a future study. Finally, clinical outcomes, especially nutritional status, were not included in this study, and the sample size of this pilot study was small. In addition, the strict inclusion criteria of this study might not cover the full spectrum of critically ill patients. A further pragmatic study using this test with a larger sample size for guiding GEN feeding should be performed. However, this pre-feeding test might be of benefit in considering appropriate nutritional treatment options and in triaging patients who need close monitoring during GEN.
Conclusion
The sucrose loading test might be a method to predict GEN success, particularly using assessment of GVAL at 120 min with a cut-off point of >150 mL.
Figure 2: Gastric enteral feeding protocol and decision guideline
Figure 3: Receiver operating characteristic plots of gastric residual volume measurement at each time point for nonsuccessful enteral feeding at 72 h after the sucrose gastric loading test
Table 2: Serum laboratory results (mean [SD]) before and after the sucrose loading test. ‡ g/L; § mEq/L; SD: standard deviation
Table 3: Median amount of GVAL and ROC area at each time point. CI: confidence interval; GRV: gastric residual volume after the glucose gastric loading test; GVAL: gastric volume after loading; IQR: interquartile range; ROC: receiver operating characteristic
Table 4: Sensitivity, specificity, PPV, NPV, and LR of GVAL after sucrose loading at the most appropriate cut-off point of each aspiration time for the prediction of nonsuccessful feeding at 72 h
LR+: positive likelihood ratio; LR−: negative likelihood ratio; PPV: positive predictive value; NPV: negative predictive value; GVAL: gastric volume after loading | 2018-04-03T02:05:31.982Z | 2015-02-01T00:00:00.000 | {
"year": 2015,
"sha1": "983e1a9c5d88790c3281550c7f786e6a6024e5aa",
"oa_license": "CCBYNCSA",
"oa_url": "https://doi.org/10.4103/0972-5229.151017",
"oa_status": "BRONZE",
"pdf_src": "ScienceParseMerged",
"pdf_hash": "bea9a977942cc1c860f99a1eeff14bf86e406d2d",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
256451065 | pes2o/s2orc | v3-fos-license | Primary Stereotactic Radiosurgery Provides Favorable Tumor Control for Intraventricular Meningioma: A Retrospective Analysis
The surgical resection of intraventricular meningiomas (IVMs) remains challenging because of their anatomically deep location and proximity to vital structures, resulting in non-negligible morbidity and mortality rates. Stereotactic radiosurgery (SRS) is a safe and effective treatment option providing durable tumor control for benign brain tumors, but its outcomes for IVMs have rarely been reported. Therefore, the goal of the present study was to evaluate the SRS outcomes for IVMs at our institution. This retrospective observational study included 11 patients with 12 IVMs, with a median follow-up period of 52 months (range, 3–353 months), treated with SRS using the Leksell Gamma Knife. Nine (75%) tumors were located in the trigone of the lateral ventricle, two (17%) in the body of the lateral ventricle, and one (8%) in the third ventricle. Tumor control was achieved in all cases, and seven (58%) tumors decreased in size. Post-SRS perifocal edema was observed in four (37%) patients (three asymptomatic and one symptomatic but transient), all of which had resolved by the last follow-up. SRS appears to provide safe and excellent tumor control for IVMs. Longer follow-up with a larger number of cases is desired for a more solid conclusion.
Introduction
Meningiomas, the most common benign intracranial tumors, originate from arachnoid cap cells [1,2] irrespective of location. The standard therapeutic option is surgical resection: superficial tumors are easy to resect, while deep tumors are often challenging because of the important anatomical structures surrounding them.
Intraventricular meningiomas (IVMs) are rare, accounting for only 0.3-5% of all meningiomas, and are among the most challenging tumors because of their deep location [3,4]. Smaller IVMs are usually asymptomatic, whereas larger IVMs can manifest with various symptoms such as headache, visual field deficits, ataxia, paresis, seizure, and hydrocephalus [3,5-10]. Many factors can complicate safe surgical resection and jeopardize patients' neurological outcomes: (1) sacrificing cerebral cortex to approach the tumor; (2) critical nerve tracts surrounding the tumor and the surgical trajectory; and (3) difficulty with hemostasis deep inside the brain, especially for large tumors. Recent advances in neuroendoscopic surgery have offered adequate surgical exposure with minimal invasiveness, though the reported surgical complication rate is high, up to 33% [11-14]. Moreover, there is a non-negligible risk of surgery-associated mortality, which was reported to be 1.6% in a recent systematic review [4].
In light of the above, tumor control that preserves the surrounding functional anatomy is crucial and desirable in the management of IVMs. Stereotactic radiosurgery (SRS) is characterized by accurate targeting and the delivery of high-dose focused irradiation in a single session, offering a minimally invasive treatment option for intracranial tumors. Since radiation exposure to the surrounding structures can be adequately reduced owing to its sharp dose fall-off, SRS can be considered an appropriate treatment for IVMs; however, there remains a paucity of evidence, likely because of the rarity of IVMs. Hence, we conducted the present study, including detailed analyses of the radiosurgical outcomes of IVMs, to elucidate the efficacy and safety of SRS for IVMs.
Patient Data Collection
Of the 352 patients with intracranial meningioma who underwent SRS from 1990 to 2022 at our institution, data on 12 patients with 13 IVMs were collected from the institutional Gamma Knife database. One patient with <3 months of follow-up was excluded from the analysis, while patients with neurofibromatosis type 2 (NF2) were included. All tumors were diagnosed based on their radiologic findings, and all radiologic images were reviewed by two independent neuroradiologists and the attending neurosurgeons. The study was approved by the Institutional Review Board of our institution (#2231). All patients provided written informed consent for study participation.
SRS Procedure
The Leksell Gamma Knife (Elekta Instruments, Stockholm, Sweden) was used for all SRS procedures. The detailed treatment process has been reported previously [15,16]. After head fixation using a Leksell frame (Elekta Instruments), stereotactic imaging (computed tomography [CT] before July 1996, magnetic resonance imaging [MRI] between August 1996 and January 2018, and cone-beam CT thereafter) was performed to obtain precise tumor data. Dedicated neurosurgeons and radiation oncologists performed radiosurgical planning using commercially available software (the KULA planning system until 1998 and the Leksell Gamma Plan thereafter [Elekta Instruments]). In principle, 16 Gy before 2010 and 14 Gy thereafter were prescribed to the tumor margin using a 50 ± 5% isodose line. Representative cases are shown in Figure 1.
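The prescription convention above implies a simple relation between the margin dose and the maximum dose inside the target: prescribing D at the p% isodose line means the maximum dose is roughly D/p. The short sketch below works this out for the quoted doses; it is a generic back-of-the-envelope calculation, not taken from the paper's planning data.

```python
# Maximum dose implied by prescribing the margin dose at a given isodose line.
def max_dose(margin_dose_gy: float, isodose_fraction: float) -> float:
    return margin_dose_gy / isodose_fraction

for margin in (16.0, 14.0):           # Gy, pre-/post-2010 protocols
    for iso in (0.45, 0.50, 0.55):    # 50 +/- 5% isodose line
        print(f"{margin:.0f} Gy at {iso:.0%} isodose -> max ~{max_dose(margin, iso):.1f} Gy")
```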
Follow-Up and Clinical Outcomes
After SRS, MRI was performed every 6 months for the first 3 years and annually thereafter. Tumor response after SRS was judged by the Response Assessment in Neuro-Oncology criteria [17,18]; tumor progression was defined as an enlargement in volume of ≥25% on two or more consecutive post-SRS images. The patients' neurological status and radiological responses to SRS were prospectively collected at each hospital visit, and any adverse events were recorded and graded using the Common Terminology Criteria for Adverse Events (CTCAE, version 5.0). Data on patients who dropped out of regular follow-up or returned to their referring physicians were collected via telephone, and follow-up radiographic images were obtained.
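The progression rule above is easy to state operationally; the sketch below flags progression when the volume on two or more consecutive follow-up images exceeds the baseline by at least 25%. The function name and the example volume series are illustrative, not study data.

```python
def is_progression(baseline_cc: float, follow_up_cc: list[float]) -> bool:
    """Progression: volume >= 125% of baseline on two or more consecutive images."""
    consecutive = 0
    for volume in follow_up_cc:
        if volume >= 1.25 * baseline_cc:
            consecutive += 1
            if consecutive >= 2:
                return True
        else:
            consecutive = 0  # growth must be sustained, not a one-scan blip
    return False

print(is_progression(4.0, [4.8, 5.2, 4.1, 4.0]))  # False: only one scan >= 5.0 cc
print(is_progression(4.0, [5.1, 5.4, 5.6]))       # True: sustained >= 25% growth
```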
Statistical Analysis
First, the baseline characteristics of the patients were summarized. Second, progression-free rates (PFRs), disease-specific survival (DSS), overall survival (OS), neurological preservation, and post-SRS peritumoral T2 signal change rates were calculated using the Kaplan-Meier method, excluding the patient with only three months of follow-up. Third, factors associated with PFRs and post-SRS peritumoral T2 signal change rates were examined using bivariate Cox proportional hazards analyses. Continuous variables were entered into the models after being dichotomized at their median values. Statistical analyses were performed using JMP Pro 16 software (SAS Institute Inc., Cary, NC, USA).
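For readers who want to reproduce this style of analysis outside JMP, the sketch below fits a Kaplan-Meier curve with the Python lifelines package on synthetic follow-up data; the durations and event indicators are invented for illustration and do not correspond to the study's patients.

```python
# Kaplan-Meier sketch with synthetic data (requires: pip install lifelines).
from lifelines import KaplanMeierFitter

# Hypothetical follow-up in months and event flags (1 = progression, 0 = censored).
durations = [12, 24, 36, 48, 52, 60, 72, 96, 120, 150, 200]
events    = [0,  0,  0,  0,  0,  0,  0,  0,  0,   0,   0]   # no progression observed

kmf = KaplanMeierFitter()
kmf.fit(durations, event_observed=events, label="PFR")
print(kmf.survival_function_)  # stays at 1.0 when no events occur, as in this study
```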
Patient and Tumor Characteristics
Eleven patients (six women and five men) with a median age of 45 years (range, 13-80 years) were included in the study. The median post-SRS follow-up period was 52 months. The baseline characteristics and treatment data are summarized in Table 1, and the details of the patients are described in Table 2. Five patients had multiple NF2-related intracranial meningiomas and underwent SRS for their IVMs. In one patient, bilateral trigonal meningiomas were treated simultaneously with SRS.
Tumor Control
Of the 12 tumors, seven (58%) had decreased in size by the last follow-up visit, while five (42%) were stable in size. Tumor control was achieved in all patients; therefore, the cumulative 5- and 10-year PFRs were 100% (Figure 2). No significant differences in tumor control were observed between the sporadic and NF2-related IVMs. After SRS, five patients with multiple meningiomas underwent additional interventions for growing meningiomas other than their IVMs. Eventually, two of the five NF2 patients died of progression of such tumors, although their IVMs were well controlled after SRS. As a result, the cumulative 5- and 10-year DSS rates of IVMs after SRS were 100%, although the 3- and 10-year OS rates were 86% and 71%, respectively (Figure 3A). OS rates were lower in NF2 patients than in sporadic IVM patients (67% vs. 100% at 3 years, and 33% vs. 100% at 10 years, respectively), although these differences were not statistically significant (log-rank test, p = 0.070; Figure 3B).
Adverse Radiation Events (AREs)
No AREs were observed, and the 5- and 10-year neurological preservation rates were 100% (Figure 4A). Post-SRS peritumoral T2 signal change was observed in four (33%) patients with trigonal IVMs. The signal change developed at 6-29 months after SRS and diminished at 9-40 months. The 1- and 3-year cumulative post-SRS signal change rates were 18% and 40%, respectively (Figure 4B). One patient (8%) complained of a transient headache along with the signal change, but her symptom and the signal change disappeared following oral administration of a corticosteroid for 1 month. No factors were significantly associated with post-SRS signal change (Table 3). No other AREs, including hydrocephalus, seizure, and visual field deficit, were observed after SRS.
Abbreviations (Table 3): CI, confidence interval; HR, hazard ratio; NF2, neurofibromatosis type 2; SRS, stereotactic radiosurgery; V12, volume of normal brain tissue exposed to ≥12 Gy.
Discussion
In this study, we analyzed the radiosurgical outcomes of SRS for IVMs. SRS provided an excellent PFR (100% with a median follow-up period of 52 months). Importantly, our patients included NF2-related IVMs, suggesting that SRS is effective regardless of NF2 mutation status. Transient post-SRS signal change occurred in 33% of the cases, but there were no permanent AREs. These results are promising and comparable to the SRS outcomes for intracranial meningiomas at other locations [19-23].
For IVMs, surgical resection is more complicated than for meningiomas at other locations because of the deep location with limited accessibility and the adjacent eloquent neurovascular structures, leading to high post-surgical morbidity and mortality rates. Trigonal IVMs are especially challenging among lateral ventricle IVMs because the medial part of the tumor is in contact with the optic radiation and the feeding arteries arise from the deepest part of the tumor via the transparietal transcortical route; accordingly, the reported surgical morbidity rates range from 12.5% to 60%, including hemianopia, hemiparesis, intracranial hemorrhage, and intracranial hypertension [3,11,12,24,25]. Furthermore, surgical morbidities are reported to be more frequent and more severe in third ventricle IVMs than in lateral ventricle IVMs because of the proximity to the thalamus, brainstem, and cranial nerve nuclei [25-30]. As a result, the reported mortality rate is as high as 4%, 44% of which occurred during the postoperative period [4].
On the other hand, only a few previous studies have described the outcomes of SRS for IVMs, with relatively favorable tumor control rates ranging from 67% to 100% (Table 4; we searched PubMed without language restrictions for papers published from database inception up to December 1, 2022, using the search terms "intraventricular meningioma", "ventricular meningioma", and "stereotactic radiosurgery", and identified 127 previous reports on IVMs, including five studies evaluating SRS for IVMs) [19-23]. Although our study demonstrated excellent tumor control, some previous studies revealed that salvage SRS for progressive recurrent tumors may not always provide sufficient tumor control. In the studies conducted by Kim et al. [20] and Daza-Ovalle et al. [19], two of the three (67%) failed cases were salvage cases for recurrent tumors following prior resection. This suggests that the radiation dose may need to be increased for such progressive tumors. From another viewpoint, upfront SRS for residual tumors may be more reasonable than salvage SRS after recurrence, as was shown in a recent multi-center retrospective study [31]. Notably, all tumors in patients who had NF2 or multiple meningiomas were under good control after SRS. Although patients with multiple meningiomas tend to require multiple interventions and exhibit a shorter overall PFR [32-34], NF2-associated IVMs could be controlled with SRS [23]. Further case accumulation is desirable for a more solid conclusion on these issues.
New or worsening perifocal edema is the main ARE of SRS for IVMs [19-23]. In general, peritumoral T2 signal changes occur in 28-50% of cases after SRS for meningiomas, with the incidence of symptomatic changes ranging between 5% and 43% [35]. The risk factors for symptomatic signal change include older age, larger tumor volume, higher radiosurgical dose, presence of peritumoral edema before SRS, and primary SRS [35,36]. The present study observed post-SRS signal changes in four tumors; three (25%) were asymptomatic and one (8%) was symptomatic but transient. Previous reports on SRS for IVMs have shown an incidence of post-SRS peritumoral edema ranging from 17% to 100% [19-23]. In the report by Nundkumar et al., in which both of their two cases developed post-SRS peritumoral edema, a marginal dose as high as 18 Gy was used, and in one of them surgical resection was required because of uncontrolled edema even without tumor progression [22]. Given, as Daza-Ovalle et al. highlight, the association between the volume receiving >12 Gy and the occurrence of peritumoral edema [19], the radiosurgical dose may be an important determinant and needs to be balanced against tumor volume. Despite a certain risk of perifocal edema, all of our patients went through it without surgical intervention, developing no permanent morbidity.
The optimal radiosurgical dose for IVMs remains debatable. As shown in Table 4, 12-18 Gy was mainly used as the margin dose. Nevertheless, 18 Gy appears to be too high for a margin dose, given that one of the two patients in our study and both patients in Nundkumar's cohort [22] who received 18 Gy at the tumor margin later developed perifocal edema. Initially, 16 Gy was used as the margin dose at our institution, but this has since been reduced to 14 Gy in order to reduce AREs. At this moment, the optimal radiosurgical dose would appear to lie somewhere between 12 and 16 Gy. As discussed above, a higher dose may be desirable for progressive recurrent tumors.
This study has several limitations. First, it was a retrospective, single-institutional study with potential selection bias. To determine the efficacy and safety of SRS for larger tumors, further investigation is required. Second, all tumors in this study were diagnosed radiologically; therefore, the diagnoses might be less reliable than histological diagnoses. A larger number of patients would be desirable in future studies to confirm our results.
Conclusions
SRS can be an appropriate treatment option for IVMs, achieving favorable mid-term tumor control without jeopardizing neurological function. Further investigation in a larger study is warranted to establish the role of SRS for IVMs.
Data Availability Statement:
Anonymized data from this article will be made available by request from any qualified investigator, and information about the method of analysis will be available from the corresponding author upon reasonable request.
Conflicts of Interest:
The authors declare no conflict of interest. | 2023-02-01T16:13:22.051Z | 2023-01-30T00:00:00.000 | {
"year": 2023,
"sha1": "e535da8b16cd0a38597d942a4f2e1774d9b8b09b",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2077-0383/12/3/1068/pdf?version=1675052820",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "cf742f039b015aa29e121861c251661ad1262bf9",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
119277762 | pes2o/s2orc | v3-fos-license | Comments on the paper `Unifying Boxy Bulge and Planar Long Bar in the Milky Way' by Martinez-Valpuesta & Gerhard [arXiv:1105.0928]
We comment on the recent paper by Martinez-Valpuesta & Gerhard (arXiv:1105.0928, 2011), who suggest, as an alternative to the bulge + long bar hypothesis for the inner 4 kpc of our Galaxy, a single boxy-bulge structure with a twisted major axis. In principle, we find this proposal acceptable; indeed, from a purely morphological point of view, this is more a question of semantics than of science, and possibly all of us are talking about the same thing. However, we think that the particular features of this new proposal of a "single twisted bulge/bar" scenario leave certain observational facts unexplained, whereas the model of a misaligned bulge + long bar successfully explains them.
Discussion
Martínez-Valpuesta & Gerhard (2011, hereafter MG11) criticize the proposal of the existence of a boxy bulge + long bar in the centre of the Milky Way (Hammersley et al. 2000; López-Corredoira et al. 2001; Benjamin et al. 2005). Finding possible problems in a hypothesis, or alternatives to it, is always an interesting exercise. Nonetheless, we find that MG11 is just a first step in an analysis that is still far from solving the problem of the structure in the inner 4 kpc of our Galaxy, and that leaves unexplained many observations related to the possible existence of the long bar. The basic gist of the article is as follows.
• The star-count maximum along a line of sight crossing a triaxial bulge/bar structure is not coincident with the major axis of the structure. That is indeed the case and has already been stated and discussed at length by our team (appendices A and B of L07; section 6 of C07); a simple numerical illustration of this effect is sketched just after these two points. Our conclusions, coincident with those of MG11, are that for a thick structure (the bulge) the difference between the angle of the real structure and the apparent one in the plot of star-count maxima can be important, with a systematic difference of up to ≈10°. The analyses by C07 of the angle of the thick bulge take this effect into account, and the structure with an apparent opening angle of ≈25° might indeed have a real inclination of ≈15° ("opening angle" refers to the orientation in the Galactic plane of the structure with respect to the Sun-Galactic centre line). In any case, this does not significantly affect the hypothetical long thin bar (a triaxial bulge with axial ratios 1:0.25 in the plane would give a maximum error of 100 pc in the difference between maximum density and major axis; see appendix A of L07).
• A bulge (developed from a bar after the second vertical buckling; Martínez-Valpuesta et al. 2006) with a twisted major axis of radius ≈4 kpc could reproduce the observed distribution of maxima in the plane of the red-clump counts of C07, instead of the proposal by C07 and L07 of a shorter bulge + long bar with straight axes and a small angular difference between them. Indeed, from a purely morphological point of view, we are talking about the same thing under a different name. We could, for instance, say that the whole structure in the centre of a galaxy like that shown in Fig. 1 is a bulge, a bar, or a combined bar + bulge. What is evident is that this structure is thicker in the centre and narrower at its extremes; it is therefore well represented by a combination of a thick bulge + long thin bar. Whether the name should be only a bulge, or a bar, or a bulge + bar is merely a question of semantics. The possible slight misalignment of the outer part of this structure with respect to its inner part (∼20° in the Milky Way; see Ann 1995 for other galaxies) can also be interpreted as a single structure with a twisted major axis.
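To make the first point above concrete (the offset between line-of-sight star-count maxima and the bar's true major axis), the sketch below numerically locates the density maximum along several lines of sight through a toy triaxial bar and compares the inferred apparent angle with the true bar angle. All parameters (solar galactocentric distance, bar angle, Gaussian axial lengths) are assumptions chosen only to illustrate the geometric effect; they are not fits to any data.

```python
import numpy as np

# Toy model: Gaussian triaxial bar in the Galactic plane, Sun at the origin.
R0 = 8.0                 # kpc, assumed Sun-Galactic centre distance
phi = np.radians(15.0)   # assumed true bar angle w.r.t. the Sun-GC line
a, b = 2.5, 0.75         # kpc, assumed in-plane semi-axes of the bar

def density(x, y):
    """Bar density at plane coordinates (x towards the GC, y towards l = 90 deg)."""
    dx, dy = x - R0, y
    u = dx * np.cos(phi) + dy * np.sin(phi)    # coordinate along the major axis
    v = -dx * np.sin(phi) + dy * np.cos(phi)   # coordinate along the minor axis
    return np.exp(-0.5 * ((u / a) ** 2 + (v / b) ** 2))

s = np.linspace(0.5, 14.0, 5000)               # heliocentric distances (kpc)
for l_deg in (5, 10, 15, 20):
    l = np.radians(l_deg)
    rho = density(s * np.cos(l), s * np.sin(l))
    s_max = s[np.argmax(rho)]                  # distance of the star-count maximum
    x, y = s_max * np.cos(l), s_max * np.sin(l)
    # Angle of the maxima locus about the GC; it equals phi only if the
    # maxima fell exactly on the major axis, which in general they do not.
    apparent = np.degrees(np.arctan2(y, R0 - x))
    print(f"l = {l_deg:2d} deg: max at {s_max:.2f} kpc, apparent angle {apparent:.1f} deg")
```

Running this shows the apparent angle drifting away from the assumed 15° bar angle, which is the systematic offset both MG11 and the appendices of L07/C07 discuss.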
There are further observational aspects not discussed by MG11 that need to be considered when analysing the possible existence of a long bar:
• Concerning morphology, one must also explain the measured thickness of the bar, not only the central position of the maxima of the star counts. Figure 1a of MG11 shows a structure with a thickness at the tip of the bar at positive galactic longitudes (25° < l < 30°) of 4-5 kpc, whereas the thickness measured with the red-clump method is 2-2.5 kpc (L07). This difference in numbers is not too significant, but the important point is that, qualitatively, we observe a much thicker bulge in the centre (l < 15°) than in the outer parts (l > 15°), and this observational result is not apparent in the proposal by MG11, which seems to maintain (judging from their fig. 1a) a similar axial ratio in the inner and the outer parts of their integrated bulge + bar structure.
• With regard to asymmetries in the projected counts, one of the main motivations for positing the existence of the long bar was the fact that, within the plane (|b| < 2°), the star counts are far higher at positive galactic longitudes than at negative longitudes with the same |l| and b, for l < 30°. MG11, following Martínez-Valpuesta et al. (2006), point out that their bulge becomes vertically thinner in the outer part, but they do not specify by how much. The bulge extends between b = −10° and +10° in the inner parts and should be constrained within |b| < 2° in the outer parts to reproduce the star counts of C07 (their fig. 20). This is not shown by MG11.
• Regarding stellar populations, the division of a galaxy into several stellar components is not only a question of visually identifying substructures within the global morphology of the galaxy in question; it is also related to the separation of different populations with different physical properties. This distinction is not rigid because, even within a given component like the thin disc, or the bulge within |l| < 12° in off-plane regions, there are age and metallicity gradients, but an attempt is made to separate the major morphological groupings according to their stellar populations. In the case of the populations within galactocentric distances of less than 4 kpc, there are important differences in metallicity between the inner and outer parts (González-Fernández et al. 2008), so thinking about different populations associated with the bulge and the bar makes sense; alternatively, of course, one may posit a unique component called the "bulge" with a strong outward metallicity gradient. In any case, the integrated bulge + bar structure proposed by MG11 cannot have a homogeneous stellar population if it is to incorporate the established observational evidence.
• There is gathering evidence for star formation regions (SFRs) at the tips of the bar. It is well confirmed that there is a huge SFR in the plane at l ≈ 27° (López-Corredoira et al. 1999; Negueruela et al. 2011), the most prominent one in the Galaxy apart from the Galactic centre. This SFR marks the connection of the Scutum spiral arm with the hypothetical long bar (López-Corredoira et al. 1999). It is composed of a burst of very young stars and has nothing to do with the symmetric enhancements at the ends of a stellar bar, called ansae, or the "handles" of the bar/bulge (Martínez-Valpuesta et al. 2007). This region is also detected through methanol masers at 6.7 GHz (Green et al. 2011), together with the other huge SFR tentatively associated with the tip of the bar at negative longitudes (l ≈ −13°). MG11 now claim that the long thin bar is a thick boxy bulge extending to R ≈ 4 kpc. As argued by López-Corredoira et al. (1999), these kinds of star formation regions are observed in other galaxies that have a thin bar, but we do not know of any case of a galaxy with the kind of boxy bulge proposed by MG11.
From the point of view of methodology, we do not find the approach of MG11 to be the most appropriate. Provided that all the observational facts are reproduced, one should certainly use the simplest theoretical models rather than a complex model with many more free parameters (the principle of Occam's razor). MG11 claim that they cannot find a theoretical explanation for the existence of two misaligned triaxial structures, and that is their stated motivation for asserting that something must be wrong with the interpretation of the observations of the Milky Way and other galaxies described here. This is a deductive standpoint (theorists telling observers what they should see). However, from an inductive standpoint (deriving theories from observations), which we find more appropriate, it is equally possible to argue for changes in the theory rather than in the interpretation of the observations. Two of us (Garzón & López-Corredoira, in preparation) are currently working on theoretical models that allow the possibility of two misaligned bars/triaxial structures.
Summing up, the model proposed by MG11 cannot replace the earlier proposal of a bulge + long bar, although MG11's general conception of an integrated boxy bulge and planar long bar might be viable if a suitable model can be produced that explains all the relevant observational features. | 2011-06-01T16:55:10.000Z | 2011-06-01T00:00:00.000 | {
"year": 2011,
"sha1": "377cccb0d360ea2f13a604c2e09ec5cb84a9fa78",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "377cccb0d360ea2f13a604c2e09ec5cb84a9fa78",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
216603062 | pes2o/s2orc | v3-fos-license | THE ROLE OF TEACHER DISCIPLINE IN THE GOOD CHARACTER DEVELOPMENT OF EARLY-AGE CHILDREN AT RAUDLATUL ATHFAL MA'ARIF 1 METRO, ACADEMIC YEAR 2018/2019
Discipline is an important value that must be instilled in children; it means obedience to rules that have been agreed upon, and the teacher's role is central in shaping the disciplinary character of young children. Based on the problems identified, the research question of this study is: what is the role of teacher discipline in the good character development of early childhood at Raudlatul Athfal Ma'arif 1 Metro in the 2018/2019 academic year? The research uses a descriptive approach with triangulation, gathering data from the same sources through different techniques, namely interviews, observation, and documentation. The application of teacher discipline in developing the good character of early childhood has been implemented optimally: the activities given by the teacher proceed in accordance with the expected developmental achievements, which serve as implementation indicators for aspects of good character. The good character developed through teacher discipline at Raudlatul Athfal Ma'arif 1 Metro consists of saying prayers before and/or after doing something, recognizing good/polite and bad behavior, becoming accustomed to behaving well, and saying and returning greetings. An interview with one of the teachers indicated that character learning takes place every day, at opening time, during the core activities, and at the end of learning, and observations showed that discipline is applied throughout the day, from the morning material to the closing, through programmed learning activities as well as routine, spontaneous, and exemplary activities. Learning activities at Raudlatul Athfal Ma'arif 1 Metro were carried out in sequence from the morning material until the end of the designated time. From these observations and interviews, the researchers conclude that good character learning at Raudlatul Athfal Ma'arif 1 Metro is carried out through opening, core, and closing activities.
One of the character traits that needs to be instilled in children from an early age is discipline. The concept of discipline has developed with the progress of science, so that understandings of discipline differ from one expert to another (Rosma Elly: 2016). Hodges said that discipline can be interpreted as the attitude of a person or group who intends to follow the rules that have been established (Evi Fadilla Helmi: 1996). Discipline is an important value that must be instilled in children; it is obedience to something that has been agreed upon. Thus, the goal of forming disciplinary character in children is to produce children with good personalities who behave in accordance with applicable norms. From early childhood, parents and teachers must shape children's discipline in all aspects of life, such as discipline in eating with one's own hands, discipline in learning, discipline in returning toys and other items to their place of origin after use, and discipline in hygiene, such as washing hands before eating and after going to the toilet and throwing trash in its place.
Based on the researchers' observations during their teaching practice at Raudlatul Athfal Ma'arif 1 Metro, children showing low self-discipline were encountered several times. The behaviors observed included: many children littering; many children still playing when the bell rang for marching activities; children eating before the designated time; children walking around during classroom activities; and children reluctant to wash their hands before eating or to queue when washing hands.
Based on field observations, the researchers see several causes of children's lack of discipline: the environment, and parents who may not pay attention to discipline. Parents may not understand what discipline means, or may be busy with their work and therefore unable to apply discipline to their children. The teacher, too, has not given good direction in applying discipline: the teacher's attitude is too harsh in establishing discipline, instilling it by force, full of threats and punishment, when a child shows an undisciplined attitude. The teacher also does not accustom children to discipline, using only the lecture method while children simply sit and listen to what the teacher says. In forming disciplinary character in early childhood, the role of the teacher is very important. Teachers, as examples in class, are also required to have skills in fostering discipline; when the teacher models a disciplined attitude daily, children will imitate it. The teacher as an educator must be able to determine and choose appropriate and effective ways of shaping the character of discipline in children, including selecting suitable learning methods, and must instill good habits so that students develop good personalities into adulthood. According to Arikunto, discipline is "someone's compliance in following rules or regulations because it is driven by awareness in his conscience" (Suharsimi Arikunto: 2010). According to Gunawan, school discipline means that every child must follow the rules of the school, such as dressing neatly and being on time (Irma Noffia: 2015). Meanwhile, according to Suryadi, discipline is a control system implemented through controls applied by educators to students so that they can function in society; as Hadiyanto puts it, discipline is a condition in which a student's attitudes and appearance accord with the order of values, norms, and provisions that apply in the school where the student is located (Wirna Nofita: 2015). Based on this information, the researcher concludes that discipline is a condition in which a person obeys on the basis of an awareness growing within that person.
Student discipline in learning is the obedience (adherence) of students to the rules relating to teaching and learning activities in schools, which include the times for entering and leaving school, student compliance in dress, adherence of students in participating in school activities, and so on (Darmadi: 2017). The teacher is a designation for the position and profession of someone who devotes himself or herself to the field of education through patterned, formal, and systematic educative interaction (M. Shabir U: 2015). RI Law Number 14 of 2005 concerning teachers and lecturers, chapter I article 1, states that the teacher is a professional educator with the main task of educating, teaching, guiding, directing, training, assessing, and evaluating students in formal early childhood education, basic education, and secondary education (RI Law No 14: 2005). Good character is the answer to the question of which values need to be taught to others: the humble, honest, good, loyal, patient, and responsible are classified by others as people of good character (Hengki Wijaya & Helaluddin: TT). Based on the Standards of Achievement in the Level of Early Childhood Development in the 2013 Curriculum, the indicators of good character of early childhood fall under the moral category, as follows (Permendikbud No. 137: 2014): 1) say a prayer before and/or after doing something; 2) get to know good/polite and bad behavior; 3) familiarize yourself with good behavior; 4) say greetings and reply to greetings. Based on the information above, the writers will use these indicators of good character for early childhood to measure children's good character in this study.
RESEARCH METHODS
This research used a qualitative design. According to John W. Creswell, qualitative research is a process of inquiry for understanding a social or human problem, based on building a complex, holistic picture, formed with words, reporting the detailed views of informants, and conducted in a natural setting (Hamid Pattilima: 2005). This research uses a descriptive approach. Descriptive research is "research that seeks to solve current problems based on data; it therefore also presents, analyzes, and interprets data" (Maria Caroline Cindy Iskandar: 2012). This study will attempt to describe the data the author finds in the field in the form of a scientific work, a thesis on the Role of Teacher Discipline in the Development of Good Character of Early Childhood at Raudlatul Athfal Ma'arif 1 Metro, Academic Year 2018/2019. This is field research, so the data needed come from the following sources: 1. Primary data sources, namely "data obtained directly from the field, including the laboratory, which are called primary sources" (Nasution: 2014). The primary data sources in this study are the results of interviews and observations of teachers and students at Raudlatul Athfal Ma'arif 1 Metro, Academic Year 2018/2019.
2. Secondary data sources, namely "sources of reading material, which are called secondary sources" (Nasution: 2014). The secondary data sources in this study are the profile of Raudlatul Athfal Ma'arif 1 Metro, Academic Year 2018/2019, and reference books related to the research topic.
3. Tertiary data sources, which are collections and compilations of primary and secondary sources; examples of tertiary sources are bibliographies, library catalogs, directories, and reading lists (Wikipedia: 2018). The tertiary data source in this study is the internet.
The triangulation technique in this research is a technique of gathering data from the same data source using different techniques, namely interviews, observation, and documentation.
RESULTS AND DISCUSSION
As noted above, based on the researchers' observations during their teaching practice at Raudlatul Athfal Ma'arif 1 Metro, children showing low self-discipline were encountered several times: many children littering, many children still playing when the bell rang for marching activities, children eating before the designated time, children walking around during classroom activities, and children reluctant to wash their hands before eating or to queue when washing hands.
Based on the pre-survey above, of the 21 children observed in class A1, against the seven indicators to be achieved, 16 children had started to develop and 5 children had developed in accordance with expectations, underlining the importance of good character for early childhood at Raudlatul Athfal Ma'arif 1 Metro.
As described above, field observations suggest that the causes of children's lack of discipline lie in the environment and in parents who may not pay attention to discipline, may not understand it, or may be too busy with work to apply it, and in teachers who have not given good direction in applying discipline, who are too harsh, who instill discipline by force with threats and punishment, and who rely only on the lecture method while children simply sit and listen. Nevertheless, the role of teacher discipline in developing the good character of children at Raudlatul Athfal Ma'arif 1 Metro has resulted in quite good development. This was established by the researchers using data collected through interviews, observations, and documentation. To find out the role of teacher discipline in developing the good character of children at Raudlatul Athfal Ma'arif 1 Metro in the 2018/2019 academic year, the researcher held an interview with one of the teachers, who explained that character learning is implemented every day, at opening time, during the core activities, and at the end of learning. (Interview: 2019) The observation results illustrate that discipline is implemented in learning from the morning material to the closing, through programmed learning activities and routine, spontaneous, and exemplary activities. The learning activities at Raudlatul Athfal Ma'arif 1 Metro were carried out in sequence starting from the morning material until the end of the designated time. Based on these observations and interviews, the researchers conclude that good character learning at Raudlatul Athfal Ma'arif 1 Metro has been carried out through opening activities, core activities, and closing activities. (Observation: 2019) The forms of teacher discipline implemented at Raudlatul Athfal Ma'arif 1 Metro are as follows:
1) Honest
Observations show that before activities take place the teacher always applies honesty in assessing children and gives them good examples. (Observation: 2019) As stated by the teacher of the 4-5-year age group: "In the learning process the teacher is always honest in teaching children; by acting honestly it is expected that children will imitate the behavior modeled by the teacher." (Observation: 2019) Based on this statement, the teachers at Raudlatul Athfal Ma'arif 1 Metro apply teacher discipline in the form of honesty during learning activities.
2) On-time
Observations show that the teacher is always on time, in accordance with the applicable regulations. (Observation: 2019) As stated by the teacher of the 4-5-year age group: "The rules state that the teacher is present at Raudlatul Athfal 30 minutes before the activity begins and goes home 1 hour after the final activity is finished." (Interview: 2019) Based on this statement, the teachers at Raudlatul Athfal Ma'arif 1 Metro apply teacher discipline in the form of punctuality during learning activities.
3) Assertive
Observations show that the teacher is firm in teaching. (Observation: 2019) As stated by the teacher of the 4-5-year age group: "Every teacher should have a firm attitude, because with this attitude every student will be obedient and able to learn well; a firm teacher encourages students toward good deeds and reprimands students who break the rules." (Observation: 2019) Based on this statement, the teachers at Raudlatul Athfal Ma'arif 1 Metro apply teacher discipline in the form of a firm attitude during learning activities.
4) Responsible
Observations show that the teacher is responsible for the tasks he or she carries out. (Observation: 2019) As stated by the teacher of the 4-5-year age group: "Every teacher has duties and responsibilities. The duties and responsibilities of a teacher are to teach and to educate; thus the teacher is responsible for the success of the teaching and learning process." (Observation: 2019) Based on this statement, the teachers at Raudlatul Athfal Ma'arif 1 Metro apply teacher discipline in the form of responsibility during learning activities.
At Raudlatul Athfal Ma'arif 1 Metro, the implementation of discipline in developing good character is divided into two semesters. In semester one, good character development emphasizes monotheism; the researchers used the second semester to obtain data according to the indicators used in this research. The indicators of good character achievement are as follows: praying before and after activities, carrying out religious activities according to the rules of one's belief, speaking with courtesy, respecting teachers and elders, apologizing and forgiving, being helpful, distinguishing right from wrong actions, and being involved in religion. (Observation: 2019) To clarify how discipline is used to develop children's good character according to the achievement indicators studied at Raudlatul Athfal Ma'arif 1 Metro, the findings can be described as follows: 1) Say a prayer before and/or after doing something. The observations made by the researchers at Raudlatul Athfal Ma'arif 1 Metro found the deliberate application of discipline by the teacher in the form of saying prayers before and after doing something, as well as memorizing the surahs Ad-Dhuha, Al-Insyirah and Al-Qadr, and memorizing short daily prayers such as the prayers for entering and leaving the house and for travelling in a vehicle. There is also memorization of Arabic vocabulary every day; children memorize around 50 vocabulary items covering numbers, body parts, family members, and so on. (Observation: 2019) These observations were corroborated by an interview with one of the teachers, who explained that examples of the deliberate application of discipline are saying prayers before and after doing something and memorizing short surahs and daily prayers; the institution also has a program of studying the iqra (reading) and memorizing Arabic vocabulary every day, done together each day at the beginning of the core activity and at the end of the activity, with the teacher giving examples and the children imitating. (Observation: 2019) From these interviews and observations it can be concluded that the teacher deliberately applies discipline in the form of saying prayers before and after doing something, memorizing short surahs, studying iqra', memorizing Arabic vocabulary, and reciting daily prayers, with the teacher reciting first and the children following. Through these activities, children reach the domain of good character development concerned with carrying out worship activities according to the rules of their belief. 2) Get to know good/polite and bad behavior. From the observational data, the researchers found that the teacher applies the discipline of courtesy in dress, speech, and behavior, and gives students examples of modest conduct. The teacher's dress is always neat and simple, the teacher's attitude towards students' parents is very gentle, and every day on arrival the teacher shakes hands with each student's guardian. (Observation: 2019) This observation was reinforced by one of the teachers, who said that exemplary attitudes include bowing slightly when passing in front of parents, speaking politely, and not shouting at older people.
(Observation: 2019) From the observational data and interviews above, it can be concluded that the courtesy modeled by the teacher is applied during these activities in accordance with the basic competencies.
Thus it can be concluded that the teacher models courtesy in speech and behavior appropriate to the conditions and circumstances of the day. This exemplary conduct meets the corresponding indicator of children's level of achievement: getting to know good/polite and bad behavior.
3) Familiarize yourself with good behavior
The observations made by the researchers at Raudlatul Athfal Ma'arif 1 Metro show good behavior being modeled: for example, the teacher apologizes to the students before learning closes if many mistakes were made that day. Based on the interviews and observations obtained, it can be concluded that the teacher sets an example of apologizing to students when the teacher has made a mistake that day, and models apologizing to a friend when a child has made a mistake or argued with a friend. This is in accordance with the research indicator of becoming accustomed to behaving well.
4) Say greetings and reply to greetings
As explained by one of the teachers, every day on arriving at school the teacher shakes hands and exchanges greetings with fellow teachers, and not only with fellow teachers but also with parents and students. (Interview: 2019) The principal added that instilling good character is not only a matter of practising the Duha prayer; greeting fellow teachers and parents and exchanging greetings every morning on arrival at school is also an example of instilling good character. (Interview: 2019) Based on these interviews and observations, it can be concluded that the teacher implements good character development by modeling a good attitude, namely saying greetings and shaking hands when meeting and when arriving at school. This is in accordance with the research indicator of saying greetings and returning greetings.
In line with the descriptive data analysis, this discussion describes the results of the observations and interviews on the use of disciplinary methods in developing the good character of early childhood at Raudlatul Athfal Ma'arif 1 Metro. The results indicate that teachers apply deliberate discipline, including memorizing short surahs and daily prayers, practising the Dhuha prayer, polite manners in speech and behavior towards older people, and practising fasting and tithing in the month of Ramadan; the unintentional application of discipline is carried out by saying greetings and shaking hands when meeting, and apologizing when in the wrong. The study involved 21 students. The objective of using the method of applying discipline to develop good character is to bring about change in students so that they become good and right human beings in their conduct as servants of God, as children, and as members of their families and communities. Based on these findings, moral education is not just about understanding the rules of right and wrong, or knowing the provisions of good and bad; it must genuinely improve moral behavior. Early childhood educators realize that instilling good character in early childhood is not only about making children understand which deeds are good and right or bad and wrong; it is about forming good and right behavior as a servant of God, a child, a family member, and a member of society.
The use of disciplinary methods at Raudlatul Athfal Ma'arif 1 Metro is implemented through both deliberate and unintentional applications of discipline. The deliberate application of discipline is carried out by the teacher so that students imitate what the teacher exemplifies. The unintentional method consists of acts that the teacher does not perform deliberately as instruction, but which accord with norms and can therefore serve as examples for children.
Based on the findings above, there are two forms of educational method involving the application of discipline: the teacher deliberately giving good examples to students to imitate, and the unintentional method of discipline.
Furthermore, the developmental material relating to the use of disciplinary methods in developing good character is divided into deliberate and unintentional exemplification. The deliberate material includes memorizing short surahs, daily prayers, manners, practising the Dhuha prayer, and learning to fast and to tithe, while the material presented through the unintentional exemplary method includes visiting sick friends, sharing with friends, and apologizing to friends.
CONCLUSION
The application of teacher discipline in developing the good character of early childhood has been implemented optimally. The activities given by the teacher proceed in accordance with the expected developmental achievements, which serve as implementation indicators for aspects of good character. The teacher's application of discipline in developing good character takes the form of an honest attitude, punctuality, a firm attitude, and responsibility. The good character developed through teacher discipline at Raudlatul Athfal Ma'arif 1 Metro consists of saying a prayer before and/or after doing something, getting to know good/polite and bad behavior, becoming accustomed to good behavior, and saying and returning greetings. | 2022-06-02T19:39:10.178Z | 2019-07-16T00:00:00.000 | {
"year": 2019,
"sha1": "3a17bec653544223af494967e0d5904a26cf61ec",
"oa_license": "CCBY",
"oa_url": "http://jurnal.stitnualhikmah.ac.id/index.php/seling/article/download/435/417",
"oa_status": "GOLD",
"pdf_src": "Adhoc",
"pdf_hash": "3a17bec653544223af494967e0d5904a26cf61ec",
"s2fieldsofstudy": [
"Education"
],
"extfieldsofstudy": [
"Psychology"
]
} |
28754026 | pes2o/s2orc | v3-fos-license | A longitudinal study of the implementation experiences of the Australian National Disability Insurance Scheme: investigating transformative policy change
Background Internationally there has been a growth in the use of publicly funded service markets as a mechanism to deliver health and social services. This has accompanied the emergence of 'self-directed care' in a number of different policy areas, including disability and aged care, often referred to as 'personalisation' (Giaimo and Manow, Comp. Pol Stud 32:967–1000, 1999; Needham, Public Money Manage 30:136–8, 2010; Hood, The Idea of Joined-up Government: A Historical Perspective, 2005; Klijn and Koppenjan, Public Manage 2:437–54, 2000; Greener, Policy Polit 36:93–108, 2008). These reforms are underpinned by an idea that individuals should be placed in control of their own service needs, given funding directly by government and encouraged to exercise choice and control through purchasing their own services. A major challenge for governments in charge of these reforms is determining the best way to structure and govern the emerging service markets. Given the growing international embrace of market-based reform mechanisms to provide essential services to citizens, finding ways to ensure they promote, and do not diminish, people's health and wellbeing is vital. Methods The Australian National Disability Insurance Scheme (NDIS) is Australia's first national approach to the use of personalised budgets. The program of research outlined in this paper brings together streams from a range of different studies in order to investigate the implementation of the NDIS longitudinally across different administrative levels of government, service providers and scheme participants. Conclusion This programme of research will contribute to our understanding of the Australian scheme and how individualised funding operates within this context, and will also generate much-needed evidence that is relevant to other jurisdictions and helps fill a gap in the evidence base.
Background
In many countries around the world, welfare states, the mechanism through which governments protect and provide for their citizens, are in a state of critical transition. Faced with a range of fiscal and social pressures, industrialised countries are moving away from the collective social provision that drove the development of post-war welfare states [1] and which functioned as a social safety net, providing a range of services and financial assistance directly to citizens [2].
In response to these various pressures we have seen the increasing use of markets as a mechanism to deliver welfare services and the emergence of 'self-directed care' in a number of different policy areas, often referred to as 'personalisation' [3][4][5][6][7]. These reforms are underpinned by an idea that individuals should be placed in control of their own service needs, given funding directly by government and encouraged to exercise choice and control through purchasing their own services. Self-directed care has become central to public service delivery in a wide range of countries and policy areas, from the National Health Service in England to the Brukerstyrt Personlig Assistanse in Norway [8][9][10][11].
Where personalisation reforms are in place, citizens are given money directly by government and must negotiate their own use of services: sourcing, understanding and choosing services that best meet their needs from a range of private and not-for-profit providers. Although these schemes have been in place in different jurisdictions for some time now, our knowledge of the implications of this shift is still in its infancy, both for citizens negotiating these new markets and for the governments overseeing them [8,12]. A recent systematic review of personal budgets for disability care found that most empirical research did not include detail about funding mechanisms, and that it is difficult to distinguish between processes and outcomes in some studies [13]. Differential outcomes have already been shown to emerge from personalisation approaches in the UK according to the capabilities and existing supports of the individual [4].
One of the challenges inherent in these reform processes is determining the best way to structure and govern such approaches. Governments are often required to balance control and flexibility to ensure markets function effectively and equitably. Market-based welfare reforms, and the systems that govern them, are therefore required to find ways to change, learn and adapt as these contexts evolve [14]. To do this, governments seek market 'levers' that they aim to adjust in order to correct market failures [15]. However, as the economist Joseph Stiglitz has argued, governments find markets notoriously difficult to regulate and manage in predictable and reliable ways [16]. This makes the use of markets within a social services context risky. Given the growing international embrace of market-based reform mechanisms to provide essential services to citizens, finding ways to ensure they promote, and do not diminish, people's health and wellbeing is vital.
The Australian National Disability Insurance Scheme (NDIS) is Australia's first national approach to the use of personalised budgets in this policy area (Dickinson & Needham, Forthcoming). Consistent with changes in the UK and Europe, the NDIS represents a transition to self-directed/personalised care: over 460,000 individuals with mental and physical disabilities will have to navigate a newly created service market in order to gain the assistance they need [17]. Launched in 2013, the NDIS represents a rare opportunity to study current large-scale transitions in social welfare provision. The Australian experience is unprecedented in several important ways. Firstly, its geographical spread outstrips that of other countries. Secondly, the Australian scheme is combined with an insurance approach [18]. Hence the scale of the NDIS is broader and deeper than its international counterparts, running at a cost of over $22 billion a year, and it offers important opportunities for learning.
Study design
The ongoing program of research reported in this paper brings together streams led by the authors from a range of different studies in order to investigate the implementation of the NDIS longitudinally for a period of 5 years¹ across different administrative levels of government, service providers and scheme participants.² The project has three objectives:
1. To investigate the implementation experiences of key actors (e.g. Federal and State policymakers, NDIS administrators and service providers) with regard to NDIS governance structures, paying particular attention to the responsiveness and adaptation of such structures;
2. To explore the experiences of stakeholders (including scheme participants) involved in the establishment of new public sector disability markets, with a view to examining (a) the determinants of success and failure for care service providers and care outcomes, and (b) the pathways and processes for effective market management (i.e. addressing thin markets and market failure);
3. To understand the opportunities and/or limits of markets for the provision of public services and, in turn, the future structure of the welfare state.
Context and theoretical framework
This project is embedded within the field of public policy and administration and draws on theories of new public governance (NPG) [19] to frame the broad study. The strength of NPG is that its basis in network and institutional theoretical perspectives assists in capturing the real-world complexity of the design, implementation and management of public policy in the twenty-first century [19][20][21]. In this sense, NPG encapsulates both emerging public policy implementation and public service delivery challenges and issues left unresolved within previous iterations of public sector governance (including the 1970s/1980s public administration reforms and 1990s 'new public management' approaches) [19,22]. NPG conceptualises a plural state in which multiple interdependent actors contribute to the delivery of services, as well as a pluralist state in which multiple processes inform the policy-making system [19]. These two types of plurality mean the focus within an NPG approach is on inter-organisational relationships and governance processes. Indeed, since the 1970s organisational analysis has been considered critical to understanding why particular outcomes emerge from reform efforts as a result of implementation processes [23,24]. However, within an NPG framework both the actors and their relational processes are situated within institutional and environmental contexts that work to enable and constrain policy implementation, shaping the negotiation of trust, values and meaning within and between organisations [19,22]. The value of this approach within the context of this study is that it allows us to accommodate consideration of both the formalised structures within the system and the agency of the various actors involved in these reform processes.
Data collection
Scope
The study will examine the implementation of the NDIS from a range of vantage points, capturing the emergent dynamics between institutional contexts and implementation processes [22]. An iterative approach to data collection is taken in order to capture change over time. Our in-depth and multi-site approach enables the research team to track reform dynamics through an analysis of the type of tacit knowledge that is rarely captured during implementation (or in time-limited evaluations) but which ultimately shapes the trajectory of both current and future reforms [25,26].
The study has six interrelated components that seek to capture implementation and user experiences across the different domains of the NDIS.
Document review
A review of documents relating to the implementation and evaluation of the NDIS will be undertaken throughout the project, to identify how emerging knowledge and practices are responded to and alter implementation and governance/market architectures. This analysis will also reveal how the NDIS is understood to exist both as a social program in its own right, and in relation to other programs and policies, and whether this changes over time. Documents will include: all available documents sourced through the NDIA (i.e. evaluation reports, strategic plans, program reports and funding agreements), in addition to publicly available evaluation and implementation reports. These will be collected and analysed in conjunction with other data sources throughout the duration of the project.
Semi-structured qualitative interviews with commonwealth and state government officials
Interviews with individuals embedded in Commonwealth and State agencies are key to understanding how the introduction of the NDIS is reshaping the broader social protection framework, and the implications of this reshaping for both fairness and the implementation of future reforms. Interviews with key actors will identify the effect of the NDIS on national and State priorities, funding decisions and program development. Interviews will also enable the research team to understand the operation of the governance structures, and the challenges associated with balancing flexibility, control and accountability.
Criterion-based, purposive sampling [26] of individuals will be conducted, with participants chosen on the basis of their current or past roles in State and Commonwealth administration (e.g. Departments of Human Services and central coordinating agencies such as the Department of the Prime Minister and Cabinet).
Snowball sampling will be carried out, with participants asked to nominate other stakeholders, until saturation is reached [26]. The final sample size will be determined by saturation. Interviews will be individual and semi-structured. Participants will be invited to be re-interviewed every 8 months (with further participants added as appropriate) over 5 years in order to capture new information, knowledge and practices as implementation continues (minimum N = 100).
Network analysis and interviews with service providers
Interviews with disability service providers in the three sites (Australian Capital Territory, Victoria and Queensland) will provide information on (a) how care service organisations are responding and adapting to the new market context, and the implications for care outcomes, and (b) where and why thin markets or market failure emerge. The comparative approach will enable the research to investigate how governance and market architectures are able to manage different types of market variation or service issues. Each of the three sites has location-specific governance and funding arrangements.
Up to 20 service providers will be interviewed in each site once a year, accompanied by a network analysis survey (consistent with network analysis sampling techniques) [27]. Network analysis provides a tool for measuring and analysing network structure, changes and potential effectiveness [27,28]. The network analysis has two components. Firstly, a structured component to generate network data: questions are directed at identifying network structure and function in the disability sector. Secondly, a semi-structured component with open-ended responses to questions about the experience of implementation for service providers and the new network structure. This will enable the research to identify how and why network structures are changing and to identify emerging forms of collaboration between services. The open-ended questions will seek to identify the barriers and facilitators to innovation. Moreover, they will determine whether emerging governance structures are indeed collaborative (versus contractual or hierarchical arrangements) and whether they support true innovation.
The social network analysis survey will be distributed online once a year in all three sites, using registered provider lists available through the National Disability Insurance Agency.
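To illustrate the kind of structural measures such a survey can feed into, the sketch below uses Python's networkx library (an assumption; the protocol does not name its analysis software) to compute simple metrics for a hypothetical provider collaboration network across two annual survey waves.

```python
import networkx as nx

def network_summary(edges):
    """Build an undirected provider network and report simple structure metrics."""
    g = nx.Graph()
    g.add_edges_from(edges)
    centrality = nx.degree_centrality(g)
    hub, hub_score = max(centrality.items(), key=lambda kv: kv[1])
    return {
        "providers": g.number_of_nodes(),
        "ties": g.number_of_edges(),
        "density": nx.density(g),            # share of possible ties realised
        "components": nx.number_connected_components(g),
        "most_central": (hub, round(hub_score, 2)),
    }

# Hypothetical collaboration ties between providers A-F in two survey waves;
# these are fabricated for illustration, not data from the study.
wave_1 = [("A", "B"), ("B", "C"), ("D", "E")]
wave_2 = [("A", "B"), ("B", "C"), ("C", "D"), ("D", "E"), ("E", "F"), ("B", "E")]

for label, edges in [("wave 1", wave_1), ("wave 2", wave_2)]:
    print(label, network_summary(edges))
```

Comparing such summaries year on year is one straightforward way to detect the emerging (or thinning) collaboration structures the open-ended questions probe qualitatively.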
Peer-research with scheme participants
In order to capture the experiences of scheme participants, community researchers will be used to gain a deep appreciation of the impact of NDIS reforms to disability services from the perspective of consumers of these services and their families.
Participatory research processes are used, involving community researchers who do not know the research participants but who share a common experience of accessing disability services. This approach is embraced on the basis that it should help increase the likelihood of uncovering issues and challenges faced by service users [29,30]. The aim is to encourage interactions with research participants that are respectful, supportive and interactive conversations between peers, eliciting information about service user experiences of the NDIS using semi-structured interviews within one of the trial areas, with the sample determined by feasibility (N = 42).
The service users interviewed for the project fall into three categories: people with physical disabilities; people with intellectual disabilities or mental health issues; and carers. Findings from this project will be fed back to a number of government agencies and will also provide the basis of a learning lab that seeks to explore the meaning of choice within individualised funding systems with a range of partners from across the disability system.
Delphi analysis of stakeholders
An interactive Delphi approach will be used as a research translation method. Within a Delphi approach, multiple iterations of data collection and engagement are undertaken with a range of stakeholders. Delphi study designs overcome the problem of a single viewpoint and problem framing [31], enabling new, effective and acceptable solutions to emerge from, and in conjunction with, complex stakeholder networks. The iterative process of conducting the Delphi study creates the opportunity for stakeholders to hear the views and ideas of others from very different sectors and settings and to gain information on policy silences. This generates opportunities for new ideas and understanding, as well as moving stakeholders towards a consensus.
Approximately 40 individuals will be included in the Delphi study (sample size determined by willing participants), drawn from government and non-government organisations, statutory bodies, the disability service sector, and Disabled People's Organisations (advocacy groups). Stakeholders will be engaged twice a year in face-to-face interviews, and surveyed electronically in between. This iterative design will enable the research team to keep abreast of the rapidly developing policy environment arising from the launch of the NDIS. This will include planned changes in disability policy, monitoring and data collection procedures, and potential areas of contestation that the research team may assist in shedding light on. Snowball sampling will be conducted (where participants nominate other participants), beginning with participants from other elements of the study and then engaging more broadly over the course of the program of research.
One-on-one interviews will enable us to capture critical policy silences, while exploring potential solutions emerging from other aspects of the research. In this sense, the Delphi study will support research translation and enable innovative solutions to come to the fore.
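One common way to operationalise "moving towards a consensus" across Delphi rounds is Kendall's coefficient of concordance (W) over panellists' rankings. The protocol does not specify a consensus metric, so the following Python sketch, with fabricated rankings, is purely illustrative.

```python
import numpy as np

def kendalls_w(rankings: np.ndarray) -> float:
    """Kendall's W for an (m raters x n items) array of rankings (1..n, no ties).

    W = 12 * S / (m^2 * (n^3 - n)), where S is the sum of squared deviations
    of item rank totals from their mean; W = 1 means perfect agreement.
    """
    m, n = rankings.shape
    totals = rankings.sum(axis=0)
    s = ((totals - totals.mean()) ** 2).sum()
    return 12.0 * s / (m ** 2 * (n ** 3 - n))

# Fabricated example: 4 panellists ranking 5 policy priorities in two rounds.
round_1 = np.array([[1, 2, 3, 4, 5],
                    [3, 1, 2, 5, 4],
                    [2, 3, 1, 4, 5],
                    [5, 4, 3, 2, 1]])
round_2 = np.array([[1, 2, 3, 4, 5],
                    [2, 1, 3, 4, 5],
                    [1, 3, 2, 4, 5],
                    [2, 1, 3, 5, 4]])
print(f"W round 1: {kendalls_w(round_1):.2f}, W round 2: {kendalls_w(round_2):.2f}")
```

A rising W between rounds (here roughly 0.20 to 0.85) would indicate the panel converging, which is the behaviour the iterative Delphi design aims to surface.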
Analysis
All interviews and focus groups will be recorded and transcribed verbatim [26]. The aim is to uncover the tacit, or mutual, knowledge, intentions and rules that emerge from formal institutions and form the basis for emergent informal institutions, thereby shaping implementation action [26,32]. Analysis will be guided by a range of theoretical frameworks that seek to elucidate how tacit knowledge shapes action, including new institutionalism, structuration theory and diffusion of innovation theory [22,33,34]. Data collected through the core research activities will be analysed iteratively. For example, documents and surveys will be reviewed in accordance with the themes identified and developed through the interview data.
Ethics, consent and permissions
The research has ethics approval from the University of New South Wales Human Ethics Committee (No.: HC16396). All participants are required to sign a consent form prior to interviews agreeing to recording and use of the data in publications and conference presentations. Survey participants consent to participation through an online checkbox.
Discussion/contribution
How to ensure that the gains promised by major policy reforms are achieved and benefit the population is a contested and important area of inquiry. By investigating how implementation can be secured, and contributing to that understanding, this study adds to our knowledge of policy implementation, particularly in the context of major reforms. The new reform context that emerges from the implementation of the NDIS will govern not only which policies are adopted in the future, but also the likelihood that they will be able to secure gains for the community [22]. In other words, understanding the institutional shifts that occur as a result of the NDIS is essential to the successful implementation of all the social welfare and social service reforms that come after it. This programme of research will contribute to our understanding of the Australian scheme and how individualised funding operates within this context, and will also generate much-needed evidence that is relevant to other jurisdictions and helps fill a gap in the evidence base.
Endnotes
1. Extension of the study is contingent upon funding.
2. See the Funding section below for a list of studies.
Abbreviations NDIS: National Disability Insurance Scheme | 2017-08-23T05:10:22.497Z | 2017-08-17T00:00:00.000 | {
"year": 2017,
"sha1": "a878abd89f9eaad00c916da5d064040613d4b99b",
"oa_license": "CCBY",
"oa_url": "https://bmchealthservres.biomedcentral.com/track/pdf/10.1186/s12913-017-2522-7",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "a878abd89f9eaad00c916da5d064040613d4b99b",
"s2fieldsofstudy": [
"Political Science"
],
"extfieldsofstudy": [
"Medicine"
]
} |
210172795 | pes2o/s2orc | v3-fos-license | Spatial heterogeneity of the shorebird gastrointestinal microbiome
The gastrointestinal tract (GIT) consists of connected structures that vary in function and physiology, and different GIT sections potentially provide different habitats for microorganisms. Birds possess unique GIT structures, including the oesophagus, proventriculus, gizzard, small intestine, caeca and large intestine. To understand birds as hosts of microbial ecosystems, we characterized the microbial communities in six sections of the GIT of two shorebird species, the Dunlin and Semipalmated Sandpiper, identified potential host species effects on the GIT microbiome and used microbial source tracking to determine microbial origin throughout the GIT. The upper three GIT sections had higher alpha diversity and genus richness compared to the lower sections, and microbial communities in the upper GIT showed no clustering. The proventriculus and gizzard microbiomes primarily originated from upstream sections, while the majority of the large intestine microbiome originated from the caeca. The heterogeneity of the GIT sections shown in our study urges caution in equating data from faeces or a single GIT component to the entire GIT microbiome but confirms that ecologically similar species may share many attributes in GIT microbiomes.
The term "gut microbiome" is generally used to refer to the microbiota of the large intestine: researchers sample feces in order to draw conclusions about the large intestinal microbiome, but the word typically used is gut microbiome. I don't think anyone believes that the bacterial community will be similar throughout the entire GIT (especially since this term includes everything from the bill to the cloaca), and previous studies have shown that this is indeed the case. I think the authors are correct in stating that the GIT sections are different (which they have evaluated) and that fecal samples do not portray everything in the GIT (which they have not evaluated). But I also want to urge the authors to be slightly careful with this wording, because no one seriously believes you will get an accurate picture of the esophagus microbiome by sampling feces, so the point of feces not representing the entire GIT community becomes a bit meaningless. The point of fecal sampling is not to measure the entire GIT but to evaluate the large intestine non-invasively.
• L50: The cited paper has not studied the gizzard microbial community or its pH. Please revise.
• L53: I think "decades" is a bit exaggerated; "years" would probably fit better. Mammalian microbiome research in its current form is also relatively new.
• Methods sampling: It is my understanding that one needs a permit for trapping birds and another permit for collecting birds (feel free to correct me if I'm wrong). Even if the birds used in this study were accidentally killed during the trapping procedure, don't the authors agree it would be appropriate to state the permits or licences used for trapping and collecting, since these allowed the authors (or collaborators) to catch the birds in the first place?
• L105: As far as I'm aware, there is no MiSeq v4 kit; there are v2 and v3. Probably just a typo.
• L126: DADA2, vegan, phyloseq, DESeq2 and FEAST versions have not been specified. Since this kind of software often makes substantial changes between versions, it would be good to state the versions used for reproducibility reasons.
• L170 Data availability: Royal Society Data Policy states that "Datasets and code should be deposited in an appropriate, recognized, publicly available repository. Where no data-specific repository exists, authors should deposit their datasets in a general repository such as Dryad or Figshare." First of all, I cannot find the sequences or the metatable of this study in the provided Figshare link. Regardless, I believe the 16S sequences in this study should be deposited in an appropriate sequence database such as SRA, ENA or DDBJ to allow for future re-analyses and meta-analyses. Figshare is not an appropriate repository for open sequence data. The metatable can be stored on Figshare (in addition to the sequence repository).
• Table 1: Misspellings. Please check.
• Table 1: Curious as to why only p-values are provided in Table 1. Where are the effect sizes? P-values only tell the reader whether the test was significant or not at an arbitrary threshold, and as a reader you want to see the results of the analysis. Is the esophagus more diverse than the gizzard? That is not possible to tell from p-values. Please add diversity values to the table, so the reader will at least know the direction of the difference. Consider also reporting the statistics from the ANOVA test. [A minimal worked sketch of one way to report this appears after these review comments.]
• Figure 2: I like the colors used; they are easy to tell apart.
• L252: The word microbiome is used, but I think the authors mean gastrointestinal tract.
• L284: "Decreased alpha diversity and community complexity in the lower GIT could be the result of host filtering of bacteria in the upper GI sections." Can the authors please explain further what they mean? If the host kills certain bacteria in the upper GI sections with pH, the dead bacteria would still be present in the lower gut community as well due to the downward flow of content. This also does not explain why the diversity of bacteria is higher in the upper gut. Do the authors mean that the higher diversity in the upper gut is most likely derived from diet- and environmentally sourced bacteria?
• Discussion: I don't know the word limit of this journal but, if possible, I would love to read a somewhat extended discussion. The authors very briefly touch upon some of the interesting results they found, but there is very little integration of what it means and any comparison with previous studies. There are many similar studies that have been conducted in grouse, ostriches and sparrows, for example.
• Overall, I think the manuscript is well written and easy to read.
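On the effect-size request above: a one-way ANOVA can be reported together with eta squared (SS_between / SS_total). The Python sketch below uses fabricated Shannon diversity values, not the study's data (and the study's own analyses appear to have used R packages such as vegan), purely to show the calculation.

```python
import numpy as np
from scipy import stats

def eta_squared(groups):
    """Eta squared for a one-way design: SS_between / SS_total."""
    all_vals = np.concatenate(groups)
    grand_mean = all_vals.mean()
    ss_between = sum(len(g) * (np.mean(g) - grand_mean) ** 2 for g in groups)
    ss_total = ((all_vals - grand_mean) ** 2).sum()
    return ss_between / ss_total

# Fabricated Shannon diversity values per GIT section (illustrative only).
oesophagus = [4.1, 3.9, 4.3, 4.0]
gizzard    = [3.8, 4.0, 3.7, 3.9]
caeca      = [2.9, 3.1, 2.8, 3.0]

f_stat, p_val = stats.f_oneway(oesophagus, gizzard, caeca)
print(f"F = {f_stat:.2f}, p = {p_val:.4f}, eta^2 = "
      f"{eta_squared([oesophagus, gizzard, caeca]):.2f}")
```

Reporting F, p and eta squared together, alongside the group means, answers both of the reviewer's questions: whether the difference is significant and how large (and in which direction) it is.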
22-Oct-2019
Dear Dr Grond, On behalf of the Editors, I am pleased to inform you that your Manuscript RSOS-191609 entitled "Spatial Heterogeneity of the Shorebird Gut Microbiome" has been accepted for publication in Royal Society Open Science subject to minor revision in accordance with the referee suggestions. Please find the referees' comments at the end of this email.
The reviewers and handling editors have recommended publication, but also suggest some minor revisions to your manuscript. Therefore, I invite you to respond to the comments and revise your manuscript.
• Ethics statement
If your study uses humans or animals please include details of the ethical approval received, including the name of the committee that granted approval. For human studies please also detail whether informed consent was obtained. For field studies on animals please include details of all permissions, licences and/or approvals granted to carry out the fieldwork.
• Data accessibility
It is a condition of publication that all supporting data are made available either as supplementary information or, preferably, in a suitable permanent repository. The data accessibility section should state where the article's supporting data can be accessed. This section should also include details, where possible, of where other relevant research materials such as statistical tools, protocols and software can be accessed. If the data have been deposited in an external repository this section should list the database, accession number and link to the DOI for all data from the article that have been made publicly available. Data sets that have been deposited in an external repository and have a DOI should also be appropriately cited in the manuscript and included in the reference list.
If you wish to submit your supporting data or code to Dryad (http://datadryad.org/), or modify your current submission to dryad, please use the following link: http://datadryad.org/submit?journalID=RSOS&manu=RSOS-191609 • Competing interests Please declare any financial or non-financial competing interests, or state that you have no competing interests.
• Authors' contributions All submissions, other than those with a single author, must include an Authors' Contributions section which individually lists the specific contribution of each author. The list of Authors should meet all of the following criteria; 1) substantial contributions to conception and design, or acquisition of data, or analysis and interpretation of data; 2) drafting the article or revising it critically for important intellectual content; and 3) final approval of the version to be published.
All contributors who do not meet all of these criteria should be included in the acknowledgements.
We suggest the following format: AB carried out the molecular lab work, participated in data analysis, carried out sequence alignments, participated in the design of the study and drafted the manuscript; CD carried out the statistical analyses; EF collected field data; GH conceived of the study, designed the study, coordinated the study and helped draft the manuscript. All authors gave final approval for publication.
• Acknowledgements
Please acknowledge anyone who contributed to the study but did not meet the authorship criteria.
• Funding statement Please list the source of funding for each author.
Please ensure you have prepared your revision in accordance with the guidance at https://royalsociety.org/journals/authors/author-guidelines/. Please note that we cannot publish your manuscript without the end statements. We have included a screenshot example of the end statements for reference. If you feel that a given heading is not relevant to your paper, please nevertheless include the heading and explicitly state that it is not relevant to your work.
Because the schedule for publication is very tight, it is a condition of publication that you submit the revised version of your manuscript before 31-Oct-2019. Please note that the revision deadline will expire at 00.00am on this date. If you do not think you will be able to meet this date please let me know immediately.
To revise your manuscript, log into https://mc.manuscriptcentral.com/rsos and enter your Author Centre, where you will find your manuscript title listed under "Manuscripts with Decisions". Under "Actions," click on "Create a Revision." You will be unable to make your revisions on the originally submitted version of the manuscript. Instead, revise your manuscript and upload a new version through your Author Centre.
When submitting your revised manuscript, you will be able to respond to the comments made by the referees and upload a file "Response to Referees" in "Section 6 -File Upload". You can use this to document any changes you make to the original manuscript. In order to expedite the processing of the revised manuscript, please be as specific as possible in your response to the referees. We strongly recommend uploading two versions of your revised manuscript: 1) Identifying all the changes that have been made (for instance, in coloured highlight, in bold text, or tracked changes); 2) A 'clean' version of the new manuscript that incorporates the changes made, but does not highlight them.
When uploading your revised files please make sure that you have:
1) A text file of the manuscript (tex, txt, rtf, docx or doc), references, tables (including captions) and figure captions. Do not upload a PDF as your "Main Document";
2) A separate electronic file of each figure (EPS or print-quality PDF preferred (either format should be produced directly from original creation package), or original software format);
3) Included a 100 word media summary of your paper when requested at submission. Please ensure you have entered correct contact details (email, institution and telephone) in your user account;
4) Included the raw data to support the claims made in your paper. You can either include your data as electronic supplementary material or upload to a repository and include the relevant doi within your manuscript. Make sure it is clear in your data accessibility statement how the data can be accessed;
5) All supplementary materials accompanying an accepted article will be treated as in their final form. Note that the Royal Society will neither edit nor typeset supplementary material and it will be hosted as provided. Please ensure that the supplementary material includes the paper details where possible (authors, article title, journal name).
Supplementary files will be published alongside the paper on the journal website and posted on the online figshare repository (https://rs.figshare.com/). The heading and legend provided for each supplementary file during the submission process will be used to create the figshare page, so please ensure these are accurate and informative so that your files can be found in searches. Files on figshare will be made available approximately one week before the accompanying article so that the supplementary material can be attributed a unique DOI.
Please note that Royal Society Open Science charge article processing charges for all new submissions that are accepted for publication. Charges will also apply to papers transferred to Royal Society Open Science from other Royal Society Publishing journals, as well as papers submitted as part of our collaboration with the Royal Society of Chemistry (http://rsos.royalsocietypublishing.org/chemistry).
If your manuscript is newly submitted and subsequently accepted for publication, you will be asked to pay the article processing charge, unless you request a waiver and this is approved by Royal Society Publishing. You can find out more about the charges at http://rsos.royalsocietypublishing.org/page/charges. Should you have any queries, please contact openscience@royalsociety.org.
Once again, thank you for submitting your manuscript to Royal Society Open Science and I look forward to receiving your revision. If you have any questions at all, please do not hesitate to get in touch. Please see below the comments and suggested MINOR revisions made by the individual(s) who reviewed your manuscript. I would like to draw your attention to one critical issue raised by the reviewers: 1. The link provided in the manuscript (10.6084/m9.figshare.9792668) is not valid, thus the data provided at the link are not accessible. Please provide a valid link or update the link so the data can be easily accessed.
Reviewer comments to Author:
Reviewer: 1 Comments to the Author(s)
Major comments: Overall this paper is well organized, the methods and analyses are sound, and the results are interesting. I think it will be an important contribution to our field.
My only major concerns are as follows: 1) Some of the statistical analyses would benefit from adjustments to control for repeated measures from a single individual; 2) Data do not seem to be available at the provided link, and no link is provided for a script that would allow people to replicate your analyses.
ABSTRACT
23 - alpha and genus diversity? This reads strangely; maybe taxonomic and alpha diversity, or rephrase another way.
26 - The language of "sourcing" and "originating" without the context of your cool analyses is confusing. In the abstract, before reading the paper, it seems as though it refers to your sampling. Consider adding a sentence about the method, or use different language for the abstract.
INTRO
54 - "our knowledge of the diversity and distribution of microbes in the GIT of most bird species"
63 - can you estimate the age of the birds? or just specify juvenile or adult.
METHODS
81 - what kind of string was used?
Overall, great job on this section. All of it was very clear and tight.
RESULTS
179 - Did you use a repeated measures ANOVA to compare samples only within individuals? As your samples show considerable variation among individuals, it may be beneficial to account for this statistically.
180 - sentence order is off; I think you meant to have "Shannon's alpha diversity" and "host species" switched.
248 - "small intestine" does not need to be capitalized.
345 - "have been shown to show": consider rephrasing for less redundancy. Also, consider adding citations to this sentence.
DISCUSSION
312 - typo: "have share"
FIGURES
fig. 1 - consider adding the full sample type name, vertically or at an angle on the axis, for ease of interpretation.
table 1 - typo: "proventriculus" is missing a "t".
Reviewer: 2 Comments to the Author(s)
Grond et al. have evaluated the microbiome of six sections of the GIT in six individuals of two species of shorebirds. This is a good study and I enjoyed reading it. It is useful for the avian microbiome community. It is well written with an adequate amount of information in the methods and the results. The figures are nice and well presented. The analyses are appropriate. I don't have any major comments, but I hope that my minor comments will be useful to the authors during the revision of the paper.
Minor comments:
• Abstract and discussion: In general, I believe the term "gut microbiome" is most often used to refer to the microbiota of the large intestine. That is, researchers sample feces in order to draw conclusions about the large intestinal microbiome, but the typical word used is often gut microbiome. I don't think anyone believes that the bacterial community will be similar throughout the entire GIT (especially since this term includes everything from the bill to the cloaca). It has also been shown in previous studies that this is indeed the case. I think the authors are correct in stating that the GIT sections are different (which they have evaluated) and that fecal samples do not portray everything in the GIT (which they have not evaluated). But I also want to urge the authors that it might be useful to be slightly careful with this wording, because no one seriously believes you will get an accurate picture of the esophagus microbiome by sampling feces. So the point of feces not representing the entire GIT community becomes a bit meaningless. The point of fecal sampling is not to measure the entire GIT but to evaluate the large intestine non-invasively.
• L50: The cited paper has not studied the gizzard microbial community or its pH. Please revise.
• L53: I think "decades" is a bit exaggerated. Probably "years" would fit better. Mammalian microbiome research in its current form is also relatively new.
• Methods sampling: It is my understanding that one needs a permit for trapping birds and another permit for collecting birds. Feel free to correct me if I'm wrong. Even if the birds used in this study were accidentally killed during the trapping procedure, don't the authors agree it would be appropriate to state the permits or licenses used for trapping and collecting, since this allowed the authors (or collaborators) to catch the birds in the first place?
• L105: As far as I'm aware, there is no MiSeq v4 kit. There are v2 and v3. Probably just a typo.
• L126: Dada2, vegan, phyloseq, DeSeq2, and FEAST versions have not been specified. Since these kinds of software often make substantial changes between versions, it would be good to state the versions used for reproducibility reasons.
• L170 Data availability: Royal Society Data Policy states that "Datasets and code should be deposited in an appropriate, recognized, publicly available repository. Where no data-specific repository exists, authors should deposit their datasets in a general repository such as Dryad or Figshare." First of all, I cannot find the sequences or the metatable of this study in the provided Figshare link. Regardless, I believe the 16S sequences in this study should be deposited in an appropriate sequence database such as SRA, ENA or DDBJ to allow for future re-analyses and meta-analyses. Figshare is not an appropriate repository for open sequence data. The metatable can be stored on Figshare (in addition to the sequence repository).
• Table 1: Misspellings. Please check.
• Table 1: Curious as to why only p-values are provided in Table 1. Where are the effect sizes? P-values only tell the reader whether the test was significant or not at an arbitrary threshold. As a reader you want to see the results of the analysis. Is the esophagus more diverse than the gizzard? That's not possible to tell from p-values. Please add diversity values to the table, so the reader will at least know the direction of the difference. Consider also adding statistics from the ANOVA test.
• Figure 2: I like the colors used. They are easy to tell apart.
• L252: The word microbiome is used, but I think the authors mean the gastro-intestinal tract.
• L284: "Decreased alpha diversity and community complexity in the lower GIT could be the result of host filtering of bacteria in the upper GI sections." Can the authors please explain further what they mean? If the host kills certain bacteria in the upper GI sections with pH, the dead bacteria would still be present in the lower gut community as well due to the downward flow of content. This also does not explain why bacterial diversity is higher in the upper gut. Do the authors mean that the higher diversity in the upper gut is most likely derived from diet- and environmentally sourced bacteria?
• Discussion: I don't know the word limit of this journal, but if possible, I would love to read a somewhat more extended discussion. The authors very briefly touch upon some of the interesting results they found, but there is very little integration of what the results mean or comparison with previous studies. There are a number of similar studies that have been conducted in grouse, ostriches and sparrows, for example.
• Overall, I think the manuscript is well-written and easy to read.
11-Nov-2019
Dear Dr Grond, It is a pleasure to accept your manuscript entitled "Spatial Heterogeneity of the Shorebird Gastrointestinal Microbiome" in its current form for publication in Royal Society Open Science.
Please ensure that you send to the editorial office an editable version of your accepted manuscript, and individual files for each figure and table included in your manuscript. You can send these in a zip folder if more convenient. Failure to provide these files may delay the processing of your proof. You may disregard this request if you have already provided these files to the editorial office.
Before we proceed to the production stage, we ask that you please now make sure that your figshare article is finalised as a collection: https://knowledge.figshare.com/articles/item/howto-use-collections This will ensure that a formal DOI can be assigned to your dataset; this will enhance both visibility of your data, and ensure that your dataset can be cited appropriately with the DOI given. Once you have done this, please send an updated copy of your manuscript file (a clean Word document, with the updated data accessibility statement and figshare DOI) by return email.
Once we receive this, you can then expect to receive a proof of your article in the near future. Please contact the editorial office (openscience_proofs@royalsociety.org) and the production office (openscience@royalsociety.org) to let us know if you are likely to be away from e-mail contact. If you are going to be away, please nominate a co-author (if available) to manage the proofing process, and ensure they are copied into your email to the journal. Due to rapid publication and an extremely tight schedule, if comments are not received, your paper may experience a delay in publication.
Please see the Royal Society Publishing guidance on how you may share your accepted author manuscript at https://royalsociety.org/journals/ethics-policies/media-embargo/.
Thank you for your fine contribution. On behalf of the Editors of Royal Society Open Science, we look forward to your continued contributions to the Journal.
Kind regards, Lianne Parkhouse Editorial Coordinator Royal Society Open Science openscience@royalsociety.org
Dear editors,
Below we addressed the comments of Dr. Tezel and the two reviewers. We hope we have adequately addressed their concerns, and have modified our manuscript to be satisfactory for publication. We thank Dr. Tezel and the reviewers for their constructive comments and compliments. Sincerely,
Kirsten Grond
• Ethics statement If your study uses humans or animals please include details of the ethical approval received, including the name of the committee that granted approval. For human studies please also detail whether informed consent was obtained. For field studies on animals please include details of all permissions, licences and/or approvals granted to carry out the fieldwork. >> We added an ethics section at the end of the manuscript with all permits and approvals.
• Data accessibility It is a condition of publication that all supporting data are made available either as supplementary information or preferably in a suitable permanent repository. The data accessibility section should state where the article's supporting data can be accessed. This section should also include details, where possible, of where other relevant research materials such as statistical tools, protocols, and software can be accessed. If the data have been deposited in an external repository, this section should list the database, accession number and link to the DOI for all data from the article that has been made publicly available. Data sets that have been deposited in an external repository and have a DOI should also be appropriately cited in the manuscript and included in the reference list. >> The data accessibility section has been updated.
Associate Editor Comments to Author (Dr Ulas Tezel): Dear Dr. Kirsten Grond: Please see below the comments and suggested MINOR revisions made by the individual(s) who reviewed your manuscript. I would like to draw your attention to one critical issue raised by the reviewers: 1. The link provided in the manuscript (10.6084/m9.figshare.9792668) is not valid; thus the data provided at the link are not accessible. Please provide a valid link or update the link so the data can be easily accessed. >> Our apologies for the broken link. We updated the link for figshare, and added the SRA BioProject number for NCBI.
METHODS
81 - what kind of string was used? >> We used regular cotton string, and did not use the tissue and content directly next to the string. We added this to the methods.
Overall, great job on this section. All of it was very clear and tight.
RESULTS
179 - Did you use a repeated measures ANOVA to compare samples only within individuals? As your samples show considerable variation among individuals, it may be beneficial to account for this statistically. >> See above.
180 - sentence order is off; I think you meant to have "Shannon's alpha diversity" and "host species" switched. >> We thank the reviewer for noticing this mistake, and have rewritten the sentence.
248 -"small intestine" does not need to be capitalized >> We removed the capitalization of small intestine. 345 -"have been shown to show" consider rephrasing for less redundancy. Also, consider adding citations to this sentence. >> We replaced 'have been shown to show' with 'show', and added a citation by Videlvall and al 2018.
DISCUSSION
312 - typo: "have share" >> We removed 'have' in this sentence.
FIGURES
fig. 1 - consider adding the full sample type name, vertically or at an angle on the axis, for ease of interpretation. >> We added the full sample type name to figure 1.
table 1 - typo: "proventriculus" is missing a "t". >> We corrected this mistake.
Reviewer: 2 Comments to the Author(s)
Grond et al. have evaluated the microbiome of six sections of the GIT in six individuals of two species of shorebirds. This is a good study and I enjoyed reading it. It is useful for the avian microbiome community. It is well written with an adequate amount of information in the methods and the results. The figures are nice and well presented. The analyses are appropriate. I don't have any major comments, but I hope that my minor comments will be useful to the authors during the revision of the paper.
Minor comments:
• Abstract and discussion: In general, I believe the term "gut microbiome" is most often used to refer to the microbiota of the large intestine. That is, researchers sample feces in order to draw conclusions about the large intestinal microbiome, but the typical word used is often gut microbiome. I don't think anyone believes that the bacterial community will be similar throughout the entire GIT (especially since this term includes everything from the bill to the cloaca). It has also been shown in previous studies that this is indeed the case. I think the authors are correct in stating that the GIT sections are different (which they have evaluated) and that fecal samples do not portray everything in the GIT (which they have not evaluated). But I also want to urge the authors that it might be useful to be slightly careful with this wording, because no one seriously believes you will get an accurate picture of the esophagus microbiome by sampling feces. So the point of feces not representing the entire GIT community becomes a bit meaningless. The point of fecal sampling is not to measure the entire GIT but to evaluate the large intestine non-invasively. >> We have changed gut microbiome to GIT microbiome throughout the paper. Technically, the gut microbiome represents the entire GIT, but given its use in the current literature we agree with the reviewer that it could be a confusing term.
Although we agree with the reviewer that the comparison of esophagus and fecal microbiomes is meaningless, we do want to emphasize in our paper that data from feces or single GIT sections do not represent the GIT microbiome. Although gut microbiome is often used interchangeably with the large intestinal microbiome, this is rarely mentioned in publications. I have seen a number of publications treating fecal or even cloacal samples as representative of the gut, with no specification of what the gut represents in their paper.
However, I have used fecal sampling in a number of my own studies and I agree it is the only non-invasive method available for evaluating the large intestine microbiome. I added a sentence to the conclusion section to clarify this: "Although fecal samples are unlikely to capture the entire GIT microbiome community and variety, it is often the only non-invasive method available for investigating the microbiome of wild animals. Therefore, we advise authors that use fecal samples to clearly define which microbiome their samples represent."
• L50: The cited paper has not studied the gizzard microbial community or its pH. Please revise. >> Our apologies for the mistake. We changed the sentence to reflect the authors' hypothesis that the acidic stomach causes the microbial filtering from the upper GIT to the intestine.
• L53: I think "decades" is a bit exaggerated. Probably "years" would fit better. Mammalian microbiome research in its current form is also relatively new. >> We removed 'several decades', which changed the sentence to: "After lagging behind mammalian microbiome research" • Methods sampling: It is my understanding that one needs a permit for trapping birds and another permit for collecting birds. Feel free to correct me if I'm wrong. Even if the birds used in this study were accidentally killed during the trapping procedure, don't the authors agree it would be appropriate to state the permit or licenses used for trapping and collecting, since this allowed the authors (or collaborators) to catch the birds in the first place? >> We agree with Reviewer 2 and we added our permit numbers for trapping and collecting to the new required Ethics section.
• L105: As far as I'm aware, there is no MiSeq v4 kit. There are v2 and v3. Probably just a typo. >> This is indeed a typo, our apologies. We used the v2 kit and changed this in the manuscript.
• L126: Dada2, vegan, phyloseq, DeSeq2, FEAST versions have not been specified. Since these kinds of software often make substantial changes between versions, it would be good to state version used for reproducibility reasons. >> We agree with reviewer 2, and added the version numbers of the programs and packages used, with the exception of FEAST. We mistakenly identified FEAST as a package, when it is custom code described in the citation provided.
• L170 Data availability: Royal Society Data Policy states that "Datasets and code should be deposited in an appropriate, recognized, publicly available repository. Where no data-specific repository exists, authors should deposit their datasets in a general repository such as Dryad or Figshare." First of all, I cannot find the sequences or the metatable of this study in the provided Figshare link. Regardless, I believe the 16S sequences in this study should be deposited in an appropriate sequence database such as SRA, ENA or DDBJ to allow for future re-analyses and meta-analyses. Figshare is not an appropriate repository for open sequence data. The metatable can be stored on Figshare (in addition to the sequence repository). >> We updated the figshare link and provided the SRA BioProject number that contains the raw sequences. Our apologies for the broken link.
• Table 1: Curious as to why only p-values are provided in Table 1? Where are the effect sizes? p-values only tell the reader whether the test was significant or not at an arbitrary threshold. As a reader you want to see the results of the analysis. Is the esophagus more diverse than the gizzard? That's not possible to tell from p-values. Please add to the table diversity values, so the reader will at least know which direction the difference is. Consider also stats from the ANOVA test. >> We added the diversity values to the table for clarification. Since we performed a TukeyHSD test we did not have test statistics to add for this.
• Figure 2: I like the colors used. They are easy to tell apart. >> Thank you! • L252: The word microbiome is used but I think the authors mean gastro-intestinal tract.
>> We meant the microbiome in general as a collection of microbial communities, but since our paper focuses on the GIT we realize this causes confusion. We changed microbiome to gastrointestinal tract for clarity.
• L284: "Decreased alpha diversity and community complexity in the lower GIT could be the result of host filtering of bacteria in the upper GI sections." Can the authors please explain this further how they mean? If the host kills certain bacteria in the upper GI sections with pH, the dead bacteria would still be present in the lower gut community as well due to the downward flow of content. >> In humans, nucleic acids were shown to be digested in the stomach with a pH of 1.3-3.5 (Liu et al. 2016. Scientific Reports). Birds have a pH from 1-3 (Beasley et al. 2015. PloS One). We therefore believe that the avian proventriculus and gizzard play a role in host filtering by not only killing bacteria, but also (partially) degrading their DNA. We added this information to our paper.
This also does not explain why the diversity of bacteria is higher upper in the gut? Do the authors mean that the higher diversity in the upper gut is most likely derived from the diet and environmentally sourced bacteria? >> We indeed mean that the upper GIT has a wider microbial exposure to environment and diet, which likely results in the higher alpha diversity. We added a sentence to clarify this.
"Higher alpha diversity in the upper GIT is likely due to the influx of a larger diversity of microorganisms that are associated with environment and diet." • Discussion: I don't know the word limit of this journal but if possible, I would love to read a little more extended discussion. The authors very briefly touch upon some of the interesting results they found, but there is very little integration on what it means and any comparisons with previous studies. There are a lot of similar studies that have been conducted in grouse, ostriches and sparrows for example. >> We have added a couple sections to the discussion further explaining our results(see highlighted sections. We also added data from Videvall et al. concerning ostrich microbiomes, but our comparative ability was limited as they did not investigate microbiomes of the upper GIT. We searched for the grouse and sparrow GIT section publications mentioned by the reviewer but were unable to find any relevant papers unfortunately.
• Overall, I think the manuscript is well-written and easy to read. >> Thank you!
"year": 2020,
"sha1": "bf62353c6b3f7acca98c4fbcfec787847c8769ac",
"oa_license": "CCBY",
"oa_url": "https://royalsocietypublishing.org/doi/pdf/10.1098/rsos.191609",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "49cb57e16acb8bfdd394f14f47c8da26a0727ef8",
"s2fieldsofstudy": [
"Environmental Science",
"Biology"
],
"extfieldsofstudy": [
"Medicine",
"Biology"
]
} |
Predictive Crystal Plasticity Modeling of Single Crystal Nickel Based on First-Principles Calculations
To reduce reliance on experimental fitting data within the crystal plasticity finite element method (CPFEM), an approach is proposed that integrates first-principles calculations based on density functional theory (DFT) to predict the strain hardening behavior of pure Ni single crystals. Flow resistance was evaluated through the Peierls-Nabarro equation using the ideal shear strength and elastic properties calculated by DFT-based methods, with hardening behavior modeled by imposing strains on supercells in first-principles calculations. Considered alone, elastic interactions of pure edge dislocations capture hardening behavior for small strains on single slip systems. For larger strains, hardening is captured through a strain-weighted linear combination of edge and screw flow resistance components. The rate of combination is not predicted in the present framework, but agreement with experiments through large strains (~0.4) for multiple loading orientations demonstrates a possible route for more predictive crystal plasticity modeling through incorporation of analytical models of mesoscale physics.
Introduction
Due to their high ductility, plastic deformation in face-centered cubic (fcc) materials has been widely investigated, typically through descriptions of shear stress-strain behavior of the individual slip systems [1]. As both the host element for many superalloys and a prototypical fcc material, pure single crystal Ni is of particular interest, and its mechanical properties have been investigated through both experiments [2][3][4] and simulations [5,6]. Experimental work on Ni single crystals in the literature has focused on determining the resolved shear stress-strain behavior of their slip systems, including, for example, the initial critical resolved shear stress (CRSS, represented by $\tau_0$ in the present work) and its relationship to orientation and temperature [2].
A fundamental understanding of plasticity based on the evolution of slip system strength has been incorporated into the crystal plasticity finite element method (CPFEM), which has become one of the main computational techniques to relate macroscopic deformation behavior to its slip-based origins [7]. These methods have been used to capture experimentally observed mechanical behavior of both single crystals and polycrystals [8][9][10][11][12]. However, the parameterization of CPFEM models is predominantly accomplished by fitting simulated stress-strain curves to experimental data, underutilizing any physical meaning contained within their parameters and severely limiting their predictive power. Even recent bottom-up approaches to predicting material deformation often start with calibration of slip system behavior to macroscale experimental data [13]. To counter this, some descriptions of slip system strength that depend on physical mechanisms have been developed with the goal of using lower length scale computations to predict these terms [7]. For example, first-principles calculations based on density functional theory (DFT) have been used to predict bcc hardening parameters by considering an isolated mechanism known to be dominant in body-centered cubic (bcc) materials [14]. However, due to the array of complex dislocation interactions responsible for fcc plasticity, first-principles techniques have faced challenges in their extension to predicting hardening parameters in fcc materials [15].
DFT-based first-principles calculations provide a description of atomic processes based on their electronic structures. Advanced techniques have been developed to account more efficiently for far-field strain fields emanating from dislocations while maintaining atomistic accuracy near the core [14,16,17], but complex interactions of large numbers of dislocations remain out of reach of first-principles methods. Consequently, the present work makes no attempt to explicitly consider dislocations and instead focuses on the properties of ideal shear strength and elasticity, which can be related to physical parameters in CPFEM models and can be reliably obtained by DFT-based first principles calculations through the imposition of strains [18,19].
The present work introduces a method of linking a computationally tractable problem, the ideal shearing process, to a realistic description of macroscopic deformation, combining the utility of CPFEM modeling with the predictiveness of DFT-based calculations. In this proposed approach, the effects of the elastic field on ideal shear strength due to long-range interactions between dislocations are mimicked by applying pre-strains in the first-principles calculations. The flow resistance for pure edge and pure screw dislocations was then predicted using the Peierls-Nabarro model [20,21]. In contrast to CPFEM frameworks with numerous parameters whose physical interpretations are unused during fitting to experimental data, two of only three hardening model parameters were predicted based on their physical analogue in the context of first-principles methods. The parameterized CPFEM model was used to predict the macroscopic stress-strain curves of various single crystal tensile tests to small tensile strains. Due to the importance of screw characteristics of dislocations at high dislocation densities [22], the present work proposes to model the flow stress, which is used to parameterize an established CPFEM hardening model, as a linear combination of flow stresses of edge and screw dislocations weighted by plastic strain.
The linear coefficient presently relies on a fitting procedure, but the framework allows for the use of first-principles results while also considering mesoscale physics, the incorporation of which will be the subject of future work. The results of CPFEM simulations were compared with available literature data on the strain hardening behavior of single crystal Ni of multiple orientations at large strains.
Approach
In a single crystal fcc material, dislocations themselves are the main obstacles that inhibit dislocation movement and thus the major strain hardening mechanism [22,23]. In the small strain range, the dislocation density is relatively low, and the dislocation interaction primarily occurs through long-range elastic fields [1]. In the large strain range, short-range dislocation interactions become the major strain hardening source as the dislocation density is high and the dislocation mean free path is low [22,24]. In short-range dislocation interactions, dislocation cores make contact with each other to form jogs or junctions. Because junctions formed by dislocation core reactions can be sessile, junctions are considered to be the major source of strain hardening in stage II deformation of fcc crystals [22]. The model predictions considering only junctions have shown satisfactory agreement with experiments in the literature [22,25,26].
In DFT-based calculations, direct consideration of dislocations is challenging due to the high computational cost of the calculations, which limits their size, and the inherently extended nature of dislocations. Therefore, explicit first-principles calculations even of single dislocations have been made only with the help of elastic Green function solutions to account for the far-field elastic distortions, attenuating image forces due to periodic boundary conditions and allowing the accurate yet expensive DFT-based calculations to relax only those atoms deemed to be part of the dislocation core [14,17,27,28].
In the present work, a different approach is proposed to consider dislocations in an indirect manner. Specifically, the method adopted here relies on the improved analytic form to estimate Peierls stress proposed by Joós et al. [29], which is based on elastic properties and the ideal shear strength along the partial slip system and is widely employed [30][31][32][33]. Both the elastic constants and the ideal shear strength can be calculated through DFT-based methods, and, critically, these values can be calculated even when the crystal structure is already under the influence of an orthogonal shear strain. In the present work, the resulting increase in the ideal shear strength in these "pre-strained" structures is interpreted as the general influence of elastic fields on flow resistance and is used as a proxy for hardening due to long-range interactions of dislocations. The approach is described in general by a schematic in Figure 1 and then through a discussion in the following two sections, with further calculational details in the Supplemental Materials.
The central postulation of the present work is that the response of the ideal crystal to elastic strains contains information relevant to a description of macroscopic deformation. Strain applied to the ideal crystal in one direction is used to determine, through the Peierls-Nabarro equation, the stress required to move a single dislocation in an otherwise perfect and strain-free lattice. Adding strain in an orthogonal in-plane direction increases the difficulty of the ideal shear process, analogously to the way that strain field interactions of multiple dislocations increase the difficulty of their motion through a realistic crystal. As will be discussed in Section 3.5, the above procedure predicts insufficient hardening at large strains, where the changing nature of the dislocation network must be taken into account. To this end, a model is proposed combining the effects of edge and screw dislocations as a function of strain to obtain new hardening parameters suitable for large strain predictions. While the model for large strains does not explicitly describe pinning effects or hardening due to forest dislocations, it generally considers the evolution of the most mobile dislocation segments as these segments increasingly become exhausted through formation of more sessile junction segments. The overall strength of the slip system is then tied to the mobility of its most mobile segments, as it must be for plastic strain to be accomplished through slip.
Crystal plasticity model
The crystal plasticity framework presented by Huang [34] is adopted in the current work. In this framework, strain hardening is described as the evolution of the CRSS on one slip system due to the shear strain on any slip system:

$$\dot{\tau}_c^{\alpha} = \sum_{\beta} h_{\alpha\beta}\, \dot{\gamma}^{\beta} \quad (1)$$

where $\tau_c^{\alpha}$ is the CRSS on slip system $\alpha$, $\gamma^{\beta}$ is the shear strain on slip system $\beta$, and $h_{\alpha\beta}$ is the hardening matrix. A form of $h_{\alpha\beta}$ presented by Peirce et al. [35] is adopted in the present work for the simplicity of its form and the interpretability of the individual parameters between length scales. Peirce et al. [35] proposed that:

$$h_{\alpha\alpha} = h(\gamma) = h_0\, \mathrm{sech}^2\!\left( \frac{h_0 \gamma}{\tau_s - \tau_0} \right) \quad (2)$$

$$h_{\alpha\beta} = q\, h(\gamma), \quad \alpha \neq \beta \quad (3)$$

where $\gamma = \sum_{\alpha} \int_0^t |\dot{\gamma}^{\alpha}|\, dt$ is the cumulative shear strain on all slip systems. The parameter $q$ characterizes the difference between self-hardening ($\alpha = \beta$) and latent hardening ($\alpha \neq \beta$), for which a ratio of 1.4 is widely accepted in the literature [7,36]. With this form, the slip-system-level strain hardening curve increases monotonically with a decreasing slope and approaches a saturation value asymptotically. The initial slope of this curve is controlled by $h_0$, the saturation value is controlled by $\tau_s$, and the initial CRSS value is $\tau_0$.
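As a concrete illustration of Eqs. 1-3 (a minimal sketch, not code from the original work), the following Python snippet evaluates the self-hardening modulus and assembles the hardening matrix for the 12 fcc slip systems; all numerical inputs are placeholders.

```python
import numpy as np

def h_self(gamma_acc, h0, tau0, tau_s):
    """Self-hardening modulus of Peirce et al. (Eq. 2):
    h(gamma) = h0 * sech^2( h0 * gamma / (tau_s - tau0) )."""
    x = h0 * gamma_acc / (tau_s - tau0)
    return h0 / np.cosh(x) ** 2

def hardening_matrix(gamma_acc, h0, tau0, tau_s, q=1.4, n_slip=12):
    """Hardening matrix h_ab (Eqs. 2-3): h(gamma) on the diagonal
    (self-hardening) and q*h(gamma) off the diagonal (latent hardening)."""
    h = h_self(gamma_acc, h0, tau0, tau_s)
    H = q * h * np.ones((n_slip, n_slip))
    np.fill_diagonal(H, h)
    return H

# Rate of CRSS evolution (Eq. 1) for hypothetical parameters and slip rates:
H = hardening_matrix(gamma_acc=0.05, h0=120.0, tau0=9.0, tau_s=300.0)
gamma_dot = np.full(12, 1e-3)   # slip rates on the 12 systems (placeholder)
tau_c_dot = H @ gamma_dot       # CRSS rate on each system, Eq. 1
```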
It should be noted that even though less physically-motivated than the model of Taylor based on dislocation density [37], the hardening model by Peirce et al. [35] does not rely on explicit descriptions of dislocations that would be prohibitive in first-principles methods due to the computational expense. However, the models of Taylor [37] and Peirce et al. [35] describe the same deformation response and therefore must also describe the effects of collective dislocation motion, whether explicitly or implicitly. Similarly, other forms of ℎ , discussed in Ref. [7], describe deformation using terms specific to dislocation motion such as interaction strength, lock formation, dipole formation, and annihilation processes, each contributing at least one additional fitting parameter to the overall hardening law for a total parameter set that can easily number over 15.
The hardening modulus adopted in the present work (Eq. 2) has found application to the deformation behavior of single crystal tungsten [38,39], single crystal copper [40], polycrystal copper [41], friction stir welded aluminum [42], Ti-6Al-4V [43], and dual-phase steels [44]. The deformation modes to which it has been applied range from nanoindentation [38][39][40]43] to wire tension tests [41] to the deformation of representative volume elements that approximate a microstructure's bulk mechanical response [42,44]. In each case, model parameters were determined by fitting to experimental stress-strain or load-displacement data. The key novelty and strength of the present study is that 0 and ℎ 0 , as well as all elastic constants, were predicted through DFT-based computations and thus were not fit using experimental data. The value for was taken from results reported in the literature.
First-principles calculations of flow resistance
The initial CRSS, $\tau_0$, is the minimum stress required to initiate plastic deformation [18], which for perfect crystals corresponds to the ideal shear strength, $\tau_{IS}$, while more generally this corresponds to the initial flow resistance, $\tau_f$. In the present work, $\tau_f$ is estimated using the Peierls-Nabarro model to find the Peierls stress, $\tau_p$, the minimum stress required to move a dislocation [20,21]. For the spatially extended strain fields surrounding dislocations, common to pure metals, the Peierls-Nabarro equation is given in Eq. 4 [29,45]:

$$\tau_p = \frac{K b}{a} \exp\!\left( -\frac{2\pi \zeta}{a} \right) \quad (4)$$

Here, b is the Burgers vector, a is the row spacing of atoms within the slip plane (for example, $a = a_0 \sqrt{6}/4$, where $a_0$ is the lattice parameter, for the case of {111}〈112̄〉 shear deformation of an fcc lattice), and $\zeta$ is the half-width of the dislocation, given as:

$$\zeta = \frac{K b}{4\pi \tau_{IS}} \quad (5)$$

The elastic factor, K, is direction-dependent for an anisotropic crystal like pure Ni. Analytical forms for anisotropic elastic factors have been derived, e.g., in Ref. [46], and depend on the character of the dislocation, with variants existing for both pure edge and pure screw dislocations.
Their full forms are given in the Supplementary Material.
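As a minimal sketch of Eqs. 4 and 5 as written above, the snippet below evaluates the flow resistance from an elastic factor and an ideal shear strength; the values of K and the ideal shear strength here are hypothetical, chosen only to illustrate the order of magnitude, and are not taken from the present calculations.

```python
import numpy as np

def flow_resistance(K, b, a, tau_is):
    """Peierls-Nabarro flow resistance (Eq. 4) with the dislocation
    half-width zeta from Eq. 5: zeta = K*b / (4*pi*tau_is)."""
    zeta = K * b / (4.0 * np.pi * tau_is)
    return (K * b / a) * np.exp(-2.0 * np.pi * zeta / a)

a0 = 3.52e-10                  # fcc Ni lattice parameter, m
b = a0 / np.sqrt(6.0)          # partial Burgers vector along <112>
a = a0 * np.sqrt(6.0) / 4.0    # row spacing in the {111} plane
K = 80e9                       # elastic factor, Pa (hypothetical)
tau_is = 5.0e9                 # ideal shear strength, Pa (hypothetical)
print(flow_resistance(K, b, a, tau_is))  # Peierls stress, Pa
```

The exponential form makes clear why modest pre-strain-induced changes in $\tau_{IS}$ and K translate into appreciable changes in flow resistance.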
The ideal shear strength in Eq. 5 can be predicted directly by pure alias shear, a deformation mode more representative of the slip process than affine shear [19,47,48]. Alias shear involves only one sliding layer, with the atoms in other layers initially remaining in their original positions [19,47,48]; see Figure 2b. The relaxations of a pure alias shear include all degrees of freedom of a supercell except for the fixed shear angle as well as any other imposed constraints, such as the pre-strain deformation discussed below. Elastic properties can be predicted by computing stresses under given strains by means of first-principles calculations and Hooke's law, with imposed non-zero strains being 0.007 and 0.013, as previously described [49,50]. The elastic factor for both edge and screw dislocations can then be calculated and applied to the Peierls-Nabarro equation (Eq. 4) for evaluation of the flow resistance as a function of strain and dislocation character. Further details of the first-principles calculations, for both elastic properties and flow resistance, are given in the Supplementary Material.
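To make the Hooke's-law step concrete, the sketch below (assuming only $\epsilon_{11}$ is imposed, with hypothetical stress outputs standing in for DFT results) extracts $C_{11}$ as the least-squares slope of $\sigma_{11}$ versus $\epsilon_{11}$ through the origin.

```python
import numpy as np

strains = np.array([0.007, 0.013])      # imposed normal strains e_11
stresses = np.array([1.86e9, 3.45e9])   # hypothetical DFT sigma_11, Pa

# Linear fit sigma_11 = C11 * e_11, constrained through the origin
C11 = np.sum(strains * stresses) / np.sum(strains ** 2)
print(f"C11 ~ {C11 / 1e9:.0f} GPa")
```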
Results from first-principles calculations
Investigation into the effect of the number of {111} layers contained in the supercell showed that 3 atomic layers, the minimum needed for this deformation mode, was the appropriate choice (see the Supplementary Material). The ideal shear strengths under various pre-strains were calculated and are shown in Table 1. It can be seen that the ideal shear strength increases with the magnitude of the orthogonal pre-strain while the shear strain at which the ideal strength is reached decreases. As mentioned in Section 2.1, the orthogonal pre-strain can be interpreted as the effect of the elastic field of one slip system on the deformation behavior of another. Therefore, the increase in ideal shear strength of one slip system as a function of the shear strain on another is indicative of strain hardening behavior, the quantification of which is discussed in the upcoming Section 3.2.
The elastic constants of fcc Ni in terms of the 6-atom orthorhombic cell ($C'_{ij,\mathrm{orth}}$) are summarized in Table 2. Note that by adopting the relationship given by Hirth and Lothe [46], $C'_{ij,\mathrm{orth}}$ can be transformed to $C_{ij,\mathrm{cub}}$, the elastic constants in terms of the 4-atom conventional cubic cell, for comparison with experimental data. The predictions without pre-strain agree with the experimental elastic constants extrapolated to 0 K [53]. With the elastic constants and the ideal shear strengths established as functions of pre-strain, the flow resistance can be calculated through the Peierls-Nabarro framework (Eq. 4) at each pre-strain. The elastic factor, in turn, depends upon the character of the dislocation, with the limiting cases of pure edge and pure screw given in the Supplementary Material. Note that the elastic factor at each pre-strain was calculated based on the elastic constants of the pre-strained structure to better capture the local elastic environment, so that each input value to Eq. 4 comes from a calculation using the same initial (pre-strained) structure.
The predicted flow resistances ($\tau_f^e$ for pure edge and $\tau_f^s$ for pure screw dislocations) at 0 K are compared with experimental $\tau_0$ values at room temperature in Table 1.
CPFEM model parameters from first-principles calculations
For small strains, when an ample portion of highly-mobile edge-type segments exist in the dislocation network, the most meaningful flow resistances are those calculated through the Peierls-Nabarro equation using elastic factors for pure edge dislocations. The high mobility of edge-type dislocations is seen not only in the initial flow resistance values of Table 1 but also in the relative ease with which edge dislocations break free from the junctions formed during strain hardening [61]. Therefore, the predictions based on pure edge and pure screw dislocations must be combined at large deformations as the highly-mobile segments are exhausted, leaving behind junctions and segments of an increasingly screw character. The procedure used to combine the flow resistances for both dislocation types as a function of strain will be discussed in Section 3.5. In the present section, the procedure for quantifying hardening behavior based on first-principles calculations, the critical scale-bridging translation step of the present work, will be given in the context of small strains.
As discussed in Section 2, DFT-based calculations predicted the flow resistance of a dislocation gliding along slip system $\alpha$ under the influence of an elastic field from other dislocations. The intensity of the elastic field can be mimicked by the pre-strain imposed in the DFT-based calculations. This pre-strain corresponds to the local effect of the shear strain on a latent slip system caused by the long-range elastic field of dislocations and is the $\gamma^{\beta}$ ($\beta \neq \alpha$) in Section 2.2. Note that while $\gamma^{\beta}$ represents a plastic strain, it is also indicative of the slip system activity, which results in the generation and interaction of dislocations. In a Taylor-like model, this slip system activity information might be encoded into a dislocation density parameter for each slip system. Since long-range dislocation interactions are conveyed through elastic strain fields, the results from first-principles calculations in the present work, describing the effect of elastic strain on flow resistance, are applicable to the mesoscale description of slip defined in the CPFEM hardening equations. By imposing different levels of pre-strain, the relationship between $\tau_c^{\alpha}$ and $\gamma^{\beta}$ was predicted (i.e., $\tau_f$ versus the pre-strain $\epsilon_{110}$ in Table 1). In this case, Eq. 1 through Eq. 3 can be simplified as:

$$\tau_c = \tau_0 + (\tau_s - \tau_0) \tanh\!\left( \frac{h_0 \gamma}{\tau_s - \tau_0} \right) \quad (6)$$

where $h_0$, $\tau_0$, and $\tau_s$ are model parameters. By matching the relationship between $\tau_c$ and $\gamma$ determined from Eq. 6 (note that $\tau_c = \tau_0$ when $\gamma = 0$) with that predicted in DFT-based calculations, the values of $\tau_0$ and $h_0$ were determined. As shown in Figure 3, for small strains, the hardening relation is approximately linear, and $\tau_0$ and $h_0$ correspond to the intercept and slope, respectively. Note that because DFT-based predictions are limited to small strains, where $h_0$ and $\tau_0$ play a dominant role, a value reported in the literature was adopted for $\tau_s$ (40 MPa [5]). The determined parameter values are summarized in Table 3.
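The extraction of $h_0$ and $\tau_0$ can be illustrated as follows: a linear fit of the DFT-predicted flow resistance against pre-strain gives the slope ($h_0$) and intercept ($\tau_0$), which then enter the integrated single-slip law of Eq. 6. The data pairs below are placeholders, not the actual Table 1 values.

```python
import numpy as np

pre_strain = np.array([0.00, 0.01, 0.02, 0.03])  # imposed pre-strains
tau_f = np.array([9.0, 9.25, 9.5, 9.7])          # flow resistance, MPa (hypothetical)

h0, tau0 = np.polyfit(pre_strain, tau_f, 1)      # slope = h0, intercept = tau0

def tau_c(gamma, h0, tau0, tau_s):
    """Integrated single-slip hardening law (Eq. 6):
    tau_c(0) = tau0 and tau_c -> tau_s as gamma grows."""
    return tau0 + (tau_s - tau0) * np.tanh(h0 * gamma / (tau_s - tau0))

print(h0, tau0, tau_c(0.1, h0, tau0, tau_s=40.0))
```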
Experimental results in the literature
To show the predictive accuracy of the present approach, geometrical models of Ni single crystal tensile tests from the literature were constructed and combined with the predicted hardening parameters summarized in Table 3. The tests considered are the wire tension tests by Haasen [2] and the dogbone tension tests by Yao et al. [60]. Figure 4a provides the resolved shear stress-strain curves reported in these publications [2,60]. The process of calculating engineering values from the resolved shear stress-strain curves is detailed in the Supplementary Material, while the final engineering stress-strain curves are shown in Figure 4b. Note that the loading directions with respect to the crystal orientation are different for each test, i.e., 〈1̄ 5 10〉 and 〈1̄ 2 8〉 by Haasen [2], and 〈011〉 by Yao et al. [60].
Discrepancies in the reported literature on pure Ni single crystal CRSS and flow behavior stem from differences in material purity, initial dislocation density, and potential experimental uncertainties. A method must therefore be adopted to evaluate these differences so that they may be considered when comparing computational results to experimental data. Here, differences in experimental results were evaluated by comparing their initial CRSS values, which are independent of the assumptions adopted for converting force-displacement data to resolved shear stress-strain data. Supplementary Figure S2 shows the initial CRSS value of pure Ni reported by ten different groups [2,3,[54][55][56][57][58][59][60]62]. Since the value reported by Latanision et al. [62] is significantly higher than the other reported values, it was excluded from evaluation in the present study. The rest of the experimental data all lie between 5 MPa and 20 MPa, and the statistics of these data are shown in Supplementary Table S2. According to the statistical analysis of the initial CRSS reported by nine different groups over more than 80 years, the experimental data in the literature exhibited a relative error of 43%.
DFT-based CPFEM predictions at small strains
To simulate the tests reported in the literature, the full geometry of the specimens in each test was modeled. All of the specimens were discretized with 0.2 mm hexahedral full integration elements (element type C3D8 [63]) in the gauge region, and the models contain 20,590 elements for the wire specimen by Haasen [2] and 2,176 elements for the dogbone specimen by Yao et al. [60]. In both models, the vertical movement of the bottom nodes was constrained while a uniform vertical displacement was applied to the top nodes. The horizontal movements of all top and bottom nodes of the flat dogbone specimen in Yao et al.'s study were also constrained to avoid potential out-of-plane distortion [64]. The crystal plasticity model was implemented in the commercial finite element software ABAQUS through a user subroutine UMAT [63] originally developed by Huang [34,65].
The simulated engineering stress-strain curves compared to the respective experimental results are shown in Figure 5, where it can be seen that the initial yield stresses in all of the tests were reasonably predicted. Table 4 summarizes the comparison between the predicted and experimental values. At larger strains, however, the edge-based parameterization alone cannot capture the overall mobility evolution of the dislocation network. Note that using elastic pre-strains to mimic the long-range elastic interactions between dislocations generated by plastic strain represents a limiting case, overestimating the elastic strain on the mobile dislocation due to the overall macroscopic plastic strain. As a result, the hardening rates in Figure 5 represent upper bounds to their small-strain estimation within this framework, further supporting the need to consider additional mechanisms.
Modeling and predictions at large strains
At large strains, as the highest mobility dislocation segments become exhausted, the effect of dislocation segments of both edge and screw character must be considered. Dislocations come into contact and form junctions that often exhibit screw character [66][67][68][69], which are a major contributor to the strain hardening of fcc crystals in the large strain range [22]. This indicates that the relative contribution to strain hardening from screw dislocations, and other segments that are difficult to move by an applied stress, increases with plastic strain. Therefore, in the present study, the following model is proposed to account for the increasing influence of screw components on strain hardening with plastic strain:

$$\tau_{f,\mathrm{mix}}^{\alpha} = \left(1 - A\gamma^{\alpha}\right)\tau_f^e + A\gamma^{\alpha}\,\tau_f^s \quad (7)$$

where $A$ is a weighting factor that controls the contribution from each type of dislocation, $\gamma^{\alpha}$ is the shear strain on slip system $\alpha$, $\tau_f^e$ and $\tau_f^s$ are the predicted CRSS in the DFT-based calculations (see Eq. 4) for pure edge and pure screw dislocations, respectively, and $\tau_{f,\mathrm{mix}}^{\alpha}$ is the combined flow resistance on slip system $\alpha$. Comparable strain-weighted combinations of mechanisms have been employed in the works of Benzerga and co-workers [71] and Huang et al. [72].
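A short sketch of the mixing rule of Eq. 7 as reconstructed here; capping the screw weight at 1 is an added assumption (so the flow resistance saturates at the pure screw value rather than overshooting), and the inputs are illustrative only.

```python
import numpy as np

def tau_f_mix(gamma, tau_edge, tau_screw, A):
    """Strain-weighted edge/screw flow resistance (Eq. 7); the weight
    A*gamma is capped at 1 so the mix saturates at the screw value."""
    w = np.minimum(A * np.abs(gamma), 1.0)
    return (1.0 - w) * tau_edge + w * tau_screw

# Hypothetical edge and screw flow resistances (MPa):
print(tau_f_mix(gamma=0.3, tau_edge=9.0, tau_screw=40.0, A=0.33))
```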
The weighting factor $A$ in Eq. 7 was adjusted to reproduce the stress-strain curve by Yao et al. [60]; the resulting parameters are listed in Table 3. As discussed in Section 3.2, the saturation stress ($\tau_s$ in Eq. 2) cannot be determined from DFT-based calculations; therefore, $\tau_s$ was calibrated to be 300 MPa based on the experimental data in Figure 7b. Note that the value of $\tau_s$ only affects the stress-strain curve in the large strain range. In the present study, one simulation with $\tau_s$ being an order of magnitude higher than 300 MPa resulted in a stress-strain curve that was only slightly different for engineering strains greater than 0.6. Therefore, the excellent agreement in Figure 7 is primarily attributed to the value of $h_0$, which is derived from the DFT-based calculations, and the weighting factor $A$. While the weighting factor for the present calculations was fit to one of the single crystal Ni curves, its physical meaning as the rate of average character evolution of a dislocation network allows for its prediction based on other types of simulations that explicitly consider representative numbers of dislocations. Such mesoscale investigations are beyond the scope of the present work; here, results of the large strain parameterization are used to demonstrate how first-principles results may be incorporated into CPFEM calculations.
The wire tension tests performed by Haasen [2] were simulated again using the newly determined parameters that consider the influence of both edge and screw dislocations. Figure 8 shows the resulting predictions compared with the experimental data. The present model relies on the dominance of partial slip systems and therefore has natural limitations, prohibiting its direct use in bcc materials or those where dislocations are unlikely to dissociate into partials. Even though forms of the Peierls-Nabarro equation exist for more compact dislocation cores [29], their use within the present framework for more complex crystal structures could be complicated by the existence of specific dislocation mechanisms that dominate plastic flow, as in the case of kink nucleation in bcc materials [14]. The focus of the present work is the fcc case of pure Ni, and it should be emphasized that in the above calculations, only the weighting factor, $A$, and the saturation stress, $\tau_s$ (whose contribution to the accuracy of the predictions was negligible), were fitted from a macroscopic stress-strain curve, while all other parameters were predicted from DFT-based calculations. In contrast, existing physics-based crystal plasticity models in the literature generally feature large numbers of fitting parameters, with the fitting process in practice diminishing the physical significance of each parameter.
Conclusions
In the present work, an approach has been developed to predict the macroscopic stress-strain behavior of single crystal Ni by parameterizing a crystal plasticity model with DFT-based first-principles calculations.
Data Availability
All relevant data are available from the authors.
Conflict of Interest
On behalf of all authors, the corresponding author states that there is no conflict of interest.

Figure 1: A schematic of the overall approach proposed in the current work, showing the transfer of information from the atomic scale ideal shear process to a mesoscale description of hardening on a slip system level to, finally, a description of macroscale deformation of single crystal samples.
Tables
Table 2: Calculated elastic constants (in GPa) of fcc Ni in terms of the conventional cubic lattice ($C_{ij,\mathrm{cub}}$) and the orthorhombic lattice ($C'_{ij,\mathrm{orth}}$, see Figure 2a for the supercell) without and with pre-strain $\epsilon_{110}$; $C_{ij,\mathrm{cub}}$ translated directly from $C'_{ij,\mathrm{orth}}$.

Table 3: CPFEM model parameters, with elastic constants taken from Table 2. All of the parameters in this table were determined through DFT-based calculations in the present study, except $\tau_s$ and $A$. The values for $\tau_s$ were either taken from the literature [5] (edge based) or calibrated from macroscopic experiments (edge screw mix), and $A$ was calibrated from macroscopic experiments. See detailed discussion in Section 3.5.

Parameter set | $C_{11}$ (GPa) | $C_{12}$ (GPa) | $C_{44}$ (GPa) | $h_0$ (MPa) | $\tau_0$ (MPa) | $\tau_s$ (MPa) | $A$
Edge based | 265 | 161 | 127 | 24 | 9 | 40 | -
Edge screw mix | 265 | 161 | 127 | 120 | 9 | 300 | 0.33
Details of first-principles calculations
All DFT-based first-principles calculations in the present work were performed with the Vienna Ab initio Simulation Package (VASP) [1]. The ion-electron interaction was described by the projector augmented wave (PAW) method [2], and the exchange-correlation functional was characterized by the generalized gradient approximation (GGA, PW91) as parameterized by Perdew et al. [3].

To explore the layer dependency of ideal shear strength, ancillary DFT-based calculations of pure alias shear along {111}〈112̄〉 were also performed using the 6-atom (3-layer), 12-atom (6-layer), and 18-atom (9-layer) orthorhombic supercells based on the structure shown in Figure 2a.
The corresponding k-point meshes were 10167, 9163, and 7122, respectively. In addition, phonon calculations were also carried out to explore the origin of layer-dependent IS in terms of the 6-atom (3-layer) and the 12-atom (6-layer) orthorhombic cells after {111}〈112̅〉 pure alias shear by applying the same amount of shear displacement (0.5 Å). These phonon calculations were performed by the supercell approach [6] as implemented in the YPHON code [7,8].
The predicted IS agrees with experimental estimates of the ideal shear strength of Ni reported in GPa [11]. The higher values found by nanoindentation are likely due to the measurement being performed on a non-close packed (001) plane [10] and the stabilizing effect of the triaxial stress state beneath the indenter tip [12]. With an increasing number of {111} layers, the predicted IS decreased significantly despite the fact that the absolute displacement distance increased only slightly. The 3-layer, 6-atom supercell was chosen for study in the present work due to its agreement with experimental estimates of the ideal shear strength of pure Ni and because it represents the minimum number of layers, and therefore maximum shear stress.
To understand the decrease of IS with increasing numbers of {111} layers, the stretching force constants are plotted in Figure S1 with phonon calculations for two fcc-based orthorhombic lattices: one with 3 layers (6 atoms) and one with 6 layers (12 atoms) after pure alias shear with the same amount of displacement distance (0.5 Å) applied. Here the force constants, particularly the dominant stretching force constants shown in Figure S1 (as opposed to the significantly smaller bending force constants), provide quantitative understanding of the interaction or bonding between atomic pairs [13,14]. A large and positive force constant indicates strong bonding, while a negative force constant suggests the pair of atoms tend to separate from each other. Figure S1 shows that the maximum stretching force constants from the 3-layer lattice are higher than those from the 6-layer lattice.
It should be noted that all DFT-based calculations of CRSS in the present work were performed at 0 K for simplification, while all experimental data were taken at room temperature. This simplification is appropriate because, for pure metals, the CRSS values at 0 K are close to those at room temperature [15]. Additionally, previous calculations have indicated that, for many properties, DFT-based results at 0 K are comparable to experimental data measured at room temperature (298 K). For example, the predicted difference of enthalpy of formation is negligible between 0 K and room temperature (< 0.2 kJ/mol for metal sulfides [16]), the predicted bulk moduli of Ni and Ni3Al decrease about 9 GPa (5 %) from 0 K to room temperature [17], and the predicted ideal shear strength of Ni decreases about 0.1 GPa (2 %) [9].
Lastly, the conversion of the DFT-based ideal shear strengths and elastic constants depends on the choice of elastic factor, whose value depends on dislocation character. These elastic factors have been derived for an anisotropic solid by Hirth and Lothe [18]. For example, for an edge dislocation aligned with the z-direction and a Burgers vector with components only in the x- and y-directions, the corresponding elastic factor for the edge component along the x-direction is given by Eq. S1, expressed in terms of combinations of the elastic constants.
Interpretation of experimental data in the literature
In the works of Yao et al. [19] and Haasen [20], both of which were used for comparison purposes in Section 3.3 and beyond, the authors showed only the resolved shear stress and resolved shear strain data. However, it is not straightforward to convert directly measurable quantities in the tests, namely force and displacement, to resolved shear stress and resolved shear strain on slip systems; the conversion process depends on the assumptions made as discussed below [21].
In the work by Yao et al. [19], only one slip system was assumed to be operating. The resolved shear strain and the resolved shear stress under this assumption are calculated according to Eqs. S3 and S4 [22][23][24], in which the two governing angles are the initial angle between the loading direction and the slip plane normal and the initial angle between the loading direction and the slip direction, and the remaining quantities are the engineering stress and the engineering strain. This approximation assumes that the loading axis continually rotates with respect to the active slip system throughout loading, which is unlikely to be true in finite deformation [25]. Eqs. S3 and S4 were used to calculate the engineering stress-strain curve in the tests in ref. [19].
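Because the symbols of Eqs. S3 and S4 were lost in extraction, the classical single-slip (Schmid-Boas) form of this conversion is recalled here for reference. The notation is introduced for this sketch only (it is not the authors' notation): φ0 denotes the initial angle between the loading axis and the slip-plane normal, λ0 the initial angle between the loading axis and the slip direction, σ the engineering stress and ε the engineering strain. The expressions below are offered as a plausible sketch of the relations referenced above, not as a verbatim reproduction of Eqs. S3 and S4:
\[ \gamma = \frac{1}{\cos\varphi_0}\left[\sqrt{(1+\varepsilon)^2-\sin^2\lambda_0}-\cos\lambda_0\right], \qquad \tau = \sigma\,\cos\varphi_0\,\sqrt{1-\frac{\sin^2\lambda_0}{(1+\varepsilon)^2}}. \]
Inverting these expressions converts reported resolved shear stress-strain data back into an engineering stress-strain curve, which is consistent with how the curves in Figure 4b were reconstructed.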
In the framework of double slip, the rotation of the loading axis with respect to the active slip system is assumed to cease when it reaches a specific orientation. Before reaching this orientation, single slip operates, and the equations above can be applied. After the rotation of the loading axis activates a conjugate slip system, the two slip systems are assumed to operate simultaneously with the same hardening rate, rotating the loading axis along the slip system boundary until reaching a point of stable double glide that prevents further rotation [22]. Given the unit normals of the two slip planes and the unit vectors of the two slip directions, the resolved shear strain and resolved shear stress under the double glide approximation can be calculated according to Eqs. S5 through S7 [22,26], in which the relevant composite vector is the sum of the corresponding pair of unit vectors and the reference angle is the angle between the loading direction and this composite vector at the onset of double glide. Eqs. S5 through S7 were adopted in the present work to calculate the engineering stress-strain curves in Haasen's tests, in which the initial loading direction was 〈1 ̅ 5 10〉 for crystal #6 and 〈1 ̅ 28〉 for crystal #18 in ref. [20]. In both tests, the {111}〈1 ̅ 01〉 slip system was active first.
It was assumed that when the loading direction rotated to 〈5 ̅ 5 14〉 for crystal #6 and to 〈2 ̅ 29〉 for crystal #18, double slip began and {1 ̅ 1 ̅ 1}〈011〉 started to operate as an additional slip system. The engineering stress-strain curves for all three tests, calculated using the above equations [19,20], are shown in Figure 4b of the main text.
Figure S1: Stretching force constants (FCs) as a function of bond length for two fcc lattices of Ni: (i) the orthorhombic lattice with 3 layers and 6 atoms (see Figure 2), and (ii) the orthorhombic lattice with 6 layers and 12 atoms. Note that both lattices have the same shear displacement of 0.5 Å for the {111}〈112̅〉 shear deformation, and the 72-atom supercells were employed for phonon calculations of both lattices.
Tables
Table S1: Ideal shear strength (IS), associated slip (displacement) distance on the shear plane, and engineering shear strain γ112 of fcc Ni due to pure alias shear along {111}〈112̅〉 using supercells with different layers, with the total number of atoms within each supercell given. | 2020-02-21T02:01:00.449Z | 2020-02-20T00:00:00.000 | {
"year": 2020,
"sha1": "855f398574af477eb09745d07b8a0c0377a9118a",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "855f398574af477eb09745d07b8a0c0377a9118a",
"s2fieldsofstudy": [
"Materials Science"
],
"extfieldsofstudy": [
"Physics"
]
} |
220713675 | pes2o/s2orc | v3-fos-license | Extragastrointestinal stromal tumor in the rectovaginal septum associated with acute arterial occlusion
Highlights
• EGIST in rectovaginal septum with unusual presentation (excessive vaginal bleeding).
• EGIST was misdiagnosed by MRI as vaginal leiomyoma.
• The first report of untreated EGIST associated with acute arterial occlusion.
Introduction
Gastrointestinal stromal tumors (GISTs) are common mesenchymal tumors that normally originate from the gastrointestinal (GI) tract, especially the stomach and intestine. Extragastrointestinal stromal tumors (EGISTs) which originate from outside the GI tract account for fewer than 10% of GISTs. It is even rarer for these tumors to occur in the vagina and rectovaginal septum, with only 22 cases having been reported (Cheng et al., 2019). Extragastrointestinal stromal tumors in the female reproductive tract have exhibited a wide variation of clinical presentations depending on size and location of the tumor, such as a sensation of being dragged down, constipation, dyspareunia, and vaginal bleeding (Hanayneh et al., 2018). Due to its rarity, this disease is often misdiagnosed as other conditions that originate from the uterine cervix such as cervical leiomyoma. Surgical removal is usually performed using a vaginal approach. We report a case of an EGIST in the rectovaginal septum originally diagnosed as cervical leiomyoma and successfully removed by abdominoperineal resection.
Case presentation
A 43-year-old woman, gravida 4, para 4, presented to our gynecological outpatient department in October 2016 after having had a protruding, painless vaginal mass 5 cm in diameter, without abnormal vaginal bleeding or abnormal urination, for nine years. Core needle biopsy was performed and immunohistochemical studies revealed a spindle cell tumor that was strongly positive for CD34 but negative for desmin and S-100. She was lost to follow-up until April 2019, when she presented with excessive vaginal bleeding and acute left leg pain. Her hematocrit was 14%. She received 4 units of packed red cells. Computed tomographic angiography (CTA) of the lower extremities showed no contrast opacity at the left femoral or left popliteal arteries. She was diagnosed with superficial femoral artery occlusion and underwent immediate surgical embolectomy. After surgery, she received oral warfarin at 15 mg daily in order to prevent recurrence. Vaginal examination revealed that the posterior vagina had an irregular surface with blood oozing and a mass approximately 10 cm in diameter at the rectovaginal septum. The lower edge of the mass was located 2 cm above the hymen; the mass had a soft to firm consistency, was not tender, and was fixed. The cervix, uterus, and adnexa could not be evaluated. Rectovaginal examination revealed an extraluminal nodular surface mass at the anterior wall of the rectum.
She underwent computed tomography (CT) of the whole abdomen, which showed a large enhancing mass in the vaginal pouch that extended to the cervix. Magnetic resonance imaging (MRI) of the lower abdomen was subsequently performed and revealed a vaginal mass measuring 11.2 × 7.6 × 7.8 cm with whirlpool-like heterogeneous enhancement and an epicenter located within the vaginal canal. The mass was hyperintense on T2-weighted images, which indicated degeneration. This mass caused pressure and abutment to the urethra, posterior wall of the urinary bladder, and rectum without adjacent organ invasion (Fig. 1). The provisional diagnosis was cervical leiomyoma. Chest imaging was not performed because malignancy was not suspected. Since she had no desire for further pregnancy, we decided to perform a total abdominal hysterectomy (TAH) after obtaining informed consent. We discontinued warfarin administration for five days, bridging the warfarin with 1 mg/kg of enoxaparin every 12 h, and performed the procedure after enoxaparin administration had been discontinued for 12 h. Upon laparotomy, we noted the normal size of the uterus. However, after the uterus was removed, we found that there was no connection between the uterus and the vaginal mass. The mass was accessed through dissection of the area behind the vaginal cuff through the rectovaginal space and separated from the adjacent organs. The anterior wall of the rectum was then separated from the mass. Palpation revealed a well-defined, firm mass attached to the posterior vaginal wall (Fig. 2). The tumor was enucleated with the aid of gentle intra-abdominal pressure pushing the mass down into the vagina. Next, the posterior vaginal wall was opened at the midline from the posterior fourchette to the posterior vaginal vault. Electrocautery and blunt dissection were performed to completely separate the vaginal epithelium from the vaginal mass, followed by excision of the vaginal mass from the underlying rectovaginal septum without trauma to the rectal sphincter or rectal serosa. The vaginal wall was closed with interrupted 2-0 Vicryl. There were no complications related to the surgery. Enoxaparin administration was resumed 24 h after surgery at a dosage of 60 mg subcutaneously every 12 h until discharge.
Gross examination revealed an 11.5 × 7 × 6 cm mass with rubbery tissue in the center on cut section. The pathological report revealed a well-circumscribed bland spindle cell lesion with moderate cellularity arranged in an intersecting fascicular pattern with prominent stromal hyalinization. The cells were spindle-shaped with a moderate amount of eosinophilic cytoplasm, spindle nuclei with blunt ends, dispersed chromatin, inconspicuous nucleoli, and mild pleomorphism. The mitotic count was 0-1 per 5 mm². The tumor cells were strongly positive for CD117, CD34, DOG1, vimentin, and caldesmon, but negative for smooth muscle actin (SMA), desmin and S-100 (Fig. 3).
The patient did not receive any postoperative targeted therapy due to personal economic constraints. In November 2019, the patient discontinued anticoagulant drug treatment after CTA evaluation. In April 2020 (11 months post operation), the patient had no recurrent mass, abnormal vaginal bleeding, or left leg pain.
Discussion
Rectovaginal EGISTs are very rare and potentially malignant. They have been reported in women from 15 to 80 years of age, with a mean age of around 56 years (Cheng et al., 2019). Clinical presentations vary depending on tumor size and location. The most common presenting symptom is awareness of a mass in the vagina. Vaginal EGISTs can be asymptomatic or may have primary symptoms of bladder outlet obstruction or vaginal bleeding (Ceballos et al., 2004;Hanayneh et al., 2018;Weppler and Gaertner, 2005). Severe constipation and rectal mucosal involvement have also been reported to be associated with GISTs involving the rectovaginal septum (Nasu et al., 2004;Zhang et al., 2009). In our case, the severe bleeding may have been caused by ulceration and necrosis of the overlying vaginal epithelium.
There is a significantly greater amount of data available on venous thromboembolism (VTE) than on arterial thrombosis in cancer patients. This is because VTE is a common condition in patients with any active cancer, but the association between GISTs and venous thrombosis remains unclear. A previous review reported only five cases in which patients diagnosed with GISTs were found to have VTE (Galeano-Valle et al., 2020).
Thrombosis has been observed in the arteries of cancer patients, and the risk of cancer-related arterial thromboembolism (ATE) has been shown to vary over the course of disease (De Stefano, 2018). For instance, the rate of ATE is highest during the first twelve months after cancer confirmation and subsequently gradually declines (Aronson and Brenner, 2018;Navi et al., 2017). The most common cancers associated with arterial thromboembolic events (in descending order) are lung, colorectum, prostate, breast, non-Hodgkin lymphoma, pancreas, stomach, and uterus (Navi et al., 2017). The EGIST patient in this report is the first reported to subsequently develop acute arterial occlusion.
Accurate diagnosis of EGISTs remains a challenge due to their rarity and non-specific symptoms, which often lead to them being overlooked by physicians. Preoperative imaging may help determine whether an EGIST is benign or malignant prior to surgery, allowing for more effective treatment planning. Magnetic resonance imaging is the most accurate imaging technique for delineating the anatomy of the pelvic organs. Extragastrointestinal stromal tumors appear as intermediate signal intensity in both T1-and T2-weighted images (WI) with homogeneous enhancement, with central hyperintensity on T2WI reflecting cystic degeneration or necrosis (Vázquez et al., 2012). Although these findings are suggestive of EGISTs, diagnostic imaging is unable to distinguish this condition from sarcoma. Histopathology remains the gold standard for differentiation between the two conditions. Similar to their GI tract counterparts, these tumors exhibit spindle cell morphology and expression of CD117 and CD34 (Akahoshi et al., 2018). However, definitive diagnosis of GISTs could not be made based solely on the presence of CD117, given that other spindle cell tumor also express strong positivity for CD117 (Novelli et al., 2010). Immunohistochemistry with a panel of antibodies including CD117, DOG1, CD34, SMA, S-100, and desmin is beneficial in the diagnosis of GISTs (Akahoshi et al., 2018;West et al., 2004). Leiomyomas and leiomyosarcomas are diagnosed when immunohistochemical analysis demonstrate the tumor to be positive for smooth muscle actin and desmin and negative for CD117 and s-100 (Akahoshi et al., 2018;Lam et al., 2006).
Surgery is the treatment of choice for localized GISTs (Akahoshi et al., 2018). In most of the reported cases, excision of the mass through the vaginal route was successful. However, when an EGIST is abnormally large (as in our case), the abdomino-perineal approach may be more feasible. This technique has been reported to be safe in difficult cases of complete rectal resection in cancer patients (Abou-Zeid et al., 2015) and may offer an advantage in cases in which there is a large mass pressing against the rectum. Having two surgeons operate simultaneously allows for better access to the mass and safer dissection of the bowel from the mass. Laparoscopic surgery is also an option if performed under the supervision of an experienced surgeon. Since 10-30% of GISTs have malignant potential, recurrence is a concern. Routine follow-up gynecologic examination is warranted (Lam et al., 2006). Targeted therapy such as imatinib has been administered postoperatively in some previously reported cases (Hanayneh et al., 2018). A randomized controlled trial showed that adjuvant therapy was able to prolong recurrence-free survival, with no recurrence found at 5 years. However, evidence regarding the clinical benefits of adjuvant therapy in vaginal EGISTs is lacking. When considering whether or not to prescribe adjuvant therapy, benefits and side effects such as nausea/vomiting, edema, diarrhea, skin rashes, and fever should be taken into account (Etherington and DeMatteo, 2019).
Conclusion
This unusual case, in which an EGIST presented as a vaginal mass and acute arterial occlusion, emphasizes the challenge of rare disease diagnosis. Some associated symptoms may give clues to whether or not the condition is malignant. Using the appropriate approach when excising the mass plays an important role in the success of the operation. | 2020-07-16T09:04:59.052Z | 2020-07-15T00:00:00.000 | {
"year": 2020,
"sha1": "12a6ff001fddfba80a6b09be0a6af0b33d9a9e22",
"oa_license": "CCBYNCND",
"oa_url": "https://doi.org/10.1016/j.gore.2020.100609",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "7eb4ecaf874edc5ffe6abaac77daf9c969913a40",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
248452816 | pes2o/s2orc | v3-fos-license | Immigration enforcement exposures and COVID-19 vaccine intentions among undocumented immigrants in California
COVID-19 vaccines are effective in preventing COVID-19 infection, disease, and death. However, there is no data about vaccine intentions among the 10.7 million undocumented immigrants in the US. This study examined the associations between immigration enforcement exposure and vaccine intentions among undocumented immigrants in California. This community-engaged study partnered with immigrant organizations across California during the COVID-19 pandemic to recruit 366 study participants to an online survey regarding their attitudes about the COVID-19 vaccine and past exposure with the immigration enforcement system. Data collection occurred from September 2020 – February 2021 before the vaccine became available. Overall, 65% of study participants indicated that they would definitely get the vaccine were it to become available. In multivariable logistic regressions, an increase in immigration enforcement scores were associated with a 12% decrease in vaccine acceptance (aOR = 0.88, CI: 0.78–0.99). Additionally, undocumented women were 3.09 times more likely to report vaccine acceptance compared to undocumented men (CI: 1.79–5.35) and undocumented Asians were 57% less likely to report vaccine acceptance compared to undocumented Latinx immigrants (aOR = 0.43, CI: 0.21–0.88). Exposure to the immigration enforcement system may undermine public health efforts to prevent further transmission of COVID-19 by reducing acceptability of vaccines among immigrant populations.
Introduction
Authorized COVID-19 vaccines are effective in preventing COVID-19 infection, disease, and death (CDC. Coronavirus Disease, 2019). As such, understanding COVID-19 vaccine intentions can inform strategies to increase vaccine uptake and address COVID-19 health and social disparities. However, there is no data about vaccine intentions among the 10.7 million undocumented immigrants in the US, who face increased risks and consequences for COVID-19 due to their disproportionate participation in essential occupations and restricted access to healthcare and public benefits (COVID-19, xxxx;Clark et al., 2020).
Multiple studies and validated scales assessing vaccination willingness incorporate trust as a determinant of vaccine acceptance (Betsch et al., 2018;Szilagyi et al., 2021;Latkin et al., 1982). Among undocumented immigrants, an overwhelming number of studies suggest immigration enforcement actions undermine trust in public institutions, making individuals less likely to engage in everyday behaviors or seek health and social services (Hacker et al., 2015). However, immigration enforcement does not inhibit all health seeking behavior (Yasenov et al., 2020) and the extent to which immigration enforcement influences vaccine acceptance is unknown.
To address these gaps, this study provides data about undocumented immigrants' vaccine intentions and associations with exposure to the immigration enforcement system, including encounters, worries, or fears.
Methods
Data come from the COVID-19 BRAVE Study (Building community Raising All immigrant Voices for health Equity), a community-engaged cross-sectional survey that examined the social, economic, and health impacts of COVID-19 among undocumented immigrants in California. Data collection occurred between September 2020-February 2021 before the COVID-19 vaccine became publicly available. The study partnered with a Community Advisory Board, schools and immigrantserving community-based organizations who recruited through listservs, social media, and flyers. Those who reported being undocumented, Asian and/or Latinx, ages 18-39, living in California at the time of participation, and ability to take a 15-minute online survey in English or Spanish were eligible. All participants provided informed consent and were emailed a password-protected, unique and time-sensitive survey link to minimize fraudulent participation. A total of 24 respondents were excluded from our sample due to inconsistent immigration-related responses (i.e., born in the US, reported having Deferred Action for Childhood Arrivals (DACA) but not meeting program requirements, etc.). To conduct a complete case analysis, we excluded an additional 16 participants who were missing a response for study items. The final analytic sample included 326 participants.
Participants were asked "If a vaccine becomes available for COVID-19, would you get it?" and could select definitely, probably, or definitely not. Reporting "definitely" indicated vaccine acceptance, while "probably" or "definitely not" were not vaccine accepting. Respondents' total immigration enforcement exposures were summed using affirmative responses to questions about their experiences, worries, and fears related to: (a) deciding not to apply for one or more needed non-cash government benefits because they were worried it would disqualify them or a family member from obtaining a green card or becoming a US citizen; (b) their own/someone they know experiences of immigration raids; (c) their own/someone they know experiences of detention or deportation by immigration authorities; (d) if they have experienced deportation proceedings; (e) restrained movement to avoid the police or immigration authorities; (f) restrained movement to avoid internal checkpoints or TSA; (g) surveillance by law enforcement; (h) being stopped for no good reason by law enforcement; (i) inquiries about their citizenship or legal status by a police officer or other law enforcement authority; (j) seen immigration authorities in their neighborhood; and (k) fear getting deported. Measures come from the Research on Immigrants Health and State Policy Study, which aimed to develop cumulative measures of immigration enforcement experiences, including surveillance, policing, and deportation (Young and Tafolla, 2021).
Other covariates included gender (female or male), race/ethnicity (Latino or Asian), DACA status (DACA or no DACA) and age (18-24, 25 and older). Respondents also reported their highest level of education, employment status, school enrollment, language spoken at home, and health insurance status. The insurance variable was derived from responses about health plans. Having a county health plan, Medi-Cal, school health plan, private/employee health plan or other health insurance was coded as having insurance.
Analyses
We investigated the distribution of study variables and examined bivariate relationships before fitting multivariable logistic regression models to assess the association between immigration enforcement exposures and vaccine intentions. To evaluate the robustness of our results, we used alternative immigration enforcement categorizations. Our findings held when the immigration enforcement exposure score was dichotomized, as reporting one or more exposures or cut off at the mean. Analyses were conducted using Stata 15, with statistical significance set at p <.05. Study materials and procedures were approved by the Institutional Review Board at the University of California, Los Angeles.
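The analysis itself was carried out in Stata 15. Purely as an illustration of the modelling step described above, a minimal sketch in Python (pandas/statsmodels) is given below; it is not the study's actual code, and every file, column and covariate name is a hypothetical placeholder.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical analytic file: one row per respondent.
df = pd.read_csv("brave_survey.csv")

# Cumulative immigration enforcement exposure score: sum of the 11 yes/no items (a)-(k).
items = ["exp_a", "exp_b", "exp_c", "exp_d", "exp_e", "exp_f",
         "exp_g", "exp_h", "exp_i", "exp_j", "exp_k"]
df["enforcement_score"] = df[items].sum(axis=1)

# Vaccine acceptance coded 1 = "definitely", 0 = "probably"/"definitely not".
model = smf.logit(
    "vaccine_accept ~ enforcement_score + C(gender) + C(race) + C(daca) + C(age_group)",
    data=df,
).fit()

# Adjusted odds ratios (aOR) with 95% confidence intervals.
ci = np.exp(model.conf_int())
ci.columns = ["2.5%", "97.5%"]
aor = pd.DataFrame({"aOR": np.exp(model.params)}).join(ci)
print(aor)
The dichotomized and mean-split versions of the exposure score used in the robustness checks would simply replace enforcement_score in the model formula.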
Discussion
In our sample of undocumented immigrants, 65% indicated vaccine acceptance, which is similar to other US reports that 67% would accept a COVID-19 vaccine were it to become available (Malik et al., 2020). Our finding that those who reported more immigration enforcement exposures were 12% less likely to accept the vaccine contributes to a growing literature demonstrating the harmful effects of the immigration enforcement regime on health behaviors and health outcomes (Hacker et al., 2015). Immigration enforcement may deter undocumented immigrants from accessing public health programs, including vaccines, through fear, government mistrust, and limiting access to healthcare services (Kerani and Kwakwa, 2018; Page and Flores-Miller, 2021). We also examined the association of each exposure individually with vaccine intentions. No individual exposure was statistically significantly associated with the outcome, suggesting that cumulative exposure to immigration enforcement as a system, across surveillance, profiling, and deportation, is a better predictor of the outcome than individual exposures.
Table 1. Distribution and multivariable associations between immigration enforcement score and COVID-19 vaccine intentions.
Other studies conducted prior to the COVID-19 vaccine rollout found that men vs. women and Asians vs. other races were more likely to accept the vaccine (Malik et al., 2020;Zintel et al., 2022). However, our study runs counter to these findings: women and Latinx undocumented were more likely to accept the vaccine compared to men and Asian undocumented, respectively. It is possible that the immigration enforcement tactics of surveillance, profiling, and detainment and deportation disproportionately targeting undocumented men results in mistrust and fear of public health interventions and programs.
Additionally, our exploratory data suggests that Asian undocumented may be less accepting of the COVID-19 vaccine. It should be noted that Latinx participants reported higher levels of immigration exposures compared to Asian participants. This is in line with other research and administrative data that finds that Latinx immigrants are more likely to be apprehended and deported compared to other groups (Golash-Boza and Hondagneu-Sotelo, 2013). However, our data also suggests that while Latinx immigrants are more likely to report higher numbers of immigration enforcement exposures, Asian immigrants also report high levels of exposures in their daily lives and this may contribute to their vaccine intentions. Other factors that may explain less acceptance among Asian undocumented may include recent dramatic increases in anti-Asian and xenophobic attacks during the pandemic, encouraged and reinforced through anti-Asian rhetoric and policies by US government officials (Gover et al., 2020). Racism and xenophobia inflicted by individuals and government entities may have negatively impacted the quality of relationships and trust in community members and institutions among undocumented Asians, thereby reducing social capital. Social capital is an important social determinant of health, health behaviors, and healthcare access, and has been shown to significantly correlate positively with fully vaccinated status and negatively with vaccine hesitancy (Kawachi, 1999;Ferwana and Varshney, 2021). Even prior to the pandemic, our past studies indicated undocumented Asians experience social isolation, discrimination, and intra-and inter-ethnic conflict to the detriment of their physical and mental health (Sudhinaraset et al., 1982;Ro et al., 2021). Thus, in addition to immigration enforcement, sharp increases in anti-Asian racism during the pandemic may have played a role in lowering vaccine acceptance among undocumented Asians. Future efforts should pay attention to how legal status and race may intersect to further compound disadvantage and inequities.
Although this is a small, cross-sectional study, COVID-19 data among immigrants, and in particular undocumented communities, is lacking. The online nature of the study may select for more connected or educated immigrants compared to those who do not have internet access. Related, recruitment through community partners may have biased our sample towards more connected immigrants; therefore, these results are likely to underestimate the associations between immigration enforcement and vaccine intentions. Another limitation of the study is that respondents were asked about their acceptance of a hypothetical vaccine. The response options for the hypothetical vaccine included definitely, probably, or definitely not; however, it did not include an "unsure" option, which may have not given participants their preferred option. However, only four participants did not answer this question in the entire sample suggesting participants were able to respond. Related, we decided to include the "probably" into the "non-accepting" group. It is possible that those who responded "probably" would have been "accepting" of the vaccines; however, this decision was made in order to more accurately estimate acceptance, as was used in other studies (Doherty et al., 2021). Future efforts should examine whether results change with the availability of vaccines.
Public health implications
To prevent further transmission of COVID-19, public health efforts are needed to address structural barriers to healthcare for the undocumented community, including increasing public trust in healthcare systems (Kerani and Kwakwa, 2018). This study suggests that trusted health officials should be present at vaccinations sites and undocumented immigrants should be assured that they will not be required to provide documents or proof of residence. Immigration enforcement policies, regardless of timing and proximity to public health intervention sites, may undermine trust in public health programs, including vaccine uptake.
Funding
This manuscript was made possible with the support of the UCLA Asian American Studies Center, California Asian Pacific Islander Legislative Caucus and the State of California, and University of California Office of the President Award Number R00RG2579. This manuscript was also funded in part by the National Institute on Minority Health and Health Disparities (NIMHD) Award Number R01M012292. | 2022-05-01T13:09:22.411Z | 2022-04-01T00:00:00.000 | {
"year": 2022,
"sha1": "78930c26e500e61b02ea0d1015e7ed4bc85029cb",
"oa_license": "CCBYNCND",
"oa_url": "https://doi.org/10.1016/j.pmedr.2022.101808",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "b4b6753d6550bed54b3e63055f4e39f916c0ecc4",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": []
} |
251075765 | pes2o/s2orc | v3-fos-license | Metabolomics Approach on Non-Targeted Screening of 50 PPCPs in Lettuce and Maize
The metabolomics approach has proved to be promising in achieving non-targeted screening for those unknown and unexpected (U&U) contaminants in foods, but data analysis is often the bottleneck of the approach. In this study, a novel metabolomics analytical method via seeking marker compounds in 50 pharmaceutical and personal care products (PPCPs) as U&U contaminants spiked into lettuce and maize matrices was developed, based on ultrahigh-performance liquid chromatography-tandem mass spectrometer (UHPLC-MS/MS) output results. Three concentration groups (20, 50 and 100 ng mL−1) to simulate the control and experimental groups applied in the traditional metabolomics analysis were designed to discover marker compounds, for which multivariate and univariate analysis were adopted. In multivariate analysis, each concentration group showed obvious separation from other two groups in principal component analysis (PCA) and orthogonal partial least squares discriminant analysis (OPLS-DA) plots, providing the possibility to discern marker compounds among groups. Parameters including S-plot, permutation test and variable importance in projection (VIP) in OPLS-DA were used for screening and identification of marker compounds, which further underwent pairwise t-test and fold change judgement for univariate analysis. The results indicate that marker compounds on behalf of 50 PPCPs were all discovered in two plant matrices, proving the excellent practicability of the metabolomics approach on non-targeted screening of various U&U PPCPs in plant-derived foods. The limits of detection (LODs) for 50 PPCPs were calculated to be 0.4~2.0 µg kg−1 and 0.3~2.1 µg kg−1 in lettuce and maize matrices, respectively.
Introduction
Pharmaceutical and personal care product (PPCP) contamination in animal-derived foods has attracted worldwide attention, and a series of formal regulatory documents on the maximum residue limits (MRLs) of PPCPs from different countries and organizations has been issued [1][2][3][4]. However, PPCP-induced contamination in plant-derived foods has not been fully addressed [5]. Previous studies [6][7][8][9][10][11][12][13][14] indicate that some plant-derived foods (e.g., corn, barley, pea, wheat, carrot, potato, cucumber and lettuce) can easily absorb PPCPs from soil with animal manure used as a fertilizer, which contains several kinds of commonly used antibiotics, e.g., tetracyclines, quinolones, sulfonamides and β-lactams, with their total concentration from the µg kg−1 to the mg kg−1 level in the plants [9,[15][16][17][18]. Due to the lack of evaluation standards of PPCPs in plant-derived foods, it is hard to directly judge whether the residue concentrations of PPCPs can induce adverse effects on human health. Referring to the regulatory files on MRLs of PPCPs in animal-derived foods [2,4], which proposed a concentration of 10 µg kg−1 as the threshold of safety for most PPCPs, it can be inferred that if the concentrations of PPCPs in plant-derived foods exceed 10 µg kg−1, a food safety risk is triggered. Therefore, the top priority is to develop reliable analytical methods for the investigation of PPCP residues in plant-derived foods.
Reagents and Materials
The reagents and materials used in this study included hydrochloric acid (HCl) and C18 powder (Sinopharm Chemical Reagent Co., Ltd., Shanghai, China); methanol and acetonitrile (HPLC grade, Merck, Darmstadt, Germany); formic acid (HPLC grade, Shanghai ANPEL Laboratory Technologies Inc., Shanghai, China); filter membrane (0.22 µm, Agilent Technologies, Singapore, MI, USA); ultrapure water (Milli-Q ultrapure water system, Merck, Darmstadt, Germany); and ciprofloxacin-d8 hydrochloride solution (100 µg mL−1 in methanol, First Standard, Ridgewood, NY, USA). Analytical standard compounds for 50 PPCPs (purity > 98.3%) were obtained from First Standard (Ridgewood, NY, USA), Sigma (Alexandria, VA, USA), TRC (Toronto, ON, Canada) and Dr. Ehrenstorfer (Augsburg, Germany). More details on the 50 PPCPs are shown in Table 1.
Solution Preparation
A total of 50 PPCPs were separately prepared with methanol at 100 µg mL −1 , 1 mL of which was withdrawn, mixed together and further diluted with methanol to obtain a 1 µg mL −1 solution. Then, 100 ng mL −1 ciprofloxacin-d8 methanol solution was prepared by diluting its 100 µg mL −1 solution. A 0.1 mol L −1 Na 2 EDTA-Mcllvaine buffer solution was prepared with Na 2 HPO 4 (5.5 g), citric acid (12.9 g) and Na 2 EDTA (37.2 g) dissolved in 1 L pure water, which was further adjusted to pH 4.0 with 0.1 mol L −1 HCl or NaOH solution.
Sample Preparation and Pretreatment Process
(a) Lettuce sample was cut into small pieces, then ground into batter by tissue homogenizer; (b) 2.0, 5.0 and 10.0 g lettuce batters, together with one-to-one corresponding 20, 50 and 100 µL of 50 PPCPs mixed solutions (1 µg mL −1 ) were poured into 50 mL polypropylene centrifuge tubes. To calibrate the recovery during the sample pretreatment process, ciprofloxacin-d8 methanol solution (0.5 mL, 100 ng mL −1 ) as recovery internal standard was further added, as adopted in previous studies [30][31][32]; (c) 5 mL Na 2 EDTA-Mcllvaine buffer solution (0.1 mol L −1 ) was dumped into the tube, vortexed for 1 min, then 20 mL 1% (V/V) formic acid/acetonitrile solution was added further, stirring for 1 min. An extraction salt package (10.0 g Na 2 SO 4 + 2.0 g NaCl) was added for stratification under salting out after the solution standing for 10 min, centrifuging at 4500 r min −1 for 5 min; (d) then, after transferring all the supernatant into new 50 mL polypropylene centrifuge tubes, adding 100 mg C18 powder, vortexing for 1 min, centrifuging at 4500 r min −1 for 3 min, the solution was extracted to another 50 mL centrifuge tube, dried with N 2 blowing by nitrogen blowing apparatus (N-EVAP-112, Organomation, Berlin, MA, USA), and redissolved in 1 mL 40% (V/V) methanol 0.1% formic acid/water solution, vortexed for 1 min; (e) then, filtered with a 0.22 µm filter membrane, the sample solutions of 50 PPCPs at the theoretical concentrations of 20, 50 and 100 ng mL −1 were prepared. Each concentration experiment was repeated nine times.
Analytical Method
The 50 PPCPs and ciprofloxacin-d8 were analyzed on a quadrupole/electrostatic field orbitrap LC-MS/MS system (Q Exactive Plus, Thermo Fisher Scientific Inc., Waltham, MA, USA) under the positive mode of electrospray ion (ESI) source. Components in the sample solution underwent separation within an Accucore RP-MS column (100 × 2.1 mm, 2.6 µm particle diameter, Thermo Fisher Scientific Inc., Waltham, MA, USA), with injection volume of 10 µL. Next, 0.1% (V/V) formic acid/water and 0.1% (V/V) formic acid/methanol solutions were prepared as the mobile phase A and B, respectively, with flow rate of 0.3 mL min −1 . In consideration of the matrix complexity of lettuce and maize, there may be some impurities not eluted from the LC-MS/MS system in a relatively short time (738 s for the last eluted target PPCP in this study) designed only for 50 PPCPs, leading to the potential disruption for the elution and analysis of the next sample. Therefore, a longer elution program was designed as follows: gradient started from 5% B, kept for 2 min, then increased to 30% B in 1 min, at a duration of 7 min, further increased to 90% B in 1 min, holding on 25 min, finally decreased to 5% B in 1 min, equilibrating for 16 min. The oven temperature was set at 40 • C. Other parameter settings were as follows: heating and capillary temperature 320 • C; lens and spray voltage 50 and 3200 V, respectively; auxiliary and sheath gas N 2 , with flow rate at 10 and 40 arb, respectively; scan mode: full-scan/data-dependent two-stage scanning; MS parameters: full-scan resolution 70,000, maximum dwell time 100 ms, AGC target 1 × 10 6 , m/z scan range 100~1000; MS/MS parameters: resolution 17,500, maximum dwell time 50 ms, AGC target 2 × 10 5 .
LC-MS/MS output results of 50 PPCPs and ciprofloxacin-d8 were analyzed by Trace Finder 3.3 software, with screening conditions as follows: (a) for primary parent ion, signal to noise ratio 5.0, response intensity threshold 10,000, and mass error 5 ppm; (b) for secondary fragment ions, minimum matching number of ion 1, response intensity threshold 10,000, and mass error 5 ppm. On the basis of the peak area of the primary parent ion, ciprofloxacin-d8 was quantified with standard curve for recovery calculation.
Metabolomics Data Processing
LC-MS/MS was operated in full scan mode with RAW-formatted files as the direct output, which underwent conversion to corresponding mzXML-formatted files via the ProteoWizard software [35]. These converted files are suitable for upload to the Workflow4Metabolomics (W4M) platform (https://workflow4metabolomics.usegalaxy.fr/, accessed on 20 November 2021) for metabolomics analysis [36]. After peak detection, alignment and retention time calibration, plus data normalization, centralization, scaling and transformation performed on the W4M platform, the data matrix was obtained with variables and samples arranged along the abscissa and ordinate, respectively [36,37]. Each variable contains a series of information, e.g., molecular weight and retention time, with every marker compound corresponding to its unique variable; that is to say, the process of pursuing marker compounds is actually a process of pursuing eligible variables. Multivariate statistical analysis including principal component analysis (PCA) [38][39][40] and orthogonal partial least squares discriminant analysis (OPLS-DA) [41,42] was performed in SIMCA 14.1 software [43] after importing the data matrix. A permutation test with 200 iterations was employed for over-fitting judgement of the OPLS-DA model [43,44]. Other parameters to screen marker compound candidates include the absolute value of variable confidence in the S-plot [45] and variable importance in projection (VIP) [43,44,46], with thresholds of >0.9 and >1, respectively. After this, eligible marker compound candidates from the 20 and 100 ng mL−1 groups can both be obtained, and only the candidates overlapping between the two groups, representing significantly low and high concentrations in the corresponding 20 and 100 ng mL−1 groups, were further investigated by pairwise t-test [47][48][49] in SPSS Statistics V17.0 software and by fold change judgement for the univariate analysis. Univariate analysis is simple, intuitive and easy to understand. It was used to quickly investigate the differences of marker compound candidates in different groups. To more rapidly verify the identity of marker compounds on behalf of 50 PPCPs, we directly compared the precise molecular weight (<5 ppm in absolute value of error), retention time and the adduct structure of marker compounds with those of the authentic 50 PPCPs (Table 1).
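As a minimal sketch of the file-conversion step, the following Python snippet batch-converts vendor RAW files with ProteoWizard's msconvert command-line tool; the directory names are placeholders, msconvert is assumed to be installed and on the system path, and any additional filters (e.g., peak picking) would depend on the downstream workflow.
import subprocess
from pathlib import Path

raw_dir = Path("raw_files")       # hypothetical folder holding the Thermo .raw files
out_dir = Path("mzxml_files")
out_dir.mkdir(exist_ok=True)

for raw_file in sorted(raw_dir.glob("*.raw")):
    # Convert each vendor RAW file into mzXML before uploading to the W4M platform.
    subprocess.run(["msconvert", str(raw_file), "--mzXML", "-o", str(out_dir)], check=True)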
Data Preprocessing
As indicated in Figure 1, although only the portion of the total ion chromatograms between 0 and 900 s is shown, during which all 50 PPCPs were eluted, obvious differences in peak intensity were already observed among the three concentration groups, implying the possibility of seeking marker compounds among groups. Variables whose peak-intensity relative standard deviation exceeded 30% in the QC and the three concentration groups were filtered out as invalid [50], yielding a final 6512 × 39 data matrix for further analysis.
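The 30% relative-standard-deviation filter can be expressed compactly; the sketch below uses pandas and assumes (as an assumption, not taken from the original) a peak table with samples as rows and variables as columns, a separate group label per sample, and the filter applied within every group. The file and column names are hypothetical.
import pandas as pd

# Hypothetical peak table: rows = samples, columns = variables (features); a separate series
# gives each sample's group label (QC, 20, 50 or 100 ng/mL).
peaks = pd.read_csv("peak_table.csv", index_col=0)
groups = pd.read_csv("sample_groups.csv", index_col=0)["group"]

def rsd(x):
    # Relative standard deviation in percent.
    return x.std(ddof=1) / x.mean() * 100.0

# RSD of every variable within each group; keep only variables whose RSD stays at or below 30%.
rsd_by_group = peaks.groupby(groups).agg(rsd)
keep = (rsd_by_group <= 30.0).all(axis=0)
filtered = peaks.loc[:, keep]
print(filtered.shape)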
PCA Analysis
As Taguchi [51] pointed out, PCA can make a natural classification for sample groups and eliminate the extreme data without knowing their categories, thus PCA can be used in metabolomics to assess the data quality and to identify outliers [38][39][40]. As indicated in Figure 2, no extreme data and outliers were observed. Samples at the same concentration gathered together, indicating the good classification of groups. Obvious separation among three concentration groups indicates the existence of major discrepancies, further paving the way to seek marker compounds from different groups.
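Although PCA was run in SIMCA in this study, the same score plot can be reproduced with open-source tools; the sketch below is illustrative only, the file name is a placeholder, and unit-variance (autoscaling) pretreatment is assumed here even though SIMCA also offers other scalings such as Pareto.
import pandas as pd
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

# Hypothetical filtered intensity table (rows = samples, columns = variables).
filtered = pd.read_csv("peak_table_filtered.csv", index_col=0)

# Mean-centering and unit-variance scaling of every variable.
X = StandardScaler().fit_transform(filtered.values)

pca = PCA(n_components=2)
scores = pca.fit_transform(X)
print(pca.explained_variance_ratio_)
# Plotting the two score vectors, colored by concentration group, gives a score plot of the
# kind shown in Figure 2.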
OPLS-DA Analysis
Theoretically speaking, the peak intensities of variables ought to increase with their rising concentrations, i.e., 20 and 100 ng mL −1 groups should present the minimum and maximum peak intensities, respectively. However, the reality may be different, due to the discrepancies in sample recoveries. Previous studies [30][31][32] proposed deuterated antibiotics as recovery internal standards to correct losses of PPCPs during sample preparation. In consideration of this, ciprofloxacin-d8 (parent ion m/z 340.19132; fragment ions m/z 296.20156, 253.15933 and 239.14367; retention time 6.73 min) was employed here to eliminate the peak intensity errors of variables induced by disparate recoveries of PPCPs during the pretreatment process. As shown in Table S1 (Supplementary Materials), the recoveries of ciprofloxacin-d8 were calculated to be 80.1~85.9%, 80.3~86.2% and 81.6~87.7% in the 20, 50 and 100 ng mL −1 groups, respectively, based on the ciprofloxacin-d8 standard curve solutions (100, 50, 25, 10 and 5 ng mL −1 ) prepared in blank lettuce extract solution. After this, the recoveries of ciprofloxacin-d8 were all calibrated to 100% by multiplying a corresponding calibration coefficient, with which the peak intensities of ciprofloxacin-d8 were also calibrated, together with peak intensities for all the variables.
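The recovery correction described above amounts to scaling every variable in a sample by the same factor that would bring that sample's ciprofloxacin-d8 recovery to 100%; a minimal sketch follows, with hypothetical file and column names.
import pandas as pd

# Hypothetical inputs: the peak-intensity table (rows = samples, columns = variables) and the
# ciprofloxacin-d8 recovery (%) determined for each sample from its standard curve.
intensities = pd.read_csv("peak_table_filtered.csv", index_col=0)
recovery_pct = pd.read_csv("cipro_d8_recovery.csv", index_col=0)["recovery_pct"]

# Coefficient that brings each sample's internal-standard recovery to 100%.
coeff = 100.0 / recovery_pct

# Apply the same per-sample coefficient to every variable's intensity in that sample.
calibrated = intensities.mul(coeff, axis=0)
calibrated.to_csv("peak_table_calibrated.csv")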
As shown in Figure 3, we can observe the separation of two camps on the first principal component axis. One camp represents the specific concentration group (green part), and the other camp represents the remaining two groups (blue part), indicating the existence of variables with significant differences between the two camps. Each point in the S-plot plots (Figure 4) represents a variable; the further a point lies from the origin along the X- and Y-axes, the greater the contribution and the higher the confidence level of that variable with respect to the difference. Therefore, the points at the two ends of the 'S' can be deemed the most differentiating components. In the S-plot analysis, an absolute value of confidence > 0.9 has been proposed to screen variables as marker compound candidates [45]; candidates at significantly low and high concentrations should be searched at the right and left ends of the S-plot plots in Figure 4a,b, respectively. R2Y and Q2 are common parameters to describe the interpretation level of the model in the Y-axis direction and the prediction level of the model [52,53], respectively. If R2Y and Q2 are both close (or equal) to 1, the OPLS-DA models are not susceptible to over-fitting. As can be seen from Figure 5, R2Y and Q2 values were no less than 0.991, indicating good reliability, good predictability and no over-fitting for all OPLS-DA models. The VIP > 1 criterion was then applied to further screen marker compounds. Eventually, marker compounds on behalf of 50 PPCPs were all screened out, as shown in Table 2. Negligible concentrations (<0.1 ng mL−1) of 50 PPCPs in the blank lettuce extract solution were obtained by the metabolomics analysis, which eliminates the interference of inherent (rather than spiked) PPCP residues in the lettuce matrix in seeking marker compounds.
Univariate Analysis
After multivariate analysis, a pairwise t-test [47][48][49] was firstly employed to examine whether marker compounds from a specific concentration group presented significant differences in peak intensity with those from other two groups. Pairwise t-test, as a reliable statistical test method, was performed to calculate p values between the two concentration groups and the p < 0.05 observed in this study indeed showed the existence of significant differences among groups. Previous studies [29,54] also adopted fold change of concentration > 2 to discern variables with high contrast among groups as marker compounds. Herein, marker compounds on behalf of 50 PPCPs all presented fold change values above 2, supporting the validity of marker compounds obtained with our analytical strategy.
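For illustration, the group-wise comparison and fold-change screen described above can be sketched as follows. The original tests were run in SPSS; the Python version below is a sketch only, the file and column names are hypothetical, and a Welch two-sample t-test between the 100 and 20 ng mL−1 groups is assumed for the between-group comparison.
import pandas as pd
from scipy import stats

calibrated = pd.read_csv("peak_table_calibrated.csv", index_col=0)   # hypothetical
groups = pd.read_csv("sample_groups.csv", index_col=0)["group"]

low = calibrated[groups == "20"]      # 20 ng/mL samples
high = calibrated[groups == "100"]    # 100 ng/mL samples

records = []
for var in calibrated.columns:
    # Welch's two-sample t-test between the high and low concentration groups.
    _, p_value = stats.ttest_ind(high[var], low[var], equal_var=False)
    fold_change = high[var].mean() / low[var].mean()
    records.append((var, p_value, fold_change))

results = pd.DataFrame(records, columns=["variable", "p_value", "fold_change"])
markers = results[(results["p_value"] < 0.05) & (results["fold_change"] > 2.0)]
print(len(markers))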
The limits of detection (LODs) for 50 PPCPs were also considered here. Firstly, a 2.0 g blank lettuce sample was used to prepare an extract solution (1 mL) after the same pretreatment mentioned above. Then, a 20 ng mL−1 PPCPs solution was obtained by diluting their mixed methanol solution (20 µL, 1 µg mL−1) with 1 mL blank lettuce extract solution. The experiments were repeated in septuplicate to obtain seven samples, which underwent the same metabolomics analysis to obtain the peak intensities of 50 PPCPs. For each PPCP, the 20 ng mL−1 concentration level was taken to correspond to the average peak intensity of the seven samples; the concentration (unit: ng mL−1) of each PPCP in a given sample was therefore calculated as its own peak intensity × 20/average peak intensity, from which the standard deviation across the seven samples was obtained. According to the method proposed by the US Environmental Protection Agency [55], the LOD values for 50 PPCPs were calculated to be 0.4~2.0 µg kg−1, as shown in Table 2.
Note to Table 2: (a) the two VIP values are from the 100 and 20 ng mL−1 groups, respectively; (b) the two-group coordinate values are from the 100 and 20 ng mL−1 groups, respectively; (c) mass error (ppm) = (extracted molecular weight from the W4M platform − extracted molecular weight from LC-MS/MS) × 10^6/extracted molecular weight from LC-MS/MS.
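Assuming the US EPA procedure referenced in [55] is the classic method-detection-limit approach (LOD = one-sided 99% Student's t for n − 1 degrees of freedom multiplied by the replicate standard deviation), the calculation for one compound can be sketched as follows; the replicate concentrations below are hypothetical numbers used only to illustrate the arithmetic.
import numpy as np
from scipy import stats

# Back-calculated concentrations (ng/mL) of one PPCP in the seven replicate spiked extracts.
replicates = np.array([19.1, 20.8, 21.3, 18.7, 20.2, 19.6, 20.9])

s = replicates.std(ddof=1)                          # sample standard deviation
t_99 = stats.t.ppf(0.99, df=replicates.size - 1)    # one-sided 99% Student's t (about 3.143 for n = 7)
lod_ng_per_ml = t_99 * s

# With a 2.0 g sample and a 1 mL final extract, ng/mL in the extract equals ng per 2.0 g of sample,
# so dividing by 2.0 gives ng/g, i.e., µg/kg.
lod_ug_per_kg = lod_ng_per_ml * 1.0 / 2.0
print(round(lod_ug_per_kg, 2))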
Method Applicability in Maize Matrix
Maize, as the primary food crop in China, has been proved to easily absorb PPCPs from the soil [19]; therefore, it was selected as another plant matrix, different from vegetables, to investigate the applicability of the developed metabolomics-based screening method. A maize sample was purchased from the local market and ground into a powder. The powder then underwent the same pretreatment process described above after being spiked with the 50 PPCPs at 10 µg kg−1. Ciprofloxacin-d8 methanol solution (0.5 mL, 100 ng mL−1) was added for recovery calibration, with the results shown in Table S2. The same metabolomics analysis was performed, as indicated in Figures S1-S5 (Supplementary Materials). Marker compounds representing the 50 PPCPs were also discovered (Table S3), proving the good applicability of the metabolomics analytical method to non-targeted screening of various PPCP residues in different plant matrices. As can be seen from Table S3, the LOD values for the 50 PPCPs in the maize matrix were calculated to be 0.3~2.1 µg kg−1.
Real Sample Test
We collected lettuce and maize samples from six administrative districts of Dalian City (Zhongshan, Xigang, Shahekou, Gaoxin, Ganjingzi and Jinpu), with two sampling points in each district. A total of 12 fresh lettuce samples were purchased from local farmers' markets and immediately delivered to the laboratory for testing. The same process was applied to the maize samples. After the pretreatment experiments and metabolomics analysis, only one lettuce sample, from Jinpu District, was found to contain enrofloxacin, at a content of 17.4 µg kg−1. No PPCPs were detected in the other samples.
Although the detection rate of PPCPs across all the samples was only 1/24, and seemingly only one district is vulnerable to PPCP contamination, the results are sufficient to show that our proposed method is competent for the screening of PPCPs in plant-derived foods. These spot-check results also alert us that a PPCP-induced safety risk in plant-derived foods is on the horizon.
Previous studies have successfully applied non-targeted screening methods on the basis of metabolomics to pesticide residues in plant matrices, e.g., orange juice [28] and tea [29], providing the feasibility to screen PPCPs residues in plant-derived foods. In light of the otherness of analytes, the reported methods may not be completely applied to our study. Herein, we firstly considered spiked contaminants to be marker compounds and then implemented a marker compound-seeking analytical strategy of metabolomics to finish the non-targeted screening of contaminants in plant-derived foods, which is the biggest difference from previous studies [24,28,29]. Despite only 50 PPCPs and two plant matrices considered here, the developed method still has wide applicability due to the representation of these PPCPs and universal consumption of lettuce and maize.
Extensive use of PPCPs in livestock farming raises the risk that these compounds reach soil where animal waste is applied as fertilizer [9,56], leading to the uptake of PPCPs from soil by food plants [57][58][59][60][61][62][63][64]. Compared with other plants, leafy vegetables generally show higher PPCP detection rates and concentrations [60,64] and therefore deserve particular attention with respect to food safety risk. Although no official documents explicitly set MRLs for PPCPs in plant-derived foods, safety thresholds can be inferred from the corresponding MRLs in animal-derived foods [1][2][3][4]. In contrast to the abundance of analytical methods for PPCPs in animal-derived foods [65][66][67][68][69], methods for detecting PPCPs in plant matrices remain scarce. To better cope with complex PPCP contamination in plants, the top priority is a high-throughput screening method that can accurately, rapidly and comprehensively determine which PPCPs are present in foods. With this in mind, we developed this novel metabolomics-based analytical method to achieve non-targeted screening of PPCPs in plant-derived foods.
Conclusions
The newly developed metabolomics analytical method was successfully applied to non-targeted screening of 50 PPCP residues in lettuce and maize matrices. We intentionally designed three concentration groups of PPCPs (20, 50 and 100 ng mL−1) to mimic the experimental and control groups used in traditional metabolomics workflows and thereby search for marker compounds representing the 50 PPCPs. The resulting analysis involves less manual intervention, a more concise workflow and higher screening efficiency. To our knowledge, this is the first marker compound-seeking metabolomics strategy implemented for non-targeted screening of PPCPs in plant-derived foods. Given the lack of binding legal documents on MRLs for PPCPs in plant matrices, together with the continual development and application of new PPCPs in animal husbandry, it is urgent to establish legal rules controlling MRLs of PPCPs in plant-derived foods; otherwise, a serious food safety issue may develop. To date, plant uptake from PPCP-contaminated soil is the known source of PPCP residues in plant-derived foods. Whether other routes can also cause PPCP accumulation in foods is not yet clear; if so, this would increase the complexity of PPCP contamination and, worse, the exposure risk to human health via the food chain. We therefore advocate early attention to this issue to help defuse the potential crisis.
"year": 2022,
"sha1": "dda42af72912517c2dcae9dad9d1c8e4d9bbc6d6",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/1420-3049/27/15/4711/pdf?version=1658571157",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "e2656cf4235bc0a5c0d49221b8919a60ef9b3cef",
"s2fieldsofstudy": [
"Environmental Science",
"Chemistry"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Eating, Sleeping, Consoling for Neonatal Opioid Withdrawal (ESC-NOW): a Function-Based Assessment and Management Approach study protocol for a multi-center, stepped-wedge randomized controlled trial
Leslie W. Young, Songthip Ounpraseuth, Stephanie L. Merhar, Alan E. Simon, Abhik Das, Rachel G. Greenberg, Rosemary D. Higgins, Jeannette Lee, Brenda B. Poindexter, P. Brian Smith, Michele Walsh, Jessica Snowden, Lori A. Devlin and for the Eunice Kennedy Shriver National Institute of Child Health and Human Development Neonatal Research Network and the NIH Environmental influences on Child Health Outcomes (ECHO) Program Institutional Development Awards States Pediatric Clinical Trials Network
Background and rationale
Public health impact
Increased opioid use has resulted in a dramatic increase in the number of infants born with in utero opioid exposure requiring management for neonatal opioid withdrawal syndrome (NOWS) [1][2][3][4]. Despite the significance of this problem, numerous critical gaps remain in our knowledge of best practices for the identification and management of infants with NOWS, as well as in our understanding of these infants' outcomes [5,6]. The opioid epidemic particularly impacts the rural and underserved communities represented by the IDeA States Pediatric Clinical Trials Network (ISPCTN) and participating Neonatal Research Network (NRN) sites, which makes our networks well positioned to address these critical gaps and improve the care of infants with NOWS.
Background
Scope of the problem
The medical and non-medical use of opioids has increased substantially among women of childbearing age during the last decade [7]. In the United States (US), medical professionals wrote and dispensed 259 million opioid prescriptions in 2012 alone, an average of 82.5 opioid prescriptions for every 100 persons [8]. Approximately 28% of privately insured and 39% of Medicaid-enrolled women between 15 and 44 years of age filled an opioid prescription annually between 2008 and 2012 [9]. Every 3 minutes, a woman seeks care in an emergency department for prescription opioid misuse. In addition, illicit opioid abuse is also increasing dramatically [7]. Nearly 600,000 Americans reported a substance-use disorder involving heroin in 2015, with the strongest risk factor for heroin use being a history of prescription opioid misuse [3,10]. The national rate of opioid use disorders among new mothers quadrupled between 1999 and 2014, increasing from 1.5 to 6.5 per 1000 deliveries [11,12].
The increased use and misuse of opioids during pregnancy has directly resulted in a 5-fold increase in the incidence of NOWS between 2004 and 2014 [13]. A retrospective analysis of a National Inpatient Sample showed that, among infants covered by Medicaid, the incidence of NOWS increased from 2.8 to 14.4 per 1000 births during this same period [13]. Additionally, analysis of an administrative database of 23 hospitals from 2013-2016 demonstrated a continued increase in the incidence of NOWS, to 20 per 1000 live births [14]. Significant regional variation in the incidence of NOWS has been noted, with the highest rates seen in the Northeast and Southeast regions of the United States [1]. Researchers have found an increased incidence of NOWS among infants born to mothers who have high rates of long-term unemployment or who live in mental health shortage areas [15]. Rural areas are disproportionately affected by NOWS: the number of hospital deliveries complicated by maternal opioid abuse grew twice as fast in rural communities as in urban communities between 2004 and 2013 [16]. The proportion of infants with NOWS born into rural communities increased from 12.9% in 2003 to 21.2% in 2013 [16]. Therefore, improving care for infants with NOWS will particularly impact the rural areas served by many ISPCTN and NRN sites. Additionally, compared with their urban peers, rural infants affected by perinatal opioid misuse are more likely to come from lower-income families with public insurance [16]. Nationally, state Medicaid programs enroll 60% of mothers with perinatal substance use and more than 80% of infants with NOWS [1,2].
Recognition and assessment of neonatal opioid withdrawal syndrome
Some infants with in utero opioid exposure may have mild signs of NOWS that do not significantly impact the infant's ability to feed, sleep, and function, while others may have more severe signs that require pharmacologic therapy to avoid negative effects on growth and development [17]. Physicians use observer-rated scales in clinical practice to quantify the severity of withdrawal and to guide pharmacotherapy [4]. Yet current scales have not undergone rigorous instrument development and validation [18,19]. Ninety-five percent of institutions in the United States use the Finnegan Neonatal Abstinence Scoring Tool (FNAST), with its various modifications [20]. Preliminary data from the ACT NOW Current Experience Study, a chart review conducted at 25 sites within the ISPCTN and 5 sites within the NRN, found that all 30 participating sites used the FNAST, or a modification of it, for the assessment of infants with NOWS as part of usual institutional care. Loretta Finnegan developed the FNAST in 1975, and medical personnel currently use this and several modified versions. The tool was initially found to have an inter-rater reliability (IRR) of 0.82 (0.75-0.96), but it has not been subsequently validated for the evaluation of infants with NOWS, although researchers have studied normative values in newborns unexposed to maternal substances [21]. Researchers and clinicians remain concerned about the length of the tool [22,23], its inherent subjectivity [24], and the need to disturb infants for formal assessments [25]. In addition, investigators have concerns that the FNAST and its modifications may overestimate the need for pharmacologic therapy, as the formal score incorporates all signs of withdrawal, including those that may not be clinically significant. This overestimation has been linked to increased length of hospital stay and hospital costs [26].
The ESC Care Tool is an alternative assessment and management tool, developed and subsequently implemented at several sites as part of quality improvement (QI) initiatives, based on the original ESC approach developed by Grossman and colleagues at Yale [25]. The ESC Care Tool uses a non-invasive, simplified, function-based assessment that evaluates the infant on his/her ability to eat, sleep, and be consoled. The tool's design places continued emphasis on the role of the family/caregiver in the assessment and care of the infant and on non-pharmacologic care as the first-line treatment for infants with NOWS. If an infant is able to feed effectively within 10 minutes of showing hunger (breastfeeds well for 10 minutes or takes 10 mL [or an age-appropriate volume] by an alternative feeding method), to sleep undisturbed for 1 hour or longer, and to be consoled within 10 minutes, pharmacologic treatment is not initiated or escalated. If the care team assesses that the infant is having difficulties in one of these areas related to NOWS, the care team first attempts to optimize non-pharmacologic interventions. If these attempts are unsuccessful, the care team will initiate or escalate pharmacologic therapy.
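As a compact illustration of the decision rule just described, a minimal sketch in Python; the field names are hypothetical, and this is an illustration of the logic, not the actual ESC Care Tool:

```python
# Minimal sketch of the ESC decision rule summarized above. Thresholds follow
# the description in the text; this is not the actual ESC Care Tool.
from dataclasses import dataclass

@dataclass
class ESCAssessment:
    feeds_within_10_min: bool     # feeds effectively within 10 min of hunger cues
    sleeps_one_hour: bool         # sleeps undisturbed >= 1 hour
    consoled_within_10_min: bool  # consolable within 10 minutes

def next_step(a: ESCAssessment, non_pharm_optimized: bool) -> str:
    if a.feeds_within_10_min and a.sleeps_one_hour and a.consoled_within_10_min:
        return "continue care; do not initiate/escalate pharmacologic treatment"
    if not non_pharm_optimized:
        return "optimize non-pharmacologic interventions first"
    return "initiate or escalate pharmacologic therapy"

print(next_step(ESCAssessment(True, False, True), non_pharm_optimized=False))
```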
Initial eating, sleeping, consoling approach
The ESC approach, which emphasizes parental involvement, simplifies the assessment of infants with NOWS, and focuses interventions on non-pharmacologic therapies, began its evolution at Yale-New Haven Children's Hospital over a 5-year period of QI work. During this time, the proportion of infants prenatally exposed to methadone who received pharmacologic treatment for NOWS decreased significantly, from 98% (54 of 55 infants) in the baseline period (January 2008-February 2010) to 14% (6 of 44 infants) in the post-intervention period (May 2015-June 2016), P < 0.001. The average length of stay (LOS) for these infants also decreased significantly, from 22 to 6 days (P < 0.001) [25]. There were no reported seizures during the initial birth hospitalization and no readmissions within 30 days of discharge related to signs of withdrawal in the post-intervention group. Although the results of this QI work appear impressive, it is unclear how generalizable they are, as the pre-intervention rate of pharmacologic treatment (98% of methadone-exposed infants) was much higher than national estimates [4]. Additionally, many infants with NOWS are exposed to opioids other than methadone (e.g., buprenorphine and illicit opioids).
On direct comparison, Yale-New Haven's ESC approach, studied as a QI measure, appears to trigger initiation of opioid replacement therapy for significantly fewer infants than the FNAST approach. The Yale group, following their transition to ESC-based assessments, completed a retrospective comparison of treatment decisions for 50 consecutive opioid-exposed infants (March 2014-August 2015) [26]. These infants had FNAST scores recorded every 2 to 6 hours, but clinical personnel managed them based on their ESC assessments alone. Management decisions based on the ESC assessment resulted in morphine initiation for 6 infants (12%), compared with 31 infants (62%) whom medical professionals would have treated using the FNAST (P < 0.001). Additionally, using the ESC-based assessments, medical personnel initiated or increased morphine on 8 patient days (3%), compared with 76 patient days (26%) predicted using the FNAST (P < 0.001) [26]. Again, the study reported no readmissions or adverse events (AEs).
Eating, sleeping, consoling care tool development
Other groups have subsequently worked to standardize implementation of the assessment and management components of the ESC care approach through the development and testing of a formal ESC Care Tool. Initial evaluation of the assessment component of the ESC Care Tool, using standardized training and simulated case scenarios, has demonstrated high inter- and intra-rater reliability [27]. Training in the use of the ESC Care Tool and the overall care approach, with standardized training materials, continues to be evaluated and improved, allowing for feasible implementation even in small community hospitals. Faculty at Children's Hospital at Dartmouth-Hitchcock Medical Center, Boston Medical Center, and Yale-New Haven Children's Hospital collaborated to develop training materials, including an instructional manual, the ESC Care Tool with definitions, a Newborn Care Diary, an ESC training video, and written and videotaped case scenarios with an answer key. Sites within the Northern New England Perinatal Quality Improvement Network are currently using these materials to facilitate training as part of a network-wide QI initiative.
Physicians at one of the institutions involved in these development efforts recently published their QI results following implementation of the ESC care approach. This institution utilized a pilot version of the ESC Care Tool and showed findings more modest than, but consistent with, those at Yale: a decrease in pharmacologic treatment from 87% to 40% and a reduction in LOS from 17 to 11 days, with no AEs noted [28].
Further study
Although outcomes following implementation of the ESC care approach, inclusive of the ESC Care Tool, appear promising and initial accounts suggest that it is safe, this care approach must be rigorously studied to establish the safety, efficacy, and generalizability of its use in the care of infants with NOWS. Reports on the ESC care approach to date have come from hospitals where the majority of mothers are compliant with medication-assisted treatment and highly motivated to care for their infants. Furthermore, the potential effects of care provided under the ESC approach on infant and family well-being after discharge are unknown and important to assess [5,29]. In the proposed trial, comparison of the short- and long-term outcomes of infants managed with the ESC care approach versus those managed with usual care will move us closer to an evidence-based approach for the evaluation and management of infants with NOWS, thus meeting a top research priority in the field [5,6].
Hypotheses
Primary hypothesis
Among infants evaluated for NOWS, the ESC care approach will reduce the length of time until infants are medically ready for discharge by an average of 4 days, compared to usual institutional care with the FNAST or a modification thereof.
Secondary hypothesis
Among infants evaluated for NOWS, use of the ESC care approach will result in an improvement in infant neurobehavioral functioning and family well-being, when compared to usual institutional care with the FNAST or a modification thereof.
Justification of hypotheses
We hypothesize that use of the ESC care approach for the evaluation and management of infants with NOWS will safely reduce the average length of time until infants are medically ready for discharge, compared with usual care with the FNAST or a modification thereof. We selected the primary outcome, average length of time until infants are medically ready for discharge, because infants may remain in the hospital beyond this point due to social factors, and because of the previously described links between a reduction in hospital stay and the following:
• Improved maternal and infant attachment/bonding [30,31]
• Decreased hospital complications
• Increased benefit to society through reduced healthcare costs [2,13,32]
Additionally, we hypothesize that use of the ESC care approach will have minimal to no impact on infant safety, while resulting in the following outcomes:
• Reduction in the need for initiation of opioid replacement therapy (i.e., morphine, methadone, or buprenorphine)
• Decrease in total postnatal opioid exposure
• Improvement in the timeliness of initiation of opioid replacement therapy, when required
• Decrease in the need for adjuvant therapy
• Increase in the proportion of infants who directly breastfeed
• Increase in the proportion of infants receiving their mothers' own breastmilk
We also hypothesize that use of the ESC care approach will improve postnatal attachment and bonding and will enhance infant well-being, neurobehavioral functioning, and development compared to usual care. Further, we hypothesize that use of the ESC care approach will enhance maternal well-being and the family environment after discharge. An important component of the ESC care approach is the reported fostering of a collaborative relationship between the primary caregiver(s) and the inpatient clinical team through co-assessment of the infant's withdrawal severity and shared development of the treatment plan. Interviews conducted with families as part of the QI implementation of the ESC Care Tool consistently suggest that this element may reduce the social and emotional impact of the infant's hospitalization on the family. However, while many families expressed feeling like an integral part of their infant's care team and reported decreased anxiety and reduced stigma during the initial birth hospitalization [33], these families were poised to actively participate in the care of their infants, and such results may not be consistent across all families/caregivers. Thus, we must consider that families/caregivers who are less well poised to actively participate in the care of their infants may experience more stress if their infants are discharged home earlier.
Our assessment of key markers of infant and family well-being in the subpopulation of infants whose caregiver(s) provide informed consent will allow further insight into safety. It will also provide an opportunity to examine not only often-assessed intermediate outcome variables (time until medically ready for discharge and need for opioid replacement therapy), but also longer-term outcomes, such as infant neurobehavioral functioning and development, maternal-infant attachment and bonding, and family well-being and functioning.
Study design type
In this stepped-wedge cluster randomized controlled trial with a transition period, the protocol study team will compare the ESC care approach to usual institutional care with the FNAST or a modification thereof. Randomization will occur at the site level. The protocol study team will randomize approximately 24 US sites into 8 blocks. Each block will transition from usual care to the ESC care approach for the evaluation and management of all infants with NOWS at a different time interval (see Table 1). Sites will use the care approach randomly assigned to their block during each study period for the evaluation and management of all infants with NOWS cared for at the site. During the initial birth hospitalization, the site research team will collect data under a waiver of consent for infants who meet eligibility criteria. The number of infants enrolled per period at each site will vary throughout the study due to fluctuations in the number of infants managed for NOWS at each site during each period; however, the goal is for each site to enroll at least 4 infants per period. The site research team will obtain informed consent from the legal guardian(s) to collect long-term outcomes for eligible infants and caregivers. Site research team members may obtain this consent at any point during the hospital stay for infants who meet the trial's inclusion criteria.
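To visualize the allocation structure, a minimal sketch that prints a stepped-wedge grid of the kind summarized in Table 1; the number of periods and the one-period step spacing are illustrative assumptions, since Table 1 itself is not reproduced here:

```python
# Minimal sketch of a stepped-wedge allocation grid like Table 1 (assumed
# layout): each of 8 blocks stays on usual care ("U"), passes through one
# transition period ("T"), then delivers the ESC care approach ("E"),
# with blocks stepping over one period apart.
N_BLOCKS = 8
N_PERIODS = 10  # illustrative; the trial spans ~20 months of study periods

for block in range(1, N_BLOCKS + 1):
    row = ["U"] * block + ["T"] + ["E"] * (N_PERIODS - block - 1)
    print(f"Block {block}: {' '.join(row)}")
```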
Justification of study design
The protocol study team selected a stepped-wedge cluster design for three main reasons:
1. Transition to the ESC care approach requires a significant cultural shift in the management of infants with NOWS. This type of cultural change is most effective when applied at the level of the population covered by the hospital, not to a random subset of infants with NOWS within the hospital, making a cluster design important.
2. Interim analysis of the ACT NOW Current Experience study allowed estimation of an intracluster correlation coefficient (ICC) based on the LOS outcome measure. Using LOS as a proxy for our primary outcome, time until the infant is medically ready for discharge, the number of sites required to adequately power the trial under a parallel cluster design with an estimated ICC = 0.25 would be prohibitive. The stepped-wedge design makes the study feasible by allowing each site to serve as its own control in a pre/post analysis, so that between-site variation carries less statistical weight.
3. Additionally, the results of QI projects have inspired many healthcare providers to consider transitioning to the ESC care approach. A brief questionnaire sent to investigators at the available study sites demonstrated increased willingness to participate in the trial if a transition to the ESC care approach was integrated into the study design.
For these reasons, the protocol study team has designed a stepped-wedge cluster randomized trial with the intervention applied at the cluster level and adopted by all participating sites by the end of the 20-month study period, with the timing of the transition to ESC randomized. This design also allows differentiation between the effect of the intervention and unanticipated time-related confounders.
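To make the power consideration in point 2 concrete, a minimal sketch of the standard parallel-cluster design effect, DE = 1 + (m − 1) × ICC, with illustrative cluster sizes:

```python
# Minimal sketch of why a parallel cluster design is prohibitive at ICC = 0.25.
# Uses the standard design effect for a parallel cluster-randomized trial;
# m (infants per site) is an illustrative assumption, not a protocol value.
icc = 0.25
for m in (10, 20, 40):
    de = 1 + (m - 1) * icc
    print(f"m = {m:>2} infants/site -> design effect = {de:.2f} "
          f"(~{de:.1f}x the sample size of individual randomization)")
```

Under the stepped-wedge layout, each site contributes both control and intervention periods, so the comparison is partly within-site and the inflation above is substantially reduced.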
Study population
Inclusion criteria
Site level
• The site is willing, able, and has nurse management and administrative commitment to transition to the ESC care approach at the randomly allocated time
• The site currently uses the FNAST or a modification thereof for the assessment of withdrawal severity in infants with NOWS
• The site currently provides opioid replacement therapy (i.e., morphine, methadone, or buprenorphine) for the pharmacologic management of infants with NOWS

Table 1 Stepped-wedge cluster randomized controlled trial with transition period. *Each block will consist of 3-4 sites. **Each period will be 2 months in duration, except for the transition period, which will be 3 months, and the intervention periods bordering the transition, which will be 1.5 months/6 weeks in duration.
Infant level
• The infant is being managed for NOWS at an eligible site (i.e., receiving non-pharmacologic care, assessments of withdrawal severity, +/− pharmacologic care)
• The infant is ≥ 36 weeks gestation
• The infant satisfies at least 1 of the following criteria (a sketch of the combined eligibility logic appears after the exclusion criteria below):
  • Maternal history of prenatal opioid use
  • Maternal toxicology screen positive for opioids during the second and/or third trimester of pregnancy
  • Infant toxicology screen positive for opioids during the initial hospital stay
Exclusion criteria
Site level
• The site currently manages < 20 opioid-exposed infants annually
• The site routinely discharges/transfers infants from the hospital on opioid replacement therapy (i.e., morphine, methadone, or buprenorphine). We define routine discharge/transfer as ≥ 10% of infants who receive opioid replacement therapy for NOWS at the site

Table 2 outlines the study events from the initial hospital stay through 24 months.
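A minimal sketch of the infant-level eligibility logic, with hypothetical field names standing in for the medical-record review performed by the site research team:

```python
# Minimal sketch of the infant-level inclusion logic above. Field names are
# hypothetical; actual screening uses medical-record review by the site team.
def infant_eligible(gest_age_weeks: float,
                    maternal_opioid_history: bool,
                    maternal_tox_positive_2nd_3rd_tri: bool,
                    infant_tox_positive: bool,
                    managed_for_nows_at_site: bool) -> bool:
    opioid_exposure = (maternal_opioid_history
                       or maternal_tox_positive_2nd_3rd_tri
                       or infant_tox_positive)
    return managed_for_nows_at_site and gest_age_weeks >= 36 and opioid_exposure

print(infant_eligible(38.0, True, False, False, True))  # True
print(infant_eligible(35.5, True, False, False, True))  # False: < 36 weeks
```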
Screening
The protocol study team will screen interested sites for eligibility and will randomize eligible sites into one of 8 blocks, as illustrated in Table 1. With this study design, all infants with NOWS cared for at a site will be evaluated and managed using the care approach assigned to the site during the study period; therefore, individual infant screening will not be required before initiation of this study protocol. The process will be as follows (see Fig. 1): after birth, the inpatient clinical team will assess infants as at risk for NOWS and initiate management for NOWS based on the site's usual methods of identification. The initiation of clinical management for infants with NOWS will not be impacted by the study intervention. The site research team will identify potential participants for the trial based on their eligibility, following review of the medical record after delivery. The site research team may obtain informed consent for infant and caregiver participation in the long-term outcomes portion of the study at any point during the hospital stay for infants who meet the trial's inclusion criteria. The site research team will evaluate exclusion criteria after the infant's first 60 hours of life. Infants who meet any of the exclusion criteria will be noted as screen fails, and no additional data will be collected for them. Sites may enroll up to 16 infants per period; screen fails are not included in this total. Only infants enrolled in the study may be approached for consent and included in the long-term follow-up study.
Consent procedures
Waiver of consent
Since this study is a stepped-wedge cluster randomized controlled trial, the intervention will take place on a site-wide basis and sites will transition their practice for all infants with NOWS cared for at the site during the study period. Thus, we will request a waiver of consent from the central institutional review board (IRB) at the University of Arkansas for Medical Sciences for the primary outcome and the previously outlined short-term secondary outcomes. Under the regulatory criteria for waiver of informed consent, an IRB may approve a consent procedure that does not include, or that alters, some or all of the elements of informed consent set forth in this section, or waive the requirements to obtain informed consent, provided the IRB finds and documents that all of the following conditions are met:
1. The research involves no more than minimal risk to the participants;
2. The waiver or alteration will not adversely affect the rights and welfare of the participants;
3. The research could not practicably be carried out without the waiver or alteration; and
4. Whenever appropriate, the study team will provide participants with additional pertinent information after participation.
The justification for a waiver of informed consent from caregiver(s) for the short-term outcomes meets the above criteria, as follows:

Fig. 1 Screening and enrollment procedures. *The site research team may obtain consent for the long-term follow-up portion of the study at any point during the hospital stay for infants who meet the trial's inclusion criteria. To optimize recruitment, it will be permissible to obtain initial consent up to one month after discharge.

1. The research involves no more than minimal risk to the participants.
Both usual care using the FNAST and the ESC care approach are currently used at sites across the country, and the optimal care approach for the management of infants with NOWS is unknown. Additionally, no study procedures or interventions within this protocol would qualify as more than minimal risk, based on federal regulations, for either intervention group.
2. The waiver or alteration will not adversely affect the rights and welfare of the participants.
As the best management for infants with NOWS is unknown, there is no universally accepted standard of care, and both care approaches are currently in use at sites across the country. Therefore, participants receiving care via either model should not have their rights or welfare adversely affected.
3. The site research team could not practicably carry out this trial without the waiver or alteration.
Carrying out this trial and obtaining generalizable results would not be feasible if informed consent were required. Obtaining informed consent from legally authorized representatives of infants in this population is difficult for multiple reasons. The interventions in this study begin shortly after birth, and recruitment during this vulnerable period can be extremely difficult; researchers have experienced this difficulty in a number of trials that failed to successfully recruit this population shortly after birth [34][35][36]. Seeking consent shortly after delivery may not only result in recruitment failure but may yield consent only from a less generalizable group of "responders", which could introduce bias and diminish generalizability. This would be particularly problematic in this trial, where the success of the intervention may be especially susceptible to caregiver effort and engagement. Seeking consent later in the hospital stay for the long-term follow-up portion of the study will allow for relationship and trust building between the consenting member of the site research team and the primary caregiver(s), likely improving consent rates and generalizability. If early consent were sought and only a group of "responders" consented, it is unclear whether this would have a differential impact across the study interventions. Additionally, the intervention is instituted at the site level and represents a culture change: the site will use the assigned approach for all infants with NOWS during the trial period. Therefore, if consent were required, obtaining it would not alter the care approach used for the infant, and the benefits afforded by a waiver of consent outweigh the risks to an infant receiving the same management. If the clinical team used two different care models at the same time at the same site, patient safety could be at risk and care potentially compromised by inconsistent care practices.
4. Whenever appropriate, the study team will provide participants with additional pertinent information after participation.
Throughout the study, the site research team will provide participants with additional pertinent information when appropriate. The protocol study team will develop a handout that the site research team will give to the caregiver(s) of all infants with NOWS cared for at the site throughout the study period. This fulfills the suggested framework [37] of participants being "provided with a detailed description of the interventions to which their cluster has been randomized."

Consent for assessment of long-term outcomes
Members of the site research team will work with families/caregivers to obtain informed consent for: 1) parent/caregiver questionnaires that will assess caregiver well-being (e.g., parenting stress, attachment and bonding, depression, anxiety) and infant well-being (e.g., diet, sleep, neurobehavioral functioning), and 2) an in-person follow-up visit at 24 months to assess neurodevelopmental outcomes and growth measures. The consent will contain basic information on recognition and support (consistent with regulatory requirements at each site) for mental health issues, including suicidality, among caregivers.
The consent will also contain basic information on notification of child protective services (consistent with state law) should researchers or members of the clinical team have suspicion of child neglect or abuse. The site research team will obtain written, informed consent from primary caregiver(s) (e.g., biological parents, adoptive parents, or state-appointed guardians) prior to administration of the first questionnaire.
As previously outlined, participants may be consented up to 1 month after discharge. To facilitate this process, and due to the ongoing COVID-19 pandemic, remote consenting will be allowed. All communications will be conducted via HIPAA-compliant methods such as telephone, personal delivery of documents, the US postal service, REDCap, or another compliant electronic platform. The remote consent process will parallel the consent process used for in-person consenting; the only difference will be the method(s) of communication. The study team will ensure that, as with in-person consenting, the participant is given sufficient opportunity to ask questions, is able to understand the nature of the study and what participation entails, and is provided a copy of the final, completed consent signed by all parties involved, including the research team member who obtained consent and, when applicable, the site investigator. This final, signed consent will be provided via a HIPAA-compliant method or a method that the participant has agreed to in writing. The study team members conducting the consenting process will ensure that any participant who is consenting remotely has the authority to consent.
Detailing barriers to consent and participation
The site research team will ask non-consenting parents/caregivers to answer questions specific to perceived or actual barriers to participation and their choice not to be involved in the long-term outcome portion of the study. The site research team will inform non-consenting parents/caregivers about the purpose of these questions and that they are not required to answer them. Responses will be recorded without linking identifiers. The protocol study team will not permit amendment of the consent form for previously non-consenting parents/caregivers who wish to consent after answering these questions. The protocol study team will use the data collected to improve site-specific and study-wide recruitment strategies for this trial and to inform future trials in this field.
Randomization procedures
This is a stepped-wedge cluster randomized design with a transition period, wherein we will randomize participating study sites rather than individual infants. All sites will implement the ESC care approach at some point during the trial; the random elements are twofold: 1) randomization of sites into blocks, and 2) randomization of blocks to the time point at which each block implements the ESC care approach, the so-called "step" of the stepped-wedge design.
A statistician at the independent Data Coordinating Center (DCC) for the trial will generate a randomization list using SAS 9.4 (SAS Institute Inc., Cary, NC). The protocol study team will use the proportion of infants with NOWS treated pharmacologically at each site as the stratification variable for randomization (i.e., lowest, middle, and highest thirds). The protocol study team will identify this proportion using the results of the ACT NOW Current Experience Protocol, a retrospective data collection detailing the inpatient identification, assessment, and management of infants with NOWS at the ISPCTN and participating NRN sites; a brief survey will obtain similar estimates from other interested sites. The protocol study team will randomize sites in each stratum into one of 8 blocks (Fig. 2). Once the sites are randomized into blocks, computer-generated random numbers from a uniform distribution will determine the order in which the blocks of sites step into the transition and implementation period for the ESC care approach. The DCC will hold the randomization list. Due to the nature of a stepped-wedge cluster randomized controlled trial, the protocol study team can enforce only limited blinding. The protocol study team will notify sites of their allocated block following randomization.
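A minimal sketch of this stratified block randomization in Python; site identifiers and treatment proportions are hypothetical, and the actual list is generated by the DCC in SAS:

```python
# Minimal sketch of the stratified block randomization described above.
# Site names and treatment proportions are hypothetical placeholders.
import random

sites = {f"site_{i:02d}": random.random() for i in range(24)}  # prop. treated

# 1) Stratify into tertiles by proportion of infants treated pharmacologically.
ranked = sorted(sites, key=sites.get)
tertiles = [ranked[0:8], ranked[8:16], ranked[16:24]]

# 2) Randomize sites within each stratum into 8 blocks of 3.
blocks = [[] for _ in range(8)]
for stratum in tertiles:
    shuffled = random.sample(stratum, k=len(stratum))
    for i, site in enumerate(shuffled):
        blocks[i % 8].append(site)

# 3) Uniform random draws determine the order in which blocks step into ESC.
step_order = sorted(range(8), key=lambda _: random.random())
for step, b in enumerate(step_order, start=1):
    print(f"Step {step}: block {b + 1} -> sites {blocks[b]}")
```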
Integration of the ESC care approach is a complex process, and training of hospital staff will be time-intensive; we therefore expect no quantifiable effects on the outcomes of interest during the transition period, and the study includes a planned transition period for this reason. The site research team will collect data on primary and secondary outcomes for all study periods, excluding the transition period. As this trial has sufficient power, we have planned a 3-month transition period to allow adequate time for training and implementation at the sites. To maintain the overall schedule, the preceding usual care period and the initial ESC period will each be 1.5 months/6 weeks in duration.
Study intervention and comparison
All sites will provide usual institutional care, including use of the FNAST or a modification thereof, for the evaluation and management of infants with NOWS during Period 1 (see Table 1). After the first period, the sites in Block 1 (3-4 sites) will move into the transition period. During the transition period, sites will participate in education and training modules conducted through a centralized training platform. The protocol study team will standardize education and training across sites during each block's designated transition period. Educational modules will include an introduction and overview of the ESC care approach, education on trauma-informed care and bias, and a general review of caring for infants with NOWS and the importance of non-pharmacologic care. Training will occur in a train-the-trainer format and will include off-site or teleconference ESC training for a core group of site champions, with subsequent on-site training of clinical personnel. Following the transition period, sites within Block 1 will move into the first ESC period, and each of the other blocks will move into its next designated intervention period. The site research team will collect data during all intervention periods of the study, and comparisons will be made between the interventions (usual care versus the ESC care approach).
Intervention
• Following delivery or transfer, the care team will initiate care for NOWS per usual practice. The clinical team will use institutional practices and protocols to guide non-pharmacologic care, to assess the infant using the FNAST or a modification thereof, and to guide pharmacologic care.
• If needed, the clinical team will initiate pharmacologic treatment per the site's usual practice and/or treatment protocol; escalation, weaning, and discontinuation of pharmacologic care will also follow the site's usual care. Opioid replacement therapy given (morphine, methadone, or buprenorphine) will be per site preference, as will adjuvant therapy used (clonidine or phenobarbital).
• The clinical team will monitor each infant requiring opioid replacement therapy for signs of escalating withdrawal symptoms following discontinuation of this treatment and will consider discharge per the site's usual practice.
• The clinical team will use the FNAST or a modification thereof to assess infants after birth and will consider discharge for infants who do not require pharmacologic treatment per the site's usual practice.
• The DCC will develop a monitoring plan for each site's compliance with usual care during this period.
• Infants with antenatal opioid exposure born or transferred to the site during the usual institutional care intervention will be managed per this care approach throughout their admission (including infants who remain admitted when the site enters the transition period), and the site research team will collect their data and use it in the study analysis.
Transition period
Intervention
Education
• All research and clinical nurses, advanced practice providers, and physicians who care for infants with NOWS at each participating site will complete educational modules. These modules will include an introduction and overview of the ESC care approach, education on trauma-informed care and bias, and a general review specific to caring for infants with NOWS. The latter will include an emphasis on the importance of non-pharmacologic care, as well as on the importance of differentiating the etiology of symptoms common to NOWS.
• The protocol study team will assess completion of these modules through pre/post assessments. Post assessments will require 80% correct responses for completion. Participants will be able to retake each lesson until he/she achieves a correct response rate of 80%.
Training and implementation
• The protocol study team will conduct all training in a train-the-trainer format supported by nationally known clinical experts.
• The protocol study team will train a core group of site champions, which may include research and clinical nurses and physicians, in the use of the ESC care approach during the designated transition period. Education and training on the optimal use of the ESC care approach will include an introduction and overview of the ESC care approach, review of the Instructional Manual, review of the ESC Care Tool with definitions and the Newborn Care Diary, the ESC training video, and review of written and videotaped case scenarios. The site champions will access the components of this training through the educational platform, and the protocol study team will track completion through pre/post assessments. The protocol study team will provide an electronic copy of the ESC Care Tool Instructional Manual for each site in anticipation of entry into the transition period.
• After the training, the core group of site champions will view and score cases until each member of the group consistently attains 100% reliability on standardized patient assessment cases (6/6 items). We define this as three consecutive assessments with 100% reliability as compared to national experts in the field. Once they consistently achieve 100% reliability, the protocol study team will consider these individuals "gold-star raters."
• This core group will train all other clinical personnel who care for infants with NOWS at their site, including, but not limited to, nurses, advanced practice providers, and physicians in all areas where these infants receive care. These areas may include, but are not limited to, the well-baby nursery, pediatric unit, and neonatal intensive care unit. After the training, clinical personnel will view and co-assess cases with the "gold-star raters" using the ESC IRR tool until they consistently achieve 80% agreement (5/6 items). Once a member of the clinical team reaches 80% IRR, the site research team will clear the trainee for independent assessment. If the trainee consistently achieves 100% reliability, the site research team will consider him/her a "gold-star rater" and may ask the trainee to function in this capacity. Site staffing levels will determine the number of "gold-star raters" at a site, with the goal of having one "gold-star rater" available on each shift. The protocol study team will require clinical personnel who are unable to attain 80% reliability to complete supplementary training. Clinical personnel hired after the initial training will complete the educational modules and ESC training at the site, inclusive of co-assessing cases with "gold-star raters" using the ESC IRR tool, to demonstrate 80% agreement.
• To ensure fidelity of the assessments, the protocol study team will assess the reliability of the "gold-star raters" at each site during the implementation phase of the transition period (a sketch of this reliability check follows this list). The protocol study team anticipates each "gold-star rater" will maintain 100% reliability in scoring on patient assessment cases. The "gold-star raters" will then gauge reliability for the clinical team by assessing 10 individuals during the implementation phase, using the ESC IRR tool and written or video case scenarios on the training platform.
• For clinical personnel who fail to maintain the target of 80% reliability in scoring during the implementation period, the protocol study team will utilize just-in-time training through a centralized training platform until he/she achieves 80% reliability in assessments. When staffing allows, members of the care team whose reliability is below 80% should not be assigned to care for infants with NOWS until improved reliability is demonstrated through the just-in-time training process.
• Infants with antenatal opioid exposure born or transferred to the site during the transition period, but before the site has implemented ESC, will be managed with usual institutional care. Once a site implements ESC, the site will manage all infants born or transferred to the site with the ESC care approach. For infants receiving ongoing care for NOWS at the time of ESC implementation, the care approach used for their continued care is left to the discretion of the clinical team.
• Infants born or transferred to the site during the transition period will not have their data collected, and these infants will not be included in the study analysis.
• Clinical leads from each discipline (i.e., nursing and medicine) and the site research team will assess the completeness of ESC care approach implementation prior to the site's formal movement into the ESC intervention period.
• ESC experts will conduct biweekly webinars for each block of sites through the transition and initial intervention period(s). These webinars will provide continued support to the sites during this initial period of implementation, and ESC experts will continue to conduct these webinars monthly throughout the subsequent ESC intervention period(s).
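A minimal sketch of the item-level agreement calculation underlying these reliability targets; the item scores are hypothetical, and the real ESC IRR tool defines the six items:

```python
# Minimal sketch of the inter-rater reliability (IRR) check described above.
# Item scores are hypothetical; agreement is computed over the 6 ESC IRR items.
ESC_IRR_ITEMS = 6

def agreement(trainee_scores, gold_scores):
    """Fraction of the 6 IRR items on which the trainee matches the gold-star rater."""
    matches = sum(t == g for t, g in zip(trainee_scores, gold_scores))
    return matches / ESC_IRR_ITEMS

def certify(trainee_scores, gold_scores):
    a = agreement(trainee_scores, gold_scores)
    if a == 1.0:
        return "eligible as gold-star rater (6/6 items)"
    if a >= 5 / 6:
        return "cleared for independent assessment (>= 80%, i.e., 5/6 items)"
    return "supplementary / just-in-time training required"

print(certify([1, 0, 1, 1, 0, 1], [1, 0, 1, 1, 0, 1]))
```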
Intervention
• After delivery or transfer to the site, the care team will initiate non-pharmacologic care for NOWS, as detailed in the ESC training materials, and non-pharmacologic care will remain in place for the full duration of the infant's management for NOWS.
• Non-pharmacologic care can include: primary caregiver(s) involvement (rooming-in if possible), promoting breastfeeding (for eligible infants based on the institution's established breastfeeding guideline), encouraging on-demand feeding, low light and minimal noise exposure, clustered care (performing assessments, vital signs, and all other care around feedings to promote sleep), swaddling, and skin-to-skin care by primary caregiver(s) or holding by family/staff volunteers.
• Not all sites will be able to offer all forms of non-pharmacologic care, and not all infants will be able to receive all non-pharmacologic interventions available at the site. Acknowledging this, the clinical team will make every attempt to optimize the non-pharmacologic care provided to each infant.
• The clinical team will encourage primary caregiver(s) to participate in the care and evaluation of their infants. The clinical team will also encourage the primary caregiver(s) to record the infant's feeding (timing and duration, and/or volume), sleeping (quality and quantity), and ability to be consoled in the Newborn Care Diary, a component of the ESC care approach.
• The clinical team, in collaboration with the primary caregiver(s), will use the ESC Care Tool to assess the infant on the ESC items (eating, sleeping, and consoling) by approximately 4 to 6 hours of life (if the risk for NOWS is known) or upon identification of the need for NOWS management.
• The clinical team will perform ESC Care Tool assessments every 2 to 4 hours after feedings, clustering other infant and maternal care (i.e., vital signs) at the same time. These assessments will include a collaborative review with the primary caregiver(s) (when available) of the ESC items since the last assessment, using the Newborn Care Diary. If the primary caregiver(s) are not available, the clinical team members who participated in the care of the infant during the assessment period will complete the assessment.
• If during an assessment the infant has a "Yes" for any ESC item or obtains a score of "3" for "Consoling Support Needed" on the ESC Care Tool, the primary caregiver(s) and clinical team will conduct a "Parent/Caregiver Huddle" to determine: 1) whether the "Yes" is due to NOWS and 2) which non-pharmacologic care interventions the care team can further optimize. The "Parent/Caregiver Huddle" could include, but is not limited to, the parent/caregiver and the bedside nurse.
• If the care team can optimize non-pharmacologic interventions, they will do so and will continue to assess the infant.
• If it is unclear whether the infant's difficulties with eating, sleeping, or consoling are due to NOWS, the care team will indicate a "Yes" on the ESC Care Tool and will continue to monitor the infant closely while optimizing all non-pharmacologic care interventions.
• If the infant has a second consecutive "Yes" for any ESC item (or "3" for "Consoling Support Needed") on the ESC Care Tool (or other significant concerns are present), despite maximal non-pharmacologic care, the care team will conduct a "Full-Care Team Huddle" to determine whether: 1) the "Yes" is due to NOWS and 2) the infant needs pharmacologic treatment.
A "Full-Care Team Huddle" could include, but is not limited to, the parent/caregiver and the bedside nurse, in addition to the physician and/or advanced practice providers caring for the infant. • The clinical team will initiate pharmacologic treatment if the infant scores "Yes" due to NOWS on an ESC item or scores a "3" for "Consoling Support Needed" on the ESC Care Tool despite optimization of non-pharmacologic care. If an infant requires pharmacologic treatment, sites will initiate a treatment protocol to guide care. A treatment protocol should include dose initiation, escalation, and weaning parameters. The protocol study team will provide sites with a protocol. The protocol study team will permit (following review and approval) site-level modifications of the protocol to align it with the site's preferred practice. Opioid replacement therapy given (morphine, methadone, or buprenorphine) will be per site preference, as will adjuvant therapy (clonidine or phenobarbital). • The clinical team will monitor each infant requiring opioid replacement therapy for signs of escalating withdrawal symptoms following discontinuation of this treatment and will consider discharge per the site's usual practice. • The clinical team will use the ESC Care Tool to monitor infants following birth and consider discharge for infants who do not require pharmacologic treatment per the site's usual practice. • To ensure fidelity of the assessments, the protocol study team will randomly assess the reliability of the "gold-star raters" at each site throughout the study period. The protocol study team anticipates that each "gold-star rater" will maintain 100% reliability in scoring. The "gold-star raters" will then assess reliability of the clinical team once per period, by assessing 10 individuals using the ESC IRR tool and written or video case scenarios on the training platform. The protocol study team anticipates that each member of the clinical team will maintain 80% reliability in scoring. If a member of the clinical team fails to meet this target during the assessment, the protocol study team will utilize just-in-time training through a centralized training platform until the clinical team member achieves 80% reliability. The protocol study team would ask that members of the care team with reliability less than 80% not be assigned to care for infants with NOWS until improved reliability is demonstrated through the just-in-time training process. • To ensure fidelity of ESC implementation the protocol study team will develop an electronic platform that will allow "gold-star raters" to discretely evaluate, in real time, how nursing implements ESC at each participating site. The electronic platform will contain items from the ESC IRR tool and the ESC Implementation Process Evaluation. The protocol study team will use these tools to evaluate how consistent each nurse is in her/his evaluation of infant symptoms, recommendations for the care team huddle, as described by the ESC Care Tool, and implemen-tation of the ESC Care Tool (inclusive of non-pharmacologic care interventions). The site research team will enter data into the electronic application and will send it directly to a central repository where the protocol study team will analyze the data and identify fidelity issues that the site research team can address in a timely fashion.
Protocol adherence and compliance monitoring
The DCC will monitor protocol deviations per site in relation to the number of participants enrolled and visits conducted. All sites will receive re-education via regularly scheduled teleconferences to help other sites prevent similar deviations. If a particular deviation is recurrent at one site or across sites, the DCC may implement operational tools, such as additional reminders, source document worksheets, and/or checklists, to reduce the likelihood of deviations. The DCC will review protocol deviations throughout the study and may schedule additional on-site visits, as needed, to review regulatory documents, data points, key issues, etc., or to retrain site staff to improve processes and provide additional education. Strategies to improve or monitor adherence to the study protocol will include the following:
• Monthly recruitment reports of infants screened, enrolled, and consented (accrual figures)
• Review of screen fails by the protocol study team to assess for bias in inclusion/exclusion decisions
• Monthly reports detailing data received at the data center, data consistency, missing data, performance measures, and adherence to the study protocol (with appropriate measures taken to preserve the blinding of study personnel and investigators)
• Supplementary blinded reports requested by the study investigators or subcommittee that do not disclose allocation-group-specific outcomes (primary, secondary, or any safety outcomes)
The DCC will generate the aforementioned reports. Additionally, the protocol study team will monitor protocol adherence through collection of the following data:
• Completion of modules and training by the research and clinical teams, as assessed through the education and training platform
• Initial IRR for the clinical team (reevaluated each period)
• Assessed adherence to the assigned care approach
Post-hospital procedures
The site research team will assess the outpatient composite safety outcome at approximately 3 months of age, as well as the critical safety outcome, through review of the medical records (including the site's primary and any linked electronic medical record systems) and media review for all infants enrolled in the study at approximately 3 and 24 months of age. Primary caregiver(s) of infants for whom informed consent was obtained will receive questionnaires via an electronic application or via phone interview if caregiver(s) have limited access to cellular/internet service or prefer this modality of communication. Caregiver(s) will complete these questionnaires at discharge, 1 month post discharge, and at 3, 6, 12, and 24 months of age. These questionnaires will gather information on infant neurobehavioral functioning, infant wellness, primary caregiver(s) well-being, the family environment, and caregiver-infant interactions. In addition, there will be an in-person follow-up visit with neurodevelopmental assessment and anthropometric measures at 24 months of age. The site research team will maintain contact between study assessments at regular intervals, as detailed in Table 2. As there will likely be differences between the populations who provide consent for follow-up and those who do not, we will collect socioeconomic data (insurance and maternal educational status), marital status, and maternal receipt of medication-assisted treatment for all populations to examine for possible bias.
Data quality assurance
To assure the quality of the data collected, the protocol study team will provide the research coordinators at each site with training specific to the accuracy of data acquisition. The protocol study team will design data collection forms, which a subset of sites will pilot to minimize the potential for errors. Additionally, the protocol study team will allocate sufficient funds to allow for quality data collection. The site research team will re-abstract a subsample of their own charts and assess the error rate. Re-abstraction will focus on critical data elements related to the primary and secondary objectives of the protocol. The number of charts a site re-abstracts in each 6-month interval will be based on the number of patients enrolled in the study at that site during the period, as outlined in Table 3.
The DCC will provide sites with the randomly selected subject IDs for re-abstraction. The site research team will identify an independent site quality control (QC) abstractor who will re-abstract and enter data into the electronic data capture system (EDC) only for the QC process and will not abstract study data while QC activities are taking place. The DCC will generate a discrepancy report comparing study data abstracted by the site with the source information abstracted by the independent abstractor. The site manager will hold a QC Review Meeting with the independent site QC abstractor, research coordinator, and site abstractor(s) to review the discrepancies and identify errors. Together they will discuss and document the corrective action for each error identified. The DCC will create manual queries in the EDC to make any necessary corrections to the data that QC Review members identify. The protocol study team will provide hospitals that have an error rate above the predefined threshold with additional training, a hospital-specific assessment of the data collection process, and suggestions for process improvement. The protocol study team will track hospitals by their error rates. The protocol study team will share practices of those hospitals with exceptionally low error rates with hospitals working to improve their own process. The protocol study team will review error rates and re-abstraction data during monthly team calls. If errors exceed the predefined threshold on 2 consecutive reviews, a remediation plan will be requested and shared with the study sponsor.
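As a concrete illustration of the discrepancy-rate calculation that drives this QC workflow, the following is a minimal sketch in Python. The column names, threshold value, and toy data are hypothetical, not taken from the protocol.

```python
import pandas as pd

# Hypothetical threshold; the protocol's predefined value may differ.
ERROR_THRESHOLD = 0.05

def site_error_rates(site_df, qc_df, key_cols, data_cols):
    """Per-site discrepancy rate over the audited critical data elements:
    mismatching fields divided by total fields compared."""
    merged = site_df.merge(qc_df, on=key_cols, suffixes=("_site", "_qc"))
    merged["n_errors"] = sum(
        (merged[f"{c}_site"] != merged[f"{c}_qc"]) for c in data_cols
    )
    grouped = merged.groupby("site_id")
    return grouped["n_errors"].sum() / (grouped.size() * len(data_cols))

# Toy data: one field disagrees at site 1.
site = pd.DataFrame({"site_id": [1, 1, 2], "subject_id": [10, 11, 20],
                     "los_days": [12, 18, 9], "opioid_given": [1, 0, 1]})
qc = pd.DataFrame({"site_id": [1, 1, 2], "subject_id": [10, 11, 20],
                   "los_days": [12, 17, 9], "opioid_given": [1, 0, 1]})
rates = site_error_rates(site, qc, ["site_id", "subject_id"],
                         ["los_days", "opioid_given"])
print(rates[rates > ERROR_THRESHOLD])  # sites flagged for remediation
```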
Blinding/Masking
The protocol study team will assure blinding of the electronically performed follow-up questionnaires through the use of a centralized computer scoring system. For questionnaires completed by phone, each site should develop a site-specific protocol to preserve blinding of those administering the questionnaires. The protocol study team will note the method of questionnaire completion.
Primary outcome
The primary outcome is the time from birth until infants are medically ready for discharge. We define medically ready for discharge as when the infant meets ALL of the protocol-specified discharge criteria.
Secondary outcomes obtained under waiver of consent
1. Receipt of opioid replacement therapy prior to hospital discharge
• Hypothesis: The use of the ESC care approach will decrease the proportion of infants who receive opioid replacement therapy.
• This is a yes/no outcome, and it will enable us to determine the percentage of infants receiving opioid replacement therapy in each intervention group.
2. Total postnatal opioid exposure prior to hospital discharge • Hypothesis: The use of the ESC care approach will decrease total opioid exposure, compared to usual care.
• Each dose of opioid replacement therapy (total units, units/kg, and morphine equivalents [mg/kg]) that infants received throughout the initial birth hospitalization will be collected to determine total postnatal opioid exposure.
3. Hour of life opioid replacement initiated
• Hypothesis: The use of the ESC care approach will not delay the initiation of pharmacologic therapy.
• Use of the ESC Care Tool for the assessment of infants may delay the initiation of pharmacologic therapy, and thus infants may be at an advanced state of withdrawal and more difficult to "capture". Alternatively, there is some evidence to suggest [27] that use of the ESC Care Tool ultimately allows for more timely recognition of infants requiring pharmacologic therapy, compared to usual care using the FNAST.
4. Receipt of adjuvant therapy (clonidine or phenobarbital) prior to hospital discharge • Hypothesis: The use of the ESC care approach will decrease the proportion of infants who receive adjuvant therapy. • This is a yes/no outcome, and it will allow us to determine the percentage of infants receiving adjuvant therapy.
5. Maximum percent weight loss during the initial birth hospitalization
• Hypothesis: Use of the ESC care approach will not result in more excessive weight loss than usual care.
• Poor feeding and excessive weight loss are signs of suboptimal control of NOWS. Birth weight and daily weights (g) will be collected throughout the initial birth hospitalization to determine the impact of NOWS on growth, and the maximum percent weight loss will be calculated as (a computational sketch follows this outcome list):

max percent weight loss = [(birthweight (g) − weight nadir (g)) / birthweight (g)] × 100

6. Type of enteral feedings (exclusive maternal breastmilk, combination of formula and maternal breastmilk, exclusive formula feeding) at time of hospital discharge
• Hypothesis: Use of the ESC care approach will increase the proportion of infants who receive maternal breastmilk at the time of discharge from the initial birth hospitalization.
• Studies have shown that the receipt of maternal breastmilk decreases withdrawal signs in infants in a dose-dependent fashion [38,39].
• The site research team will assess and collect the type of enteral feeding at the time of discharge from the initial birth hospitalization.
7. Direct breastfeeding at the time of hospital discharge • Hypothesis: Use of the ESC care approach will increase the proportion of mothers who directly breastfeed at the time of discharge from the initial birth hospitalization. • The site research team will assess and collect direct breastfeeding occurrences within 24 hours of the time of discharge from the initial birth hospitalization.
8. Length of hospital stay
• Hypothesis: Infants managed with ESC will have a decreased LOS.
• The site research team will report the LOS in addition to the length of time until infants are medically ready for discharge. The difference between these measures will allow the protocol study team to assess the impact of social factors on the length of hospitalization.
9. A composite measure of infant safety during the initial birth hospitalization (seizures, accidental trauma [i.e., dropped infants], and respiratory insufficiency due to opioid therapy, including documented apnea or need for respiratory support [positive pressure or supplemental oxygen])
• Hypothesis: Infants managed using the ESC care approach will be safe during the initial birth hospitalization.
• Use of the ESC care approach may delay initiation of pharmacologic therapy, which could result in an increase in withdrawal-related seizures. Therefore, monitoring for the presence or absence of seizures will help to build the safety profile for ESC.
• Increased primary caregiver(s) involvement is thought to result from the ESC care approach. In this case, parent/caregiver skin-to-skin time and holding may increase, which could increase the risk of infants being dropped if primary caregiver(s) are fatigued and/or chemically impaired.
• Use of the ESC care approach may delay initiation of pharmacologic therapy, which could result in the infant receiving a higher dose of opioid replacement therapy. Higher doses of opioids may increase the risk of respiratory insufficiency. Therefore, monitoring for respiratory insufficiency will help to build the safety profile for ESC.
10. A composite measure of critical infant safety outcomes during the initial birth hospitalization (non-accidental trauma and death)
• Hypothesis: Infants managed using the ESC care approach will be safe during the initial birth hospitalization.
• Use of the ESC care approach encourages parents/caregivers to provide extensive non-pharmacologic care and rooming-in. This may increase stress and fatigue and lead to undesired caregiver-infant interactions. Inclusion of a critical composite safety outcome inclusive of non-accidental trauma and death will help to build the safety profile for ESC.
11. A composite measure of infant safety during the first 3 months of life based on the presence or absence of acute/urgent care and/or ER visits and hospital readmissions
• Hypothesis: Infants managed using the ESC care approach will be safe during the first 3 months of life.
• Discharge of an infant earlier from the initial hospitalization and/or increased primary caregiver involvement during the initial hospitalization may increase the stress and fatigue experienced by the caregiver(s) and lead to increased risk for poor outcomes and increased healthcare utilization.
12. A composite measure of critical safety outcomes based on the presence or absence of non-accidental trauma and death at discharge and during the first 3 and 24 months of life
• Hypothesis: Infants managed using the ESC care approach will be safe during the first 3 and 24 months of life.
• Infants with undertreated signs of withdrawal may be at increased risk for non-accidental trauma and death due to the potential for increased primary caregiver stress and fatigue during the hospital admission and following discharge. These infants may also fail to develop a bond with their primary caregiver(s) during the first months of life, which may further increase the risk for non-accidental trauma and death during the first two years of life.
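As referenced under outcome 5, the following is a minimal sketch of the maximum-percent-weight-loss calculation, using hypothetical values:

```python
def max_percent_weight_loss(birthweight_g, daily_weights_g):
    """Maximum percent weight loss relative to birthweight, per the formula above."""
    nadir = min(daily_weights_g)
    return (birthweight_g - nadir) / birthweight_g * 100

# Hypothetical infant: 3200 g at birth with a nadir of 2980 g on day 3.
print(round(max_percent_weight_loss(3200, [3150, 3050, 2980, 3010]), 1))  # 6.9
```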
The following outcomes will be obtained for the subpopulation who provide informed consent and acquired through questionnaires. They will be assessed at various time points between discharge and 24 months of age (see Table 2).
1. Infant neurobehavioral functioning following discharge
• Hypothesis: Infants managed using the ESC care approach will have improved neurobehavioral functioning when compared to usual care.
• Assessed with the Infant Behavior Questionnaire - Revised (IBQ-R) very short form at 3 and 12 months of age. The caregiver will complete the survey, and it will be sent to a central location for review by the protocol study team.
• The IBQ-R is a well-established caregiver-report measure of neurobehavioral functioning through assessment of temperament for infants between 3 and 12 months of age [40]. The questionnaire has demonstrated good internal consistency, reliability, and validity [41-44]. The full IBQ-R consists of 191 items and takes approximately 1 hour to complete, which makes it impractical for this study. The very short form consists of 37 questions, completed by the caregiver, that measure the infant's surgency, negative affect, and effortful control; it takes approximately 12 minutes to complete [46]. The very short form has been shown to have reliability and stability similar to those of the full IBQ-R and other temperament measures [45].
2. Infant wellness following discharge, as independently assessed by:
• Anthropometric growth (weight, height, head circumference)
• Hypothesis: Use of the ESC care approach will not impact long-term growth when compared to usual care.
• Assessed with percentile measurements of weight, length, head circumference (HC), and weight-for-length on WHO growth curves. The research team will assess weight, length, head circumference, and weight-for-length at hospital discharge and 24 months of age. The study team will calculate anthropometric z-scores at these time points, will assess BMI at 24 months of age, and will calculate BMI z-scores.
• Sleep • Hypothesis: The infant's sleep will improve after use of the ESC care approach compared to usual care. • Assessed with the Brief Infant Sleep Questionnaire (BISQ) [46] at 3 and 12 months of age. The caregiver will complete the survey and it will be sent to a central location for review by the protocol study team.
• Enteral feeds during the first 6 months of life (exclusive maternal breastmilk, combination of maternal breastmilk and formula, or exclusive formula feeding) • Hypothesis: Use of the ESC care approach will increase the proportion of infants who receive maternal breastmilk following discharge compared to usual care. • Assessed with the Caregiver Questionnaire (CQ) at 1 month post hospital discharge and at 3 and 6 months of age. The caregiver will complete the questionnaire, and it will be sent to a central location for review by the protocol study team.
• Direct breastfeeding during the first 6 months of life • Hypothesis: Use of the ESC care approach will increase the proportion of mothers who directly breastfeed following discharge compared to usual care. • Assessed with the CQ at 1 month post hospital discharge and at 3 and 6 months of age. The caregiver will complete the questionnaire, and it will be sent to a central location for review by the protocol study team.
• Number of ER visits and/or acute/urgent care visits • Hypothesis: Use of the ESC care approach will not result in an increase in the number of ER or acute/urgent care visits compared to usual care. • Assessed at 1 month post hospital discharge and at 3, 6, 12, and 24 months of age via completion of the CQ and submission for review by the protocol study team. The site research team will also assess the site's electronic health record (EHR) and include any visits not reported, if observed.
• Readmissions
• Hypothesis: Use of the ESC care approach will not result in an increase in the number of readmissions following initial hospital discharge compared to usual care. • Assessed at 1 month post hospital discharge and at 3, 6, 12, and 24 months of age via completion of the CQ, reviewed by the protocol study team. The site research team will also assess the site's EHR and include any readmissions not reported, if observed.
3. Maternal/caregiver well-being
• Hypothesis: Use of the ESC care approach will improve maternal/caregiver well-being compared to usual care. • Assessed with Patient-Reported Outcomes Measurement Information System (PROMIS) short forms at discharge, 6 months, and 24 months [47]. Standardized short forms examining mental health, specifically the areas of anxiety (PROMIS Short Form v1.0 - Anxiety - 8a 31May2019), depression (PROMIS_SF_v1.0_-_ED-Depression_8a_5-31-2019), anger (PROMIS Short Form v1.1 - Anger - 5a 27Apr2016), life meaning and purpose (PROMIS Short Form v1.0 - Meaning and Purpose - 8a 18Jul2017), and social support (PROMIS v2.0 - Emotional Support Short Form 4a 23June2016), will be completed by the primary caregiver and sent to a central location for review by the protocol study team. • The standardized short form for each of the PROMIS measures consists of four to eight 5-point Likert-scale questions. The PROMIS Depression short form has been validated in the postpartum period and has been found to be strongly correlated with the Edinburgh Postnatal Depression Scale, the most extensively studied measure of depression in the postpartum period [48,49]. In addition, the PROMIS Anxiety measure has been correlated with the Mood and Anxiety Symptom Questionnaire (MASQ) and has been shown to be a valid measurement tool for anxiety in the postpartum period in a sample of parents whose infants were hospitalized in the NICU [49]. Administration takes approximately 10 minutes and includes a total of 33 questions.
4. Infant-caregiver bonding and attachment
• Hypothesis: Use of the ESC care approach will result in improved infant-caregiver bonding and attachment, compared to usual care. • The protocol study team will assess with the Maternal Postnatal Attachment Questionnaire (MPAQ) at discharge and 6 months of age. The caregiver will complete the questionnaire, and it will be sent to a central location for review by the protocol study team. • Primary caregiver-infant interactions will be assessed with the MPAQ [50], a 19-item questionnaire that assesses quality of bonding, absence of hostility, and pleasure in interaction. The MPAQ requires approximately 5 minutes to complete, and researchers have validated the tool for postpartum women with substance-abuse problems [51]. • The focus of the MPAQ is primarily upon the caregiver(s)' subjective experiences in relation to their infant in the first year of life [52]. Established risk quartiles exist, and the protocol study team will note caregiver(s)' entry into and exit from these high-risk quartiles at each time point.
5. Parenting efficacy
• Hypothesis: Use of the ESC care approach will result in an improved caregiver sense of competency in caring for their infants compared to usual care.
• The protocol study team will assess with the Parenting Sense of Competence (PSOC) Scale at discharge and 6 months of age. The caregiver will complete the questionnaire, and it will be sent to a central location for review by the protocol study team.
• The PSOC is a self-report instrument that measures parental self-efficacy. It is a 17-item, publicly available scale that measures satisfaction (the degree of liking a person has for their role as a parent) and efficacy (an individual's perceived competence in their role as a parent).
• Researchers have used this tool to assess the impact of parenting efficacy on the likelihood of out-of-home placement and loss of custody in mothers with mental health and substance use disorders [53].
6. Family environment
• Hypothesis: Use of the ESC care approach will enhance the family environment when compared to usual care.
• The protocol study team will assess with the Family Environment Scale (FES) - Relationship Dimension - Form R at 3 months of age. The caregiver will complete the questionnaire, and it will be sent to a central location for review by the protocol study team.
• The Relationship dimension of the FES consists of the Cohesion, Expressiveness, and Conflict subscales. Form R for each subscale is composed of 9 true-false items.
• The Relationship dimension assesses the degree of commitment, help, and support that family members provide each other; the extent to which family members are encouraged to act openly and to express their feelings directly; and the amount of openly expressed anger, aggression, and conflict among family members [54].
• Researchers frequently use the FES to assess the home environment, and it has been found to have strong psychometric properties [54].
7. Influence of maternal childhood experiences on infant outcomes
• Hypothesis: Maternal history of adverse childhood experiences will be associated with worse infant behavioral functioning and developmental outcomes.
• The protocol study team will assess adverse childhood experiences using the Adverse Childhood Experience (ACE) Questionnaire at 24 months of age.
• The ACE [55] is a self-report measure used to capture specific childhood experiences that correlate with future social risk factors and negative health outcomes.
Potential risks and benefits to participants
Under the proposed study design, the protocol study team will randomize each site into blocks with each block transitioning from usual care to ESC at a randomly allocated time. At any given time during the study enrollment period, all infants managed for NOWS at a site will receive care consistent with the care approach assigned by the protocol study team. Sites throughout the country are currently using both care approaches described in this study for the evaluation and management of NOWS. Use of either care approach will not expose infants in this study to risk beyond that of usual/accepted clinical care. Involvement in the study will not increase the risk to the family of legal ramifications associated with the in utero opioid exposure of their infants, as only infants who have been identified by the site as at risk for NOWS and for whom management for NOWS has begun, will be screened for enrollment in the trial. There will be no additional toxicology screening (maternal or infant) performed beyond what medical professionals would typically obtain as part of usual institutional care at the site.
Thus, there will be no additional information garnered with respect to substance use during pregnancy due to one's involvement in the trial.
The protocol study team will assess primary caregiver well-being (e.g. parenting stress, attachment and bonding, depression, anxiety, etc.) as well as infant well-being, neurobehavioral functioning, and development during the follow-up portion of the study.
The protocol study team will assess primary caregiver wellness with 5 PROMIS measures. It is possible that these questionnaires may reveal that the primary caregiver is experiencing psychological distress potentially requiring support. The study team has determined that a standardized scoring threshold for the PROMIS Depression measure will be used to identify these individuals. As thresholds specific to postpartum women with opioid dependency have yet to be established, and given that severe depression is most likely to impact family well-being, a t-score >70 (i.e., 2 standard deviations above the mean for the normative population, the established threshold for severe depressive symptoms [58,59]) was chosen for this purpose.
If a primary caregiver has a t-score >70 on the PROMIS Depression measure, the primary caregiver will be provided with national hotline support numbers within the electronic questionnaire platform. In addition, after the questionnaire is completed in REDCap an email will be automatically generated and sent to the study coordinator and PI. Each site will develop a plan to provide support for the primary caregivers at risk and connect them with local mental health resources in response to those emails. The protocol study team will collect a copy of this plan from each site.
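A minimal sketch of the screening logic described above (the t-score itself comes from standard PROMIS scoring; only the >70 threshold is taken from the text, and the function name is hypothetical):

```python
# Threshold from the text: t-score > 70 (2 SD above the normative mean).
SEVERE_DEPRESSION_T = 70

def needs_followup(depression_t_score):
    """True when a caregiver's PROMIS Depression t-score exceeds the threshold."""
    return depression_t_score > SEVERE_DEPRESSION_T

if needs_followup(72.5):
    # In the trial's actual workflow this step is handled within REDCap:
    # hotline numbers are displayed to the caregiver and an automatic email
    # is sent to the site study coordinator and PI.
    print("Display hotline resources; notify site study coordinator and PI.")
```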
NATIONAL SUICIDE PREVENTION LIFELINE -1-800-273-8255
• https://suicidepreventionlifeline.org • The National Suicide Prevention Lifeline is a national network of local crisis centers that provides free and confidential emotional support to people in suicidal crisis or emotional distress 24 hours a day, 7 days a week.
Additionally, a response plan will be in place at each site for questions specific to incidental findings of or suspicions for child abuse and/or neglect.
Participants recognized to have neurodevelopmental impairment on the Bayley-4 exam will be referred to their primary care providers for follow-up. The study team will communicate and share the report with the caregiver(s) and primary care providers if requested by the participants' caregiver(s) and consent is obtained.
The infants in the study may not benefit directly from participation. The infant may, however, benefit from the information garnered during the developmental screening portion of the study. By virtue of inclusion in a research study, participants are at risk of loss of confidentiality of medical-record information because participants will have their medical records reviewed by research personnel. The protocol study team will institute measures to protect the privacy of medical information, including the coding of all HIPAA (Health Insurance Portability and Accountability Act) identifiers in medical records, limitation of access to the medical records to research personnel, and removal of any individual identifiers in reports and publications generated from the study. Research personnel will keep any hard copies of research records in a locked cabinet and will destroy these records after the study is complete and the protocol study team publishes the results. In this study, infants themselves are the primary research focus, thus justifying the inclusion of children. The protocol study team will not exclude a subject based on race, ethnicity, or gender. However, some of the study questionnaires that will be used have not been validated in languages other than English. Thus, the population for the consented portion of the study will be limited to infants whose caregivers speak, read, and write English. Due to the demographic distribution of NOWS, the proportion of low socioeconomic-status infants will likely be higher than in the general population.
Recruitment and retention
Site recruitment and retention
The protocol study team began to optimize the potential for recruitment during initial protocol development, through an assessment of potential ISPCTN and NRN sites' willingness to participate in and enthusiasm for various study designs. The study design chosen for this protocol incorporates the feedback from these sites. During the site assessment process, the protocol study team will expect each site to commit in writing to the site's participation in and completion of the trial with maintenance of the site's allocated intervention for the duration of the study. The protocol study team will facilitate retention of sites through the focused allocation of funds to support participation, through assessment of needs, provision of support, and troubleshooting at each site, as needed.
Infant and parent/caregiver recruitment and retention
The site research teams will need to obtain participant consent for the long-term follow-up portion of the study. Historically, enrollment of infants with NOWS in clinical trials that seek to improve their care has been challenging. In response, the protocol study team plans to utilize a robust recruitment and retention plan developed to support and optimize the participation of this population in the follow-up portion of this study. Recruitment The single most important element of the recruitment strategy is to establish trust with the primary caregiver(s) and provide an introduction to the research plan prior to delivery. The prenatal consultation is most likely the first time that the family will meet the site PI or designee and is an ideal time to introduce the trial. The consultation is the opportunity for the provider to build trust with the family and reaffirm a partnership with the family. The consultation will include establishing a foundation of knowledge about NOWS, outlining gaps in current national care, and providing a detailed description of the research approach.
In anticipation that prenatal consultations will not be feasible for all patients, effective dissemination of information regarding the clinical trial will be exceptionally important. The protocol study team will provide an informational pamphlet to all parent/caregiver(s) of infants receiving care for NOWS at participating sites soon after delivery. The consenting member of the site research team will begin trust building with the parent/caregiver(s) in anticipation of the consenting process. The site research team will present information about the study in person and/or via an informational brochure developed by the protocol study team and distributed to the sites. To further optimize recruitment, if informed consent cannot be obtained during the initial hospitalization, it is permissible to obtain consent up to one month after discharge.
The protocol study team will assess site recruitment for long-term follow-up each month following site enrollment. If the protocol study team assesses the site as below target, the study team will evaluate the site's processes for recruitment, and the site will receive additional training and/or modifications to the recruitment approach as suggested by a recruitment and retention expert from the protocol study team.
Additionally, the protocol study team will assess for barriers to participation, perceived or actual, of non-consenters and utilize their responses to further improve site-specific and study-wide recruitment strategies.
Retention The protocol study team will optimize infant and parent/caregiver(s) retention soon after a site obtains consent by sending a note of thanks to the parent/caregiver(s) and acknowledging the importance of their contribution to the future care we provide these infants. The site research team will further optimize retention via text-message reminders and access to questionnaires through a centrally located electronic platform. The site research team will use the electronic health record to update a participant's contact information as needed, in the event that the contact information provided by the participant is not sufficient. The site research team can conduct questionnaires via phone interview if the caregiver(s) has limited access to cellular/internet service or prefers this modality of communication. If a participant completes questionnaires or comes to an in-person visit, the parent/caregiver(s) will receive compensation for their time, provided at, or very near, the time of that contact. Participants will be reimbursed for their time according to the plan outlined in Table 4.
The mechanism of payment (gift card, check, etc.) will be site specific and will be according to each site's mechanism for making such payments.
The site research team will provide text reminders to the parent/caregiver(s) to optimize timely completion of the questionnaires. Additionally, the site research team at each site will include a retention coordinator, and the protocol study team will allocate funds to support this role.
Additionally, the protocol study team will explore other methods to optimize both recruitment and retention. This could include, but is not limited to, discussions with stakeholders and parent/caregiver(s) from the community who have had infants treated for NOWS and understand the importance of being able to successfully complete this trial.
Statistical analysis plan
General approach
The material in this section is the basis for the statistical analysis plan of this study. The protocol study team may revise the plan during the study to accommodate clinical trial protocol amendments and to adapt to unexpected issues in study execution and data that affect planned analyses. The protocol study team will conduct all statistical analyses following the statistical principles for clinical trials specified in International Council for Harmonisation (ICH) Topic E9. The protocol study team will describe and justify any deviations from the planned analyses in the final integrated clinical study report. The protocol study team will present overall and study site-specific data and summary tables.
The protocol study team will present the characteristics of infants and mothers by intervention groups (usual care versus ESC care approach) and their outcomes for each site. We do not expect significant differences in the demographics of the study population during the 20-month study period. Each site covers a different population mix, and while each hospital will contribute both usual care and ESC participants, they will do so in different proportions depending on when the protocol study team randomizes the hospital to the intervention. This will contribute greatly to any demographic differences between the usual institutional care and ESC groups. Whilst we do not intend to test for demographic differences between the usual institutional care and ESC groups for the full cohort, we will adjust the analyses for the covariates described because of potential imbalance across sites and across steps. We will present numerical variables as means [standard deviation (SD)] or medians (interquartile range), depending on their distribution, and categorical variables as counts and percentages.
We will use the principles of intention-to-treat for all statistical analyses related to primary and secondary endpoints.
Analysis of the primary efficacy endpoints
For the primary efficacy variable, we will test the following null hypothesis: H0: There is no treatment difference in average length of time until medically ready for discharge between usual care and the ESC care approach.
Versus H1: There is a treatment difference in average length of time until medically ready for discharge between usual care and the ESC care approach.
We will consider the length of time until medically ready for discharge a count measure, which has the potential to follow a skewed distribution. Initially, we will assess the distributional assumption. We will evaluate the associations of potential confounders (e.g., gestational age, birth weight, race/ethnicity, hospital volume, rural/urban indicator) at both the participant and site level with the intervention. An additional potential confounder that we will evaluate will be the presence of other ongoing clinical trials at our trial sites that might impact the outcome of this study, including the "Prospective Randomized Blinded Trial to Shorten Pharmacologic Treatment of Newborns with Neonatal Opioid Withdrawal Syndrome (NOWS)".
We will use a generalized linear mixed-effects model (GLMM) to compare the expected length of time until medically ready for discharge between the two treatment interventions (usual care and ESC care approach). Specifically, we will use a GLMM with a negative binomial distribution and log link to account for potential overdispersion, as an infant-level analysis, accounting for correlations between observations in the same hospital by including hospital in the model as a random effect. We will report point estimates for the group mean difference along with a 95% confidence interval (CI). The model-building approach for our primary outcome will follow four analysis steps:
1) an unadjusted before/after estimate of the effect of the ESC care approach (ignoring any period/time effect);
2) addition of the time period (i.e., steps) to examine whether any potential intervention effect relates only to the intervention or also to an independent effect of calendar time;
3) adjustment for infant-level and maternal characteristics and potential hospital-level confounders, such as hospital volume and rural/urban indicator;
4) inclusion of the possible interaction between period and intervention effect.
The impact of the ESC care approach on the primary outcome could potentially change over time, as the improvement in outcome could increase with time as the staff gains experience. However, the impact could also decrease after an initial improvement as the level of initial enthusiasm decreases. We aim to explore this question through the inclusion of an interaction between period/time and intervention effect in Model 4.
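To illustrate the four-step model sequence, here is a sketch in Python. Frequentist negative binomial mixed models are more commonly fit in R (e.g., lme4 or glmmTMB); as a stand-in, this sketch uses a GEE with a negative binomial family and an exchangeable working correlation within hospitals, which addresses the clustering but estimates a population-averaged rather than a conditional effect. The interaction in step 4 uses continuous time so that the toy design stays identifiable. All variable names and the simulated data are hypothetical.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Simulated stand-in for the analysis dataset (hypothetical names/values).
rng = np.random.default_rng(0)
n = 800
df = pd.DataFrame({"hospital_id": rng.integers(0, 24, n),
                   "period": rng.integers(0, 8, n),
                   "gest_age": rng.normal(39, 1.2, n),
                   "birth_wt": rng.normal(3.2, 0.4, n)})
df["rural"] = df["hospital_id"] % 2                                   # hospital-level covariate
df["esc"] = (df["period"] >= df["hospital_id"] % 8 + 1).astype(int)   # stepped-wedge exposure
mu = 11 - 3 * df["esc"]                                               # shorter mean time under ESC
df["days_to_ready"] = rng.negative_binomial(5, (5 / (5 + mu)).to_numpy())

# Four-step model-building sequence from the text.
formulas = [
    "days_to_ready ~ esc",                                            # 1. unadjusted
    "days_to_ready ~ esc + C(period)",                                # 2. + calendar time
    "days_to_ready ~ esc + C(period) + gest_age + birth_wt + rural",  # 3. + covariates
    "days_to_ready ~ esc + C(period) + gest_age + birth_wt + rural + esc:period",  # 4. + interaction
]
for f in formulas:
    res = sm.GEE.from_formula(f, groups="hospital_id", data=df,
                              family=sm.families.NegativeBinomial(),
                              cov_struct=sm.cov_struct.Exchangeable()).fit()
    print(f, "->", res.params.filter(like="esc").round(3).to_dict())
```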
In certain circumstances, medical personnel may discharge an infant prior to being medically ready for discharge as defined in our protocol (e.g., sent home on opioids such as methadone, morphine, or buprenorphine). Therefore, to compare the 2 interventions based on the primary outcome, we will censor these infants. Since one can view the time until medically ready for discharge as a time-to-event outcome, we will use the log-rank test adjusted for a cluster randomized design to compare the median time the infant is medically ready for discharge between the intervention groups [60]. Additionally, we will use a Cox proportional hazards (Cox PH) model with the Lin and Wei robust sandwich estimate of the variance-covariance matrix, to account for clustering, to adjust for infant and maternal demographics. We expect the amount of censoring to be minimal, therefore the results from the Cox PH model will serve as a sensitivity analysis for our primary analysis based on a GLMM with log-link.
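A sketch of the clustered time-to-event sensitivity analysis using the lifelines package (recent versions); passing cluster_col requests a robust sandwich variance grouped by hospital, in the spirit of the Lin and Wei estimator described above. The data and column names are hypothetical.

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

# Hypothetical data: 'ready' = 1 if the infant reached medically-ready status;
# infants discharged before meeting the criteria are censored (ready = 0).
rng = np.random.default_rng(1)
n = 300
df = pd.DataFrame({"esc": rng.integers(0, 2, n),
                   "hospital_id": rng.integers(0, 20, n)})
df["days_to_ready"] = np.ceil(rng.exponential(11 - 3 * df["esc"]))
df["ready"] = rng.binomial(1, 0.95, n)  # censoring expected to be minimal

cph = CoxPHFitter()
# cluster_col gives a cluster-robust sandwich variance on hospital; the
# formula restricts the covariates to the treatment indicator here.
cph.fit(df, duration_col="days_to_ready", event_col="ready",
        cluster_col="hospital_id", formula="esc")
cph.print_summary()
```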
Analysis of secondary endpoints obtained under waiver of consent
Receipt of opioid replacement therapy (morphine, methadone, or buprenorphine) for neonatal opioid withdrawal syndrome prior to hospital discharge The analysis team will compare the proportion of infants receiving opioid replacement therapy for NOWS prior to hospital discharge between the intervention groups using a GLMM with a logistic link function. We will follow the same four modeling strategies described for the primary outcome. We will also present the odds ratio estimate of receipt of opioid replacement therapy for NOWS for the intervention effect (ESC versus usual institutional care) with a 95% CI.
Total opioid exposure prior to hospital discharge The analysis team will provide the median and range of the total opioid exposure prior to hospital discharge for each treatment group. For the unadjusted analysis, the team will compare the median opioid exposure of the treatment groups using the Wilcoxon rank-sum test for clustered data proposed by Rosner, Glynn, and Lee (2003) [61]. Their test statistic extends the Wilcoxon rank-sum test under the assumptions that all participants from the same cluster belong to the same treatment group, that observations within any cluster are exchangeable, and that the intracluster dependence does not vary across treatment groups. Additionally, the team will use median mixed regression to account for the potential skewness of total opioid exposure and for the clustered data, and to allow adjustment for covariates. The team will use the same four model-building sequence described for the primary outcome, except that the team will replace the GLMM with a median mixed regression model.
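The Rosner, Glynn, and Lee statistic has a closed form; as a simpler illustration of the same idea, the sketch below uses a cluster-level permutation of the rank-sum statistic (not the RGL closed-form test), relying on the same assumption that whole clusters share a treatment label. The data are hypothetical.

```python
import numpy as np

def cluster_rank_sum_perm(values, cluster, treated_clusters, n_perm=10_000, seed=0):
    """Permutation analog of a clustered rank-sum test: the treatment label is
    permuted across whole clusters, respecting within-cluster dependence."""
    rng = np.random.default_rng(seed)
    values, cluster = np.asarray(values, float), np.asarray(cluster)
    clusters = np.unique(cluster)
    ranks = values.argsort().argsort() + 1.0  # simple ranks; ties ignored for brevity

    def rank_sum(tc):
        return ranks[np.isin(cluster, tc)].sum()

    observed = rank_sum(np.asarray(treated_clusters))
    perm = np.array([rank_sum(rng.choice(clusters, len(treated_clusters), replace=False))
                     for _ in range(n_perm)])
    # Two-sided p-value around the permutation mean.
    return float((np.abs(perm - perm.mean()) >= abs(observed - perm.mean())).mean())

# Toy example: 6 clusters of 2 observations each, clusters 1-3 treated.
vals = [2.1, 3.0, 1.2, 4.5, 5.1, 3.3, 0.9, 1.1, 2.0, 2.2, 4.0, 3.8]
clus = [1, 1, 2, 2, 3, 3, 4, 4, 5, 5, 6, 6]
print(cluster_rank_sum_perm(vals, clus, treated_clusters=[1, 2, 3]))
```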
Hour of life opioid replacement initiated
The analysis team will provide the median and range for the hour of life when opioid replacement was initiated, separately for each treatment group. We anticipate that most of the infants will not receive opioid replacement; therefore, we will use a hurdle model to model the expected hour of life until medical personnel initiate opioid replacement (i.e., count data) while handling excess zeros and overdispersion. More specifically, the team will fit the first part of the model with a binary logit model, which models whether an infant receives opioid replacement or not. In the second part, the team will utilize a negative binomial mixed model to account for the stepped-wedge design and adjust for potential infant and maternal demographics.
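A minimal two-part hurdle sketch, with the random effects omitted for brevity (the protocol's second part is a negative binomial mixed model); the data and variable names are hypothetical.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Simulated stand-in data (hypothetical).
rng = np.random.default_rng(2)
n = 400
df = pd.DataFrame({"esc": rng.integers(0, 2, n)})
df["any_opioid"] = rng.binomial(1, (0.35 - 0.10 * df["esc"]).to_numpy())
df["hour_initiated"] = np.where(df["any_opioid"] == 1, rng.poisson(48, n), 0)

# Part 1: logistic model for whether opioid replacement is given at all.
part1 = sm.Logit.from_formula("any_opioid ~ esc", data=df).fit(disp=False)

# Part 2: count model for hour of initiation, fit on the positive part only.
positives = df[df["any_opioid"] == 1]
part2 = sm.NegativeBinomial.from_formula("hour_initiated ~ esc",
                                         data=positives).fit(disp=False)
print(part1.params, part2.params, sep="\n")
```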
Receipt of adjuvant therapy (clonidine or phenobarbital) prior to hospital discharge
The analysis team will compare the proportion of infants receiving adjuvant therapy between the treatment groups using a GLMM with a logistic link function. We will follow the same four modeling strategies described for the primary outcome. We will present the odds ratio estimate of receipt of adjuvant therapy for the intervention effect (ESC versus usual institutional care) with a 95% CI.
Maximum percent weight loss during birth hospitalization
The analysis team will provide the mean and SD of percent weight loss during birth hospitalization separately for each treatment group. The team will use a GLMM with an identity link function to compare average percent weight loss between the ESC care approach versus usual institutional care. The analysis team will report point estimates for the group mean difference along with a 95% CI. The team will use the same four model building sequence described for the primary outcome.
Type of enteral feedings (exclusive maternal breastmilk/breastfeeding, combination of maternal breastmilk and formula, exclusive formula feeding) at time of hospital discharge The analysis team will compare the proportion of infants receiving any maternal breastmilk (i.e., exclusive breastmilk/breastfeeding or combination) at discharge between the treatment groups using a GLMM with a logistic link function. We will follow the same four modeling strategies described for the primary outcome, and we will present the odds ratio estimate of receiving any maternal breastmilk for the intervention effect (ESC versus usual institutional care) with a 95% CI.
Breastfeeding at the time of hospital discharge The analysis team will compare the proportion of infants directly breastfeeding at the time of hospital discharge between the treatment groups using a GLMM with a logistic link function. We will follow the same four modeling strategies described for the primary outcome. We will present the odds ratio estimate of breastfeeding at the time of hospital discharge for the intervention effect (ESC versus usual institutional care) with a 95% CI.
Length of hospital stay Similar to the primary outcome (i.e., length of time until medically ready for discharge measure), we will consider LOS a count measure. Therefore, we will complete the analysis of LOS using a GLMM with log link assuming a negative binomial distribution to account for over-dispersion. The protocol study team will report point estimates for the group mean difference along with a 95% CI. Similar to the primary analysis, we will start with an unadjusted analysis and conclude with a model that includes possible interaction between period and intervention effect.
Composite measure of infant safety during birth hospitalization (seizures, accidental trauma [i.e., dropped infants], respiratory insufficiency due to opioid therapy, including documented apnea or need for respiratory support [positive pressure or supplemental oxygen]) We will monitor for the presence or absence of safety indicators such as seizures, accidental trauma, and respiratory insufficiency due to opioid therapy. To assess the safety concerns of the ESC care approach, we will create a binary composite measure of inpatient infant safety. The binary composite measure will have a value of 1 if any inpatient infant safety indicator is present and 0 otherwise. We will compare the proportion of positive inpatient safety concerns between the treatment groups using a GLMM with a logistic link function. We will follow the same four modeling strategies described for the primary outcome, and we will present the odds ratio estimate of inpatient safety concerns for the intervention effect (ESC versus usual institutional care) with a 95% CI.
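Constructing the binary composite is straightforward; a minimal sketch with hypothetical indicator columns:

```python
import pandas as pd

# Hypothetical per-infant indicator columns (1 = event present).
components = ["seizure", "accidental_trauma", "resp_insufficiency"]
df = pd.DataFrame({"seizure":            [0, 0, 1, 0],
                   "accidental_trauma":  [0, 0, 0, 0],
                   "resp_insufficiency": [0, 1, 0, 0]})

# Composite = 1 if ANY component indicator is present, else 0.
df["inpt_safety_composite"] = df[components].any(axis=1).astype(int)
print(df)
```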
Composite measure of infant safety during the first 3 months of life (acute/urgent care and/or ER visits and readmissions)
To assess the safety concerns of the ESC care approach, we will create a second composite measure consisting of outpatient infant safety indicators. We will base this outpatient composite measure on the presence or absence of acute/urgent care and/or ER visits, or readmissions, during the first 3 months of life. Similar to the inpatient composite safety measure, we will compare the proportion of positive outpatient safety concerns between the treatment groups using a GLMM with a logistic link function. We will follow the same four modeling strategies described for the primary outcome, and we will present the odds ratio estimate of outpatient safety concerns for the intervention effect (ESC versus usual institutional care) with a 95% CI.
Composite measure of critical safety outcomes during the first 24 months of life (non-accidental trauma and death)
The analysis team will compare the proportion of non-accidental trauma and death between the treatment groups using a GLMM with a logistic link function. We will follow the same four modeling strategies described for the primary outcome, and we will present the odds ratio estimate of non-accidental trauma and death for the intervention effect (ESC versus usual institutional care) with a 95% CI.
Analysis of the long-term outcome endpoints obtained under provision of consent
Growth assessed with respect to weight, length, head circumference, and weight-for-length normalized to World Health Organization growth curves We will calculate anthropometric z-scores at each assessment period for the purpose of analysis based on age- and sex-specific WHO norms. The analysis team will provide the mean and SD of infants' weights (z-scores) separately for each treatment group. The team will use a GLMM with an appropriate link function (i.e., identity link for a continuous outcome) to evaluate the effect of ESC on weight (z-scores). The model will examine how the treatment means differ (i.e., main treatment effect), how treatment means change over time (i.e., main time effect), and how differences between treatment means change over time (i.e., treatment-by-time effect). The team will carry out assessment across 2 time points: hospital discharge and 24 months of age. The GLMM analytical approach allows us to analyze correlated data obtained repeatedly from the same participant and account for the ICC among participants nested within the same clinical site. To account for potential imbalance in key demographic and site-level characteristics, the analysis team will utilize both unadjusted and adjusted GLMMs. Initially, the unadjusted GLMM will include the fixed categorical effects of intervention, time, and intervention-by-time interaction and the random site effect. We will calculate the point estimates and their respective CIs for the changes in infants' weights for each intervention group and for the difference in the estimated change between intervention groups. Additionally, the team will present the p-value of the difference in point estimates between intervention groups.
The analysis team will examine the impact of the ESC care approach on length, head circumference, and infant weight-for-length (z-scores) using the same analytical methods described for weight (z-scores). Additionally, the team will provide the mean and SD of infant BMI-z at 24 months for each treatment group. The team will use a GLMM with an identity link function to compare average BMI-z between the groups, and the team will report point estimates for the group mean difference along with a 95% CI.
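For reference, WHO growth-standard z-scores are computed from the published age- and sex-specific LMS parameters; a minimal sketch follows (the LMS values shown are illustrative, not actual WHO table entries, and the WHO adjustment for extreme values is omitted):

```python
import math

def lms_z(x, L, M, S):
    """z-score via the LMS method used by the WHO growth standards:
    z = ((x/M)**L - 1) / (L*S), or ln(x/M)/S when L == 0."""
    if L == 0:
        return math.log(x / M) / S
    return ((x / M) ** L - 1) / (L * S)

# Hypothetical example: an 11.5 kg toddler against illustrative LMS values.
print(round(lms_z(11.5, L=0.15, M=12.0, S=0.11), 2))  # about -0.39
```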
Sleep assessed with the brief infant sleep questionnaire (BISQ) The analysis team will provide the mean and SD of the BISQ survey separately for each treatment group. The team will use a GLMM with an appropriate link function (i.e., identity link for a continuous outcome) to evaluate the effect of ESC on infant sleep duration. The model will examine how the treatment means differ (i.e., main treatment effect), how treatment means change over time (i.e., main time effect), and how differences between treatment means change over time (i.e., treatment-by-time effect). The team will carry out assessment at 3 months and 12 months of age. The GLMM analytical approach allows us to analyze correlated data obtained repeatedly from the same infant and account for the intracluster correlation among infants nested within the same clinical site. To account for potential imbalance in key demographic and site-level characteristics, the analysis team will utilize both unadjusted and adjusted GLMMs.
Initially, the unadjusted GLMM will include the fixed categorical effects of intervention, time, and intervention-by-time interaction and the random site effect. We will calculate the point estimates and their respective CIs for the changes in infants' BISQ scores for each intervention group and for the difference in the estimated change between intervention groups. Additionally, the team will present the p-value of the difference in point estimates between intervention groups.
Enteral feeds during the first 6 months of life We will measure enteral feeds on a nominal scale (i.e., exclusive maternal breastmilk, combination of maternal breastmilk and formula, or exclusive formula feeding). The analysis team will tabulate count and relative frequency for each level and for each treatment group. To evaluate the association between enteral feeds with intervention, the team will use a mixed-effects multinomial logistic regression model to account for the longitudinal cluster study design and potential participant and site-level covariates.
Breastfeeding during the first 6 months of life The analysis team will report the proportion of direct breastfeeding for each treatment group during each of the assessment periods (1 month post-discharge, and 3 and 6 months of age). To evaluate the association of breastfeeding with intervention, the team will use a mixed-effects logistic regression model to account for the longitudinal cluster study design and potential participant- and site-level covariates. We will present the odds ratio estimate of breastfeeding at each assessment period for the intervention effect (ESC versus usual institutional care) with a 95% CI.
Number of emergency room visits and/or acute/urgent care visits
The analysis team will examine the impact of the ESC care approach on the reduction of ER visits and/or acute/urgent care visits using the same analytical steps described for the primary outcome. Given that the outcome measure is a count (number of ER visits, a non-negative integer), we expect that Poisson regression analysis, adjusted for clustering at the hospital level, will be appropriate. However, if the distribution is approximately normal or if the team observes overdispersion, we will consider linear mixed-effects regression or negative binomial models. Again, the team will use the same four model-building sequence described for the primary outcome. Specifically, we will start with an unadjusted model and conclude with a model that will include the possible interaction between period and intervention effect.
Readmissions
The analysis team will compare the proportion of readmissions between the treatment groups using a GLMM with a logistic link function. We will follow the same four modeling strategies described for the primary outcome, and we will present the odds ratio estimate of readmissions for the intervention effect (ESC versus usual institutional care) with a 95% CI.
Patient-reported outcomes measurement information system (PROMIS) short forms The analysis team will measure primary caregiver(s)' well-being with PROMIS short forms. The team will convert raw scores to t-scores and report descriptive statistics (mean ± SD) for each of the five domains (i.e., emotional support, meaning and purpose, anger, anxiety, and depression) separately for each treatment group. To compare each domain's composite scores between the ESC care approach and usual care, the team will use a GLMM with an identity link with fixed effects for the intervention group, time, and group-by-time interaction and a random effect for study site. Assessment periods will be discharge, 6 months, and 24 months. The team will report point estimates for the group mean difference along with a 95% CI. This analytical approach will be repeated for each of the 5 PROMIS domains.
Maternal postnatal attachment questionnaire The analysis team will examine the impact of the ESC care approach on the composite score of the MPAQ and its three subscales (quality, absence of hostility towards infant, and pleasure). Since these measures are continuous, the team will apply the GLMM with an identity link function. In addition, the model will examine the ESC intervention impact at hospital discharge and 6 months of age.
Family environment scale (FES) at 3 months Initially, we will base the overall assessment of the FES using the relationship dimension on a composite score of the 30 true-false items found on form R (i.e., range of 0-30).
The analysis team will provide the mean and SD of the composite FES scores separately for each treatment group, and the team will use a GLMM with an identity link function to compare average FES scores between the ESC care approach and usual care. We will report point estimates for the group mean difference along with a 95% CI. The team will use the same four model-building sequence described for the primary outcome. Additionally, we will repeat the analysis for each Relationship dimension subscale, namely the Cohesion, Expressiveness, and Conflict subscales.
Parenting sense of competence scale (PSOC) The analysis team will report descriptive statistics (mean ± SD) for the composite PSOC score separately for each treatment group. The team will compare the PSOC composite scores assessed at hospital discharge and 6 months of age using a GLMM with an identity link with fixed effects for the intervention group, time, and group-by-time interaction and a random effect for study site. We will calculate the point estimates and their respective CIs for the changes in PSOC scores for each intervention group and for the difference in the estimated change between intervention groups. Additionally, the team will present the p-value of the difference in point estimates between intervention groups.
Infant behavior questionnaire (IBQ-R) revised very short form at 3 and 12 months of age The analysis team will report descriptive statistics (mean ± SD) for each domain of the IBQ-R (i.e., positive affectivity/surgency, negative emotionality, and orienting/regulatory capacity) separately for each treatment group. The team will compare the IBQ-R composite scores for each domain using separate GLMM models with identity link with a fixed effect for the intervention group, time, and group-by-time and a random effect for study site. The team will report point estimates for the group mean difference along with a 95% CI for each domain.
Bayley scales of infant and toddler development, fourth edition (Bayley-4): cognitive, language, and motor at 24 months of age The analysis team will calculate descriptive statistics (mean ± SD, medians, percentiles) for each domain of the Bayley-4 separately for each treatment group. To compare the scores between the ESC and usual care groups, we will fit a linear mixed-effects model with a fixed effect for the intervention group and a random effect for study site. We will report point estimates for the group mean difference along with a 95% CI, and the team will repeat this analytical approach for each of the Bayley-4 domains.
Brief infant-toddler social and emotional assessment (BITSEA) at 24 months of age The analysis team will calculate descriptive statistics (mean ± SD, medians, percentiles) for the BITSEA problem scale and the BITSEA competence scale separately for each treatment group. To compare the scores between the ESC and usual care groups on the two BITSEA scales, we will fit separate linear mixed-effects models with a fixed effect for the intervention group and a random effect for study site. We will report point estimates for the group mean difference along with a 95% CI.
Influence of maternal childhood experiences on infant outcomes
The analysis team will calculate descriptive statistics (mean ± SD, medians, percentiles) for the ACE Questionnaire separately for each treatment group. To examine the relationship of the ACE Questionnaire with the IBQ-R and Bayley-4 scores, the analysis team will compute separate marginal Pearson correlation coefficients [62], an analog of the standard Pearson correlation coefficient for clustered data. If significant, we will perform a sensitivity analysis in which we include the ACE Questionnaire scores as a covariate in the final analytic models for the Bayley-4 and IBQ-R scores.
Interim analysis
In a stepped-wedge randomized controlled trial, interim analyses carried out early in the trial will have a large imbalance between the numbers of observations exposed to the usual care and intervention conditions. The imbalance will likely have power implications and will make a power analysis infeasible. The clustered nature of the data will also impact the analysis [63,64]. Therefore, the protocol study team will not conduct an interim analysis on the primary outcome for the purpose of study termination due to inferiority or superiority of the ESC care approach. The protocol study team will conduct an interim analysis for the long-term follow-up portion of the study to assess for futility due to under-recruitment. The projected informed consent rate for long-term follow-up is 30-40%. After each block of two periods (approximately 6 months), the protocol study team will compare the informed consent rate with the projected informed consent rate. If the actual informed consent rate over a block of two periods is below 30%, then the protocol study team will monitor the informed consent rate for another block of two periods. If the cumulative informed consent rate remains below 30%, then the protocol study team will ask the Data and Safety Monitoring Committee (DSMC) to review accrual trajectories and to determine, with the protocol study team, whether measures can be taken to improve the accrual rate. The DSMC will consider whether to stop accrual to the long-term follow-up portion of the study due to an insufficient informed consent rate. Additionally, the DSMC will monitor the study for safety concerns.
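The consent-rate futility rule above is mechanical enough to sketch directly; the counts below are hypothetical:

```python
PROJECTED_MIN = 0.30  # lower bound of the projected 30-40% consent rate

# Hypothetical (consented, eligible) counts per two-period block (~6 months each).
blocks = [(52, 190), (48, 175)]

block_rates = [c / e for c, e in blocks]
cumulative = sum(c for c, _ in blocks) / sum(e for _, e in blocks)

# Rule from the text: a low first block triggers continued monitoring; a low
# cumulative rate after a second block triggers DSMC review.
if block_rates[0] < PROJECTED_MIN and cumulative < PROJECTED_MIN:
    print("Refer accrual trajectory to the DSMC for futility review.")
```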
Sample size and power estimates
We based the sample size estimate (Table 5) on the primary outcome, which is the comparison of the average length of time until the infant is medically ready for discharge between groups (ESC care approach versus usual care). In much of the literature, researchers tend to report the overall length of inpatient hospital stay (LOS). The average reported LOS for infants managed for NOWS is approximately 18 days (SD=8) [28]; we expect a reduction of 4 days with use of the ESC care approach. For this study, we used preliminary data from the ACT NOW Current Experience Study to obtain the mean and standard deviation estimates for LOS. For the purpose of our sample size justification, we used these estimates as a proxy for our estimates of the average length of time until the infant is medically ready for discharge. Based on the Current Experience Study, the average LOS is approximately 11 days (SD=11). Additionally, we derived an estimate of the ICC of 0.25 from this preliminary data analysis. Richard Hooper and colleagues [65] noted that most sample size justifications for stepped-wedge design studies follow a mixed-effects regression approach for a cross-sectional stepped-wedge design, as described by Hussey and Hughes [66], which assumes that the within-period ICC and between-period ICC are equal. They define the cluster autocorrelation coefficient (CAC) as the ratio of the between-period ICC over the within-period ICC. We calculated statistical power based on the methodology for the stepped-wedge with transition period design proposed by Hooper et al. using the R Shiny app written by Hemming and Kasza [65]. Given that our primary outcome is a count measure, we used the ACT NOW Current Experience Study to obtain an estimate of the overdispersion parameter (φ). McCullagh and Nelder suggested that the overdispersion parameter estimate (φ) is simply the ratio of the deviance or the Pearson chi-square to its associated degrees of freedom [67]. Thus, a total sample size of 864 infants would achieve 90% power to detect a difference of 4 days between the groups with an estimated CAC of 0.8 and φ=10. This assumes an 8-step stepped-wedge with transition period design with approximately 24 total sites. We will randomize each site into 1 of 8 blocks, and we expect each site to enroll an average of 4 infants during each period, for 36 total infants per site during the study duration. Since we have no prior information regarding the CAC estimate, Table 5 provides the total sample size required assuming a CAC ranging between 0.6 and 0.8 and differences of 3 days, 3.5 days, and 4 days. Based on the ACT NOW Current Experience Study, the expected number of infants with NOWS delivered at participating sites annually will be approximately 1500-2000. Therefore, our study will still be sufficiently powered (i.e., 85%) to detect a difference of 3 days between the groups with CAC=0.8. The power calculation assumes a significance level of 5%, delivery of infants with NOWS equally distributed across hospital groupings, and analysis by negative binomial GLMM.
To address the primary study hypothesis, the protocol study team will randomize a minimum of 24 sites and a maximum of 28 sites to 1 of 8 blocks of a stepped-wedge with transition design (Table 1), with each site enrolling an average of 36 infants. During any single study period (see Table 1), a site may enroll no more than 16 infants. Although we calculated the sample size for the overall trial using the power calculation for the primary hypothesis, we conducted the following power calculations to assure adequacy of the sample size to show a potential effect of the intervention on infant neurobehavioral functioning and development. To evaluate the impact of the ESC care approach on infant neurobehavioral function and development using measures such as the IBQ-R and Bayley-4, we must obtain primary caregiver consent. We anticipate that not all participants will provide consent for the long-term outcome portion of the study. Table 6 provides an estimate of the effect size based on varying consent rates and CAC estimates with the study having 80% power. Again, based on the ACT NOW Current Experience Study, we expect that the number of infants with NOWS delivered at participating sites annually will be approximately 1500. Thus, for a 17-month enrollment period, the protocol study team expects a total of 2125 infants with NOWS to be delivered at participating study sites. Assuming a 40% or 30% consent rate, this produces a total sample size of 850 (40% consent rate) or 638 (30% consent rate) infants. Cohen defined effect size as the mean difference, μ1 − μ2, divided by the standard deviation, σ, of either group [68]. However, Rosnow and Rosenthal noted that in practice, researchers commonly use the pooled SD (defined as the root mean square of the 2 SDs) [69]. Effect sizes are generally classified as small (≤ 0.3), medium (~0.5), and large (≥ 0.75). For infant neurobehavioral functioning based on the IBQ-R, the study will have 80% power to detect an expected mean difference of 0.28 points in the Orienting/Regulatory Capacity domain, assuming a 30% consent rate and CAC=0.8, based on a mixed-effects model with a fixed treatment effect and a random site effect at a significance level of 0.05. With an SD of 0.70, the detectable mean difference constitutes a moderate effect size. We based our estimated mean (5.0) and SD (0.70) for the Orienting/Regulatory Capacity domain on the summary statistics provided by Putnam and colleagues [45], in which the authors provided summary data for the IBQ-R domains extracted from six standard-form data samples.
For the neurodevelopmental outcome based on the Bayley-4, the study will have 80% power to detect an expected mean difference of 6 points, assuming a 30% consent rate and CAC=0.8, based on a mixed-effects model with a fixed treatment effect and a random site effect at a significance level of 0.05. With an SD of 15, the detectable mean difference constitutes a moderate effect size.
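For illustration, the following minimal sketch applies the effect-size convention described above (Cohen's d with the pooled SD taken as the root mean square of the two SDs) to the IBQ-R and Bayley-4 figures quoted in the text, assuming equal group SDs as the text does:

```python
# Minimal sketch of the effect-size convention described above: Cohen's d with
# the pooled SD taken as the root mean square of the two group SDs (Rosnow &
# Rosenthal). The inputs below are the figures quoted in the text.
import math

def cohens_d(mean_diff: float, sd1: float, sd2: float) -> float:
    pooled_sd = math.sqrt((sd1**2 + sd2**2) / 2)  # RMS of the two SDs
    return mean_diff / pooled_sd

# IBQ-R Orienting/Regulatory Capacity: detectable difference 0.28, SD 0.70
print(round(cohens_d(0.28, 0.70, 0.70), 2))  # 0.4 -> moderate
# Bayley-4: detectable difference 6 points, SD 15
print(round(cohens_d(6.0, 15.0, 15.0), 2))   # 0.4 -> moderate
```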
Available population
In December 2018, we completed data abstraction for the ACT NOW Current Experience Study. Twenty-five ISPCTN and five NRN sites participated in this study.
Projected recruitment time
Site recruitment
We will recruit approximately 24 sites for this study and will randomize these sites into 8 blocks. Initial assessment of site interest in study participation across the networks suggests an adequate number of sites to meet our site recruitment goal. Each site's ability to initiate a change in practice within its organization will affect actual site recruitment. Recruitment of all sites will take an estimated 3 months. We will randomize sites into blocks once recruitment is complete, as sketched below.
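For illustration only, a minimal sketch of allocating 24 recruited sites into 8 blocks of 3; the site labels and seed are invented, and the trial's actual allocation is performed by the data coordinating center:

```python
# Minimal sketch: randomly allocating 24 recruited sites into 8 blocks of 3.
# Site labels and the seed are invented for illustration; the trial's actual
# allocation procedure is defined by the data coordinating center.
import random

sites = [f"site_{i:02d}" for i in range(1, 25)]
random.seed(2020)  # fixed seed so the illustration is reproducible
random.shuffle(sites)

blocks = {b + 1: sites[3 * b:3 * (b + 1)] for b in range(8)}
for block, members in blocks.items():
    print(f"block {block}: {members}")
```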
Site training and implementation
Site training and implementation will take approximately 3 months, as we will first train a core group of site champions, followed by training of all site personnel by the core group. The protocol study team will train the site champions off-site; as such, training may occur in parallel with the end of the final usual care period at the site. Once a site has achieved the training milestones, the site will formally implement ESC. After this initial implementation, the site will step into the ESC period (see Table 1). The total enrollment period is 20 months, with each site actively enrolling infants for 17 of the 20 months. If the site research team obtains consent for the long-term follow-up portion of the trial, the site research team will follow the infant for 24 months. The total length of the study will be approximately 44 months.
Study monitoring plan
We will conduct clinical site monitoring to ensure that we protect the rights and well-being of study participants, that the reported trial data are accurate, complete, and verifiable, and that the conduct of the study complies with the currently approved protocol/amendment(s), with International Council for Harmonisation Good Clinical Practice, and with applicable regulatory requirements.
• A member of the DCC clinical operations staff or their designee will monitor the study.
• The clinical monitoring team will plan and conduct an on-site visit at least once during the course of the study, and more often if needed for cause.
• Details of clinical site monitoring are in the Clinical Monitoring Plan. The plan describes who will conduct the monitoring, at what frequency monitoring will occur, at what level of detail monitoring will be performed, and how monitoring reports will be distributed.
Definition of adverse events and serious adverse events
Adverse event (AE): AE means any untoward medical occurrence associated with the use of an intervention in humans, whether or not considered intervention related.
Serious adverse event (SAE): An AE is considered "serious" if, in the view of either the investigator or sponsor, it results in any of the following outcomes:
1. Death
2. Life-threatening AE (life-threatening means that the study participant was, in the opinion of the investigator or sponsor, at immediate risk of death from the reaction as it occurred and required intervention)
3. Persistent or significant incapacity or substantial disruption of the ability to conduct normal life functions
4. Inpatient hospitalization or prolongation of existing hospitalization
5. Important medical event that may not result in 1 of the above outcomes but may jeopardize the health of the study participant or require medical or surgical intervention to prevent 1 of the outcomes listed above
Classification of an adverse event
Severity of event
For AEs, the site research team will use the following guidelines to describe severity. The site investigator will determine severity.
• Mild - Events require minimal or no treatment and do not interfere with the participant's daily activities.
• Moderate - Events result in a low level of inconvenience or concern with the therapeutic measures. Moderate events may cause some interference with functioning.
• Severe - Events interrupt a participant's usual daily activity and may require systemic drug therapy or other treatment. Severe events are usually potentially life threatening or incapacitating. Of note, the term "severe" does not necessarily equate to "serious."
Relationship to study intervention
The site research team will grade the degree of certainty about causality by using the categories below.
• Related - The AE is known to occur with the study procedures, there is a reasonable possibility that the study procedures caused the AE, or there is a temporal relationship between the study procedures and the event. Reasonable possibility means that there is evidence to suggest a causal relationship between the study procedures and the AE.
• Not Related - There is not a reasonable possibility that the study procedures caused the event, there is no temporal relationship between the study procedures and event onset, or an alternate etiology has been established.
Expected AEs
Expected AEs include seizures, accidental trauma, severe weight loss (greater than 15% from birthweight) and respiratory insufficiency. Expected AEs that could occur during the follow-up portion of the study include acute/urgent care and/or ER visits for worsening symptoms of NOWS. Hospital readmission to assess and manage symptoms of NOWS and non-accidental trauma may also occur. Anticipated rates are noted in Table 7.
Time period and frequency for event assessment and follow up
For this study, the protocol study team will collect the following AEs: 1) all expected AEs (seizures, accidental trauma, severe weight loss, and respiratory insufficiency), and 2) SAEs related to study procedures. The occurrence of an AE or SAE may come to the attention of study personnel during the hospital stay, by the clinical team with administration of questionnaires, or by the medical monitor upon reviewing data. The site research team will capture all AEs on the appropriate case report form. Information to be collected includes event description, date/time of onset, date/time of resolution, clinician's assessment of severity, relationship to study intervention and time of resolution/stabilization of the event. Site research teams must follow all AEs until the AE meets one of the following criteria: resolution, the condition stabilizes, the event is otherwise explained or is judged by the protocol study team to be no longer clinically significant, or the participant is lost to follow-up. The site research team will collect AEs during the initial hospitalization through hospital discharge.
Data monitoring and safety
The independent DSMC will have overall responsibility for interim data monitoring and will operate based on the ISPCTN and NRN charter for the DSMC. The DSMC will formally review interim safety data in a sequential fashion using interim monitoring boundaries after approximately 25%, 50%, and 75% of the study sites (6, 12, and 18 sites, respectively) have transitioned to ESC. Treatment groups will be compared statistically using the analysis planned for the final analyses of the safety outcomes, as previously outlined. Safety oversight will be under the direction of the DSMC. Safety outcomes include the components of the inpatient composite safety outcome and those of the outpatient composite safety outcome (see Table 7). The DSMC may request other outcomes at their discretion. Formal statistical testing for an imbalance of seizures, accidental trauma, or respiratory insufficiency due to opioid therapy, in either treatment group, will be based on a comparatively liberal Lan-DeMets Pocock boundary at the three interim safety reviews, to guard against false positives while allowing for stopping at reasonable levels of evidence. Thus, at each interim, an increased incidence of seizures in either treatment group with P < 0.0179 (for 4 total tests) will be considered statistically significant evidence of harm that the DSMC can use to recommend suspension of the trial for safety reasons. The same statistical testing will also be conducted for the components of the outpatient composite safety outcome. In addition to the formal safety outcomes, the DSMC will examine other safety outcomes, including all reported SAEs by treatment group, in considering a recommendation to suspend the trial for safety reasons.
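For illustration only, the following minimal sketch evaluates the Pocock-type Lan-DeMets alpha-spending function, alpha*(t) = alpha * ln(1 + (e - 1) * t), at the planned information fractions; deriving exact nominal boundaries such as the protocol's P < 0.0179 additionally requires recursive numerical integration, as implemented in dedicated group-sequential software (for example, the R package gsDesign):

```python
# Minimal sketch of the Pocock-type Lan-DeMets alpha-spending function,
# evaluated at four looks (after 6, 12, and 18 of 24 sites transition, then
# the final analysis). This prints cumulative and incremental alpha spent
# only; it does not compute the group-sequential boundaries themselves.
import math

alpha = 0.05
info_fractions = [0.25, 0.50, 0.75, 1.00]

spent_prev = 0.0
for t in info_fractions:
    spent = alpha * math.log(1 + (math.e - 1) * t)  # alpha*(t)
    print(f"t={t:.2f}: cumulative alpha spent={spent:.4f}, "
          f"incremental={spent - spent_prev:.4f}")
    spent_prev = spent
```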
The Medical Monitor will provide input on safety considerations, evaluate safety trends, and provide oversight throughout the life cycle of the clinical research, in accordance with the approved protocol. This role includes reviewing and monitoring safety events on a regular basis, advising the protocol investigators on trial-related medical questions or problems, reviewing cumulative participant safety data, and making recommendations regarding the data to the DSMC.
Data management
The data management center, RTI International, will:
• Collaborate in the development, implementation, and monitoring of the ESC protocol.
• Provide data management, including development of CRFs and appropriate data collection systems.
• Supervise data entry activities, including instructing and certifying data entry personnel in software and hardware usage, quality assurance of data entry, etc.
• Manage the Data Safety and Monitoring Committee for the trial, including scheduling meetings and maintaining the DSMC charter.
• Oversee the receipt and reconciliation of safety data.
• Supervise NRN-site quality assurance efforts, including conducting site visits and remote monitoring of data.
• Prepare and distribute monthly reports detailing data received, data consistency, missing data and adherence to protocol.
• Disburse capitation payments to clinical centers on the basis of enrolled participants and other study-specific milestone triggers specified in the study protocol.
• Provide the logistical support necessary to run an efficient and productive network.
Publication and data sharing policy
This study will comply with the National Institutes of Health (NIH) Public Access Policy, which ensures that the public has access to the published results of NIH-funded research. The study will also comply with the NIH Data Sharing Policy and Policy on the Dissemination of NIH-Funded Clinical Trial Information and the Clinical Trials Registration and Results Information Submission rule. As such, this study will:
• Register with ClinicalTrials.gov and submit primary outcome results. The ClinicalTrials.gov number is NCT04057820.
• Publish results. The protocol study team will make every attempt to publish results in peer-reviewed journals. The team will submit all final peer-reviewed journal manuscripts from this study to the digital archive PubMed Central upon acceptance for publication.
• Deposit data for data sharing with other researchers. Within the bounds of relevant IRB approvals and guidelines for protection of personally identifiable data, the protocol study team will deposit de-identified data from this study in an appropriate, NIH-approved data repository.
Trial status
Protocol version 5, August 7, 2020. Enrollment began on September 8, 2020. The anticipated end of enrollment is March 22, 2022. Long-term follow-up will be complete in May 2024. | 2022-08-09T13:33:03.860Z | 2022-08-09T00:00:00.000 | {
"year": 2022,
"sha1": "3b96b9d6a623091839166a8118c27964d4e95078",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Springer",
"pdf_hash": "3b96b9d6a623091839166a8118c27964d4e95078",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
245896651 | pes2o/s2orc | v3-fos-license | Initial experience with laparoscopic posterior retroperitoneal adrenalectomy in single tertiary center
Purpose Laparoscopic posterior retroperitoneal adrenalectomy (LPRA) is a surgical method that accesses the adrenal gland through the back. The aim of this study was to report our initial experience with LPRA and to evaluate the possibilities for its surgical application. Methods From March 2018 to December 2019, a total of 30 consecutive patients diagnosed with adrenal tumor who underwent surgical treatment at Pusan National University Hospital were enrolled. Clinicopathologic features and various peri- and postoperative parameters were analyzed by retrospective medical record review. The mean age of the patients was 48.20±13.66 years. Results The mean body mass index (BMI) was 25.50±4.30 kg/m2. Primary hyperaldosteronism was the most frequent preoperative diagnosis (n=13, 43.4%), followed by adrenal incidentaloma (n=8, 26.6%), Cushing syndrome (n=5, 16.6%) and pheochromocytoma (n=4, 13.3%). The mean size of the postoperative adrenal tumor was 2.72±1.76 cm. The mean operating time was 162±58.14 minutes. Among the 30 patients, 28 underwent total adrenalectomy (93.3%) and two underwent cortical-sparing adrenalectomy (6.7%). When LPRA was performed for patients with BMI >23.16 kg/m2, the operating time was longer than the average (P=0.016). Conclusion LPRA was suitable and safe for patients with benign adrenal tumors. BMI, retroperitoneal fat density and postoperative adrenal weight may be related to the operating time, so they should be considered when deciding on a surgical method for adrenalectomy.
INTRODUCTION
After Gagner et al. [1] first introduced laparoscopic adrenalectomy for pheochromocytoma and Cushing syndrome in 1992, laparoscopic adrenalectomy became the standard surgical procedure for adrenal tumors, replacing traditional open adrenalectomy. Laparoscopic adrenalectomy is known to have the advantages of fewer postoperative complications, less bleeding, lower analgesic requirements, a shorter hospital stay, and a shorter time to return to normal physical activities and diet compared to open adrenalectomy [2,3]. Recently, indications for laparoscopic surgery have been expanded to include metastatic lesions and malignant tumors, and its surgical safety for pheochromocytoma has been reported [4][5][6].
Mercan et al. [7] developed laparoscopic posterior retroperitoneal adrenalectomy (LPRA) in 1993, and LPRA has remained one of the surgical methods for adrenal tumors. LPRA accesses the adrenal gland through the back and has several advantages over laparoscopic transperitoneal adrenalectomy (LTPA): direct access to the adrenal gland, a minimized need for intra-abdominal dissection, and greater utility for patients with previous abdominal surgery [8]. However, despite these advantages, LPRA is not as popular as LTPA because of the unfamiliar operative field and narrower working space, and worldwide experience has been limited [9][10][11]. Therefore, we report our initial experience with LPRA performed on 30 consecutive patients by one surgeon.
Study population
From March 2018 to December 2019, a total of 30 consecutive patients diagnosed with adrenal tumor underwent surgical treatment at our hospital. We reviewed the medical records of these patients using the endocrine surgery database at Pusan National University Hospital. Various clinicopathologic features were assessed, including age at operation, gender, body mass index (BMI), American Society of Anesthesiologists (ASA) grade, adrenal tumor size, length of hospital stay, type of disease, adrenal tumor site, estimated blood loss (EBL), mean operation time, operation type, perinephric fat density (FD), postoperative complications and postoperative pain. The perinephric FD was calculated as the ratio of the extent of retroperitoneal fat tissue to the area of the retroperitoneal cavity on adrenal computed tomography. This study was approved by the Pusan National University Hospital Institutional Review Board (IRB No. J-2108-025-093). The requirement for informed consent was waived because this study was a retrospective medical record review.
Operative technique
LPRA was performed with the patient in the prone position (Fig. 1). The first incision was made just below the tip of the 12th rib, and the retroperitoneal space was bluntly dissected with a finger. The second and third ports were then placed blindly on the finger (Fig. 2). After CO2 insufflation (15-20 mmHg), fatty tissue from the posterior aspect of the kidney was dissected and the superior pole of the kidney was exposed. Adrenalectomy was performed by resecting the adrenal gland from adjacent structures and ligating the adrenal vein. The resected adrenal gland was placed in an endo-bag and pulled out through the first incision site [11].
Statistical analysis
Data were expressed as mean ± standard deviation or number (%) for descriptive statistics. The t-test was used for continuous variables, the chi-square test was used for categorical variables, and Pearson product-moment correlation was used to assess correlations. In all cases, a P-value < 0.05 was considered statistically significant. All statistical analyses were performed using IBM SPSS Statistics 23.0 (IBM Corp., Armonk, NY, USA).
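For illustration, the two tests named above can be reproduced outside SPSS; the following minimal sketch uses scipy with invented placeholder data, not the study's patient records:

```python
# Minimal sketch of the two tests named above, using scipy in place of SPSS
# and invented placeholder data (not the study's patient records).
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# t-test for a continuous variable, e.g., operating time split by a BMI cutoff
op_time_low_bmi = rng.normal(150, 50, size=15)
op_time_high_bmi = rng.normal(190, 55, size=15)
t, p = stats.ttest_ind(op_time_low_bmi, op_time_high_bmi)
print(f"t-test: t={t:.2f}, p={p:.3f}")

# Pearson chi-square test for a 2x2 categorical comparison, e.g.,
# longer-than-mean operating time (yes/no) by BMI group
table = np.array([[10, 5],
                  [4, 11]])
chi2, p, dof, _ = stats.chi2_contingency(table, correction=False)
print(f"chi-square: chi2={chi2:.2f}, p={p:.3f}")
```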
RESULTS
The clinicopathologic characteristics of the 30 patients are shown in Table 1. The mean age of the patients was 48 years, and there were 13 males and 17 females. The mean BMI was 25.5 kg/m2, and 21 patients were ASA grade II and nine patients were ASA grade III. There was no bilateral adrenalectomy; all adrenalectomies were unilateral procedures. Primary hyperaldosteronism was the most frequent preoperative diagnosis (n = 13, 43.4%), followed by adrenal incidentaloma (n = 8, 26.6%), Cushing syndrome (n = 5, 16.6%), and pheochromocytoma (n = 4, 13.3%). In the final pathologic results, adrenocortical adenoma was the most frequently operated disease (n = 18, 60%), followed by pheochromocytoma (n = 4, 13.3%) and adrenal hyperplasia (n = 3, 10%). The mean size of the postoperative adrenal tumor was 2.72 cm. The perioperative and postoperative parameters of the 30 patients are shown in Table 2. The mean operating time was 162 minutes. Among the 30 patients, 28 underwent total adrenalectomy (93.3%) and two underwent cortical-sparing adrenalectomy (6.7%). The mean perinephric FD was 0.4 cm and the mean postoperative adrenal weight was 29 g. The mean EBL during the operation was 199 mL, and the mean length of hospital stay was 3.7 days. In our study, the maximal EBL was 1,500 mL, occurring during adrenalectomy for a 5 cm-sized pheochromocytoma. There were no morbidity- or mortality-related complications in this study population. In only one patient, ilioinguinal neuralgia occurred after surgery and self-resolved within 1 week. There was no conversion to open adrenalectomy in this study. The mean duration of postoperative analgesic intravenous injection with ketorolac tromethamine (Trolac) 30 mg was 1 day.
In our study, as the number of surgeries increased, the operating time became shorter and the operating time of last case (30th case) was 75 minutes (Fig. 3).
In analyzing the correlations between the various pre- and peri-operative parameters, box plots showed that BMI (Fig. 4A), perinephric FD (Fig. 4B) and postoperative adrenal weight (Fig. 4C) were higher in surgeries that took more time than the mean operating time than in surgeries that took less time, although the differences were not statistically significant. Box plots also revealed a wider range of perinephric FD and postoperative adrenal weight in surgeries with operating times above the average than in those below the average (Fig. 4B and C). We also found that when LPRA was performed for patients with BMI > 23.16 kg/m2, the operating time was longer than the average (P = 0.016).
DISCUSSION
Laparoscopic adrenalectomy is considered the gold standard surgical method for removing benign adrenal masses [12,13]. Various studies have described the advantages of laparoscopic adrenalectomy [12,14], and LTPA has been the most popular surgical procedure for adrenal tumors, except in some special cases such as primary malignant tumor or metastatic malignant tumor. After Mercan et al. [7] introduced LPRA in 1993, LPRA has recently been widely used as a surgical method for resection of adrenal tumors. Various previous studies have reported the differences between LTPA and LPRA.
In several studies, no significant differences in outcome variables were found between LPRA and classic LTPA [15][16][17]. However, recent studies reported a shorter hospital stay and lower postoperative pain for LPRA than for LTPA [18,19]. In this study, we observed no complications after LPRA except one case of ilioinguinal neuralgia. The location of the adrenal glands in the retroperitoneal space and the direct approach explain the shorter operation times and lower blood loss, because intra-abdominal dissection is minimized in LPRA compared with LTPA [11]. The adrenal gland is exposed directly in the retroperitoneal space, without mobilization of the liver, pancreas and spleen, which is reflected in the low complication rates. The complication we observed, ilioinguinal neuralgia, occurs not only in LPRA but also in other laparoscopic surgeries and improves after a few days. LPRA can also be performed in patients with previous abdominal surgery. Furthermore, it can be performed for bilateral adrenal resection without the need for repositioning, and it is feasible in obese patients, as the abdominal fat is located on the non-operative ventral side of the patient. However, because most surgeons are not familiar with the anatomy of the retroperitoneal space, a substantial learning time is required for LPRA [11,20].
A previous study suggested that about 20-25 LPRAs would probably be necessary to master the new technique [21]. In our study, the operating time of the first case was 330 minutes, but as the number of surgeries increased, the operating time became shorter, and the operating time of the last case (the 30th) was 75 minutes.
In our study, the maximal preoperative and postoperative tumor sizes were 20 cm and 8.2 cm, respectively (Table 1). In that case, the adrenal tumor was a 20 cm-sized pure adrenal cyst, and LPRA was performed while the adrenal cystic fluid was aspirated. Walz et al. [22] showed that the retroperitoneal approach is difficult to perform in patients with large tumors (> 7-8 cm), but we think that LPRA is suitable and safe for patients with benign pure adrenal cysts larger than 7-8 cm. Furthermore, the maximal EBL was 1,500 mL, during adrenalectomy for a 5 cm-sized pheochromocytoma (Table 2). While performing adrenalectomy for pheochromocytoma, the tumor often bleeds, and once the tumor begins to bleed, it is often difficult to control. In such cases, we think it is important to control bleeding by gauze packing unless there is a major blood vessel injury. If electrocautery is applied excessively to the tumor bleeding site, bleeding may be further promoted.
In our study, although the values were not statistically significant, BMI, retroperitoneal FD and postoperative adrenal weight showed correlations with the operating time. Walz et al. [22] showed that the retroperitoneal approach is difficult to perform in patients with a high BMI. However, a more recent study showed no correlation between BMI and the outcome measures (operation time, recovery time, and blood loss) [11]. Therefore, BMI, retroperitoneal FD and postoperative adrenal weight do not preclude LPRA, but they may be factors that should be carefully considered when performing LPRA.
In conclusion, based on our initial experience with LPRA in a single tertiary center, LPRA was suitable and safe for patients with benign adrenal tumors. BMI, retroperitoneal FD and postoperative adrenal weight may be related to the operating time, so they should be considered when deciding on a surgical method for adrenalectomy. | 2022-01-13T16:16:44.566Z | 2021-12-01T00:00:00.000 | {
"year": 2021,
"sha1": "9e0c8e389c68fa767946553b2c879cd310c9d276",
"oa_license": "CCBYNC",
"oa_url": "http://www.kjco.org/upload/kjco-17-2-90.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "a00cf4251a1bcbeeb7d5e0e54f881a1b33f883ed",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
243056949 | pes2o/s2orc | v3-fos-license | Comparative study of efficacy of topical dexamethasone 0.1% with difluprednate 0.05% in post-operative small incision cataract surgery
Introduction: Post-operative ocular inflammation is a common occurrence following cataract surgery. Corticosteroids have been used to treat ocular inflammation; however, they carry a risk of side effects, particularly an increase in intraocular pressure (IOP). Previous studies have shown that difluprednate is more efficacious than dexamethasone. Hence, this study was undertaken to compare the efficacy of difluprednate ophthalmic emulsion 0.05% and dexamethasone 0.1% in postoperative management after small incision cataract surgery. Materials and Methods: A total of 200 patients were selected as per the inclusion criteria and divided equally between the difluprednate and dexamethasone groups. Dexamethasone 0.1% or difluprednate 0.05% was prescribed post-operatively following small incision cataract surgery. Patients were examined on post-operative days 1, 7, 15 and 30 for the anterior segment by slit-lamp examination and for side effects. IOP was measured in both groups on day 30. Results: In our observation, both drugs were efficient in reducing anterior chamber cells and flare, with difluprednate acting more rapidly. Corneal edema was reduced equally by both drugs at all observation periods. There was no clinically significant IOP elevation in either the difluprednate or the dexamethasone group. Difluprednate was found to be more effective in controlling pain compared to dexamethasone. Conclusion: As per the present study, both difluprednate ophthalmic emulsion 0.05% eye drops and dexamethasone 0.1% eye drops were equally effective in reducing post-cataract-surgery inflammation. Hence, difluprednate emulsion 0.05% can be used in postoperative management after cataract surgery; nonetheless, further clinical trials with a long follow-up period are required.
Introduction
Cataract is the major cause of blindness across the globe, and around 1 to 4% of the world's population suffers from blindness. Cataract surgery should be performed with equal emphasis on the quality and quantity of surgery. 1 Surgical techniques in all fields of ophthalmology have evolved considerably over the years, from the transition to clear corneal incisions by anterior segment surgeons to the adoption of small-gauge minimally invasive pars plana vitrectomies by vitreo-retinal specialists. Cataract surgery can cause ocular inflammation, both from the surgical trauma itself and from various physical, chemical and biological agents introduced during the surgery. The host response to these injurious agents, in the form of inflammation, is a complex interaction of immunoreactive cells, their products and other chemical mediators of inflammation. More is known today about the chemical mediators of inflammation (the prostaglandins, the kinins, the complement system, etc.) and their role in inflammation. It is becoming more apparent that prostaglandins help to mediate the response of the eye to acute trauma. This response is marked by miosis, hyperemia of the conjunctiva, disturbance of the blood-aqueous barrier and a transient increase in intraocular pressure followed by hypotension. 2 In the last few years, cataract surgery techniques have improved tremendously, and the operation has become far less traumatic. Owing to these improved techniques, there is a lower chance of ocular inflammation and a higher chance of maintaining the integrity of the blood-aqueous barrier. In 1950, with the arrival of cortisone in the medical field, there was a major advancement in the management of inflammatory conditions. 3 In the immediate post-operative period, topical corticosteroids are employed to suppress the production of inflammatory mediators, providing local treatment while minimizing systemic side effects. Corticosteroids prevent the release of arachidonic acid from cell membrane phospholipids, thereby inhibiting the genesis of prostaglandins and leukotrienes and contributing to the disruption of the inflammatory cascade. These agents are continued until the anterior chamber (AC) reaction has resolved and the blood-aqueous barrier has been re-established. 4 Topical corticosteroids are routinely used to reduce inflammation after surgery, but their side effects include inhibition of wound healing, an increased risk of infection and, in some patients, intraocular hypertension. 5 Difluprednate 0.05% received US Food and Drug Administration (FDA) approval in June 2008 for the postoperative management of inflammation and pain; it is the first ophthalmic steroid approved by the FDA since 1973. A study showed that difluprednate emulsion 0.05% safely decreased the inflammation associated with cataract surgery with no serious adverse effects compared to placebo. Thus, difluprednate, the first such agent in 35 years, has proved to be highly potent and safe, and it effectively reduces post-operative pain. 6 The emulsion formulation of difluprednate can be credited for its dose uniformity. 7 Various studies have shown that difluprednate is more efficacious than dexamethasone, but there are few studies directly comparing the efficacy of difluprednate and dexamethasone. Hence, this study was undertaken to compare the efficacy of difluprednate ophthalmic emulsion 0.05% and dexamethasone 0.1% in postoperative management after small incision cataract surgery.
Materials and Methods
A total of two hundred patients diagnosed with senile cataract who presented to HKE Society's Basaveshwara General and Teaching Hospital, Kalaburagi, were selected for the research. The study period was from December 2015 to June 2017. Before the start of the study, informed consent was obtained from each patient, and approval was procured from the institutional ethics committee.
A total of two hundred patients with senile cataract were divided into two groups:
Group A - 100 cases (difluprednate ophthalmic emulsion 0.05% group)
Group B - 100 cases (dexamethasone phosphate ophthalmic suspension 0.1% group)
Inclusion criteria: Patients diagnosed with cataract, aged more than 40 years, undergoing small incision cataract surgery with PC IOL implantation.
Exclusion criteria: Patients below 40 years of age, patients on long-term steroids or NSAIDs, patients sensitive to any of the study or procedural medicines, patients with preoperative inflammation in either eye, patients with a history of ocular trauma, previous intraocular surgery or contact lens wear, patients developing intraoperative complications, patients not giving consent, and patients undergoing phacoemulsification.
Methodology: Before the surgical procedure, the patients were evaluated as follows: history was taken, slit-lamp examination was performed, visual acuity was assessed, keratometry and A-scan were done, routine pre-operative blood investigations were carried out, and blood pressure was measured. Intraocular pressure measurement and a lacrimal patency test were done as part of the investigations. The intraocular surgery was a conventional SICS with PCIOL implantation performed by experienced surgeons. Surgery was uncomplicated in all cases. The study medication, topical difluprednate ophthalmic emulsion (0.05%) or dexamethasone phosphate ophthalmic solution (0.1%), was administered in a randomized fashion.
Patients in Group A were administered difluprednate eye drops 4 times per day and moxifloxacin eye drops 4 times per day for the 1st week, followed by tapering of difluprednate until the 6th week.
Patients in Group B received dexamethasone eye drops 8 times and moxifloxacin eye drops 4 times a day for the 1st week, followed by tapering of dexamethasone until the 6th week.
On post-operative days 1, 7, 15 and 30, each patient was examined for pain, watering or any other symptom experienced by the patient. Slit-lamp examination was carried out to assess inflammation, and Snellen's chart was used to assess visual acuity. Slit-lamp examination was carried out under standardized conditions: the room was illuminated with dim light, the lamp voltage was high, and a 3 × 1 mm aperture was used for grading anterior chamber flare and cells (Hogan's grading), with an illumination angle of 30 degrees and a magnification of 16×. A visual analogue scale (VAS) was used to determine ocular pain. Symptoms of watering and discomfort were recorded as present or absent.
At each visit, the following parameters were noted, and the degree of each parameter was graded as 0, 1, 2, 3, etc. On the VAS, pain was graded as 0 (absent), 1 (mild), 2 (moderate), 3 (severe) and 4 (extreme).
(Anterior chamber reactions with more than 50 cells and hypopyon were graded as 4.) Pupils were then examined for synechiae or any other abnormalities. The details of the fundus were also noted by direct ophthalmoscopy, especially in the macula, for the presence of cystoid macular edema.
After 1 month, intraocular pressure was measured with an applanation tonometer, and best corrected visual acuity was obtained after refraction. All of the above details were recorded in the clinical proforma at each visit.
Statistical analysis comprised descriptive and inferential statistics. The results were analysed using SPSS version 18 (IBM Corporation, SPSS Inc., Chicago, IL, USA); Microsoft Word and Excel were used to generate graphs, tables, etc. Mean ± SD (min-max) was presented for continuous measurements, and number (%) for categorical measurements. The level of significance was set at 5%. To find differences between groups, the chi-square test with Yates' correction and the Wilcoxon signed-rank test were performed.
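For illustration, the following minimal sketch runs the two tests named above in scipy, on invented placeholder data rather than the trial's records:

```python
# Minimal sketch of the tests named above: chi-square with Yates' continuity
# correction and the Wilcoxon signed-rank test, using scipy and invented
# placeholder data (not the trial's records).
import numpy as np
from scipy import stats

# 2x2 table, e.g., pain present/absent by treatment group on day 7;
# correction=True applies Yates' continuity correction.
table = np.array([[25, 75],
                  [31, 69]])
chi2, p, dof, _ = stats.chi2_contingency(table, correction=True)
print(f"Yates-corrected chi-square: chi2={chi2:.2f}, p={p:.3f}")

# Wilcoxon signed-rank test for paired ordinal scores,
# e.g., a grade on day 1 versus day 7 within one group.
day1 = np.array([2, 1, 2, 3, 1, 2, 2, 1, 3, 2])
day7 = np.array([1, 0, 1, 2, 1, 1, 0, 1, 2, 1])
w, p = stats.wilcoxon(day1, day7)
print(f"Wilcoxon signed-rank: W={w:.1f}, p={p:.3f}")
```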
Results
The mean age was 64.14 years in the difluprednate group and 65.55 years in the dexamethasone group.
There were 38 male patients in the difluprednate group and 44 in the dexamethasone group, and 62 female patients in the difluprednate group and 56 in the dexamethasone group. Immature cataract was observed in 74 patients in the difluprednate group and 67 patients in the dexamethasone group. Mature cataract was found in 17 patients in the difluprednate group and 15 patients in the dexamethasone group. Hypermature cataract was diagnosed in 9 patients in the difluprednate group and 18 patients in the dexamethasone group.

Intraocular pressure: At baseline, 19 patients (19%) in the difluprednate group and 21 patients (21%) in the dexamethasone group were in the range of 12.2 to 13.4 mmHg. Fifty-one patients (51%) in the difluprednate group and 46 patients (46%) in the dexamethasone group were in the range of 14.6 to 15.9 mmHg. Thirty patients (30%) in the difluprednate group were in the range of 17.3 to 18.9 mmHg, compared with 33 patients (33%) in the dexamethasone group. At the end of 1 month, 12 patients (12%) in the difluprednate group and 4 patients (4%) in the dexamethasone group had an IOP of 14 mmHg. Forty patients (40%) in the difluprednate group and 46 patients (46%) in the dexamethasone group had an IOP of 16 mmHg. An IOP of 18 mmHg was seen in 40 patients (40%) in the difluprednate group and in 38 patients (38%) in the dexamethasone group. Eight patients (8%) in the difluprednate group and 12 patients (12%) in the dexamethasone group had an IOP of 20 mmHg (Table 1).

The changes obtained in the parameters are as follows:

Pain: 86 patients in the dexamethasone group and 90 patients in the difluprednate group experienced grade 1 pain on the 1st post-operative day. No pain (grade 0) was found in 69 patients in the dexamethasone group and 75 patients in the difluprednate group on the 7th post-operative day. Grade 1 pain was experienced by 31 patients in the dexamethasone group and 25 patients in the difluprednate group (Table 2).

Corneal edema: There was no corneal edema in 16 patients in the dexamethasone group and 26 patients in the difluprednate group on the 1st post-operative day. Grade 1 corneal edema was observed in 59 patients in the difluprednate group and 56 patients in the dexamethasone group on the 1st post-operative day. Grade 2 corneal edema was found in 28 patients in the dexamethasone group and 15 patients in the difluprednate group on the 1st post-operative day. Grade 1 corneal edema was seen in 12 patients in the dexamethasone group and 8 patients in the difluprednate group on the 7th post-operative day. On the 15th post-operative day, grade 1 corneal edema was noticed in 10 patients in the dexamethasone group and 6 patients in the difluprednate group (Table 3).

Anterior chamber cells and flare: (Table 5)

Best corrected visual acuity after 1 month of the post-operative period: There were 8 patients with best corrected visual acuity of 6/12(p)-6/12 in the difluprednate group, and the dexamethasone group had 10 patients with best corrected visual acuity of 6/12(p)-6/12; 70 patients in the dexamethasone group and 66 patients in the difluprednate group had best visual acuity of 6/9(p)-6/9; and best visual acuity of 6/6(p)-6/6 was found in 20 patients in the dexamethasone group and 26 patients in the difluprednate group (Table 6).
Discussion
Several steroids have been introduced over the last few decades; still dexamethasone has been considered the "gold standard" and indeed has enjoyed a status as the "go-to" steroid for many inflammatory conditions. All ophthalmic corticosteroids, both topical and systemic, have the potential to provoke a rise in intraocular pressure (IOP).
The role of corticosteroids in reducing postoperative inflammation following cataract surgery is very important for a successful outcome. Dexamethasone has been the most commonly used corticosteroid for five decades, but recently difluprednate has assumed greater importance because recent studies have proven it safe and efficacious. Difluprednate was approved by the FDA in 2008.
We compared the efficacy of difluprednate and dexamethasone in decreasing the inflammation associated with cataract surgery.
In our research, at baseline, an IOP of 12.2 to 13.4 mmHg was observed in 19% of patients in the difluprednate group and 21% of patients in the dexamethasone group. There were 46 patients in the dexamethasone group and 51 patients in the difluprednate group who had IOP in the range of 14.6 to 15.9 mmHg. Thirty-three patients in the dexamethasone group and 30 patients in the difluprednate group exhibited IOP in the range of 17.3 to 18.9 mmHg.
On the 30th post-operative day, it was observed that 12 patients in the difluprednate group had an IOP of 14 mmHg, whereas only 4 patients in the dexamethasone group did; 40 patients in the difluprednate group and 46 patients in the dexamethasone group had an IOP of 16 mmHg; 38 patients in the dexamethasone group and 40 patients in the difluprednate group exhibited an IOP of 18 mmHg; and 12 patients in the dexamethasone group and 8 patients in the difluprednate group showed an IOP of 20 mmHg. This is in agreement with the study carried out by Tijunelis et al., who administered prednisolone acetate 4 times a day for 30 days in 224 eyes and difluprednate 2 times a day for 30 days in 225 eyes and found no significant rise in mean IOP in either group. 8 In our observation on the 1st post-operative day, grade 1 corneal edema was found in 56 patients in the dexamethasone group and 59 patients in the difluprednate group. Grade 2 corneal edema was observed in 15 patients in the difluprednate group and 28 patients in the dexamethasone group (p=0.094); statistical significance was not observed between the two groups on the 1st post-operative day. On the 7th post-operative day, grade 1 corneal edema was observed in 12 patients in the dexamethasone group and 8 patients in the difluprednate group (p=0.346); there was no statistical difference between the two groups. On the 15th post-operative day, grade 1 corneal edema was observed in 10 patients in the dexamethasone group and 6 patients in the difluprednate group, with no statistical difference between the two groups (p=0.297).
In our investigation, on the 1st post-operative day, 59 patients in the difluprednate group and 43 patients in the dexamethasone group had grade 1 anterior chamber flare, and 47 patients in the dexamethasone group and 31 patients in the difluprednate group had grade 2 flare (p=0.055); no statistical significance was found between the two groups on the 1st post-operative day. On the 7th post-operative day, grade 1 flare decreased to 32 patients in the difluprednate group and increased to 50 patients in the dexamethasone group, and grade 2 flare was found in 12 patients in both the difluprednate and dexamethasone groups (p=0.025), a statistically significant difference between the two groups on the 7th post-operative day. On the 15th post-operative day, both groups had 10 patients with grade 1 flare.
Smith et al. carried out a study in the USA in which they compared difluprednate to a placebo and observed that resolution of inflammation, that is, absence of AC cells and flare, was achieved in more patients managed with difluprednate than with placebo (74.7% vs 42.5%, p=0.0006). The patients managed with difluprednate also had significantly less ocular discomfort/pain than patients managed with placebo on the 14th post-operative day (64.6% vs 30.0%, p=0.0004). 9 In the difluprednate group, grade 1 cells were reduced from 59 patients to 40 patients on the 7th post-operative day in comparison to the 1st post-operative day, whereas in the dexamethasone group grade 1 cells increased from 51 patients to 58 patients over the same interval (statistically significant; p=0.011). On the 15th post-operative day, grade 1 cells in the anterior chamber were found in only 4 patients in the difluprednate group and in 8 patients in the dexamethasone group (p=0.234). On the 1st post-operative day, grade 2 cells were observed in 34 patients in the difluprednate group and 35 patients in the dexamethasone group, and grade 3 cells were found in 8 patients in the dexamethasone group (p=0.086).
Sood et al. investigated the efficacy of 0.05% difluprednate and 0.1% dexamethasone eye drops in reducing postoperative inflammation after small incision cataract surgery. 120 patients who had undergone small incision cataract surgery were enrolled in the study. The participants were examined by slit lamp for IOP, anterior chamber cells and flare on post-operative days 1, 7, 14 and 28. They observed that difluprednate was more effective in decreasing pain, in 62% of patients by the 3rd post-operative day, compared with the dexamethasone group by the 7th post-operative day. Neither drug produced a significant effect on intraocular pressure. Their research showed that difluprednate was more efficient than dexamethasone. 10 Chaudhary et al. compared 0.1% dexamethasone and 0.05% difluprednate ophthalmic solution in terms of efficacy and safety in treating inflammation associated with phacoemulsification. 50 patients participated in the study; they were divided into 2 groups of 25 patients each, and after surgery each group received either 0.05% difluprednate ophthalmic solution or 0.1% dexamethasone eye drops. Patients were examined and compared on postoperative days 1, 7, 14 and 28 for the anterior segment, intraocular pressure and side effects. Anterior chamber cell loss on day 7 was greater in the difluprednate group. Both drugs showed equal efficacy in decreasing anterior chamber flare. There was no significant increase in intraocular pressure in either group, and no serious adverse effects were reported. This study reported that difluprednate is more effective than dexamethasone in reducing postoperative inflammation after surgery. 11 Garg et al. found that prednisolone acetate and 0.05% difluprednate ophthalmic solution are equally effective in the management of inflammation associated with cataract surgery, and further stated that difluprednate has an additional advantage owing to its dose uniformity and lack of harmful preservative. 12 In our observation, there was no rise in IOP in either group from baseline to 6 weeks. The dexamethasone group responded more slowly than the difluprednate group with respect to the reduction of inflammation (i.e., anterior chamber cells and flare).
Donnenfeld et al. reviewed difluprednate and identified that it has been shown to be efficacious in the treatment of post-operative inflammation in different clinical settings, including a novel post-operative regimen. 13 Our research and other previous studies have shown that 0.05% difluprednate ophthalmic solution is efficacious in the treatment of inflammation after cataract surgery.
Conclusion
As per our findings and previous results, difluprednate emulsion 0.05% appears to be a suitable medicament to manage inflammatory conditions and pain after cataract surgery. Hence, difluprednate emulsion 0.05% can be used in post-operative management after cataract surgery; nonetheless, further clinical trials with a long follow-up period are required. | 2019-08-18T17:34:17.202Z | 2020-12-15T00:00:00.000 | {
"year": 2020,
"sha1": "f68dfe83afab7cc6e7948b12c1124bfb7c72e9e1",
"oa_license": null,
"oa_url": "https://doi.org/10.18231/2395-1451.2018.0076",
"oa_status": "GOLD",
"pdf_src": "Adhoc",
"pdf_hash": "150d4004494b36d444f21a1842f4853d8b8e4a4a",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": []
} |
255852529 | pes2o/s2orc | v3-fos-license | Endothelial colony-forming cell-derived exosomal miR-21-5p regulates autophagic flux to promote vascular endothelial repair by inhibiting SIPA1L2 in atherosclerosis
Percutaneous transluminal coronary angioplasty (PTCA) represents an efficient therapeutic method for atherosclerosis but conveys a risk of causing restenosis. Endothelial colony-forming cell-derived exosomes (ECFC-exosomes) are important mediators during vascular repair. This study aimed to investigate the therapeutic effects of ECFC-exosomes in a rat model of atherosclerosis and to explore the molecular mechanisms underlying the ECFC-exosome-mediated effects on ox-LDL-induced endothelial injury. The effect of ECFC-exosome-mediated autophagy on ox-LDL-induced human microvascular endothelial cell (HMEC) injury was examined by cell counting kit-8 assay, scratch wound assay, tube formation assay, western blot and the Ad-mCherry-GFP-LC3B system. RNA-sequencing assays, bioinformatic analysis and dual-luciferase reporter assays were performed to confirm the interaction between miR-21-5p, which is abundant in ECFC-exosomes, and SIPA1L2 in HMECs. The role and underlying mechanism of ECFC-exosomes in endothelial repair were explored using a high-fat diet combined with balloon injury to establish an atherosclerotic rat model of vascular injury. Evans blue staining, haematoxylin and eosin staining and western blotting were used to evaluate vascular injury. ECFC-exosomes were incorporated into HMECs and promoted HMEC proliferation, migration and tube formation by repairing autophagic flux and enhancing autophagic activity. Subsequently, we demonstrated that miR-21-5p, which is abundant in ECFC-exosomes, binds to the 3' untranslated region of SIPA1L2 to inhibit its expression, and knockout of miR-21-5p in ECFC-exosomes reversed the ECFC-exosome-mediated decrease in SIPA1L2 expression in ox-LDL-induced HMEC injury. Knockdown of SIPA1L2 repaired autophagic flux and enhanced autophagic activity to promote cell proliferation in ox-LDL-treated HMECs. ECFC-exosome treatment attenuated vascular endothelial injury, regulated lipid balance and activated autophagy in an atherogenic rat model of vascular injury, whereas these effects were eliminated with ECFC-exosomes with knockdown of miR-21-5p. Our study demonstrated that ECFC-exosomes protect against atherosclerosis- or PTCA-induced vascular injury by rescuing autophagic flux and inhibiting SIPA1L2 expression through delivery of miR-21-5p.
Keywords: Atherosclerosis, Endothelial progenitor cell-derived exosomes, miR-21-5p, SIPA1L2, Autophagic flux
Background
Atherosclerosis is one of the most common causes of death in the world, with a prevalence of 19 (males) or 14 (females) per 100,000 [1]. Atherosclerosis is primarily caused by the elevation of low-density lipoprotein cholesterol (LDL-c) with the primary symptom of intimal hyperplasia owing to endothelial injury, hyperproliferation of smooth muscle cells, and lymphocytic infiltration. Percutaneous transluminal coronary angioplasty (PTCA), known as an efficient therapeutic method against atherosclerosis, has significantly improved patient survival rates [2]. However, postsurgical restenosis may seriously interfere with patient prognosis, urgently requiring a solution [3,4]. Therefore, developing a better method to restore the structural and functional integrity of blood vessels is of great importance for improving the prognosis of patients with atherosclerosis.
Endothelial progenitor cells (EPCs) are multipotential stem cells that differentiate into mature endothelial cells [5,6]. Recent studies have shown that there are at least two subsets of EPCs, circulating angiogenic cells (CACs) and endothelial colony-forming cells (ECFCs), and ECFCs are considered true EPCs due to their capability to integrate directly into developing vessels and form tube-like structures in vitro, providing promising therapeutic potential [7]. Previous studies have revealed that EPCs may facilitate vascular reendothelialization and reduce intimal hyperplasia when transplanted into the injury site of carotid arteries [8][9][10][11]. However, abnormal differentiation, thrombosis, and immunogenicity impede the application of EPC transplantation; thus, an optimized approach is urgently needed. Recently, it has been shown that paracrine signalling plays a crucial role in EPC-mediated vascular repair [12], and the master paracrine product of EPCs is exosomes [13][14][15]. Hu et al. [16] reported that exosomes generated by umbilical cord blood (UCB)-derived EPCs improved the repair of balloon-induced mechanical vascular injury in a rat model. However, the difference between balloon-induced vascular injury and atherosclerosis, which is usually induced by lipid accumulation, may lead to varied outcomes and requires specific treatment.
Exosomes are highly efficient in intercellular communication, as they are capable of transmitting biodegradable molecules in a protective manner. Emerging evidence has shown that paracrine exosomes secreted by different cells play crucial roles in atherosclerosis via their protein and noncoding RNA (such as miRNA) components, which may become therapeutic approaches for atherosclerosis [17][18][19]. Zhu et al. [20] reported that nicotine-stimulated macrophage-derived exosomes accelerate atherosclerosis through miR-21-3p/PTEN-mediated VSMC migration and proliferation. Bouchareychas et al. [21] indicated that exosomes delivered anti-inflammatory miR-99a/146b/378a from bone marrow-derived macrophages to resolve atherosclerosis by regulating haematopoiesis and inflammation by targeting NF-κB and TNF-α signalling. Yang et al. [22] demonstrated that exosomes from mesenchymal stem cells efficiently delivered miR-145 to endothelial cells and inhibited atherosclerosis by targeting JAM-A. In addition, recent studies have indicated that EPC-derived exosomes protect against endothelial injury to promote vascular repair in balloon injury-induced vascular injury in rats with normal blood lipid levels [23][24][25]. However, the therapeutic effect and underlying mechanism of ECFC-exosomes on vascular injury induced by balloon injury in hyperlipidaemic rats remain unclear and require further examination.
In this study, we investigated the therapeutic effects of ECFC-exosomes in a rat model of atherosclerosis established by hyperlipidaemia with balloon-induced injury. The molecular mechanisms underlying the ECFC-exosome-mediated effects on ox-LDL-induced endothelial injury were also explored.
Isolation and culture of ECFCs from human peripheral blood
All experimental procedures were approved by the Ethical Committee of Fuwai Hospital, Chinese Academy of Medical Sciences (No. SP2019002), and each patient signed informed consent. ECFCs were isolated from human peripheral blood as previously described [26]. In brief, human whole blood (50 mL) was collected from 8 healthy volunteers (aged between 30 and 40 years old). Blood samples were diluted in phosphate-buffered saline (PBS) at a 1:1 ratio and added onto separation medium (GE Healthcare, Pittsburgh, USA) with endothelial cell growth factor and cytokines. Thereafter, the blood samples were centrifuged at 1200g for 30 min, and the mononuclear cells were collected and washed with PBS. The cells were then placed into 25 cm2 culture flasks with endothelial cell growth medium (EGM-2; Thermo Fisher Scientific, Waltham, USA) containing 10% foetal bovine serum (FBS, Gibco, Waltham, USA), 100 U/mL penicillin (Gibco) and 100 μg/mL streptomycin (Gibco). The FBS used in this study was centrifuged in advance using density gradient centrifugation to remove pre-existing exosomes. After 72 h, nonadherent cells were removed. The medium was refreshed every 3 days, and cells were cultured in a 5% CO2 humidified atmosphere at 37 °C. Cells at passages 3-6 were used in subsequent experiments. Cell morphology was examined under a light microscope following culturing for 0, 3, 7, 10 and 21 days. ECFCs were successfully isolated from the peripheral blood of 6 of the 8 healthy volunteers.
Identification of human peripheral blood-derived ECFCs
For immunocytochemistry, cells were fixed in 4% paraformaldehyde for 15 min and permeabilized with 0.1% Triton X-100 for 10 min at room temperature. After blocking with 3% bovine serum albumin (BSA) for 1 h, cells were incubated with primary antibodies overnight at 4℃ and then incubated with secondary antibodies for 1 h at 37 °C. Nuclei were stained with 4,6-diamidino-2-phenylindole (DAPI; 0.5 µg/ml; Invitrogen, USA) for 5 min. Cells were washed and analysed using a fluorescence microscope (Leica, Germany). Antibodies, including anti-CD45, anti-CD144, anti-eNOS, anti-vWF and their respective secondary antibodies, were obtained from Abcam (Cambridge, UK). Flow cytometry analysis was performed using a BD Accuri ™ C6 flow cytometer (BD Biosciences). Cells were stained with CD34-FITC, CD144-FITC, vWF-FITC and CD45-FITC antibodies using standard procedures and were then measured and analysed using a BD Accuri ™ C6 flow cytometer.
Preparation and identification of exosomes from human peripheral blood-derived ECFC-exosomes
ECFCs were cultured in complete medium until reaching 80% confluence. The medium was then replaced with EGM-2 medium supplemented with 1 × serum replacement solution (PeproTech, Rocky Hill, USA). After incubation for an additional 48 h, the conditioned medium of ECFCs was collected and centrifuged at 300g for 20 min and 2000g for 10 min at 4 °C to remove dead cells and cellular debris. Thereafter, the supernatant was filtered through a 0.22 μm filter (Millipore, Burlington, USA) followed by centrifugation at 10,000g for 30 min. The pellet was discarded, and the supernatant was centrifuged at 100,000g for 70 min. The pellet was resuspended in PBS and centrifuged at 100,000g for 70 min and then resuspended again in PBS and stored at − 80 °C.
The total protein concentration of the exosomes was measured using a BCA protein assay (Pierce, Thermo Scientific). The exosomes were characterized by morphologic examination using a transmission electron microscope (Hitachi H-7650; Japan), and the images were captured using a digital camera (Olympus). Western blots were conducted to detect protein levels of CD31, CD63, CD9 and CD81 in exosomes. The size distribution and concentration of the exosomes were analysed by nanoparticle tracking analysis (NTA; NanoSight).
Culture and treatment of human microvascular endothelial cells (HMECs)
HMECs (Lonza, Basel, Switzerland) were cultured in endothelial cell medium with 5% foetal bovine serum, 1% endothelial cell growth supplement, and 1% penicillin/ streptomycin solution (ScienCell Research Laboratories, Carlsbad, USA) at 37 °C under a 5% CO 2 atmosphere. HMECs between passages 3-7 were used in this study. Cells were treated with or without ox-LDL for 1 h followed by treatment with 100 μg/mL ECFC-exosomes or an autophagy inhibitor (10 mM bafilomycin A1, Selleckchem) for another 24 h. In the ECFC-exosome treatment experiments, HMECs were cultured in MCDB131 medium without serum or growth factors.
Exosome uptake by HMECs
ECFC-exosomes were labelled with Vybrant DiO dye (Molecular Probes, Carlsbad, CA, USA) according to the manufacturer's instructions. The labelled exosomes (8 μl) were incubated with HMECs at 37 °C for 2 h. HMECs were washed with PBS, fixed in 4% paraformaldehyde for 15 min, and stained with DAPI for 5 min at room temperature. After washing, the cells were analysed using a fluorescence microscope (Leica DMI6000B, Solms, Germany).
Cell counting kit-8 (CCK-8) assay
HMECs (5000 cells/well in 96-well plates) were subjected to different treatments for 24 h at 37 °C, followed by incubation with 10 μL of cell counting kit-8 (CCK-8) reagent (Abcam) per well for 1 h at 37 °C. Cell viability was determined by measuring the optical density at 450 nm using a microplate reader (Thermo Fisher Scientific).
Lactate dehydrogenase (LDH) activity
Lactate dehydrogenase (LDH) activity in the culture medium was determined using an LDH activity assay kit (Solarbio, China) following the manufacturer's protocol. Briefly, the cell medium was collected and mixed with the reagents from the LDH activity assay kit, and the absorbance of each sample was then measured at 450 nm using a microplate reader. LDH activity (U/L) was calculated as follows: LDH activity = (OD_U − OD_C) × C_S × N × 1000/(OD_S − OD_B), where OD_U is the absorbance of the sample tube, OD_C is the absorbance of the blank tube, C_S is the standard concentration (2 mmol/L), N is the dilution factor of the sample before testing, OD_S is the absorbance of the standard tube, and OD_B is the absorbance of the control tube.
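For readers who script their plate-reader analysis, the kit formula above can be expressed as a short function. This is a minimal sketch, not part of the kit's documentation; the function name and the example absorbances are illustrative.

```python
def ldh_activity(od_u: float, od_c: float, od_s: float, od_b: float,
                 n: float, c_s: float = 2.0) -> float:
    """LDH activity in U/L from the kit formula:
    (OD_U - OD_C) x C_S x N x 1000 / (OD_S - OD_B),
    with OD_U = sample tube, OD_C = blank tube, OD_S = standard tube,
    OD_B = control tube, C_S = standard concentration (2 mmol/L),
    and N = dilution factor of the sample before testing."""
    return (od_u - od_c) * c_s * n * 1000.0 / (od_s - od_b)

# Example with placeholder absorbances and an undiluted sample (N = 1):
print(ldh_activity(od_u=0.85, od_c=0.10, od_s=0.60, od_b=0.05, n=1.0))
```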
Wound healing assay
HMECs subjected to the different treatments were seeded into 6-well plates. When the cells reached 80% confluence, the monolayer was scratched using a 200-μL pipette tip to create a wound area. The wounds were imaged 0 h and 24 h after scratching on an Olympus IX-71 inverted microscope equipped with an Olympus camera, and the images were analysed using ImageJ software. The percentage of wound closure was calculated as follows: wound closure (%) = (wound area at 0 h − wound area at 24 h)/(wound area at 0 h) × 100.
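Assuming the wound areas have already been measured in ImageJ (any consistent unit works, since it cancels), the percentage is a one-liner; the values below are placeholders.

```python
def wound_closure_percent(area_0h: float, area_24h: float) -> float:
    """Percentage of the initial wound area closed after 24 h."""
    return (area_0h - area_24h) / area_0h * 100.0

print(wound_closure_percent(area_0h=1.2e6, area_24h=4.8e5))  # -> 60.0
```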
Tube formation assay
After the addition of Matrigel (BD Biosciences, Franklin Lakes, USA), 24-well plates were gently agitated and incubated at 37 °C to form a gel. HMECs (2 × 10 4 cells/well) subjected to the different treatments were plated into the coated wells and cultured at 37 °C for 8 h. Images of each sample were captured using an Olympus IX-71 inverted microscope equipped with an Olympus camera and analysed using the Angiogenesis Analyser tool in ImageJ to measure the number of meshes and the tube length. The average number of meshes formed and the percent tube length were used as measures of tube formation ability.
Quantitative real-time PCR (qRT-PCR)
Total RNA from isolated exosomes, cells and tissues was extracted using TRIzol reagent (Invitrogen, Carlsbad, USA) according to the manufacturer's protocol. RNA quality was evaluated using a microspectrophotometer (NanoDrop, Wilmington, USA). For miRNA, qRT-PCR was conducted using the Hairpin-it ™ miRNA qPCR Quantitation Assay Kit (GenePharma) with U6 as the internal control. For mRNA, qRT-PCR was performed using the PrimeScript ™ RT reagent Kit with gDNA Eraser (Takara, Dalian, China) and the SYBR Premix Ex Taq ™ Kit (Takara) with GAPDH as the internal control. Real-time PCR was performed in a CFX96 Real-Time System thermocycler (Bio-Rad, Hercules, USA). The relative expression of genes was calculated using the comparative Ct (2^−ΔΔCt) method. All primer sequences are listed in Additional file 1: Table S1.
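As a hedged sketch of the comparative Ct method referred to above: relative expression is computed as 2^−ΔΔCt against the internal control (U6 for miRNA, GAPDH for mRNA). The Ct values below are invented for illustration only.

```python
def fold_change_ddct(ct_target_treated: float, ct_ref_treated: float,
                     ct_target_control: float, ct_ref_control: float) -> float:
    """Relative expression (fold change) by the 2^-ddCt method."""
    ddct = ((ct_target_treated - ct_ref_treated)
            - (ct_target_control - ct_ref_control))
    return 2.0 ** (-ddct)

# Target amplifies two cycles earlier (relative to the reference gene)
# in the treated group -> ~4-fold upregulation.
print(fold_change_ddct(24.0, 18.0, 26.0, 18.0))  # -> 4.0
```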
Tracking autophagy with double-tagged LC3B
Tracking autophagy using double-tagged LC3B has been illustrated in detail previously [27]. Briefly, HMECs were infected with Ad-mCherry-GFP-LC3B (Beyotime) followed by different treatments. The HMECs were then stained with Hoechst 33342, and autophagy was evaluated by detecting mCherry and GFP using fluorescence microscopy (Leica). When autophagy occurs, mCherry-GFP-LC3B aggregates on the autophagosome membrane, appearing as yellow puncta. When autophagosomes fuse with lysosomes, the puncta appear red because the GFP fluorescence is partially quenched, indicating unobstructed autophagic flux. Each sample was assessed using three to five randomly selected fields under a fluorescence microscope (Leica), and at least 10 cells from each field were randomly selected for autophagy analysis.
Expression profiling analysis of miRNA and mRNA
A schematic diagram of biological sample processing before miRNA and mRNA expression profiling is shown in Additional file 2: Figure S1.
For miRNA expression profile analysis in ECFC exosomes, we performed miRNA microarray analysis. In brief, total RNA was extracted from ECFC-exosomes using TRIzol reagent (Invitrogen). Subsequently, total RNA was labelled using a FlashTag Biotin HSR RNA Labeling Kit (Affymetrix, USA) following the manufacturer's protocol and then hybridized with a GeneChip miRNA 4.0 Array (Affymetrix, USA). After hybridization, array images were digitized using a laser scanner interfaced with ArrayPro image analysis software (Media Cybernetics, Silver Spring, USA) to generate raw data. The obtained raw data were first normalized with robust multiarray average (RMA) using Expression Console software (version 1.3.1; Affymetrix, Inc.) and then analysed using Affymetrix Expression Console Software (version 1.3.1).
For mRNA expression profile analysis, total RNA was extracted from three independent samples of ECFC-exosome + ox-LDL- or PBS + ox-LDL-treated HMECs using TRIzol reagent (Invitrogen) according to the manufacturer's recommended protocol, and RNA quantity was assessed using a NanoDrop ND-2000 spectrophotometer (NanoDrop Technologies). After purifying mRNA using the RiboZero Magnetic Gold Kit, cDNA libraries were constructed using the KAPA Stranded RNA-Seq Library Prep kit (Illumina, Inc.) according to the manufacturer's instructions. The quality and quantity of the cDNA libraries were assessed using an Agilent 2100 Bioanalyzer and qPCR. Finally, RNA sequencing was performed by next-generation sequencing on an Illumina HiSeq X Ten platform. Clean data were obtained from the raw data by removing reads containing adapters, reads containing more than 10% poly N, and low-quality reads, and the reads were subsequently aligned to the reference genome (Homo sapiens GRCh38, NCBI) to obtain the mapped data. Differentially expressed mRNAs between ECFC-exosome-treated HMECs and PBS-treated HMECs were analysed using the EBSeq R package, with fold changes (FCs) ≥ 2 and false discovery rates (FDRs) < 0.05 serving as the screening criteria.
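The screening step at the end of this pipeline amounts to a simple filter on the EBSeq output. The sketch below assumes a table with hypothetical column names ("gene", "fc", "fdr"), not the actual pipeline output; note that a fold-change cut-off of ≥2 is commonly applied symmetrically (i.e., FC ≤ 0.5 for downregulation) so that downregulated genes are captured as well.

```python
import pandas as pd

# Placeholder differential-expression results (not real data)
results = pd.DataFrame({
    "gene": ["SIPA1L2", "GENE_A", "GENE_B"],
    "fc":   [0.35, 1.40, 3.10],   # linear fold change, treated vs control
    "fdr":  [0.002, 0.20, 0.01],
})

# FC >= 2 in either direction and FDR < 0.05
degs = results[((results["fc"] >= 2) | (results["fc"] <= 0.5))
               & (results["fdr"] < 0.05)]
print(degs["gene"].tolist())  # -> ['SIPA1L2', 'GENE_B']
```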
Atherogenic rat model of vascular injury
Male Sprague-Dawley rats (200–250 g) were purchased from the Laboratory Animal Centre in Guangdong (Guangzhou, China). Animals were housed in a temperature-controlled environment (21 ± 1 °C) with 40–60% humidity and a 12 h light/dark cycle and were provided free access to tap water and regular chow. The animal protocol was reviewed and approved by the Institutional Animal Care and Use Committee of Fuwai Hospital, Chinese Academy of Medical Sciences. All experiments were performed in strict accordance with the recommendations in the Guide for the Care and Use of Laboratory Animals of the National Institutes of Health. After 1 week of adaptive feeding, all rats were randomly divided into four groups: (a) Sham, (b) Model, (c) Exos-NC inh., and (d) Exos-miR inh. An atherogenic rat model of vascular injury was established by high-fat feeding combined with balloon injury as previously described [28,29], with minor modifications. In brief, after 4 weeks of high-fat feeding, the left carotid arterial intima was surgically injured using a 2F Fogarty arterial embolectomy balloon catheter, and the rats then continued on high-fat diets until the end of the study. The high-fat diet was composed of 81.5% basic diet, 10% lard, 0.5% sodium cholate, 3% cholesterol, and 5% sugar, given at 150 g/day. For the Model group, rats received 100 µl PBS via tail vein injection after surgery and were fed a high-fat diet throughout the experimental period; for the Exos-NC inh. group, the atherogenic rats received 100 μg ECFC-exosomes transfected with the NC inhibitor after surgery and were fed a high-fat diet throughout the experimental period; for the Exos-miR inh. group, the atherogenic rats received 100 μg ECFC-exosomes transfected with the miR-21-5p inhibitor after surgery and were fed a high-fat diet throughout the experimental period. For the Sham group, rats were injected with 100 µl PBS via the tail vein after sham surgery and were fed a normal diet throughout the experimental period; sham-operated rats were subjected to anaesthesia and the surgical procedures without balloon injury. Rats were euthanized with an overdose of pentobarbital (80 mg/kg, i.p.). Fourteen days after the different treatments, blood and carotid arteries were collected and processed as described below for further analysis.
Evaluation of in vivo reendothelialization
Rats were intravenously injected with 5% Evans blue dye (25 mg/kg) 30 min before sacrifice. The left common carotid artery was fully removed and rinsed in saline, and residual connective tissue was carefully removed. The stained area and the total area of the artery were analysed using Image-Pro Plus. Reendothelialization was evaluated by calculating the ratio of the stained area to the total area.
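For the area ratio described above, a hypothetical helper is sketched below; Evans blue marks the still-denuded surface, so a smaller ratio indicates better reendothelialization. The areas are placeholders.

```python
def evans_blue_ratio(stained_area: float, total_area: float) -> float:
    """Fraction of the arterial surface still stained by Evans blue."""
    return stained_area / total_area

print(evans_blue_ratio(stained_area=2.1, total_area=7.0))  # -> 0.3
```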
Evaluation of intimal hyperplasia
Carotid arterial tissue sections were deparaffinized in xylene for 5 min, rehydrated through distilled water, and stained with haematoxylin for 5 min, then differentiated with 1% hydrochloric acid alcohol and blued in a saturated lithium carbonate solution. After brief bluing, the sections were quickly rinsed in distilled water, stained with approximately 0.5% eosin solution for 1–3 min, and dehydrated through a graded alcohol series. After clearing in xylene, a suitable amount of neutral gum was used to mount the sections. The degree of aortic pathology was observed under an optical microscope, and the samples were imaged. The sections were analysed using Image-Pro Plus 6.0 image processing software.
Evaluation of serum lipid profiles
Rats were anaesthetized 1 day before surgery and 2 weeks after surgery, and blood samples were collected for serum lipid analysis (triglycerides, TG; total cholesterol, TC; low-density lipoprotein cholesterol, LDL-C; high-density lipoprotein cholesterol, HDL-C). TG, TC, LDL-C and HDL-C levels were measured using the corresponding TG Quantification, Cholesterol Assay, LDL-C Quantification and HDL-C Quantification Kits (all Abcam, Cambridge, MA, USA) according to the manufacturer's instructions.
Statistical analysis
Data are shown as the mean ± standard deviation (SD). Data analysis was performed using GraphPad Prism 6.0 (GraphPad Software, La Jolla, USA). Statistical significance for comparisons between two groups was analysed using the Mann-Whitney U test, and statistical significance for comparisons among more than two groups was analysed using the Kruskal-Wallis test followed by Dunn's multiple comparisons test. P < 0.05 was considered statistically significant.
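A minimal sketch of this statistical workflow in Python, assuming SciPy and the third-party scikit-posthocs package for Dunn's test (GraphPad Prism was used in the actual study); the arrays below are placeholder data.

```python
import numpy as np
from scipy import stats
import scikit_posthocs as sp  # third-party package for post hoc tests

group_a = np.array([1.0, 1.2, 0.9, 1.1, 1.0])
group_b = np.array([1.8, 2.1, 1.9, 2.2, 2.0])
group_c = np.array([1.4, 1.5, 1.3, 1.6, 1.5])

# Two groups: Mann-Whitney U test
print(stats.mannwhitneyu(group_a, group_b))

# More than two groups: Kruskal-Wallis followed by Dunn's test
print(stats.kruskal(group_a, group_b, group_c))
print(sp.posthoc_dunn([group_a, group_b, group_c], p_adjust="bonferroni"))
```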
Characterization of human PB-derived ECFCs and ECFC-exosomes
ECFCs were isolated from human peripheral blood (PB) and cultured in vitro. PB-derived cells adhered within 3 days and proliferated for more than 21 days. The morphology of the isolated cells at different culture time points is shown in Fig. 1A. Immunofluorescence staining revealed that PB-derived cells were positive for ECFC-specific surface markers (CD144, eNOS, vWF and CD34) but negative for the haematopoietic cell-specific marker CD45 (Fig. 1B). Flow cytometry analysis likewise indicated that PB-derived cells were highly positive for endothelial lineage markers (CD144, vWF and CD34) and negative for CD45 (Fig. 1C). Furthermore, functional analysis using fluorescence tracking revealed that the ECFCs were able to take up ac-LDL and bind UEA-1 (Fig. 1D). These data demonstrated that we obtained high-purity ECFCs. Next, ECFC-exosomes were purified from the supernatant of ECFC cultures and identified by transmission electron microscopy (TEM), nanoparticle tracking analysis (NTA) and western blot analyses. TEM indicated that ECFC-exosomes exhibited a typical spherical bilayer-membrane morphology with a diameter of approximately 100 nm (Fig. 1E). Western blot results revealed that ECFC-exosomes were positive for the exosome surface markers CD63, CD9 and CD81 and the endothelial marker CD31, while ECFCs expressed only the endothelial marker CD31 (Fig. 1F). NTA demonstrated that the average diameter of ECFC-exosomes was 140.1 nm, with the main particle-size peak at 132.5 nm (Fig. 1G).
ECFC-exosomes suppress ox-LDL-induced HMEC injury by rescuing autophagic flux
To evaluate the role of ECFC-exosomes in protecting against vascular intima injury, an ox-LDL-induced HMEC injury model was established. First, HMECs were incubated with DiO-labelled ECFC-exosomes for 12 h, and the green fluorescence signal of DiO, which was primarily distributed in the cytoplasm, was observed under a fluorescence microscope (Fig. 2A), suggesting that ECFC-exosomes were taken up by HMECs. Subsequently, to estimate the effect of ECFC-exosomes on cell viability, we treated HMECs with 100 µg/mL ECFC-exosomes in the presence or absence of ox-LDL, in which ox-LDL was used to imitate the microenvironment of atherosclerosis in vitro. The CCK-8 results showed that ECFC-exosomes had no significant effect on the viability of HMECs without ox-LDL treatment, while they significantly restored the viability of HMECs treated with 60 μg/mL or 75 μg/mL ox-LDL, and the ECFC-exosome-mediated effects were more pronounced in HMECs treated with 60 μg/mL ox-LDL (Fig. 2B). Thus, 60 μg/mL ox-LDL was used in subsequent experiments. Moreover, ECFC-exosomes significantly increased the migration of HMECs without ox-LDL treatment. The migration of HMECs was not affected by 60 μg/mL ox-LDL treatment, while ECFC-exosome treatment significantly enhanced HMEC migration in ox-LDL-treated HMECs (Fig. 2C, E). In addition, ECFC-exosomes increased the tube formation ability of HMECs without ox-LDL treatment, while the tube formation ability of HMECs was significantly repressed by 60 μg/mL ox-LDL treatment, which was largely rescued by ECFC-exosome treatment (Fig. 2D, F). These results suggested that ECFC-derived exosomes enhance cell viability, migration and tubule formation in HMECs treated with ox-LDL.
Fig. 1 Characterization of human ECFCs. A Morphology of ECFCs cultured for 0, 3, 7, 10 and 21 days. Scale bar = 100 μm. B Immunofluorescence staining of endothelial progenitor cell-specific surface markers (CD144, eNOS, vWF and CD34) and the haematopoietic cell-specific marker CD45 in the cells. Nuclei were counterstained using DAPI. Scale bar = 100 μm. C Flow cytometry analysis showing that the cells were positive for CD144, CD34 and vWF but negative for the haematopoietic cell-specific marker CD45. D Immunofluorescence examination of ac-LDL (DiI) uptake and UEA-1 (FITC) binding capability of ECFCs. Nuclei were counterstained with DAPI. Scale bar = 50 μm. E The morphology of ECFC-derived exosomes was examined by TEM. Scale bar = 100 nm. F The exosome surface markers CD63, CD9 and CD81 were examined by western blot assay. Exos, ECFC-exosomes. G The particle sizes and concentrations of ECFC-exosomes were measured using NanoSight. Biological replicates = 3, and technical replicates = 1.
It is well known that autophagy plays an important role in high fat-induced vascular damage and atherosclerosis [30,31]. To investigate whether ECFC-derived exosomes exert their effects through autophagic pathways, we examined their effects on autophagic flux in HMECs treated with ox-LDL. Western blotting showed that the LC3II/LC3I ratio in HMECs increased in response to ox-LDL treatment and was further enhanced by ECFC-exosome treatment, and ECFC-exosome treatment significantly attenuated the ox-LDL-induced increase in p62 protein levels in HMECs (Fig. 3A). The Ad-mCherry-GFP-LC3B system was next applied to trace the different stages of autophagy. The results demonstrated that ox-LDL prevented the formation of autolysosomes, with accumulation of punctate yellow fluorescence (Fig. 3B), and that autolysosome formation was rescued by ECFC-exosomes, as indicated by increased red fluorescence (Fig. 3B); this rescue was abolished by the autophagy inhibitor bafilomycin A1 (Fig. 3B). In vitro functional assays further revealed that bafilomycin A1 treatment significantly abolished the enhancing effects of ECFC-exosomes on the viability, migration and tube formation of ox-LDL-treated HMECs (Fig. 4). These results indicated that ECFC-exosomes suppress ox-LDL-induced HMEC injury through an autophagy-dependent mechanism.
To further determine whether ECFC-exosomes play a protective role in the ox-LDL-induced HMEC injury model by transmitting miR-21-5p, we transfected a miR-21-5p inhibitor or its negative control (NC inhibitor) into ECFC-exosomes using the Exo-fect Exosome Transfection Kit according to the manufacturer's protocol. The qRT-PCR results suggested that miR-21-5p expression in ECFC-exosomes transfected with the NC inhibitor did not differ from that in untreated ECFC-exosomes, while miR-21-5p expression in ECFC-exosomes transfected with the miR-21-5p inhibitor was significantly lower than that in ECFC-exosomes transfected with the NC inhibitor (Fig. 6A), indicating that the miR-21-5p inhibitor significantly reduced miR-21-5p levels in ECFC-exosomes. Furthermore, miR-21-5p expression in HMECs treated with miR-21-5p-inhibited ECFC-exosomes was significantly lower than that in HMECs treated with control ECFC-exosomes (Additional file 3: Figure S2A). The CCK-8 and LDH release assays revealed that ECFC-exosomes inhibited the ox-LDL-induced decline in cell viability and the release of LDH in HMECs, while silencing miR-21-5p in ECFC-exosomes weakened these effects on ox-LDL-treated HMECs (Additional file 3: Figure S2B and S2C). Furthermore, this study found that ECFC-exosomes protect against ox-LDL-induced HMEC injury by enhancing autophagy and restoring autophagic flux (Fig. 4B, C). Therefore, we further evaluated whether ECFC-exosomes regulate autophagy or autophagic flux by delivering miR-21-5p. We assessed the protein expression of autophagy markers (LC3II, LC3I and p62) in HMECs with different treatments using western blotting. The LC3II/LC3I ratio was significantly increased in ECFC-exosome-treated cells, whereas in cells treated with miR-21-5p-silenced ECFC-exosomes the LC3II/LC3I ratio was markedly decreased and p62 protein expression was significantly increased (Fig. 6B), indicating that ECFC-exosomes enhance autophagy and repair autophagic flux by delivering miR-21-5p. Consistent with the western blotting results, the Ad-mCherry-GFP-LC3B adenovirus infection assay showed that the formation of autolysosomes was increased in ox-LDL-induced HMECs upon ECFC-exosome treatment, which was attenuated by miR-21-5p inhibition (Fig. 6C).
Fig. 2 B Cell viability of ox-LDL-treated HMECs with or without ECFC-exosome treatment was determined using the CCK-8 assay. Biological replicates = 6, and technical replicates = 3. C, E Cell migration of ox-LDL-treated HMECs with or without ECFC-exosome treatment was determined by wound healing assay. Biological replicates = 3, and technical replicates = 1. D, F The tube formation ability of ox-LDL-treated HMECs with or without ECFC-exosome treatment was determined by tube formation assay. Scale bars = 100 μm. Exos, ECFC-exosomes. Biological replicates = 3–5, and technical replicates = 1. N.S., not significant; significant differences between treatment groups are indicated as *P < 0.05 and **P < 0.01.
Taken together, these results revealed that ECFC-derived exosomes protect against ox-LDL-induced HMEC damage by transmitting miR-21-5p to mediate autophagy or autophagic flux.
ECFC-derived exosomes deliver miR-21-5p to target SIPA1L2 expression in ox-LDL-induced HMECs
To determine the downstream targets of miR-21-5p, we first performed bioinformatics analysis using miRanda, starBase, RAID and PITA, and a total of 320 commonly predicted target genes were identified (Fig. 7A). We then extracted the differentially expressed genes between ECFC-exosome + ox-LDL- and PBS + ox-LDL-treated HMECs, among which 36 downregulated differentially expressed genes were detected. SIPA1L2 was identified as the only gene common to the predicted targets and the downregulated DEGs (Fig. 7A). qRT-PCR and western blot assays showed that ox-LDL treatment significantly increased the mRNA and protein expression levels of SIPA1L2, and this increase was significantly attenuated by ECFC-exosome treatment (Fig. 7B).
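Conceptually, this target identification reduces to a set intersection between the predicted targets shared by all four databases and the downregulated DEGs. The gene lists below are placeholders standing in for the 320 predicted targets and 36 DEGs.

```python
predicted_targets = {"SIPA1L2", "PTEN", "GENE_X"}    # shared by miRanda,
                                                     # starBase, RAID, PITA
downregulated_degs = {"SIPA1L2", "GENE_Y", "GENE_Z"}

print(predicted_targets & downregulated_degs)  # -> {'SIPA1L2'}
```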
To explore the effect of SIPA1L2 on autophagy and autophagic flux in ox-LDL-induced HMECs, HMECs were transfected with three shRNAs specifically targeting SIPA1L2 (SIPA1L2 shRNA#1, #2 and #3) to suppress SIPA1L2 expression. SIPA1L2 shRNA#1 (sh-SIPA1L2) showed the highest inhibitory efficiency (Additional file 4: Figure S3A) and was used in subsequent studies. The qRT-PCR results showed that treatment with ox-LDL for 24 h significantly increased SIPA1L2 mRNA expression, and this effect was abolished by shRNA-mediated silencing of SIPA1L2 in HMECs (Additional file 4: Figure S3B), indicating that transfection of HMECs with sh-SIPA1L2 effectively inhibits ox-LDL-induced SIPA1L2 expression. Furthermore, western blotting was performed to estimate the protein expression levels of autophagy-related genes (Beclin 1, LC3I, LC3II and P62), and the results showed that ox-LDL treatment significantly altered the protein levels of these autophagy markers (Additional file 4: Figure S3C-D), indicating that ox-LDL induces autophagic flux dysfunction in HMECs. The downregulation of SIPA1L2 markedly enhanced Beclin-1 protein expression and promoted LC3II turnover and P62 degradation in ox-LDL-treated HMECs (Additional file 4: Figure S3C-D), suggesting that downregulation of SIPA1L2 improves ox-LDL-induced autophagic flux dysfunction and autophagic activity in HMECs. Furthermore, the autophagy inhibitor Bafi A1 was used to verify the autophagic flux dysfunction and the autophagy-activating effect of SIPA1L2 downregulation in ox-LDL-treated HMECs. The results revealed that costimulation of HMECs with sh-SIPA1L2 and Bafi A1 significantly decreased the protein expression levels of autophagy-related genes (Beclin1 and p62) and inhibited LC3II turnover compared with treatment with sh-SIPA1L2 alone (Additional file 4: Figure S3E-F). Moreover, the CCK-8 assay results showed that SIPA1L2 suppression with sh-SIPA1L2 enhanced cell proliferation and that Bafi A1 treatment alone reduced cell proliferation in ox-LDL-treated HMECs, which was reversed when the cells were costimulated with sh-SIPA1L2 and Bafi A1 (Additional file 4: Figure S3G). These data suggested that silencing SIPA1L2 in ox-LDL-treated HMECs promotes cell proliferation by enhancing autophagy and repairing autophagic flux dysfunction.
Fig. 6 ECFC-derived exosomes rescue autophagic flux in ox-LDL-treated HMECs through miR-21-5p. A Expression of miR-21-5p in ECFC-exosomes transfected with the negative control inhibitor or miR-21-5p inhibitor was detected by qRT-PCR. Biological replicates = 3, and technical replicates = 3. B The expression of autophagy-related proteins (LC3I, LC3II and p62) in HMECs with different treatments was detected by western blot assay. Biological replicates = 3, and technical replicates = 1. C Representative images of HMECs transfected with Ad-mCherry-GFP-LC3B adenovirus followed by different treatments. Scale bars = 100 μm. Exos-miR inh, ECFC-exosomes transfected with miR-21-5p inhibitor; Exos-NC inh, ECFC-exosomes transfected with NC inhibitor. Biological replicates = 3, and technical replicates = 1. Significant differences between treatment groups are indicated as *P < 0.05.
Fig. 7 ECFC-exosome-derived miR-21-5p targets the 3'UTR of SIPA1L2. A Venn diagram illustrating the overlap between miR-21-5p target genes predicted using bioinformatics tools (miRanda, starBase, RAID and PITA) and genes downregulated in HMECs following ECFC-exosome treatment. B mRNA and protein expression of SIPA1L2 in HMECs with different treatments was determined by qRT-PCR and western blot assay, respectively. Biological replicates = 3, and technical replicates = 1–3. C The interaction between miR-21-5p and the 3'UTR of SIPA1L2 was evaluated using the dual-luciferase reporter assay. Biological replicates = 5, and technical replicates = 1. D Expression of miR-21-5p and SIPA1L2 in HMECs transfected with miR-21-5p mimics or inhibitor was determined by qRT-PCR. Biological replicates = 3, and technical replicates = 3. E mRNA and protein expression of SIPA1L2 in HMECs with different treatments was determined by qRT-PCR and western blot assay, respectively. Exos, ECFC-exosomes; Exos-miR inh, ECFC-exosomes transfected with miR-21-5p inhibitor; Exos-NC inh, ECFC-exosomes transfected with NC inhibitor. Biological replicates = 3, and technical replicates = 1–3. N.S., not significant; significant differences between treatment groups are indicated as *P < 0.05 and **P < 0.01.
ECFC-exosomes promote endothelial repair and activate autophagy by delivering miR-21-5p in an atherogenic rat model of vascular injury
To further evaluate the role and underlying mechanism of ECFC-exosomes in endothelial repair in vivo, we constructed an atherogenic rat model of vascular injury using a high-fat diet combined with balloon injury. The in vivo experimental procedures are illustrated in Fig. 8A. After 4 weeks of high-fat diet treatment, serum levels of TC, TG and LDL-c were elevated, and serum levels of HDL-c were decreased (Additional file 5: Figure S4), indicating that the hyperlipidaemic/atherogenic rat model had been successfully established. Fourteen days after carotid artery balloon injury in the hyperlipidaemic rats, Evans blue staining indicated that rats in the model group exhibited serious vascular endothelial injury; ECFC-exosome treatment alleviated this injury, and the benefit was significantly reversed by treatment with miR-21-5p-knockdown ECFC-exosomes (Fig. 8B). Intimal hyperplasia is a primary feature of atherosclerosis and of restenosis after PTCA. We evaluated intimal hyperplasia by haematoxylin and eosin (HE) staining; severe intimal hyperplasia was observed in the atherogenic rat model of vascular injury, while administration of ECFC-exosomes largely prevented intimal hyperplasia, and this protection was hindered by treatment with miR-21-5p-knockdown ECFC-exosomes (Fig. 8C). Interestingly, ECFC-exosome treatment significantly decreased serum levels of TC, TG and LDL-c but increased serum levels of HDL-c in the atherogenic rat model of vascular injury, and treatment with miR-21-5p-knockdown ECFC-exosomes partially weakened this beneficial regulation of lipid indices (Fig. 8D). These data suggested that ECFC-exosomes might regulate lipid homeostasis to inhibit vascular injury in the atherogenic rat model and that this regulatory mechanism is partly dependent on miR-21-5p transmitted by ECFC-exosomes. In addition, compared with the sham group, miR-21-5p expression was significantly downregulated and the mRNA and protein expression of SIPA1L2 significantly upregulated in the model group. ECFC-exosome treatment significantly increased miR-21-5p expression and decreased SIPA1L2 mRNA expression in the rat model, and these effects were significantly attenuated by miR-21-5p-knockdown ECFC-exosomes (Fig. 9A), suggesting that ECFC-exosomes regulate SIPA1L2 mRNA and protein expression in the atherogenic rat model of vascular injury through miR-21-5p transmission. Furthermore, western blotting showed that the LC3II/LC3I ratio and p62 protein levels were upregulated in the model group, while ECFC-exosome treatment significantly decreased p62 protein expression and further increased the LC3II/LC3I ratio in the rat model, effects that were largely reversed by miR-21-5p-knockdown ECFC-exosomes (Fig. 9B). Taken together, these results indicated that ECFC-exosomes repair vascular endothelial injury and enhance autophagy by delivering exosomal miR-21-5p in an atherogenic rat model of vascular injury.
Discussion
ECFCs have been shown to promote the formation of new endothelium in animal models in which vessel injury occurs after balloon injury, myocardial infarction, or coronary microembolization [35–37]. Exosomes have emerged as an important paracrine mechanism of cell-to-cell communication by facilitating the transfer of RNAs or proteins from one cell to a recipient cell [38], and their use is currently considered a promising alternative to stem cell therapy [39]. Studies have revealed that ECFC-exosomes promote endothelial cell repair in rat models of balloon injury [23,24]. However, it was previously unknown whether ECFC-exosomes protect against endothelial injury in AS. In this study, we first found that ECFC-exosomes protected against ox-LDL-induced vascular endothelial injury by repairing autophagic flux. Subsequently, we demonstrated that miR-21-5p, which is abundant in ECFC-exosomes, binds to the 3'-UTR of SIPA1L2 to inhibit its expression and enhance autophagic flux, and that miR-21-5p knockdown in ECFC-exosomes reversed the ECFC-exosome-mediated decrease in SIPA1L2 expression in ox-LDL-induced vascular injury. Finally, our results revealed that ECFC-exosomes repaired vascular endothelial injury, regulated lipid balance and activated autophagy in the atherogenic rat model of vascular injury, whereas these effects were eliminated by miR-21-5p knockdown in the ECFC-exosomes.
Fig. 8 ECFC-derived exosomes promote reendothelialization and rescue intimal hyperplasia in a rat atherosclerosis model in a miR-21-5p-dependent manner. A A flowchart illustrating the experiments to evaluate the therapeutic effects of ECFC-exosomes in a rat atherosclerosis model. B The injured areas of carotid arteries in rats with different treatments were evaluated using Evans blue staining. Biological replicates = 6, and technical replicates = 1. C Morphology of the carotid arteries in rats with different treatments was evaluated by HE staining. The red arrows indicate the endothelial layers, and the black arrows indicate intimal hyperplasia. Biological replicates = 6, and technical replicates = 1. D Concentrations of TC, TG, LDL-c and HDL-c in serum samples at the end of the in vivo experiment. Sham, control rats; Model, high-fat diet combined with balloon injury to establish the atherogenic rat model of vascular injury; Exos-miR inh, model rats treated with ECFC-exosomes transfected with miR-21-5p inhibitor; Exos-NC inh, model rats treated with ECFC-exosomes transfected with NC inhibitor. Biological replicates = 6, and technical replicates = 3. Significant differences between treatment groups are indicated as *P < 0.05 and **P < 0.01.
Fig. 9 ECFC-derived exosomes repair vascular injury by rescuing autophagic flux through the miR-21-5p/SIPA1L2 axis in a rat atherosclerosis model. A Expression levels of miR-21-5p and SIPA1L2 in rats with different treatments were determined by qRT-PCR. Biological replicates = 3–6, and technical replicates = 3. B Protein levels of SIPA1L2 and autophagy-related proteins (LC3I, LC3II and p62) in rats with different treatments were determined by western blot assay. Biological replicates = 3, and technical replicates = 1. C Schematic diagram illustrating that ECFC-exosomes repair vascular injury by rescuing autophagic flux through the miR-21-5p/SIPA1L2 axis in a rat atherosclerosis model. Exos-miR inh, ECFC-exosomes transfected with miR-21-5p inhibitor; Exos-NC inh, ECFC-exosomes transfected with NC inhibitor. N.S., not significant; significant differences between treatment groups are indicated as *P < 0.05 and **P < 0.01.
Autologous stem cells from patients are difficult to acquire and often yield low-quality cells [40]. Allogeneic stem cells, on the other hand, have limited applications due to their immunogenicity [41]. To overcome these problems, researchers have focused on the paracrine products of stem cells. Several studies have demonstrated that ECFC-exosomes can be used to treat certain vascular diseases in different tissues [13–15]. Recent studies have shown that autophagy and autophagic flux play key roles in cardiovascular diseases, including atherosclerosis [42]. Impaired or inhibited autophagic flux aggravates ox-LDL-induced inflammatory responses in endothelial cells, whereas enhancing autophagic flux alleviates these responses [43]. miR-100 suppresses vascular injury by stimulating endothelial autophagy [44]. Shimaa et al. [45] reported that EPCs confer therapeutic effects against epilepsy by upregulating autophagy. However, the effect of ECFC-exosomes on autophagy or autophagic flux in ox-LDL-induced vascular endothelial injury has remained unclear. In the present study, our results demonstrated that ECFC-exosomes alleviate ox-LDL-induced vascular endothelial injury and reverse the ox-LDL-induced impairment of autophagic flux, and that these effects were eliminated by the autophagy inhibitor bafilomycin A1, suggesting that ECFC-exosomes protect against ox-LDL-induced vascular endothelial injury by enhancing autophagic flux. Furthermore, we found that ECFC-exosomes promote vascular reendothelialization and inhibit intimal hyperplasia in an atherogenic rat model of vascular injury, consistent with previous studies. Li et al. [23] and Hu et al. [16] reported that exosomes of EPCs derived from human umbilical cord blood promote reendothelialization and inhibit neointimal formation in rat models of balloon injury by upregulating endothelial cell function. The latest research showed that exosomes of EPCs derived from mouse bone marrow ameliorated endothelial dysfunction and decreased lipid droplets in the thoracic aortas of a mouse model of diabetes [46]. Interestingly, our results also demonstrated that ECFC-exosomes activate autophagy and regulate lipid metabolism homeostasis in the atherogenic rat model of vascular injury, which may be a mechanism by which ECFC-exosomes promote the repair of injured vasculature.
Moreover, EPC-exosomes promote proliferation, migration and tube formation in endothelial cells via the delivery of miR-21-5p [24]. We therefore speculated that ECFC-exosomes protect against vascular endothelial injury under hyperlipidaemic conditions by delivering one or more miRNAs. We extracted total RNA from ECFC-exosomes and analysed their miRNA content using miRNA microarray analysis. Our results indicated that miR-21-5p was the most highly expressed miRNA in ECFC-exosomes, consistent with sequencing results for ECFC-exosomes from other origins [24,46] and indicating that EPCs from different origins may share a similar exosomal miRNA composition. We further verified by qRT-PCR that miR-21-5p exhibited the highest abundance in ECFC-exosomes, consistent with the microarray results. Moreover, our data revealed for the first time that ECFC-exosomes inhibit vascular endothelial cell damage by delivering exosomal miR-21-5p to recipient cells in an AS model induced by ox-LDL in vitro and in an atherogenic rat model of vascular injury in vivo. Similarly, many studies have revealed the protective function of exosomal miR-21-5p in different diseases [50–53], suggesting that exosome-mediated transfer of miR-21-5p is an important mechanism for cell-to-cell communication and the regulation of recipient cell functions. Interestingly, our results indicated that ECFC-exosomes enhance autophagic flux and regulate lipid metabolism balance by delivering exosomal miR-21-5p. Previous studies have shown that miR-21-5p is involved in the regulation of autophagy; for example, knockout of miR-21-5p inhibits arsenate-induced autophagy [54], and overexpression of miR-21-5p induces autophagy in female germline stem cells [55]. In addition, overexpression of miR-21-5p correlates with a less atherogenic lipid profile and decreased serum lipid levels [56,57].
Studies have shown that miRNAs directly interact with the 3'UTR of their target mRNAs and regulate gene expression posttranscriptionally by blocking translation or promoting degradation of target mRNAs, thereby influencing the biological functions of cells [58]. In this study, our results demonstrated for the first time that SIPA1L2 is a target of ECFC-exosomal miR-21-5p and that ECFC-exosome-mediated transfer of miR-21-5p suppressed SIPA1L2 expression in both in vitro and in vivo models. SIPA1L2, also known as SPAR2, is a regulator of Rap1 and a member of the SIPA1L family with RapGAP activity [59]. To date, SIPA1L2 has primarily been investigated in the neurological field, where it has been demonstrated that SIPA1L2 interacts with the autophagy marker LC3 [60], indicating that SIPA1L2 is associated with the autophagic pathway. Our data also demonstrated that silencing SIPA1L2 in ox-LDL-treated HMECs promoted cell proliferation by enhancing autophagy and repairing autophagic flux dysfunction.
Despite these promising findings, this study has several limitations. First, the present study lacked evidence that the exosomes administered in vivo reached the injury site, which requires further investigation. Second, the role of SIPA1L2 in atherosclerosis progression has not been further verified in vivo. Third, the underlying mechanism of SIPA1L2 in ox-LDL-induced autophagic flux dysfunction remains largely unknown. Fourth, a rescue experiment was not performed to verify that ECFC-exosomes or miR-21-5p repair vascular injury in the high-fat, balloon-injured rat model by regulating SIPA1L2-mediated autophagy. Fifth, this study focused on the effect of ECFC-exosomes on atherosclerosis-induced endothelial cell injury; the effects of ECFC-exosomes on smooth muscle cells were not explored. These points will be addressed in future studies.
Conclusion
In summary, our study demonstrated that ECFC-exosomes protect against atherosclerosis- or PTCA-induced vascular injury by rescuing autophagic flux and inhibiting SIPA1L2 expression through the delivery of miR-21-5p (Fig. 9C). Our study provides new insight into the molecular mechanism of atherosclerosis-induced vascular intimal injury and suggests a new therapeutic strategy for repairing vascular endothelial injuries.
"year": 2022,
"sha1": "a154d8e00c25170a47d6bd0a5cfbfae51d2e3f32",
"oa_license": "CCBY",
"oa_url": "https://biosignaling.biomedcentral.com/track/pdf/10.1186/s12964-022-00828-0",
"oa_status": "GOLD",
"pdf_src": "SpringerNature",
"pdf_hash": "a154d8e00c25170a47d6bd0a5cfbfae51d2e3f32",
"s2fieldsofstudy": [
"Medicine",
"Biology"
],
"extfieldsofstudy": []
} |
Osteoporotic Bone Recovery by a Highly Bone‐Inductive Calcium Phosphate Polymer‐Induced Liquid‐Precursor
Abstract Osteoporosis is an incurable chronic disease characterized by a lack of mineral mass in the bones. Here, the full recovery of osteoporotic bone is achieved by using a calcium phosphate polymer‐induced liquid‐precursor (CaP‐PILP). This free‐flowing CaP‐PILP material displays excellent bone inductivity and is able to readily penetrate into collagen fibrils and form intrafibrillar hydroxyapatite crystals oriented along the c‐axis. This ability is attributed to the microstructure of the material, which consists of homogeneously distributed ultrasmall (≈1 nm) amorphous calcium phosphate clusters. In vitro study shows the strong affinity of CaP‐PILP to osteoporotic bone, which can be uniformly distributed throughout the bone tissue to significantly increase the bone density. In vivo experiments show that the repaired bones exhibit satisfactory mechanical performance comparable with normal ones, following a promising treatment of osteoporosis by using CaP‐PILP. The discovery provides insight into the structure and property of biological nanocluster materials and their potential for hard tissue repair.
Collagen fibrils in bone are an assembly of quasi-hexagonally packed, twisted collagen triple-helix molecules [26] that contain only ≈1.8–4 nm sized tortuous subchannels. [27] Such a structure is inaccessible to most CaP nanomaterials and thereby makes the recovery of the affected bone difficult.
The polymer-induced liquid-precursor (PILP) was first reported for calcium carbonate [28] and was then extended to calcium phosphate. [29–31] PILP is a liquid-like mineral precursor stabilized by charged polymers, such as polyacrylic acid (PAA), [31] poly(allylamine hydrochloride) (PAH), [32] or polyaspartic acid (PASP), [28] that forms thin films on flat substrates and can infiltrate into nanopores. [28,33,34] In vitro experiments have shown that the calcium phosphate PILP is able to infiltrate into collagen fibrils and form oriented intrafibrillar HAP crystallites, with a diffraction pattern indistinguishable from that of the mineralized collagen fibrils in bone. [31,35] Despite the unique liquid-like properties of PILP and its possible vital role in biomineralization processes, [36] its microstructure is still under debate. While early studies suggested that PILP appears as a dense liquid phase, [28,37] cryo-transmission electron microscopy (cryoTEM) observations showed only amorphous calcium phosphate (ACP) nanoclusters in the early stage of in vitro collagen remineralization experiments, where the calcium phosphate PILP is supposed to form. [33,35] Similarly, a recent cryoTEM study of a calcium carbonate system indicated that PILP is actually a polymer-driven assembly of nanoclusters. [34] Although calcium phosphate PILP displays a promising ability to remineralize collagen fibrils, it has not yet been applied in biomedical bone engineering. A major challenge is that calcium phosphate PILP is generally synthesized at low Ca 2+ /PO 4 3− concentrations, [31] which are insufficient to provide the mineral mass required for the recovery of demineralized bone at the macroscopic scale.
In the present work, we demonstrate that the full recovery of osteoporotic bone can be achieved using a free-flowing calcium phosphate polymer-induced liquid-precursor (CaP-PILP) material. By combining two biocompatible polymeric additives, PAA and PASP, CaP-PILP is stabilized on a large scale and at a high Ca 2+ /PO 4 3− concentration. In contrast to previous CaP materials for bone repair, this CaP-PILP material has excellent bone inductivity, which uniquely allows the intrafibrillar mineralization of collagen fibrils. This is directly related to its microstructure, which contains a high density of uniform-sized (≈1 nm) ACP nanoclusters. Both in vitro and in vivo experiments provide the first proof that the structural and mechanical properties of osteoporotic bone can be recovered to those of healthy bone by treatment with CaP-PILP.
Two biocompatible, negatively charged polymers, PAA and PASP, were used in the synthesis of our CaP-PILP material. PAA with sufficient molecular weight is able to stabilize the PILP phase of CaP. [38] However, this polymer also causes precipitates to form when mixed with high concentrations of Ca 2+. To generate a stable CaP-PILP at a high Ca 2+ concentration without precipitation, we used PASP to bind Ca 2+ as a competitor to PAA, so that the PILP phase can be formed. The PILP phase forms at relatively low concentrations of charged polymers (<20 mg L −1 ) and Ca 2+ (≤5 × 10 −3 m). [31,35] However, in this work we aimed to form a high concentration of CaP-PILP so that it can sufficiently support the repair of osteoporotic bones; therefore, a high concentration of PAA/PASP (26.1 and 13.0 mg mL −1 ) was used, with the maximal amount of Ca 2+ that can be chelated by the PAA/PASP, which is 43.5 × 10 −3 m. In a typical procedure, 2.0 mL of a 0.1 m CaCl 2 solution was mixed with 0.2 mL of a 0.3 g mL −1 PASP (M w = 9–11 kDa) solution to obtain solution A, while 2.0 mL of a 0.1 m Na 2 HPO 4 solution was mixed with 0.4 mL of a solution containing 0.3 g mL −1 PAA (M w = 450 kDa) to obtain solution B; 2.4 mL of solution B was then slowly injected into 2.2 mL of solution A with vigorous stirring, and the pH value was adjusted to 7.4 with NaOH solution. The resulting material is transparent and viscous but still free flowing (Figure 1a; Movie S1, Supporting Information). Cryogenic electron tomography (cryoET) showed that the resulting material is densely loaded with uniform-sized, separate, and homogeneously distributed nanoclusters, indicating the formation of CaP-PILP (Figure 1b; Movie S2, Supporting Information). In the PILP process, PAA and PASP stabilize an amorphous precursor that is sufficiently hydrated to behave as a liquid phase. Close-to-focus cryoTEM images (defocus = −1 µm) showed that the clusters are ≈1 nm in size (inset 1 of Figure 1b). [5,39] Selected area electron diffraction (SAED, inset 2 of Figure 1b) showed a broad diffraction band, while powder X-ray diffraction (pXRD) showed a broad peak at ≈2θ = 30° (Figure S1a, Supporting Information); both results indicate that the clusters are ACP. This assignment was further confirmed by Fourier transform infrared (FT-IR) spectroscopy, which revealed two wide bands typical of phosphate stretching (v 3 ) at 1055 cm −1 and phosphate bending (v 4 ) at 560 cm −1 (Figure S1b, Supporting Information). The dynamic mechanical properties of CaP-PILP were examined by frequency-dependent oscillatory shear rheology (Figure 1c). The measurements revealed a dynamic storage modulus (G′) slightly lower than the loss modulus (G″), confirming that CaP-PILP is a fluid despite its very high viscosity. The strain-dependent oscillatory rheology of CaP-PILP exhibited a broad linear viscoelastic region, indicating that this material has a wide processing range within the strain domain of 0.1%–100%. Taken together, the results indicate that CaP-PILP is a viscous, transparent, liquid-like precursor phase with a high density of uniform-sized ACP clusters.
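As a quick back-of-the-envelope check on the quoted Ca 2+ concentration (volumes taken from the typical procedure above):

```python
# 2.0 mL of 0.1 m CaCl2 diluted into the total mixed volume:
# 2.0 + 0.2 (solution A) + 2.0 + 0.4 (solution B) = 4.6 mL
ca_moles = 0.1 * 2.0e-3           # mol of Ca2+
total_volume_l = 4.6e-3           # L
print(ca_moles / total_volume_l)  # -> 0.0435 m, i.e., 43.5 x 10^-3 m
```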
The liquid-like CaP-PILP can be solidified by injecting it into moulds kept at 37 °C for 7 d (Figure 1d). Conventional transmission electron microscopy (TEM) showed that the amorphous phase transforms into nanorod/nanoplate-like structures with a length of ≈20 nm (Figure S2a–c, Supporting Information). [40] Thermogravimetric and differential thermal analysis (TG/DTA) measurements (Figure 1f) revealed an endothermic peak between 20 and 200 °C and an exothermic peak between 200 and 400 °C, which are assigned to the loss of water and the decomposition of organics, respectively. The TG curves showed that the solidified material is composed of 69.4 wt% mineral, 19.3 wt% organics, and 11.3 wt% water. The solidification and crystallization of CaP-PILP are related to the conversion of the ≈1 nm clusters, which are similar in size to the 0.7–1.0 nm "Posner's clusters." [39,41] Posner's clusters act as the basic building blocks that generate larger ACP nanoparticles by cluster-cluster aggregation. [5] By taking up additional OH − groups and calcium ions into the voids within the ACP precursors, HAP can then be formed. [42,43] Generally, the solidification and crystallization of such clusters to HAP is fast and occurs within hours. [5] However, these processes are extended to days in our CaP-PILP owing to the stabilizing effect of PAA and PASP.
The CaP-PILP was then used for the remineralization of type I collagen fibrils. Native collagen fibrils (self-assembled from rat tail type I collagen) display periodic gap and overlap regions (Figure 2a). TEM grids coated with the collagen fibrils were then floated at 37 °C over the CaP-PILP or over a suspension of commercial HAP nanocrystals. After 7 d of contact with the commercial HAP nanocrystals, the fibrils were barely mineralized, and HAP nanocrystals were observed only around the collagen fibrils (Figure 2b). In contrast, the CaP-PILP-treated fibrils showed increasing levels of mineralization with time (Figure 2c–e), and SAED confirmed that HAP is the final mineral product (insets in Figure 2c–e). The intrafibrillar mineralization of collagen was demonstrated using 3D super-resolution stochastic optical reconstruction microscopy (STORM) (Figure 2f–j). For this experiment, the collagen fibrils were labelled before mineralization with the red-emitting fluorescent reagent Cy3B. After 7 d, the mineralized collagen fibrils were stained with 10.0 × 10 −6 m calcein to label the newly generated HAP nanocrystals. The results showed that crystals form within the collagen fibrils (Figure 2j) and that the degree of mineralization in the collagen fibrils is approximately 95%.
To exclude possible endotoxin contamination, suspensions of commercial ACP nanoparticles (ACP group, size of ≈80 nm, Figure S3a, Supporting Information), HAP nanoparticles (HAP group, ≈150 × 30 × 1.5 nm, Figure S3b, Supporting Information), or CaP-PILP were incubated with RAW264.7 cells, which were then assessed for secretion of the inflammatory cytokine IL-6, a sensitive readout for the presence of endotoxins. [44] IL-6 release in the CaP-PILP group was similar to that in the ACP, HAP, and control groups after culturing for 1, 6, and 24 h, indicating that CaP-PILP did not promote IL-6 secretion (Figure S4, Supporting Information). To investigate the biocompatibility and osteoinductive capacity of CaP-PILP, bone marrow-derived mesenchymal stem cells (MSCs) were cultured with CaP-PILP, using the ACP group, the HAP group, and osteogenic medium only (blank group) as controls. In terms of cell differentiation, the expression of alkaline phosphatase (ALP) in MSCs cultured with the ACP, HAP, and CaP-PILP groups all increased after 7 d compared with that of the blank group (Figure S5a–d,m, Supporting Information). The ALP activity of the CaP-PILP group is similar to that of the HAP group, slightly higher than that of the ACP group, and ≈2.4 times higher than that of the blank group, revealing that CaP-PILP can promote the differentiation of MSCs (Figure S5m, Supporting Information). Another biochemical marker of in vitro osteogenic differentiation, calcium deposition, [45] was also investigated for the MSCs after culturing with osteogenic medium for 14 d (Figure S5e–h,n, Supporting Information). We observed that calcium deposition increased in the ACP, HAP, and CaP-PILP groups compared with the blank group (Figure S5e–h, Supporting Information). Quantitative analysis showed that the optical density (OD) value of the CaP-PILP group was ≈2.0, 1.5, and 20.0 times that of the HAP, ACP, and blank groups, respectively (Figure S5n, Supporting Information). However, without MSCs, calcium deposition in the four groups with osteogenic medium was relatively low, and the OD values of the blank, ACP, HAP, and CaP-PILP groups were 4.6 × 10 −2, 5.1 × 10 −2, 4.7 × 10 −2, and 6.2 × 10 −2, respectively (Figure S5i–l,o, Supporting Information). In general, the results indicated that CaP-PILP provides a suitable physicochemical and biological microenvironment for the differentiation of MSCs, which is essential for in vivo osteoporotic bone recovery.
The affinity of CaP-PILP for bone was studied by measuring the permeation of a rhodamine B-containing droplet of CaP-PILP into osteoporotic bone, which simulates the in vivo infiltration of CaP-PILP into osteoporotic bone (Figure 3a–d). After the purple CaP-PILP droplet had been placed on the milky white osteoporotic bone for 30 s, the droplet spread over the surface of the bone (Figure 3b). After 2 h, the purple color was uniformly distributed throughout the osteoporotic bone, indicating excellent permeability (Figure 3c,d). After treatment with CaP-PILP for 1 d, ACP was observed surrounding and entering the collagen fibrils (Figure 3f), and remineralization of the collagen fibrils was observed after 7 d; the SAED patterns confirmed that the mineral phase is HAP (Figure 3g). In contrast, without CaP-PILP treatment, demineralized collagen fibrils were observed in the osteoporotic bone (Figure 3e). We then investigated the in vitro osteoporotic bone recovery ability of CaP-PILP by injecting a suspension of HAP particles or CaP-PILP into osteoporotic bones. After incubation at 37 °C for 2 weeks, the samples were analyzed by micro-computed tomography (micro-CT), with native, untreated osteoporotic bone and healthy bone used as comparisons (Figure 3h–m; Figure S6b, Supporting Information). Scanning transmission electron microscopy (STEM) showed that osteoporotic bone is deficient in calcium and phosphate (Figure 3n–p), and the SAED patterns confirmed that little mineral is present (inset in Figure 3n). In contrast, abundant calcium and phosphorus were detected in the CaP-PILP-recovered bone (Figure 3r–t), and the HAP diffraction patterns of the CaP-PILP-recovered bone showed (002) diffraction arcs following the long axis of the collagen fibrils (inset in Figure 3r). These results demonstrated that CaP-PILP can effectively recover osteoporotic bone in vitro.
Subsequently, the in vivo osteoporotic bone recovery capability of CaP-PILP was evaluated in ovariectomized osteoporotic mouse tibia using a percutaneous mini-invasive injection syringe at 4, 8, and 12 weeks (Figure 4a). To determine the location of the injected CaP-PILP, in vivo imaging was performed on living osteoporotic mice to locate the fluorescence signals from calcein-stained CaP-PILP (Figure S7, Supporting Information). After injection of the calcein-stained CaP-PILP droplet into the osteoporotic mouse tibia for 30 min, the green fluorescence signal from CaP-PILP extended into the osteoporotic bone (Figure S7a, Supporting Information). After 2 h, the green color infiltrated throughout the tibia, indicating that the CaP-PILP had been well distributed in the tissue (Figure S7b, Supporting Information). Bone loss results from oestrogen deficiency due to enhanced bone resorption and impaired osteoblast function. [46] In the experiments, the control group was subjected to bilateral ovariectomy, which limited the secretion of oestrogen, resulting in osteoporosis. The control group cannot heal naturally during the lifetime of the mice because the osteoporotic bone lacks a mineral supply. Representative 2D and 3D micro-CT images of the osteoporotic bone, phosphate-buffered saline (PBS), CaP-PILP, and healthy bone (sham-operation) groups at postoperative weeks 0, 4, 8, and 12 are provided in Figure 4b–i and Figures S8–S10 in the Supporting Information. In the osteoporotic bone and PBS groups, hardly any new bone formation occurred over time (Figures S8–S10, Supporting Information). In contrast, after 4 weeks, new bone formation was significantly increased in the CaP-PILP group (Figure 4c,g). After 8 weeks, the healing status of the CaP-PILP group (Figure 4d,h) was already comparable with that of the healthy bone group (Figures S8c,f, S9c,f, and S10c,f, Supporting Information). No further growth of new bone tissue was detected at postoperative week 12 (Figure 4e,i), indicating that bone recovery in the CaP-PILP group plateaued after 8 weeks. Haematoxylin and eosin (H&E) staining also demonstrated that the CaP-PILP group showed abundant newly formed bone tissue after 8 and 12 weeks (Figure 4l,m), nearly comparable with the healthy bone group (Figures S8i, S9i, and S10i, Supporting Information), while scarce newly formed bone was detected in the osteoporotic bone and PBS groups (Figures S8g,h, S9g,h, and S10g,h, Supporting Information). The bone volume/total volume ratio (BV/TV), trabecular number (Tb.N) and trabecular separation (Tb.Sp) of the four groups were analyzed to quantify the amounts of osteoporotic bone and newly formed bone (Figure 5a–c) and were shown to remain constant for the osteoporotic bone, PBS, and healthy bone groups after 4, 8, and 12 weeks. The BV/TV and Tb.N of the CaP-PILP group, however, increased by factors of approximately 2.6 and 1.3, respectively, after 8 weeks, while the Tb.Sp decreased. These values were all comparable with those of the healthy bone group, indicating that CaP-PILP remarkably promotes new bone formation in osteoporotic regions. Similar to the in vitro experiments, elemental mapping and SAED revealed that the osteoporotic bone and PBS-treated bone showed a strongly reduced mineral content (Figure S11a,b, Supporting Information), while the CaP-PILP-recovered bone showed the formation of HAP crystals with their c-axis aligned along the collagen fibrils, similar to healthy bone (Figure S11c,d, Supporting Information).
These results confirmed that treatment with CaP-PILP is a promising method for rapid osteoporotic bone recovery in vivo.
The mechanical properties of the osteoporotic bone and the recovered bone were also tested (Figure 5d,e). The results showed that the hardness values of the osteoporotic bone, PBS recovered bone, CaP-PILP treated bone, and healthy bone were 116.5, 137.7, 371.8, and 280.5 MPa, respectively, while the Young's moduli were 5.3, 5.2, 14.3, and 13.9 GPa, respectively (Figure 5d). The recorded compressive stress-strain curves can be divided into three main regions: linear elastic, plateau, and densification. For the four groups, the stiffness of the materials was determined from the maximum value of the stress-strain slope in the linear elastic region. [47] The compressive stress-strain measurements indicated that the stiffness values of the osteoporotic bone, PBS recovered bone, CaP-PILP recovered bone, and healthy bone were 23.4, 23.6, 50.2, and 48.3 MPa, respectively (Figure 5e). These results demonstrate that our CaP-PILP can effectively enhance the mechanical performance of osteoporotic bone and that the recovered zones displayed similar (or even higher) stiffness and hardness compared with healthy bone, which makes CaP-PILP an excellent candidate for osteoporotic bone recovery. The TG curves showed that the CaP-PILP recovered bone was composed of 68.3 wt% mineral, a value very similar to that found for healthy bone (66.8 wt% mineral), while the mineral ratio in the osteoporotic bone and PBS groups was only ≈50.0 wt% (Figure 5f).
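Extracting the stiffness as the maximum slope of the stress-strain curve in the linear elastic region, as described above, is easy to automate. A minimal sketch is shown below; the data arrays and the 5% elastic-limit cutoff are illustrative assumptions, not values or code from this study.

```python
import numpy as np

def stiffness_from_curve(strain, stress, elastic_limit=0.05):
    """Estimate stiffness as the maximum slope of the stress-strain
    curve within the linear elastic region (strain < elastic_limit)."""
    strain = np.asarray(strain, dtype=float)
    stress = np.asarray(stress, dtype=float)
    mask = strain < elastic_limit                      # restrict to the elastic region
    slopes = np.gradient(stress[mask], strain[mask])   # local d(stress)/d(strain)
    return slopes.max()                                # maximum slope = reported stiffness

# Hypothetical compression data: linear region of slope 50.2 MPa, then a plateau
strain = np.linspace(0, 0.3, 301)
stress = np.where(strain < 0.05, 50.2 * strain, 50.2 * 0.05 + 5.0 * (strain - 0.05))
print(f"stiffness = {stiffness_from_curve(strain, stress):.1f} MPa")
```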
In this work, bone-inductive CaP-PILP was synthesized and used for osteoporotic bone recovery as an alternative to traditional osteoporotic bone treatment methods. The resulting CaP-PILP can penetrate into osteoporotic bone tissue and induce the intrafibrillar mineralization of collagen fibrils with HAP, a key aspect in effectively recovering osteoporotic bone tissue. The recovered bone displays good mechanical performance and is comparable with healthy bone. The fluidity of CaP-PILP allows for minimally invasive injection recovery of osteoporotic bone, without the need for a surgical incision in clinical applications. More generally and fundamentally, our results provide the first proof that the structure and mechanical performance of osteoporotic bone can be recovered to their healthy state by treatment with CaP-PILP.
…(Mw = 9-11 kDa) solution to obtain solution A, while 2.0 mL of a 0.1 M Na2HPO4 solution was mixed with 0.4 mL of a solution containing 0.3 g mL−1 PAA (Mw = 450 kDa) to obtain solution B. Then, 2.4 mL of solution B was slowly injected into 2.2 mL of solution A with vigorous stirring, and the pH value was adjusted to 7.4 with NaOH solution.
CryoTEM of CaP-PILP: CryoTEM Au grids (R2/2 Quantifoil Jena Grids) were treated by glow discharge for 40 s to increase their hydrophilicity. Three microliters of CaP-PILP were applied to the grid, and then the grid was blotted for 3 s, relaxed for 60 s to allow the formation of a thin liquid layer, and vitrified by plunging into liquid ethane at liquid nitrogen temperature. CryoTEM imaging was performed under an ≈1 µm defocus on an FEI Titan TEM equipped with a field emission gun operating at 300 kV. The images were recorded using a 2k × 2k Gatan CCD camera equipped with a postcolumn Gatan energy filter (GIF), with an electron dose of 16 e− Å−2 per image. A cryogenic tomography tilt series was recorded by tilting the holder from −65° to +65° using the Saxton tilt increment scheme (87 images were taken in total). [48] These images were recorded under an ≈3 µm defocus, with an electron dose of 2 e− Å−2 per image.
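For orientation, the Saxton scheme shrinks the tilt increment as the tilt angle grows (commonly as the cosine of the current angle), so that high-tilt projections are sampled more densely. The sketch below generates such a series; the 2° starting increment is an assumption, chosen because it happens to yield roughly the 87 angles between −65° and +65° mentioned above.

```python
import math

def saxton_angles(theta_max=65.0, base_increment=2.0):
    """Generate tilt angles from 0 up to theta_max where the local step
    shrinks as base_increment * cos(theta), then mirror to negative tilts."""
    angles = [0.0]
    while angles[-1] < theta_max:
        step = base_increment * math.cos(math.radians(angles[-1]))
        angles.append(min(angles[-1] + step, theta_max))
    # mirror to cover the full range -theta_max .. +theta_max
    return sorted({round(-a, 3) for a in angles} | {round(a, 3) for a in angles})

series = saxton_angles()
print(len(series), "tilt angles, e.g.:", series[:3], "...", series[-3:])
```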
Rheological Test: The rheology experiments were performed on an Anton Paar rheometer at 25 °C. CaP-PILP was prepared and gently placed in the middle of a 15 mm diameter parallel plate with an appropriate gap. Dynamic oscillatory frequency sweep measurements were conducted at a 1% strain amplitude. To prevent evaporation, a lid was placed on top.
Pro-Inflammatory Cytokines (IL-6): To test the inflammatory response to CaP-PILP, RAW264.7 cells were cultured with CaP-PILP, a suspension of ACP particles, a suspension of HAP particles, or medium only at 37 °C for 1, 6, and 24 h, respectively. The concentration of the cytokine IL-6 was measured by ELISA using antibodies obtained from Biolegend, according to the manufacturer's instructions (R&D Systems).
Osteogenic Differentiation: Four types of media were prepared and coated onto 24-well Petri dishes: 50 µg of CaP-PILP, commercial ACP particles, HAP particles, and a blank. All the groups were sterilized overnight under ultraviolet germicidal lamps. The osteogenic medium was composed of 10−8 M dexamethasone, 50 µg mL−1 ascorbic acid, 10 × 10−3 M β-glycerol phosphate, 10% fetal bovine serum (FBS), and high-glucose Dulbecco's modified Eagle's medium (DMEM). Then, MSCs were seeded in the above 24-well Petri dishes at a density of 1 × 104 cells/well. The media were changed every other day, and the MSCs in the four groups were incubated at 37 °C in a humidified atmosphere containing 5% CO2. After the MSCs had been cultured for 1 week in osteogenic medium, the ALP activity was examined using a commercial detection kit (Beyotime, C3206). The cell nuclei were stained with 4′,6-diamidino-2-phenylindole (DAPI) to count the total cell number and calculate the ALP-staining-positive rate of the MSCs. The calcium deposits formed by the MSCs were also stained with Alizarin Red S (ARS) after culturing in osteogenic medium for 14 d. To further quantify the results of the ARS staining, the stained nodules were solubilized with 5% sodium dodecyl sulfate (SDS) in 0.5 M HCl for 30 min at room temperature. Finally, the OD value of the solution was measured at a wavelength of 405 nm.
Self-Assembly of Collagen Fibrils on the TEM Grids and Laser Confocal Culture Dish (LCCD) and Collagen Mineralization: A 3 mg mL−1 stock solution of type I collagen was purchased from Gibco-Invitrogen. The assembly solutions contained 50 × 10−3 M glycine and 200 × 10−3 M KCl, and the pH was adjusted to 9 using NaOH solution. An 8.33 µL volume of the collagen stock solution was added dropwise into 0.5 mL of assembly solution and incubated for 20 min at 37 °C. Then, 3 µL of the incubated collagen solution was placed on a nickel TEM grid for 12 h and rinsed with deionized water. For the LCCD samples, 100 µL of collagen solution (50 µg mL−1) was placed dropwise over an aminopropyltriethoxysilane (APTES)-modified LCCD glass substrate, incubated at a constant temperature of 37 °C for 12 h and washed with deionized water. Then, the collagen fibrils were further cross-linked with 0.05% glutaraldehyde for 4 h. TEM grids loaded with collagen fibrils were floated on the CaP-PILP for mineralization. The mineralization degree of the collagen fibrils was quantified using ImageJ based on the method used in previous work. [49] Briefly, the pixel intensities of the mineralized and nonmineralized collagen fibrils in the TEM images were different. The nonmineralized regions contain light atoms (C, H, and N), while the mineralized portions contain additional heavier atoms (Ca and P). As a result, the mineralized region has a lower pixel intensity compared with the nonmineralized region, and the areas of the mineralized region (S1) and nonmineralized region (S2) can be obtained by segmenting the image based on pixel intensities. The mineralization degree (m.d.) was calculated as (six TEM images were examined to obtain the mean mineralization degree)

m.d. = S1/(S1 + S2)

3D STORM Imaging: The collagen fibrils were labeled with a fluorescent reagent by immunofluorescence staining. CaP-PILP was incubated with blocking buffer (Beyotime, China, Product Code: P0023B) for 1 h at 37 °C. After washing three times, the samples were incubated with Cy3B-conjugated secondary antibodies for 2 h. After that, 1 mL of CaP-PILP was placed dropwise onto the LCCD, which was loaded with immunofluorescence-stained fibrils. The material was incubated at 37 °C for 6 h and rinsed with deionized water three times. Then, the mineralized collagen fibrils were labeled with 10 × 10−6 M calcein for 20 min and rinsed with deionized water three times. All STORM imaging experiments were performed on a Nikon Ti-E inverted optical microscope, and the movies and images were analyzed with Nikon NIS-Elements AR software.
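Returning to the mineralization-degree quantification above: it amounts to thresholding the image and taking an area ratio. A minimal sketch (requiring scikit-image) is shown below; the Otsu threshold and the file names are illustrative assumptions, since the exact ImageJ segmentation settings are not given in the text.

```python
import numpy as np
from skimage import io, filters

def mineralization_degree(image_path):
    """Estimate m.d. = S1 / (S1 + S2) from a TEM image, treating darker
    (lower-intensity) pixels as mineralized (S1). In practice one would
    first mask the fibril region; here the whole frame is used for
    simplicity."""
    img = io.imread(image_path, as_gray=True).astype(float)
    thresh = filters.threshold_otsu(img)   # assumed: Otsu intensity split
    s1 = np.count_nonzero(img < thresh)    # mineralized area (low intensity)
    s2 = np.count_nonzero(img >= thresh)   # nonmineralized area
    return s1 / (s1 + s2)

# Averaging over six images, as in the text (paths are hypothetical):
paths = [f"tem_{i}.tif" for i in range(6)]
md_mean = np.mean([mineralization_degree(p) for p in paths])
print(f"mean mineralization degree: {md_mean:.2f}")
```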
In Vitro Recovery of Osteoporotic Bone: In vitro experiments were used to examine the mineralization of collagen fibrils by CaP-PILP in osteoporotic bones without cells or vessels. Osteoporotic and healthy bones from female Sprague Dawley rats were kindly provided by the Sir Run Run Shaw Hospital Affiliated with the Medical College of Zhejiang University, and the use of animal tissues for the in vitro study was approved according to the guidelines on the care and use of animals for scientific purposes issued by the National Institutes of Health (NIH) and Zhejiang University. First, osteoporotic rat models were created by ovariectomy of 8-week-old female Sprague Dawley rats (body weight, 290-330 g). The healthy rats and osteoporotic rats were then sacrificed to obtain the femurs. After that, the healthy and osteoporotic femurs were cut into slices and dried in an oven at 37 °C for seven days prior to use. CaP-PILP was synthesized by the above method, and the HAP suspension was made by suspending commercial HAP particles in a PBS solution. CaP-PILP or HAP particles were injected into the osteoporotic bones, and the bones were then placed in a water bath at 37 °C for 14 d. After that, the bones were dried at room temperature before further experiments.
In Vivo Recovery of Osteoporotic Bone: All animal experiments were performed at the Sir Run Run Shaw Hospital Affiliated with the Medical College of Zhejiang University. All handling and care of the animals were carried out according to the guidelines on the care and use of animals for scientific purposes issued by the NIH and Zhejiang University. First, models of ovariectomy-induced osteoporotic bone were created. All mice in this model were deprived of food for 6 h before being anaesthetized. Each mouse was given a general anesthetic of 50 mg kg−1 pentobarbital sodium by intraperitoneal injection and then fixed in the prone position. The psoas muscles were cut along the linea scapularis subcostalis on both sides to expose the ovaries and uterine horns under the kidneys, and ligation was then performed. Subsequently, the uterine horns were cut, the ovaries were completely extracted, the incision was sewn closed layer by layer, and the model creation surgery was complete. The removed tissue was examined to ensure the completeness of the surgery, and the ovaries were confirmed by histological examination. In the sham-operation mice, the incisions were made without resection. Briefly, 65 healthy 8-week-old female C57BL/6 mice (body weight, 20-25 g) were used in this study. Fifty mice were randomly selected for the ovariectomized groups, and the rest (15 mice) were used as the healthy group (sham operation). The ovariectomized mice were randomly divided into three groups: osteoporotic bone (no intervention), PBS, and CaP-PILP (n = 10) groups. The different administrations began at the 6th week after oophorectomy. All the materials were filtered through 0.22 µm Millipore films prior to | 2019-09-10T00:27:58.295Z | 2019-08-20T00:00:00.000 | {
"year": 2019,
"sha1": "39d6645c3b93eac89eaa1a9b04b2a8b0ef496f62",
"oa_license": "CCBY",
"oa_url": "https://onlinelibrary.wiley.com/doi/pdfdirect/10.1002/advs.201900683",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "a84896c178543074d29af4822d6a68be0bb67bbd",
"s2fieldsofstudy": [
"Materials Science",
"Medicine"
],
"extfieldsofstudy": [
"Medicine",
"Chemistry"
]
} |
73529283 | pes2o/s2orc | v3-fos-license | Limiting Behavior of Travelling Waves for the Modified Degasperis-Procesi Equation
Using an improved qualitative method, which combines characteristics of several methods, we classify all travelling wave solutions of the modified Degasperis-Procesi equation in specified regions of the parametric space. Besides some popular exotic solutions, including peaked, looped, and cusped waves, this equation also admits some very particular waves, such as fractal-like waves, double stumpons, double kinked waves, and butterfly-like waves. The last three types of solutions have not been reported in the literature. Furthermore, we give the limiting behavior of all periodic solutions as the parameters tend to some special values.
The DP equation is of interest for two reasons. On the one hand, (1) is integrable [1]. On the other hand, the DP equation presents abundant nonlinear phenomena due to the coexistence of nonlinear convection and nonlinear dispersion. Equation (1) admits wave-breaking phenomena and the existence of exotic solutions, including peakons and cuspons [11-15].
To further complement the study of the DP equation, Wazwaz introduced and studied the modified Degasperis-Procesi (mDP) equation [16]. It is clear that the nonlinear convection term has been changed to a quadratic one in (2). Wazwaz employed these modified forms as a vehicle to explore the change in the physical structure of the solutions. Many researchers have obtained abundant travelling wave solutions by different methods. Ma et al. [17] applied the auxiliary equation method to obtain some new solitary and travelling wave solutions. Rui et al. [18] obtained abundant travelling wave solutions by the integral bifurcation method. In [19], Liu and Ouyang found a new feature of solitary wave solutions of the mDP equation: a bell-shaped solitary wave and a peakon coexisting for the same wave speed.
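For reference, since the displayed forms are not reproduced above, the DP and mDP equations as commonly cited in the literature (an assumption here, not a quotation from this paper) are:

```latex
% DP equation (1): linear convection term u u_x
\begin{equation}
u_t - u_{xxt} + 4uu_x = 3u_xu_{xx} + uu_{xxx}
\end{equation}
% mDP equation (2): the convection term is replaced by its quadratic
% counterpart u^2 u_x, as introduced by Wazwaz
\begin{equation}
u_t - u_{xxt} + 4u^2u_x = 3u_xu_{xx} + uu_{xxx}
\end{equation}
```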
It is noted that the nonsmooth wave solutions of the mDP equation obtained in previous studies have not been checked in the weak-solution sense, and new solutions of the mDP equation remain to be found. Motivated by these two aspects, we try to address both. Our method for finding solutions of the mDP equation combines characteristics of several methods [10-15] and has three features: (i) a definition of weak solutions of the mDP equation is given; (ii) more new exotic solutions are obtained, and the parameter space is divided in further detail; (iii) the limiting behavior of travelling wave solutions is given.
This paper is organized as follows. In Section 2, we give the definition of weak solutions. In Section 3, we state the theorems classifying the travelling waves of the mDP equation. In Section 4, we give the proofs. Section 5 is the conclusion.
Definitions and Notations
In this section, we will give the classification of travelling wave solutions of (2), which is stated in Theorem 3.
For a travelling wave (, ) = ( − ), (2) takes the form (3), where is the wave speed. By integrating with respect to the travelling wave variable and letting the integral constant be zero, (3) becomes (4). Equation (4) makes sense for all ∈ 1 loc (R). The following definition is therefore natural.
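A minimal sketch of the travelling-wave reduction just used, written in standard notation (the symbols u, φ, c, ξ are assumptions, since the paper's own symbols are not reproduced):

```latex
% travelling-wave ansatz: profile \varphi moving at speed c
\begin{align}
u(x,t) &= \varphi(\xi), \qquad \xi = x - ct, \\
\partial_t u &= -c\,\varphi'(\xi), \qquad \partial_x u = \varphi'(\xi),
\end{align}
% so the PDE reduces to an ODE in \xi, which is then integrated once
% with the integration constant set to zero, as stated in the text.
```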
Following the proofs of Lemmas 4 and 5 in [15], we can give the following definition of weak travelling wave solutions, where and → at any finite endpoint of . (ii) If has strictly positive Lebesgue measure () > 0, we have = −(1/3) 2 − 0 .
Main Results
Let ℎ , ℎ , and ℎ be defined as in (11). All travelling wave solutions ( − ) of (1) are smooth except at points where = . We state our main result as follows.
(2) For ℎ ∈ (ℎ , ℎ ], there exists a periodic wave solution. (3) For ℎ ∈ (ℎ , 0), there exist a periodic wave solution and a looped periodic wave solution. Moreover, as ℎ → 0, the periodic wave solution converges to a solitary wave solution pointing downward and the looped periodic wave solution converges to a looped wave solution.
(2) For ℎ ∈ (ℎ , ℎ ], there exists a periodic wave solution. (3) For ℎ ∈ (ℎ , 0), there exist periodic and looped periodic wave solutions. Moreover, as ℎ → 0, the periodic wave solution converges to a solitary wave solution pointing downward and the looped periodic wave solution converges to a butterfly-like wave solution.
any travelling wave solution of (1) falls into one of the following categories.
(4) If ℎ ∈ (ℎ , 0), there exist a periodic wave solution and a cusped periodic wave solution. Moreover, as ℎ → 0, the periodic wave solution converges to a solitary wave solution pointing downward and the cusped periodic wave solution converges to a cusped wave solution.
, any travelling wave solution of (1) falls into one of the following categories.
(2) For ℎ ∈ (ℎ , 0), there exists a periodic wave solution. Moreover, as ℎ → 0, there exists a cusped wave solution and the periodic wave solution converges to a solitary wave solution pointing downward.
, any travelling wave solution of (1) falls into one of the following categories.
(4) For ℎ ∈ (ℎ , 0), there exist a periodic wave solution and a cusped periodic wave solution. Moreover, as ℎ → 0, the periodic wave solution converges to a solitary wave solution pointing downward and the cusped periodic wave solution converges to a cusped wave solution.
, any travelling wave solution of (1) falls into one of the following categories.
(1) If ℎ ≤ ℎ , there are no bounded traveling solitary wave solutions. (2) For ℎ ∈ (ℎ , ℎ ], there exists a periodic wave solution. (3) If ℎ ∈ (ℎ , 0), there are two types of periodic wave solutions. Moreover, as ℎ → 0, the two periodic wave solutions converge to a solitary wave solution pointing downward and a peaked wave solution, respectively.
Theorem 10. If 0 ≥ , any travelling wave solution of (1) falls into one of the following categories.
Theorem 11 (composite waves). A countable number of cusped, peaked, and looped waves in the above cases corresponding to the same value of can be joined at points where = to form composite waves. If ( −1 ()) = 0, one can obtain travelling wave solutions with very strange profiles, such as travelling waves with a fractal appearance (see Figure 1(k)). For = −(1/3) 2 − 0 , the composite waves are solutions of (2) even if ( −1 ()) > 0. Hence we can obtain double stumpons, which contain intervals where = (see Figure 1(l)).
Proof of Theorem 3
In this section, we will show that the functions satisfying (a) and (b) in Definition 2 consist exactly of the waves stated in Theorem 3.
Let be a function satisfying (a) and (b); then each wave segment solves the equation for some interval and constants 0 , ℎ.
To determine the solutions of (3), we first state the following facts.
Lemma 12. The qualitative behavior of solutions of 2 = () near points where has a zero or a pole is as follows.
(2) If () has a double zero at = , the solution of (6) satisfies () ∼ + exp(−||) as → ∞. It is easy to find that smooth solitary wave solutions exist if () has a simple zero.
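A sketch of the standard local analysis behind statements of this kind, assuming (as is usual in this framework) that the wave equation has been reduced to the planar form (φ')² = f(φ): a simple zero of f is approached quadratically, while a double zero is approached exponentially,

```latex
% near a simple zero m of f, i.e. f(m)=0 and f'(m) \neq 0:
\begin{equation}
\varphi(\xi) = m + \tfrac{1}{4}f'(m)\,(\xi-\xi_0)^2 + O\big((\xi-\xi_0)^4\big),
\end{equation}
% near a double zero m of f, i.e. f(m)=f'(m)=0 and f''(m)>0:
\begin{equation}
\varphi(\xi) \sim m + A\,\exp\!\Big(-\sqrt{f''(m)/2}\;|\xi|\Big)
\quad \text{as } |\xi|\to\infty .
\end{equation}
```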
(4) Peaked waves occur when the evolution of according to (6) suddenly changes direction at = where () ̸= 0 and (…). (5) Double kinked waves occur when the right-hand side of (6) has two double zeros which are not opposite numbers, and = is not between the two zeros. When the pole = falls in the interval between the two zeros, butterfly-like waves occur. One reason for the occurrence of these two new solutions is that the solutions to (2) must be symmetric because this equation is invariant under the transformation → −. For convenience, we define (), (), and (), where
Now we apply the above analysis to
If > 0 , we define … Let … Remark 13. It is not difficult to find that () has the same zero points as () except = . It is easy to see that a change of ℎ in equation (6) will shift the graph vertically up or down. Now we consider the existence of solutions and their limiting behavior, which have different analytical forms depending on the values of 0 and ℎ.
(1) If ℎ < ℎ , we find that () has only a simple zero 1 and () < 0 (see Figure 2(a)). Hence, there are no bounded traveling wave solutions. As ℎ → ℎ , ( ) → 0 (see Figure 2(b)), and () has a double zero and a simple zero, so there are no bounded traveling wave solutions either.
(4) If ℎ > 0, it is easy to observe that () has only one simple zero and () > 0; hence there exists a cusped periodic wave solution.
Case D ( 0 = (1/3)(3 − 4 2 )). In this case, the geometric analysis of () is shown in Figure 5. The result given in Theorem 6 can be proved in a way similar to that in Case B.
Case F ( 0 = (1/5)(5 − 2 2 )). In this case, the geometric analysis of () is shown in Figure 7. The result given in Theorem 8 can be proved in a way similar to that in Case B.
Case G ((1/5)(5 − 2 2 ) < 0 < ). In this case, the geometric analysis of () is shown in Figure 8. The result given in Theorem 9 can be proved in a way similar to that in Case B.
Case H ( 0 ≥ ). In this case, the geometric analysis of () is shown in Figure 9. The result given in Theorem 10 can be proved in a way similar to that in Case B.
Then, we study the existence of composite waves. By Theorems 3-10, any countable number of travelling waves in the above cases corresponding to the same value of can be joined at points where = to form composite waves. If ( −1 ()) = 0, then the composite wave is a solution of (2). For = −(1/3) 2 − 0 , the composite waves are solutions of (2) even if ( −1 ()) > 0. Consequently, we can obtain double stumpons which contain intervals where = (see Figure 1(l)). Since any countable number of wave segments can be joined together, one can obtain travelling waves with very strange profiles, such as travelling waves with a fractal appearance where ( −1 ()) = 0 (see Figure 1(k)). This completes the proof of Theorem 11.
Conclusions
By an improved method combining characteristics of several methods, we have obtained abundant travelling waves of the mDP equation. These solutions include looped wave solutions, cusped wave solutions, peaked wave solutions, fractal-like waves, double stumpons, double kinked waves, and butterfly-like waves. Under different parametric conditions, various sufficient conditions guaranteeing the existence of the above solutions are given. Our method can also be applied to other models in which the location of the extreme value points can be determined. The limiting behavior of travelling wave solutions can also be given. Based on this study, it may be concluded that the improved method is useful and efficient.
It can be widely applied to other nonlinear wave equations.
Our study may be useful for further understanding the role that the nonlinearly dispersive terms play in the travelling wave solutions.
Definition 2.
Any bounded function that belongs to 1 loc (R) is a travelling wave solution of (2) with speed if it satisfies the following two statements. (a) There are disjoint open intervals , ≥ 1, and a closed set such that R \ = ⋃ ∞ =1 , ∈ ∞ ( ) for ≥ 1, () ̸ = for ∈ ⋃ ∞ =1 , and () = for ∈ . (b) There is an ∈ R such that (i) For each ∈ R, there exists ∈ R such that | 2018-12-30T22:15:19.403Z | 2014-07-09T00:00:00.000 | {
"year": 2014,
"sha1": "84f8a7636c4588695a82c4af162261d52cc4878a",
"oa_license": "CCBY",
"oa_url": "https://downloads.hindawi.com/journals/amp/2014/548920.pdf",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "84f8a7636c4588695a82c4af162261d52cc4878a",
"s2fieldsofstudy": [
"Mathematics",
"Physics"
],
"extfieldsofstudy": [
"Mathematics"
]
} |
226416607 | pes2o/s2orc | v3-fos-license | INVESTIGATION OF BIVALVE MOLLUSCAN SEASHELLS FOR THE REMOVAL OF CADMIUM, LEAD AND ZINC METAL IONS FROM WASTEWATER STREAMS
To treat industrial effluents containing heavy metals using low-cost materials, bivalve molluscans, a variety of seashell found on beaches, were investigated for the sorption of heavy metals such as cadmium (Cd), lead (Pb), and zinc (Zn). The powdered seashells were characterized using SEM, FTIR and the BET method. The accumulation of heavy metals on these bivalve seashells was found to occur via ion exchange. The equilibrium sorption capacity of seashell powder for Pb, Cd and Zn was determined to be 588.23, 476.19 and 357.14 mg/g, respectively. The equilibrium sorption followed the Langmuir adsorption model. The optimum pH for the uptake of the studied heavy metals by the molluscan seashells was observed to be in the range of 5 to 7. The sorption kinetics followed pseudo-second-order kinetics. The values of enthalpy (∆H°) and entropy (∆S°) for the uptake of heavy metals were found to be negative, which indicates that the sorption is an exothermic process and that its feasibility decreases with increasing temperature.
INTRODUCTION
The removal of heavy metals from industrial effluents before disposal has received much attention in recent times due to stringent rules for the protection of the environment. 1 Some heavy metals, like arsenic, mercury, cadmium, and lead, are suspected carcinogens. Flowing water streams or heavy rains can leach the heavy metals present in the earth's crust. Mineral processing operations also contribute to the leaching of heavy metals from ores and stockpiles. The electroplating industry, discharging large volumes of metal-rich streams, contaminates the water bodies in its proximity. Cadmium, a toxic metal, is one of the main pollutants emanating from metal processing industries. [2][3][4] Industrial effluents from lead battery production contain Pb 2+ ions, which are released at concentrations higher than the permissible limit of <1 ppm for heavy metals. 5 The nervous system is the most vulnerable target of lead poisoning. Zinc, one of the most widely used heavy metals in the electroplating industry, leads to depression, lethargy, neurological signs, and increased thirst if present beyond 0.8 ppm in aqueous streams. 6 Several methods, like precipitation, ultrafiltration, ion exchange, electrochemical deposition, and adsorption, have been employed to treat aqueous waste streams containing heavy metals. 7 Among these methods, adsorption is simple and offers advantages over the others. Many low-cost biosorbents from agricultural and natural materials have been investigated in recent times for treating these toxic heavy metals. The use of natural materials for industrial effluent treatment has gained importance in recent times as they are cheap and easily obtained. 8 The development of effective low-cost sorbents is still being pursued. Seashells are the outer protective layer of marine animals, made of several layers of proteins surrounded by layers of calcite and platy calcium carbonate crystals. 9 The main constituent of most seashells is CaCO 3 , which exists in the calcite and aragonite forms. The surfaces of calcite and aragonite sorb manganese ions (Mn 2+ ) by replacing Ca 2+ on their surfaces. 10,11 Several studies also showed that zinc can be chemisorbed on calcite, dolomite, and magnesite surfaces. 12 The mechanism of Zn 2+ sorption on calcite was found to be ion exchange with Ca 2+ on its surface. 13 Cd 2+ removal using calcite showed that the uptake of Cd 2+ was particle size-dependent. 2 Oyster shells effectively removed copper and nickel ions from aqueous waste streams with uptake capacities of 49.26-103.1 and 48.75-94.3 mg/g, respectively. 14 The authors of ref. 15 demonstrated that crab shells can sorb copper and cobalt onto their surface with maximum uptake capacities of 243.9 and 322.6 mg/g, respectively. Dong S. Kim et al. 16 reported that crab shells treated with HCl were effective in the removal of Pb 2+ from aqueous waste streams. Crushed shells from the crab Scylla serrata were able to remove 94.7 and 85.1% of Cu and Cd, respectively, from low-concentration solutions. 17 Waste mollusc shells and exoskeletons were used to reduce solutions of high Pb concentration to less than 0.5 ppm in five minutes. 18 In the present work, bivalves, a variety of molluscan seashell, were investigated for the removal of heavy metals like Cd, Pb, and Zn from aqueous streams.
EXPERIMENTAL
Materials
The molluscan bivalve seashells used in the present study were procured from local beaches of Mangalore in Karnataka. The shells were soaked in detergent water and then cleaned with demineralized water to remove adhering dirt. The seashells were dried at 60 °C in a hot air oven for 48 hours to completely remove the moisture. The shells were then crushed to powder in a ball mill, separated according to different mesh sizes, and stored. Seashell powder of 100 mesh size was chosen for the different studies.
Characterization
The surface morphology of the seashell powder was characterized using a scanning electron microscope (SEM) with energy-dispersive X-ray (EDX) analysis (Quanta 200). The functional groups of the seashells were determined using Fourier-transform infrared spectroscopy (FTIR) (Model-Bruker). The BET surface area of the seashells was estimated using a surface area analyzer (Smart Sorb 92/93) from Smart Instruments Company Pvt. Ltd.
Equilibrium Isotherm Study
To obtain equilibrium isotherms, 100 mL of a synthetic metal solution of 1000 ppm was equilibrated with seashell powder of varying mass ranging from 0.1 to 0.5 g. The experimental samples were left for 48 h to ensure equilibrium between the sorbate and sorbent phases. After reaching equilibrium, the solution was filtered and centrifuged to separate the shell powder from the metal ion solution. The metal ion concentrations in the feed and the filtrate were determined using an Atomic Absorption Spectrophotometer (AAS). The uptake of different metal ions by the sorbent, q e (mg/g), was obtained as follows:

q e = (C o − C e ) V / W

where C o is the initial metal ion concentration in the feed solution and C e is the equilibrium concentration of the metal ion in the filtrate (mg/L); V is the volume of the solution employed (L) and W is the mass of the sorbent (g).
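The uptake formula is a one-liner in practice. A minimal sketch with made-up numbers (not data from this study):

```python
def uptake_mg_per_g(c0, ce, volume_l, mass_g):
    """Equilibrium uptake q_e = (C0 - Ce) * V / W."""
    return (c0 - ce) * volume_l / mass_g

# Hypothetical run: 100 mL of 1000 ppm solution equilibrated with 0.2 g of shell powder
print(uptake_mg_per_g(c0=1000.0, ce=150.0, volume_l=0.1, mass_g=0.2))  # -> 425.0 mg/g
```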
Kinetic Study
To determine the batch kinetics and the equilibrium time for sorption, 200 mg of seashell powder was added to 0.5 L of heavy metal ion solution of different concentrations. The solution was stirred at 500 rpm using a mechanical stirrer to attain a uniform concentration. Samples were withdrawn from the bulk liquid at intervals to analyze the metal concentration. The batch kinetic studies were performed for different concentrations of metal ions at a pH of 6.5.
pH Study
The influence of the pH of the aqueous solution on the accumulation of different heavy metals onto the surface of the seashells was studied at different pH values ranging from 1 to 12. Solutions of each metal ion (Cd 2+ , Pb 2+ , and Zn 2+ ) at 500 ppm were adjusted to the desired pH using HNO 3 and NaOH. The metal solution (100 mL) was equilibrated with 0.2 g of shell powder for 3 h in an orbital shaker. After 3 h, the filtrate was removed and analyzed for metal concentration using AAS.
Temperature Study
Temperature is one of the governing parameters in a sorption process. To study the effect of temperature on metal ion sorption onto the surface of the seashells, 100 mL of each metal ion solution (500 ppm) was equilibrated with 0.2 g of seashell powder at different temperatures for 3 h in an orbital shaker.
RESULTS AND DISCUSSION
SEM-EDX
The SEM images of the seashell powder before the sorption of heavy metals are shown in Fig.-1a. It was observed that the particles were distributed distinctly and no pores were observed. The EDX analysis of the sorbent is also shown in Fig.-1b. Even after sorption, the particles were distributed distinctly. It was seen from Fig.-2b that most of the calcium was replaced by cadmium after equilibrating the metal solution with the seashell powder. Similar profiles were observed for lead and zinc sorption in Fig.-3b and 4b, respectively, where the calcium content of the sorbent was negligible after the sorption of the heavy metals. The EDX images shown in Fig.-2b, 3b and 4b showed the replacement of calcium ions by the studied heavy metal ions (cadmium, lead and zinc). This confirmed that the mechanism of sorption of heavy metals onto the seashell powder is ion exchange.
FTIR
The FTIR spectra of the seashell powder before and after the sorption of different heavy metals are compared in Fig.-5. The spectra before and after the sorption of heavy metals showed discrete peaks at 712, 980 and 1483 cm -1 , which revealed that -CO 3 groups were the main constituents of the molluscan seashells. 11 The strong peaks at 712 cm -1 and 1483 cm -1 indicated the presence of the aragonite and calcite phases, respectively. 11 The absorbance at 980 cm -1 was due to C-O stretching vibrations, which were not seen after the sorption of the heavy metals. Thus, the FTIR analysis demonstrated the presence of -CO 3 groups in the selected seashells, which were found to interact with the heavy metal ions during sorption onto the seashell powder.
BET Surface Method
The surface area of the 100 mesh powder was determined using a surface area analyzer to be 3.39 m 2 /g before the sorption of metal ions. After sorption, the surface area of the seashell powder was found to be 0.22 m 2 /g.
From the SEM image and the BET surface analysis, it can be inferred that the powder was not porous enough to provide a high surface area. After sorption, the decrease in the surface area indicated that the sorption sites were blocked by the metal ions.
Equilibrium Isotherms
The equilibrium uptake of the different heavy metal ions (Pb 2+ , Cd 2+ , and Zn 2+ ) onto the surface of the seashell powder was obtained and is shown in Fig.-6. The sorption capacities increased with a decrease in the dose rate of the sorbent and became constant beyond a certain range. The maximum heavy metal uptake of the seashell powder was obtained as 489 mg/g for Pb, 460 mg/g for Cd and 450 mg/g for Zn at 303 K and pH 6.5. The Langmuir and Freundlich isotherm sorption models were fitted to the experimental data to analyze the sorption type. The assumptions of the Langmuir model include a homogeneous surface with identical and unassociated sites distributed as a monolayer. The linearized Langmuir model equation is given by:

C e /q e = 1/(q m K L ) + C e /q m

where C e is the equilibrium metal ion concentration (mg/L) in the filtrate; q e is the equilibrium metal ion concentration on the sorbent or solid (mg/g); q m is the maximum uptake of metal ion (mg/g) and K L is the Langmuir model constant.
The Freundlich isotherm model assumes the adsorbent surface to be heterogeneous, with sites distributed over multilayers. The linearized Freundlich model can be expressed as:

log q e = log K F + (1/n) log C e

where K F (mg 1−1/n L 1/n /g) and n are the Freundlich constants. The value of n demonstrates the favourability of sorption; in general, a value of n ranging from 1 to 10 represents favorable sorption. The model constants K F and n were estimated from the intercept and slope of the plot of log q e vs log C e . From Table-1, it was confirmed that the uptake of heavy metals onto the molluscan seashells followed the Langmuir isotherm model. This indicates that the molecules sorbed on the sorbent form a monolayer of ions which do not interact or compete with each other.
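Both linearized isotherms reduce to straight-line least-squares fits. A minimal sketch is shown below; the Ce and qe arrays are fabricated placeholders, not the measured data of this study.

```python
import numpy as np

# Hypothetical equilibrium data: Ce (mg/L) and qe (mg/g)
ce = np.array([20.0, 60.0, 150.0, 320.0, 600.0])
qe = np.array([150.0, 290.0, 410.0, 480.0, 530.0])

# Langmuir: Ce/qe = 1/(qm*KL) + Ce/qm -> slope = 1/qm, intercept = 1/(qm*KL)
slope, intercept = np.polyfit(ce, ce / qe, 1)
qm, kl = 1.0 / slope, slope / intercept
print(f"Langmuir: qm = {qm:.1f} mg/g, KL = {kl:.4f} L/mg")

# Freundlich: log qe = log KF + (1/n) log Ce -> slope = 1/n, intercept = log KF
s, i = np.polyfit(np.log10(ce), np.log10(qe), 1)
n, kf = 1.0 / s, 10.0 ** i
print(f"Freundlich: n = {n:.2f}, KF = {kf:.2f}")
```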
Effect of Contact Time
From Fig.-7, it is evident that the sorption of Pb 2+ ions onto the seashells increased with contact time for the different initial concentrations. The uptake was rapid up to 40 min for all the concentrations studied and then increased slowly with time until 90 min. After 90 min, the uptake capacity saturated for all the initial concentrations. The saturation capacities of the seashells for Pb ions were 180, 350 and 370 mg/g for initial concentrations of 100, 250 and 500 ppm, respectively. From Fig.-8, the maximum uptake of Cd ions onto the seashells was determined to be 192.5, 285 and 314 mg/g for initial concentrations of 100, 250 and 500 ppm, respectively. Similarly, from Fig.-9, the maximum sorption capacity for Zn onto the seashells was 140, 220 and 253 mg/g. The equilibrium time was approximately 90 minutes for all the studied concentrations. The kinetics of sorption were determined using the batch kinetic data of the different metal ions taken up by the seashell powder. The rate constants characterize the uptake of metal ions onto the solid surface, and various kinetic models have been developed for analyzing batch kinetic data. The Lagergren model based on pseudo-first-order kinetics and the Ho model based on pseudo-second-order kinetics were examined to analyze the kinetics of sorption onto the seashells.
Application of Kinetic Models
Pseudo-First-Order Lagergren Model
The mathematical expression of the Lagergren model 19 for the sorption of adsorbate is given as:

dq t /dt = K 1 (q e − q t ) (6)

where K 1 (min -1 ) is the pseudo-first-order rate constant, q e (mg/g) is the equilibrium concentration of adsorbate on the adsorbent and q t (mg/g) is the uptake of adsorbate onto the adsorbent at any time t. The linearized Lagergren model is represented as:

log(q e − q t ) = log q e − (K 1 /2.303) t (7)

The constants (K 1 and q e ) of the Lagergren model were determined from the plots of log(q e − q t ) vs t for the different parameters and are summarized in Table-2.
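The linearized form (7) is fitted by a simple linear regression of log(q e − q t ) against t; the same least-squares pattern applies to the other linearized models in this work. A minimal sketch with fabricated data (not the measured values):

```python
import numpy as np

# Hypothetical kinetic data: time (min), uptake qt (mg/g), qe from the plateau
t = np.array([5, 10, 20, 40, 60, 90], dtype=float)
qt = np.array([120.0, 190.0, 260.0, 330.0, 355.0, 368.0])
qe_exp = 370.0

# Eq. (7): log(qe - qt) = log(qe) - (K1/2.303) * t
y = np.log10(qe_exp - qt)
slope, intercept = np.polyfit(t, y, 1)
k1 = -2.303 * slope            # rate constant (1/min)
qe_calc = 10.0 ** intercept    # calculated equilibrium uptake
print(f"K1 = {k1:.4f} 1/min, qe(calc) = {qe_calc:.1f} mg/g")
```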
Pseudo-Second Order Model
The model expression of Ho and McKay 20 is expressed as:

dq t /dt = K 2 (q e − q t ) 2 (8)

The linear form of Eq. (8) can be written as:

t/q t = 1/(K 2 q e 2 ) + t/q e (9)

The pseudo-second-order rate constants (K 2 ) for the uptake of the different metal ions onto the seashell powder were estimated from the plot of t/q t versus t and are tabulated in Table-2. The Lagergren model equation did not fit the experimental kinetic data accurately: even though the regression coefficient (R 2 ) of the fit was good for all the metals studied, the calculated (q th ) and the experimental (q exp ) equilibrium uptake capacities evaluated using the linear Lagergren model varied widely. Thus, the pseudo-first-order kinetic model was not suitable to describe the sorption kinetics of the studied heavy metals onto the molluscan seashells. The experimental kinetic data were then tested for pseudo-second-order kinetics, which showed a good fit with good regression coefficients (R 2 > 0.99). Further, the calculated (q e ) th values were close to the experimental (q exp ) values for the three metals. It was therefore deduced that the sorption of Pb, Cd and Zn ions onto the molluscan seashells follows pseudo-second-order kinetics.

Effect of pH

pH is one of the significant parameters affecting the sorption of an adsorbate onto an adsorbent. 21 The surface charge of any sorbent can be influenced by the pH of the surrounding solution. The pH of the feed solution was varied from 2 to 10 to find its effect on the accumulation of heavy metals onto the seashells. The sorption capacities of the studied heavy metals increased slightly with an increase in pH from 2 to 6, and a decrease in the accumulation of Cd and Zn was then observed with a further increase in pH, as seen from Fig.-10. The -CO 3 groups of the molluscan seashells, as confirmed by the FTIR analysis, were easily protonated in acidic solutions (pH < 4.0). The repulsion between H + ions and heavy metal ions might have resulted in the lower uptake of metal ions in acidic solutions. Further, the sorbent was observed to dissolve in strongly acidic conditions (pH < 2), and no solid phase remained for sorption. In the alkaline medium (6 < pH < 10), the -CO 3 functional groups were negatively charged, and the sorption of metal ions might be due to the attractive electrostatic forces between the sorbent surface and the metal ions. The increase in heavy metal sorption capacity under alkaline conditions resulted from the reduced concentration of H + ions, which compete with metal ions for the active sites on the sorbent surface under acidic conditions. 22,23 For pH > 10, turbidity was observed in the metal solutions, and even after equilibrating with the shell powder, a precipitate was observed on the walls of the container. The precipitation of metal ions might be due to the increased hydroxyl ion concentration at pH > 10, which might have reduced the sorption capacity of the shell powder.
Effect of Temperature
The effect of temperature on heavy metal sorption by the seashell powder was investigated in the range of 293 to 333 K. The uptake of heavy metals by the seashell powder was found to diminish with an increase in temperature (not shown) from 293 to 333 K. The decrease in heavy metal uptake with temperature might be due to the weaker physical bonding between the heavy metal and the active sites of the sorbent at elevated temperatures. 24 Also, as the solubility of metal ions increases with temperature, the solute prefers the liquid phase to the solid phase. The decreasing trend in the heavy metal uptake capacity of the seashell powder with temperature indicated that the sorption is an exothermic process. 25,26 The equilibrium constant (K c ) for the sorption of metal ions onto the solid phase was determined using equation (10):

K c = C se /C le (10)

The free energy change (ΔG°) for the uptake of Pb, Cd, and Zn onto the molluscan seashells at the studied temperatures was obtained using equation (11) and is listed in Tables-3, 4 and 5:

ΔG° = −RT ln K c (11)

where C se and C le are the equilibrium metal ion concentrations (mg/L) in the solid and liquid phases, respectively. From Tables-3, 4 and 5, it was seen that the change in Gibbs free energy was negative at the studied temperatures, which indicates that the sorption of heavy metals onto the seashells is feasible and spontaneous. Further, the free energy change increased with temperature, which shows that the sorption process is less favored at higher temperatures. The negative value of ΔH° indicates that the accumulation of heavy metals onto the seashell powder is exothermic, and the negative value of ΔS° is consistent with the sorbate-sorbent interactions.
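Given K c at several temperatures, ΔG° follows from Eq. (11), and ΔH° and ΔS° can then be extracted from the linear relation ΔG° = ΔH° − TΔS°. A minimal sketch is shown below; the K c values are fabricated placeholders, not the reported data.

```python
import numpy as np

R = 8.314  # gas constant, J/(mol*K)

# Hypothetical equilibrium constants at the studied temperatures
T = np.array([293.0, 313.0, 333.0])
kc = np.array([4.2, 3.1, 2.3])

dg = -R * T * np.log(kc)             # Eq. (11), in J/mol
slope, intercept = np.polyfit(T, dg, 1)
dh, ds = intercept, -slope           # from dG = dH - T*dS
print(f"dG (kJ/mol): {np.round(dg / 1000, 2)}")
print(f"dH = {dh / 1000:.2f} kJ/mol, dS = {ds:.1f} J/(mol*K)")
```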
Comparison of Molluscan Seashells with Other Bio-sorbents
The uptake capacities of the molluscan seashells for the different metal ions were compared with those of various bio-sorbents and are summarized in Table-6. It was seen that the molluscan seashells showed higher uptake capacities for heavy metals like Cd, Pb, and Zn than many other bio-sorbents. Low cost and easy availability are among the attractive features of seashell powder for heavy metal removal from industrial effluents.
CONCLUSION
Bivalve molluscan seashells were investigated for the removal of different heavy metals (Cd, Pb, Zn). The seashells were characterized using SEM-EDX, FTIR and the BET surface method to determine their surface morphology, composition, functional groups, and surface area. The surface area of the 100 mesh powder was determined to be 3.39 m 2 /g before the sorption of metal ions. After sorption, the surface area of the seashell powder was reduced to 0.22 m 2 /g. From the EDX analysis, the sorbent was found to be mainly composed of CaCO 3 . The sorption of heavy metals onto the surface of the seashells was due to the replacement of calcium ions, which indicates that the sorption mechanism is ion exchange rather than adsorption. The maximum uptake capacity of the molluscan seashells was estimated as 588.23 mg/g for lead, 476.19 mg/g for cadmium and 357.14 mg/g for zinc at 303 K and pH 6.0. The sorption of the studied metal ions onto the molluscan seashells was observed to follow the Langmuir isotherm model assumptions. A contact time of 90 minutes was found to be sufficient to reach equilibrium for all the studied heavy metals. The sorbent was found to be effective in the pH range of 6-8. The seashells were found to dissolve in a highly acidic medium (pH < 2). The kinetic data generated for the different heavy metals followed the pseudo-second-order kinetic model. The sorption capacities of the seashells for the heavy metals were comparable with those of other available materials, and hence the seashells could serve as a low-cost bio-sorbent for heavy metals. | 2020-06-18T09:08:28.152Z | 2020-01-01T00:00:00.000 | {
"year": 2020,
"sha1": "f3b5bab22de7adcfd2b858ab97089afc801b481f",
"oa_license": null,
"oa_url": "https://doi.org/10.31788/rjc.2020.1325617",
"oa_status": "GOLD",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "d34eea12d362b452de43f6f736e56730635ef6c8",
"s2fieldsofstudy": [
"Environmental Science"
],
"extfieldsofstudy": [
"Chemistry"
]
} |
86864338 | pes2o/s2orc | v3-fos-license | An Eco-Friendly Method to Get a Bio-Based Dicarboxylic Acid Monomer 2,5-Furandicarboxylic Acid and Its Application in the Synthesis of Poly(hexylene 2,5-furandicarboxylate) (PHF)
Recently, we developed an eco-friendly method for the preparation of the renewable dicarboxylic acid 2,5-furandicarboxylic acid (FDCA) from biomass-based 5-hydroxymethylfurfural (HMF). In the present work, we optimized our reported method, using phosphate buffer and Fe(OH)3 as stabilizers to improve the stability of potassium ferrate, and obtained purified FDCA (up to 99%) in high yield (91.7 wt %) under mild conditions (25 °C, 15 min, air atmosphere). Subsequently, the obtained FDCA, along with 1,6-hexanediol (HDO), which was also made from HMF, were used as monomers for the synthesis of poly(hexylene 2,5-furandicarboxylate) (PHF) via direct esterification, and triphenyl phosphite was used as the antioxidant to alleviate the discoloration problem during the esterification. The intrinsic viscosity, mechanical properties, molecular structure, thermal properties, and degradability of the PHFs were measured or characterized by Koehler viscometer, universal tensile tester, Nuclear Magnetic Resonance (NMR), Fourier-transform Infrared (FTIR) spectroscopy, X-ray diffraction (XRD), Differential Scanning Calorimetry (DSC), Derivative Thermogravimetry (DTG), Scanning Electron Microscopy (SEM), and the weight loss method. The experimental evidence clearly showed that furan-aromatic polyesters prepared from biomass-based HMF are viable alternatives to petrochemical benzene-aromatic polyesters; they can serve as low-melting heat-bondable fiber, high gas-barrier packaging material, as well as specialty material for engineering applications.
Introduction
Polyesters are popular synthetic polymers that are closely related to the development of human society. In recent years, there has been significant interest in biomass-based products as alternatives to petroleum-based products, due to their environmental advantages [1]. A recent study predicted that the worldwide capacity of biomass-based polyesters would increase from 0.36 Mt in 2007 to 3.45 Mt in 2020 [2]. It was reported that levulinic acid [3], lactic acid [4], isosorbide [5], succinic acid [6], dodecanedioic acid [7], and ethylene glycol [8] are all potential building blocks for the preparation of biomass-based polymers. Among them, 2,5-furandicarboxylic acid (FDCA), an aromatic product in nature, has been considered a "sleeping giant" and a suitable replacement for terephthalic acid (TPA) in engineering plastics due to the similarities between TPA and FDCA [9,10].
FDCA-based polyesters were first studied by Moore and co-workers [11]. After that, Gandini and co-workers [12,13] synthesized some FDCA-based polyesters by interfacial polycondensation and melt polytransesterification and revealed that the mechanical properties, thermal properties, and crystal structures of the obtained poly(butylene-FDCA) (PBF) were similar to those of petroleum-derived poly(butylene terephthalate) (PBT). Based on these works, Gomes [14] and Ma [15] synthesized a series of FDCA-based polyesters with a variety of diols and provided ample evidence in favor of the exploitation of furan monomers as renewable alternatives to petroleum-based aromatic homologs. Subsequently, Sousa [5], Shirke [16], and Pellis [17,18] further synthesized a series of fully renewable poly((ether)ester)s from FDCA and revealed that the obtained polyesters showed better thermal properties than their petroleum-based counterparts. Papageorgiou [19] highlighted the progress and fundamental aspects of the synthesis of bio-based 2,5-FDCA polyesters and their thermal properties; the issues associated with coloration and the successful synthesis of polyesters with high molecular weights were thoroughly discussed. Recently, Wang and co-workers [20] modified PEF with trans- or cis-1,4-cyclohexanedimethanol (CHDM) and obtained a polyester with high crystallinity, melting temperature, and air-barrier properties. The above studies have provided important clues that FDCA-based polyesters can be ideal substitutes for their fossil-based counterparts. To fit well with the sustainable conversion process concept and realize the large-scale production of FDCA-based polyesters, the discovery of new pathways for the production of FDCA is always in demand.
yield. Subsequently, the obtained FDCA, along with commercially available 1,6-hexanediol (HDO), which can also be made from HMF (Scheme 1) [38], were used as the monomers for the synthesis of poly(hexylene 2,5-furandicarboxylate) (PHF). Triphenyl phosphite was used as the antioxidant to alleviate the discoloration problem during the esterification and thus improve the properties of the obtained PHF. Based on this research, we hope that totally biomass-based polyesters with high performance can be developed in the future.
Monomer Synthesis and PHFs Preparation
The oxidation of HMF to FDCA was carried out in a 100 mL high-pressure stainless-steel reactor (Anhui Kemi Machinery Technology Co., Ltd., Anhui, China) with the following steps (Scheme 1): (1) 10 mL of distilled water and 0.016 mol of NaOH were added into the reactor. (2) The reactor was placed on a magnetic stirrer, and 0.1 mol of HMF, 0.015 mol of the prepared K 2 FeO 4 , 0-0.016 mol of K 2 HPO 4 , and 0.005 mmol of metal compound were added into the prepared alkali solution under agitation (400 rpm). (3) The agitation was continued for 15 min and the reaction mixture was filtered. (4) The filter residue was dried to obtain Fe 2 O 3 , which could be reused as the iron source to prepare K 2 FeO 4 . (5) The filtrate was acidified with hydrochloric acid under stirring until a large amount of white precipitate appeared. (6) The precipitate was filtered, and the filter residue was vacuum-dried for 24 h to obtain FDCA; the filtered water could be reused in another HMF oxidation reaction.
PHFs were synthesized via the direct esterification method, which was performed as follows: (1) A mixture of HDO (0.01-0.03 mol), FDCA (0.01 mol), tetrabutyl titanate (0.03 mmol) and triphenyl phosphite (0.05-0.5 mmol) was loaded into a 50 mL three-neck round-bottom flask, which was sealed and purged with N 2 three times. (2) The flask was then heated to 180 °C with stirring at 120 rpm until the reaction system reached a clear point and no liquid was collected in the condenser tube. (3) The system pressure was decreased to 600 Pa under vacuum, and the temperature was increased to 230-250 °C to start the polycondensation. (4) At the end of the polycondensation, the product was dissolved in phenol-tetrachloroethane and then precipitated in methanol three times; the final PHFs were obtained via vacuum drying at 50 °C for 24 h.
In a typical approach for the preparation of PHF membrane, 0.7 g PHF was dissolved in 10 mL 1,1,2,2-tetrachloroethane to get the PHF solution. Subsequently, the obtained PHF solution was transferred to a 10 mL plastic syringe with an 18-gauge blunt tip needle. For the electrospinning process, a high voltage of 15 kV and a flow rate of 0.001 mm/s were applied, with a distance of 20 cm between the needle and the rotating grounded collector.
Techniques
The intrinsic viscosity (η) of the PHFs was measured at concentrations of 0.5 to 1.5 g/dL in 1,1,2,2-tetrachloroethane/phenol (1:1 w/w) at 25 °C using a Koehler viscometer and the related standard method [39]. The intrinsic viscosity was calculated using the following equations:

η sp = (t − t 0 )/t 0

η = lim c→0 (η sp /c)

where η sp is the specific viscosity, t and t 0 are the flow times of the solution and the pure solvent, respectively, and c is the concentration of PHFs in 1,1,2,2-tetrachloroethane/phenol (1:1 w/w) at 25 °C. The mechanical properties of the PHF membranes were tested using a universal tensile tester; the tests were performed on an INSTRON-1121 tester with a strain rate of 5 mm/min at room temperature. Three rectangular specimens (15 mm × 3.23 mm × 3.20 µm) were employed for each test to determine the averages of the tensile modulus (E), tensile strength (σ m ), and elongation at break (ε b ). The length and width of the film were measured with a Vernier caliper, and the thickness was measured with a spiral micrometer. The average value was obtained from five measurements. The morphology of the PHF membrane fracture surface after stretching was characterized by S4800 SEM (Hitachi, Tokyo, Japan) at an accelerating voltage of 15.00 kV by stretching both ends along the length.
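In practice, the reduced viscosity η sp /c is extrapolated to zero concentration to obtain (η). A minimal sketch is shown below; the flow times are fabricated placeholders, not measurements from this study.

```python
import numpy as np

t0 = 100.0                                   # solvent flow time (s), hypothetical
c = np.array([0.5, 1.0, 1.5])                # concentrations (g/dL)
t = np.array([142.0, 190.0, 245.0])          # solution flow times (s), hypothetical

eta_sp = (t - t0) / t0                       # specific viscosity
reduced = eta_sp / c                         # reduced viscosity (dL/g)
slope, intercept = np.polyfit(c, reduced, 1) # linear extrapolation to c -> 0
print(f"intrinsic viscosity = {intercept:.3f} dL/g")
```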
The 1 H Nuclear Magnetic Resonance (NMR) and 13 C NMR measurements were carried out on an FTNMR digital NMR spectrometer (Bruker, Karlsruhe, Germany) operating at 399.95 MHz for 1 H and 100.58 MHz for 13 C at room temperature with a magnetic field of 9.4 T. The acquisition time was 0.034 s, the delay time was 2 s, and the proton 90° pulse time was 4.85 µs. The PHF sample was dissolved in CF 3 COOD with tetramethylsilane (TMS) as the internal reference.
The Fourier-transform Infrared (FTIR) data of PHF were obtained on an FTIR spectrometer (SpectrumOne, Thermo Electron Corporation, Waltham, MA, USA) with 32 scans and a resolution of 4 cm -1 in the range of 3500-500 cm -1 ; the PHF was pelletized with KBr.
The X-ray diffraction (XRD) patterns of the PHFs were recorded on a D8 Advance diffractometer (Bruker, Karlsruhe, Germany) with Cu Kα radiation (λ = 0.154 nm) at 40 kV and 30 mA. The PHF was scanned in the 2θ range of 10-35° at a scan rate of 10°/min. Differential Scanning Calorimetry (DSC) measurements of the PHFs were performed on a differential scanning calorimeter (TA Instruments, New Castle, DE, USA). Measurements were performed under a nitrogen atmosphere with a flow rate of 50 mL/min. About 6 mg of PHF was heated to 250 °C at a heating rate of 5 °C/min and then held at this temperature for 3 min in order to erase the thermal history. Afterwards, it was cooled down to room temperature at a rate of 5 °C/min and subsequently heated to 250 °C a second time at the same heating rate. The PHF sample was quenched in liquid nitrogen. A 6 ± 0.1 mg sample was used in the test. The sample was sealed in aluminum pans and heated to 250 °C at a heating rate of 5 °C/min.
The thermal stability of PHFs was determined by PYRIS 1 TGA (Perkin-Elmer, Waltham, MA, USA). The thermal analyzer was temperature calibrated using the Curie point of nickel as a reference. The samples of 6 ± 0.5 mg were heated from 20 to 500 • C at a heating rate of 10 • C/min in nitrogen.
The degradability of the PHF membranes was evaluated under strongly acidic conditions. In a typical procedure, the PHF membranes were cut into 1 cm × 1 cm pieces, and the samples were placed in a 50 mL capped bottle. Subsequently, 10 mL of concentrated hydrochloric acid was added into the bottle, which was then sealed; the bottle was oscillated at a frequency of 180 rpm for one to four weeks at a constant temperature of 25 °C. The solution was replaced with fresh concentrated hydrochloric acid every week. Finally, the treated samples were washed with water, dried at room temperature for 48 h, and then weighed. The morphology of the PHF membranes after acid treatment was characterized by S4800 SEM (Hitachi, Tokyo, Japan) at an accelerating voltage of 15.00 kV.
FDCA Synthesis
Recently, we developed a method for the oxidation of HMF to FDCA; an 87.2% yield of FDCA was obtained under the optimal reaction conditions [36]. It was reported that dipotassium phosphate buffer and metal compounds are helpful in improving the stability of potassium ferrate [40]. To further improve the FDCA yield, we constructed a H 2 O-K 2 HPO 4 -NaOH reaction system with K 2 FeO 4 as the oxidant and metal compounds, such as NaCl, KCl, CaCl 2 , Mg(OH) 2 , Al(OH) 3 , MnO 2 , Fe 2 O 3 , Fe(OH) 3 , and CuO, as stabilizers of K 2 FeO 4 ; the oxidation sketch and experimental results are shown in Figure 1. As can be seen from Figure 1a, the filter residue left after the oxidation can be recycled to prepare K 2 FeO 4 , and the filtered water can be neutralized with ammonia and reused in another HMF oxidation reaction.
From Figure 1b, we can find that the FDCA yield gradually increased with the increase of K 2 HPO 4 . However, when the amount of K 2 HPO 4 exceeded a critical value and was further increased, the yield of FDCA distinctly decreased. The main reason is that PO 4 3− in the solution has a stabilizing effect on K 2 FeO 4 , preventing its ineffective decomposition [37]. Therefore, the ineffective decomposition of K 2 FeO 4 gradually decreased with the addition of K 2 HPO 4 , and the yield of FDCA increased accordingly. However, when the amount of K 2 HPO 4 reached a certain level (0.4 mol/L), PO 4 3− in the system had an inhibitory effect on the oxidation of HMF to FDCA, resulting in a lower FDCA yield [41]. It can also be observed from Figure 1c that the addition of NaCl, KCl, CaCl 2 , Mg(OH) 2 , Al(OH) 3 , MnO 2 , and CuO had obvious inhibitory effects on the oxidation of HMF to FDCA. However, adding Fe 2 O 3 and Fe(OH) 3 could improve the FDCA yield to some extent: the FDCA yield increased from 82.5 wt % (control sample) to 85.7 and 86.1 wt %, respectively, when 0.5 mmol/L of Fe 2 O 3 or Fe(OH) 3 was added. We then further studied the effect of the Fe(OH) 3 amount on the oxidation of HMF to FDCA; the results are shown in Figure 1d. The FDCA yield gradually increased with increasing Fe(OH) 3 , and the highest FDCA yield of 91.7 wt % was obtained by adding 1 mmol/L Fe(OH) 3 . However, when the Fe(OH) 3 concentration was increased beyond 1 mmol/L, the yield of FDCA slightly decreased. It was reported that Fe(OH) 3 has a stabilizing effect on K 2 FeO 4 ; therefore, adding an appropriate amount of Fe(OH) 3 was helpful in enhancing the FDCA yield [40].
However, Fe(OH)3 has a flocculation function, and an excessive amount of Fe(OH)3 will affect the reaction of HMF and the precipitation of FDCA, thus reducing the FDCA yield.
It can be observed from the above results that the use of dipotassium phosphate buffer and Fe(OH)3 improved the stability of potassium ferrate, thus increasing the oxidation efficiency of HMF to FDCA. The highest FDCA yield of 91.7 wt % was obtained under the optimum reaction conditions, and the obtained FDCA had high purity (>99%) (Figure S1). The present method is milder than previously reported methods: it can be conducted in air at room temperature (25 °C), and the reaction needs just 15 min to finish. Most importantly, the present method is environmentally friendly; all of the Fe ions and water can be recycled, so the method will not result in metal or water pollution of the environment. It offers an effective route for the economic and eco-friendly production of FDCA from a renewable biomass-based platform chemical and fits well into the green conversion process concept.
Synthesis of PHFs
In the present work, the effects of the amount of triphenyl phosphite and of the condensation temperature on the intrinsic viscosity and mechanical properties of the obtained PHFs were studied, and the results are shown in Figure 2. It can be observed that a small amount of triphenyl phosphite could significantly enhance the intrinsic viscosity, which increased from 0.230 dL/g (PHF-1) to 0.780 dL/g (PHF-2) as the amount of triphenyl phosphite increased from 0 mmol to 0.05 mmol. However, when the amount of triphenyl phosphite was further increased to 0.2 and 0.5 mmol, the intrinsic viscosity distinctly decreased to 0.720 dL/g (PHF-3) and 0.656 dL/g (PHF-4). This is mainly due to the excess of triphenyl phosphite inhibiting the polycondensation [42]. It can also be observed that the color became progressively lighter with increasing triphenyl phosphite, which was mainly due to the inhibiting effect of triphenyl phosphite on the oxidative degradation of the furan ring at high temperature, thus reducing discoloration during the synthesis of PHF [42].
Subsequently, the effect of condensation temperature was further evaluated without adding triphenyl phosphite (PHF-1, PHF-5, and PHF-6) (Figure 2a). The results indicated that the intrinsic viscosity gradually increased with increasing condensation temperature, and the highest intrinsic viscosity (0.736 dL/g) was obtained at 250 °C. However, the PHFs were significantly carbonized when the condensation temperature exceeded 250 °C. In addition, it was found that the intrinsic viscosity increased remarkably from 0.736 dL/g (PHF-6) to 0.803 dL/g (PHF-7) at 250 °C by adding 0.05 mmol of triphenyl phosphite. This result further indicated that a small amount of triphenyl phosphite could enhance the intrinsic viscosity of the obtained PHF and thus contribute to the direct esterification.
Structure Characterization of PHFs
PHF-7 displayed the best performance in mechanical properties, thermal properties, and acid degradability; therefore, its structure was further determined by NMR and FTIR (Figure 3). From the 1H NMR, the resonances of the C-H peaks on the furan ring, the CH2 on the ester, and the CH2 on the carbon chain appeared at 7.30 (Hf1), 7.23 (Hf2), 4.37 (Ha), 1.83 (Hb), and 1.52 (Hc) ppm, respectively. The peak integration ratio of the four 1H peaks was 1:2:2:2, which was consistent with the calculated values of the molecular structure of PHF. The 13C NMR revealed that the resonance peaks associated with the furan ring (Cs/Cf) appeared at 142.8 and 115.7 ppm. The chemical shifts related to HDO (Ca, Cb, and Cc) appeared at 63.5, 23.9, and 21.0 ppm, respectively. The chemical shift at 156.6 ppm was ascribed to Cx. The integration ratio of the characteristic peaks was 1:1:1:1:1:1, which was consistent with the theoretical calculation of PHF.
The FTIR spectrum and its representative peak assignments are shown in Figure 3c. The bands at 770, 818, and 966 cm−1 were the out-of-plane bending vibrations of =C-H on the furan ring; the band at 1038 cm−1 was the asymmetric stretching vibration of =C-O on the furan ring; the bands at 1570 and 1508 cm−1 were characteristic absorption peaks of -C=C on the furan ring; the characteristic absorption peaks of -C=O appeared at 2927 and 2861 cm−1; and the adsorption peak of =C-H on the furan ring appeared at 3120 cm−1. These results indicate the existence of the furan ring in PHF-7. In addition, the out-of-plane bending vibration of -C-H for chains of more than six carbons is observed at 725 cm−1. After the polymerization, strong absorption bands appeared at about 1268 and 1716 cm−1 due to the newly formed C-O and C=O in the ester linkage (C−O−C=O); this result confirms the existence of the ester in PHF-7 [43].
Crystallinity Properties
The crystallinity of the obtained PHFs was characterized by XRD; the results are shown in Figure 4. The pronounced crystallinity of the solvent-treated PHF-6 and PHF-7 membranes was corroborated by the presence of two sharp signals at 17.1° and 24.9° and a less intense diffraction peak at 13.8°. The characteristic diffraction peak at 2θ = 13.8° (d = 6.42 Å) could be ascribed to the (110) plane, the peak at 2θ = 17.06° (d = 5.19 Å) could be assigned to the (010) plane, and the diffraction at 2θ = 24.9° (d = 3.58 Å) was attributed to the (111) plane of PHF. The unit cell of PHF-6 and PHF-7 should be triclinic according to the published paper [2]. These results exhibited excellent agreement with the published literature [43]. However, for PHF-1, only one wide signal at 2θ = 24.5° can be observed, indicating that PHF-1 was a semi-crystalline polyester, while PHF-6 and PHF-7 were crystalline polyesters. The crystallinity of PHF-1, PHF-6, and PHF-7 was 53.3%, 94.3%, and 96.7%, respectively, based on the XRD data.
Mechanical Properties
The mechanical properties of the PHF membranes, evaluated by means of universal tensile testing, are shown in Figure 5a, and the related data are summarized in Figure 5b. It can be clearly observed from the yielding during the stress-strain tests that the PHF membranes displayed ductile fracture behavior. The SEM image of the fracture zone indicated that the PHF membrane showed typical ductile fracture characteristics [44]. Though there is a large difference in intrinsic viscosity between PHF-1 (0.230 dL/g) and PHF-6 (0.736 dL/g) due to the different condensation temperatures (Figure 2), they have a similar average Young's modulus (E), maximum tensile strength (σm), and elongation at break (εb) in Figure 5b. From the XRD results shown in Figure 4, PHF-1 is a semi-crystalline polyester and PHF-6 is a crystalline polyester; despite the large difference in intrinsic viscosity, this resulted in similar mechanical properties. However, E and σm obviously increased to 479 and 36.5 MPa from about 450 and 34 MPa with the addition of triphenyl phosphite, though there was a slight decrease in εb. Therefore, the addition of triphenyl phosphite improved the degree of polymerization of the PHFs, thus enhancing their mechanical properties, which can be exploited for the preparation of high-elongation fibre.
Thermal Properties
As PHF-1 has the lowest intrinsic viscosity, PHF-6 has the highest intrinsic viscosity without the addition of triphenyl phosphite, and PHF-7 is the counterpart of PHF-6 with the addition of triphenyl phosphite, these three samples were chosen for thermal property evaluation; the results are shown in Figure 6. Figure 6a displays the DSC traces of PHF-1, 6, and 7, and the glass transition temperatures (Tg) and melting points (Tm) are listed in Figure 6d. The values of Tg and Tm increased from 48.1 and 143.8 °C to 48.5 and 145.3 °C as the condensation temperature increased from 230 °C to 250 °C. However, Tg and Tm decreased slightly with the addition of triphenyl phosphite. In addition, it can also be observed from Figure 6a that the PHF-6 and PHF-7 samples had a sharp melting peak, owing to the high fusion enthalpy resulting from the high intrinsic viscosity, indicating that PHF-6 and PHF-7 had a narrow crystal distribution and a high crystalline phase fraction [45,46]. Furthermore, the thermal stability of the PHFs was studied; the results are shown in Figure 6b,c, and the roughly 5% mass loss of PHF-7 at 100 °C is attributed to moisture carried in during handling. The temperatures of thermal decomposition onset (Tid) as well as those of the maximum decomposition rate (Tmax) are summarized in Figure 6d. All three PHFs have a similar Tid (about 348 °C), which is higher than their Tm (about 144 °C). Therefore, they are thermally stable and can be safely processed at temperatures above their melting point. Similarly, the increase of intrinsic viscosity caused the decomposition temperature to rise, and PHF-7 had a higher Tmax (411 °C) than PHF-1 and PHF-6 (about 390 °C), indicating that the addition of triphenyl phosphite had a positive effect on the thermal stability of PHF. These results indicate that the addition of triphenyl phosphite not only alleviated yellowing during the synthesis of the PHFs, but also improved their thermal properties. Since the obtained PHFs had a similar maximum decomposition rate, a lower thermal decomposition onset, and a single glass transition temperature compared with the PEF reported in the literature [45], the obtained PHFs can be used as low-melting-point polyesters.
Degradability in Strong Acid
The degradability of the three PHF membranes was determined by monitoring the weight loss with time under strong acid conditions; the results are shown in Figure 7. The weight loss of the three PHF membranes was between 6.4 and 9.4 wt % when treated with strong acid for one week. The weight loss gradually increased with treatment time, and the maximum weight loss reached about 20 wt % after four weeks of treatment. As discussed in Section 3.4, PHF-1 was a semi-crystalline polyester, while PHF-6 and PHF-7 were crystalline polyesters, with crystallinities of 53.3%, 94.3%, and 96.7%, respectively. As a result, it can be seen in Figure 7a that the acid degradation rate of PHF-1 was higher than that of PHF-6 and PHF-7. In addition, the acid degradation rate of PHF-6 was lower than that of PHF-7 even though PHF-6 has the lower crystallinity; this was mainly due to the higher melting temperature of PHF-6 (Tm = 145.3 °C, Figure 6) compared with that of PHF-7 (Tm = 144.2 °C, Figure 6). As a result, the weight loss of PHF-1 (23.0 wt %) was higher than that of PHF-7 (20.9 wt %), which in turn was higher than that of PHF-6 (19.8 wt %). Compared with the FDCA-based polyesters reported in the published literature [47], the weight loss of the PHFs was obviously higher; the reported values were only 1, 2, and 10 wt % in the fourth week and 2, 5, and 28 wt % in the twenty-second week under neutral, pH = 4.0, and pH = 12.0 conditions, respectively. Figure 7a shows that the weight loss is not linearly related to the acid treatment time, which may be due to the difference between crystalline and amorphous regions [48]. The environmental conditions applied in this paper were harsher than those reported, yet the degradation rate of the polyester was obviously higher, demonstrating better acid degradation performance.
Furthermore, Figure 7c shows that the PHF film surface was obviously eroded after four weeks of treatment by strong acid when compared with the control sample (Figure 7b), and the degree of erosion varied across the sample surface (Figure 7c), which may be due to the difference between the crystalline regions and the amorphous zones [48].
Conclusions
In the present work, an economical method for the preparation of FDCA derived from HMF was constructed, using K2FeO4 as the oxidant and K2HPO4 and Fe(OH)3 as stabilizers of K2FeO4. The results revealed that the oxidation efficiency of HMF to FDCA was increased, and a 91.7 wt % yield of FDCA with high purity (>99%) was obtained at 25 °C in 15 min under an air atmosphere.
Subsequently, a totally biomass-based polyester, poly(hexylene 2,5-furandicarboxylate) (PHF), was prepared successfully with the direct use of the obtained FDCA and commercially available HDO as monomers. The degree of polymerization of the PHFs was improved by the addition of triphenyl phosphite; a crystalline polyester (PHF-7) with typical (110), (010), and (111) planes at 2θ = 13.78° (d = 6.42 Å), 17.06° (d = 5.19 Å), and 24.9° (d = 3.58 Å) was obtained when 0.05 mmol of triphenyl phosphite was added. As a result, the obtained PHF-7 had the highest intrinsic viscosity (0.803 dL/g), average Young's modulus (479 MPa), and maximum tensile strength (36.5 MPa) among the PHFs. Its elongation at break was 216%, which is significantly higher than that of PET (90%) and PEF (3%). Furthermore, the high intrinsic viscosity increased the enthalpy of fusion with respect to as-synthesized PEF; accordingly, the melting peak of PHF-7 was particularly sharp, and the Tg and Tm of PHF-7 were about 48 and 144 °C, relatively low compared with those of PEF. PHF-7 can therefore serve as a heat-bondable fibre with a low melting point, in packaging applications with high gas-barrier demands, and as an engineering material.
All of these results support the notion that it is entirely possible to synthesize furan-aromatic polyesters via the direct esterification method starting from HMF, and that the properties of furan-aromatic polyesters based on renewable resources compare favorably with those of petrochemical benzene-aromatic polyesters.
"year": 2019,
"sha1": "206d570cda9b47aa0302eccadc40eaf1c2f88878",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2073-4360/11/2/197/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "206d570cda9b47aa0302eccadc40eaf1c2f88878",
"s2fieldsofstudy": [
"Materials Science"
],
"extfieldsofstudy": [
"Chemistry",
"Medicine"
]
} |
Acute Tubulointerstitial Nephritis and Secondary Renal Amyloidosis: A Rare Complication of Atezolizumab
Lung cancer is the second most common malignancy in both genders and the most common cause of cancer-related deaths worldwide. Broadly, lung cancer is divided into two types: small-cell lung cancer (SCLC) and non-small cell lung cancer (NSCLC). Non-small cell lung cancer accounts for 85% of the diagnoses of lung cancer. It is necessary to check for any targetable mutations, which can help in deciding the treatment plan for the patients. The patient we are reporting is a 70-year-old male with multiple co-morbidities diagnosed with non-small cell carcinoma, favoring adenocarcinoma on histopathology. He was started on Atezolizumab/Bevacizumab/Carboplatin/Paclitaxel (ABCP). He was switched to maintenance Atezolizumab/Bevacizumab after four cycles due to poor tolerance to carboplatin and paclitaxel. The patient presented with neutropenic colitis and acute kidney injury (AKI), requiring admission. Workup revealed nephrotic-range proteinuria with a high urinary albumin-to-creatinine ratio. He underwent a renal biopsy to ascertain the cause of his proteinuria, which showed marked acute and chronic tubulo-interstitial nephritis (TIN), amyloidosis, and global glomerulosclerosis. Secondary (AA) amyloidosis is characterized by the extracellular deposition of misfolded proteins. Although interstitial nephritis is a reported side effect of immune checkpoint inhibitors, AA amyloidosis is a rarer side effect. To determine the exact cause of, and enable early therapeutic intervention in, immune checkpoint inhibitor-related kidney injury, large retrospective or prospective studies should be done.
Introduction
Lung cancer is the second most common malignancy in both genders in terms of incidence; however, it is the most common cause of cancer-related deaths worldwide [1]. Broadly, lung cancer is divided into two types: small-cell lung cancer (SCLC) and non-small-cell lung cancer (NSCLC). Non-small cell lung cancer accounts for 85% of the diagnoses of lung cancer [2] and can further be sub-classified into four different histological subtypes, with adenocarcinoma being the most common subtype [2]. Treatment of NSCLC depends on the patient's performance status, comorbidities, tumor stage, and the molecular nature of the disease. Patients with stages I to III are treated with curative intent, which includes surgery, chemotherapy, radiation therapy, or a combined approach [2]. It is necessary to check for any targetable mutations that can help in deciding the treatment of patients with stage IV NSCLC and those with early-stage disease who may require adjuvant tyrosine kinase inhibitor therapy after undergoing curative resection [3]. In the absence of targetable driver mutations, anti-PD-1 or anti-PD-L1 agents (immune checkpoint inhibitors (ICIs), pembrolizumab, and atezolizumab, respectively) can be incorporated into the treatment plan if programmed death ligand 1 (PD-L1) expression is more than 1%. Atezolizumab can also be used in an adjuvant setting in combination with chemotherapy, as reported in the IMpower150 trial [3]. As far as the safety profile is concerned, the most commonly reported immune-related adverse events associated with ICIs include diarrhea, pneumonitis, and hepatitis.
Here, we report a case of a 70-year-old male who received adjuvant Atezolizumab and developed nephrotic-range proteinuria, which was eventually proven to be drug-induced secondary amyloidosis.
Case Presentation
The patient we are reporting is a 70-year-old male ex-smoker with a history of 25 pack-years of smoking and comorbidities of hypertension and ischemic heart disease, for the latter of which he underwent percutaneous coronary intervention. He presented to the clinic with a history of hemoptysis for three months. A computed tomography (CT) scan of the thorax was performed, which showed a well-circumscribed lesion in the upper lobe of the right lung along with mediastinal lymphadenopathy. A positron emission tomography (PET) scan showed an upper lobe lesion in the right lung and metabolically avid right hilar and right para-tracheal lymph nodes. His disease was radiologically staged as cT3N2M0 (Stage III-B). A CT-guided biopsy of the lung lesion was performed, which revealed non-small cell carcinoma favoring adenocarcinoma on histopathology. Immunohistochemistry showed positive cytokeratin, TTF1 was focally positive, and p40 and CD56 were negative. Polymerase chain reaction (PCR) for epidermal growth factor receptor (EGFR) did not detect any mutation. Fluorescence in situ hybridization (FISH) was also negative for ALK gene rearrangement. Immunohistochemistry for PD-L1 clone SP142 was positive, with a tumor proportion score (TPS) of more than 50%.
The case was discussed in a multidisciplinary meeting; the patient underwent upfront thoracotomy, leading to a right lung upper lobe lobectomy and mediastinal staging, resulting in pathological stage T3N0. A PET scan performed after surgery showed interval progression with hyper-metabolic pleural-based nodules, nodules in the right lung middle lobe and horizontal fissure, right para-tracheal and internal mammary lymphadenopathy, and bilateral adrenal and bony metastases. He was started on Atezolizumab 1200 mg, Bevacizumab 15 mg/kg, carboplatin AUC 6, and Paclitaxel 200 mg/m2 (ABCP). A restaging PET scan after four cycles of ABCP showed a partial response, with no avidity in pleural nodules, mediastinal nodes, or bony metastases. The adrenal glands also responded, with one of the glands showing some increase in size due to necrosis, but avidity in both glands decreased overall. The patient, however, had poor tolerance to chemotherapy, with grade II to grade III gastrointestinal side effects, so he was switched to maintenance Atezolizumab/Bevacizumab (AB). After four cycles of maintenance AB, a PET/CT scan showed interval progression in the left adrenal gland lesion. The left adrenal gland was irradiated, and the treatment regimen was switched to Atezolizumab/Pemetrexed (AP). After four cycles of AP, a PET scan showed interval regression in the size and metabolic activity of the left adrenal mass. However, after the 5th cycle of AP, the patient presented with neutropenic colitis and acute kidney injury (AKI), requiring admission. His renal functions continued to deteriorate gradually despite holding chemo-immunotherapy (CIT), as shown in Figure 1.
FIGURE 1: Showing the creatinine trend.
An interval-staging CT scan was performed after three cycles, which showed stable postsurgical changes from the right lung upper lobe lobectomy without local recurrence or metastatic nodules, along with stable bilateral necrotic adrenal metastases. Further workup for AKI revealed nephrotic-range proteinuria with a urine albumin-to-creatinine ratio of 7915 mg/g (reference < 17 mg/g) and a urine protein-to-creatinine ratio of 15.54 mg/mg (reference < 0.11 mg/mg). Serum protein electrophoresis was consistent with a monoclonal band of 2.1 grams/liter in the beta region, but the kappa/lambda ratio was in the normal range (1.55; reference: 1.17-2.93).
He underwent a renal biopsy to ascertain the cause of his proteinuria, which showed marked acute and chronic tubulo-interstitial nephritis (TIN), amyloidosis, and global glomerulosclerosis (Figures 2-5). The patient was started on Prednisolone 1 mg/kg; creatinine came down from 2.8 mg/dl to 1.7 mg/dl after four weeks of steroid therapy, although he continued to have albuminuria. He gradually developed generalized anasarca, which kept worsening; in addition, he developed ESBL Escherichia coli septicemia, leading to multiorgan failure and, later, death.
Discussion
Our patient was an elderly male with PD-L1-positive adenocarcinoma of the lung, stage IIIB, treated with CIT. He had a particularly good treatment response for almost a year. However, he later developed secondary renal AA amyloidosis and tubulointerstitial nephritis, which is a very rare side effect of Atezolizumab; only a few case reports/case series have documented this side effect [4,5].
Secondary (AA) amyloidosis is characterized by the extracellular deposition of misfolded proteins. AA amyloidosis can occur in various circumstances, including chronic inflammatory disorders such as rheumatoid arthritis and ankylosing spondylitis, inflammatory bowel disease, chronic infections like tuberculosis, and neoplasms like renal cell carcinoma and lymphoma.
The incidence of AA amyloidosis varies from 1-2 cases per million and is now decreasing, with a prevalence of about 5% to 10% [6,7]. In Western countries, the incidence of AA amyloidosis is decreasing due to the low incidence of chronic infections and better treatment for autoimmune diseases; it is now less common than amyloid light-chain (AL) or wild-type transthyretin (senile) amyloidosis. In amyloidosis, intermediate SAA (serum amyloid A) products aggregate into protofilaments. The kidney is the major organ involved, with proteinuria as the first clinical manifestation. It is diagnosed on a renal biopsy, and the extent of renal damage defines the prognosis. Targeted anti-inflammatory treatment aims to normalize SAA levels and achieve a sustained response.
Interstitial nephritis is a reported side effect of ICI-related kidney injury, but AA amyloidosis is a rarer side effect. Only a few case reports and a case series have reported this side effect [4]. Data suggest that immune checkpoint inhibitor-related kidney injury may occur as late as 12 months after treatment initiation, as in our case [5,8].
Though pemetrexed can cause renal injury, to the best of our knowledge no case of AA amyloid has been reported with it. Available reports suggest that there is PD-L1 expression in the renal tubular epithelium of patients who develop ICI-related kidney injury, and the intensity of expression is related to the severity of the renal injury. Treatment of ICI-related kidney injury is withdrawal of immunotherapy in combination with immunosuppression, leading to improvement in renal function and a decrease in urinary protein excretion. The extent of inflammation and sclerosis on biopsy determines recovery, with severe inflammation and sclerosis associated with a poor response to immunosuppression [9].
So, large retrospective or prospective studies should be done to determine the exact cause of, and the role of early therapeutic intervention in, immune checkpoint inhibitor-related kidney injury.
Conclusions
Immune checkpoint inhibitors play an important role in lung cancer. Interstitial nephritis is a reported side effect of ICI-related kidney injury, but AA amyloidosis is a rarer side effect that needs early diagnosis and treatment. Only a few case reports and a case series have reported this side effect. More reports in the future are required to prove the association of this side effect with Atezolizumab and to determine how to treat it.
In compliance with the ICMJE uniform disclosure form, all authors declare the following: Payment/services info: All authors have declared that no financial support was received from any organization for the submitted work. Financial relationships: All authors have declared that they have no financial relationships at present or within the previous three years with any organizations that might have an interest in the submitted work. Other relationships: All authors have declared that there are no other relationships or activities that could appear to have influenced the submitted work.
FIGURE 2: This photomicrograph shows marked acute and chronic interstitial inflammation (lymphocytes, plasma cells, neutrophils, and a few eosinophils), moderate interstitial fibrosis, and tubular atrophy. Arteries and arterioles show hyalinosis.
"year": 2023,
"sha1": "5815af0afc89aa2537e39fb175e582c04e6a6c23",
"oa_license": "CCBY",
"oa_url": "https://assets.cureus.com/uploads/case_report/pdf/199317/20231128-15986-19yveot.pdf",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "82f315dff6190cd85e62bc0540c926408b796958",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Point-spread function ramifications and deconvolution of a signal dependent blur kernel due to interpixel capacitive coupling
Interpixel capacitance (IPC) is a deterministic electronic coupling that results in a portion of the collected signal incident on one pixel of a hybridized detector array being measured in adjacent pixels. Data collected by light-sensitive HgCdTe arrays which exhibit this coupling typically goes uncorrected or is corrected by treating the coupling as a fixed point spread function. Evidence suggests that this IPC coupling is not uniform across different signal and background levels. This variation invalidates assumptions that are key to decoupling techniques such as Wiener filtering or application of the Lucy-Richardson algorithm. Additionally, the variable IPC results in the point spread function (PSF) depending upon a star's signal level relative to the background level, among other parameters. With an IPC ranging from 0.68% to 1.45% over the full well depth of a sensor, as is a reasonable range for the H2RG arrays, the FWHM of the JWST's NIRCam 405N band is degraded from 2.080 pix (0".132), as expected from the diffraction pattern, to 2.186 pix (0".142) when the star is just breaching the sensitivity limit of the system. For example, when attempting to use fixed PSF fitting (e.g. assuming the PSF observed from a bright star in the field) to untangle two sources with a flux ratio of 4:1 and a center-to-center distance of 3 pixels, flux estimation can be off by upwards of 1.5%, with a separation error of 50 millipixels. To deal with this issue, an iterative non-stationary method for deconvolution that can account for the signal-dependent nature of IPC is here proposed, implemented, and evaluated.
Introduction
Hybridization has become a standard portion of the fabrication process for most detectors which utilize unconventional semiconductors for photon detection, and even for some silicon arrays. Hybridized detectors are composed of separate photodiode and read-out circuit layers, which are connected to each other using indium bump bonds to form electrical contact, as illustrated in figure 1. The proximity of conductive elements in adjacent pixels of this type of detector gives rise to a classic capacitor. 2 The presence of this capacitance results in a coupled relationship between a pixel's charge and its neighbors' electrostatic potentials. The readout from a pixel corresponds to this electrostatic potential, yielding the final result that signal collected on one pixel is attributed, not just to that pixel, but also to its neighbors. This type of cross-talk is distinct from diffusion; diffusion occurs when charge carriers, generated under one pixel, are collected in a neighboring pixel. In the case of IPC coupling the charge carriers do not move between pixels; they are collected in one pixel and their collection impacts the electrostatic state of nearby pixels.
Early observation of IPC was made through the coupling's reduction of measured Poissonian noise via the correlation it introduces. 3, 4 By examining the autocorrelation of flat fields, the magnitude of the IPC coupling could be assessed. 1, 4 This correlation invalidated an essential assumption 1 for the calculation of a sensor's conversion gain using the photon transfer method. 5 This, in turn, necessitated the adoption of a direct capacitive comparison method to accurately determine the conversion gain. 6 However, the impact that IPC coupling has on collected data remains an issue. Correction methods for IPC have been proposed, 7 but these methods act to deconvolve a constant coupling.
The underlying semiconductor physics 8 indicates that this assumption of constant coupling may not hold when a sensor is exposed to spatially varying illumination, a regime which autocorrelation methods are incapable of exploring. Simulations from first principles 9 and measurements obtained using cosmic ray exposures and hot pixels of various intensities 10, 11 have verified that the coupling of signal by IPC is a function of the signal level.
As pixel sizes used in modern focal plane arrays continue to decrease, the distance between conductive elements of adjacent pixels is reduced, resulting in an increase of interpixel capacitance. 2 This increase in capacitance results in a greater coupling, 1 which has inspired exploration of the impact that IPC may have on photometric and astrometric observations for future missions such as WFIRST. 12, 13 Until this point, these examinations have not included the signal dependence of IPC.
The goal of this work is to rigorously model the photometric and astrometric effects of a scene-dependent IPC as has been predicted 9 and observed. 10, 11 This approach allows for simultaneous development of an efficient and effective decoupling technique to aid in the restoration of photometric and astrometric accuracy.
Mathematical methods
A strict mathematical model of the application of IPC coupling allows for examination in a way that lends itself to the development of an iterative approximation technique by which decoupling can be achieved.
A rigorous definition of coupling
The fundamental pixel-to-pixel issue is that on any particular pixel i, j, the value measured in that pixel, M(i, j), is not the signal collected in that pixel, S(i, j). 3 A portion of signal collected on a pixel couples capacitively to its neighbors, while simultaneously the neighbors couple capacitively to the initial pixel. For a constant coupling, these values are connected by a nearest-neighbor coupling kernel 4

K = [ 0, α, 0; α, 1 − 4α, α; 0, α, 0 ]

by the relation 4

M = K * S,

where * denotes discrete convolution. This results in a fraction, α, referred to as the IPC coupling coefficient, 4 of signal moving from a pixel to each of its neighbors. Expressing this relationship in the discrete form with the convolution expanded, and allowing the coupling to vary between pixel pairs, we have the following relationship:

M(i, j) = S(i, j) + Σ_(m,n)∈N(i,j) α(i, j, m, n) · S(m, n) − Σ_(m,n)∈N(i,j) α(m, n, i, j) · S(i, j),

where M(a, b) is the value measured on pixel a, b; S(a, b) is the signal collected on pixel a, b; α(a, b, c, d) is the fraction of signal coupled from pixel c, d onto pixel a, b; N(i, j) is the set of nearest neighbors of pixel i, j; and i, j and m, n are integer pairs representing pixel locations. In this way the readout from each pixel is the signal collected in the initial pixel, plus the signal coupled from the neighbors into the initial pixel, minus the signal coupled from the initial pixel into the neighboring pixels.
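To make the forward model concrete, the following minimal Python sketch applies the expanded sum above to a 2-D array. The functional form of alpha() is a hypothetical stand-in for the signal-dependent coupling of equation 7 (whose exact form is not reproduced here): it simply interpolates linearly between the 1.45% and 0.68% couplings quoted elsewhere in this work. The FULL_WELL constant, the function names, and the treatment of array edges are illustrative assumptions, not the reference implementation.

```python
import numpy as np

FULL_WELL = 60000.0  # assumed full well depth in counts (illustrative only)

def alpha(s_receiver, s_source):
    """Hypothetical signal-dependent coupling coefficient (stand-in for eq. 7).

    Interpolates linearly between 1.45% coupling at low signal and 0.68%
    at full well, using the mean level of the receiving and source pixels.
    Works elementwise on scalars or arrays.
    """
    level = np.clip(0.5 * (s_receiver + s_source), 0.0, FULL_WELL) / FULL_WELL
    return 0.0145 - (0.0145 - 0.0068) * level

def apply_ipc(signal):
    """Forward model of equation 3 on a 2-D array of collected signal."""
    signal = np.asarray(signal, dtype=float)
    measured = signal.copy()
    rows, cols = signal.shape
    for i in range(rows):
        for j in range(cols):
            for di, dj in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                m, n = i + di, j + dj
                if 0 <= m < rows and 0 <= n < cols:
                    # plus: signal coupled from neighbor (m, n) into (i, j)
                    measured[i, j] += alpha(signal[i, j], signal[m, n]) * signal[m, n]
                    # minus: signal coupled out of (i, j) into neighbor (m, n)
                    measured[i, j] -= alpha(signal[m, n], signal[i, j]) * signal[i, j]
    return measured
```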
If, however, the coupling coefficient varies as a function of the pixel signal level, then the coupling cannot be expressed strictly as a convolution. Instead the coupling needs to be applied on a pixel-by-pixel basis depending on the signal collected in both pixels involved in the coupling; that is to say, α(a, b, c, d) = α(S(a, b), S(c, d)). Evidence from both a first-principles, semiconductor physics approach 9 and data analysis of coupling from isolated single pixel events 10, 11 indicates that the coupling coefficient varies as a function of the pixel signal level. Therefore, this extended form is a more accurate characterization than a simple convolution. Furthermore, in the case where the range of α is small, this approach converges identically to the case of discrete convolution.
Decoupling
To allow equation 3 to be solvable for S we must perform an approximation; taking α(S(a, b), S(c, d)) << 1.00 we can make the first-order approximation

α(S(a, b), S(c, d)) · S(c, d) ≈ α(M(a, b), M(c, d)) · M(c, d).

This allows for the expression of an approximation of S, indicated by Ŝ, in terms of M:

Ŝ(i, j) = M(i, j) − Σ_(m,n)∈N(i,j) α(M(i, j), M(m, n)) · M(m, n) + Σ_(m,n)∈N(i,j) α(M(m, n), M(i, j)) · M(i, j).

This equation tells us that we can approximate the signal collected in a pixel as the value measured in that pixel, minus the signal that would have coupled into the pixel from each neighbor, plus the signal that would have coupled out from the pixel into each neighbor. Our approximation, Ŝ, is now closer to S than our initial observation, M, provided that α(a, b, c, d) is strictly signed.
In this way we can reform our earlier approximation and instead use α(S(a, b), S(c, d)) · S(c, d) ≈ α(Ŝ(a, b), Ŝ(c, d)) · Ŝ(c, d). Using this method we can devise an iterative approach, not entirely dissimilar to the Euler method, for evaluating successive approximations of the signal collected, Ŝ_q for the qth approximation, with Ŝ_0 = M:

Ŝ_(q+1)(i, j) = M(i, j) − Σ_(m,n)∈N(i,j) α(Ŝ_q(i, j), Ŝ_q(m, n)) · Ŝ_q(m, n) + Σ_(m,n)∈N(i,j) α(Ŝ_q(m, n), Ŝ_q(i, j)) · Ŝ_q(i, j).
This process looks at the measured frame, calculates what the coupling to and from every pixel would have been if that frame were the input, and then corrects each pixel by that difference. It then takes that first guess at a correction as the input frame, calculates what the coupling to and from every pixel would have been in that case, and then corrects the measured frame by that difference.
This process continues until the pixel by pixel difference between successive estimates of the input frame approaches zero.
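A minimal Python sketch of this loop follows, reusing the hypothetical alpha() and apply_ipc() helpers sketched above. It exploits the fact that the equation 5 update is algebraically equivalent to correcting the current estimate by the residual between the actual measurement and the re-coupled estimate, Ŝ_(q+1) = Ŝ_q + (M − apply_ipc(Ŝ_q)); the fixed iteration count is an illustrative placeholder for a real stopping rule.

```python
def decouple(measured, n_iter=5):
    """Iterative non-linear decoupling (equation 5).

    Each iteration re-couples the current estimate with the forward model
    and corrects the estimate by the residual against the measurement.
    """
    measured = np.asarray(measured, dtype=float)
    estimate = measured.copy()  # S_0 = M
    for _ in range(n_iter):
        estimate = estimate + (measured - apply_ipc(estimate))
    return estimate
```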
Example calculation
Consider a small 4x4 array with a signal incident slightly off center, such that the signals recorded on the focal plane array span a wide range of values, peaking at 10,000 counts. Due to the range of values contained in this array, the coupling will vary between pixel pairs.
We take the coupling between a pixel with incident signal S and a neighbor with incident signal N to be governed by equation 7. This form has a best-case behavior of 0.68% coupling when observing a bright point source, rising toward 1.45% at low signal levels. In this case, the couplings appear as illustrated in table 1.
As illustrated in figure4 after IPC coupling there is an underestimate of peak pixel intensity by 3.81% (i.e. 10000 to 9619). Application of a single iteration of the non-linear iterative deconvolution algorithm will yield the a reduction of error at the peak intensity pixel to 0.1%(i.e. restoration from 9619 to 9983). However, not every individual pixel value has moved closer to the true value; the corner values of this array were unchanged by the initial coupling but have now had their values changed. In fact, these locations now return nonsensical negative values. These negative values are a result of the approximation of α · S = α ·Ŝ q not being strictly true. This error is corrected in
Implementation
This deconvolution technique can be compared to other filtering techniques such as Wiener filtering 14 or Lucy-Richardson deconvolution. 15, 16 While each of these deconvolution techniques works to restore an image blurred by a known point spread function (PSF), they are all unable to adapt in the presence of a well-characterized but variable PSF; they cannot handle a coupling coefficient that changes across a scene. However, they all have the advantage of being computable in linearithmic time 17 (O(n log n)), whereas the method described above is constrained to run in quadratic time (O(n^2)). This is because conventional filtering algorithms can exploit the properties of convolution in the Fourier domain to operate on the entire image simultaneously. 17 The technique described above is constrained to operate in the image domain and must operate pixel by pixel. However, within an iteration it requires only computationally cheap look-up, multiplication, and addition operations, while not requiring any reference to the newly updated values until the next iteration begins. As a result, this algorithm is an excellent candidate for parallel implementation. Though this algorithm converges quickly, as illustrated in section 2.3, a convergence constraint is still necessary for implementation. The convergence condition introduced permits the algorithm to either cycle 20 times, or cycle until the maximum difference between each individual pixel pair in two successive iterations is less than 0.001% of the greatest pixel magnitude returned in that iteration. This constraint is typically met after three to five iterations for α peaking on the order of a few percent. Including image read and write support functions, a 512 by 512 array can be fully decoupled in under 10 seconds. A 2048 by 2048 array can be decoupled with a total run-time of approximately 6 minutes, with >80% of that time being spent on writing out the array as a *.csv file. Sample code for both a naive Python implementation and the full GPU parallel implementation will be available at https://github.com/Donlok/Decouple.
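A vectorized sketch of the same update with the stated convergence constraint is given below; because the per-iteration work reduces to whole-array shifts, multiplies, and adds with no intra-iteration data dependence, a GPU port is straightforward. Note that np.roll wraps at the array edges, which a production implementation would replace with padding; this, and the reuse of the hypothetical alpha() above, are assumptions of the sketch.

```python
def decouple_vectorized(measured, max_iter=20, tol=1e-5):
    """Vectorized equation 5 with the stated stopping rule.

    Stops after `max_iter` iterations, or when the largest change between
    successive estimates falls below `tol` (0.001%) of the brightest pixel.
    """
    measured = np.asarray(measured, dtype=float)
    est = measured.copy()
    for _ in range(max_iter):
        coupled_in = np.zeros_like(est)
        coupled_out = np.zeros_like(est)
        for axis, shift in ((0, 1), (0, -1), (1, 1), (1, -1)):
            nbr = np.roll(est, shift, axis=axis)  # wraps at edges (sketch only)
            coupled_in += alpha(est, nbr) * nbr   # into each pixel from neighbor
            coupled_out += alpha(nbr, est) * est  # out of each pixel to neighbor
        new_est = measured - coupled_in + coupled_out
        converged = np.max(np.abs(new_est - est)) < tol * np.max(np.abs(new_est))
        est = new_est
        if converged:
            break
    return est
```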
IPC and the Point Spread Function
The point spread function (PSF) of an imaging system indicates how the system as a whole will respond to an incident point source. For a digital system, it is the final input to output mapping of the optics, sensor response, and digitization of the readout. A point source incident on the imaging system will be altered by optical diffraction, the pixel response function, discrete sampling, and any crosstalk present in the array. The end result is a unit volume mapping of a point in scene space into read-out space.
To build a model of the behavior of an imaging system in the presence of IPC, a full mathematical model of the imaging system has to be constructed first. To begin this model we start with a signal; for a single star we can approximate this as a Dirac delta function. 17 This function is defined to be an ideal point source; it has zero value over its full domain except at its origin and, when integrated over all space, has unit area. 17 This function is convolved with the diffraction pattern of the light; for a simple lens system we can approximate this as an Airy disk or radial sinc^2 function. For the systems considered later we will use a custom PSF calculated from the optics of the JWST. This result is then sampled by the sensor, a step that is expressed mathematically as multiplication by a comb function. 17 Instead of being a continuous function, it is now a discrete set of values that can be read out from the sensor one by one. In the absence of IPC this is the mathematical point where noise is introduced, by sampling a Poissonian distribution to represent photon noise and then adding in samples from a Gaussian distribution to represent the read noise. In the presence of IPC this process is slightly more complicated. The Poissonian sampling occurs as normal to represent the shot noise, but it is at this point that IPC is applied. 9 When IPC is taken as a constant coupling this is done through convolution with a blur kernel. 1, 4 In this work we allow IPC to vary as a function of signal strength; therefore, a discrete sum as described in equation 3 is required instead.
After this operation occurs we introduce a read noise distribution by addition of a sampled normal distribution. So the final mathematical form for an individual sampling would be

M(i, j) = IPC[ Poisson(x, y; (S * Diff)(x, y) · X_pitch(x, y)) ] + Normal(i, j; µ, σ),

where Poisson(x, y; γ) is a sample from the Poisson distribution with parameter γ at location x, y; Diff(x, y) is the diffraction pattern in two dimensions; X_pitch(x, y) is the Dirac comb in two dimensions with x and y frequency given by the pixel pitch; IPC[·] denotes the signal-dependent coupling of equation 3; and Normal(i, j; µ, σ) is a sample from the normal distribution with mean µ and variance σ^2. The PSF that will be discussed here is built in the absence of these noise distributions and using S(x, y) = δ(x, y). It can be considered as the average resulting from an ensemble of measurements.
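A sketch of this noiseless PSF construction follows, assuming a 4x-oversampled diffraction pattern (such as the WebbPSF product used later) and the hypothetical apply_ipc() helper from earlier. A source intensity must be supplied because the coupling depends on the absolute signal level; the function name and defaults are illustrative.

```python
def simulate_psf(diff_oversampled, intensity, oversample=4, ipc=True):
    """Noiseless PSF of a point source under the model above.

    Block-summing each oversample x oversample region emulates the comb
    sampling onto the detector grid; the optional IPC step then applies
    the signal-dependent coupling of equation 3. The result is
    renormalized to unit volume.
    """
    h, w = diff_oversampled.shape
    h -= h % oversample
    w -= w % oversample
    binned = diff_oversampled[:h, :w].reshape(
        h // oversample, oversample, w // oversample, oversample).sum(axis=(1, 3))
    flux = intensity * binned / binned.sum()  # counts landing on each pixel
    psf = apply_ipc(flux) if ipc else flux
    return psf / psf.sum()                    # unit-volume PSF
```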
This leaves the PSFs reported here defined as

PSF_IPC(i, j) = IPC[ Diff(x, y) · X_pitch(x, y) ].

These will be compared to what this PSF would have looked like in the absence of IPC:

PSF(i, j) = Diff(x, y) · X_pitch(x, y).

For the analysis provided here, Diff is taken as the WebbPSF F405N as provided from WebbPSF revision V, available at www.stsci.edu/~mperrin/software/psf_library/, which is provided oversampled 4x, allowing offsets in quarter-pixel intervals in each dimension. 18 This is the diffraction pattern as would be incident onto the James Webb Space Telescope's long-wave NIRCam sensor after passing through the imaging optics and the narrow-band 4.05 µm filter. To provide a continuous PSF, the figures here presented are sampled at quarter-pixel intervals and then interpolated using a cubic spline method.

Testing paradigm

In order to assess the accuracy of this decoupling technique, the coupling must be applied to known scenes for analysis. To accomplish this goal the following method was used:
• First, a scene image was generated. The particular scenes examined here include: point sources convolved with the provided WebbPSF, and random levels assigned to each pixel.
• Second, a copy of this scene image is produced. This copy undergoes IPC coupling by examining the values of each pixel and using the signal-dependent α defined by equation 7 to determine the IPC coupled image.
• Third, a read noise distribution was generated by taking uncorrelated samples from a zero-mean normal distribution with variance σ_r^2 for each pixel. This distribution is added to the original scene image and the coupled copy. These images are referred to as the truth image, true(i, j), and the coupled image, coupled(i, j), respectively.
• Fourth, the coupled image is run through the deconvolution algorithm described in equation 5, using equation 7 as the reference coupling. This output is referred to as the decoupled image, decoupled(i, j).
This method is outlined in the flowchart presented in figure 5 above.
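A compact sketch of this four-step pipeline, assuming the apply_ipc() and decouple_vectorized() helpers from earlier (run_test() and its defaults are illustrative):

```python
def run_test(scene, read_noise_rms=0.0, seed=0):
    """One pass of the testing paradigm: truth, coupled, decoupled frames."""
    scene = np.asarray(scene, dtype=float)
    rng = np.random.default_rng(seed)
    noise = rng.normal(0.0, read_noise_rms, scene.shape)  # step three
    truth = scene + noise                                 # truth image
    coupled = apply_ipc(scene) + noise                    # steps two + three
    decoupled = decouple_vectorized(coupled)              # step four
    return truth, coupled, decoupled
```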
This computational method simultaneously gives the results that would be expected if IPC were not present, the results in the presence of IPC, and the results after the removal of IPC using the iterative non-linear method described earlier. Additionally, it underscores the most significant issue with this particular iterative algorithm: the read noise distribution that is applied to the scene is not a part of the image prior to coupling, but, after being introduced, still undergoes decoupling. This forces a type of trade-space; this technique does not fully uncorrelate neighboring pixels. Instead it uncorrelates the Poissonian noise and restores the signal accuracy while forcing a correlation onto the read noise. In the absence of read noise this algorithm, both on average and in every individual pixel, restores levels to the uncoupled values. In the presence of zero-mean read noise, this algorithm restores accuracy on average, but any individual pixel has error proportional to the read noise, as will be illustrated in the following section.
Comparison to Lucy-Richardson deconvolution
In order to evaluate the success of this deconvolution technique, it has been compared to established techniques. The Lucy-Richardson (LR) deconvolution method was selected both because it was the best performing of the standard deconvolution techniques and because of the similarities it has to the method presented here; both are iterative, though the LR algorithm uses successive iterations to minimize the impact of noise 15, 16 rather than as a series of progressively more accurate estimates.
To evaluate the success of each algorithm in restoring an image, a frame where each pixel was set to a random value sampled from a uniform distribution ranging from 0 to 60,000 was used as the input. A pixel-to-pixel comparison was made between the truth image and each of the LR decoupled image, the iterative non-linear decoupled image, and the IPC coupled image. Histograms of the pixel-to-pixel error in each case when no read noise is present are illustrated for the uncorrected IPC-present case in figure 6, the iterative method described here by equation 5 in figure 7, and the LR corrected case in figure 8.
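A hedged sketch of such a comparison using scikit-image's Richardson-Lucy implementation with a fixed 3x3 IPC kernel (a stationary deconvolver can only assume a single α) is given below. The kernel value, iteration count, and frame size are illustrative; clip=False is required for count-scaled data, and the argument is named num_iter in recent scikit-image releases (older releases call it iterations).

```python
import numpy as np
from skimage.restoration import richardson_lucy

# Fixed 3x3 IPC kernel built from a single representative coupling value.
a0 = 0.01  # illustrative mid-range coupling
kernel = np.array([[0.0, a0, 0.0],
                   [a0, 1.0 - 4.0 * a0, a0],
                   [0.0, a0, 0.0]])

scene = np.random.default_rng(2).uniform(0.0, 60000.0, (256, 256))
truth, coupled, nonlinear = run_test(scene)  # no read noise

lr = richardson_lucy(coupled, kernel, num_iter=10, clip=False)

for name, img in (("coupled", coupled), ("LR", lr), ("iterative", nonlinear)):
    print(name, np.mean(np.abs(img - truth)))  # mean absolute pixel error
```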
It is clear in this case that, though the LR deconvolution does reduce error, the iterative non-linear decoupling performs far better: it reduces the pixel-to-pixel error to less than a single count, not just on average, but in every pixel individually.
It can be seen through comparison that though the growth behavior of the mean absolute error is slower in the case of LR deconvolution, the mean absolute error of the non-linear decoupling still allows for more accurate correction even with 100 RMS read noise distributions. In fact, for the LR deconvolution to perform better, the read noise distribution introduced would have to be Fig 9 Mean absolute error as a function of read noise for the IPC present case. This is indicates that for the random input frame described with α given by equation 7 the average magnitude difference between the truth image and the coupled image is 719 values (i.e. < |coupled(i, j) − true(i, j)| >≈ 719). As expected because the read noise is zero mean, this mean absolute error has no read noise dependence. on the order of 800 RMS error. Both on average and when evaluated pixel by pixel, the non-linear decoupling is more effective at removing IPC unless read noise is exceedingly large (on the order of 1% of the array's saturation).
PSF
In this section we will examine the impact that IPC has on the structure of PSFs as the incident signal varies. We will prescribe a particular coupling coefficient which matches the form presented previously in equation 7. There is a distinction of kind between the blurring due to diffraction and the blur from IPC; diffraction is continuous, whereas IPC blur occurs discretely between pixels. The results presented here appear continuous because they are three-dimensional cubic spline interpolations of data generated with one-quarter-pixel movements in the x and y directions. A sample three-dimensional PSF result is shown in figure 12. By using an interpolation method, the PSF can be treated as a continuous mathematical object rather than just a discrete sampling. This allows for better visualization of the PSF's distortion as well as greater ease in application to PSF fitting techniques, as will be outlined in a later section of this work.
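A minimal sketch of this interpolation step is given below, assuming a PSF stamp sampled on a quarter-pixel grid; the Gaussian stand-in for the PSF and the grid extent are illustrative assumptions, and scipy's RectBivariateSpline provides the cubic spline surface.

```python
import numpy as np
from scipy.interpolate import RectBivariateSpline

# Hypothetical PSF stamp sampled every 0.25 pixel over +/-5 pixels.
step = 0.25
coords = np.arange(-5.0, 5.0 + step, step)
xx, yy = np.meshgrid(coords, coords, indexing="ij")
psf_samples = np.exp(-(xx**2 + yy**2) / (2.0 * 1.0**2))  # Gaussian stand-in
psf_samples /= psf_samples.sum()

# Cubic spline interpolation turns the discrete sampling into a continuous model.
psf_model = RectBivariateSpline(coords, coords, psf_samples, kx=3, ky=3)

# Evaluate the continuous PSF at an arbitrary sub-pixel position.
print(psf_model(0.10, -0.37)[0, 0])
```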
To obtain these PSF values the same data processing chart as presented earlier is used on a point source with the following exceptions and stipulations:
1. The background level is set to zero.
2. There is no Poisson sampling of the flux map.
3. The read noise values are all set to zero: zero mean and zero variance.
4. The values are normalized to unit volume.
Three point source intensities were examined using this technique. From here on they will be referred to as 'bright', 'mid' and 'dim'. The bright point source had an intensity set with reference to the WebbPSF F405N filter.18 Both the true and decoupled frames after these operations are identical to each other across point source intensity within numerical error. The IPC present frames in these cases are distinct from the true and decoupled frames, as well as distinct from each other. Due to the functional nature of the IPC applied, the more intense the point source, the taller, narrower, and closer to true the PSF is. Equivalently, lower flux levels give rise to a shorter, fatter, and more distorted PSF, as seen in figure 13. The error introduced by IPC causes a decrease in the peak brightness of the PSF on the order of 1.2 to 1.4%. Additionally, the difference in PSF peak of a bright star relative to a dim star is on the order of 0.15%, as can be seen in the error cross section presented in figure 14.
However, these PSFs do not tell the full story of IPC coupling. In a crowded field, when the post-diffraction flux map of a scene results in overlap between sources, the IPC coupling experienced will be different from that of a point source in isolation. The simple case considered here is a system composed of two point sources separated by some distance. In general, IPC pulls collected signal away from local maxima and into local minima. When point sources are near each other, the signal generated from each star couples differently towards the neighboring star than away from it.
Photometry and Astrometry using DAOphot
With established PSFs and a method to simulate IPC on a given frame, we can now examine how, and to what extent, IPC will impact particular measurements. Here we will look at the impact that a signal-dependent IPC will have on measurements of flux and on separation estimates for a binary star. PSF fitting will be performed using established star-fitting techniques.
The python implementations of the IRAF starfinder20 and DAO photometry21 algorithms developed in the photutils library22 are used. These algorithms take as arguments an image frame for processing and a series of frames containing a super-sampled PSF. The output is a best-fit location and integrated magnitude for each star.
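A minimal detection sketch with photutils is shown below. The toy frame, the injected Gaussian source, and the threshold/FWHM settings are assumptions for illustration only, and only the DAOStarFinder detection step is shown rather than the full super-sampled PSF fit used in the paper.

```python
import numpy as np
from photutils.detection import DAOStarFinder

# Toy frame with one injected Gaussian source (sigma ~1 px, FWHM ~2.35 px).
rng = np.random.default_rng(2)
frame = rng.normal(0.0, 1.0, size=(64, 64))
yy, xx = np.mgrid[0:64, 0:64]
frame += 500.0 * np.exp(-((xx - 30.5) ** 2 + (yy - 33.2) ** 2) / (2.0 * 1.0 ** 2))

finder = DAOStarFinder(threshold=20.0, fwhm=2.35)
sources = finder(frame)
if sources is not None:
    print(sources["xcentroid", "ycentroid", "mag"])
```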
In the frames, the separation between the stars is varied from 2.0 pixels to 5.0 pixels, with an additional sample at a separation of 20.0 pixels. One star was given a brightness level called 'bright', 'mid' or 'dim', corresponding to the definitions given earlier. The second star was given a brightness as a fraction of the first star, ranging from equal intensity down to 2^-3 of the intensity in powers of two. The first star's location was initially set ranging from centered on the pixel to directly on the pixel edge in 0.25 pixel intervals in both x and y, giving 16 starting configurations per star pair.
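Enumerating this parameter grid is straightforward; the sketch below assumes a 0.25-pixel step for the separations and 0.25-pixel sub-pixel offsets, which are illustrative choices where the text does not pin down the exact sampling.

```python
import itertools
import numpy as np

separations = np.append(np.arange(2.0, 5.0 + 0.25, 0.25), 20.0)  # assumed step
flux_ratios = [2.0 ** -k for k in range(4)]   # 1, 1/2, 1/4, 1/8 of the primary
offsets = np.arange(0.0, 1.0, 0.25)           # sub-pixel starting positions

configs = list(itertools.product(separations, flux_ratios, offsets, offsets))
print(len(configs), "binary-star configurations per brightness level")
```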
Additionally, both a noise present and a noise absent case were examined. The noise absent case was designed to show the theoretical ideal, and the noise present case gives a realistic simulation of expected results. In order to evaluate the effectiveness of the decoupling, results were compared between the 'true' frames, the 'coupled' frames, and the 'decoupled' frames. An emblematic subset of these results is presented here. It can be seen from analysis of these frames that, if ignored, an IPC on the order of 1% to the nearest neighbor can cause an error in the accuracy of photometric estimates on the order of a few percent, as seen in figures 15 and 16, or worse when attempting to discern a dim object from a bright neighbor. This is due to the way the PSF distorts in the presence of IPC, causing power that should be attributed to the brighter source to instead be attributed to the dimmer source.
Because there is a region of overlap between the two stars, they no longer blur independently. The output from the sensor after IPC coupling occurs is not the sum of two coupled PSFs; it is instead the coupling of the sum of two PSFs. Because signal-dependent IPC coupling does not commute with summation, this distinction results in a breakdown of an assumption made by PSF fitting techniques: the image as a whole can no longer be represented as a linear combination of stars with the same PSF. Flux incident from each star is coupled more strongly towards the center of the binary than away from it. This results in estimates of the center-to-center distances being inaccurate on the order of tens of millipixels, as seen in figure 17. If properly corrected for, the error in flux can be reduced to the level of hundredths or thousandths of a percent, and the error in separation to the level of micro-pixels. Further figures examining the full range of parameters explored are available at https://github.com/Donlok/Photometry_Astrometry. From equation 7, as informed by simulations9 and previous observations,11 the greatest fractional coupling occurs when the signal difference between adjacent pixels is smallest. In the case of confused point sources, this signal difference is larger on the exterior side than on the interior side. This results in the interior side of the PSF of each star experiencing a greater coupling than the exterior side, causing energy to appear pulled towards the center. Additionally, a higher fractional coupling occurs as the signal strength in any single pixel decreases. As a result, the PSF distortion is most severe in confused systems where the brightness of the star is lowest. IPC's most severe impact on astrometric and photometric accuracy will occur when examining objects that are nearest to the sensitivity limit of the imaging system.
Conclusion
The non-linear iterative decoupling algorithm presented here is capable of completely removing the impact of IPC in the mathematically abstract case. In the applied case, where read noise is present, it is capable of signal restoration with an error proportional to the read noise magnitude. It requires a well characterized IPC, but, unlike other methods, it can account for an IPC which varies with signal level. Failure to correct, or improper correction of, IPC can introduce systematic errors into the data which can result in incorrect scientific conclusions. As missions in the infrared begin to transition from arrays using the H2RG readout circuitry to the smaller pixels of the H4RG-10 circuitry, IPC will continue to become larger and present a more significant problem. Some measurements for H4RG-10 HgCdTe arrays indicated α on the order of 8%, resulting in coupling out of the central pixel on the order of 35%.23 If correction of IPC is not performed, this could result in erroneous conclusions and unnecessary imprecision from missions such as WFIRST. In the case of higher couplings, diagonal and second-neighbor couplings can rise to non-negligible levels.23 Figure 18 shows the PSF degradation that would be caused to the F405N WebbPSF in the presence of a static α = 8%. The FWHM would be expected to broaden from 2.080 pixels to 2.249 pixels. This is an 8% increase in FWHM prior to the introduction of any signal dependence. The greater IPC in the next generation of sensors will result in further image degradation and will require careful consideration regarding characterization and correction.
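The quoted broadening can be sanity-checked with a rough sketch: convolve a PSF with a static 8% nearest-neighbor kernel and compare widths. A symmetric Gaussian stand-in replaces the actual F405N WebbPSF model and the FWHM is estimated from second moments, so the numbers only approximate the 2.080 to 2.249 pixel figures quoted above.

```python
import numpy as np
from scipy.ndimage import convolve

# Gaussian stand-in with FWHM ~2.08 px in place of the F405N WebbPSF model.
sigma = 2.080 / 2.3548
yy, xx = np.mgrid[-16:17, -16:17].astype(float)
psf = np.exp(-(xx**2 + yy**2) / (2.0 * sigma**2))

alpha = 0.08
kernel = np.array([[0.0, alpha, 0.0],
                   [alpha, 1.0 - 4.0 * alpha, alpha],
                   [0.0, alpha, 0.0]])
blurred = convolve(psf, kernel, mode="constant")

def fwhm_from_moments(img):
    # FWHM estimate from the flux-weighted second moment (Gaussian assumption).
    var = (img * (xx**2 + yy**2)).sum() / (2.0 * img.sum())
    return 2.3548 * np.sqrt(var)

print(fwhm_from_moments(psf), "->", fwhm_from_moments(blurred))
```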
The decoupling technique described here in equation 5 for nearest-neighbor coupling is easily extensible to coupling kernels of any size through expansion of the indexing of the sums. In fact, as presented, equation 5 includes diagonal coupling, though it has been set to zero in the implementations and examples provided. | 2018-05-23T20:24:37.000Z | 2018-05-23T00:00:00.000 | {
"year": 2018,
"sha1": "b1c32040cf2312be2b1fc5279830d729f17c4ed4",
"oa_license": null,
"oa_url": "https://iopscience.iop.org/article/10.1088/1538-3873/aac261/pdf",
"oa_status": "BRONZE",
"pdf_src": "Arxiv",
"pdf_hash": "b1c32040cf2312be2b1fc5279830d729f17c4ed4",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
221381254 | pes2o/s2orc | v3-fos-license | A predictive nomogram for lymph node metastasis of incidental gallbladder cancer: a SEER population-based study
Background Existing imaging techniques have a low ability to detect lymph node metastasis (LNM) of gallbladder cancer (GBC). Gallbladder removal by laparoscopic cholecystectomy can provide pathological information regarding the tumor itself for incidental gallbladder cancer (IGBC). The purpose of this study was to identify the risk factors associated with LNM of IGBC and to establish a nomogram to improve the ability to predict the risk of LNM for IGBC. Methods A total of 796 patients diagnosed with stage T1/2 GBC between 2004 and 2015 who underwent surgery and lymph node evaluation were enrolled in this study. We randomly divided the dataset into a training set (70%) and a validation set (30%). A logistic regression model was used to construct the nomogram in the training set and then was verified in the validation set. Nomogram performance was quantified with respect to discrimination and calibration. Results The rates of LNM in T1a, T1b and T2 patients were 7, 11.1 and 44.3%, respectively. Tumor diameter, T stage, and tumor differentiation were independent factors affecting LNM. The C-index and AUC of the training set were 0.718 (95% CI, 0.676–0.760) and 0.702 (95% CI, 0.659–0.702), respectively, demonstrating good prediction performance. The calibration curves showed perfect agreement between the nomogram predictions and actual observations. Decision curve analysis showed that the LNM nomogram was clinically useful when the risk was decided at a possibility threshold of 2–63%. The C-index and AUC of the validation set were 0.73 (95% CI: 0.665–0.795) and 0.692 (95% CI: 0.625–0.759), respectively. Conclusion The nomogram established in this study has good prediction ability. For patients with IGBC requiring re-resection, the model can effectively predict the risk of LNM and make up for the inaccuracy of imaging.
GBCs may be confirmed at the early stage through laparoscopic cholecystectomy (LC) so that early R0 resection may be performed; thus, progression of the disease may be avoided, and the overall survival rate may be improved [4,5].
More than 50% of GBCs are diagnosed by intraoperative or postoperative pathological examination after LC [4] and are considered incidental gallbladder cancer (IGBC), among which stage T1/2 GBCs are the most common [6]. IGBC often requires radical re-resection [5]. Among patients with lymph node metastasis (LNM), lymph node dissection is an important part of radical surgery [7]. Although an increasing number of clinical centers emphasize the importance of high-quality lymph node dissection [8][9][10], a study based on the SEER database showed that the lymph node resection rates for stages T1a, T1b, and T2 GBC were only 33.6, 39.2, and 53.7%, respectively [7], which indicated that preoperative lymph node examination was seriously insufficient. LNM is an independent factor influencing the prognosis of early GBC [11,12]. Therefore, the preoperative diagnosis of LNM is very important. However, current imaging is still not sensitive enough to identify LNM in the preoperative examination [13]. Given the low incidence rate of GBC, there is still no large-sample study predicting the risk factors for LNM in early GBC, and there is no quantified prediction model.
LC makes general pathological information on patients with IGBC available before the patients receive re-resection [2]. In recent years, nomograms have been broadly used for preoperative prediction of the risk of LNM and have been proven to be effective [14][15][16]. Therefore, this study aims to use the pathological and demographic information contained in the SEER database to determine the LNM risk factors for IGBC and to establish a nomogram model for predicting the incidence rate of LNM at the early stage of IGBC before re-resection.
Data collection
The SEER (Surveillance, Epidemiology, and End results) database is currently the largest publicly available cancer database, covering approximately 28% of the US population [3]. The National Cancer Institute's SEER*Stat software (8.3.6 version) was used to collect data. The inclusion criteria were as follows: (1) site record: C23.9, according to the Third Edition of International Classification of Diseases for Oncology (ICD-O-3); (2) pathological type: adenocarcinoma or squamous cell carcinoma; (3) T stage classified as T1a, T1b, T2 and N stage classified as N0 and N1 according to 6th edition AJCC staging system; (4) underwent surgery; (5) at least 1 regional lymph node examined; and (6) no preoperative radiotherapy. After the inclusion, patients were excluded if their information regarding tumor size or tumor differentiation was unknown. We also excluded patients diagnosed with M1 stage, for whom surgery was not suitable [17].
We extracted the demographic and clinicopathologic data of patients with T1/2 GBC from the SEER database for model development and validation, including age, sex, race, tumor size, histology, differentiation, depth of invasion, and number of lymph nodes examined.
The whole dataset from the SEER database was randomly partitioned into a training set and a validation set, which included 70 and 30% of the dataset, respectively. To give each record the same chance of being assigned to the training set or the validation set, a simple random sampling method was used for allocation. Specifically, we installed the caret package in R software version 3.6.2 and then loaded the foreign, survival and caret packages. The last step was to run the packages with specific code, which is attached in our Supplementary Material.
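For readers working outside R, a roughly equivalent 70/30 simple random split can be sketched in Python; this is an analogue of the authors' caret-based partition, not their actual code, and the index array merely stands in for the 796 eligible cases.

```python
import numpy as np
from sklearn.model_selection import train_test_split

# Indices standing in for the 796 eligible SEER cases.
cases = np.arange(796)
train_idx, valid_idx = train_test_split(cases, test_size=0.30, random_state=42)
print(len(train_idx), "training cases,", len(valid_idx), "validation cases")
```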
Statistical analysis
Correlations between the clinicopathological characteristics of patients and LNM were analyzed using Pearson's chi-square test or Fisher exact test when needed. To identify factors that were associated with LNM, binary logistic regression analysis was used for univariate and multivariable analyses. Odds ratios (ORs) were presented with 95% CIs. Preoperatively available variables were included in the logistic regression analysis. To construct a well-calibrated and discriminative nomogram for predicting LNM, a model was developed in a training set and then validated in the validation set. A logistic regression model was used to construct the nomogram with a backward stepwise procedure. Variables with P < 0.05 were included in the nomogram.
Nomogram performance was quantified with respect to discrimination and calibration. Discrimination (the ability of a nomogram to separate patients with different lymph node statuses) was quantified by concordance indexes (C-indexes) and the area under the receiver operating characteristic (ROC) curve (AUC). Calibration was assessed graphically by plotting the relationship between the actual (observed) probabilities and predicted probabilities (calibration plot) with the bootstrapping method (1000 replications). Clinical usefulness and net benefit were estimated with decision curve analysis (DCA).
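The modeling and validation pipeline can be illustrated with a compact Python sketch using simulated data; the three binary predictors mirror the variables retained in the paper (size >1 cm, T2 stage, poor/undifferentiated), but the simulated effect sizes, sample handling, and scikit-learn implementation are assumptions and do not reproduce the reported C-indexes.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.calibration import calibration_curve

# Simulated stand-in cohort with three binary predictors and an LNM label.
rng = np.random.default_rng(0)
n = 796
X = rng.integers(0, 2, size=(n, 3)).astype(float)
logit = -2.5 + 1.3 * X[:, 0] + 2.4 * X[:, 1] + 0.75 * X[:, 2]  # made-up effects
y = rng.binomial(1, 1.0 / (1.0 + np.exp(-logit)))

X_tr, X_va, y_tr, y_va = train_test_split(X, y, test_size=0.30, random_state=1)
model = LogisticRegression().fit(X_tr, y_tr)

# Discrimination: AUC equals the C-index for a binary outcome.
for name, Xs, ys in (("training", X_tr, y_tr), ("validation", X_va, y_va)):
    p = model.predict_proba(Xs)[:, 1]
    print(name, "AUC:", round(roc_auc_score(ys, p), 3))

# Calibration: observed vs predicted risk in probability bins.
obs, pred = calibration_curve(y_va, model.predict_proba(X_va)[:, 1], n_bins=5)
print(np.column_stack([pred, obs]))
```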
Statistical analyses of correlations between clinicopathological characteristics were conducted using SPSS version 24.0 (IBM, NY, US). The partition of dataset, logistic regression analysis, construction and performance quantification of nomogram and DCA were conducted using R statistical software version 3.6.2. All tests were two-sided, and P < 0.05 was deemed significant.
Factors associated with preoperative LNM
As shown in Table 2, the logistic regression model was used to further verify the effectiveness of the included factors. Univariate analysis showed that tumors with a diameter > 1 cm, stage T2, and poor/undifferentiation were closely related to LNM. Multivariate analysis further confirmed that tumors with a diameter > 1 cm (OR = 3.628, 95% CI: 1.770-7.437), stage T2 (OR = 11.104, 95% CI: 2.590-47.597), and poor/undifferentiation (OR = 2.110, 95% CI: 1.184-3.762) were independent factors influencing LNM. Based on the OR value, T2 stage was the most correlated, followed by the tumor diameter and then the degree of differentiation. Age, sex, race and pathological pattern were not significantly correlated with LNM.
Validation of the model
The nomogram demonstrated good accuracy for predicting positive lymph nodes, with a C-index of 0.718 (95% CI, 0.676-0.760) and an AUC of 0.702 (95% CI, 0.659-0.702). The calibration plot presented good agreement between the bias-corrected prediction and the ideal reference line with an additional 1000 bootstraps (mean absolute error = 0.02) (Fig. 2a, c).
The C-index and AUC of the validation set were 0.73 (95% CI: 0.665-0.795) and 0.692 (95% CI: 0.625-0.759), respectively, which revealed good concordance and reliable ability to estimate the status of lymph node involvement. The calibration plot of validation also demonstrated good agreement between the bias-corrected prediction and the ideal reference line with an additional 1000 bootstraps (mean absolute error = 0.035) (Fig. 2b, d).
Comparison between different prediction methods
Comparisons between different prediction methods were conducted by decision curve analysis. The decision curve has the ability to show the clinical usefulness of each method based on a continuum of potential thresholds for LNM risk (x-axis) and the net benefit of using the model to risk stratify patients (y-axis) relative to assuming that no patient will have LNM. Figure 3 reveals that the nomogram provided the largest net benefit across the range of LNM risk compared with the methods using tumor size, differentiation and T-stage alone.
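Decision curve analysis rests on the standard net-benefit formula, net benefit = TP/n − FP/n × pt/(1 − pt), evaluated across threshold probabilities pt. A small sketch is given below with made-up outcome and probability vectors; the 0.02–0.63 threshold span mirrors the range reported for the nomogram, while everything else is an assumption for illustration.

```python
import numpy as np

def net_benefit(y_true, y_prob, thresholds):
    # Net benefit = TP/n - FP/n * pt / (1 - pt) at each threshold probability pt.
    y_true = np.asarray(y_true)
    y_prob = np.asarray(y_prob)
    n = len(y_true)
    nb = []
    for pt in thresholds:
        treat = y_prob >= pt
        tp = np.sum(treat & (y_true == 1))
        fp = np.sum(treat & (y_true == 0))
        nb.append(tp / n - fp / n * pt / (1.0 - pt))
    return np.array(nb)

# Toy usage; in a real decision curve the 'treat all' and 'treat none'
# strategies are plotted alongside the model for comparison.
rng = np.random.default_rng(3)
y = rng.binomial(1, 0.3, size=200)
p = np.clip(0.3 + 0.25 * (y - 0.3) + rng.normal(0.0, 0.15, size=200), 0.01, 0.99)
thresholds = np.linspace(0.02, 0.63, 10)
print(net_benefit(y, p, thresholds).round(3))
```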
Discussion
GBC is a highly occult cancer with no obvious clinical manifestations in its early stage [3]. With the development of laparoscopy, an increasing number of stage T1/2 IGBCs can be detected via pathological biopsy after LC [6]. For IGBCs, postoperative pathological evaluations need to be completed in combination with imaging for re-resection [18,19].
For patients with LNM, lymphadenectomy is an important part of radical resection, and all positive lymph nodes need to be cleared [20]. Although high-quality lymph node dissection has been emphasized, preoperative lymph node examination was seriously insufficient, based on the finding that the lymph node resection rates for T1a, T1b, and T2 GBC were only 33.6, 39.2, and 53.7%, respectively, according to this SEER-based study [7]. Although current NCCN guidelines recommend radical surgery for all patients with GBC at stages T1b and above [18], several studies have concluded that patients with T1b and T2 stages might not require radical surgery [21][22][23][24]. However, because some studies have shown that LNM is closely related to the malignant phenotype of early-stage GBC [25,26], we believe that patients diagnosed with LNM preoperatively should receive more aggressive surgical treatment and more extensive lymph node dissection than patients without LNM.
CT is the most commonly used clinical imaging method [27]. Although CT can accurately show the invasion of tumors into blood vessels and adjacent organs, its accuracy for the identification of LNM is very low [28]. Some studies have shown that more than half of the positive lymph nodes existing among GBC patients cannot be detected by preoperative CT examination [24,27,29]. Unfortunately, neither MRI nor PET-CT is a good supplement to CT [28,30,31]. The present study may be combined with clinical imaging to further improve the estimation of the risk of LNM, which is conducive to clinicians choosing the most suitable surgical methods for patients. Among the cases of GBC included in this study, the LNM rate was 7% for stage T1a, 11.1% for stage T1b, and 44.3% for stage T2. For a variety of early primary cancers of the digestive tract, such as gastric cancer [14], appendiceal cancer [15], and colon cancer [16], the SEER database has been used to establish nomograms for predicting the risk of LNM. In this paper, the SEER database was used to predict the risk of LNM in IGBC and to construct a nomogram. In the present study, tumor diameter, tumor differentiation degree and T stage were independent factors influencing metastasis, of which T stage was the most significant factor. Compared with stage T1a, the risk of LNM at stage T2 was approximately 11 times higher. The second most significant factor was tumor diameter: when the tumor diameter was greater than 1 cm, the risk of LNM was approximately 3.6 times higher. According to the nomogram, there was little difference in the risk of LNM when the tumor diameter was greater than 1 cm, but the risk was reduced when the tumor diameter was greater than 4 cm. The least significant factor was tumor differentiation; the risk of LNM in poorly differentiated or undifferentiated patients was only twice as high as that in well-differentiated patients. Gallbladder adenocarcinoma (76-90%) and squamous cell carcinoma (2-10%) are the two most common pathological patterns of GBC, and the prognosis of squamous cell carcinoma is worse than that of adenocarcinoma [32]; however, our study indicated that there was no significant correlation between pathological pattern and LNM. We believe that there are two possible explanations: (1) according to the relevant literature, squamous cell carcinoma is more likely to invade the liver than to metastasize to lymph nodes [33], which may further support the lack of correlation between pathological pattern and LNM; and (2) the number of T1/2 squamous cell carcinomas is too small to reach statistical significance. Considering the low incidence rate of GBC, few single-center studies have previously used clinical data to predict the risk of LNM in early GBC. Therefore, we used DCA to compare the differences in predictive power among the nomogram and the included univariate models. According to Fig. 3, the probability thresholds of differentiation, T stage, tumor size and the nomogram are 0.23-0.49, 0.03-0.45, 0.28-0.51 and 0.02-0.63, respectively. The curve of T stage is very close to that of the nomogram containing three factors, but the probability threshold range of T stage is narrower than that of the nomogram. When the risk is decided at a probability threshold lower than 0.38, the T-stage curve and the nomogram curve almost overlap, which indicates that the two prediction models have almost the same net benefit within this range, both higher than the reference line. However, when the risk is decided at a threshold higher than 0.38, the net benefit of T stage is not as good as that of the nomogram.
A comparison between tumor size and differentiation shows that when the risk is decided at a probability threshold of 0.23-0.28, the net benefits of tumor size and differentiation are very close and nearly equal to the reference line; when the risk is decided at a probability threshold of 0.28-0.35, the net benefits of the two are still very close, but higher than the reference line; when the risk is decided at a probability threshold between 0.35 and 0.4, the net benefit of differentiation is relatively high; and when the risk is decided at a threshold higher than 0.4, the net benefit of tumor size is less than 0 while the differentiation model retains a prediction ability higher than that of the tumor size model. However, the net benefits of these two models within their probability thresholds are both smaller than that of the nomogram. To sum up, although the univariate models have some predictive power, DCA shows that the nomogram predicts accurately over a wider range.
For GBC patients with LNM, existing studies recommend cholecystectomy and lymph node dissection for patients at stage T1a [34], and radical surgery for patients at stage T1b/T2 [26]. The total score calculated by the nomogram corresponds to the risk of LNM. Zhu et al. [35] proposed that patients with a predicted risk of LNM of ≤ 5.0% be considered low risk, those with a predicted risk of 5-15% intermediate risk, and those with a predicted risk of > 15% high risk. Combining these conclusions with our study, we suggest that patients in the low-risk group could choose long-term follow-up and that patients in the high-risk group should be recommended for re-resection; for those in the intermediate-risk group, long-term follow-up could be chosen, although a recommendation of re-resection should still be considered. Take a T1b IGBC patient as an example. In clinical practice, if a patient pathologically diagnosed with T1b IGBC after LC shows poor compliance with re-resection and, at the same time, no LNM is found by imaging, which is considered to have a low ability to detect LNM [27,28,30,31], the clinician will be caught in a dilemma over whether a re-resection is needed. In this case, the clinician may use our nomogram to make a decision. If the patient is pathologically confirmed to have a poorly differentiated or undifferentiated tumor with a diameter between 3 and 4 cm, the total score will be 113, the corresponding risk of LNM is nearly 19%, and the patient is allocated to the high-risk group; the clinical suggestion is that he or she should undergo radical re-resection. In contrast, if the T1b patient has a well-differentiated tumor with a diameter of less than 1 cm, the total score will be 20, the risk of LNM is nearly 3%, and the patient is allocated to the low-risk group; the clinical suggestion is that he or she could choose to follow up regularly.
We must recognize the limitations that may exist in our study. First, all selected patients received lymph node biopsy, and the median number of lymph nodes inspected in the training set was 2 (IQR: 1-5); however, the effect of selection bias between LN+ and LN- cases due to the non-randomized nature of this study cannot be excluded. Steffen et al. [7] claimed that retrieval of even a few lymph nodes reliably predicts the lymph node status, which may compensate for this bias. Second, previous studies have concluded that age < 60, elevated CA199 levels [27], and hepatic-sided tumors [36] can also be used for predicting LNM. However, in this study, age was not necessarily associated with LNM, and this study lacked information concerning the preoperative diagnosis of CA199 and tumor location, which may have led to insufficient influencing factors. Last but not least, the data in the SEER database originate from different sources and hospitals [3], so our study can be considered a multicenter study. However, GBC shows regional differences in incidence [37]. Although the nomogram constructed in this study was validated internally and externally with good prediction ability, in our view the generalization ability of the nomogram still needs to be verified with clinical data other than the SEER database. Therefore, we hope that in the future, a large sample of GBC patients from different regions can be obtained to construct a nomogram using the three variables selected in this study for further external validation, as well as measurement of the generalization ability of the nomogram.
Despite the limitations above, this large-sample study predicts LNM with good discrimination and calibration in both the training and validation cohorts. The nomogram constructed in this study visualizes the risk factors and could better guide clinical decisions.
Conclusion
In conclusion, based on the clinical risk factors identified in a large population-based cohort, we established the first practical nomograms that could objectively and accurately predict the individualized risk of LNM for IGBC patients who required re-resection. Moreover, the validation set results demonstrated that the nomograms performed well and had high accuracy and reliability. Our nomogram was demonstrated to be clinically useful in DCA, and it made up for the inaccuracy of imaging.
Therefore, these results could help clinicians improve individual treatment and make clinical decisions regarding patients with T1/2 stage IGBC. | 2020-09-01T13:38:07.778Z | 2020-08-31T00:00:00.000 | {
"year": 2020,
"sha1": "1801f58939e792a11fba77efd4dd3f554f8e2299",
"oa_license": "CCBY",
"oa_url": "https://bmccancer.biomedcentral.com/track/pdf/10.1186/s12885-020-07341-y",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "d8913453af939c428c08894ccc18f3b19eaedd38",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
5118422 | pes2o/s2orc | v3-fos-license | A comprehensive review of neuroanatomy of the prostate
Although oncologic efficacy is the primary goal of radical prostatectomy, preserving potency and continence is also important, given the indolent clinical course of most prostate cancers. In order to preserve and recover postoperative potency and continence after radical prostatectomy, a detailed understanding of the pelvic anatomy is necessary to recognize the optimal nerve-sparing plane and to minimize injury to the neurovascular bundles. Therefore, we reviewed the most recent findings from neuroanatomic studies of the prostate and adjacent tissues, some of which are contrary to the established consensus on pelvic anatomy. We also described the functional outcomes of radical prostatectomies following improved anatomical understanding and development of surgical techniques for preserving the neurovascular bundles.
INTRODUCTION
Prostate cancer is the most common cancer among men, and approximately 2.8 million men are estimated to have a history of prostate cancer in the United States [1]. Currently, prostate cancer can be detected in patients because of its association with high prostate specific antigen levels, thus allowing early diagnosis and prolonging survival after diagnosis [2]. This, in turn, has increased the number of candidates for radical prostatectomy, with the intention to cure prostate cancer while minimizing the risk of urinary incontinence and erectile dysfunction [3].
Neuroanatomy of the prostate is important owing to its relationship with postoperative functions of continence and potency. Initially, Walsh's anatomic nerve-sparing technique in 1982 was based on the idea that the neurovascular bundles (NVBs) are situated posterolaterally and symmetrically to the prostate in the space defined by the levator fascia, prostatic fascia, and Denonvilliers' fascia [4]. In the past few decades, several anatomic studies have provided deeper insight into the neuroanatomy of the prostate and adjacent tissue, which formed the basis for ensuring good oncologic and functional outcomes after radical prostatectomy. This article summarizes the most recent findings from neuroanatomic studies, some of which are contrary to the established consensus on pelvic anatomy. We also described the functional outcomes of radical prostatectomies following improved anatomical understanding and development of surgical techniques for preserving the NVBs.
EXPANSION OF NEUROANATOMIC STUDIES
In 1982, Walsh and Donker [4] introduced the nerve-sparing radical prostatectomy procedure to preserve cavernous nerves situated posterolaterally and symmetrically to the prostate. This
technique has inspired greater acceptance of the surgical approach for prostate cancer therapy and came to be used globally. Since then, however, there has been an ongoing debate about the course of these cavernous nerves [5][6][7][8] (Table 1). The precise relationship of the NVBs and cavernous nerves to Denonvillier's fascia has been questioned by Kourambas et al. [5]. Costello et al. [9] expanded on Walsh's initial efforts by using cadaver models to further detail the precise anatomy of the NVBs because of their close relation with the prostate and seminal vesicles (Fig. 1). They identified 3 functional components of the NVBs. The posterior and posterolateral component runs within Denonvillier's fascia and the pararectal fascia and innervates the rectum. A second component in the lateral NVB supplies the levator ani. The cavernosal nerves and prostatic neurovascular supply, the third component originally described by Walsh and Donker [4], lie along the posterolateral surface. The organization of these nerve bundles is rather disordered at the base of the prostate and at the seminal vesicles, further showing the complexity of the NVBs and the challenges of performing a technically sound nerve-sparing procedure [9]. Takenaka et al. [6] confirmed that branches of the hypogastric nerve and pelvic splanchnic nerve are likely to interdigitate at multiple levels, showing a spray-like arrangement without clear bundle formation (Fig. 2). In addition, Lunacek et al. [7] demonstrated that the cavernous nerves running along the prostate are displaced more anteriorly and disperse along the convex surface of the prostatic capsule (like a "curtain") during the growth of the prostate. From these anatomical findings, they proposed a "curtain dissection" technique, in which the incision of the periprostatic fascia and dissection of the NVBs is far more anterior than previously described. Furthermore, Menon et al. [8] described a technique for preserving the lateral prostatic fascia containing NVBs, the "Veil of Aphrodite." On the basis of these studies, the high anterior release, "Veil of Aphrodite," or "Superveil" techniques have been developed for preserving the maximum number of nerve fibers [10][11][12].
Table 1. Studies on the course of the cavernous nerves
Scattered nerves throughout the Denonvilliers' fascia, including medially towards the midline
Costello et al. [9]: Three functional components of the NVBs
Takenaka et al. [6]: Spray-like arrangement of nerves without clear bundle formation
Era of wider nerve sparing
Lunacek et al. [7]: "Curtain dissection": dispersion of cavernous nerves along the prostatic capsule
Menon et al. [8]: "Veil of Aphrodite": lateral prostatic fascia containing NVBs
NVB, neurovascular bundle.
DISTRIBUTION OF PERIPROSTATIC NERVES
Recent anatomic studies have shown the variable degrees of periprostatic nerves both in the dorsolateral and ventrolateral positions [13][14][15][16]. Eichelberg et al. [13] illustrated that, while most periprostatic nerves were found posterolaterally as initially described, a significant portion of the nerves (21.5-28.5%) were located on the anterior surface. Similarly, Lee et al. [16] investigated the pattern of distribution of nerves surrounding the prostate by analyzing specimens from non-nerve-sparing radical prostatectomies (Fig. 3). Significant proportions (19.9-22.8%) of the total nerves were located on the anterior side of the prostate. NVBs with a relatively round, bundle-like formation were observed in approximately half the cases; in other cases, NVBs were more widely spread as they extended anteriorly.
In a study using whole-mount sections of non-nerve-sparing radical prostatectomies, Ganzer et al. [14] used novel computerized planimetry software to characterize the topographical anatomy of periprostatic and capsular nerves [15]. The percentage of total nerve surface area was highest dorsolaterally (84.1%, 75.1%, and 74.5% at the base, middle, and apex, respectively), but this finding was variable. Up to 39.9% of nerve surface area was found ventrolaterally, with up to 45.5% in the dorsal position. However, the dilemma is a product of growing evidence on the anatomic distribution of NVBs without any clear understanding of their role in the physiology of erectile function. Since the periprostatic nerve fibers had not been proven to be involved in erection, Kaiho et al. [17] provided electrophysiologic evidence to clarify the role of these fibers. Although the largest amplitudes of pressure responses were induced by stimulation at the 5-o'clock position, electrical stimulation at all positions of the midprostate (between 1- and 5-o'clock) evoked cavernosal pressure responses in all patients.
Although the existence of ventrolateral periprostatic nerves has been confirmed, detailed knowledge of the type of nerve fibers innervating the prostate is important in understanding the pathophysiology and functional consequences. Alsaid et al. [18] demonstrated the location and type of nerve fibers within the NVBs and provided a three-dimensional representation of their structural relationship in male fetus. The threedimensional reconstruction illustrated that nerve fibers were derived from the inferior hypogastric plexus, providing cholinergic, adrenergic, and sensory innervation to seminal vesicles, vas deferens, prostate, and urethral sphincter in a fanlike formation. However, in their cadaver study, Costello et al. [19] reported that functionally significant parasympathetic nerve fibers accounted for 4%, 5%, and 6.8% of the nerves located on the anterolateral aspect of the prostate at the base, mid, and apex, respectively. Ganzer et al. [20] recently confirmed this finding using topographic distribution of periprostatic nerves, including immunohistochemical differentiation of proerectile parasympathetic from sympathetic nerves. They found that parasympathetic nerves were dispersed at the base and were mainly located dorsolaterally at the apex, with 14.6% above the horizontal line at the base and only 1.5% at the apex. Thus, no consensus has been reached on the anatomic evidence for supporting high anterior incision in the lateral prostatic fascia in order to spare the cavernous nerve fibers.
FASCIAL ANATOMY OF THE PROSTATE
The fascial anatomy near the prostate is not well understood anatomically, and many urologists have not reached a consensus on its nomenclature (Fig. 4). The endopelvic fascia comprises multilayered connective tissue that encases and supports the prostate and bladder and provides adherence to the pubic bone by the puboprostatic ligaments. The parietal and visceral components of the endopelvic fascia are fused along the pelvic sidewall at the lateral aspect of the prostate and bladder. This fusion is often recognizable as a whitish line and is named the fascial tendinous arch of the pelvis [21]. The prostatic fascia directly covers the prostate, forming an intrafascial plane between this fascia and the prostate capsule. The levator ani fascia is immediately exterior to the prostatic fascia and serves as the boundary for an interfascial plane. After the endopelvic fascia is opened laterally to the fascial tendinous arch and the levator ani muscle is deflected laterally, the outermost fascial layer on the lateral surface of the prostate, the levator ani fascia, is observed [22]. Both the levator ani fascia and prostatic fascia constitute the periprostatic fascia for the operating surgeon. The posterior surface of the prostate and the seminal vesicles are closely covered by a continuous layer of the posterior prostatic fascia and seminal vesicle fascia, known as Denonvillier's fascia. Dissection along these avascular planes preserves the NVBs, as the majority of the NVBs are thought to run between the anterior extension of Denonvillier's fascia and the levator ani fascia. A thorough understanding of these planes is crucial for performing an anatomic dissection, while avoiding mechanical and thermal injury to the NVBs.
DEVELOPMENT OF NERVE-SPARING TECHNIQUES
Several techniques have been proposed to optimize the preservation of erectile function on the basis of the anatomic principles summarized above. In particular, the intraoperative magnification offered by robotic surgical systems enables identification and preservation of periprostatic fascial planes that have nerve fibers [23]. Interfascial dissection of NVBs involves a dissection lateral to the prostatic fascia at the anterolateral and posterolateral aspects of the prostate, combined with a dissection medial to the NVB at the 5-o'clock and 7-o'clock positions or the 2-o'clock and 10-o'clock positions of the prostate in axial section [24,25]. Depending on individual anatomic variations, the NVBs might be more prone to partial resection with this technique. According to the experience gained from intrafascial nerve-sparing prostatectomy, Stolzenburg et al. [26,27] emphasized the importance of the dissection depth for preserving NVBs. The intrafascial technique is a dissection that follows a plane on the prostate capsule, remaining medial to the prostatic fascia at the anterolateral and posterolateral aspect of the prostate and anterior to Denonvillier's fascia.
Tewari et al. [28] studied the neuroanatomy of the pelvic erectile nerves as relevant to robotic radical prostatectomy. They grouped important neural structures into the proximal neurovascular plate (PNP), the predominant NVB (PNB), and the accessory neural pathways (ANPs). The PNP, located lateral to the bladder neck, seminal vesicles, and branches of the inferior vesical vessels, processes and relays erectogenic neural signals. The PNB is the classical bundle that carries neural impulses to the cavernosal tissue, and ANPs are the putative accessory neural pathways around the prostate, other than the PNB, that might be additional conduits for neural impulses. These authors described a hammock-like distribution of the nerves on which the prostate rests, showing that the NVB is more of a network of multiple fine dispersed nerves than a distinct structure. Because the classical nerve-sparing approach will sacrifice most of the proximal and posterior extensions of the neurovascular tissue, the neural zones around the prostate have important implications in robotic radical prostatectomy. They proposed a novel risk-stratified nerve-sparing approach for determining the degree of nerve sparing based on the observation of venous distribution over the prostate and periprostatic fascial planes [29]. They reported that patients with greater degrees of nerve-sparing had higher rates of intercourse and return to baseline sexual function [29], and early return of urinary continence without compromising oncologic safety [30].
Similarly, Schatloff et al. [31] described a nerve-sparing grading system based on the arterial periprostatic distribution on the posterolateral aspect of the prostate. The landmark artery, which could be either a prostatic or a capsular artery, is located approximately 2-3 mm outside the capsule and can be used as a visual cue to delineate the extension of the resection of the NVBs. They independently graded nerve sparing on either side (1, no nerve sparing; 2, < 50% nerve sparing; 3, 50% nerve sparing; 4, 75% nerve sparing; 5, ≥ 95% nerve sparing), and found that the side-specific positive surgical margin rates according to the nerve-sparing score were 3.6% for grade 5, 7.5% for grade 4, 16.7% for grade 3, 5.7% for grade 2, and 0% for grade 1.
CLINICAL OUTCOMES
The aforementioned studies have improved anatomical understanding, the development of surgical techniques for preserving periprostatic nerves, and functional outcomes, while simultaneously preserving the oncological goals after radical prostatectomy. Potency rates after radical prostatectomy are influenced by numerous factors including baseline characteristics, nerve-sparing extension and techniques, and the definition of potency. A recent meta-analysis revealed a progressive increase in potency rates with follow-up after radical prostatectomy [32]. Different modifications of the initial nerve-sparing technique have been described, which reflect improvements in anatomic understanding. Ahlering et al. [33] described a cautery-free nerve-sparing procedure that significantly improved early return of potency (47% vs. 8.3%, P < 0.001). Menon et al. [8,10] described the "Veil of Aphrodite" or "superveil" technique, in which the prostatic fascia is dissected to the prostatic surface and the periprostatic tissue is released in a relatively avascular plane. With the "superveil" technique, 94% of men who attempted sexual intercourse were successful at 6-18 months after radical prostatectomy. A study of Tewari's risk-stratified approach to athermal, traction-free nerve sparing reported that increased nerve sparing corresponds to increased percentages of patients with postoperative recovery of potency [34]. In their study, patients who underwent nerve-sparing grade 1 had a potency rate of 92.4% with a positive surgical margin rate of 10.5%.
The role of NVB preservation for urinary control is particularly controversial. Recent studies, however, have shown a relationship between the urinary continence recovery and nerve sparing. Choi et al. [35] reported that bilateral nerve-sparing prostatectomy improved postoperative urinary functions and was associated with improved continence at 4 months (47.2% vs. 26.7%, P = 0.043), but not at 12 or 24 months. Similarly, Ko et al. [36] demonstrated that the probability of continence recovery within 3 months was significantly higher for the partial nerve-sparing and bilateral nerve-sparing groups with a shorter time to recovery of continence, compared with the non-nerve-sparing group. Gandaglia et al. [37] reported that preoperative erectile function should be considered in predicting urinary continence after bilateral nerve-sparing radical prostatectomy. Since erectile function depends on systemic vascular status [38], it may also represent a marker of pelvic vascular disease, which may subsequently affect the status of the external urinary sphincter. In their study, patients who were fully potent before surgery had a higher probability of urinary continence recovery than patients with any degree of preoperative erectile dysfunction.
FUTURE DIRECTIONS
There are several novel techniques for improving the efficacy of a nerve-sparing procedure during radical prostatectomy without sacrificing any degree of cancer control. Multiphoton microscopy for real-time tissue imaging of the prostate and periprostatic neural tissue obtains high-resolution images of the prostate capsule, underlying acini, and individual cells outlining the glands at varying magnifications [39]. Tewari et al. [40] reported that multiphoton microscopy of freshly excised, unprocessed, and unstained tissue can identify all relevant prostatic and periprostatic structures and also pathological changes that were validated in pathologic examinations. Moreover, to aid the identification and preservation of the NVBs, numerous imaging modalities, including optical coherence tomography [41] and fluorescent peptides [42], are currently under investigation for assessing possible roles in the development of a more individualized anatomic nerve-sparing radical prostatectomy. These technologies, as well as accurate knowledge of the neuroanatomy of the prostate, will reveal the course of the nerves and sites of nerve branching otherwise not grossly visible during radical prostatectomy. | 2018-04-03T05:45:00.152Z | 2013-12-01T00:00:00.000 | {
"year": 2013,
"sha1": "57e31f056394df4ba357275fd252c8950bb892c4",
"oa_license": "CCBYNC",
"oa_url": "https://doi.org/10.12954/pi.13020",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "57e31f056394df4ba357275fd252c8950bb892c4",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
117586587 | pes2o/s2orc | v3-fos-license | TEACHING MATERIALS AND TECHNIQUES NEEDED BY FOREIGN STUDENTS IN LEARNING BAHASA INDONESIA
The study outlined in this article aimed to describe teaching materials and techniques needed by foreign students in learning Bahasa Indonesia. In learning BIPA, teaching materials and learning techniques are important aspects that need to be selected and organized seriously. The characteristics and objectives of BIPA learners are the main factors that need to be considered and understood by BIPA teachers. In this case, BIPA teachers must master the pedagogical norm of BIPA learning. Understanding of pedagogical norms will lead BIPA teachers to be able to determine the materials and learning techniques in accordance with the conditions of BIPA students.
INTRODUCTION
Learning Bahasa Indonesia for foreign students (BIPA) is a potential vehicle for introducing and promoting Indonesian society and culture to foreigners. Therefore, the BIPA learning program needs to be well designed. BIPA institutions have to think seriously about and manage the pedagogical norms of BIPA learning that can guide the teaching of Bahasa Indonesia more professionally. One of these pedagogical norms is the selection of teaching materials and of the techniques used to present them.
Pedagogical norms in the selection of teaching materials are essential for BIPA teachers and materials developers in their effort to incorporate aspects of culture and language into the learning program and present them to foreign students. These pedagogical norms involve the study of cultural norms and of the actual use of the language, and their implementation for pedagogical purposes. Such an activity ranges from the design of the material that will be taught to the creation of classroom activities.
THE CHARACTERISTIC OF BIPA STUDENTS
BIPA students are foreign students from various countries. Therefore, they have language and cultural backgrounds that are different from the language and culture of Indonesia. In addition, their knowledge and skills in the Indonesian language are varied. In fact, their learning styles and strategies are highly variable and highly dependent on their respective cultures.
In teaching and learning BIPA, language and cultural differences have consequences for the selection of the Indonesian material that will be taught. In the early stages of learning BIPA, foreign students are still heavily influenced by the first language, culture, and learning styles they have already acquired. Mastery and acquisition of Indonesian by BIPA students are strongly influenced by the first language. Lee, quoted by Ellis (1986:23), said that the only cause of difficulties and errors in learning a second or foreign language is the influence of the students' first language. At this stage, the Indonesian controlled by foreign students is characterized by the presence of interference from the first language. However, this interference will gradually be reduced until the students ultimately achieve a mastery of Indonesian similar to that of native speakers.
One of the problems in learning a foreign language is the gap between the first language and the target language to be learned. This often happens because of a lack of knowledge of the target language by foreign language learners. In general, it can be said that the wider the gap, the more difficult the learning process, and the narrower the gap, the easier the learning process. Grabe (1986) said that the problems of learning a foreign language emerge as a result of the linguistic and sociocultural differences between the first language and the target language. In this situation, the teaching techniques used and the selection of functional materials have a very important role in determining the success of the BIPA learning process. Therefore, the use of authentic materials will help students, especially those who are not familiar with the target language (Heritaningsih, 2007).
One part that is often forgotten in teaching BIPA is the component of Indonesian culture. BIPA students often experience a culture clash when they encounter Indonesian culture in BIPA teaching materials. Authentic materials can be taken from real events in the community, newspapers, television news footage of events in Indonesia, radio programs, menus in restaurants, advertising, and so on. Armed with these materials, BIPA students are expected to develop awareness of Indonesian culture and to actualize themselves appropriately in the Indonesian language.
THE LEARNING OBJECTIVES OF BIPA
The purpose of foreign students in studying BIPA is to gain facility in the Indonesian language and to know the culture of Indonesia closely. Fluency in the Indonesian language is required by them. These capabilities are needed because they (a) take an Indonesian program at their university, (b) will conduct research in Indonesia, (c) will work in Indonesia, or (d) will stay in Indonesia for a long time. This overview of the purposes of study has implications for the preparation of BIPA learning materials.
Thus, BIPA learning materials have to be selected so that they have a close connection with the objectives and needs of BIPA students.
Mackey and Mountford (in Sofyan, 1983) explained that there are three requirements that drive a person to learn a language, namely (1) the need for employment, (2) the need for vocational training programs, and (3) the need to learn. Furthermore, Hoed (1995) stated that the BIPA program aims to enable students to (1) attend learning classes in Indonesia, (2) read books and newspapers for research purposes, and (3) communicate orally in daily life in Indonesia. Each of these three purposes can be further divided into more specific purposes; for example, attending college in Indonesia requires knowledge of Indonesian in the field of study being followed (social sciences, engineering sciences, economics, and so on). Similarly, the purpose of research depends on what areas are to be studied, where the research will be done, who the subjects of the research are, and so on. Learning the Indonesian language spoken by the population for purposes of communication also requires specialization, such as formal or informal communication. Based on these needs and learning objectives, BIPA materials are selected and prepared in accordance with their relevance to the achievement of those goals.
THE NORMS OF SELECTING BIPA TEACHING MATERIALS
It is important to note that the BIPA learning objectives are to make foreign students eager to learn Bahasa Indonesia and able to use it well in real situations and communication. This statement turns out to be interpreted in various ways by BIPA organizers and teachers. In learning BIPA, we found that there were BIPA teachers who prefer to use formal Indonesian. They prefer to use Indonesian language materials that focus on language structures and apply teaching techniques based on the grammar training model. Elsewhere, BIPA teachers were also found who focus only on language learning activities that use the language in real situations, regardless of the accuracy of the language structures used.
This diversity of direction and orientation has an impact on the choice of teaching materials and on the presentation of BIPA learning activities.
BIPA learning that directs foreign students to use appropriate language structures is influenced by the grammar method. The choice of learning materials is more focused on the rules of the Indonesian language. Reducing language variation by selecting the most common and neutral language features is the basic reference in determining the teaching materials. In determining instructional materials, BIPA teachers choose language features that (a) have a high frequency of use and high acceptance, (b) are widely used, (c) are not too complex to be studied, and (d) change only gradually towards variants that are rarely used, narrower in use, and more complex (Valdan in Magnan and Walz, 2002). In teaching and learning BIPA, foreign students are trained to use these features of the language through listening, speaking, reading, and writing.
BIPA learning is focused on making foreign students able to use the language in real situations. This principle directs learning to use a communicative learning model. In teaching-learning activities, the teacher provides an opportunity for students to use Bahasa Indonesia as much as possible in communication. Pedagogical norms suggest the selection and arrangement of a prioritized sequence of language features for the sake of learning. The data used as language teaching materials are utterances of native speakers in a variety of social contexts, selected based on the needs of the language learners. Because native speech data are very diverse, in the selection of teaching materials Valdan (in Magnan and Walz, 2002) suggested that the language material chosen as teaching materials should (a) reflect the actual utterances of speakers of the target language in authentic communicative situations, (b) be in accordance with idealized language usage by native speakers, (c) be in accordance with the expectations of native speakers and foreign language learners with respect to the type of behavior that fits the needs of foreign students, and (d) take into account factors of the learning process.
With regard to the two principles of BIPA learning described above, to develop ideal BIPA instruction, teachers have to consider both principles proportionately. The selection of learning materials needs to attend to the authenticity and communicativeness of the language data so that the conversations students conduct are genuinely meaningful, addressing real things rather than fictitious exchanges. In this way, learning is easier to follow and materials are more easily understood by students. Pedagogical norms for language learning activities need to emphasize meaning, function, and context.
In building conducive classroom activities, it is necessary to create effective communication between students and teachers. Effective communication is possible if the selected learning material is fully functional for the students. Eskey (1986) explained that lower-ability students need target-language materials that emphasize the identification of form, whereas higher-ability learners need materials that emphasize the interpretation of meaning. For the first group, who normally sit in the beginner class, the use of authentic materials is a very important aspect because it serves to bridge the communication gap between students and teachers; with the proper use of authentic materials, students will be able to communicate more effectively. In the early stages, BIPA learning is directed at encouraging foreign students to be willing and able to express ideas, feelings, and opinions in Indonesian.
To that end, the teaching materials used may be real events that can be observed by the learner, visual media, or texts on topics that are "now" and "here" and can be understood by the learner. At this stage, precision of language structure and the correction of grammatical errors are not yet priorities of learning; nevertheless, the language data selected as material should, wherever possible, be correct according to the rules of the language. At a later stage, when students begin to show the willingness and ability to develop their language, correct usage and the correction of grammatical errors become the focus of attention.
However, in correcting the structural errors of language learners, teachers need to consider (a) the effect of the error on the message, (b) whether the error rate is measured against the error rate experienced by native speakers, and (c) the relationship between the error and the state of the learner's language system.
THE VARIETIES OF BIPA TEACHING MATERIALS
In BIPA learning, the targets emphasize four language skills: listening, reading, speaking, and writing. Listening and reading are receptive skills; speaking and writing are productive skills. Ideal language acquisition includes all four. In reality, however, there are BIPA students who are fluent in speaking Bahasa Indonesia but weak in reading or writing; conversely, there are students who can read a text and write about its contents correctly but have great difficulty conveying their opinions orally (Lado, 1985). To address this problem, BIPA learning needs to develop exercises for the four skills in proportion to the needs of the students.
In BIPA learning aimed at developing listening and speaking skills, the materials can take the form of dialogue. Dialogue materials can address real events that require language students can apply in day-to-day communication. Learning materials in the form of dialogue texts are very useful for improving and enriching the vocabulary of foreign students. Such materials can start from a very simple dialogue, for example, one about greetings in Indonesian; simple dialogues are appropriate for beginners. For advanced students, dialogue materials for developing these two skills should be more complex and use more formal language, and the topics selected should likewise be formal, such as conversations in an office.
To enrich BIPA students' knowledge of Indonesian greetings, it is necessary to introduce a range of greetings that can actually be found in everyday communication. A teaching technique a teacher can use to help students learn a complex dialogue quickly is modeling. The teacher reads the text while students listen and mark new words whose meanings they do not yet understand. After the reading is completed, the teacher gives students the opportunity to identify the difficult new words in the dialogue text, and these words are discussed with the class. Wherever possible, the teacher explains the meaning of new words without translating them into the learner's language, instead interpreting the meaning from context or giving illustrations. To check students' understanding of the new words, the teacher asks them to make sentences with those words. After all the new words in the dialogue text are understood, the teacher has students practice reading the dialogue in pairs. This practice can be followed by assignments in which students replace a statement, answer, or expression in the dialogue text with other statements, answers, or expressions appropriate to the context and to their own wishes. This is important because in real communication students will encounter various forms of statements, answers, and expressions. By being introduced to these various forms through learning experiences, the gap between the use of Bahasa Indonesia in class and in real communication can be bridged.
With reference to the dialogue text that has been learned, classroom activities can proceed with practice in responding spontaneously to any form of question. To activate students and bring out their creativity, teachers ask students to close their books. Questions are not only initiated by the teacher; each student is also assigned to ask and answer questions so that students are more active and learning activities are more varied.
To improve students' communicative ability, classroom activities can include creativity-development practice. Students creatively make statements or questions addressed to their classmates, who then respond. This can be done in pairs so that students create interactive communication with their partners.
BIPA learning with the dialogue model can also be done by presenting dialogue topics alone, without using a pre-composed dialogue. Such topics can be determined by the teacher or the students based on mutual agreement.
In these learning activities, teachers and students discuss the agreed topics, mutually conveying thoughts and opinions from their own perspectives. Presenting material in this way has the potential to explore and develop students' language competence. Because the topics are chosen by the students according to their interests, they come equipped with a wealth of ideas to communicate to one another; in these conditions, students may have difficulty finding the Indonesian words to convey their thoughts. To start a dialogue with this learning model, the teacher first gives an overview of the topic to be discussed. This overview can be delivered through a story, events in a video, pictures, and so on. If the topic comes from a learner, the initial overview can be given by the student concerned. Once all students in the class share an overview of the subject, the dialogue on the topic can begin.
In this learning activity, the content of students' comments, answers, and opinions is not assessed as right or wrong, or good or bad, because content quality is not the focus of learning. What matters most is that students are willing and able to express their opinions using appropriate Indonesian; the focus of attention is therefore the language used by the students rather than the quality of the content. In addition to dialogue materials, listening and speaking can be taught using discourse that exists in everyday speech activities, such as news or conversations on television and radio, or the talk of crowds of people in everyday life.
Learning to listen to news or conversations in electronic media can be done in two ways: (1) teachers and students listen to the news or conversation together in class and then discuss it, with the teacher asking for the students' responses to what they heard, or (2) the teacher assigns students, individually or in groups, to listen to news or conversations outside class and then, in the next class meeting, to report the information and their responses to what they heard. Learning to listen to the conversations of crowds can be done through individual tutorials: a student, accompanied by a tutor, joins a crowd of people who are talking and listens to the information being discussed, then presents the information obtained at the next class meeting.
The various dialogue materials described above need to be adjusted, in complexity and detail, to the conditions and abilities of the students. Advanced students generally already have learning awareness and learn independently; before class, they usually prepare by studying the scheduled learning materials. The teacher must therefore prepare teaching materials as well as possible so that the students' learning motivation is maintained. In teaching reading, when the teacher assigns a text made difficult by many new words, the teacher needs to provide a list of difficult words and their translations at the end of the reading. This helps students understand the content of the text and avoids the fatigue that arises when students face too many comprehension difficulties.
Reading comprehension exercises can be developed through a variety of models: (1) answering questions about the reading, (2) completing blank sections of the text, (3) restating the contents of the reading, (4) summarizing the content in the students' own words, (5) drawing conclusions from the reading, and (6) commenting on its content. These exercise forms can be varied, since students at advanced levels can be assumed to have mastered the language system well enough to process them.
One aspect of language skill remains to be mentioned in this paper, namely writing. Teaching writing can take the form of writing sentences, writing a simple essay, or writing a paper for a seminar in class. When the teacher gives reading exercises and students answer questions about the content in writing, the students have in fact already practiced writing; for example, when students restate the contents of paragraphs and write conclusions about a text, those activities are writing practice. In BIPA learning, students are also sometimes assigned to write an essay, such as a simple report or paper.
In the effort to develop learners' ability to use language correctly and acceptably, grammar learning is still needed. It is intended to make BIPA students well aware of the correct use of Indonesian structures. Such learning provides significant benefits for learners' language improvement and also equips students to understand the texts of scientific books written in Indonesian.
The selection of language material for learning is tailored to the language ability of the BIPA students. BIPA language material for beginner-level students includes greetings, simple everyday phrases, simple sentences, active sentences, passive sentences, negative sentences, prepositions, question words and question sentences, and numerals. BIPA students are also provided with exercises in language accuracy: analyzing faulty sentences and correcting them, and changing the pattern of a sentence without changing its meaning.
Developing and organizing materials needs to be tailored to the students' needs and ability levels. The management of BIPA learning materials therefore needs to attend to three things: (1) the orientation of the material should be directed and focused on material that (a) can be used and practiced, (b) actually exists and is used in the real communication of society, and (c) is able to develop competence in practicing and understanding patterns and to develop an understanding of Indonesian through situational-contextual conversation/dialogue; (2) the range and arrangement of the material should refer to the aspects that determine how Indonesian is used, namely (a) vocabulary, (b) sentence patterns, (c) discourse/conversation, (d) spelling/pronunciation and intonation, and (e) the processing of ideas; and (3) the learning material should be arranged in units of integrated communicative utterances (Suyitno, 2005).
In BIPA learning, students are also introduced to Indonesian culture. The development of cultural material is directed at enriching foreign students' insight into Indonesian culture so they can use it in the daily life of Indonesian society. The cultural principles that need to be introduced to BIPA students are cultural behavior, cultural knowledge, and cultural objects. These cultural materials equip BIPA students to speak Indonesian appropriately to the circumstances, and introducing Indonesian culture in this way fosters a positive attitude among BIPA students toward Indonesian culture (Suyitno, 2017).
Culture is all patterned human activity and its results (Sadtono, 2002:16). In line with this view, culture can be grouped into two broad senses: culture as product and culture as a whole way of life.
In implementing immersion techniques, teachers do not use English with the students; students are encouraged to use Indonesian. If students are once given the chance to speak English, they will always ask for explanations in English. This is consistent with Wolff et al. (1988), who suggested that BIPA teachers consider the following techniques: (1) speak to the students in Indonesian; (2) use words, formations, sentences, and grammar already known to the students; (3) do not give students the opportunity or flexibility to speak English, even if they cannot yet convey meaning in good Indonesian; (4) speak naturally; (5) when a student says a sentence other than the one they meant to say, have them repeat it; (6) treat mistakes made by students as collective errors; and (7) conduct BIPA learning not only in the classroom but also outside it.
Techniques for presenting material through activities outside the classroom include outdoor tasks (going to the bank, the photo studio, the market), visits, interviews with Indonesian students, visits to tourist places, viewing craft objects (puppets, ceramics, masks), watching performances, witnessing ceremonies (weddings, funerals), and so on. This is consistent with Surajaya (1995), who argued that useful tips in teaching BIPA are (1) lecturing, (2) explanation with examples of cultural objects, (3) demonstration and active participation, (4) field visits or excursions, (5) wall magazines, (6) dancing and singing, (7) simulation games, (8) native informants, (9) videotape, (10) audio-motor units, (11) identification of general cultural behavior, (12) identification of cultural connotations, (13) minimization of stereotyping perceptions, and (14) use of authentic literature.
In classroom language learning, the classroom atmosphere determines the success of learning, so a good atmosphere needs to be created. There are several ways BIPA teachers can create such an atmosphere so that teaching and learning proceed well: (1) using humor, (2) changing or providing challenging materials, (3) giving songs, (4) providing puzzles, (5) giving students breaks, and (6) encouraging students to move to another place, for example outside the classroom or to a coffee shop (Suyitno, 2005).
In addition to techniques for presenting material, the way teachers treat students both inside and outside the classroom requires attention in BIPA learning, given that BIPA students are not Indonesian students but foreign students whose cultural backgrounds differ from the instructor's. Techniques for treating BIPA students inside and outside the classroom include (1) showing discipline with respect to time, (2) showing a responsible attitude toward work and assignments, (3) behaving as a friend, (4) showing awareness of the students' language problems, (5) being patient and painstaking, (6) being open, and (7) not showing a lackluster attitude.
CONCLUSION
BIPA learning is significant for the development and promotion of Indonesian culture. Through BIPA learning, foreigners can more easily come to know and study Indonesian culture. BIPA instruction therefore needs to prepare materials and teaching techniques that can meet the learning needs of foreign students, and BIPA teachers need to understand correctly the pedagogical norms of BIPA learning.
Foreign students study BIPA in order to be able to use the Indonesian language. They seek mastery of Indonesian for various purposes, both academic and non-academic. Therefore, the BIPA learning materials chosen for instruction should be materials with the potential to meet those needs, including both language skills and grammar.
In the learning process, BIPA teachers need to use a variety of techniques in order to serve BIPA students with varied cultural backgrounds and language skills. BIPA teachers should therefore truly master the range of techniques that can encourage BIPA students to enjoy learning Indonesian and to learn it successfully. | 2019-04-16T13:25:33.501Z | 2017-09-15T00:00:00.000 | {
"year": 2017,
"sha1": "ad4f688c51077f937b013fc2dc81df9faf07c17f",
"oa_license": "CCBYSA",
"oa_url": "http://journal2.um.ac.id/index.php/jisllac/article/download/1423/746",
"oa_status": "HYBRID",
"pdf_src": "Anansi",
"pdf_hash": "619a3339d1b150a61f69b92e80b14f41f4ae00c9",
"s2fieldsofstudy": [
"Education"
],
"extfieldsofstudy": [
"Engineering"
]
} |
73500925 | pes2o/s2orc | v3-fos-license | A pilot, randomized, double‐blind, placebo‐controlled trial to assess the safety and efficacy of a novel Boswellia serrata extract in the management of osteoarthritis of the knee
A double-blind, placebo-controlled human trial was conducted to evaluate the safety and efficacy of standardized oral supplementation with Boswellin®, a novel Boswellia serrata extract (BSE) containing 3-acetyl-11-keto-β-boswellic acid (AKBBA) together with β-boswellic acid (BBA). A total of 48 patients with osteoarthritis (OA) of the knee were randomized and allocated to the BSE and placebo groups for intervention. Patients were administered BSE or placebo for a period of 120 days. The trial results revealed that BSE treatment significantly improved the physical function of the patients by reducing pain and stiffness compared with placebo. Radiographic assessments showed an improved knee joint gap and reduced osteophytes (spurs), confirming the efficacy of BSE treatment. BSE also significantly reduced serum levels of high-sensitive C-reactive protein, a potential inflammatory marker associated with OA of the knee. No serious adverse events were reported. This is the first study with BSE conducted for a period of 120 days, longer than any other previous clinical trial in patients with OA of the knee. The findings provide evidence that the biologically active constituents of BSE, namely AKBBA and BBA, act synergistically to exert anti-inflammatory/anti-arthritic activity, showing improvement in physical and functional ability and reducing pain and stiffness.
OA of the knee is the most common form of arthritis and has a high prevalence rate compared with other types of OA. Overall, an estimated 43.5% of American adults were affected by arthritis in 2013-2015, reflecting a net increase of about 1 million people per year (Barbour, Helmick, Boring, & Brady, 2017). By 2040, that number is expected to rise to nearly 78.4 million of the projected total adult population (Barbour et al., 2017). One of the primary focuses of OA medication is to reduce pain, and acetaminophen and nonsteroidal anti-inflammatory drugs (NSAIDs) are currently the mainstay of pharmacotherapy for OA of the knee. Unfortunately, many patients do not respond to these treatments, and NSAIDs are known to be associated with gastrointestinal, renal, and cardiovascular risks (Umar et al., 2014). Therefore, the search for herbal agents with anti-inflammatory properties and without adverse side effects for the treatment of OA is on the rise.
Interestingly, the gum resin extracted from the well-known herb Boswellia serrata Roxb. ex Colebr. (family: Burseraceae), also called Indian frankincense or Salai guggal, has been used in traditional Ayurvedic medicine in India for centuries as a remedy for chronic inflammatory diseases, including OA (Ammon, 2016; Ernst, 2008). In the recent past, the potent anti-inflammatory, antiarthritic, and analgesic activities of the gum resin extract from B. serrata have gained much attention (Abdel-Tawab, Werz, & Schubert-Zsilavecz, 2011). The pentacyclic triterpenoids from B. serrata comprising a β-carboxyl moiety at C-24 are regarded as the biologically/pharmacologically active compounds (Sailer et al., 1996).
Although all the above clinical studies have shown beneficial effects for OA of the knee, the small numbers of patients enrolled, the short study periods employed, the absence of placebo control in some studies, and the lack of proper characterization of the extracts mean that these studies did not meet all rigorous clinical trial criteria needed to draw definitive conclusions on the use of BSE for the treatment of knee OA.
In an earlier clinical study, the efficacy of a boswellic acid-containing product (Boswellin®) in combination with Curcumin C3 Complex® and ginger extract was demonstrated in the management of OA (Natarajan & Majeed, 2012). No adverse events were recorded. The results of that study clearly indicated that Boswellin® is potent for the management of OA in combination with curcuminoids and ginger.
The present study was planned to evaluate the safety and efficacy of Boswellin®, a standardized oral supplementation of BSE containing 30% AKBBA along with three other bioactive β-boswellic acids, namely BBA, KBBA, and ABBA, the highly bioactive and pharmacologically relevant components, in newly diagnosed or untreated patients with OA of the knee. This is the first study with BSE conducted for a period of 120 days, longer than any previous clinical trial. In addition to the measurement of standard parameters, radiography was also employed for the assessment of efficacy. Recent studies have shown that circulating concentrations of high-sensitive C-reactive protein (hs-CRP) are associated with inflammation and OA progression (Pearle et al., 2007). Other studies have also shown that joint degeneration was greater among patients who had higher serum CRP (Bonnet & Walsh, 2005). Therefore, in the present study, the serum hs-CRP level was examined to gain mechanistic insight into the interactions of β-boswellic acids with the progression of OA of the knee.
| Patient recruitment
Both male and female patients, between 35 and 75 years of age and newly diagnosed with OA, were screened based on typical history, clinical presentation, classical radiological findings, and fulfillment of the classification criteria for OA of the knee of the American College of Rheumatology.
All participants who met the following inclusion criteria were selected for enrolment: (a) patients with a minimum pain visual analog scale (VAS) score >4 on walking in one or both knees during the 24 hr preceding recruitment; (b) patients who were ambulant, required treatment with an anti-inflammatory drug, and were either not receiving regular anti-inflammatory or analgesic drugs or were not satisfied with the drugs being taken and sought a change; (c) patients willing to come for regular follow-up visits; and (d) participants able to walk and to give both verbal and written information regarding the study. Demographic data, physical examination, medical and medication history, comorbid conditions, and vital signs were recorded. All participants provided written informed consent.
The exclusion criteria for the study included the following: (a) known hypersensitivity to herbal extracts or dietary supplements; (b) pregnant or lactating women, women of child-bearing potential not following adequate contraceptive measures, or women who tested positive on a urine pregnancy test; (c) nondegenerative joint diseases or other joint degenerative diseases; (d) patients incapacitated or bound to a wheelchair or bed and unable to carry out self-care activities; (e) current or recent (in the last 3 months) oral or intra-articular corticosteroid therapy; (f) preexisting or recent onset of demyelinating disorders or type I diabetes; (g) ongoing therapy with anticoagulants, hydantoin, lithium, steroids, methotrexate, or colchicine; (h) renal, hepatic, or hematopoietic disease, hypertension, severe cardiac insufficiency, congestive heart failure, or untreated hyperlipidemia (cardiovascular risk); (i) Ayurvedic formulation or any form of complementary alternative medicine therapy in the preceding 2 months; (j) receipt of any investigational drug or participation in any other clinical trial that ended in the preceding month or was ongoing; (k) patients who needed high doses of NSAIDs or analgesics; and (l) inability to comply with study procedures.
| Selection of BSE dosage
Information on clinical studies conducted over the past three decades (PubMed database review up to September 2018), pertinent to the safety and efficacy of oral administration of BSE in patients with OA or OA of the knee, including rheumatoid arthritis, was evaluated to select the dosage. In earlier studies, Boswellia was given as an extract standardized to contain 30-40% boswellic acids, 300-500 mg two or three times a day (Maroon, Bost, & Maroon, 2010).
In a 12-week pilot study (Sander et al., 1998), tablets containing 400 mg of BSE were used, nine per day (3,600 mg/day), for the treatment of outpatients with active rheumatoid arthritis. In a 56-day crossover study, 333-mg capsules of BSE containing 40% boswellic acids (corresponding to 118.4 mg of total β-boswellic acids) were given three times a day (355.2 mg of total β-boswellic acids per day, in addition to α-boswellic acids) for the treatment of OA of the knee (Kimmatkar et al., 2003). Similarly, 333-mg capsules of BSE three times a day were also given in a 180-day trial to compare the efficacy of BSE with valdecoxib (a selective COX-2 inhibitor) in patients with OA of the knee (Sontakke et al., 2007). Recently, in a 90-day trial, Sengupta et al. (2008) evaluated the efficacy and safety of BSE (250 mg) enriched with 30% AKBBA (corresponding to 75 mg of AKBBA) in the treatment of OA of the knee; however, details of the other β-boswellic acids in the composition were not provided. In a more recent study, patients were administered 500-mg capsules of B. serrata, 6 g/day (in three divided doses) of undetermined β-boswellic acid composition, in the management of OA (Gupta et al., 2011).
In this study, BSE tablets, each tablet containing the BSE extract of 169.33 mg with a mean value of 87.3 mg of total β-boswellic acids, corresponding to the four major β-boswellic acids, namely, AKBBA (53.27 mg), BBA (20.83 mg), KBBA (7.11 mg), and ABBA (6.06 mg), were given twice a day. Thus, the selected dosage of BSE, equivalent to 87.3 mg of total β-boswellic acids per tablet twice a day (174.6 mg of total β-boswellic acids per day), was safe and was comparable or well below the amount of total β-boswellic acids in BSE used in previous clinical trials in patients with OA or OA of the knee. The individual boswellic acids in the extract contents were AKBBA ≥ 30%, KBBA-1.5%, ABBA ≥ 3.5%, and BBA ≥ 7.5% with not less than 50% w/w of total boswellic acids in the extract.
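As a quick sanity check on these figures, the per-tablet total can be reproduced from the four individual β-boswellic acid amounts. The short Python sketch below (an illustration only, with values copied from the paragraph above) confirms that the components sum to about 87.3 mg per tablet, that two tablets give roughly the 174.6 mg/day stated, and that AKBBA alone is about 31% of the 169.33-mg extract, consistent with the ≥30% specification.

```python
# A quick arithmetic check of the dosage figures quoted above (values copied
# from the text; this is an illustration, not part of the trial's analysis).
per_tablet = {"AKBBA": 53.27, "BBA": 20.83, "KBBA": 7.11, "ABBA": 6.06}  # mg/tablet

total_per_tablet = sum(per_tablet.values())  # 87.27 mg, reported as ~87.3 mg
total_per_day = 2 * total_per_tablet         # two tablets/day -> ~174.5 mg
                                             # (the text rounds 87.3 x 2 to 174.6)
extract_mass = 169.33  # mg of BSE per tablet
print(f"beta-boswellic acids per tablet: {total_per_tablet:.2f} mg")
print(f"beta-boswellic acids per day:    {total_per_day:.2f} mg")
print(f"AKBBA share of the extract: {per_tablet['AKBBA'] / extract_mass:.1%}")  # ~31.5%
```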
| Study design
This clinical trial to evaluate the safety and efficacy of the tablet form of BSE in patients with knee OA was performed at the Kempegowda Institute of Medical Sciences, Bangalore, India. Recruitment of patients commenced on March 18, 2014, and was completed on June 6, 2014. A total of 48 newly diagnosed or untreated patients with OA of the knee of mild to moderate severity, who had not received any other treatment in the past 3 months, were randomly assigned, in a 1:1 ratio, to receive either BSE or placebo.
Subjects were instructed to self-administer two tablets of 169.33 mg of BSE each day, each tablet containing a mean value of 87.3 mg of total β-boswellic acids, or placebo for a period of 120 days (Figure 1). No concomitant medications were allowed.
| Randomization and blinding
Both BSE and placebo were coated tablets and were identical to allow for blinding. The coating materials used for both the tablets were exactly the same such that color, taste, and smell are uniform in nature and were packed identically in the same type of bottles. One bottle of tablets was dispensed at each study visit for twice-daily dosing for 1 month, providing sufficient extra pills to allow visit windows of up to 40 days. During the double-blinded treatment phase of the study, the subject and all personnel involved with the conduct of the interpretation of the study, including the investigators and investigational site personnel, were blinded to the medication codes. An authorized statistician, independent of the sponsoring organization, not involved in conduct or reporting of the study made random allocation cards using computer-generated random numbers. The randomization codes were recorded to avoid further confusion, and data were kept strictly confidential. The original random allocation sequences were accessible only to authorized persons on an emergency basis as per sponsor's standard operating procedures until the time of unblinding. Through unblinding of randomization codes at the end of statistical analysis, it was revealed that the XAXA01 group received active BSE (Boswellin®) whereas the XAXA02 group received the placebo.
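To make the allocation scheme concrete, the sketch below generates a 1:1 list of blinded medication codes with a pseudorandom shuffle. It is a minimal illustration only: the XAXA01/XAXA02 labels are taken from the paragraph above, but the simple shuffle scheme and the seed are assumptions rather than the statistician's actual procedure.

```python
import random

def allocation_cards(n: int, seed: int = 2014) -> list:
    """Sketch of 1:1 blinded allocation using computer-generated random numbers.

    XAXA01/XAXA02 are the blinded medication codes described above; the
    shuffle scheme and seed here are illustrative assumptions.
    """
    codes = ["XAXA01"] * (n // 2) + ["XAXA02"] * (n - n // 2)
    random.Random(seed).shuffle(codes)
    return codes

print(allocation_cards(48)[:8])  # first eight of the 48 allocation cards
```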
| Intervention and compliance
All subjects were asked to take two tablets (either BSE or placebo) per day and were provided with a study visit plan. Study personnel conducted regular home visits to ensure compliance with the protocol, with special reference to medications and follow-up visits. A diary was provided to the patients to record their daily study and nonstudy medications and any adverse health event. The trial coordinator checked the diary and further ensured compliance. Unused tablets were returned and analyzed for percent treatment compliance.
Out of a total of 48 subjects enrolled into the study, 42 (22 in the BSE group and 20 in the placebo group) completed it. Six subjects (two from the BSE group and four from the placebo group) dropped out of the study citing personal reasons; these withdrawals were unrelated to treatment effects and were lower in the BSE treatment group than in the placebo group (Figure 1).
| Outcome measures
The severity of OA was assessed for efficacy based on a QOL questionnaire, radiography, and physical examination before and after BSE treatment. The Western Ontario McMaster Index (WOMAC) was used for the assessment of pain, stiffness, and physical function in patients with OA of the knee, with efficacy evaluated at Days 0, 30, 60, 90, and 120. The WOMAC questionnaire contains questions related to the severity and frequency of symptoms such as swelling of the joint, grinding and clicking noises, the knee catching or hanging up, the ability to straighten or bend the knee, pain in the knee in different positions, knee functions, and the ability to perform daily activities. The overall WOMAC score was determined as the sum of all the item scores.
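For readers unfamiliar with how the overall index is formed, the sketch below shows the usual aggregation. The 24-item layout (5 pain, 2 stiffness, 17 physical-function items, each scored 0-4) is the standard WOMAC Likert format and is an assumption here, since the paper does not spell out the item counts.

```python
def womac_total(pain, stiffness, function):
    """Overall WOMAC score as the plain sum of all item scores.

    Assumes the standard Likert layout (an assumption, not stated in the
    paper): 5 pain, 2 stiffness, and 17 physical-function items, each
    scored 0 (none) to 4 (extreme), giving a 0-96 range (higher = worse).
    """
    assert len(pain) == 5 and len(stiffness) == 2 and len(function) == 17
    items = list(pain) + list(stiffness) + list(function)
    assert all(0 <= s <= 4 for s in items)
    return sum(items)

# Hypothetical patient with moderate scores on every item:
print(womac_total([3] * 5, [3] * 2, [3] * 17))  # 72, comparable to the ~69 baseline means
```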
The objective of the 6-min walk test was to evaluate the effect of BSE on the ability of the patients to walk as far as possible for 6 min.
The distance travelled by the study subjects in a time period of 6 min was recorded at baseline and during the other study visits.
Other measures performed included determination of the physician's and subject's global assessments, the 6-min walk test, VAS pain scores, and the European Quality of Life-5 Dimension QOL. The physical exam focused on the range of motion (both passive and active), muscle strength, ligament stability, and tenderness of the affected joints. A comparative analysis of radiological X-ray images captured before treatment (baseline) and on Day 120 was performed to determine the efficacy of BSE treatment. All radiographs were obtained under standardized conditions.
[FIGURE 1. Study design flowchart of Boswellia serrata extract (BSE). A tablet form of BSE (169.33 mg containing 30% 3-acetyl-11-keto-β-boswellic acid [AKBBA]) was given orally twice daily for a period of 120 days in patients with osteoarthritis (OA) of the knee.]
For the analysis of serum hs-CRP, blood samples were collected from subjects on scheduled visits. Fresh serum sample was prepared by centrifugation after 1-hr interval at room temperature. hs-CRP was measured by a particle-enhanced immunoturbidimetric assay using commercially available kit in which human CRP agglutinates with latex particles coated with monoclonal anti-CRP antibodies. The precipitate was determined turbidimetrically on a Roche/Hitachi cobas c 501/502 using reagents/kit from Roche Diagnostics GmbH (Mannheim, Germany). The lower detection limit of the hs-CRP assay was 0.03 mg/L, and measurements lower than 0.03 mg/L were not considered.
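One practical consequence of that detection limit is that readings below 0.03 mg/L must be treated as censored rather than as real numbers. A minimal preprocessing sketch follows (hypothetical values; pandas is assumed as the data-handling library, which the paper does not specify).

```python
import pandas as pd

LOD = 0.03  # mg/L, the assay's lower detection limit quoted above

# Hypothetical hs-CRP readings; values below the limit are unreliable.
raw = pd.Series([2.41, 0.02, 5.87, 1.10], name="hs_crp_mg_per_L")

# Mirror the paper's rule: measurements lower than 0.03 mg/L are not considered.
usable = raw[raw >= LOD]
print(usable.tolist())  # [2.41, 5.87, 1.1]
```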
| Safety assessment
Vital signs (blood pressure, respiratory rate, and pulse rate) and any abnormal lab/diagnostic parameters were considered for safety evaluations. Physical examination and vital signs were measured on Days 0, 30, 60, 90, and 120, and demographic data were recorded on Days 0 and 120. Vital signs were assessed immediately after the BSE tablets were taken for the first time and continued to be assessed throughout the study.
The routine laboratory parameters of safety, namely hematological and biochemical investigations, were measured using standard laboratory techniques, before and after the BSE treatment. Urine test for pregnancy was performed on female volunteers of child bearing potential. Adverse effects, if any, were recorded at each study visit.
| Statistical analysis
All data are expressed as mean ± SD. Data were evaluated for statistical significance by t test, analysis of covariance, or Wilcoxon's signed rank sum test, depending on the comparison being made, to reach the best possible statistical conclusion between patients receiving BSE and placebo. The last observation carried forward method was followed for efficacy evaluations of subjects whose data were not available at the last/final visit. Results with p < 0.05 are considered statistically significant. Statistical Analysis Software (SAS) version 9.2 (Cary, NC, USA) was used for data analysis.
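As a rough illustration of these comparisons (not the authors' actual SAS code), the sketch below runs a two-sample t test and a rank-based test on hypothetical change-from-baseline scores and applies last observation carried forward to a subject missing the final visit. The two-sample Mann-Whitney form is used here for the independent-group contrast; the signed-rank form named in the text applies to paired, within-group comparisons.

```python
import numpy as np
import pandas as pd
from scipy import stats

rng = np.random.default_rng(0)

# Hypothetical change-from-baseline WOMAC values (negative = improvement),
# sized like the completing groups (22 BSE, 20 placebo).
bse = rng.normal(loc=-27, scale=6, size=22)
placebo = rng.normal(loc=-13, scale=7, size=20)

t_stat, t_p = stats.ttest_ind(bse, placebo)     # two-sample t test
u_stat, u_p = stats.mannwhitneyu(bse, placebo)  # rank-based counterpart
print(f"t test p = {t_p:.3g}; rank test p = {u_p:.3g}")

# Last observation carried forward for a subject missing the final visit.
visits = pd.DataFrame(
    {"day0": [69.0, 71.0], "day90": [50.0, 58.0], "day120": [44.0, np.nan]},
    index=["subj_A", "subj_B"],
)
print(visits.ffill(axis=1))  # subj_B's Day-90 value is carried to Day 120
```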
| Demographic characteristics of subjects
Details on the overall demographic characteristics of subjects enrolled in the trial are provided in Table 1. Comparative details on the body mass index, height, and weight of subjects recorded at baseline and on Day 120 of BSE treatment are presented in Table 2. No significant differences were observed in the demographic characteristics between the group that received the 169.33-mg BSE tablets and the placebo group.
| Clinical efficacy
The data on the efficacy assessments, including WOMAC, Physician's Global Assessment, 6-min walk test, and VAS scores after 120 days of BSE treatment, are presented in Figure 2, with detailed analyses in Table 3. Analysis of covariance was applied to confirm the efficacy assessment, and mean scores were used for bilateral OA. The differences between baseline and post-treatment values in the BSE group, compared with the placebo group, were found to be significant (p < 0.001). A similar difference (p < 0.001) was also observed when the nonparametric Wilcoxon test was employed. However, no significant difference from baseline was observed in the placebo group.
| Effect of BSE treatment on WOMAC pain score
A comparative analysis of WOMAC scores at the baseline visit showed mean values of 69.4 ± 8.06 and 68.9 ± 7.48 for the treatment and placebo groups, respectively. A steady decrease in the WOMAC score was observed at successive time points in patients receiving BSE, with a mean value of 42.3 ± 4.84 on Day 120, whereas the mean WOMAC score for the placebo group was 55.5 ± 6.72 on Day 120. Overall, the BSE treatment group showed a statistically significant (p < 0.001) decrease in WOMAC score, indicating improvement in physical function through reduced pain and stiffness (Figure 2a and Table 3). The WOMAC subscores for the three domains of pain, physical function, and stiffness are provided in Table S4 and agree with the overall score: BSE treatment significantly reduced pain and stiffness compared with placebo control in patients with OA of the knee.
[Table 2 note: Values are presented as mean ± SD. A one-way analysis of variance test was performed between baseline and Day 120 within each group and between the BSE treatment and placebo groups; p values are not significant (p > 0.05). BMI: body mass index; BSE: Boswellia serrata extract.]
| Effect of BSE treatment on Physician's Global Pain Assessment scale
A pain assessment scale ranging from 0 to 10, where 0 indicates very poor and 10 excellent, was used by the physician. At the baseline visit, the mean values were 5.6 ± 0.91 and 5.9 ± 1.19 for the BSE treatment and placebo groups, respectively. At the end of the study, patients in the BSE group presented a mean score of 8.5 ± 0.64, a statistically significantly better score (p < 0.001) than both their baseline values and the mean values of patients in the placebo group (6.3 ± 0.87) at the final visit (Figure 2b and Table 3).
| Effect of BSE treatment on the ability to walk
As per this assessment, the distance travelled by subjects in a period of 6 min was recorded at baseline and at each study visit. The difference in this efficacy assessment between the groups was significant (p < 0.001) when their respective final-visit (Day 120) values were analyzed (Figure 2c and Table 3).
| Effect of BSE treatment on VAS pain scale
The VAS pain scale score was significantly reduced after BSE treatment.
Briefly, at the baseline visit, mean scores of 6.4 ± 1.24 and 6.9 ± 1.51 were reported by the active and placebo treatment groups, respectively. On Day 120 (final visit), the pain score had decreased significantly (p < 0.001) to 3.7 ± 1.35 in the active treatment group, whereas in the placebo group it was 6.3 ± 0.62, with no statistically significant change from baseline. The study concluded that, as per these assessments, patients felt much better with BSE (active) than with placebo (Figure 2d and Table 3).
| Effect of BSE treatment on European Quality of life (QOL)-5 Dimension score measures
This is a self-report QOL instrument that measures mobility, self-care, usual activities, pain/discomfort, and anxiety/depression, indicating how poor a patient's health is on a specific day. A total score of 15 indicates poor QOL, whereas 5 indicates good QOL. Mean values of 10.9 ± 1.71 and 11.0 ± 1.20 at the baseline visit changed to 6.3 ± 0.88 and 12.1 ± 1.93 at the final visit in the active BSE and placebo groups, respectively (Figure 2e and Table 3). The difference between baseline and post-treatment values in the BSE group, compared with the placebo group, was significant (p < 0.001).
| Effect of BSE treatment on radiological X-ray examination
Examination of radiological X-ray images revealed reduced joint space, due to loss of articular cartilage, and osteophyte formation in patients with OA of the knee in the placebo group. In contrast, patients in the BSE treatment group showed significant improvement at the final visit (Day 120): a distinct change in the OA condition could be seen, with the gap between the knee joints increased significantly and a sharp decrease in osteophytes (spurs) (Figure 3).
| Effect of BSE treatment on hs-CRP
Elevated levels of hs-CRP are associated with local inflammation in patients with OA, and several studies have shown that hs-CRP is elevated in the plasma of patients with OA compared with age-matched controls (Pearle et al., 2007). In this study, a significant decrease in hs-CRP was observed in the BSE-treated group in contrast to the placebo group (p < 0.01; Figure 2f). These findings clearly suggest that BSE is a powerful inhibitor of the hs-CRP elevation induced by local inflammation in patients with OA.
[Table 3 (excerpt). For each measure: BSE baseline, BSE Day 120, within-group p; placebo baseline, placebo Day 120, within-group p; between-group p.
VAS pain scale score: 6.4 ± 1.24, 3.7 ± 1.35**, 0.0001; 6.9 ± 1.51, 6.3 ± 0.62, 0.1828; 0.0001.
European Q5D quality of life: 10.9 ± 1.71, 6.3 ± 0.88**, 0.0001; 11.0 ± 1.20, 12.1 ± 1.93, 0.0858; 0.0001.
Note. Values are presented as mean ± SD. A one-way analysis of variance test was performed between baseline and Day 120 within each group and between the BSE treatment and placebo groups. BSE: Boswellia serrata extract; VAS: visual analog scale; WOMAC: Western Ontario McMaster Index.]
| Safety evaluations
None of the enrolled subjects had an abnormal medical history, and no abnormality in physical findings was observed at the screening visit or during the study visits. The vital signs recorded in the physical examination listings did not show statistically significant changes between baseline and Day 120 of BSE treatment or between the treatment groups (Table S1). Systolic and diastolic blood pressure, pulse rate, respiratory rate, heart rate, and oral temperature were normal at the screening visit and during the study visits as well.
| Biochemical and hematological evaluations
As part of the safety evaluation, a complete set of analyses was performed for (a) the biochemical parameters in the urine and serum samples and (b) the hematological parameters (before and at the end of the study). These analyses are summarized in Tables S2 and S3. A repeated-measures analysis of variance was performed to compare the values of the above parameters recorded at the different visit time points (baseline and final visit). Statistical analysis of the data for the biochemical and hematological parameters did not indicate any significant changes, and the minor changes observed, if any, were within the normal laboratory range.
| Adverse events
There were no statistically significant changes in body weight or body mass index from baseline to the last visit or between the treatment groups. All recorded vital signs (blood pressure, respiratory rate, and pulse rate) and lab/diagnostic parameters remained within safe limits, supporting the safety of BSE. During the course of the study, no serious adverse events were reported.
No clinically significant abnormal lab values were identified, and no statistically significant changes in vitals were observed from the baseline to the final visits. Although a few high values were reported, they were categorized as "not clinically significant" by the study investigator owing to their marginal, borderline values. Earlier work showed that the oleoresin of B. serrata plays a crucial role in chondroprotective and anti-inflammatory activity in OA patients (Sumantran et al., 2011). An earlier study using a triterpene-rich extract of Vitellaria paradoxa in OA showed a decrease in tumor necrosis factor alpha and in the cartilage degradation marker CTX-II (Cheras, Myers, Paul-Brent, Outerbridge, & Nielsen, 2010). Likewise, a 4-week clinical study on the efficacy of green tea extract in patients with OA of the knee showed reductions in VAS pain, total WOMAC, and WOMAC physical function scores compared with the control group; however, the authors reported no significant differences between the two groups and suggested that future studies of longer duration and larger sample size may be required to validate the efficacy (Hashempur, Sadrneshin, Mosavat, & Ashraf, 2018). Despite the existence of various plant products, the sustainable use of plant resources in the management of OA remains challenging.
Earlier, the efficacy of a boswellic acid-containing product (Boswellin®) in combination with Curcumin C3 Complex® and ginger extract was demonstrated in the management of OA, with no adverse events recorded (Natarajan & Majeed, 2012). The present study demonstrated that oral supplementation with BSE containing AKBBA and BBA significantly improved physical function by reducing pain and stiffness compared with placebo control in newly diagnosed or untreated patients with OA of the knee, as presented in Table 3 and Figure 2. Radiographic assessment also showed that BSE significantly improved the gap between the knee joints and reduced osteophyte (spur) formation compared with placebo, confirming the efficacy of BSE against OA of the knee (Figure 3). More importantly, BSE treatment comprising 30% AKBBA with BBA significantly decreased hs-CRP values compared with the placebo group (Figure 2f), clearly supporting its clinical efficacy for OA.
Regarding the structural and functional aspects of BSE, earlier reports suggest that BBA, which lacks the keto functional group, may reverse or partially prevent the activity of AKBBA on the 5-lipoxygenase (5-LOX) pathway (Safayhi, Sailer, & Ammon, 1995; Sailer et al., 1996). Although 5-LOX inhibition by AKBBA in BSE is decidedly important, the findings of the present study provide strong clinical evidence that the active components of BSE, namely BBA and AKBBA, act synergistically to exert anti-inflammatory activity, efficaciously reducing joint pain and improving physical functional ability in patients with knee OA. The current findings are also consistent with earlier studies indicating that the β-configured derivatives of boswellic acids in BSE are specific nonredox inhibitors of 5-LOX and hence inhibit leukotriene biosynthesis and reduce the pain associated with joint stiffness and physical discomfort (Gupta et al., 2011; Kimmatkar et al., 2003; Sengupta et al., 2008; Sontakke et al., 2007) (Figure 4). Although inhibition of human leukocyte elastase activity is established for many lipophilic compounds, a dual human leukocyte elastase and 5-LOX inhibitory property is unique to pentacyclic triterpenes (Safayhi, Rall, Sailer, & Ammon, 1997). AKBBA has been reported as a natural inhibitor of the transcription factor nuclear factor κB involved in inflammatory reactions (Cuaz-Pérolin et al., 2008), and BSE has been shown to reduce the production of reactive oxygen species in OA-related oxidative stress conditions (Umar et al., 2014).
Although AKBBA and KBA have been considered the pharmacologically active ingredients (Safayhi et al., 1992), recent studies show that β-boswellic acids, including BBA lacking the C11-oxo moiety, are about equipotent to the 11-keto-β-boswellic acids in interfering with the serine protease cathepsin G (Tausch et al., 2009). The potent interference of BBA with cathepsin G, together with its higher achievable plasma levels, favors this interaction as a possible molecular basis for the beneficial effects of BSE (Figure 4).
Although NSAIDs can cause disruption of glycosaminoglycan synthesis, BSE has been claimed to decrease glycosaminoglycan degradation (Reddy, Chandrakasan, & Dhar, 1989). Regarding clinical safety, the assessment of the laboratory/diagnostic parameters observed in this study confirms that there were no serious adverse events with BSE treatment.
The current study has a few limitations. This is a pilot study with a small group of subjects, and studies with larger human cohorts are required to confirm its conclusions.
hs-CRP, a potential inflammatory marker associated with OA of the knee, was used to assess the anti-inflammatory activity of BSE.
Use of a panel of inflammatory markers, including markers of matrix metalloproteinase-derived inflammation, a component of OA (Siebuhr et al., 2014), and interleukin 6, identified in the systemic circulation and synovial fluid of OA patients (Bonnet & Walsh, 2005), might better distinguish the effects of BSE on the inflammation associated with OA. Despite these limitations, the present clinical trial demonstrated the safety and efficacy of BSE (Boswellin®) in patients with OA of the knee.
| CONCLUSIONS
The findings from the present study provide clinical evidence to support that biologically active components of BSE, specifically AKBBA and BBA, acted synergistically to exert anti-inflammatory/antiarthritic activity efficaciously in reducing joint pain and improving the physical functional ability (Figure 4). No serious adverse events were observed, thus supporting the pharmacological safety of BSE (Boswellin®) to be considered as a viable candidate for the treatment of OA of the knee. | 2019-03-11T17:24:37.796Z | 2019-03-06T00:00:00.000 | {
"year": 2019,
"sha1": "28908fd2790f5c9ad0b7abff6d45325521e6539b",
"oa_license": "CCBY",
"oa_url": "https://onlinelibrary.wiley.com/doi/pdfdirect/10.1002/ptr.6338",
"oa_status": "HYBRID",
"pdf_src": "PubMedCentral",
"pdf_hash": "28908fd2790f5c9ad0b7abff6d45325521e6539b",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine",
"Biology"
]
} |
18609987 | pes2o/s2orc | v3-fos-license | An Analysis of Security System for Intrusion in Smartphone Environment
There are many malware applications targeting Smartphones. Smartphone users may be unaware that their data have been recorded and stolen by intruders via malware. Smartphones, whether for business or personal use, may not be protected from malware. Thus, monitoring, detecting, tracking, and notification (MDTN) are the main purposes of this paper. MDTN is meant to enable the Smartphone to prevent and reduce the number of cybercrimes. The methods are shown to be effective in protecting the Smartphone, isolating malware, and sending a warning in the form of a notification to the user about the danger in progress. In particular, (a) the MDTN process is possible and will be enabled for the Smartphone environment, and (b) the methods are shown to provide advanced security for the private, sensitive data of the Smartphone user.
Introduction
Malware applications inhabit application stores and markets. They intrude not only via downloading or installing activity but also via access to particular websites and via SMS. Juniper Research finds that 80% of Smartphone devices will remain vulnerable to cyberattacks through 2013 [1]. This happens despite increasing customer awareness of mobile security products. According to Juniper, several factors explain the low level of adoption of security products. It is expected that, by 2018, 1.3 billion mobile devices, including smartphones, feature phones, and tablets, will be fortified by mobile security software, up from around 325 million this year. According to a study by the Department of Homeland Security and the Federal Bureau of Investigation, android, as the dominant mobile operating system, is the primary target for malware attacks because many users are still using older versions of the software [1]. According to the government agencies, 79 percent of existing malware threatens the android mobile system, while the rest haunts other mobile systems [2].
The growth rate of threats targeting mobile platforms has increased dramatically: 40,059 of the 46,415 modifications and 138 of the 469 mobile malware families in the Kaspersky Lab database were added in 2012 [3].
99% of mobile malware detections in 2012 targeted android devices. For the next two years, it is clear that android will remain the dominant target for malware attacks. The android operating system has become the most common operating system and the most attractive target for malware makers. The formula stands as follows: "the most prevalent OS" + "installation of software from any source" = "the greatest number of threats" [3].
Based on the research by Kaspersky Lab and Juniper Research, Figure 1 shows that the mobile operating system most targeted by intruders is android (Figure 1(a)) and that the malware most frequently injected by intruders through android is Trojan-SMS.AndroidOS.Opfake.bo (Figure 1(b)). This is confirmed by the results in Table 1 and Table 2, where android holds the largest market share (Table 1) and the largest number of threat modifications by intruders (Table 2). Based on this, we propose a new approach to analyze the behavior of malware in the Smartphone. The idea will run in the android environment and consists of methods for monitoring, detecting malicious programs inside the Smartphone, and tracking and notifying the user about the results and progress. Figure 2 illustrates how an intruder repackages a malware application, and Figure 3 illustrates an android installation file delivering malware components to the mobile user.
Based on this, MDTN is an interconnected process with two focuses. Malware applications will be detected, any suspicious activity will be monitored in real time, and notifications will be sent to the user, all with the help of a cloud computing system connected to the Smartphone for the signature database. The outline of this work is shown in Figure 4. The main contribution of this idea is the set of methods, in the form of MDTN, which could be used by other researchers to track cyber intruders.
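As a rough illustration of how such a pipeline could be wired together, the Python sketch below checks an application package against a cloud-hosted signature database and warns the user on a match. It is a sketch only: the paper publishes no implementation code, and the hash-lookup scheme, the endpoint URL, and the function names are all assumptions.

```python
import hashlib
import json
import urllib.request

# Hypothetical cloud endpoint holding known-malware signatures (an assumption,
# standing in for the cloud signature database in the MDTN design).
SIGNATURE_SERVICE = "https://example.org/mdtn/signatures/"

def package_signature(apk_path: str) -> str:
    """Monitoring step: fingerprint an installed package by hashing its bytes."""
    with open(apk_path, "rb") as f:
        return hashlib.sha256(f.read()).hexdigest()

def is_known_malware(signature: str) -> bool:
    """Detection step: ask the cloud signature database about this fingerprint."""
    with urllib.request.urlopen(SIGNATURE_SERVICE + signature) as resp:
        return json.load(resp).get("malicious", False)

def mdtn_check(apk_path: str) -> None:
    """Tracking and notification steps: report the verdict to the user."""
    sig = package_signature(apk_path)
    if is_known_malware(sig):
        print(f"WARNING: {apk_path} matches a known malware signature ({sig[:12]}...)")
    else:
        print(f"{apk_path}: no signature match; new threats may still bypass this check")
```

Consistent with the drawback noted later in the research questions, a lookup of this kind can only flag signatures already present in the database; new or obfuscated malware would bypass it until the cloud database is updated.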
Related Work
There has been a great deal of research on malware applications up to 2013. Malware applications are being labeled (Kaspersky Lab, Juniper Research), and methods and taxonomies have also been developed by several institutions (Cloud Security Alliance (CSA)). According to the CSA, malware can be deployed not only via website links, fake applications, or smishing (SMS phishing), but also via Wi-Fi connectivity [4].
Many researchers have contributed ideas to improve security systems that prevent data loss in mobile computing, such as Oliveira et al. via HoneypotLabsac, a virtual honeypot for android that emulates intrusion detection on services like telnet, HTTP, and SMS [5].
Some researchers provide their own security models [6][7][8]. The permission-based security model is one of the most important security models on android devices: the user can grant or deny an installation, and the application itself specifies which resources of the device it needs to use. Analysis and enforcement of this permission-based model have been proposed by various researchers [9, 10]. Burguera et al. [11] give a framework to detect malware on the android platform. They monitor system calls at the Linux level, generate software behavioral patterns, and classify these patterns using a clustering algorithm. Their method is efficient in detecting malware behavior visible from the Linux kernel; unfortunately, several malware behaviors cannot be seen at the Linux level, such as malicious SMS or malicious call malware. Enck et al. presented TaintDroid in [12]. Their system uses dynamic analysis techniques to monitor sensitive information on android, so it can track a suspicious third-party application that uses sensitive data like GPS location information or address book information. The shortcoming of their method is that a benign application handling sensitive data may be considered malware [8, 9, 12]. Lee et al. [13] contribute the idea of a whitelist server for the Smartphone environment, storing the identity of every application in a database so the server can recognize friendly applications and not treat them as malware; infections are blocked using reputation-based collected data and information. Marforio et al. [14] have worked on coordinated attacks against modern Smartphone systems, which can lead to the disclosure of user private data to third parties, and discuss countermeasures that can be used to protect against these kinds of attacks. Metamorphic malware has been a subject of ongoing research since 2009; You and Yim have contributed to the understanding of malware obfuscation techniques, explaining a few general techniques for obfuscating malware [17].
Research Framework
The primary goal of this research study is to investigate the security risks associated with the use of Android. It will contribute to regulating data in mobile computing on Smartphones, especially the Android mobile system. The proposed solution will detect attacks (viruses, worms, Trojan horses, and metamorphic malware) and prompt users to take actions to prevent breaches. Any suspicious activity that may reveal personal information to third parties or unknown entities will be reported to users to prevent potential attacks. This research study is different in that it leverages previously proposed and implemented defense strategies and presents an enhanced protection framework that addresses Android's vulnerabilities and risks. Furthermore, this project will extend the existing knowledge about Android Smartphone security and provide an in-depth understanding of how to effectively manage emerging threats and fend off attacks, an issue that has long been recognized by security researchers and that requires more extensive research.
Android's threats are further amplified by the fact that users are no longer limited to using their Smartphones for basic services and functions, such as email and SMS/MMS. Android's open-source nature further increases security vulnerabilities, because cybercriminals can easily exploit this openness to modify the core applications and insert malicious software to cause damage and monetary loss.
Research Question
(i) What are the parameters to monitor, detect, and track? From the results and related work already completed or still in progress, the chosen approach is to monitor all applications and to detect malware using static code analysis and behavioral signatures stored in a database controlled by cloud computation. In this scenario, executing MDTN on the Smartphone remains efficient, though the drawback is that malware can be missed when its behavior or signature is not yet available in the database. (ii) Has the MDTN process been shown to protect the Smartphone user's sensitive data? A few ideas and implementations for detecting malware have been executed, although the idea of detection using static behavior analysis connected to cloud computation is still new. Based on research and textbooks about malware detection, the MDTN process is logically possible and its mechanism can be executed well. (iii) How much malware can be anticipated using MDTN? For the time being, only generally known malware is recognized, by behavioral signatures and static code analysis. (iv) Does MDTN fulfill the security requirements of confidentiality, integrity, availability, authenticity, and accountability? These are general requirements for any security solution; thus MDTN has to fulfill these prerequisites.
Malware Behavior in Smartphone Environment
There are two methods an intruder uses to steal data from a Smartphone, as follows.
(i) Trojanized apps: cybercriminals download an app from a mobile store and then re-upload the app to the app site with malicious code injected. (ii) Malicious apps: cybercriminals create malicious apps under the disguise of popular mobile apps and upload them to the mobile store [18].
Vennon, a GTC engineer at Smobile Systems, has stated that malware is categorized based on what the malware does once it has infected a system. The categories are as follows [19].
(i) Virus: a virus is defined as a destructive or malicious program that lacks the capacity to self-reproduce. (ii) Worm: a malicious code that can exploit a system or network vulnerability in order to automatically duplicate itself to another system. (iii) Trojan: a Trojan allows an attacker to obtain unauthorized access or remote access to a system while appearing to execute a desired operation. (iv) Spyware and adware: this destructive application conceals itself from the user while it collects information about the user without the user's permission. (v) Phishing apps: this malware disguises itself as a legitimate site but contains mobile phishing content that can steal user credential data; the malicious application is usually discovered by the user only after installation and infection.
(vi) Bot processes: hidden processes can execute completely invisibly to the user, run executables, or contact botmasters for new instructions. Botnets strive to hijack and control infected devices.
(vii) Mobile malware symptoms: signs of a malware infection can include unwanted behaviors and degradation of device performance. Performance issues such as frozen apps, failure to reboot, and difficulty connecting to the network are also common. Mobile malware can eat up battery or processing power, hijack the browser, send unauthorized SMS messages, and freeze or brick the device entirely.
Schmidt et al. [20] have described the evolution of malware up to 2008. The malicious Linux binary itself is packed as a "raw resource" into the Java application, for example as a png file, as can be seen in Figure 3. After installation, the Java application has to be executed once in order to rename the resource file into the appropriate binary. After renaming, the file has to be made executable, which is currently impossible from within Java.
Malware has various variants; one of them is metamorphic malware. This malware uses semantics-preserving transformations (obfuscations) to change its own code as it progresses: it repeats the transformation process, applying it to the result of the previous stage, so that each new stage differs from the last. Any signature-based antivirus program will find it difficult to detect such malware, because despite the ongoing changes the functionality stays the same; the longer the malware persists, the more it evolves, making it harder for the antivirus to defend the system. Obfuscation makes information less clear and more difficult to understand. Software vendors use obfuscation techniques to prevent their software from being reverse engineered; intruders use obfuscating transformations so that the malware cannot be reverse engineered and its malicious intent cannot be comprehended. (1) "Dead code" insertion adds code that is semantically equivalent to a nil operation; it has no semantic impact on the malware, but it increases the size of the malware and modifies its byte- and instruction-level content. (2) "Code reordering" changes the syntactic order of the code in the malware: the actual or semantic execution path of the program does not change, only the syntactic order as present in the malware image. Code reordering includes the techniques of branch obfuscation, branch inversion and branch flipping, and the use of opaque predicates.
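To make these two transformations concrete, the following short Java fragment (an illustrative sketch written for this discussion, not code taken from any real malware) shows how dead-code insertion and an opaque predicate change the byte- and instruction-level content of a routine while preserving its behavior:

```java
public class ObfuscationExample {

    // Original routine: returns the sum of an array.
    static int sumOriginal(int[] values) {
        int total = 0;
        for (int v : values) {
            total += v;
        }
        return total;
    }

    // Semantically equivalent variant after two metamorphic transformations.
    static int sumObfuscated(int[] values) {
        int total = 0;
        int junk = 0;                    // dead code: never affects the result
        for (int v : values) {
            junk ^= v;                   // dead code: the value is discarded
            if ((v | 1) != 0) {          // opaque predicate: always true
                total += v;
            } else {
                total -= 1;              // unreachable branch, changes byte content only
            }
        }
        return total;                    // same output as sumOriginal
    }
}
```

Because both routines always return the same value, a purely signature-based scanner sees two different byte patterns for what is semantically the same code.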
Proposed Idea and Design
The idea of this paper is to construct a proper Android protection environment. Figure 6 illustrates the flowchart of the MDTN system, whose monitoring, detecting, tracking, and notification (MDTN) stages are interconnected in the proposed idea.
Monitoring. Scanning all applications and activity in the Smartphone: the engine must examine and monitor various locations of the system, such as the hard disk, registry, and main memory. If a change to a critical component is detected, it could be a sign of infection.
Third-party applications are entrusted with several types of privacy sensitive information. The monitoring system must distinguish multiple information types, which requires additional computation and storage.
System activities include any action of interest which may be taken by the system, typically utilizing system resources. When integrated with system resource monitoring, these features can be used to study how activities impact system resource usage. When integrated with user activity monitoring, these features can be used to study how user activity impacts the system.
The monitoring system can also be used to continuously monitor features but only issue callbacks when certain conditions are met; such monitors are referred to as notifiers. The monitoring module continuously monitors these features at the requested frequency but only initiates a call to the callback function when the specified criteria are met. The format for such a request is similar to a monitor request, with additional information to specify the notification conditions. The monitoring system covers application and screen activities, which are listed in Table 3.
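The request format is not reproduced here; purely as an illustration, a threshold-based notifier of the kind described above could be registered along the following lines, where the class and method names are hypothetical rather than part of any existing monitoring API:

```java
import java.util.function.Consumer;
import java.util.function.DoubleSupplier;

/** Minimal sketch of a feature monitor that polls a value and fires a callback
 *  only when a user-specified condition is met (a "notifier"). Hypothetical API. */
public class FeatureNotifier {

    public interface Condition { boolean isMet(double value); }

    private final DoubleSupplier feature;     // e.g. battery level, SMS count per minute
    private final long periodMillis;          // requested sampling frequency
    private final Condition condition;        // notification condition
    private final Consumer<Double> callback;  // invoked only when the condition is met

    public FeatureNotifier(DoubleSupplier feature, long periodMillis,
                           Condition condition, Consumer<Double> callback) {
        this.feature = feature;
        this.periodMillis = periodMillis;
        this.condition = condition;
        this.callback = callback;
    }

    /** Polls the feature at the requested frequency on a background thread. */
    public void start() {
        Thread t = new Thread(() -> {
            while (!Thread.currentThread().isInterrupted()) {
                double value = feature.getAsDouble();
                if (condition.isMet(value)) {
                    callback.accept(value);   // only fire when the criteria are met
                }
                try { Thread.sleep(periodMillis); } catch (InterruptedException e) { return; }
            }
        });
        t.setDaemon(true);
        t.start();
    }
}
```

A notifier for the outgoing-SMS rate, for example, would supply the per-minute message count as the feature and a simple threshold check as the condition.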
Context-based privacy sensitive information is dynamic and can be difficult to identify even when sent in the clear. For example, geographic locations are pairs of floating point numbers that frequently change and are hard to predict [12].
Detecting.
A malware detector is a system responsible for determining whether a program exhibits malicious behavior. In other words, a malware detector D is defined as a function D : A → {Malware, Normal}, where A is the set of applications to be analyzed. The detecting process determines whether an application is malware or legitimate based on its recorded behavior, matched against the detector's library.
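Expressed as code, the detector function D can be modeled as a small interface; the names below are ours, chosen only to mirror the definition above:

```java
/** A malware detector D : A -> {MALWARE, NORMAL}, where A is the set of applications. */
public interface MalwareDetector {

    enum Verdict { MALWARE, NORMAL }

    /** Classify one application, identified here by the path to its APK file. */
    Verdict classify(java.io.File apkFile);
}
```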
Generally, there are two techniques to detect malware: anomaly-based techniques and signature-based techniques. Signature-based detection techniques define every known malware by a signature, or particular pattern, used to identify the malicious program. Anomaly-based detection techniques model normal behavior during a training phase and use this normal model to identify malicious programs. Figure 5 illustrates the classification of malware detection techniques. In this classification, we followed the three defined rules. The reference-behavior rule classifies detection techniques broadly into two main categories: anomaly-based and signature-based. An anomaly-based detection technique constructs a normal behavior model during the training phase; in the detection phase, any deviation from this model can be considered malicious [15].
This detection system uses behavior-based detection. The technique operates on a complex meta-structure with a dynamic concept and semantic interpretation, and it is effective and efficient for dealing with evasive techniques such as polymorphism, binary packers, and encryption. The method is based on static code analysis, which uses information embedded in a given executable file, or code templates, to capture the functionality of a specific malware. Behavior-based detection techniques assume that an intrusion can be detected by observing a deviation from the normal or expected behavior of the system or its users. The model of normal or valid behavior is extracted from reference information collected by various means, and the intrusion detection system later compares this model with the current activity. An advantage of behavior-based approaches is that they can detect attempts to exploit new and unforeseen vulnerabilities; the advantages and disadvantages of this method are listed in Table 4. Once the engine has detected an item that requires further examination, it refers to an updated list of known malware, called the "blacklist". The blacklist contains "signatures", or identifiable patterns, of known malware, and the engine determines whether any file matches one of them. If a match is identified, the file is classified according to the particular category: malware integrity identification, eradication of the particular packages, publication of the list of packages to the remote server, and so forth [21]. The final step for this engine is to take appropriate action on files that are identified as malware: in most circumstances the engine removes the program or file completely and restores the device to its pre-infection state; otherwise, a file can be disabled or quarantined, so that the user can enable it later.
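The matching logic itself is not listed in the text; as a rough sketch, under the assumption that signatures are file digests synced from the cloud-side database, a blacklist lookup implementing the MalwareDetector interface sketched earlier could look as follows:

```java
import java.io.File;
import java.io.FileInputStream;
import java.io.IOException;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;
import java.util.Set;

/** Signature-based (blacklist) detector: flags an APK whose SHA-256 digest
 *  appears in a blacklist of known-malware signatures. Illustrative sketch only. */
public class BlacklistDetector implements MalwareDetector {

    private final Set<String> blacklistedDigests;   // e.g. synced from the cloud signature DB

    public BlacklistDetector(Set<String> blacklistedDigests) {
        this.blacklistedDigests = blacklistedDigests;
    }

    @Override
    public Verdict classify(File apkFile) {
        try {
            String digest = sha256Hex(apkFile);
            return blacklistedDigests.contains(digest) ? Verdict.MALWARE : Verdict.NORMAL;
        } catch (IOException | NoSuchAlgorithmException e) {
            return Verdict.NORMAL;     // fails open here; a real system would report the error
        }
    }

    private static String sha256Hex(File file) throws IOException, NoSuchAlgorithmException {
        MessageDigest md = MessageDigest.getInstance("SHA-256");
        try (FileInputStream in = new FileInputStream(file)) {
            byte[] buffer = new byte[8192];
            int read;
            while ((read = in.read(buffer)) != -1) {
                md.update(buffer, 0, read);
            }
        }
        StringBuilder hex = new StringBuilder();
        for (byte b : md.digest()) {
            hex.append(String.format("%02x", b & 0xff));
        }
        return hex.toString();
    }
}
```

A production engine would combine such signature matching with the behavioral rules discussed above and then report, quarantine, or remove the flagged package.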
For Metamorphic Malware There Is an Obfuscation Interpretation Solution. Abstract interpretation, introduced in 1977, is a general model for the (static or dynamic) approximation of the semantics of discrete dynamic systems. Obfuscating a program makes abstract interpreters incomplete; for example, a simple self-interpreter can be modified so that all values in the store are obfuscated. Algorithm 1 illustrates the formal framework for malware detection, which is based on program semantics and abstract interpretation and serves as the obfuscation interpretation for metamorphic malware.
The following are the obfuscation techniques that are particularly used by metamorphic viruses [22]:
Let → denote that a program is infected with a given malware. An ideal malware detector is sound and complete: sound means no false positives and complete means no false negatives.
Certifying Malware Detecting. We can characterize the most concrete property such that :
Tracking.
Tracking is the phase in which the application traces the source of the problem and performs the tracking activity on the Smartphone.
The following could be logged to represent a user: (iv) Company-assigned ID.
The process of data tracking starts from a detected tainted source or suspicious behavior. Tainted data comes from a specific source; thus the contaminated data can be tracked down specifically and dealt with. After the purging process is done, a report is produced stating that the decontamination has been finalized.
To track the URL location of the intruder who remotely controls the malware inside the Smartphone, the following method is used. The device's IMEI is also exposed by applications. The IMEI uniquely identifies a specific mobile phone and is used to prevent a stolen handset from accessing the cellular network. TaintDroid flags indicated that nine applications transmitted the IMEI.
Seven of the nine applications either do not present an end user license agreement (EULA) or do not specify IMEI collection in the EULA [12]. From this result, tracking of IMEI and IMSI activity should be made visible to users to let them determine which activity is remotely controlled by intruders.
The method for IMEI and IMSI (personal information) is as follows:
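The listing itself is not shown in the text; for illustration only, reading these identifiers on Android relies on the TelephonyManager API and the READ_PHONE_STATE permission, and a tracking module would watch for exactly these calls. The wrapper class below is a hypothetical sketch, not the paper's implementation:

```java
import android.content.Context;
import android.telephony.TelephonyManager;

/** Sketch: reads the identifiers an intruder-controlled app might leak,
 *  so the tracking module can log and correlate access to them.
 *  Requires android.permission.READ_PHONE_STATE. */
public class IdentifierTracker {

    private final TelephonyManager telephony;

    public IdentifierTracker(Context context) {
        this.telephony = (TelephonyManager) context.getSystemService(Context.TELEPHONY_SERVICE);
    }

    /** IMEI (device identifier) as exposed to applications. */
    public String readDeviceId() {
        return telephony.getDeviceId();      // IMEI on GSM devices
    }

    /** IMSI (subscriber identifier) stored on the SIM. */
    public String readSubscriberId() {
        return telephony.getSubscriberId();  // IMSI
    }
}
```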
Notification.
The action is defined by a PendingIntent containing an Intent that starts an activity in your application.
To associate the PendingIntent with a gesture, call the appropriate method of NotificationCompat.Builder. A PendingIntent object allows the system to perform an action on the application's behalf, often at a later time, regardless of whether the application is still running. Once the notification is built, NotificationManager.notify() is called to pass the notification object to the system, which then posts it to the user.
The method for getting notification is as follows:
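The call sequence is not reproduced in the text; a minimal sketch using the NotificationCompat.Builder and NotificationManager APIs referred to above might look like the following, where the channel id, request code, and QuarantineActivity are placeholder assumptions rather than parts of the original system:

```java
import android.app.Notification;
import android.app.NotificationManager;
import android.app.PendingIntent;
import android.content.Context;
import android.content.Intent;
import androidx.core.app.NotificationCompat;

/** Sketch: posts a warning notification when MDTN flags a suspicious application. */
public class MdtnNotifier {

    public void notifySuspiciousApp(Context context, String packageName) {
        // Intent opened when the user taps the notification; QuarantineActivity is hypothetical.
        Intent intent = new Intent(context, QuarantineActivity.class);
        intent.putExtra("package_name", packageName);
        PendingIntent pendingIntent = PendingIntent.getActivity(
                context, 0, intent, PendingIntent.FLAG_UPDATE_CURRENT);

        // Assumes a notification channel "mdtn_alerts" has been created elsewhere (API 26+).
        Notification notification = new NotificationCompat.Builder(context, "mdtn_alerts")
                .setSmallIcon(android.R.drawable.stat_sys_warning)
                .setContentTitle("Suspicious application detected")
                .setContentText(packageName + " shows malware-like behavior")
                .setContentIntent(pendingIntent)   // action performed on the user's tap
                .setAutoCancel(true)
                .build();

        NotificationManager manager =
                (NotificationManager) context.getSystemService(Context.NOTIFICATION_SERVICE);
        manager.notify(packageName.hashCode(), notification);  // hand the notification to the system
    }
}
```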
The Design System of MDTN and Discussion
The MDTN system is an interconnected process that monitors the downloading and installation of any file on a Smartphone. If suspicious behavior is detected, the tracking module is deployed to track the source of the behavior, and the result is forwarded to the notification module for the next decision. A new application that is about to be installed, or a web application carrying malware, is monitored and examined using static code analysis, and the signature database determines whether the file contains any malware. The overall progress is reported to the user, who may then decide whether to install or delete the given application. Figure 7 illustrates the 3 modules inside the MDTN system infrastructure.
There are 3 modules inside the MDTN system. The first and the third module (the notification module) are connected to the user, while the monitoring, detecting, and tracking parts, which consist of classification models and feature extraction, sit inside the machine learning component. They combine an internally developed, platform-independent machine learning C library with device-specific components written in Java, which are responsible for communication, storage, and the user interface. The classification pipeline is responsible for inferring end-user behavior: it continuously samples the phone sensors and extracts the features used by the classification models, which also run on the phone. In this work the classification pipeline samples one sensor, GPS. All these processes are connected to the user's Smartphone.
The second module is an in-between, or middleware, module that reduces the processing burden on the Smartphone. It has its own disadvantage, however, in the delivery process to the cloud server, where the server computes the data and needs additional time to deliver the result back to the user. All data are stored in independent SQLite files; these files are transferred to the cloud infrastructure with an uploading policy that emphasizes energy efficiency to minimize the impact on the phone's battery.
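The uploading policy is not specified further; one plausible reading, given purely as an illustrative sketch rather than the authors' implementation, is to defer uploads of the SQLite files until the device is charging and connected to Wi-Fi:

```java
import android.content.Context;
import android.content.Intent;
import android.content.IntentFilter;
import android.net.ConnectivityManager;
import android.net.NetworkInfo;
import android.os.BatteryManager;

/** Sketch of an energy-aware upload policy: only ship the collected SQLite
 *  files to the cloud when the device is charging and connected to Wi-Fi. */
public class UploadPolicy {

    public boolean shouldUploadNow(Context context) {
        return isCharging(context) && isOnWifi(context);
    }

    private boolean isCharging(Context context) {
        Intent battery = context.registerReceiver(null,
                new IntentFilter(Intent.ACTION_BATTERY_CHANGED));
        int status = battery == null ? -1
                : battery.getIntExtra(BatteryManager.EXTRA_STATUS, -1);
        return status == BatteryManager.BATTERY_STATUS_CHARGING
                || status == BatteryManager.BATTERY_STATUS_FULL;
    }

    private boolean isOnWifi(Context context) {
        ConnectivityManager cm =
                (ConnectivityManager) context.getSystemService(Context.CONNECTIVITY_SERVICE);
        NetworkInfo active = cm.getActiveNetworkInfo();
        return active != null && active.isConnected()
                && active.getType() == ConnectivityManager.TYPE_WIFI;
    }
}
```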
For this case we built our own private Eucalyptus cloud bundled with Ubuntu (UEC; Ubuntu bundles OpenStack from release 11.10). UEC/Eucalyptus is an on-premise, open-source private cloud platform sponsored by Eucalyptus Systems; it is Linux based (RHEL, CentOS, Ubuntu) and has support for VMware. Figure 8 illustrates the 3 layers inside the private cloud internal architecture used in this work.
Performance Analysis
A standardized measurement should first be defined to test and evaluate the performance of the two systems; here, total execution time is used. Throughput is, in principle, a decent indicator of malware analyzer performance: within a given period of time, it counts the total number of completed analysis tasks. Here, the total consumed time is used instead, with a fixed number of submitted samples. The time is calculated by summing the intervals between the completion of one analysis sample and the previous one within the same sample group. The total execution time for a task consists of three elements: setup time, execution time, and postprocessing time, that is, T_total = T_setup + T_exec + T_post. Setup time is the total time to prepare and deploy the required malware sample. The setup time in the second system becomes the limiting factor, as it requires more time than in the first system, whereas the execution and postprocessing times are the same in both systems. The setup time is therefore a drawback of the second system, giving an advantage to the first system, which makes practical use of cloud computing. The records required from each task to calculate the total time are the submit time, the start-of-analysis time, and the finish time (Table 6).

12 types of mobile software (.APK) under trial are successfully detected as malware by the system in this performance test. The stabilized time (ms) is the time the system needs to recognize the mobile software as malware and to complete the notification process via the executable binary file. The total stabilized time likewise consists of three elements: setup time, execution time, and postprocessing time (Table 7).

The tracking phase is not carried out to its full extent: the services of the Zitmo Android malware are not shown and GoldDream's server is not detected. Under services, any service that is going to be used by the intruders can be tracked (data stealing). From the tracking result, the user is able to know which of the many services on the Android system is under monitoring, under modification, or under threat from a particular server. Tracking does not yet scrutinize every server that serves the intruders and so far depends on the intruder's known website; in the future, the malware's server can be reported to the database system so that the particular application can be blocked (Table 8).

In comparison with the 3 previous projects that have been carried out and developed (TaintDroid, CrowDroid, and Robo-Droid), a real-time monitoring and tracking system is provided by TaintDroid, in which the kernel level is exercised; TaintDroid is an Android operating system extension with an added real-time monitoring and tracking system.
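The three records named above map directly onto a small timing structure; the following sketch (field and method names are ours, not the paper's) shows one way to compute the components:

```java
/** Sketch: timing record for one analysis task, following the decomposition
 *  T_total = T_setup + T_exec + T_post used in the performance analysis. */
public class TaskTiming {
    long submitMillis;          // when the sample was submitted
    long startAnalysisMillis;   // when the analysis actually started
    long finishMillis;          // when analysis and postprocessing finished

    /** Setup time: preparing and deploying the sample before analysis starts. */
    long setupTime()  { return startAnalysisMillis - submitMillis; }

    /** Execution plus postprocessing time. */
    long runTime()    { return finishMillis - startAnalysisMillis; }

    /** Total execution time for the task. */
    long totalTime()  { return finishMillis - submitMillis; }
}
```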
Conclusion and Future Work
MDTN is an interconnected system process for the Smartphone environment. For this research paper, the author uses the Android OS because this operating system is frequently attacked by cybercriminals. Monitoring, detecting, tracking, and notification are used not only to check new applications before they are installed on the Smartphone, but also to detect suspicious behavior in real time. As the detection method, a behavior-based detection technique and database-backed static code analysis are used to identify suspicious behavior and malware applications. The tracking part can be developed further at a later stage for the purpose of preventing future threats. In the case of recurrence, the system is able to block the data and recognize it as spam or a threat. | 2016-05-04T20:20:58.661Z | 2014-08-05T00:00:00.000 | {
"year": 2014,
"sha1": "e341340803b7b546772643bd28b54363b852cfcb",
"oa_license": "CCBY",
"oa_url": "http://downloads.hindawi.com/journals/tswj/2014/983901.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "1ae403c7aff80e9ced381f2f01574fe33d55200b",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": [
"Computer Science",
"Medicine"
]
} |
14303249 | pes2o/s2orc | v3-fos-license | Perceived weight discrimination and changes in weight, waist circumference, and weight status
Objective To examine associations between perceived weight discrimination and changes in weight, waist circumference, and weight status. Methods Data were from 2944 men and women aged ≥50 years participating in the English Longitudinal Study of Ageing. Experiences of weight discrimination were reported in 2010-2011 and weight and waist circumference were objectively measured in 2008-2009 and 2012-2013. ANCOVAs were used to test associations between perceived weight discrimination and changes in weight and waist circumference. Logistic regression was used to test associations with changes in weight status. All analyses adjusted for baseline BMI, age, sex, and wealth. Results Perceived weight discrimination was associated with relative increases in weight (+1.66 kg, P < 0.001) and waist circumference (+1.12 cm, P = 0.046). There was also a significant association with odds of becoming obese over the follow-up period (OR = 6.67, 95% CI 1.85-24.04) but odds of remaining obese did not differ according to experiences of weight discrimination (OR = 1.09, 95% CI 0.46-2.59). Conclusions Our results indicate that rather than encouraging people to lose weight, weight discrimination promotes weight gain and the onset of obesity. Implementing effective interventions to combat weight stigma and discrimination at the population level could reduce the burden of obesity.
Introduction
Negative attitudes towards obese individuals remain one of the "last socially acceptable forms of prejudice" (1) and many obese individuals experience weight-related discrimination in their everyday lives (2). There is a common perception that weight discrimination might encourage overweight individuals to lose weight (3), but a growing literature suggests it might actually have the opposite effect. Studies show that people who experience weight stigma are more likely to report engaging in obesity-promoting behaviors, including problematic eating (4,5), refusal to diet (6), and avoidance of physical activity (7), and these effects are independent of BMI. However, there is limited evidence on associations with actual changes in body weight. The present study therefore examined relationships between perceived weight discrimination and changes in weight, waist circumference, and weight status over four years in a large population-based sample.
Study population
Data were from the English Longitudinal Study of Ageing (ELSA), a longitudinal panel study of adults aged ≥50 y (8). The first ELSA data were collected in 2001-2002 and participants have been followed up every two years, with a nurse visit every four years to take objective measurements of anthropometry. Wave 5 (2010-2011) is the only assessment that included questions on discrimination. Among the 9090 participants who were interviewed in wave 5, 8107 (93% of those eligible) answered the self-completion questionnaire that assessed discrimination. For our analyses, we use these data plus anthropometric data collected in waves 4 (2008-2009) and 6 (2012-2013), as no anthropometric data were collected in wave 5. Complete data were available for 2944 participants.
Measures
Questions on perceived discrimination were based on items developed and used widely in other longitudinal studies, notably MIDUS and the Health and Retirement Study (2,9,10). Participants were asked how often they encounter five discriminatory situations: "In your day-to-day life, how often have any of the following things happened to you: (1) you are treated with less respect or courtesy; (2) you receive poorer service than other people in restaurants and stores; (3) people act as if they think you are not clever; (4) you are threatened or harassed; and (5) you receive poorer service or treatment than other people from doctors or hospitals. Responses ranged from "never" to "almost every day". Because data were highly skewed, with most participants reporting never experiencing discrimination, we dichotomized responses to indicate whether or not respondents had ever experienced discrimination in any domain (never vs. all other options). Participants who reported discrimination in any of the situations were asked to indicate the reason(s) they attributed their experience to from a list of options including weight, age, gender, and race. We considered participants who attributed experiences of discrimination to their weight as cases of perceived weight discrimination.
Age, sex, and household nonpension wealth [a sensitive indicator of socioeconomic status (SES) in this age group] were included as control variables.
Statistical analysis
Baseline demographic and anthropometric characteristics of participants who reported weight discrimination and those who did not were compared using t-tests (continuous variables) and χ2 tests (categorical variables). One-way analyses of covariance (ANCOVAs) were used to examine whether perceived weight discrimination was associated with changes in weight and waist circumference; and logistic regression was used to examine associations between weight discrimination and becoming and remaining obese. All analyses adjusted for confounding by age, sex, and wealth.
Results
Perceived weight discrimination was reported by 5.1% of participants, ranging from 0.7% in normal-weight to 35.9% in class III obese. Participants who had experienced weight discrimination were significantly younger (61.6 vs. 66.4 y, P < 0.001), less wealthy (P < 0.001), and more overweight (BMI 35.5 vs. 27.2, P < 0.001) than those who had not, but the groups did not differ significantly by sex (P = 0.090) (Table 1).
There were significant associations between perceived weight discrimination and weight change over four years (Table 2). There was a 1.66 kg difference in mean weight change between individuals who reported experiences of weight discrimination (+0.95 kg) and those who did not (−0.71 kg, P < 0.001). There was a trend towards greater weight gain (or less weight loss) across all BMI groups (Figure 1). There was also a 1.12 cm difference in waist circumference change, increasing in individuals who experienced weight discrimination (+0.72 cm) and decreasing in those who did not (−0.40 cm, P = 0.046), although this association was not consistently observed across BMI groups (Table 2). The interaction between weight discrimination and weight status was not significant for change in weight (P = 0.271) or waist (P = 0.283).
Discussion
This study is the first to examine the relationship between weight discrimination and weight change in a population-based sample. Weight discrimination was reported by 5.1% of participants, in line with previous prevalence estimates in this age group (2). Consistent with evidence that people who experience weight discrimination are more likely to engage in behaviors that promote weight gain (4-7),
perceived weight discrimination was associated with relative increases in weight and waist circumference over time. We also found that weight discrimination was related to the onset of obesity, which has been shown previously (9), although there was no association between weight discrimination and remaining obese.
There are a number of potential mechanisms that may lead from weight discrimination to weight gain. Exposure to weight stigma is associated with psychological distress (3). Food activates dopaminergic reward pathways in the brain (11) and may provide short-term relief from the adverse psychological effects of discrimination. Many overweight/obese individuals who experience stigmatization report eating as a coping strategy (6). Stress responses to discriminatory experiences may also drive unhealthy eating behavior via activation of the hypothalamic-pituitary-adrenal axis and resultant release of cortisol and endogenous opioids (12). Experimental studies show that elevated cortisol levels and a higher cortisol response to stressors predict increased food intake, particularly of energy-dense foods (13,14), while opioids promote consumption of palatable foods (15). Weight discrimination may also influence weight change through effects on energy expenditure, with evidence indicating that people who experience stigmatization perceive themselves as less competent in physical activity and tend to avoid it (7,16).
Figure 1. Associations between perceived weight discrimination and weight change, by baseline weight status.
Weight discrimination has been justified on the grounds that it encourages obese individuals to lose weight (3), but our results provide no support for this notion and rather suggest that discrimination exacerbates weight gain and promotes onset of obesity. Removing prejudice and blame from weight loss advice might be a better route to promoting weight control. Widespread weight bias has been documented in health professionals (1), including those who specialize in obesity (17). Negative attitudes are picked up on by obese patients, who often feel that doctors do not understand how difficult it is to be overweight (18), and report being treated disrespectfully by the medical profession because of their weight (19), which may hinder weight loss success. Providing support to those affected by weight discrimination and teaching adaptive coping strategies could also improve weight outcomes. One study demonstrated positive effects of a brief acceptance-based intervention in obese weight loss programme participants, with those randomized to the intervention losing more weight than controls over a three-month follow-up (20).
This study had several limitations. Weight was not measured in the same wave as discrimination was assessed, so baseline values were from two years earlier. We cannot be sure whether discrimination preceded weight gain or vice versa. It is therefore not possible to establish causal relationships; i.e. whether people gain weight as a consequence of experiencing weight discrimination, or whether gaining weight makes people more likely to experience weight discrimination or attribute experiences of discrimination to their weight. Participants were from an older population, in which weight change and experiences of weight discrimination may differ relative to younger populations so findings cannot be assumed to generalize. The sample was also predominantly white (97.9%), with just five non-white respondents reporting weight discrimination, so results may not apply to other ethnic groups that have different body weight ideals that make them less likely to perceive, or less affected by, discrimination. Our analyses were restricted to ELSA participants who had complete data. The analyzed sample was slightly younger, wealthier, and less overweight than the total ELSA sample (although the level of perceived weight discrimination did not differ), so results may not be population-representative.
The results of this study provide evidence that weight discrimination is associated with significant increases in body weight and waist circumference over time. Our findings underscore the need for effective interventions at the population level to combat weight stigma and discrimination. | 2017-06-18T19:22:58.000Z | 2014-09-11T00:00:00.000 | {
"year": 2014,
"sha1": "d293edd1a7e4c993c2e4bed05a89f315ce718a2f",
"oa_license": "CCBYNCND",
"oa_url": "https://onlinelibrary.wiley.com/doi/pdfdirect/10.1002/oby.20891",
"oa_status": "HYBRID",
"pdf_src": "PubMedCentral",
"pdf_hash": "d293edd1a7e4c993c2e4bed05a89f315ce718a2f",
"s2fieldsofstudy": [
"Psychology",
"Medicine",
"Sociology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
239012915 | pes2o/s2orc | v3-fos-license | ASCO 2021—selection of personal highlights in early stage non-small cell lung cancer
This article intends to summarize personal non-small cell lung cancer (NSCLC) highlights of the virtual ASCO 2021 meeting. Immunotherapy is now a mainstay of advanced stage NSCLC treatment and there are several ongoing studies investigating the role of immunotherapy in early stage NSCLC. At ASCO 2021 the first data on atezolizumab in the adjuvant setting were presented and give a positive signal that immunotherapy will also become an option for patients in early stage NSCLC. Furthermore, overall survival (OS) updates of two studies investigating the effects of epidermal growth factor receptor (EGFR) tyrosine kinase inhibitors (TKIs) in the adjuvant setting of EGFR-mutated NSCLC patients were presented. In conclusion, ASCO 2021 provided the lung cancer community with inspiring new data, especially in early stages, and challenges the community with the integration of these data into our daily clinical routine.
The virtual ASCO 2021 meeting was held under the tagline "equity: every patient, every day and everywhere".
With regard to NSCLC, the use of immunotherapy in early stage disease was a major focus, and the first data on atezolizumab in the adjuvant setting were presented. Furthermore, updates on studies investigating the effects of EGFR TKIs in the adjuvant setting in EGFR-mutated NSCLC patients were presented. In advanced stage disease, interesting, although not yet practice-changing, abstracts were discussed. Therefore, this personal highlight article focuses on adjuvant immunotherapy and adjuvant EGFR TKI therapy in EGFR-mutated patients.
Immunotherapy as adjuvant therapy
IMpower 010
Heather Wakelee presented the first results of IMpower 010, a phase III, multicentre, open-label, randomized study. The trial enrolled patients with completely resected stage IB to IIIA NSCLC according to the UICC 7 staging and ECOG 0-1 who had received up to four cycles of cisplatin-based chemotherapy with pemetrexed, docetaxel, gemcitabine or vinorelbine. A total of 1005 patients were randomized 1:1 to 16 cycles of atezolizumab 1200 mg every 3 weeks vs best supportive care (BSC). The primary endpoint of investigator-assessed disease-free survival (DFS) and the secondary endpoint of overall survival (OS) were tested hierarchically, starting with DFS in PD-L1 TC ≥1% stage II-IIIA patients, then DFS in all stage II-IIIA patients and finally DFS in the intention-to-treat (ITT) population.
At ASCO 2021 the first results from the preplanned interim analysis at a median follow-up time of 32.2 months were presented. Atezolizumab showed a significant DFS benefit in PD-L1-positive (PD-L1 TC ≥1%) stage II-IIIA patients. Median DFS was not reached in the atezolizumab arm vs. 35.3 months with BSC (HR 0.66; 95% CI 0.50-0.88, p = 0.0039) [1].
In all stage II-IIIA patients regardless of PD-L1 status, the benefit was also significant but smaller: 42.3 months with atezolizumab vs. 35.3 months with BSC (HR 0.79; 95% CI 0.64-0.96, p = 0.02) [1]. In the subgroup analysis, mainly the PD-L1-positive (TC ≥1%) and highly positive (TC >50%) patients benefited from atezolizumab; in contrast, PD-L1-negative patients had no benefit. In addition, no benefit was seen for EGFR- and ALK-positive patients. The significance boundary was not crossed for DFS in the ITT population, and the OS data were immature and not formally tested.
Adverse events (AEs) of any grade occurred in 92.7% of patients receiving atezolizumab and 70.7% of BSC patients; grade 3/4 events occurred in 21.8% and 11.5%, respectively. Treatment was discontinued due to AEs in 18.2% of atezolizumab patients [1].
Heather Wakelee concluded that atezolizumab may be considered a practice changing adjuvant treatment option for patients with stage II-IIIA PD-L1 positive NSCLC. Nevertheless, we need to wait for further results.
Targeted therapy in genetic driven early stage NSCLC
In EGFR-mutated metastatic NSCLC, frontline EGFR inhibition is the gold standard and three different generations of EGFR inhibitors are approved. At ASCO 2021, two relevant trials evaluating TKIs in early stage EGFR-mutated NSCLC were presented.
The phase III IMPACT trial evaluated adjuvant gefitinib vs. cisplatin/vinorelbine in Japanese patients with completely resected EGFR-mutated stage II-III NSCLC. All patients had common EGFR mutations, either deletion 19 or L858R and median duration of follow-up was 71 months. The primary endpoint DFS was 36 months with gefitinib versus 25 months with chemotherapy. However, after 5 years, the Kaplan-Meier curves began to overlap and no significant difference in DFS was seen (HR 0.92; 95% CI 0.67-1.28; p = 0.63) [2]. The DFS rates at 5 years were 31.8% with gefitinib and 34.1% with chemotherapy. OS was also similar in each arm, the 5-year survival rates for gefitinib and cis/vin arms were 78.0 and 74.6%, respectively (HR for death of 1.03; 95%CI, 0.65-1.65; p = 0.89) [2].
It can be concluded that adjuvant gefitinib may prevent early relapse, but did not significantly prolong DFS or OS in patients with completely resected stage II-III, EGFR-mutated NSCLC.
As further relevant data concerning neoadjuvant/adjuvant TKI treatment, the final OS analysis of the phase II CTONG1103 trial was presented at ASCO 2021. In all, 72 Chinese patients with stage IIIA N2 NSCLC and common EGFR mutations received either neoadjuvant erlotinib 150 mg/day for 42 days, continued as adjuvant therapy for up to 12 months, or gemcitabine plus cisplatin for two neoadjuvant cycles and up to two adjuvant cycles. The progression-free survival (PFS) data were presented previously and showed a significant benefit for the TKI versus chemotherapy: median PFS with erlotinib was 21.5 months versus 11.4 months with chemotherapy (HR 0.39; 95% CI 0.23 to 0.67; p < 0.001) [3]. For OS, the median follow-up was 62.5 months and no significant difference was found: OS was 42.2 months with erlotinib and 36.9 months with chemotherapy (HR 0.83, 95% CI 0.47-1.47, p = 0.513). The 3- and 5-year OS rates were 58.6% and 40.8% in the erlotinib arm and 55.9% and 27.6% with chemotherapy, respectively [4].
Conclusion
IMpower 010 is the first study showing a DFS benefit of atezolizumab compared to BSC in early stage NSCLC. The study bears practice-changing potential; nevertheless, several open questions have to be addressed before regulatory approval. The benefit of atezolizumab is mainly seen in patients with high PD-L1 expression, is smaller in the PD-L1 TC ≥1% group, and is absent in PD-L1-negative patients. Therefore, patient selection will be key, especially in light of the potential for high-grade immune-related adverse events (irAEs) reported in the study. The OS endpoint is not mature yet; however, the curves of the two treatment groups appear to separate consistently, which may suggest that immunotherapy cures some patients in early stages and recapitulates the efficacy seen in advanced stage studies. Nevertheless, OS data are of major importance, as both of the mentioned studies of EGFR TKIs in the early stage neo-/adjuvant NSCLC setting were positive for DFS/PFS, yet this effect did not translate into an OS benefit. One explanation could be that it is unusual for a targeted therapy to actually cure lung cancer, as remissions are rarely sustained when patients stop treatment. This could be one difference in the discussion of adjuvant therapy with targeted therapies compared to immunotherapy. And it raises the question of whether the recent approval of adjuvant osimertinib, based on the impressive DFS benefit of the ADAURA trial, will help us to cure more EGFR-positive lung cancer patients.
There are still several open considerations for the optimal endpoint in neo/adjuvant NSCLC therapy which give room for lively discussion at the next upcoming meetings.
Publisher's Note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations. | 2021-10-18T13:44:47.173Z | 2021-10-18T00:00:00.000 | {
"year": 2021,
"sha1": "41d79e1bc39ab9cb132381f0f8ecf328f60ce481",
"oa_license": "CCBY",
"oa_url": "https://link.springer.com/content/pdf/10.1007/s12254-021-00770-w.pdf",
"oa_status": "HYBRID",
"pdf_src": "Springer",
"pdf_hash": "41d79e1bc39ab9cb132381f0f8ecf328f60ce481",
"s2fieldsofstudy": [
"Medicine",
"Biology"
],
"extfieldsofstudy": []
} |
198983011 | pes2o/s2orc | v3-fos-license | An Efficient Electrochemical Sensor Driven by Hierarchical Hetero-Nanostructures Consisting of RuO2 Nanorods on WO3 Nanofibers for Detecting Biologically Relevant Molecules
By means of electrospinning with the thermal annealing process, we investigate a highly efficient sensing platform driven by a hierarchical hetero-nanostructure for the sensitive detection of biologically relevant molecules, consisting of single crystalline ruthenium dioxide nanorods (RuO2 NRs) directly grown on the surface of electrospun tungsten trioxide nanofibers (WO3 NFs). Electrochemical measurements reveal the enhanced electron transfer kinetics at the prepared RuO2 NRs-WO3 NFs hetero-nanostructures due to the incorporation of conductive RuO2 NRs nanostructures with a high surface area, resulting in improved relevant electrochemical sensing performances for detecting H2O2 and L-ascorbic acid with high sensitivity.
Introduction
Recently, a variety of metal oxide nanostructures have been extensively utilized as efficient electrode substances owing to their outstanding electrocatalytic properties. Among them, ruthenium dioxide (RuO 2 ) has been well described as one of the best electrocatalysts for diverse energy related applications, such as the hydrogen evolution reaction (HER), oxygen evolution reaction (OER), and supercapacitors because of its high electric conductivity, catalytic activity, and thermal stability [1][2][3]. Especially, RuO 2 has been used as an efficient electrode system for supercapacitors owing to its excellent charging-discharging behavior [1,[4][5][6][7][8]. Generally, RuO 2 as a promising catalytic material is often used in the forms of hybrid structures or alloys with other abundant transition metals in consideration of the relatively high cost of RuO 2 . Thus, there have been previous reports regarding the use of RuO 2 nanostructures with other metal oxides as supercapacitors [9][10][11][12], and biosensing applications [1][2][3]13,14].
Tungsten trioxide (WO 3 ) nanostructures have been also extensively studied in various applications due to its earth-abundance, high durability, and chemical stabilities in aqueous acid media, as well as good electrochemical conductivity [15][16][17][18]. Thereby it has been developed as a catalyst for the hydrogen evolution reaction (HER) and supercapacitors in an acidic solution [19][20][21][22]. WO 3 also constitutes composites with other novel metals like Pt [23][24][25][26], Ir [17,23,27], and Ru [16,[28][29][30], or supporting materials. Nanostructured catalysts are applied to nonenzymatic electrochemical biosensors. Electrochemical properties can be enhanced from the increase of active surfaces. The detection of hydrogen peroxide (H 2 O 2 ) is important in not only biomedical and environmental applications, but also in the enzymatic system [31]. While ascorbic acid (AA) has an important role in the physiological function of organisms, a deficiency of AA causes several diseases [32,33]. Therefore, the detection and accurate quantification of target material with selectivity is highly required.
In this study, we introduce a facile fabrication of hybrid nanostructures consisting of single crystalline RuO 2 nanorods on eletrospun WO 3 nanofibers by utilizing electrospinning and thermal annealing processes. In addition, the fundamental electrochemical performances of RuO 2 nanorods-WO 3 nanofibers (RuO 2 NRs-WO 3 NFs) are carefully investigated, which confirm their characteristics of fast electron-transfer reactions and possibility as a catalytic sensing platform for detecting l-ascorbic acid (AA) and hydrogen peroxide (H 2 O 2 ) in phosphate buffered solution (PBS).
First, WO 3 nanofibers were synthesized by electrospinning and thermal annealing process according to the reported method [23]. To prepare electrospinning solution, 1.5 g WCl 6 were dissolved in 10.549 mL DMF with 1.25 g PVP and 0.191 mL acetic acid. After being stirred overnight, the solution was loaded into syringe and applied to the needle of the electrospinning system (Nano NC ESR 200R2). The needle was connected to a voltage power supply (applied voltage = 17.5 kV) at a flow rate of 5 µL/min, and the distance from needle tip to aluminum plate to collect as spun NFs was 15 cm. The collected electrospun NFs were calcinated at 500 • C for 1 h under a mixed gas atmosphere of 80 sccm of He and 10 sccm of O 2 with ramping rate of 1 • C/min. Ruthenium hydroxide (Ru(OH) 3 ) precursor was prepared by a precipitation process via the acid-base reaction with controlling pH of aqueous solution. The pH of the final precursor solution at about pH 10 was carefully achieved by slowly dropping 0.1 M NaOH dilute solution into 5 mM RuCl 3 ·xH 2 O aqueous solution [2,13]. After precipitation, the precursor solution was washed five times with deionized water, and then re-dispersed in 2~3 mL pure deionized water again. To grow RuO 2 NRs on WO 3 NFs, 2 mg of WO 3 NFs was dispersed into 1 mL deionized water and then mixed with 1 mL Ru(OH) 3 precursor solution. After sonication for 30 min, the mixed solution was directly dropped on the center of Si wafer. WO 3 nanofibers containing Ru(OH) 3 precursors loaded on the Si wafer was placed into the center of a furnace and calcined at 300°C for 5 h in air. The furnace was then allowed to cool to room temperature.
The surface morphology of as-grown products was examined by field emission scanning electron microscopy (FE-SEM; JEOL JSM-6700F). The detailed crystal structures were also investigated by a high-resolution transmission electron microscopy (HRTEM, Cs-corrected STEM, JEOL JEM-2100F) instrument equipped with selected area electron diffraction (SAED) micrographs and elemental EDX mapping with a Tecnai-F20 system operated at 200 kV. Additionally, high resolution X-ray diffraction measurement (XRD; Bruker D8 DISCOVER, Cu Kα radiation), and X-ray photoelectron spectroscopy (XPS; Theta Probe AR-XPS System. Al Kα radiation) were performed to investigate the crystal structure and surface binding energies of as-grown RuO 2 NR-WO 3 NFs.
For electrochemical measurements, a three-electrode system was used with a modified glassy carbon (GC) electrode (3 mm in diameter), a saturated calomel electrode (S.C.E.), and a coiled Pt wire (1 mm in diameter, length immersed in solution ~10 cm) as the working electrode, the reference electrode, and the counter electrode, respectively. All electrochemical experiments were conducted with a CHI 650E workstation (CH Instruments) and a BAS100B (BAS Inc.). To modify the surface of a GC electrode with the synthesized nanomaterials, 2 mg of RuO2 NR-WO3 NFs was suspended in 1.0 mL deionized water. Subsequently, 10 µL of the solution were dropped onto the GC electrode surface three times. Then, 10 µL of 0.05 wt% Nafion solution were loaded onto the modified GC electrode surface. Cyclic voltammetry (CV) measurements were used to analyze the capacitive behavior in 1 M H2SO4. For sensing experiments, linear sweep voltammetry (LSV) was also used with a rotating disk electrode (RDE) at a scan rate of 5 mV s−1 and a rotating speed of 1600 rpm, and amperometry measurements were performed in 0.1 M phosphate buffered saline (PBS) at physiological pH (7.4).
Results and Discussion
3.1. Synthesis of Hybrid Nanostructures of RuO2 Nanorods on Electrospun WO3 Nanofibers
Figure 1A,B show FE-SEM images of WO3 NFs annealed at 500 °C. The calcined WO3 NFs revealed a very fine structure and the diameter of the fibers was around 200 nm. On the other hand, after the heat treatment of the mixed solution composed of Ru(OH)3 precursors and WO3 NFs at 300 °C for 5 h, it is readily identified that RuO2 NRs were directly grown on the electrospun WO3 NFs, as shown in Figure 1C,D. Figure 1D represents the as-grown RuO2 NRs covering the entire surface of the WO3 NFs. The lateral dimension of the RuO2 NRs is estimated to be about 40 nm and the length up to 300 nm. Careful EDS measurements indicate that the atomic ratio of Ru to W is 45:55. According to our previous real-time study by in situ synchrotron XRD, a simple recrystallization process by thermal annealing might be responsible for the growth mechanism of the RuO2 NRs. It was carefully suggested that Ru diffusion to the amorphous nanoparticles, followed by diffusion to the growing surface of the nanorod, plays an essential role in the growth of RuO2 NRs in an oxygen ambient, which is supported by nucleation theory [34].
Figure 2B demonstrates that all peaks are closely matched with the monoclinic phase of WO3 [19,35]. On the other hand, the XRD spectrum of the composite RuO2 NR-WO3 NFs confirms the same monoclinic-phase WO3 peaks together with two major peaks at 27.1° and 34.8° corresponding to the (110) and (101) crystallographic planes of the tetragonal RuO2 structure, as displayed in Figure 2A [2,13]. To investigate the oxidation states of the Ru, W, and O atoms, XPS measurements were performed. In Figure 2C, two separated binding energies at 35.1 eV and 37.3 eV are clearly identified as the two spin-orbit states of W 4f5/2 and W 4f7/2, respectively, which indicates the oxidation state of +6 for W in the WO3 NFs [16,36]. The high-resolution Ru 3d and Ru 3p spectra are shown in Figure 2E,F. Although the peak position of Ru 3d3/2 overlaps with C 1s [16,37], the oxidation state of the Ru species is readily identified as Ru4+ based on the binding energies of 280.7 eV and 462.8 eV, indexed to Ru 3d5/2 and Ru 3p3/2, respectively [37,38]. In addition, the peak at 530.5 eV of O 1s is associated with O2− in the RuO2 and WO3 metal oxides, as shown in Figure 2D [37,38].
In addition, the peak at 530.5 eV of O 1s is associated with O 2-in RuO2 and WO3 metal oxides as shown in Figure 2D. Figure 3E reveals the existence of many different crystalline phases in a WO3 nanofiber which confirms the polycrystalline nature of a WO3 nanofiber. On the contrary, the fast Fourier transform (FFT) of the lattice-resolved image for a RuO2 nanorod in Figure 3F represents highly ordered lattice fringes with a single crystal nature. The values of lattice spacing of adjacent planes are estimated by about 0.318 nm and 0.263 nm, corresponding to those of between the (110) planes and (101) for the tetragonal RuO2, respectively. Furthermore, TEM-EDS element mapping analysis from the highangle annular dark field (HAADF) STEM image shown in Figure S1 confirms the homogenous distribution of Ru, W, and O in distinct regions in the hierarchical nanostructure. W atoms exist on the backbone of the nanofibers, whereas Ru atoms exclusively exist on the branched nanorods. Oxygen atoms exist both on the backbone of the nanofibers and branched nanorods. Thus, we successfully fabricate the high density of single-crystalline RuO2 nanorods on WO3 nanofibers by using a combination of an electrospinning process and a thermal annealing process. Our growth process thus provides a simple methodology for the fabrication of highly efficient electrocatalysts.
Electrochemical Properties for Capacitive Behaviors of RuO2 NRs-WO3 NFs
The general electrochemical activities of RuO2 NRs-WO3 NFs and WO3 NFs were examined by CV in 10 mM [Fe(CN)6] 3-aqueous solution containing 1 M KCl. Figure S2 displays CV curves of RuO2 NRs-WO3 NFs and WO3 NFs at a scan rate 100 mV s -1 . Voltammetric current peaks at RuO2 NRs-WO3 NFs are reversible, while those of WO3 NFs are quasi-reversible. It seems to be ascribed to that RuO2 NRs-WO3 NFs allow very facile heterogeneous electron transfer kinetics with high electric conductivities in contrast to WO3 NFs. Moreover, RuO2 NRs-WO3 NFs show a much larger charging current in CV than WO3 NFs.
To characterize the charging behavior of the synthesized materials, CV was measured over a potential range from 0.1 V to 0.9 V (vs. S.C.E.) in 1 M H2SO4, as seen in Figure 4. Figure 4A shows CV results comparing RuO2 NRs-WO3 NFs and WO3 NFs at a scan rate of 100 mV s−1. It supports the enhanced capacity of RuO2 NRs-WO3 NFs once the RuO2 NRs were grown on WO3 NFs. To examine the charging performance, the average specific capacitance values (Csp, F g−1) were calculated with the following Equation (1) using the CV curves shown in Figure 4B:
$C_{sp} = \frac{\int I \, dV}{v \, \Delta m \, \Delta V}$ (1)

where v is the scan rate (V s−1), Δm is the weight of electrode materials, ΔV is the potential range, and ∫I dV is the area under the CV curve [39]. At the scan rate of 10 mV s−1, the Csp values of the synthesized materials, RuO2 NRs-WO3 NFs and WO3 NFs, are 98.15 F g−1 and 0.95 F g−1, respectively; the Csp of RuO2 NRs-WO3 NFs is thus about 103-fold higher than that of WO3 NFs, as shown in Figure 4C. As the scan rate increases, Csp becomes smaller: the Csp of RuO2 NRs-WO3 NFs and WO3 NFs decreased to 57% and 42% of their initial values, respectively, as the scan rate increased from 10 mV s−1 to 200 mV s−1. This additionally indicates the successful decoration of WO3 NFs with RuO2 NRs, forming the hierarchical hetero-nanostructures.
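To make Equation (1) concrete, the following minimal Python sketch estimates Csp from a digitized CV curve by trapezoidal integration. The scan rate, electrode mass, potential grid, and current trace here are illustrative assumptions, not values taken from the measurements above.

import numpy as np

# Assumed, illustrative inputs -- not values from the paper
v = 0.01                                    # scan rate: 10 mV/s expressed in V/s
dm = 1.0e-3                                 # electrode material mass in g
E = np.linspace(0.1, 0.9, 200)              # potential window vs. S.C.E. (V)
I = 1.0e-3 * (1.0 + 0.1 * np.sin(6.0 * E))  # synthetic current response (A)

dV = E[-1] - E[0]                           # potential range Delta V (V)
# Trapezoidal rule for the area under the CV curve, i.e. the integral of I dV
area = np.sum(0.5 * (np.abs(I[1:]) + np.abs(I[:-1])) * np.diff(E))
C_sp = area / (v * dm * dV)                 # Equation (1), in F/g
print(f"specific capacitance ~ {C_sp:.1f} F/g")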
Electrochemical impedance spectroscopy (EIS) was also employed to examine the electrochemical behavior of RuO2 NRs-WO3 NFs and WO3 NFs. EIS measurement was carried out at 0.5 V (vs. S.C.E.) under the same conditions as the CV experiments, over a frequency range of 0.1 Hz–1000 kHz, as shown in Figure S3. The Nyquist plot of RuO2 NRs-WO3 NFs was closer to a vertical line than that of WO3 NFs, exhibiting the nearly pure capacitive behavior of RuO2 NRs-WO3 NFs [1,40]. The stability of RuO2 NRs-WO3 NFs for capacitance was demonstrated by monitoring the change of Csp during repeated CV cycles, as depicted in Figure 4D. RuO2 NRs-WO3 NFs maintained about 96% of the original Csp over 1000 CV cycles at a scan rate of 100 mV s−1.
Applications to Electrochemical Sensing of AA and H2O2
The electrochemical properties of RuO2 NRs-WO3 NFs for AA oxidation were also studied. LSV measurements in 0.1 M PBS were used to examine the oxidation of various biomaterials, such as AA, DA, UA, AP, and glucose. The chosen concentrations are slightly above the physiological concentrations. As shown in Figure 5A, AA oxidation started to occur from the most negative potential compared with the other biomaterials. Amperometric measurements of RuO2 NRs-WO3 NFs and WO3 NFs were therefore conducted at 0 V (vs. S.C.E.), which allows for the oxidation of AA only, excluding the other tested biomolecules, as seen in the LSV results of Figure 5A.
As observed in Figure 5B, the anodic currents of both electrodes increased linearly as the concentration of AA increased from 5 μM to 2 mM. The calibration curves based on the amperometric data are depicted in the inset of Figure 5B. The sensitivity of RuO2 NRs-WO3 NFs (171.7 μA mM−1 cm−2, R2 = 0.9990, normalized to the GC substrate electrode area, 0.072 cm2) was surprisingly increased by 244 times compared to that of WO3 NFs (0.704 μA mM−1 cm−2, R2 = 0.9990).
Most typical biological samples are complex, containing various oxidizable species, so selectivity to a targeted analyte is an essential requirement for any sensor. In Figure 6A, the current responses for AA oxidation were stable against the additions of 0.1 mM AP, 0.1 mM UA, 0.1 μM DA, and 5 mM glucose at 0 V. Additionally, the stability of RuO2 NRs-WO3 NFs was measured by monitoring the change of current at 0 V in 0.1 M PBS containing 0.3 mM AA. The amperometric response of RuO2 NRs-WO3 NFs retained 96% of the initial current level over a 4200-s measurement in Figure 6B, supporting its excellent stability. Table 1 summarizes the properties of RuO2 NRs-WO3 NFs in comparison with other Ru-based materials used as AA sensors (Table 1 notes: 3 Ref. [13], 4 Ref. [41], 5 Ref. [42], 6 Ref. [43]).
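As a numerical illustration of how such sensitivities are obtained, the sketch below fits a linear calibration of steady-state current against AA concentration and normalizes the slope by the geometric electrode area (0.072 cm2, as quoted above). The calibration points are synthetic placeholders, not the measured data.

import numpy as np

rng = np.random.default_rng(0)
# Placeholder calibration data: AA concentration (mM) vs. steady-state current (uA)
conc = np.array([0.005, 0.05, 0.1, 0.3, 0.5, 1.0, 2.0])
curr = 12.4 * conc + 0.2 + rng.normal(0.0, 0.05, conc.size)

slope, intercept = np.polyfit(conc, curr, 1)   # linear calibration fit
area_cm2 = 0.072                               # GC substrate electrode area quoted in the text
sensitivity = slope / area_cm2                 # uA mM^-1 cm^-2
r2 = np.corrcoef(conc, curr)[0, 1] ** 2
print(f"sensitivity = {sensitivity:.1f} uA/mM/cm^2, R^2 = {r2:.4f}")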
The catalytic effect of RuO2 NRs-WO3 NFs for H2O2 reduction was also measured. Figure 7A shows overlaid LSV results of RuO2 NRs-WO3 NFs and WO3 NFs. It presents clearly that H2O2 reduction at RuO2 NRs-WO3 NFs starts from a much less negative potential, with a much greater reduction current level, than that at WO3 NFs. In fact, the cathodic current level measured at −0.2 V (vs. S.C.E.) increased much more for RuO2 NRs-WO3 NFs than for WO3 NFs in response to successive increases of the H2O2 concentration (Figure 7B). The inset of Figure 7B shows the calibrated current vs. concentration with good linearity. The sensitivities obtained from the calibration curves are 619.7 μA mM−1 cm−2 (R2 = 0.9960) and 5.5 μA mM−1 cm−2 (R2 = 0.9384) for RuO2 NRs-WO3 NFs and WO3 NFs, respectively. The sensitivity of RuO2 NRs-WO3 NFs is 112-fold higher than the value of WO3 NFs, which supports the enhanced activity of RuO2 NRs-WO3 NFs toward H2O2 reduction. The H2O2 reduction current, instead of the oxidation current, was monitored to sense H2O2 in order to avoid interference from the many oxidizable species generally present in biological systems. Figure S4 represents the selectivity of RuO2 NRs-WO3 NFs, and a comparison with other materials is summarized in Table 2. RuO2 NRs-WO3 NFs were less stable for measuring the H2O2 reduction current than for AA oxidation. In fact, the H2O2 reduction current measured at −0.2 V decreased to ~60% of the initial current level after a 4200-s continuous measurement (data not shown).
Conclusions
We report the successful fabrication of single-crystalline RuO2 nanorods on WO3 nanofibers by electrospinning and calcination. Microscopic and spectroscopic measurements such as SEM with EDS, XRD, and XPS were used to characterize the structure and composition of the RuO2 NRs-WO3 NFs. The RuO2 NRs-WO3 NFs showed improved electrocatalytic activities over WO3 NFs in a series of electrochemical measurements. In 1 M H2SO4 solution, RuO2 NRs-WO3 NFs exhibit a 103-fold higher Csp (98.15 F g−1), with good stability and a sharper (more nearly vertical) Nyquist slope, than pure WO3 NFs. Additionally, the RuO2 NRs-WO3 NFs have dramatically enhanced sensing abilities, with 244-fold (171.7 μA mM−1 cm−2) sensitivity for AA oxidation and 112-fold (619.7 μA mM−1 cm−2) sensitivity for H2O2 reduction, respectively, compared to those of pure WO3 NFs. These results thus suggest that RuO2 NRs-WO3 NFs could be a promising candidate electrocatalyst for the fabrication of an efficient electrochemical sensor due to their highly effective electrochemical performance.
| 2019-07-31T13:03:55.647Z | 2019-07-26T00:00:00.000 | {
"year": 2019,
"sha1": "3eb545a3cf6741c68ac38bf0f14c4cab0161c6fb",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/1424-8220/19/15/3295/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "3eb545a3cf6741c68ac38bf0f14c4cab0161c6fb",
"s2fieldsofstudy": [
"Materials Science",
"Chemistry",
"Engineering",
"Biology"
],
"extfieldsofstudy": [
"Materials Science",
"Medicine",
"Computer Science"
]
} |
250289346 | pes2o/s2orc | v3-fos-license | Category-Specific Nuance Exploration Network for Fine-Grained Object Retrieval
Employing additional prior knowledge to model local features as a final fine-grained object representation has become a trend for fine-grained object retrieval (FGOR). A potential limitation of these methods is that they only focus on common parts across the dataset (e.g., head, body, or even leg) by introducing additional prior knowledge, but the retrieval of a fine-grained object may rely on category-specific nuances that contribute to category prediction. To handle this limitation, we propose an end-to-end Category-specific Nuance Exploration Network (CNENet) that elaborately discovers category-specific nuances that contribute to category prediction, and semantically aligns these nuances grouped by subcategory without any additional prior knowledge, to directly emphasize the discrepancy among subcategories. Specifically, we design a Nuance Modelling Module that adaptively predicts a group of category-specific response (CARE) maps via implicitly digging into category-specific nuances, specifying the locations and scales for category-specific nuances. Upon this, two nuance regularizations are proposed: 1) semantic discrete loss that forces each CARE map to attend to different spatial regions to capture diverse nuances; 2) semantic alignment loss that constructs a consistent semantic correspondence for each CARE map of the same order with the same subcategory via guaranteeing each instance and its transformed counterpart to be spatially aligned. Moreover, we propose a Nuance Expansion Module, which exploits context appearance information of discovered nuances and refines the prediction of the current nuance by its similar neighbors, leading to further improvement on nuance consistency and completeness. Extensive experiments validate that our CNENet consistently yields the best performance under the same settings against most competitive approaches on CUB Birds, Stanford Cars, and FGVC Aircraft datasets.
Introduction
Fine-grained object retrieval (FGOR) aims at retrieving images belonging to various subcategories of a certain metacategory and returning images with the same subcategory as the query image. It is a more challenging problem than general image retrieval due to the inherently subtle inter-class object variances among subcategories. As a result, the key to FGOR lies in picking out nuances buried in the local regions to address the aforementioned challenge of FGOR.
Recently, quite a few approaches (Zheng et al. 2018; Shen et al. 2017; Moskvyak et al. 2021) have been proposed for exploring nuances, which primarily bring together representation learning and fine-grained object auxiliary information into a framework to consider the nuances of a fine-grained object. However, these works require extra prior knowledge (i.e., object location, key points, or object parsing information) for discovering and aligning common parts across all subcategories, while neglecting the fact that these common parts are not always discriminative, which accordingly degrades the retrieval performance. For example, visually similar birds can be retrieved using their category-specific nuances that contribute to category prediction in Fig. 1, but previous part-based works select the common parts (e.g., head and tail) across all subcategories under the guidance of additional prior knowledge, so the selected parts do not always include the category-specific nuances. Therefore, we argue that additional prior knowledge can only provide the location of common parts across the dataset but cannot explicitly point out the category-specific nuances among subcategories. Given that additional prior knowledge is of little help here, how to effectively extract category-specific nuances and how to semantically align these nuances grouped by category are worthy of investigation for FGOR.
To this end, we propose an end-to-end Category-specific Nuance Exploration Network (CNENet) to elaborately discover category-specific nuances and semantically align these nuances of the same subcategory in order by introducing additional nuance regularizations, which directly emphasizes the discrepancy among subcategories. The CNENet consists of a Nuance Modelling Module (NMM) and a Nuance Expansion Module (NEM). NMM predicts a set of category-specific response (CARE) maps by implicitly digging into nuances relevant to categories under two nuance regularizations, which specify the location and scale of category-specific nuances. Concretely, the nuance regularizations are: 1) a semantic discrete loss that forces each CARE map to attend to different spatial regions to discover diverse nuances; 2) a semantic alignment loss that constructs a consistent semantic correspondence for each CARE map of the same order within the same subcategory by guaranteeing that each instance and its transformed counterpart are spatially aligned. The multiple nuances generated by NMM are expected to be as spatially discrete as possible to achieve semantic diversity of category-specific nuances. However, some vital nuances may cover the entire object or overlap with the others, resulting in some nuances being shrunk. Therefore, NEM exploits the context appearance information of the discovered nuances and refines the prediction of the current nuance by its similar neighbors, leading to a further improvement in nuance consistency and completeness. Finally, these two modules are cascaded and jointly optimized, without any pairwise metric losses, to learn the category-specific nuances, which have the property of benefiting FGOR performance.
Main contributions of this paper can be summarized:
• To the best of our knowledge, we are the first to dig into and align category-specific nuances grouped by category, rather than focusing on common parts across the dataset, in FGOR.
• We design two nuance regularizations: a semantic discrete loss to explore diverse category-specific nuances and a semantic alignment loss to semantically align nuances of the same subcategory, thus achieving category-specific nuance exploration in a self-supervised manner.
• We evaluate the proposed method on three datasets (CUB Birds, Stanford Cars, and FGVC Aircraft), and the results demonstrate that our CNENet achieves the state-of-the-art.
Related Work
Fine-grained Object Retrieval: Existing FGOR methods can be roughly divided into three groups. The first group, metric-based schemes, learns an embedding space where similar examples are attracted and dissimilar examples are repelled (Teh et al. 2020; Wang et al. 2019a; Boudiaf et al. 2020). PNCA++ (Teh et al. 2020) proposes a proxy-based deep metric learning (DML) solution to embed image-level features and thus represent the class distribution. The shortcoming of metric-based schemes is that they focus on the optimization of image-level features, which contain much noisy and non-discriminative information. Therefore, the second group, object-based schemes, focuses on localizing the objects in images by exploring the activation of features (Wei et al. 2017; Zheng et al. 2018). SCDA (Wei et al. 2017) localizes the objects and discards the noisy background to extract informative descriptors for FGOR. CRL (Zheng et al. 2018) designs an attractive object feature extraction strategy to facilitate the retrieval task. Instead of localizing object-level features, the third group, part-based schemes, tends to dig into common parts across the dataset under the guidance of additional prior knowledge (Zheng et al. 2018; Shen et al. 2017; Moskvyak et al. 2021). LFE (Shen et al. 2017) selects specific filters to localize the semantically coherent parts, which achieves the goal of encoding common regions. KAE-Net (Moskvyak et al. 2021) learns features corresponding to each keypoint position to construct a representation. However, it is difficult for these approaches to guarantee that the learnt features are discriminative enough. Different from these works, we propose CNENet to dig into category-specific nuances that contribute to category prediction, and thus explicitly emphasize discrepancies among subcategories.
Nuance exploration: Recently, nuance exploration has mainly been applied to fine-grained image recognition and has made great progress (Ding et al. 2019; Yang et al. 2018; Zhang et al. 2016; Zheng et al. 2019a; Wang et al. 2020c; Zhou et al. 2020; Wang et al. 2021, 2020a, 2019b). S3Ns (Ding et al. 2019) produces sparse attention to localize the object and discriminative nuances by collecting local maxima of class response maps. ACNet (Ji et al. 2020) introduces the attention transformer to facilitate coarse-to-fine hierarchical feature learning for grabbing discriminative nuances. CGP (Wang et al. 2020b) establishes correlations between regions by graph propagation to discover the more discriminative nuance groups. The recognition task maps the learned nuances to the category space without considering other samples, and is thus not sensitive to the order of the learned nuances. In contrast, the category-specific nuances in retrieval require matching with nuances from other samples in the dataset and are thus sequentially sensitive. Upon this, we design two nuance regularizations to adaptively discover and semantically align category-specific nuances guided by category, addressing the problem of sequence sensitivity. To our best knowledge, this is the first work to explore category-specific nuances in a self-supervised manner for FGOR.
Proposed Method
We aim to explore the nuances that contribute to category prediction for emphasizing discrepancies among subcategories in FGOR. To this end, we propose the Category-specific Nuance Exploration Network (CNENet). It introduces two new components: the nuance modelling module (NMM) to discover the category-specific nuances and align them guided by category, and the nuance expansion module (NEM) to refine the prediction of the current nuance by its similar neighbors. Our framework is illustrated in Fig. 2.
Figure 2 (caption): Framework of CNENet. The Nuance Modelling Module receives feature maps from the backbone network to discover and align category-specific response (CARE) maps with two nuance regularizations: the Semantic Discrete Loss L_SD, which forces CARE maps to capture diverse nuances, and the Semantic Alignment Loss L_SA, which semantically aligns CARE maps guided by category. Subsequently, the Nuance Expansion Module exploits context appearance information of discovered nuances and refines the prediction of the current nuance by its similar neighbors, restoring details missing due to the constraint of the semantic discrete loss. Finally, we extract the aligned category-specific nuances and concatenate them as retrieval features.
Nuance Modelling Module
Understanding discriminative semantics among subcategories is a prerequisite for retrieving visually similar images. A typical approach is to introduce the additional prior knowledge, i.e., bounding boxes or key points, to capture common parts across the dataset. However, these common parts cannot explicitly point out discrepancies among subcategories, and thus are useless. To handle this issue, we propose a Nuance Modelling Module (NMM) to help the network simultaneously discover the category-specific nuances and align them guided by category in a self-supervised manner.
For an input image X, we denote its feature maps F ∈ R c×h×w extracted by the convolutional blocks as the input of NMM, where c, h, w are the dimension, height, and width of the feature maps. To obtain category-specific nuances, NMM aims to discover and align the nuances of fine-grained objects with the same subcategory. Specifically, NMM consists of three sub-modules: category-specific response generation, semantic discrete loss, and semantic alignment loss. They are explained in detail as below.
Category-specific response generation. NMM first splits the feature maps F into l category-specific response (CARE) maps M = [M_1, M_2, ..., M_l] ∈ R^{h×w×l}. Concretely, these maps are generated by a light-weight generator G(·) followed by a normalization operation as follows:

$\tilde{M} = \mathrm{ReLU}(G(F)),$ (1)

where ReLU(·) denotes the rectified linear unit (ReLU) activation function, and G(·) is a convolutional operation with kernel size C × 1 × 1 × l. Then M̃ is passed through a min-max layer to normalize the nuanced response coefficients, which forces M into [0, 1]:

$M_k = \frac{\tilde{M}_k - \min(\tilde{M}_k)}{\max(\tilde{M}_k) - \min(\tilde{M}_k) + \varepsilon},$

where ε is a protection item to avoid dividing by zero, and is set to 10^{-5} in our experiments. Note that, through this lightweight generation, the only goal of learning CARE maps is to capture and represent the scales and locations of category-specific nuances connecting input images and the corresponding class information. Since the class information can implicitly determine the relevant and irrelevant features in F, optimal features would capture the relevant features while compressing F by suppressing the irrelevant visual patterns that do not contribute to the prediction of categories. Considering the corresponding relationship between the compressed F and the CARE maps M, F produces category-specific M, which thus indicates the spatial locations of category-specific nuances.
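A minimal PyTorch sketch of this generation step is given below, assuming ResNet-50 stage-5 features (2048 channels) and l = 2 maps purely for illustration; the 1 × 1 convolution plays the role of G(·), followed by ReLU and the per-map min-max layer.

import torch
import torch.nn as nn
import torch.nn.functional as F

class CAREGenerator(nn.Module):
    """Sketch of CARE map generation: 1x1 conv G(.), ReLU, per-map min-max."""
    def __init__(self, in_channels: int, num_maps: int, eps: float = 1e-5):
        super().__init__()
        self.g = nn.Conv2d(in_channels, num_maps, kernel_size=1)  # generator G(.)
        self.eps = eps  # protection term from the text

    def forward(self, feats: torch.Tensor) -> torch.Tensor:
        m = F.relu(self.g(feats))                      # (B, l, h, w), Eq. (1)
        flat = m.flatten(2)                            # (B, l, h*w)
        mn = flat.min(dim=2, keepdim=True).values
        mx = flat.max(dim=2, keepdim=True).values
        norm = (flat - mn) / (mx - mn + self.eps)      # min-max layer into [0, 1]
        return norm.view_as(m)

care = CAREGenerator(in_channels=2048, num_maps=2)     # l = 2 is an assumption
maps = care(torch.randn(4, 2048, 14, 14))              # e.g., ResNet-50 stage-5 features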
Semantic discrete loss. The category-specific response generation tends to activate category-specific nuances by utilizing the correlation between features and category information, but it does not consider the fact that CARE maps should cover diverse nuances of a fine-grained object. To ensure that the CARE maps can capture diverse nuances, we design the semantic discrete loss L_SD as a nuance regularization to force each CARE map to attend to a different spatial region. Specifically, we introduce L_SD to make the l CARE maps in M as discrepant with each other as possible, which is equivalent to minimizing the similarity among the CARE maps (Eq. 3). Once Eq. 3 is optimized, the CARE maps are clearly discrepant with each other, meaning that if a CARE map discovers one nuanced region, the other maps will be forced to activate other, spatially exclusive nuances.
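The exact form of Eq. 3 is not recoverable from the text; the sketch below instantiates "similarity among CARE maps" as the mean pairwise cosine similarity between flattened maps, which is one natural choice and assumes l ≥ 2.

import torch
import torch.nn.functional as F

def semantic_discrete_loss(maps: torch.Tensor) -> torch.Tensor:
    # maps: (B, l, h, w) with non-negative entries from the CARE generator
    b, l, h, w = maps.shape
    v = F.normalize(maps.flatten(2), dim=2)           # (B, l, h*w), unit L2 norm
    sim = torch.bmm(v, v.transpose(1, 2))             # (B, l, l) cosine similarities
    off = sim * (1.0 - torch.eye(l, device=maps.device))  # drop self-similarities
    return off.sum(dim=(1, 2)).mean() / (l * (l - 1))     # mean over ordered pairs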
Semantic alignment loss. The semantic discrete loss only forces the learned CARE maps to be discrepant in space, capturing diverse nuances for fine-grained objects. Nonetheless, it cannot guarantee that the activated CARE maps of the same subcategory are semantically corresponding in order, which leads to feature incoherency for images of the same subcategory and accordingly decreases the retrieval performance. Inspired by the data augmentation stage of fully supervised object detection or semantic segmentation (Li et al. 2020), the spatial annotations should be applied with the same affine transformation as the input images. This introduces an implicit equivariant regularization for the network to enforce spatial alignment between transformed images and the corresponding annotations. Therefore, we design a semantic alignment loss as an implicit equivariant regularization to imitate the contribution of full supervision, making the selected nuances semantically correspond to those from other samples of the same subcategory. Concretely, we expand the network into a shared-weight siamese structure to integrate the semantic alignment loss L_SA into the original network, thus being able to semantically align category-specific nuances guided by category (Eq. 4), where G(B(·)) represents the backbone network B(·) followed by the category-specific response generation operation G(·), and T(·) is any spatial affine transformation, e.g., rescaling, rotation, flip, and so on. One branch, T(G(B(I))), applies the transformation to the CARE maps to output T(M^O_k); the other branch, G(B(T(I))), warps the input samples by the same affine transformation before the feed-forward of the network to output the transformed CARE maps M^T_k. Therefore, according to Eq. 4, regularizing the CARE maps from the two branches to guarantee spatial correspondence can be rewritten as Eq. 5. Moreover, to further improve the ability of the network to semantically align nuances selected from images of the same subcategory, we change the data distribution of the input images by utilizing content augmentation (e.g., Gaussian blur, saturation adjustment) in addition to the spatial affine transformations. This enlarges the distance between original samples and transformed samples, which further narrows the supervision gap between fully and weakly supervised signals. By encouraging spatial correspondence between CARE maps of the same instance under different affine transformations, an effective category-specific nuance generator is learned that matches the discrete but semantically consistent nuances of the same subcategory in order.
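A sketch of the siamese alignment pass is given below; care_net stands for the composition G(B(·)), and the L2 penalty between T(G(B(I))) and G(B(T(I))) is an assumed instantiation of L_SA. The content augmentations (e.g., Gaussian blur) applied to the input branch in the paper are omitted here.

import torch
import torch.nn.functional as F

def semantic_alignment_loss(care_net, images, transform):
    # care_net: backbone B followed by the CARE generator G (shared weights)
    maps_orig = care_net(images)                   # G(B(I)) -> (B, l, h, w)
    maps_of_warped = care_net(transform(images))   # G(B(T(I)))
    warped_maps = transform(maps_orig)             # T(G(B(I)))
    return F.mse_loss(warped_maps, maps_of_warped)

# Example affine T that acts identically on images and maps: a horizontal flip.
flip = lambda t: torch.flip(t, dims=[-1])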
Nuance Expansion Module
The nuances generated by NMM are kept as spatially discrete as possible to ensure the semantic diversity of the discovered nuances. However, some vital nuances could cover the entire object or overlap with other nuances, resulting in some nuances being shrunk due to the constraint of the semantic discrete loss. To handle this limitation, we propose a Nuance Expansion Module (NEM) to exploit the context appearance information of the discovered nuances and refine the prediction of the current nuance by its similar neighbors.
NEM works as a reinforcement operation by capturing context feature dependency to revise the category-specific nuances. We therefore adopt the core part of the self-attention mechanism, with some modifications, as the key structure of NEM. NEM consists of two steps: 1) pixel correlation prediction and 2) nuance reassembly. Before examining the two steps, let us review the self-attention mechanism.
Revisiting self-attention. The self-attention mechanism matches the idea, shared by most related methods, of using the similarity of pixels to refine the original activation regions. Following the usual notation, the general self-attention mechanism can be integrated into NEM to refine the CARE maps M:

$E_i = \frac{1}{N(M_i)} \sum_{\forall j} f(M_i, M_j)\, \eta(M_j),$ (6)

where

$f(M_i, M_j) = \vartheta(M_i)^{\top} \delta(M_j),$ (7)

and the three embedding functions ϑ, δ, η can be implemented by individual 1 × 1 convolution operations. Here M_i and E_i respectively denote the original and refined CARE maps at spatial position index i (likewise for j); the function η(M_j) provides a feature vector of the input M_j at each position, and all of these are integrated into position i based on the correlation coefficient given by f(M_i, M_j), which calculates the dot-product feature affinity in an embedding space. The output value E is normalized by N(M_i) = Σ_{∀j} f(M_i, M_j). However, since the CARE maps M are constrained by the semantic discrete loss and are thereby orthogonal to each other, f(M_i, M_j) equals 0, so this formulation fails to refine M.
Pixel correlation prediction. To handle this problem, we select the feature vectors in the high-level features F, rather than the orthogonal CARE maps M, to learn the pixel correlation. More importantly, since features contain more visual clues than CARE maps, we can obtain more accurate correlation coefficients. Specifically, the feature projection layer can be implemented by an individual convolution operation as follows:

$\tilde{F} = W_{\vartheta} \ast F + b,$

where W_ϑ ∈ R^{C×1×1×C_1} and b are the learned weight parameters and bias vector of a convolution layer ϑ, respectively; 1 × 1 is the size of the convolution kernel, and F̃ denotes the new feature maps. Unlike classical self-attention in object detection, our network only provides image-level supervision and two nuance regularizations, which are not as accurate as full supervision, so we reduce the number of parameters by removing the two embedding functions δ, η to avoid overfitting on inaccurate supervision. Taking only a single pair of positions as an example, the correlation of two positions p_1 and p_2 in F̃ is then defined as

$f(\tilde{F}_{p_1}, \tilde{F}_{p_2}) = \mathrm{ReLU}\!\left( \left\langle \frac{\tilde{F}_{p_1}}{\|\tilde{F}_{p_1}\|}, \frac{\tilde{F}_{p_2}}{\|\tilde{F}_{p_2}\|} \right\rangle \right).$

Here we take the inner product ⟨·,·⟩ in the normalized feature space to calculate the reassembly correlation coefficient f(F̃_{p_1}, F̃_{p_2}) between the current pixel F̃_{p_1} and the others. Compared to Eq. 7, we use a ReLU activation function with L1 normalization to mask out irrelevant pixels and generate a correlation map that is smoother in relevant regions.
Nuance reassembly. With the reassembly correlation coefficients f(F̃_{p_1}, F̃_{p_2}), Eq. 6 can be rewritten as

$E_{p_1} = \sum_{\forall p_2} \frac{f(\tilde{F}_{p_1}, \tilde{F}_{p_2})}{\sum_{\forall p} f(\tilde{F}_{p_1}, \tilde{F}_{p})}\, M_{p_2},$

where the refined CARE maps E ∈ R^{l×H×W} are the weighted sum of the original CARE maps M with the normalized f(F̃_{p_1}, F̃_{p_2}). Moreover, we remove the residual connection to keep the same activation intensity as the original CARE maps. With these refined CARE maps, we can split the feature maps F into l nuances as follows:

$U_k = F \odot E_k,$

where ⊙ denotes element-wise multiplication.
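The two NEM steps can be sketched as follows. The projected channel width C1 = 256 is an assumption, and the small constant in the denominator merely guards the L1 normalization against empty rows.

import torch
import torch.nn as nn
import torch.nn.functional as F

def nuance_expansion(feats, maps, proj):
    # feats: (B, C, h, w) high-level features; maps: (B, l, h, w) CARE maps
    b, l, h, w = maps.shape
    f = F.normalize(proj(feats).flatten(2), dim=1)        # (B, C1, N), unit vectors per position
    corr = F.relu(torch.bmm(f.transpose(1, 2), f))        # (B, N, N) non-negative affinities
    corr = corr / (corr.sum(dim=2, keepdim=True) + 1e-8)  # L1-normalize each row
    refined = torch.bmm(maps.flatten(2), corr.transpose(1, 2))  # weighted sum of map values
    return refined.view(b, l, h, w)                       # no residual connection

proj = nn.Conv2d(2048, 256, kernel_size=1)                # feature projection (C1 = 256 assumed)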
Once the feature maps are split into l nuances according to the refined CARE maps, the features of the k-th nuance, u_k = g(U_k) ∈ R^C, are extracted by global average pooling g(·). Finally, the output features f ∈ R^{(l+1)×C} for retrieval can be represented as the concatenation

$f = [u_1, u_2, \ldots, u_l, g(F)],$

where the additional (l+1)-th entry is the globally pooled feature of F.
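A sketch of this descriptor assembly is shown below; treating the extra (l+1)-th entry as the globally pooled feature of F is an inference from the stated dimension, not an explicit statement in the text.

import torch

def retrieval_features(feats, refined_maps):
    # feats: (B, C, h, w); refined_maps: (B, l, h, w)
    parts = [(feats * refined_maps[:, k:k + 1]).mean(dim=(2, 3))
             for k in range(refined_maps.size(1))]   # GAP of each masked nuance
    parts.append(feats.mean(dim=(2, 3)))             # pooled global feature (assumed)
    return torch.cat(parts, dim=1)                   # (B, (l+1)*C)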
Loss function
The full multi-task loss L combines the classification cross-entropy loss L_CE with the two nuance regularizations, the semantic discrete loss L_SD and the semantic alignment loss L_SA.
Experiments
Experimental Setting
Datasets. CUB-200-2011 (Branson et al. 2014) contains 200 bird subcategories with 11,788 images. We utilize the first 100 classes (5,864 images) for training and the rest (5,924 images) for testing. The split of Stanford Cars (Krause et al. 2013) is similar to that of CUB: it contains 196 classes with 16,185 images, with the first 98 classes (8,045 images) for training and the remaining classes (8,131 images) for testing. FGVC Aircraft (Maji et al. 2013) is divided into the first 50 classes (5,000 images) for training and the remaining 50 classes (5,000 images) for testing.
Evaluation protocols. We evaluate the retrieval performance by Recall@K with cosine distance, averaging recall scores over all query images in the test set, and strictly follow the setting in (Song et al. 2016). Specifically, for each query, our model returns the top K most similar images. Among the top K returned images, the score is 1 if there exists at least one positive image, and 0 otherwise.
Implementation details. We apply the widely used ResNet (He et al. 2016) in our experiments with pre-trained parameters. The input raw images are resized to 256 × 256 and cropped to 224 × 224. We train our models using the Stochastic Gradient Descent (SGD) optimizer with a weight decay of 0.0001, momentum of 0.9, 90 epochs, and a batch size of 32 on one GTX 2080ti GPU. The initial learning rate is set to 10^{-5}, with an exponential decay of 0.9 after every 5 epochs.
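Under this protocol, Recall@K can be computed as in the following sketch, where embeddings are L2-normalized so that cosine similarity reduces to a dot product and each query is excluded from its own neighbor list.

import numpy as np

def recall_at_k(embeddings: np.ndarray, labels: np.ndarray, k: int) -> float:
    x = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    sim = x @ x.T                                  # cosine similarity matrix
    np.fill_diagonal(sim, -np.inf)                 # a query never retrieves itself
    topk = np.argsort(-sim, axis=1)[:, :k]         # indices of the K nearest neighbors
    hits = (labels[topk] == labels[:, None]).any(axis=1)  # >= 1 positive in top K
    return float(hits.mean())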
Ablation Experiments
We conduct ablation experiments to illustrate the effectiveness of the proposed modules, including the Nuance Modelling Module (NMM) and the Nuance Expansion Module (NEM). The baseline method uses ResNet-50 as the backbone network, followed by an FC layer as the classifier, and is trained with L_CE in the same setting. As shown in Tab. 1, the contribution of each component is revealed.
(Table caption) CUB-200-2011, Stanford Cars 196 and FGVC Aircraft datasets. "Arch" denotes the architecture of the backbone network. "R50" and "In3" represent Resnet50 (He et al. 2016) and Inception V3 (Szegedy et al. 2016), respectively.
Compared with the baseline, NMM improves the Recall@1 accuracy by 4.9% owing to discovering category-specific nuances and semantically aligning them guided by category. Moreover, we also verify the effectiveness of the semantic discrete loss L_SD and the semantic alignment loss L_SA, and find that L_SA plays the more vital role in FGOR. Based on the above results, we apply the original CARE maps generated by NMM to refine the selected nuances (Self-attention), and the performance drops by 0.9%. This result verifies that directly using self-attention cannot refine the CARE maps while introducing more learnable parameters, further making the network overfit on them. Therefore, NEM learns the correlation from the features for refining the CARE maps, and outperforms BL + NMM by 3.3%. Existing metric-based methods use or design pair-wise losses (e.g., the triplet loss) to perform the retrieval task. We therefore add the triplet loss to further constrain the learned features to be more compact, but the accuracy drops by 1.1%. This means that the pair-wise constraint limits the discriminative ability of the feature representation, whereas our model can directly emphasize category-specific discrepancy to minimize the intra-class variances and maximize the inter-class differences. These results demonstrate that each module plays a role in effectively discovering category-specific nuances and semantically aligning them guided by the category information.
Comparison with the State-of-the-Art Methods
We compare our CNENet with state-of-the-art (SOTA) fine-grained object retrieval approaches. In Tab. 2, the performance of different methods on the CUB-200-2011, Stanford Cars-196, and FGVC Aircraft datasets is reported. In the table, from top to bottom, the methods are separated into three groups: (1) metric-based frameworks, (2) localization-based networks, and (3) our CNENet.
The success of these models based on deep metric learning can be largely attributed to their ability to precisely identify negative/positive pairs by enlarging/shrinking their distances, which indirectly explores the discriminative ability of features. Despite this encouraging achievement, the existing works still have limited ability to learn discriminative features across different subcategories, because they pay attention mainly to the optimization of global features while overlooking nuances buried in the local regions. Existing localization-based works tend to localize regions to directly improve the discriminative ability of the feature representation. Although the localization-based networks work well on various datasets, it is difficult for them to guarantee that the learned features are discriminative enough. Unlike these works, we propose CNENet to dig into category-specific nuances that contribute to category prediction, and thus explicitly emphasize discrepancies among subcategories. Therefore, our CNENet approach achieves a new SOTA without any extra annotations and enjoys consistent improvement on various datasets.
As shown in Tab. 2, our approach outperforms the deep metric learning-based methods in the first group, which indicates that the proposed method can better minimize the intra-class variances and maximize the inter-class distances by directly exploring the category-specific nuances. Recent localization-based works demonstrate the importance of localizing objects/parts. We run CNENet to directly learn category-specific nuances from images for emphasizing discrepancies among subcategories and achieve the new state-of-the-art.
Discussions
Response to Nuances. One of the keys to fine-grained images is to pick out discriminative nuances for improving the discriminability of features. To further illustrate the effectiveness of our proposed CNENet, which can attend to and semantically align category-specific nuances guided by category, we visualize the category-specific response (CARE) maps learned by NMM and NEM, respectively.
Figure 3 (caption): In the first column, we show an input image with the red and yellow boxes respectively projected by NMM and NEM. The second and fourth columns are CARE maps generated by NMM. The third and fifth columns are the refined category-specific response maps. The first and second rows have the same subcategory, and the third row has a different subcategory.
Fig. 3 illustrates individual response nuances for three bird images of two subcategories. We can observe that each CARE map M_1, M_2 generated by NMM focuses on a certain nuance different from the others, unaffected by pose or viewpoint. Moreover, CARE maps of the same order emphasize the same semantic information in images of the same subcategory, whereas this relationship does not exist for images with different subcategories. To verify the effectiveness of NEM, we also visualize the refined CARE maps E, which expand the shrunken nuances by utilizing the correlation between feature vectors. Compared with the original maps M_i, the corresponding E_i can pay attention to the entire nuance rather than the shrunken one caused by the constraint of the semantic discrete loss, which restores some discriminative nuances and further improves the discriminative ability of the feature representation. To display the contribution of NMM more intuitively, we roughly project the localization of the nuances generated by NMM and NEM into the yellow and red bounding boxes in the images.
Visualized Distributions. To illustrate the impact of CNENet on exploring subcategory discrepancy, we carefully select 10 subcategories with small discrepancy from the testing set to visualize the distributions of the learned features in Fig. 4, where each distinct color denotes a fine-grained subcategory. As shown in Fig. 4(a), the features extracted by the baseline network have difficulty alleviating the large intra-class variances, and using these features thus degrades the retrieval performance. In Fig. 4(b), the features learnt with CNENet are well clustered by subcategory. Besides, the distance between the features of different subcategories is larger, and the features of the same subcategory are more compact. Furthermore, improving the discriminative ability of features by discovering and aligning category-specific nuances achieves a vital improvement.
The more, the better? We show the retrieval performance with different numbers of category-specific nuances in Tab. 3. The performance of CNENet drops when the number of nuances increases to 4. This result means that an excessive number of category-specific nuances can introduce more useless features, while too few can miss informative features. It should be clarified that the nuances are explicitly divided into l groups from the spatial perspective for emphasizing the discrepancies among subcategories. Nevertheless, since each category-specific response map may contain different semantic nuances, the number of nuances could differ from a semantic perspective.
Conclusion
In this paper, we propose a novel method called the Category-specific Nuance Exploration Network (CNENet) for FGOR, which solves the problems of how to effectively extract category-specific nuances and how to semantically align these nuances grouped by category. The exploration strategy can be considered a self-supervised scheme that enables the network to adaptively dig into category-specific nuances by category. Extensive experiments show that the retrieval performance can be improved significantly by discovering the nuances. Last but most important, our algorithm is end-to-end trainable and achieves the state-of-the-art on the CUB-200-2011, Stanford Cars, and FGVC Aircraft datasets. | 2022-07-06T15:06:08.495Z | 2022-06-28T00:00:00.000 | {
"year": 2022,
"sha1": "701507ba60b6f73943da688724618233bfd309c1",
"oa_license": null,
"oa_url": "https://ojs.aaai.org/index.php/AAAI/article/download/20152/19911",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "01fa185cf36733ccb21c9974a1c8d0863ee40975",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": []
} |
215798732 | pes2o/s2orc | v3-fos-license | Serum Carbohydrate Antigen 199 as a Biomarker for Evaluating Patients with Choledocholithiasis
Aims Choledocholithiasis is a common and yet potentially debilitating disease of the biliary tract. While certain patients with this disease remain largely asymptomatic or experience mild discomfort, in several cases patients can suffer biliary inflammation and other serious symptoms. Previous studies have detected elevated serum levels of carbohydrate antigen 199 in patients with choledocholithiasis. We wanted to know whether the serum CA199 level in patients with choledocholithiasis is related to the level of inflammation in these patients. Methods In the present study, we separated a cohort of 135 choledocholithiasis patients into two groups based on their white blood cell counts, which were either 3.5–9.5 × 10^9/L or ≥9.5 × 10^9/L. We also divided patients into two groups according to CRP < 10 mg/L and CRP ≥ 10 mg/L. At the same time, the correlation between CA199 and CRP levels was analyzed. Results We then used a rank-sum test to compare serum carbohydrate antigen 199 levels between these groups, revealing significantly higher levels of this antigen in patients with a white cell count ≥9.5 × 10^9/L (Z = −3.584, P < 0.01). Grouping by CRP, the difference in CA199 levels between the two groups was statistically significant (P < 0.01). The correlation analysis between CA199 and CRP showed an obvious correlation (r = 0.574). Conclusion This suggests that in patients with choledocholithiasis, higher circulating carbohydrate antigen 199 levels may correspond to a higher degree of inflammation.
Introduction
Choledocholithiasis is a common disease that has become increasingly frequent in recent years, with the most common treatments for this condition being either laparotomy or laparoscopic cholecystectomy and ERCP combined with endoscopic sphincterotomy [1][2][3]. If left untreated, the persistence of stones within the common bile duct can interfere with the normal excretion of bile, potentially leading to secondary acute cholangitis. While diagnostic strategies for detecting choledocholithiasis have improved significantly in recent years, there has not been a corresponding reduction in morbidity and mortality as a result of acute cholangitis secondary to choledocholithiasis. This is thought to be at least in part the consequence of a lack of satisfactory objective indices suitable for diagnosing acute cholangitis at an early stage. Previous studies have reported that elevated serum levels of carbohydrate antigen 199 (CA199) are detectable in patients with choledocholithiasis, rising to over 1000 U/ml in those patients suffering from cholangitis [4,5]. We therefore hypothesized that these CA199 levels may be linked with the levels of inflammation in patients with choledocholithiasis, such that more severe inflammation may be associated with increased levels of circulating CA199. Levels of circulating white blood cells (WBCs) can be easily and cost-effectively measured in patients and can serve as an easily interpretable marker of systemic inflammation [6]. Elevated WBC counts often correspond to increased systemic inflammation, making them a potentially valuable diagnostic tool in the context of other findings [7]. As such, in the present study, we sought to determine whether WBC levels were correlated with CA199 levels in the serum of patients with choledocholithiasis, thus supporting a link between inflammation and levels of this putative circulating biomarker.
Patients.
A total of 135 patients with choledocholithiasis who had been admitted to our hospital between 2016 and 2018 were enrolled in this study. Choledocholithiasis in these patients was confirmed via B-mode ultrasound, CT, magnetic resonance cholangiopancreatography, or intraoperative examination. On the day of admission, we obtained information pertaining to patient age, gender, C-reactive protein count, WBC count, CA199 levels, and total bilirubin levels.
Methods.
(1) Grouping: the 135 patients with choledocholithiasis were divided into two groups according to the CRP count, one group with CRP < 10 mg/L and the other with CRP ≥ 10 mg/L. We then used appropriate statistical tests to compare the CA199 levels between the two groups. (2) Examination items: choledocholithiasis was confirmed by B-ultrasound, computed tomography, or MR cholangiopancreatography. Acute cholangitis was diagnosed in patients with choledocholithiasis if Charcot's triad was found. Total bilirubin, the tumor marker carbohydrate antigen 199 (CA199), WBC, and CRP were measured for each patient. These tests were completed within 12 hours of admission. Table 1 compiles the results of comparisons of gender, age, and total bilirubin between the patient groups. No significant difference in gender was detected between groups as assessed via a chi-squared test (P = 0.528 > 0.05). Similarly, no difference in age was detected between groups as assessed via an independent-samples t test (P = 0.085 > 0.05). Total bilirubin levels in this study were not normally distributed and were thus expressed as median (1st quartile, 3rd quartile). There was a significant difference in total bilirubin levels between the two groups, as determined via a rank-sum test (P = 0.017 < 0.05).
Results
As CA199 levels were also non-normally distributed, they too were expressed as median (1st quartile, 3rd quartile). Consistent with our hypothesis, there was a significant difference in CA199 levels between the groups as assessed via a rank-sum test (P = 0.000 < 0.01). The results of these comparisons are shown in Table 2. The difference in CA199 levels between the two groups is also clearly visible in Figure 1. We further analyzed the correlation between WBC counts and CA199 levels, revealing a significant positive correlation between these two variables (r = 0.255; P < 0.01).
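The group comparison and correlation reported here can be reproduced in form with scipy, as in the sketch below; the generated values are placeholders standing in for the patient data, which are available only from the corresponding author.

import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
# Placeholder CA199 values (not study data) for the two WBC groups
ca199_low = rng.lognormal(mean=3.0, sigma=1.0, size=70)   # WBC 3.5-9.5 x 10^9/L
ca199_high = rng.lognormal(mean=4.0, sigma=1.0, size=65)  # WBC >= 9.5 x 10^9/L

z, p = stats.ranksums(ca199_high, ca199_low)              # Wilcoxon rank-sum test
print(f"Z = {z:.3f}, P = {p:.4f}")

# Correlation between paired WBC counts and CA199 levels
wbc = np.concatenate([rng.normal(7, 1.5, 70), rng.normal(12, 2.0, 65)])
ca199 = np.concatenate([ca199_low, ca199_high])
r, p_r = stats.pearsonr(wbc, ca199)
print(f"r = {r:.3f}, P = {p_r:.4f}")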
According to the results of the C-reactive protein grouping analysis, as shown in Table 3, the difference in CA199 levels between the two groups was statistically significant (P < 0.01). The correlation between them was analyzed, and the results showed an obvious correlation (r = 0.574, P < 0.01).
The area under the curve (AUC) of CA199 was 0.977 (95% CI: 0.953–1.000), and the 59.54 U/L threshold had the highest diagnostic accuracy, with a sensitivity of 93.1% and a specificity of 93.5%. The area under the curve (AUC) of WBC was 0.686 (95% CI: 0.594–0.775), and the 8.9 × 10^9/L threshold had the highest diagnostic accuracy, with a sensitivity of 65.5% and a specificity of 68.8% (Table 4 and Figure 2).
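This corresponds to a standard ROC computation; the sketch below uses Youden's J statistic (sensitivity + specificity − 1), one common way a "highest diagnostic accuracy" cutoff is chosen, on placeholder data rather than the study's measurements.

import numpy as np
from sklearn.metrics import roc_auc_score, roc_curve

rng = np.random.default_rng(2)
# Placeholder labels (1 = cholangitis) and marker values -- not study data
y = rng.integers(0, 2, size=135)
ca199 = np.where(y == 1,
                 rng.lognormal(5.0, 0.8, size=135),
                 rng.lognormal(3.0, 0.8, size=135))

auc = roc_auc_score(y, ca199)
fpr, tpr, thr = roc_curve(y, ca199)
best = np.argmax(tpr - fpr)                   # Youden's J picks the cutoff
print(f"AUC = {auc:.3f}, cutoff = {thr[best]:.1f}, "
      f"sensitivity = {tpr[best]:.1%}, specificity = {1 - fpr[best]:.1%}")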
Factors showing a linear correlation in the correlation analysis were further introduced into a multiple linear regression equation, and the results showed that CA199 was related to the WBC level, while age, gender, and total bilirubin had no linear relationship with WBC (all P values were >0.05) (Table 5). Factors showing a linear correlation were likewise introduced into a multiple linear regression equation for CRP, and the results showed that there was no linear relationship between CA199, age, gender, or total bilirubin and CRP (all P values were >0.05) (Table 6).
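The multivariable analysis can be expressed as an ordinary least squares regression, sketched below with statsmodels on placeholder covariates; the reported tables would correspond to the fitted coefficients and their P values.

import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(3)
n = 135
# Placeholder covariates standing in for the study variables
age = rng.normal(55, 12, n)
gender = rng.integers(0, 2, n).astype(float)
tbil = rng.lognormal(3.0, 0.7, n)
wbc = rng.normal(9, 3, n)
ca199 = 15 * wbc + rng.normal(0, 120, n)

X = sm.add_constant(np.column_stack([age, gender, tbil, wbc]))
fit = sm.OLS(ca199, X).fit()     # multiple linear regression of CA199 on the covariates
print(fit.params)                # coefficients
print(fit.pvalues)               # per-covariate P values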
Discussion
CA199 is a carbohydrate antigen that is linked to the Lewis blood group antigen classification system. First discovered in human colorectal cancer cells, CA199 has since been found to be produced by a range of normal epithelial cell types, including those in the pancreas and the bile ducts [9]. In current clinical practice, CA199 levels are commonly used as a biomarker of malignant tumors of the biliary tract and pancreas [10]. In most benign diseases, CA199 levels remain low, although levels in patients affected by benign obstructive jaundice (including choledocholithiasis, cholangitis, and Mirizzi syndrome) are elevated [11,12]. The mechanistic basis for such elevation is unclear. Proposed mechanisms include increased CA199 production in the bile duct as a consequence of increased biliary pressure, either directly or indirectly as a consequence of inflammation. These changes ultimately increase CA199 production and/or enhance its release into circulation, allowing it to be detected clinically. In addition, cholangitis is associated with inflammatory cytokine production [13].
Cholangitis is a form of biliary inflammation and infection that is secondary to obstruction, with choledocholithiasis being the most common cause of such obstructions. Infections in this context are most frequently caused by Escherichia coli, Klebsiella, Enterobacter, and Enterococcus. If untreated, cholangitis can rapidly progress to sepsis and endanger the life of affected individuals [14]. Symptoms of cholangitis include fever, jaundice, and right upper abdominal pain, which affect up to half of patients. In contrast, only about 5% of patients exhibit a combination of these symptoms together with psychiatric changes and hypotension indicative of severe sepsis [15]. The early identification and treatment of patients with choledocholithiasis are thus essential in order to prevent its progression to serious cholangitis. As the symptoms of this condition are nonspecific, the diagnosis of cholangitis is typically based upon laboratory values and imaging studies, including elevated WBC and bilirubin levels [14].
From the ROC curves it can be seen that both CA199 and WBC have good sensitivity for the diagnosis of cholangitis, but the critical value of WBC is still within the normal physiological range, which may be of limited significance for clinicians making judgments; CA199, in contrast, is more sensitive, and its critical value lies outside the normal range, so it is more helpful for clinicians to make judgments. (Table 4 notes: The test result variable WBC has at least one tie between the positive actual state group and the negative actual state group; statistics may be biased. a Under the nonparametric assumption. b Null hypothesis: true area = 0.5.) The correlation analysis results showed that CA199 had a certain correlation with WBC and CRP, but the correlation was not very strong. In addition, we conducted a further linear multivariate analysis on WBC and CRP, and the results indicated that CA199 was an independent influencing factor for WBC but had no significant correlation with CRP. These conclusions indicate that the level of CA199 represents the inflammatory level of patients to some extent, but this representation may not be very accurate, so comprehensive clinical analysis is needed, and CA199 can only be used as part of the reference. Our results strongly suggest that CA199 levels can be used to gauge the risk of cholangitis secondary to choledocholithiasis and represent an ideal biomarker for identifying cholangitis in its early stages. Abnormally elevated CA199 levels may offer predictive value as a means of detecting acute cholangitis after choledocholithiasis, with CA199 serving as an inflammatory marker in the pathogenesis of this disease. Elevated CA199 levels may thus suggest that doctors should treat this condition in order to prevent acute cholangitis onset, to alleviate patient pain, and to improve patient prognosis.
In conclusion, our results show that abnormally elevated levels of CA199 in the serum of patients with choledocholithiasis may be predictive of the risk of secondary acute cholangitis in these patients. Elevated CA199 levels should therefore alert clinicians to the possibility of patients developing acute cholangitis, allowing them to undertake appropriate interventions to prevent disease progression.
Data Availability
Raw data to support the results of this study can be obtained from the corresponding author upon request.
Ethical Approval
The study was approved by the ethics committee of Anhui Medical University Third Affiliated Hospital.
Consent
Written informed consent was obtained from the patient and patient's next-of-kin for publication of this case report and any accompanying images. A copy of the written consent is available for review by the Editor-in-Chief of this journal.
Conflicts of Interest
The authors declare that they have no competing interests. | 2020-04-02T09:37:43.625Z | 2020-03-25T00:00:00.000 | {
"year": 2020,
"sha1": "8b2050282296df7d286d2907ed2025330d1988bd",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.1155/2020/2739612",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "ccacdc42ee84e6991f47cb4eff062dfdc16d5ec2",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
214623487 | pes2o/s2orc | v3-fos-license | Model selection criteria of the standard censored regression model based on the bootstrap sample augmentation mechanism
The statistical regression technique is an essential data-fitting tool for exploring the possible generation mechanism of a random phenomenon. Model selection, or variable selection, is therefore extremely important for identifying the most appropriate model with the best explanatory effect on the response of interest. In this paper, we discuss and compare bootstrap-based model selection criteria for the standard censored regression model (Tobit regression model) under the circumstance of limited observation information. Monte Carlo numerical evidence demonstrates that model selection criteria based on the bootstrap sample augmentation strategy become more competitive than alternatives such as the Akaike Information Criterion (AIC) and the Bayesian Information Criterion (BIC) when the observation information is inadequate. The numerical simulation experiments further demonstrate that the model identification risk due to deficient data information, such as a high censoring rate and a rather limited number of observations, can be adequately compensated by increasing the scientific computation cost through bootstrap sample augmentation strategies. We also apply the recommended bootstrap-based model selection criterion for the Tobit regression model to fit a real fidelity dataset.
Introduction
In practical data analysis, regression is an essential modelling technique used to fit the response variable of interest so as to capture and identify the potential true relationship between the dependent variable and the explanatory variables as accurately as possible. The model selection, or variable selection, technique is therefore extremely important.
Model selection is based on the Kullback-Leibler (K-L) discrepancy (Kullback 1951) [1], a discrepancy measurement between two different probability distributions. The Akaike information criterion (Akaike 1973) [2], which is based on the K-L discrepancy, is a commonly used model selection criterion that measures the degree of discrepancy between the assumed true model and a candidate model. Using a Taylor expansion and the asymptotic normality of the maximum likelihood estimate, Akaike showed that the maximized log-likelihood of a model is a positively biased estimate of the expected log-likelihood and that the bias can be asymptotically approximated by the dimension of the model parameter space. However, the AIC criterion is not consistent, in the sense that the probability of correct identification of the true model does not asymptotically tend to one; more specifically, it asymptotically overshoots the true model order, that is, the dimension of the parameter space of the true model. Model selection criteria with the consistency property, such as the Bayesian information criterion (BIC) (Schwarz 1978) [3] and the Hannan-Quinn information criterion (HQ) (Hannan 1979) [4], were subsequently proposed. The corrected AIC (AICc) (Sugiura 1978) [5] was proposed to improve the model selection performance on the linear regression model with a finite number of observations. Hurvich and Tsai (1989) [6] explored the application of the AICc to nonlinear models and autoregressive models. The advantage of the AIC criterion is that it can be applied to any model; the derivation of the AICc criterion, however, is highly model-dependent.
The consistency property of a model selection criterion is a large-sample property; the statistical inference efficiency of model selection is jeopardized when the observation information becomes limited due to unexpected restrictions, for example when it is time- or money-consuming to acquire sufficient observation samples, or when the acquisition of sample information is related to sensitive social and psychological issues. Meanwhile, the derivation of a model selection criterion is highly dependent on restrictive assumptions. For example, the derivation of the AIC assumes that the potential true probability model lies in the given candidate model class, and the data volatility effect is evaluated through the asymptotic normality of the maximum likelihood estimate.
To circumvent the analytical difficulty and the restrictive assumptions, and to improve the model identification efficiency especially when the observation information becomes limited, bootstrap-based model selection criteria have been proposed that use the bootstrap methodology to simulate the data fluctuation. Efron (1979) [7] initially introduced the bootstrap methodology as a generalization of the jackknife and discussed its advantages for estimating the bias or variance of an estimator. Efron (1983) [8] discussed the bootstrap estimation of the error rate of a prediction rule. Efron and Tibshirani (1986) [9] discussed the statistical accuracy of bootstrap methods. Ishiguro (1991) [10] introduced the bootstrapped model selection criterion known as WIC (an estimator-free information criterion). The extension of the AIC known as the EIC was introduced by Ishiguro (1997) [11]. Shibata (1997) [12] discussed the bootstrap estimate of Kullback-Leibler information for model selection. Efron and Tibshirani (1997) [13] introduced the .632+ bootstrap method, an improvement on cross-validation. Following Efron and Tibshirani (1997), Pan (1999) [14] introduced the corresponding .632CV rule for likelihood cross-validation.
Bootstrap-based extensions of the AIC model selection criterion have been applied to various kinds of models. For example, a bootstrapped variant of the AIC for the state space model was discussed by Cavanaugh (1997) [15]. Bootstrap-based model selection for the mixed model was introduced by Shang (2008) [16]. The asymptotic bootstrap bias for the linear regression model was studied by Seghouane (2010) [17]. A bootstrap-based model selection criterion for the beta regression model was discussed by Bayer (2015) [18].
In this paper, we mainly discuss and compare the model selection performance of bootstrap-based model selection criteria on the Tobit regression model under different bootstrap sample augmentation mechanisms. Different kinds of bootstrap-based model selection criteria and sample augmentation strategies for the Tobit regression model are compared by Monte Carlo numerical simulation. Some useful empirical recommendations are given based on the results of the Monte Carlo simulation experiments. The recommended bootstrapped model selection criteria for the Tobit regression model are also applied to fit the real fidelity data.
Motivating dataset example
Modelling a censored response variable of interest is extremely common in practical data analysis. Tobin (1958) [19] discussed the estimation of relationships for limited dependent variables in economic surveys of households. Amemiya (1973) [20] considered parameter estimation for the regression model when the dependent variable is truncated normal.
In this study, the Affairs dataset, which is available in the AER package of the statistical analysis software R, will be used to demonstrate our main motivation. The Affairs dataset consists of nine variables and a total of 601 observation samples. The first variable, named Affairs, with a relatively high censoring rate of 0.75, is taken as the response variable of interest. The kernel density curve of the response variable Affairs is shown in the figure named Kernel Density of Affairs. The variable Affairs follows a non-Gaussian distribution with a relatively high censoring rate, so it is appropriate to fit it with the standard censored regression model.
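Since the paper works in R with the AER package, the censoring structure described above can be inspected directly; the following is a minimal sketch, assuming the AER package is installed, not the paper's own code:

```r
library(AER)

data("Affairs", package = "AER")
dim(Affairs)                 # 601 observations, 9 variables
mean(Affairs$affairs == 0)   # censoring rate of the response, about 0.75

# Kernel density of the response, analogous to the figure described above
plot(density(Affairs$affairs), main = "Kernel Density of Affairs")
```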
Acquiring the response variable Affairs is time- and money-consuming because it is highly related to social moral constraints and the protection of individual privacy. It is therefore indispensable to investigate the model selection performance under limited observation information. Bootstrap-based sample augmentation mechanisms come into sight to simulate the potential true data volatility. To sufficiently demonstrate the model selection performances of the different criteria on the Tobit regression model, the 601 observation individuals in the fidelity dataset are considered as the potential true population, with an unknown relationship between the response variable Affairs and the other potential explanatory variables. The objective of the statistical inference is to select the most appropriate candidate model as the optimal expression of the relationship between the response of interest and its potential explanatory variables based on the limited observation information. Increasing the cost of scientific computation in order to recuperate the loss of model identification efficiency caused by the limited data information therefore constitutes the core issue of bootstrap-based model selection on the Tobit regression model.
Model Selection
Both statisticians and practitioners are interested in fitting random observations with a statistical model as an approximate expression of the potential true generation mechanism of the random phenomenon. If we use the capital letter Y and the notation g(y) to denote the variable of interest and the corresponding potential true distribution law, respectively, the objective of model selection is to identify the optimal model from the candidate model class that approximates the potential true distribution law g(y) as accurately as possible.
To simplify the discussion and the notation, we do not consider different candidate model families with the same dimension of the parameter space. Suppose that we have the candidate model class $F = \{F(1), \ldots, F(m)\}$, where m is the maximum allowable dimension of the model parameter space $\Theta$. The candidate model families $F(k)$, $k = 1, \ldots, m$, are sequentially nested in the sense that $F(1) \subset F(2) \subset \cdots \subset F(m-1) \subset F(m)$. The parsimonious candidate model family $F(k)$ is obtained by setting $m - k$ of the parameters to constants; without loss of generality, these constants can be assumed to be zero. The candidate model family $F(k)$ can then be expressed as
$$F(k) = \{ f(y; \theta_k) : \theta_k \in \Theta_k \},$$
where $f(y; \theta_k)$ is the parametric probability model with a k-dimensional model parameter defined on the k-dimensional parameter space $\Theta_k$. Furthermore, we use the notation $\hat{\theta}_k$ to denote the maximum likelihood estimate of the model with k parameters; $\hat{\theta}_k$ is the solution of the optimization problem
$$\hat{\theta}_k = \arg\max_{\theta_k \in \Theta_k} L(\theta_k),$$
where $L(\theta)$ is the likelihood function of the parsimonious model with k parameters.
If we assume that there exists a probability model $f(y; \theta_{k_0}) \in F(k_0) \in F$ such that $f(y; \theta_{k_0})$ is equivalent to the potential true random generation mechanism g(y), the final estimated model $f(y; \hat{\theta}_k)$ is said to be correctly specified if $f(y; \theta_{k_0}) \in F(k)$ and there does not exist any candidate model family $F(s)$ with $s < k$ such that $f(y; \theta_{k_0}) \in F(s)$. The final selected model $f(y; \hat{\theta}_k)$ is said to be over-specified if $f(y; \theta_{k_0}) \in F(k)$ but there exists a candidate model family $F(s)$ with $s < k$ such that $f(y; \theta_{k_0}) \in F(s)$. The final selected model $f(y; \hat{\theta}_k)$ is said to be under-specified if $f(y; \theta_{k_0}) \notin F(k)$. Meanwhile, if the candidate model class F does not contain any candidate model that can be considered an equivalent expression of the potential true distribution law g(y), the final selected model $f(y; \hat{\theta}_k)$ is considered the optimal approximating model expression of the potential true random generation mechanism g(y).
Model selection criterion
Model selection is based on the concept of the K-L discrepancy. The candidate model with the minimum K-L discrepancy with respect to the potential true probability distribution law will be considered as the optimal fitted model. In this section, we briefly introduce different kinds of model selection criteria which are commonly applied in the research and practical data analysis realms.
To distinguish the meanings of the different notations, we use $Y = (Y_1, \ldots, Y_n)^T$ to denote the n random variables coming from the potential true unknown population and $y = (y_1, \ldots, y_n)^T$ to express the corresponding realization. Similarly, we use $Y^b = (Y_1^b, \ldots, Y_n^b)^T$ to express the n random variables under the specific bootstrapped generation mechanism and $y^b = (y_1^b, \ldots, y_n^b)^T$ to denote its realization. The notation $\hat{\theta}_k^b$ stands for the maximum likelihood estimate of the k-dimensional model parameter based on the n bootstrap observations $Y^b$, and the notation $\hat{\theta}_k$ denotes the maximum likelihood estimate of the k-dimensional model parameter based on the n validated observations of Y. We use the notations $E_{Y^b}$ and $E_Y$ to express taking the expectation with respect to $Y^b$ and Y, respectively.
Akaike information criterion (AIC) and its alternatives
The derivation of the AIC is based on the Kullback-Leibler (K-L) divergence (Kullback 1951), a distance measurement between two different probability distributions. If we assume that the potential true generation mechanism of the random phenomenon can be described by the parametric probability model $f(y; \theta_{k_0})$, the K-L divergence between the true model $f(y; \theta_{k_0})$ and the candidate model $f(y; \theta_k)$ can be written as
$$d(\theta_{k_0}, \theta_k) = E_Y\!\left[\log \frac{f(Y; \theta_{k_0})}{f(Y; \theta_k)}\right], \qquad (3.2.1.1)$$
where $\theta_{k_0}$ is the $k_0$-dimensional true model parameter vector and $\theta_k$ is the optimal k-dimensional model parameter vector, in the sense that $\hat{\theta}_k$ is a consistent estimate of $\theta_k$, where $\hat{\theta}_k$ is the maximum likelihood estimate of the model with k parameters and n observations. Expression (3.2.1.1) can be expanded by the linearity of the expectation operator as
$$d(\theta_{k_0}, \theta_k) = E_Y[\log f(Y; \theta_{k_0})] - E_Y[\log f(Y; \theta_k)]. \qquad (3.2.1.2)$$
The first term of (3.2.1.2), $E_Y[\log f(Y; \theta_{k_0})]$, is completely determined by the potential true model $f(y; \theta_{k_0})$. Therefore, the discrepancy measurement (3.2.1.1) is completely determined by the second term of (3.2.1.2), which is the negative of the cross-entropy $E_Y[\log f(Y; \theta_k)]$. Nevertheless, it is impossible to quantify the cross-entropy exactly because the true probability model $f(y; \theta_{k_0})$ is unknown. Akaike (1973) recommended using the maximized log-likelihood $\log f(y; \hat{\theta}_k)$ as an estimate of the cross-entropy and showed that the bias
$$b = E_Y\!\left[\log f(y; \hat{\theta}_k) - E_{Y'}\{\log f(Y'; \hat{\theta}_k)\}\right] \qquad (3.2.1.3)$$
can be asymptotically approximated by the dimension k of the parameter space. The AIC is an asymptotically unbiased estimate of (minus twice) the expected log-likelihood and can be written as
$$\mathrm{AIC} = -2 \log f(y; \hat{\theta}_k) + 2k.$$
A series of alternative information criteria based on the AIC have been proposed. Sugiura (1978) proposed the AICc for the linear regression model. Hurvich and Tsai (1989) extended the use of the AICc to regression and time series models in small samples and showed that the AIC and the AICc are asymptotically equivalent; the AICc is superior to the AIC under a finite sample size. Schwarz (1978) proposed the Bayesian information criterion (BIC). Hannan and Quinn (1979) proposed the HQ information criterion.
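For the Tobit model these classical criteria are easy to compute from the maximized log-likelihood. The sketch below is a hedged illustration rather than the paper's own code; it uses AER::tobit (a wrapper around survival::survreg) on the Affairs data with an illustrative covariate set:

```r
library(AER)
data("Affairs", package = "AER")

fit <- tobit(affairs ~ age + yearsmarried + religiousness + rating,
             left = 0, data = Affairs)

ll <- logLik(fit)
k  <- attr(ll, "df")                      # number of estimated parameters
n  <- nrow(Affairs)
aic <- -2 * as.numeric(ll) + 2 * k        # AIC = -2 log L + 2k
bic <- -2 * as.numeric(ll) + log(n) * k   # BIC = -2 log L + k log n
c(AIC = aic, BIC = bic)
```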
The bootstrap extensions of the AIC criterion
The motivation behind the bootstrap extensions of the AIC is to take advantage of the validated observation samples, which are taken as a substitute for the potential true population in order to augment the training samples. The newly generated training samples can be used to reproduce the volatility of the parameter estimate, which makes the computation of the bias expression (3.2.1.3) possible. Ishiguro (1997) proposed an information criterion known as EIC by using the bootstrap-based bias estimate
$$B_1 = E_{Y^b}\!\left[\log f(Y^b; \hat{\theta}_k^b) - \log f(y; \hat{\theta}_k^b)\right], \qquad (3.2.2.1)$$
where $\log f(Y^b; \hat{\theta}_k^b)$ is considered a substitute for the log-likelihood $\log f(y; \hat{\theta}_k)$, and the corresponding bootstrap-based expected log-likelihood is computed by $\log f(y; \hat{\theta}_k^b)$. The difference between the bias expression (3.2.1.3) and the bias expression (3.2.2.1) is that the former takes the assumed true probability model $f(y; \theta_{k_0})$ as the random generation mechanism of the potential unknown population, whereas the latter uses $y = (y_1, \ldots, y_n)^T$, or its fitted probability model $f(y; \hat{\theta}_k)$, as a substitute for the potential true population so as to generate the bootstrap samples. We refer to the information criterion with the bias expression (3.2.2.1) as the EIC1 criterion, which can be expressed as
$$\mathrm{EIC1} = -2 \log f(y; \hat{\theta}_k) + 2 B_1.$$
By the law of large numbers, the bias $B_1$ can be approximated almost surely by
$$B_1 \approx \frac{1}{B} \sum_{b=1}^{B} \left[\log f(y^b; \hat{\theta}_k^b) - \log f(y; \hat{\theta}_k^b)\right],$$
where B is the number of augmentations of the bootstrap sample $Y^b$. Cavanaugh and Shumway (1997) proposed a similar bootstrap-based extension of the AIC for the state-space model. We refer to it as the EIC2 criterion, with bias expression
$$B_2 = 2\, E_{Y^b}\!\left[\log f(y; \hat{\theta}_k) - \log f(y; \hat{\theta}_k^b)\right], \qquad (3.2.2.2)$$
so that the EIC2 model selection criterion can be expressed as $\mathrm{EIC2} = -2 \log f(y; \hat{\theta}_k) + B_2$. Shibata (1997) proposed three further bootstrap extensions of the AIC, whose bias estimates combine the bootstrap and observed-data log-likelihoods $\log f(y^b; \hat{\theta}_k^b)$, $\log f(y^b; \hat{\theta}_k)$, $\log f(y; \hat{\theta}_k^b)$ and $\log f(y; \hat{\theta}_k)$ in different pairings; we refer to the resulting criteria as EIC3, EIC4 and EIC5, each of the form $-2 \log f(y; \hat{\theta}_k)$ plus the corresponding bootstrap bias correction.
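A small numerical sketch of the EIC1 bias estimate B1 follows; the Gaussian linear model below is used instead of the Tobit model purely to keep the example short, and the simulated design is hypothetical. Each bootstrap replication re-estimates the model and compares the bootstrap-sample log-likelihood with the observed-sample log-likelihood at the same bootstrap estimate:

```r
set.seed(1)
n <- 50
x <- rnorm(n)
y <- 1 + 2 * x + rnorm(n)
X <- cbind(1, x)

loglik <- function(y, mu, sigma) sum(dnorm(y, mu, sigma, log = TRUE))

fit <- lm(y ~ x)
sigma_hat <- sqrt(mean(resid(fit)^2))      # ML estimate of sigma

B <- 200
b1 <- replicate(B, {
  idx <- sample(n, replace = TRUE)         # nonparametric bootstrap sample
  fit_b <- lm(y[idx] ~ x[idx])
  sigma_b <- sqrt(mean(resid(fit_b)^2))
  # log f(y^b; theta^b) - log f(y; theta^b)
  loglik(y[idx], fitted(fit_b), sigma_b) -
    loglik(y, drop(X %*% coef(fit_b)), sigma_b)
})
EIC1 <- -2 * loglik(y, fitted(fit), sigma_hat) + 2 * mean(b1)
EIC1
```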
The Bootstrap likelihood and cross-validation
Pan (1999) introduced a model selection criterion that combines the nonparametric bootstrap with cross-validation: bootstrap likelihood cross-validation (BCV). The BCV criterion estimates the expected log-likelihood of the candidate model by training on each nonparametric bootstrap sample and testing on the observations left out of that sample,
$$\mathrm{BCV} = -\frac{2}{B} \sum_{b=1}^{B} \frac{n}{m^{*b}} \sum_{i:\, y_i \notin y^b} \log f(y_i; \hat{\theta}_k^b),$$
where $y^b = (y_1^b, \ldots, y_n^b)^T$ is the nonparametric bootstrap sample with n elements and $m^{*b}$ is the number of observations of y not contained in $y^b$. Following the .632+ rule proposed by Efron and Tibshirani (1997), Pan (1999) introduced the .632CV criterion, which combines the apparent (in-sample) log-likelihood with the BCV estimate using the weights 0.368 and 0.632:
$$\mathrm{CV632} = 0.368 \times \{-2 \log f(y; \hat{\theta}_k)\} + 0.632 \times \mathrm{BCV}.$$
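The mechanics can be sketched on the same hypothetical Gaussian linear model as above: each bootstrap sample trains the model, the observations left out of that sample score it, and the apparent log-likelihood is blended in with the 0.368/0.632 weights of the .632 rule as reconstructed above:

```r
set.seed(1)
n <- 50; x <- rnorm(n); y <- 1 + 2 * x + rnorm(n)
X <- cbind(1, x)
loglik <- function(y, mu, sigma) sum(dnorm(y, mu, sigma, log = TRUE))

B <- 200
oob <- replicate(B, {
  idx <- sample(n, replace = TRUE)
  out <- setdiff(seq_len(n), idx)          # observations not drawn ("test set")
  fit_b <- lm(y[idx] ~ x[idx])
  sigma_b <- sqrt(mean(resid(fit_b)^2))
  mean(dnorm(y[out], drop(X[out, , drop = FALSE] %*% coef(fit_b)),
             sigma_b, log = TRUE))         # per-observation test log-likelihood
})
BCV <- -2 * n * mean(oob)                  # scaled back to the full sample size

fit <- lm(y ~ x); sigma_hat <- sqrt(mean(resid(fit)^2))
apparent <- -2 * loglik(y, fitted(fit), sigma_hat)
CV632 <- 0.368 * apparent + 0.632 * BCV
CV632
```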
Bootstrap likelihood quasi-CV
Bayer (2015) proposed a bootstrapped likelihood quasi-cross-validation (BQCV) model selection criterion, similar to the bootstrapped likelihood cross-validation (BCV) criterion, for the generalized linear regression model used to fit a dependent variable with a beta distribution.
The BQCV criterion generates the training samples from the fitted distribution $\hat{F}$ estimated at $\hat{\theta}_k$. The newly generated training sample $Y^b$, combined with the validated sample Y, constitutes the final observation sample. Compared with real cross-validation, the BQCV criterion uses the bootstrapped samples $y^b$ as training samples and takes the validated sample y as the test sample to estimate the expected log-likelihood. The BQCV criterion can be expressed as
$$\mathrm{BQCV} = -\frac{2}{B} \sum_{b=1}^{B} \log f(y; \hat{\theta}_k^b).$$
Bayer (2015) also proposed the corresponding 632BQCV model selection criterion, which can be expressed as
$$\mathrm{632BQCV} = 0.368 \times \{-2 \log f(y; \hat{\theta}_k)\} + 0.632 \times \mathrm{BQCV}.$$
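The quasi-cross-validation differs only in how the training samples arise. In the sketch below (same hypothetical Gaussian model as before), they are drawn parametrically from the fitted model, and the full observed sample serves as the test set:

```r
set.seed(1)
n <- 50; x <- rnorm(n); y <- 1 + 2 * x + rnorm(n)
X <- cbind(1, x)
loglik <- function(y, mu, sigma) sum(dnorm(y, mu, sigma, log = TRUE))

fit <- lm(y ~ x)
sigma_hat <- sqrt(mean(resid(fit)^2))

B <- 200
test_ll <- replicate(B, {
  y_b <- fitted(fit) + rnorm(n, 0, sigma_hat)   # parametric bootstrap sample
  fit_b <- lm(y_b ~ x)
  sigma_b <- sqrt(mean(resid(fit_b)^2))
  loglik(y, drop(X %*% coef(fit_b)), sigma_b)   # score on the observed y
})
BQCV <- -2 * mean(test_ll)

apparent <- -2 * loglik(y, fitted(fit), sigma_hat)
BQCV632 <- 0.368 * apparent + 0.632 * BQCV
c(BQCV = BQCV, BQCV632 = BQCV632)
```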
Tobit regression model and its maximum likelihood
The Tobit regression model, a generalization of the Probit regression model, was proposed by Tobin (1958) [19] when he analyzed data on household expenditure on durable goods. Amemiya (1973) [20] proved the consistency and the asymptotic normality of the maximum likelihood estimate of the Tobit regression model. The Tobit regression model can be described as
$$y_i = \begin{cases} x_i^T \beta + \varepsilon_i, & x_i^T \beta + \varepsilon_i > 0, \\ 0, & \text{otherwise}, \end{cases} \qquad \varepsilon_i \sim N(0, \sigma^2), \qquad (4.1.1)$$
where $\phi_{0,\sigma^2}$ denotes the probability density function of the normal distribution with mean 0 and variance $\sigma^2$. The probability of a zero-valued observation $y_i = 0$, $i \in \{i : y_i = 0\}$, under model (4.1.1) can be expressed as
$$P(y_i = 0) = 1 - \Phi(x_i^T \beta / \sigma),$$
where $\Phi$ denotes the standard normal cumulative distribution function. The likelihood function of model (4.1.1) can accordingly be expressed as
$$L(\beta, \sigma) = \prod_{i:\, y_i = 0} \{1 - \Phi(x_i^T \beta / \sigma)\} \prod_{i:\, y_i > 0} \frac{1}{\sigma}\, \phi\!\left(\frac{y_i - x_i^T \beta}{\sigma}\right),$$
and the log-likelihood of the Tobit regression model (4.1.1) is
$$\log L(\beta, \sigma) = \sum_{i:\, y_i = 0} \log\{1 - \Phi(x_i^T \beta / \sigma)\} - u \log \sigma + \sum_{i:\, y_i > 0} \log \phi\!\left(\frac{y_i - x_i^T \beta}{\sigma}\right),$$
where u is the number of positive observations.
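This log-likelihood is straightforward to code and maximize directly. The following is a minimal sketch with a simulated design (all data and parameter values below are hypothetical):

```r
tobit_loglik <- function(par, y, X) {
  k <- ncol(X)
  beta  <- par[1:k]
  sigma <- exp(par[k + 1])                  # log-parameterization keeps sigma > 0
  xb <- drop(X %*% beta)
  ll <- ifelse(y > 0,
               dnorm(y, mean = xb, sd = sigma, log = TRUE),    # uncensored part
               pnorm(0, mean = xb, sd = sigma, log.p = TRUE))  # P(y* <= 0)
  sum(ll)
}

set.seed(1)
n <- 200
X <- cbind(1, rnorm(n))
y <- pmax(0, drop(X %*% c(-0.5, 1)) + rnorm(n))   # latent response censored at zero

fit <- optim(c(0, 0, 0), tobit_loglik, y = y, X = X,
             method = "BFGS", control = list(fnscale = -1))
fit$par                                   # (beta0, beta1, log sigma)
```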
The bootstrapped sampling mechanism of the Tobit regression model
In this section, we consider three kinds of bootstrap sampling mechanisms for the Tobit regression model: the parametric bootstrap, the nonparametric bootstrap, and the combination of the parametric and nonparametric bootstrap. The newly generated bootstrap samples are considered as training samples coming from the potential unknown true population.
Nonparametric Bootstrap
The nonparametric bootstrap is a data augmentation methodology that generates bootstrap samples from the empirical distribution function, which assigns weight 1/n to each observation individual $y_i$, $i = 1, \ldots, n$. The nonparametric bootstrap does not require any model assumption about the potential unknown population; it is therefore an extremely straightforward and effective sample generation mechanism.
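In code this amounts to one line of row resampling; a sketch where dat is a hypothetical data frame holding the response and covariates:

```r
# Nonparametric bootstrap: resample (y_i, x_i) pairs with replacement
boot_np <- function(dat) dat[sample(nrow(dat), replace = TRUE), , drop = FALSE]
```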
Parametric Bootstrap
The parametric bootstrap is also a commonly used data augmentation technique, because the estimate of the model parameter contains information about the potential unknown population. The parametric bootstrap generation mechanism for the Tobit regression model (4.1.1) can be described as
$$y_i^b = \max(0,\; x_i^T \hat{\beta} + \varepsilon_i^b), \qquad i = 1, \ldots, n,$$
where $y_i^b$ is the i-th bootstrap observation, $\hat{\beta}$ is the maximum likelihood estimate based on the n validated observations, $\hat{\sigma}$ is the maximum likelihood estimate of the model disturbance scale $\sigma$, and $\varepsilon_i^b \sim N(0, \hat{\sigma}^2)$.
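A sketch of this mechanism, where beta_hat and sigma_hat denote the ML estimates from the observed sample (assumed to be given):

```r
# Parametric bootstrap for the Tobit model: simulate latent responses at the
# ML estimates and re-censor them at zero
boot_pb <- function(X, beta_hat, sigma_hat) {
  pmax(0, drop(X %*% beta_hat) + rnorm(nrow(X), 0, sigma_hat))
}
```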
Integration of the nonparametric and the parametric bootstrap sampling mechanisms
The nonparametric bootstrap does not depend on any specific model assumption, and its bootstrap samples come entirely from the empirical distribution; meanwhile, the parametric bootstrap can take full advantage of the model parameter information to generate bootstrap samples. It is therefore natural to integrate the nonparametric and parametric bootstrap sampling mechanisms to increase the variability of the random observation samples. The bootstrap observation of the combined methodology can be expressed as
$$y_i^b = \max(0,\; (x_i^b)^T \hat{\beta} + \varepsilon_i^b), \qquad i = 1, \ldots, n,$$
where $x_i^b$, $i = 1, \ldots, n$, is the covariate vector generated by the nonparametric bootstrap sampling mechanism, $\varepsilon_i^b \sim N(0, \hat{\sigma}^2)$, and $\hat{\beta}$ and $\hat{\sigma}^2$ are the maximum likelihood estimates of the model parameter $\beta$ and the disturbance variance $\sigma^2$, respectively, based on the n validated observation samples.
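A sketch of the combined mechanism under the same assumed inputs: covariate rows are resampled nonparametrically and responses are then generated parametrically:

```r
# Combined (npp) mechanism: nonparametric covariates, parametric responses
boot_npp <- function(X, beta_hat, sigma_hat) {
  Xb <- X[sample(nrow(X), replace = TRUE), , drop = FALSE]
  list(X = Xb,
       y = pmax(0, drop(Xb %*% beta_hat) + rnorm(nrow(Xb), 0, sigma_hat)))
}
```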
Simulation Study
In this section, we use Monte Carlo simulation experiments to demonstrate the model selection performances of the different model selection criteria on the Tobit regression model. To save intensive computation costs while sufficiently demonstrating the model selection performances, all explanatory variables are assigned the same marginal distribution, and the correlation coefficient between any two different explanatory variables is set to the same constant.
The potential explanatory vector is $X = (1, X_1, \ldots, X_p)^T$, where the random vector $(X_1, \ldots, X_p)^T$ follows the p-dimensional Gaussian distribution with zero mean and variance-covariance matrix $\Sigma_{p \times p}$. The corresponding regression coefficient is $\beta = (\beta_0, \beta_1, \ldots, \beta_p)^T$, where $\beta_0$ is the intercept term, which is used to adjust the censoring rate of the response variable.
The model selection performances are shown in Tables 1 to 4, where the subscripts pb, np, and npp in the criteria names stand for the parametric bootstrap, the nonparametric bootstrap, and the combination of the nonparametric and parametric bootstrap, respectively. The simulation results demonstrate that the nonparametric bootstrap is superior to the other bootstrap sampling mechanisms for the EIC1, EIC4, and EIC5 model selection criteria. For the EIC2 criterion, EIC2npp is superior to EIC2pb and EIC2np. However, EIC3np is inferior to EIC3pb and EIC3npp. When the observation information is adequate, the BIC and BCV criteria become more competitive than the other model selection criteria; however, the CV632 criterion becomes superior to the others when the observation information is inadequate.
Meanwhile, to demonstrate the model selection performance clearly, a risk function is defined as
$$R = E\!\left[I(\hat{d} \ne d_0)\right] = P(\hat{d} \ne d_0),$$
which can be approximately calculated by the Monte Carlo proportion
$$\hat{R} = \frac{1}{M} \sum_{m=1}^{M} I(\hat{d}_m \ne d_0),$$
where $\hat{d}_m$ is the model dimension selected in the m-th replication and $d_0$ is the true dimension. Graphs (a) and (b) illustrate that the BIC and BCV outperform the CV632 criterion when the number of observations is n = 200 or n = 150. However, the performance of the CV632 criterion becomes more competitive, as depicted by graphs (c) and (d); specifically, graph (d) shows that the CV632 criterion becomes uniformly superior to the BIC and BCV criteria when the number of observations is n = 100. Graphs (e) and (f) demonstrate that the model identification risks of both the BIC and BCV decrease as the number of observations increases when the censoring rate is fifty or sixty percent, respectively; moreover, the BIC and BCV outperform the CV632 in these settings. However, graphs (g) and (h) show that the CV632 gradually becomes superior to the BIC and BCV criteria as the number of observations varies from n = 100 to n = 150. The intensive simulation experiments demonstrate that the CV632 criterion becomes more competitive than its competitors when the sample size is limited and the censoring rate is high.
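The risk estimate itself is just a Monte Carlo proportion. In the sketch below, simulate_data and select_dim are hypothetical placeholders for the data-generating step and for a criterion that returns the selected dimension:

```r
# Monte Carlo estimate of P(d-hat != d0) over M replications
mc_risk <- function(M, d0, simulate_data, select_dim) {
  mean(replicate(M, select_dim(simulate_data()) != d0))
}
```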
Real data analysis
In this section, we apply the CV632 model selection criterion to real data to demonstrate its model selection performance when the Tobit regression model is used to fit real data under inadequate observation information.
The real dataset we use is the fidelity data, which is available in the AER package of the statistical analysis software R. The Affairs dataset contains 601 individuals, which are considered as the potential population. The variable named Affairs is the response variable of interest, with a censoring rate of 0.75, and the remaining 8 variables are taken as the potential explanatory variables. There are in total $2^8$ possible candidate models, and $\binom{8}{d}$ candidate models for each specific candidate model family $F(d + 2)$, where $d = 0, 1, \ldots, 8$ is the number of explanatory variables involved in the regression model.
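The exhaustive search over these candidate families can be sketched with combn and AER::tobit; this is an illustration rather than the paper's own code, and note that factor covariates in Affairs (gender, children) expand into several coefficients, so d here counts variables rather than parameters:

```r
library(AER)
data("Affairs", package = "AER")
vars <- setdiff(names(Affairs), "affairs")   # the 8 candidate covariates

bic_tobit <- function(rhs) {
  fit <- tobit(reformulate(rhs, response = "affairs"), left = 0, data = Affairs)
  ll <- logLik(fit)
  -2 * as.numeric(ll) + log(nrow(Affairs)) * attr(ll, "df")
}

# Minimum BIC within each candidate family size d = 1, ..., 8
bic_min <- sapply(1:8, function(d) {
  min(sapply(combn(vars, d, simplify = FALSE), bic_tobit))
})
bic_min
```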
To compare the model selection performance when the observation information becomes inadequate, the model identification results obtained by the BCV and BIC criteria based on the full 601 observation samples are taken as the reference standard for variable selection. To demonstrate the performance of model selection under limited observation samples, 130 observations are sampled from the Affairs dataset.
The minimum values of the BCV and BIC criteria for each candidate model family $F(k)$, $k = 2, 3, \ldots, 10$, are summarized in Table 5. The notations BCVmin(d) and BICmin(d) stand for the minimum values of the BCV and BIC criteria within the candidate model family $F(d + 2)$. As shown in Table 5, the estimated number of explanatory variables is $\hat{d} = 4$ for the BCV criterion, with the corresponding variable combination age, yearsmarried, religiousness, and rating. For the BIC criterion, the estimated number of variables is $\hat{d} = 3$, with the corresponding variable combination yearsmarried, religiousness, and rating.
The minimum BCV, BIC, and CV632 values for each candidate model family $F(d + 2)$, $d = 0, 1, \ldots, 8$, based on the 130 observation individuals, denoted BCVmin(d), BICmin(d), and CV632min(d), are summarized in Table 6. Table 6 shows that the estimated number of explanatory variables for the CV632 criterion is $\hat{d} = 2$, with the corresponding variable combination age and yearsmarried. For the BCV criterion, the final estimated number of explanatory variables is also $\hat{d} = 2$, with BCVmin(2) = 291; however, the corresponding variable combination is age and children. The estimated number of variables for the BIC criterion is $\hat{d} = 1$, with BICmin(1) = 306, and only one explanatory variable, rating, enters the model.
The real data analysis results demonstrate that the performance of the CV632 criterion is superior to that of both the BCV and BIC criteria when the observation information is limited. The real data performance of the CV632 criterion is consistent with its performance in the simulation study section. | 2020-03-25T01:01:14.367Z | 2020-03-24T00:00:00.000 | {
"year": 2020,
"sha1": "95a9248cd7747ca5de865e06ba42ef57d18c6877",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "95a9248cd7747ca5de865e06ba42ef57d18c6877",
"s2fieldsofstudy": [
"Mathematics"
],
"extfieldsofstudy": [
"Computer Science",
"Mathematics"
]
} |
34838662 | pes2o/s2orc | v3-fos-license | Localization of a binding site for phosphatidylinositol 4,5-bisphosphate on human profilin.
Profilin is a small 12-15-kDa actin-binding protein, which in eukaryotic organisms is ubiquitous and necessary for normal cell growth and function. Although profilin's interactions with its three known ligands (actin monomers, phosphatidylinositol 4,5-bisphosphate (PIP2), and poly-L-proline (PLP)) have been well characterized in vitro, its precise role in cells remains largely unknown. By binding to clusters of PIP2, profilin is able to inhibit the hydrolysis of PIP2 by phospholipase Cγ1 (PLCγ1). This ability is the result of profilin's affinity for PIP2, but the specific residues of profilin's amino acid sequence involved in the binding of PIP2 are not known. Using site-directed mutagenesis, we sought to localize regions of profilin important for this interaction by generating the following mutants of human profilin (named according to the wild-type amino acid altered, its position, and the amino acid substituted in its place): Y6F, D8A, L10R, K25Q, K53I, R74L, R88L, R88L/K90E, H119D, G121D, and K125Q. With the exception of L10R, all of the mutants were successfully expressed in Escherichia coli and purified by affinity chromatography on PLP-Sepharose. Only Y6F and K25Q demonstrated moderately less stringent binding to PLP, indicating that most of the mutations did not induce marked alterations of profilin's structure. When tested for their relative abilities to inhibit the hydrolysis of PIP2 by PLCγ1, most of the mutants were indistinguishable from wild-type profilin. Exceptions included D8A, which demonstrated increased inhibition of PLCγ1, and R88L, which demonstrated decreased inhibition of PLCγ1. To assess the importance of the region surrounding residue 88 of human profilin, three synthetic decapeptides selected to correspond to non-overlapping stretches of the human profilin sequence were tested for their abilities to inhibit PLCγ1. We found that only the decapeptide that matched the peptide stretch centered around residue 88 was able to inhibit PLCγ1 activity substantially and was able to do so at nearly wild-type profilin levels. Taken together with the finding that mutating residue 88 resulted in decreased inhibition of PLCγ1 activity, these data provide strong evidence that this region of human profilin represents an important binding site for PIP2.
Since the molecule's discovery 20 years ago (1), profilin's interactions with its three ligands (actin monomers, phosphatidylinositol 4,5-bisphosphate (PIP2), and PLP) have been well established through in vitro studies (for review, see Ref. 2). For example, by binding to actin monomers in a 1:1 complex, profilin decreases the critical concentration of monomeric actin in the presence of thymosin β4 (3), inhibits the spontaneous nucleation of actin filaments (4), and catalyzes the exchange of adenosine nucleotides bound to actin monomers (5). By binding to PIP2 and, to a lesser degree, its precursor PIP (6), profilin prevents PLCγ1 from hydrolyzing PIP2 (7). However, when PLCγ1 is phosphorylated on specific tyrosine residues, as occurs when extracellular growth factors bind to and activate receptor tyrosine kinases, it is able to overcome the protective effects of profilin and hydrolyze PIP2 (8). Since PIP2 binding to profilin precludes the formation of profilin-actin complexes, one can conjecture that in resting cells, PIP2 sequesters profilin from actin and that upon growth factor-induced cell activation, profilin is released from PIP2 by the hydrolytic actions of phosphorylated PLCγ1 and diffuses freely to the actin cytoskeleton, where it then exerts effects as a regulator of actin polymerization.
How PIP2 is able to displace actin so effectively remains unclear, and indeed, efforts to localize a binding site for PIP2 on profilin have been supplanted until only recently by the more extensive efforts to identify the binding site for actin. The quest for the latter began as early as 1982 with biochemical studies involving peptidases applied to actin (9), whereas the first mention of a putative binding site for PIP2 on profilin did not occur until 1991, when Pollard and Rimm (10) noted that the charge differences between Acanthamoeba profilin-I and -II occur between residues 24 and 66 (corresponding to residues 25-69 of human profilin) and that a polylysine region exists between residues 80 and 115 (corresponding to residues 88-126 of human profilin). These regions were suspected to be involved in PIP2 binding because first, positively charged residues are assumed to be involved in the binding of the acidic head groups of PIP2, and second, the more positively charged isoform, profilin-II, has approximately 100 times greater affinity for PIP2 (11).
A year later, Yu et al. (12) implicated the region spanning residues 126-136 of human profilin as a binding site for PIP2 by proposing the sequence KXXXXXXHXRR to be a modification of the KXXXKXKK and KXXXXKXRR motifs of gelsolin, which by themselves bind to PIP2 (12). These motifs are also found in CapG, villin, cofilin, and the PLC family (12), all of which bind PIP2. Another region was implicated by Raghunathan et al. (13), who used fluorescence spectroscopy to show that binding of profilin to PIP2 resulted in marked fluorescence quenching of Tyr-3 and Tyr-31. However, they also showed through circular dichroism spectroscopy that upon binding to PIP2, profilin undergoes a significant conformational change involving an increase in α-helical content from a baseline of 5% to one as high as 35%. This being true, it is difficult to know whether the changes in the fluorescence of Tyr-3 and Tyr-31 are due simply to the proximity of a binding site or due to local conformational changes transmitted from a distant binding site.
Finally, Vinson et al. (14), in elucidating the three-dimensional structure of Acanthamoeba profilin-I, proposed a PIP2-binding site consisting of the loop between β-strands 1 and 2, the loop between β-strands 6 and 7, and the region immediately after α-helix 2 (see Fig. 1B). However, Fedorov et al. (15) recently showed through calculations of electrostatic surface potentials that a second distinct region of positive potential, present on Acanthamoeba profilin-II but markedly less so on profilin-I, is located on the opposite side of the protein. The positively charged residues here include Arg-66, Arg-71, Lys-80, Lys-81, and Lys-115, which are analogous to the residues Lys-69, Arg-74, Arg-88, Lys-90, and Lys-125 of human profilin (16).
To further define the importance of the residues proposed to be involved in PIP2 binding, we mutated single base pairs distributed across the human profilin cDNA sequence, corresponding to the substitution of several conserved residues on both terminal α-helices (Tyr-6, Asp-8, Leu-10, His-119, Gly-121, Lys-125), basic residues implicated by Vinson et al. (14) (Lys-25, Lys-53), and basic residues located on the second region of positive potential identified by Fedorov et al. (15) (Arg-74, Arg-88, Lys-90). Here, we report the results of testing these mutations for the effects they have on profilin's ability to inhibit the hydrolysis of PIP2 by PLCγ1, an effect that has been shown previously to correlate precisely with profilin's affinity for PIP2 (11). We found that most of the mutations did not significantly alter profilin's ability to inhibit PLCγ1 activity. However, the mutation D8A caused a marked increase of PLCγ1 inhibition, and R88L resulted in a marked decrease of PLCγ1 inhibition. Based on our work, we propose that a crucial binding site for PIP2 on human profilin is contained within five amino acids of residue 88, since this stretch, by itself, inhibits PLCγ1 activity as well as the entire molecule of profilin.
EXPERIMENTAL PROCEDURES
DNA Techniques-The cDNA for human profilin was obtained from David J. Kwiatkowski (17) and subcloned into pTrc99A (Pharmacia Biotech Inc.), a prokaryotic expression vector. To generate mutant profilins, single amino acid substitutions were introduced through single base pair changes using a previously described method for site-directed mutagenesis involving mutagenic primers, T4 DNA polymerase, and T4 DNA ligase (18). Identities of the resultant mutants were then confirmed by sequence analysis using the Sequenase version 2.0 DNA sequencing kit. Before expressing the profilins in bacteria, the cDNAs were first subcloned into pMW172 (constructed by Michael Way) to allow higher levels of expression.
Protein Purification-Escherichia coli BL21(DE3) cells transformed with plasmids containing the appropriate cDNAs were grown in 1-liter cultures to an optical density (at λ = 600 nm) of 0.6-0.7, at which point expression was induced by the addition of 1 mM isopropyl β-D-thiogalactoside. The cells were incubated for an additional 4 h before harvesting by centrifugation. Subsequent steps were performed at 0-4°C. To each cell pellet, 100 ml of lysing buffer (6 M urea, 145 mM NaCl, 0.1 mM MgCl2, 15 mM HEPES, 10 mM EGTA, 1 mM sodium vanadate, 0.5% Triton X-100, 30 µg/ml leupeptin and 1 mM 4-(2-aminoethyl)benzenesulfonyl fluoride) was added, and the cells were resuspended by sonication (Sonifier 450, Branson). The lysates were clarified by centrifugation at 6,000 × g for 15 min and dialyzed (Spectra/Por membrane; molecular weight cutoff, 6-8,000) for 72 h against three changes of Dulbecco's phosphate-buffered saline, pH 7.1, supplemented with 0.5 mM DTT. The dialysates were then filtered through glass wool and applied separately to a 20-ml column of PLP-linked Sepharose, to which profilins are known to bind with high specificity (19). For each profilin mutant, the column was washed with 200 ml of PLP buffer (10 mM Tris, pH 7.8, 0.1 M NaCl, 0.1 M glycine, 0.01 mM DTT), followed by 100 ml of PLP buffer containing 3.5 M urea. Each profilin was then eluted with 100 ml of PLP buffer containing 7.5 M urea (20). Fractions of 3.2 ml each were tested for protein content using the Bio-Rad protein assay (Bio-Kinetics Reader EL312e, Bio-Tek Instruments), and the purity of profilins was checked by SDS-PAGE (21). The appropriate fractions were pooled and dialyzed for 72 h against three changes of 2 mM Tris, pH 8.5, supplemented with 0.5 mM DTT. Final profilin concentrations were measured by the Bio-Rad protein assay after calibration to wild-type human profilin standards determined by UV absorbance at 280 nm using an extinction coefficient of 0.015 µM⁻¹ cm⁻¹ (22).
Actin was purified from rabbit skeletal muscle (23). G-actin was separated from residual F-actin by size-exclusion chromatography on a Bio-Gel P60 column after dialysis against G buffer (2 mM Tris, pH 7.5, 0.1 mM ATP, 0.5 mM DTT, 0.1 mM CaCl2) and used within 7 days. Some of the actin was labeled with pyrenyliodoacetamide (24) and stored in G buffer.
Recombinant phosphoinositide-specific rat brain PLCγ1 was purified from bacterial cell extracts using a three-amino acid C-terminal tag (Glu-Glu-Phe) engineered into the PLCγ1 cDNA (25). The recombinant PLCγ1 displayed calcium dependence, pH sensitivity, and substrate specificity indistinguishable from that of wild-type bovine PLCγ1. Synthetic decapeptides cross-linked to an octabranched matrix core were obtained from Research Genetics (26). The anti-human profilin antibody (JH44) was a generous gift from Donald A. Kaiser and Thomas D. Pollard (26).
Lipid Preparation-Unilamellar vesicles containing 7 µM PIP2, trace [3H]PIP2, and 50 µM phosphatidylethanolamine were prepared by sonication in deionized water as described (7). The unilamellar character of the vesicles was confirmed by incubating the vesicles with PLCγ1 for extended periods of time (24-48 h) to show that approximately 50% of the total cpm translocated from the lipid phase to the aqueous phase (see below).
PLCγ1 Activity-To measure hydrolysis of PIP2 by PLCγ1, IP3 production was measured after incubating PIP2 vesicles with 25 nM recombinant PLCγ1 and varying concentrations of profilin for 15 min at 22°C in 100 µl of 50 mM HEPES, pH 7.1, 1.4 mM Tris, 0.2 mM CaCl2, 70 mM KCl, 0.4 mM EGTA, and 0.35 mM DTT. IP3 produced in the absence of profilin was measured in triplicate and defined as 100% activity. The relative PLCγ1 activity obtained in the presence of various profilin concentrations was equal to IP3 production in the presence of profilin, expressed as a percentage of the mean IP3 production in the absence of profilin, for each profilin concentration (7). Since the Michaelis constant, K_m, for PLCγ1 is much greater (by at least 50-fold) than the substrate concentration, [S], of PIP2 (7, 27), the Michaelis-Menten equation reduces to
v = V_max[S]/(K_m + [S]) ≈ (V_max/K_m)[S] = k[S],
where k is a proportionality constant. Thus, the relative PLCγ1 activity in the presence of profilin is simply S/S_T, where S is the concentration of PIP2 pentamers not bound to profilin and S_T is the total PIP2 pentamer concentration. By substituting in the mass-action equation
K_d = [P][S]/[PS], with [PS] = S_T − S and [P] = P_T − (S_T − S),
where P_T is the total profilin concentration and K_d is the dissociation constant for the profilin-PIP2 complex, a value for K_d can be calculated for each data point by solving the simplified equation
K_d = A[P_T − (1 − A)S_T]/(1 − A),
where A is the relative PLCγ1 activity expressed as a fraction of unity.
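A small sketch of the K_d computation implied by the equations above; the function name is ours, concentrations share the same units, and S_T is the PIP2 pentamer concentration (total PIP2 divided by five):

```r
# K_d from relative activity A, total profilin P_T, total PIP2 pentamers S_T
kd_from_activity <- function(A, P_T, S_T) {
  A * (P_T - (1 - A) * S_T) / (1 - A)
}

kd_from_activity(A = 0.5, P_T = 1, S_T = 7 / 5)  # e.g., 7 uM PIP2 -> 1.4 uM pentamers
```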
Actin Polymerization-The concentration of F-actin was determined from the fluorescence of 10% pyrenyliodoacetamide-labeled actin using an excitation wavelength of 365 nm and an emission wavelength of 407 nm (28). To measure the steady-state concentrations of F-actin, labeled actin was polymerized, diluted, and allowed to depolymerize overnight at 22°C in the absence or presence of profilins (29). Before measuring fluorescence, the samples were degassed for 1 h. Steady-state experiments were performed in G buffer containing 50 mM KCl and 1 mM MgCl2. To initiate polymerization for the time course experiments, 2 mM MgCl2 was added to G buffer containing 10 µM labeled G-actin.
Actin Monomer Nucleotide Exchange-The concentration of G-actin complexed with εATP was measured by fluorescence spectrophotometry using an excitation wavelength of 360 nm and an emission wavelength of 410 nm (30, 31). The experiments were performed in G buffer containing 1.5 µM G-actin, 3 µM CaCl2, and 3 µM ATP. Time 0 corresponds to the addition of 75 µM εATP. Profilin is known not to alter the fluorescence of εATP (21).
Measuring PLP Binding-Serial dilutions of wild-type profilin and R88L were made in 2 mM Tris, pH 8.5, supplemented with 0.5 mM DTT, and added to 80 µl of PLP-linked Sepharose for a total volume of 160 µl, with and without 7.5 M urea. After incubation with mixing for 30 min at 4°C, the concentration of unbound profilin was determined with the Bio-Rad protein assay as the protein concentration in the supernatant after centrifugation.
Protease Sensitivity-To assess the sensitivity of the proteins to trypsin, 50 µl of either 12 µM wild-type profilin or R88L was added to 5.5 µl of washed insoluble trypsin attached to beaded agarose (Sigma) and incubated in a shaker at room temperature. Samples of 10 µl each were removed at 0, 5, 10, 20, and 40 min and centrifuged. The supernatants (5 µl of each) were tested by SDS-PAGE using both Coomassie Blue staining and Western blot analysis.
RESULTS
Mutagenesis of Human Profilin-By designing mutagenic primers to induce single base pair changes in the cDNA sequence for human profilin, we generated 11 mutant clones of profilin to which we assigned the names Y6F, D8A, L10R, K25Q, K53I, R74L, R88L, R88L/K90E, H119D, G121D, and K125Q, according to the amino acid altered (17), its position in the wild-type sequence, and the amino acid substituted in its place. The locations of these mutations in the primary amino acid sequence as well as on the three-dimensional map for human profilin are shown in Fig. 1, along with excerpts of sequencing gels verifying their identities.
Expression and Purification of Profilin Mutants-Milligram amounts of highly pure wild-type and mutant profilins were eluted from the PLP-Sepharose column with 7.5 M urea, as demonstrated by SDS-PAGE (Fig. 2). Only in the case of L10R did the column fail to bind a 14.5-kDa protein. Western blot analysis in this case revealed the presence of a high molecular mass protein (approximately 120 kDa) in the bacterial extract that was not retained by the column but which cross-hybridized with a polyclonal antibody specific for human profilin (data not shown). These findings suggested that the L10R mutation may have induced the aggregation of profilin into large, high affinity complexes that were unable to bind PLP. The fact that the remaining mutants were able to bind normally to PLP indicates that the mutations did not induce marked alterations of profilin's structure. Interestingly, the mutants Y6F and K25Q eluted from the column under less stringent conditions (3.5 M urea) (data not shown) than either the wild-type profilin or any of the other mutant profilins (all of which required 7.5 M urea for elution). These same residues are known to localize to the PLP-binding site on human profilin (32, 33).
Inhibition of PLCγ1 Activity-The ability of profilins to inhibit PLCγ1 activity has been previously shown to be directly proportional to the affinity of profilins for PIP2 (11). Therefore, to determine which of the residues in question are important for the profilin-PIP2 interaction, we tested the mutants Y6F, D8A, K25Q, K53I, R74L, R88L, H119D, and G121D for their relative abilities to inhibit PLCγ1 activity. All except two mutants exhibited concentration-dependent inhibition that was indistinguishable from that of wild-type human profilin (Fig. 3A), corresponding to a best-fit dissociation constant, K_d, of approximately 0.21 µM between profilin and PIP2, assuming a stoichiometric ratio of 1:5 (7). The mutants that showed altered interactions with PLCγ1 were D8A and R88L (Table I). The mutant D8A demonstrated increased inhibition of PLCγ1 activity, corresponding to a best-fit K_d of approximately 25 nM, and the mutant R88L showed lessened inhibition of PLCγ1 activity, corresponding to a best-fit K_d of approximately 0.60 µM (Fig. 3A). The reason for the increased affinity demonstrated by D8A is unclear but is presumably charge-related and may suggest involvement of α-helix 1 in profilin's binding to PIP2. To rule out the possibility that a separate direct interaction between PLCγ1 and the profilins accounts for these alterations in activity, we have shown that PLCγ1 and profilin do not coprecipitate under a variety of conditions (data not shown).
In light of the work by Fedorov et al. (15), we were particularly interested in the reduction of profilin's affinity for PIP2 caused by mutating residue 88, since this result suggests that a region near the loop between β-strands 5 and 6 may be involved in the binding of PIP2. To assess the importance of this region, we tested three different synthetic decapeptides, containing sequences that matched those of three non-overlapping 10-amino acid stretches in the wild-type human profilin sequence, for their abilities to inhibit PLCγ1 activity. Besides selecting the peptide segment surrounding residue 88, we also selected segments from the regions implicated by Vinson et al. (14) and Yu et al. (12). The only peptide that had a significant effect on PLCγ1 activity was that which corresponded to the segment centered around residue 88, spanning residues 83-92 (Fig. 3B). Furthermore, the degree of inhibition observed was comparable to that of wild-type profilin (K_d approximately 0.63 ± 0.13 µM, calculated from peptide concentrations greater than 1 µM). Thus, it appears that this 10-amino acid segment, by itself, could account for much of profilin's ability to inhibit PLCγ1 activity. These data implicate this region of human profilin (residues 83-92) as a key binding site for PIP2.

(Fig. 2 legend: A, fractions corresponding to the large protein peak at the beginning of the 7.5 M urea elution; lanes 1-3 stained with Coomassie Blue, lane 4 analyzed by Western blot with an antibody specific for human profilin; all mutant profilins eluted in the same pattern as wild-type profilin except Y6F and K25Q, whose elution started with the 3.5 M urea wash. B, wild-type profilin, D8A, K25Q, and R88L analyzed by SDS-PAGE, approximately 5 µg of protein per lane; K25Q migrated slightly slower than the other profilins. Fig. 3 legend: A, the best-fit curves of Table I are superimposed on the data; B, inhibition of PLCγ1 activity by synthetic peptides P1 (closed circles, residues 50-59), P2 (closed triangles, residues 83-92), and P3 (closed squares, residues 128-137); data points and error bars calculated from two separate experiments.)
Effects of R88L on Actin Polymerization and Monomer Nucleotide Exchange-Since residue 88 also lies in the binding site for actin (9,16,34,35), we sought to determine whether mutating residue 88 also induced changes in profilin's interactions with actin. We measured the effects of R88L on the critical concentration of actin, the rate of actin polymerization, and the rate of actin monomer nucleotide exchange. While wild-type profilin decreased the steady-state concentration of F-actin, no difference between steady-state concentrations of F-actin in the absence and presence of R88L was detectable by our assay at the concentrations tested (Fig. 4A). Correlated with this observation was the fact that R88L inhibited the time course of actin polymerization by much less than wild-type profilin (Fig. 4B).
When we tested the effect of R88L on the rate of actin monomer adenosine nucleotide exchange, we found its effect to be greatly diminished compared to that of wild-type profilin, such that at least 25 times higher concentrations of R88L were required to achieve comparable levels of catalysis (Fig. 5).
Ruling Out Unstable Folding and Global Denaturation in R88L-Although the decreased actin and PIP2 interactions exhibited by R88L can be explained by the overlap in binding sites for actin and PIP2 on human profilin, an alternative explanation for this effect is that unstable folding results in large-scale denaturation. To rule out this possibility, we tested the abilities of wild-type profilin and R88L to bind PLP and performed protease sensitivity assays. As shown by Scatchard plot analyses, the affinity of R88L for PLP was not decreased in comparison to that of wild-type profilin, and both proteins lost their affinities for PLP when denatured by 7.5 M urea (Fig. 6A). This demonstrates that global denaturation does not occur in R88L, since profilin's binding to PLP requires the proper alignment of both terminal α-helices (32, 33, 35, 36). Protease sensitivity assays further ruled out large-scale conformational changes by demonstrating no significant difference between wild-type profilin and R88L in their time courses for digestion by trypsin (Fig. 6B).

DISCUSSION

The many effects of profilin on its ligands have been well worked out through numerous in vitro studies, but how these effects are coordinated inside of cells to produce vital functions has been more difficult to ascertain. For example, the precise manner in which profilin's three effects on actin are balanced in vivo remains unclear. The relative contributions of these effects likely depend on the ratio of profilin-to-actin concentrations, the relative availabilities of ADP and ATP, and the concentration of thymosin β4 and other sequestering proteins. Since these parameters probably vary greatly between different subcompartments of the same cell, profilin may actually inhibit actin polymerization in some regions of a cell while promoting actin polymerization in others (37). Recent evidence suggests that even across species, profilin's role may vary depending on its total intracellular concentration and the relative availability of other actin monomer sequestering proteins (38). Indeed, a variety of elegant in vivo studies including the microinjection, deletion, and overexpression of profilin in cells (26, 38-44) has demonstrated the dramatic phenotypic changes caused by simply altering the level of total profilin in cells.
Unfortunately, the specific mechanisms responsible for such changes fail to be revealed with any certainty by these studies.
Mutagenesis offers an alternative and more targeted approach for dissecting out profilin's functions and functional domains (35), made all the more feasible by the recent elucidation of the three-dimensional structures for Acanthamoeba and bovine profilin (14, 16). Using this approach, we substituted a variety of residues in the primary structure of human profilin to clarify further the importance of various regions of the molecule in binding to PIP2. In particular, we tested the effects of point mutations on the ability of human profilin to inhibit PLCγ1 activity, a property of profilin that is directly related to its ability to bind PIP2 (11). We mutated residues in the proposed binding sites for PIP2 (10, 12-15), as well as a number of other residues including several in both the N- and C-terminal α-helices, which constitute the most highly conserved regions of profilin.
If the binding site proposed by Vinson et al. (14) were correct, we would expect the mutations K25Q and K53I, if any, to decrease profilin's ability to inhibit PLCγ1 activity, since both of these are in the region analogous to the proposed site, and both involve substitution of a positively charged residue analogous to one present in Acanthamoeba profilin-II but not in profilin-I. Contrary to this, neither mutation had any observable effect on profilin's ability to inhibit PLCγ1, a result not entirely surprising since we now know that the loop between β-strands 1 and 2 makes up part of profilin's binding site for PLP (32, 33) and that profilin can bind both PLP and PIP2 simultaneously (33). This correlates with our observation that the mutants Y6F and K25Q, both with a substitution in the region of the PLP-binding site, exhibited diminished binding to PLP.
We found that mutating residue Arg-88 on the opposite side of profilin caused a decrease in profilin's inhibition of PLCγ1, suggesting that the binding site actually exists on the opposite side of the protein. This result is consistent with the presence of a positive electrostatic potential over the analogous region of Acanthamoeba profilin-II (15). Mutagenesis applied to yeast profilin has shown that substituting Arg-72 decreased PIP2 binding (35), providing further evidence that the binding site for PIP2 is localized to this area. We also showed that a decapeptide comprised of the sequence around residue 88 was able to inhibit PLCγ1 activity just as well as the entire molecule of wild-type profilin, thereby providing strong evidence that a binding site for PIP2 on human profilin is located near the loop between β-strands 5 and 6.
If the loop between β-strands 5 and 6 represents a binding site for PIP2, then the binding sites for PIP2 and actin would overlap, since biochemical studies, x-ray crystallography, and mutational analysis of yeast profilin have implicated the region spanning α-helix 3, β-strands 4-6, and the first portion of α-helix 4 as the binding site for actin (9, 16, 34, 35). Such overlap has already been demonstrated for cofilin (45) and yeast profilin (35), and this could account for the ability of PIP2 to dissociate profilin-actin complexes, although PIP2-induced changes in conformation may also be important in precluding actin binding (13). Overlap of the two binding sites is consistent with our finding that R88L exhibited markedly diminished interactions with actin, an effect not simply explained by large-scale denaturation since R88L demonstrated unaltered protease sensitivity and exhibited normal binding to PLP. Smaller scale effects on conformation, however, are more difficult to exclude. In fact, just as PIP2 binding induces a conformation that may disfavor actin binding, the mutation R88L may be stabilizing an intermediate conformation that favors neither PIP2 nor actin binding. Crystallographic analysis is underway and should provide us with specific structural information concerning this mutant. Why the charge differences between Acanthamoeba profilin-I and -II, two isoforms displaying markedly different affinities for PIP2, cluster to the side of the molecule that does not bind PIP2 remains unclear. It may be that the overall positive charge of profilin is important in facilitating an interaction with PIP2, as would be consistent with the increased affinity of D8A for PIP2, but that a separate non-electrostatic interaction is crucial for the cooperative binding of multiple PIP2 molecules. This is supported by the fact that the relative abilities of our three synthetic decapeptides to inhibit PLCγ1 activity did not correlate with the net charge of each peptide, a result also seen with PIP2-binding peptides derived from gelsolin (46). Examination of the loop at the proposed site shows that it protrudes from the molecule and could potentially serve as a core for the clustering of PIP2 molecules, an arrangement perhaps stabilized by a hydrophobic interaction between the proximal aspects of the acyl chains of PIP2 and the aliphatic residues at the tip of the loop (Gly-93, Gly-94, and Ala-95). This may explain why profilin does not bind to IP3 (11), the acidic head group of PIP2 cleaved by PLCγ1 from the remainder of the molecule. Recently, the PIP2-binding region of the N-terminal homology domain of pleckstrin was found to include a loop contained in the sequence KKGSVFNTWK (47). This bears a striking resemblance to the loop between β-strands 5 and 6 of Acanthamoeba profilin-II contained in the sequence KKGSAGVITVK. Admittedly, the corresponding sequence for human profilin is not so similar, but it is interesting to note that human profilin has a 10-fold greater affinity than Acanthamoeba profilin-II for PIP2 (11) and that the single major difference between the two profilin tertiary structures is in this same loop, which is much larger and more protrusive in human profilin. Furthermore, the affinity of pleckstrin for PIP2 (Kd ≈ 30 μM) (47) is much closer to that of Acanthamoeba profilin-II than to that of human profilin for PIP2.
The details of how profilin binds PIP2 are certainly complicated and will require additional research to establish more clearly the structures and mechanisms involved. Mutational studies directed to other residues in the vicinity of residue 88 are currently underway, and three-dimensional studies such as nuclear magnetic resonance spectroscopy or x-ray crystallography of the profilin-PIP2 complex will be needed to confirm our proposed region of human profilin as the true PIP2-binding site. Meanwhile, the generation of profilin mutants that are deficient for the profilin-PIP2 interaction but not for the actin interactions should help us to determine, through their overexpression in mammalian cells, the physiologic importance of the PIP2 interaction in vivo. As for R88L, its intact PLP-binding activity with diminished PIP2 and actin interactions may be useful in determining the importance of the recently reported interaction between profilin and vasodilator-stimulated phosphoprotein, an interaction mediated by proline-rich domains on vasodilator-stimulated phosphoprotein (48), by acting as a competitive inhibitor of wild-type profilin. As such, site-directed mutagenesis provides us with a powerful tool for deciphering the molecular interactions between this multifunctional protein and its many ligands. | 2018-04-03T02:44:07.310Z | 1995-09-08T00:00:00.000 | {
"year": 1995,
"sha1": "9f0405b28ff34bbab72b218575c1b852753abd14",
"oa_license": "CCBY",
"oa_url": "http://www.jbc.org/content/270/36/21114.full.pdf",
"oa_status": "HYBRID",
"pdf_src": "Highwire",
"pdf_hash": "8c9b33e99788bc77348c23b257a3ab6ff0c45201",
"s2fieldsofstudy": [
"Biology",
"Chemistry"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
15922734 | pes2o/s2orc | v3-fos-license | Dust and the ultraviolet energy distribution of quasars
The ultraviolet energy distribution of quasars shows a sharp steepening of the continuum shortward of 1000 A (rest-frame). We describe how we came to consider the possibility that this continuum break might be the result of absorption by carbon crystallite dust grains.
INTRODUCTION
The ultraviolet energy distribution of quasars is characterized by the so-called "big blue bump", which peaks in νF_ν at approximately 1000Å. The quasar 'composite' spectral energy distribution (SED) of Telfer et al. (2002, hereafter TZ02), obtained by co-adding 332 HST-FOS archived spectra of 184 quasars between redshifts 0.33 and 3.6, exhibits a steepening of the continuum at ∼1100Å. A fit of this composite SED using a broken powerlaw reveals that the powerlaw index changes from approximately −0.69 in the near-UV to −1.76 in the far-UV. We label this observed sharp steepening the 'far-UV break'. In these proceedings, we describe how we came to propose that absorption by crystalline carbon dust is the possible cause of the UV break observed in high redshift quasars. The argumentation behind this interpretation of the UV break has been presented in detail in Binette et al. (2005a, hereafter BM05). Further information can be found in recent proceedings such as Binette et al. (2005b, c).
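The broken-powerlaw parameterization referred to above can be written out explicitly; the following is a standard way of expressing it (the break wavelength and indices are the values quoted from TZ02, not a formula reproduced from that paper):

```latex
F_\nu \propto
\begin{cases}
  \nu^{\,\alpha_\nu}, \quad \alpha_\nu \approx -0.69, & \lambda \gtrsim \lambda_{\rm b} \ \text{(near-UV)},\\[2pt]
  \nu^{\,\alpha_\nu}, \quad \alpha_\nu \approx -1.76, & \lambda \lesssim \lambda_{\rm b} \ \text{(far-UV)},
\end{cases}
\qquad \lambda_{\rm b} \simeq 1100\ \text{\AA}.
```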
THE NEARBY AGN WITH FUSE
In an earlier paper, Binette et al. (2003) showed that H i scattering by a tenuous intergalactic component could not be the cause of the 1000Å break. This negative result supported the prevailing view that the break is an intrinsic feature of the quasar SED. More recently, Scott et al. (2004) derived a composite SED similar to TZ02 but for 'nearby' (z_q < 0.7) active galactic nuclei (AGN), using archived data from FUSE. The authors reported the lack of evidence of a steepening in nearby AGN! This new piece of information intrigued us. Although this absence of a continuum break could be explained away by supposing that the nearby AGN are less luminous and hence possess a lower mass black hole and as a result a hotter accretion disk, we were not initially satisfied by this explanation. An additional reason for being skeptical is that some individual spectra from the TZ02 sample are extremely far-UV deficient, showing a much steeper break than that seen in the composite. Two examples are given in Fig. 1. Yet their emission-line spectrum is no different from that of other quasars. Photoionization is generally believed to be the excitation mechanism of the emission lines. Therefore, the above-mentioned UV deficiency poses a serious challenge to our understanding of what mechanism powers the emission lines.
The original suggestion of investigating dust absorption as a possible cause of the break came from one of us, C. Morisset (CM), who had experimented with photoionization models of planetary nebulae that included dust mixed with the ionized gas. CM's suggestion arose after looking at an interesting figure prepared by S. Haro-Corzo (SHC), in which three spectra appeared, showing a steep far-UV break. LB argued that ISM dust could not reproduce the sharpness of the 1000Å break, as had already been shown by Shang et al. (2004). This initial suggestion nevertheless remained on the table and led LB to a bibliographical search for a new grain composition that would have the property of producing a sharp break at 1000Å.
COMPARING RADIO-LOUD AND RADIO-QUIET QUASARS
The first step has been to explore whether there might be evidence of reddening within the quasar sample that Telfer kindly lent to us in 2002. If dust were responsible for the break, we might for instance expect the degree of steepening to scale with the amount of dust present. TZ02 had previously shown that the far-UV continuum was steeper in radio-loud (RL) than in radio-quiet (RQ) quasars. Within the paradigm of the dust being the cause of the break, this difference must be the result of differences in the amount of dust present. In other words, radio-loud quasars are possibly more absorbed than radio-quiet quasars.
To verify this proposition, we over-plot in Fig. 2 the separate radio-quiet and radio-loud composite SEDs derived by TZ02. Each composite in this figure, however, has been multiplied by the appropriate normalization constant that made their flux equal to unity at 1350Å. The radio-loud and radio-quiet composites in Fig. 2 are painted in black and gray, respectively. The black dot represents the renormalization wavelength. Within the narrow spectral segments that appear to be line-free between 2500 and 1200Å, both continua overlap remarkably well. This suggests that the intrinsic SEDs longward of Lyα are very similar in both quasar subsets. The dotted line in panel a is a powerlaw fit to line-free segments, using the mean spectral index value of α_ν = −0.69 determined by TZ02 for the combined RL+RQ sample.
If we make the simplification that dust absorption is negligible longward of 1000Å, we can take the ratio of the fitted powerlaw to either composite SED as a means of showing how the UV deficit increases with decreasing wavelength. Such ratio-curves are plotted in Fig. 3. We can see that the UV deficit increases smoothly, starting at the break, near 1000Å, down to about 550Å. The UV deficit increases faster in the case of radio-loud quasars than in radio-quiet quasars. Such a difference in the behavior of the ratio-curves is expected if RL quasars are more absorbed than RQ quasars and dust absorption is responsible for the UV deficit. The absorption would be characterized by an absorption cross-section that increases toward shorter wavelengths. At wavelengths shorter than 550Å, some spectral features appear to be unique to each composite. The absorption test therefore becomes inconclusive in that region. This could be the result of having too few very high redshift quasars among the TZ02 sample, which results in a loss of reliability of the composite SEDs in that wavelength domain (see TZ02).
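A minimal sketch of the ratio-curve construction described above, assuming the composite SED is available as arrays of wavelength (in Å, ascending order) and F_ν; the function name and the use of NumPy are our assumptions, not code from TZ02 or BM05:

```python
import numpy as np

def ratio_curve(wave, f_nu, alpha=-0.69, wave_norm=1350.0):
    """Divide a powerlaw F_nu ~ nu**alpha, anchored at wave_norm, by the
    composite SED (renormalized to unity at wave_norm); the ratio exceeds
    unity wherever the observed continuum falls below the extrapolated
    near-UV powerlaw, i.e. it traces the 'UV deficit'."""
    wave = np.asarray(wave, dtype=float)
    # renormalize the composite so that F_nu(wave_norm) = 1
    f_norm = np.asarray(f_nu, dtype=float) / np.interp(wave_norm, wave, f_nu)
    # F_nu ~ nu**alpha is equivalent to lambda**(-alpha)
    powerlaw = (wave / wave_norm) ** (-alpha)
    return powerlaw / f_norm

# e.g. the ratio at 700 A should exceed unity if the SED steepens near 1000 A
```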
CARBON CRYSTALLITES
The UV deficit in RL and RQ quasars has been shown to behave qualitatively as expected if dust absorption were responsible for the break. The next step consisted in searching for the kind of material that might possess the optical properties required to produce a sharp absorption feature at the same position as the 1000Å break. We looked for a dust constituent whose absorption cross-section peaked in the far-UV and yet caused negligible absorption at wavelengths longer than 1200Å. Ideally, as is the case with the interstellar medium (ISM) dust, the grain particles should be composed of the most abundant elements. Using ADS and Google, the most promising candidate appeared to be carbon in its crystalline form, but with surface impurities: the so-called meteoritic nanodiamonds. The finding of the recently published work on nanodiamonds by Mutschke, Andersen and coworkers (Mutschke et al. 2004) led to a real breakthrough in the project, since Mutschke et al. (2004) had just measured the optical properties of nanodiamonds down to very short wavelengths. The grain size distribution could in principle be varied as needed, using Mie theory to compute the extinction curve (Binette et al. 2005a, b). It turns out that it is unnecessary to assume a different grain size distribution than that which is found to characterize nanodiamonds embedded in primitive meteorites (Lewis et al. 1989), provided the dust is intrinsic to quasars and not extragalactic (see BM05).
DUST GRAINS WITH AND WITHOUT SURFACE IMPURITIES
As matters stand, the crystalline form of carbon can exist either in the form of the well-known terrestrial type of cubic diamonds or as the type found in primitive meteorites such as the Allende meteorite, which was incidentally used by Mutschke et al. (2004) in their study of non-terrestrial nanodiamonds.
RESULTS
Using the complex refraction indices n + ik from Mutschke et al. (2004) for the Allende nanodiamonds and from Edwards & Philipp (1985) for the cubic diamonds, we proceeded to calculate the extinction curve corresponding to each of the two nanodiamond types. Assuming a simple intrinsic SED consisting of a powerlaw with the spectral index inferred from the observed near-UV region in each quasar, we then calculated the absorbed powerlaw and compared it with the observed SED. We found that an acceptable fit of the 1000Å break could be obtained for 80% of the quasars. However, the dust-absorbed powerlaw model requires in most cases that dust grains of the above two types be combined (the terrestrial cubic diamonds and the nanodiamonds of the type found in primitive meteorites). This result, as well as the computed extinction curves, is shown in the proceedings of another meeting (Binette et al. 2005b). We will present here only a few spectra that can be fitted using a single type of nanodiamonds.
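The following sketch illustrates the kind of calculation involved. BM05 used full Mie theory; here, for brevity, only the small-particle (Rayleigh) limit of the absorption efficiency is shown, which is a reasonable first approximation for nanometre-sized grains at these wavelengths. All function names and the normalization knob tau_1000 are illustrative assumptions:

```python
import numpy as np

def q_abs_small_grain(m, radius_nm, wave_angstrom):
    """Absorption efficiency in the Rayleigh limit (size parameter x << 1):
    Q_abs = 4 x Im[(m**2 - 1)/(m**2 + 2)], with m = n + ik the complex
    refractive index of the grain material."""
    x = 2.0 * np.pi * (10.0 * radius_nm) / np.asarray(wave_angstrom)  # 1 nm = 10 A
    return 4.0 * x * ((m**2 - 1.0) / (m**2 + 2.0)).imag

def absorbed_powerlaw(wave_angstrom, alpha, m_of_wave, tau_1000, radius_nm=1.5):
    """Attenuate an intrinsic powerlaw F_nu ~ nu**alpha by dust whose optical
    depth follows Q_abs(lambda), rescaled so that tau(1000 A) = tau_1000.
    m_of_wave: array of complex refractive indices (e.g. tabulated laboratory
    data interpolated onto the same wavelength grid)."""
    wave = np.asarray(wave_angstrom, dtype=float)
    f_intrinsic = (wave / 1000.0) ** (-alpha)      # F_nu ~ nu**alpha
    q = q_abs_small_grain(m_of_wave, radius_nm, wave)
    tau = tau_1000 * q / np.interp(1000.0, wave, q)
    return f_intrinsic * np.exp(-tau)
```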
As shown in Fig. 1, the cubic diamond extinction curve fits the abrupt breaks found in the quasars PG 1248+401 and Ton 34 very well. The Allende nanodiamonds, on the other hand, fit better the break observed in 4C55 (continuous line), as shown in Fig. 4, where a comparison is also made with pure cubic diamonds (dotted line) and a combination of the two nanodiamond types (dashed line). The hydrogen columns quoted in the figure captions assume that all carbon is locked up in the dust and that its abundance is solar. This corresponds to a dust-to-gas mass ratio of 0.003. The real dust-to-gas ratio cannot be constrained at this stage.
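The quoted ratio of 0.003 can be recovered with back-of-the-envelope arithmetic; the solar carbon abundance used here (C/H ≈ 3.6 × 10⁻⁴ by number) is our assumption, not a value taken from the paper:

```latex
\frac{M_{\rm dust}}{M_{\rm gas}}
\simeq \frac{12\,(\mathrm{C/H})}{1.4}
\approx \frac{12 \times 3.6\times 10^{-4}}{1.4}
\approx 3\times 10^{-3},
```

where 12 is the atomic mass of carbon and the factor 1.4 accounts for the mass of helium per hydrogen atom in the gas.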
Instead of using optically known materials, one could have treated the absorption hypothesis as an inverse problem, working out the extinction curve that best reproduces the observations. We consider, however, that it confers a higher degree of plausibility to have used an extinction curve based on a known material, such as that of the Allende meteorite, rather than an invented cross-section. Finally, the vector that we propose to be responsible for the absorption consists of grains made of carbon atoms, a major constituent of the interstellar medium dust, albeit here in a less common form, that of crystals (nanodiamonds). | 2014-10-01T00:00:00.000Z | 2005-09-24T00:00:00.000 | {
"year": 2005,
"sha1": "feaae03423b49abdcf2951a735bf10a09bf6fe5c",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "c2a8a49bd20da0784bec826b31f6b0cce8adc570",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
119476298 | pes2o/s2orc | v3-fos-license | Strange mode instabilities and mass-loss in evolved massive primordial stars
A linear stability analysis of models for evolved primordial stars with masses between 150 and 250 M$_{\odot}$ is presented. Strange mode instabilities with growth rates in the dynamical range are identified for stellar models with effective temperatures below log T$_{\rm{eff}}$ = 4.5. For selected models the final fate of the instabilities is determined by numerical simulation of their evolution into the non-linear regime. As a result, the instabilities lead to finite amplitude pulsations. Associated with them are acoustic energy fluxes capable of driving stellar winds with mass-loss rates in the range between 7.7 $\times$ 10$^{-7}$ and 3.5 $\times$ 10$^{-4}$ M$_{\odot}$ yr$^{-1}$.
INTRODUCTION
Primordial stars with initially vanishing metallicity are cosmologically relevant in many respects: Being the only source for the production of metals, they are responsible for chemical evolution in the early universe (see Nomoto et al. 2013). Thus they have an important influence on the formation and evolution of cosmic structure. Having masses up to ∼ 1000 M ⊙, they typically end their life as pair instability supernovae enriching their environment with heavy elements or leaving behind intermediate mass black holes (see, e.g., Bromm & Larson 2004; Ohkubo et al. 2009). Being smaller and hotter than their counterparts with finite metallicity, they are the most promising source of the reionization of the universe (see, e.g., Tumlinson & Shull 2000). Indication for the existence of primordial (Pop III) stars has meanwhile been found from the observations of high redshift galaxies (see, e.g., Fosbury et al. 2003; Kashlinsky et al. 2005). The study presented in this paper of the structure and evolution of primordial stars, including consideration of their stability, is therefore of utmost importance for cosmology. Should an instability prevail which leads to pulsationally driven mass loss, it would be relevant not only for the evolution of Pop III stars but also for the enrichment with metals of the environment.
Investigations of the formation process of primordial stars indicate that the absence of metals allows for the formation of massive fragments with masses in the range between 100 and 1000 M ⊙ (see, e.g., Abel et al. 2000; Bromm et al. 1999, 2002). Whether these massive primordial Pop III stars suffer from significant mass-loss during their evolution or the evolution proceeds at constant mass is still a matter of debate. Due to the absence of metals, line driven winds can be excluded as a source of mass-loss. A stability analysis of Pop III stars with respect to the ε-mechanism revealed instabilities which are too weak to drive a significant mass-loss (Baraffe et al. 2001). Moreover, this ε-instability is restricted to the very vicinity of the zero age main sequence. On the other hand, massive Pop III stars are characterized by high luminosity to mass ratios (>10 3 in solar units) which imply a high fraction of the radiation pressure in the envelopes of these stars. Both high L/M ratios and dominant radiation pressure favour the occurrence of strange mode instabilities (see, e.g., Glatzel 1994). Therefore strange mode instabilities are to be expected in massive primordial stars.
The objective of the present study is to identify strange mode instabilities in massive primordial stars by a linear stability analysis and subsequently to determine the final result by numerical simulation of their evolution into the non-linear regime. Evolution and stability with respect to the ε-mechanism close to the main sequence has been studied by Baraffe et al. (2001). In the present study we shall therefore ignore this phase and restrict our investigation to the post main sequence phase, where the evolution proceeds at almost constant mass and luminosity from high to low effective temperature. Moreover, since the ε-mechanism is disregarded and the strange mode instabilities of interest operate in the stellar envelope only, we can restrict ourselves to the consideration of envelopes. The stellar models considered will be described in section 2, their linear stability analysis in sections 3 and 4. Non-linear simulations are discussed in section 5. A discussion and our conclusions follow (section 6).
MODELS
Concerning the objective to study strange mode instability of evolved massive primordial stars disregarding ε-instability, we restrict ourselves to the investigation of envelope models (rotation and magnetic fields are ignored) with masses of 150, 200 and 250 M ⊙, respectively. In the post main sequence phase, evolution proceeds at almost constant mass and luminosity. Thus these masses correspond to a luminosity of log L/L ⊙ = 6.60, 6.77 and 6.88, respectively (see Moriya & Langer 2015; Baraffe et al. 2001). The effective temperature is varied between log T eff = 4.80 and log T eff = 3.62. For the chemical composition, we adopt primordial values Z = 0.00, X = 0.77 and Y = 0.23. Opacities are taken from the OPAL tables (Rogers & Iglesias 1992; Rogers et al. 1996). For given mass, luminosity and effective temperature, envelope models are constructed by integrating the equations of mass conservation, hydrostatic equilibrium and energy transport from the photosphere to a maximum cutoff temperature. To ensure that the parts of the envelope relevant for stability are represented, the latter has to be chosen sufficiently high. For the initial conditions of the integration, we have adopted Stefan-Boltzmann's law and the common prescription for the photospheric pressure (see section 11.2 of Kippenhahn et al. 2012).
Concerning the energy transport, Schwarzschild's criterion has been used for the onset of convection. Convection is treated according to the standard mixing length theory (Böhm-Vitense 1958) with 1.6 pressure scale heights for the mixing length. In models with effective temperatures above log T eff = 3.7, energy transport by convection is negligible; below log T eff = 3.7, the fraction of convectively transported energy strongly increases with decreasing T eff. This is illustrated in Fig. 1, where the ratio of the convective and the total luminosity is given as a function of relative radius for stellar models with different effective temperatures. Stellar models with log T eff ≈ 3.6 are fully convective.
LINEAR STABILITY ANALYSIS
The linear stability analysis is based on the equations governing linear stability and pulsations in the form given by Gautschy & Glatzel (1990b, equation 2.12). Together with four boundary conditions, they form a fourth order eigenvalue problem. It is solved using the Riccati method introduced by Gautschy & Glatzel (1990a). The eigenvalues are complex, where the real parts (σ_r) correspond to the pulsation frequency and the imaginary parts (σ_i) provide information about excitation or damping of the corresponding mode. Negative values of the imaginary part (σ_i < 0) indicate excitation and instability, positive values (σ_i > 0) correspond to damping. In the following, eigenvalues will be normalized by the global free fall time √(R³/3GM), where G, R and M are the gravitational constant, radius and mass of the stellar model considered, respectively. For the normalization see also Baker & Kippenhahn (1962).
As a theory of the interaction of pulsation and convection is still not available, we have adopted for the treatment of convection the 'frozen in approximation' introduced by Baker & Kippenhahn (1965). In this approximation, the Lagrangian perturbation of the convective luminosity is disregarded in the pulsation equations. It is applicable as long as the convective turn over timescale is longer than the pulsation timescale and if the energy is mainly transported by radiation diffusion (see Baker & Kippenhahn 1965, for a detailed discussion). For the models considered here, the frozen in approximation holds for log T eff > 3.7. However, as discussed in the previous section (see Fig. 1) below log T eff ≈ 3.7 convection is dominant and the results of the stability analysis have to be interpreted with caution.
RESULTS OF THE LINEAR STABILITY ANALYSIS
For the three masses 150, 200 and 250 M ⊙ and associated luminosities log L/L ⊙ = 6.60, 6.77 and 6.88, the results of the linear stability analysis, i.e., the real and imaginary parts (σ_r, σ_i) of the eigenvalues normalized by the global free fall time, are presented as a function of the effective temperature in Figs. 2-7, which will be referred to as modal diagrams in the following. As long as the bottom boundary of the envelope is chosen sufficiently deep, the eigenfrequencies are neither sensitive to its position nor to the boundary conditions imposed there, which is a consequence of the common exponential decay of the eigenfunctions from the surface to the stellar core. For the boundary conditions at the photosphere, we have considered the conventional set, where the Lagrangian pressure perturbation is required to vanish and a linearized version of Stefan-Boltzmann's law is assumed to hold (see Baker & Kippenhahn 1962). Alternatively, we have chosen boundary conditions which are identical with those used in the subsequent non-linear simulations. These boundary conditions, requiring the gradient of compression and the divergence of the heat flux to vanish, are constructed such that reflection of waves and shocks at the outer boundary is minimized (see Grott et al. 2005). For stellar models previously tested for linear stability, the choice of the photospheric boundary conditions was not crucial (Yadav & Glatzel 2017a, b). Even if occasionally quantitative differences were observed, the results of the linear stability analysis did qualitatively not depend on the outer boundary conditions. For the massive primordial models considered here we have performed a stability analysis both using the conventional outer boundary conditions (results are shown in the modal diagrams Figs. 2-4) and with the boundary conditions consistent with the subsequent non-linear treatment (results are shown in the modal diagrams Figs. 5-7).
The stability analysis of models with 150 M ⊙ using conventional boundary conditions (Fig. 2) exhibits a complex behaviour of the eigenvalues which is typical for the expected strange mode phenomenon (see, e.g., Yadav & Glatzel 2016, 2017a): At least two sets of modes can be identified by actual crossings and sequences of avoided crossings. One of the mode crossings has unfolded into an instability band implying instabilities with growth rates in the dynamical regime for effective temperatures between log T eff = 4.21 and log T eff = 3.98. A second instability associated with the second lowest eigenfrequency σ_r is observed for effective temperatures below log T eff = 3.7 (see Fig. 2). We emphasize that a classification in terms of fundamental modes and overtones is not applicable here due to significant deviations from adiabatic behaviour. In models with log T eff < 3.7, energy transport is dominated by convection. Thus the results of the stability analysis in this range, in particular the instability identified there, have to be interpreted with caution. Contrariwise, the strange mode instability for effective temperatures between log T eff = 4.21 and log T eff = 3.98 occurs in models where convective energy transport is negligible.
The stability analysis of models with 200 M ⊙ and 250 M ⊙ using conventional boundary conditions (Figs. 3 and 4) reveals results qualitatively similar to those for 150 M ⊙ (Fig. 2). With increasing mass (and luminosity) both the growth rate and the temperature range of the strange mode instabilities increases to 4.4 > log T eff > 3.93 (200 M ⊙ ) and to 4.42 > log T eff > 3.91 (250 M ⊙ ). On the other hand, both the growth rate and maximum effective temperature (log T eff = 3.7) for the instability of the convectively dominated models is almost independent of mass (and luminosity).
Comparing the results of the stability analysis for boundary conditions consistent with the subsequent nonlinear simulations (Figs. 5 -7) with those based on the conventional outer boundary conditions (Figs. 2 -4), we recover counterparts of the strange mode instability and the instability for log T eff < 3.7. The latter is not affected by the boundary conditions, whereas the strange mode instability has become stronger and affects more than a single mode. In addition, several dynamical monotonic instabilities are found. Obviously their existence is due to the special choice of boundary conditions. The outer boundary conditions are in principle ambiguous, since the outer boundary of the stellar model does not coincide with the boundary of the star.
Therefore the physical relevance of instabilities caused by special boundary conditions remains an open question. We shall discuss this issue again in connection with the simulations of the evolution of the instabilities into the non-linear regime and the final fate of unstable stellar models.
Contrary to our study, the stability analysis of massive primordial stars by Moriya & Langer (2015) did not reveal any instability for log T eff > 3.7, in particular not the strange mode instability extending up to log T eff ≈ 4.5. Whether the code used by these authors is not capable of identifying strange mode instabilities or an operating error has prevented the discovery of the instability needs further study. On the other hand, the instability for log T eff < 3.7 identified by Moriya & Langer (2015) is confirmed. However, as already noted, this instability occurs in convectively dominated models, where the fundamental assumptions of the stability analysis are no longer valid. Thus the physical relevance of this instability is highly questionable and requires further investigations.
Our study might also be regarded as a test of the strange mode instability mechanism proposed by Glatzel (1994). In the limit of high luminosity to mass ratios and dominant radiation pressure, the dispersion relation for strange modes reduces to a simple form (Glatzel 1994) in which ω, k, g and κ_ρ denote frequency, wave number, gravity and the logarithmic derivative of the opacity with respect to density, respectively. For finite k and g, this dispersion relation provides instability only if κ_ρ does not vanish. Thus apart from high L/M ratios and dominant radiation pressure, κ_ρ ≠ 0 is an additional requirement for the occurrence of strange mode instabilities. For massive primordial stars, high L/M ratios and dominant radiation pressure prevail in all evolutionary phases. Close to the main sequence, the effective temperatures are sufficiently high, such that the matter in the envelope is completely ionized and the opacity is determined by electron scattering with κ_ρ = 0. As a consequence, strange mode instabilities should not occur. However, as soon as the effective temperature in the post main sequence phase becomes sufficiently low for helium to recombine, bound-free transitions of helium determine the opacity, implying κ_ρ ≠ 0. Thus we expect strange mode instabilities to occur together with helium recombination below log T eff ≈ 4.5. The findings of our study agree with these predictions.
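For reference, the opacity derivative entering this criterion is the standard logarithmic one:

```latex
\kappa_\rho \equiv \left(\frac{\partial \ln \kappa}{\partial \ln \rho}\right)_{T},
```

which vanishes for pure electron scattering (κ = const) and becomes non-zero once bound-free transitions of recombining helium contribute to the opacity.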
NONLINEAR SIMULATIONS
For selected unstable models, the evolution of the instabilities has been followed by numerical simulation into the non-linear regime in order to determine their final fate. The numerical scheme used for this purpose is described in Grott et al. (2005). It is fully conservative with respect to energy, i.e., the energy balance is satisfied by the scheme intrinsically and locally. This property is essential for the simulation of stellar instabilities and pulsations, since the kinetic energy and the time integrated acoustic energy at the outer boundary are smaller than the dominant gravitational and internal energies (see equation 23 of Grott et al. 2005) by several orders of magnitude for stellar pulsations. Thus for a meaningful determination of the kinetic energy and time integrated acoustic energy (at the outer boundary) which we are interested in, an extremely high accuracy is required which can only be achieved by using a fully conservative scheme. This issue has meanwhile been discussed and emphasized several times (see, e.g., Grott et al. 2005; Yadav & Glatzel 2016, 2017a). As a consequence of conservativity, the numerical scheme has to be implicit with respect to time. In the course of the non-linear evolution of instabilities, shock waves do occur. They are represented by the introduction of artificial viscosity. For further details of the numerical treatment, we refer to Grott et al. (2005). One term occurring in the energy balance of the system corresponds to the time integrated acoustic energy at the outer boundary (see equation 23 of Grott et al. 2005) representing the mechanical energy lost from the configuration by acoustic waves and shocks. As discussed in a previous paper (Yadav & Glatzel 2017b), there are phases of incoming and outgoing acoustic fluxes during a pulsation cycle. As a consequence, the time integrated acoustic energy is a non-monotonic function. However, integrated over one cycle, the outgoing energy in general exceeds the incoming energy and on average the time integrated acoustic energy increases with time. Thus we obtain a mean slope of the integrated acoustic energy, which corresponds to a mean mechanical luminosity of the system (see Yadav & Glatzel 2017b). Assuming that this mean mechanical luminosity is responsible for mass-loss of the star, we can estimate the mass-loss rate by comparing it to the wind kinetic luminosity (1/2) Ṁ v_∞², where Ṁ and v_∞ are the mass-loss rate and terminal wind velocity, respectively (see also Grott et al. 2005; Yadav & Glatzel 2016, 2017a). The terminal wind velocity is estimated by the escape velocity. In this way, an estimate is obtained in the following for the mass-loss rate from the mean slope of the time integrated acoustic energy.
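A minimal sketch of this mass-loss estimate, in cgs units; the helper name and the illustrative numbers in the usage comment are our assumptions, not values from the paper's Table 1:

```python
import math

G = 6.674e-8        # gravitational constant, cgs
M_SUN = 1.989e33    # g
YEAR = 3.156e7      # s

def mass_loss_rate(l_mech, m_star, r_star):
    """Estimate Mdot from the mean mechanical (acoustic) luminosity via
    l_mech = 0.5 * Mdot * v_inf**2, with the terminal wind velocity v_inf
    approximated by the escape velocity, as described in the text.

    l_mech in erg/s, m_star in g, r_star in cm; returns Mdot in M_sun/yr."""
    v_esc = math.sqrt(2.0 * G * m_star / r_star)
    mdot = 2.0 * l_mech / v_esc**2          # g/s
    return mdot * YEAR / M_SUN

# Illustrative call with hypothetical numbers: a 150 M_sun model with
# R = 3e13 cm and a mean acoustic luminosity of 1e37 erg/s gives ~2e-4 M_sun/yr
# print(mass_loss_rate(1e37, 150 * M_SUN, 3e13))
```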
Models with masses of 150 M ⊙
For models with masses of 150 M ⊙ (log L/L ⊙ = 6.6) four unstable configurations with effective temperatures of log T eff = 4.6, 4.4, 4.2 and 4.0 respectively have been selected for the numerical simulation of the evolution of the instabilities into the non-linear regime.
At log T eff = 4.6, a monotonic instability was identified using 'minimum reflection boundary conditions' (see Fig. 5), whereas this model turned out to be stable for the conventional boundary conditions (see Fig. 2). Starting from numerical noise, the evolution of the instability (see Fig. 8) exhibits a linear phase of monotonic exponential growth (with the growth rate predicted by the linear analysis) saturating at a weakly non-linear level (below 0.01 cm/s in terms of the velocity amplitude). In this phase, a slight modification of the structure of the model implies stabilization. As a consequence, the model oscillates around its new equilibrium, i.e., we observe a damped oscillation ending, in terms of the velocity amplitude, on the numerical noise level superimposed on the new hydrostatic equilibrium.
At log T eff = 4.4, an oscillatory instability was identified using 'minimum reflection boundary conditions' (see Fig. 5), whereas this model turned out to be stable for the conventional boundary conditions (see Fig. 2). Starting from numerical noise, the evolution of the instability (see Fig. 9) exhibits a linear phase of (oscillatory) exponential growth (with period and growth rate as predicted by the linear theory) saturating after ≈ 150 days in the non-linear regime with a velocity amplitude of 38 km s −1 . After ≈ 600 days, the structure is sufficiently modified to ensure stabilization and the configuration starts to oscillate around a new hydrostatic equilibrium with an exponential decay of the superimposed perturbations. The decay of the velocity perturbations switches from oscillatory to monotonic around ≈ 900 days.
At log T eff = 4.2, an oscillatory instability was identified independent of the boundary conditions (see Figs. 5 and 2). After the linear phase of exponential growth, non-linear saturation is reached for this model after ≈ 600 days with a velocity amplitude of ≈ 30 km s −1 (see Fig. 10b). Rather than a new hydrostatic equilibrium, finite amplitude pulsations are the consequence of the instability for this model. An increase of the mean radius by ≈ 8 per cent in the non-linear regime is found (see Fig. 10a), implying the final non-linear pulsation period of 13.3 days to be higher than predicted by the linear analysis. For illustration of the accuracy requirement and the numerical quality of the simulation, some terms occurring in the energy balance (see equation 23 of Grott et al. 2005) together with its error are displayed in Fig. 10(f)-(i). Potential and internal energy (Fig. 10h) have almost identical modulus and opposite sign. They exceed the kinetic (Fig. 10f) and the time integrated acoustic energy (Fig. 10g) by three and one order of magnitude, respectively, whereas the error in the energy balance (Fig. 10i) is smaller than the smallest term in the energy balance by at least two orders of magnitude. From the mean slope of the time integrated acoustic energy, we derive 7.7 × 10 −7 M ⊙ yr −1 as an estimate for the mass-loss rate induced and driven by the pulsation. Similar to the model with log T eff = 4.6, a monotonic instability was identified for log T eff = 4.0 using 'minimum reflection boundary conditions' (see Fig. 5), whereas stability or very weak instability was found for the conventional boundary conditions (see Fig. 2). Starting from numerical noise, the evolution of the instability (see Fig. 11) exhibits a linear phase of monotonic exponential growth (with the growth rate predicted by the linear analysis) saturating in the non-linear regime with a velocity amplitude below 10 km s −1. In this phase, the structure becomes sufficiently modified to ensure stabilization and the configuration starts to oscillate around a new hydrostatic equilibrium with an exponential decay of the superimposed perturbations. The decay of the velocity perturbations ends on the numerical noise level.

[Figure 10 caption (fragment): "... we deduce that the evolution of the instability starts from hydrostatic equilibrium with velocity perturbations of the order of 10 −7 cm s −1 superimposed, undergoes the linear phase of exponential growth and saturates in the non-linear regime with an amplitude of 30 km s −1. Compared to the hydrostatic value, the mean radius is increased by ≈ 8 per cent in the non-linear regime. Some terms in the energy balance (with hydrostatic values subtracted) are given in (f)-(h) as a function of time. Potential and internal energy (h) with almost identical modulus have opposite sign. They are bigger than the kinetic (f) and the time integrated acoustic energy (g) by three and one orders of magnitude, respectively. The error in the energy balance is shown in (i). It is smaller than the smallest term in the energy balance by at least two orders of magnitude."]

[Figure 11 caption: "Same as Fig. 9 but for a model having M = 150 M ⊙ and log T eff = 4.0. Note the presence of a monotonically unstable mode in the linear phase of exponential growth. The instability rearranges the stellar structure and a new hydrostatic state with slightly increased radius and decreased effective temperature is attained."]
From our study, we conclude that the model which is linearly unstable independent of the boundary conditions (log T eff = 4.2, Fig. 10) finally exhibits finite amplitude pulsations (including mass-loss), whereas instabilities caused by the boundary conditions only (log T eff = 4.6, 4.4, 4.0; Figs. 8, 9, 11) lead to modified hydrostatic equilibria. Thus, with respect to the final fate of the model (hydrostatic equilibrium or finite amplitude pulsations with mass-loss), the dependence on boundary conditions of the linear stability analysis becomes less important (see the discussion in section 4). The results of our simulations are summarized in Table 1, where final pulsation periods and mass-loss rates are given as a function of mass and effective temperature. Apart from its dependence on the strength of the underlying instability, the mass-loss rate seems to increase with mass and luminosity.
DISCUSSION AND CONCLUSIONS
A linear stability analysis has been performed for primordial post main sequence stellar models with masses between 150 and 250 M ⊙ (corresponding to luminosities between log L/L ⊙ = 6.6 and 6.88) covering the range of effective temperatures between log T eff = 4.80 and 3.62. The luminosity to mass ratios of these models lie between 2.6 × 10 4 and 3.0 × 10 4 (solar units) and suggest the existence of strange mode instabilities with growth rates in the dynamical range which typically occur for luminosity to mass ratios in excess of 10 3 (Gautschy & Glatzel 1990b;Glatzel 1994).
Contrary to previous investigations (Moriya & Langer 2015), the expected strange mode instabilities have in fact been discovered, however only below an effective temperature of log T eff ≈ 4.5. These findings are consistent with the predictions of a model for strange mode instabilities proposed by Glatzel (1994). According to it, in addition to high luminosity to mass ratios, a non-vanishing derivative of the opacity with respect to density is required for the existence of strange mode instabilities. For temperatures above log T eff ≈ 4.5, the matter in the stellar envelope is completely ionized and for primordial chemical composition (Z = 0) only electron scattering contributes to the opacity. Since the latter is constant, the derivative of opacity with respect to density vanishes and, according to the model and in agreement with our findings, no strange mode instabilities do occur. On the other hand, if the effective temperature falls below log T eff ≈ 4.5, helium recombines and its bound-free transitions contribute to the opacity, resulting in a finite derivative of opacity with respect to density. Thus according to the model, strange mode instabilities should appear together with helium recombination for temperatures below log T eff ≈ 4.5, which agrees perfectly with our results. We may thus take our results as a confirmation of the strange mode model introduced by Glatzel (1994).
A second type of instabilities was identified for effective temperatures below log T eff ≈ 3.7. The energy transport in the envelopes of models affected by this instability is almost entirely due to convection. As any linear stability analysis developed so far does not contain a satisfactory treatment of convection, we refrain from further speculations concerning possible implications and consequences of this instability. In particular, contrary to Moriya & Langer (2015) we have not performed simulations of the evolution of this instability into the non-linear regime.
For selected stellar models, the evolution of strange mode instabilities into the non-linear regime was followed by numerical simulations. Except for models where the instability is caused by a special choice of boundary conditions (in these cases a modified hydrostatic equilibrium is the consequence of the instability), strange mode instabilities in primordial post main sequence models were found to lead to finite amplitude pulsations with velocity amplitudes of the order of 50 km s −1. Associated with these pulsations are acoustic energy fluxes capable of driving winds with mass-loss rates up to 3.5 × 10 −4 M ⊙ yr −1. That these mass-loss rates are smaller than those derived by Moriya & Langer (2015) by at least two orders of magnitude is noteworthy. The post main sequence phase of the primordial stars studied lasts for 10 4 -10 5 years. Even the maximum mass-loss rates determined here would then influence the evolution of these objects at most marginally.
We emphasize that extremely high accuracy requirements have to be satisfied for the numerical treatment of stellar instabilities and pulsations, which can only be met by a scheme that is fully conservative with respect to energy. They are due to the fact that the kinetic energies and the acoustic energy fluxes to be determined here are smaller than the dominant gravitational and internal energies by several orders of magnitude. In the simulations performed by Moriya & Langer (2015), the kinetic energies reach a level (10^47 erg) which is typical for gravitational and internal energies but not for kinetic energies. We suspect that the numerical scheme adopted by these authors is not conservative and the kinetic energies live on the numerical error of the gravitational and internal energies. This would cast severe doubts on the reliability of this investigation, in particular on the high mass-loss rates claimed. We note that the numerical calculations presented by Appenzeller (1970) suffer from the same problem. Strange modes and associated instabilities are not restricted to radial perturbations and have been identified for non-radial perturbations as well (see, e.g., Glatzel & Mehren 1996; Glatzel & Kaltschmidt 2002). Therefore we suspect the presence of unstable strange modes in massive primordial stars for non-radial perturbations too. A linear stability analysis for non-radial perturbations will be presented in a forthcoming study. | 2018-01-15T17:57:40.000Z | 2018-01-15T00:00:00.000 | {
"year": 2018,
"sha1": "ec5522a9f49271f852397f28cbeda59281d797ff",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/1801.04890",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "7c0ea8d2b3f21bea716a8b581979154084722c07",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
257231260 | pes2o/s2orc | v3-fos-license | A Thematic Survey on the Reporting Quality of Randomized Controlled Trials in Rehabilitation: The Case of Multiple Sclerosis
Background and Purpose: Optimal reporting is a critical element of scholarly communications. Several initiatives, such as the EQUATOR checklists, have raised authors' awareness about the importance of adequate research reports. On these premises, we aimed at appraising the reporting quality of published randomized controlled trials (RCTs) dealing with rehabilitation interventions. Given the breadth of such literature, we focused on rehabilitation for multiple sclerosis (MS), which was taken as a model of a challenging condition for all the rehabilitation professionals. A thematic methodological survey was performed to critically examine rehabilitative RCTs published in the last 2 decades in MS populations according to 3 main reporting themes: (1) basic methodological and statistical aspects; (2) reproducibility and responsiveness of measurements; and (3) clinical meaningfulness of the change. Summary of Key Points: Of the initial 526 RCTs retrieved, 370 satisfied the inclusion criteria and were included in the analysis. The survey revealed several sources of weakness affecting all the predefined themes: among these, 25.7% of the studies complemented the P values with the confidence interval of the change; 46.8% reported the effect size of the observed differences; 40.0% conducted power analyses to establish the sample size; 4.3% performed retest procedures to determine the outcomes' reproducibility and responsiveness; and 5.9% appraised the observed differences against thresholds for clinically meaningful change, for example, the minimal important change. Recommendations for Clinical Practice: The RCTs dealing with MS rehabilitation still suffer from incomplete reporting. Adherence to evidence-based checklists and attention to measurement issues and their impact on data interpretation can improve study design and reporting in order to truly advance the field of rehabilitation in people with MS. Video Abstract available for more insights from the authors (see the Video, Supplemental Digital Content 1 available at: http://links.lww.com/JNPT/A424).
INTRODUCTION
Researchers are increasingly called to enhance not only the quality of their research but also the completeness and transparency of the reports they attempt to publish. Optimal reporting permits higher replicability and allows readers to fully understand how a study was conceived, designed, and executed. If data collection and presentation are adequately reported, readers may be able to critically appraise and interpret the study findings. During the last 2 decades, several initiatives have been launched to increase the awareness of authors, active in the biomedical and clinical areas, about the importance of preparing adequate research reports. Among these, the EQUATOR (Enhancing the QUAlity and Transparency Of health Research, https://www.equator-network.org/) Network is the international reference for scientists from all research fields when using evidence-based reporting guidelines. EQUATOR maintains checklists for observational and experimental designs. Pertinent to the present communication, the CONSORT (Consolidated Standards of Reporting Trials) statement, which is part of the EQUATOR, was devised to alleviate problems arising from inadequate reporting of randomized controlled trials (RCTs). 1 At its core, the CONSORT consists of a minimum set of recommendations that help authors prepare their reports. Introduced in 1996, 2 it was developed and expanded in 2001 3 and further revised in its current 2010 version. 1 The CONSORT now stands as the reference checklist for RCTs. Several analyses have evidenced the positive impact of adhering to reporting checklists like, but not limited to, the CONSORT. Overall, they expand the reliability, utility, and impact of health research. 4,5 A recent methodological survey 6 critically assessed the reporting quality of 571 neurophysiological/transcranial magnetic stimulation articles dealing with the assessment of motor dysfunction in neurological populations. Weaknesses in reporting and data presentation included issues relating to methodology, statistics, reproducibility, consistency, accuracy, and responsiveness of the relevant measurements, affecting most of the studies surveyed.
As introduced previously, adherence to reporting checklists has been suggested as a promising avenue of development. Along with other major rehabilitation journals, the Journal of Neurologic Physical Therapy endorsed the EQUATOR initiative in 2014. 7 Since then, authors wishing to have their research reports considered for publication are required to comply with the pertinent checklist for their study design (eg, CONSORT, SPIRIT, STROBE, PRISMA, etc).
On these premises, we completed a methodological survey of the reporting quality of published RCTs dealing with rehabilitation interventions for people with multiple sclerosis (PwMS). Given the breadth of such literature (43 713 RCTs retrieved in the 2001-2020 period; key word: rehabilitation. Source: PubMed/MEDLINE; effective date: December 31, 2020), we decided to operate within our area of expertise focusing on RCTs dealing with rehabilitation for multiple sclerosis (MS), which was taken as a model of a challenging condition for the rehabilitation professionals, although all rehabilitation interventions are inherently difficult to study. Indeed, PwMS exhibit large day-to-day fluctuations in functioning, strength, and fatigue, which may translate into high variability and low reproducibility in the outcomes assessed. 8,9 If not adequately captured, such variability reduces the clinician's ability to evaluate PwMS and prevents optimal tracking of the changes following therapeutic interventions. In this regard, the responsiveness to change of a measurement, that is, the ability of an instrument to detect change over time in the construct to be measured, 10 is closely related to its test-retest reproducibility, making it an important element to consider and report in clinical trials. This is crucial for those populations who display unstable motor performances. 11 Therefore, no efforts should be spared in quantifying the measurement error that surrounds true scores by directly determining measurements' reproducibility or, when this is already established in the literature, by specifically referring to the psychometric features previously reported. Relatedly, whether investigators interpret their findings against established thresholds for clinically important change, such as the minimally important change (MIC), is another aspect that deserves attention to identify the amount of change that a patient can perceive as practically beneficial for his or her functioning. To date, however, reproducibility, responsiveness, and clinical importance appraisal of the changes are not part of standard reporting checklists.
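For readers less familiar with the psychometric quantities mentioned above, the commonly used definitions (standard in the measurement literature, not specific to this survey) are:

```latex
\mathrm{SEM} = \mathrm{SD}\,\sqrt{1-\mathrm{ICC}},
\qquad
\mathrm{MDC}_{95} = 1.96 \times \sqrt{2} \times \mathrm{SEM},
```

where SD is the between-subject standard deviation of the scores, ICC the test-retest intraclass correlation coefficient, and MDC95 the smallest change exceeding measurement error with 95% confidence; a change is then interpreted as clinically meaningful when it also exceeds the MIC.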
Although the present work was conceived having in mind the initiatives and structured checklists hosted by the EQUATOR, by this investigation we primarily aimed at expanding on the aforementioned issues, as they can affect the reporting quality of clinical trials.
In addition, we planned to examine the quality of methodological and statistical reporting of the studies, with a focus on specific statistical items relating to the reporting of changes observed following rehabilitation (ie, P value, confidence interval [CI], effect size [ES], type of ES, study power) while leaving other relevant aspects, such as randomization, concealment, blinding, etc, out of our analyses.
The general objective of this thematic survey was to appraise the reporting quality of RCTs on rehabilitation interventions for PwMS. To this aim, 3 main reporting themes were predefined as follows: (1) methodological and statistical aspects; (2) reproducibility and responsiveness of measurements; and (3) clinical meaningfulness of the change.
METHODS
Two decades of literature were vetted (2001-2020). Subgroup analyses were planned to compare the completeness of reporting based on four 5-year temporal quartiles of publication date. Figure 1 depicts the PRISMA flowchart and the screening process for study selection. Table 1 summarizes the criteria used to check whether the included studies satisfied the requirements of methodological and statistical completeness.
Study Selection
Three electronic databases (PubMed/MEDLINE, Scopus, Web of Science) were searched for all available articles written in English. The search was restricted to the 20 years following the publication date of the seminal works that prompted evidence-based checklists to enhance the quality of scientific reports. 3,12 The initial search was undertaken by 3 authors (L.V., A.M., G.M.). The search included Medical Subject Headings, key words, and matching synonyms relevant to the topic. The search strategies employed in the databases are presented in Supplemental Digital Content 2, available at: http://links.lww.com/JNPT/A417.
Based on titles and abstracts, studies clearly out of scope were manually excluded. Animal studies were not considered. To be eligible for inclusion, articles had to meet the following criteria: enrollment of participants with definite diagnosis of MS; administration of a rehabilitative intervention program (least duration: 2 weeks); and RCT design. When the title or the abstract presented insufficient information to determine eligibility, the full text of each article was scrutinized. Based on the information in the full text, eligible studies were considered for data extraction. In case of disagreements, consensus was reached by discussion. To ensure homogeneity, weekly team meetings were held to cross-check the studies.
Data Extraction
A customized data extraction form was developed. The extracted information referred to whether the authors of the individual studies had satisfied the methodological and statistical requirements (Table 1).
The manual extraction process was coded into 3 main themes: (1) methodological/statistical aspects, and results reporting, for example, power analysis, trial registration, reporting of the ES, CI of the difference/change, P value, exact P value (whether an exact or approximated value was reported), and study limitations; (2) reproducibility and responsiveness of measurements, for example, test-retest, reproducibility cited, standard error of measurement (SEM), and minimal detectable change; and (3) clinical meaningfulness of the observed differences, for example, minimal clinically important difference (MCID) or change (MCIC), aka MIC. Data were extracted dichotomously based on whether a criterion was satisfied or not, except for "ES type" and "retest type," for which more than 2 levels were considered (eg, for "ES type," whether a Cohen d or Hedges' g or eta was calculated). The completeness of reporting clinical information about PwMS, such as degree of MS-related disability and disease course, was also appraised.
Data Analysis
The collected data were exported into a statistical software (SPSS 20, IBM Corp, Armonk, New York) and descriptive analyses were computed. To control for the expected differences in the quality of reporting depending on the publication date, four 5-year temporal intervals were predefined and compared using odds ratios adjusted for multiple comparisons. Odds ratios were also calculated comparing data by decade (2001-2010 vs 2011-2020). For all the comparisons, the significance level was set at P value of less than 0.05.
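To make the comparison concrete, the sketch below (not the authors' SPSS code; all counts are hypothetical) shows how an odds ratio with a Wald 95% confidence interval and a Fisher exact P value can be computed for one reporting item compared across the two decades.

```python
# Illustrative sketch: odds that a reporting item (e.g., "effect size reported")
# is satisfied in RCTs published 2011-2020 versus 2001-2010. Counts are hypothetical.
import math
from scipy.stats import fisher_exact

# rows: decade; columns: item reported yes / no (hypothetical counts)
a, b = 120, 130   # 2011-2020: reported, not reported
c, d = 53, 67     # 2001-2010: reported, not reported

odds_ratio = (a * d) / (b * c)
# Wald 95% CI on the log-odds-ratio scale
se_log_or = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
lo = math.exp(math.log(odds_ratio) - 1.96 * se_log_or)
hi = math.exp(math.log(odds_ratio) + 1.96 * se_log_or)
_, p_value = fisher_exact([[a, b], [c, d]])

print(f"OR = {odds_ratio:.2f}, 95% CI [{lo:.2f}, {hi:.2f}], p = {p_value:.3f}")
```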
RESULTS
The process of study selection is displayed in Figure 1. Of the initial 526 RCTs retrieved after removing duplicates, 370 satisfied the inclusion criteria (see Supplemental Digital Content 3, available at: http://links.lww.com/JNPT/A418). Main reasons for exclusion comprised administration of single-session interventions, short-term programs (<2 weeks), design other than RCT, and PwMS not assigned to rehabilitation. Figure 2 summarizes the main results of the analyses. From a methodological standpoint, a priori sample size calculations were provided in 148 of 370 RCTs (40.0%); a follow-up reassessment after discontinuing the intervention was planned in 128 (34.6%); standardized or unstandardized ES was reported in 173 (46.8%; of these, 138 studies reported the Cohen d, 29 the eta or partial eta, and 6 did not specify the ES type); the CI of the change was reported in 95 (25.7%); test-retest reproducibility of the measurements was directly determined in 16 (4.3%; of these, 5 studies examined same-day retest, 2 one-day retest, 3 one-week retest, 3 more than one-week retest, and 3 did not specify the time frame) and cited in the methods and/or discussion by referring to previous works dealing with the reproducibility of the outcomes employed in 55 (14.9%); measurements' responsiveness (ie, SEM; MDC) was determined in 70 (18.9%); and clinical meaningfulness of the observed change (ie, MCID/MCIC or MIC) was determined in 22 (5.9%) and cited in the methods and/or discussion by referring to previous works dealing with the clinical importance of the observed changes in 103 (27.8%). Trial registration in a registry prior to study commencement was declared in 141 (38.1%). Figure 2 summarizes data for each of the methodological, statistical, and clinical items surveyed. Finally, study limitations were clearly acknowledged in 309 studies (83.5%), where the most common study limitation acknowledged was the small sample size (177 studies of 309; 57.3%).
Regarding the reporting of the degree of disability, 299 of 370 studies (80.8%) presented this information as an EDSS score. However, the median disability was often presented alone or as a minimum-maximum range, without precise indices of dispersion. Regarding the MS course, 286 of the 370 RCTs analyzed disclosed it, although 204 of these (71.3%) did not report data separately by MS course, providing only merged data.
DISCUSSION
The main finding of the present survey is that several sources of weakness emerged in the way authors reported methods and presented data from RCTs dealing with rehabilitation for PwMS. Lack of transparency involved all the 3 predefined themes. Failure in reporting crucial clinical information, such as disability degree and MS course, was also found.
Study Methodology and Statistics
P Value
The survey showed that most of the studies report the exact P value for the observed differences, in line with CONSORT recommendations. However, it should be noted that the P value is a unitless measure of the plausibility of a result, conventionally compared against a predefined threshold (generally 0.05) to yield a binary significance decision. 13 Moreover, these group-level statistics could be accompanied by individual data analyses, especially when authors deal with small samples. Other indices of change, such as CI and ES, have been recommended to complement P values, as they provide a representation of the magnitude of an effect. 14-16 P values alone do not provide information on the magnitude of change, which is ultimately what is needed to determine clinical meaningfulness. In contrast, the CI width indicates the degree of uncertainty, 16 with a narrow interval giving reassurance, whereas a wide interval reveals large uncertainty about the ES being examined.
Confidence Interval
Although the use of CIs has markedly increased in health research, [16][17][18] this did not apply to the MS rehabilitation literature here surveyed, as only 1 study in 4 (25.7%) complemented P values with CIs. To describe the amount of difference observed between the groups or the extent of the change in an outcome following an intervention, reporting CIs of the difference/change rather than that of the mean is advisable. The confidence interval of the change has the advantage of conveying both statistical and clinical information, assisting clinicians in determining the usefulness of the findings and in their decision making. 18,19 It also provides researchers and clinicians with a more informative view of how much of an effect an intervention had, compared with observing only statistical significance. 19 Importantly, CIs are appropriate for parametric and nonparametric analyses and for both individual studies and aggregated data in meta-analyses. Therefore, it is recommended that when inferential statistics are performed, CIs of the change, both within- and between-groups, accompany point estimates and conventional hypothesis tests.
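As a minimal illustration of this recommendation, the following sketch (hypothetical data, not values taken from any of the surveyed trials) reports the between-group difference in change scores together with its 95% confidence interval rather than a P value alone.

```python
# Illustrative sketch: 95% CI of the between-group difference in change scores.
import numpy as np
from scipy import stats

# pre-to-post change per participant (e.g., walking-speed change in m/s); hypothetical
change_treat = np.array([0.10, 0.15, 0.05, 0.20, 0.12, 0.08, 0.18, 0.11])
change_ctrl = np.array([0.02, 0.05, -0.01, 0.07, 0.03, 0.00, 0.06, 0.04])

diff = change_treat.mean() - change_ctrl.mean()
n1, n2 = len(change_treat), len(change_ctrl)
# pooled variance and standard error of the difference
sp2 = ((n1 - 1) * change_treat.var(ddof=1) + (n2 - 1) * change_ctrl.var(ddof=1)) / (n1 + n2 - 2)
se = np.sqrt(sp2 * (1 / n1 + 1 / n2))
t_crit = stats.t.ppf(0.975, df=n1 + n2 - 2)

print(f"difference in change = {diff:.3f}, "
      f"95% CI [{diff - t_crit * se:.3f}, {diff + t_crit * se:.3f}]")
```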
Effect Size
Approximately half of the studies reported the ES for the observed differences. This finding can be directly compared with a recent survey on the reporting quality of neurophysiological/transcranial magnetic stimulation studies that assessed individuals with neurological conditions, including MS: only 4% of the articles reported ES of the differences/changes. 6 This comparison suggests that authors active in the rehabilitation field may be more aware of the importance of not solely relying on P values.
The ES is an estimate of the magnitude of the change in a score following an intervention. 20 Its use is increasingly recommended. 21 Among the various ES statistics available, those most employed in clinical trials are the raw mean difference and the standardized mean difference. Raw mean differences use the scale of the original measurement, which allows judging the magnitude of effect and comparing data across studies that used the same metric. However, measurement methods are often dissimilar across studies. Standardized ES are generally preferred as they give indexes that are expressed in a common metric, that is, standard deviations. 21 Hedges' g and Cohen d are the 2 most common standardized mean difference statistics. They are similar as both are computed considering the mean and the standard deviation. The 2 statistics also have similar performances except for sample sizes less than 20, when Hedges' g performs better than Cohen d. 22 For this reason, Hedges' g is generally preferable when samples are small.
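A minimal sketch of the two statistics, using hypothetical change scores, is given below; the small-sample adjustment is the standard approximate Hedges correction factor 1 - 3/(4*df - 1).

```python
# Illustrative sketch: standardized mean difference between two groups,
# as Cohen d and as Hedges' g with the small-sample correction.
import numpy as np

def cohens_d(x, y):
    n1, n2 = len(x), len(y)
    pooled_sd = np.sqrt(((n1 - 1) * np.var(x, ddof=1) + (n2 - 1) * np.var(y, ddof=1))
                        / (n1 + n2 - 2))
    return (np.mean(x) - np.mean(y)) / pooled_sd

def hedges_g(x, y):
    d = cohens_d(x, y)
    df = len(x) + len(y) - 2
    correction = 1 - 3 / (4 * df - 1)   # approximate Hedges small-sample correction
    return d * correction

x = np.array([12.0, 9.5, 14.0, 8.0, 11.5, 10.0, 13.0, 9.0])   # hypothetical changes
y = np.array([6.0, 4.5, 7.5, 3.0, 5.5, 6.5, 4.0, 5.0])
print(f"Cohen d = {cohens_d(x, y):.2f}, Hedges' g = {hedges_g(x, y):.2f}")
```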
Power Analysis
Forty percent of the studies surveyed performed power analyses to establish the minimum sample size of participants. This percentage rose to 44.8% after accounting for the RCTs in which the authors declared that a pilot trial had been performed (41 of 370; 11.1%). Pilot feasibility studies are needed in ground-breaking work lacking crucial a priori information and, given their exploratory nature, they are generally not required to include a power analysis. In this regard, in their scoping review of clinical studies on physical activity and its benefits for PwMS, Learmonth and Motl 23 call for "more and more feasibility trials to substantially strengthen the foundation of research on exercise in MS prior to engage in large scale RCTs." However, while almost half of the studies predefined the minimum sample size needed to reach adequate statistical power, the fact that the remainder did not remains problematic. As a result, the findings generated tend to carry considerable uncertainty and potentially flawed conclusions, 24,25 so that, almost inevitably, readers have become familiar with the common conclusion that " . . . future studies over larger samples are needed to confirm the findings." Accordingly, the most common study limitation acknowledged was the small sample, with half of the investigations associated with low statistical power. For pilot/feasibility studies worthy of being developed into larger RCTs, overcoming this issue would ensure that the observed differences/changes are less biased, the error less inflated, and the findings more reliable. 16,26 From this perspective, when foundational research is available, authors should attempt to validate the findings of pilot studies at larger scales.
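For illustration, the sketch below shows an a priori sample-size calculation for a two-arm trial analyzed with an independent-samples t test; the assumed effect size (d = 0.5), 80% power, and 10% attrition are hypothetical planning choices, not values drawn from the surveyed studies.

```python
# Illustrative sketch: a priori sample size for a two-arm parallel RCT.
import math
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()
n_per_group = analysis.solve_power(effect_size=0.5, alpha=0.05, power=0.80,
                                    ratio=1.0, alternative='two-sided')
n_recruit = math.ceil(n_per_group / 0.9)   # inflate for an assumed 10% dropout
print(f"minimum n per group = {math.ceil(n_per_group)}, recruit {n_recruit} per group")
```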
Reproducibility and Responsiveness of Measurements
Only 4.3% of the studies performed retest procedures to determine measurements' reproducibility. Subgroup analyses showed a significant decrease in the number of studies performing such procedures in the 2011-2020 decade compared with 2001-2010, possibly due to several common outcome measures being profiled in terms of their reproducibility and responsiveness. Indeed, not all intervention studies with PwMS need to conduct their own reliability analyses as a number of relevant outcomes, mostly relating to gait, mobility, MS impact, and quality of life, have been established in terms of their psychometric properties. 27 Other outcomes that are psychometrically established in other populations (eg, the elderly, other neurological conditions) may not be as stable and reliable in PwMS. 8 In these selected cases, reproducibility analyses with multiple baselines would be advisable. Better reproducibility results in higher precision of measurements, which is considered a critical prerequisite for tracking changes. 28 Single measurements can be collectively distorted by measurement error, which involves accuracy of the measuring instruments, tester's expertise, patient variability over time, testing protocol, and environmental conditions where the test takes place. 29 It is, therefore, critical to outline the measurements' reproducibility, that is, to what extent the findings of a test remain stable at retest, in the absence of an intervention, over a period that may be considered clinically meaningful. Measurements' precision, often estimated by the SEM, is the ability of a test to produce exact values. Failure in outlining reproducibility and measurement precision weakens the validity of the findings, undermining data analysis and interpretation, and practitioners' decision making. Hopkins 28 demonstrated that at least 50 subjects are generally required to be tested over 3 or more trials to provide adequate precision for the estimate of the change in measurement error.
Efforts to determine reproducibility are still uncommon in research conducted on neurological populations. Deriu and colleagues 6 reported that only 5% of the 571 neurophysiological/transcranial magnetic stimulation studies reviewed planned retest procedures to establish measurements' reproducibility. This finding is in line with the present survey, although we found that a relatively larger number (14.9%) of RCTs dealing with MS rehabilitation tend to report measurements' reproducibility at least for the primary outcome while discussing the observed changes in that outcome. By doing so, the authors give reassurance that the reproducibility of the measurements considered is known and possibly under control. However, in several instances, the test-retest study that they refer to had been carried out in populations other than MS, which in some way undermines the very ground of such reassurance. As noted previously, this issue is even more relevant to PwMS, who are considered extremely variable in their neuromuscular performance 30 and display day-to-day fluctuations in their functioning, strength, and fatigue. 8,9 Accordingly, the poorly established reproducibility of measurements taken from other populations of persons with neurologic diseases may potentially weaken the power of the studies, their ability to detect clinically meaningful changes induced by rehabilitation, and their clinical implications.
Although carrying out time-consuming and patient-demanding retest measures may not always be practicable due to intrinsic and extrinsic difficulties related to PwMS status (for example, fatigue, tiredness, spasticity), establishing measurements' reproducibility for those measurements for which psychometric profiles are lacking is important and could significantly enhance the accuracy and precision of the measurements taken and thereby allow optimal quantification of any changes induced by rehabilitation. 11
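As an illustration of how little is required to report these quantities, the sketch below estimates the SEM and the MDC95 from hypothetical test-retest data using the standard deviation of the paired differences; this is one common estimator among several, not the only acceptable approach.

```python
# Illustrative sketch: measurement precision (SEM) and minimal detectable change (MDC95)
# from a test-retest sample. Data are hypothetical (e.g., 6-minute walk test, metres).
import numpy as np

test   = np.array([410, 380, 455, 390, 505, 470, 360, 430])
retest = np.array([420, 370, 450, 405, 495, 480, 355, 440])

diff = retest - test
sd_diff = np.std(diff, ddof=1)
sem = sd_diff / np.sqrt(2)           # standard error of measurement
mdc95 = 1.96 * np.sqrt(2) * sem      # smallest change exceeding measurement error (95% level)

print(f"SEM = {sem:.1f} m, MDC95 = {mdc95:.1f} m")
```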
Clinical Meaningfulness of the Changes
Approximately 5% of the studies checked whether the observed change surpassed indexes of clinical importance, such as the MIC, which is the smallest change in an outcome that a patient would identify as meaningful. 10,[31][32][33] Also for this index, a significant decrease in the number of studies reporting it was observed from 2001-2010 to 2011-2020. Unlike reproducibility/responsiveness, for which a reduction of reporting in clinical trials is somewhat expected due to accumulation of test-retest observational studies, the reduction of MIC reporting in the last decade is in sharp disagreement with the general impulse prompted in the clinical research literature to aim for clinically meaningful rather than statistically significant results. 15,33 The MIC is currently considered the most appropriate estimate to evaluate changes over time within individuals or groups. 33 It can be determined in several ways, 34 mainly through anchor-based and distribution-based methods. 35 Briefly, the former require an independent standard or anchor (eg, the patient rating of change) that establishes whether the patient is better after treatment compared with baseline, according to his or her own experience. The distribution-based methods rely on expressing the magnitude of effect in terms of the underlying distribution, that is, by taking into account measures of variability of the findings, such as between-patient or within-patient variability. 31 Although the combined use of the 2 strategies is likely to enhance the interpretability of the change, the anchor-based approach is generally recommended, as it is more reflective of the patient's view. 33 Accordingly, reporting not only group-level but also individual-level data would allow the identification of responders, that is, those patients who managed to surpass a preset threshold for change, such as the MIC. As a caveat to calculating the MIC or other indexes of clinical importance in clinical trials targeting PwMS, such thresholds should be established in studies with adequate samples of participants, to avoid misleading cutoffs derived from underpowered RCTs. The MS Outcome Measures Task Force (https://www.neuropt.org/practice-resources/neurology-section-outcome-measures-recommendations/multiple-sclerosis) is a useful initiative that has reviewed the psychometric properties and clinical utility of a total of 63 measures for use in clinical practice, entry-level education, and research. We advocate referring to such initiatives when selecting outcome measures for clinical trials in the MS realm.
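The sketch below illustrates, with hypothetical data, one anchor-based estimate (mean change among patients rating themselves slightly improved) and one conventional distribution-based estimate (0.5 times the baseline SD), followed by a simple responder count; actual MIC estimation should rely on adequately powered, MS-specific studies, as argued above.

```python
# Illustrative sketch: two common ways to approximate a minimal important change (MIC).
# Outcome changes, anchor categories, and baseline SD are all hypothetical.
import numpy as np

change = np.array([4, 9, 12, 2, 15, 7, 11, 1, 6, 14, 10, 3])
anchor = np.array(["none", "slight", "slight", "none", "large", "slight",
                   "large", "none", "slight", "large", "slight", "none"])
baseline_sd = 11.0

# Anchor-based: mean change in patients who report being "slightly improved"
mic_anchor = change[anchor == "slight"].mean()
# Distribution-based: a conventional 0.5 * baseline SD criterion
mic_distribution = 0.5 * baseline_sd

print(f"anchor-based MIC = {mic_anchor:.1f}; distribution-based MIC = {mic_distribution:.1f}")

# Responder analysis: proportion of patients whose individual change exceeds the MIC
responders = np.mean(change >= mic_anchor)
print(f"proportion of responders = {responders:.0%}")
```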
The debate on the clinical meaningfulness of the changes, however, seems to have made only marginal inroads into the MS rehabilitation literature, as only 5.9% of the reviewed studies attempted to determine the MIC. Importantly, we also found that almost 30% of the RCTs critically appraised their results against previously established MIC thresholds when discussing the amount of change detected and the practical importance of their findings. However, MIC cutoffs are still not available for many key clinical and functional outcomes, or are available only for populations other than MS, thus justifying continued research in this field.
Study Limitations
The first limitation of the survey is that we narrowed the focus to the MS rehabilitation field; therefore, the present findings cannot be directly generalized to other pathological populations. Future studies should aim to verify the generalizability of our findings in major neurological conditions other than MS. Second, the term "rehabilitation" that we used as the main key word in our search strategy is an umbrella term that encompasses a wide range of interventions but may not include the whole spectrum. This choice resulted in retrieving RCTs that mainly dealt with physical rehabilitation and physical therapy and, to a lesser extent, cognitive, behavioral, and nutritional interventions. Another limitation relates to restricting the survey to articles written in English. In addition, the design chosen for this study (retrospective thematic survey) does not allow us to identify and understand the potential reasons why authors active in MS rehabilitation do not provide enough methodological and statistical detail in their reports. Future studies using a qualitative interview design may be better suited to answer this question.
Although the present work shares some of the items belonging to the structured checklists hosted by the EQUATOR Network, it also departs from its framework as we aimed at expanding on selected issues, such as reproducibility, responsiveness, and clinical meaningfulness, which are currently not covered in the checklists even though they can affect the quality of reporting of clinical trials. In this perspective, the themes here proposed and examined should not be viewed as alternative to tools like those from the EQUATOR Network, which hopefully will soon include items for the assessment of reproducibility, responsiveness, and clinical importance. One final limitation is that the quality of the journals that have published the articles here surveyed was not taken into account. Beyond the use of journal metrics, such as the impact factor, the H-index, or other emerging parameters such as the Scimago Journal Rank score, which are regarded as controversial ways to appraise the quality of a scientific journal, we admit that some difference in the reporting quality may exist between major journals with strict methodological requirements (including mandatory adherence to the EQUATOR checklists) and relatively minor journals with no predefined policies of reporting.
CONCLUSIONS
Despite the increasing awareness of the need for a complete and transparent reporting of clinical studies and the number of evidence-based initiatives to enhance its quality, RCTs dealing with MS rehabilitation still suffer from important limitations associated with methodological and statistical reporting, reproducibility of measurements, and clinical responsiveness. To counteract such weaknesses and potential threats to research validity and usability, we propose that not only major journals such as the Journal of Neurologic Physical Therapy but, overall, all the journals active at the intersection of neurorehabilitation, clinical neurophysiology, neurology, and neuroscience fully endorse valuable initiatives like those hosted by the EQUATOR Network by asking submitting authors to follow, complete, and upload the appropriate reporting guideline for the design of their study. Another initiative that shares many of the EQUATOR goals is the Physiotherapy Evidence Database (PEDro), which aims at facilitating evidence-based physiotherapy by promoting the best available evidence in physiotherapy clinical practice (https://pedro.org.au/). Trials indexed in PEDro are also rated for quality using the PEDro scale.
In line with EQUATOR and PEDro recommendations, the quality of reporting could be further enhanced by policies that mandate protocol registration in public registries, as well as data deposition and sharing. Along with increased compliance with structured guidelines for transparent reporting, we suggest that researchers active in the MS rehabilitation field spare no efforts in ensuring measurements that are not only accurate but also reproducible (via retest procedures, or referring to already established thresholds) and responsive to change (by determining indexes of measurements' variability, or recalling available cutoffs), which are key prerequisites to outline the error zone that surrounds any measurements and that needs to be exceeded to interpret change as reasonably induced by the administered intervention. On these premises, the next step would be to take patient's perspective into account by determining the least amount of change (ie, MIC) in a health or functional outcome that the patient would perceive as positively impacting his or her status. Hopefully, adding these actions would advance the field of rehabilitation in PwMS through enhancement of our ability to determine clinical meaningfulness of the changes that are observed following rehabilitation. | 2023-03-01T06:18:04.409Z | 2023-02-25T00:00:00.000 | {
"year": 2023,
"sha1": "7f79bb45c54d2f6ed925c93a0f4012d1f24741ff",
"oa_license": "CCBY",
"oa_url": "https://journals.lww.com/jnpt/Fulltext/9900/A_Thematic_Survey_on_the_Reporting_Quality_of.29.aspx",
"oa_status": "HYBRID",
"pdf_src": "WoltersKluwer",
"pdf_hash": "a328ef1a5e89c76d3b337f27ca1364c2f2b987e4",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
151419292 | pes2o/s2orc | v3-fos-license | Measuring poverty and wellbeing in developing countries
This is an open access title available under the terms of a CC BY-NC-SA 3.0 IGO licence. It is free to read at Oxford Scholarship Online and offered as a free PDF download from OUP and selected open access locations. Detailed analyses of poverty and wellbeing in developing countries, based on household surveys, have been ongoing for more than three decades. The large majority of developing countries now regularly conduct a variety of household surveys, and the information base in developing countries with respect to poverty and wellbeing has improved dramatically. Nevertheless, appropriate measurement of poverty remains complex and controversial. This is particularly true in developing countries where (i) the stakes with respect to poverty reduction are high; (ii) the determinants of living standards are often volatile; and (iii) related information bases, while much improved, are often characterized by significant non-sample error. It also remains, to a surprisingly high degree, an activity undertaken by technical assistance personnel and consultants based in developed countries. This book seeks to enhance the transparency, replicability, and comparability of existing practice. In so doing, it also aims to significantly lower the barriers to entry to the conduct of rigorous poverty measurement and increase the participation of analysts from developing countries in their own poverty assessments. The book focuses on two domains: the measurement of absolute consumption poverty and a first order dominance approach to multidimensional welfare analysis. In each domain, it provides a series of flexible computer codes designed to facilitate analysis by allowing the analyst to start from a flexible and known base. The book volume covers the theoretical grounding for the code streams provided, a chapter on 'estimation in practice', a series of 11 case studies where the code streams are operationalized, as well as a synthesis, an extension to inequality, and a look forward. Contributors to this volume - Olu Ajakaiye, Nigerian Institute of Social and Economic Research Olufunke A. Alaba, University of Cape Town Channing Arndt, UNU-WIDER Samuel Kobina Annim, University of Cape Coast Ulrik Richardt Beck, University of Copenhagen M. Azhar Hussain, Roskilde University Afeikhena T. Jerome, FAO E. Samuel Jones, University of Copenhagen Raymond Elikplim Kofinti, University of Cape Coast Vincent Leyaro, University of Dar es Salaam Kristi Mahrt, UNU-WIDER Gibson Masumbu, Zambia Institute for Policy Analysis and Research Richard Mussa, University of Malawi Malokele Nanivazo, University of Kansas Fiona Nattembo, International Food Policy Research Institute Hina Nazli, International Food Policy Research Institute Olanrewaju Olaniyan, University of Ibadan Lars Peter Osterdal Karl Pauw, International Food Policy Research Institute Faly Rakotomanana, Household Survey Unit at the National Statistical Institute Tiaray Razafimanantena, Centre de Recherches, d'Etudes et d'Appui l'Analyse Economique Madagascar Vincenzo Salvucci, UNU-WIDER Haruna Sekabira, University Goettingen Nikolaj Siersbaek Kenneth Simler, World Bank David Stifel, Lafayette College Finn Tarp, UNU-WIDER Bjorn Van Campenhout, International Food Policy Research Institute Edward Whitney, University of California, Davis Tassew Woldehanna, Addis Ababa University
Poverty analysis in developing countries is still largely an activity undertaken by technical assistance personnel and consultants based in developed countries
The frequency of income and consumption surveys is insufficient in many countries, and surveys are often too complex
A toolkit developed for rigorous poverty measurement proves valuable
The importance of reducing poverty is universally acknowledged, and represents an important part of the Sustainable Development Goals. However, the appropriate measurement of poverty and wellbeing remains complex and controversial. A UNU-WIDER study addresses means to significantly lower the barriers to entry to the conduct of rigorous poverty measurement and increase the participation of analysts from developing countries in their own poverty assessments. If properly organized, many pointed debates in the literature can be boiled down to remarkably few lines of software code.
Lowering the entry barriers to undertaking poverty assessments
There is a high-level of dependence in many developing countries on external assistance for the conduct of poverty analysis, particularly the analysis of consumption poverty. Even in the cases where local analysts are strongly engaged, the occasional nature of detailed household consumption surveys combined with the complexity of the analysis results in difficulties.
A regular household consumption survey, efforts to come to grips with price trends and differentials, concerted monitoring of non-monetary indicators such as those in focus in demographic and health surveys, and a series of more pointed surveys including panel elements together provide ample raw material for the emergence of a healthy and active community of quantitative analysts.
While increasing the frequency of consumption surveys increases costs, the associated call for avoiding excessive complexity reduces costs. In addition, the capacity-building gains associated with greater frequency allow better cost efficiency as well as collection of higher-quality data.
Consumption poverty and multidimensional poverty indicators
There is no single set procedure for estimating absolute poverty lines.
The cost of basic needs (CBN) approach provides a series of valuable guideposts, but in practice, numerous choices must be made.
Differing country circumstances will lead to different choices with respect to the overall approach. In addition, past choices often strongly influence current choices due to the desire to make relevant comparisons with earlier analyses.
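As a stylized illustration of these choices, the sketch below walks through one cost-of-basic-needs style calculation; the food bundle, prices, caloric requirement, and Engel ratio are all hypothetical placeholders for the context-specific decisions just described.

```python
# Illustrative sketch of a cost-of-basic-needs (CBN) style poverty line.
# All quantities, prices, and shares are hypothetical.
calorie_requirement = 2100.0   # kcal per person per day

# reference food bundle: item -> (kcal supplied per day, cost per day, local currency)
bundle = {"maize": (1150, 3.0), "beans": (350, 2.2), "oil": (300, 1.5),
          "vegetables": (150, 1.8), "fish": (100, 2.5)}

kcal_total = sum(k for k, _ in bundle.values())
cost_total = sum(c for _, c in bundle.values())

# scale the bundle cost so the bundle exactly meets the caloric requirement
food_poverty_line = cost_total * calorie_requirement / kcal_total

# allow for non-food needs via an Engel (food-share) ratio observed near the poverty line
food_share = 0.65
total_poverty_line = food_poverty_line / food_share

print(f"food poverty line = {food_poverty_line:.2f} per person per day")
print(f"total poverty line = {total_poverty_line:.2f} per person per day")
```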
Multidimensional, non-monetary indicators are now broadly recognized as important. Non-monetary measures also frequently have the advantage of directly relating to policy agendas and are readily available from censuses and household surveys. While consensus has emerged on the need to consider the multidimensionality of poverty, methods for incorporating multiple indicators into welfare analysis remain contentious with debate centred on the implications of imposing strong assumptions in terms of weighting schemes, the actual extent of new information provided by generating combined indicators, and the nature of welfare functions.
A unique toolkit for rigorous measurement
A new analytical code stream referred to as Poverty Line Estimation Analytical Software (PLEASe) allows for consumption poverty analysis in developing countries. The approach follows the cost of basic needs methodology, identifies poor households, and allows flexible consumption bundles over time and space in estimating poverty lines with results representing a consistent level of utility.
Estimating First-Order Dominance (EFOD) is a robust tool used for estimating multidimensional poverty and population wellbeing. The approach starts from choosing a set of binary welfare indicators. The data is then operationalized by organizing it into populations and then into groups whose welfare levels are being compared. The software generates the distributions for each sub-population, and it then produces estimated probabilities of domination.
These tools, consisting of Stata and GAMS code, allow analysts to reproduce the poverty rates and poverty comparisons obtained in the country cases and further test the implications of alternative assumptions and approaches. With these practical tools, poverty analysis in developing countries, conducted by local analysts and institutions can take firmer root.
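To give a flavor of the dominance check, the sketch below (written in Python, not the Stata/GAMS code distributed with the toolkit) tests first-order dominance between two hypothetical populations described by two binary welfare indicators, using a linear-programming feasibility formulation in which one distribution dominates another if the latter can be obtained from the former by shifting probability mass to weakly worse outcomes.

```python
# Illustrative sketch of a first-order dominance (FOD) check over binary indicators.
# Indicator coding: 1 = not deprived (better). Populations and probabilities are hypothetical.
from itertools import product
import numpy as np
from scipy.optimize import linprog

def fod(f, g, dims):
    """True if distribution f first-order dominates distribution g."""
    outcomes = list(product((0, 1), repeat=dims))
    idx = {z: i for i, z in enumerate(outcomes)}
    # admissible transfers: from an outcome x to a distinct, weakly worse outcome y
    pairs = [(x, y) for x in outcomes for y in outcomes
             if x != y and all(yi <= xi for xi, yi in zip(x, y))]
    A_eq = np.zeros((len(outcomes), len(pairs)))
    for j, (x, y) in enumerate(pairs):
        A_eq[idx[x], j] -= 1.0   # mass leaving x
        A_eq[idx[y], j] += 1.0   # mass arriving at y
    b_eq = np.array([g[z] - f[z] for z in outcomes])
    res = linprog(c=np.zeros(len(pairs)), A_eq=A_eq, b_eq=b_eq,
                  bounds=[(0, None)] * len(pairs), method="highs")
    return res.success          # feasible transfer scheme exists => f dominates g

# hypothetical two-indicator example (e.g., water, sanitation); probabilities sum to 1
f = {(0, 0): 0.10, (0, 1): 0.20, (1, 0): 0.20, (1, 1): 0.50}   # "better" population
g = {(0, 0): 0.25, (0, 1): 0.25, (1, 0): 0.20, (1, 1): 0.30}   # "worse" population
print("f FOD g:", fod(f, g, dims=2), "| g FOD f:", fod(g, f, dims=2))
```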
Case studies leading the way
Using these new tools, case studies covering Ethiopia, Madagascar, Malawi, Mozambique, Pakistan, Uganda, the Democratic Republic of the Congo, Ghana, Nigeria, Tanzania, and Zambia provide highly informative results.
In Ethiopia, declines in poverty as presented in official statistics are largely confirmed, and in Malawi the poverty rates fell by more than indicated by the official estimates. The cases also illustrate that EFOD analysis represents a powerful addition to the analytical toolkit. It shares the desirable properties that data challenges are relatively mild and implementation is straightforward. Overall, the case studies highlight the formidable advantages of beginning from a standardized and known code stream that has been well documented and modularized.
Increasing the frequency of consumption surveys would benefit
Figure 1 Poverty in Zambia. Note: The 2006 and 2010 poverty rates are not strictly comparable with earlier years; these rates were calculated using year-specific Engel ratios to derive food shares, while previous years used a fixed ratio. To be classified as small-scale farms, households must own fewer than five exotic dairy cows and no beef cattle, exotic pigs, broilers, or layers; see CSO (1997) for specific details. Source: CSO (2005, 2012). | 2018-10-08T05:58:18.349Z | 2016-12-22T00:00:00.000 | {
"year": 2016,
"sha1": "c8c1bae5f49ea5f4291b57eb46bca5fb8136a9ee",
"oa_license": "CCBYNCSA",
"oa_url": "http://library.oapen.org/bitstream/20.500.12657/31902/1/622851.pdf",
"oa_status": "GREEN",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "06ac40c1f58f1570e29785ec557f6ba35020dbe3",
"s2fieldsofstudy": [
"Economics"
],
"extfieldsofstudy": [
"Sociology"
]
} |
28536373 | pes2o/s2orc | v3-fos-license | Enhanced ferroelectric polarization by induced Dy spin-order in multiferroic DyMnO3
Neutron powder diffraction and single crystal x-ray resonant magnetic scattering measurements suggest that Dy plays an active role in enhancing the ferroelectric polarization in multiferroic DyMnO3 above TNDy = 6.5 K. We observe the evolution of an incommensurate ordering of Dy moments with the same periodicity as the Mn spiral ordering. It closely tracks the evolution of the ferroelectric polarization which reaches a maximum value of 0.2 muC/m^2. Below TNDy, where Dy spins order commensurately, the polarization decreases to values similar for those of TbMnO3.
The strong coupling between ferroelectricity and magnetism in modern multiferroics has offered a new paradigm of magnetoelectric materials. It has stimulated a search for new multiferroics and highlighted the need for a deeper understanding of the physics behind these exciting materials.
Although the requirements of magnetism and ferroelectricity are chemically incompatible [1], in these new multiferroics spin frustration [2] leads to complex magnetic arrangements that can break inversion symmetry [3,4]. The strong coupling provided by frustration in the perovskite manganites RMnO 3 leads to a number of magneto-electric phenomena. For example, compounds with R=Tb and Dy exhibit flops of the direction of the spontaneous polarization (P s ) with applied field (H) while ferroelectricity is observed only under a magnetic field in R=Gd [3,5,6,7]. Of these perovskites, R=Dy exhibits the largest value of P s =0.2 µC/m 2 (∼ three times larger than that for R =Tb) and a giant magnetocapacitance effect [6].
In these materials, a phenomenological treatment of the coupling of a uniform electric polarization P to an inhomogeneous magnetization M leads to a term linear in the gradient ∇M, the so-called Lifshitz invariant, that is allowed only in systems with broken inversion symmetry [8]. This model is consistent with neutron diffraction experiments on TbMnO 3 that showed that a spiral arrangement of Mn-spins within the bc-plane develops at T l = 28 K, coinciding with the onset of P s [9]. Although in this picture the contribution to P s from the magnetic ordering of R-ions is ignored, their role is underscored as a source of magnetic anisotropy that is required to predict the correct direction of P s under an applied magnetic field [8].
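For orientation, the generic expressions used in this literature can be sketched as follows; these are the standard phenomenological (Lifshitz-invariant) and spin-current forms, not a derivation specific to this work:

\mathbf{P} \propto \lambda\,\bigl[(\mathbf{M}\cdot\nabla)\mathbf{M} - \mathbf{M}(\nabla\cdot\mathbf{M})\bigr], \qquad \mathbf{P}_{ij} \propto \hat{\mathbf{e}}_{ij} \times (\mathbf{S}_i \times \mathbf{S}_j).

For a bc-plane cycloid propagating along b, S_i x S_j points along a and e_ij along b, so the induced polarization lies along the c axis, consistent with the direction of P_s reported for TbMnO3 and DyMnO3.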
In this Letter we present measurements on DyMnO 3 suggesting that Dy plays an active role in enhancing the ferroelectric polarization in multiferroic DyMnO 3 above T Dy N = 6.5 K. We have combined neutron powder diffraction and single crystal x-ray resonant magnetic scattering to investigate the evolution of magnetism in DyMnO 3 with temperature. We find that the Dy moments order in an incommensurate (ICM) structure with the same periodicity as the Mn moments below a temperature T Dy l = 15 K. The transition from the commensurate (CM) Dy ordering below T Dy N to the ICM state is associated with an enhancement of P s just above T Dy N to a value approximately twice that found for the maximum P s in TbMnO 3 . The CM-ICM transition shows a large hysteresis in which P s and the intensity of magnetic reflections arising from the ICM Dy magnetic ordering exhibit a similar behavior. Our work suggests a magneto-strictive coupling between Mn and Dy spins giving rise to an enhanced P s above T Dy N compared to other perovskite multiferroics. This mechanism would add to the ferroelectric polarization that arises from the Mn spin-spiral.
In the multiferroic perovskite manganites it is found that for R=Tb below T N ∼41 K, Mn-spins order first along the b−axis in an ICM sinusoidal arrangement with propagation vector τ Mn = (0 0.28...0.29 0) [2,10]. A similar behavior with τ Mn = (0 0.36...0.385 0) is expected from x-ray measurements for DyMnO 3 [2]. Second harmonic lattice reflections (q = 2τ ) associated with the magnetic ordering have been observed to arise from a coupling of the ICM magnetic ordering to the lattice via a quadratic magneto-elastic coupling [11,12]. Below T l =28 K for TbMnO 3 , an additional component of the Mn magnetic moment along the c-axis, in phase quadrature with the component along b, gives rise to a spiral magnetic ordering and breaks inversion symmetry, leading to the observation of P s along the c−axis. In this regime Mn spins induce an ICM ordering of Tb-spins with the same propagation vector [9]. A similar transition occurs for R=Dy at T l =18K [6] as illustrated in Fig. 1(a). Below T R N <10 K, Tb and Dy magnetic moments order separately with propagation vectors τ T b = (0 0.42 0) and τ Dy =(0 1 2 0) [13,14]. To investigate the evolution of the magnetic ordering of DyMnO 3 with neutron powder diffraction (NPD) we used a 0.65g polycrystalline 162 DyMnO 3 sample prepared from a mixture of isotope enriched 162 Dy 2 O 3 oxide (94.4% enrichment) and Mn 2 O 3 , using standard solid state synthesis methods. Isotopic 162 Dy was chosen for its smaller neutron absorption cross section (σ a ) compared to that of natural Dy [15]. NPD data were measured from this sample between 2-300 K on the GEM diffractometer at the ISIS-facility, Rutherford-Appleton Laboratory. The data were analyzed with the FullProf refinement package [16]. Synchrotron x-ray diffraction measurements on a DyMnO 3 single crystal were conducted at the 7 T multipole wiggler beamline MAGS, operated by the Hahn-Meitner-Institut at the synchrotron source BESSY in Berlin. Details on the crystal growth, beamline and experimental procedure have been reported previously [14,17]. Low field (500 Oe) magnetization of the polycrystalline and single crystal samples was measured in a SQUID magnetometer between 4 -100 K. The data showed T Dy N = 9 and 6 K for the polycrystalline and single crystal sample, respectively. The value of T Dy N of our single crystal is in good agreement with published data [2] and may suggest that the higher value obtained for the powder sample may arise from a small non-stoichiometry. This assumption is corroborated by a difference in the lattice constants between our powder sample and the single crystal at 300 K [14]. Turning first to the NPD measurements we found that 162 DyMnO 3 crystallized with the orthorhombically distorted perovskite structure (space group P bnm). Cooling the sample below T N ∼ 40 K we find a series of magnetic satellites arising from Mn spin-ordering (see Fig. 2(a)) similar to the observations of Quezel et al. [10] on TbMnO 3 . Using their notation, these (hkl) ± τ reflections are characterized as A-type where h+k = even and l = odd and τ = (0 τ Mn y 0) is an incommensurate propagation vector along the b * -axis. An increase of satellite intensities (and τ Mn y ) with decreasing temperature is accompanied by the appearance of G-type satellites (h+k = odd and l = odd) below ∼15 K, where a non zero value of P s is observed as shown in Fig. 1(a) [6,7]. 
Below T Dy N the NPD data show an intensity reduction of the A-type satellites, which coincides with the appearance of a CM Dy magnetic order with propagation vector τ Dy = (0 1/2 0) (see Figs. 1b, 2a) [14]. This transition coincides with a sharp decrease of P s . The G-type reflections are too weak for a quantitative comparison. Rietveld analysis of the NPD data between T Dy N < T < 13 K on the assumption of pure Mn spin-ordering (see below) leads to unphysically large moments for Mn 3+ (> 4µ B /Mn) (Fig. 1(b)). The rapid increase of the magnetic intensity below 13 K (shaded area in Fig. 1(b)) and its decrease below T Dy N suggests an additional magnetic contribution to the intensity of these reflections from Dy. As can be seen in Fig. 2(a), the magnetic reflections are relatively broad and below (Fig. 3, caption) we shall argue that this probably arises from a distribution of the propagation vectors.
In order to probe directly the magnetic contribution of Dy to the intensities of these ICM magnetic reflections, we have used single crystal resonant x-ray scattering at an x-ray energy of 7.794 keV, slightly above the Dy-L 3 absorption edge. With this resonant condition, a survey of reciprocal space was carried out at different temperatures. Fig. 2 respectively. This transition has been discussed recently elsewhere [14].
In order to identify the nature of the Bragg reflections associated with τ Mn above T Dy N we employed linear polarization analysis [14]. Figure 3 shows the dependence of the intensities of both the (0 2.385 4) and the related (0 2.77 4) reflections on the polarization analyzer configuration (σ → σ ′ vs. σ → π ′ ) at 8.5 K. The characteristic behavior shows that the former is of magnetic and the latter of structural origin. Tuning the x-ray energy to a value 20 eV below the Dy-L 3 absorption edge leads to a reduction of the intensity of the magnetic (0 2.385 4) reflection measured in σ → π ′ configuration by a factor of 40. This resonance enhancement shows that the (0 2.385 4) reflection is due to ordered Dy magnetic moments and that any contribution of the ordered Mn moments to this reflection is negligible. Figure 1(c) shows the temperature dependence of the integrated intensities of two particularly strong Bragg reflections, (0 1.385 2) and (0 1.5 2), related to τ Mn and τ Dy , respectively. The half-integer reflection, associated with the CM magnetic ordering of the Dy moments, vanishes above T Dy N = 6.5 K. Simultaneously, the intensity of the ICM reflection increases steeply on heating. At the same temperature P s c increases to approximately twice its value compared to that at temperatures just below T Dy N ( Fig. 1(a)). On further heating, the (0 1.385 2) Bragg intensity passes a maximum around 8 K, decreases monotonically with a concave curvature -typical for an induced magnetic moment, and finally vanishes above T Dy l = 15 K. In addition, it exhibits a significant hysteresis (see inset in Fig. 1(c)). On cooling from 16 K, the (0 1.385 2) Bragg intensity measured at 7 K is about twice as large as the intensity obtained on heating from 4.5 to 7 K. Monitoring the count rate at fixed k = 1.385 while sweeping the temperature at various rates (0.05...1 K/min.) we verified that whenever one does not cross T Dy N , the cooling and heating curves above T Dy N are reversable and the hysteresis is independent of the heating/cooling rate. This behavior of Dy induced ordering strongly resembles the hysteresis of the electric polarization shown in Fig. 1(a), where a factor of 2 difference is shown in the cooling and heating curves at 7K, the same difference that we find in the hysteresis of the intensity of the (0 1.385 2) reflection. Finally, the insert in Fig. 2(b) shows the temperature dependence of q Mn measured in two successive heating and cooling cycles between 7.5 and 40 K. Clearly, a significant hysteresis of about 5 K in width is observed in the temperature dependence. Neither on cooling nor on heating any sharp lock-in transition is observed. The edge of the heating curve, however, is consistent with the reported value for T l = 18 K.
Having directly established the behavior of Dy-spins above and below T Dy N we turn our attention to models of magnetic ordering for Dy that can be obtained from the analysis of the NPD data. In the CM regime below T Dy N the diffraction data indicate a cell doubling along b arising from antiferromagnetic Dy spin-ordering. Rietveld analysis shows that the Dy moments, with absolute value m Dy = 6.2(6)µ B /Dy, lie in the ab plane, are canted 30(±10)° from the b axis, and are stacked antiferromagnetically along the c-axis.
In the analysis of the induced magnetic ordering of Dy-spins at 10 K, above T Dy N , we have taken solutions of the magnetic structure that are allowed by the Γ 2 ⊗ Γ 3 representation of the P bnm space group indicated by the spiral arrangements of Mn-spins in TbMnO 3 [9,18]. In the case where two irreproducible representations are coupled [9], the R moments are allowed to have components along the three principal crystallographic directions denoted as m Dy =(a x , f y , f z ) [18]. Our Rietveld analysis shows that the magnetic moment is m Dy =(0.4(6),2.5(2),0.0(3)) µ B /Dy. The maximum value of the moment here is approximately twice as much as that found for the induced moment of Tb in TbMnO 3 (m T b =(1.2,0,0)µ B /Tb [9]). In these refinements we assumed the same magnetic symmetry for Mn as that reported for TbMnO 3 [9].
The CM structure of Dy-spins below T Dy N preserves the mirror plane perpendicular to the c axis and thus obviously does not allow for a contribution to P s along the c−axis. Polarization measurements show that this spin ordering also does not contribute to P s along the other principal crystallographic directions (P a∼P b∼ 0) [7]. For the region just above T Dy N our data is consistent with a sinusoidal modulation of the Dy-spins along the b−axis. The absence of a c−axis component would exclude the presence of a Dy bc-spiral as found for the case of Mn spin-ordering in TbMnO 3 . Thus, on the basis of symmetry the induced Dy-spin order alone is unlikely to result in a ferroelectric polarization along the c−axis above T Dy N . Compared to R=Tb or (Eu,Y), the behavior of DyMnO 3 is unusual, as P s just above T Dy N is three times larger for comparable conditions and shows a large hysteresis. Indeed Goto et al. [6] find almost a 50% difference in P s at 7 K obtained after cooling to 2 and 7 K, a behavior that is similar to the hysteresis of the magnetic intensity from the induced Dy spin-order ( Fig. 1(c)). Below T Dy N , P s decreases to a value of 0.06µC/m 2 which is similar to values found for non-magnetic R-ions and R=Tb at 2K. This would indicate that in this regime ferroelectricity arises from a similar coupling in all of these materials.
Our measurements indicate that there is an additional (or alternate) mechanism by which Dy-spins can provide a significant enhancement of P s above the value expected from a Mn spiral alone. Although our measurements can not provide a detailed microscopic model of the lattice distortions that arise from the magnetic ordering of Mn and Dy, we can exclude a mechanism based on a bc Dy spin-spiral. The temperature hysteresis of P s and the induced Dy magnetic reflection occur over the same region, suggesting a close coupling between these two quantities. The lattice distortion that arises from the Dy induced ordering is significantly larger in magnitude and different in nature than those for R=Tb. The hysteretic behavior of P s and the intensity of the ICM Dy magnetic reflections suggests that the enhancement of P s above T Dy N arises from a magneto-strictive coupling between Mn and Dy spins. Indeed this may be a peculiarity of the single ion anisotropy of Dy since it is a Kramers ions and Tb in contrast is not. Clearly the magneto-strictive lattice distortions must be conform to Γ 2 ⊗ Γ 3 symmetry.
The present results are important for a full description of the magneto-electric coupling in perovskite manganites. While R-spins thus far have been perceived to be spectators to ferroelectricity, here we show that in DyMnO 3 the induced magnetic ordering of Dy coincides with a region of enhanced polarization compared to other RMnO 3 manganites, while below T Dy N when Dy spins order commensurately P s is sharply reduced. We argue that these observation suggest a peculiar coupling of the lattice that enhances P s when Dy-spins are ordered incommensurately above T Dy N . | 2017-09-06T12:52:18.669Z | 2006-09-01T00:00:00.000 | {
"year": 2007,
"sha1": "d74871ae3d7e5baaadeeb28882fdebca59c29bd1",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/cond-mat/0609024",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "280d53e7677e030e0996bc93ee10567f2916f639",
"s2fieldsofstudy": [
"Materials Science",
"Physics"
],
"extfieldsofstudy": [
"Materials Science",
"Medicine",
"Physics"
]
} |
32913256 | pes2o/s2orc | v3-fos-license | Reprocessing of Medical Products in Electrophysiology
Electrophysiological procedures use high-cost multipolar electrode catheters which can be reprocessed. Their reuse has been performed by electrophysiology services in Europe, the United States, Latin America, and also in our own setting. In fact, prior studies have shown an actual cost decrease [1,2] and have also attested to the safety and efficacy of such practice, [3][4][5][6][7][8][9][10][11][12] observing rates of complication and therapeutic results similar to those obtained with first-use electrophysiology devices. The growing concern with sustainability and waste reduction, together with the efficacy and safety already demonstrated, increasingly stimulates the practice of reprocessing single-use medical devices throughout the world.
The American Society of Cardiac Arrhythmias issued a favorable opinion on the reprocessing of electrophysiological devices to the FDA (Food and Drug Administration), 13 as did the GAO (Government Accountability Office), a federal oversight entity of the United States. 14 In Brazil, the reprocessing of such products was regulated by the National Health Surveillance Agency (ANVISA) through Resolution of the Collegiate Board (RDC) 156 15 and Special Resolution (RE) 2605, 16 both published in 2006. RDC 156 establishes that authorization to reprocess single-use medical devices must be defined at the time of product registration in Brazil. 15 Although most manufacturers label their products as single-use, ANVISA demands the submission of documents that substantiate the reasons for not reprocessing. Once the manufacturer's arguments are proved and accepted, the words "Reprocessing Forbidden" must be included in the label of that product. Also, RE 2605 16 lists 66 materials whose reprocessing is invariably forbidden. We stress that this list does not contain any product used in routine electrophysiological procedures.
In 2013, ANVISA issued Technical Note No. 001/2013, 17 reiterating the validity of the reprocessing rules published in 2006 in reply to users' recurrent doubts and requests for clarification, as per the following excerpt: "demands and questions regarding the correct interpretation to be given to the contents of the labels of product for a single use, available in the market, has become increasingly frequent". In spite of this notice, doubts still persist regarding the understanding of the rules in force. For this reason, we made a detailed analysis of the labels of materials routinely used in electrophysiology procedures in our setting, with the purpose of assessing possible inconsistencies that could lead to misunderstandings and interpretation errors.
For this analysis, we examined the contents of the labels of materials used in electrophysiological procedures, written in Portuguese, available in ANVISA's database http://www.anvisa.gov.br/scriptsweb/correlato/correlato_rotulagem.htm. Once the website had been accessed, we typed the name of the manufacturers in the field "Supplier's Name", obtaining a complete list of medical products for each manufacturer. Afterwards, we chose only the labels of the products used in electrophysiological procedures. The labels were then printed out, numbered, and grouped according to their similarity with regard to physical characteristics and technical applicability, classified as: 1) fixed-curve diagnostic catheter; 2) deflectable-curve diagnostic catheter; 3) circular catheter or high-density mapping catheter; 4) non-irrigated ablation catheter; 5) irrigated ablation catheter; 6) introducers and sheaths; 7) transseptal needle; and 8) intracardiac echocardiography catheter. Labels and/or records covering more than one kind of product were counted more than once, that is, once for each of the products to which they corresponded, according to their applicability and characteristics.
This classification was based on the RDC 156, which recommends that the labels should contain only the words: "Reprocessing Forbidden" or "The manufacturer recommends single use" 15 . Thus, the products whose labels did not contain the expression "Reprocessing Forbidden" were defined as G1, and they may or may not contain the words "The manufacturer recommends single use". In G2, the products whose labels carried the expression "Reprocessing Forbidden" were included, in spite of the presence of any other word or information. In G3, labels with expressions "Single Use", "Product for Single Use", "Do Not Re-Sterilize", "Discard after using" and "Destroy after using" were included, even if accompanied by expression "The manufacturer recommends single use", given that, pursuant to Technical Note No. 001/2013, 17 these sentences are considered to not be in conformity with the rules of the regulatory agency. The products that had 2 or more labels, with recommendations differing from one another and/or irregular, were classified as G4. And lastly, the products not in ANVISA's database were classified as G5.
The products included in G1 and G2 were considered to have their labels in conformity with ANVISA's rules, while those classified as G3, G4 and G5 were considered to not be in conformity.
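The grouping logic can be summarized by the following sketch (a hypothetical helper, not the instrument actually used by the authors; the strings are English renderings of the Portuguese label expressions, and the precedence between the rules is one possible reading of the definitions above).

```python
# Illustrative sketch: rule-based classification of a product's labels into groups G1-G5.
FORBIDDEN = "Reprocessing Forbidden"
RECOMMENDED = "The manufacturer recommends single use"
IRREGULAR = {"Single Use", "Product for Single Use", "Do Not Re-Sterilize",
             "Discard after using", "Destroy after using"}

def classify_product(labels, in_anvisa_database=True):
    """labels: list of sets, one set of standard expressions per registered label."""
    if not in_anvisa_database:
        return "G5"                       # product not found in ANVISA's database
    verdicts = set()
    for expressions in labels:
        if FORBIDDEN in expressions:
            verdicts.add("G2")            # reprocessing forbidden, regardless of other wording
        elif expressions & IRREGULAR:
            verdicts.add("G3")            # wording not foreseen by RDC 156
        else:
            verdicts.add("G1")            # reprocessing not forbidden
    if len(verdicts) > 1:
        return "G4"                       # two or more labels with differing/irregular instructions
    return verdicts.pop()

def conforms(group):
    return group in {"G1", "G2"}          # in conformity with RDC 156

# Example: a product with two labels carrying contradictory instructions -> G4
print(classify_product([{FORBIDDEN}, {RECOMMENDED}]))
```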
For each group of products with the same applicability and characteristic, and whose labels were in conformity with ANVISA (G1 and G2), it was also assessed whether they were uniform with regard to reprocessing prohibition or not.
Lastly, physical labels were compared by sampling with the labels in ANVISA's database, to assess whether the information contained in both sources matched.
The analysis of sub-groups of products with similar characteristics and the same applicability, included in G1 and G2 (labels in conformity with RDC 156), showed that only the intracardiac echocardiography catheter was uniform with regard to the reprocessing recommendations. In this specific case, all six existing types had in their labels the words "The manufacturer recommends single use", which characterizes, therefore, a reprocessing permission. Other products did not have parity in the contents of their labels (Table 1).
Three products were classified as G4, of which one was a fixed-curve diagnostic catheter, one a deflectable-curve diagnostic catheter, and another a non-irrigated ablation catheter with bidirectional curve, all from different manufacturers. The three products had more than one label catalogued in ANVISA's database, under the same registration number and with different recommendations. For the fixed-curve diagnostic catheter (ANVISA registration 10192030102), three labels were found, with the following information: "Reprocessing Forbidden", "The manufacturer recommends single use" and "Product for Single Use", which, pursuant to RDC 156, mean, respectively, reprocessing forbidden, reprocessing allowed, and irregular information. The deflectable-curve diagnostic catheter (ANVISA registration 10341350368) and the ablation catheter (ANVISA registration 10332340206), in turn, had two labels, with the following words: "Reprocessing Forbidden" and "The manufacturer recommends single use", which are contradictory instructions.
Lastly, nine physical labels of products used in electrophysiology ( Table 2) were analyzed. In six of them, no reprocessing information was found. Upon assessing these six labels in ANVISA's database, we were able to ascertain that one of them contained the expression "The manufacturer recommends single use"; three of them contained the words "Reprocessing Forbidden", and in the other one, the information was not in conformity with RDC 156. In addition, one product (transseptal introducer sheath, registered with ANVISA under No. 10332340208) did not have a label in ANVISA's database.
As this analysis shows, although the reprocessing of materials used in electrophysiological procedures is allowed and regulated by ANVISA, there are important inconsistencies in the labels of a considerable number of products, which may lead users to mistaken interpretations and, consequently, to the improper reprocessing of these materials.
The contents of 34 labels (28.9%) from ANVISA's database, which are not in conformity with RDC 156, require urgent adaptation.
We consider it extremely important for this information, defined upon the registration of the product, to be clear and irrefutable, thus ensuring quick and correct identification of medical products with regard to their use. In this regard, the labels should contain a single expression clearly defining the situation of each medical product: "reprocessing forbidden" or "reprocessing allowed". We are also of the opinion that the criteria used to classify each product must be standardized and must ensure parity between the information contained in the physical labels and in ANVISA's database.
Also, it is our opinion that the technical information submitted to ANVISA by manufacturers to justify the prohibition of reprocessing of a given product, upon its registration, must be accessible to users so that they can be aware of it.
For these reasons, the Brazilian Society of Cardiac Arrhythmias (SOBRAC) met with the suppliers of electrophysiological products available in the domestic market and suggested an immediate review of the information contained in the labels, so as to adapt them to ANVISA's standards and make the information unequivocal.
In summary, after conducting this research, it was possible to reach the following conclusions and suggestions: 1) The reprocessing and reuse of medical products in electrophysiology is permitted in Brazil and regulated by ANVISA through RDC 156. 2) A thorough analysis of the labels found inconsistencies that could entail misinterpretations and improper decisions by users with regard to compliance with RDC 156, even if unintentional. 3) In the current scenario, while these incongruences are not rectified, healthcare services that reprocess such products must make a stringent and systematic assessment of both product labels, the physical one and the one in ANVISA's database, in order to identify those that are not in conformity with RDC 156, as well as those that contain differing instructions, thus avoiding mistakes in complying with ANVISA's orders.
Author contributions
Conception and design of the research and Critical revision of the manuscript for intellectual content: Kuniyoshi RR, Sternick EB, Nadalin E, Hachul DT; Acquisition of data: Kuniyoshi RR; Analysis and interpretation of the data: Kuniyoshi RR, Sternick EB, Hachul DT; Writing of the manuscript: Kuniyoshi RR, Sternick EB.
Potential Conflict of Interest
No potential conflict of interest relevant to this article was reported.
Sources of Funding
There were no external funding sources for this study.
Study Association
This study is not associated with any thesis or dissertation work. | 2018-04-03T02:16:13.241Z | 2017-02-01T00:00:00.000 | {
"year": 2017,
"sha1": "a49d79e9d7534991970bb104fcf3c66cda6872c9",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.5935/abc.20170010",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "a49d79e9d7534991970bb104fcf3c66cda6872c9",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
9865588 | pes2o/s2orc | v3-fos-license | Optimal Control Approaches to the Aggregate Production Planning Problem
In the area of production planning and control, the aggregate production planning (APP) problem represents a great challenge for decision makers in production-inventory systems. The tradeoff between inventory and capacity is known as the APP problem. To address it, static and dynamic models have been proposed, which in general have several shortcomings. It is the premise of this paper that the main drawback of these proposals is that they do not take into account the dynamic nature of the APP. For this reason, we propose the use of an Optimal Control (OC) formulation via an energy-based and Hamiltonian-present-value approach. The main contribution of this paper is the mathematical model, which integrates a second-order dynamical system coupled with a first-order system, incorporating the production rate, inventory level, and capacity, as well as the associated workforce cost, in the same formulation. A further novel result concerning the Hamiltonian present value in the OC formulation is that it reduces the inventory level compared with the pure energy-based approach for APP. A set of simulations is provided which verifies the theoretical contribution of this work.
Introduction
Nowadays, one of the most important challenges faced by businesses is the adjustment of firm resources in order to satisfy market requirements subject to fluctuations over time, namely costs, prices, stock levels, demands, etc. [1]. When facing fluctuating, hard-to-predict demand, several companies around the world have the conflicting goals of (1) limiting the buildup of finished-goods inventory and (2) minimizing changes in the capacity level. Traditionally, the most economical solution to absorb fluctuations in demand is a mix of two alternatives [2,3]:
- Adjust the capacity level (hiring/firing labor force, working overtime/undertime, subcontracting, etc.); this is known as the chase alternative [4,5]: track the expected monthly sales and compute the corresponding capacity requirements.
- Use inventories (excess of SKUs, backlog of orders, or lost sales); this is known as the level plan: maintain a steady production rate over the entire year, using finished-goods (smoothing/anticipation) stocks to absorb ongoing differences between output and sales.
This problem, known as the aggregate planning problem, can be stated as follows [6][7][8][9]:
- determine simultaneously the production rate P, inventory level β, and capacity levels C,
- for meeting a fluctuating demand (a set of forecasts),
- at each period of a finite planning time horizon in days (t = 1, 2, ..., N),
- for a given set of production resources,
- involving one product or a family of similar items (with small differences so that considering the problem from an aggregate viewpoint is justified),
- while minimizing total relevant costs (i.e., payroll, hiring/layoffs, overtime/undertime, inventory/shortage, etc.),
- and subject to non-constant, time-varying constraints.
Aggregate production planning (APP; also called workforce planning, production and employment smoothing, or capacity and production planning) deals with matching capacity (via adjustment of production load, inventory, and employment levels) to changing demand over a finite planning horizon, in order to achieve long-run profitability. By converting monthly sales forecasts, inventory levels, labor inputs, and production rates of a single entity with characteristics representative of an entire product group to a convenient aggregate load/capacity format (such as standard hours), a production plan is generated [10]: this involves a tradeoff between the penalties for carrying inventory and for varying the capacity level, such that the plan incurs the minimum total marginal cost over a calendar year. Because of this, the aggregate planning process has an economic importance due to the decisions involved (regarding the capacity and inventory levels necessary to meet anticipated demand over the planning period), as they impact the company's performance, i.e., profit maximization. This in turn requires complete and accurate information about: machine capacity, labor utilization, levels (inventory, safety stock, manpower adjustment, subcontracting, storage), time (regular/overtime), and costs (production, inventory, overtime/idle time, subcontracts, shortage, lost sales, breakdown, backorder, hiring/firing/training). With this idea in mind, the next section presents a discussion of APP in production-inventory systems, while Section 3 introduces the research statement of APP as a control engineering problem in the context of optimal control theory. In Section 4, the stability analysis and mathematical modeling of APP are presented. In order to show the validity and usefulness of the proposed formulation, results are presented in Section 5. Finally, Section 6 presents the conclusions derived from the case study plus future research venues.
Decision Science Approaches to the Aggregate Planning Problem
Models that have been used to solve the APP problem include: the linear decision rule (for long-term strategic APP decisions), the transportation model, dynamic programming, lot-sizing models, linear programming (the most widely accepted method, as in [11]), heuristics [12], simulation [13], goal programming [14], micro-spreadsheet analysis [15], multi-objective optimization [16], fuzzy formulation models [17,18], genetic algorithms (GAs) [19], and multiple criteria mixed integer programming [20]. Based on Holt et al. [2], in the area of models for planning/scheduling production-inventory systems, two types of models can be identified:
- Static models, which are based on a finite planning horizon and consider deterministic demand. In this scenario, the receding planning horizon is bigger than the period-by-period plan.
- Dynamic models, which consider an indefinite planning horizon, with proper forecasting of demand; this is based on the context that period-by-period decisions are based on a receding forecast over a planned time horizon.
In both cases, most of the existing aggregate planning models found in the literature [21]:
- try to minimize an objective function representing "total relevant costs" (such as production, inventory, and shortage costs) over the fixed planning horizon;
- employ inventory and capacity constraints as the usual constraints, and are formulated with a single-objective function in linear programming [22] (a toy linear-programming formulation of this kind is sketched below).
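To make the single-objective linear-programming formulation referred to above concrete, the following sketch sets up a toy aggregate plan. All demand figures, costs, capacities, and variable names are invented for illustration and are not taken from any of the cited models; the sketch is only meant to show the structure (inventory-balance equalities, capacity bounds, and a linear cost) that such formulations share.

```python
import numpy as np
from scipy.optimize import linprog

# Toy data (all figures hypothetical): six monthly demands, unit production
# cost, unit holding cost, a per-period capacity, and an initial inventory.
demand = np.array([100, 120, 150, 130, 90, 110], dtype=float)
T = len(demand)
c_prod, c_hold, capacity, I0 = 10.0, 2.0, 125.0, 20.0

# Decision vector x = [P_1..P_T, I_1..I_T]: production and end-of-period inventory.
c = np.concatenate([c_prod * np.ones(T), c_hold * np.ones(T)])

# Inventory balance: I_t - I_{t-1} - P_t = -d_t  (I_0 is the given initial stock).
A_eq = np.zeros((T, 2 * T))
b_eq = -demand.copy()
b_eq[0] += I0
for t in range(T):
    A_eq[t, t] = -1.0              # -P_t
    A_eq[t, T + t] = 1.0           # +I_t
    if t > 0:
        A_eq[t, T + t - 1] = -1.0  # -I_{t-1}

bounds = [(0, capacity)] * T + [(0, None)] * T   # capacity limits, no backorders
res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=bounds, method="highs")
print("production plan:", res.x[:T].round(1))
print("inventory path: ", res.x[T:].round(1))
print("total cost:     ", round(res.fun, 1))
```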
A review of the literature over the past 20 years in the area of production planning modeling shows the following: a taxonomy of the mathematical models used in aggregate planning; a literature review of aggregate, disaggregate, scheduling, and sequencing methods by feature, model, objectives, decision variables, and solution method; the achievements in optimal control methods, based on Pontryagin's maximum principle, that allowed analytical investigation of aggregate production planning systems to be carried out in order to gain insight into their optimal behavior; and the literature on aggregate production planning models introduced in the last four decades. The different planning models used in the production arena, from the supply chain level down to manufacturing resources and capacity, as well as the different approaches followed (namely conceptual, analytical, etc.), are summarized in [23]. In recent years, it has become evident that many researchers and practitioners are increasingly aware of addressing:
- real-life situations of management and decision making in the presence of multiple objectives, by developing goal programming optimization models;
- variations over time, by developing multi-period optimization models;
- novel optimization methods, such as convex optimization in [24].
Even though all of the APP models cited in the literature have shown good performance in the academic field, they have not done so in practical life; according to [25], the following reasons apply:
- Current models do not capture the real-world approaches used in APP scenarios.
- The hypothesis that items are homogeneous and can be aggregated.
- The hypothesis that the workforce has the same competence.
- In the mathematical modeling, areas such as human resources, marketing, and finance are not considered.
- The information provided by industry is not adequate in the context of sales forecasting and cost information.
- The complex mathematical modeling and analysis of the APP process.
Furthermore, in [26] the following reasons apply:
- Uniform rates are assumed for different items, which does not capture the scenario of producing various items.
- There is a lack of interest from managers in adopting mathematical methods and proper techniques.
- The cost related to the collection and quantification of the data needed to apply the above techniques.
- The real cost functions in organizations are not well established by current techniques.
Each of the issues mentioned above can be grouped as follows:
- Regarding the modeling approach: based on [27], the models built by operations researchers and other builders of mathematical decision models, and their application in real scenarios, are far from business practice.
- Regarding the aggregation approach: even though it intends to (1) simplify/facilitate the myriad of calculations and (2) reduce the solving time of a large problem, it presents some issues [28]: aggregation does not result in a better optimal solution; it probably compromises optimality and may even result in an infeasible solution; it usually results in a better attained solution within a fixed solution time.
- Regarding the use of mathematical models: comments in journals indicate that a worrying gap exists between theory and practice, as mathematical (optimizing) models have not had a significant impact on industry operations management practices; managers find mathematical methods too daunting; the cost assumptions used in models are over-simplified and unrealistic; simplified/inflexible assumptions limit their industrial applicability; the models cannot cope with calendar variations (i.e., holidays), and the revision of distant, and therefore speculative, forecasts causes instability in the schedules; and broader concepts in the area of employment policy and inventory practices need to be introduced.
Summarizing: due to the highly complex constraints of the APP problem, exact optimal solutions provided by traditional optimization methods may very possibly be meaningless. It is true that a number of artificial intelligence approaches, combined with mathematical programming models, have been used to solve the APP problem, but little attention has been given to the consideration that marks the difference between a pure academic treatment of the APP problem and a result with real-life, practical implications: the simultaneous combination of the many constraints affecting the quality of the APP. Based on this, in its simplest form the production planning paradigm describes a firm operating in a market, facing external demand which it tries to meet by utilizing a limited set of production resources that has a limited ability to generate output in a given time period [29]. In this context, optimal control approaches provide a suitable mathematical tool with which to model and analyze the APP problem over a short time horizon, which is our case of analysis. Also, it is important to establish that optimal control is a dynamic optimization problem which can handle several decision variables and a control law, in order to set up the conditions for a decision maker in the production planning context.
The Dynamic Nature of APP
The previous sections can be summarized as follows: an economical solution (for companies) to deal with a fluctuating demand is a mix between the chase alternative (hire/fire capacity) and the level plan (inventory excess/shortages) strategies. This involves an inventory-capacity tradeoff and is known as the aggregate planning problem: determine what the production rate, inventory, and capacity levels must be in order to minimize inventory holding, shortage, and production switching costs, subject to time-varying constraints. To address this problem, static and dynamic models, which try to minimize a linear programming objective function subject to inventory and capacity constraints, have been proposed. More recently, and in order to reflect more realistic situations, models with multiple objectives and variations over time have been developed. As mentioned before, the proposed approaches have several shortcomings: they are over-simplified and unrealistic, attained solutions may even be impractical and infeasible, the use of speculative forecasts causes instability in the obtained schedules, etc. However, it is the premise of this paper that the main drawback of these proposals is that they do not take into account the dynamic nature of the aggregate planning problem: the excessive inventory and production costs due to the use of speculative forecasts are worsened by a poor understanding of the time lags between the ordering of goods and their receipt into stock [30,31]. In fact, a production system designed to deal with the APP problem should have as objectives:
- to buffer the production system from the customer, with a minimum reasonable inventory that absorbs the high-frequency content in demand and allows a level schedule. In this way, the variability of customer demand is reduced/avoided, as switching production levels up and down frequently may be very expensive in practice [32];
- to buffer the customer from supply time lags, by selling goods straight off the shelf. In this way high customer service levels can be achieved, which can only be accomplished when the dynamic behavior of the system's constituent parts (i.e., materials/information flow, operations performed, resources/decision rules/performance measures used) has been taken into account [33].
Such a system must also account for:
- demand patterns: after a ramp increase there is a continuing freefall in inventory levels, and after a step increase there is a permanent inventory deficit. So demand, without some form of averaging, results in excessive fluctuations in production rates (which are supposed to be absorbed by the inventory buffers);
- lead times: the amount of inventory holding that is needed to satisfy a customer service level depends on the uncertainties in both demand and lead times;
- inventory levels: trying to correct all of the inventory discrepancy in a single time period, when in fact it may take many more time periods, provokes excessive swings (overshoots and undershoots) around the target level.
Control Engineering & Production/Inventory Control
In [34] the author presents a series of reasons for using control engineering techniques in production/inventory control, i.e., the use of standard forms, the block diagram format, standard techniques that enable important performance metrics to be calculated without recourse to simulation, the existence of a number of techniques for transferring problems from one domain into another, etc. Authors like [35][36][37][38][39] present detailed reviews of control engineering applications to production/inventory control. One of these applications that deserves to be mentioned separately, due to the number of research studies based on it, is the APIOBPCS concept. APIOBPCS stands for Automatic Pipeline, Inventory and Order Based Production Control System; it is a well-established (both industrially and theoretically) production scheduling/control system model which operates on a knowledge of customer demand, inventory level, and unfilled orders, and which provides an acceptable trade-off between production smoothing and a high level of stock turnover. The ordering policy/production algorithm of APIOBPCS is representative of the work in [40]. Most of the APIOBPCS research has been undertaken using both control theory mathematics and system dynamics (SD); the most representative works in SD are presented in [41], where it is applied to APP via a transfer function approach; in [42], applying SD to production-inventory systems; in [43], where a production-inventory system for APP is described by differential equations via control-oriented approaches; and in [44], where the SD approach is extended to production-inventory systems in remanufacturing. APIOBPCS models are usually expressed in a continuous control form, but a discrete version is available as well. In any case, the APIOBPCS simulation models may be used confidently as a benchmark to demonstrate performance enhancement for a wide range of practical scenarios [45].
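For readers unfamiliar with APIOBPCS, the following discrete-time sketch implements a commonly quoted version of its ordering rule (exponentially smoothed demand plus fractional corrections of the inventory and pipeline errors). The time constants, lead time, and demand signal are illustrative assumptions, not values taken from the works cited above.

```python
import numpy as np

# Discrete-time APIOBPCS sketch (illustrative parameter values):
# order = smoothed demand + inventory-error/Ti + WIP-error/Tw.
T_sim, Tp = 60, 4                 # horizon and production (pipeline) lead time
Ta, Ti, Tw = 8.0, 4.0, 4.0        # demand smoothing, inventory and WIP correction times
target_inv = 0.0                  # inventory target (deviation form)

demand = np.full(T_sim, 100.0)
demand[10:] = 120.0               # a step increase in demand at t = 10

orders = np.zeros(T_sim)
inv = np.zeros(T_sim)
d_hat = demand[0]
pipeline = [100.0] * Tp           # orders already released but not yet completed

for t in range(T_sim):
    wip = sum(pipeline)                        # pipeline content at start of period
    completed = pipeline.pop(0)                # production completed this period
    inv[t] = (inv[t - 1] if t else 0.0) + completed - demand[t]
    d_hat += (demand[t] - d_hat) / (1.0 + Ta)  # exponential smoothing of demand
    desired_wip = Tp * d_hat
    orders[t] = max(0.0, d_hat + (target_inv - inv[t]) / Ti + (desired_wip - wip) / Tw)
    pipeline.append(orders[t])

print("orders around the step:   ", orders[8:20].round(1))
print("inventory around the step:", inv[8:20].round(1))
```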
Order-Up-To (OUT) and Smoothing Policies
As inventories should have a stabilizing effect on material flow patterns, a minimum reasonable inventory is necessary to absorb the variations in demand and allow a level schedule. The level scheduling problem (LSP), or production smoothing problem (PSP), refers to the problem of finding level schedules where:
- production of a given product is constant over time, or
- the cumulative production amount of a product is proportional to time, or
- the items are dispersed over the schedule as uniformly as possible, with a minimal total deviation from the final level of operations.
Setting a takt-paced production results in leveled production/production smoothing (key to establishing the strategic/market pull), which is a simple matter of buffering the production line from demand variability, with either a time backlog or inventory. The tradeoff between stocks and schedule stability is reflected in the master production schedule. On the other hand, a replenishment strategy that strives to bring the inventory position up to a predetermined target level is called an OUT policy. These kinds of policies are very popular both in research and in practice, since they are known to minimize inventory holding and shortage costs. Dejonckheere et al. [39] present two shortcomings of OUT policies:
1. Generation of a bullwhip effect.
2. If production is not flexible and switching costs are excessive, they are not an optimal option.
Regarding point 1: the bullwhip effect has largely been analyzed by the OR, system dynamics, and control theory communities, where two popular approaches are the statistical inventory control approach and the control engineering approach. Most of the research analyzing the bullwhip effect considers supply chain systems. However, from the point of view of production-inventory systems, the bullwhip effect has been studied in [46], which applies autoregressive models to multiple-step demand; in [47], with an adaptive base-stock policy to determine order quantities; and in [48], which explores inventory stability in the context of a seasonal supply chain. In the case of the first approach: steady-state models based on steady-state conditions, like deterministic, stochastic, economic game-theoretic, and simulation models, are insufficient and therefore unable to describe, analyze, and find remedies for problems like the bullwhip effect. In the case of the second approach: the bullwhip effect can be avoided by smoothing the ordering pattern, a problem known in the literature as the "production smoothing problem", which means that it is possible to dampen order fluctuations even in environments where decision makers have to rely on forecasts.
Regarding point 2: the total cost of a perfectly controlled system, defined as a system that faithfully tracks some reference or target signal [49], is composed of the costs associated with perfectly tracking the target (i.e., traditional fixed and variable costs) and the costs associated with not being in perfect control of the system (i.e., under-production/over-production costs, and excessive/insufficient inventory costs). As there is a trade-off to be made between OUT policies' minimum inventory holding plus shortage costs and smoothing policies' minimum production switching costs, when the cost structure is altered there is a need to identify a set of values (related to the production rate and the inventory/capacity levels) that reduces the sum of total costs [50].
The Aggregate Planning Problem as an Optimal Control Problem
Fact #1: the aggregate planning problem has to do with finding an optimal set of parameter values that minimizes a set of costs, facing a varying demand and within a finite planning time horizon. Fact #2: the control engineering approach allows us to analyze, design, and simulate dynamic models. Fact #3: in order to dampen order fluctuations and avoid the bullwhip effect (generated by OUT policies), a smoothing policy is required. When viewed together, these facts suggest (1) the understanding of the aggregate planning problem as an optimization problem, and (2) the use of control engineering tools capable of dealing with dynamic systems, which allows the characterization of damping strategies. With this idea in mind, we state the research proposal of this paper as the formulation of APP as an optimization problem, applying OC techniques via an energy-based formulation of the dynamical system which describes the behavior and nature of the problem.
Mathematical Modeling of APP
Based on previous works [51] and [52], which apply a second-order differential equation to draw an analogy between production systems and mechanical vibration systems, the approach here is to propose an energy-based analogy to obtain the dynamic equation for APP in production-inventory systems. In order to develop a consistent energy-based analogy for APP, an energy function E_APP, the total energy of the APP system, is proposed; the potential energy is calculated as V = (1/2) W P^2, where in our analysis W(P) = W P. The Lagrangian and the corresponding Lagrange equation are then computed, from which, for stability purposes, the damping is taken to be of the form φ = -(β(t) - α) dP/dt. Finally, for the purposes of APP, the equation of the system is the model given in Equation (3). In order to achieve an understanding of the α parameter (EOQ) in the formulation, please refer to the Appendix. The damping factor in Equation (3) is a function of time, is Lipschitz, and is required to converge to a scalar value. Regarding the production-inventory system, the inventory level β(t) obeys an ODE, as presented in [53]; in this paper this takes the form of Equation (4).
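The displayed Equations (3) and (4) did not survive the text extraction, so the sketch below should be read only as an illustration of how a coupled second-order/first-order system of this general type can be integrated numerically. The right-hand sides, parameter values, and initial conditions are assumptions chosen to be consistent with the fragments quoted above (state-dependent damping (β(t) - α)·dP/dt, a restoring term W·P, and an inventory balance), not the authors' actual model.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Assumed illustrative form (NOT the paper's Equations (3)-(4)):
#   P'' + (beta(t) - alpha) P' + W P = d(t)   (second-order production dynamics)
#   beta' = P - d(t)                          (first-order inventory balance)
# Parameter values and initial conditions are invented; they are chosen so that
# the state-dependent damping (beta - alpha) stays positive over the horizon.
W, alpha = 1.0, 50.0
d = lambda t: 100.0               # constant demand for this illustration

def rhs(t, y):
    P, dP, beta = y
    ddP = d(t) - (beta - alpha) * dP - W * P
    return [dP, ddP, P - d(t)]

# Start at the demand-matching production level, with a disturbance in dP/dt.
sol = solve_ivp(rhs, (0.0, 40.0), [100.0, 15.0, 120.0], max_step=0.05)
P, beta = sol.y[0], sol.y[2]
print(f"final production rate: {P[-1]:.1f}, final inventory level: {beta[-1]:.1f}")
```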
Optimal Control Basic Concepts and Notation
Optimal control theory has as its objective the maximization of the return from, or the minimization of the cost of, the operation of physical, social, and economic processes [54]. Based on this, in this work the interest is in applying optimal control approaches to problems in APP with dynamics in the production-inventory level, taking into account a contribution of the EOQ.
Optimal Control Formulation for APP: Continuous Inventory Policy
An optimal control is defined as an admissible control which minimizes an objective function. Given a dynamic system with initial condition x_0, which evolves in time according to dx/dt = f(x, u, t), the objective is to find a control vector which is admissible and achieves a minimum of the cost functional. The optimal control problem is: minimize the cost functional subject to dx/dt = f(x, u, t), x(t_0) = x_0, u(t) ∈ R^m (5b). In economics, several problems in OC present a discount factor e^{-δt}. In this work, this discount factor is applied to production-inventory systems, with δ taken as the ratio between the EOQ and the capacity. Based on this, the optimal control problem takes a discounted form, from which the Hamiltonian is obtained, and the Hamiltonian "present value" is defined. Considering the relation m(t) = λ(t) e^{δt}, after some manipulation (a detailed discussion is given in [55]), the Hamiltonian "present value" is obtained. The interest is to apply both optimal control formulations to the dynamic system of Equations (3) and (4); the results are provided in the following section.
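Because the displayed equations are missing from the extracted text, the block below restates the standard form of a discounted optimal control problem and its current-value ("present value") Hamiltonian, which is the construction the paragraph above refers to. It is a generic reconstruction; the specific integrand and dynamics of the APP model are not reproduced.

```latex
% Generic discounted optimal-control problem (standard form, not the APP-specific one):
\min_{u(\cdot)} \; J \;=\; \int_{t_0}^{t_f} e^{-\delta t}\, L\bigl(x(t),u(t)\bigr)\, dt,
\qquad \dot{x} = f(x,u,t), \quad x(t_0) = x_0, \quad u(t) \in \mathbb{R}^{m}.

% Ordinary Hamiltonian and its current-value ("present value") counterpart:
H = e^{-\delta t} L(x,u) + \lambda^{\top} f(x,u,t), \qquad
\mathcal{H} = e^{\delta t} H = L(x,u) + m^{\top} f(x,u,t), \qquad
m(t) = \lambda(t)\, e^{\delta t}.

% Costate equations in the two formulations:
\dot{\lambda} = -\frac{\partial H}{\partial x}, \qquad
\dot{m} = \delta\, m - \frac{\partial \mathcal{H}}{\partial x}.
```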
Theorem 1. (LaSalle's Invariance Principle)
Let f(x) be a locally Lipschitz function defined over a domain D ⊂ R^n, and let Ω ⊂ D be a compact set that is positively invariant with respect to dx/dt = f(x). Let V(x) be a continuously differentiable function defined over D such that dV/dt (x) ≤ 0 in Ω. Let E be the set of all points in Ω where dV/dt (x) = 0, and let M be the largest invariant set in E. Then every solution starting in Ω approaches M as t → ∞.
In order to apply Theorem 1, for stability purposes, to the dynamical system formed by Equations (3) and (4), the input (demand function) is taken to be d(t) = 0, from which the dynamical system takes the form of Equations (10)-(12), with x_1 = P, x_2 = dP/dt and x_3 = β, and with an equilibrium point at (x_1, x_2, x_3) = (0, 0, 0). To prove stability, a Lyapunov candidate function is proposed as in Equation (13).
Corollary 2 ([56]). Let x = 0 be an equilibrium point for dx/dt = f(x). Let V : D → R be a continuously differentiable positive definite function on a domain D containing the origin x = 0, such that dV/dt (x) ≤ 0 in D. Let S = {x ∈ D | dV/dt (x) = 0} and suppose that no solution can stay identically in S other than the trivial solution. Then the origin is asymptotically stable.
Applying the condition dV/dt ≤ 0 to Equation (13), after some algebraic manipulation Equation (14) takes the form of Equation (15). Substituting Equations (10)-(12) into Equation (15), and grouping terms and factorizing in Equation (16), the following condition must be satisfied in Equation (17) for stability purposes: α = K_2. Considering that x_1, x_2, x_3 > 0 and using Equation (19), a necessary and sufficient condition to satisfy dV/dt ≤ 0 is then obtained.
Energy Based Optimal Control Formulation: Continuous Inventory Policy Approach
Consider the following energy-based optimal control problem. The Hamiltonian is computed and the first condition from the Pontryagin Maximum Principle is applied, which yields the set of ODEs for the co-states. Applying the second condition from the Pontryagin Maximum Principle, and noting that ∂²H/∂u² = 1 > 0, we have a minimum. Finally, substituting the resulting optimal control u, expressed in terms of λ_2, λ_3 and the capacity C, into the set of states and co-states, the following simulations are obtained.
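Pontryagin's conditions turn the problem into a two-point boundary value problem in the states and co-states. Since the paper's own equations are not recoverable here, the sketch below solves a deliberately simple scalar linear-quadratic example (invented for illustration) with SciPy's collocation solver, only to show the workflow of substituting the optimal u back into the coupled state/co-state system.

```python
import numpy as np
from scipy.integrate import solve_bvp

# Illustrative scalar LQ example (not the APP model): minimize the integral of
# (x**2 + u**2)/2 subject to dx/dt = -x + u, x(0) = 1, over [0, T].
# Pontryagin's conditions give u* = -lam and the state/co-state system
#   dx/dt = -x - lam,   dlam/dt = -x + lam,   with x(0) = 1, lam(T) = 0.
T, x0 = 5.0, 1.0

def odes(t, y):
    x, lam = y
    return np.vstack([-x - lam, -x + lam])

def bc(ya, yb):
    return np.array([ya[0] - x0, yb[1]])

t = np.linspace(0.0, T, 50)
y_guess = np.zeros((2, t.size))
y_guess[0] = x0
sol = solve_bvp(odes, bc, t, y_guess)

u_opt = -sol.sol(t)[1]            # optimal control recovered from the co-state
print("converged:", sol.status == 0)
print("u*(t) at the first grid points:", u_opt[:5].round(3))
```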
To discuss the previous results: Figure 1 shows the production rate behavior over time, which exhibits a maximum and a minimum level over the horizon presented. Figure 2 shows that the inventory level reaches a maximum over the same time horizon; this follows from the first-order differential equation which characterizes it. Figure 3 shows that the demand level increases as the time horizon increases.
Energy Based-Optimal Control with a Discount Factor: A Hamiltonian "Present Value"
Based on a previous work [57], which applies a discount factor in the performance index, this approach considers the analogous problem with a discount factor. The first condition for the Hamiltonian present value yields the co-states, where the Hamiltonian has the corresponding present-value form and the resulting set of ODEs describes the co-states. Applying the second condition from the Pontryagin Maximum Principle, and noting that ∂²H/∂u² = 1 > 0, we have a minimum. Substituting the resulting optimal control u, expressed in terms of m_2, m_3 and the capacity C, into the set of states and co-states, and after some manipulation, the following simulations are obtained.
The introduction of the Hamiltonian present value produces a higher production rate level in the dynamics, as shown in Figure 4, and a lower inventory level (almost half of the energy-based inventory level from Section 5.1), which is shown in Figure 5. Finally, the demand level increases as the time horizon increases, as shown in Figure 6.
Conclusions
This research work presents an optimal control formulation for APP problems via the energy-based and Hamiltonian-present-value approaches. Stability analysis via LaSalle's invariance principle establishes a condition on the inventory level which involves the EOQ parameter. The main contribution of this paper is the mathematical model, which integrates a second-order dynamical system coupled with a first-order dynamical system and incorporates the production rate, inventory level, and capacity, as well as the associated workforce cost, in the same formulation.
Simulations show that, along with the increased production rate, the inventory level reaches a maximum when the demand level grows. A novel result in relation to the Hamiltonian present value in the optimal control formulation is that it reduces the inventory level compared with the pure energy-based approach for APP.
Further work will integrate the associated costs with the dynamics and extend the case studies in the cost functional. There is also interest in extending the mathematical model, in a discrete formulation, in order to apply Model Predictive Control (MPC). Future research will consider the use of robust optimal control, another way of dealing with uncertainty, where a deterministic uncertain-but-bounded quantity is used (i.e., future demand can be bounded between lower and upper limits, without needing to define the probability of occurrence of each possible event within these limits), together with the constraints regarding the operation of the system. Our intention is to extend this work with the application of MPC strategies via a suitable dynamic system which integrates the dynamics of the inventory level. A practical real-life problem which addresses this approach is of interest as well.
Figure 1. Production rate for APP (time scale in days).
Figure 2. Inventory level for APP (time scale in days).
Figure 3. Demand level for APP (time scale in days).
Figure 4. Production rate for APP (Hamiltonian-present value, time scale in days).
Figure 5. Inventory level for APP (Hamiltonian-present value, time scale in days).
Figure 6. Demand level for APP (Hamiltonian-present value, time scale in days). | 2016-03-01T03:19:46.873Z | 2015-12-10T00:00:00.000 | {
"year": 2015,
"sha1": "e6deb24e0ffb5c888652c153eafa14fce36465f0",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2071-1050/7/12/15819/pdf?version=1449739234",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "e6deb24e0ffb5c888652c153eafa14fce36465f0",
"s2fieldsofstudy": [
"Business"
],
"extfieldsofstudy": [
"Engineering"
]
} |
73642980 | pes2o/s2orc | v3-fos-license | The Unbearable Lightness of Finger Movements : Commentary to Doliński
In the target article, Doliński (2018, this issue) showed that empirical studies of "real" behaviour are an almost extinct species of research, judging from articles published in the most recent volume of JPSP (Journal of Personality and Social Psychology). This finding continues a trend identified by Baumeister and colleagues ten years ago. The reliance on self-reports and rating scales can hardly be explained as an aftermath of the cognitive revolution in psychology, or as a preoccupation with measurements and advanced statistical analyses, as Doliński suggests, but is more compatible with the ease of collecting questionnaire data, combined with the pressure to publish large multi-study papers and to obtain approval from ethical review boards. This development is further strengthened by the accessibility of online participant pools. An informal count showed that students participating for course credit were involved in more than 90% of empirical JPSP studies in 2006, as against 22.5% in 2017. In contrast, Amazon Mechanical Turk workers, non-existent in 2006, participated in 55.3% of the empirical studies published in the most recent volume. Parallel to this development, the number of participants per study and the number of studies per article have vastly increased.
Dariusz Doliński's target article is intriguing and alarming. Social psychologists profess to study people's thoughts, feelings and behaviours in social settings, but examining all empirical studies presented in the most recent volume of JPSP (Journal of Personality and Social Psychology), Doliński found only 6% reporting observations of "real behaviour". The remaining 94% presumably reported people's thoughts. That is, thoughts about opinions, values and feelings, and thoughts about the participants' own and others' behaviours, rather than observations of how they actually behaved. Such thoughts were collected during brief episodes of sedentary behaviour involving nothing more physical than "finger movements" on a keyboard. The finding is consistent with a trend identified by Baumeister, Vohs, and Funder (2007) ten years ago. These scholars had observed a reduction of behavioural studies from 80% to about 15% in JPSP over a 30-year period. Doliński's recent analysis has revealed a further decline.
Doliński intimates that social psychologists are so busy investigating presumable causes of behaviour that they forget to examine the actual behaviour that was to be explained. This may in itself reflect a widespread human bias, reminiscent of a concern expressed four hundred years ago by Montaigne: "They leave things and runne for causes […] They commonly beginne thus: How is such a thing done? Whereas they should say: Is such a thing done" (de Montaigne, 1885/1603, p. 526).
A Search for Explanations
In keeping with his own preference for behaviours over explanations, Doliński makes a more convincing case for what social psychologists do (or do not do) than why they do it. He mentions just in passing that studying real behaviour is far more difficult and challenging than collecting keyboard strokes. But instead of expanding on this most obvious explanation, he highlights two more far-fetched ones: the cognitive shift in the 1960s and the present-day obsession with advanced statistical techniques. However, I doubt their importance.
As for the first: The influence of the cognitive revolution caused (in his view) a shift away from observations of behaviour to internal processes of a cognitive nature. But the so-called cognitive revolution did not displace a previous focus on naturally occurring human behaviour, it displaced a previous focus upon responses of caged rats! It remains true that the novel information-processing paradigm could be blamed for its reliance on highly artificial mini-behaviours of actors in front of a computer screen. But in this respect cognitivism continued rather than broke away from the mechanistic tradition inherited from behaviourism. One of the central spokesmen for the cognitive movement, Ulric Neisser, known for the first main textbook of the new approach (Neisser, 1967), was also one of the first to take a critical view. In his subsequent book, Cognition and reality (Neisser, 1976), he warned that the research inspired by the new approach had become disappointingly narrow, flawed by a lack of "ecological validity," downplaying the perceiving individual's active rather than reactive role, and neglecting actors' interactive relationship with their environment.
In contrast, the classical studies of "real" human behaviour in social psychological experiments (like the seminal studies by Festinger, Asch, Milgram and Schachter) were rather an outgrowth of the Gestalt tradition and the impact of Kurt Lewin and his disciples on the American scene (Patnoe, 1988). This tradition could be regarded as allied to, rather than opposed to, a general shift away from stimulus-response models to cognitions (e.g., Korman, Voiklis, & Malle, 2015). My point is that the absence of behavioural studies cannot be attributed to a predominant interest in cognitions. In fact, the graph of behavioural studies reported by Baumeister et al. (2007) shows a strong increase in the decade from 1966 to 1976, in apparent contradiction to Doliński's claim that social psychologists' neglect of behaviour was "clearly linked" to the cognitive revolution in the 1960s.
As for the second claim, about the effect of overly refined statistical models: I agree that many studies, especially in the last couple of decades, appear scientific mainly because of their sophisticated (and sometimes inscrutable) ways of handling data, demonstrating what Elster (2012) has called "hard obscurantism" in the social sciences. But such methods do not require self-reports or tick marks on a rating scale. "Real" (physical) behaviour is not as binary as Doliński claims, but lends itself to graded measurements on all kinds of physical scales: intensities, latencies, completion times, drops of saliva, eye movements, heart rates, and occurrence frequencies. A passion for measurement and statistics has been an integral part of behavioural studies from the beginning. Francis Galton suggested in 1884 that people's inclinations towards each other could be measured behaviourally by placing pressure gauges under the chair legs of his dinner guests. Norman Triplett's (1898) legendary social facilitation study measured performance time with a stopwatch. We still want measures of behaviour. A study of alcohol consumption would not be complete unless we knew how much a person drinks. In studies of aggression we would like to know the frequency and severity of actual fights. Regions of personal space can be measured in inches and feet, and studies of risk taking in economic psychology record amounts to be invested, gained or lost. In fact, statistical measures and models appear to be at least as applicable to behavioural indices as they are to thoughts.
So, if the cognitive revolution is not what started it, and the reliance on measurement and mathematical procedures is not what keeps it alive, what drives social psychologists' avoidance of real behaviours? Baumeister et al. (2007) suggested several mechanisms that are, in my opinion, no less important today than they were ten years ago. Here are two obvious ones: (1) The pressure to produce as many papers as possible, preferably with several studies and a large and powerful N. Students need to earn their doctorates in a span of a few years; post docs need a number of published papers to compete successfully for academic positions; professors, engaged in a Darwinian struggle for continued existence, understand they need a large batch of offspring to ensure that some of their ideas may survive and reproduce. They realize that they can keep their productivity unchecked by repeating rather than renewing their message and their methods. (2) Stricter enforcement of ethical principles makes researchers shy away from procedures that have a potential of harming participants in any conceivable way. We cannot study people's need for intimacy by actually touching strangers or asking them to undress. The sheer intention to do so would be cut short by an institutional review board (IRB), before the first participant would have a chance to report the incident to a #MeToo campaign. A more admissible project might still be delayed by the bureaucracy involved in obtaining ethical approval. Perhaps paradoxically, experiments of the Zimbardo, Milgram, or Darley and Latané type can nowadays only be staged by the entertainment industry in TV and film productions, not by researchers.
Mechanical Turks
The last decade has seen one additional reason reinforcing the trend. While behavioural studies are made more difficult, data based on "finger movements" have become increasingly easy to obtain, thanks to the availability of participants through online platforms like Amazon's Mechanical Turk. Many researchers, including myself, consider this a blessing. Not only are we less dependent on a pool of students, collections can be done almost overnight for a relatively modest cost, and although respondents cannot be regarded as a truly representative sample of the population, they are more mixed in terms of age and educational background than student participants working for credit points. By regular participation, they have achieved some skills in answering questionnaires in a professional way, uncontaminated by reactance and misunderstandings. Most research has shown that their responses are of good quality and can be relied upon (Paolacci & Chandler, 2014). However, like other non-native invasive species, they multiply in their new habitat, perhaps too fast.
To highlight the reliance of social psychological research on Mechanical Turk, let's take a second look at the individual studies listed in Doliński's Table 1 (Doliński, 2018, this issue). An inspection of participants described in the studies' methods sections revealed that traditional student samples are on the decline. A rough count (by the present author) showed that students participated in only 22.5% of them, while MTurk workers took part in no less than 55.3% of all studies. The remaining empirical studies included a mix of participants recruited from other websites, people representing specific organizations and interest groups, archival data and so on. For comparison purposes, I also inspected the March and May issues of JPSP from 2006 that featured in the overview of Baumeister et al. (2007). It turned out that out of 88 empirical studies reported in these two issues, as many as 85 (96.6%) were based on student samples. This was before the advent of MTurk, so only one drew its participants from an internet site (of the remaining two, one studied couples and another the personality of orangutans). | 2018-12-29T10:32:06.514Z | 2018-05-29T00:00:00.000 | {
"year": 2018,
"sha1": "da37f49e370990736fd5e4e55900ce474ef2e66e",
"oa_license": "CCBY",
"oa_url": "https://spb.psychopen.eu/index.php/spb/article/download/2343/2343.pdf",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "da37f49e370990736fd5e4e55900ce474ef2e66e",
"s2fieldsofstudy": [
"Psychology"
],
"extfieldsofstudy": [
"Psychology"
]
} |
14399297 | pes2o/s2orc | v3-fos-license | Conductance through contact barriers of a finite length quantum wire
We use the technique of bosonization to understand a variety of recent experimental results on the conductivity of a quantum wire. The quantum wire is taken to be a finite-length Luttinger liquid connected on two sides to semi-infinite Fermi liquids through contacts. The contacts are modeled as (short) Luttinger liquids bounded by localized one-body potentials. We use effective actions and the renormalization group to study the effects of electronic interactions within the wire, the length of the wire, finite temperature and a magnetic field on the conductivity. We explain the deviations of the conductivity away from 2Ne^2/h in wires which are not too short as arising from renormalization effects caused by the repulsive interactions. We also explain the universal conductance corrections observed in different channels at higher temperatures. We study the effects of an external magnetic field on electronic transport through this system and explain why odd and even spin split bands show different renormalizations from the universal conductance values. We discuss the case of resonant transmission and of the possibility of producing a spin-valve which only allows electrons of one value of the spin to go through. We compare our results for the conductance corrections with experimental observations. We also propose an experimental test of our model of the contact regions.
Introduction
With the rapid advances made in the fabrication of high mobility semiconductor heterojunctions, these systems have provided the setting for the discovery of several new phenomena in quantum systems. Popular examples include mesoscopic systems like quantum dots, quantum wires, and the two-dimensional electron gas samples in which the quantum Hall effects are observed [1]. In particular, quantum wires are created by the electrostatic gating of two-dimensional electron gases (2DEG) (with typical densities of n_{2DEG} ∼ 0.5-6 × 10^{15} m^{-2}) in the inversion layer of GaAs heterostructures. These GaAs samples typically have a very high mobility (typically, µ ∼ 3-8 × 10^2 m^2 V^{-1} s^{-1}) because there is very little disorder in them; the mean free path of an electron in the 2DEG is of the order of λ_{MF} ∼ 5-20 µm. This makes it possible to create ballistic channels a few microns in length for studying electron transport in such wires, especially at low temperatures when the thermal de Broglie wavelength of the electron is comparable to the channel length. Furthermore, since it is possible to maintain a low carrier concentration in these wires, it becomes possible for transport to take place through only a few channels or even a single channel.
Thus, several observations [2][3][4][5][6][7][8][9][10][11][12] of the quantization of the conductance in electron transport through such channels have been made over the last two decades. More recently, several new ways have been found to produce such channels and this has led to even more precise experimental studies. This has brought into focus novel aspects of electron transport in such channels, not all of which are understood as yet.
Let us first briefly review some of the recent experimental findings in quantum wires. The first striking observation is that of a number of flat plateaus in the dc conductance, at values close to g N (2e^2/h), which are separated by steps of roughly the same value [4]; here N denotes positive integers starting from one. The factor g < 1 is found to vary with the length of the quantum wire and the temperature; it has been seen to be as low as 0.75. In fact, the plateaus tend towards N (2e^2/h) as either the temperature is raised or the length of the quantum wire is shortened [3,4]. This seems to imply a uniform renormalization of the plateau heights for each channel in the quantum wire as a function of wire length and temperature [4,6,8,12]. Also, the flatness of the plateaus appears to indicate an insensitivity to the electron density in the channel. Furthermore, kinks are observed on the rise of the conductivity to some of the lowest plateaus. One such kink has been named the "0.7 effect" [5,7,12]. These kinks are seen to wash away quickly with increasing temperature [5,7] and an external magnetic field placed in-plane and parallel to the channels [7]. Also, upon increasing such a magnetic field, a splitting of the conductance steps is observed together with an odd-even effect of the renormalization of the plateau heights, with the odd and even plateau heights being renormalized by smaller and larger amounts respectively. Finally, at very high magnetic field, another kink is seen to arise near the first spin-split plateau.
Several of these experimental observations have found no satisfactory explanation to date. It is the purpose of this work to provide a consistent framework within which most of these observations of transport in quantum wires can be explained. We will rely upon several concepts and techniques that have become popular in the study of interacting mesoscopic systems. These include the concept of bosonization, effective actions and the renormalization group (RG) [13,14,15]. To be precise, we will employ these techniques and ideas to understand the low energy transport properties of ballistic electrons in a finite quantum wire attached to two semi-infinite Fermi leads [16,17,18], but with a difference: the contacts of the quantum wire with its leads will themselves be modeled as short quantum wires with junction barriers at either end. The properties of the contacts are unaffected by the external gate voltage which causes the formation of the discrete subbands in the quantum wire. The junction barriers will be modeled as localized δ-functions to account for the back-scattering of electrons due to the imperfect coupling between the quantum wire and the 2DEG reservoirs; these barriers will renormalize the conductance as observed in the experiments. We will also study the effect of external electric and magnetic fields on this system. The properties of such a model for the quantum wire will be seen to account for several of the experimental observations mentioned above, as well as predict the possibility of some more interesting observations in future experiments in these systems. It should be stated here that a possible mechanism for some of the experimental observations [4] has been proposed in Ref. [19]; this is based on the anomalously enhanced back-scattering of electrons entering the 2DEG reservoir from the quantum wire due to the formation of Friedel oscillations of the electron density near the edges of the reservoirs, and it neglects interactions between the electrons in the quantum wire. Our model, however, attempts to understand these observations keeping in mind the importance of electron-electron interactions, barrier back-scattering, finite temperature and magnetic field, as well as all length scales in the quantum wire system. The paper is organized as follows. In Sec. 2, we discuss the basics of the model outlined above. We show that the model can be described by a K_L-K_C-K_W Luttinger model [20] with three different interaction parameters in the lead (K_L), the contacts (K_C) and the wire (K_W). By assuming that the electron-electron interactions get screened out rapidly as one goes from one region to another, we get in a natural way the existence of localized barriers at the junctions. We then discuss how our model goes beyond the concept of ideal contact resistances as studied in the Landauer-Buttiker formalism. In Sec. 3, we study the effective action of our model in the presence of external electric fields after integrating out all bosonic fields except those at the boundaries between the various regions (the leads, the contacts and the wire). Depending on the relative sizes of the contacts and the wire, we define two regimes: (a) the quantum wire (QW) limit, where the length of the wire is much greater than that of the contact, and (b) the quantum point contact (QPC) limit, where the length of the contacts is much greater than the length of the wire.
We then study the symmetries of the effective action to determine when resonant transmission is possible as a function of a tunable gate voltage. All of this is first done for spinless fermions and subsequently, we give the modifications of the results for spinful fermions. In Sec. 4, we explicitly compute the corrections to the conductance due to the barriers at finite temperature (T ) and for a finite length of the wire (l) and for finite contact lengths (d). We compute the frequency dependent Green's functions of the model, in the different frequency regimes and use the Kubo formula to compute the conductance. We also show how these results could have been anticipated from the renormalization group (RG) equations for the barrier strengths. In Sec. 5, we study our model in the presence of an external in-plane magnetic field. Using RG methods, we show that the spin-up and spin-down electrons see different barrier heights at the junctions, and use this idea to explain the odd-even effect mentioned earlier. We also outline all the possible resonances that can be seen under such conditions. We point out the possibility of producing a spin-valve at moderate magnetic fields. In addition, we compute the conductance of our model and discuss its qualitative features as a function of the strength of the magnetic field. In Sec. 6, we compare the features of the conductance expressions obtained with the observations made in various experiments for transport in quantum wires with and without an external magnetic field. We find that our model is applicable to a large class of experiments and gives a unified and qualitatively correct explanation of all of them. In particular, our model gives a possible explanation for the uniform renormalization of all the conductance steps seen in several experiments. We also explain the odd-even effect seen in experiments in the presence of magnetic field. In addition, we propose more precise experimental tests of our model. Finally, we end in Sec. 7, with a summary of all the new results in our paper, and outline further investigations that are possible.
The Model
In this section, we will study the Tomonaga-Luttinger liquid (TLL) model [13] of a quantum wire of finite length with no disorder, which is connected to the two 2DEG reservoirs, modeled as two semi-infinite Fermi leads, through two contact regions. The contacts are modeled as short quantum wires with the junctions at either end modeled as δ-function barriers. The inter-electron interactions in the system, and hence the parameter K which characterizes the interactions, vary abruptly at each of the junctions. Hence, we study a K_L-K_C-K_W-K_C-K_L model (see Fig. 1). The motivation for the above model is as follows. The electrons in the 2DEG are basically free; hence, in the equivalent 1D model, they are modeled as semi-infinite leads with Luttinger parameter K_L = 1. This can be understood as follows: if each end of the quantum wire is approximated by a point, only those electrons in the 2DEG which are in a zero angular momentum state (with respect to the appropriate end) can enter (or leave) the wire. Thus, the wave function of such a state has the radial coordinate as its only variable, and we may, therefore, model the 2DEG as noninteracting 1D systems lying on either side of the quantum wire. The electron velocity in the leads v_L is given by the Fermi velocity of the 2DEG electrons in the reservoirs, v_F = √(2E_F^{2D}/m). On the other hand, the externally applied gate voltage V_G is applied over a small region, and this leads to the formation of several discrete sub-bands in which the electrons feel the transverse confinement potential produced by V_G. This region is the one-dimensional quantum wire where the density of electrons is controlled by the gate voltage. The lowest energies E_s in each sub-band are given by the discrete energy levels of the transverse confinement potential (and can therefore be shifted by changing V_G) [21]. The Fermi energy in the s-th channel is given by E_F^{1D} = E_F^{2D} − E_s, and the channel is open when E_F^{1D} > 0; the electron velocity v_W in the channel is then related to the 2DEG Fermi velocity v_F by v_W = √(v_F² − 2E_s/m). In this gate voltage constricted region, the electrons will be considered as interacting via a short range (Coulomb-like) repulsion. Thus each discrete channel is modeled by a separate TLL. Let us, for the moment, consider one such channel with an interaction parameter K_W and quasiparticle velocity v_W.
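To make the channel-opening condition concrete, here is a small Python sketch; the effective mass, 2DEG Fermi energy and sub-band energies E_s below are illustrative numbers of a plausible order for GaAs, not values taken from the text or from any particular experiment.

import numpy as np

# Illustrative parameters (assumed, not from the text):
m = 0.067 * 9.109e-31                              # GaAs effective mass (kg)
E_F_2D = 10e-3 * 1.602e-19                         # 2DEG Fermi energy: 10 meV (J)
E_s = np.array([3, 6, 9, 12]) * 1e-3 * 1.602e-19   # sub-band bottoms E_s (J)

v_F = np.sqrt(2 * E_F_2D / m)                      # 2DEG Fermi velocity

for s, Es in enumerate(E_s, start=1):
    E_F_1D = E_F_2D - Es             # Fermi energy measured in channel s
    if E_F_1D > 0:                   # channel s is open when E_F_1D > 0
        v_W = np.sqrt(v_F**2 - 2 * Es / m)
        print(f"channel {s}: open,  v_W = {v_W:.2e} m/s")
    else:
        print(f"channel {s}: closed")

Changing V_G shifts the E_s and thereby opens or closes channels one at a time, which is how the number of conducting channels is tuned in the experiments.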
The contacts represent the regions where the geometry changes from two-dimensional (2D) to 1D. In these regions of changing geometry, interactions between the electrons are likely to be very important; thus we model the contact region as a Luttinger liquid with K = K_C. However, the gate voltage V_G is unlikely to affect the properties of the electrons in these regions, as the discrete sub-bands form a little deeper inside the wire. We choose different parameters K_W for the wire and K_C for the contacts because it is not obvious that the inter-electron interactions within the quantum wire will be the same as in the contacts. The density of electrons in the quantum wire is controlled by the gate voltage, whereas the density of electrons in the contacts is controlled by the density of the 2DEG at or near the Fermi energy. Hence, we expect K_C to be independent of V_G, but K_W to depend on E_F^{1D}, which, in turn, depends on V_G. We will also show below that the change in the inter-electron interactions between the lead, contact and quantum wire regions gives rise to barrier-like back-scattering of the electrons.
Simpler versions of this model (but without junction barriers and without contacts) have been studied by several authors [16,17,18], who found perfect conductance through the TLL channel, independent of the inter-electron interactions. Perfect conductance is also seen in several of the experiments [2,3,4,5,7]. In the opposite limit, the model of a finite quantum wire connected to the two reservoirs by tunneling through very large barriers has also been studied [22]. The idea of modeling 2DEG reservoirs by 1D noninteracting Fermi leads has also been employed earlier in studies of fractional quantum Hall effect edge states coupled to Fermi liquids through a tunneling term in the Hamiltonian [23]. Some studies of disordered quantum wires in such a model (again with perfect junctions) have also been conducted, and the corrections to the conductance due to back-scattering impurities have been found [24,27]. The continuity of the results found in these studies (which have quantum wires of a finite length) with those found earlier for infinite quantum wires [14] has also been established [18,24].
The main difference between our model and the earlier studies of the quantum wire is that here we explicitly model the contacts as short TLL wires, bounded by junction barriers at either end, whose properties are unaffected by the gate voltage V_G. As we will discuss later, an experiment performed recently [9] has conclusively shown the existence of a region (of an appreciable length of 2 − 6µm) in between the quantum wire and the 2DEG reservoirs which leads to the back-scattering of 2DEG electrons entering the quantum wire. Furthermore, the idea that the properties of a one-dimensional system are determined by the Fermi energy of the 2DEG reservoirs has been used in Ref. [25] to study the quantum point contact. In addition, we assume that the changes in the inter-electron interactions take place abruptly in going from the contacts into the quantum wire, and that all inter-electron interactions get screened out very quickly in going from the contacts into the leads. It can, however, be shown that a smoother variation of the interaction parameter K upon going from the quantum wire into the contacts, and in going from the contacts into the noninteracting leads, does not affect any of the transport properties in the ω → 0 (dc) limit as long as we have no barriers of any kind in the system. We will now show that changes in the inter-electron interactions at the lead-contact and contact-quantum wire junctions give rise to barrier-like terms in the Hamiltonian of the system; the existence of these terms is mentioned briefly in the work of Safi and Schulz [24]. This is, however, only one reason why the junctions between the 1D channel and its leads can cause the back-scattering of electrons; another reason is clearly the change in geometry in going from the 2DEG reservoirs into the 1D channel. This cause for the drop in the conductance of the channel has earlier been studied within the purview of the Landauer-Buttiker formalism; see [26] and references therein.
Let us begin by studying the simpler case of a quantum wire (in which electrons are interacting with each other) connected directly to the noninteracting, semi-infinite leads without any intermediate contact regions. Then there is only a single change in the inter-electron interactions, from zero in the leads to a finite value in the quantum wire. The kinetic part of the Hamiltonian for this system of interacting spinless electrons, expressed in terms of the bosonic field φ(x) and its canonically conjugate momentum Π(x) = ∂_tφ/v_F, is H_0 = (v_F/2) ∫ dx [Π² + (∂_xφ)²], where v_F is the Fermi velocity of the electrons in the channel. The part of the Hamiltonian which characterizes the short-ranged density-density interactions between the electrons in a 1D channel of length l is H_int = (1/2) ∫ dx ∫ dy U(x, y) ρ(x) ρ(y), where U(x, y) characterizes the strength of the density-density interactions between the electrons, and ρ(x) is the electronic density at the point x. Using a truncated form of the Haldane representation for the electronic density in terms of the bosonic field φ(x) [13], the density is ρ(x) = (1/√π) ∂_xφ̃(x) [c_0 + c_1 cos(2√π φ̃(x))], where φ̃(x) = φ(x) + k_Fx/√π, c_0 = 1, c_1 = Λ/(2k_F), and k_F is the Fermi wave vector. Λ is the ultraviolet cutoff (Λ < O(E_F^{1D})); it is the energy limit up to which the linearization of the bands, and hence bosonization, is expected to be applicable. If we now characterize the short range inter-electron interactions by U(x, y) = U_0 δ(x − y), then we can substitute the expressions for the density and the inter-electron interaction into the interaction term in the Hamiltonian, which produces a sum of five terms. We can simplify the result by noting that several of the terms contain rapidly oscillating factors of cos(k_Fx) or sin(k_Fx), which make those terms vanish upon performing the integration (unless we are at very specific fillings of the electron density). Thus, we can ignore the fifth term straightaway. The first term can be added to a similar term in H_0, where it renormalizes the velocity and introduces an interaction parameter K. The second term is a chemical potential term, and that too can be accounted for by shifting the field φ accordingly. The third term is clearly a boundary term, and it gives us two barrier-like terms at x = 0 and x = l. Finally, the fourth term can also be rewritten as a sum in which the first and third pieces again vanish, because they contain rapidly oscillating factors within the integrals, and the second piece adds on to H_barrier exactly. All this finally gives us two δ-function barriers at the junctions of the quantum wire with its Fermi liquid leads. The extension of the derivation given above to our model with two intermediate contact regions, where the inter-electron interactions are U(x, y) = U_1 δ(x − y) (i.e., different from that in the quantum wire), is straightforward, and it yields four barrier terms: two at the junctions of the contacts with the leads, and two at the junctions of the contacts with the quantum wire. It is also very likely that the inner two barriers are much weaker than the outer two, since the change in the inter-electron interactions in going from the contacts to the quantum wire is likely to be much smaller than that in going from the contacts to the leads; the change in geometry at the contact-quantum wire junction is also likely to be much more adiabatic.
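To see schematically why a sharp change in the interaction leaves behind localized operators, note that the cross term between the smooth and oscillating parts of the density contains a total derivative; up to numerical prefactors (this is a sketch of the mechanism, not the full bookkeeping above),

(U_0 c_1 k_F/π^{3/2}) ∫_0^l dx ∂_xφ̃ cos(2√πφ̃) = (U_0 c_1 k_F/2π²) [sin(2√πφ̃(l)) − sin(2√πφ̃(0))],

so interactions which act only for 0 < x < l deposit operators localized at exactly x = 0 and x = l, with strength ∝ U_0Λ once c_1 = Λ/(2k_F) is used.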
Thus, we will from now on consider the junctions between the wire and the contacts, and between the contacts and the leads, as local barriers whose heights are determined by several factors, such as the nature of the inter-electron interaction and its screening, and the deviations from adiabaticity in the change in geometry in going from the reservoirs into the contacts or from the contacts into the quantum wire. To be general, we should take these four barriers to have different heights, but it is very likely that any asymmetry between the left two and right two contacts will be small. Thus, we can finally write the complete Hamiltonian for the quantum wire of spinless electrons, its contacts and its leads, with the conjugate momenta Π_{C,W} defined analogously in each region. Finally, it is worth commenting here that, since our model shows that a quantum wire with no disorder already has back-scattering junctions built into it, the notion of ideal contact resistances (which are seen in a study of this system using the Landauer-Buttiker formalism and arise from the ideal connection of the quantum wire to its reservoirs), universal in value (h/2e², to be precise), does not seem to hold true even for the so-called clean quantum wire with adiabatic junctions in the presence of inter-electron interactions within the quantum wire. We will show later that these junction barriers are likely to be weak when the lengths of the quantum wires are quite short or the temperatures are not very low, and that the junction barriers are likely to remain weak even after the small renormalization that might take place due to the electron-electron interactions in the quantum wire. Thus, the contact resistances between the wire and the reservoirs due to the junction barriers will be very nearly the universal value quoted above only for very short quantum wires (i.e., quantum point contacts) or when the temperatures are not very low. This is also observed in all the experiments to date [3,4,5,7].
The generalization of the model to spinful fermions is straightforward. For completeness, the Hamiltonian for spinful electrons in a quantum wire connected to external reservoirs through the contacts and junction barriers has the same structure as above; note that we have allowed for independent velocities and interaction strengths for the ↑ and ↓ electrons. This generality will be required when we study the model in the presence of a magnetic field. Finally, let us note that we will be taking into account only the outer two junction barriers (i.e., those at the junctions of the contacts and the leads) in all our subsequent calculations, as these are likely to be the more significant junction barriers in the system as long as transport through fully open quantum wires is considered.
Effective Actions
In this section, the aim is to obtain an effective action in terms of the fields at the junction barriers for both spinless and spinful electrons. We then analyze the symmetries of the effective action and obtain the resonance conditions.
The case of spinless electrons
In Sec. 2, it was shown that the screening out of the interactions in the 2DEG leads to a Hamiltonian with junction barriers, given in Eq. (9). The effective action for this model of spinless electrons can be written as a sum of pieces for the leads, the contacts, the wire and the barriers, each built from the Lagrangian density L(φ; K, v) = (1/(2Kv))(∂_τφ)² + (v/(2K))(∂_xφ)², where we use the imaginary time τ = it notation.
Here we have set V_LC = V_CL = V_1Λ and V_CW = V_WC = V_2Λ, assuming left-right symmetric barriers (V_1 and V_2 are dimensionless), and have used φ(0, τ) = φ_1(τ), φ(d, τ) = φ_2(τ), φ(l + d, τ) = φ_3(τ) and φ(L, τ) = φ_4(τ). The total length of the wire is denoted by L = l + 2d. We shall henceforth assume that V_2 ≪ V_1, so that V_2 can be dropped; as we have explained earlier, the inner two barriers are likely to be weaker than the outer two barriers. We also include the coupling of the electrons in the wire to an external gate voltage V_G. This coupling is necessary because it is the gate voltage which controls the density of electrons in the wire, which, in turn, controls the number of channels in the quantum wire. Experimentally, an external voltage drop across the wire drives the current through the wire, which is measured as a function of the gate voltage or the density of electrons in the wire. Since the Luttinger liquid action is quadratic, the effective action can be obtained in terms of the fields φ_i, i = 1...4, by integrating out all degrees of freedom except those at the positions of the four junction barriers, following Ref. [14]. Using the (imaginary time) Fourier transform of the fields, we explicitly obtain the S_0 part of the effective action; this is presented in Appendix A.
(The ω_n are the Matsubara frequencies, which are quantized in multiples of the temperature as ω_n = 2πn k_BT.) In the high frequency limit, or, equivalently, at high temperatures, where ω_n d/v_C, ω_n l/v_W ≫ 1, the effective action reduces to the form given in Eq. (17). In this limit, all the barriers are seen as the sum of individual barriers with no interference. In fact, if we integrate out the two inner fields φ_2 and φ_3, we are just left with an action for the outer fields φ_1 and φ_4 alone. The surprising point to note is that the effective interaction strength K_eff = 2K_LK_C/(K_L + K_C) depends only on the interaction strengths in the contacts and in the leads (where there are no interactions), and not on the interaction strength in the wire! Furthermore, since the gate voltage V_G couples only to the inner fields φ_2 and φ_3, and these two fields are completely decoupled from the outer fields φ_1 and φ_4 in L_{0,eff,high}(φ_1, φ_2, φ_3, φ_4) above, integrating out φ_2 and φ_3 does not lead to any gate voltage term in the final effective action in this temperature regime.
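The harmonic-mean form of K_eff can be anticipated as follows (a schematic argument; normalization conventions may differ from those of Appendix A): integrating out a semi-infinite TLL with parameter K on one side of a boundary field produces a dissipative kernel ∝ |ω_n|/(2K), so a barrier field φ_i sitting between a lead (K_L) and a contact (K_C) acquires, up to constants, the quadratic kernel

(|ω_n|/2)(1/K_L + 1/K_C) = |ω_n|/K_eff, with 1/K_eff = (1/2)(1/K_L + 1/K_C), i.e., K_eff = 2K_LK_C/(K_L + K_C).

This reduces to K_eff = 1 when both sides are noninteracting, so that the barrier is then marginal, as it must be for free fermions.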
Depending on whether d ≫ l or l ≫ d, we can have two possible scenarios of intermediate regimes, each with two crossovers. We can express all our lengths in terms of equivalent temperatures by defining v_C/d = k_BT_d and v_W/l = k_BT_l. The high temperature limit defined above is then just T ≫ T_d, T_l.
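For orientation, here is a quick numerical estimate of T_d and T_l; the velocities and lengths are illustrative values of a plausible order for GaAs devices, not parameters quoted in the text.

import scipy.constants as sc

v_C, v_W = 2.0e5, 1.5e5       # quasiparticle velocities (m/s), assumed
d, l = 2e-6, 10e-6            # contact and wire lengths (m), assumed

T_d = sc.hbar * v_C / (sc.k * d)   # k_B T_d = hbar v_C / d (hbar restored)
T_l = sc.hbar * v_W / (sc.k * l)   # k_B T_l = hbar v_W / l
print(f"T_d = {T_d:.2f} K, T_l = {T_l:.3f} K")   # about 0.76 K and 0.11 K

Both scales sit in the sub-Kelvin range, so all three temperature regimes discussed here are accessible in dilution-refrigerator experiments.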
• Let us first consider the quantum wire limit where l ≫ d.
In the intermediate frequency (or temperature) regime T_l ≪ T ≪ T_d, the action involves the charging energy U_C = v_C/(K_Cd), whose significance will become clear shortly. As the action is quadratic, we can integrate out φ̃_2 and φ̃_3 and be left with an action dependent only on φ̃_1 and φ̃_4, in which the combination A = U_C + |ω_n|/K_W and Ṽ_G = V_G/√π appear. We can approximate A by U_C, which is justified in the intermediate regime since T ≪ T_d and K_C ∼ K_W; this leaves us with the final expression for the effective action in this regime.
• Now, we consider the QPC limit where d ≫ l.
In the regime where T_d ≪ T ≪ T_l, the action involves the charging energy U_W = v_W/(K_Wl), which is again a frequency independent energy. As before, we can integrate out φ̃_2 and φ̃_3 and be left with an action dependent only on φ̃_1 and φ̃_4. Thus there is no difference between the intermediate and high energy scales in the QPC limit, because the gate voltage is applied over too short a length to affect the conductance even at intermediate temperatures.
Finally, in the low frequency limit where ω_n ≪ v_W/l, v_C/d (i.e., T ≪ T_d and T ≪ T_l), S_0 reduces to an action in which the fields at all four junctions are coupled. Since the action is still quadratic, it is possible to integrate out the two inner fields φ̃_2 and φ̃_3 and get the effective action wholly in terms of the φ̃_1 and φ̃_4 fields, remembering, however, to also include the gate voltage term which couples to the inner fields. After doing so, we are left with the full effective action. In this limit, the full action can be rewritten in terms of a "current" field χ(τ) and a "charge" field n(τ) (and their Fourier transforms χ̃ and ñ). The derivation of the effective action in this limit follows the method outlined in Ref. [14]; however, their derivation was for a uniform wire with a single interaction parameter K, whereas we have three interaction parameters here: K_W acts only within the quantum wire delimited by the two contact regions, K_C acts within the contact regions, and K_L = 1 outside the contact and wire regions. The current field is interpreted as the number of particles transferred across the two barriers, and the charge field as the number of particles between the barriers. In the low frequency limit, the two barriers are clearly being seen as one coherent object with charge and current degrees of freedom. Since, in the limit of weak barriers V_1 ≪ U_eff, the action is minimized when n = n_0, we can integrate out the quadratic fluctuations of n − n_0 to obtain an effective action only in terms of the single variable χ. The first term in this effective action is precisely the same term that is obtained for the impurity potential of a single barrier in terms of the variable χ.
In the low frequency limit, from Eq. (27), we see that the effective action contains extra terms due to the interference between the two barriers. It is easy to check that this effective action is invariant under χ → χ + √π, n → n; this is the same symmetry which exists for a single barrier [14], and it corresponds to the transfer of a single electron across the two barriers, and hence, in our model, from the left lead to the right lead. But when n_0 is precisely equal to a half-odd-integer, the action is also invariant under χ → χ + √π/2, n → 2n_0 − n. As explained in Ref. [14], this corresponds to the 'transfer of half an electron across the wire' accompanied by a change in the charge state of the wire. In the language of scattering, this corresponds to resonant tunneling through a virtual state. Within the TLL theory, this is the explanation of the Coulomb blockade phenomenon, which leads to steps or plateaus in the current versus gate voltage for quantum dots.
The case of spinful electrons
The spinless electron model is expected to be valid for real systems in the presence of strong magnetic fields which completely polarize all the electrons in a given channel. However, for a real system without a magnetic field, or in the presence of weak magnetic fields which do not polarize all the electrons, one has to study a model of electrons with spin. We shall study such a model here; its modification due to the presence of a magnetic field will be studied in Sec. 5. The basic action of the model is a straightforward extension of the model for spinless fermions given in Eqs. (12) and (13), with φ_↑ denoting the spin up boson and φ_↓ denoting the spin down boson. However, since the Coulomb interaction couples the spin up and spin down fermions (for instance, recall the Hubbard term U Σ_i n_{i↑}n_{i↓}), the Luttinger model is diagonal only in terms of the charge and spin fields φ_ρ = (φ_↑ + φ_↓)/√2 and φ_σ = (φ_↑ − φ_↓)/√2. In terms of these fields, S_0 retains the form given earlier, with L defined as before; K_{Lρ} and K_{Lσ} are the interaction parameters in the two external leads, and K_{C/W,ρ} and K_{C/W,σ} are the interaction parameters in the contacts and wire respectively. As for the spinless case, we include junction barrier terms at the junctions of the contacts and the leads, and we assume that the barriers at the junctions of the wire and the contacts are weak and can be ignored. The barrier action can be re-expressed in terms of the diagonal fields of the model, where, as before, we define φ_ρ(0) = φ_{1ρ} and φ_ρ(L) = φ_{4ρ}, and similarly for the φ_σ fields. The gate voltage couples only to the charge degree of freedom within the wire region, through φ_ρ(d) = φ_{2ρ} and φ_ρ(l + d) = φ_{3ρ}, as before. So, just as in the spinless case, we can integrate out all degrees of freedom except those at x = 0, d, l + d and L and obtain the effective action. The full details of the effective action are spelt out in Appendix B. By taking its high, intermediate and low frequency limits, we will be able to obtain conductance corrections just as we did for the spinless fermions.
In the high frequency limit, where ω ≫ v_{Ca}/d and v_{Wa}/l (a = ρ, σ), the two barriers are seen as decoupled barriers. The fields φ_{2a} and φ_{3a} are completely decoupled from the fields at x = 0 and L and can be integrated out. Just as for the spinless case, we see that the parameters of the wire do not enter K_{eff,ρ/σ} = K_{Lρ/σ}K_{Cρ/σ}/(K_{Lρ/σ} + K_{Cρ/σ}). Nor does the gate voltage affect the action.
Just as in the spinless case, we have two possibilities for the intermediate frequency regime, the QW limit or the QPC limit.
For the QW limit, we have v_W/l ≪ ω_n ≪ v_C/d; here U_{Cρ,σ} = v_{Cρ,σ}/(K_{Cρ,σ}d) are the charging energies for the charge and spin degrees of freedom in the contacts. As the action is quadratic, we can integrate out the φ̃_{2,ρ/σ} and φ̃_{3,ρ/σ} spin and charge fields to be left with an action dependent only on the φ̃_{1,ρ/σ} and φ̃_{4,ρ/σ} spin and charge fields, where we have approximated U_C + ω_n/K_{W,ρ/σ} by U_C; this is justified in the intermediate regime. In the QPC limit, the intermediate regime involves U_{Wρ} = v_{Wρ}/(K_{Wρ}l), the charging energy for the charge degrees of freedom in the wire. As before, we may integrate out the inner degrees of freedom to find that the action coincides with its high frequency form, as expected.
Finally, in the low frequency limit ω_n ≪ v_C/d and v_W/l, as in the spinless case, the terms multiplying 1/K_{Cρ/σ} and 1/K_{Wρ/σ} in Eq. (144) in Appendix B become constant 'mass' terms, and we get the full effective action. Just as we did in the spinless case, we now integrate out the fields at x = d and l + d, in terms of which the above action is quadratic. The effective mass terms so obtained correspond to the 'charge' and 'spin charge' fluctuations respectively. We denote the 'charge on the quantum wire' fields as n_ρ = √(2/π)(φ_{1ρ} − φ_{4ρ}) and n_σ = √(2/π)(φ_{1σ} − φ_{4σ}) respectively, and define the 'current' fields χ_ρ and χ_σ, along with their appropriate Fourier transforms χ̃_{ρ/σ} and ñ_{ρ/σ}, just as we did in the spinless case. In the resulting form of the action, we have used the fact that, since it is only the ρ field which couples to the gate voltage and not the σ fields, we only get n_{0ρ} = (2k_Cd + k_Wl)/π − V_G/(π^{3/2}U_{Wρ}) and n_{0σ} = 0.
We now study the symmetries of the effective action to find out the possible resonances. As in the spinless fermion case, this effective action is invariant under the transformation χ_ρ → χ_ρ + √π and χ_σ → χ_σ + √π, which corresponds to the transfer of either an up electron or a down electron through the two barriers. But besides this symmetry, there are also some special gate voltages at which one can get resonance symmetries. This can happen when we adjust the gate voltage so as to make n_{0ρ} an odd integer, in which case the action acquires an additional symmetry. As explained in Ref. [14], this resonance, which occurs when n_{0ρ} is tuned to be an odd integer, is called a Kondo resonance, because it happens when two spin states of the island with n_σ = ±1 become degenerate.
The kind of resonance which was seen for spinless fermions when two charge states on the island become degenerate is harder to see for spinful fermions. Two charge states become degenerate when n_{0ρ} is tuned to be a half-odd-integer. But, in that case, the effective action in Eq. (42) does not have any extra 'resonance symmetry' unless n_{0σ} (which we have set to be zero) is also tuned to be a half-odd-integer. But non-zero n_{0σ} is only possible when there is an effective magnetic field or SU(2) breaking field acting just over the quantum wire. This is because the Zeeman term is a uniform Hamiltonian density, and it does not lead to any boundary terms as long as the magnetic field is felt through the full sample. However, although in current experiments it is not possible to tune the SU(2) breaking to occur only between the two barriers, it could be possible in future experiments. Hence it is of interest to look for possible resonances in this case as well. We see that if one could arrange to tune both the gate voltage and the magnetic field (adjusted to act just over the quantum wire) so that n_{0ρ} and n_{0σ} are both half-odd-integers, the effective action in Eq. (42) is symmetric under n_ρ → 2n_{0ρ} − n_ρ, n_σ → 2n_{0σ} − n_σ, χ_ρ → χ_ρ + √π/2, and χ_σ → χ_σ + √π/2. This resonance is exactly analogous to the resonance that existed for spinless fermions and corresponds to hopping an electron from either of the leads to the wire. But since this requires the tuning of two parameters, it is a 'higher' order resonance and will be more difficult to achieve experimentally.
In fact, if we allow for non-zero n_{0σ}, then the effective action also has an analogous symmetry when n_{0σ} is an odd integer and n_{0ρ} = 0. But this is hard to achieve, because one needs to tune the external gate voltage so as to cancel the field due to the presence of all the other electrons within the two barriers as well. Hence, this resonance will not be easy to see in experiments. Moreover, it will show up in the spin conductance and not the charge conductance.
In conclusion, we have studied in this section the effective actions of our model for both spinless and spinful fermions, and used them to identify the resonances, at which the conduction is perfect; away from resonance, these actions will be used to obtain the conductance corrections as functions of finite temperature and finite wire length. The same technique will again be used in Sec. 5, where it will be used to study the symmetries and obtain the conductance corrections of the quantum wire in the presence of a magnetic field.
Computation of the conductances
In this section, we compute the conductances of our TLL quantum wire with contacts, two semi-infinite Fermi liquid leads and two weak barriers at the junctions of the contacts and the leads, for both spinless and spinful electrons, perturbatively in the barrier strength. We explicitly derive an expression for the conductance to lowest order in the barrier strength (quadratic) in terms of the Green's functions of the model. The RG flow of the barrier strengths is incorporated through a function χ(x, y). Thus, the behavior of the Green's functions in the different frequency regimes determines the conductance corrections. The conductance corrections for a simpler version of the model of the quantum wire (i.e., one in which the quantum wire is directly connected to the Fermi leads through two weak junction barriers) have already been studied by Safi and Schulz [24], who used a real time formulation and computed time-dependent Green's functions. The perturbative corrections in the Kane-Fisher imaginary time formalism were also extended to the case of finite length wires by Maslov [27] and Furusaki and Nagaosa [18], who computed frequency dependent Green's functions. For our model, with five distinct spatial regions and their boundaries, the real time picture of TLL quasiparticle waves reflecting back and forth between the boundaries (as developed by Safi and Schulz [16]) is more cumbersome; hence, we use the imaginary time formulation and compute frequency dependent Green's functions.
The formulation of the conductance expressions
The current through a clean quantum wire carrying spinless electrons can be found using the Kubo formula, in which the non-local conductivity σ(x, y, ω) is related to the two-point Green's function G(x, y, ω) at finite frequency ω. For our model of the quantum wire, G_ω(x, y) has been computed in Appendix D. Note that the real frequency ω is related to the Matsubara frequency used in the earlier sections by the analytic continuation ω = iω_n + ε. From Appendix D, we find that G_ω(x, y) = K_L/(2|ω|) + non-singular terms in the limit ω → 0 for our model. Hence the dc conductance is g_0 = K_Le²/h = e²/h, since K_L = 1. This shows perfect dc conductance through the system, as in the earlier models without contacts [16,17]. This result remains unchanged for the case of electrons with spin, except for a multiplication of the conductance by a factor of two.
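As a consistency check, a schematic version of this step (with conventions that may differ from those of Appendix D by constant factors) reads

σ(x, y, ω) = (e²ω/π) G_ω(x, y), so g_0 = lim_{ω→0} (e²ω/π) · K_L/(2|ω|) = K_Le²/(2π) = K_Le²/h (restoring ħ),

independently of x and y, which is the statement of perfect dc conductance.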
For a quantum wire in the presence of stationary impurities, an explicit expression for the conductance can be derived to lowest (quadratic) order in the impurity strength from the partition function, using perturbation theory [14,24,27]. The renormalization group (RG) equations for the barriers (discussed in detail in subsection 4.4) imply that the barrier strengths grow under renormalization. However, it is only for very low temperatures or very long wire lengths that there will be considerable renormalization. In real experimental setups, the length of the wire is in the range of micrometers and the temperatures in the range of a Kelvin; hence one does not expect much renormalization, and it is expected that the barrier strengths remain small enough for perturbation theory to be applicable. We follow the methods of Safi and Schulz [24] and Maslov [27], who derived explicitly a conductance expression for a non-translationally invariant system; the result is the perfect conductance reduced by a perturbative correction R computed to second order in the impurity strength. R is a sum of contributions R^(m), weighted by the coefficients c_m of the terms in the Haldane representation of the fermionic density, where R^(m) is the correction due to the back-scattering of m electrons. In the expression for R^(m), V(x) is the bare potential of the impurities, ξ is a phase factor which includes the k_Fx factor coming from the back-scattering process and other factors which arise due to the removal of the forward scattering terms from the Hamiltonian by shifts in the bosonic field φ, and χ_m(x, y) is a factor which incorporates the renormalization group (RG) flows of the barrier strengths. In general, χ_m(x, y) is given by a two-point correlation function built from G_0(x, y, it), the two-point Green's function for a clean quantum wire. Here, the Green's functions are in terms of the imaginary time τ = it, and τ_0 ∼ 1/Λ is the inverse of the high energy cutoff. In a later subsection, we show how the one-point function χ_m(x, x) can be obtained directly from the RG equation for the barriers.
For our system, we shall instead compute the two-point Green's function in terms of ω, in terms of which the correction to the conductance R^(1) (specializing to the case m = 1) can be written down explicitly. Note that the Green's functions in the prefactors of R^(1) depend on the external driving frequency, but the Green's functions in the exponential depend only on the Matsubara frequencies ω′_n, and not on the external driving frequency ω or its analytic continuation. The sum over the Matsubara frequencies is cut off at the low energy end by ω′_{n=1} ∼ k_BT and at the upper end by the high energy cutoff Λ. In evaluating R^(1), we will approximate the sum over ω′_n by the integral ∫ dω′/(2π), which is reasonable since we always assume that the temperature T is much smaller than the cutoff Λ.
Results for the Quantum Wire
We will concentrate here on calculating the conductance of a quantum wire system in which the length of the quantum wire l is much greater than the length of the contact regions d. Also, we will finally be interested in studying the effects of the junction barriers placed at the two lead-contact junctions (as explained earlier). Hence, for our model the impurity potential is V(x) = V_1Λ [δ(x) + δ(x − L)], and for this potential we can obtain the expression for the conductance corrections.
• Spinless electrons
Now, the expression for the one-point Green's function (in frequency space) for a barrier placed inside the contact region on the left of the QW, at a distance a from the left lead-contact junction (which is taken to be the origin, giving the hierarchy of length scales a ≪ d ≪ l), can be easily obtained from Appendix D. The two-point Green's function G(x, y) for y in the left contact region and x anywhere is also given in Appendix D. By setting y = 0 (i.e., at the first barrier) and x = L = l + 2d (i.e., at the second barrier), we obtain the conductances in the different frequency regimes. We see that G(x, y, ω′) decays exponentially to zero except in the lowest frequency regime, where G(x, y, ω′) = G(x, x, ω′) = G(y, y, ω′).
To obtain the conductance corrections, we use the above Green's functions to compute F(x′, y′, ω′) in each of these frequency regimes. For the high frequency regime, the exponent is governed by K_eff = 2K_LK_C/(K_L + K_C); we have scaled t by T, i.e., used t = z/T, to write the integral in terms of dimensionless variables, so that the temperature power-laws can be made explicit. When x′ ≠ y′, G(x′, y′, ω′) → 0, so that one can check that lim_{ω→0} Im dF/dω also tends to zero. This means that the cross-term in Eq. (56) does not contribute, and each of the terms involving just one barrier can be evaluated separately. Hence, we obtain the conductance correction for high temperatures T ≫ T_d, where c_1 is a dimensionful constant dependent on factors like the contact quasiparticle velocity v_C, but independent of the gate voltage V_G.
The computation for the intermediate temperature regime T_l ≪ T ≪ T_d is very similar to that performed for the high frequency case, except that the integral over the Matsubara frequencies is now split into two regions. The rest of the calculation goes through as above, and we find the corresponding conductance expression. Finally, for very low temperatures T ≪ T_l, the sum over Matsubara frequencies splits into three regions. Furthermore, in this regime, the cross-term does not vanish; its contribution is identical to that of the terms due to a single barrier. Hence, we obtain the corrections to the conductance with a dimensionful constant c_3 similar in nature to c_2; thus the two barriers are seen coherently.
Note that the power-laws come purely from the one-point Green's functions, whereas the phase coherence between the barriers is determined by the behavior of the two-point correlation function. At high or intermediate frequencies, lim_{ω→0} Im dF(x′, y′, ω)/dω tends to zero for x′ ≠ y′, leading to the lack of phase coherence between the two barriers. At very high temperatures, the interaction parameters of the contact region K_C and the lead region K_L control the renormalization of a barrier in the contact region. As the temperature is lowered, the phase coherence length of the electronic excitations increases, and the renormalization exponent makes a crossover to a combination of the interaction parameters of the contact and QW, and finally to that of the lead alone in the lowest temperature regime. The lowest temperature regime is also the one in which resonant transport through both the lead-contact junction barriers can take place, as phase coherence over the entire system is achieved at these temperatures.
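The three-regime structure, with continuity at the crossovers, can be captured in a small numerical sketch. The exponents below are free parameters standing in for the combinations 2(K − 1) of K_L, K_C and K_W derived above (the values are purely illustrative), and the prefactors are fixed by demanding continuity at T = T_d and T = T_l, exactly as in the matching argument used in subsection 4.4.

def delta_g(T, Lam=10.0, T_d=1.0, T_l=0.1, c=0.05,
            a_high=-0.2, a_mid=-0.3, a_low=0.0):
    """Conductance correction delta_g(T) with continuity at T_d and T_l.

    a_high, a_mid, a_low are illustrative exponents 2*(K-1) for the
    high, intermediate and low temperature regimes; a_low = 0 because
    K_L = 1 for 2DEG leads, giving the low-T plateau described above.
    """
    if T >= T_d:                      # T >> T_d: incoherent, V_G-independent
        return c * (T / Lam) ** a_high
    if T >= T_l:                      # T_l << T << T_d
        return c * (T_d / Lam) ** a_high * (T / T_d) ** a_mid
    # T << T_l: both barriers seen coherently; no T dependence for K_L = 1
    return (c * (T_d / Lam) ** a_high * (T_l / T_d) ** a_mid
            * (T / T_l) ** a_low)

for T in [5.0, 1.0, 0.5, 0.1, 0.01]:
    print(f"T = {T:5.2f}: g/g0 = {1 - delta_g(T):.4f}")

The printed corrections grow as T is lowered and then flatten below T_l, reproducing the temperature-independent low-T behavior found above.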
• Electrons with spin
The above expressions were given for the QW system with spinless electrons. Let us now see what the conductance expressions are for electrons with spin. These expressions can be derived in the same way as for spinless electrons, by using the appropriate Green's functions for the spin and charge fields. For the high temperature regime T_d ≪ T, the exponent is now governed by K_eff = K_LK_{Cρ}/(K_L + K_{Cρ}) + K_LK_{Cσ}/(K_L + K_{Cσ}), and c_4 is a dimensionful constant much like c_1 for the spinless case (i.e., dependent on the contact charge velocity v_{C,ρ} but independent of the gate voltage V_G). For the intermediate temperature regime T_l ≪ T ≪ T_d, c_5 is a constant similar to c_2 for the spinless case (i.e., dependent on V_G). Finally, for the low temperature regime T ≪ T_l, we obtain a correction whose constant c_6 is similar in nature to c_3 for the spinless case.
Results for the Quantum Point Contact
The Quantum Point Contact (QPC) is simply a quantum wire system in which the length of the quantum wire region l ∼ 0.2 − 0.5µm (i.e., the region undergoing the constriction due to the application of the gate voltage) is much reduced in comparison to typical lengths for a quantum wire l ∼ 2 − 20µm. Thus, in our model of the quantum wire system, we can reach the QPC by studying the limit when the contact region length d is much greater than the wire length l. Let us then study the effects of barriers/impurities placed in the contact and wire region of the QPC.
• Spinless electrons
In order to study the effect of a weak barrier placed in the contact region such that its distance a from the left lead-contact junction falls in the hierarchy a ≪ l ≪ d, we again start by computing the one-point Green's function for such an impurity. As before, for the two-point function, we find that the high and low frequency limits are the same as those given in Eq. (58) for the QW, but for T_d ≪ T ≪ T_l, the answer turns out to be the same as in the high frequency limit. This is similar to what one sees for the one-point Green's functions above as well. So, without giving any further derivations, we directly quote the behavior of the conductance corrections. In the high and intermediate frequency regimes, the conductance takes the same form, with constants c_i, where i = 4, 5 allows for the constant to be different in the high and intermediate frequency regimes; the low frequency regime gives a distinct expression. It is clear from these expressions that the contributions of barriers in the contacts of a QPC are always going to be independent of the gate voltage V_G, as the QPC interaction parameter K_W does not enter anywhere. Thus, such an impurity would always lead to a flat and channel independent renormalized conductance. It should be noted that we have found from a similar calculation that even for an impurity placed deep inside the contact (i.e., with the hierarchy l ≪ a ≪ d), the above conclusions still remain true; this is because the only change that takes place is that K = K_C (rather than the combination of K_L and K_C found earlier). Finally, let us study the effect of an impurity placed inside the QPC itself. We find the one-point Green's function for such a case (with the hierarchy l ≪ d < a); here there are only three frequency regimes, as the regime v_C/d ≪ |ω| ≪ v_W/l cannot be taken sensibly within the given hierarchy of length scales. This shows again that the effect of an impurity placed within the QPC will always be dependent on the gate voltage V_G, and can never lead to flat and channel independent renormalizations of the conductance.
• Electrons with spin
The generalization to spinful electrons can be obtained just as was done for quantum wires, with the appropriate substitutions.
Evaluation of the conductances from the effective actions using the RG equations
Here, we note that the above results for the conductances could have been anticipated by computing the RG equation for the impurity potentials using the effective actions calculated in Sec. 3.
The conductance is governed by the renormalized barrier potentials at the two junctions. Since the interaction is repulsive, the barrier potentials are expected to grow as a function of the frequency cutoff. This is what leads to the result that any impurity potential, however small, eventually cuts the wire; in the zero temperature limit, there is no transmission at all [14]. However, at a finite temperature T, finite wire length l or finite contact length d, the growth is cut off by either T, v_W/L or v_C/d. In fact, since the energy scales in the problem are the temperature k_BT, the high frequency cutoff Λ, and those related to the contact length, k_BT_d = v_C/d, and the wire length, k_BT_l = v_W/l, we can see that there will exist two energy scale crossovers in the system: one from T/Λ to T/T_d and the other from T/T_d to T/T_l for d ≪ l (the QW limit), or from T/Λ to T/T_l and then from T/T_l to T/T_d for l ≪ d (the QPC limit).
In fact, an explicit RG calculation of either of the individual barrier strengths in the high, intermediate and low frequency regimes simply involves computing the dimension of cos(2√πφ_1) or cos(2√πφ_4) (which turn out to be the same) using the respective effective actions. For example, for the high frequency effective action given in Eq. (17), the RG equation for a single barrier is dV/dλ = (1 − K_eff)V, where λ = ln(Λ/Λ(λ)). Using this, we can get the renormalized barrier strength V_ren in the high frequency regime T ≫ T_d, where we have used T to cut off the RG flow (which begins from Λ). From this, we infer that, to quadratic order, the T dependence of the conductance corrections is a power law, where c̃_1 is a dimensionful constant like c_1 defined earlier, containing factors like v_C but, most importantly, independent of the gate voltage V_G. Comparing with Eq. (61), we see that, if we include the subtraction due to two barriers, the expressions are identical.
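A minimal sketch of the integration step, assuming the flow dV/dλ = (1 − K_eff)V quoted above and running it from the cutoff Λ down to the temperature T (so that λ = ln(Λ/T)):

V_ren = V_1Λ (T/Λ)^{K_eff − 1}, g ≃ g_0 [1 − c̃_1 (T/Λ)^{2(K_eff − 1)}],

since the conductance correction is quadratic in the renormalized barrier strength; for repulsive interactions K_eff < 1, so the correction grows as T is lowered.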
In the intermediate regime T_l ≪ T ≪ T_d, the RG equation for the same barrier changes, as follows from the effective action in Eq. (21). At the same time, the appearance of the energy scale v_C/d (through U_C) in the effective action in this temperature regime, and the approximation ω_n ≪ U_C, mean that v_C/d has replaced Λ as the high energy cutoff in the expression for the T dependence of the conductance correction. The influence of those degrees of freedom whose energies lie between v_C/d and Λ can be taken into account by noting that they will contribute a factor of (T_d/Λ)^{2(K_eff−1)}; this is because these degrees of freedom have been integrated away during the RG procedure, and there must be continuity between the conductance expressions for T ≫ T_d and T_l ≪ T ≪ T_d at T = T_d. Thus, we get the conductance expression in this regime, where c̃_2 is a constant similar in nature to c̃_1, but it can depend on v_W and is hence dependent on the gate voltage V_G. Thus the conductance is no longer independent of V_G. This again is the same as the expression obtained by the explicit computation of the conductance.
Finally, in the low temperature limit, we recognize the fact that there is phase coherence over the distance between the two barriers; this follows from the low frequency effective action, which has cross terms between the fields at the two barriers. This is what leads to resonant transmission. To compute the conductance corrections away from resonance in this limit, we note the following. Since the resonance occurs precisely when the 2k_F component of the barrier term goes to zero, the relevant term away from this resonance is precisely the back-scattering potential V cos(2√πχ). Computing the dimension of this operator gives us the RG equation for our barriers in this temperature regime, which makes the T dependence of the conductance correction clear. Again, the appearance of the energy scale v_W/l (through U_W) and the approximation ω_n ≪ U_W indicate that v_W/l has now replaced v_C/d as the high energy cutoff in the expression for the T dependence of the conductance correction. As before, the influence of those degrees of freedom whose energies lie between v_W/l and v_C/d shows up as an additional crossover factor; this is because these degrees of freedom have also been integrated away during the RG procedure, and there must be continuity in the conductance expressions at T = T_l, whether we approach from the T_d ≫ T ≫ T_l regime or from the T ≪ T_l regime. Thus, we obtain the conductance in this regime, where c̃_3 is a constant similar in nature to c̃_2. We can now see that, as K_L = 1 (for 2DEG Fermi reservoirs), the conductance has no temperature dependence in the low temperature regime.
A similar analysis can be done for the QPC limit, which reproduces the conductance expressions for the QPC limit that were obtained explicitly in the earlier subsection. We note, however, that the conductance corrections are small in this case as the RG flow for the barriers is restricted by the small length scales in the system. The conductance corrections for electrons with spin can also be obtained using the effective actions and the RG equations for the barriers, by proceeding in the same way as was done for spinless electrons. Since the conductance expressions have already been given in the previous section, we do not repeat them here.
Thus, we emphasize that just by using the effective action and the RG equations for the barriers, we can actually obtain the conductance corrections. However, all that we actually do here is compute the RG flows of the individual barriers, and then infer the temperature and length power-laws in the conductance corrections. Hence, even in principle, there is no way of obtaining the constants c̃_1, . . . , c̃_6 from this method, whereas the explicit computation of the conductance in the earlier subsection gives the explicit forms of the constants as well. In fact, the correlation functions computed there can be directly related to the coefficients which appear in the RG equations. In the various frequency regimes, the RG equation for a single barrier for spinless fermions can be written in terms of the clean-wire Green's function; Fourier transforming and integrating it, with τ_0 = 1/Λ the inverse of the high energy cutoff, gives the renormalized strength of the impurity V_ren in terms of U(x, x, τ_0e^λ) = −2[G_0(x, x, τ_0e^λ) − G_0(x, x, τ_0)]. Thus, in this case, the one-point function χ_1(x, x) follows directly from the RG flow. However, the non-local χ_1(x, y) is not so easy to obtain just from the RG equations.
Now, let us study the conclusions that can be drawn from the conductance expressions. To begin with, the expressions in the various frequency regimes reveal that, as either the temperature T is raised or the total length L of the contacts and QW is decreased, the conductance corrections become smaller and the conductance approaches integer multiples of 2g_0, as expected [3,4]. Furthermore, we can see from these expressions that in the high temperature limit, i.e., when T ≫ T_d, T_l, the conductance corrections are independent of the QW parameters. Hence, they are independent of the gate voltage V_G and of all factors dependent on the channel index. Thus they yield renormalizations of the ideal values which are themselves plateau-like and uniform for all channels. Such corrections to the conductance explain some of the puzzling features observed in the experiments of Ref. [4]. A more detailed comparison of these results against experimental findings will be made in a later section; it is important to note here that our results are in qualitative agreement with most experimental observations on electronic transport through a variety of quantum wire systems.
Let us now compare these observations with what we find as the perturbative renormalizations to the perfect conductance due to a barrier/impurity placed anywhere within the quantum wire itself, such that its distance from the left lead-contact junction (taken as the origin) is again denoted by a. An exactly similar computation of the one-point Green's function in this case (note that we are now working with the hierarchy d ≪ a ≪ l) shows the following. For a barrier at the left contact-QW junction, a → d, the second frequency regime v_W/a ≪ |ω| ≪ v_C/d does not exist. We also find that K = K′_eff ≡ 2K_CK_W/(K_C + K_W), rather than K_W, for |ω| ≫ v_C/d. Thus, for a quantum wire system which has the two contact regions and only barriers at the two contact-QW junctions, the conductance in the highest temperature regime T ≫ T_d involves a dimensionful constant c̄_1 which will depend on factors like v_W, and hence also on the gate voltage V_G. For the intermediate temperature regime T_l ≪ T ≪ T_d, the conductance involves K̃_eff = 2K_LK_W/(K_L + K_W) as before, and c̄_2 is also a dimensionful constant dependent on V_G. Finally, for the lowest temperature regime T ≪ T_l, the constant c̄_3, too, is dimensionful and dependent on V_G. Thus, we can see that any barrier or impurity placed anywhere inside the QW will always give a perturbative renormalization to the conductance which is dependent on the gate voltage, and hence can never be flat or even channel independent. The conductance expressions for the case of spinful electrons can be found for this case in exactly the same way as before.
Effects of a Magnetic Field
In this section, we will study the effects of an in-plane magnetic field on the conductivity of a quantum wire. In general, a magnetic field couples to both the spin (Zeeman coupling) and the orbital motion of an electron. However, orbital coupling is absent for an in-plane magnetic field, because the electrons are constrained to move only in the plane. Thus we will only consider the effect of the Zeeman term. This term couples differently to spin up and spin down electrons; here up and down are defined with respect to the direction of the magnetic field, which may or may not be parallel to the quantum wire. Thus the SU(2) symmetry of rotations is explicitly broken. We will now see that the spin and charge degrees of freedom do not decouple any longer. Our findings reveal that (a) for low magnetic fields (of about 0 − 3T for GaAs systems), the Zeeman splitting of the Fermi energies of the two spin species of electrons in the QW is very small, and its effects can be ignored; (c) at still higher magnetic fields (of about 8 − 16T), when each of the earlier sub-bands is completely Zeeman split into two spin-split sub-bands, conductance steps will be seen in multiples of g_0 = e²/h; the odd-even effect will be most pronounced here, with odd numbered spin-split sub-bands (containing only aligned moments) having a much less renormalized conductance and even numbered spin-split sub-bands (containing only anti-aligned moments) having a much more renormalized conductance, and we can treat each spin-split sub-band as an effectively spinless TLL system.
(d) at magnetic fields much higher than this, all the spins in the system will be spin polarized.
The infinite TLL Quantum Wire and the Odd-Even Effect
Let us first consider an infinitely long wire containing noninteracting electrons. A magnetic field h contributes a Zeeman term to the Hamiltonian, where g is the gyromagnetic ratio (which is 2 for free electrons but may be substantially smaller in quantum wire systems), ρ_{0↑} and ρ_{0↓} respectively denote the mean densities of the spin up and spin down electrons, and φ_↑, φ_↓ denote the bosonic fields for the spin up and down electrons. The density terms, which enter through ρ_{0σ} = ρ_{0↑} − ρ_{0↓}, have a bigger effect than the derivative terms ∂_xφ_σ; by altering the chemical potentials for spin up and down electrons, they lead to different Fermi momenta and therefore to different Fermi velocities v_{F↑} and v_{F↓} for the two kinds of electrons.
We now add a density-density interaction of the form Uρ²/2, where ρ = ρ_↑ + ρ_↓. (For instance, this may describe a short-range Coulomb repulsion as in the Hubbard model; in that case U is positive.) The bosonized Lagrangian density then takes a quadratic form, where we have dropped some additive constants and have only kept terms which are quadratic in the fields. We can rediagonalize this Lagrangian by defining two new fields φ_i, velocities v_i and interaction parameters K_i (where i = +, −), and a mixing angle γ. The Lagrangian density in Eq. (89) then takes a decoupled form. Thus the charge and spin degrees of freedom get mixed, since the fields φ_+ and φ_− which diagonalize the Lagrangian will generally be different from the charge and spin fields φ_ρ and φ_σ. Note that if the magnetic field h is zero, then v_{F↑} = v_{F↓} and γ = π/4; φ_+ and φ_− are then identical (up to a sign) to the charge and spin fields φ_ρ and φ_σ.
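One convenient parametrization of such a rotation (schematic; it suppresses the velocity rescalings that accompany the transformation in the full calculation) is

φ_↑ = cos γ φ_+ + sin γ φ_−, φ_↓ = sin γ φ_+ − cos γ φ_−,

which at γ = π/4 gives φ_± = (φ_↑ ± φ_↓)/√2, i.e., the charge and spin fields up to signs.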
We will now present the RG equations for a weak δ-function impurity placed at the origin. For this, we need to compute the scaling dimension of the impurity term in the Lagrangian. We first write the impurity term in terms of the fermionic fields ψ_↑(0) and ψ_↓(0), and then in terms of the bosonic fields φ_↑(0) and φ_↓(0) at the origin. We then invert the relations between φ_± and φ_{↑,↓} given above in order to rewrite the impurity term in terms of the diagonal fields φ_±. The scaling dimensions of the impurity terms then follow, and hence the RG equations for V_↑ and V_↓, where V_↑ and V_↓ both start from the value V_1 at the microscopic length scale.
We can now study what happens in the presence of strong and weak magnetic fields. But let us first remind ourselves of the relations which result from the Zeeman splitting, with v_F = √(2E_F^{1D}/m) the Fermi velocity in the absence of a magnetic field. In the limit of a strong magnetic field, where the Zeeman splitting of the two spin species is much larger than the short ranged interaction energy U (i.e., U ≪ |v_{F↑} − v_{F↓}| and γ ≪ π/2), we can approximate the relations for the two velocities v_± to linear order in U. Putting these relations for v_± into the expressions given above relating v_± and v_{F↑,↓}, and using this (together with the fact that γ ≪ π/2) in the two RG equations obtained above, we find the following. Since v_{F↑} is larger than v_{F↓}, the renormalized impurity strength felt by those electrons which have their magnetic moments aligned with the external B field is less than the renormalized impurity strength felt by the electrons which have their magnetic moments anti-aligned with the B field. Furthermore, if the B field is increased further, we will reach a situation where alternate sub-bands in the QW are populated by either only the aligned or only the anti-aligned electrons. In this regime, the difference in back-scattering felt by the two species of electrons will be very clear from the alternating weak and strong corrections to the conductance. To be more specific, all odd numbered sub-bands will show much smaller corrections to the perfect conductance (as they will be populated by electrons aligned with the magnetic field), while all even numbered sub-bands will show much greater corrections to the perfect conductance (as they will be populated by the anti-aligned electrons). This odd-even effect had, in fact, been predicted by a two-band TLL study of Kimura et al [28], but its explanation on the grounds of impurity renormalization is now made clear. Furthermore, this effect has recently been observed by Liang et al [7], and we will discuss their observations in a later section. It should be noted here that, though the odd-even effect is easy to show upon taking the limit U ≪ |v_{F↑} − v_{F↓}|, the existence of this phenomenon does not need this limit to be taken. Furthermore, we also find, upon taking the limit h ≪ E_F^{1D} (i.e., a weak magnetic field), that the odd-even effect vanishes in the weak magnetic field limit. Finally, we comment on the fact that the odd-even effect discussed above gives rise to the possibility of creating a spin-valve (i.e., a device producing a spin polarized current) in these QW systems. Even though the odd-even effect needs a high magnetic field to be observed in current day experiments [7], it may be possible to employ artificial barriers like negatively-biased finger gates to heighten the difference in renormalization of the up and down spin electrons at lower magnetic fields. At this point, however, quantitative predictions are difficult to make.
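The mechanism behind the odd-even effect can be illustrated numerically. In the sketch below, the two spin species are assigned slightly different effective exponents K_↑ and K_↓ (illustrative values; in the text these are expressed through v_{F↑,↓}, U and γ), and the same bare barrier V_1 is flowed from the cutoff Λ down to a temperature T with dV/dλ = (1 − K)V.

import numpy as np

# Illustrative effective exponents (assumed): the aligned (up) species
# sees the weaker renormalization, i.e., K_up is closer to 1 than K_down.
K_up, K_down = 0.95, 0.80
V1 = 0.01                      # common bare barrier strength
Lam, T = 10.0, 0.05            # cutoff and temperature (arbitrary units)

lam = np.log(Lam / T)          # RG "time" accumulated between Lam and T
V_up = V1 * np.exp((1 - K_up) * lam)      # dV/dlam = (1 - K) V, integrated
V_down = V1 * np.exp((1 - K_down) * lam)

print(f"V_up,ren   = {V_up:.4f}")     # weakly enhanced barrier
print(f"V_down,ren = {V_down:.4f}")   # strongly enhanced barrier
print(f"(delta g_down)/(delta g_up) = {(V_down / V_up) ** 2:.1f}")

Since the conductance corrections are quadratic in the renormalized barrier strengths, the anti-aligned sub-bands acquire corrections several times larger, which is the alternating weak/strong pattern described above.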
A study of our model for the QW with a magnetic field
Having discussed how to obtain a diagonal Lagrangian when both a magnetic field and interactions are present, as well as having shown the interesting odd-even effect that takes place because of an impurity in an infinite TLL in the presence of an external B field, we will now study what happens when the model in Sec. 2 is placed in a magnetic field. In the regions x < 0 and x > l + 2d, we have a system of noninteracting electrons parametrized by the velocities v_{F↑} and v_{F↓}. In the regions of the contacts, 0 < x < d and l + d < x < l + 2d, we have an interacting system parametrized by two velocities v_{C+}, v_{C−} and a mixing angle γ_C. In the quantum wire, d < x < l + d, the system is parametrized by the velocities v_{W+}, v_{W−} and a mixing angle γ_W. The last six parameters are functions of v_{F↑}, v_{F↓} and the strengths of the interactions in the contacts and quantum wire. The action for this model is the obvious generalization of the spinful action given earlier; note that we have ignored the junction barriers and the gate voltage for the moment. It is worth mentioning here that we have verified, by performing a calculation of the kind outlined in Ref. [17], that our model for the QW system, when placed in an external magnetic field and in the absence of any barriers/impurities, still gives perfect conductance in the dc limit for each sub-band.
We note at the outset that we will present the calculation for the case when the mixing angle γ is the same in the contacts and in the QW, i.e., when the short-ranged electron-electron interaction U is equal in all three TLLs. Though this is not necessarily true in a real system, it considerably simplifies the computations while still providing an adequate discussion of all the important results for effective actions, conductance expressions, resonances, etc. Later, we will briefly discuss the case in which the mixing angle differs between the contact and QW regions. The explicit derivation of the effective action is given in Appendix C. In the high frequency effective Lagrangian density we can clearly see the separation between the outer two and inner two fields. This means that we can integrate out the inner fields φ 2↑,↓ and φ 3↑,↓ without any further work and be left with a high frequency effective action depending only on φ 1↑,↓ and φ 4↑,↓ (and without any influence of the gate voltage V G either). We can also predict that the conductance corrections due to barriers at the outer two junctions will have temperature power-laws which are combinations of K L and K C± (much like those seen before), and that they will not depend on the gate voltage.
In the intermediate frequency range v W ± /l ≪ ω ≪ v C± /d, we get (after integrating out the inner fields φ 2↑,↓ and φ 3↑,↓ ) an effective action in which the quantities α C± = v C± /(K C± d) appear; these are the charging energies for the φ ± fields, and they arise because of the growth of coherence in the system over the contact regions. We can now predict that the conductance corrections due to barriers at the outer two junctions will have temperature power-laws which are combinations of K L and K W ± (again like those seen previously), and that this correction will definitely depend on the gate voltage.
Finally, in the low frequency limit ω ≪ v W ± /l, integrating out the inner fields leaves us with an effective action in terms of new current and charge variables that span the coherent TLL system between the two lead-contact junction barriers. The effective Lagrangian density in this regime consists of a quadratic part together with a term coming from the two barriers. It becomes clear from this expression that the temperature power-law depends on the lead interaction parameter K L = 1, so the conductance correction in this regime is temperature independent (as seen previously). Furthermore, the length corrections are gate voltage dependent.

Let us now study the possible resonance symmetries of the low frequency effective action given above. Even though the structure of this expression is more complicated than those encountered previously, we can rewrite the two charging terms so that one piece involves the field n 14↑ alone while the remainder, f(n 14↓ ), is a quadratic function of the field n 14↓ only. We can then see that if we impose a single tuning condition, we get a resonance in the n 14↑ parts of the charging and barrier terms whenever n 14↑ = Z or Z + 1 and one makes the transformation φ 14↑ → φ 14↑ ± √π/2. This means that only the transport of all up-spin electrons through the two barriers is at resonance, and this is clearly a one-parameter tuned resonance. A one-parameter tuned resonance for the transport of only all down-spin electrons through the two barriers can be found in exactly the same manner by rewriting the above charging expressions in terms of n 14↓ instead of n 14↑ . We also find another resonance, defined by two simultaneous conditions involving the same integer Z in both equations; there then exists a possible resonance whenever n 14↑ = Z or Z + 1 and one makes the transformation φ 14↑,↓ → φ 14↑,↓ ± √π/2.
This resonance symmetry leads to a vanishing of the barrier terms in the effective action and corresponds to the transfer of an electron across the system. One can immediately see that the above two conditions on n 14↓ mean that two parameters, here the gate voltage V G and the external magnetic field h, have to be tuned to achieve this resonance condition; such a resonance will therefore be much more difficult to observe experimentally. However, this resonance leads to a complete vanishing of all backscattering (and hence of the conductance corrections), while the two one-parameter tuned resonances give only a partial reduction of the conductance corrections.
The two one-parameter tuned resonances can prove useful in creating a spin-valve. Even at a low magnetic field, the odd-even effect can be enhanced by using stronger artificial barriers (e.g., finger gates over the channel), by making the channel longer, or by working at lower temperatures, together with tuning the transport of only one spin species of electrons through the two barriers to resonance. One can thereby obtain an enhanced spin-polarized electron current at the output of the QW system.
Before going on to compute the conductance through the system for the above model in the presence of the magnetic field, let us make some remarks about the case when the mixing angle in the QW differs from that in the contacts. A long calculation does give expressions for the effective action in the three frequency regimes similar to those obtained above, but with two sets of transformation coefficients relating the φ ± and φ ↑,↓ fields. However, integrating out the inner fields is a far more difficult task; furthermore, the analysis reveals that the only possible resonance symmetry of the low frequency effective action is one that requires at least four parameters to be tuned. We will therefore not present these results, as the analysis yields nothing substantially new compared to the simpler case of equal mixing angles.
Conductance of our model for the QW with a magnetic field
We begin by rewriting the RG equation obtained by Safi and Schulz [24] for an impurity placed within a QW of finite size connected to Fermi leads, in a form convenient for computing the conductance expressions for our model of the QW with barriers in the presence of the external magnetic field. In the expression for the RG flow found for an impurity in Ref. [24], the space-time indices are implicit on the right-hand sides. Substituting the expressions for U ρ and U σ given above into the RG equation, and specializing to the back-scattering of one electron, m ρ = m σ = 1, we can write the RG equation in terms of the two Green's functions G ↑ and G ↓ . We will now use the effective actions found in the various frequency regimes to obtain these two Green's functions, put them into the RG equations, and thereby infer the corrections to the conductance caused by the junction barriers.
We start with the high and intermediate frequency/temperature effective actions given earlier for the model with equal mixing angles in the contact and QW regions. Here, we can see that the final effective action (in terms of only the fields at the outer two junctions) is the sum of two distinct parts, each of which is an expression of the kind (A/2)φ ↑ ² + (B/2)φ ↓ ² + Cφ ↑ φ ↓ , separately for the fields φ 1 and φ 4 . This tells us that we can simply take the sum of the contributions from each of the two incoherent barriers. The general expression can be diagonalized in terms of two new fields φ a and φ b , where λ a and λ b are the eigenvalues of the corresponding transformation. Using the two eigenvectors belonging to these two eigenvalues, we obtain Gω ↑ and Gω ↓ , and finally an expression for Gω ↑ + Gω ↓ .

We can now use the Fourier transform of this expression to obtain the temperature and length power-laws for the conductance corrections in the high and intermediate frequency regimes. In the high temperature regime T ≫ T d (∼ v C± /d), the conductance correction is a power-law with exponent set by K ef f,mag , with a prefactor c 1 that is a dimensionful constant independent of the gate voltage V G ; K ef f,mag is given by the expression for K mag with appropriate coefficients A, B and C. Similarly, the conductance expression in the intermediate temperature regime, defined by T l (∼ v W ± /l) ≪ T ≪ T d , involves another dimensionful constant c 2 which does depend on the gate voltage; here T d replaces Λ as the correct temperature cutoff, and the exponent K̃ ef f,mag is found in exactly the same way as K ef f,mag but with coefficients A 1 , B 1 and C 1 . Finally, the low frequency conductance expression for the temperature regime T ≪ T l involves a dimensionful constant c 3 similar in nature to c 2 , i.e., dependent on the gate voltage. This last expression is independent of temperature for Fermi leads with K L = 1, and the coherence between the barriers means that this correction term can go to zero at resonance.
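The diagonalization used above is simply the inversion of a per-frequency 2×2 Gaussian action. As a sketch (with hypothetical coefficients, since the explicit A, B and C of the text are not reproduced here), the propagators entering the RG equation follow from inverting the coefficient matrix:

```python
import numpy as np

# For a quadratic action of the form
#   S = (A/2) phi_up^2 + (B/2) phi_dn^2 + C phi_up phi_dn
# per frequency mode, the propagators are entries of the inverse of
# M = [[A, C], [C, B]]:  G_up = B/(AB - C^2), G_dn = A/(AB - C^2).
A, B, C = 2.0, 3.0, 0.5          # hypothetical coefficients

M = np.array([[A, C], [C, B]])
G = np.linalg.inv(M)             # propagator matrix
lam = np.linalg.eigvalsh(M)      # eigenvalues lambda_a, lambda_b

G_up, G_dn = G[0, 0], G[1, 1]
print("lambda_a, lambda_b =", lam)
print("G_up + G_dn =", G_up + G_dn)   # combination entering the RG flow
assert np.isclose(G_up + G_dn, (A + B) / (A * B - C**2))
```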
We end by noting that we can again take the limit U ≪ |v C↑ − v C↓ | in our equations to highlight the existence of the odd-even effect within our model of the QW as well. Taking this limit in the high temperature regime (with U the inter-electron interaction term), we find an expression which clearly shows that, as v C↑ increases and v C↓ decreases with increasing magnetic field, the renormalized barrier seen by the two spin species of electrons becomes different. We also note that, just as for the infinite homogeneous QW, a weak field h ≪ E F 1D in our model of the QW does not give rise to the odd-even effect.
In summary, we see that upon turning on an external magnetic field in the QW system, the up- and down-spin electrons see different renormalized strengths of any barriers (or impurities): this is the odd-even effect. We speculate on the possible use of this effect in creating a spin-valve using QW systems. The effective actions, their resonance symmetries, and the temperature and length power-law corrections to the conductance in the various temperature regimes, however, still follow a pattern similar to that for a QW without a magnetic field.
Comparison with the Experiments
We now discuss the relevance of this model to many of the experiments performed so far on quantum wire systems fabricated using cleaved-edge overgrowth as well as split-gate techniques. But before doing that, let us reiterate some well-known observations about the experimental system that we are trying to model. In this system, the electrons enter the wire from the 2DEG reservoirs lying outside the wire with a Fermi energy E F whose value (typically around 5-10 meV) is fixed by the parameters of the 2DEG. Within the quantum wire, the gate voltage produces a discrete set of sub-bands labeled by an integer s (see Fig. 2); let E s denote the energies of the bottoms of these sub-bands. In a sufficiently long quantum wire, we expect E s to be constant along the length of the wire provided we are not too close to either of the junctions. Thus an electron which has energy E F and enters the sub-band s will have a wave number k F s inside the wire given by k F s ²/2m = E F − E s and a velocity given by v F s = k F s /m [21]. We know that if N of the 1D sub-bands lie below the 2DEG Fermi energy E F 2D (which itself, at any finite temperature, is surrounded by a small thermal spread), we will get N quantized steps in the conductance when the quantum wire is completely free of impurities; this statement is true irrespective of the electron velocities, densities, or how the electrons interact among themselves within the various channels [16,17,21].

Now, upon increasing the gate voltage V G , one adds an energy eV G to every electron in each of the 1D sub-bands in the quantum wire. This pushes each of the sub-bands up by the same energy and can even de-populate the sub-bands by pushing them above E F 2D (see Fig. 2). Thus, changing the gate voltage decreases the electron density in the quantum wire and allows the transport process to take place through only a few channels, in the extreme limit only one channel, before cutting off the wire altogether by pushing all the 1D sub-bands above E F 2D (this is called pinch-off). The conductance measurement, which shows step quantization in terms of rises and plateaus, can then be explained in the following way. Whenever, by decreasing the gate voltage V G , the bottom of one of the 1D sub-bands (initially well above E F 2D ) first touches the top of the thermal spread just above E F 2D , that band starts filling up and we see a rise. Once the bottom of this sub-band crosses the bottom of the thermal spread just below E F 2D , the rise is topped off by a plateau, which signals that another channel is fully open to electron transport between the two reservoirs (see Fig. 2). Some of the earliest experiments with quantum wires free of impurities did indeed reveal quantization of the conductance in integer steps of 2g 0 [2].
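The sub-band filling picture above is easy to make quantitative. The sketch below counts open channels from k F s ²/2m = E F − E s (in the units of the text, with ħ = 1); the sub-band energies and gate offset are hypothetical numbers chosen only for illustration:

```python
import numpy as np

m = 1.0
E_F = 10.0                                    # 2DEG Fermi energy (say, meV)
E_s = np.array([1.0, 3.5, 6.0, 8.5, 11.0])    # assumed sub-band bottoms
eV_G = 2.0                                    # gate voltage shifts all bands up

E_eff = E_s + eV_G
open_ch = E_eff < E_F                         # bands below E_F carry current
k_Fs = np.sqrt(2 * m * (E_F - E_eff[open_ch]))  # from k_Fs^2/2m = E_F - E_s
v_Fs = k_Fs / m

N = int(open_ch.sum())
print(f"{N} open channels -> conductance {N} x 2g0 for an impurity-free wire")
print("k_Fs =", k_Fs, " v_Fs =", v_Fs)
```

Raising eV_G in this sketch de-populates the sub-bands one by one, reproducing the staircase of rises and plateaus described above.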
Later, however, Tarucha et al [3] performed experiments with wires of lengths 2 µm to 10 µm fabricated using split-gate methods, at temperatures from 0.3 K to 1.1 K, and found deviations from the perfect quantization of the steps. Attempts were then made to explain these deviations as effects of electron-electron interactions. Although a clean TLL wire between Fermi liquid leads would not lead to renormalization of the conductance quantization, several authors [18,24,27] showed that the presence of impurities in a TLL connected to Fermi leads would cause renormalization. However, they expected the renormalizations to be gate voltage dependent; this was indeed seen by Tarucha et al [3].
However, Yacoby et al [4] made the following surprising observation for a 2 µm long quantum wire fabricated in a cleaved-edge overgrowth system: the dc conductance showed several nearly flat plateaus whose heights were uniformly renormalized from the ideal values of integer multiples of 2g 0 , for measurements made over a temperature range of 0.3-25 K.
Similar observations were subsequently made in several experiments on quantum wires made using the split-gate technique [6,7,8,12]. In all these experiments the step heights were increasingly renormalized as either the temperature was lowered (for a fixed length of quantum wire) or the length of the quantum wire was increased (at a fixed temperature). Such renormalizations require back-scattering of electrons. If these back-scatterings were due to impurities within the quantum wires, the conductance corrections would be gate voltage dependent, as shown by our calculations, and this certainly cannot produce flat conductance plateaus of the kind seen in the experiments.
Our model, however, has contact regions independent of the gate voltage, with barriers at the contacts arising from the changes in the nature of the electron-electron interactions and in the geometry. Thus, the back-scattering at these barriers is independent of the gate voltage and of the sub-band index (as can be seen in our results), and it leads to conductance plateaus which are flat as a function of gate voltage and uniform for all the sub-bands at the highest temperatures. We note that a recent experiment [9] on a quantum wire system similar to that used by Yacoby et al [4] revealed the existence of a region of length 2-6 µm lying between the gated quantum wire region and the 2DEG reservoirs, which gives rise to the back-scattering that causes the flat and uniform renormalization of the conductance of each sub-band. Such contact regions correspond to T d ∼ 0.2-0.7 K. This is much less than most of the temperature range shown in Fig. 3 of Ref. [4]. The similar flat and uniform conductance corrections seen in the experiments of Refs. [6,7,8,12] suggest that their QW systems also include contact regions and have T ≫ T d .

Now, as explained earlier, for a quantum wire system with contact length d ≪ l, in the intermediate and low temperature regimes T l ≪ T ≪ T d and T ≪ T l , the correct cutoffs for the RG procedure are T d and T l respectively; that is why the length power-laws of d and l appear in the conductance corrections in these two regimes, besides the customary temperature power-law. We can clearly see that the inverse length scales d −1 (for the contact region) and l −1 (for the wire region) have power-laws similar to those obtained for the temperature. One can thus qualitatively understand the increase in the conductance with increasing temperature and its decrease as the length of the quantum wire is increased; this has been observed by several groups [3,4,6,8].

Furthermore, one recent experiment using a split-gate QW system [12] shows that the conductance of a 2 µm long QW at T = 1 K exhibits flat, renormalized plateaus which are replaced by uneven conductance fluctuations at T = 50 mK. A different experiment [6], however, reveals that a QPC created using similar split-gate methods shows plateaus which are hardly renormalized at higher temperatures, with no conductance fluctuations at lower temperatures. This too can be understood from our model: the conductance corrections due to the junction barriers of a quantum wire are gate voltage independent at higher temperatures but depend on it at lower temperatures. For the experiment in Ref. [12], T l = 0.4 K ≫ T = 50 mK, so resonance effects are expected at these temperatures. This is in contrast to the conductance corrections for a QPC, which are gate voltage independent at all temperatures. In fact, if the quantum wire samples of Yacoby et al have contact regions as long as 2-6 µm (as found by the authors of Ref. [9] on similar samples), their 2 µm long wire would actually be closer to a quantum point contact; this would help explain the flatness of the renormalizations seen over the wide temperature range of 0.3-25 K.
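The contact temperature scale quoted above can be checked by a one-line estimate, T d ∼ ħv/(k B d); the electron velocity used below is an assumed value typical of such systems, while the contact lengths are those reported in Ref. [9]:

```python
# Order-of-magnitude check of T_d ~ v/d for the 2-6 um contact regions.
hbar = 1.054e-34   # J s
k_B = 1.381e-23    # J/K
v = 2.0e5          # m/s, assumed electron velocity in the contacts

for d_um in (2, 6):
    d = d_um * 1e-6
    T_d = hbar * v / (k_B * d)
    print(f"d = {d_um} um  ->  T_d ~ {T_d:.2f} K")
# Prints roughly 0.76 K and 0.25 K, consistent with T_d ~ 0.2-0.7 K.
```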
We now discuss our attempt to quantitatively understand the variation of conductance with temperature given in the inset of Fig. 3 of the work of Yacoby et al [4]. The conductance there is measured at a fixed value of the gate voltage on the plateau of the first sub-band (i.e., close to 2g 0 ). We find that the conductance correction versus temperature variation found by them (i.e., δg ≡ 2g 0 − g vs. T) is best fitted by a function of the form

δg = −0.3512 T^(−0.1058 − 0.0345 T)    (132)

as shown in Fig. 3. The goodness of this fit is given by the correlation coefficient R² = 0.9955. Clearly, this expression for the conductance corrections does not match the simple form δg ∼ T^(−α) given in Sec. 4 for the QW or QPC systems. The presence of the T-dependent piece in the exponent implies that our model is only qualitatively correct. Several factors could be important in determining this complicated temperature-dependent power-law, among them:

• a more extended transition region between the leads and the contacts, in which the parameters K and v vary smoothly as functions of x,
• more extended junction barriers lying within the contacts, rather than the local δ-function barriers that we have studied, and
• the possibility of the electron-electron interactions having a finite range, instead of the short-ranged interactions that we have used to study our TLL systems.
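The fitting procedure itself is straightforward least-squares. The sketch below uses the functional form and best-fit numbers quoted above, but the "data" are synthetic points generated from that form plus noise, since the measured values of Ref. [4] are not reproduced here:

```python
import numpy as np
from scipy.optimize import curve_fit

def dg(T, a, b, c):
    # Conductance correction with a temperature-dependent exponent.
    return a * T ** (b + c * T)

a0, b0, c0 = -0.3512, -0.1058, -0.0345   # best-fit values from the text
T = np.linspace(0.3, 25.0, 40)           # temperature range of Ref. [4]
rng = np.random.default_rng(0)
data = dg(T, a0, b0, c0) * (1 + 0.02 * rng.standard_normal(T.size))

popt, _ = curve_fit(dg, T, data, p0=(-0.3, -0.1, -0.03))
resid = data - dg(T, *popt)
R2 = 1 - resid.var() / data.var()
print("fit (a, b, c) =", popt, " R^2 =", round(R2, 4))
```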
A detailed quantitative comparison of our model with the experiments would, therefore, need a more sophisticated treatment taking these factors into consideration. We should emphasize here that a temperature and length dependence of the conductance correction of the form that we have obtained (decreasing at high temperatures or short lengths) is a nontrivial effect of the electron interactions, and our simple model has already captured this qualitatively. A non-interacting theory does not have temperature or length dependences of this kind.
We now discuss the important experimental finding by Liang et al [7] of the odd-even effect in the transport of electrons through a quantum wire in the presence of a magnetic field. Liang et al find that as they turn up the external magnetic field (kept in plane and aligned along the direction of the channel) from 0 to 11 T, the increasing field lifts the spin degeneracy, as expected, and splits each conductance step into two steps, both with heights less than g 0 . Furthermore, at a field strength of 11 T, they find that the difference between the conductances of successive pairs of spin-split sub-bands alternates. The conductance of the odd-numbered spin-split sub-bands, containing the moments aligned with the magnetic field, undergoes little renormalization (i.e., it is close to g 0 in their Fig. 4), while the conductance of the even-numbered spin-split sub-bands, containing the moments anti-aligned with the magnetic field, undergoes a large renormalization; their Fig. 4 indicates a correction as large as 0.3g 0 . As discussed earlier, this phenomenon can be understood simply as the aligned moments seeing a much weaker barrier and the anti-aligned moments seeing a much stronger barrier, as a consequence of the Zeeman splitting of the Fermi levels of the up- and down-spin electrons and of their mutual interactions. Since the difference in renormalization between the aligned and anti-aligned electrons occurs at all magnetic fields (i.e., even when the up and down sub-bands are not spin-split), we suggest the following possibility. One can artificially enhance the barrier strengths so that the difference in renormalization of the up and down spins becomes substantial at moderate magnetic fields. More importantly, one can vary the gate voltage so as to tune the spin polarization with greater transmission to resonance. At these values of the magnetic field and gate voltage, transmission of one of the polarizations would be completely suppressed and the other greatly enhanced. This raises the possibility of creating a spin-valve at moderate magnetic fields.
Finally, we comment on a new set of experiments [10,11] which have used scanning probe microscopy techniques to study transport through QPCs, and we propose a test of our model based on such a study. In these experiments, a negatively charged atomic force microscope tip is held at a distance of 100-150 nm above the 2DEG on which the QPC is created via split-gate methods. A capacitive coupling between the 2DEG and the tip reduces the density of the 2DEG in a small spot directly beneath the tip, thereby creating a small depletion region (a negatively charged "bubble") which can backscatter electrons approaching it. The tip then scans the surface of the 2DEG reservoir into which the electrons enter after traveling through the QPC, and the two-probe conductance is measured. This allows one to "image" the electron current flowing out of the QPC.

Topinka et al [10] have made such measurements at a temperature of T = 1.7 K and find that the electrons flow out into the 2DEG reservoir in streaks from each sub-band. The number and nature of the streaks is governed by the electron wave function in each sub-band, set by the quantization due to the confinement in the transverse direction. They find that the electron flow is coherent along these streaks quite far from the QPC mouth, where the streaks finally disperse into the 2DEG. Furthermore, they find that placing the depletion bubble in the path of a particular streak (at a distance of about 0.3-0.5 µm from the mouth of the QPC) gives rise to a flat, renormalized plateau only for the particular sub-band from which that streak emanates, while the other sub-bands give the universal conductance value of 2g 0 . This tells us that the effect of the gate voltage must vanish quickly, since it is not felt beyond distances as short as 0.3-0.5 µm from the mouth of the QPC. Crook et al [11] find a series of peaks and troughs upon measuring the differential conductance dg/dV G versus the gate voltage V G (caused by the step rises and plateaus of each sub-band, respectively) while scanning the tip through the QPC. Their finding that the troughs do not fall to zero indicates that the conductance corrections caused by the depletion bubble (when placed within the QPC) are gate voltage dependent, as would be expected.

Now, the availability of the tip-generated depletion bubble as a controlled barrier to the flow of electrons through the QPC suggests a possible use of scanning probe microscopy techniques to test the predictions of our model quantitatively. This would require the gate voltage to be fixed first such that only the lowest sub-band is fully open to the electron current, and the depletion bubble then to be placed somewhere on a streak emanating from this lowest sub-band at some distance from the QPC mouth; the conductance can then be measured by changing the gate voltage while holding the temperature fixed. The form of the conductance versus gate voltage curve will tell us whether or not the gate voltage has any effect on the electrons on the streak at that distance from the mouth of the QPC. Furthermore, the gate voltage can then be held fixed somewhere on a plateau and the conductance measured as the temperature is varied, yielding the form of the conductance corrections versus temperature. This entire chain of measurements can then be repeated after moving the depletion bubble closer to the QPC mouth, and into the QPC, in a series of steps.
Such a series of measurements would help answer questions about where the conductance corrections start becoming dependent on the gate voltage as well as how the conductance corrections vary with temperature when a barrier is placed within the QPC or away from the QPC. Such experiments could also be carried out with longer QWs to check the length dependences of the conductance corrections.
Summary and Outlook
The main idea in this paper is to introduce a model which explicitly describes the regions in between the quantum wire and the 2DEG reservoirs as interacting 1D systems which are independent of the density of electrons in the quantum wire. We show that the difference in the strengths of the interactions in the different regions leads to local junction barriers between the regions; the barriers simulate the effects of the imperfect coupling between the 2DEG and the quantum wire. Our model leads to the following results for wires with no impurities, all of which are in agreement with a large body of experimental observations.
• Flat (independent of gate voltage) and uniform (for all the sub-bands) renormalizations of the quantized conductance plateaus.
• The renormalizations increase as the temperature is lowered or the length of the quantum wire is increased.
• At still lower temperatures, the flatness of the plateaus disappears and oscillatory features in the conductance can be observed, which we interpret as resonant transmission through the quantum wire.
• In the presence of a magnetic field, an odd-even effect is found in the conductance of alternate spin-split sub-bands. This effect may be used to construct a spin-valve, which allows only electrons with one particular spin to transmit through the wire even if the magnetic field is not high enough to completely spin-split the sub-bands.
For quantum wires with impurities, whether intrinsic or externally imposed by finger gates, the conductance corrections are always gate voltage dependent and are therefore neither flat nor sub-band independent. Some interesting questions for future studies include the following. A quantitative fit to the conductance corrections as a function of the temperature and wire length still remains to be done. This would require an even more realistic modeling of the quantum wire system (including some of the features itemized in the previous section) as well as more experimental data. Theoretical studies at finite frequencies and finite external voltages across the quantum wire also need to be pursued. Finally, one needs to understand several features observed on the rise between two successive plateaus, such as the "0.7 effect" mentioned in the introduction, the observation of continuous oscillations as a function of the gate voltage upon introducing finger gate barriers [29], and the fixed point that exists on the rise as the temperature is varied [30]. For all of these, one needs to study the model when some sub-band is partially opened.
Acknowledgments SL thanks I. Safi for useful correspondence. DS thanks the Council of Scientific and Industrial Research, India for financial support through grant No. 03(0911)/00/EMR-II.
A Effective action for spinless fermions
In this Appendix, we explicitly obtain the S 0 part of the effective action in terms of the fields φ i , i = 1, ..., 4, at the junctions x = 0, d, l + d and x = l + 2d = L for the K L -K C -K W -K C -K L model described by the action in Eq. (12), by integrating out all degrees of freedom except those at the positions of the junctions. We will also give the effective action of the simpler K L -K W -K L model for comparison, since it has not been given explicitly anywhere.
We first start with the simpler K L -K W -K L model, defined as a quantum wire of length L with interaction parameter K W between x = 0 and x = L, and with leads defined by K L = 1 for x < 0 and x > L, described by the corresponding TLL Lagrangian (which also fixes our notation). There are three ways to derive the effective action. We can (a) integrate out the fields at all points in space except at x = 0 and L, or (b) find the solution of the equations of motion in terms of the above two fields and then compute the action from that solution, or (c) compute the Green's function Gω(x, x′), set x, x′ equal to 0 or L, and invert G to get S ef f . All the methods produce the same result since the original action is purely quadratic. We will use the second method here because it is technically simpler.
As in other sections, we work with the Euclidean time action for convenience. If all the fields have a time dependence of the form exp(−iω n τ), then normalizability of the solutions implies that they must decay exponentially as x → ±∞. We assume that the solution of the equation of motion takes an exponentially decaying form in each lead and a combination of growing and decaying exponentials in the wire. Matching the solutions at the boundaries x = 0 and x = L to eliminate the intermediate coefficients θ i (ω n ), using this solution in the action, and carrying out the spatial integration, we obtain an effective action in terms of φ 1 ≡ φ(0, ω n ) and φ 2 ≡ φ(L, ω n ), where k nW and k nL are defined as |ω n |/v W and |ω n |/v L respectively. In the limit ω n ≫ v L /L, v W /L, we get the high frequency effective action, in which the two junctions are decoupled as expected. In the low frequency limit ω n ≪ v L /L, v W /L, the two junction fields become coupled.

Using the same method, we can also obtain the full effective action for the K L -K C -K W -K C -K L model in terms of the fields at the four junctions φ i , i = 1, ..., 4, where φ(0, ω n ) = φ 1 (ω n ) ≡ φ 1 , φ(d, ω n ) ≡ φ 2 , φ(l + d, ω n ) ≡ φ 3 and φ(L = l + 2d, ω n ) ≡ φ 4 . The solutions in the five regions can again be written as decaying exponentials in the two leads and combinations of growing and decaying exponentials in the contacts and the wire; by matching the solutions at x = 0, d, l + d and L, we can obtain the coefficient functions of ω n , namely B, C, D, E, F and G, in terms of the φ i , i = 1, ..., 4, with k nC defined as k nC = |ω n |/v C .
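The matching step for the simpler model can be illustrated numerically. A minimal sketch with illustrative numbers: in the wire, φ = B e^{k W x} + C e^{−k W x}, and continuity at x = 0 and x = L fixes B and C in terms of the junction fields φ 1 and φ 2 , which is exactly how the intermediate coefficients are eliminated before the spatial integration:

```python
import numpy as np

w, vW, L = 1.0, 1.0, 2.0
kW = abs(w) / vW                    # k_nW = |omega_n| / v_W
phi1, phi2 = 0.7, -0.3              # junction field values (illustrative)

# Continuity conditions phi(0) = phi1 and phi(L) = phi2 as a 2x2 solve.
M = np.array([[1.0, 1.0],
              [np.exp(kW * L), np.exp(-kW * L)]])
B, C = np.linalg.solve(M, np.array([phi1, phi2]))

phi = lambda x: B * np.exp(kW * x) + C * np.exp(-kW * x)
assert np.isclose(phi(0.0), phi1) and np.isclose(phi(L), phi2)
print("B =", B, " C =", C)
```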
Substituting this solution in the action and integrating over all space, we get the effective action in terms of the four junction fields.
The action finally simplifies to a form involving φ 1 ≡ φ(0, ω n ) and φ 2 ≡ φ(L, ω n ). The high and low frequency limits of this effective action have been used in Sec. 3 to compute the finite temperature and finite length corrections off-resonance.
B Effective action for spinful fermions
In this section, we explicitly compute the effective action for spinful fermions in the K L -K C -K W -K C -K L model. Although the method is exactly the same as that used in the previous section for spinless fermions, we carry it out explicitly because the inclusion of spin makes a difference at a few points.
The effective action for spinful fermions is normally computed in terms of the 'charge' and 'spin' field variables defined as φ ρ = (φ ↑ + φ ↓ )/√2 and φ σ = (φ ↑ − φ ↓ )/√2, because in the presence of interactions the spin ↑ and ↓ fermions are mixed (recall the Hubbard term U Σ i n i↑ n i↓ ). Here, in our model with contacts, the interaction strength U differs between the contact region and the wire region. But since the linear combination that diagonalizes the interaction is independent of the value of U, the action in terms of the φ ρ and φ σ fields decouples. In the presence of a magnetic field, as we will see in the next Appendix, the action continues to be diagonalizable; however, the diagonal fields are defined in terms of mixing angles which depend explicitly on U and the magnetic field, and hence differ between the leads, the contacts and the wire.
The starting action for the spinful fermions is given in Eqs. (12) and (13) in the text in terms of the charge and spin fields. As in the previous Appendix, we obtain the solution of the equations of motion in terms of the eight fields φ ia , i = 1, ..., 4, a = ρ, σ, defined at the positions x = 0, d, l + d and L = l + 2d, and then compute the effective action from that solution. We assume that the solutions in the five regions can be written as

φ a (x, ω n ) = φ 1a e^{k nLa x} ,  x < 0,
            = B a e^{k nCa x} + C a e^{−k nCa x} ,  0 < x < d,
            = D a e^{k nW a x} + E a e^{−k nW a x} ,  d < x < l + d,
            = F a e^{k nCa x} + G a e^{−k nCa x} ,  l + d < x < L,
            = φ 4a e^{k nLa (L−x)} ,  x > L.
As before, the coefficients B a , C a , D a , E a , F a and G a can be found in terms of the φ ia , i = 1, ..., 4, a = ρ, σ, by matching the solutions at x = 0, d, l + d and L. Note that k nW a , k nCa and k nLa are defined as |ω n |/v W a , |ω n |/v Ca and |ω n |/v La respectively. Substituting this solution in the action and integrating over all space yields the effective action. The high and low frequency limits of this effective action have been used in Sec. 3 to discuss the various resonances possible in the low temperature limit and to explicitly compute the off-resonance corrections to the conductances at finite temperatures and for finite length wires.
C Effective action for spinful electrons in the presence of a magnetic field
We present here the calculation of the effective action for spinful fermions in a magnetic field when the mixing angle γ is the same in the contacts and in the QW, i.e., when the short-ranged electron-electron interaction U is equal in all three TLLs. We start with the action given in Eq. (102) in Sec. 5 and integrate out the fields at all points except at the four junctions, since these are the sites of the two outer barriers while the two inner junctions are the ends of the region to which the gate voltage couples. Thus, we write down the equations of motion in each of the five regions and solve them. If all the fields have a time dependence of the form exp(−iω n τ), then normalizability of the solutions implies that they must decay exponentially as x → ±∞. The general solution is a combination of such exponentials in each region, where we have defined k̃ ± = |ω n |/v W ± , k ± = |ω n |/v C± , k ↑ = |ω n |/v F ↑ and k ↓ = |ω n |/v F ↓ .
We now solve for the coefficients A, ..., H in Eq. (145) by matching the fields φ ↑ and φ ↓ at the four junctions. At this point, we make the simplifying assumption that the mixing angles γ C and γ W (defined as in Eqs. (90) and (91)) in the contact and wire regions are equal to each other, γ C = γ W = γ. This implies a fixed proportionality between φ W ± and φ C± at x = d and x = l + d, with the coefficient set by K W ± v W ± and K C± v C± . The resulting coefficients involve the combinations D C± = e^{k ± d} − e^{−k ± d} and D W ± = e^{k̃ ± l} − e^{−k̃ ± l}.
Then, using the relations written down in Eq. (90) connecting the φ ± and φ ↑,↓ fields, we obtain the effective Lagrangian density, in which N C± = e^{k ± d} + e^{−k ± d} appears, and where D C± , D W ± , k ± and k̃ ± have already been defined above.
D Calculation of the Green's function in our model for the Quantum Wire
Here, we present a calculation of the Green's function for the bosonic excitations in the model we have introduced for the quantum wire system of spinless fermions. The method follows the calculation presented by Maslov and Stone [17]. We study the case when no barriers are present anywhere in the system. The Euclidean action S E then takes the same quadratic form in all five distinct TLL regions of our model (Fermi lead, contact, QW, contact and Fermi lead), with K(x) = K L , v(x) = v L in the first and fifth (Fermi lead) regions, K(x) = K C , v(x) = v C in the second and fourth (contact) regions, and K(x) = K W , v(x) = v W in the third (QW) region. Defining the two-point bosonic Green's function/propagator (in Euclidean time τ) as G(x, x′, τ) = ⟨T τ φ(x, τ) φ(x′, 0)⟩, one can derive the differential equation satisfied by its Fourier transform Gω(x, x′).

We now have to solve this equation to obtain a functional form for Gω(x, x′). The interaction parameter K and the velocity v change abruptly at each of the junctions, and the two Fermi leads are semi-infinite in length (i.e., Gω(x, x′) must decay to zero as x → ±∞). As we are interested in the one-point Green's function at a point in the left contact, we choose x′ to lie between 0 (the left lead-contact junction) and d (the left contact-QW junction). Furthermore, the Green's function Gω(x, x′) must satisfy the following boundary conditions: (a) Gω(x, x′) must be continuous at x = 0, x′, d, l + d and l + 2d; (b) (v(x)/K(x)) ∂ x Gω(x, x′) must be continuous at x = 0, d, l + d and l + 2d; and (c) (v(x)/K(x)) ∂ x Gω(x, x′) undergoes a jump of unity at x = x′. It is then easily seen that the solution for Gω(x, x′) is a piecewise combination of exponentials, whose coefficients A, B, ..., J are found by matching the boundary conditions. To begin with, it is worth noting that in the dc limit ω → 0, the resulting dc conductance is proportional to K L ; this reproduces the perfect quantized conductance observed in several experiments on transport of electrons through a QPC when we take the leads to be Fermi liquids with K L = 1.
We now give the expressions for the Green's function for the case when both x and x′ are set equal to a point a in the left contact. The results of taking the limits corresponding to the various frequency (or temperature) regimes are given in the section where the conductance is computed for quantum wires and quantum point contacts with a junction barrier in the left contact region, and we will not repeat them here.
We also note the general form of the two-point propagator Gω(x, y) for the case when x is a point in the right lead and y is a point in the left contact; the expressions for p, q, r, s and γ 1 that enter it have already been given earlier.
Finally, the one-point propagator at a point a inside the quantum wire takes the form Gω(a, a) = K W /(2|ω|) times a dimensionless factor. Again, we will not give the results of taking the various limits corresponding to the different frequency (or temperature) regimes, as these have already been quoted in the section on the conductance of a quantum wire and quantum point contact. | 2014-10-01T00:00:00.000Z | 2001-04-21T00:00:00.000 | {
"year": 2001,
"sha1": "1836e3a67aacf9baa73aca5c7074f4771a410d21",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/cond-mat/0104402",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "1836e3a67aacf9baa73aca5c7074f4771a410d21",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
2805520 | pes2o/s2orc | v3-fos-license | Improving the efficiency of genomic loci capture using oligonucleotide arrays for high throughput resequencing
Background The emergence of next-generation sequencing technology presents tremendous opportunities to accelerate the discovery of rare variants or mutations that underlie human genetic disorders. Although the complete sequencing of the affected individuals' genomes would be the most powerful approach to finding such variants, the cost of such efforts make it impractical for routine use in disease gene research. In cases where candidate genes or loci can be defined by linkage, association, or phenotypic studies, the practical sequencing target can be made much smaller than the whole genome, and it becomes critical to have capture methods that can be used to purify the desired portion of the genome for shotgun short-read sequencing without biasing allelic representation or coverage. One major approach is array-based capture which relies on the ability to create a custom in-situ synthesized oligonucleotide microarray for use as a collection of hybridization capture probes. This approach is being used by our group and others routinely and we are continuing to improve its performance. Results Here, we provide a complete protocol optimized for large aggregate sequence intervals and demonstrate its utility with the capture of all predicted amino acid coding sequence from 3,038 human genes using 241,700 60-mer oligonucleotides. Further, we demonstrate two techniques by which the efficiency of the capture can be increased: by introducing a step to block cross hybridization mediated by common adapter sequences used in sequencing library construction, and by repeating the hybridization capture step. These improvements can boost the targeting efficiency to the point where over 85% of the mapped sequence reads fall within 100 bases of the targeted regions. Conclusions The complete protocol introduced in this paper enables researchers to perform practical capture experiments, and includes two novel methods for increasing the targeting efficiency. Coupled with the new massively parallel sequencing technologies, this provides a powerful approach to identifying disease-causing genetic variants that can be localized within the genome by traditional methods.
Background
Sequencing capacity has advanced greatly over the years and took a major leap with the commercialization of new platforms for next-generation sequencing. Currently, three major platforms are actively being used: the Applied Biosystems (ABI) SOLiD (Sequencing by Oligo Ligation and Detection), the Illumina Genome Analyzer (GA) and the Roche 454 Sequencing System [1][2][3]. Further, proof-of-principle experiments with the Helicos system and the Pacific Biosciences system have been published [4][5][6]. These sequencer technologies differ in their sequencing methods and hence vary in the number of reads sequenced, read length and error characteristics. However, all rely on the generation of shotgun libraries for sequencing. With these technologies, a single machine can generate in the range of 0.5-2 gigabases (Gb) of sequence reads per day. While further advancement in these technologies is certain, their use for sequencing targeted regions of the genome has been limited by the efficiency of methods to enrich regions of the genome for analysis at a scale matched to this capacity. The rapid advancement in genotyping technology made possible by the advent of DNA microarrays has resulted in a flood of linkage and whole genome association studies for various disorders, and the community is now overwhelmed with genomic regions of interest for which additional targeted sequence analysis is the key bottleneck. Most recently, several studies of exonic capture for broad-based sequencing of the amino acid coding portion of the genome have successfully identified rare mutations/alleles involved in rare genetic disorders and yielded insights into applying the technique to searching for common variants as well as de novo cancer mutations [7][8][9][10].
Several groups have attempted to capture regions of interest by multiplex amplification [11][12][13][14]. Primer pairs are systematically designed for the target regions and, as the target regions are amplified by PCR, only the fragments with the right primer pairs are enriched. Reports have demonstrated successful amplification of hundreds of ~200 bp fragments, but with substantial bias in amplification between the different fragments. This method may work well for targeting tens of genes, but beyond that scale it requires more effort to design unique primers and to optimize the PCR amplification so as to ensure uniformity across all fragments. Also, the high cost of primer design and amplification does not compare favorably with that of sequencing, as sequencing costs have been reduced significantly. To overcome the cost and effort of the primer design process, Porreca et al. developed an assay that uses a microarray to synthesize the oligonucleotides in parallel [15]. Using a modification of the MIP (molecular inversion probes) assay, 55,000 exons of sizes varying from 60 to 191 bp were targeted, and although the specificity of the capture was very good, only ~11,000 exons were captured [15][16][17]. In the following month, Hodges et al. demonstrated success at capturing 'all' exons using the same methods, with a better balance of coverage (uniformity) and specificity relative to previous capture assays, and theirs was the least labor-intensive and most cost-effective method [18]. However, the performance of these assays varied across the sample types and array types used.
Albert et al. targeted a total of ~5 Mb of sequence for 660 genes dispersed across the genome [16]. They reported that the specificity varied with the same array design from 38% to 76%, depending on the samples captured. For their tiling arrays encompassing from 200 Kb to 5 Mb around the BRCA1 gene, the fraction of the reads mapped to the intended targets varied from 14% to 64%. The Nimblegen 385 K custom array was used for all capture protocols, while the 454 FLX sequencer was used for sequencing. Since the 454 sequencers produce longer individual reads than the ABI or Illumina sequencers, their sequences are easier to map to the correct genomic location, and this should be factored in when comparing capture technologies.
Hodges et al. targeted 'all' human exons using seven Nimblegen arrays and sequenced the captured DNA with the Illumina sequencer [18]. They first hybridized 500 bp genomic fragments to all seven arrays, and the fraction of the reads mapped to the targeted regions varied from 36% to 55%. When they extended the definition of the targeted region to include 300 bp upstream and downstream of each exon, the targeting efficiency increased to 55-85%. Next, they used 100-200 bp fragments for hybridization to one of the arrays in an attempt to tighten the sequenced region around the targets. However, the specificity for the intended targets was reduced three-fold, with an exon coverage rate of up to 99%. Neither study offered a detailed interpretation of the specificity variations across the different array designs and sample types.
Here, we concentrate on improving the capture specificity using a consistent sample and array design. Throughout the experiments, paired genomic DNA from both cancer and normal tissues of a single cancer patient was used. We took two different approaches to specifically block the adapters while generating the genomic library for hybridization, and we investigated two sequential rounds of hybridization. These changes resulted in improvements in the measured specificity of the targeted genomic DNA from the same sample and the same array design.
Baseline capture
Initial data demonstrated a specificity of 35% for the intended capture intervals, with an exon hit rate of 99%, using 100-200 bp genomic fragments, protocols similar to the published results, and Agilent custom 244 K oligo arrays. These results were comparable with the Hodges et al. data using shorter fragments (29% specificity). With these initial data as a baseline, we attempted to improve the capture efficiency by changing the hybridization protocol.
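Throughout this paper, specificity means the fraction of mapped reads falling within (or near) the targeted intervals. A minimal sketch of that computation, with made-up coordinates (the production analysis pipeline is not described at this point in the text):

```python
def specificity(reads, targets, read_len=36, pad=100):
    """reads: list of (chrom, start); targets: list of (chrom, start, end).
    Counts a read as on-target if it lies within a target interval
    extended by `pad` bases on each side."""
    by_chrom = {}
    for chrom, s, e in targets:
        by_chrom.setdefault(chrom, []).append((s - pad, e + pad))
    on = sum(
        1 for chrom, pos in reads
        if any(s <= pos and pos + read_len <= e
               for s, e in by_chrom.get(chrom, []))
    )
    return on / len(reads)

targets = [("chr7", 55_086_714, 55_086_913)]   # hypothetical exon interval
reads = [("chr7", 55_086_750), ("chr7", 55_086_800), ("chr2", 1_000)]
print(f"specificity = {specificity(reads, targets):.2f}")   # prints 0.67
```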
Modified capture protocol to block adapter-adapter hybridization
The first change we made to the Agilent hybridization protocol was to block the adapters ligated to the ends of every genomic amplicon in the hybridization mix. We reasoned that hybridization between different genomic fragments mediated by adapter-adapter annealing would lead to the inadvertent, non-specific enrichment of off-target fragments. All of the genomic fragments in the hybridization mix are flanked by the same Illumina adapters (52 nt and 34 nt), which are comparable in length to the genome location specific target probes (45-60 nt). Thus, the melting temperature of adapter-adapter hybrids will be similar to that between the appropriate genomic fragment and its specific probe. Moreover, the effective concentration of the adapter sequence in the hybridization is approximately 10^7-fold higher than that of any genome specific sequence. These adapter-mediated hybridizations may therefore dominate the hybridization process.
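The ~10^7-fold figure follows from simple arithmetic: every fragment in the library carries the same adapters, while any single-copy target sequence is carried by essentially one fragment species. A sketch, assuming the stated fragment sizes:

```python
# Every library fragment carries the same adapters, but a single-copy
# ~150 bp locus is represented by roughly one fragment species out of
# all distinct fragments tiling the genome.
genome_bp = 3.0e9
frag_bp = 150                      # midpoint of the 100-200 bp library

n_distinct_frags = genome_bp / frag_bp      # ~2e7 distinct fragments
print(f"adapter excess over any one target: ~{n_distinct_frags:.0e}-fold")
```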
We tried two different approaches to overcome this non-specific pull-down issue. First, to remove complementary adapter strands from the hybridization mix, we separated the two strands of the genomic fragments and used only one strand for the hybridization. To accomplish this, we biotinylated only one of the PCR primers (primer1.1) used in the generation of the genomic library. After the PCR step, the amplicon was bound to streptavidin beads, and the non-biotinylated strand was collected and hybridized to the array. The second approach was mechanistically easier than separating the two strands: we added a 10-fold molar excess of Illumina primers to the hybridization mix, reasoning that the primers would bind to all of the adapters flanking the genomic fragments and block them from hybridizing to adapter sequences on other genomic fragments. Both approaches increased the specificity to ~60%, with more even coverage resulting from the simpler blocking approach, which is therefore the preferred protocol (Table 1).
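As a sketch of the blocker arithmetic (the library mass and mean fragment length below are assumed values, not specified at this point in the text), the amount of each blocking primer needed for a 10-fold molar excess over the adapter ends is:

```python
AVG_BP_MW = 650.0        # g/mol per base pair of dsDNA (approximate)

library_ng = 500.0       # assumed mass of amplified library hybridized
mean_len_bp = 150        # 100-200 bp fragments, midpoint assumed

pmol_frags = library_ng * 1e3 / (mean_len_bp * AVG_BP_MW)  # ng -> pmol
pmol_ends = 2 * pmol_frags        # each fragment carries two adapter ends
pmol_blocker = 10 * pmol_ends     # 10x molar excess of each primer
print(f"fragments: {pmol_frags:.1f} pmol; "
      f"blocker needed: ~{pmol_blocker:.0f} pmol per primer")
```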
Modified capture protocol with hybridization repeat
The other protocol modification tested was to repeat the hybridization step, the notion being that each successive round of hybridization further enriches for the target sequence, since a substantially simplified amplicon pool is hybridized in the second round. In 2005, Bashiardes et al. reported in Nature Methods the capture of genomic loci using bacterial artificial chromosomes (BACs) in solution [19]. They performed two rounds of hybridization to enhance targeting and achieved 50% specificity. We incorporated this idea and repeated the hybridization step after the 2nd PCR. The targeting specificity was successfully increased to 90% (Table 1). This two-step modified protocol was independently replicated externally: a total of 2 Mb was targeted, comprising 7,475 non-overlapping exon intervals within 10 linkage regions across the genome. The capture was done on Agilent custom 244 K oligo arrays strictly following the presented protocols, except that the 2nd hybridization time was reduced to 24 hrs and the same capture array was re-used for the 2nd hybridization. The 36 bp single-end sequencing was performed using an Illumina GA I in the authors' laboratory, and the sequences were aligned to the whole genome using MAQ (Mapping and Assembly with Qualities). With the use of blockers, the specificity was 44%, and by repeating the hybridization with the blockers the specificity increased to 84%, comparable to the data presented here.
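A simple enrichment model makes these numbers plausible. If one round of hybridization enriches on-target molecules over off-target molecules by a factor f, an on-target fraction p maps to p′ = f·p/(f·p + 1 − p). The starting fraction and per-round factors below are assumptions chosen to match the reported specificities:

```python
def one_round(p, f):
    # On-target fraction after one hybridization with enrichment factor f.
    return f * p / (f * p + (1 - p))

p0 = 0.01                  # assumed on-target fraction of the raw library
p1 = one_round(p0, 150)    # f ~ 150 reproduces the ~60% single-round result
p2 = one_round(p1, 6)      # f ~ 6 in round two reproduces the ~90% result
print(f"round 1: {p1:.2f}, round 2: {p2:.2f}")
```

Note that an equally efficient second round (f ~ 150 again) would push the specificity above 99%; the observed ~90%, implying a much smaller effective factor, is consistent with the saturation of amplicon yields discussed later.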
Analysis of the sequence data
For both the basal and modified capture protocols, only a few percent of sequences mapped beyond 100 bp upstream or downstream of the end oligo sequence boundaries, reflecting the sharpness of the capture, which is determined by the size of the fragment library initially created (Figure 1). It has been shown in previous reports that SNPs (single nucleotide polymorphisms) can be reliably detected [16][17][18]. For validation of the variant calls, we compared the capture data to Illumina 1 M Duo genotyping array data from the same sample. There were 5,746 dbSNP129 SNPs present both on the 1 M Duo genotyping array and within the targeted amplicons. The amplicons were sequenced to an average of 6× for single hybridization and 9× for double hybridization. 6.3% and 9% of the polymorphic positions were not sequenced in the single and double hybridization, respectively. Excluding these positions, the false negative rate (missing the variant allele in the capture data while detecting it in the Illumina genotype data) was calculated to be 7.1% and 8.4% for the single and double hybridization, respectively. The false positive rate (detecting the correct variant allele according to the HapMap Caucasian data where the Illumina genotype data call the position homozygous reference) was less than 0.1% in both experiments. Although a random sampling effect was observed, the range of the variant allele detection ratio at the polymorphic positions narrowed as the coverage increased in both experiments (Additional file 1). In addition to base substitutions, we detected small (< 3 bp) insertions and deletions in our dataset that are described in dbSNP129. Further, novel indels have been discovered and validated in cancer samples; these will be described more completely in another publication.

Figure 1 Mapping of sequences relative to probe position in the genome. a) Sequence coverage distribution averaged across all targeted regions captured by the basal capture protocol and b) sequence coverage distribution averaged across all targeted regions captured by the double hybridization (modified) protocol show that the sequence reads are tightly confined around the targeted regions. Here, a targeted region is not necessarily a targeted exon but a probeset composed of multiple probes that are < 200 bp apart. The y axis plots the relative abundance and the x axis the base position relative to the probe positions.
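The contribution of random sampling alone to the false negative rate can be bounded directly: at a heterozygous site covered by N reads, each read carries either allele with probability 1/2, so the variant allele is missed entirely with probability 0.5^N:

```python
# Random-sampling lower bound on the het-site false negative rate.
# Allelic capture bias is not modeled here.
for N in (6, 9):           # mean coverages: single / double hybridization
    print(f"coverage {N}x: P(miss variant allele) = {0.5 ** N:.3f}")
```

This prints 0.016 and 0.002, well below the observed 7.1% and 8.4%, suggesting that factors beyond random sampling (e.g., coverage non-uniformity and capture bias) dominate the false negative rate.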
We also examined the frequency of sequence observations and its correlation with copy number in the targeted genome after single hybridization pull-down. Since we used both cancer and normal tissue samples from a single cancer patient, we compared the copy number differences between the two (Figure 2) based on the relative frequency of reads mapping to specific chromosomes. In this experiment, the cancer sample was trisomic for chromosome 7, as previously determined by whole genome SNP typing (data not shown). The mean number of counts, normalized for the physical length of the chromosome, was 1.4-fold higher in the tumor tissue than in the normal tissue. Further, the cancer sample had a loss of one chromosome, which we observed to have 0.65 times the number of reads of the same chromosome in the normal tissue. These results indicate that the capture method, in aggregate, preserves the copy number state of the original genomic DNA and may be useful for copy number detection even when the capture method is used; this is important for the identification of larger deletions using sequencing based approaches. Out of 18 places in the genome that showed regional copy number changes in the cancer sample by whole genome SNP typing, 6 harbored captured gene(s), and all except one agreed between the two datasets (Table 2). However, considering that no SNP was placed by the Affymetrix 250 K array within the discordant genic region, and given the low resolution of the Affymetrix 250 K array for detecting copy number changes, it is likely that the copy number changes detected by the captured sequencing are true. The sample was also known to have DNA amplification of EGFR ("epidermal growth factor receptor"), and a focused observation indicated ~25-fold more reads mapping to the EGFR exons in the tumor sample than in the normal sample (normalized to the average coverage of all targeted exons across the whole genome) when the single hybridization capture protocol was applied to both samples. We note that this is substantially higher than the SNP-based copy number data, which indicated a 2-fold increase at EGFR, and may indicate a higher dynamic range from the capture-and-sequence approach than from a microarray approach, whose dynamic range is limited by fluorophore measurement. In addition, the background for the ~1 Mb region flanking EGFR also showed the ~25-fold amplification compared to elsewhere on the same chromosome, indicating that this 1 Mb region containing the EGFR gene is itself amplified at the same ratio (Figure 3).
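A sketch of the per-chromosome comparison: read counts are normalized by chromosome length within each sample, and the tumor/normal ratio is taken. The counts below are fabricated for illustration; only the expected ratios (~1.4 for the trisomy, ~0.65 for the lost chromosome) come from the text:

```python
chrom_len = {"chr7": 159e6, "chr10": 135e6, "chr1": 249e6}
tumor = {"chr7": 21_000, "chr10": 8_000, "chr1": 23_000}    # illustrative
normal = {"chr7": 15_000, "chr10": 12_500, "chr1": 23_500}  # illustrative

def normalized(counts):
    # Reads per base, rescaled to sum to 1 within the sample.
    dens = {c: n / chrom_len[c] for c, n in counts.items()}
    total = sum(dens.values())
    return {c: d / total for c, d in dens.items()}

t, n = normalized(tumor), normalized(normal)
for c in chrom_len:
    print(f"{c}: tumor/normal = {t[c] / n[c]:.2f}")
# Ratios near 1.4 flag a trisomy; near 0.65, a one-copy loss.
```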
Discussion
Both Nimblegen and Agilent have released commercial products for capture. However, Nimblegen's protocol is specific to the Roche 454 sequencer, and no details of the hybridization mix contents are provided. Agilent's protocol uses solution based oligos, and although it can be adjusted for either Illumina GA or ABI SOLiD sequencing, it is not yet cost-effective for small numbers of samples. Here, we have described complete instructions for the improved capture protocol, with a troubleshooting guide (Table 3), that should enable the preparation of enriched genomic libraries given access to either Agilent or Nimblegen hybridization equipment and any of the next generation sequencers, and that should be applicable to other genomes.

Figure 2 Copy number fold differences between the normal and tumor tissues per chromosome using single hybridization capture protocol with blockers. The cancer specimen used in these experiments was known to have a chromosome 7 copy number gain and a chromosome 10 deletion. The normalized counts per chromosome are plotted for all chromosomes and are markedly different for the two chromosomes at altered copy numbers.
Two simple optimizations of the hybridization protocol have improved the capture performance significantly. First, by blocking the adapter sequences flanking each of the genomic fragments, we reduced the non-specific pull-down caused by adapter-adapter hybridization. Blocking non-specific DNA is an old trick to reduce background in microarray experiments, with human Cot-1 being the most commonly used reagent to block repetitive sequences [20]. Recently, Hodges et al. have shown similar results with the same approach, validating our experimental protocol [21]. Secondly, we repeated the hybridization step to further enrich the genomic fragment pool. While the specificity was enhanced up to 90%, this step introduced ~1% variant loss and some degree of bias in the relative abundance of specific amplicons. For example, the fold difference observed for the EGFR gene was weakened by 2.5-fold when the double-hybridization capture protocol was applied, suggesting that saturation of the hybridization step effectively normalizes the yield from each amplicon. The overall correlation coefficient between the single-hybridization experiment and the double-hybridization experiment, after excluding the ~100 exons that were outliers, was 0.82. This somewhat interferes with the ability to reliably call the copy number state of individual exons from the pull-down sequence data. Two-round hybridization should therefore be used with caution when copy number detection is critical. The arrays designed for our current experiments and those in previous reports were all masked for repeats. To test whether including the repeat regions would affect the capture, we attempted to tile every 15 bp across a 4 Mb region of a single chromosome using an Agilent 244K custom array without rigorously masking the repeats. The specificity was significantly reduced, to 15-30%, even with the addition of the primer blockers and increased human Cot-1 DNA in the hybridization mix (data not shown).
This phenomenon should be taken into account when it is unavoidable to target repeat regions.

Figure 3 EGFR DNA amplification event is preserved in sequence data.
Throughout the experiments, the sequence reads generated mapped tightly near the intended probe regions. For each probe, the local sequence coverage extends outward in proportion to the length of the genomic fragments in the initial library. In the absence of major variants in the genomic fragments that could interrupt hybridization to the probe, the sequence coverage peaks within the probe region and decreases with increasing distance from it.
There are ~18,000 genes in the RefSeq database, comprising 33 Mb of coding sequence. To tile every 30 bp, ~914K probes need to be designed, which can be accommodated on a set of four Agilent 244K custom arrays or on one Agilent 1M custom array. Figure 4 shows the proportion of the 8 million targeted bases covered at various minimum coverage thresholds for different mean coverages within the targeted regions. For example, 76% of the targeted bases were considered completely sequenced, with a sequence depth of 20× or more, when the mean coverage within the targeted regions was 55×. From these data, we can project how many sequence reads are required to comprehensively sequence all RefSeq exons. In this report, we used 36 bp single-end sequence reads generated by the Illumina GA I. Currently, longer 76 bp paired-end sequence reads can be generated and are of sufficient quality for resequencing on the Illumina GA IIx. This improvement not only increases the total sequence read by one channel of a flowcell, but also significantly facilitates alignment to the genome. On average, 2.5 Gb of sequence is generated by one channel of an Illumina GA IIx run. Of this, about half of the sequences map uniquely to the human genome and, assuming 60-85% capture specificity, we will be able to generate 0.75-1.06 Gb of sequence within the targeted region. If targeting 33 Mb of the human genome for all RefSeq coding exons, this will require 2 channels (a quarter of a machine run) of sequencing with the Illumina Genome Analyzer (GA) IIx to achieve 20× or more coverage on ~80% of the targeted sequences for one sample; in other words, four samples can be sequenced per machine run. Alternatively, each run of the ABI SOLiD 3 Plus instrument can generate up to 1 billion 50-base paired-end reads, and a total of 40 Gb of mapped genomic sequence, such that 12 exomes can be resequenced at comparable coverage with each machine run (S. Nelson, unpublished results). Thus, whole transcriptome resequencing is economically feasible on the current generation of capture tools and sequencing devices and, in principle, can be performed for under $2000 per genome.
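The sequencing-requirement projection in the preceding paragraph is straightforward arithmetic; the sketch below works through it for arbitrary inputs. The default numbers mirror the ones quoted above, and the function name and interface are illustrative assumptions.

```python
def channels_needed(target_mb=33.0, mean_fold_needed=55.0,
                    gb_per_channel=2.5, unique_mapping_rate=0.5,
                    on_target_rates=(0.60, 0.85)):
    """Estimate Illumina GA IIx channels needed to reach a given mean
    on-target coverage (e.g. ~55x mean gave 20x on ~76-80% of bases).
    All rates and defaults are illustrative, taken from the text above."""
    needed_bases = target_mb * 1e6 * mean_fold_needed
    estimates = {}
    for rate in on_target_rates:
        on_target_per_channel = gb_per_channel * 1e9 * unique_mapping_rate * rate
        estimates[rate] = needed_bases / on_target_per_channel
    return estimates

print(channels_needed())   # roughly 1.7-2.4 channels, i.e. about 2 per sample
```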
Conclusions
Capturing genomic regions for sequencing has a wide scope of application. Many genetic studies with linkage or association signals will benefit immensely as it becomes possible to reliably and inexpensively capture the region of interest and perform high throughput, shotgun sequencing. Additionally, improvements in exonic enrichment protocols will usher in an era of cost effective sequencing of all the amino acid coding bases of genomes. This will lead to more rapid identification of the causative genes in many disorders.
Array Design
We chose to capture the exonic sequence of ~3,000 cancer genes. Two cancer gene lists, the 'cancer gene census list' and the 'CGP (cancer genome project) planned studies list', were retrieved from the Wellcome Trust COSMIC (Catalogue of Somatic Mutations in Cancer) database and combined [22]. Boundaries for exons and UTRs (untranslated regions) were retrieved from the RefSeq database, and any overlapping intervals were merged to generate non-redundant contiguous intervals. In total, 3,038 genes (31,678 exons) spanning 8.4 Mb were included in the list. Based on preliminary capture data (not shown), we tiled the probes every 120 bp on average, so that the distance between the start positions of two consecutive probes is ~120 bp and the regions between probes are covered by the two flanking probes, resulting in the same coverage as the regions within the probes. Both forward and reverse strands of each probe region were spotted on the array, and 3 genes of higher interest (150 exons) were spotted at 12× (6× for each strand). Instead of using Nimblegen arrays as other studies have, we used Agilent 244K custom 60-mer arrays. Probe design was performed using the Agilent e-array system (http://www.agilent.com) with the repeat-mask function on. This resulted in the complete loss of 155 exons, and ~27% of the exons were only partially covered (see Additional file 2).
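Merging overlapping exon and UTR intervals into non-redundant contiguous targets, as described above, is a standard sweep over sorted intervals. A minimal Python sketch follows; the input format (lists of (start, end) pairs per chromosome) and function name are assumptions for illustration.

```python
def merge_intervals(intervals):
    """Merge overlapping or abutting (start, end) intervals into
    non-redundant contiguous intervals (0-based, end-exclusive)."""
    merged = []
    for start, end in sorted(intervals):
        if merged and start <= merged[-1][1]:      # overlaps or abuts previous
            merged[-1] = (merged[-1][0], max(merged[-1][1], end))
        else:
            merged.append((start, end))
    return merged

# exon + UTR boundaries for one gene region (illustrative coordinates)
print(merge_intervals([(100, 250), (200, 300), (400, 480), (470, 520)]))
# -> [(100, 300), (400, 520)]
```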
Sample Preparation
We used paired normal and tumor whole-genomic DNA as the starting material. The tumor DNA was extracted from a glioblastoma (GBM) specimen and the normal DNA was extracted from a blood sample of the same patient. The collections were approved by the UCLA IRB and the samples were processed at the Biological Samples Processing Core at UCLA using an Autopure LS™ nucleic acid purification instrument from Gentra Systems. Both the normal and tumor samples were run on the Affymetrix GeneChip Human Mapping 250K array for global genomic examination and comparison. Both chromosomal and regional copy number aberrations were detected in the tumor sample, with hallmarks of glioblastoma such as EGFR amplification and chromosome 10 loss observed. A detailed description of the mutational landscape and the chromosomal abnormalities of this cancer sample is in preparation (Lee et al.). As we were sequencing with the Illumina Genome Analyzer, we followed the Illumina library generation protocol version 2.3. Five μg of high molecular weight whole-genomic DNA from each sample were diluted in 150 μl water as the starting material. The DNA was sheared using a sonicator (Bioruptor, Diagenode) for 1 hr at high power to generate short fragments. The size of the sheared product ranged from 150 bp to 400 bp, with a median size of around 200-250 bp. The sample was concentrated in 30 μl of elution buffer (EB) after purification using the QIAquick PCR Purification Kit (Qiagen). To repair 3' or 5' overhangs and
Table 3 Troubleshooting guide for the capture protocol.

Problem: Genome is not fragmented after sonication.
Possible reason: The buffer condition is not suitable for sonication.
Solution: Purify the DNA. We used the QIAGEN PCR Purification Kit, eluted in EB, to make this work.

Problem: Nothing is visible on the gel after 1 hr of electrophoresis during library generation.
Possible reason: When the starting amount of DNA is small, or there is significant DNA loss during the process for various reasons, the DNA may be smeared over a wide range after an hour of electrophoresis and not visible on the gel.
Solution: It is good to check the gel after a ~10 min run, before the DNA has smeared over a wide range. Even though nothing is visible, it is highly possible that the DNA is still present. Proceed to the next step regardless and see if the PCR amplifies anything.

Problem: Cannot collect ~400 μl after the stripping step.
Possible reason: The gasket slide was re-used; the array slide was lifted up too quickly; or a different buffer was used for the 95°C stripping process.
Solution: Do not re-use the gasket slide. The solution can be flushed into a collection boat and collected. When using multi-array slides such as 2×105K, 4×44K or 8×15K, it is still possible to run the capture protocol as indicated. After the stripping, the array slide should be lifted up slowly to prevent contamination. The solution tends to stay within the gaskets.

Problem: Not enough DNA amplified after the first stripping.
Possible reason: Stripping was not efficient.
Solution: Another stripping process can be done and checked for leftover genomic fragments still hybridized to the probes. Since it does not matter if the stripped solution contains contaminants, as long as the contaminants do not have adapters ligated at their ends, it is possible to continue the stripping process thoroughly until no further products are amplified.

Figure 4 Percentage of targeted bases sequenced at various minimum coverage for different mean coverages. The x-axis represents the per-base coverage level and the y-axis represents the percentage of targeted bases covered at or above that coverage.
Washing
After hybridization, the arrays were washed according to the Agilent CGH wash procedure A protocol with the 2 nd wash extended to 5 minutes for increased stringency. The chamber was disassembled and the array slide was separated from the gasket slide in a glass dish filled with Oligo aCGH wash buffer 1 (Agilent). The array slide was placed on a slide-rack in another glass dish filled with Oligo aCGH wash buffer 1 and washed for 5 min at room temperature with stirring on a magnetic stirring plate. The slide rack was carefully moved to a glass dish filled with Agilent Oligo aCGH wash buffer 2 that was pre-heated to 37°C in a water bath and was washed for 5 min. After washing, the arrays were transferred back to a glass dish with Agilent Oligo aCGH wash buffer 1 until the next step was ready.
Stripping 490 μl of 1× Titanium Taq PCR Buffer (Clontech) preheated to 95°C was dispensed to a new gasket slide and covered with the array slide that was washed. After securely locking the arrays in the chamber, the array was incubated in the 95°C rotating oven for 10 min at 20 rpm. After this stripping process, the chamber was disassembled and the array slide was carefully lifted up from one side so that the solution converged on the gasket slide.
Pipette was used to collect the solution and transfer to a 1.5 mL microtube. This step needed to be done promptly as the solution was heated to 95°C and it started evaporating quickly. Collected solution was approximately 400 μl. The sample was aliquoted into 4 tubes so that when added with the Illumina primer pair (final concentration 0.1 μM), enzyme (final concentration 0.5×), and dNTPs (final concentration 250 μM each), the final volume were 100 μl. The stripped DNA was amplified by 15 cycles of PCR as described previously. The samples were purified with QIAquick PCR purification kit, consolidating and eluting the sample in 50 μl of EB. The concentration of DNA was measured and the size was checked on a 2% agarose gel to confirm that the size matched the size extracted from the gel in the previous step. Hybridization, washing and stripping steps were repeated. The stripped DNA was amplified under the same condition as before. After checking the concentration and template size again, the sample was diluted in a final concentration of 10 nM, which is the working concentration for cluster generation.
Sequencing
The Illumina flowcell was prepared strictly according to the manufacturer's protocol, and the clusters were sequenced on the Genome Analyzer using the manufacturer's standard recommended protocols. The image data produced were converted to intensity files, and the sequential image data were processed through the "Firecrest" and "Bustard" algorithms provided by Illumina to call the individual sequence reads.
Alignment
We used the Blat-like Fast Accurate Search Tool (BFAST, in submission) to map each sequence read back to its location in the reference genome. Here we used the NCBI human genome build 36 as our reference genome [23]. Ten different genome indexes (Table 5) were built for use in the BFAST alignment process to be robust to errors and variants in the short (typically 36 base pair) reads used throughout this project. For each read, the potential genome locations identified by BFAST were evaluated using a standard local alignment algorithm and ranked by score (see http://genome.ucla.edu/bfast). The best-scoring alignment for each read was chosen, while reads with multiple top-scoring alignments or no alignments were discarded. MAQ was run on the same dataset to ensure that no bias was introduced by the alignment program. The aligned reads were post-filtered to remove reads with mapping quality 0, i.e., reads aligned to multiple places in the genome. The differences in the number of reads uniquely mapped to the whole genome (paired t-test p-value: 0.14) and in the specificity of each experiment (paired t-test p-value: 0.13) were not significant between the BFAST-aligned and MAQ-aligned data (Additional file 3). Two summary reports were created once the most likely alignment for each read was identified by BFAST. The first report was a BED file describing mismatches between the Illumina sequences and their corresponding genomic sequence. The second report was a Wiggle (WIG) file describing sequence coverage at each position along the chromosomes. These file formats were used because they are compatible with the popular UCSC genome browser. The open-source SeqWare project (in submission; http://seqware.sourceforge.net), which provides a LIMS tool for tracking samples (SeqWare LIMS) and a pipeline for sequence analysis (SeqWare Pipeline), was used throughout this work. It streamlined the sequence processing by running the Illumina-provided image analysis and base-calling tools, BFAST, and the report generation code, which itself is part of the SeqWare project. The BED file was further mapped to the dbSNP129 database to filter known SNPs from the de novo variants.
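After the multi-mapped (mapping quality 0) reads are filtered out, the on-target specificity reported in the next section is simply the fraction of remaining reads that land within a padded target interval. A small Python sketch of that calculation is shown below; the read and target data structures, the ±100 bp padding value (taken from the Data Analysis section), and the function name are illustrative assumptions.

```python
import bisect

def on_target_specificity(read_positions, target_intervals, pad=100):
    """Fraction of uniquely mapped reads whose start position falls within
    a target interval extended by `pad` bp on each side.

    read_positions: dict chrom -> sorted list of read start positions.
    target_intervals: dict chrom -> list of (start, end) target intervals;
    assumes the padded intervals do not overlap (targets were merged upstream).
    """
    on_target = total = 0
    for chrom, positions in read_positions.items():
        total += len(positions)
        for start, end in target_intervals.get(chrom, []):
            lo = bisect.bisect_left(positions, start - pad)
            hi = bisect.bisect_right(positions, end + pad)
            on_target += hi - lo
    return on_target / total if total else 0.0
```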
Data Analysis
We counted the sequences that mapped back to the targeted regions to calculate the specificity. From the WIG file for each chromosome, the base positions that mapped within the target intervals were filtered, and the sequence counts for each base position were summed. The target intervals were defined as extending 100 bp upstream and downstream from the final oligo intended to capture each exon interval. To compare the copy number of each chromosome between the normal and tumor samples from a patient, sequence counts were first tallied for each target interval and divided by the total sequence counts within all target intervals for normalization. The mean sequence count per chromosome was then calculated and compared between the normal and tumor samples. To compare the background noise between the flanking regions of the EGFR and FOXP2 genes on chromosome 7, only the coverage in the non-targeted region (excluding the targeted region +/-100 bp) was considered. A 200 kb moving average of the coverage was calculated across each region.
and the National Institute of Mental Health (R01 MH071852). The replication study was funded by grants D22657 and DE019567. | 2017-08-03T01:21:12.260Z | 2009-12-31T00:00:00.000 | {
"year": 2009,
"sha1": "d42b9a278eebd1a9170b795c29b67dbe7e842b90",
"oa_license": "CCBY",
"oa_url": "https://bmcgenomics.biomedcentral.com/counter/pdf/10.1186/1471-2164-10-646",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "991bf15d01aaa43d86c67533a92296b4810ede10",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Medicine",
"Biology"
]
} |
209377448 | pes2o/s2orc | v3-fos-license | Investigation on the Durability of PLA Bionanocomposite Fibers Under Hygrothermal Conditions
Hygrothermal ageing of neat poly(lactic acid) (PLA), PLA/microcrystalline cellulose (MCC) and PLA/cellulose nanowhisker (CNW) fibers prepared by a melt-spinning process was investigated at 95% relative humidity (RH) and two temperatures, i.e., 45 and 60°C. The PLA bionanocomposite fibers were melt compounded at a filler content of 1 wt% in the presence of PLA-grafted-maleic anhydride (PLA-g-MA) (7 wt%) used as a compatibilizer. The influence of the type of cellulosic filler and of the temperature on the hydrolytic degradation kinetics was evaluated through changes in the molecular structure and physico-mechanical properties of the samples. The study showed that all fibers exposed to hygrothermal ageing underwent a chain scission mechanism responsible for the decrease in average molecular weight, thermal stability and tensile properties, which was more pronounced after 14 days at 60°C. Furthermore, an increase in crystallinity with a fast crystallization process was noticed for all exposed fibers. The study revealed that the hydrolysis rate increased by 5, 6 and 7 times after 14 days at 60°C compared to 25 days at 45°C for neat PLA, PLA/PLA-g-MA/MCC1 and PLA/PLA-g-MA/CNW1 fibers, respectively. This has been ascribed to the catalytic behavior of the cellulosic fillers, which promotes water diffusion into the PLA matrix. Finally, the study concludes that neat PLA fibers withstand hygrothermal ageing better than the PLA/cellulose bionanocomposites. The durability of the PLA fibers to hygrothermal degradation is established in the following order: PLA > PLA/PLA-g-MA/MCC1 > PLA/PLA-g-MA/CNW1.
INTRODUCTION
The development of biodegradable and renewable polymeric materials as natural fiber composites is increasing significantly regarding their economic and ecological advantages (Vilaplana et al., 2010). The interest shown in biodegradable polymers meets the concerns of preserving the environment by minimizing the use of generally polluting petrochemical synthetic polymers and also by avoiding dependence on non-renewable resources. In this context, PLA, which belongs to the family of aliphatic polyesters, is one of the main representatives of the biodegradable polymers (Hajba et al., 2015). Moreover, PLA has good mechanical and optical properties, which are comparable to the conventional synthetic polymers, like polyolefin and PET. It is therefore widely used in many applications involving food packaging, automotive parts, disposable tableware, sutures and drug delivery device (Chow et al., 2014). However, expanding the utilization of PLA to other industrial fields is rather limited due to its slow crystallization speed and brittleness to some extent (Sun et al., 2017). To overcome these issues, many studies have shown that adding natural fibers or cellulose nanomaterials is an effective, useful method to reinforce PLA (Mokhena et al., 2018). Cellulose due to its abundant availability, renewability, biodegradability, high strength and stiffness, could replace advantageously layered silicates, carbon nanomaterials and other metallic oxide fillers. According to the literature (Rahman et al., 2014), the theoretical modulus of the native cellulose is estimated at 167.5 GPa, which is one of the strongest and stiffest natural fibers available. Cellulose materials as cellulose nanofibers (CNF), cellulose nanowhiskers (CNW), and microcrystalline cellulose (MCC) have a high potential to act as reinforcing agents in biopolymers. However, the highly hydrophilic surface of cellulose makes it difficult to prevent fiber aggregation in hydrophobic polymers, such as PLA (Wang and Drzal, 2012). There are three main approaches available to improve the dispersion and the interface bonding of the cellulosic filler with the polymer matrix, through either polymer or filler modification, or the addition of a third component, i.e., a coupling agent, such as maleic anhydride grafted polymers (Hassaini et al., 2017;Hamad et al., 2018). In the current paper, which is a continuation of a previous work (Aouat et al., 2018), PLA-g-MA was used as the compatibilizer for the PLA/cellulose bionanocomposites to improve the matrix-filler affinity.
Furthermore, the sensitivity to moisture uptake is a well-known weakness that limits the performance of biocomposite materials, due to the hydrophilic nature of the biopolymer matrix and/or of the natural reinforcement (Vilaplana et al., 2010). Moisture uptake can induce swelling of the biocomposite, which may impair interfacial strength and subsequently generate cracks in the matrix (Bayart et al., 2017). The swelling phenomenon is attributed to the interaction of the fiber cell-wall components (containing -OH, -COOH, and other polar groups) with water molecules via hydrogen bond formation (Islam et al., 2010). These are serious issues for long-term applications where the biocomposites may be exposed to the combined effect of high humidity and temperature. Although a recent publication (Mangin et al., 2018) has shown that incorporating miscible PMMA into flame-retarded PLA improves its resistance to hydrothermal aging, further studies are necessary to better understand the behavior of such materials in a highly humid atmosphere or in water. This is a prerequisite for any outdoor application. Despite the technological importance of this research theme, few studies are unfortunately available in the literature on the degradation of PLA/cellulose biocomposite materials under hygrothermal conditions, and even fewer on melt-spun PLA fibers (Xian et al., 2018).
Therefore, the objective of this paper was to investigate the influence of a combined humid atmosphere and temperature on the morphology, chemical structure and physical properties of neat PLA, PLA/PLA-g-MA/MCC1, and PLA/PLA-g-MA/CNW1 bionanocomposite fibers. The hygrothermal aging was conducted in a climatic chamber at 95% RH and at two temperatures: 45 and 60°C. The effect of filler size on the rate of hydrolysis of the PLA fibers was also investigated. The choice of 45 and 60°C as the hygrothermal degradation temperatures was not arbitrary; it was justified by the fact that PLA fibers are in the glassy state at 45°C and in the rubbery state at 60°C, considering that the glass transition temperature of PLA is around 60°C. Furthermore, 60°C is often the temperature used in the clearing treatment of textile fibers in industry.
Materials Used
PLA was a fiber-grade resin (6202D) supplied by NatureWorks LLC. According to the manufacturer, the main physical characteristics of the polymer are as follows: density = 1.24 g/cm³, glass transition temperature (Tg) = 60°C, and melting point (Tm) ≈ 160-170°C.
Microcrystalline cellulose (MCC) was supplied by Sigma-Aldrich under the trade name Avicel PH 101. MCC was also used as the raw material for extracting cellulose nanowhiskers (CNW) by using sulfuric acid hydrolysis in aqueous media (Aouat et al., 2018). Sulfuric acid 95-97% was purchased from Sigma-Aldrich. PLA-g-MA (∼3 wt.% of maleic anhydride) used as the compatibilizer for the cellulosic PLA fibers, was prepared in the laboratory Materia Nova (Mons, Belgium) by reactive extrusion using a Leistritz twin-screw extruder (L/D = 50).
Preparation of PLA/Cellulose Bionanocomposites
PLA and PLA bionanocomposite fibers were manufactured by two-step process. The first one consisted of preparing pellets by a Thermo-Haake co-rotating intermeshing twin-screw extruder (L/D = 25) according to the compositions reported in Table 1. In the second step, the pellets were used to obtain the multifilament fibers using a melt-spinning machine, Model Spinboy I, manufactured by Busschaert Engineering. Elaboration of PLA fibers has been detailed in a recent paper (Aouat et al., 2018).
Hygrothermal Aging
Both PLA and PLA bionanocomposite fibers in form of coils were subjected to hygrothermal aging in a climatic chamber of Model Excal 2221-HA at 95% RH and two temperatures, i.e., 45 and 60 • C. The fibers were placed on metal grid in the center of the enclosure having the following dimensions: 50 × 50 × 75 cm. The climatic chamber used is equipped with the Spirale R software, which allows the aging parameters to be controlled. Fiber samples were removed periodically with time for characterization tests.
Technical Characterization
Water Uptake

The moisture uptake of the PLA fibers was estimated by weighing. The samples removed from the climatic chamber were immediately weighed (m2) to avoid any moisture loss, and weighed again after sampling before being replaced in the chamber. The percent moisture uptake (%H) is determined by Equation (1):

%H = %H1 + [(m2 − m1)/m1] × 100 (1)

where %H is the percent moisture uptake, %H1 is the percent moisture uptake at the previous removal, m1 is the sample mass at the previous removal, and m2 is the sample mass currently noted. In addition, the water uptake capacity of the exposed PLA and PLA bionanocomposite fibers in the climatic chamber was also expressed in terms of a diffusion coefficient. Assuming that the PLA fibers have a cylindrical shape, the water diffusivity in the matrix is expressed by Equation (2) (Hossain et al., 2014), where D is the water diffusion coefficient (m².s⁻¹), d is the average diameter of the fiber (m), Ws is the water uptake at saturation (%), and ((W2 − W1)/(√t2 − √t1))² is the square of the slope of the linear portion of the curve of water uptake vs. the square root of time.
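As a quick illustration of the bookkeeping behind the incremental weighing procedure, the short Python sketch below accumulates the percent moisture uptake from successive mass readings. It is a minimal sketch assuming the cumulative form of Equation (1) written above; the function name and data layout are not from the paper.

```python
def moisture_uptake_series(masses):
    """Cumulative percent moisture uptake %H from successive weighings.

    masses: list of sample masses (g), starting with the initial (dry) mass.
    Each step adds the relative mass gain since the previous weighing,
    following the incremental form of Equation (1).
    """
    h = [0.0]
    for m_prev, m_curr in zip(masses, masses[1:]):
        h.append(h[-1] + (m_curr - m_prev) / m_prev * 100.0)
    return h

# e.g. a fiber coil weighed at successive removals from the chamber
print(moisture_uptake_series([1.000, 1.004, 1.007, 1.009]))
```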
Viscosimetric Measurements
Viscosimetric measurements were carried out in an Ubbelohde viscometer at 30°C with chloroform as the solvent. Assuming the kinetic energy and shear corrections to be negligible, the Huggins equation was applied to estimate the intrinsic viscosity [η]. The latter is related to the viscosity-average molecular weight (Mv) by the Mark-Houwink-Sakurada equation, [η] = K·Mv^a, where K and a are empirical constants. For the PLA/chloroform system at 30°C, K = 1.31 × 10⁻⁴ dl/g and a = 0.759 (Persson and Mikael, 2013). The extent of hydrolytic degradation of the PLA fibers and their bionanocomposites is determined from the number of main-chain scissions, expressed as the scission index (SI). SI is defined according to Equation (4) (Remili et al., 2009):

SI = (Mv0/Mv) − 1 (4)

where Mv0 and Mv are the viscosity-average molecular weights before and after hygrothermal exposure of the fibers. In addition, the hydrolysis rate was also followed through the hydrolysis rate constant (k), determined by the linear regression method.
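The conversion from measured intrinsic viscosity to Mv and then to a scission index reduces to the two relations above. The Python sketch below chains them together; the function names and example values are illustrative, and the scission-index form assumes SI = Mv0/Mv − 1 as written above.

```python
K, A = 1.31e-4, 0.759   # Mark-Houwink constants, PLA/chloroform at 30 C (dl/g)

def viscosity_avg_mw(intrinsic_viscosity_dl_g):
    """Viscosity-average molecular weight from [eta] = K * Mv**a."""
    return (intrinsic_viscosity_dl_g / K) ** (1.0 / A)

def scission_index(mv0, mv):
    """Number of main-chain scissions per original chain (SI = Mv0/Mv - 1)."""
    return mv0 / mv - 1.0

mv0 = viscosity_avg_mw(1.20)   # unaged fibers (illustrative [eta] values)
mv = viscosity_avg_mw(0.85)    # after hygrothermal exposure
print(mv0, mv, scission_index(mv0, mv))
```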
Tensile Measurements
The tensile measurements were conducted on twisted fibers (80 monofilaments). A mechanical tester system MTS associated with a force sensor of 1 kN was used. In order to adjust the clamp load and to grip the sample with the least amount of stress, a special design for testing yarns was used (capstan grips). Capstan roller in addition to vise action allows the sample to be both clamped at the desired level and to be wound around the capstan to distribute the remaining stress via friction. The tensile properties were measured according to ISO 2062 standard test method. A loading speed of 200 mm/min and a distance of 200 mm between grips were applied. All mechanical tests were carried out by using specimens previously stored for at least 48 h at 20 ± 2 • C at 50 ± 3% RH. The values were averaged out over five measurements for each sample.
Because of the variation in fiber fineness, the tensile strength is expressed as tenacity (cN/tex), a specific value related to fineness (force per unit fineness). The fineness in tex (g/km) was determined by dividing the mass of the fibers by their known length (Milanovic et al., 2012).
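Tenacity is simply the breaking force normalized by the linear density, so the calculation is short; the sketch below spells it out. The function names and example values are illustrative only, not measurements from the paper.

```python
def fineness_tex(mass_g, length_km):
    """Linear density in tex (grams per kilometre of yarn)."""
    return mass_g / length_km

def tenacity_cn_per_tex(breaking_force_n, tex):
    """Tenacity in cN/tex: breaking force (N converted to cN) per unit fineness."""
    return (breaking_force_n * 100.0) / tex

tex = fineness_tex(mass_g=3.6, length_km=0.010)   # 10 m of yarn weighing 3.6 g -> 360 tex
print(tex, tenacity_cn_per_tex(breaking_force_n=50.0, tex=tex))   # ~13.9 cN/tex
```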
Differential Scanning Calorimetry (DSC)
DSC thermograms of the PLA fibers were recorded using a 2920 Modulated DSC (TA Instruments) before and after exposure to hygrothermal aging. Dried samples with an average weight of about 10 mg were placed in hermetically sealed DSC capsules under a nitrogen atmosphere at 50 ml/min. The heating and cooling steps were carried out at a rate of 10°C/min from 20 to 200°C and from 200 to 20°C, respectively. The glass transition temperature (Tg), cold crystallization temperature (Tcc) and melting temperature (Tm) were determined from the second heating cycle of the PLA fibers. The crystallinity index (Xc) was calculated according to Equation (5) (Dadbin and Kheirkhah, 2014):

Xc (%) = [(ΔHm − ΔHcc)/(ΔHm0 × W)] × 100 (5)

where ΔHm is the melting enthalpy of the sample, ΔHm0 is the melting enthalpy of 100% crystalline PLA, taken as 93 J/g (Fortunati et al., 2012), ΔHcc is the cold crystallization enthalpy, and W is the weight fraction of PLA in the bionanocomposite fibers.
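For completeness, the crystallinity calculation of Equation (5) can be expressed in a couple of lines of Python; this is a minimal sketch and the example enthalpies are illustrative, not measured values from the paper.

```python
DH_M0_PLA = 93.0   # melting enthalpy of 100% crystalline PLA, J/g

def crystallinity_percent(dh_melt, dh_cold_cryst, pla_weight_fraction=1.0):
    """Crystallinity index Xc (%) from DSC enthalpies, following Equation (5)."""
    return (dh_melt - dh_cold_cryst) / (DH_M0_PLA * pla_weight_fraction) * 100.0

# neat PLA vs. a bionanocomposite containing 92 wt% PLA (illustrative enthalpies, J/g)
print(crystallinity_percent(38.0, 25.0))
print(crystallinity_percent(40.0, 20.0, pla_weight_fraction=0.92))
```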
Wide Angle X-Ray Scattering (WAXS)

WAXS measurements were carried out on a Philips PW1050 diffractometer. The X-ray patterns were recorded over a 2θ range of 2-40° with a step of 0.02° and a step time of 2 s. The wavelength of the Cu Kα radiation was λ = 0.154 nm, and the spectra were obtained at 20 mA with an accelerating voltage of 40 kV.
Thermogravimetric Analysis (TGA)
Thermogravimetric analysis (TGA) was performed on a Perkin Elmer Pyris-1 TGA thermo-balance (PerkinElmer, Waltham, MA, USA) operating under N 2 atmosphere in alumina crucibles containing around 10 mg of material and ranging from 30 to 900 • C at a heating rate of 10 • C/min.
Scanning Electron Microscopy (SEM)
SEM images of the fibers were recorded using a QUANTA 200 FEG (FEI Company) environmental scanning electron microscope at an acceleration voltage of 7-10 keV. Prior to any observation in scanning mode (SEM), the transversal surfaces of the fibers were sputter coated with carbon using a Carbon Evaporator Device CED030 (Balzers), to ensure good surface conductivity and to avoid any degradation.
Transmission Electron Microscopy (TEM)
TEM observations were carried out on a JEOL 1200EX TEM scanning electron microscope operating at an accelerating voltage of 100 kV. The samples were embedded in a LR white resin and ultrathin-sectioned at 70 nm using a Leica EM UC7 ultra-microtome with a diamond knife Ultra 45 (Nissei Sangyo). The sections were transferred to carbon-coated Cu grids of 300 meshes.
RESULTS AND DISCUSSION
Water Uptake (WU)

Belonging to the family of aliphatic polyesters, PLA and its bionanocomposites absorb moisture when they are immersed in water or exposed to a humid atmosphere. The moisture uptake phenomenon leads to property changes and also degrades the materials through hydrolysis (Elsawy et al., 2017). In this regard, the water uptake (WU) kinetics of the PLA, PLA/PLA-g-MA/MCC1, and PLA/PLA-g-MA/CNW1 fibers were determined at 45 and 60°C.
The relative plots are shown in Figures 1A-D. Furthermore, the values of WU at saturation, diffusion coefficient, and activation energy are also provided in Table 1. Figures 1A,B displays the curves of WU as a function of exposure time for PLA and PLA bionanocomposite fibers at 45 and 60 • C, respectively. As expected, WU capacity of PLA matrix is lower compared to that of its bionanocomposites. Nevertheless, an increase in WU is observed for all fibers with increasing both exposure time and temperature, being less pronounced for PLA. It is also observed that for PLA bionanocomposite fibers filled with MCC1, WU % is much higher than those filled with CNW1, whatever the temperature. This may be due to higher level of crystallinity in PLA/PLA-g-MA/CNW1. Indeed, the downward trend in WU of highly crystalline polymers has already been reported by many authors (Zhou and Xanthos, 2008;Balakrishnan et al., 2011;Hossain et al., 2014;Mitchell and Hirt, 2015), which is attributed on one hand, to the barrier effect of impermeable crystallites, and on the other hand, to the tortuosity of water diffusion into the polymeric matrix. In addition, the filler specific surface is another parameter, which has to be considered, since the larger the filler specific surface, the higher the amount of water trapped. Figures 1C,D show that WU of all fibers increases almost linearly with the root of time at 60 • C compared to 45 • C before reaching saturation. This suggests that water diffusion in PLA fibers is governed by Fick's law, which is in agreement with the data reported in the literature (Yew et al., 2005;Balakrishnan et al., 2011;Ndazi and Karlsson, 2011;Chow et al., 2014;Gil-Castell et al., 2014;Hossain et al., 2014;Yu et al., 2018). The increase of WU of PLA and its bionanocomposites with time may also result from the formation of strong polar groups during hydrolysis process, mainly hydrophilic acid functions and also from the increase of the free volume in PLA matrix (Mortaigne, 2005;Zhou and Xanthos, 2008). Indeed, Gupta et al. (2012) reported a decrease in contact angle of PLA with time and subsequently, an increase of its polarity in the course of the hydrolysis process. Table 1 shows that the activation energy value of PLA fiber is much higher than that of PLA bionanocomposites with 38 and 72% increases compared to that of PLA/PLA-g-MA/CNW1, and PLA/PLA-g-MA/MCC1, respectively. The lower WU value of PLA results from its higher relative hydrophobic character compared to that of the bionanocomposites. This is consistent with the literature data (Yew et al., 2005;Zhou and Xanthos, 2008;Balakrishnan et al., 2011;Yu et al., 2018) reporting WU values of PLA ranging from 0.5 to 1.0%. In addition, the significant mass gain of PLA bionanocomposites over PLA mainly could be ascribed to the cellulosic fillers, which are highly hydrophilic materials. The presence of hydroxyl groups (OH) in MCC and CNW is favorable for the occurrence of hydrogen bonding with moisture (Elsawy et al., 2017). This is in a good agreement with many authors who reported that the incorporation of natural hydrophilic fillers to PLA increases its WU capacity. These include cellulose nanowhiskers (Hossain et al., 2012), sisal fibers (Gil-Castell et al., 2014, ramie fibers (Yu et al., 2018), coconut fibers (Wu, 2009), and wood pulp (Azwar et al., 2012).
Scission Index Evolution
The hydrolytic degradation kinetics of PLA fibers and its bionanocomposite were investigated by determining the scission index (SI) with exposure time. The plots are shown in Figures 2A,B for PLA and the bionanocomposite fibers at 45 and 60 • C, respectively. Furthermore, Table 2 summarizes the k values, which give the hydrolysis rate of the exposed fibers.
In Figures 2A,B, there is an increasing evolution of SI curves with time for all PLA fibers whatever the temperature meaning that the degradation mechanism predominantly occurring in the matrix is chain scission (Gajjar and King, 2014;Gil-Castell et al., 2014). Indeed, the literature (Elsawy et al., 2017) reported that under humid conditions, hydrolysis reactions occur between PLA ester groups and water molecules resulting in chain scission forming chain segments with low molecular weight (Girdthep et al., 2016;Lins et al., 2016;Lorenzo et al., 2016;Mohammad et al., 2016;Pinese et al., 2016;Stloukal et al., 2016;Yang et al., 2016). Moreover, the hydrolysis of PLA bionanocomposites is strongly dependent on the intrinsic characteristics of PLA matrix, the nature of fillers, their dispersion in the polymer and the environment conditions (humidity and temperature) (Zhou and Xanthos, 2008;Maharana et al., 2009). In this regard, an increase in temperature from 45 to 60 • C, results in a fast hydrolysis process of PLA. Thus, the k values given in Table 3, indicate that all PLA fibers are more sensitive to hydrolysis at 60 • C than 45 • C. Indeed, the k values of PLA, PLA/PLAg-MA/MCC1 and PLA/PLA-g-MA/CNW1 fibers recorded after 14 days at 60 • C are 5, 7, and 8 times higher than after 25 days at 45 • C, respectively. At 60 • C, which is the T g of PLA, the chain mobility increases significantly, thus promoting water diffusion in the amorphous phase of PLA and subsequently a faster hydrolysis (Zhou and Xanthos, 2008;Balakrishnan et al., 2011;Castro-Aguirre et al., 2016). This is consistent with the data published by Copinet et al. (2004) and Zhou and Xanthos (2008) who reported a faster degradation of PLA at 60 • C than at 45 and 50 • C. From Table 3, the catalytic role of cellulosic fillers on PLA hydrolysis is highlighted, especially at 60 • C. An increase in the k value by almost 53 and 80% is recorded for PLA/PLA-g-MA/MCC1 and PLA/PLA-g-MA/CNW1, respectively compared to that of neat PLA. This result is attributed to filler hydration, which is one of the key parameters responsible for accelerating the polymer hydrolytic degradation (Loo et al., 2005;Zhou and Xanthos, 2008). Accordingly, hydration phenomenon is explained by the easier accessibility to water of PLA in the presence of cellulosic fillers, which is in line with the water diffusion coefficient values shown in Table 1. Furthermore, the data provided in Table 2 show clearly the effect of the specific surface of the cellulosic filler on the hydrolysis of PLA. Although, the accessibility to water of PLA/PLA-g-MA/MCC1 is easier than that filled with CNW1 as shown in Table 1, it is however observed that the latter is more vulnerable to hygrothermal degradation. Indeed, Figure 2B shows the presence of a short induction period of about 5 days for PLA/PLA-g-MA/MCC1 fibers at 60 • C up to 14 days, whereas the chain scission mechanism starts up on exposure for both PLA and PLA/PLA-g-MA/CNW1 fibers. This behavior is explained as a result of the high capacity of MCC to store the absorbed water, therefore reducing the wettability of PLA matrix. Unlike, CNW leads to better and homogeneous hydration of PLA matrix, thus promoting hydrolysis. Similarly, Kummerer et al. (2011) reported that cellulose nanocrystals are more sensitive to degradation than MCC in an aqueous environment. 
Table 2 TABLE 3 reports also the values of timescale for diffusion (r 2 /De) for all fibers, which are much higher than those of timescale of reaction (1/k) at both 45 and 60 • C. This indicates that the process of hydrolysis occurs mainly through a series of reactions rather than by a water diffusion process (Mitchell and Hirt, 2015). Figures 3-5 shows SEM images of both external and crosssectional surfaces of PLA and PLA bionanocomposite fibers before exposure and after 25 days at 45 • C and 14 days at 60 • C. Figure 3a displays the external surface fiber of neat PLA before exposure. The surface is smooth and regular. After 25 days at 45 • C, no noticeable change was observed on the surface of neat PLA as shown in Figure 3b. However, after 14 days at 60 • C, some cracks were formed which were preferentially localized on the fiber sides (Figure 3c). Similar morphology has been observed by Yuan et al. (2002) on hygrothermal degradation of PLA fibers. In Figure 3d, the cross sectional surface fiber exhibits a homogeneous morphology, which seems intact without any damage. This result indicates that the hygrothermal aging of PLA occurs on the fiber surface rather than in the bulk. This is explained by the weak polarity of PLA which prevents the water diffusion from the surface to the bulk of material (Gupta et al., 2012) in concordance with the WU data reported in Table 1. Figure 4a shows the SEM micrograph of the external surface of PLA/PLA-g-MA/MCC1 fiber before exposure. Although the surface appears smooth, its diameter is variable. Indeed, the diameter varies along the fiber passing from 65 to 100 µm, which is probably due to the presence of MCC aggregates of various sizes in PLA matrix. Figure 4b displays the external surface of PLA fiber filled with MCC1 after 25 days of exposure at 45 • C. The surface seems also smooth, however a decohesion between MCC and PLA matrix was observed. This phenomenon became more pronounced after 14 days at 60 • C as shown in Figure 4c since many cracks were formed randomly at the fiber surface, playing a role of degradation precursors. Conversely to PLA fiber, it can be seen in Figure 4d that the hydrolytic degradation of PLA/PLA-g-MA/MCC1 occurs not only on the fiber surface, but also in the bulk as clearly demonstrated by the formation of internal crack starting from the surface to the filler aggregate. Figure 5a shows regular PLA/PLA-g-MA/CNW1 fibers with a diameter very close to that of neat PLA. The surface morphology of the fibers remained almost unchanged after 25 days of exposure at 45 • C (Figure 5b). However, after 14 days at 60 • C, the bionanocomposite fiber was severely damaged with the appearance of a surface erosion phenomenon as shown in Figure 5c. Further, cracks of almost 10 µm long, regularly distributed on the surface and perpendicularly oriented to the fiber direction were observed. The cracks are probably formed due to the migration of various species including monomers and oligomers resulting from hydrolysis. In addition, the effect of hygrothermal aging on the morphological structure of neat PLA and PLA bionanocomposite fibers was also investigated by TEM. The corresponding TEM images are shown in Figures 6 and 7. Figure 6a shows the surface morphology of neat PLA before exposure. The sample exhibits a regular and homogenous morphology with no surface defects. 
After 25 days of exposure at 45 • C, some microvoids were observed on the fiber surface (Figure 6b), whose number and size seemed to increase with increasing the temperature to 60 • C as illustrated in Figure 6c. In Figure 7a, which corresponds to PLA/PLAg-MA/CNW1 recorded before exposure, CNW particles are clearly distinguished from the PLA matrix by their whiteness and also by their typical rod shape. Figure 7b shows the presence of defects on the surface observed after 25 days at 45 • C. The morphology of the bionanocomposite fiber exhibits essentially microvoids similarly to neat PLA. However, after 14 days at 60 • C, the CNW particles appeared as black spots of higher density as clearly shown in Figure 7c. This means that CNW were completely disintegrated during hydrolysis at 60 • C. According to the literature (Pan et al., 2010;Ruiz et al., 2013), the aging of cellulosic fillers due to moisture uptake may lead to several structural and properties changes involving their depolymerization. At this stage, CNW showed a remarkable change in color from white to black (Dong et al., 1998;Jewena et al., 2016).
Tensile Measurements
Tensile properties, which are one of the main functional properties of polymers, are generally used as aging criteria to evaluate the durability of polymers in hygrothermal conditions (Chow et al., 2014). In this regard, tensile properties of PLA and PLA bionanocomposite fibers were investigated at 45 and 60 • C and the data are summarized in Table 3. In addition, the kinetics curves of tenacity of PLA fibers plotted at 45 and 60 • C are shown in Figure 8. According to Table 3, elongation at maximum deformation, Young's modulus and tenacity of the whole PLA fibers were reduced from hygrothermal exposure. Thus, at 45 • C and after 25 days, the value of Young's modulus decreased by ∼8, 10, and 15% from the initial one for the neat PLA, PLA/PLA-g-MA/MCC1, and PLA/PLA-g-MA/CNW1, respectively. The decrease in Young's modulus may be attributed to the molecular weight decrease of PLA due to chain scission (Yu et al., 2018). Moreover, Table 3 shows also that the loss in tensile properties of the exposed PLA fibers is logically more pronounced at 60 • C than 45 • C. Hence, after 7 days at 60 • C, the PLA fibers were no longer stretchable, while at 45 • C, the relative tenacity was almost stable up to 14 days. After this, a slight decrease in Young's modulus and elongation at maximum deformation was noted up to 25 days. It can be seen that the kinetics curves of relative tenacity and SI show similar trend. Whatever the filler specific surface, its incorporation in PLA matrix even at a very low content ratio, resulted in a decrease in the mechanical properties of the bionanocomposite fibers, especially at 60 • C. As a matter of fact, more than 92% decrease in the initial relative toughness of PLA/PLA-g-MA/CNW1 fibers were observed after 7 days at 60 • C, compared to 69% loss for the neat PLA. Water diffusion at filler-matrix interface, could cause a differential swelling due to the difference in absorption capacity between the cellulosic filler and PLA resulting in the bionanocomposite degradation (Le Duigou et al., 2009;Yu et al., 2018). This corroborates the TEM analysis on the morphology of PLA/PLA-g-MA/CNW1 fibers, which clearly shows the complete disintegration of CNW particles causing structural defects, which are responsible for the deterioration of the tensile properties.
Thermal Properties
The effect of hygrothermal aging on thermal properties of neat PLA and PLA bionanocomposite fibers was investigated by DSC at 45 and 60 • C. The detailed data recorded at the second heating cycle, are presented in Table 4. From the data in Table 4, T g , T cc , T m , and X c remained almost unchanged for all fibers at 45 • C until 14 days of exposure. After this, T g and T cc slightly decreased by 1 and 2 • C, respectively, while X c of neat PLA, PLA/PLA-g-MA/MCC1, and PLA/PLA-g-MA/CNW1 increased by 2.2, 1.2, and 1.5 times, respectively compared to their initial values. However, at 60 • C, the thermal characteristics of PLA fibers, especially X c , were significantly affected after 14 days of exposure. The hydrolytic splitting-chains of PLA, which proceeds preferentially in the amorphous regions, led to the formation of short chain segments (Yuan et al., 2002;Zhou and Xanthos, 2008) having enough energy to rearrange themselves and subsequently to crystallize (Loo et al., 2005;Zhang et al., 2008). This is in a good agreement with the data reported by Mitchell and Hirt (2015) who indicated an increase in X c of PLA fibers from 11 to 41% after only 24 h at 60 • C and 100%RH. Moreover, the cold crystallization temperature (T cc ) decreased considerably with exposure time at 60 • C. This is consistent with the decrease in the activation energy, which promotes the chain mobility and subsequently, the crystallization process of PLA (Zhou and Xanthos, 2008;Chen et al., 2012;Santonja-Blasco et al., 2013). Furthermore, the incorporation of MCC and CNW into PLA matrix, even at a very low content, significantly reduced the thermal properties of the biocomposite material. Table 4 indicates also a slight decrease in melting temperature (T m ) for the bionanocomposite fibers with exposure time. This is often attributed to the formation of less perfect crystallites or less thermally stable ones which melt at low temperature (Zhang et al., 2008;Chen et al., 2012;Mitchell and Hirt, 2015). The presence of a double melting point in the DSC thermograms (not shown) for both PLA and PLA/PLA-g-MA/MCC1 fibers may result from complex phenomena involving polymorphism, melting-recrystallization-melting or short chains reorganization phenomena during heating (Ling and Spruiell, 2006;Shieh and Liu, 2007;Murariu et al., 2012;Santonja-Blasco et al., 2013). The lower melting peaks correspond to the imperfect crystallites, while the higher ones correspond to the perfect ones (Ma and Zhou, 2015).
Crystallinity Measurement by WAXS
The crystallinity structure of PLA and PLA bionanocomposite fibers was also investigated by WAXS at 45 and 60 • C. The relative patterns are shown in Figure 9. It can be seen that all PLA fibers display a typical amorphous pattern before exposure. However, the semicristalline structure of PLA clearly appears on the WAXS spectra at 45 • C, even more at 60 • C. Thus, two peaks are observed; the most intense one is localized at 2θ = 16.7 • corresponding to the crystallographic planes (110, 200) of PLA crystallites (Sullivan et al., 2015), while a second peak of less intensity is centered at 2θ = 18.9 • , which is relative to the (203) plane (Chen et al., 2012). The remarkable increase in peak intensity at 2θ = 16.7 and 18.9 • in PLA fibers at 60 • C up to 14 days is attributed to the increase in crystallinity of PLA and its bionanocomposites, however much higher for PLA/PLA-g-MA/CNW1. This result is consistent with the scission index (SI) and DSC data.
Thermal Stability
The effect of hygrothermal exposure on the thermal stability of PLA and its bionanocomposite fibers was investigated by TGA. Table 5 summarizes the values of the degradation temperature at 5 wt% loss (T5%) and 50 wt% loss (T50%) with exposure time. It is observed that T5% of the PLA fibers decreased significantly at 60°C, while T50% was almost unchanged, particularly at 45°C. This is in good agreement with the data published by Gil-Castell et al. (2016), who reported that the temperature at maximum degradation rate of PLA and PLA/sisal biocomposites remains constant after hydrolysis in water at 85°C, while the onset degradation temperature decreases significantly. Table 5 also shows that after 14 days at 60°C, T5% decreased considerably, by 22, 52, and 56°C for neat PLA, PLA/PLA-g-MA/MCC1, and PLA/PLA-g-MA/CNW1, respectively. This is attributed to the catalytic role of the cellulosic fillers in PLA, which accelerates the hydrolysis process and consequently increases the fraction of short-length fragments, which can degrade at relatively low temperature (Gupta et al., 2012).

Table 4 Thermal characteristics (Tg, Tcc, Tm, and Xc) of PLA, PLA/PLA-g-MA/MCC1, and PLA/PLA-g-MA/CNW1 fibers recorded at 45 and 60°C under hygrothermal conditions. Column headings: Fibers; Exposure time (days).
CONCLUSION
From this study, it can be concluded that under hygrothermal conditions (45/60°C and 95% RH), both PLA fibers and those based on PLA/PLA-g-MA/MCC1 and PLA/PLA-g-MA/CNW1 bionanocomposites undergo hydrolytic degradation, which proceeds mainly by a chain scission mechanism. Consequently, an increase in SI and a decrease in T5% and in tensile properties (tenacity, modulus and elongation at maximum deformation) are observed for all samples, and are more pronounced for the PLA bionanocomposite fibers. The decrease in properties depends on the filler specific surface and the temperature. After 14 days at 60°C, the hydrolysis rate constant is estimated to be 5, 7, and 8 times higher for PLA, PLA/PLA-g-MA/MCC1, and PLA/PLA-g-MA/CNW1, respectively, compared with that recorded after 25 days at 45°C. Moreover, the crystallinity and crystallization rate of the PLA fibers show a substantial increase during their exposure to hygrothermal aging. SEM observations show damaged topographies for all exposed fibers after 14 days at 60°C compared to those recorded after 25 days at 45°C, probably due to the molecular mobility in the vicinity of the glass transition temperature of PLA (60°C). On the basis of all the results obtained, the durability of the PLA fibers to hygrothermal degradation is established in the following order: PLA > PLA/PLA-g-MA/MCC1 > PLA/PLA-g-MA/CNW1.
DATA AVAILABILITY STATEMENT
The raw data supporting the conclusions of this article will be made available by the authors, without undue reservation, to any qualified researcher.
AUTHOR CONTRIBUTIONS
This manuscript has been written by MK. The manuscript is a part of the Ph.D. thesis of TA who has conducted the experimental work as well as the interpretation of the results. J-ML-C received TA in his laboratory for scientific internships several times, especially for the study of characterization of the morphology and properties of PLA fibers. ED received also TA in his laboratory for scientific internships several times for the preparation of the PLA, fibers by meltspinning process. | 2019-12-17T14:10:32.808Z | 2019-12-17T00:00:00.000 | {
"year": 2019,
"sha1": "010ac519220fec1f644eaf4d79e7e6c1978effec",
"oa_license": "CCBY",
"oa_url": "https://www.frontiersin.org/articles/10.3389/fmats.2019.00323/pdf",
"oa_status": "GOLD",
"pdf_src": "Frontier",
"pdf_hash": "010ac519220fec1f644eaf4d79e7e6c1978effec",
"s2fieldsofstudy": [
"Materials Science"
],
"extfieldsofstudy": [
"Chemistry"
]
} |
216345759 | pes2o/s2orc | v3-fos-license | A New Model For Endocrine Glucose-Insulin Regulatory System
Modeling the complex biological endocrine glucose-insulin regulatory system, in which the interactions of the components of the metabolic system and the time delays inherent in the biological system give rise to complex dynamics, has gained increasing interest and importance in physiological research and has enhanced medical treatment protocols. This brief presents a new model using time-delay differential equations, which gives accurate results by utilizing two explicit time delays. A bifurcation analysis has been conducted to find the bifurcation values of the main system parameters and the corresponding system behaviors. The results are consistent with biological experimental results.
I. INTRODUCTION
Diabetes Mellitus (DM), commonly known as diabetes, is one of the most widespread chronic diseases that the world faces nowadays. The number of subjects with diabetes in the world is increasing continuously every year. The International Diabetes Federation (IDF) estimates that 436 million people around the world live with diabetes, corresponding to about 1 in 11 of the adult population aged 20-79 [1]. The figure is expected to reach 700 million people by 2045 [1]. Diabetes in fact results from malfunctioning of the plasma glucose-insulin kinetics, causing abnormally high plasma sugar levels, known as hyperglycemia. Moreover, owing to the increasing interest in the development of the artificial pancreas, the mathematical modeling of the human endocrine glucose-insulin regulatory system has gained much focus and attracted more scientific research aiming to mimic the expected mechanism of the endocrine system and to determine the underlying causes of diabetes mellitus. Knowledge of these models provides safe and efficient control algorithms for the plasma glucose level and enhances control devices, which relieve diabetic subjects. These reasons motivated the investigation of mathematical models that may mimic this biological process. Thus, investigating the mathematical model is of great importance both theoretically and practically. Both the theoretical investigation and the numerical computation of the endocrine glucose-insulin regulatory system might enhance medical treatment protocols and enrich medical insight [2]. The blood glucose level is regulated through a negative feedback loop in which hyperglycemia incites a rapid increase in insulin secreted from the β-cells in the pancreas. The increase in the plasma insulin level causes increased glucose uptake and decreased glucose production by the liver, leading to a reduction in plasma glucose [3]. This feedback loop keeps the glucose concentration in the human body within a narrow range following an overnight fast (70-109 mg/dl); the basal blood insulin level is known to be in the range of 5-10 µU/ml [4], may lie in a wider range (10-40 µU/ml) during continuous enteral nutrition [5], and can reach 30-150 µU/ml after meal ingestion and at high glucose levels [4].
Two types of oscillation have been observed in the human glucose-insulin interaction [6], with two different periods: a rapid one (10-15 min) and a slow, or ultradian, one (about 100-150 min). The ultradian oscillation in the human body may originate entirely from the dynamic interaction of the glucose-insulin negative feedback regulatory system [6]. This oscillation has already been detected in the human body in different physiological situations: after meal ingestion [7], after oral glucose intake [8], during continuous enteral nutrition [9] and during constant intravenous glucose infusion [10]. These different oscillation patterns are given in Fig. (1), adapted from Sturis 1991 [6]. Many other biological experiments have shown that the insulin secretion from the β-cells in the pancreas exhibits oscillatory behavior [9], and periodic secretion of hormones is more effective than other types of stimuli, such as constant or stochastic ones [11]. This field of vigorous interdisciplinary research came into being with the pioneering works of Bergman and his co-workers [12,13]. In 1991, Sturis [6] suggested a mathematical model consisting of six nonlinear differential equations to describe the glucose-insulin ultradian oscillation under different glucose feeding regimes and showed that the feedback mechanism is the underlying source of the sustained oscillation; however, the model includes three non-observable auxiliary variables. Topp et al. [3] incorporated the β-cell in the model in addition to the glucose and insulin concentration levels; the model has two stable fixed points representing physiological and pathological steady states. Engelborghs in 2001 [14] provided a bifurcation analysis of the periodic solutions of the delay differential equation system representing the glucose-insulin metabolic system with a discrete time delay. A model incorporating two explicit time delays is presented in [15]; the resulting system consists of three delay differential equations, with positivity and stability proven using the Lyapunov function method. Jiaxu Li [16] proposed a robust model for the endocrine metabolic regulatory system and showed the ultradian oscillation with time delay. A two-compartment model for both the glucose and insulin variables, incorporating two time delays explicitly, is presented in [17]; that model focuses on the importance of the glucose and insulin concentration levels in the subcutaneous tissues. Strike in 2018 [18] provided a qualitative numerical study of glucose dynamics in patients with stress hyperglycemia and diabetes receiving intermittent and continuous enteral feeds. Amit [2] proposed a smooth approximation of the minimal model, with a linear feedback-based control algorithm.
In this paper, we propose a time delay differential equation model to represent the metabolic endocrine glucose-insulin regulatory feedback system; two time delays are incorporated explicitly in the model for a better and more accurate representation of the biological system. The model is analyzed through stability and Hopf bifurcation analysis. The effects of varying multiple parameters in the system model are presented and different system behaviors are captured. The paper is organized as follows: Section II includes the mathematical model and its analysis, Sec. III presents the simulation results and Sec. IV gives the final conclusions and future work.
II. THE MODEL
The main elements in the glucose-insulin metabolic regulatory system are shown in the schematic diagram illustrated in Fig. (2). Delay differential equations are used in the model to simulate the finite time response of the pancreas (to release insulin) and of the liver (to secrete glucose) to the changing conditions managed by the glucose-insulin regulatory system. The principle of mass conservation was employed to derive the glucose-insulin dynamic equations, as described below.
The equations depict the rate of change of the glucose concentration, Ġ(t), and the rate of change of the insulin concentration, İ(t), each of which should equal the amount produced minus the amount cleared:

Ġ(t) = P_G(t) − U_G(t),  İ(t) = P_I(t) − C_I(t).  (1)

The glucose production P_G(t), the glucose utilization U_G(t), the insulin production P_I(t) and the insulin clearance C_I(t) are defined by a set of highly nonlinear functions (f_1 through f_7):

P_G(t) = G_in(t) + f_5(I(t − τ_2)),  U_G(t) = f_2(G(t)) + f_3(G(t)) f_4(I(t)),
P_I(t) = I_in(t) + f_1(G(t − τ_1)),  C_I(t) = d_i I(t) + f_6(G(t)) f_7(I(t)).

The functions f_i, i = 1, 2, ..., 5, are derived directly from human physiological data [11,16], while f_6 and f_7 are used to represent the insulin degradation that depends on glucose; together these functions determine the various components of the glucose-insulin regulatory system. The time delays of the pancreatic insulin release and of the endogenous (hepatic) glucose release are denoted by τ_1 and τ_2, respectively. In order to investigate the effect of the system parameters on the stability of the system and the possibility of periodic behavior of the system dynamics, the analysis starts from the general delay differential equation

ẋ(t) = f(x(t), x(t − τ_1), ..., x(t − τ_k); p),  (4)

where x(t) ∈ ℝⁿ, f: ℝⁿ⁽ᵏ⁺¹⁾ × ℝ^q → ℝⁿ and τ ∈ ℝᵏ. The solution of (4) is not a unique function of x(t) at a fixed time point, due to the dependence on the past history. Instead, the initial solution should be specified over an interval of length τ_max = max{τ_i, i = 1, 2, ..., k}. The initial function segment then belongs to C = C([−τ_max, 0], ℝⁿ), the infinite dimensional space of functions mapping the delay interval [−τ_max, 0] into ℝⁿ. The equilibrium solution x(t) ≡ x* ∈ ℝⁿ of (4) can be evaluated as a solution of the nonlinear system f(x*, x*, ..., x*; p) = 0. It is worth noting that x* does not depend on the time delay values, but the stability of the steady state solution x* does depend on the time delays. To assess the stability, the system (4) is linearized about x* to obtain the variational equation

ẏ(t) = A_0 y(t) + Σ_{i=1..k} A_i y(t − τ_i),

where A_i = ∂_i f(x*, p), i = 0, 1, 2, ..., k.
Then, the characteristic matrix can be written as

Λ(λ) = λI − A_0 − Σ_{i=1..k} A_i e^(−λ τ_i).  (8)

The eigenvalues of (8) can be found by solving the transcendental equation

det(Λ(λ)) = 0,  (9)

where (9) has an infinite number of roots that determine the stability of the steady state solution x*: the steady state is asymptotically stable if all the roots lie in the left half of the complex plane, and it is unstable otherwise. To obtain a bifurcation of the steady state solution while changing some biological parameter p, a pair of eigenvalues should cross the imaginary axis (not through the real axis); a periodic solution then arises at the bifurcation point. Assuming that the system (1) has a steady state E* = (G*, I*), the transcendental characteristic equation (9) can be written out for the linearized two-dimensional system. The steady state solution E* loses its stability as the real part of an eigenvalue becomes positive, so the stability boundary, where λ = iω, ω ∈ ℝ⁺, can be obtained by substituting λ = iω into the characteristic equation and separating the delay-dependent exponential factor from the remaining coefficient terms (Eq. (11)). The solution of equation (11) can then be found graphically as the intersection of two curves: the first is e^(−iωτ_1), which traces the unit circle repeatedly as ωτ_1 increases; the second is the ratio curve given by (12), which is traced once as ω increases from 0 to ∞. This ratio curve starts at a finite point (the ratio of the coefficient sums at ω = 0), grows toward infinity as ω → ∞, and spirals around its starting point. The form of the spiral and the number of intersections with the unit circle change depending on the parameter values.
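As an illustration of how such a stability-boundary test can be carried out numerically, the minimal sketch below scans ω and evaluates the characteristic determinant det(iωI − A0 − A1 e^(−iωτ)) for a single-delay linearization. The 2x2 matrices A0 and A1 are arbitrary demonstration values (they are not the Jacobians of the glucose-insulin model); near-zero local minima of the determinant magnitude flag candidate crossings of the imaginary axis.

```python
import numpy as np

# Illustrative linearization (NOT the glucose-insulin Jacobians of this paper):
# y'(t) = A0 y(t) + A1 y(t - tau)
A0 = np.array([[-0.1, -0.5],
               [ 0.4, -0.2]])
A1 = np.array([[ 0.0, -0.3],
               [ 0.6,  0.0]])

def char_det(omega, tau):
    """det(i*omega*I - A0 - A1*exp(-i*omega*tau)) of the transcendental equation."""
    lam = 1j * omega
    return np.linalg.det(lam * np.eye(2) - A0 - A1 * np.exp(-lam * tau))

def crossing_candidates(tau, omegas):
    """Return frequencies where |det| has a local minimum close to zero,
    i.e. candidate points where a pair of eigenvalues crosses the imaginary axis."""
    mags = np.array([abs(char_det(w, tau)) for w in omegas])
    idx = np.where((mags[1:-1] < mags[:-2]) & (mags[1:-1] < mags[2:]))[0] + 1
    return [(omegas[i], mags[i]) for i in idx if mags[i] < 1e-2 * mags.max()]

if __name__ == "__main__":
    omegas = np.linspace(1e-3, 2.0, 20000)
    for tau in (1.0, 2.0, 4.0, 8.0):
        print("tau =", tau, crossing_candidates(tau, omegas))
```

The grid-and-threshold test is deliberately crude; in practice each candidate ω would be refined, e.g. by minimizing |det| locally, before declaring a bifurcation delay.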
III. SIMULATION RESULTS
Extensive numerical simulations of the system (1) have been carried out for the system parameters given in Table 2 to capture the variety of system dynamics and behaviors. Fig. (3) and Fig. (4) show the time courses of the glucose and insulin variables and the corresponding steady state phase portraits, which clearly show a limit cycle, for two different sets of parameters; the proposed model thus ensures sustained oscillation and robust performance for a wide range of time delays. To demonstrate the system dynamics and the evolution of the solutions, four parameters are varied consecutively to reveal the Hopf bifurcation dynamics: the two time delays (τ_1 and τ_2), the exogenous glucose infusion rate G_in and the insulin degradation rate d_i. Fig. (5) shows the bifurcation diagram and phase portrait for the range of values τ_1 ∈ [0, 20]. The bifurcation point is at τ_1,h = 2.55, sustained oscillation can be observed in the range τ_1 ∈ [2.55, 20], and the amplitude of both variables in this case is in the accepted range, consistent with the biological findings [6,11,16]. Fig. (8) shows that the period varies with the time delay within the range [97, 163] min, which agrees with the experiments. To investigate the effect of the glucose infusion rate on the system behavior, the rate has been changed from 0 to 1.5 mg/dl/min; as shown in Fig. (9), the dynamics bifurcate at G_in,h = 1.275 mg/dl/min, and the system is periodic for G_in < G_in,h and asymptotically stable otherwise; in other words, if the exogenous glucose infusion rate exceeds this bifurcation value, the glucose concentration level returns to the basal level in a definite time [19]. The corresponding period is shown in Fig. (10); the period decreases slightly with increasing exogenous glucose infusion rate. Finally, the effect of the insulin degradation rate is shown in Fig. (11), where the degradation rate has been varied in the range d_i ∈ [0.01, 0.12]; a bifurcation point is found at d_i,h = 0.026, the dynamics are periodic when the insulin degradation rate is above d_i,h, and the period is monotonically decreasing, as shown in Fig. (12).
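For readers who wish to reproduce this type of experiment, a minimal simulation sketch is given below. It integrates a two-delay system of the same structure as (1) with a fixed-step explicit Euler scheme and history buffers, then estimates the oscillation period from the spacing of maxima after discarding the transient. The nonlinearities, the utilization/degradation terms and all parameter values are placeholders chosen only for illustration (they are not the functions f_1 to f_7 or the Table 2 parameters of this paper), so sustained oscillation is not guaranteed for them.

```python
import numpy as np

# Placeholder nonlinearities (illustrative sigmoids, NOT the paper's f1...f7).
def f1(G):  # insulin secretion stimulated by glucose
    return 200.0 / (1.0 + np.exp(-(G - 100.0) / 10.0))

def f5(I):  # hepatic glucose production suppressed by delayed insulin
    return 180.0 / (1.0 + np.exp((I - 30.0) / 8.0))

def simulate(tau1=6.0, tau2=36.0, G_in=1.0, d_i=0.06,
             t_end=2000.0, dt=0.01, G0=100.0, I0=20.0):
    """Fixed-step explicit Euler with array-based history (method of steps)."""
    n = int(t_end / dt)
    lag1, lag2 = int(tau1 / dt), int(tau2 / dt)
    G = np.full(n + 1, G0)     # constant history on [-tau_max, 0]
    I = np.full(n + 1, I0)
    for k in range(max(lag1, lag2), n):
        dG = G_in + f5(I[k - lag2]) - 0.5 * G[k] - 0.01 * G[k] * I[k] / (1 + I[k])
        dI = f1(G[k - lag1]) - d_i * I[k]
        G[k + 1] = G[k] + dt * dG
        I[k + 1] = I[k] + dt * dI
    t = np.arange(n + 1) * dt
    return t, G, I

def estimate_period(t, x, discard=0.5):
    """Estimate the oscillation period from the mean spacing of local maxima."""
    start = int(len(t) * discard)          # discard the transient part
    x, t = x[start:], t[start:]
    peaks = np.where((x[1:-1] > x[:-2]) & (x[1:-1] > x[2:]))[0] + 1
    return np.diff(t[peaks]).mean() if len(peaks) > 1 else np.inf

if __name__ == "__main__":
    t, G, I = simulate()
    print("estimated period:", estimate_period(t, G))
```

Sweeping tau1 (or G_in, d_i) in an outer loop and recording the post-transient minima and maxima of G gives a bifurcation diagram of the kind shown in Fig. (5), again under the stated placeholder assumptions.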
IV. CONCLUSION
The modeling of biological systems is an important approach to understand the complexity of these systems, and it gives an important tool to reveal the hidden dynamics of biological processes. As shown in the results, a slight change in a system parameter can give rise to a variety of dynamics, and oscillatory, periodic solutions can emanate at certain bifurcation points; this behavior should be considered with much attention biologically, since it enriches the medical insight about the endocrine metabolic glucose-insulin regulatory feedback system, which has a complex behavior. More biological facts and factors can be incorporated within the mathematical model, such as the stress effect, glucagon, the human state and the dynamics of the β-cells and other components of the endocrine system. Fig. 11 Bifurcation diagram with the insulin degradation rate. Fig. 12 Period of the solution. | 2020-04-09T09:05:25.850Z | 2020-06-01T00:00:00.000 | {
"year": 2020,
"sha1": "d0b58ae7fb5fab65f2816f2e92bb9b4d9da5be47",
"oa_license": null,
"oa_url": "https://doi.org/10.37917/ijeee.16.1.1",
"oa_status": "GOLD",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "b04ebc3a479481e675147ea13dfe117a61a8e933",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": [
"Medicine"
]
} |
232147192 | pes2o/s2orc | v3-fos-license | Coupled models for total stress dissipation tests
Two linear, point-symmetric, coupled consolidation model families with various embedding space dimension values (oedometer models: 1, spherical models: 3, cylindrical models: 2), differing in one boundary condition (coupled 1: constant displacement, coupled 2: constant stress), are analysed analytically and numerically. The method of the research is partly analytical: the models are unified into a single model with a unique analytical solution, and every model can be derived from this by inserting the proper boundary condition and embedding space dimension m. The constants of the solutions are determined, and an approximate time factor and model law are derived for the m>1 case, which are identical to the ones valid in the oedometer case. The convergence of the infinite series is examined as a function of the initial condition. Concerning the total stress at the pile shaft, a significant decrease (by the value of the initial mean pore water pressure) is obtained with the coupled 1 consolidation models, while zero stress drop results from the coupled 2 models. The total stress dissipation test is suggested to be evaluated by the coupled 1 models with a time dependent constitutive law, e.g., by adding a relaxation part-model. The rate of convergence is smaller the closer the initial condition is to the one of a zero solution (beyond the trivial one, a non-trivial zero solution exists for the coupled 1 model, at the Terzaghi initial condition).
Dissipation tests
The dissipation type tests are used for the laboratory/in situ assessment of permeability/coefficient of consolidation by evaluating the measured displacement or pore water pressure or total stress data with a consolidation model (Tables 1 to 5, Fig. 1, [1 to 11]).
Two kinds of staged oedometer tests are known with total stress load or with displacement load. In the conventional compression test, the total stress load is increased stepwise, the pore water pressure at the bottom, the displacement at the top of the sample are measured. In the oedometric relaxation test, the displacement load is increased stepwise, the pore water pressure at the bottom, the total stress at the top are measured.
The dissipation tests are made by stopping the steady penetration, clamping the cone penetrometer (CPT) system and measuring some stress variables as a function of time. The stress variables are the local side friction fs and the cone resistance qc, the pore water pressure u (CPTu), the total stress and the pore water pressure u (the piezo-lateral stress cell (PLS) test) and the total stress and the pore water pressure u (the piezo-lateral stress cell at the flat dilatometer, DMT).
In the pore water pressure dissipation tests, the record is monotonic or non-monotonic with time, which is generally associated with low or high values of OCR, respectively. The u-sensors can be mounted in various positions, and the corresponding dissipation curves are significantly different (e.g., u1, u2, u3).
In the piezo-lateral stress cell test ([16]) the time variations of the radial total normal stress and the pore water pressure are measured. In the DMT dissipation test the time variation of the radial total normal stress is recorded. In soft clay, the radial total stress may decrease by 73% and the effective stress may vary non-monotonically ([11]), decreasing or increasing during the first few minutes depending on the soil plasticity and OCR; the long term behaviour is not properly known. In other soils, very few pieces of information are available; the total stress may initially increase, or the dissipation curve may not have an inflexion point.
In the "simple rheological test" the time variation of the local side friction and the cone resistance are measured, the rod is clamped. Again, very few pieces of information are available. According to the results in relation to 2-minute long records, after an immediate stress drop, the cone resistance decreases, the shaft resistance decreases or increases in sand in the first minutes [19].
Models
The concept of linear, coupled 1 and coupled 2 consolidation model families is introduced for the following displacement domains. The displacement domain of the point-symmetric consolidation models is bounded by spheres in 1, 2 and 3 dimensions (Tables 3 to 5, Fig. 1). A constant displacement boundary condition is assumed at the inner boundary, which is at r=r0=0 for the two kinds of oedometer tests, being the symmetry point of a double-drained oedometric sample. The r=r0 boundary is the surface of the model pile. Total stress load (coupled 2 models) or displacement load (coupled 1 models) is assumed at the outer boundary r=r1, where in addition a zero pore water pressure boundary condition is assumed. The pore water pressure solution of the coupled 2 models "reduces" to the one of the uncoupled model ([20]), so the uncoupled model family does not need to be discussed separately.
Evaluation
The analytical solutions of the oedometer case yield a dimensionless time variable T_oed = c t / H², where c is the coefficient of consolidation and H is the model constant (drainage path length, Fig. 1). The non-linear parameter identification problem is solved approximately, in the lack of an automatic model. The one-point model fitting in practice requires the time t90. The analytical solutions of the cylindrical and spherical cases do not yield a similar time factor. The CPTu pore water pressure dissipation tests are evaluated at present approximately with uncoupled models, using embedded initial conditions (generally assuming undrained penetration) and approximate time factors; two one-point fittings have the form ([10,15]) c = T50 r0² / t50 and c = T50 r0² √Ir / t50, where r0 is the radius of the CPT equipment, T50 is a time factor, t50 is the measured time for 50% dissipation and Ir is the rigidity index. The time factors are heuristic; they are based on the observation that the theoretical dissipation curves can be normalized in this way. These include a model law for the time variable t only.
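A minimal sketch of this one-point evaluation is given below. The time factor value T50 = 0.245 (often quoted for the u2 filter position) and the test data in the example are assumptions used only for illustration, not values taken from this paper.

```python
import math

def c_from_t50_basic(t50_s, r0_cm, T50):
    """One-point fitting without the rigidity index: c = T50 * r0^2 / t50 (cm^2/s)."""
    return T50 * r0_cm**2 / t50_s

def c_from_t50_rigidity(t50_s, r0_cm, Ir, T50):
    """One-point fitting with the rigidity index: c = T50 * r0^2 * sqrt(Ir) / t50 (cm^2/s)."""
    return T50 * r0_cm**2 * math.sqrt(Ir) / t50_s

if __name__ == "__main__":
    # Hypothetical test: 10 cm^2 cone (r0 = 1.785 cm), t50 = 300 s, assumed Ir = 100.
    r0, t50, Ir, T50 = 1.785, 300.0, 100.0, 0.245
    print("c (basic)   :", c_from_t50_basic(t50, r0, T50), "cm^2/s")
    print("c (with Ir) :", c_from_t50_rigidity(t50, r0, Ir, T50), "cm^2/s")
```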
The approximate, one-point model fitting requires the time t50, cannot handle cases where t50 < 50 s (partly drained penetration), or where the dissipation starts from values lower than u0 (the initial condition cannot be varied as needed). A further problem is that it is difficult to assign a value to Ir, since the shear modulus decreases with strain by a factor of 20 or 30 (Mayne, 2007).
The DMT total stress dissipation test evaluation method is model-free; it is based on an empirical formula concerning the inflexion point of the dissipation curve (and does not work in the absence of an inflexion point).
Model validations
An automatic global minimisation algorithm was given for the injective solutions of linear PDEs, providing reliability information as well [26]. With it, the non-linear parameter identification problem is solved mathematically precisely. The models of the two kinds of staged oedometer tests were validated against short multistage data with the result that the linear models are acceptable for the pore water pressure, but for the total stress or displacement the relaxation or creep has to be taken into account ([22] to [26]).
The validation of the two kinds of cylindrical/spherical models against CPTu pore water pressure dissipation test data ended with the statement that both models are usable but the identified parameters differ by a constant multiplier ([12,13]).
The mean pore water pressure solution of the coupled 1 cylindrical model was used in an approximate way for the evaluation of dilatometer total stress dissipation test data with no inflexion point [18]. The total stress solution of the coupled 1 cylindrical model has not been used for the evaluation of the DMT or CPT total stress dissipation test data.
The aim and content of the paper
The total stress solution of the coupled 1 model has not been used for the evaluation of the total stress dissipation test. The cylindrical coupled model of Randolph-Wroth (the cylindrical analogue of the coupled Biot model for the oedometer compression test) gives a constant total stress solution at r=r0.
The analytical solutions of the cylindrical and spherical coupled models do not yield a time factor since the Bessel function roots are only nearly "regular". The "suggested, not derived" time factors are used with the rigidity index and r0 instead of the measure of the displacement domain. These include a model law for the time variable t only.
Two kinds of coupled consolidation models are related to the staged oedometer tests, differing in one boundary condition (coupled 1: kinematic load; coupled 2: total stress load). The hypothesis of the research is that the one which gives total stress dissipation may qualitatively be suitable for the CPT dissipation tests. It is also assumed that a "more precise" time factor can be derived from the analytical solution, using the asymptotic Bessel formulae.
The aim of the paper is to analyze the two linear, point-symmetric, coupled model-families in terms of the initial condition and the displacement domain (undrained and partly drained cases), including the analytical and numerical properties. The solution is computed at embedding space dimension m=2 and compared with the case of other space dimensions using a suggested time factor.
The properties of the analytical solution are determined qualitatively and quantitatively as a function of the initial condition function.
In this work it is shown that the cylindrical coupled 1 model can be used for the modelling of the dissipation around piles at r=r0. The Terzaghi time factor concept is extended to the cylindrical and spherical cases in a precise way.
The analytical solutions have basically the same numerical (convergence) properties within a model family. It is found that due to the similarity, the CPTu pore water dissipation test can even be evaluated by the oedometer model.
A unified mathematical formulation is given for the two coupled model families (i.e. two model sets with fixed boundary conditions and various space dimension m values).
In the first part of the paper the model analysis is given in the form of a system of differential equations and its analytical solution in terms of the dimension m. The structure of the solution is treated. The analytical properties of the solution are qualitatively analysed for the two kinds of boundary conditions, independently of the embedding space dimension m.
In the second part of the paper some simulations are made. The constants of the solutions are presented in the function of the boundary conditions for embedding space dimension m=2. Approximate closed form solutions for the boundary condition equations are given which are the same for the same boundary condition (within a model family), resulting a time factor T. The convergence properties are characterized.
The practical significance is that the analytical models can be used in the precise evaluation of the pore water pressure / total stress dissipation tests with the identification of the initial condition. In this way the evaluation methods can be used to reduce the test duration. Concerning the evaluation of the total stress dissipation test, an example is shown with the coupled 1 models used for DMT data.
System of differential equations
Two unified equations can be derived ([12]). Equation (1) compiles the equilibrium condition, the effective stress equality, the geometrical equation and the constitutive equation, as follows:

∂/∂r [ E_oed ε_v(r,t) − u(r,t) ] = 0,  (1)

and Equation (2) compiles the continuity equation, Darcy's law (neglecting the gravitational component of the hydraulic head) and the geometrical equation, as follows:

∂ε_v/∂t = (k/γ_w) ∇²_m u,  (2)

where the volumetric strain and the Laplacian operator depend on m as follows:

ε_v = ∂v/∂r + (m−1) v/r,  ∇²_m = ∂²/∂r² + ((m−1)/r) ∂/∂r.

Here v is the radial displacement, u is the excess pore water pressure, r and t are the space and time co-ordinates respectively, k is the coefficient of permeability, γ_w is the unit weight of water and E_oed is the oedometric modulus:

E_oed = 2G(1−ν)/(1−2ν) = E(1−ν)/((1+ν)(1−2ν)),

where G is the shear modulus, E is the Young modulus and ν is the Poisson's ratio.
Boundary conditions
The boundary conditions are presented for m=2; four conditions apply to each model family.
(1) The (common) boundary condition Nr. 1 implies that the pore water pressure is zero at r=r1: u(r1, t) = 0.
(2) The (common) boundary condition Nr. 2 entails that the flux is equal to zero at r=r0: ∂u/∂r (r0, t) = 0.
(3) The (common) boundary condition Nr. 3 implies that the displacement is equal to a constant at r=r0: v(r0, t) = v0.
(4) Boundary condition Nr. 4 (coupled 1 models) prescribes the displacement at the outer boundary r=r1, while boundary condition Nr. 5 (coupled 2 models) prescribes the radial total stress at r=r1.
These boundary conditions are equally usable for m=3; in the case of m=1, r0=0 is assumed and half of the space domain is used.
Structure of Solution
The structure of the solution can be determined on the basis of the theory of linear ordinary differential equations ([5]). The solution is equal to the following sum for each variable x (x = v, u, σ):

x = x^p + x^L + x^t + x^w,

where the superscripts p and L indicate the steady-state drained continuum-mechanical and seepage problems, respectively, t indicates the transient part and w concerns the self-weight component.
The solution of the transient part satisfies the homogeneous form of the boundary conditions and its final value is zero. The solution of the steady-state part satisfies the inhomogeneous form of the boundary conditions.
Methods
The displacement solution is expressed in terms of the pore water pressure solution. The m=2 case is considered; however, nearly all equations are valid for every space dimension.
The initial condition for the pore water pressure is assumed to be given in the form of the monotonic, normalised, parametric functions of Eq 11 for both the qualitative and the numerical analysis in this work, if not indicated otherwise. The value of u0 at r=r0 is equal to 1, and the value of u0 at r=r1 is equal to 0. Analysing the shape function of the pore water pressure initial condition, it can be observed that the parameters F and D are in one-to-one relation. The mean normalised initial pore water pressure for embedding space dimension m=2 is

D = 2/(r1² − r0²) ∫ u0(r) r dr  (integrated from r0 to r1).  (12)

In the qualitative and quantitative analyses a normalised shape function series is used; the corresponding mean normalised initial pore water pressures D are shown in Table 6.
For fixed space domains, the parameters F and D of the shape functions (Eq 11) are in one-to-one relation.
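Since the exact parametric family of Eq 11 is not reproduced here, the sketch below uses an assumed normalised exponential-type shape family (value 1 at r0, 0 at r1) to illustrate how the mean D can be computed by numerical integration over the m=2 domain and how the one-to-one F-D relation can be inverted numerically; both the shape family and the sample D values are illustrative assumptions.

```python
import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq

R0, R1 = 1.75, 64.75   # cm, space domain of the dissipation test (n = 37)

def u0(r, F):
    """Illustrative normalised monotonic shape function (NOT Eq 11 of the paper):
    equals 1 at r = R0 and 0 at r = R1; F controls convexity/concavity."""
    s = (r - R0) / (R1 - R0)
    if abs(F) < 1e-9:
        return 1.0 - s                      # linear limit
    return (np.exp(-F * s) - np.exp(-F)) / (1.0 - np.exp(-F))

def mean_D(F):
    """Mean normalised initial pore water pressure over the annulus (m = 2)."""
    integral, _ = quad(lambda r: u0(r, F) * r, R0, R1)
    return 2.0 * integral / (R1**2 - R0**2)

def F_from_D(D_target):
    """Invert the one-to-one F <-> D relation numerically."""
    return brentq(lambda F: mean_D(F) - D_target, -50.0, 50.0)

if __name__ == "__main__":
    for D in (0.1, 0.33, 0.6, 0.9):
        F = F_from_D(D)
        print(f"D = {D:4.2f}  ->  F = {F: .3f}  (check D = {mean_D(F):.3f})")
```

For this assumed family the linear limit gives D of about 0.33 on the n = 37 cylindrical domain, which is consistent with the value quoted below for the linear shape function.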
Ten values for the mean of the initial pore water pressure (parameter D) were selected; the value of parameter F was determined for each value of D and for each value of r1. The initial pore water pressure functions are shown in Figure 2 for the case of n=37.
The shape functions are strictly convex, concave or linear. At the limits F→−0 and F→+0 the function is constant, u0(r) ≡ 0 and u0(r) ≡ 1, respectively; it is linear at the limit where F tends to plus or minus infinity (the corresponding D value is about 0.33 and 0.5 in the cylindrical and the oedometric case, respectively).
Analysis of Equilibrium Equation
By integrating the modified equilibrium Equation (4) with respect to r and using boundary condition Nr. 1, the volumetric strain is expressed in terms of the pore water pressure and a time-dependent integration function β(t) fixed by the outer boundary condition: E_oed ε_v(r,t) = u(r,t) + β(t).
Coupled 1 model
The boundary condition function β(t) is derived by further integration between r0 and r1 using boundary condition Nr. 3 and boundary condition Nr. 4. Inserting this boundary condition function into the equilibrium Equation, the transient volumetric strain is expressed in terms of the pore water pressure. It follows that, for a realistic u, the change of ε_v with t is positive in the vicinity of the outer boundary (rebound) and negative in the vicinity of the pile (compression). The final value of the transient strain is zero. By further integration the transient displacement is obtained. It follows from the analysis of Equation (1) that, for a realistic u, the transient part of v is non-negative and monotonically decreases with t for any r. The initial condition functions for u and v^t are related through these expressions. The Terzaghi initial condition, where u0(r) is uniform with a positive value c1, yields an identically zero initial displacement function v^t_0(r) for the coupled 1 model. This is the zero solution function, and it means that the dissipation is instantaneous.
Coupled 2 model
Inserting the inhomogeneous form of boundary condition Nr. 5 into the equilibrium Equation, the transient volumetric strain is expressed in terms of the pore water pressure. It follows from the analysis of the equilibrium Equation that, for a realistic u, the change of ε_v with t is negative (compression). By further integration (Eq. 29) the transient displacement is obtained. It follows from the analysis of the equilibrium Equation that, for a realistic u, the transient part of v is non-negative and monotonically decreases with t for any r. The initial condition functions for u, ε_v^t and v^t are related through these expressions. The Terzaghi initial condition, where the initial pore water pressure function u0(r) is uniform with a positive value c1, yields a non-zero initial displacement function v^t_0(r) for the coupled 2 model. For a monotonic, positive initial pore water pressure function u0(r), compression and, as a result, inward displacement in the vicinity of r1 take place, and the volume of the displacement domain decreases. The final value of the transient strain is zero.
Analysis of Continuity Equation
The modified continuity Equation (5) can be written by inserting the time dependent part of the volumetric strain ε_v^t. By integrating the resulting equation twice with respect to r, using the homogeneous forms of the boundary conditions Nr. 2 and Nr. 1 respectively, an explicit expression can be derived for the pore water pressure. The pore water pressure function is further integrated with respect to t between 0 and infinity (functional (33)). The initial value of the transient volumetric strain (which can be written in terms of the initial pore water pressure) characterizes the rate of consolidation at every point. In particular, there is no time dependent consolidation if this function is the zero function.
The ratio of the functional (33) for the coupled 1 and coupled 2 models, for the initial condition series shown in Table 6, is given in Table 7. The results indicate that, with increasing value of parameter D, the dissipation becomes increasingly faster for the coupled 1 model relative to the coupled 2 model.
Total and effective stress solutions
The stress-state variable of the constitutive equation of saturated soils is the effective stress σ', the difference of the total stress σ and the pore water pressure u: σ' = σ − u. The compression strain is positive. On the basis of the u and ε_v solutions, the total stress σ and the effective stress σ' can be assessed using the effective stress equality and the constitutive equations for embedding space dimension m=2. For the coupled 1 model, considering the effective stresses at the shaft-soil interface, it follows that for a realistic u the transient effective stress is negative around the shaft with a zero final value, and the effective stress at the shaft-soil interface increases with time here. It also follows that the effective stress at the outer boundary decreases with time, while the mean of the first invariant of the effective stress tensor over the displacement domain is constant.
For the coupled 1 model, considering the radial total stress at the shaft-soil interface, it follows that for a realistic u the radial total stress at the shaft-soil interface decreases with time.
For the coupled 2 model, considering the effective stresses at the shaft-soil interface, it follows that for a realistic u the transient effective stress is negative around the shaft with a zero final value, and the effective stress at the shaft-soil interface increases with time.
For the coupled 2 model, it follows that the radial total stress at the shaft-soil interface is constant with time.
Steady-state solution part
The solution of the drained continuum-mechanical problem for the displacement v^p is the solution of the homogeneous part of Equation (1), which is the cavity expansion model for m=2, 3 and the oedometer (K0) compression model for m=1. The solution has the general form

v^p(r) = A r + B r^(1−m),

where the parameters A and B can be determined from the inhomogeneous form of the boundary conditions (i.e. the common boundary condition Nr. 3 together with Nr. 4 for the Imre-Rózsa model, or Nr. 3 together with Nr. 5 for the Randolph-Wroth model).
These can be rewritten for m=2 as follows: the displacement v^p of the Imre-Rózsa model is obtained from the displacement boundary conditions at r0 and r1, while that of the Randolph-Wroth model is obtained from the displacement condition at r0 and the total stress condition at r1. The solution of the steady-state seepage problem for the pore water pressure u^L is identically equal to zero since the hydrodynamic boundary conditions are homogeneous. Therefore, the superscript t is omitted for the pore water pressure in the following.
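A minimal sketch of the steady-state constants for the m=2, coupled 1 case is given below, assuming the general form v^p(r) = A r + B/r with the displacement prescribed at r0 and a non-moving outer boundary at r1; the value of v0 used in the example is illustrative, not a measured one.

```python
import numpy as np

def cavity_constants_m2(r0, r1, v0):
    """Solve v^p(r) = A*r + B/r for the coupled 1 (Imre-Rozsa) boundary
    conditions v^p(r0) = v0 (model pile surface) and v^p(r1) = 0
    (non-moving outer boundary); sketch for m = 2 only."""
    M = np.array([[r0, 1.0 / r0],
                  [r1, 1.0 / r1]])
    A, B = np.linalg.solve(M, np.array([v0, 0.0]))
    return A, B

def v_p(r, A, B):
    """Steady-state (drained) radial displacement field."""
    return A * r + B / r

if __name__ == "__main__":
    r0, r1, v0 = 1.75, 64.75, 0.1        # cm, cm, cm (illustrative v0)
    A, B = cavity_constants_m2(r0, r1, v0)
    print("A =", A, "B =", B)
    print("check:", v_p(r0, A, B), v_p(r1, A, B))
```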
Analytical solution
The transient solution part can be written as a series of products of exponentially decaying time functions exp(−c λk² t) and radial eigenfunctions composed of Bessel functions, where J_{m/2} and Y_{m/2} are the Bessel functions of the first and second kinds of order m/2, λk, μk and Ck are parameters of the solution, m is the embedding space dimension and c is the coefficient of consolidation.
The volumetric strain and the pore water pressure solutions follow from this series; the function u is then determined using Equation (1) for the Imre-Rózsa (i.e. m=2, coupled 1) model.
Constants of solutions
The parameters Ck of the solution can be determined from the initial condition. The parameters λk, μk of the solution can be determined from the boundary conditions.
The parameters Ck of the solution can be determined from the initial condition as follows. The initial displacement functions v^t_0(r) can be determined from u0(r) with the use of Eqs 21 and 27. The coefficients Ck can be determined using the orthogonality of the solution functions; in the case of m=2, the Bessel coefficients Ck and Dk are obtained from the initial displacement function v^t_0(r) by the corresponding orthogonality integrals. The parameters λk, μk of the solution can be determined from the boundary conditions as follows. For the coupled 1 or 2 model-families, the "boundary condition equation" (BCE, arising from the homogeneous form of boundary conditions Nr. 3 and Nr. 4 or Nr. 3 and Nr. 5, respectively) can be written in terms of the Bessel functions. The roots of the boundary condition equation for the coupled 1 and 2 model-families, respectively, for m=1 are

λk = kπ/(r1 − r0)  and  λk = (2k − 1)π/(2(r1 − r0)),  k = 1, 2, ...

An approximate closed form solution can be suggested for m>1 as follows. Using the asymptotic Bessel function formulae for large arguments,

Jν(x) ≈ √(2/(πx)) cos(x − νπ/2 − π/4),  Yν(x) ≈ √(2/(πx)) sin(x − νπ/2 − π/4),

the approximate form of the BCE for the coupled 1 and 2 model-families for the m>1 case (Eq. 61) is obtained, being equally valid for embedding space dimension m=2 or 3. The approximate roots for the coupled 1 and 2 model-families are, respectively,

λk ≈ kπ/(r1 − r0)  and  λk ≈ (2k − 1)π/(2(r1 − r0)),

so that, within a model-family, the one-dimensional and the approximate two- and three-dimensional formulae are identical. Inserting these into the analytic solutions, dimensionless variables result, in particular a time factor of the form T = c t/(r1 − r0)², identical in form to the oedometric one. These formulae reflect that the rate of dissipation is faster for the coupled 1 than for the coupled 2 models.
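The paper's actual boundary condition equation mixes Bessel functions of orders m/2 and (m−2)/2 and is not reproduced here. As an illustration of the root-finding step and of the quality of the asymptotic estimate λk ≈ kπ/(r1 − r0), the sketch below solves the classical annular cross-product equation J0(λ r0) Y0(λ r1) − J0(λ r1) Y0(λ r0) = 0 (an assumed stand-in BCE) by bracketing and Brent's method.

```python
import numpy as np
from scipy.special import j0, y0
from scipy.optimize import brentq

R0, R1 = 1.75, 64.75     # cm

def bce(lam):
    """Illustrative annular boundary condition equation (cross-product form);
    NOT the paper's coupled BCE, which mixes orders m/2 and (m-2)/2."""
    return j0(lam * R0) * y0(lam * R1) - j0(lam * R1) * y0(lam * R0)

def first_roots(n_roots=10):
    """Bracket the first sign changes of the BCE and refine with Brent's method."""
    roots, lam = [], 1e-4
    step = 0.25 * np.pi / (R1 - R0)          # fine scan relative to expected root spacing
    prev = bce(lam)
    while len(roots) < n_roots:
        nxt = lam + step
        cur = bce(nxt)
        if prev * cur < 0:
            roots.append(brentq(bce, lam, nxt))
        lam, prev = nxt, cur
    return np.array(roots)

if __name__ == "__main__":
    exact = first_roots(10)
    approx = np.arange(1, 11) * np.pi / (R1 - R0)
    for k, (e, a) in enumerate(zip(exact, approx), start=1):
        print(f"k={k:2d}  exact={e:.5f}  approx={a:.5f}  rel.err={(a - e) / e:+.2%}")
```

For this stand-in equation the relative error of the asymptotic formula decreases with k, which is the behaviour reported for the paper's own BCE in the Root formulae section.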
Simulation methods
The space domain is defined with radii r0 = 1.75 and r1 = 64.75 in the case of undrained penetration, well above the tip. Some numerical examples are given to illustrate the solution for the m=2 coupled 1 (Imre-Rózsa) model and coupled 2 (Randolph-Wroth) model as a function of the monotonic initial pore water pressure function series, since the initial condition may vary with the soil properties even in the case of undrained penetration. Similar analyses were made for the oedometric case in ([5]).
The monotonic initial condition series for the pore water pressure was given in the form of the monotonic, normalised, parametric functions (Eq 11). The mean ordinate was used to specify the numerical examples (Eq 12, Table 6).
The sum of the solution of the drained continuum-mechanical problem and the self-weight was set to be equal to the initial value of the piezo-lateral stress cell measurement (see Fig 16, [15]). A parametric analysis was made: the model constant displacement v0 was computed for both models assuming ν=0.3, G=50 kPa and various values of K0.
A normalized space coordinate was applied for representing the results on the space domain.
Simulation results
The results are presented for m=2, on the example of r0 = 1.75 cm and r1 = 64.75 cm, n = r1/r0 = 37 (this space domain is related to the undrained penetration problem of the CPT, well above the tip [21]).
Pore water pressure at the shaft
The initial pore water pressure functions are shown in Figure 2, where the initial conditions of the one-term solution and the initial pore water pressure distribution after undrained penetration determined by the strain path method are also indicated. According to the results, the one-term solutions and the pore water pressure distribution after undrained penetration determined by the strain path method are roughly found in the strip of the four not too extreme initial conditions (i.e. 4 to 7, see Fig 2). Therefore, the one-term solution, the solution related to the initial condition of the undrained penetration and the solutions related to initial conditions 4 to 7 are similar to each other and can be interchanged.
According to the results of the qualitative analysis, the rate of the pore water pressure dissipation on the shaft is controlled by the initial, transient mean effective stress, which depends differently on the initial condition for the two models. The pore water pressure dissipation functions are shown in Figure 3.
According to the results of the simulation, the pore water pressure dissipation curves are in accordance with this: the dissipation is faster for the coupled 1 model than for the coupled 2 model at a fixed initial condition.
As the distance from the zero solution increases (i.e., with increasing D), the dissipation time in terms of the time factor (T) increases for any fixed degree of dissipation (i.e. the curves "move" from left to right, see Fig 3).
For the coupled 1 model, having two zero solutions, the linear initial condition separates the dissipation curve solutions. If D<0.33 (i.e. convex initial distributions) the dissipation time increases with increasing D, while if D>0.33 (i.e. concave distributions) it decreases with increasing D. As a result, the dissipation curves related to some concave and convex initial condition functions coincide (Fig 4(b)).
The common features of the solutions of the two models are as follows. According to the results shown in Figure 3 (a) to (c), for the not too extreme initial conditions (i.e. 4 to 7), the dissipation curve solutions are very similar; they nearly coincide. In particular, the dissipation curves coincide at great degrees of dissipation (99.9%, with the degree of dissipation defined as (umax−u)/umax). The time factor T is about three times larger for the Randolph-Wroth model than for the Imre-Rózsa model, resulting in larger dissipation times. It follows that both models can be used for the evaluation of the pore water pressure dissipation tests, with slightly differing identified parameters.
Total stress at the shaft
According to the results of the qualitative analysis, the rate of total stress dissipation on the shaft is controlled by the initial pore water pressure distribution differently for the two models.
Concerning the transient part of the solution for total stress, in the case of the Randolph-Wroth model it is the zero function and, as a result, the radial total normal stress at r0 is constant and, therefore, the radial effective stress at r0 increases by the value of the initial pore water pressure at r0. These features are unrealistic for soft clays (Fig. 4a).
Concerning the transient part of the solution for total stress, in the case of the Imre-Rózsa model, it is the mean pore water pressure. As a result, the radial total normal stress at r0 decreases with time by the value of initial mean pore water pressure depending linearly on D and, the radial effective stress at r0 increases with time by the value of the difference of the initial mean pore water pressure and the initial pore water pressure at r0 (Fig. 4b).
Stresses within the soil
The solutions for the transient displacement v^t and the transient volumetric strain ε_v^t are shown in Figures 5 and 6. The initial value of the transient part of the radial displacement v^t is non-negative, and the final value is identically equal to zero. Therefore, the transient displacement v^t basically decreases with t for any r down to zero in the case of both models. As a result, the displacement at r1 is decreasing for the Randolph-Wroth model (inward moving outer boundary), while the displacement at r1 is constant for the Imre-Rózsa model (non-moving outer boundary). The results for the transient volumetric strain ε_v^t show compression in the case of the Randolph-Wroth model. Partly compression (in the vicinity of the shaft) and partly swelling (at the outer boundary) take place in the case of the Imre-Rózsa model (Fig. 6). The pore water pressure dissipation is faster for the Imre-Rózsa model (Fig. 7a).
The result of the parametric analysis in terms of K0 is shown in Fig. 7 for some values of K0. According to the results, negative effective normal stresses were encountered within the displacement domain for both models for some K0 values (Fig. 7). It follows that hydraulic fracturing may occur, in accordance with the experiences [24,26,27].
Methods
This chapter considers the numerical properties of the analytical solutions in terms of the space dimension, the initial condition and the boundary conditions.
It can be noted that the solution can be computed more easily in the cases m=1 or 3 using sine and cosine functions. The simplest numerical work is related to the oedometric models, where no numerical solution is needed for the roots. The numerical analyses of the oedometric models are presented in [5]. For m=2, Bessel functions are needed, which are approximated due to the slow convergence; the resulting series is 'semi-convergent', becoming divergent after a certain number of terms.
The cylindrical cases with embedding space dimension m=2 were analysed for various displacement space domains and initial conditions. The space domain defined with radii r0 = 1.75 and r1 = 64.75 corresponds to undrained penetration, well above the tip, which was analysed in the previous chapter. The boundary conditions were related to seven space domains with the same r0 (Table 8). The precise roots of the boundary condition equations (λi, μi) were determined with the secant method for m=2 and 3, for the same r0 and seven values of n = r1/r0 (Table 7). The number of terms was 40 at each specified r0 and r1 (some results for m=2 and n=4 to 584 are shown in App. A); 250 terms were considered for n=37. Ten values for the mean of the initial pore water pressure (parameter D) were selected, and the value of parameter F was determined for each value of D and for each value of r1. The initial pore water pressure functions are shown in Figure 2(a) for the case of n=37 and in Figure 2(b) for the other n values, where these were selected such that the D parameter was the same for the different r1 values.
Results
The aim is to determine the numerical properties of the analytical solution, which is related to two problems: whether the computed series is convergent at all, and how many terms need to be used.
Computing Bessel function values
The initial condition is an infinite series of Bessel functions of the first and second kinds, of order 1 and 0, which converge very slowly for large values of the independent variable. According to the usual practice ([25,30]), these functions were approximated differently in the small (x<8) and large (x>8) range of the independent variable. The small range functions look like simple power laws and were approximated by rational functions. The large range functions look like sine or cosine with a decay of x^(−1/2); products of polynomials and sine-cosine functions were used in the form of a library routine ([30]). The series applied in the range x>8 is not convergent in the sense of the convergence of power series: after a certain number of terms, the terms begin to increase, even in the case of arbitrarily large x (semi-convergent series [25,30]).
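The switch-over at x ≈ 8 can be illustrated by comparing a library value of J0 with the leading-order large-argument asymptotic form √(2/(πx)) cos(x − π/4); the sketch below mirrors the idea behind the cited routines without reproducing them, and the sample abscissae are arbitrary.

```python
import numpy as np
from scipy.special import j0

def j0_large_x(x):
    """Leading-order large-argument asymptotic form of J0."""
    return np.sqrt(2.0 / (np.pi * x)) * np.cos(x - np.pi / 4.0)

if __name__ == "__main__":
    for x in (1.0, 4.0, 8.0, 12.0, 20.0, 50.0):
        exact = j0(x)
        approx = j0_large_x(x)
        print(f"x={x:5.1f}  J0={exact:+.6f}  asymptotic={approx:+.6f}  "
              f"abs.err={abs(exact - approx):.2e}")
```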
The separation of the two ranges can be seen in Figure 8 for the various space domains, noting that the arguments are about the same for the two models on a given space domain. Concerning r0λk and r0μk, the large range applies if k>8 for the smallest space domain (n=4) and if k>90 for the space domain of the dissipation test (n=37). Concerning r1λk and r1μk, they are in the large range if k>3 for the space domain of the dissipation test (n=37), and every k falls in the large range for n=4.
It follows that for n=37 the shaft stresses can be computed precisely by the small range approximation; at r1, about 3 terms can be used in the small range approximation. In the case of small displacement domains (e.g. n=4), about 8 terms can be used at r0 in the small range approximation, while at r1 the large range approximation has to be used, which may entail some convergence problems. The few terms in the small range approximation may entail a non-precise solution, and the large range approximation may entail non-convergent series (see App. B). Concerning the Imre-Rózsa model, the series after summation was convergent with k up to about 200 terms, then it became divergent for initial conditions 1 and 10. Concerning the R-W model, after around 125 terms the series became divergent for most initial conditions (Fig. 8). The solution is non-precise at the outer boundary for n=37, and the situation is worse for n<37.
Convergence and initial condition
The rate of convergence of the Fourier-Bessel expansion of the pore water pressure at r=r0 was tested as a function of the initial condition shape functions for each of the seven space domains and both models. The value of 1 (the normalised initial pore water pressure at r=r0) was approximated by partial sums with fixed numbers of terms k. The results are summarized as a function of the initial conditions, the space domain and the number of terms in Figure 9, for the case of k<41, n=37. The error related to a certain k increases rapidly as the mean initial condition ordinate D varies towards the D value of the closest zero solution (i.e. D→0 for both models, D→1 for the Imre-Rózsa model). For the not too extreme initial conditions (i.e. 3 to 7), the numerical error is not important. These results can be explained as follows. If the initial condition series "converges" to the zero solution then the coefficients converge to zero, too. As a result, every other term being constant in Equations (52)-(53), the sum decreases at every fixed k for an initial condition 'closer' to the zero solution. The decrease of the coefficients in terms of the initial conditions for any fixed k can be seen in Figure 10. Fig. 9. Cylindrical models: the Bessel series approximation of the initial pore water pressure on the shaft (i.e. with value of 1) after summation. Fig. 10. The coefficients for fixed k, depending on the serial number of the initial condition. Fig. 11. The validity of the approximate root formulae: results of the numerical tests for π ≈ (r1−r0)λk/k or π ≈ 2(r1−r0)λk/(2k−1), using various values of r1 = n r0 and r0 = 1.75 cm.
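As a simplified analog of this convergence test (a full-disk Fourier-Bessel expansion with J0 only and a Dirichlet condition at the rim, instead of the annular two-kind series of the coupled models), the sketch below expands a few assumed monotonic shape functions and reports the partial-sum error at the centre for increasing numbers of terms; the shape functions are illustrative, not the Table 6 series.

```python
import numpy as np
from scipy.special import j0, j1, jn_zeros
from scipy.integrate import quad

R = 1.0                                   # unit disk radius (illustrative domain)

def fourier_bessel_coeffs(f, n_terms):
    """Coefficients of f(r) ~ sum_k c_k J0(a_k r / R) on a disk with f(R) = 0,
    using the orthogonality of J0 with weight r."""
    alphas = jn_zeros(0, n_terms)                       # zeros of J0
    coeffs = []
    for a in alphas:
        num, _ = quad(lambda r: f(r) * j0(a * r / R) * r, 0.0, R, limit=200)
        coeffs.append(2.0 * num / (R**2 * j1(a)**2))
    return alphas, np.array(coeffs)

def partial_sum(r, alphas, coeffs, k):
    return sum(c * j0(a * r / R) for a, c in zip(alphas[:k], coeffs[:k]))

if __name__ == "__main__":
    # Illustrative monotonic shape functions with value 1 at r=0 and 0 at r=R.
    shapes = {"convex (low D) ": lambda r: (1.0 - r / R) ** 4,
              "linear         ": lambda r: 1.0 - r / R,
              "concave (high D)": lambda r: 1.0 - (r / R) ** 4}
    for name, f in shapes.items():
        alphas, coeffs = fourier_bessel_coeffs(f, 40)
        errs = [abs(partial_sum(0.0, alphas, coeffs, k) - f(0.0))
                for k in (10, 20, 40)]
        print(name, ["%.3f" % e for e in errs])
```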
Root formulae
In the case of the two cylindrical models (m=2), 250 terms were considered for n=37 and 40 for the other space domains. The precise roots of the boundary condition equations were determined with the secant method and were transformed using the approximate closed form formulae. The results are shown in Figure 11. According to the results, the error of the approximate closed form formulae decreases with k for both models. The validity of the approximate model law was numerically proven for the one-term solution and for "non-extreme" initial conditions ([33]).
Total stress dissipation tests
No mathematically precise method is used at present for interpreting the DMTA/CPT total stress dissipation curve (Fig. 12). Concerning the total and effective stress dissipation tests, the coupled 1 model, completed by a relaxation term, was verified by measured oedometric relaxation test data (for unmoved boundaries [28-30]). The total stress solutions of the coupled 1 models with the various embedding space dimensions m=1 to 3 are qualitatively similar (Fig. 13). However, the controlling parameter (the mean initial normalized pore water pressure D) is significantly different for the various embedding space dimensions (i.e. for the linear initial function: D ~ 1/(m+1)).
In this work, the cylindrical coupled consolidation models with constant displacement boundary conditions have started to be used to evaluate some well-documented DMT data. The results of the evaluation with the oedometric models are also presented. The model law c_cyl = k c_oed was derived from the time factor (Fig. 14).
The models were linear in the simplest form; the non-linear behaviour was approximated by applying a relaxation part-model. Results are shown in Table 9 and Figs 15 to 17.
According to the results, the identified c is slightly larger than expected. In other words, the CPTu total stress dissipation may entail a 'too large' stress drop from the modelling aspect, possibly because the boundary is not unmoved due to the stress release (during the test, the unclamped condition prevails) and, as a result, the diameter of the penetrometer may slightly decrease with time due to the stress release.
The pore water dissipation tests
At present, a numerical solution of an uncoupled two-dimensional model is applied for approximate model fitting, based on the use of the rigidity index, which is generally estimated by rules of thumb [15,22,31,32].
In the frame of this research, both cylindrical models and the spherical coupled 1 model were included in an evaluation software for the evaluation of the CPTu tests, using a mathematically precise inverse problem solution method ([11] to [14]); this needs the value of r1 (which may depend on the rigidity index), and its value is known for undrained penetration only. The point-symmetric 3- and 2-dimensional coupled 1 solutions agree, the difference being tiny for typical initial conditions (Fig. 18). The point-symmetric 2-dimensional coupled 1 and coupled 2 solutions agree at the physically admissible initial conditions (Figs. 2, 3c).
The analytical solutions are very useful in the case of non-monotonic initial conditions, when the dissipation curve is also non-monotonic (Fig. 19); this may occur in CPTu testing if the soil is OC around the shaft (it can originally be an OC clay, or a highly compressed sand or silt due to penetration which rebounds around the shaft). The modelling of non-monotonic dissipation curves is easier with the analytical models.
It can be noted that in case of partly negative initial pore water pressure distribution, the mean effective stress may initially increase during dissipation resulting in an initial total stress increase.
The analytical and numerical pore water pressure dissipation solutions agree, as shown in Fig. 20 (related to Ir=150 and the conventional time factors).
The main points concerning the analytical solution are as follows. The drained continuum-mechanical model is the oedometer (K0) compression model for m=1 and the cavity expansion models for m=2, 3, with two kinds of boundary conditions. The transient solution part has the form of a Bessel series with orders m/2 and (m−2)/2; the coefficients can be determined from the initial condition and the constants from the boundary conditions.
(4) The transient effective stresses are negative in the vicinity of the shaft, and the sum of the steady-state and transient effective stresses may become negative, implying the possibility of hydraulic fracturing. This may occur if the steady-state normal stress becomes small due to a rebound, after partial unloading.
(5) The dissipation is instantaneous at the shape function related to D=1 (constant, non-zero function) for the coupled 1 model, since in this case the initial transient effective stress state is identically equal to zero. It follows that the intensive variable of the coupled seepage in soils is the effective stress, the stress-state variable of the saturated soils.
Concerning the numerical properties:
(1) The constants from the boundary condition equations can be computed with a closed-form formula only for embedding space dimension m=1 and cannot be computed with closed form formulae for m=2 and 3. In the latter case the constants were determined numerically by the secant method for various displacement domains.
(2) For m=2 and 3, by using the asymptotic Bessel formulae, two approximate, closed-form root formulae were derived, being the same as the precise formulae of the one-dimensional case within a model family.
(4) Having an identical (approximate) dimensionless time coordinate, it was possible to compare the pore water pressure dissipation solutions for all embedding space dimensions within a model family (Fig. 19). Due to the similarity, the m=1, 2 and 3 solutions can be interchanged; the solutions of the m=1 or 3 models can be used instead of the m=2 models in the evaluation. The differences in the identified parameters for the various models can likely be compensated by constant multipliers. (The dissipation is faster if the domain has a larger boundary, i.e. is more 'rounded'.)
(5) The steady-state part of the solution was determined experimentally: the sum of the gravitational stresses and the cavity expansion stresses was taken from real-life PLS data. The analytical solutions of the coupled 1 and 2 models clearly indicated locally negative values of the effective stresses inside the displacement domain, in agreement with the experiences (hydraulic fracturing around seabed wells, around piles [30] and in the oedometric relaxation test in case of load reversal [23]).
Convergence
The convergence of the analytical solution depends on the initial condition in the same way within each model family, independently of the embedding space dimension m (oedometer: 1, spherical: 3, cylindrical: 2). Since the coefficients vary continuously with the initial condition parameter and tend to zero as the initial condition approaches a zero solution, a larger number of terms gives only similar accuracy close to a zero solution, due to the small-valued coefficients. Far from the zero solutions, for the not extreme initial conditions (i.e. 3 to 7, Fig. 2), the 10- to 40-term approximation gives good accuracy; however, the error increases rapidly when getting 'closer' to a zero solution (i.e. D→0 for both models, D→1 for the coupled 1 model).
Due to the slow convergence, a Bessel series approximation is needed in the case of the cylindrical model (m=2). Assuming undrained penetration (n = r1/r0 = 37), the shaft stresses can be computed with acceptable precision using the small range approximation. At r1 the large range approximation has to be used, which may entail some convergence problems. Concerning the Imre-Rózsa model, the series after summation is convergent with k up to about 200 terms, then it becomes divergent for initial conditions 1 and 10.
Assuming that the size of the displacement domain decreases (n = r1/r0 < 37) in the case of partly undrained penetration, with increasing soil permeability and r1 → r0, the precision of the shaft stress approximation decreases with decreasing r1 and decreasing D and becomes unacceptable at around n = r1/r0 = 4.
Computing the value at r=r0 of the normalized initial condition shape functions, the numerical convergence test gave the same picture within a model family. For the not extreme initial conditions (i.e. 3 to 7, Fig. 2), the 10- to 40-term approximation gave good accuracy; however, the error increased rapidly when getting 'closer' to a zero solution (i.e. D→0 for both models, D→1 for the coupled 1 model), since the coefficients of the Bessel series decreased rapidly. (6) In the cases m=1 and m=3 the solution can be computed using sine and cosine functions. However, for m=2, the Bessel function series had to be computed. Due to the slow convergence, the series were approximated, according to the practice ([26]), differently in the small (x<8) and large (x>8) range of the independent variable. The precision in the small range was acceptable; in the large range the series was not convergent in the sense of the convergence of power series.
It was found that in the case of n=37 the series was convergent in the small range up to about 90 terms at r=r0. The n=37 series was in the small range only up to about 8 terms at r=r1, and the n=4 series was in the small range up to about 7 terms at r=r0; the results were realistic but not precise with 40 terms in these cases.
Analysis of the model
The solution of the system of Equations (1) and (2) with the initial and the boundary conditions was analysed assuming an initial pore water pressure function series u0(r) of positive, monotonic functions, each being in unique relation with a single parameter (i.e. the initial mean pore water pressure, denoted by D). The main features of the transient part of the solution of the two model-families are as follows.
1) The transient effective stress depends differently on the pore water pressure function (and on the initial pore water pressure distribution) for the coupled 1 and for the coupled 2 models, being equal to (umean − u) and −u, respectively. For the coupled 1 models the mean effective stress and the volume of the domain are constant during consolidation; swelling takes place in the vicinity of the shaft and compression takes place in the vicinity of r1. For the coupled 2 models, the domain is decreasing, the boundary r1 moving inward during consolidation, and the mean effective stress is increasing by the value of the initial mean pore water pressure.
2) The transient radial total stress depends differently on the pore water pressure function (and on the initial pore water pressure distribution) for the coupled 1 and for the coupled 2 models, being equal to umean and 0, respectively (i.e. the mean pore water pressure and the zero function). As a result, for positive pore water pressures, the total stress at the shaft-soil interface decreases by the value of the initial mean pore water pressure for the coupled 1 model during dissipation. For the coupled 2 models, the total stress at the shaft-soil interface is constant.
It can be noted that in case of partly negative initial pore water pressure distribution, the mean effective stress may initially increase during dissipation resulting in an initial total stress increase.
3) The integral of the dissipation curve on the time domain is a functional depending on the initial, transient effective stress state, being different for the coupled 1 than the coupled 2 models. This functional characterizes the rate of consolidation (or seepage).
As a first consequence, the effective stress (and the functional) is smaller (the pore water pressure dissipation is faster) for the coupled 1 than for the coupled 2 model for any fixed initial condition.
The effective stress is the zero function (the functional is zero) at the zero initial pore water pressure function for both model-families (this is the trivial zero solution, at D=0) and at the constant, nonzero initial pore water pressure function for the coupled 1 models (this is a non-trivial zero solution, at D=1).
As a second consequence, at the non-trivial zero solution the dissipation is instantaneous. As a result, if the initial condition of the oedometer relaxation test contains a constant component, this will dissipate instantaneously. As a third consequence, the closer the initial condition is to one of the zero solutions (e.g., in terms of the initial condition parameter D), the faster the dissipation.
Similarity of solutions
Having identical (approximate) dimensionless time coordinate, it was possible to compare the pore water pressure dissipation solutions for all embedding space dimensions within a model family (Figs. 18,21).
According to the results, the dissipation is slightly faster if the domain has a larger boundary (is more 'rounded'), but the similarity is surprising.
Due to the similarity, the m=1, 2 and 3 solutions can likely be interchanged; the solutions of the m=1 or 3 models can be used instead of the m=2 models in the evaluation.
The slight differences in the identified parameters for the various models can likely be compensated by some constant multipliers. Further research is suggested on the similarity. Concerning the two model families, the dissipation curve solutions nearly agree; the (newly introduced) time factor values increase slightly with decreasing embedding space dimension due to geometrical reasons (Fig. 15).
Note on the uncoupled models
It can be examined whether or not the uncoupled model can be derived from the coupled 2 (i.e. Randolph-Wroth) model. The integral expressions and the constitutive law are used to express the radial total stress and the first invariant of the total stress tensor in terms of the pore water pressure. These may be constant only if the Poisson's ratio in terms of the effective stress is equal to 0.5. This case is impossible (the soil would be incompressible and there would be no consolidation). It follows that the uncoupled model cannot be derived from the coupled model.
Thermodynamic interpretation
The transport of any extensive quantity implies an intensive quantity, the homogeneous distribution of which is the precondition of equilibrium (Theorem 0 of Thermodynamics). The movement of an extensive quantity is caused by the inhomogeneous distribution of the intensive quantity, which tends to be eliminated (Law 2 of Thermodynamics).
The extensive variable for seepage is the water mass or volume. The intensive variable for seepage is the total hydraulic head of the water phase: where z is the vertical distance from an arbitrary datum. In the models presented here the effect of z was neglected assuming that h=u/ w .
The rate of the dissipation at any point rbeing characterised by the area of the subgraph of the dissipation curve u(t,r) -was expressed as a functional of the initial transient volumetric strain. It follows from this expression that the dissipation is instantaneous if the initial transient volumetric strain is identically equal to zero. It may occur that the initial transient volumetric strain is identically equal to zero while the initial pore water pressure function is non-zero (i.e. coupled 1 model, constant initial pore water pressure distribution). The dissipation of the pore water pressure is instantaneous in this case. Therefore, it can be said that transient seepage takes place if and only if the initial transient volumetric strain is not identically equal to zero.
Conclusions
Two linear, point-symmetric coupled consolidation model families were analysed, with embedding space dimensions m=1 to 3, differing in the boundary condition at the outer boundary.
A constant displacement boundary condition is assumed at the inner boundary. For the two kinds of oedometer tests this boundary is at r=r0=0, the symmetry point of a double-drained oedometric sample, and the outer boundary is at r=H, where a zero pore water pressure boundary condition is assumed besides the total stress load (coupled 2 models) or the displacement load (coupled 1 models). For the pile problem, the r=r0 boundary is the surface of the model pile, and the outer boundary r=r1, where a zero pore water pressure boundary condition is assumed, is unknown; it is determined as the zero pore water pressure line after penetration. The coupled 1 models assume constant displacement here, the coupled 2 models constant volumetric strain. The main points of the results are repeated as follows.
1) The differences between the two model families are significant in the total stress modelling. The coupled 1 models describe the total stress drop during dissipation (and the effective stress variation) encountered during CPT or DMT total stress dissipation tests; the coupled 2 models predict constant total stress at the shaft-soil interface.
2) The similarity of the solutions within a model family is surprising, and the modified Terzaghi model law can be used to model, e.g., the DMT dissipation test with an oedometer test evaluation model. The analytical and numerical properties depend similarly on the initial condition, and the numerical properties are worse closer to the zero solutions in each case.
3) Being similar, the analytical solutions can be interchanged within a model family (i.e., the coupled 1 models for m=1, m=2 and m=3 can be interchanged). In addition, the oedometer tests can be used to study the phenomena after pile penetration (e.g., the effect of the stress release of the elastic pile material after penetration).
4) The coupled 1 modelling was verified for the m=1 oedometer test case, and its analysis has been started for the DMT dissipation test. First results indicate that in the latter case the stress release may influence the boundary condition.
5) The steady-state part of the solution may influence the effective stress. Using experimental data, the computed effective stresses can be locally negative inside the displacement domain, in agreement with experience (hydraulic fracturing has occurred several times around sea-bed wells and around piles after penetration [30], and in the oedometric relaxation test in the case of load reversal [28]).
6) The analytical solutions are numerically simple in the cases m=1 and m=3, being sine and cosine functions. For m=2, Bessel functions are needed, with slow convergence, and these are approximated. The resulting series is 'semi-convergent', being divergent for some values of the space variable, e.g., in the case of small displacement domains.
7) Very little information is available on relaxation (the time dependency of the constitutive law), which needs to be considered for the total stresses around piles.
8) Very little information is available on the initial condition of partly drained penetration, and it follows from the analyses of the numerical properties of the analytical solutions that dissipation modelling after partly drained penetration may face numerical difficulties.
Derivation of the system of partial differential equations
The system of differential equations can be derived as follows. The equilibrium equation, the effective stress equality, and the geometrical and constitutive equations are combined to give the modified Equilibrium Equation. The continuity equation, Darcy's law and the geometrical equation are combined to give the modified Continuity Equation.
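As a hedged illustration of the structure such a combination yields, the transient pore water pressure in a linear, point-symmetric problem of embedding dimension m typically satisfies a diffusion-type equation of the form (the coupled equations of the present models contain additional coupling terms):

\[
\frac{\partial u}{\partial t} = c \left( \frac{\partial^{2} u}{\partial r^{2}} + \frac{m-1}{r}\,\frac{\partial u}{\partial r} \right), \qquad m = 1, 2, 3,
\]

where m = 1, 2 and 3 reproduce the planar, cylindrical and spherical radial Laplacians; for m = 2 the separated radial solutions are Bessel functions, while for m = 1 and m = 3 they reduce to sine and cosine functions, consistent with the remarks in the Conclusions.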
Solution
The general solution of the models, subject to the specified boundary conditions, is the sum of two parts: one transient and one steady-state. The steady-state part of the displacement (vp) is given by the solution of the following equation, where the parameters α and β can be determined from the non-homogeneous boundary conditions. The steady-state pore water pressure solution is the solution of the Laplacian Equation (part of the modified Continuity Equation), which is equal to zero here.
The transient solution parts for the volumetric strain (εt), the displacement (vt) and the pore water pressure (u) are given by Bessel function series, where Jp and Yp are the Bessel functions of the first and second kind of order p; n is the space dimension; λk and μk are the roots of the boundary condition equations (composed from the homogeneous form of the boundary conditions); Ck (k=1...) are the Bessel coefficients determinable from the initial condition; and c is the coefficient of consolidation (c = k·Eoed/γw). Around 250 roots λk, μk were determined for the models (see [8]).
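Such roots are typically obtained numerically. The sketch below is illustrative only: the hypothetical boundary condition equation J0(λ) = 0 stands in for the actual, more involved equation of the models; it scans for sign changes and refines each bracket with Brent's method.

```python
import numpy as np
from scipy.special import j0
from scipy.optimize import brentq

def find_roots(f, n_roots, x_start=0.5, step=0.1):
    """First n_roots positive roots of f: scan for sign changes,
    then refine each bracketing interval with Brent's method."""
    roots, x = [], x_start
    while len(roots) < n_roots:
        if f(x) * f(x + step) < 0:
            roots.append(brentq(f, x, x + step))
        x += step
    return np.array(roots)

# Hypothetical boundary condition equation: J0(lambda) = 0.
print(find_roots(j0, 5))
# -> [ 2.4048  5.5201  8.6537 11.7915 14.9309]
```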
The pore water pressure is determined from vt by integrating the modified Equilibrium Equation with respect to r, including boundary condition No. 1. Inserting these into the analytical solution, the independent variables become non-dimensional. For n=2 or n=3, no closed-form root formulae can be found. Solving the boundary condition equation and observing the numerical properties of the roots, two approximate formulae can be found as follows ([41]).
The approximate solution of the boundary condition equation
The approximate formulae can be derived analytically by inserting the asymptotic Bessel function formulae into the boundary condition equation. The asymptotic (large-argument) forms are the standard ones, Jp(x) ≈ √(2/(πx))·cos(x - pπ/2 - π/4) and Yp(x) ≈ √(2/(πx))·sin(x - pπ/2 - π/4).
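A quick numerical check of these asymptotic forms (an illustrative sketch; the order p = 0 and the test points are arbitrary choices), including the approximate root locations they imply:

```python
import numpy as np
from scipy.special import jv

def j_asym(p, x):
    """Large-argument asymptotic form of the Bessel function J_p."""
    return np.sqrt(2.0 / (np.pi * x)) * np.cos(x - p * np.pi / 2 - np.pi / 4)

x = np.array([1.0, 5.0, 20.0, 50.0])
print(jv(0, x))      # exact J_0:  0.7652 -0.1776  0.1670  0.0560
print(j_asym(0, x))  # asymptotic approximation; the error decays with x

# Setting the cosine argument to an odd multiple of pi/2 gives the
# approximate roots of J_0: lambda_k ~ (k - 1/4) * pi,
# compared with the exact zeros 2.4048, 5.5201, 8.6537, ...
k = np.arange(1, 6)
print((k - 0.25) * np.pi)
```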
"year": 2021,
"sha1": "1edfeb16e5c532b76519aa3465d9b3d84c70fcab",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "1fe439b8769f85a4208e459a65ef2ea86fb7da79",
"s2fieldsofstudy": [
"Engineering"
],
"extfieldsofstudy": [
"Physics"
]
} |
Lateral Temporal Lobe: An Early Imaging Marker of the Presymptomatic GRN Disease?
Abstract The preclinical stage of frontotemporal lobar degeneration (FTLD) is not well characterized. We conducted a brain metabolism (FDG-PET) and structural (cortical thickness) study to detect early changes in asymptomatic GRN mutation carriers (aGRN+) who were evaluated longitudinally over a 20-month period. At baseline, left lateral temporal lobe hypometabolism was present in aGRN+ without any structural changes. Importantly, this is the first longitudinal study in this population and, across time, metabolism decreased more rapidly in aGRN+ in lateral temporal and frontal regions. The main structural change observed in the longitudinal study was a reduction of cortical thickness in the left lateral temporal lobe in carriers. A limitation of this study is the relatively small sample (n = 16); nevertheless, it provides important results. First, it evidences that the pathological processes develop a long time before clinical onset, and that early neuroimaging changes might be detected approximately 20 years before the clinical onset of disease. Second, it suggests that metabolic changes are detectable before structural modifications and cognitive deficits. Third, both the baseline and longitudinal studies provide converging results implicating the lateral temporal lobe as early involved in GRN disease. Finally, our study demonstrates that structural and metabolic changes could represent possible biomarkers to monitor the progression of disease in the presymptomatic stage toward clinical onset.
INTRODUCTION
Frontotemporal lobar degeneration (FTLD) encompasses rare neurodegenerative disorders characterized by behavioral changes and language deficits. Mutations of the GRN (progranulin) gene, all leading to progranulin haploinsufficiency, are responsible for 25% of familial cases. The prevalent clinical phenotype of GRN patients is the behavioral variant of frontotemporal dementia (bvFTD). Primary progressive non-fluent aphasia and corticobasal syndrome are less common presenting phenotypes [1,2]. The neuroimaging pattern of GRN carriers is characterized by asymmetrical frontotemporal-parietal atrophy [3,4].
So far, it is not known how long before the clinical onset of FTLD structural and functional changes occur. It is expected that biological alterations and morphological changes leading to dementia could occur decades before the first symptoms of FTLD, as demonstrated in other genetic forms of dementia such as Alzheimer's disease [5]. Establishing how long these brain changes precede the clinical onset, and their chronology during the presymptomatic stage, is crucial because therapeutics such as HDAC inhibitors or amiodarone [6][7][8] are currently being developed to compensate for progranulin haploinsufficiency. In this study, we performed a multimodal approach to investigate the chronology of brain structural and metabolic changes in a cohort of asymptomatic GRN carriers.
Subjects
Forty-three neurologically healthy individuals with a 50% risk of carrying a GRN mutation (first-degree relatives of GRN carriers from 15 unrelated families) were recruited in four French centers over a 3-year period (2011 to 2013). All participants signed informed consent for the study, which was approved by the Ethics Committee of 'Assistance Publique-Hopitaux de Paris, Paris'.
At inclusion, asymptomatic status was ascertained based on relatives' interviews, neurological examination and the normality of scores on behavioral scales and neuropsychological tests (Supplementary Methods 2, Supplementary Table 1). Three participants presented cognitive impairment at the neuropsychological evaluation and were considered 'cognitively symptomatic non dementia' (CSND); they were therefore excluded from the analyses. Seven more were excluded because they did not undergo the full protocol, or because of the a posteriori discovery of coincidental lesions on brain MRI.
Finally, 33 healthy individuals were included in the analyses. GRN sequencing revealed that sixteen asymptomatic participants carried a GRN mutation (aGRN+, see Supplementary Table 2 for the list of mutations); the 17 participants who did not carry a mutation (GRN−) were used as the control group. The characteristics of the aGRN+ and GRN− groups are summarized in Table 1 and Supplementary Table 1. There were no statistical differences in age at examination, gender composition, or educational level between the two groups (Table 1, Supplementary Methods 1). The 33 subjects underwent standard MRI and FDG-PET at baseline (T0); all except 5 underwent a second evaluation 20 months later (T20) with the same cognitive and neuroimaging protocol (14 GRN carriers, 14 non-carriers, n = 28) (Table 1). Five participants (2 carriers, 3 non-carriers) refused to be reevaluated and dropped out of the study. Baseline and longitudinal statistical analyses were performed for brain structural MRI and metabolism, as described below. The participants were age- and gender-matched for the analyses at each time point (Table 1). We estimated the distance from the age of clinical onset in aGRN+ by subtracting the age at examination from the mean age at onset in the family.
MRI acquisition
MRIs were acquired with 3 Tesla and 1.5 Tesla scanners according to the scanner available in each center. All centers used the same MRI sequence protocol, which was designed and optimized to minimize center bias. Prior to the study, phantom acquisitions were performed in order to ensure the comparability of the results across centers. The same proportion of carriers and non-carriers was investigated in each center, and baseline and follow-up MRIs were performed on the same scanner for each participant. High-resolution three-dimensional T1-weighted images were acquired with full brain coverage and isotropic voxels (TR: 2300 ms; TE: 4.18 ms; matrix = 256 mm; slice thickness = 1 mm).
Cortical thickness analysis
Cortical thickness analyses were performed on T1-weighted 3D images using Freesurfer software (http://surfer.nmr.mgh.harvard.edu). Briefly, T1-weighted 3D images were preprocessed with intensity variation correction, normalization, affine registration to the Talairach atlas, skull stripping, and segmentation of grey and white matter. The pipeline for longitudinal processing was used, which includes the creation of an unbiased within-subject template using robust, inverse consistent registration [9]. For cortical thickness, we used surface-based analysis of thickness values at each vertex. Surface-based analyses of cortical thickness were performed using Surfstat software (http://www.math.mcgill.ca/keith/surfstat/) following the methodology previously used [10]. Cortical thickness maps were smoothed using a 20 mm surface-based kernel. The comparison of baseline cortical thickness between groups was carried out using a two-sample t-test at each vertex. For longitudinal analyses, a paired t-test was used. Statistics were corrected for multiple comparisons using the random field theory for non-isotropic images [11]. A statistical threshold of p < 0.005 was first applied (height threshold). An extent threshold of p < 0.05 corrected for multiple comparisons was then applied at the cluster level.
Positron emission tomography protocols
18F-fluorodeoxyglucose positron emission tomography (18FDG-PET) scans were acquired in four departments of nuclear medicine with a standardized protocol. Phantom acquisitions were performed prior to the study in order to measure the spatial resolution (FWHM) of each scanner. A dose of 2 MBq/kg of fluorodeoxyglucose (18FDG) was injected 30 to 45 min prior to an acquisition of 15 min. Patients rested in quiet surroundings with their eyes closed for at least 20 min post-injection. Follow-up scans were performed on the same tomograph as the baseline, with the same protocol.
PET volumes were co-registered to their corresponding MRI volumes. MRI volumes were segmented into grey matter and white matter probability maps and spatially normalized to MNI space using SPM8. Co-registered PET images were spatially normalized by applying the transformation parameters of the MRI normalization. Individual variability was taken into account by dividing each subject's voxel uptake by the mean pons uptake, yielding parametric images. Pons uptake was obtained from a Pickatlas (http://fmri.wfubmc.edu/software/pickatlas) region of interest. Parametric images were smoothed using an isotropic Gaussian kernel of 12 mm. A voxel-by-voxel comparison between carriers and non-carriers was then performed with a two-sample t-test on smoothed parametric images using an explicit mask. This mask was obtained from the mean of the grey matter probability maps of each subject included in this analysis, with a threshold of 0.4. Age, gender, and tomograph spatial resolution were used as covariates. The MarsBaR toolbox in SPM8 was used to extract [18F]FDG-uptake adjusted values from significant clusters.
The method used to analyze the longitudinal data was adapted from the one previously described by Fouquet et al. [12]. The follow-up MRI was co-registered to the baseline MRI, and a mean image was calculated. This mean image was used to calculate optimal transformation parameters to MNI space. Next, baseline and follow-up PET images were co-registered to the baseline MRI, spatially normalized to MNI space using the optimal transformation previously calculated, scaled by mean pons uptake, and smoothed with an isotropic Gaussian kernel of 4 mm. Individual percent annual change maps, or "PET-PAC", were then calculated. These maps represent the voxel-wise percent metabolic change over the 20-month follow-up period (i.e., the difference between follow-up and baseline scaled PET values divided by the baseline PET value × 100), expressed as annual percent change. A voxel-by-voxel comparison of PET-PAC between carriers and non-carriers was then performed after a second smoothing of the individual PET-PAC maps with an isotropic Gaussian kernel of 10 mm, using a mask obtained with the same method as for the cross-sectional analysis.
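A minimal sketch of this PET-PAC computation, assuming co-registered, spatially normalized baseline and follow-up volumes as NumPy arrays with a boolean pons mask; the array names, shapes and synthetic data are illustrative, not the authors' code.

```python
import numpy as np

def pet_pac(baseline, followup, pons_mask, months=20.0):
    """Voxel-wise percent annual change (PET-PAC) between two FDG-PET
    volumes, each scaled by its mean pons uptake, annualized over the
    follow-up interval."""
    bl = baseline / baseline[pons_mask].mean()
    fu = followup / followup[pons_mask].mean()
    return (fu - bl) / bl * 100.0 * (12.0 / months)

# Synthetic example: a regional 5% uptake decrease over 20 months,
# with the (hypothetical) pons voxels left unchanged.
rng = np.random.default_rng(0)
bl = rng.uniform(0.5, 1.5, size=(4, 4, 4))
fu = bl.copy()
fu[2:, :, :] *= 0.95
mask = np.zeros(bl.shape, dtype=bool)
mask[0, 0, :] = True
print(pet_pac(bl, fu, mask)[3, 0, 0])  # about -3 %/year
```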
All results are reported at p < 0.001 uncorrected for multiple comparisons, with an extent threshold k corresponding to the expected number of voxels per cluster. Differences between spatially normalized FDG-PET scans obtained with scanners of different resolutions were minimized by the following measures: i) restricting the analysis to voxels with intensity greater than 80% of the whole-brain mean, and ii) excluding voxels from the uppermost 10 slices (i.e., from the top 22.5 mm of the brain) and from the lowermost 5 slices, where significant inter-scanner effects due to different fields of view have been reported [13].
RESULTS
Groups did not differ in gender, age at examination (p = 0.8), or age at follow-up (p = 0.9). The mean estimated distance to the age at clinical onset was 20 ± 10 years in aGRN+ (Table 1).
Cortical thickness
At baseline, no significant difference in cortical thickness was found between aGRN+ and GRN−. At follow-up, a reduction of cortical thickness was found in one cluster spanning the left middle (1607 voxels) and inferior (554 voxels) temporal gyri (Fig. 1), with a peak in the left middle temporal gyrus (p < 0.05, cluster-corrected).
The longitudinal analysis revealed areas of greater metabolism decrease (p < 0.001, uncorrected) in aGRN+ compared to GRN− in the left inferior temporal, left middle frontal, left inferior orbital frontal and right superior orbital frontal gyri, as well as in the left thalamus (Fig. 3; Supplementary Table 4). Mean and maximal percent annualized change values in the regions represented in Fig. 3 are given in Supplementary Table 4 (with MNI coordinates and values).
DISCUSSION
The major neuroanatomical signature of GRN disease in symptomatic mutation carriers is an asymmetric involvement of the inferior frontal, temporal, and parietal brain regions [1,3,4]. A recent study also demonstrated that, in GRN symptomatic patients, the largest annual percentage change of atrophy occurs in the temporal (lateral, polar) and parietal (lateral, posterior) lobes and the insula, compared to all other genotypes [14].
In this study, we have evaluated the presymptomatic phase of GRN disease. We conducted a multimodal analysis combining two neuroimaging approaches to evaluate the chronology of structural and metabolic brain changes occurring during the presymptomatic phase in GRN carriers. The mean distance from estimated age at onset in our series (20 ± 10 years) is longer than in most other studies (7 to 12 years, Supplementary Table 5) and allows detecting very early changes. We also evaluated the progression of brain changes across time in a longitudinal study. Importantly, this is the first longitudinal study conducted in GRN disease. In most studies, progression in the presymptomatic stage of dementia is evaluated by correlating changes with the mean distance to clinical onset, estimated as the difference between age at examination and mean age at onset in a family [15][16][17][18]. This estimation can easily be applied in genetic diseases where age at onset is relatively stable within families, as in genetic forms of Alzheimer's disease [5], but it is less reliable in GRN disease, where age at onset is highly variable within families. For this reason, we evaluated the progression of changes across time by longitudinal evaluation of presymptomatic GRN carriers over a 20-month follow-up period.
At baseline, the absence of structural changes measured by cortical thickness in this study is consistent with another study [18]. These negative results might be explained by the long distance to clinical onset; alternatively, this method might not be sensitive enough to detect small effects in small groups of asymptomatic individuals. Only one cross-sectional study, performed by Pievani et al., demonstrated reduced cortical thickness in five GRN carriers in the orbitofrontal cortex and the middle frontal and precentral gyri, findings that are not completely consistent with our results at baseline [19]. These inconsistencies might be due to the age at examination, which is higher than in our study (that population is thus closer to clinical onset), and to the sample of carriers, which is smaller than our cohort, possibly explaining the different results at baseline. Furthermore, the statistical methodology in our study is less liberal than that used by Pievani et al. and should minimize the reporting of false positive findings. This may also explain why Pievani et al. reported differences in a smaller group of carriers while we did not find significant differences at baseline. Importantly, even though no changes were present at baseline in our study, cortical thickness decreased across time at follow-up in our aGRN+ individuals in the lateral temporal lobe, in particular in the left middle and inferior temporal gyri. Notably, in accordance with our results, cortical thickness decreased faster with aging in the same regions in GRN carriers in another study [18]. Our results indicate that a comparison across time might be an appropriate method to detect affected brain regions during the presymptomatic stage.
Hypometabolism was present at baseline in GRN carriers and was initially limited to the left middle temporal region. Unexpectedly, the frontal lobes were not involved at baseline, although another metabolism study in aGRN+ carriers [17] found diffuse hypometabolism in the frontal lobes. In the latter study, however, half of the 9 carriers were cognitively symptomatic, which might explain the less selective impairment at a later stage of disease progression. These inconsistencies can also be partially related to the different methodologies used in the two studies. Conversely, our follow-up evaluation evidenced a rapid metabolism decrease in aGRN+ involving the frontal lobe (left middle, orbital) in addition to the inferior temporal gyrus and thalamus. Our results suggest that metabolic abnormalities, detectable at baseline, could predate the structural changes and be one of the earliest predictors of the pathological process. They also suggest that the temporal lobe might be initially more susceptible to the pathological process, which secondarily progresses to the frontal cortex.
Finally, both our baseline and longitudinal studies provide converging results implicating the lateral temporal lobe as one of the earliest regions involved in GRN disease. Other studies [15,20] also indicate that temporal areas could be noticeably impaired before the frontal regions. A recent European study in a large cohort of aGRN+ carriers demonstrated that temporal atrophy is detectable 15 years before estimated clinical onset, before frontal involvement [21]. Consequently, one might hypothesize a dynamic model of the presymptomatic stage of GRN disease in which temporal areas, involved many years before the clinical onset, could be the 'epicenter' of the pathological seeds that might progress later toward frontal and/or parietal regions.
The left middle temporal gyrus, which is early and consistently involved in this study, is implicated in language and semantic processing as well as in the recognition and retrieval of semantic information [22]. The involvement of this region fits well with the clinical presentation of language disorders, especially the agrammatic/nonfluent variant of FTD, characterizing a subset of GRN patients [1,3]. The lateral temporal lobe also plays a role in theory of mind [23], which is one of the first detectable cognitive deficits in the early stage of FTD and significantly decreases in GRN carriers approaching the age of onset of the disease [16].
A more rapid metabolic decrease was also detected in the thalamus, a key node in the prefrontal-basal ganglia circuits, as well as in the prefrontal cortex. Interestingly, thalamic atrophy is more frequently detected in symptomatic GRN carriers than in other FTD subtypes [24], and is already detectable in the presymptomatic stage of FTD [21]. Both the thalamus and the prefrontal cortex generate and control goal-directed behaviors [25,26] and are implicated in apathy, one of the predominant clinical symptoms of FTD.
Studies in GRN presymptomatic carriers have some limitations. First, the clinical heterogeneity of GRN disease, reflecting the variable topography of lesions at onset, can diminish the robustness of change detection in presymptomatic carriers. Moreover, the subtle changes detected during the presymptomatic stage could also vary according to methodological approaches. Finally, disease-specific markers are not available in FTLD, thus possibly delaying the detection of presymptomatic changes in this pathology.
However, our study provides important results. First, it evidences that the pathological process develops a long time before clinical onset in GRN carriers, and that early metabolic changes might be detected approximately 20 years before estimated disease onset. Second, it shows that metabolic changes are detectable before structural modifications and cognitive deficits, which possibly appear at a shorter delay from the clinical onset. Finally, our study helps demonstrate that structural and metabolic changes could represent possible biomarkers to monitor the progression of disease in the presymptomatic stage toward the clinical onset.
"year": 2015,
"sha1": "ef4629daf276aafe49c71e729f6f6548c5c364dd",
"oa_license": "CCBYNC",
"oa_url": "https://content.iospress.com/download/journal-of-alzheimers-disease/jad150270?id=journal-of-alzheimers-disease/jad150270",
"oa_status": "BRONZE",
"pdf_src": "PubMedCentral",
"pdf_hash": "ef4629daf276aafe49c71e729f6f6548c5c364dd",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Psychology",
"Medicine"
]
} |
The Effect of Sound in the Dental Office: Practices and Recommendations for Quality Assurance—A Narrative Review
Sound is inextricably linked to the human senses and is therefore directly related to the general health of the individual. The aim of the present study is to collect data on the effect of two dimensions of sound, music and noise, from an emotional and functional point of view in the dental office, and to perform a thorough review of the relevant literature. We collected articles from the databases PubMed and Google Scholar through keywords related to noise and music in healthcare. Important information was also extracted from articles on the web and official websites. Screening of the relevant literature was performed according to the accuracy and reliability of the methodology tested. A total of 261 articles were associated with sound and music in healthcare. Ninety-six of them were the best documented and were thus included in our article. Most of the articles associate noise with negative emotions and a negative impact on performance, while music is associated with positive emotions, ranging from emotional state to therapeutic approaches. Few results were found regarding ways to reduce noise in a health facility. If it is difficult to find effective methods of reducing the daily noise-inducing sounds in the dental office, we must focus on ways to incorporate music into it as a means of relaxation and therapy.
Introduction
Dental staff and patients in the dental office are exposed to a barrage of sounds [1]. Sound can have either a negative impact, perceived as noise [2], or a positive one, for example while listening to pleasant music [3]. The dental office is a healthcare working environment with external sounds that can be limited but not completely controlled or extinguished, and internal ones that derive from physical movements, communication and dental equipment activities in the waiting and operating rooms. Anxious and phobic patients are common in dental healthcare units [4], and they are especially sensitive to sound and touch [5]. Dental anxiety has been defined as a state of worry, nervousness, or unease about a dental procedure with an uncertain outcome [6] that can worsen with loud noise [7]. Anxious patients can become uncooperative and potentially more difficult to manage, as they tend to avoid dental visits. Consequently, they suffer from dental diseases such as caries, accompanied by pain [8]. Age can play a certain role in the perception of noise and the consequent sentiments it brings. Especially for children, feared aspects of dental treatment are often sensory, such as the sound of a dental drill [7,9]. Furthermore, music listening offers an effective, non-pharmacologic alternative for reducing preprocedural dental anxiety in patients with intellectual and developmental disabilities (IDD) [10]. In their systematic review, Moola et al. [11] concluded that there was enough evidence to suggest that adult patients may also benefit from a procedural music-listening program, but inconclusive evidence on the effectiveness of music in reducing dental anxiety in children. Pathological issues with sound are also described in the relevant literature, with misophonia being one of the most searched disorders [12]. In misophonia, certain sounds trigger emotional or physiological responses that others could perceive as unreasonable under the given circumstances. In such cases, a sound such as a dental drill may "drive dental patients crazy". Responses can range from anger and annoyance to panic and the need to leave the office [5].
Conversely, a therapeutic approach to sound in the dental office is a topic of increasing interest in the literature. Music can be used as a self-management technique to reduce or control distress [13]. Music therapy is an effective, non-invasive, and cost-effective intervention, which decreases anxiety and therefore optimizes the outcome of a medical intervention [6]. Music is seen to have wellbeing benefits but also many advantages in health, across a spectrum of practices: from community settings to its use in waiting rooms and surgical settings as background music, both to directly influence mood and arousal levels and to distract from unpleasant thoughts and feelings [5,7]. Through music, great changes can be observed in mood improvement, pain and anxiety masking, cardiovascular fitness enhancement, mindfulness and/or greater social integration [7]. According to the literature and to the best of our knowledge, little attention has been paid to the impact of sound on the therapeutic procedure for dental patients and on the relaxation and health of both patients and dental staff. The aim of this article is therefore to perform a narrative review of the effects of two dimensions of sound, music and noise, on both patients and dental professionals, from an emotional and functional point of view. We further suggest technical measures for the soundproof design of modern dental offices, as well as personal measures of sound control and music use for the wellbeing of dental staff and patients.
Materials and Methods
For the purpose of this narrative review, we searched databases and Google using keywords such as "noise", "dental office", "sound effect", "healthcare", "music in healthcare", "healing music", "occupational noise" and "dentistry". From this search, we found 261 articles discussing aspects of sound and music in healthcare settings. Of these articles, 165 were excluded because of insufficient, irrelevant or ambiguous conclusions and data. Of the remaining articles, 45 were performed in hospitals and 41 in dental settings. From the screened articles, we used the ones that had scientific references and were well documented, for a final total of 96. Four of the 96 references led to official foundations and organizations and 3 to official websites. All the above are presented in the following PRISMA flow chart (Figure 1) [14].
Sound Types and Levels in the Dental Office
The sounds received in a dental office can be mainly divided into two main groups. The first group includes sounds from non-dental external sources, such as traffic noise, roadworks, etc., and sounds inside the dental clinic, for example phone ringing, conversations between the staff and patients, air conditioner, computer printers, music, and television [14,15]. The exposure of the patient and staff to these sounds can be limited but not completely controlled. The second group includes sounds from dental sources, i.e., sounds produced by various machines that operate continuously or intermittently. For example, high-speed drills [16][17][18] and cutting machines in general [19], ultrasonic scaling handpiece [19,20], power suction equipment [19][20][21], amalgamators, autoclave laser equipment and other instruments and trolleys [1,22]. The noise levels in the dental operating room have been linked previously to those encountered on a motorway [23].
According to the National Research and Safety Institute, for an 8 h workday, hearing is at risk from 80 dB [24]. Noise levels in dental clinics are below the limit of risk of hearing loss [3,[21][22][23]. This is attributed to the development of modern dental equipment and machines that considerably reduce the degree of noise produced. To decrease the prevalence of hearing loss among dental professionals, ISO standard 7785:1997 suggests that the noise levels (namely, sound pressure levels) generated by high-speed handpieces should be below 65 dBA and should never exceed 80 dBA. According to this ISO standard, the noise levels produced by new dental equipment are generally below 85 dBA [22,23].
Thus far, several studies in the literature have examined the dBA levels produced by sounds in the dental office. A 1979 United States Army study reported noise levels of 76 to 105 A-weighted decibels (dBA) for clinical handpieces and 74 to 80 dBA for suction [25], while a separate study reported levels of 70-82 dBA for clinical handpieces and 82-90 dBA for cleaners and scalers [26]. Reports in the contemporary literature suggest that noise levels may have declined
Thus far, there have been several studies in the literature that have examined the dBA level produced by sounds in the dental office. In those studies, noise levels associated with clinical handpieces ranged from 76 to 105 A-weighted decibels (dBA), and suction ranged from 74 to 80 dBA in a 1979, United States Army study [25], such as levels of 70-82 dBA for clinical handpieces and 82-90 dBA for cleaners and scalers reported in a separate study [26]. Reports in the contemporary literature suggest that noise levels may have declined substantially over the intervening 30+ years; mean levels of 70-76 dBA for clinical handpieces and suction were reported in 1998 [23], 66-76 dBA in 2006 [27], 64-97 dBA in 2011 [28], 64.2 +/− 2.4 dB in 2013 [29] and 75-84 dBA in 2014 [30] or 51.7-67.37 dBA [31] and 60-65 dBA in 2017 [19]. Most of these measurements were brief measurements made near operating dental instruments, rather than measures of personal exposure [32] or in a big clinic with more than one dental unit. In another research set up, measures have been made in different spots within a clinic suggesting that mean sound levels in the working clinics ranged from 63.0 to 81.5 dBA, being within the suggested limit. In the same study, the combination suction and either low-or high-speed handpiece in the postgraduate clinic was significantly noisier than the undergraduate clinic at several times, suggesting that more intensive dental work might generate more noise [33]. Furthermore, in a recent study, an overall noise of 73.83 ± 4.39 dB was found to be generated within a dental clinical setting, suggesting that in clinics with more than one dental unit, especially when dentists perform different operational activities, sounds remain high and close to average limits of risk [34]. More specifically, the highest sound level of 79.44 ± 2.10 dBA was observed during restorative treatment followed by 74.14 ± 3.08, 73.22 ± 1.93, 71.39 ± 3.37 dBA for endodontic, periodontal, and prosthodontic treatments, respectively. A statistically significant difference was observed in the noise levels produced from all these different specialty treatments [34].
The extensive reports made by organizations such as the World Health Organization, the Control of Noise at Work Regulations in the United Kingdom and the US Occupational Safety and Health Administration (OSHA) are also of interest. National and European Community directives and United Nations guidelines apply to workplaces, including operating theatres. The recommended threshold for work characterized by a significant mental component, such as decisions under time pressure or decisions with severe consequences, is 55 dBA [1]. Noise levels in hospitals frequently exceed the levels recommended by the World Health Organization, especially in the operating room [35], for which it recommends sound levels of up to 35 dBA [36]. Although average sound levels do not exceed the thresholds recommended by law and international standards, momentary peaks are higher than the allowable level. Therefore, it is essential to introduce means of prevention and measures of safety against daily dental exposure to noise [24].
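To illustrate how such limits translate into a working day, the sketch below computes the A-weighted equivalent continuous level (Leq) and a NIOSH-style daily noise dose (85 dBA criterion level, 3-dB exchange rate) from a hypothetical task log; the durations are invented for the example, while the treatment levels are borrowed from the measurements cited above.

```python
import math

# Hypothetical task log for one working day: (level in dBA, hours).
tasks = [(79.4, 2.0),   # restorative treatment
         (74.1, 2.0),   # endodontic treatment
         (65.0, 3.0),   # other clinical work
         (55.0, 1.0)]   # administration

# Equivalent continuous level over the day:
# Leq = 10*log10( (1/T) * sum(t_i * 10^(L_i/10)) )
T = sum(t for _, t in tasks)
leq = 10 * math.log10(sum(t * 10 ** (L / 10) for L, t in tasks) / T)

# NIOSH daily dose: allowed time at level L is 8 / 2^((L-85)/3) hours;
# dose (%) = 100 * sum(t_i / allowed_i). 100% marks the exposure limit.
dose = 100 * sum(t / (8 / 2 ** ((L - 85) / 3)) for L, t in tasks)

print(f"Leq over {T:.0f} h: {leq:.1f} dBA, NIOSH dose: {dose:.0f}%")
```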
Effect of Noise in the Dental Office
Noise is defined as an unpleasant and unwanted sound [5]. For dental patients, the most studied effect of noise is anxiety. Fear or anxiety due to noise produced in the dental clinic ranks third among the reasons to avoid dental visits [18]. Patients often perceive the dental office as an unfriendly, offensive, and anxiety-provoking environment, distinguished by loud noises [17]. Patients of all age groups can be annoyed or stressed by noise from dental equipment, with children being the most affected. Among children with adverse responses to sensory stimuli, noise was the most adverse stimulus, followed by touch, smell and backward tilting of the examination chair [37]. While one study found that noise from a high-speed drill and the Erbium laser did not cause any irritable behavior among children, Yu et al. [20] showed that noise from the ultrasonic scaling handpiece was perceived as an aversive auditory stimulus by young patients and induced unpleasant feelings. In another study, 38% of patients in the 6-11 years age group reported that the sound of the drill made them uncomfortable [18]. The sound and sensation of the drill were rated as the most fear-eliciting stimuli in children [17,18] and a cause of dental anxiety. In older children, dental anxiety is found to be higher, with 76% of 12-year-olds and 64% of 15-year-olds reporting either moderate or severe dental anxiety when visiting the dentist [7].
Similarly, noise in the dental office affects clinicians and auxiliary dental staff. Occupational noise is the most frequently studied type of noise exposure in dentistry [38]. Noise can affect the hearing capability of dental professionals: auditory disorders, tinnitus and hearing damage are common harmful effects of prolonged exposure to noise in dental settings [22]. Modern studies report a large discrepancy in the prevalence of hearing loss among dental professionals, and some found no hearing loss at all [19]. All researchers agree, however, that noise levels in the dental office are high enough to cause other, non-auditory negative effects, such as annoyance, anxiety, irritation, interference with conversation and difficulty concentrating [19], as well as fatigue and tension headaches [38,39]. Dentists with a service length of more than 10 years and daily working hours of more than eight were found to have the highest risk to their hearing. In addition, the worse their hearing, the worse their general health state was found to be [40]. Elsewhere, hearing loss was significantly related to a work tenure longer than 15 years and age older than 40 years in a dental population [41].
External noise pollution is further regarded as a general stressor for both patients and staff. It increases mental stress, the development of cerebro-cardiovascular disease, and the risk of hearing loss [35,42]. The volume level and the frequency of noise (sound quality) have negative effects on concentration as well. Furthermore, external noise can bring adverse effects ranging from poor concentration to mental and physical stress, both subjectively and objectively, in an already stressful environment plagued by high burnout levels [1,35,43]. Higher volumes of noise correlate directly with higher rates of surgical errors, putting patients at risk. The more complex the operative procedures are, the more severe the negative effects of external noise become [44].
Noise pollution in an operational room can also be caused by employee-related behaviors and surgical equipment. Communication was the factor believed to be most adversely affected by noise in the operating room [14,35]. The noise caused disruption and "masking", which resulted in impaired speech discrimination and speech intelligibility. Consequently, the staff raised their voices to be well understood, which amplified the noise level [2].
The effect of noise on performance depends not only on its level and the stress tolerance of the individual, but also on the complexity of the task and the type of noise. Two features of the type of noise are important: whether it is predictable and whether it is controllable. Predictable noises are continuous or periodic, and unpredictable ones are discontinuous or episodic; controllable noises can be terminated at will, whereas uncontrollable ones cannot. Even high-level continuous noises (90-120 dBA) have no detrimental effects on the performance of simple motor or mental tasks. However, noises of lesser amplitude, especially when they are unpredictable, uncontrollable, or both, can interfere substantially with the performance of complex tasks [1]. In general, the unexpected suddenness of noise occurrence has a significant influence on dental work. In comparison, the frequency of noise occurrence has a smaller but still significant influence on dental work, particularly on the quality of work and necessary conversation among staff. The noise level itself shows a weak influence on dental staff, and noise from their own equipment shows a secondary influence [19].
Positive Aspects of Sound in the Dental Office
The healing effect of sound can be traced back thousands of years: from the Aborigines, who used the powerful sound vibrations of the "yidaki" (a wind instrument) to help listeners enter a deep state of relaxation, to the Tibetan monks using the Himalayan bowls, whose vibrations were described as a universal manifestation; from American Indian healers, who would fast in order to dream of a healing song for their patient [45], to the ancient Egyptians, who developed a method called "toning" (manipulating the sound of vowels in order to create therapeutic sounds) [46], and the ancient Greeks, who used music to improve wellbeing [3]. Sound has always been an extraordinary tool of healing, both physical and emotional [47,48]. There has been rising interest in the therapeutic power of music in healthcare in the last two decades. Music has been used in different medical fields to meet the physiological, psychological, and spiritual needs of patients [11]. Specifically, the anxiolytic effects of music have been studied in a variety of medical patients, including surgical, cardiac, oncology, and urology patients as well as dental patients [49].
Music has many benefits as far as relaxation is concerned, and it can have a positive influence on the patient by making concentration easier and by easing anxiety [50].
Many scientists have investigated the therapeutic effect of music on patients before, during and after surgical procedures of different kinds. It generally proves to be useful: (a) for reducing anxiety and pain levels, (b) during the recovery period [3,51,52], and (c) for encouraging people to commit to routine and necessary preventive care [53]. In the field of dentistry, a systematic review concluded that music intervention was effective in reducing anxiety and pain in children undergoing dental procedures and in adults undergoing medical procedures [13,50,54,55]. In another study, patients were asked what they would recommend as a useful way of reducing discomfort during dental procedures; the most frequent response was the use of music [56]. A 10 min music intervention is a sufficient period to allow music to exert an anxiolytic effect. This is an important finding, as waiting periods in dental or other medical offices are rather short [57], or should be. Moreover, it was found that music listening is equally effective in decreasing anxiety levels, or even more effective, than the administration of benzodiazepines [57]. Despite the broad range of settings and patients in which music intervention has been tested, it has not yet been broadly adopted in clinical practice [3].
Music also has a positive effect on the performance of healthcare staff. It brings higher satisfaction within the working environment, which is associated with a lower chance of burnout [35]. It is reported that the only form of noise that can be beneficial in the operating room is music, which may raise concentration when experienced operators perform a monotonous task [42]. Moreover, it is believed that general conversation and music should be acceptable, as this increases work enjoyment in an already stressful environment, and prohibiting them entirely would not be feasible [35]. Although the "sterile cockpit concept" (a room isolated from external noises) is often mentioned, a totally sound-sterile work environment in the operating room seems to be neither practically possible nor desirable [35]. In addition, stress-reducing effects of music on healthcare professionals have been described [58], making music an effective source of happiness, excellence and productivity in the workplace that positively affects surgical performance and the postoperative complications of difficult dental procedures such as implantology (Sound Healing Research Foundation) [59]. On the other hand, in a clinical trial, 20% of the responders viewed music as a distracting factor when played during a long, complicated, or emergency procedure. Music influenced communication between staff positively, as reflected by 63% of the responders, and 77% reported that music made them calmer and more efficient. Those who refused to listen to music during surgery indicated that it might interfere with extended and complicated procedures as well as emergency procedures [51].
In terms of the type of music proposed, one study concluded that the anxiolytic and pain-reducing effects are not restricted to one specific type of music [3]. For example, classical music was preferred by 58% of the responders [51]. A meta-analysis concluded that there is a small but statistically significant beneficial effect of listening to Mozart on task performance [60]. However, this effect can also be observed with other types of music [52]. Thus, the delivery of music that is appropriate in dental settings is another important issue of research. The brain of each individual patient has picked up musical building blocks from the local sonic environment in infancy and developed preferences based on this experience. To the extent possible, music needs to be tuned to resonate with patients' particular and deep-rooted musical instincts. The evidence for this is overwhelming: patient preferences and prior musical experiences are vital determinants of the ultimate success of any intervention. Ideally, music should be relevant to its listeners in terms of culture, genre, mood, and era of origin [61]. Because music is an inherently evocative medium, dental professionals also need to be cautious not to evoke too much feeling or irritation instead of relaxation.
The volume of music is also important in healthcare units. Some people need high-volume music to remain calm, whereas others feel overwhelmed by the very same sensory experience [37]. Another issue is who delivers the music program. Ullmann et al., in 2008, mentioned that the speed and accuracy of a performed task are greater when the surgeon selects the music [51], and this is also mentioned elsewhere [52]. The same effect is found for patients as well: in a relevant study, it was reported that music is most effective when the musical program is selected by the patient [56]. There is a distinction between music interventions administered by medical or healthcare professionals (passive music listening) and those implemented by trained music therapists (active music therapy) or those performing sound baths for mindfulness and relaxation. Active music therapy is the planned and creative use of music by a music therapist to attain and maintain health and wellbeing. People of any age or ability may benefit from a music therapy program regardless of musical skill or background [11,61].
Specifically in the dental field, it has been concluded that music alone did not produce any quantifiable distraction affecting pain, anxiety, or patient behavior in dental patients [62]. However, patients enjoyed listening to the music during their visits. Another systematic review mentions that music relaxation administered prior to dental treatment yielded no dental-anxiety-reducing effect compared to a control group resting in silence [63]. Furthermore, several dental studies have attempted to evaluate the use of audio and video distraction as an adjunct to dental treatments [62]. Elsewhere, it was mentioned that adult dental patients reported reduced pain and reduced anxiety with video distraction and audiotaped relaxation instructions, but not with music, which at best results in a placebo effect [56]. An innovative approach to these methods, with many benefits for the management of pain and anxiety, is virtual reality [64], more extensively used currently [65], as it results in audiovisual distraction [18]. As mentioned thus far, distraction with music and audio-visual interventions may be more effective for patients with mild forms of anxiety than for patients with severe dental anxiety. This is because distraction therapies operate on the principle of masking fear-stimuli prior to or during dental treatment and do not facilitate learning in patients with dental anxiety [63]. This is extremely important in cases of needle-related procedures that cause pain and distress (especially common during childhood). During these moments, it is recommended that the dental staff control the level of the background music, either through headphones or other audio-visual equipment, to cause distraction and relaxation through deeper breathing and the creation of pleasant new memories [66]. It is also recommended that pre-recorded music be offered through headphones to adult patients during stressful procedures to reduce their dental anxiety [11].
Finally, guidelines have been issued that describe the therapeutic use of music in healthcare [61]. In the protocol of the Music Therapy Unit of the Royal Children's Hospital in Melbourne (2004) [67], it is mentioned that: (a) radio is a source of uncontrollable stimulation, and radio use should be limited to use with headphones; (b) background recorded music should be controlled by a member of staff (for example in waiting areas), and the situation should be regularly assessed (i.e., every few hours) and altered where not appropriate; (c) patients should be encouraged to bring and use music they consider to be a supportive strategy.
Selected literature on music effects in dental settings is shown in Table 1. We selected the most up-to-date, well-organized articles with high scientific value. Regarding operators and staff, music shows a positive influence on behavior and emotional state and is preferable while working. Working is more enjoyable, but surgical performance is not shown to be affected either positively or negatively. Concerning patients, listening to music is helpful prior to dental treatment but also during the procedure, without affecting the levels of pain.

Table 1. Selected studies on the effects of music in dental settings.

Audio distraction in children. Methods: participants with no previous dental experience were divided into three groups (Group A: control; Group B: instrumental music; Group C: nursery rhymes). Results: a significant difference (p < 0.05) was observed regarding anxiety in groups B and C, with higher anxiety levels in group C; a statistically significant (p < 0.05) difference was seen between the pulse rates in groups B and C, anxiety being higher in C; oxygen saturation showed minimal variations during all the visits for all the groups, and the results were not statistically significant. Conclusion: the audio distraction technique decreased the anxiety level, but not to a very significant degree; instrumental music was the music of choice; despite the lack of any relief from pain, patients had a positive response to music and wanted to listen to it at their subsequent visits.

Music during extractions. Methods: patients in Saveetha Dental College were randomly selected and allocated to a test group (N = 25), subjected to music during extractions, and a control group (N = 25), not exposed; dental anxiety levels and hemodynamic changes were assessed before and after extraction. Results: the control population had elevated hemodynamic changes, as the rise in diastolic pressure was significant; in the test population, there was a statistically significant fall in the hemodynamic changes. Conclusion: music seems to be a psychological and spiritual way to calm oneself down; hence, music therapy can be used as an anxiolytic agent for stressful dental procedures.

Oomens et al., 2019 [52]. Methods: systematic review; systematic literature search of 9 studies (212 participants). Results: beneficial effects of music were reported on time to task completion, instrument handling, quality of surgical task performance and general surgical performance. Conclusion: insufficient evidence to definitively conclude that music has a beneficial effect on surgical performance in the simulated setting.

Music during minor oral surgery. Methods: instrumental music was played for the patient via earphones during MOS treatment; both physiological and psychological measures of anxiety were recorded using heart rate measurements, patient-completed questionnaires and a subjective ten-point anxiety score. Results: the majority of patients (92%) reported that music reduced their anxiety levels, pain and discomfort; almost half of the respondents (48%) reported that music made communication with the dental team easier, and 90% of patients reported that they would request to have music playing during their next dental visit. Conclusion: music can be helpful in making patients feel more at ease during dental treatment.

Music and noise in the operating room. Methods: systematic literature search of 22 prospective studies (3507 participants). Results: over half of the surveyed staff found noise levels to be a disturbing stressor that impacted performance negatively. Conclusion: although music increased decibel levels in the operating room, the attitude of surgical team members toward music during surgery is generally regarded as favorable.
Mechanism of Healing Effect of Sound and Music in the Dental Office
It is reported that music engages sensory processes, attention, memory-related processes, perception-action mediation ("mirror neuron system" activity), multisensory integration, activity changes in core areas of emotional processing, processing of musical syntax and musical meaning, and social cognition [70]. Music has been shown to stimulate the brain's primary engines of human capacity. Musical engagement exercises attentional networks and executive function, evokes emotional responses, stimulates the central nervous system, and appears to activate the human mirror-neuron system, supporting the coupling between perceptual events (visual or auditory) and motor actions (leg, arm/hand, or vocal/articulatory actions). It has been used successfully to induce cognitive repair in patients with stroke, Parkinson's disease, cerebral palsy, or traumatic brain injury. It is reported that music has the potential to "fix" the brain by providing an alternative entry point into a broken brain system to remediate impaired neural processes or neural connections [61]. It is likely that the engagement of these processes by music can have beneficial effects on the psychological and physiological health of individuals, although the mechanisms underlying such effects are currently not well understood [71,72]. Experiments, measures of physiological response, and imaging together show that creating or listening to music engages regions throughout the brain, bilaterally, in the cortex and neocortex and in the paleo- and neocerebellum [73,74]. Eight perceptual dimensions of music can be distinguished: pitch, rhythm, tempo, timbre, meter, contour, loudness, and spatial location, each of which has been tested independently in experiments. Each of these energetic components of music has been shown to activate distinct brain structures and neural circuitry. The current model of musical perception is a sequence in which the basic components of pitch, rhythm, and loudness are processed individually and separately within the brain [74,75] and then synthesized to create the understanding of an entire phrase [76]. The healing effects of music then derive from its action on the brain regions that control emotions, through the production of certain hormones [77].
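As a concrete illustration of how two of these perceptual dimensions can be quantified, the short sketch below estimates pitch and loudness from a digitized signal using elementary signal processing; the 440 Hz test tone, its amplitude, and the sampling rate are arbitrary values chosen for the example and are not drawn from the cited studies.

```python
import numpy as np

fs = 44_100                                   # sampling rate in Hz (illustrative)
t = np.arange(0, 1.0, 1 / fs)                 # one second of signal
tone = 0.2 * np.sin(2 * np.pi * 440 * t)      # hypothetical 440 Hz test tone

# Loudness proxy: RMS level in decibels relative to full scale (dBFS).
rms = np.sqrt(np.mean(tone ** 2))
level_dbfs = 20 * np.log10(rms)

# Pitch proxy: frequency of the largest peak in the magnitude spectrum.
spectrum = np.abs(np.fft.rfft(tone))
freqs = np.fft.rfftfreq(tone.size, d=1 / fs)
pitch_hz = freqs[np.argmax(spectrum)]

print(f"pitch ~ {pitch_hz:.1f} Hz, level ~ {level_dbfs:.1f} dBFS")  # ~440.0 Hz, ~-17.0 dBFS
```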
Today, research in the field of psychoacoustics, the scientific study of the perception of sound, has led scientists to analyze the brain waves that are triggered by different sounds.
It has been found that sounds of specific frequencies can trigger brain waves which, as electrical pulses, feed into the interaction of large populations of neurons. All this activity can boost beneficial hormones such as serotonin and can lower cortisol, which is associated with stress [48].
Dr. Margaret Patterson and Dr. Ifor Capel conducted a series of experiments on how alpha brainwaves can boost serotonin. As Dr. Capel stated: "As far as we can tell, each brain center generates impulses at a specific frequency based on the predominant neurotransmitter it secretes. In other words, the brain's internal communication system-its language, is based on frequency. Presumably, when we send in waves of electrical energy at, say 10 Hz, certain cells in the lower brain stem will respond because they normally fire within that frequency range" [45]. In other research by the British Academy of Sound Therapy, results have shown that sound is involved in the different domains of "physical relaxation, imagery, ineffability, transcendence of time and space, positive mood, insightfulness and disembodiment and unity both in live and recorded studies" and causes deep relaxation [78]. In particular, during sound baths, people enter a so-called altered state of consciousness (ASC), a state resembling daydreaming or the moments just before falling asleep. In this ASC, theta brain waves are synchronized with the healing sounds and provide the person with a state of deep relaxation that lasts a while [79] and can alter their behavior during stressful events.
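For orientation, the brainwave bands mentioned above (alpha at roughly 10 Hz, theta during deep relaxation) are conventionally defined by frequency ranges. The sketch below encodes one common convention; exact band boundaries vary slightly between sources.

```python
def eeg_band(freq_hz: float) -> str:
    """Map an oscillation frequency (Hz) to its conventional EEG band."""
    bands = [
        (4.0, "delta"),    # < 4 Hz: deep sleep
        (8.0, "theta"),    # 4-8 Hz: drowsiness, deep relaxation (the "sound bath" state above)
        (13.0, "alpha"),   # 8-13 Hz: relaxed wakefulness (the ~10 Hz example above)
        (30.0, "beta"),    # 13-30 Hz: active concentration
    ]
    for upper, name in bands:
        if freq_hz < upper:
            return name
    return "gamma"         # >= 30 Hz

print(eeg_band(10.0))  # -> "alpha"
print(eeg_band(6.0))   # -> "theta"
```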
Options of Sound Control Design in the Dental Office
There are ways in which we can practically reduce the sounds coming from all the sources mentioned above or, where this is not possible, get the person to pay less attention to them [80]. The literature offers practical suggestions, characterized as positive distractions, that can be applied relatively easily, ranging from the design of a dental clinic to the daily management of the equipment and machines: (1) art and environmental aesthetics, (2) spatial arrangement and atria, (3) considerations of socialization patterns, (4) play and interactive technologies, (5) sound and lighting interventions, and (6) access to nature. Relaxation music can combine all these interventions, especially in dental settings where fear is the main emotion. Research indicates that such positive distractions in healthcare environments provide a series of health benefits for patients, including improved behavioral and emotional wellbeing, reduced stress and anxiety, enhanced healthcare experience and satisfaction, facilitated therapeutic procedures, and better postoperative outcomes that support quality assurance in the setting [81]. However, significant research gaps remain around positive distractions and play/relaxation in garden spaces within waiting areas, suggesting that spatial design to accommodate interactive technology (music and audio-visual) and socialization in dental settings needs further research.
It is crucial that noise control be considered from the very first design of a new dental office or operating room, and that the design follow such practical considerations. For example, we can install sound insulation between the office, scrub-up and sterilization areas and/or the operating room itself. We should also avoid hard, sound-reflective ceilings and walls [1], using sound-absorbing ceilings instead [38]. Further, the use of carpeting and acoustically absorbent ceiling tiles or plants in hallways helps to reduce sound propagation [14]. Flooring used by healthcare units to limit the spread of sound has also been studied [82]. A comparison of three types of flooring found a significant difference in equivalent continuous sound levels between flooring types: carpet tile performed best for sound attenuation by absorption, reducing sound levels by 3.14 dBA. Carpet tile provides sound absorption that affects sound levels and influences occupants' perceptions of the environmental factors that contribute to the quality of the indoor environment, but it is suggested only for waiting areas in dental settings. Communities are increasingly imposing bylaws, including limits on floor impact sound, minimum floor thicknesses and floor soundproofing solutions, in the construction of buildings or later in renovations before dental units are established [83]. Further, openings should be fitted with soundproof windows, frames, and doors with at least double glazing to control external noise sufficiently. Overall, when designing dental structures, it is essential to consider all structural parameters and design characteristics as well as the types of acoustic insulating materials available, such as acoustic putty, sealants, caulk, plaster, wallpapers, sprays or paints, and concrete types [84].
Other measures to reduce noise at the source include placing compressors away from the dental operatory or within a metal cabinet (such as the M series, Jun Air International Corporation, Japan); the latter has been advocated to reduce noise by 75%, to 47-60 dBA [33]. We can also place an anti-vibration soundproof floor mat underneath the compressor to reduce the transmission of noise and vibrations [85]. Additionally, closing the doors when operating is a simple and effective measure for noise reduction within a healthcare unit [86]. Other measures that may work in this direction have also been reported, such as verbal and visual alarm reminders for staff, posting quiet signs for patients, and limiting electrical equipment in the waiting areas or the operating office [86].
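The decibel figures quoted in this section can be related to sound-power ratios through the standard logarithmic relation. The sketch below applies it to the 75% reduction and the 3.14 dBA attenuation mentioned above, on the assumption that those figures refer to sound power rather than perceived loudness.

```python
import math

def db_change(power_ratio: float) -> float:
    """Level change in dB for a given sound-power ratio (after / before)."""
    return 10 * math.log10(power_ratio)

def power_ratio(delta_db: float) -> float:
    """Sound-power ratio (after / before) for a given level change in dB."""
    return 10 ** (delta_db / 10)

# A 75% reduction in sound power (ratio 0.25) is roughly a 6 dB drop:
print(round(db_change(0.25), 1))        # -6.0
# The 3.14 dBA attenuation reported for carpet tile roughly halves sound energy:
print(round(power_ratio(-3.14), 2))     # 0.49
```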
The architectural design of modern dental offices should consider the use of "service corridors" leading to the operating rooms, with dedicated sound-isolating walls or sound-absorbing panels on the walls or ceilings that separate the waiting area from the unit [87] or the different floors of the building. In this way, a stress-free atmosphere in the waiting area, with soft music from hidden sound sources, can remain undisturbed by the work in the operating room. Finally, within the operating room, equipment that is not in use should be kept in cupboards made of the soundproof materials mentioned above or of new sustainable ones that lower the environmental impact. There is, for example, increasing interest in novel ideas for recycling waste materials, such as discarded facemasks, for thermal insulation and sound absorption in buildings, replacing synthetic or petrochemical insulation materials [88]. In addition, for furniture fabrics and curtains, soundproof rock or stone wool insulation fabrics are suggested [85,89].
Proper maintenance of the handpieces is also important so that they continue to function well under demanding cutting procedures and produce less operating noise even after 30 months of use [90]. Noise appears to be a useful indicator of imminent bearing failure in these cutting instruments. Thus, assiduous adherence to manufacturers' directions for cleaning and lubrication should contribute to increased bearing life and less noise propagation.
Finally, a method that can be applied to change how a person perceives sound, and especially noise, so that it creates less disturbance relies on "white noise". White noise is a signal whose spectrum has equal power within any equal interval of frequencies [91], and it is produced by combining sounds of all different frequencies together. In addition, when several distinct auditory signals are presented simultaneously, it is often difficult for the human ear to distinguish between them. This phenomenon, known as masking, accounts for the difficulty experienced in hearing others talk in the presence of loud background noise [1]. To minimize noise disturbances, introducing white noise through a sound-masking system, consisting of a central electronic controller and several emitters (speakers), can be helpful. This constant, low-level background sound fills in the gaps between louder, intermittent noises, making them less noticeable [14]. An easy way to take advantage of the benefits of masking is to use calming background music [18] that patients can listen to over headphones to block background noises [37].
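As a minimal illustration of the "equal power within any equal interval of frequencies" property, the sketch below generates synthetic white noise and compares the average spectral power in a low and a high frequency band; the sample rate and band edges are arbitrary choices for the demonstration, not parameters of any commercial sound-masking system.

```python
import numpy as np

rng = np.random.default_rng(0)
fs = 16_000                                   # sample rate in Hz (arbitrary for the demo)
noise = rng.standard_normal(10 * fs)          # 10 seconds of synthetic white noise

# Average the power spectrum over 1-second segments to smooth out randomness.
segments = noise.reshape(10, fs)
psd = np.mean(np.abs(np.fft.rfft(segments, axis=1)) ** 2, axis=0)
freqs = np.fft.rfftfreq(fs, d=1 / fs)

low_band = psd[(freqs >= 100) & (freqs < 1000)].mean()     # 100-1000 Hz
high_band = psd[(freqs >= 4000) & (freqs < 5000)].mean()   # 4000-5000 Hz
print(round(low_band / high_band, 2))   # close to 1.0: equal power per unit bandwidth
```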
Discussion
The literature reveals that most dental professionals do not take any action against noise at work. In particular, most of them neither complain about nor replace noisy equipment, nor modify the workplace. This is probably because they became familiar with the acoustic environment of dental hospitals during their studies in dental schools. It is likely that they do not consider the frequency or the duration of high-level sounds in their everyday working environment; awareness of such a noisy workplace before opening their own dental office would significantly increase their dissatisfaction. However, some dental professionals do take physical protective measures, such as earplugs [19]. Dentists can use the same solution as performing studio musicians, namely the custom-filtered musician's earplug, which allows for accurate hearing at lower loudness levels, providing a smooth, flat attenuation of 9 to 25 dBA [33]. Regarding dentists and dental assistants, it was found that the aspirator, not the rotating instruments, was the most intense source of noise in the office [21]. The noise of the surgical suction can be reduced through innovative modifications and designs [92]. Another suggestion is that suction should be used as little as possible and turned off completely when not required [1]. After the COVID-19 pandemic, the use of a rubber dam was universally suggested in safety protocols [93,94], thus diminishing the need for constant use of suction throughout the whole operative procedure.
Overall, reducing noise in a workplace such as the dental office will contribute to quality services. Dental staff will work more efficiently, and productivity will grow. Time management issues will be better controlled, and the personnel will be happier [95].
Under these conditions, demanding dental procedures can have better outcomes, and patients will be more satisfied. Dentists should demonstrate their commitment to sustainable acoustic solutions in their office; this can itself serve as a marketing message for a modern dental office committed to sustainability. For example, all electrical equipment should carry the relevant acoustic labeling, and certificates for the soundproof materials used in furniture or building construction could be advertised as well. New audiovisual equipment with prerecorded music files can be put to use, while patients can take part in the "noise-free" office by giving their opinions and suggestions in questionnaires available in the waiting areas.
We need to consider that an estimated 25% or more of the European Union population experiences a reduced quality of life due to annoyance induced by environmental noise, and that between 5% and 15% of the population suffers serious noise-induced sleep disturbance. It is further estimated that environmental noise costs the EU between EUR 13 and 38 billion per annum through medical costs, lost workdays, reduced house prices and reduced land-use potential. The relevant EU directive [96] (also reflected in the approach to noise strategy embodied in the 6th EAP) aims to provide a common approach to the avoidance, prevention and reduction of the harmful effects of exposure to environmental noise by implementing: (a) strategic noise mapping, determining noise exposure using common noise indicators and methods of assessment; (b) informing the public, providing information on environmental noise and its effects; and (c) adopting action plans based on the results of noise mapping, seeking to reduce noise where necessary and to protect environmental noise quality where it is good. Similar legislation should be adopted worldwide to support "noise-free" workplaces and sustainable modern dental settings.
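One of the common noise indicators used for strategic noise mapping under this directive is the day-evening-night level Lden. The sketch below shows its standard definition, assuming the directive's default 12/4/8-hour split of the day; the period levels used are purely illustrative.

```python
import math

def lden(l_day: float, l_evening: float, l_night: float) -> float:
    """Day-evening-night level Lden in dB(A), with +5 dB evening and +10 dB night penalties."""
    total = (
        12 * 10 ** (l_day / 10)
        + 4 * 10 ** ((l_evening + 5) / 10)
        + 8 * 10 ** ((l_night + 10) / 10)
    )
    return 10 * math.log10(total / 24)

# Hypothetical street outside a dental office: 65 / 60 / 55 dB(A) by period.
print(round(lden(65, 60, 55), 1))   # 65.0
```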
As dentists, our concern and our demand of the construction and dental industries should be the manufacture of better soundproof "green materials" that can further advance the initiative of "green dental settings" and the prospect of "noise-controlled" dental workplaces. We also need to dive deeper into the psychology of the dental patient by studying sound during dental procedures. In this way, we will be able to help patients better manage their emotions, especially negative ones, and make their dental visits more pleasant. Therefore, new studies must continue in this direction.
Limitations of the Study
This study was a narrative, non-systematic review; thus, some relevant articles may not have been included. However, the overall search was extensive and focused on the main issues concerning the effects of sound in dental offices under the new vision of sustainability for humans in healthcare settings. Novel research on this topic would include the measurement of sound in an academic dental environment with modern devices that guarantee accurate measurements. In addition, assessing professionals' and patients' opinions about sound levels and about the quality of music used for stress release and relaxation in the dental office are important points for future research.
Conclusions
The evidence is conflicting as to whether music really helps to manage negative emotions and can be applied as therapy, or whether it acts as a placebo and is simply pleasant company in the dental office. The positive outcomes of most studies, however, suggest that sound control in the dental office will enhance communication and the feeling of safety and relaxation for all age groups. Classical or relaxation music will offer distraction to fearful patients during fear-inducing dental procedures. More research will shed light on the specific spatial and audio-visual design of waiting areas. In this way, we will be able to offer concrete conclusions as to the type of music preferred by patients and staff, as well as how they react to diminished noise levels while waiting for or receiving therapy services. Finally, modern dental units could be designed using the principles and soundproof materials that already exist, or newly designed ones. | 2022-12-07T16:02:52.751Z | 2022-12-01T00:00:00.000 | {
"year": 2022,
"sha1": "2347459d4a5a086770c03121893964eda2fd918b",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2304-6767/10/12/228/pdf?version=1670240718",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "40d8abf959b4d9e8f97165c280fa7a460cecc6e2",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
59278031 | pes2o/s2orc | v3-fos-license | A PROSPECTIVE OBSERVATIONAL STUDY ON PATTERN OF ADVERSE DRUG REACTION TO ANTIBIOTICS COMMONLY PRESCRIBED IN THE HOSPITALIZED PEDIATRIC PATIENTS
Objective: Antibiotics are among the most commonly prescribed medications in hospitals and have been found to be one of the most troublesome drug classes contributing to adverse drug reactions (ADRs). Therefore, the present study was conducted to assess the safety (ADRs) of antibiotics commonly prescribed in the pediatrics unit. Methods: A prospective, observational, non-interventional study was conducted in the Department of Pediatrics for a period of 6 months to analyze the ADRs reported spontaneously from the hospital, using patient demographics, clinical and medication information, details of the ADRs, onset time, suspected drug details, outcome, and severity. Results: Among the 72 ADRs observed, beta-lactams and quinolones were found to contribute the highest number of ADRs. The gastrointestinal system was the most commonly affected organ system, followed by the respiratory system and the cardiovascular system. Assessment with the World Health Organization causality assessment scale showed that 5.56% of ADRs were certain, 55.56% were possible, 30.56% were probable, and 8.33% were unlikely. Conclusion: The pattern of ADRs occurring in the pediatric population was observed and assessed. Early recognition and management of ADRs are essential to reduce the burden of ADRs.
INTRODUCTION
Adverse drug reactions (ADRs) are a major cause of suffering and place a heavy burden on limited health-care resources. Drug safety is a major concern in the field of medicine, and ADR reports can reveal important safety issues in drug treatment. According to the World Health Organization (WHO), an ADR is defined as "a response to a drug which is noxious and unintended and which occurs at doses normally used in man for prophylaxis, diagnosis, or therapy of a disease or for modifications of physiological function" [1].
Serious adverse events can cause hospital admission, prolong hospitalization, increase investigation or treatment costs, impair work attendance, cause birth defects, and endanger life, sometimes leading to death. ADRs are an important cause of death and suffering. The early detection, assessment, monitoring, and recording of ADRs are vital to make drug treatment safe, effective, and cost efficient [2]. The occurrence and severity of ADRs can be influenced by patient-related factors such as age, sex, concurrent illness, and genetic factors, and by drug-related factors such as the type of medication, route of administration, duration of therapy, and dose. According to one study, the most troublesome classes of medications contributing to ADRs were antibiotics, followed by anticancer drugs [3].
Antibiotics are among the most commonly prescribed medications in hospitals worldwide. However, excessive and inappropriate use of antibiotics creates their most important limitation, namely increased drug resistance [4]. The rational use of antibiotics is crucial in health care. Prevention of ADRs is feasible through proper monitoring, which reinforced the national mandate to establish a pharmacovigilance center in each medical college in the country [5,6].
Although India is the third largest medicine market in the world, it had documented only 2% of global ADRs until 2013. The Pharmacovigilance Programme of India (PvPI) increased the number of ADR monitoring centers from 90 to 150, including private hospitals, which led to an increase in ADR reporting. India became the first country to report more than one lakh individual case safety reports to VigiFlow, Uppsala Monitoring Centre. One of the most important ways to prevent adverse drug events is to share information, since all medication errors are preventable; this can be achieved by sensitizing health-care professionals to report and follow up such events [7].
The chief purpose of the study was to determine the safety (ADR profile) of medications commonly prescribed in the pediatric unit of a tertiary care teaching hospital over 6 months, to identify the antibiotics most frequently responsible for ADRs, to ascertain the most commonly affected organ systems, and to assess the causality of the ADRs.
METHODS
A prospective, observational, non-interventional study was conducted in the Department of Pediatrics from November 2016 to April 2017 to analyze the ADRs reported spontaneously from the hospital. Patient demographics, clinical and medication information, details of the ADRs, onset time, suspected drug details, outcome, and severity were collected as per the Central Drugs Standard Control Organization, Indian Pharmacopoeia Commission (CDSCO) adverse drug event reporting form. Descriptive statistics were used to analyze the data.
Ethics
The study was evaluated and approved by the Institutional Ethics Committee.
RESULTS AND OBSERVATIONS
A total of 72 ADRs were collected, documented in CDSCO forms, and assessed on the WHO causality assessment scale. The information gathered during the 6-month period was analyzed for the number of ADRs recorded and the suspected causative drug.
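As a back-of-the-envelope check derived from the figures reported in this study (not from additional data), the WHO causality percentages quoted in the abstract correspond to whole-number counts out of the 72 collected ADRs:

```python
total_adrs = 72
who_categories = {"certain": 5.56, "possible": 55.56, "probable": 30.56, "unlikely": 8.33}

counts = {category: round(total_adrs * pct / 100) for category, pct in who_categories.items()}
print(counts)                # {'certain': 4, 'possible': 40, 'probable': 22, 'unlikely': 6}
print(sum(counts.values()))  # 72
```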
DISCUSSION
Antibiotics are regarded as the second most commonly prescribed class of medication in the world, after drugs prescribed for cardiac diseases [8]. Antibiotics are used for the treatment and prevention of various infections and are considered safe drugs when used rationally. However, as with other medications, they also cause ADRs. This study attempted to determine the pattern of ADRs of the antibiotic medication class. In studies carried out in Nigerian children, antibiotics were the most frequently reported medication class in ADR cases, and they were the second most frequently implicated in a further study [9]. In the present study, male patients outnumbered female patients.
A further study likewise demonstrated a male predominance, with adults being the age group most commonly affected during the study period [10]. A high number of antibiotic ADRs was noticed in pediatric departments, perhaps due to the frequent administration of these medications in such units. The alimentary tract was the system most affected by antibiotic ADRs, followed by the respiratory system, the cardiovascular system, the skin, the central nervous system, the musculoskeletal system, the urinary system, and hematopoietic disorders. Other studies have likewise established the predominance of the alimentary system, followed by the skin, in ADR occurrence [11,12]. Among the ADRs, the major proportion of adverse reactions was seen with beta-lactam antibiotics, similar to the observation by Tunger et al. [13]. This can be explained by the more common prescribing of beta-lactam antibiotics in the study population [14]. Regarding the presentation of the reactions, almost 75% showed abdominal pain, followed by throat pain and cough [15]. The drug suspected to have caused the ADR was dechallenged in several patients, its dose was modified in some patients, and it was substituted with another drug in a few patients [16]. The great majority of the cases recovered from the ADR, and none of the recorded reactions was fatal or serious [17]. ADRs result in diminished quality of life, which leads to hospitalization and sometimes to death [18]. Most ADRs are detected through medical supervision, reports, and patient complaints [19].
CONCLUSION
Monitoring of ADRs is a continuous process, as the number of newer drugs entering the pharmaceutical market is increasing. Health-care professionals have an essential role in monitoring the ongoing safety of medications. The occurrence of adverse drug events is not simply proportional to the number of medications being taken but increases disproportionately as the number of medications rises. Pharmacovigilance needs to be strengthened in our country for better and safer use of drugs. Early recognition and management of ADRs are essential to reduce the burden of ADRs.
WHO causality assessment (remaining categories): Unclassified, 0 (0.00); Conditional, 0 (0.00); Total, 72 (100). All values are expressed as numbers and percentages. WHO: World Health Organization | 2019-01-27T14:11:00.714Z | 2018-12-28T00:00:00.000 | {
"year": 2018,
"sha1": "2d3884b0a8686714b5a14b0a2646c2bab9638c51",
"oa_license": "CCBYNC",
"oa_url": "https://innovareacademics.in/journals/index.php/ajpcr/article/download/31711/16630",
"oa_status": "GOLD",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "cd03a419f715068136b6cd12cc57d1114f9c55a1",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
245119060 | pes2o/s2orc | v3-fos-license | The Impact of Hospital Capacity Strain: a Qualitative Analysis of Experience and Solutions at 13 Academic Medical Centers
Background Hospital capacity strain impacts quality of care and hospital throughput and may also impact the well being of clinical staff and teams as well as their ability to do their job. Institutions have implemented a wide array of tactics to help manage hospital capacity strain with variable success. Objective Through qualitative interviews, our study explored interventions used to address hospital capacity strain and the perceived impact of these interventions, as well as how hospital capacity strain impacts patients, the workforce, and other institutional priorities. Design, Setting, and Participants Qualitative study utilizing semi-structured interviews at 13 large urban academic medical centers across the USA from June 21, 2019, to August 22, 2019 (pre-COVID-19). Interviews were recorded, professionally transcribed verbatim, coded, and then analyzed using a mixed inductive and deductive method at the semantic level. Main Outcome Measures Themes and subthemes of semi-structured interviews were identified. Results Twenty-nine hospitalist leaders and hospital leaders were interviewed. Across the 13 sites, a multitude of provider, care team, and institutional tactics were implemented with perceived variable success. While there was some agreement between hospitalist leaders and hospital leaders, there was also some disagreement about the perceived successes of the various tactics deployed. We found three main themes: (1) hospital capacity strain is complex and difficult to predict, (2) the interventions that were perceived to have worked the best when facing strain were to ensure appropriate resources; however, less costly solutions were often deployed and this may lead to unanticipated negative consequences, and (3) hospital capacity strain and the tactics deployed may negatively impact the workforce and can lead to conflict. Conclusions While institutions have employed many different tactics to manage hospital capacity strain and see this as a priority, tactics seen as having the highest yield are often not the first employed. Supplementary Information The online version contains supplementary material available at 10.1007/s11606-021-07106-8.
BACKGROUND
Hospital capacity strain results when there is a mismatch between supply and demand on any resources a hospital uses to provide care (e.g., beds, nurses, physicians, equipment). 1 This is often defined as increased bed demand relative to hospital bed or resource supply 1 and has been shown to negatively impact patient care, 1-7 increase costs, 8 and disrupt patient flow. [9][10][11][12][13] Large academic medical centers have been found to be at particular risk of having daily patient demand exceed supply 14 and therefore often face capacity strain; this was further heightened by the COVID-19 pandemic. 15,16 The Institute for Healthcare Improvement published the white paper "Achieving Hospital-Wide Patient Flow" that provides a framework for hospitals to improve hospital-wide patient flow through the framework of "the right care, in the right place, at the right time." 14 Numerous specific interventions to manage capacity strain and optimize patient flow have been described in the literature, including strategies that focus on earlier discharges, huddles, and reducing unnecessary hospital days. [17][18][19][20][21] It is clear that hospital flow is of strategic importance to many hospital systems; however, the perceived impact of the various strategies has not been well studied.
To better understand the experience of hospitalist leaders and hospital leaders, we utilized qualitative methods to explore interventions used to address hospital capacity strain and the perceived impact of these interventions, as well as how hospital capacity strain impacts patients, the workforce, and other institutional priorities.
Study Design
We conducted semi-structured interviews via telephone and through in-person meetings with hospital leaders and hospitalist leaders at large academic medical centers to understand the strategies they utilize to combat hospital capacity strain.
The Colorado Multiple Institutional Review Board (COMIRB), University of Colorado, Aurora, reviewed and approved the study. Interviews were conducted from June 21, 2019, to August 22, 2019.
Setting and Participants
Interviews were conducted with participants from 13 academic medical centers. Academic medical centers were chosen for this study as they may be more likely to experience hospital capacity strain. 14 To select sites, stratified purposeful expert sampling was performed after creating a comprehensive list of US medical schools along with their respective hospitals, identifying those that had over 200 beds and had hospital medicine groups (sections or divisions). We included hospitals from all regions as grouped by American Hospital Association (AHA) Regions 22 and then combined these into larger regions for reporting to ensure anonymity.
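The screening step described above amounts to filtering a hospital list on bed count and the presence of a hospital medicine group and then stratifying by combined AHA region. The sketch below illustrates that logic on made-up records; the field names and hospitals are hypothetical and are not the study's actual data.

```python
from collections import defaultdict

# Hypothetical records; the real site selection used a comprehensive list of
# US medical schools and their respective hospitals.
hospitals = [
    {"name": "Hospital A", "beds": 450, "has_hospital_medicine_group": True,  "region": "eastern"},
    {"name": "Hospital B", "beds": 180, "has_hospital_medicine_group": True,  "region": "southern"},
    {"name": "Hospital C", "beds": 620, "has_hospital_medicine_group": False, "region": "western"},
    {"name": "Hospital D", "beds": 300, "has_hospital_medicine_group": True,  "region": "midwestern"},
]

# Inclusion criteria: more than 200 beds and a hospital medicine group.
eligible = [h for h in hospitals if h["beds"] > 200 and h["has_hospital_medicine_group"]]

# Stratify the eligible sites by combined AHA region for purposeful sampling.
by_region = defaultdict(list)
for h in eligible:
    by_region[h["region"]].append(h["name"])

print(dict(by_region))   # {'eastern': ['Hospital A'], 'midwestern': ['Hospital D']}
```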
We included both hospitalist leaders and hospital leaders as participants to ensure diverse perspectives were included given the focus of this work impacts both groups of leaders and it has also been suggested that alignment between medical staff and executive leaders is needed in order to build successful patient flow initiatives. 14 We hypothesized that the perspectives might be different and important to explore. Hospitalist leaders were leaders in their hospital medicine group who had knowledge of and led initiatives related to managing hospital capacity strain and hospitalist operations such as staffing and service planning and similarly for hospital leaders except that their role was focused on hospital flow. A convenience sample of hospitalist leaders and hospital leaders was selected from the list of hospitals meeting inclusion criteria. When contacted, individuals were asked if their institution faced hospital capacity strain, whether they were interested in participating, and whether they felt they were the appropriate contact for their institution. If not, we asked for suggested participants at their respective site (snowball sampling). Only hospitals that stated they faced hospital capacity strain were included. Consent was performed during the in-person meeting or phone call and participants were provided the consent form prior to the consent discussion and interview.
Interview Guide
Semi-structured interviews with the hospitalist leaders and hospital leaders used open-ended questions to explore interventions used to address hospital capacity strain and the perceived impact of these interventions. Interviews typically lasted one hour.
Questions were derived through a literature review as well as hospitalist expertise and practical experience (collectively spanning more than four decades of experience in the field). Hospital capacity strain was defined as excess bed demand relative to hospital bed or resource supply. 1,23 A broad definition of hospital capacity strain was utilized for this study as both space and staffing constraints may be encountered by hospitals facing hospital capacity strain. We utilized the job demand-resource model of burnout 24 and the conceptual model for integrated approaches to the protection and promotion of worker health and safety by Sorensen et al. in hospital settings 25 as the guiding models for this study, namely that workplace policies and practices can directly impact the workforce and enterprise outcomes. 25 The full interview guides are available in Appendices 1 and 2.
Data Collection
Eligible participants were consented and interviewed by investigators (M.B., S.A., and N.V.). Interviews were conducted by S.A. and N.V. (each were in the process of pursuing doctorate-level degrees at the time of the study) with the assistance of a hospitalist physician with qualitative research experience (M.B.). Recruitment of participants was halted when no new codes or themes emerged during analysis.
Interviews were audio-recorded and transcribed. Any identifiers inadvertently captured on the audio-files were removed during professional transcription. The interview transcripts were then supplemented with notes and observations by research personnel made during the interviews. After professional transcription, interviews were imported into the Dedoose qualitative software program.
Analysis
Coding for themes was conducted (S.A., N.V., A.K., S.A., K.B., M.K., M.D., L.M., and M.B.). Both inductive and deductive coding approaches were applied to identify themes hypothesized a priori as well as new themes emerging from the data. An initial codebook was developed a priori, with new codes added as interviews and analysis were conducted. To ensure consensus, research personnel met virtually as a group to code two interviews together. After individual coding for each transcript was completed by at least two researchers, researchers virtually met as a group to harmonize any code disagreements. An inter-rater agreement was not measured as consensus was found through discussion. A thematic analysis was conducted using a mixed inductive and deductive method at the semantic level. 26 Coded transcripts were analyzed both within hospitalist leader and hospital leader roles and across roles to identify commonalities and differences. Member checking, 27 a technique for exploring the credibility of results, did not yield additional significant revisions.
RESULTS
A total of 29 leaders participated in 27 interviews at a total of 13 large academic medical centers (all 200 beds or more). There were sites from all nine American Hospital Association Regions. 28 Interviews were conducted with 13 hospitalist leaders and 16 hospital leaders and noted in Figure 1. All sites that were approached participated with at least one interview (100% site participation). All sites had an interview with a hospitalist leader and all but one site had a hospital leader. All interviews had one participant except one, which had three individuals from the same site and all were hospital leaders. Demographic data for the hospitals the participants were associated with are in Table 1. Specific roles of the respondents were omitted to ensure anonymity; however, high-level roles (physician, non-physician) are noted in Figure 1.
Across the 13 sites studied, a multitude of provider, care team, and institutional tactics were implemented with perceived variable success (Tables 2 and 3). The solutions that were most highly recommended were (1) ensuring appropriate staffing, (2) having proactive data-driven approaches which were felt to be more helpful than multiple pages and meetings, (3) planning for discharge at the time of admission, (4) establishing protocols and plans to manage high-capacity days, and (5) identifying barriers to discharge with a multidisciplinary approach. While there was some agreement between hospitalist leaders and hospital leaders, there was also some disagreement about the perceived successes of the various tactics deployed. Interventions that overall were perceived as positive were caps on patient loads, huddles, multidisciplinary rounds, and triagist roles. On-call providers received mixed reviews. Overall negative interventions were discharge lounges, flexing providers from teaching teams to non-teaching teams, and care escalation initiatives. Hospitalist leaders often felt multiple huddles and new care areas (i.e., surge spaces or adapting non-care areas into areas where patient care is provided) were interventions that were not perceived as successful in helping with capacity strain, whereas hospital leaders felt that designated discharge nurses, using existing staff and resources without adding staff or resources as census rises, and care escalation processes were not perceived as successful. Hospital leaders had mixed reviews on post-acute care contracts, predictive modeling, discharge lounges, and huddles. A coding summary for hospitalist and hospital leaders is provided in Appendix 3.
Themes
Three main themes as elucidated from hospitalist leaders and hospital leaders emerged and are shown below along with the subthemes and verbatim exemplar quotes.
Theme 1: Hospital capacity strain is complex and difficult to predict. Hospitalist leaders and hospital leaders agree that drivers of hospital capacity are complex and difficult to predict, which often leads to conflict and the sense of constant "churn." Because of the lack of predictability, staffing concerns often lag.
It is like the spigot game…you got one spigot that's coming out, you put a finger in that to stop it, then also there's the other spigot that now comes out spraying water...it's like anything else that you solve one problem, careful you may open up a new problem to be encountered. (Participant 110b, hospitalist leader) Theme 2: The interventions that were perceived to have worked the best when facing strain were to ensure appropriate resources; however, less costly solutions were often deployed and this may lead to unanticipated negative consequences. Both leader types recognized that the capacity crisis was almost daily, caused stress, and that resources often lagged. Because drivers of hospital capacity are complex and difficult to predict, resource allocation can be challenging. Resources and time were felt to be very valuable in managing hospital capacity strain; however, they often lagged or were not deployed in response to the current crisis.
Staffing. Participants noted the need to ensure enough providers for volume, often using a formula based on census with the goal of keeping numbers stable across teams/providers and with a consistent workload. It was perceived as stressful for providers when the hospital became progressively busier with no maximum in sight. Solutions often fell into (1) asking providers on service to take on more patients in a day, (2) adding staff like a backup/on-call/jeopardy system (perceived as challenging because this system is often used for providers who call in sick), or (3) using moonlighters to add staff when needed. The biggest challenges were often noted to be financial (balancing being overstaffed versus understaffed) and accurate projections of volume to know when to staff up because projections are difficult in a constantly changing environment. In addition, some solutions such as moonlighters were perceived as costly and potentially unsafe due to discontinuity of care and there were essentially thresholds at which you can run out of providers to pick up the extra work.
And even with that, we have a soft cap, wherein we don't give two or three more patients than they are supposed to see for the day. This is being done to prevent burnout, and also to protect the patients to ensure the quality of care is not compromised. So what this does to us is this will drive up our moonlighting cost. Every time we have a higher census, we have to bring in moonlighters. So, that will drive up cost for the division and this is like unplanned cost. [moonlighting] …it leads to discontinuity in care with the moonlighters who are coming in and probably coming in for a day or two, and they are not here on a regular cycle. So the discontinuity again leads to a lot of things, including increased length of stay, increased readmissions, and also poor patient satisfaction. (Participant 103b, hospitalist leader)
Related tactics included provider rounding style adaptations, such as discharges first/discharge by "X" time and conditional discharges (discharge once "X" occurs), which may lead to longer lengths of stay. Participants also perceived many gains with APPs: "We're relatively new to using nurse practitioners on our service. We've tried a few things to figure out what's the best way to have the nurse practitioners help us with these-these flow surges, like focus on discharges and taking care of patients who are expected to go home that day or before noon. We've tried to have them pitch in with complex discharge, a lot of things along those lines. I think the balancing thing here is that we want the job to be satisfying for the nurse practitioners."
Table notes: Regions grouped by the American Hospital Association (AHA) Regions, https://www.ahvrp.org/sites/default/files/aha-regional-map.pdf (accessed January 31, 2021), and then combined into larger regions; Regions 1-3 are referred to as the eastern region, 4 and 7 southern, 5 and 6 midwestern, and 8 and 9 western regions. APP, Advanced Practice Provider; ACGME, Accreditation Council for Graduate Medical Education; ED, Emergency Department; N/A, not applicable. *Cap definition: there is a maximum number of patients a provider will see in a day/shift.
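As an illustration only (not a formula taken from any study site), a census-based staffing rule of the kind described above can be sketched as follows, with a per-provider cap and a trigger for calling backup or moonlighters when the cap would otherwise be exceeded; the numbers in the example are hypothetical.

```python
import math

def providers_needed(census: int, cap: int) -> int:
    """Providers required so that no one exceeds the per-provider cap."""
    return math.ceil(census / cap)

def staffing_plan(census: int, cap: int, scheduled: int) -> dict:
    """Compare scheduled staffing against a census-based target."""
    needed = providers_needed(census, cap)
    return {
        "patients_per_scheduled_provider": round(census / max(scheduled, 1), 1),
        "providers_needed": needed,
        "backup_or_moonlighters_to_call": max(0, needed - scheduled),
    }

# Hypothetical surge day: 130 patients, a cap of 15, and 8 hospitalists scheduled.
print(staffing_plan(130, 15, 8))
# {'patients_per_scheduled_provider': 16.2, 'providers_needed': 9, 'backup_or_moonlighters_to_call': 1}
```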
It is ideal to staff at a level where it is just built-in that there is some ability to flex up. Participants noted that increasing providers, calling in float pool nurses, and adding beds to handle increases in volume do not ensure that the other necessary resources for delivering patient care are available, such as care management, social work, physical therapy, occupational therapy, speech and language therapy, respiratory therapy, and imaging.
The high-capacity situation has created busy days, seeing larger numbers of patients, which it is just physically and cognitively mentally harder, and potentially creates more frustrations as well because all of the available resources in the hospital can get soaked up, and the things that normally happen quickly happen more slowly, which then creates a vicious cycle… it creates inefficiencies. (Participant 101b, hospitalist leader) Time. Time is a resource in short supply in a high-capacity strain environment. Those interviewed reflected that it takes time to decompress the hospital after patient volume decreases. In addition, there is not enough time in a day for providers to complete all their tasks if caring for a large number of patients, with institutional initiatives seeming to directly compete with each other in order to finish everything
One of the biggest discharge barriers certainly is housing and security. Part of the reason our census is so high at baseline is because we probably have about 15% of our service consists of patients who do not actually require hospital level care. Some of them are patients that they need their six weeks of intravenous antibiotics but they're homeless and they don't have anywhere to go. A larger proportion of them are patients who are cognitively impaired either due to dementia or psychosis or some other reason, and they don't have a surrogate decision-maker, and/or they're homeless and so they came into the hospital for some acute reason, but now they have nowhere to go.
It was really just a very inefficient system that people would then try to cram and make work faster and better than they normally would, but something I think makes the providers feel nagged when everyone's paging, they're trying to see patients, and they're running around the hospital trying to find this one patient that got tucked away somewhere in the back of the outpatient area here, and at the same time getting emailed and called and pages, asking if you can discharge people as quickly as possible. (Participant 101b, hospitalist leader)
Theme 3: When a hospital is facing hospital capacity strain, it negatively impacts the workforce and can lead to conflict. Hospital capacity strain was perceived to have negatively impacted patients, providers, and staff, and it was also noted to encroach upon other academic medical center missions such as education, research, innovation, and financial stability. Appendix 4 highlights these key subtheme areas by key stakeholder groups and core mission areas. Figure 2 is a conceptual model depicting the impact of hospital capacity strain on the workforce, the patients, and institutional priorities. Additional subthemes emerged and are shown below.
The tactics implemented to mitigate hospital capacity strain directly impact the ability of providers to do their jobs. The many different initiatives that hospitals craft to mitigate hospital capacity strain can have a perceived negative impact on care.
Several of the interventions captured at the study sites, with their perceived trade-offs and exemplar quotes, illustrate this.
Smoothing admissions (developing plans around operating rooms and other expected admissions in order to prevent admission stacking) takes advantage of times when there are fewer patients and open operating rooms or procedural areas. "We have what we call our [de-identified] dashboard and our [de-identified] tracker just so that we're reviewing our metrics on a monthly basis across the state and then for any metric where they may be in red, then they're expected to come up with countermeasures and report out on their countermeasures every month. And now, we're taking that one step further, and we're developing a unit-specific dashboard so that each individual, case manager and social work team can see what their performance looks like and not to necessarily be a comparison because we know that their populations are different, their lengths of stays are different, etcetera, but just for them to have an opportunity to actually see their own data. I mean, that's kind of meaningless to them unless they can tie it back to the exact work that they're doing, so that's what we're trying to do" (Participant 109a, hospital leader).
Communications include huddles/calls around discharges, that is, brief team meetings to address discharge barriers and discharge plans. "The thing that we do have that I think is effective but I just couldn't tell you how effective it is, is we have…a HIPAA-compliant text messaging system. And so, we're able to loop in nursing, rehab, pharmacy all on the same group text just to review what the care plan is." (Participant 111b, hospitalist leader)
Regional plans include region-wide plans for moving patients to other hospitals (utilizing system approaches to managing patient volumes across multiple hospitals) and post-acute care contracts in which the hospital pays when patients do not have funding: the hospital develops contracts to help facilitate patient movement to the next care location (e.g., a subacute nursing facility) when the patient may not have a funding source, and thus the hospital covers the cost. Patients are moved from one hospital in the system to another, which may be challenging for patients and their families. "We have contracts with a long-term acute care hospital, with a skilled nursing facility. We have a good relationship with an acute rehab for unfunded patients and with residential care facilities. So, we will pay for them while their Medicaid is in process so they don't live in the hospital." (Participant 112a, hospital leader) "One of the things that we'll do when it's appropriate is identify patients who have not yet been admitted to go to one of our network hospitals, which overall works well. It can be a patient dissatisfier, but when it works well it works well, but it does require a lot of coordination and upfront identification of patients who would be eligible for a transfer before ultimately being admitted here."
Hospital capacity strain impacts the well-being of providers.
Hospital capacity strain and the tactics implemented were perceived to lead to increased stress and tension which places providers at increased risk of burnout.
Unfortunately, in the meantime, when you are outnumbered with patients, and finding it difficult to provide the type of care that our providers want to give, there is a long time before you can staff up appropriately to make sure you're managing that well, and that puts a strain on people and their morale, and their sense of whether this position-this job is sustainable. (Participant 102b, hospitalist leader) The tension between sufficient resources, the tactics deployed, and being able to do one's job creates conflict.
Conflict was perceived to be experienced when tactics to address hospital capacity strain were implemented without sufficient additional resources.
Years ago, before we had a true throughput surge plan, what the typical strategy was for someone in the ED to e-mail someone in hospital leadership, like the president, and say, "It's crazy down here, can you get those guys upstairs to discharge?" It was very confrontational. (Participant 105b, hospitalist leader) We do that actually just for the hospitalist teams. It was tried on teaching and I think it failed. And I think that there were a couple of reasons why it failed. One is they didn't have a strong leader advocate for the project. So, it was kind of like, "Okay, we're doing this, but what exactly are we doing?" And then additionally, the way it was done for the teaching service because teaching services are generally a little less efficient and, you know, they take just longer to round where you talk outside the room and then you go in the patient room and you talk again on most of the services and what they did when they did the pilot is they actually put one attending nurse with two teams. So, the attending nurse would join the teaching team, I think, on the post-call day. And they wouldn't have necessarily rounded with the team the day before. And then the team wasn't quite sure what the nurse's role was. So, there was kind of role definition issues. There was maybe a little bit of undermining, and that the attending our end, that's what we call them, the discharge nurse wasn't following with the team daily. And so, to keep up on all those patients was a little trickier. And then there was also a concern that "Hey, should an intern be able to draft this." Is this taking away from their educational thought? (Participant 104b, hospitalist leader)
DISCUSSION
We found that hospital capacity strain was perceived to have wide-reaching impact at each of the participating sites. Participants from all institutions noted a continued struggle with how to manage hospital capacity strain and had implemented numerous measures with variable success. Both hospitalist leaders and hospital leaders felt that the most effective way to address strain is often through ensuring sufficient resources, particularly through staffing, but noted that it is often not the first intervention utilized. Instead, seemingly more cost neutral interventions (e.g., discharge lounge or huddles) are implemented first, even though most people interviewed felt they do not fix the problem and may lead to negative consequences and a repetitive cycle of lagging resources and stress.
There is limited literature about the impact of hospital capacity strain on the various stakeholders and key mission areas of academic medical centers. Some reports have highlighted the impact of hospital capacity strain on timeliness of discharge, 29 length of stay, 6,7 and quality of care 1 ; however, this study highlights the consequences on the workforce with the words "churn," "burnout," and "conflict" frequently utilized when describing how the inpatient workforce manages hospital capacity strain. Clinician burnout is consequential not only for individual providers but also for health care systems, as it may lead to providers leaving the workforce, 30-32 medical errors, [33][34][35][36][37] and has been projected to cost $4.6 billion annually in the USA for burnout related to physicians. 38 Hospitalist leaders and hospital leaders had differing opinions on the impact of the various initiatives aimed at improving patient flow and capacity. The perception of conflict was noted throughout the interviews. Workflows may differ for various roles, so an assessment of how these well-intentioned interventions may impact the workforce's ability to get work done may be necessary as it was noted that some of the interventions could cause distractions and negatively impact patient care.
Some research has suggested that adequate staffing may lend itself to more expedited care and potential cost savings. Elliott et al. showed that increasing hospitalist workload is associated with clinically meaningful increases in length of stay and cost. 39 Previous work by Michtalik et al. highlighted that having fixed census caps on teams decreased the odds of reporting unsafe census situations. 40 Thus, while adequate staffing may require resources, these studies suggest that the cost may be offset through improved patient flow and improved patient safety. This study suggests that often hospitals employ less costly solutions to address hospital capacity strain; however, the reasons behind why hospital systems choose measures that are perceived to be less effective is unknown and could be a future area of study. Future work should focus on the economic impact of the various initiatives, in particular the impact of high patient census and increased workloads (and cognitive load) as well as interventions that may inadvertently result in provider distractions.
Our study has several strengths. We explored both the hospitalist leader and hospital leader perspectives to understand the impact of hospital capacity strain on key stakeholders and core mission areas as well as the impact of the various tactics deployed to manage strain. We included both perspectives given hospitalist leaders and hospital leaders might have distinct perspectives on the topic given different incentives, constraints, and resources available to respond to capacity strain situations. We also included a large number of institutions from a variety of geographic regions. This work adds to the understanding of which strategies have been deployed and the experiences with these initiatives. While several studies have shown the operational impact of hospital capacity strain (through increased length of stay and mortality), we believe this is one of the first to show the impact on the workforce (i.e., ability to do one's job, conflict, well being), though there is increasing literature on how COVID-19 has strained clinical care teams. 41 Our study also has some limitations. It involved large academic medical centers with greater than 200 beds and hospital medicine groups, and thus, our findings may not apply to smaller hospitals or non-academic medical centers or institutions without hospitalist groups. We interviewed two individuals (a hospitalist leader and hospital leader) at most institutions and thus our findings may not represent the beliefs of frontline workers, though many of the hospitalist leaders were also frontline clinicians. Hospital leaders that were interviewed also had a variety of roles some of which were hospitalists (i.e., hospitalists that led initiatives for the hospital) and thus some of the perspectives could have overlapped between the two groups. Lastly, this work covers hospital capacity situations that may differ from a crisis situation (e.g., COVID, mass casualty event) though likely with some overlapping components.
CONCLUSION
Across the 13 sites, a multitude of provider, care team, and institutional tactics were implemented with variable success. Hospital capacity strain was perceived as complex and difficult to predict, with wide-reaching impact on patients, the workforce, and institutional priorities. While ensuring appropriate resources was felt to be key to managing hospital capacity strain, institutions were perceived to deploy less costly solutions that may result in further negative consequences and conflict. | 2021-12-13T14:23:06.806Z | 2021-12-13T00:00:00.000 | {
"year": 2021,
"sha1": "50f5cecd751701b0c2f9f82bf3a97657ffed64fe",
"oa_license": null,
"oa_url": "https://link.springer.com/content/pdf/10.1007/s11606-021-07106-8.pdf",
"oa_status": "BRONZE",
"pdf_src": "PubMedCentral",
"pdf_hash": "4ea042d5fd2201a57b1a029c6b4b2e52da4c1033",
"s2fieldsofstudy": [
"Business",
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
119565182 | pes2o/s2orc | v3-fos-license | Conformal nets III: fusion of defects
Conformal nets provide a mathematical model for conformal field theory. We define a notion of defect between conformal nets, formalizing the idea of an interaction between two conformal field theories. We introduce an operation of fusion of defects, and prove that the fusion of two defects is again a defect, provided the fusion occurs over a conformal net of finite index. There is a notion of sector (or bimodule) between two defects, and operations of horizontal and vertical fusion of such sectors. Our most difficult technical result is that the horizontal fusion of the vacuum sectors of two defects is isomorphic to the vacuum sector of the fused defect. Equipped with this isomorphism, we construct the basic interchange isomorphism between the horizontal fusion of two vertical fusions and the vertical fusion of two horizontal fusions of sectors.
Introduction
There are various mathematical notions of field theory. For many of these there is also a notion of defects that formalizes interactions between different field theories. See for example [9,13,21,25] and references therein. Depending on the context, sometimes the terminology 'surface operator' or 'domain wall' is used in place of 'defect'. Often, field theories are described as functors from a bordism category, whose objects are (d−1)-manifolds and whose morphisms are d-dimensional bordisms (usually with additional geometric structure), to a category of vector spaces. Defects allow the extension of such functors to a larger bordism category, where the manifolds may be equipped with codimension-1 submanifolds that split the manifolds into regions labeled by field theories. The codimension-1 submanifold itself is labeled by a defect between the field theories labeling the neighboring regions.
In this paper we give a definition of defects for conformal nets. Conformal nets are often viewed as a particular model for conformal field theory 1 . Our main result is that under suitable finiteness assumptions there is a composition for defects that we call fusion. We also extend the notion of representations of conformal nets, also known as sectors, to the context of defects. Sectors between defects are a simultaneous generalization of the notion of representations of conformal nets, and of bimodules between von Neumann algebras. Ultimately, this will lead to a 3-category whose objects are conformal nets, whose 1-morphisms are defects, whose 2-morphisms are sectors, and whose 3-morphisms are intertwiners between sectors. The lengthy construction of this 3-category in all formal details is postponed to [4], but the key ingredients of this 3-category will all be presented here. In [4] we will use the language of internal bicategories developed in [8], but we expect that the results of the present paper also provide all the essential ingredients to construct a 3-category of conformal nets, defects, sectors, and intertwiners in any other sufficiently weak model of 3-categories.
Conformal nets. Conformal nets grew out of algebraic quantum field theory and have been intensively studied; see for example [6,10,15,29,30]. In this paper we will use our (non-standard) coordinate-free definition of conformal nets [2]. A conformal net in this sense is a functor A : INT → VN from the category of compact oriented intervals to the category of von Neumann algebras, subject to a number of axioms. The precise definition and properties of conformal nets are recalled in Appendix B. In contrast to the standard definition, in our coordinate-free definition there is no need to fix a vacuum Hilbert space at the outset-this feature will be useful in developing our definition of defects. Nevertheless, the vacuum Hilbert space can be reconstructed from the functor A. The main ingredient for this reconstruction is Haagerup's standard form L 2 (A), a bimodule that is canonically associated to any von Neumann algebra A. This standard form and various facts about von Neumann algebras that are used throughout this paper are reviewed in Appendix A.
Defects. To define defects we introduce the category INT •• of bicolored intervals. Its objects are intervals I that are equipped with a covering by two subintervals I ◦ and I • . If I is not completely white (I ◦ = I, I • = ∅) or completely black (I • = I, I ◦ = ∅), then we require that the white and the black subintervals meet in exactly one point, and we also require the choice of a local coordinate around this point. For conformal nets A and B, a defect between them is a functor D : INT •• → VN such that D coincides with A on white intervals and with B on black intervals and satisfies various axioms similar to those of conformal nets. Often we write A D B to indicate that D is a defect from A to B, also called an A-B-defect. A defect from the trivial net to itself is simply a von Neumann algebra (Proposition 1.23), so our notion of defect is a generalization of the notion of von Neumann algebra. The precise definition and some basic properties of defects are given in Section 1.
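In symbols (our rendering of the description above, writing INT ◦ and INT • for the subcategories of entirely white and entirely black intervals), a defect is a functor
\[
  D \colon \mathsf{INT}_{\bullet\bullet} \longrightarrow \mathsf{VN},
  \qquad
  D|_{\mathsf{INT}_{\circ}} = A,
  \qquad
  D|_{\mathsf{INT}_{\bullet}} = B,
\]
subject to axioms analogous to those of a conformal net; see Definition 1.7.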
Certain defects have already appeared in disguise in the conformal nets literature, through the notion of 'solitons' [5,14,18,19]. The thin line should be thought of as white and stands for the conformal net A and the thick line should be thought of as black and stands for B. Usually we simplify the picture further by dropping the letters. The precise definition and some basic properties of sectors are given in Section 2.
The vacuum sector of a defect. For any defect D we can evaluate D on the top half S 1 ⊤ of the circle. Applying the L 2 functor, we obtain the Hilbert space H 0 (S 1 , D) := L 2 (D(S 1 ⊤ )) and, as a consequence of the vacuum axiom in the definition of defects, this Hilbert space is a sector for D, called the vacuum sector for D. In our 3-category, the vacuum sector is the identity 2-morphism for the 1-morphism D. We often draw it with a darker shading; this darker shading is reserved for vacuum sectors.
Composition of defects. Let D = A D B and E = B E C be defects. Their composition or fusion D ⊛ B E is defined in Section 1.e. The definition is quite natural but, surprisingly, it is not easy to see that D ⊛ B E satisfies all the axioms of defects. We outline the definition of the fusion D ⊛ B E. In our graphical notation, double lines will now correspond to A, thin lines to B, and thick lines to C. Let us concentrate on the evaluation of D ⊛ B E on S 1 ⊤ . In this picture the middle vertical line corresponds to I. By the axioms for defects, the actions of D( ) and of B(I) on H 0 (S 1 , D) commute. Consequently, we obtain an action of D( ) on the Connes fusion (0.1). Similarly, there is an action of E( ) on (0.1). Now (D ⊛ B E)(S 1 ⊤ ) is defined to be the von Neumann algebra generated by D( ) and E( ) acting on the Hilbert space (0.1). A similar construction, using the local coordinate, is used to define the evaluation of D ⊛ B E on arbitrary bicolored intervals.
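In symbols, and suppressing the intervals that are encoded by the pictures (the placeholders I_D and I_E below are hypothetical names for the two pictured halves; the precise intervals are specified in Section 1.e), the construction reads
\[
  H \;:=\; H_0(S^1, D) \,\boxtimes_{B(I)}\, H_0(S^1, E),
  \qquad
  (D \circledast_B E)(S^1_\top) \;:=\; \bigl( D(I_D) \cup E(I_E) \bigr)'' \;\subseteq\; B(H),
\]
the double commutant expressing "the von Neumann algebra generated by" the two images.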
A main result of this paper, Theorem 1.42, is a proof that if the net B has finite index (see Appendix B.IV), then D ⊛ B E is in fact a defect.
Towards a 3-category of conformal nets. The purpose of our next paper [4] is to construct the symmetric monoidal 3-category of conformal nets. More precisely, we will construct an internal dicategory 2 object (C 0 , C 1 , C 2 ) in the 2-category of symmetric monoidal categories [8,Definition 3.3]. In this paper, we develop the essential ingredients of that 3-category, but we do not check all the axioms. These ingredients are: • A symmetric monoidal category C 0 whose objects are the conformal nets with finite index, and whose morphisms are the isomorphisms between them.
• A symmetric monoidal category C 1 whose objects are the defects between conformal nets of finite index, and whose morphisms are the isomorphisms.
• A symmetric monoidal category C 2 whose objects are sectors (between defects between conformal nets of finite index), and whose morphisms are those homomorphisms of sectors that cover isomorphisms of defects and of conformal nets.
• These come with source and target functors s, t : C 1 → C 0 and s, t : C 2 → C 1 subject to the identities s • s = s • t and t • s = t • t.
• A symmetric monoidal functor composition : C 1 × C0 C 1 → C 1 that describes the composition (or fusion) of defects (1.46). That the composition of defects exists is the content of Theorem 1.42.
• Two monoidal natural transformations unitor t , unitor b : C 2 → C 2 that relate fusion v and identity v (2.18).
• The coherences for composition and identity are "weak": instead of natural transformations C 1 → C 1 , we have four functors unitor tl , unitor tr , unitor bl , unitor br : C 1 → C 2 (3.2, 3.3). This weakness, which is an intrinsic feature of conformal nets and defects, is what forces us to use the notion of internal dicategory [8,Def. 3.3] instead of the simpler notion of internal 2-category [8,Def. 3.1].
• The coherence between composition and identity v is a monoidal natural transformation C 1 × C0 C 1 → C 2 (6.1, 6.3). This is the most difficult construction of this paper and it is also the one that forces us to restrict the morphisms in the category C 1 of defects to be isomorphisms. We call this natural transformation the "1 ⊠ 1-isomorphism" because its domain is a Connes fusion of two identity sectors.
• Finally, the crucial interchange isomorphism, a coherence between fusion h and fusion v , is a monoidal natural transformation. Its definition relies crucially on the 1 ⊠ 1-isomorphism.
The 1 ⊠ 1 isomorphism. This isomorphism provides a canonical identification of the Hilbert space (0.1), used to define the defect D ⊛ B E, with the vacuum sector for D ⊛ B E. By definition the vacuum sector is H 0 (S 1 , D ⊛ B E) := L 2 (D ⊛ B E(S 1 ⊤ )). By construction the algebra D ⊛ B E(S 1 ⊤ ) contains D( ) and E( ) as two commuting subalgebras and is generated by those subalgebras. We can think of the algebra D ⊛ B E(S 1 ⊤ ) as associated to the tricolored interval which is the upper half of the circle ∂([0, 2]×[0, 1]); it is therefore natural to draw the vacuum sector for D ⊛ B E as in (0.2). In the language of the 3-category, the vacuum sector for a defect D is the identity 2-morphism, and a fusion along the middle vertical line as in (0.1) is the horizontal composition of 2-morphisms. Thus (0.1) is the composition of the identities for the defects D and E, while (0.2) is the identity for the composition D ⊛ B E. For this reason, we refer to the desired isomorphism between (0.1) and (0.2) as the "one times one isomorphism".
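In display form (our rendering; the name Ω for this map is the one used from Section 4 onwards), the desired isomorphism between (0.1) and (0.2) reads
\[
  \Omega \colon\; H_0(S^1, D) \,\boxtimes_{B(I)}\, H_0(S^1, E)
  \;\xrightarrow{\;\cong\;}\;
  H_0(S^1, D \circledast_B E) \;=\; L^2\bigl( (D \circledast_B E)(S^1_\top) \bigr).
\]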
The construction of the 1 ⊠ 1-isomorphism is quite involved and is carried out in Sections 4, 5, and 6. Section 6 also contains a short summary, on page 52, collecting all the necessary ingredients in one place. The existence and construction of the 1 ⊠ 1-isomorphism, completed in Theorem 6.2, is one of the main results of this paper.
Construction of the 1 ⊠ 1-isomorphism. For any von Neumann algebra A the standard form L 2 (A) carries commuting left and right actions of A, i.e., L 2 (A) is an A-A-bimodule. In the case of the vacuum sector H 0 (S 1 , D) = L 2 (D( )) these two actions correspond to the left actions of D( ) and of D( ). 3 One difficulty in understanding the Connes fusion (0.1) comes from the fact that the algebra B(I), over which the Connes fusion is taken, intersects both D( ) and D( ). To simplify the situation we can consider a variation of (0.1) with a hole in the middle: This Hilbert space is built from vacuum sectors for D and E together with two (small) copies of the vacuum sector for B. Its formal definition is given in Section 4; see in particular (4.8). The Connes fusion over B(I) is now replaced by four Connes fusion operations along smaller algebras. This allows us to identify, in Theorem 4.11, the Hilbert space (0.3) with the L 2 -space of a certain von Neumann algebra that we represent by the graphical notation . It is generated by algebras D( ), B( ), and E( ) acting on the Hilbert space ; here B( ) is a certain enlargement of the algebra B( ) that we abbreviate graphically by . We defer to (4.9, 4.10) for the details of the definitions, and to 3.10 for an explanation of the notation B. At this point we blur the distinction between intervals and algebras in our graphical notation and often draw only an interval to denote an algebra. For example we abbreviate B( ) as simply , and D ⊛ B E(S 1 ⊤ ) as . We therefore write, for instance, D ⊛ B E(S 1 ⊤ ) ⊗ B( ) as . As the notation indicates, this tensor product is a subalgebra of . (Note the additional dotted line in the middle.) If B has finite index, then we show in Corollary 4.16 that this inclusion ⊆ is a finite homomorphism of von Neumann algebras. As the L 2 -construction is functorial for such homomorphisms [1], we can apply L 2 to it. Combining this with (0.4) we obtain a map (0.5) L 2 → L 2 ∼ = .
In the next step, we need to fill the hole in (0.3). Formally, this is done by applying Connes fusion with a further (small) vacuum sector for B. On the domain of (0.5) this cancels the algebra . On the target, we simply denote the result by filling the hole with a (small) vacuum sector for B. In this way we obtain, in Proposition 4.18, an isometric embedding (0.6). The existence of this isometric embedding enables us to prove that D ⊛ B E is a defect. (Footnote 3: The reflection along the horizontal axis R × { 1 2 } provides an orientation reversing identification, and this accounts for the fact that the right action of D( ) on L 2 (D( )) corresponds to a left action of D( ) on H 0 (S 1 , D).)
To produce the 1 ⊠ 1-isomorphism from (0.6) requires two further steps. We first, in Proposition 4.29, construct an isomorphism (0.7) ∼ = and then define the "1 ⊠ 1-isomorphism" Ω as the composite of the two maps (0.6) and (0.7). In Theorem 6.2, we prove that the composite of (0.6) and (0.7) is indeed an isomorphism. The proof of that theorem proceeds as follows: both the domain and the target of Ω carry commuting actions of the algebras and . On = L 2 these two actions are clearly each other's commutants and so to prove that Ω is an isomorphism it suffices to show that the same holds for . This is a kind of Haag duality for fusion of defects. It appears as Theorem 5.2 and is one of the main technical results of this paper. All of Section 5 is devoted to its proof.
Remark. In constructing the 3-category of conformal nets, it is essential to know that the 1 ⊠ 1-isomorphism Ω satisfies certain axioms, such as associativity. In Lemma 4.32 we prove that the isomorphism is appropriately associative, but unfortunately this is done directly by tracing through the entire construction of Ω. Better would be to use a characterization of Ω (and thus of composites of multiple Ω maps) as the unique map satisfying certain properties. Haagerup's standard form (that is, the L 2 -space of a von Neumann algebra) does admit such a characterization: it is determined up to unique unitary isomorphism by the module structure, the modular conjugation, and a self-dual cone. There is a natural choice of modular conjugation on . Thus, to characterize the isomorphism Ω, it suffices to specify a self-dual cone in that fusion of vacuum sectors. Unfortunately, we do not know how to construct such a self-dual cone from the self-dual cones of and of .
Further structure maps. As an application of the 1 ⊠ 1-isomorphism, we construct in Section 6.d the interchange isomorphism between horizontal and vertical composition of sectors. We also prove in Lemma 6.15 a compatibility between the 1 ⊠ 1-isomorphism and the unit map for identity defects. Section 6.c contains the construction of two further structure maps concerning units that will be needed for the detailed construction of the 3-category in the sequel [4].
Summary of results. Let A D B and B E C be defects between irreducible conformal nets, and assume B has finite index. Let S 1 ⊤ and S 1 ⊥ denote respectively the top and bottom halves of the standard circle, and let I denote an interval identified as necessary with the left or with the right quarter of the standard circle. The three main theorems in this paper are the following.
Theorem A (Existence of fusion of defects). The fusion D ⊛ B E of two defects is again a defect. Theorem B (Haag duality for the fusion of defects). The two algebras generated by D and E on the top and on the bottom halves of the circle, acting on the Connes fusion H 0 (D) ⊠ B(I) H 0 (E), are each other's commutants. Theorem C (The 1 ⊠ 1-isomorphism). There is a canonical isomorphism between the vacuum sector H 0 (D ⊛ B E) of the fused defect D ⊛ B E and the Connes fusion H 0 (D) ⊠ B(I) H 0 (E) of the two vacuum sectors of the defects.
These are established in the text as, respectively, Theorem 1.42, Theorem 5.2 (see also Corollary 5.9), and Theorem 6.2.
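In display form, Theorem C asserts a canonical unitary isomorphism
\[
  H_0(D \circledast_B E) \;\cong\; H_0(D) \,\boxtimes_{B(I)}\, H_0(E).
\]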
The bicolored intervals form a category INT •• , whose morphisms are the color preserving embeddings that respect the local coordinates (that is, such that the embedding intertwines the local coordinates on a sufficiently small neighborhood of 0). Similarly, a bicolored circle S is a circle (always oriented) equipped with a cover by two closed, connected, possibly empty subsets with disjoint interiors S ◦ , S • ⊂ S, along with local coordinates in the neighborhood of S ◦ ∩ S • . We disallow the cases when S ◦ or S • consists of a single point. A bicolored circle necessarily falls in one of three cases: entirely white, entirely black, or genuinely bicolored.
1.b. Definition of defects. Let VN be the category whose objects are von Neumann algebras with separable preduals, and whose morphisms are C-linear homomorphisms and C-linear antihomomorphisms 4 .
Recall our definition of conformal nets (see Appendix B). For the following definition of defect, we do not require that the conformal nets A and B are irreducible: Definition 1.7. Let A and B be two conformal nets. A defect from A to B is a functor D : INT •• → VN that assigns to each bicolored interval I a von Neumann algebra D(I), and whose restrictions to INT ◦ and INT • are given by A and B, respectively. It sends orientation-preserving embeddings to C-linear homomorphisms, and orientation-reversing embeddings to C-linear antihomomorphisms. The functor D is subject to the following axioms: topologically generate D(I). (iv) Vacuum sector: Let S be a genuinely bicolored circle, I ⊂ S a genuinely bicolored interval, and j : S → S a color preserving orientation reversing involution that fixes ∂I. Equip I ′ := j(I) with the orientation induced from S, and consider the following two maps of algebras. (Here α is the left action of D(I) on L 2 D(I), and in β, the map D(I) op → B(L 2 D(I)) is the right action of D(I) on L 2 D(I).) Let J ∈ INT ◦ ∪ INT • be a subinterval of I such that J ∩ ∂I consists of a single point, and equip J̄ := j(J) with the orientation induced from S. We then require that the action of the algebraic tensor product extends to an action of D(J ∪ J̄).
We will write A D B to indicate that D is a defect from A to B.
The following properties might naturally have been added as axioms in the definition of a defect, but are in fact consequences of the listed axioms and the corresponding properties of conformal nets: inner covariance (Proposition 1.10), the split property (Proposition 1.11), Haag duality (Proposition 1.17), and continuity (Proposition 1.22).
Inner covariance and the split property. Recall that Diff 0 (I) is the subgroup of diffeomorphisms of I that fix some neighborhood of ∂I. Proposition 1.10 (Inner covariance for defects). Let I be a genuinely bicolored interval, and let ϕ ∈ Diff 0 (I) be a diffeomorphism that preserves the bicoloring and the local coordinate. Then D(ϕ) is an inner automorphism of D(I).
Let {J, K, L} be a cover of I such that J is a white interval, K is a genuinely bicolored interval, L is a black interval, supp(ϕ • ) is contained in the interior of J, supp(ϕ • ) is contained in the interior of L, and ϕ acts as the identity on K. By inner covariance for the nets A and B (see Appendix B.I), there are unitaries u ∈ A(J) and v ∈ B(L) that implement ϕ • and ϕ • . Let w be their product in D(I). Then waw * = D(ϕ)a holds for every a ∈ D(I) that is in the image of A(J), of D(K), or of B(L). By strong additivity, it therefore holds for every element of D(I).
Proposition 1.11 (Split property for defects). If J ⊂ I and K ⊂ I are disjoint, then the map D(J) ⊗ alg D(K) → D(I) extends to the spatial tensor product D(J)⊗ D(K).
Proof. We assume without loss of generality that the interval J is entirely white and that it does not meet the boundary of I (otherwise, replace I by a slightly larger interval). Let J + ⊂ I be a white interval that contains J in its interior and that does not intersect K. Finally, let ι : A(J + ) → D(I) be the map induced by the inclusion J + ↪ I. By the split property and Haag duality for conformal nets, the inclusion ι A(J) ⊆ ι A(J + ) is split in the sense of Definition A.29. As A(J + ) commutes with D(K), the inclusion ι A(J) → D(K) ′ is then also split, where the commutant is taken in any faithful representation of D(I). Thus, the map D(J) ⊗ alg D(K) → D(I) extends to the spatial tensor product D(J)⊗ D(K).
Vacuum properties. Let S be a genuinely bicolored circle, along with an orientation reversing diffeomorphism j : S → S, compatible with the bicoloring and with the local coordinates. Let I ⊂ S be an interval whose boundary is fixed by j and let I ′ := j(I). The Hilbert space H 0 := L 2 (D(I)) is called the vacuum sector of D associated to S, I, and j. It is endowed with actions of D(J) for every bicolored interval J ⊂ S, as follows. (Recall that bicolored intervals contain at most one color-change point.) The maps (1.8) provide natural actions of D(J) on H 0 for all subintervals J ⊂ I and J ⊂ I ′ . By the vacuum axiom for defects, these extend to the algebras D(J) associated to white and to black subintervals of S. To define the action ρ J : D(J) → B(H 0 ) of an arbitrary genuinely bicolored interval J ⊂ S, pick a white interval K 1 ⊂ S, a black interval K 2 ⊂ S, and diffeomorphisms ϕ i ∈ Diff 0 (K i ) such that ϕ 1 ϕ 2 (J) does not cross ∂I. If u 1 ∈ A(K 1 ) and u 2 ∈ B(K 2 ) are unitaries implementing ϕ 1 and ϕ 2 , then the action on H 0 of an element a ∈ D(J) is defined by (1.12) ρ J (a) := u 2 * u 1 * ρ ϕ 1 ϕ 2 (J) (D(ϕ 1 ϕ 2 )(a)) u 1 u 2 . This action is compatible with the actions associated to other intervals, and is independent of the choices of ϕ 1 , ϕ 2 and u 1 , u 2 (see Lemma 2.5 for a similar construction in a more general context).
The following result, constructing isomorphisms between different vacuum sectors, is a straightforward generalization of [2, Cor. 1.15]: Lemma 1.13. Let S be a genuinely bicolored circle. Let I 1 and I 2 be genuinely bicolored subintervals and let j 1 and j 2 be involutions fixing ∂I 1 and ∂I 2 . Then the corresponding vacuum sectors L 2 D(I 1 ) and L 2 D(I 2 ) are non-canonically isomorphic as representations of the algebras D(J) for J ⊂ S.
Proof. If I 1 and I 2 contain the same color-change point, then let ϕ ∈ Diff(S) be a diffeomorphism that sends I 1 to I 2 , that intertwines j 1 and j 2 , and that can be written as ϕ = ϕ • • ϕ • where ϕ • acts on the white part only and ϕ • acts on the black part only. Let K be a white interval that contains supp(ϕ • ) in its interior and let L be a black interval that contains supp(ϕ • ) in its interior. Finally, let u ∈ A(K) and v ∈ B(L) be unitaries implementing ϕ • and ϕ • . Then is the desired isomorphism.
If I 1 and I 2 contain opposite color-change points, then we may assume without loss of generality that j 1 = j 2 and I 2 = j 1 (I 1 ). The isomorphism from L 2 (D(I 1 )) to L 2 (D(I 2 )) is then given by L 2 (D(j 1 )). Notation 1.14. Given a genuinely bicolored circle S and a defect A D B , we denote by H 0 (S, D) the vacuum sector associated to some interval I ⊂ S and some involution j fixing ∂I. By the previous lemma, that Hilbert space is well defined up to noncanonical unitary isomorphism.
Remark 1.15. If S is a circle that is either entirely white (or entirely black), then the above description of H 0 (S, D) still makes sense and recovers the notion of vacuum sector of a conformal net H 0 (S, A) (or H 0 (S, B)) [2, Definition 1.16].
Our next result, concerning the gluing of vacuum sectors, is a straightforward generalization of [2,Cor. 1.33] in the presence of defects (compare Appendix B.III). Let S 1 and S 2 be bicolored circles, let I i ⊂ S i be bicolored intervals (whose boundaries do not touch the color change points), and let I ′ i be the closure of S i \ I i . Assume that there exists an orientation reversing diffeomorphism ϕ : I 2 → I 1 compatible with the bicolorings, and let S 3 := I ′ 1 ∪ ∂I2 I ′ 2 . Assume that (S 3 ) ◦ and (S 3 ) • are connected and non-empty. Then, up to exchanging S 1 and S 2 , we are in one of three situations (pictured in terms of S 1 , S 2 , and S 3 ).
Equip S 1 ∪ I2 S 2 with a smooth structure compatible with the given smooth structures on S 1 and S 2 [3, Definition 1.4]. That is, provide smooth structures on S 1 , S 2 , and S 3 such that there exists an action of the symmetric group S 3 on S 1 ∪ I2 S 2 (with no compatibility with the bicoloring) that permutes the three circles and has π| Sa smooth for every π ∈ S 3 and a ∈ {1, 2, 3}. When A D B is a defect, it will be convenient to write H 0 (S, D) := H 0 (S, A) if S is entirely white and H 0 (S, D) := H 0 (S, B) if S is entirely black. Lemma 1.16. Let S 1 , S 2 , S 3 , and ϕ be as above, and let D be a defect. Use the map D(ϕ) to equip H 0 (S 1 , D) with the structure of a right D(I 2 )-module. Then there exists a non-canonical isomorphism H 0 (S 1 , D) ⊠ D(I2) H 0 (S 2 , D) ∼= H 0 (S 3 , D), compatible with the actions of D(J) for J ⊂ S 3 .
Proof. Depending on the topology of the bicoloring, we can either identify H 0 (S 1 , D) with L 2 D(I 1 ) or identify H 0 (S 2 , D) with L 2 D(I 2 ). We assume without loss of generality that we are in the first case.
Let j ∈ Diff − (S 1 ) be an involution that is compatible with the bicoloring and that fixes ∂I 1 , and let H 0 (S 1 , D) = L 2 D(I 1 ) be the vacuum sector associated to S 1 , I 1 , and j. We then have where the first isomorphism uses L 2 (D(ϕ)) : L 2 (D(I 2 )) → L 2 (D(I 1 ) op ) = L 2 (D(I 1 )) and the third one is induced by the map (j • ϕ) ∪ Id I ′ 2 : S 2 → S 3 . Haag duality. In certain cases, the geometric operation of complementation corresponds to the algebraic operation of relative commutant: Proof. (1) Let j ∈ Diff − (S) be an involution that exchanges I and I ′ and that is compatible with the bicoloring and the local coordinates. By definition, we may take H 0 (S, D) = L 2 (D(I)) with the actions of D(I) and D(I ′ ) provided by (1.8). The result follows, as the left and right actions of D(I) on L 2 (D(I)) are each other's commutants.
(2) We assume without loss of generality that K ∈ INT • . Let S := I ∪ ∂I (Ī) be a circle formed by gluing two copies of I along their boundary, such that there is a smooth involution j that exchanges them: By strong additivity and the first part of the proposition, and considering actions on H 0 (S, D) we then have Canonical quantization. Let S be a bicolored circle and I ⊂ S a genuinely bicolored interval. Let j ∈ Diff − (S) be an involution that fixes ∂I and that is compatible with the bicoloring and the local coordinates. Also let K ⊂ S be a white interval such that j(K) = K. We call a diffeomorphism ϕ ∈ Diff 0 (K) ⊂ Diff(S) symmetric if it commutes with j, and set Diff sym 0 (K) := ϕ ∈ Diff 0 (K) ϕj = jϕ . Given a symmetric diffeomorphism ϕ, we also write ϕ 0 ∈ Diff(I) for ϕ| I ; to be precise, ϕ 0 := ϕ| I∩K ∪ id I\K . Let K ′ be the closure of the complement of K in S. Since u ϕ commutes with A(K ′ ), we have u ϕ ∈ A(K) by Haag duality (Proposition B.4). We call u ϕ the canonical quantization of the symmetric diffeomorphism ϕ. The map Diff sym 0 (K) → Diff + (I) given by ϕ → ϕ 0 is continuous for the C ∞topology. The map A : Diff + (I) → Aut(A(I)) is continuous because A is a continuous functor 5 . The map Aut(A(I)) → U(L 2 (A(I)) given by ψ → L 2 (ψ) is continuous by [11,Prop. 3.5]. Therefore, altogether, ϕ → u ϕ defines a continuous map from the group of symmetric diffeomorphisms of K to U(A(K)). Lemma 1.19. Let S, I, K, ϕ, ϕ 0 , and u ϕ be as above, let A D B be an irreducible defect, and let H 0 := L 2 D(I) be the vacuum sector of D associated to S, I, and j. Then, letting ρ K be the action of A(K) on H 0 (given by the vacuum axiom), we have L 2 (D(ϕ 0 )) = ρ K (u ϕ ). Proof. We first show that the map Diff sym 0 (K) → Aut(D(I)) ϕ → D(ϕ 0 ) (1.20) is continuous for the C ∞ topology on Diff sym 0 (K) and the u-topology on Aut(D(I)). Since Ad(u ϕ ) = A(ϕ), the operator ρ K (u ϕ ) implements ϕ on H 0 . In particular, D(ϕ 0 ) is the restriction of Ad(ρ K (u ϕ )) under the embedding D(I) ֒→ B(H 0 ). The map is continuous and lands in the subgroup N := {u ∈ U(H 0 ) | uD(I)u * = D(I)}. Since D(ϕ 0 ) = Ad(ρ K (u ϕ )) and Ad : N → Aut(D(I)) is continuous [2, A.18], the map (1.20) is therefore also continuous. Recalling [11,Prop. 3.5] that L 2 : Aut(D(I)) → U(L 2 D(I)) is continuous, we have therefore shown that is a continuous homomorphism.
Recall that ρ K (u ϕ ) implements ϕ. By the same argument as in [2, Lem. 2.7], L 2 (D(ϕ 0 )) also implements ϕ. It follows that L 2 (D(ϕ 0 )) = λ ϕ ρ K (u ϕ ) for some scalar λ ϕ ∈ S 1 . Thus, we get a continuous map ϕ → λ ϕ from the group of symmetric diffeomorphisms of K into U(1). Our goal is to show that λ ϕ = 1.
To finish the argument, note that Diff sym 0 (K) is connected and that {±1} is discrete. The map ϕ → λ ϕ being continuous, it must therefore be constant. Proof. For every N as above, we need to show that the map D : Hom (N ) (I, J) → Hom VN (D(I), D(J)) is continuous. We argue as in [2,Lem 4.4]. Pick a bicolored interval K, and identify I and J with subintervals of K via some fixed embeddings into its interior. Given a generalized sequence ϕ i ∈ Hom (N ) (I, J) with limit ϕ, and given a vector ξ in the predual of D(J), we need to show that D(ϕ i ) * (ξ) converges to D(ϕ) * (ξ) in D(I) * .
Let Diff (N ) 0 (K) be the subgroup of diffeomorphisms of K that fix N and also fix a neighborhood of ∂K. Pick an extension φ̃ ∈ Diff (N ) 0 (K) of ϕ, and let φ̃ n,i ∈ Diff (N ) 0 (K), n ∈ N, be extensions of ϕ i such that ‖φ̃ n,i − φ̃‖ C n < ‖ϕ i − ϕ‖ C n , where ‖·‖ C n is any norm that induces the C n topology. Letting F be the filter on N × I generated by the sets {(n, i) ∈ N × I | n ≥ n 0 , i ≥ i 0 (n)} (see [2,Lem 4.4]), then F -lim φ̃ n,i = φ̃ in the C ∞ -topology.
1.c. Examples of defects.
Algebras as defects, forgetful defects, and embedding defects. The trivial conformal net C evaluates to C on every interval [2, Eg. 1.3]. Proposition 1.23. There is a one-to-one correspondence (really an equivalence of categories) between C-C-defects and von Neumann algebras.
Proof. Given a von Neumann algebra A, the associated defect assigns to a bicolored interval I the algebra A if the local coordinate of I is orientation preserving, and the complex conjugate algebra Ā if the local coordinate is orientation reversing. Conversely, let D be a C-C-defect. Given a bicolored interval I, the orientation reversing map Id I : I → −I identifies D(−I) with D(I), where −I denotes I with opposite orientation. So we just need to show that the restriction of D to the subcategory of genuinely bicolored intervals with orientation preserving maps (compatible with the local coordinates) is equivalent to a constant functor. By applying Proposition 1.17, we see that every embedding J → I between two such intervals induces an isomorphism D(J) → D(I).
To finish the proof, we need to check that D(φ) = Id D(I) for any φ : I → I. Proof. The axioms of isotony, locality, and vacuum sector follow directly from the corresponding axioms for B. It remains to prove strong additivity. We need to show that τ Proof. The only non-trivial axiom is strong additivity. Consider the situation where I = K ∪ J, with J genuinely bicolored and K white. Letting ∆ : A → A ⊕2 denote the diagonal map, we need to show that E(I) is equal to the subalgebra generated by the images of ∆A(K) and E(J). (Note that our notation is a little bit misleading, as the map ∆A(K) → E(I) might fail to be injective). Pick a white interval L ⊂ J that touches K in a point. Since ∆ is a conformal embedding, it follows from the previous proof that ∆A(K) ∨ A ⊕2 (L) = A ⊕2 (K ∪ L).
Thus, we have the following equalities between subalgebras of E(I):
Remark 1.28. By the same argument as above, one can also show that a direct integral of A-B-defects is an A-B-defect.
Disintegrating defects. As in the case for conformal nets [2, Sec. 1.D], we can then introduce an algebra Z(D) that only depends on D, and that is canonically isomorphic to Z(D(I)) for every genuinely bicolored interval I. Disintegrating each D(I) over that algebra, we can then write D(I) ∼= ∫ ⊕ X D x (I) dx, where X is any measure space with an isomorphism L ∞ X ∼= Z(D).
Recall that a conformal net is called semisimple if it is a finite direct sums of irreducible conformal nets (Appendix B.I).
Lemma 1.30. Any A-B-defect between semisimple conformal nets 6 is isomorphic to a direct integral of irreducible A-B-defects.
Proof. The algebra D(I) disintegrates as above. We need to show that for K ⊂ I a white subinterval (respectively a black subinterval), the map A(K) → D(I) (respectively B(K) → D(I)) similarly disintegrates. It suffices to see that A(K) → D(I) induces maps A(K) → D x (I) for almost every x.
Note that it is in general not true that a map N → ⊕ M x from a von Neumann algebra N into a direct integral induces maps N → M x for almost every x. This is however true when N is a direct sum of type I factors. Indeed, letting K ⊂ N be the ideal of compact operators, we obtain maps K → M x by standard separability arguments. One then uses the fact that a C * -algebra homomorphism from K into a von Neumann algebra extends uniquely to a von Neumann algebra homomorphism from N . We can leverage this observation about direct sums of type I factors to construct the desired maps A(K) → D x (I). Consider a slightly larger interval I + that contains I, and let K + ⊂ I + be a white interval that contains K in its interior. .
For almost every x the image of ι x is therefore contained in D x (I), and we have our desired maps A(K) → D x (I).
The isotony, locality, and strong additivity axioms for D x are immediate, and we omit their proofs.
The vacuum sector axiom requires a little bit more work. Let S, I, and J be as in the formulation of the axiom, and let us assume without loss of generality that J is white. We need to show that, for almost every x, a certain representation of A(J) ⊗ alg A(J̄) extends to an action of A(J ∪ J̄).
Irreducible defects over semisimple nets. In Section 1.e, we will define the operation of fusion of defects, which is the composition of 1-morphisms in the 3-category of conformal nets. That operation does not preserve irreducibility (even if the conformal nets are irreducible) and so, unlike for conformal nets, it is not advisable to restrict attention to irreducible defects. We call a defect D faithful if the homomorphisms D(f ) are injective for every embedding f : I → J of bicolored intervals. Proof. Let S be a genuinely bicolored circle, I ⊂ S a white interval and I ′ the closure of its complement. Since D is irreducible, the vacuum sector H 0 (S, D) is acted on jointly irreducibly by the algebras D(J), J ⊂ S.
Since D is faithful, A(I) acts faithfully on H 0 (S, D). A non-trivial central projection p ∈ A(I) would thus induce a non-trivial direct sum decomposition of H 0 (S, D), contradicting the fact that it is irreducible. Indeed, for a bicolored interval J ⊂ S, the projection p commutes with both D(J ∩ I) and D(J ∩ I ′ ). By strong additivity, p therefore commutes with D(J).
Here, as for conformal nets [2, Sec. 3.A], we have used the split property to extend the functor D to disjoint unions of bicolored intervals by setting D(I 1 ⊔ I 2 ) := D(I 1 )⊗ D(I 2 ). The above discussion shows that defects between semisimple conformal nets can be entirely understood in terms of defects between irreducible conformal nets. In the rest of this paper, we will therefore mostly restrict attention to irreducible conformal nets.
1.d. The category CN 1 of defects. Definition 1.34. Defects form a symmetric monoidal category CN 1 . An object in that category is a triple (A, B, D), where A and B are semisimple conformal nets, and D is a defect from A to B. A morphism between the objects (A, B, D) and The symmetric monoidal structure on this category is given by objectwise spatial tensor product.
Recall that a map A → B between von Neumann algebras with finite-dimensional centers is said to be finite if the associated A-B-bimodule L 2 B is dualizable (Appendix A.VI).
Remark 1.36. We believe that the condition of having finite-dimensional centers is not really needed to define the notion of finite homomorphism between von Neumann algebras [1,Conjecture 6.17]. If that is indeed the case, then we can extend the notion of finite natural transformations to non-semisimple defects.
Recall from Appendix B.I that CN 0 denotes the symmetric monoidal category of semisimple conformal nets and their natural transformations, and CN f 0 denotes the symmetric monoidal category of semisimple conformal nets all of whose irreducible summands have finite index, together with finite natural transformations. Later on, we will denote by CN f 1 the symmetric monoidal category of semisimple defects (between semisimple conformal nets all of whose irreducible summands have finite index), together with finite natural transformations. The category CN 1 is equipped with two forgetful functors.
Remark 1.38. A conformal net A also has a weak identity given on genuinely bicolored intervals I by . That defect is not isomorphic to 1 A in the category CN 1 . It is nevertheless equivalent to 1 A in the sense that there is an invertible sector between them; see Example 3.5.
1.e. Composition of defects. Given conformal nets A, B, C, and defects A D B and B E C , we will now define their fusion D ⊛ B E, which is an A-C-defect if the conformal net B has finite index. If B does not have finite index, then D ⊛ B E might still be a defect, but we do not know how to prove this.
respectively. If I is genuinely bicolored, then we use the local coordinate to construct intervals If I is genuinely bicolored then, by Proposition 1.17, we have where the algebras act on H ⊠ B(J) K for some faithful D(I ++ )-module H and some faithful E( ++ I)-module K. Therefore, we obtain the following equivalent definition of composition of defects: We conjecture that D ⊛ B E is always an A-C-defect. Our first main theorem says that this holds when B has finite index.
Main Theorem 1.42. Let A, B, and C be irreducible conformal nets, and let us assume that B has finite index. If D is a defect from A to B, and E a defect from B to C, then D ⊛ B E is a defect from A to C.
Proof. We first prove isotony. Let I 1 ⊂ I 2 be genuinely bicolored intervals, let H be a faithful D(I ++ 2 )-module and let K be a faithful E( ++ I 2 )-module. By the isotony property of D and E, the actions of D(I ++ 1 ) on H and of E( ++ I 1 ) on K are faithful. Therefore, both (D ⊛ B E)(I 1 ) and (D ⊛ B E)(I 2 ) can be defined as subalgebras of We next show locality and strong additivity. Let J ⊂ I and K ⊂ I be bicolored intervals whose union is I and that intersect in a single point. We assume without loss of generality that K is white and that I and J are genuinely bicolored. In particular, we then have + I = + J. By the strong additivity of D, we have which proves that D ⊛ B E is also strongly additive. Since D satisfies locality, the images of A(K) and D(J + ) commute in D(I + ). The algebra D(I + ) commutes with E( + I) = E( + J) by the definition of ⊛. It follows that all three algebras A(K), D(J + ), and E( + J) commute with one another. The algebras A(K) and (D ⊛ B E)(J) therefore also commute, as required.
The vacuum axiom is much harder. Let us first assume that D and E are irreducible. Let J ⊂ I be as in the formulation of the vacuum axiom (Definition 1.7), and let us assume without loss of generality that J is white. We need to show that the A(J) ⊗ alg A(J)-module structure on L 2 ((D ⊛ B E)(I)) given by (1.8, 1.9) extends to an action of A(J ∪J ). This will follow from the existence of an injective homomorphism from L 2 ((D ⊛ B E)(I)) into some other A(J) ⊗ alg A(J)-module that is visibly an A(J ∪J)-module. The desired homomorphism is (4.19) and will be constructed in Proposition 4.18. The fact that A(J ∪J) acts on the codomain of (4.19) is an immediate consequence of the vacuum axiom for D.
For general defects D and E, write them as direct integrals
In view of Corollary 1.33 and the fact that any defect between semisimple conformal nets can be disintegrated into irreducible defects (Lemma 1.30), the above theorem generalizes in a straightforward way to the situation where A, B, and C are not necessarily irreducible but merely semisimple: in this case, if all the irreducible summands of B have finite index, then the composition of an A-B-defect with a B-C-defect is an A-C-defect.
One might hope that composition of defects induces a functor However, some caution is needed. First, we used the finite index condition on B for our proof that D ⊛ B E is a defect. Second and more important, the operation of fusion of von Neumann algebras is only functorial with respect to isomorphisms of von Neumann algebras: given homomorphisms Moreover, requiring that the maps a, b, and c be finite homomorphisms does not help to construct the map (1.44). However, unlike the fusion of von Neumann algebras, the composition of defects is functorial for more than just isomorphisms.
and B2 E 2 C2 be defects, and let d : Assume moreover that b is finite (Appendix B.IV and Appendix A.VI). Then the above maps induce a natural transformation Moreover, if D and E are semisimple and if d and e are finite, then the defects D i ⊛ Bi E i are semisimple and the above natural transformation is finite.
Proof. The semisimplicity of D i ⊛ Bi E i is the content of Theorem 3.6. Given a genuinely bicolored interval I, we need to construct a homomorphism ( We assume without loss of generality that d and e are faithful (otherwise, their kernels are direct summands). Let H be a faithful D 2 (I ++ )-module, and let K be a faithful E 2 ( ++ I)-module. By [1, Thm 6.23], the natural transformation b induces a bounded linear map H ⊠ B1(J) K → H ⊠ B2(J) K, which is surjective by construction. That map is equivariant with respect to the homomorphism D 1 I + ⊗ alg E 1 + I → D 2 I + ⊗ alg E 2 + I , and therefore induces a map from the completion of We suspect that the functor (1.43) does not exist as stated. However, instead of trying to compose over the full category CN 0 of semisimple conformal nets, we can restrict attention to the subcategory CN f 0 ⊂ CN 0 of semisimple conformal nets all of whose irreducible summands have finite index, together with their finite natural transformations. If we let CN 1 × CN f 0 CN 1 be a shorthand notation for exists by Theorem 1.42 and Proposition 1.45.
1.f. Associativity of composition. It will be convenient to work with the square model S 1 := ∂[0, 1] 2 of the "standard circle" (see the beginning of Section 2) and to use the following notation.
be the subsets of ∂M hinted by the pictorial superscript.
. The vacuum sector associated to the standard bicolored circle, its upper half, and its standard involution, is The fiber product * of von Neumann algebras was studied in [28]. It is an alternative to the fusion ⊛ of von Neumann algebras which remedies the formal shortcomings of the fusion operation (see Appendix A.IV). In view of this, one might rather have defined the composition of defects as where I ++ , ++ I, and J are as in (1.39). This is related to the previous definition (1.39) as follows. Let S 1 be the standard bicolored circle (see Definition 1.48), with upper and lower halves S 1 ⊤ and S 1 ⊥ . Lemma 1.50. Let A D B and B E C be defects, with corresponding vacuum sectors H := H 0 (D) and K := H 0 (E). Viewed as algebras acting on H ⊠ B K, we then have Proof. Using a graphical representation as in (1.40), we have: where the third equality follows by Haag duality (Proposition 1.17). The second equation is similar.
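A plausible rendering of the alternative definition (1.49) alluded to above (an assumption on our part, using the fiber product ∗ of [28] and the intervals I ++ , ++ I, and J of (1.39)) is
\[
  (D \circledast_B E)(I) \;:=\; D(I^{++}) \,\ast_{B(J)}\, E({}^{++}I),
\]
which, by the theorem quoted above, agrees with the definition (1.39) whenever B has finite index.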
When the conformal net B has finite index (and conjecturally even without that restriction), the two definitions of fusion (1.39) and (1.49) actually agree: and B E C be defects. If B has finite index, then for every bicolored interval I the inclusion is an isomorphism. Using the above theorem, the associator is then induced from the associator for the operation * of fiber product of von Neumann algebras. If I is a genuinely bicolored interval, then evaluating the two sides of (1.53) on I yields where the embeddings I ++ ↩ J ↪ K ↩ J ′ ↪ ++ I are as in (1.39). The associator relating the two sides of (1.54) (see [28, Prop. 9.2.8] for a construction) is the desired natural isomorphism (1.53). The properties of (1.53) can then be summarized by saying that it provides a natural transformation (1.55) associator : that is an associator for the composition (1.46). This associator satisfies the pentagon identity by the corresponding pentagon identity for the operation * .
Sectors
We will use the constant speed parametrization to identify the standard circle {z ∈ C : |z| = 1} with the boundary of the unit square ∂[0, 1] 2 . Under our identification, the points 1, i, −1, and −i get mapped to (1, (2.1)
Definition 2.2. Let A and B be conformal nets, and let
Pictorially we will draw a D-E-sector as follows: The thin line stands for the conformal net A and the thick line stands for B. Recall from Proposition 1.23 that if A and B are both equal to the trivial conformal net C, then a C-C-defect may be viewed simply as a von Neumann algebra. A D-E-sector between two such defects is given by a bimodule between the corresponding von Neumann algebras.
The following lemma is a straightforward analog of [2, Lem. 1.9].
Lemma 2.5. Let S 1 be the standard bicolored circle, and let {I i ⊂ S 1 } be bicolored intervals whose interiors cover S 1 . Suppose that we have actions subject to the following two conditions: 1.
then the images of ρ i and ρ j commute. Then these actions endow H with the structure of a D-E-sector.
Proof. Given an interval J ⊂ S 1 , pick a diffeomorphism ϕ ∈ Diff + (S 1 ) that is trivial in a neighborhood N of the two color changing points, and such that ϕ(J) ⊂ I i0 for some I i0 in our cover. Write ϕ as ϕ 1 • . . . • ϕ n for diffeomorphisms ϕ k that are trivial on N and whose supports lie in elements of the cover. Let u k be unitaries implementing ϕ k (Proposition 1.10). Upon identifying u k with its image (under the relevant ρ i ) in B(H), we set Here we have used ϕ(a) as an abbreviation for A(ϕ)(a), D(ϕ)(a), B(ϕ)(a), or E(ϕ)(a), depending on whether J is a white, top, black, or bottom interval. Finally, as in the proof of [2, Lem. 1.9], one checks that ρ J | K = ρ I ℓ | K for any sufficiently small interval K ⊂ J ∩I ℓ , and then uses strong additivity to conclude that ρ J | J∩I ℓ = ρ I ℓ | J∩I ℓ .
As before, let S 1 ⊤ and S 1 ⊥ be the upper and lower halves of the standard bicolored circle.
Definition 2.7. Sectors form a category that we call CN 2 . Its objects are quin- There is also a symmetric monoidal structure on CN 2 given by objectwise spatial tensor product for the functors A, B, D, E, and by tensor product of Hilbert spaces.
The category CN 2 is equipped with two forgetful functors. Provided we restrict to the subcategory CN f 1 ⊂ CN 1 whose objects are semisimple defects between semisimple conformal nets and whose morphisms are finite natural transformations (another option is to allow all defects between semisimple conformal nets but restrict the morphisms to be only the isomorphisms), there is also a 'vertical identity' functor (2.8); identity v is as described in Definition 1.48. We represent it pictorially by a darkly shaded square; we reserve this darker shading for vacuum sectors. Note that it is essential to restrict to the subcategory CN f 1 ⊂ CN 1 because the L 2 -space construction is only functorial with respect to finite homomorphisms of von Neumann algebras [1] (see also [1,Conjecture 6.17]).
Remark 2.10. We will see later, in Warning 6.8, that we will have to further restrict our morphisms, and only allow natural isomorphisms between defects (even if the defects are semisimple). This will render otiose the subtleties related to [1,Conjecture 6.17]; in particular, there is no need to restrict to semisimple defects.
The algebra B(J) has actions of opposite variance on H and on K, so it makes sense to take the Connes fusion H ⊠ B(J) K. There are two algebras (2.11) acting on this fusion, associated to the intervals (I + ∩ S 1 ) ∪ J and J ∪ (I + ∩ S 1 + ), respectively-see Appendix A.IV. Upon identifying the intervals (I + ∩ S 1 ) ∪ J and J ∪ (I + ∩ S 1 + ) of (2.11) with the intervals I ++ and ++ I of (1.39), we see that the algebras (2.11) are equal to D ⊛ B F (I) and E ⊛ B G (I), respectively. We can now define the functor fusion h (2.12). Pictorially, we understand the functor fusion h as the operation of gluing two squares along a common edge. The associator for fusion h is induced by the usual associator for Connes fusion. It consists of a natural transformation and satisfies the pentagon identity.
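As a display (our schematic rendering of the description above; the fusion is taken over the algebra B(J)), the horizontal fusion of the sectors H and K is the Hilbert space
\[
  \mathrm{fusion}_h(H, K) \;:=\; H \,\boxtimes_{B(J)}\, K,
\]
on which the two algebras (2.11) act as described.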
2.c. Vertical fusion. We now describe the functor fusion v of vertical fusion. Given 1 2 ]) be the top and bottom halves of our standard circle ∂[0, 1] 2 , and let j : be the reflection map along the horizontal symmetry axis. The algebra E(S 1 ⊤ ) has two actions of opposite variance on H and K, and so it makes sense to take the Connes fusion We first treat the case 2 ) in its interior, then the algebra (2.14) 14) is given by E(j). We observe, as follows, that there is a canonical homomorphism (typically not an isomorphism) from A(I) to the algebra (2.14). In the definition of that fusion product, we are free to chose any faithful E(I ∪S 1 ⊥ )-module and any faithful E(I ∪ S 1 ⊤ )-module (see Appendix A.IV): let us take both of them to be the vacuum H 0 (E). Then, by definition, the algebra (2.14) is generated on By the vacuum and locality axioms, we have natural homomorphisms . Composing this composite with the action of (2.14) on H ⊠ E K gives our desired action of A(I).
By the same argument, we also have actions of coming from their respective actions on H and on K. We can therefore apply Lemma 2.5 to all the actions constructed so far, and conclude that H ⊠ E K is a D-F -sector.
One might expect vertical fusion to be a functor CN 2 × CN 1 CN 2 → CN 2 . However, as the vertical identity (2.8) is only a functor on the smaller category CN f 1 , and the horizontal fusion is only a functor on the restricted product CN 2 × CN f 0 CN 2 , so too vertical fusion only gives a functor on the restricted product. The restriction is necessary to ensure the Connes fusion H ⊠ E K is functorial with respect to the relevant natural transformations of the defect E [1]. Unlike horizontal fusion, vertical fusion is not the operation of gluing two squares along a common edge. Rather, it consists of gluing those two squares along half of their boundary. The associator for vertical fusion (2.17) comes from the associator of Connes fusion and satisfies the pentagon identity. There are also 'top' and 'bottom' identity natural transformations that describe the way fusion v and identity v interact. Given a sector D H E , they provide natural isomorphisms subject to the usual triangle axioms. Strictly speaking, the source functor of unitor t is only defined on the subcategory CN f 1 × CN 1 CN 2 of CN 2 , and so the transformation unitor t itself is only defined on that subcategory. Similarly, unitor b is only defined on the subcategory CN 2 × CN 1 CN f 1 .
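As a display (again our rendering, following the description above), vertical fusion is the Connes fusion over the algebra E(S 1 ⊤ ), which acts with opposite variance on the two sectors:
\[
  \mathrm{fusion}_v(H, K) \;:=\; H \,\boxtimes_{E(S^1_\top)}\, K.
\]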
Properties of the composition of defects
3.a. Left and right units. Units are a subtle business. One might guess that the left unit is a natural isomorphism CN 1 CN 1 whose source is the functor composition • ((identity • source) × id CN 1 ) and whose target is the identity functor. (Here id CN 1 : CN 1 → CN 1 is the identity functor and identity : CN 0 → CN 1 takes a net to the identity defect, as in (1.37).) But, unfortunately, in general there is no such natural isomorphism. Instead, we have the following 'weaker' piece of data: a functor unitor tl : CN f 1 → CN 2 ('tl' stands for top left) with the property that and target The construction of this functor is based on the following lemma. We are now ready to define the functor We also have functors
It assigns to every
is non-trivial, which is not the case if 1 A ⊛ A 1 A is replaced by 1 A in the intersection expression.
The invertible sector between 1 A and 1 A ⊛ A 1 A is the vacuum module of A associated to the "circle" constructed by inserting a copy of [0, 1] at the point
3.b. Semisimplicity of the composite defect. Given two semisimple defects, we can ask whether their fusion is again a semisimple defect. From now on, we always assume that our conformal nets are irreducible. The purpose of this section is to prove the following theorem (Theorem 3.6):
Detecting semisimplicity. We begin with a few lemmas. Proof. The center of A acts faithfully by A-B-bimodule endomorphisms. It is therefore finite-dimensional.
From now on, we fix a faithful defect A D B , and denote its vacuum sector H 0 = H 0 (D). Recall that our standard circle is S 1 := ∂[0, 1] 2 , and that its top and bottom halves are denoted S 1 ⊤ and S 1 ⊥ .
Notation 3.8. Given an interval I ⊂ S 1 that contains the two color-change points ( 1 2 , 0) and ( 1 2 , 1) in its interior, we define an algebraD(I) ⊂ B(H 0 ) as follows. It is the algebra generated by D(I 1 ) and D(I 2 ), where I 1 and I 2 are any two intervals covering I with the property that ( 1 2 , 1) ∈ I 1 and ( 1 2 , 0) ∈ I 2 . By strong additivity, the algebraD(I) does not depend on the choice of covering.
Then there is a natural action of the algebraÂ(J 2 ∪I 3 ) on the vacuum sector H 0 (D), and we haveD( Proof. We assume that D is faithful (otherwise D = 0, and there is nothing to show). By Haag duality for A, the algebraÂ(J 2 ∪ I 3 ) is the relative commutant of . The latter acts naturally on H 0 (D), and therefore so doesÂ(J 2 ∪I 3 ). As an algebra on H 0 (D),Â(J 2 ∪I 3 ) is given by the same expression A(J 2 ∪ I 2 ∪ I 3 ) ∩ A(I 2 ) ′ , where the commutant is now interpreted on H 0 (D). By Lemma A.30, The latter is equal to D(I 4 ) ∨ A(I 2 ) ′ = D(I 2 ∪ I 4 ) ′ by Haag duality for defects (Proposition 1.17).
Lemma 3.13. Let I 1 , I 2 , I 3 , I 4 be arranged as in (3.12). Assuming D is irreducible, then A(I 2 ) is the relative commutant ofD( Proof. By Lemma A.32, we have A( . The latter is equal to (A(I 2 )∨D(I 4 ))∩D( In the next Lemma we will use the notion of minimal index [A : B] of a subfactor B ⊆ A; see Appendix A.VIII for a definition. Let us decompose I 1 into intervals J 1 , J 2 as in (3.12). By Lemma 3.11, we havê We also have Finiteness implies semisimplicity. We can now prove the semisimplicity of the fusion of semisimple defects.
Proof of Theorem 3.6. Because the defects D and E are semisimple, we may write them as finite direct sums of irreducible defects: D = D i and E = E j . Fusion of defects is compatible with direct sums It therefore suffices to assume D and E are irreducible, and to show that for I genuinely bicolored, the von Neumann algebra (D ⊛ E)(I) has finite-dimensional center.
By Lemma 3.7, it is enough to show that the algebra of bimodule endomorphisms of H ⊠ K is finite-dimensional. This algebra of endomorphisms is equal to the algebra of D( )-Ẽ( )-endomorphisms of H ⊠ K.
Note that the algebras D( ) andẼ( ) are factors by Lemma 3.9. If a bimodule has finite statistical dimension (see Appendix A.VIII), then its algebra of bimodule endomorphisms is finite dimensional [1,Lem. 4.10]. It is therefore enough to show that the statistical dimension of H ⊠ B K as aD( )-Ẽ( )-bimodule is finite.
Using the compatibility of statistical dimension with Connes fusion (A.18), the dimension in question can be computed as a product of two statistical dimensions. So it suffices to argue that the dimension of H as a D( )-B( )-bimodule and the dimension of K as a B( )-Ẽ( )-bimodule are finite. This is the content of Lemma 3.16 below.
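The suppressed computation presumably takes the multiplicative form below (our rendering of the compatibility (A.18), with dim denoting statistical dimension and the bimodule structures as named in the text):
\[
  \dim\bigl( H \boxtimes_{B(\,\cdot\,)} K \bigr) \;=\; \dim(H) \cdot \dim(K).
\]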
Before proceeding, let us fix some new names for certain subintervals of our standard circle: 3 4 ] × [0, 1 2 ] Given a defect A D B , let us also introduce the following shorthand notations: Finiteness of the defect vacuum as a 4-interval bimodule. (splitting ) We record the following finiteness result, somewhat similar to Lemma 3.14, for future reference.
Let I 1 , I 2 , I 3 , I 4 now be the four sides of our standard bicolored circle: The intervals I 1 and I 3 are genuinely bicolored, I 2 is white, and I 4 is black. It will be convenient to introduce a graphical notation for the subalgebras of B(H 0 (D)) used in this proof: In particular, the algebra is a factor. We have to show that Using Haag duality and strong additivity, note that the algebra (D(I 1 ) ∨ D(I 3 )) ′ is the relative commutant of inside . Similarly, it follows from Lemma A.32 that the algebra A(I 2 ) ∨ B(I 4 ) is the relative commutant of in :
A variant of horizontal fusion
In Section 2.b we saw how to define the horizontal fusion of two sectors. We will now define a variant of the horizontal fusion, called keystone fusion, which itself depends on an intermediate construction we refer to as keyhole fusion. In Section 4.d, we will show that horizontal fusion and keystone fusion are in fact naturally isomorphic, and we will construct a canonical isomorphism Φ between them. That isomorphism will be essential in our construction of the 1 ⊠ 1-isomorphism Ω (4.31).
Recall that we implicitly assume that all our conformal nets are irreducible. Orient I l and I r counterclockwise, and orient I so that the inclusion I ֒→ I r is orientation preserving-see (4.1). The inclusion I ֒→ I l is then orientation reversing. Let J be the closure of (I l ∪ I r ) \ I. We orient J so that it agrees with the orientation of I l on J ∩ I l . We draw these intervals as follows: (4.1) I l = , I r = , I = and J = .
Given a conformal net A with finite index, we will define three functors F, G 0 , G : A(I l )-modules × A(I r )-modules → A(J)-modules.
These operations will be called respectively the fusion, the keyhole fusion, and the keystone fusion, and will be denoted graphically as follows: When we want to stress the dependence on the conformal net A, we will denote these functors F A , G 0,A , G A .
The ordinary horizontal fusion. The functor F is defined by fusion over A(I): using the orientation preserving inclusion I ֒→ I r , any left A(I r )-module is also a left A(I)-module, and using the orientation reversing inclusion I 0 ֒→ I l , any left A(I l )module is also a right A(I)-module. We can therefore define the horizontal fusion functor as follows: Write J as J 1 ⊔ J 2 ; we obtain actions of A(J 1 ) and A(J 2 ) on H l ⊠ A(I) H r , by [2,Cor. 1.28]. Note that in the case H l = L 2 A(I l ) and H r = L 2 A(I r ), the actions of A(J 1 ) and A(J 2 ) extend to an action of A(J) = A(J 1 )⊗ A(J 2 ); the same therefore holds for arbitrary H l and H r . The only difference between the functor F and the functor fusion h from (2.12) is that they have somewhat different source and target categories-the main construction is identical in both functors.
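The displayed definition of F has dropped out of this copy. Since the text says that F is defined by fusion over A(I), the formula is presumably of the following shape; this is a hedged reconstruction written in the Connes-fusion notation used throughout, not a verbatim quotation of the original display.

```latex
% Hedged reconstruction of the displayed definition of the horizontal
% fusion functor F, inferred from the surrounding prose.
\[
  F(H_l, H_r) \;:=\; H_l \boxtimes_{A(I)} H_r ,
\]
% with the actions of A(J_1) and A(J_2) described in the text generating
% the A(J)-module structure on the result.
```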
The keyhole fusion. We will need to name a few more manifolds. Let We draw these as (4.2)Ĩ l = ,Ĩ r = , S u = , S m = , S d = and K = .
The intervalsĨ l andĨ r are oriented counterclockwise, as were I l and I r . The manifolds S u , S m and S d are conformal circles via their constant speed parametrizations and are also oriented counterclockwise. (A conformal circle is a circle together with a homeomorphism with S 1 that is only determined up to orientation preserving conformal diffeomorphisms of S 1 .) Finally, the manifold K inherits its orientation from S u ∪ S d . Note that the inclusion K ֒→Ĩ l ∪Ĩ r is orientation reversing. We will also need the reflection j along the horizontal axis y = 1/2. Let us fix orientation preserving identifications φ l :Ĩ l ∼ = → I l and φ r :Ĩ r ∼ = → I r that are symmetric with respect to the reflection j, restrict to the identity in a neighborhood of ∂Ĩ l = ∂I l and ∂Ĩ r = ∂I r , and satisfy φ l (5/6, t) = (1, t) and φ r (7/6, t) = (1, t) for all t ∈ [0, 1]. Using these identifications, any A(I l )-module becomes an A(Ĩ l )-module and any A(I r )-module becomes an A(Ĩ r )-module. We can now define the keyhole fusion functor as follows: where H 0 (S u ) and H 0 (S d ) are the canonical vacuum sectors provided by [2, Thm. 2.13]. The right hand side is an instance of what we call cyclic fusion-see Appendix A.III. In the notation of cyclic fusion, we have where K 1 =Ĩ l ∩ S u , K 2 =Ĩ r ∩ S u , K 3 =Ĩ r ∩ S d , and K 4 =Ĩ l ∩ S d , appropriately oriented. It follows from [2, Cor. 1.28] that the algebras A(J ∩ (Ĩ l ∪Ĩ r )) and A(J ∩ (S u ∪ S d )) generate an action of A(J) on G 0 (H l , H r ).
The keystone fusion. Note that the algebra I ⊂ S m on the Hilbert space . By Theorem B.9, the algebra A(S m ) contains a direct summand that is canonically isomorphic to B(H 0 (S m , A)). We can therefore define the keystone fusion functor as follows: Moreover, since B(H 0 (S m , A)) and A(J) commute on G 0 (H l , H r ), there is a residual action of A(J) on the Hilbert space G(H l , H r ).
Fusion and keystone fusion are isomorphic. We will show presently that the functors F, G : A(I l )-modules × A(I r )-modules → A(J)-modules are naturally isomorphic to one another, and then later (in Proposition 4.29) construct a specific such natural isomorphism. We use the following straightforward generalization of Lemma A.28. commutes.
Using [2, Thm. 3.23] and the above lemma, we prove that the two different versions of horizontal fusion are naturally isomorphic to each other: → I r induce isomorphisms H 0 (S l ) ∼ = H 0 (S l ) and H 0 (S r ) ∼ = H 0 (S r ) that are equivariant with respect to A(I ′ l ) and A(I ′ r ) (here, I ′ l and I ′ r are the closures of S l \ I l and S r \ I r , respectively). From the isomorphism it follows that G 0 (H 0 (S l ), H 0 (S r )) represents the Hilbert space of an annulus; see Appendix B.V. Using Theorem B.8 we therefore have We draw the above isomorphisms as follows: Note that the two isomorphisms intertwine the natural {A(I)} I⊂(S b ∪−Sm) -actions.
We can now compute Since H 0 (S l ) and H 0 (S r ) are faithful A(I l )-and A(I r )-modules, we can use Lemma 4.3 to finish the argument: it remains only to check that ϕ is equivariant with respect to all r 1 ∈ End A(I l ) (H 0 (S l )) and r 2 ∈ End A(Ir ) (H 0 (S r )). That equivariance follows immediately from Haag duality for nets (Proposition B.4) and the fact that ϕ is equivariant with respect to A(I ′ l ) and A(I ′ r ). Unfortunately, the above proposition is not sufficient for our purposes: it does not construct a natural isomorphism Φ A : F A → G A , but only proves that one exists. This leaves unsettled, for instance, the question of whether these natural isomorphisms can be chosen so that Φ A⊗B = Φ A ⊗ Φ B . In the following sections, we will construct a canonical choice of such natural isomorphisms for which the desired symmetric monoidal property is clear. the extension by the identity of the maps φ l |Ĩ l,⊤ and φ r |Ĩ r,⊤ . We then have canonical identifications We now have an isomorphism which we draw as follows: Here, the lines , , and correspond to the conformal nets A, B, and C, and the transition points , and indicate the defects D and E.
Keyhole fusion as an L 2 -space. We need to introduce yet more manifolds. We have already encountered K 1 =S l ∩ S u and K 2 =S r ∩ S u . We define K u := K 1 ∪ K 2 and J u := J 1 ∪ J 2 , where J 1 := S b ∩ S u and J 2 := S u ∩ S m . We orient K u and J u compatibly with S u . Let J l be the closure ofS l,⊤ \ K 1 and, similarly, let J r be the closure ofS r,⊤ \ K 2 . The orientations and the bicolorings of J l and J r are inherited fromS l andS r . We include pictures of these manifolds: (4.9) J l = , J r = , K u = , J u = .
Following Notation 3.10, we let B̂(J u ) denote the commutant of B(K u ) on H 0 (S u , B). Our computation of the keyhole fusion will be in terms of the algebra H 0 (S r , E) , which we denote pictorially by The dotted line in this picture serves to remind us that B̂(J u ) was used instead of B(J u ). Note that the algebra (4.10) also acts on G 0,B (H 0 (S l , D), H 0 (S r , E)) because the latter is obtained from with orientations and bicolorings as in the following pictures Note that these manifolds do not include their boundary points.
Theorem 4.11. Let A, B, C be conformal nets, and let A D B and B E C be defects. Then there is a canonical unitary isomorphism In formulas, this is a map where S l , S r , J l , J u , J r are as in The equivariance of Ψ 0 is clear for intervals I that are contained in the upper half {(x, y)|y ≥ 1 2 } or in the lower half {(x, y)|y ≤ 1 2 }, and follows by strong additivity for more general intervals.
Associativity of the standard form identification. The isomorphism Ψ 0 is in an appropriate sense associative, as follows. Suppose that we have three defects A D B , B E C , and C F D . We then have various applications of Ψ 0 forming the square (4.14)
This diagram commutes by Proposition
). The lower right corner of (4.14) is ) . Note that, following (4.8), this Hilbert space is also denoted . As in (4.10), L 2 ( ) and L 2 ( ) denote the Hilbert spaces L 2 (D(J l )∨B(J u )∨ E(J r )) and L 2 (E(J l ) ∨Ĉ(J u ) ∨ F (J r )), respectively. The upper right and lower left corners of (4.14) are therefore given by and Finally, the vector space L 2 ( ) that appears in the upper left corner of (4.14) is the L 2 space of the von Neumann algebra where the completion is taken on the Hilbert space or, equivalently, on the Hilbert space .
4.c. The keystone fusion of vacuum sectors of defects. In this section, the defects A D B and B E C are assumed to be irreducible. As before, the conformal net B is taken to be of finite index. Recall the algebra from (4.10). Let us also introduce The algebra is a factor, as can be seen by applying Lemma A.15 in the situation of (4.13), but its subalgebra will typically not be a factor. However, since B has finite index, we know by Theorem 3.6 that the subalgebra is at least semisimple. Proof. Let X := , with minimal central projections p 1 , . . . , p n , and let Y := . Recall that Y is a factor. By definition, the inclusion X → Y is finite iff the bimodule X L 2 Y Y is dualizable iff its summands piX (p i L 2 Y ) Y are dualizable. Indeed, the commutant of Y on p i L 2 Y is p i Y p i , and the inclusion p i X ֒→ p i Y p i is finite by the previous lemma.
Keystone fusion contains vacuum sector of fused defect. Let B be a conformal net with finite index. Recall from Section 4.a that, given a B(I l )-module H l and a B(I r )-module H r , then the keystone fusion G B (H r , H l ) is defined by This construction uses the isomorphism B(S m ) ∼ = λ∈∆ B (H λ (S m , B)) from Theorem B.9. In formulas, this is a map Proof. By the split property, we can identify with⊗ , and thus L 2 with Fusing with H 0 (S m , B) and applying Lemma 4.17, we get a canonical isomorphism Recall from Appendix A.VI that the L 2 -space construction is functorial for finite homomorphisms between von Neumann algebras with finite-dimensional center. By Corollary 4.16, the inclusion ι : → therefore induces a map L 2 (ι) : L 2 ( ) → L 2 ( ). Let L 2 (ι) iso be the isometry in the polar decomposition of L 2 (ι). We set Ψ to be the composite where Ψ 0 is the unitary isomorphism from Theorem 4.11.
We will prove later, in Theorem 6.2 concerning the composite map (4.31), that the map Ψ is actually an isomorphism. We can already observe the following special case of that result: Proof. We need to show that the map is an isomorphism. By the computation (4.6), we know that the right hand side is isomorphic to H 0 (S b ), and thus is irreducible as an S b -sector of A. The above map is a homomorphism of S b -sectors and is injective by the previous proposition. It is therefore an isomorphism.
Associativity for the inclusion of the vacuum sector. Using the isometric embedding Ψ from (4.20) in place of the unitary isomorphism Ψ 0 from (4.12), we can form the following diagram analogous to (4.14): (4.22) that contains 9 squares. The four upper left squares of that grid are given by where ⊠ and ⊠ stand for ⊠ B(Sm) H 0 (S m , B) and ⊠ C(S tr m ) H 0 (S tr m , C), respectively, and ⊠⊗ ( ⊗ ) stands for ⊠ B(Sm)⊗C(S tr m ) (H 0 (S m , B)⊗H 0 (S tr m , C)). Here, S tr m denotes a translated copy of the circle S m .
The squares 1 , 2 , and 4 clearly commute. To see that 5 commutes, note first that is a factor, as can be seen by applying Lemma A.15 twice. That square then commutes by the functoriality of L 2 (−) iso -see [1,Prop. 6.22] and note that the necessary conditions for that functoriality are satisfied by Corollary 4.16 and by Lemma 4.24 below. The upper right squares of our 4 × 4 grid are given by and their commutativity is unproblematic. We refrain from drawing the last row of the grid. The squares 7 and 8 are similar to 3 and 6 . The commutativity of 9 follows from that of (4.14).

Proof. By the split property, we have isomorphisms ∼ =⊗ and ∼ =⊗ . It is therefore sufficient to show that Let S u , K u , J u be as in (4.2) and (4.9), and let H := H 0 (S u , B), and M := B(K u ), with commutant M ′ = B̂(J u ). Since H is a faithful M -module, we can pick an M -linear isomorphism ℓ 2 ⊗ H ∼ = ℓ 2 ⊗ L 2 (M ). 9 Under the corresponding isomorphism of Hilbert spaces ℓ 2 ⊗ ∼ = ℓ 2 ⊗ , the algebra It follows that where the last equality is because A D B is irreducible. We now argue that the natural inclusion ֒→ induces an isomorphism of centers. By Theorem 3.6, the center of these algebras is finite-dimensional. The center Z( ) certainly maps to the center Z( ), and the map is injective. It is therefore an isomorphism. The claim now follows, as .

Remark 4.25. All the defects in this section were assumed to be irreducible. However, using the compatibility of direct integrals with various operations, it is straightforward to extend Proposition 4.18 and Lemma 4.23 to arbitrary defects.

9 Here, ℓ 2 := ℓ 2 (N) could be removed from this isomorphism if we knew that M was a type III factor, a fact which is likely to be true (unless B is trivial) but which we haven't proven in our setup.
4.d. Comparison between fusion and keystone fusion. Let
A be a conformal net with finite index (implicitly irreducible as before). In this section, we will define a unitary natural transformation Φ A : F A → G A between the functors introduced in Section 4.a. Graphically, this natural transformation is denoted Recall the circles S l , S r , and S b introduced in (4.5): (4.26) S l = , S r = , and S b = .
As before, we let I := S l ∩ S r , with orientation inherited from S r . The circles S l and S r are given conformal structures by their unit speed parametrizations. The circle S b is also given a conformal structure, as follows. Let j l ∈ Conf − (S l ) and j r ∈ Conf − (S r ) be the involutions fixing ∂I. The conformal structure on S b is the one making ǫ l := j l | I ∪ Id jr (Sr\I) : S r → S b into a conformal map. Equivalently, it is the one for which ǫ r := j r | I ∪ Id j l (S l \I) : S l → S b is a conformal map. Proof. Let I l and I r be as in (4.1), and let I ′ l and I ′ r be the closures of their complements in S l and S r , respectively. Since the actions of A(I l ) and A(I r ) on H 0 (S l ) and H 0 (S r ) are faithful, by Lemma 4.3 it is enough to define the isomorphism Φ H0(S l ),H0(Sr) : → , and to check that it commutes with the natural actions of A(I l ) ′ = A(I ′ l ) and A(I r ) ′ = A(I ′ r ). We define this isomorphism as the composite (4.30) 2) associated to the upper half S b,⊤ of the conformal circle S b , and Ψ is the unitary isomorphism from Lemma 4.21.
Let A and C be conformal nets, let B be a conformal net with finite index, and let A D B and B E C be defects. Let us introduce the notation for the Hilbert space L 2 ( ) = L 2 ((D ⊛ B E)(S 1 ⊤ )) that appears in the left hand side of (4.19). Combining Proposition 4.18 (see also Remark 4.25) and Proposition 4.29, we can construct an isometric map (4.31) Ω : where Φ stands for Φ H0(S l ,D),H0(Sr,E) . We will show later, in Theorem 6.2, that the map Ω = Ω D,E is in fact an isomorphism. This map is the fundamental "1 ⊠ 1 = 1 isomorphism" comparing 1 D⊛E with 1 D ⊠ 1 E . Proof. By the definition of Ω, the above diagram can be expanded to The upper left square commutes by Lemma 4.23 (see also Remark 4.25). The remaining three squares commute by the naturality of Φ −1 .
Haag duality for composition of defects
Throughout this section we fix conformal nets A, B, and C, always assumed irreducible, and irreducible defects A D B and B E C . In our pictures, we will use the notation for intervals on which we evaluate A, we will use for intervals on which we evaluate B, and for intervals on which we evaluate C. We will also use for bicolored intervals on which we evaluate D, and for bicolored intervals on which we evaluate E.
Let S l and S r be as in (4.5) and (4.7), with intersection I oriented like S r . As before, we use the notation := H 0 (S l , D) = L 2 ( ), similarly := H 0 (S r , E) = L 2 ( ), and := H 0 (S l , D) ⊠ B(I) H 0 (S r , E). We will again be using the Notation 1.47. Letting

Main Theorem 5.2. Assuming B has finite index, then on the Hilbert space , we have

Proof. Let us introduce some notation for various algebras that act on the Hilbert space . The main algebras of interest are = (D ⊛ B E)(S 1 ⊤ ) and = (D ⊛ B E)(S 1 ⊥ ), and our goal is to show that the inclusion We will use the following algebras: Here B̂, D̂, and Ê are as in 3.10, and J 0 and J 3 are bicolored as in (4.7). By Lemma 3.11, the algebras B̂(J 1 ) and B̂(J 2 ) act on and respectively, and satisfy D(J 0 ) ∨ B̂(J 1 ) = D̂(J 0 ∪ J 1 ) and B̂(J 2 ) ∨ E(J 3 ) = Ê(J 2 ∪ J 3 ). The equalities in (5.6)-(5.8) for actions on follow. In Section 5.a below we will obtain some purchase on the Haag inclusion ⊆ ( ) ′ by showing that its statistical dimension is the same as that of the inclusion ⊆ ( ) ′ . (Here the algebra is defined similarly to .) We can compute the statistical dimension of that latter inclusion by squeezing it into a sequence of simpler inclusions of von Neumann algebras, as follows: Because D and E are irreducible, the algebra (5.5) is a factor. Using Lemma 3.11, note that the algebra (the right connected component of the picture (5.7)) is the commutant of a factor acting on a vacuum sector for E; it follows that (5.7) is a factor. More difficult is the fact that (5.8) is a factor-that is the content of Corollary 5.16, following from Lemma 5.13 below. The algebra (5.6) is not a factor, but combining Lemma 5.10 below and Theorem 3.6, we will learn that it does have finite-dimensional center; let n be the dimension of this center.
In Theorem 5.2, the defects D and E were assumed to be irreducible, but the statement holds in general: which, combined with (5.11), proves the Lemma. 10 Here, we use "Spec" in the sense of algebraic geometry.
We then haveS l = K l ∪ K ′ l andS r = K r ∪ K ′ r . We use H = H 0 (B,S l ) and We denote the above equation graphically by = * . where stands for H 0 (S l , D), and stands for H 0 (S r , E). We have the following sequence of equalities Here the third equality uses Lemma A.34. By Lemma 3.11, we also have = ′ on .
We therefore similarly have Corollary 5.16. The algebra is a factor. : .
In light of the above computations, that equation gives ν µ(B) = ν 2 ; since ν is finite, we must have ν = µ(B), as required.
As a corollary, we obtain the following improvement on Lemma 3.14: Corollary 5.20. We have Corollary 5.21. We have the following two equalities: Proof. The first equality follows immediately from Corollary 5.20. For the second equality, note that : = : by Lemma 5.13; the result follows by a version of Corollary 5.20 in which the roles of the nets A and C have been interchanged.
The 1 ⊠ 1 isomorphism
We are now in a position to prove that the map Ω (4.31), from the vacuum sector of the composition of two defects to the fusion of the vacuum sectors of the individual defects, is an isomorphism. This isomorphism provides the modification that one expects in any 3-category. More importantly, it also provides the basis for our construction of the fundamental interchange modification present in any 3-category; see Section 6.d.
6.a. The 1 ⊠ 1 map is an isomorphism. Let A, B, and C be conformal nets, always assumed irreducible, and let A D B and B E C be defects. Assume furthermore that B has finite index. As before, we let represent the Hilbert space L 2 ((D ⊛ E)(S 1 ⊤ )), and let stand for the fusion L 2 (D(S 1 ⊤ )) ⊠ B(I) L 2 (E(S 1 ⊤ )), where I is the vertical interval as in (4.1).
Main Theorem 6.2. Let A, B, C, D, and E be as above. Then the map Proof. As both sides of (6.3) are compatible with direct integrals, we may assume without loss of generality that D and E are irreducible. By construction Ω D,E is an isometry. The algebras = (D ⊛ E)(S 1 ⊤ ) and = (D ⊛ E)(S 1 ⊥ ) act faithfully on both sides of (6.3). By definition, they are each other's commutants on . By Theorem 5.2, they are also each other's commutants on . Therefore, when viewed as (D ⊛ E)(S 1 ⊤ ) -(D ⊛ E)(S 1 ⊥ )-bimodules, both sides of (6.3) have a matrix of statistical dimensions that is the identity matrix, and Ω D,E is an isomorphism.
Given the crucial importance of the "1 times 1 isomorphism" Ω, we collect in one place the main ingredients used in its definition. These are the unitary isomorphisms Φ : Here, the symbol " : " is an abbreviation for − ⊠ B(Sm) H 0 (S m , B).
Proof. For every step in the construction of Ω there are versions of the isomorphisms (6.5) and (6.4). It is a lengthy, but not difficult exercise to check that for each step in the construction of Ω the corresponding version of equation (6.7) holds.
Warning 6.8. It appears that Ω is not a natural transformation! More precisely, there seem to exist irreducible defects A D B , B E C , A F B , B G C and finite natural transformation τ : D → F , σ : E → G (Definition 1.35) for which the diagram (6.9) fails to be commutative. This problem can be blamed on the bad functorial properties of L 2 iso (used in the definition of Ψ). However, Ω is still natural with respect to natural isomorphisms of defects. There are two ways of dealing with the above situation: 1. Restrict to the groupoid parts of CN 0 and CN 1 ; 2. Replace the L 2 iso by L 2 in the definition of Ψ; the price to pay is that Ω is then no longer unitary.
Both options seem to have shortcomings-in our exposition, we have opted for the first option. One unfortunate consequence of the failure of commutativity of (6.9) is that given defects D, E, F , G as above, and given dualizable sectors D H F and E K G with normalized duals 11 (H, r H , s H ) and (K, r K , s K ) the horizontal composition (H ⊠ BK , r, s) of those two normalized duals is not a normalized dual for H ⊠ B K. Here, the structure maps r and s are given by the obvious formulas in terms of r H , r K , s H , and s K . that we describe below. Let S l , S r , S b , I, j l , j r be as in Section 4.d, and let I l := j l (I), I r := j r (I). We draw them once more: j l : , I l = , j r : , I r = . (6.10) Recall that we equipped S b with a conformal structure that makes j l | I l ∪ Id Ir : S b → S r and Id I l ∪ j r | Ir : S b → S l conformal (and therefore smooth).
where τ := j l | I l ∪ j r | I = ǫ −1 l • ǫ r ∈ Conf + (S l , S r ). given by γ l = j l ∪ Id and γ r = Id ∪ j + r , and j + r is obtained by conjugating j r by (x, y) → (x + 1, y).
Proof. Using Lemma 6.12 twice, we can expand (6.14) into the following diagram: The lower right square commutes by the functoriality of H 0 , see (B.1). The remaining three squares commute by the fact that Υ l and Υ r are natural transformations.
Let ǫ l and ǫ r be as above, and let ǫ l,⊤ : S r,⊤ → S b,⊤ and ǫ r,⊤ : S l,⊤ → S b,⊤ be their restrictions to the upper halves of S r and S l , respectively. Lemma 6.15. Let A be a conformal net with finite index, and let A D B be an irreducible defect. Let H r := H 0 (S r , D), where the circle S r is bicolored as in (4.7). Then the map Ω id A ,D : → is the inverse of L 2 (D(ǫ l,⊤ )) • Υ l Hr . Similarly, assuming B has finite index, the map Ω D,idB : → is the inverse of L 2 (D(ǫ r,⊤ )) • Υ r H0(S l ,D) .
Proof. We only treat the first equation Ω −1 idA,D = L 2 (D(ǫ l,⊤ )) • Υ l Hr . We first prove it in the case when D = id A . By definition, Ω idA,idA is the composite It follows that Ω idA,idA = Υ −1 and we are done by Lemma 6.12.
We now treat the general case. As a special case of Lemma 4.32 (with the defects id A , id A , and D), we get the commutativity of the following diagram: Consider 5 2 − ǫ] be diffeomorphisms whose derivative is 1 in a neighborhood of the boundary, where ε is a fixed small number. These extend to diffeomorphisms ϕ lm : S lm,⊤ → S lmr,⊤ , ϕ l : S l,⊤ → S lm,⊤ , ψ lm : S lm,⊤ → S lmr,⊤ , ψ m : S m,⊤ → S mr,⊤ , whose derivative is 1 outside the domains of ϕ and ψ, respectively. Let also χ := ψ −1 lm • ϕ lm . We will use later on that χ(x, y) = (x, y) for y ≥ 3 2 .
Using the identity Ω id A ,idA = Υ −1 proved earlier, the case D = id A of (6.17) implies the commutativity of 2 . The triangles 3 commute by virtue of Lemma 6.12, and so the whole diagram (6.18) is commutative. Letτ ,σ ∈ Diff(∂[0, 1] 2 ) be the symmetric extensions of τ and σ, so thatτ | S 1 ⊤ = τ andσ| S 1 ⊤ = σ, and they both commute with (x, y) → (x, 1 − y). From the fact that χ(x, y) = (x, y) for y ≥ 3 2 , it follows thatσ(x, y) =τ (x, y) = (x, y) for y ≥ 1 2 . Let u, v ∈ A( ) be the canonical quantizations as in (1.18) of the symmetric diffeomorphismτ andσ. By definition, we then have L 2 (A(τ )) = π(u) and L 2 (A(σ)) = π(v), where π is the action of A( ) on = H 0 (A). We now consider the following diagram of natural transformations between functors from A( )-modules to Hilbert spaces: When evaluated on H 0 (A), the above diagram commutes by (6.18). Therefore, by Lemma A.28, since H 0 (A) is a faithful A( )-module, the diagram (6.19) commutes regardless of the module one evaluates it on. We now consider the following variant of diagram (6.18): Our goal is to show is that the triangles 5 are commutative. Since D is irreducible, there exists an invertible complex number λ such that The 7-gon 4 is simply (6.17), and it is therefore commutative. The triangle 5 occurs two times with a given orientation, and once with the opposite orientation: the outer 7-gon therefore commutes up to a factor of λ. But that outer 7-gon is an instance of (6.19) by Lemma 1.19, and is therefore commutative. It follows that λ = 1.
6.c. Unitors for horizontal fusion of sectors. In this section, we will introduce certain variants of the transformations Υ l and Υ r that will be more convenient for the full verification [4] that conformal nets form a 3-category (more precisely, an internal dicategory in the 2-category of symmetric monoidal categories [8,Definition 3.3]). We will again be using the circles S l , S r , S b , the intervals I, I l , I r , and the involutions j l ∈ Conf − (S l ), j r ∈ Conf − (S b ) from (6.10). Let α l := j l | I l ∪id Ir : S b → S r and α r := id I l ∪j r | I : S b → S l be the diffeomorphisms used in the definition of Υ l and Υ r -their inverses appeared in Lemma 6.12 under the names ǫ l and ǫ r . 6.d. The interchange isomorphism. In a 2-category, the interchange law says that the two ways of evaluating the diagram are equal to each other: if one first performs the two vertical compositions and then composes horizontally, or one first composes horizontally and then vertically, one should obtain the same answer. In our case, the two ways of fusing four sectors is the isomorphism that exchanges the two middle factors. More concretely, given sectors D H F , E K G , F L P , G M Q as in (6.24), we are looking for a unitary isomorphism We can view (H, K, L, M ) as an object of the category . 12 Here, as in the 1 ⊠ 1-isomorphism Ω, we restrict to the groupoid parts of CN 1 and CN 0 .
The forgetful functor (CN
In order to construct the natural transformation (6.25), it is therefore enough to produce corresponding natural transformations (6.27) C Hilbert Spaces for every F and G. The fact that (6.26) intertwines the actions of F ( ), G( ) F ( ), and G( ), i.e., that it is a morphism of D ⊛ B E -P ⊛ B Q -sectors, will then follow from the naturality of (6.27).
Since the object H 0 (F ), H 0 (G), H 0 (F ), H 0 (G) ∈ C consists of faithful modules, the obvious analog of Lemma 4.3 (itself a generalization of Lemma A.28) applies, and so it is enough to construct the natural transformation (6.27) on the above object. Using the isomorphisms (6.3) and (2.19), the latter is given by . Using the compatibility of Ω with the monoidal structure (Proposition 6.6), the same can be deduced for the exchange isomorphism: (6.25) is a monoidal natural transformation.
Appendix A. Von Neumann algebras Given a Hilbert space H, we let B(H) denote its algebra of bounded operators. The ultraweak topology on B(H) is the topology of pointwise convergence with respect to the pairing with its predual, the trace class operators.
Definition A.1. A von Neumann algebra is a topological *-algebra (no compatibility between the topology and the algebra structure!) that is embeddable as a closed subalgebra of B(H) with respect to the ultraweak topology.
The spatial tensor product A 1⊗ A 2 of von Neumann algebras A i ⊂ B(H i ) is the closure in B(H 1 ⊗ H 2 ) of their algebraic tensor product A 1 ⊗ alg A 2 .
Definition A.2. Let A be a von Neumann algebra. A left (right) A-module is a Hilbert space H equipped with a continuous homomorphism from A (respectively A op ) to B(H). We will use the notation A H (respectively H A ) to denote the fact that H is a left (right) A-module.
We now review the parts of our earlier publications [1,2,3] that are used in the present paper. For further details, we refer the reader to [1, §2 and §6] for Section A.I, to [1, §3] for Section A.II, to [3, Appendix A] for Section A.III, to [2, §1C] for Section A.IV, to [1, §4] for Section A.VI, and to [1, §5] for Section A.VIII.
A.I. The Haagerup L 2 -space. A faithful left module H for a von Neumann algebra A is called a standard form if it comes equipped with an antilinear isometric involution J and a selfdual cone P ⊂ H subject to the properties (i) JAJ = A ′ on H, (ii) JcJ = c * for all c ∈ Z(A), (iii) Jξ = ξ for all ξ ∈ P , (iv) aJaJ(P ) ⊆ P for all a ∈ A where A ′ denotes the commutant of A. The operator J is called the modular conjugation. The standard form is an A-A-bimodule, with right action ξa := Ja * Jξ. It is unique up to unique unitary isomorphism [11].
The space of continuous linear functionals A → C forms a Banach space A * = L 1 (A) called the predual of A. It comes with a positive cone L 1 + (A) := {φ ∈ A * | φ(x) ≥ 0 ∀x ∈ A + } and two commuting A-actions given by (aφb)(x) := φ(bxa). Given a von Neumann algebra A there is a canonical construction of a standard form for A [17]. It is the completion of with respect to some pre-inner product, and is denoted L 2 (A). The positive cone in L 2 A is given by L 2 Hom with respect to the inner product φ 1 ⊗ξ 1 ⊗ψ 1 , φ 2 ⊗ξ 2 ⊗ψ 2 := (φ * 2 φ 1 )ξ 1 (ψ 1 ψ * 2 ), ξ 2 . Here, we have written the action of ψ i on the right, which means that ψ 1 ψ * 2 stands for the composite The L 2 space is a unit for Connes fusion in the sense that there are canonical unitary isomorphisms defined by φ⊗ξ⊗ψ → (φξ)ψ and φ⊗ξ⊗ψ → φ(ξψ). If f : A → B is an isomorphism of von Neumann algebras, H A and B K are modules, then Here the indices f −1 and f indicate restrictions of actions along the isomorphisms f and f −1 .

Using (A.5) is independent, up to canonical unitary isomorphism, of the choices of i and j [3, Appendix A]. We call the above Hilbert space the cyclic fusion of the H i 's, and denote it by .

A.IV. Fusion and fiber product of von Neumann algebras.
Definition A.8. Let A ← C op , C → B be two homomorphisms between von Neumann algebras, and let A H and B K be faithful modules. Viewing H as a right C-module, we may form the Connes fusion H ⊠ C K. One then defines the fusion of A and B over C as where the commutants of C op and C are taken in H and K, respectively. If C = ℂ, then A * C B = A ⊛ C B is the spatial tensor product A⊗ B of von Neumann algebras.
A.V. Compatibility with tensor products. There is a canonical isomorphism [20,24] L This isomorphism provides a natural compatibility between Connes fusion and tensor products. This latter isomorphism can then be used to construct natural compatibility isomorphisms for the spatial tensor product and the fusion, respectively the fiber product, of von Neumann algebras: where the latter also relies on the equation (A⊗ B) ′ = A ′⊗ B ′ , [27,Thm. 12.3].
A.VI. Dualizability. A von Neumann algebra whose center is C is called a factor. Von Neumann algebras with finite-dimensional center are direct sums of factors.
Definition A.11. Let A and B be von Neumann algebras with finite-dimensional center. Given an A-B-bimodule H, we say that a B-A-bimoduleH is dual to H if it comes equipped with maps subject to the duality equations (R * ⊗ 1)(1 ⊗ S) = 1, (S * ⊗ 1)(1 ⊗ R) = 1, and to the normalization R * (x ⊗ 1)R = S * (1 ⊗ x)S for all x ∈ End( A H B ). A bimodule whose dual module exists is called dualizable.
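The displayed maps R and S appear to have been lost in this copy. As a hedged reconstruction following the usual conventions for conjugate (duality) equations — the exact placement of the tensor legs in the original may differ — the duality data of Definition A.11 should look roughly as follows.

```latex
% Hedged reconstruction of the duality data of Definition A.11; the precise
% decorations and leg orderings in the original source may differ.
\[
  R \colon L^2(B) \longrightarrow \bar{H} \boxtimes_A H,
  \qquad
  S \colon L^2(A) \longrightarrow H \boxtimes_B \bar{H},
\]
\[
  (R^* \otimes 1)(1 \otimes S) = 1,
  \qquad
  (S^* \otimes 1)(1 \otimes R) = 1,
  \qquad
  R^*(x \otimes 1)R = S^*(1 \otimes x)S
  \quad \text{for all } x \in \mathrm{End}({}_A H_B).
\]
```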
If A H B is a dualizable bimodule, then its dual bimodule is well defined up to canonical unitary isomorphism [1,Thm 4.22]. Moreover, the dual bimodule is canonically isomorphic to the complex conjugate Hilbert space H, with the actions bξa := a * ξb * [1, Cor 6.12].
A homomorphism f : A → B between von Neumann algebras with finite-dimensional center is said to be finite if the associated bimodule A L 2 (B) B is dualizable. If f : A → B is a finite homomorphism, then there is an induced map L 2 (f ) : L 2 (A) → L 2 (B), and we have L 2 (f • g) = L 2 (f ) • L 2 (g). In other words, Haagerup's L 2 -space is functorial with respect to finite homomorphisms [1]. The map L 2 (f ) is bounded and A-A-bilinear, but usually not isometric.
A.VII. Two-sided fusion on L 2 -spaces. Let M be a von Neumann algebra, and let M 0 and A be two commuting subalgebras such that M 0 ∨ A = M . Let H A be a faithful right A-module, and let B be its commutant, acting on H on the left. Then H is naturally a B-A-bimodule, and its conjugateH is an A-B-bimodule.
Consider the Hilbert space
There is then an antilinear involution J : H → H given by where J A and J M are the modular conjugations on L 2 A and L 2 M , ξ ∈ L 2 M is a vector, and for ϕ ∈ hom(L 2 A A , H A ) and ψ ∈ hom( A L 2 A, AH ), the maps In the following proof, ℓ 2 stands for ℓ 2 (N), or maybe ℓ 2 (X) for a set X of sufficiently large cardinality. If H admits a cyclic vector for A then we can replace ℓ 2 by C, and the proof simplifies.
Let us define M 1 := B(ℓ 2 )⊗ M , with associated standard form (L 2 M 1 , J M1 , P M1 ) and let q := p J M1 p J M1 ∈ B(L 2 M 1 ) or, equivalently, q(ξ) := p ξ p. Composing u⊠id L 2 (M) ⊠ū with the obvious identifications ( with range projection vv * = q. The resulting isomorphism H ∼ = q(L 2 M 1 ) intertwines J and qJ M1 = J M1 q, as can be seen from the commutative diagram where the last equality holds because the preimage of uψ under the left action map agrees with the preimage of (ū ψ) * under the right action map ℓ 2 ⊗ A → hom( A L 2 A ⊗ ℓ 2 , A L 2 A), and the preimage ofūφ under the right action map agrees with the preimage of (u ϕ) * under the left action map.
Recall that B is the commutant of A on H. In its action on L 2 M 1 = ℓ 2 ⊗L 2 M ⊗ℓ 2 , we have B ∼ = vBv * = q(B(ℓ 2 )⊗ A)q, and so it follows that (A.14) Now by [11, Lemma 2.6], we know that q(L 2 M 1 ), qJ M1 , q(P M1 ) is a standard form for qM 1 q. Therefore, by letting P := v −1 (q(P M1 )), we get that ( H, J, P ) is a standard form for M . Furthermore, we can form the Hilbert spaces on which the algebra M := M 0 ∨ B 1 ∨ B 2 acts. By Proposition A.13, we then get

Proposition A.16. In the above situation, the diagram

Proof. Let ℓ 1 and ℓ 2 be two copies of ℓ 2 . Pick isometries u i : (H i ) Ai ֒→ (ℓ i ⊗L 2 A i ) Ai , so as to identify H 1 with L 2 (p 1 (B(ℓ 1 )⊗ M )p 1 ), and H 2 with L 2 (p 2 (M⊗ B(ℓ 2 ))p 2 ), for p i := u i u * i . Here, we have p 1 ∈ B(ℓ 1 )⊗ M and p 2 ∈ M⊗ B(ℓ 2 ). Let us also define the projections q 1 on L 2 (B(ℓ 1 )⊗ M ) ∼ = ℓ 1 ⊗ ℓ 1 ⊗ L 2 M and q 2 on . Given the above notations, the proof consists in a careful examination of the following commutative diagram: in which arrows denote inclusions, and lines denote isomorphisms.
A.VIII. Statistical dimension and minimal index. The statistical dimension of a dualizable bimodule A H B is given by , where R and S are as in (A.12). For non-dualizable bimodules, one declares dim( A H B ) to be ∞. If A = ⊕A i and B = ⊕B j are finite direct sums of factors, then we can decompose H = ⊕H ij as a direct sum of A i -B j -bimodules and use the matrix-valued statistical dimension . This matrix-valued dimension is additive with respect to addition of modules and multiplicative with respect to Connes fusion.

Let A and B be von Neumann algebras. We call a functor F : A-modules → B-modules normal if it is continuous with respect to the ultra-weak topology on hom-spaces, preserves adjoints F (f * ) = F (f ) * , and is additive in the following sense: of H with the image of a projection p ∈ End A (M )⊗ B(ℓ 2 ). We can then define At the level of arrows, if H ∼ = im(p) and K ∼ = im(q) are A-modules given to us as above, then the image under F of an A-linear map r : H → K is the unique map F (r) : F (H) → F (K) for which the composite A similar result holds for natural transformations.
Lemma A.28. Let F, G : A-modules → B-modules be two normal functors and let M be a faithful A-module. Then, in order to uniquely define a natural transformation a : F → G, it is enough to specify its value on M , and to check that for each r ∈ End A (M ), the diagram This prescription is independent of the choice of isomorphism.
A.X. The split property. Since their images in B(H ⊗ K) agree, the two algebras (A.33) are equal.
Recall the fiber product operation * from Definition A.10.
Lemma A.34. Let A 0 and A 1 be commuting subalgebras of B(H), and let B 0 and B 1 be commuting subalgebras of B(K). Let C → A 0 and C op → B 0 be homomorphisms. If C and A ′ 0 are split on H, then we have (A.35) on H ⊠ C K.
Proof. Since A ′ 0 and C are split on H, the actions of A ′ 0 and C ′ on H ⊠ C K induce an action of A ′ 0⊗ C ′ . In particular, the actions of A ′ 0 on H and of B ′ 0 on K induce an action of A ′ 0⊗ B ′ 0 on H ⊠ C K. Consider H ⊗ K as a A ′ 0⊗ B ′ 0 -module, where A ′ 0 acts on H and B ′ 0 acts on K. Since this is a faithful module, we can find an A ′ 0⊗ B ′ 0 -linear isometry H ⊠ C K ֒→ H ⊗ K ⊗ ℓ 2 . Proof. Let H be a faithful A-module and K a faithfulB-module. Let A ′ be the commutant of A on H, and let B ′ andB ′ be the commutants of B andB on K.
Finally, let C ′ be the commutant of C on K, and let ′ C be the commutant of C op on H.
Since the inclusion of C into A op is split, so is the inclusion A ′ ֒→ ′ C. The algebra C ′ is ′ C's commutant on H ⊠ C K, and so A ′ and C ′ are split on H ⊠ C K. Finally, B ′ andB ′ being subalgebras of C ′ , we conclude that A ′ and B ′ and that A ′ andB ′ are split on H ⊠ C K. It follows that the algebras

Appendix B. Conformal nets

B.I. Axioms for conformal nets. Let VN be the category whose objects are von Neumann algebras with separable preduals, and whose morphisms are C-linear homomorphisms and C-linear antihomomorphisms. A net is a covariant functor A : INT → VN taking orientation-preserving embeddings to injective homomorphisms and orientation-reversing embeddings to injective antihomomorphisms. It is said to be continuous if for any intervals I and J, the map Hom INT (I, J) → Hom VN (A(I), A(J)), ϕ → A(ϕ) is continuous for the C ∞ topology on Hom INT (I, J) and Haagerup's u-topology on Hom VN (A(I), A(J)) [2, Appendix]. Given a subinterval I ⊆ K, we will often not distinguish between A(I) and its image in A(K).
A conformal net is a continuous net A subject to the following conditions. Here, I and J are subintervals of an interval K: Here, J ∪ pJ is equipped with any smooth structure extending the given smooth structures on J andJ, and for which the orientation-reversing involution that exchanges J andJ is smooth. commute, where J is the modular conjugation on L 2 (A(I)), and j I ∈ Conf − (S) is the unique involution that fixes ∂I. Note also that we should really have written L 2 (A(ϕ| I ) in place of L 2 (A(ϕ)), and similarly for L 2 (A(ψj I )). Taking ψ := j I in the second diagram, we recover the modular conjugation as J = v I H 0 (j I , A)v * I . If S is a circle without a conformal structure, then it is still possible to define H 0 (S, A) as L 2 (A(I)) of some interval I ⊂ S, but this only defines H 0 (S, A) up to non-canonical unitary isomorphism [2,Def. 1.16]. We recall [2,Prop. 1.17].
Proposition B.4 (Haag duality for conformal nets). Let A be a conformal net, and S be a circle. Then for any I ⊂ S, the algebra A(I ′ ) is the commutant of A(I) on H 0 (S, A).
Given intervals J ⊂ K such that J c , the closure of K \ J, is itself an interval, the commutant of A(J) in A(K) is A(J c ).
B.III. Glueing vacuum sectors. Consider a theta-graph Θ, and let S 1 , S 2 , S 3 be its three circle subgraphs with orientations as drawn below: Θ : Let us give K the orientation coming from S 1 , and let us give I and L the orientations coming from S 2 . Given a conformal net A, then there is a non-canonical isomorphism [2, Cor. Moreover, in the presence of suitable conformal structures, this isomorphism can be constructed canonically: equip S 1 and S 2 with conformal structures, and let j 1 ∈ Conf − (S 1 ), j 2 ∈ Conf − (S 2 ) be the unique involutions fixing ∂I. Then there is a unique conformal structure on S 3 for which j 1 | I ∪ Id K : S 1 → S 3 and j 2 | I ∪ Id L : S 2 → S 3 are conformal. We can then use (B.2) to obtain the canonical isomorphism subject to the compatibility condition ρ I | A(J) = ρ J whenever J ⊂ I. We write ∆ for the collection of unitary isomorphism classes of irreducible S-sectors of A. The vacuum sector discussed before is an example of a sector and we write 0 for the corresponding element of ∆. As all circles are diffeomorphic ∆ does not depend on the specific circle S. There is an involution λ →λ given by sending an S-sector to its pull back along an orientation reversing diffeomorphism of S, as defined in [2, (1.12)]. For λ ∈ ∆ we write H λ (S, A) for a representative of λ as an S-sector. Of course, H λ (S, A) is only determined up to non-canonical isomorphism. Let S l be a circle, decomposed into four intervals I 1 , . . . , I 4 as in (B.7), and let S r be another circle, similarly decomposed into four intervals I 5 , . . . , I 8 . Let ϕ : I 5 → I 1 and ψ : I 7 → I 3 be orientation-reversing diffeomorphisms. These diffeomorphisms equip H 0 (S l ) with the structure of a right A(I 5 )⊗ A(I 7 )-module. We are interested in the Hilbert space This space is associated to the annulus Σ = D l ∪ I5∪I7 D r , where D l and D r are disks bounding S l and S r . (As H 0 (S l ) and H 0 (S r ) are only determined up to non-canonical isometric isomorphism the same is true for H Σ at this point.) Let S b := I 2 ∪ I 8 and S m := I 4 ∪ I 6 be the two boundary circles of this annulus.
The Hilbert space H Σ is an S m -S b -sector, which means that it is equipped with compatible actions of the algebras A(J) associated to all subintervals of S m and S b [2, Sec. 3.B].
We finish by stating an important result which, formulated in a different language, is due to [16]: Note that even though H λ (S, A) is only defined up to non-canonical isomorphism, its algebra of bounded operators is defined up to canonical isomorphism. It therefore makes sense for the isomorphism (B.10) to be canonical. | 2018-03-13T23:02:40.000Z | 2013-10-30T00:00:00.000 | {
"year": 2013,
"sha1": "cc12520163332e6e9be441a8a84ebf7baace430b",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "cc12520163332e6e9be441a8a84ebf7baace430b",
"s2fieldsofstudy": [
"Mathematics"
],
"extfieldsofstudy": [
"Mathematics",
"Physics"
]
} |
246854893 | pes2o/s2orc | v3-fos-license | Genomic Analyses of the Fungus Paraconiothyrium sp. Isolated from the Chinese White Wax Scale Insect Reveals Its Symbiotic Character
The Chinese white wax scale, Ericerus pela, is an insect native to China. It harbors a variety of microbes. The Paraconiothyrium fungus was isolated from E. pela and genome sequenced in this study. A fungal cytotoxicity assay was performed on the Aedes albopictus cell line C6/36. The assembled Paraconiothyrium sp. genome was 39.55 Mb and consisted of 14,174 genes. The coding sequences accounted for 50.75% of the entire genome. Functional pathway analyses showed that Paraconiothyrium sp. possesses complete pathways for the biosynthesis of 20 amino acids, 10 of which E. pela lacks. It also had complementary genes in the vitamin B groups synthesis pathways. Secondary metabolism prediction showed many gene clusters that produce polyketide. Additionally, a large number of genes associated with ‘reduced virulence’ in the genome were annotated with the Pathogen–Host Interaction database. A total of 651 genes encoding carbohydrate-active enzymes were predicted to be mostly involved in plant polysaccharide degradation. Pan-specific genomic analyses showed that genes unique to Paraconiothyrium sp. were enriched in the pathways related to amino acid metabolism and secondary metabolism. GO annotation analysis yielded similar results. The top COG categories were ‘carbohydrate transport and metabolism’, ‘lipid transport and metabolism’, and ‘secondary metabolite biosynthesis, transport and catabolism’. Phylogenetic analyses based on gene family and pan genes showed that Paraconiothyrium sp is clustered together with species from the Didymosphaeriaceae family. A multi-locus sequence analysis showed that it converged with the same branch as P. brasiliense and they formed one group with fungi from the Paraconiothyrium genus. To validate the in vitro toxicity of Paraconiothyrium sp., a cytotoxicity assay was performed. The results showed that medium-cultured Paraconiothyrium sp. had no harmful effect on cell viability. No toxins were secreted by the fungus during growth. Our results imply that Paraconiothyrium sp. may establish a symbiotic relationship with the host to supply complementary nutrition to E. pela.
Introduction
The Chinese white wax scale insect (Ericerus pela) is well known for its wax production. White wax secreted by males has high economic value and is widely used in machinery, food, medicine, and other fields [1][2][3]. The white wax is produced by the second-instar male larvae. The males live from about May to August. The lifespan of females is about one year and they produce eggs in the summer. Males and females of E. pela parasitize the branches of the Chinese ash tree (Fraxinus chinensis) and glossy privet (Ligustrum lucidum) for almost their entire lifespan and remain immobile due to the degeneration of their appendages [4][5][6]. E. pela display a varied relationship with microorganisms as a result of their sedentary lifestyle [7]. They may also inherit a variety of microorganisms from their ground-dwelling ancestors [8].
In previous studies, we measured the diversity of microorganisms in E. pela and found that they house a variety of microbes; we identified 128 bacterial families spanning 20 phyla and 48 fungal families spanning 4 phyla [7]. These microorganisms may be transmitted vertically or obtained from the diet or the environment. The bacteria are mainly concentrated in three families: Comamonadaceae, Streptococcaceae, and Rickettsiaceae. The fungi are less abundant and diverse than the bacteria and are mainly concentrated in the Hypocreales, Pleosporales, and Capnodiales orders. Studies on scale insect symbiotic microorganisms have primarily focused on bacterial symbionts [9,10]. However, it is unclear whether these fungi play a role in the long parasitic lifespan of E. pela.
With the development of high-throughput sequencing technology, genomic sequencing has provided a robust tool to explore the genetic aspects of the relationships between fungi and insects [11][12][13]. To date, 3413 fungal genomes have been published in GenBank. More than 500 fungi of the three orders mentioned above have had their genomes sequenced. These data allow for the reliable taxonomic classification of the fungi that E. pela hosts and will also facilitate comparative genomic analyses to reveal the evolution and gene functions of these fungi. Genome sequences also provide information on the complementarity of genes and pathways related to amino acid biosynthesis with the host. Because plant sap supplies unbalanced nutrition to insects, piercing-sucking insects must acquire essential nutrients through nutritional partners [14,15]. In addition, the genome sequences of a large number of insect symbionts have revealed that all of the symbionts have smaller genomes and a higher AT content than free-living relative species, as well as fast evolutionary rates in their coding genes [16,17].
To understand the genetic information related to the function of the fungi, in this study we cultured fungi from homogenized E. pela eggs. The isolated fungus was genome sequenced, and its toxicity to cells was analyzed. These studies will provide an important foundation for characterizing the genome of the fungus and understanding the relationship between E. pela and its fungi.
Fungal Culture and Genomic DNA Isolation
E. pela eggs were collected and quickly washed with 75% alcohol. The sterilized eggs were homogenized in a 1.5 mL centrifuge tube (Axygen, San Francisco, CA, USA). The homogenate was diluted with sterile water and 50 µL of the diluted homogenate was inoculated onto PDA medium. The medium was incubated at a constant temperature of 30 °C. The culture status was periodically observed; when mycelium growth was observed, the marginal mycelium was promptly transferred to new PDA medium for purification. This process was repeated several times until a pure culture strain was obtained. The strain was used for sequencing.
Genomic DNA was isolated with the BGI Customized Magnetic Plant Genomic DNA Kit (Tiangen Biotech, Beijing, China) according to the manufacturer's recommended protocol. The DNA concentration was determined using a Qubit fluorometer (Thermo Fisher Scientific, Waltham, MA, USA) and DNA quality was measured using a Nanodrop 2000 spectrophotometer (Thermo Fisher Scientific, Carlsbad, CA, USA). Then, DNA integrity was evaluated using 0.5% agarose gel electrophoresis.
Universal ITS primers [18] were used to amplify the DNA fragments and the amplification results were sequenced and aligned with the ITS sequences in NCBI using BLASTN.
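The ITS-based identification step can be illustrated with a short script. The sketch below uses Biopython's remote BLAST interface to query the NCBI nt database with an ITS amplicon; it only illustrates the alignment step described above and is not the authors' actual pipeline, and the input file name is a hypothetical placeholder.

```python
# Minimal sketch of the ITS identification step: remote BLASTN of the amplicon
# against the NCBI nt database via Biopython. Illustrative only; the input
# file name is a hypothetical placeholder.
from Bio import SeqIO
from Bio.Blast import NCBIWWW, NCBIXML

record = SeqIO.read("its_amplicon.fasta", "fasta")               # hypothetical file
result_handle = NCBIWWW.qblast("blastn", "nt", str(record.seq))  # remote BLASTN
blast_record = NCBIXML.read(result_handle)

# Report the top hits with percent identity and e-value.
for alignment in blast_record.alignments[:5]:
    hsp = alignment.hsps[0]
    identity = 100.0 * hsp.identities / hsp.align_length
    print(f"{alignment.title[:80]}  identity={identity:.1f}%  e-value={hsp.expect:.2g}")
```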
Library Construction and Sequencing
In this study, whole-genome sequencing was performed based on NGS (BGISEQ platform) and SMRT (PacBio Sequel system).
For BGISEQ sequencing, genomic DNA was fragmented using a g-TUBE device (Covaris, MA, USA). Fragments of 300-400 bp were selected with magnetic beads. After DNA purification, dsDNA end repair, 3′ adenylation, and adapter ligation, the templates were amplified by PCR. Then, the amplification products were again selected with magnetic beads. With a splint oligo sequence and ligase, the amplification products were denatured and circularized into single-stranded circular DNA (ssCir-DNA). After digestion of the residual linear DNA and quality control using an Agilent 2100, the final library was obtained and sequenced with a paired-end read length of 150 bp.
The DNBSEQ library was sequenced with the BGISEQ platform through rolling circle amplification to transform ssCir-DNA into DNA nanoballs (DNBs), nanospheres containing more than 300 copies. The obtained DNBs were added to the holes of the chip mesh using high-density DNA nanochip technology and sequenced by combined probe anchoring polymerization (CPA).
Approximately one microgram of DNA from the strain was used for library construction. The PacBio library was constructed using the SMRTbell Express Template Preparation Kit 2.0 (Pacific Biosciences, Menlo Park, CA, USA). First, DNA samples were sheared using a g-TUBE device and 10-15 Kb fragments were selected using the BluePippin size selection system (Sage Science, Beverly, MA, USA). After the single-stranded overhang was removed, DNA damage and DNA ends were repaired, A-tailing was conducted, and inserts were ligated to adapters (blunt hairpins) to form the SMRTbell library. Ligation products were purified using AMPure PB beads and then pooled. Selection was conducted and inserts less than 10 kb in size were discarded using the BluePippin size selection system (Sage Science, Beverly, MA, USA). Finally, the library was quantified with a Qubit DNA HS Assay Kit, and the insert was checked with an Agilent HS DNA Kit (Agilent Technologies, Santa Clara, CA, USA). Sequencing of the SMRTbell library was performed using the Sequel (PacBio) Sequencing Kit 2.0.1. The SMRT Cell corresponds to 1 million ZMWs. With the Sequel Binding Kit 3.0 (Pacific Biosciences, Menlo Park, CA, USA), ZMWs loaded with one template and one DNA sample were prepared for sequencing. Sequence information was analyzed through fluorescent signals linked to dNTPs. Subreads obtained from sequencing also contained redundancy, which was removed in the subsequent procedure.
Genome Assembly
Data generated from BGISEQ were filtered to obtain clean data. Reads with a certain proportion of low-quality bases and Ns were filtered, and contaminating duplicates were removed.
For PacBio sequencing, after removal of the adapters several subreads were generated from the same polymerase reads in one ZMW, and subreads less than 1000 bp were trimmed. Then, these subreads were transformed into a circular consensus sequence with enforced consistency (Table S1).
Before genome assembly, the genome size was estimated by K-mer analysis based on BGISEQ data. In the study, we assigned K a value of 15 (Table S2).
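The logic of the k-mer-based genome size estimate can be sketched as follows: given a 15-mer depth histogram (for example, the output of a k-mer counter), the genome size is approximated as the total number of k-mers divided by the depth of the main peak, after discarding low-depth error k-mers. This is a generic illustration of the method, not the exact formula used by the authors, and the histogram file name is a placeholder.

```python
# Sketch of k-mer-based genome size estimation from a k-mer depth histogram
# (two columns per line: depth and k-mer count). Parameters are illustrative.
def estimate_genome_size(histo_path, min_depth=5):
    hist = []
    with open(histo_path) as fh:
        for line in fh:
            depth, count = line.split()[:2]
            hist.append((int(depth), int(count)))

    # Depths below min_depth are treated as sequencing-error k-mers and ignored.
    usable = [(d, c) for d, c in hist if d >= min_depth]
    total_kmers = sum(d * c for d, c in usable)

    # The depth of the main peak approximates the average k-mer coverage.
    peak_depth = max(usable, key=lambda dc: dc[1])[0]
    return total_kmers / peak_depth

# Example usage (the histogram file name is hypothetical):
# size_bp = estimate_genome_size("kmer15.histo")
# print(f"Estimated genome size: {size_bp / 1e6:.2f} Mb")
```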
Genome assembly and long-read polishing were performed after data polishing. Falcon (v0.3.0) was used for de novo assembly based on PacBio long reads. Due to the high error rate, preassembly errors still needed to be corrected. First, corrected subreads were obtained through Pbdagcon (https://github.com/PacificBiosciences/pbdagcon, accessed on 20 March 2020) and Falcon (v0.3.0); after assembly with several software programs, such as Celera (v8.3) and Falcon, the best assembly result was retained. Single-base errors were corrected using GATK (v3.8), and the last gaps were filled with pbjelly2 (v15.8.24) after reads with a long insert size were assembled to transform contigs into scaffolds using SSPACE_Basic (v2.0). To further improve assembly accuracy, polishing steps were executed. Initial polishing, which is available for only PacBio long reads, was carried out.
Then, the high-accuracy PacBio-corrected assembly was obtained with the help of BGISEQ short reads.
With the sequencing and subsequent fragment assembly completed, it was feasible to carry out genome analysis under general conditions. Genome size, gene number, genome characteristics, and assembly statistics, including contig/scaffold number and length, N50, GC content, and gap number, were also recorded.
Repetitive Sequences
The assembly was compared with the transposon sequence database, and repeats were also identified de novo. A database of assembly sequences was established using RepeatMasker software (v4-0-6) and the de novo method [19]. A transposon model was built using RepeatModeler (v2.0.1) based on this database, and transposon prediction was then executed using RepeatMasker (v4-0-6) with the help of the established model. Tandem repeats were predicted using Tandem Repeat Finder (TRF) [20].
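For readers who wish to reproduce this kind of repeat annotation, the general shape of the pipeline is sketched below with subprocess calls. The exact options used by the authors are not stated in the text, so the commands shown (standard RepeatModeler/RepeatMasker/TRF invocations) and the file names are assumptions, not the actual settings used in this study.

```python
# Illustrative sketch of the repeat annotation pipeline; the commands shown are
# standard RepeatModeler/RepeatMasker/TRF invocations, not the authors' exact
# parameters, and all file names are assumptions.
import subprocess

genome = "paraconiothyrium_assembly.fasta"  # hypothetical assembly file

# Build a RepeatModeler database from the assembly and model repeat families.
subprocess.run(["BuildDatabase", "-name", "paraco_db", genome], check=True)
subprocess.run(["RepeatModeler", "-database", "paraco_db"], check=True)

# Mask the genome with the custom library produced by RepeatModeler
# (the library path inside the RepeatModeler output directory may vary).
subprocess.run(["RepeatMasker", "-lib", "consensi.fa.classified", genome], check=True)

# Predict tandem repeats with Tandem Repeats Finder using its recommended parameters.
subprocess.run(["trf", genome, "2", "7", "7", "80", "10", "50", "500", "-h"], check=True)
```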
Gene Prediction
To determine gene locations, homology-based prediction was conducted by comparing the genome sequence with protein sets from several other reference species using GeneWise (v2.20) [25]. With the help of SNAP (v2010-07-28) [26] and GeneMarkES (v4.21) [27], the gene structures and general statistics of the genome regarding functional elements, including introns, exons, and CDSs, were identified. Specifically, Augustus (v3.2.1) was used for the prediction of protein-coding genes [28].
Gene Annotation
After gene prediction, gene annotation was performed by aligning the predicted protein sequence with Swiss-Prot protein data using BLASTP to assign each gene an annotation of the best match. Predicted proteins were annotated by searching against the NR database. KEGG [29][30][31] annotation was performed through sequence alignment to identify the pathways in which genes might be involved. GO (v07012019) [32] and COG (v11102014) [33] were also used for gene functional annotation in this study.
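The "best match" assignment used for the Swiss-Prot/NR annotation can be illustrated by a small parser for tabular BLASTP output (outfmt 6): for each query protein, the hit with the highest bit score below an e-value cutoff is kept. This is an illustrative sketch; the actual thresholds used by the authors are not stated, and the file name is a placeholder.

```python
# Sketch: assign each predicted protein the annotation of its best BLASTP hit,
# parsed from tabular output (-outfmt 6). The e-value cutoff and the file name
# are illustrative assumptions.
def best_hits(blast_tab, max_evalue=1e-5):
    best = {}
    with open(blast_tab) as fh:
        for line in fh:
            cols = line.rstrip("\n").split("\t")
            query, subject = cols[0], cols[1]
            evalue, bitscore = float(cols[10]), float(cols[11])
            if evalue > max_evalue:
                continue
            if query not in best or bitscore > best[query][1]:
                best[query] = (subject, bitscore)
    return {q: s for q, (s, _) in best.items()}

# annotations = best_hits("proteins_vs_swissprot.tab")  # hypothetical file name
```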
Fungal databases are essential, especially for identifying the functions of specific genes in the target fungus. Genes involved in the interaction of pathogen and host were analyzed using PHI (v4.6) [34,35]. In addition, CAZy (v201906) [36] was used to identify genes encoding CAZymes, which can damage the cell wall of the host. Additionally, genes that might be related to the transportation of toxic secondary metabolites were searched for in other universal gene annotation databases.
The prediction of secondary metabolites was performed using the online software antiSMASH [37].
Functional Pathway Analysis of E. pela and Paraconiothyrium sp.
We considered plant sap to be deficient in essential amino acids and vitamins for E. pela and, combined with E. pela genome annotation, aligned the annotated genes of both with the KEGG database-related pathways and constructed metabolic complementary pathway maps.
Core/Pan-Gene Analysis
To explore the functional similarities and differences between the identified genes, an analysis of core/pan-genes among all samples was carried out. Molecular evidence was also obtained to explain the underlying cause of the phenotypic variation. Gene cluster analysis of protein-coding genes from all samples was performed using CD-HIT (v4.6.6) [38], generating final gene clusters that were regarded as pan-gene clusters. The pan-gene clusters were divided into core genes, specific genes, and dispensable genes. The core genes were contained in every strain, and most were essential for the growth of the strain, such as genes related to metabolism and the production of energy. Certain genes, however, were present in only a specific strain, reflecting its unique characteristics or specific metabolic activities. The rest of the genes were dispensable genes.
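The core/specific/dispensable split described here can be sketched as a simple presence/absence classification over the pan-gene clusters. The cluster and strain names below are hypothetical, and the input is assumed to have already been parsed from the CD-HIT clustering output.

```python
def classify_clusters(cluster_members, all_strains):
    """Split pan-gene clusters into core (present in every strain),
    specific (present in exactly one strain) and dispensable (the rest).
    cluster_members maps cluster_id -> set of strain names with a member."""
    core, specific, dispensable = [], [], []
    for cluster_id, strains in cluster_members.items():
        if strains == set(all_strains):
            core.append(cluster_id)
        elif len(strains) == 1:
            specific.append(cluster_id)
        else:
            dispensable.append(cluster_id)
    return core, specific, dispensable

if __name__ == "__main__":
    strains = ["Paraconiothyrium_sp", "K_rhodostoma", "P_minitans"]
    clusters = {  # toy presence/absence data
        "c1": {"Paraconiothyrium_sp", "K_rhodostoma", "P_minitans"},
        "c2": {"Paraconiothyrium_sp"},
        "c3": {"K_rhodostoma", "P_minitans"},
    }
    core, specific, dispensable = classify_clusters(clusters, strains)
    print(len(core), len(specific), len(dispensable))
```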
Phylogenetic Tree Construction
A gene family was constructed based on the genes of related species and the target strain, after which gene family identification was carried out.
The genome sequences of the related species downloaded from NCBI were used as references to analyze gene families. The species were the same as those mentioned in Section 2.9, with L. fluviatile used as an outgroup. Gene families were identified by aligning the protein sequences using BLAST, eliminating redundancy using solar, clustering the alignment results into TreeFam [39] gene families using hcluster_sg, mapping the protein alignments back onto the corresponding CDS regions, and performing multiple sequence alignment of the clustered gene families using Muscle. A phylogenetic tree can be constructed from core/pan genes, gene families, or resequencing data; in this study, the gene family module was used, and the tree was built from the multiple sequence alignment results with the NJ method in TreeBeST (v1.9.2).
To verify the results, the three conserved sequences ITS, LSU, and Tub from 32 species (Table S3) were retrieved from GenBank for multi-locus sequence analysis together with Paraconiothyrium sp. [3,18,40]. Multiple sequence alignment was performed for each conserved gene using Muscle (v5.1). Sequence Matrix was used to concatenate the three conserved genes of each species, with the concatenation order ITS-LSU-Tub. A phylogenetic tree was constructed with the neighbor-joining method under the maximum composite likelihood model and 1000 bootstrap replicates using MEGA11.
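The concatenation step can be sketched as below: for each species, the aligned ITS, LSU and Tub sequences are joined in that order to form the supermatrix. The sequences are toy placeholders, and the per-locus alignments are assumed to have been produced beforehand (e.g., with Muscle).

```python
def concatenate_loci(alignments, order=("ITS", "LSU", "Tub")):
    """Concatenate per-species aligned sequences in a fixed locus order.
    alignments maps locus -> {species: aligned sequence}; every sequence
    within a locus is assumed to share the same alignment length."""
    species = set.intersection(*(set(alignments[locus]) for locus in order))
    supermatrix = {}
    for sp in sorted(species):
        supermatrix[sp] = "".join(alignments[locus][sp] for locus in order)
    return supermatrix

if __name__ == "__main__":
    toy = {
        "ITS": {"Paraconiothyrium_sp": "ACG-T", "P_brasiliense": "ACGAT"},
        "LSU": {"Paraconiothyrium_sp": "GGTTC", "P_brasiliense": "GGTTC"},
        "Tub": {"Paraconiothyrium_sp": "TT-AA", "P_brasiliense": "TTGAA"},
    }
    for name, seq in concatenate_loci(toy).items():
        print(name, seq)
```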
Synteny Analysis
To detect the evolution of homologous genomes, we performed synteny analysis of Paraconiothyrium sp. against the other six species mentioned in Section 2.9, except Lentithecium fluviatile, in pairs at the nucleotide and protein levels. The genetic order of the relatives was used as a standard for the analysis. The upper and lower axes of the linear synteny graph were constructed after the lengths of both sequences were reduced by the same proportion. According to BLAST, each pair of nucleic acid sequences in the two alignments was marked in the coordinate diagram based on its position information after size reduction at the same proportion. Comparison at the amino acid level was then performed as follows. Paraconiothyrium sp. was aligned against the relative as a database, and the best hit of each protein was selected. Then, Paraconiothyrium sp. was used as the database and the other species were aligned against it. The best hits from the two alignments were selected for synteny analysis.
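One common reading of "the best hits from the two alignments were selected" is a reciprocal-best-hit filter, sketched below. This interpretation, the identifiers and the input dictionaries are assumptions; the two inputs are assumed to already hold the per-query best hit from each BLAST direction.

```python
def reciprocal_best_hits(forward_best, reverse_best):
    """Keep pairs (a, b) where b is the best hit of a in the relative's
    proteome and a is, in turn, the best hit of b in Paraconiothyrium sp.
    Both inputs map query id -> best subject id."""
    pairs = []
    for query, subject in forward_best.items():
        if reverse_best.get(subject) == query:
            pairs.append((query, subject))
    return pairs

if __name__ == "__main__":
    forward = {"pcon_g001": "krho_g110", "pcon_g002": "krho_g453"}
    reverse = {"krho_g110": "pcon_g001", "krho_g453": "pcon_g999"}
    print(reciprocal_best_hits(forward, reverse))  # [('pcon_g001', 'krho_g110')]
```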
Cytotoxicity Assay
To validate whether Paraconiothyrium sp. could secrete toxic materials, the cytotoxicity of secondary metabolites secreted from Paraconiothyrium sp. was evaluated in the Aedes albopictus cell line C6/36. The fungus was cultured using liquid PDA medium. After replacement of the medium in the test cells with fresh RPMI 1640 medium (MeilunBio, Dalian, China), 100 µL of the inoculum was added to 96-well plates that had been preincubated for 24 h in an incubator (37 °C). Then, diluted Paraconiothyrium sp. culture medium was added to each well. Seven gradients were set up: 0, 1, 2, 4, 6, 8, and 10 µL of liquid PDA medium. After 24 h of incubation, 10 µL of CCK-8 solution (Proteintech, Wuhan, China) was added to each well. The samples were incubated until the absorbance at 450 nm was approximately 1.0, as measured with a microplate reader (Thermo Fisher Scientific, Waltham, MA, USA). Homogenates of the strains were also assayed for cytotoxicity. DMSO (Sangon, Shanghai, China) was used as a positive control, with volume gradients of 0, 1, 2, 4, 6, 8, and 10 µL per well. The survival rate was calculated using the following formula: Survival rate (%) = [OD(experimental group) - OD(blank control group)] / [OD(negative control group) - OD(blank control group)] x 100%.
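A minimal sketch of the survival-rate calculation is given below, assuming the standard CCK-8 convention in which each treated well is compared with the untreated (negative control) and cell-free (blank) wells; the absorbance values are hypothetical.

```python
def survival_rate(od_sample, od_negative_control, od_blank):
    """Survival rate (%) = (OD_sample - OD_blank) /
    (OD_negative_control - OD_blank) * 100."""
    return 100.0 * (od_sample - od_blank) / (od_negative_control - od_blank)

if __name__ == "__main__":
    # hypothetical 450 nm absorbance readings
    print(round(survival_rate(od_sample=0.95,
                              od_negative_control=1.02,
                              od_blank=0.08), 1))
```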
Fungal Culture
The collected E. pela was homogenized and cultured. Two pure fungi were isolated. Internal transcribed spacer (ITS) analyses showed that one fungus exhibits 99.42% similarity with the ITS sequences of Paraconiothyrium brasiliense. This fungus was genome sequenced in this study. The morphology of the grown fungus is shown in Figure 1, which exhibits the obvious characteristics of Ascomycetes.
Genome Sequencing and Assembly
The Paraconiothyrium sp. genome size was estimated to be approximately 41.79 Mb using K-mer analysis ( Figure S1). The Paraconiothyrium sp. genome was sequenced using a combination of the BGISEQ and PacBio approaches, and the sequence depths were 86× and 236×, respectively. The assembled Paraconiothyrium sp. genome was 39.55 Mb with a scaffold N50 of 4.92 Mb (Figure 2, Tables 1 and S4). The overall GC content was 51.36% (Table 1).
Genome Components
The Paraconiothyrium sp. genome consists of 14,174 genes with an average length of approximately 1.5 kb ( Table 1). The CDS accounted for 50.75% of the entire genome with an average length of nearly 1.4 kb, and each gene contained approximately 2.75 exons and 1.75 introns.
Genomic Annotation
GO annotation returned 7238 proteins, accounting for 51.06% of the total proteins (Table 2, Figure S2). These genes were assigned different GO terms. In the biological process category, genes involved in 'metabolic process', 'cellular process', 'localization', 'biological regulation', and 'regulation of biological process' accounted for the majority. In the molecular function category, genes were mainly associated with 'catalytic activity', 'binding', 'transporter activity', 'transcription regulator activity', and 'structural molecule activity'. The COG annotation results are shown in Figure S3; among these categories, except for 'general function prediction only', the 'carbohydrate transport and metabolism' cluster represented the largest group, followed by the 'lipid transport and metabolism', 'amino acid transport and metabolism', and 'energy production and conversion' clusters.
Among all the predicted genes, 4655 (32.84%) genes were mappable through the KEGG pathway database and found to be distributed in 148 metabolic pathways (Table 2, Figure S4). At the third level, these pathways were mainly classified into several categories related to metabolism, such as 'metabolic pathways', 'biosynthesis of secondary metabolites', 'microbial metabolism in diverse environments', 'biosynthesis of antibiotics', 'biosynthesis of amino acids', and 'carbon metabolism'.
Secondary metabolism prediction showed many gene clusters that produced polyketide (Table S8).
Analyses of Functional Pathway Related to Nutrition Contribution of Paraconiothyrium sp.
Functional pathway analysis showed that E. pela and Paraconiothyrium sp. were complementary at least in the amino acid synthesis and vitamin synthesis pathways. Genomic annotation indicated that E. pela lacked the ability to de novo synthesize 10 essential amino acids (valine, leucine, isoleucine, lysine, arginine, methionine, histidine, phenylalanine, tyrosine, and tryptophan). However, Paraconiothyrium sp. has the genes necessary for the synthesis of these essential amino acids (Figure 3). A complementary pattern is also present in the vitamin B synthesis pathways: except for vitamin B6 and riboflavin, which can be synthesized from scratch by Paraconiothyrium sp., a complete pathway exists for the other B vitamins only when the genes of both organisms are combined (Figure S5).
Figure 3. Amino acid synthesis pathways in E. pela and Paraconiothyrium sp. The green and blue areas represent E. pela and Paraconiothyrium sp., respectively. The blue and red oval boxes indicate nonessential amino acids and essential amino acids, respectively. Yellow and black boxes represent enzymes whose genes are encoded by E. pela and Paraconiothyrium sp., respectively. The name of the gene in the box is the EC number or the name of the enzyme corresponding to the KEGG annotation.
Identification of Pathogenic Factors
In our analysis, 1276 genes with high homology were identified in the PHI database, accounting for 9% of the total number of genes in the Paraconiothyrium sp. genome, and they covered 65 fungal species (Table 2). In addition, 540 genes and 511 genes were involved in 'unaffected pathogenicity' and 'reduced virulence', respectively. Only 82 genes concern the survival state of the pathogen-'lethal factor'. In particular, the number of genes strongly associated with 'pathogenicity' and 'increased virulence (hypervirulence)' was only 28 (Figure 4, Table S9).
A total of 595 carbohydrate-active enzyme (CAZyme)-coding gene homologs were identified in the Paraconiothyrium sp. genome (Table 2); among these homologs were 261 glycoside hydrolases (GHs), 124 carbohydrate-binding modules (CBMs), 64 glycosyl transferases (GTs), 45 carbohydrate esterases (CEs), 30 polysaccharide lyases (PLs), and 120 enzymes with auxiliary activities (AAs) (Table S10). GHs and AAs were the two most abundant annotated CAZyme genes. Classification of the GH family revealed that the majority of the GHs are members of the GH16 and GH28 families (Table S11). Classification of the AA family showed up to 31 members of AA3, including cellobiose dehydrogenase, glucose 1-oxidase, aryl alcohol oxidase, alcohol oxidase, and pyranose oxidase (Table S11).
Comparative Genome Analyses
Analysis of MUMmer outputs revealed that the genomes of K. rhodostoma (Pleosporales: Didymosphaeriaceae) and Paraconiothyrium sp. share 1453 syntenic blocks at the nucleotide level, accounting for approximately 584 kb of the sequence ( Figure 5, Table S11). In contrast, Paraconiothyrium sp. shares fewer regions of synteny with others ( Figure S6, Table S12). This result revealed that Paraconiothyrium sp. is more closely related to K. rhodostoma than to the other species mentioned in Section 2.9. Similar conclusions were obtained at the protein level ( Figure S6, Table S13).
Figure 5. The yellow box represents the forward chain and the blue box represents the reverse chain within the upper and following sequence regions. In the boxed sequence, the yellow region represents the nucleic acid or amino acid sequence in the forward chain of the genome sequence, and the blue region represents the nucleic acid or amino acid sequence in the reverse chain of the genome sequence. In the region between two sequences, the yellow line represents forward alignment and the blue line represents reverse complementary alignment.
The pan-genome from eight species from Pleosporales contains 44,480 genes (Table S14). The core genome consists of 3027 genes (Figure 6). Except for P. minitans, the percentage of unique genes relative to the number of coding genes was the smallest for Paraconiothyrium sp. Additionally, the proportion of dispensable genes was large among the eight species. Paraconiothyrium sp. possesses 2592 specific genes, most of which are concentrated in metabolism-related processes. The top three COG categories were 'carbohydrate transport and metabolism', 'lipid transport and metabolism', and 'secondary metabolite biosynthesis, transport, and catabolism' (Figure 7). Meanwhile, the GO annotation of unique genes revealed genes enriched in the metabolic pathways of secondary products, such as the austinol metabolic process, dehydroaustinol metabolic process, and toxin metabolic process (Table S15).
Figure 7. Comparison of COG-based specific genes in eight species. C represents "Energy production and conversion"; E represents "Amino acid transport and metabolism"; F represents "Nucleotide transport and metabolism"; G represents "Carbohydrate transport and metabolism"; H represents "Coenzyme transport and metabolism"; I represents "Lipid transport and metabolism"; J represents "Translation, ribosomal structure and biogenesis"; L represents "Replication, recombination and repair"; M represents "Cell wall/membrane/envelope biogenesis"; N represents "Cell motility"; O represents "Posttranslational modification, protein turnover, chaperones"; P represents "Inorganic ion transport and metabolism"; Q represents "Secondary metabolite biosynthesis, transport, and catabolism"; R represents "General function prediction only"; S represents "Function unknown"; T represents "Signal transduction mechanisms"; U represents "Intracellular trafficking, secretion, and vesicular transport"; V represents "Defense mechanisms"; and X represents "Mobilome: prophages, transposons".
The phylogenetic tree was constructed using L. fluviatile (Pleosporales: Lentitheciaceae) as an outgroup and eight species based on the pan-genome core genes or single-copy ortholog genes, respectively. The topology of the two trees is the same, but the divergence of bases differs slightly. The clusters composed of K. rhodostoma, P. minitans, and P. sporulosa are sister groups to Paraconiothyrium sp. (Figure S7). The species of these branches belongs to the Didymosphaeriaceae family.
Further multi-locus sequence analysis (Figure 8) showed that Paraconiothyrium sp. and Paraconiothyrium brasiliense converged upon the same branch. They clustered together with all the species from the Paraconiothyrium genus and formed one group in the phylogenetic tree.
Cytotoxicity Assay
To estimate the toxicity of materials secreted by Paraconiothyrium sp., a cytotoxicity assay was performed. The results showed only slight inhibition of the C6/36 cell line by the PDA liquid medium containing the secretions. Cell viability was over 90% at all doses of medium; survival was lowest when 6 µL of medium was added, at approximately 91.3%. We used DMSO, which is highly cytotoxic, as a positive control, and the results showed that the more DMSO was added, the lower the cell survival rate was (Figure 9).
Discussion
Paraconiothyrium is widely distributed and has a variety of host habitats [41]. It has been found living within insects: P. brasiliense was reportedly isolated from Acrida cinerea [42], and another fungus from the Paraconiothyrium genus, P. hawaiiense, was isolated from the scale insect Diaspidiotus sp. [43]. We also found ITS sequences of Paraconiothyrium in the microorganism diversity analyses in our previous study [7]. Their presence in insects suggests that they may play some role in the insect host. We isolated Paraconiothyrium sp. from E. pela and sequenced its genome in this study. Phylogenetic trees based on gene families showed that Paraconiothyrium sp. is evolutionarily close to K. rhodostoma, P. minitans, and P. sporulosa.
The genome assembly is approximately 39.55 Mb and smaller than the genomes of the relatives of Didymosphaeriaceae. For example, B. novae-zelandiae, the species with the largest genome in the phylogenetic analysis, has a genome size of 78.18 Mb, and the Paraconiothyrium sp. genome is reduced in size by nearly half. However, the genomes of obligate symbiotic bacteria range from 0.5-2 Mb. In contrast, the genomes of related free-living bacteria are approximately five times larger (usually 4-10 Mb) [44].
The P. sporulosa genome has a GC content of 53.3%, higher than that of the majority of the fungi used for phylogeny. The GC content of the Paraconiothyrium sp. genome was 51.36%, similar to that of the P. sporulosa genome. However, the GC content of the obligate symbiotic bacterial genome was much lower than that of its free-living relatives [45][46][47][48]. This suggests that Paraconiothyrium sp. represents a transition stage between endogenous fungi and obligate symbionts and that the symbiotic relationship between Paraconiothyrium sp. and E. pela is relatively casual.
By synteny analysis, we found some sequence fragments that are lacking in the Paraconiothyrium sp. genome. E. pela is a scale insect. According to Gullan, scale insects may have long ago inherited mutual relationships with a variety of microorganisms from their ancestors due to their lifestyles [11]. After establishing a symbiotic relationship with the host, the symbiont becomes host-dependent due to the loss of genes involved in some biological processes [49,50]. Although they lose many genes, symbionts retain certain genes required for nutrient synthesis that are somewhat complementary to the missing parts of the host [51,52]. Additionally, genome annotation of Paraconiothyrium sp. showed retention of an intact essential amino acid synthesis pathway and that Paraconiothyrium sp. is also capable of producing nonessential amino acids, which should compensate for the host in this regard (Figure 3). In addition, E. pela lacks the complete vitamin B synthesis pathway, as shown by the gene functional annotation of the previously reported genome sequence [2]. However, Paraconiothyrium sp. contains synthetic pathways for vitamin B group members. Paraconiothyrium sp. and E. pela may collaborate in the production of most of the vitamin B group members ( Figure S5). The above shows that Paraconiothyrium sp. plays a significant role in providing nutritional metabolic assistance to the host.
Fungi of the Paraconiothyrium genus have often been considered pathogens of plants [41,53]. They can infect the leaves of plants and cause leaf spots. They also cause human cutaneous phaeohyphomycosis [54,55]. However, they have been found living inside scale insects without causing disease, implying their adaptation to an insect host [41,42]. PHI annotation of the Paraconiothyrium sp. genome showed genes associated with reduced virulence and unaffected pathogenicity to be common in Paraconiothyrium sp. To verify this prediction, we performed a cytotoxicity assay. The Paraconiothyrium sp. liquid medium containing secretions had no significant inhibitory effect against the C6/36 cell line. In contrast, DMSO was substantially toxic to the cells. The presence of a symbiotic relationship between Paraconiothyrium sp. and E. pela can be inferred from this finding.
Evidence suggests that certain symbionts can assist the host in penetrating plant cell walls [47,48,56]. A large number of GH genes, such as GH16, GH28, GH43, and GH47, have also been found in the Paraconiothyrium sp. genome, and their main role involves the degradation of plant cell wall hemicellulose and pectin, suggesting that Paraconiothyrium sp. can assist E. pela in piercing plant tissues.
Lignocellulose-degrading fungi usually contain genes encoding lytic polysaccharide monooxygenases (LPMOs) [57], which are classified in the AA family in the CAZyme database. These enzymes are mainly involved in the depolymerization of lignin [58]. Thirty-one AA3 and 37 AA9 family genes were found in the Paraconiothyrium sp. genome. Members of the AA3 family assist other AA family enzymes through hydrogen peroxide or hydroquinone or assists glycoside hydrolases in the degradation of lignocellulose. LPMOs, which belong to the AA9 family, are involved in the degradation of various hemicelluloses from cellulose and lignocellulose, such as xyloglucan, xylan, and glucomannan [58]. A large number of AA3 and AA9 genes work together to degrade lignin by oxidation [58]. Experiments have shown that Paraconiothyrium sp. is highly abundant in the host epidermis and digestive tract. Both families are hypothesized to assist in digestion or in the invasion of plants by the insect through epidermal contact. These AAs facilitate the establishment of a symbiotic relationship between E. pela and Paraconiothyrium sp.
Supplementary Materials: The following supporting material can be downloaded at: https://www. mdpi.com/article/10.3390/genes13020338/s1, Figure S1. 15-mer analysis. X-axis is depth, and Y-axis is proportion. Theoretically, 15-mer distributions should follow a Poisson distribution. In fact, heterozygotes cause the possible appearance of other peaks at 1/2 of the main peak, and duplication causes the possible appearance of duplicate peaks near integer multiples of the main peak. Figure S2. Functional categories of the annotated genes, broadly separated into 'biological process', 'cellular component' and 'molecular function' based on Gene Ontology. X-axis indicates 45 functional GO categories. Blue boxes represent biological processes, yellow boxes represent cell composition, and orange boxes represent molecular functions. Y-axis indicates number of genes in a category. Figure S3. Cluster of orthologous groups (COG) classification of putative proteins. Y-axis indicates 22 functional COG categories. X-axis indicates number of genes in a category. Figure S4. KEGG classification of the genes. A total of 4655 genes were assigned to 383 KEGG pathways. X-axis indicates number of genes in a pathway. Y-axis indicates 46 second level pathways. Figure S5. Pathways of vitamin B synthesis involving E. pela and Paraconiothyrium sp. Red boxes and lines indicate genes and pathways unique to Paraconiothyrium sp. Green boxes and lines indicate genes and pathways unique to E. pela. Gray dashed lines and boxes indicate that neither gene or pathway is present. Figure S6. Synteny analysis between Paraconiothyrium sp. and the other species at the nucleic acid and amino acid level. Yellow box stands for forward chain and blue box stands for reverse chain within the upper and following sequence region. In the box of sequence, the yellow region stands for the nucleic acid sequence in the forward chain of this genome sequence and the blue region stands for the nucleic acid sequence in the reverse chain of this genome sequence. In the middle region of two sequences, the yellow line stands for forward alignment and the blue line stands for reverse complementary alignment. Column A is based on nucleotide level, column B is based on amino acid level. The species analyzed from top to bottom with Paraconiothyrium sp. are Bimuria novae-zelandiae, Didymosphaeria enalia, Laburnicola sp. JP-R-44, Paraphaeosphaeria sporulosa and Paraphaeosphaeria minitans. Figure S7. Phylogenetic tree in the eight species determined by the maximum likelihood method. L. fluviatile is the outgroup. (a) based on the pan-genome; (b) based on single-copy ortholog genes. The scale bar corresponds to 0.06 nucleotide substitutions per two sites; Table S1. PacBio reads statistics. Table S2. The statistics of BGISEQ based on next generation sequencing. Table S3. Three conserved sequences used in phylogenetic analyses. Table S4. Genome assembly of Paraconiothyrium sp. Table S5. Statistics of noncoding RNA in Paraconiothyrium sp. genome. Table S6. Repeat statistic in Paraconiothyrium sp. genome. Table S7. Transposons statistic in Paraconiothyrium sp. genome. Table S8. Predictions of secondary metabolite using antiSMASH. Table S9. Identification of pathogenicity genes through querying the Paraconiothyrium sp. genome with the PHI database. Table S10. Results of classification and annotation of carbohydrate enzymes. 
CBM, carbohydrate-binding module; CE, carbohydrate esterases; GH, glycoside hydrolases; GT, glycosyl transferase; PL, polysaccharide lyase; AA, auxiliary activity. Table S11. Statistics of all classifications in the CAZy database. Table S12. Statistics for synteny at the nucleic acid level. Scaffold represents the 10 scaffolds assembled for Paraconiothyrium sp.; Length represents the length of each scaffold; Map number and Map length represent, respectively, the number and length of regions of the corresponding species genome that map to each Paraconiothyrium sp. scaffold. Table S13. Statistics for synteny at the amino acid level. Aligned indicates the number of Paraconiothyrium sp. genes matched in the corresponding species. Target percent (%) represents the number of matched genes as a percentage of the Paraconiothyrium sp. total. Query percent (%) represents the number of matched genes of the corresponding species as a percentage of its own total number of genes. Table S14. The statistics of the pan-genome.
Data Availability Statement:
The data is available from GenBank repository with accession number JAJUBI000000000, and BioProject number PRJNA791143. | 2022-02-16T16:26:43.745Z | 2022-02-01T00:00:00.000 | {
"year": 2022,
"sha1": "54255ad55f903d99824c2df75b47ff91ab5f756a",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2073-4425/13/2/338/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "9199fa5a6f2d9b9834dff9eabc7a1adaa53c9fbe",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
216267945 | pes2o/s2orc | v3-fos-license | Multi-Modes Control for Semi-Active Suspension Systems
The aim of this work is to design and analyze a multi-modes semi-active suspension using a Continuously Variable Damper (CVD). A modeling approach for the CVD is presented, and three suspension modes are developed. The studied damper can achieve different suspension modes by controlling the actuation force, which makes its damping adjustment more efficient. By applying an appropriate control strategy to the damper, based on minimizing the quadratic gap between the control actuation force for each mode and a control target, satisfaction of both ride comfort and driving safety can be realized. The control target is synthesized using the CRONE-SkyHook approach. Performances of the proposed method are validated through a speed bump profile and a real measured road profile.
INTRODUCTION
Vehicle suspension should be able to ensure different performances on the aspects of vibratory isolation, road handling and ride comfort (Hamrouni et al. (2019)). Based on the external power input, vehicle suspension can be classified on three categories: passive, semi-active and active. Semi-active suspensions have been extensively studied because of their high performances when compared to the passive ones, and the low energy consumption when compared to active ones (Jialing et al. (2006)).
Since 1980, several studies have been conducted to develop semi-active suspension using Continuously Variable Dampers (CVD) (Ivers and Miller (1989); Heo and Son (2003)). A CVD can yield a variable damping force at a given damper velocity. Thus, it is possible to satisfy both driving comfort and driving safety by adopting an appropriate implementable algorithm. Different types of CVD, such as electromagnetic actuators (Gysen et al. (2010)), pneumatic actuators (Bouvin et al. (2017)), and hydropneumatic actuators (Rizzo et al. (2009)), are often considered as the actuator of the damping adjustment. Many existing methods have focused on developing a practical, implementable and lower-cost control scheme (Liu et al. (2019)). These methods include SkyHook control, Groundhook control, Hybrid control, PID control, Fuzzy logic control, and others. The most implemented controller from a commercial point of view is the SkyHook damping concept. (*This work took place in the framework of the OpenLab 'Electronics and Systems for Automotive' combining IMS laboratory and PSA Groupe company.)
Although CVD can realize greatly adjustable characteristics, a small damping variation doesn't significantly influence the control performances of a suspension system (Chen et al. (2013); Sun et al. (2017)). Therefore, a CVD with multiple damping modes achieved by a simple control strategy and a reliable regulation mechanism may be more reliable and efficient (Wei and Zhiqiang (2019)).
Compared with the conventional semi-active suspension using a CVD, the aim of this study is to design a new semi-active suspension with different damping modes. Each suspension mode ensures either vibratory isolation or chassis holding. By simply comparing the damper efforts of each mode with a target one, multi-mode switching damping characteristics are achieved. A Robust Control Frequency Synthesis (CRONE) technique (Moreau et al. (1999)), combined with the SkyHook one, is used to synthesize the target mode that satisfies both vibratory isolation and chassis holding.
The remainder of this paper is organized as follows. Section 2 details the analysis and design scheme for a quarter-car vehicle suspension model. Section 3 presents the design methodology of the three suspension modes. The CRONE-SkyHook approach used to design the target effort for the semi-active damper, and the switching control strategy, are described in section 4. In section 5, simulation results are presented to show the performance of the proposed switching strategy. Finally, conclusions are given in section 6.
Analysis and design method
Results presented in this paper are part of the hierarchical chassis control of an autonomous vehicle for which the operating domain is divided into three sub-domains: lowfrequency comfort, road behavior and active safety in emergency situations (Bouvin et al. (2018)). This study concerns the low-frequency comfort domain which corresponds to the range [0.5, 5.5] Hz for vertical dynamics, and for horizontal dynamics to longitudinal and transverse accelerations lower than |0.4| g. The vertical dynamics has no significant influence on the horizontal dynamics in this field. A two degrees of freedom (2-DOF) quarter vehicle model instead of the 14-DOF can then be used in the exploratory phase. Note that we dispose of a 14-DOF vehicle model operating in MATLAB/Amesim cosimulation and used for validation before implantation and vehicle test.
A full car model can be simplified to the well known quarter vehicle model. Fig. 1 shows the quarter-car model with 2-DOF, reflecting the Chassis/Road transfer in the range of [0.5, 5.5]Hz, where m 1 is the unsprung mass, z 1 (t) is its displacement and v 1 (t) its velocity, m 2 is the sprung mass, z 2 (t) is its displacement and v 2 (t) its velocity, k 1 is the vertical stiffness of the tire, k 2 is the stiffness coefficient of the suspension, b 1 is the damping coefficient of the tire, b 2 is the damping coefficient of the suspension. f 0 (t) and v 0 (t) are respectively the load transfer and the road vertical excitation applied to the quarter-vehicle. where The admittance A 2 (s) between F Σ2 (s) [N ], the algebraic sum of the forces applied to m 2 , and the velocity V 2 [m/s], is expressed by The open loop transfer function β(s), expressed by the product of the impedance of the suspension and the admittance of the sprung mass A 2 (s) is After development, the transfer function β(s) can be written in the following form: where The transfer function between the suspension effort F s (s) and the sprung force Using the Laplace variable s equals jω, the gain and the phase of D s (jω) in the frequency domain are expressed respectively by Add to that, the transfer in absolute transmission T 21 (s) Then, to specify the resonance peak Q 2 of the frequency response of T 21 (jω), a phase margin should be imposed to the transfer β(jω).
The phase margin M φ calculated at the proper angular frequency w n2 of the sprung mass m 2 is defined as follows: Let the phase and knowing that it results If the requirement resonant peak is chosen to be Q 2 ≤ 2, the phase margin M φ should be M φ > 40 • .
So, the vehicle holding is directly linked to the phase lead provided by D s (jω) in the vicinity of ω n2 .
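For readers who want to reproduce the qualitative behavior of the 2-DOF quarter-car model discussed above, the sketch below integrates a generic textbook formulation, not the authors' CRONE derivation: the parameter values, road input and sign conventions are placeholders rather than the Table 1 values, and the load transfer f0(t) is set to zero.

```python
import math
from scipy.integrate import solve_ivp

# Placeholder parameters (not the Table 1 values of the paper)
m1, m2 = 40.0, 350.0      # unsprung / sprung mass [kg]
k1, k2 = 200e3, 25e3      # tire / suspension stiffness [N/m]
b1, b2 = 100.0, 1500.0    # tire / suspension damping [N.s/m]

def road(t):
    """Hypothetical road input z0(t): a 1 cm versine bump crossed between 1.0 s and 1.5 s."""
    if 1.0 <= t <= 1.5:
        return 0.005 * (1.0 - math.cos(2.0 * math.pi * (t - 1.0) / 0.5))
    return 0.0

def road_rate(t, dt=1e-4):
    """Numerical derivative of the road input, v0(t)."""
    return (road(t + dt) - road(t)) / dt

def quarter_car(t, state):
    """state = [z1, v1, z2, v2]; generic 2-DOF quarter-car equations of motion."""
    z1, v1, z2, v2 = state
    f_susp = k2 * (z2 - z1) + b2 * (v2 - v1)                 # suspension force
    f_tire = k1 * (z1 - road(t)) + b1 * (v1 - road_rate(t))  # tire force
    a1 = (f_susp - f_tire) / m1   # unsprung mass acceleration
    a2 = -f_susp / m2             # sprung mass acceleration (f0 set to zero)
    return [v1, a1, v2, a2]

solution = solve_ivp(quarter_car, (0.0, 5.0), [0.0, 0.0, 0.0, 0.0], max_step=1e-3)
print("peak chassis displacement [m]:", max(abs(solution.y[2])))
```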
Metallic suspension
The metallic suspension is taken as a benchmarking system in this study. The relations between the different variables should therefore be defined in terms of transfer functions.
The transfer function between Z 2 (s) and Z 0 (s) is where The transfer function Z 1 /Z 0 is defined by The transfer function Z 2 /F 0 is equal to Parametric variables of the metallic suspension are chosen to be those of Citroën C4 Picasso (Table 1). The viscous damping coefficient b 2 is fixed to obtain a resonance factor Q 2 = 2. Note that the damping of the wheel dynamic is defined by and the damping of the chassis dynamic is 3. CONTROL OF MULTI-MODES SEMI-ACTIVE SUSPENSION The studied suspension system is composed of passive metallic spring damper units associated to actuators enabling the vehicle high regulation. b 20 is the damping coefficient of the passive damper and F a is the controlled actuation force (Fig. 3-Functional Level).
The suspension force is then described by the following equation
Degraded mode suspension
For passive suspension, the actuation F a equals zero and hence the force suspension (21) is equal to This mode is dimensioned to ensure a minimal wheel holding. For that, b 20 should be computed to guarantee a maximal resonance factor Q 1 = 3 of the frequency response Z 1 /Z 0 of the tyre, and therefore a minimum damping factor ξ 1 = 0.15.
For the passive suspension design, a high damping coefficient b 20 is required to attenuate the peak of the sprung mass resonance. However, high frequency attenuation is degraded in this case.
Multi-modes semi-active suspension
From a component point of view, the quarter-car semi-active suspension system can be modeled as described in Fig. 3.
Fig. 3. From Functional level to Component level.
According to the concrete structure of the damper, three modes of the damper, based on the control actuation force, can be summarized:
• 'Mode 1' is designed to ensure vibration isolation performances;
• 'Mode 2' is equivalent to the metallic suspension;
• 'Mode 3' is modeled to ensure chassis holding.
The equivalent expressions of the control actuation force f a for each mode are as follows The domain of definition of each mode is detailed in Fig. 4.
As expected, none of the architectures can ensure the best chassis holding and the best vibratory isolation at the same time. However, each mode (from 1 to 3) has a frequency range where it provides better performances, as shown in Fig. 5.
CRONE-SkyHook based target effort
In this section, the aim is to synthesize a target mode which can improve both chassis holding and vibratory isolation in a very significant way. For that, a CRONE-SkyHook strategy is used. An additional controller C csh (s) is added to the quarter-car model (Fig. 6). The impedance of the additional system is where n ∈ R + and b csh is the control parameter to be synthesized. In order to facilitate comparison with the traditional SkyHook method, the order n is set to zero. C csh (s) is chosen to be a low pass filter in order to act for chassis holding and limit the action of b csh in low frequency.
Then, the modified impedance of the sprung mass becomes By supposing n = 0, the open-loop transfer β (s) is expressed by where Supposing that ω 20 = ω 3 , (30) then β (s) becomes and the new transfer in absolute transmission T 21 (s) becomes From (28), (30) and (31), it results Finally, the target effort designed by the CRONE-SkyHook strategy is Remark 1: Note that this result can only be achieved with an active device (which is not the case here). The interest therefore lies in the simplicity of the calculation of the target effort f a csh (t) which leads to a Chassis/Road frequency response which is sufficiently discriminating compared to the frequency responses of each of the three modes (Fig. 7).
Decision criterion
The choice of the specified mode, and hence of the control, can be done by minimizing the quadratic difference between a target effort and the predicted effort for each suspension mode. It is therefore sufficient to switch at each time step to the mode which has the smallest gap with the target effort. Thus, the gap G between the target effort and the predicted efforts is defined as follows, where f atarget = f a csh is synthesized with the CRONE-SkyHook strategy.
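The switching rule can be sketched as below: at each time step, the quadratic gap between each mode's predicted actuation force and the CRONE-SkyHook target is evaluated, and the mode with the smallest gap is selected. The force values are hypothetical, and the squared-difference form of the gap follows the "quadratic difference" wording above.

```python
def select_mode(predicted_forces, target_force):
    """Return the mode index whose predicted actuation force minimizes
    the quadratic gap G_i = (f_a_i - f_a_target)**2."""
    gaps = {mode: (force - target_force) ** 2
            for mode, force in predicted_forces.items()}
    return min(gaps, key=gaps.get)

if __name__ == "__main__":
    # hypothetical predicted efforts [N] for modes 1..3 and a target effort
    predicted = {1: -120.0, 2: -260.0, 3: -410.0}
    print(select_mode(predicted, target_force=-380.0))  # -> 3
```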
Remark 2: The dynamics of the actuator is an important factor that affects the operation performance of the multimodes semi-active suspension system. To improve the control performance of the proposed control scheme, the dynamic of the actuator is modeled as a second order filter with 60ms of response time.
Isolated Obstacle
Simulation in time domain were carried out for a vehicle driving over a speed bump at V x = 15km/h. The geometric profile of the obstacle is defined as follows where x 0 = 5m is the position of the first inflection point, L = 30m is length between the two inflection points, h = 0.01m is the height of the obstacle and α = 10 • is the approach angle at the inflection point.
The isolated obstacle is used to check the coherence of the temporal results with the frequency results. Thus, the length of the obstacle is defined in such a way that the transitional regime of the climb phase is completed before going on to the transitional regime of the descent phase.
Thus, the duality between time and frequency domain can be checked. Fig. 8 shows the dynamic of the chassis displacement for the whole length of the obstacle, where z 0 is the speed bump profile, z 2i , i = 1, 2, 3 are the body displacements related to the three suspension modes, z 2 csh is the body displacement issued from the use of the CRONE-SkyHook controller, and z 2 is the resulting body displacement using the switched control scheme. The zoom in the transitional regime of the climb phase shows that, in the first steps, the vibratory isolation is improved. Then, a reduction of the first overshoot and a decrease of the oscillatory nature of the response z 2 (t) are obtained with the piloted suspension, and thus chassis holding is improved. Note that the piloted suspension is chosen to be initialized with 'mode 2'. Fig. 9 presents the target effort f a csh , the resulting effort f a , as well as the efforts of the three suspension modes, which are predicted at each instant and then used for the decision test. It can be seen from Fig. 9 that the resulting f a is acceptable compared to the predicted ones. The analysis of the cumulative time per mode shows that the third mode of suspension, ensuring chassis holding, is predominant for this type of obstacle.
Road profile
A stochastic road excitation is used to validate the performances of the proposed multi-modes semi-active suspension using a CVD. The road profile used was measured at a constant vehicle speed of 60 km/h. Fig. 10 shows the controlled vertical displacement z 2 (t) compared to the measured road z 0 (t). A performance indicator based on the Root Mean Square (RMS) can be calculated for the entire duration of the profile as J = RMS[z 2 (t)] / RMS[z 0 (t)], where RMS[z 2 (t)] is the root mean square of the vertical displacement z 2 (t), calculated for the benchmarking metallic suspension and for the controlled suspension, and then normalized by RMS[z 0 (t)] (the root mean square of the measured road z 0 (t)).
Such a criterion can be interpreted as follows: • J > 1 → an amplified transmission; • J = 1 → a neutral transmission; • J < 1 → an attenuated transmission.
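A sketch of this indicator is given below; the displacement samples are hypothetical and serve only to show the computation J = RMS[z2]/RMS[z0].

```python
import math

def rms(signal):
    """Root mean square of a sampled signal."""
    return math.sqrt(sum(x * x for x in signal) / len(signal))

def performance_indicator(z2, z0):
    """J = RMS[z2(t)] / RMS[z0(t)]: J < 1 means attenuated transmission,
    J = 1 neutral transmission, J > 1 amplified transmission."""
    return rms(z2) / rms(z0)

if __name__ == "__main__":
    z0 = [0.010, -0.006, 0.012, -0.009]   # hypothetical road samples [m]
    z2 = [0.008, -0.005, 0.011, -0.008]   # hypothetical body displacement [m]
    print(round(performance_indicator(z2, z0), 3))
```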
The value of the performance indicator for this road profile is J = 0.9652. Thus, the piloted suspension has improved the vibratory isolation. The analysis of the histogram of commutations for the measured road profile shows that 'mode 1' is dominant. Thus, the vibratory isolation is predominant during this road profile.
CONCLUSION
In this study, a continuously variable damper with three suspension modes has been proposed for semi-active vehicle suspension. By simply controlling the actuating force, the damper can achieve effectively adjustable damping and meet the set requirements as ride comfort and driving safety. The control strategy is based on the minimization of the quadratic difference between the control actuation force for the three suspension modes, and a target developed by a new CRONE-SkyHook technique. Numerical simulations are used to validate the performance of the proposed semi-active suspension. Results show that the control strategy can effectively improve vehicle ride comfort and safety performances. Future works will focus on road behavior and active safety in emergency situations of autonomous vehicle. | 2020-04-27T16:22:59.808Z | 2020-01-01T00:00:00.000 | {
"year": 2020,
"sha1": "e2ee3aa3881920e72b9830cd72081064456f1a95",
"oa_license": null,
"oa_url": "https://doi.org/10.1016/j.ifacol.2020.12.1405",
"oa_status": "GOLD",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "201aa32c4216b15d81632c31f9f70d693f984745",
"s2fieldsofstudy": [
"Engineering"
],
"extfieldsofstudy": [
"Computer Science"
]
} |
264048255 | pes2o/s2orc | v3-fos-license | The Relationship of Formula Milk Promotion with The Intention of Exclusive Breast Milk in Babies Aged 0-6 Months in The Work Area of The Tering Health Center West Kutai District in 2022
Introduction : Breast milk is an ideal nutrition for babies which contains the most suitable nutrients for the baby's needs and contains a set of protective substances to fight disease. The success of exclusive breastfeeding is influenced by a mother's intention to give exclusive breastfeeding, but currently the influence of formula milk promotion is very large and affects the coverage of exclusive breastfeeding. Objective : to determine the relationship between promotion of formula milk and the intention of exclusive breastfeeding in infants aged 0-6 months. Methods : This type of research is descriptive analytic with a cross sectional design. Result and Discussion : There is a relationship between the promotion of formula milk and the mother's intention to provide exclusive breastfeeding to infants aged 0-6 months in the Working Area of the Tering Public Health Center in West Kutai Regency in 2022 with a p value of 0.000 and OR = 28.500. Promotion of formula milk that is carried out on a large scale can affect the mother's intention to give exclusive breastfeeding because the promotion of formula milk gives a wrong understanding that formula milk is as good as exclusive breastfeeding. Conclusion : promotion of formula milk can affect the intention of exclusive breastfeeding in infants aged 0-6 months.
Introduction
Breast milk is an ideal nutrient for babies that contains the nutrients best suited to the baby's needs and contains a set of protective substances to combat disease. The first two years of a child's life are very important, as optimal nutrition during this period decreases morbidity and mortality, reduces the risk of chronic diseases, and encourages better development throughout life (WHO, 2018).
The low coverage of exclusive breastfeeding has an impact, especially on the health of the baby.Research published in the European Respiratory Journal states that children who have never been breastfed have a risk of respiratory and digestive disorders in the first four years of life.Compared to infants who received breast milk for 6 months or more, and non-exclusive breastfeeding contributed 11.6% in mortality of children under the age of 5 years (Maryunani, 2018).
The lack of maximum exclusive breastfeeding is supported by the results that the percentage of mothers who breastfeed babies continues to decline as the baby ages.Results from the Turkey Demographic and Health Survey (TDHS) showed that although 58% of babies in the study were exclusively breastfed in the first and second months of life, the percentage decreased to 10% only in the next fourth and fifth month (Hacettepe University Institute of Population Studies, 2019).
The success of exclusive breastfeeding is influenced by a mother's intention to give exclusive breastfeeding. Factors that influence the intention of pregnant women to provide exclusive breastfeeding are gestational age, social norms, the mother's work, maternal motivation, promotion of formula milk and the mother's breastfeeding experience (Jatmika et al., 2019).
The theory of reasoned action states that behavior is an action that arises due to the intention that a person has. Intention (the intention to perform a behavior) is the transition from a person's beliefs to a desired action. Intention appears after a positive attitude and normative support from the surrounding environment to carry out a behavior exist. A person's intentions in the theory of reasoned action are influenced by the subjective attitudes and norms he or she has and believes in. Attitude towards behavior is influenced by behavioral beliefs and the evaluation of behavioral outcomes. Subjective norms are influenced by normative beliefs and motivation to comply (Glanz and Viswanath, 2018). The level of intention of pregnant women to give exclusive breastfeeding will have an impact on the level of exclusive breastfeeding coverage.
A factor affecting exclusive breastfeeding is the promotion of infant formula. Rahmawati & Arti (2018) stated that it is currently difficult to avoid the promotion of formula milk; the ease of social media makes it a means of promotion for various products, including advertisements for formula milk, both directly and indirectly. The results of a study by Ney et al. (2019) showed that 75% of respondents who were interested in formula milk advertising had no intention of exclusive breastfeeding. This demonstrates the magnitude of the impact of advertising on a person's interest in a product, with advertising used to attract consumers to buy a product. This can increase the negative impact on optimizing exclusive breastfeeding coverage by changing the mother's perspective, her intention to breastfeed exclusively, and her confidence in exclusive breastfeeding (Hansen et al., 2018).
In a preliminary study, the researchers interviewed 10 mothers of babies aged 0-6 months; 7 of them did not breastfeed exclusively because, they said, their babies had already been given formula milk and because they were unsure that their breast milk was enough for the baby. In the researchers' view, these mothers had weak intention to breastfeed exclusively: they were able to breastfeed their babies exclusively but preferred to give formula milk.
Method
This research used a quantitative approach with a cross-sectional design. The population was all 66 mothers who had babies aged 0-6 months in the Tering Puskesmas (health center) work area, West Kutai Regency. Total sampling was used, giving 66 respondents. The research instrument was a questionnaire. Data were analyzed with univariate analysis (frequency distributions) and bivariate analysis using the chi-square test.
Results and Discussion
Based on Table 1, out of 66 respondents, most were aged between 20 and 35 years (36 people, 54.5%), most had a junior high school education (34 people, 51.5%), most were housewives (51 people, 77.3%), and most were multiparous (36 people, 54.5%). Based on Table 2, most respondents were affected by the promotion of formula milk (42 people, 63.6%), while 24 people (36.4%) were not affected. Based on Table 3, most respondents had a weak intention to breastfeed exclusively (44 people, 66.7%), while 22 people (33.3%) had a strong intention. The relationship between the promotion of formula milk and the intention of exclusive breastfeeding was then analyzed using the chi-square test at a significance level of α = 5% with df = (2-1)(2-1) = 1, comparing the calculated statistic with the critical χ² value.
Bivariate Analysis
The critical χ² value from the table at df = 1 is 3.841, while the calculated value was χ²count = 26.592 > χ²table = 3.841. The probability value (p-value) of 0.000 < α = 0.05 means that H0 is rejected; there is therefore a relationship between the promotion of formula milk and the intention of exclusive breastfeeding in infants aged 0-6 months in the Working Area of the Tering Health Center, West Kutai Regency, in 2022.
Odds ratio (OR) analysis gave a value of 28.500, which means that mothers who are affected by formula milk promotion have 28.5 times greater odds of having a weak intention to breastfeed exclusively compared with mothers who are not affected by the promotion of formula milk.
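For readers who want to reproduce this bivariate analysis, the following is a minimal Python sketch. The individual 2×2 cell counts are not reported in the text, so the counts used here are an assumed arrangement chosen only to be consistent with the reported marginals (42 affected / 24 not affected by promotion; 44 weak / 22 strong intention) and the reported OR of 28.5; under that assumption the Yates-corrected chi-square comes out close to the reported 26.592.

```python
# Minimal sketch (not from the paper): chi-square test and odds ratio for a 2x2 table.
# The cell counts below are ASSUMED; only the marginals and OR are reported in the text.
from scipy.stats import chi2_contingency

#              weak intention   strong intention
observed = [[38,               4],    # affected by formula-milk promotion
            [6,                18]]   # not affected by the promotion

# chi2_contingency applies Yates' continuity correction by default for 2x2 tables
chi2, p_value, dof, expected = chi2_contingency(observed)

# Odds ratio from the cross-product of the 2x2 table
odds_ratio = (observed[0][0] * observed[1][1]) / (observed[0][1] * observed[1][0])

print(f"chi-square = {chi2:.3f} (df = {dof}), p = {p_value:.5f}")
print(f"odds ratio = {odds_ratio:.1f}")
# With these assumed counts: chi-square ~ 26.6 (> 3.841 critical value at df = 1),
# p < 0.001, and OR = 28.5, matching the decision reported above.
```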
Discussion
a. Age
The results showed that most respondents (54.5%) were aged between 20 and 35 years, indicating that most mothers with babies aged 0-6 months are within the healthy reproductive age range of 20-35 years. Age underlies a person's mindset and perspective: the more mature a person's age, the more logical and mature their mindset should be (Wulan & Hasibuan, 2020).
Age can affect a person's way of thinking, acting, and feeling; emotions are generally more stable at a more mature age than at a younger one. The mother's age affects her emotional readiness: a mother who is too young when pregnant may be physiologically and psychologically unprepared to become a mother, which can affect both pregnancy and parenting (Hurlock, 2018). Age also affects how breastfeeding mothers make decisions about exclusive breastfeeding, since the older they get, the more experience and knowledge they gain (Notoatmodjo, 2018).
b. Education
The results showed that most respondents (51.5%) had a junior high school education, indicating that most mothers with babies aged 0-6 months have a low level of education.
The mother's level of education and her access to mass media also influence decision making: the higher the education, the greater the likelihood of exclusive breastfeeding. Conversely, media access affects breastfeeding negatively, in that the greater the mother's access to the media, the higher the chance of not breastfeeding exclusively (Abdullah et al., 2020).
c. Work
The results showed that most respondents (77.3%) did not work outside the home and were housewives. This suggests that most mothers have more time at home to breastfeed exclusively because they are not tied to formal work.
One of the most common reasons mothers do not breastfeed is that they have to work. Women of childbearing age are often working, so how to care for a baby is a recurring problem. Work does not only mean paid work done in an office; for people in rural areas it can also mean working in the fields (King, 2016, in Liliana et al., 2017).
World Breastfeeding Week in 1993 was commemorated with the theme "Mother-Friendly Workplace", reflecting worldwide attention to the dual role of breastfeeding and working mothers. One of the policies and strategies of the Ministry of Health of the Republic of Indonesia concerning the Improvement of Breastfeeding (PP-ASI) for female workers is to provide workplace facilities that support mothers who breastfeed while working, such as lactation rooms, equipment for expressing and storing breast milk, breastfeeding counseling materials, and counseling services (Pertiwi & Suyatno, 2017).
d. Parity
The results showed that most respondents (54.5%) were multiparous, with 2-4 children. This indicates favorable parity, since the pregnancies considered at risk are the first pregnancy and pregnancies beyond the fourth. Suradi (2007), in Handayani & Rustiana (2020), notes that one of the factors influencing breastfeeding is the mother's characteristics, namely her breastfeeding experience. Differences in the number of children affect the mother's breastfeeding experience: a mother who breastfed successfully after a previous birth will find it easier and be more confident that she can breastfeed after the next birth, whereas a young mother with her first child will find breastfeeding difficult (Solihah, 2010, in Handayani & Rustiana, 2020).
Formula Milk Promotion
Based on the results of the study, most mothers with babies aged 0-6 months were affected by the promotion of formula milk (42 people, 63.6%), which indicates that many mothers still have more confidence in formula milk than in breast milk.
Promotion of formula milk comprises the various activities carried out by producers to communicate the benefits of formula milk products as breast milk substitutes, with the aim of persuading and reminding target consumers to buy those products (Kotler, 2017).
Improvements in communication and transportation that make advertising and distribution of artificial milk (formula milk) easier have led to a shift from breastfeeding to formula feeding in both rural and urban areas. Advertisements claiming that a manufacturer's milk is as good as breast milk can shake a mother's confidence and make her interested in trying formula milk. The earlier formula is introduced, the weaker the baby's suckling becomes: because the baby easily feels full, it becomes reluctant to suckle at the nipple, and the production of prolactin and oxytocin consequently decreases. The number of formula milk advertisements is increasing, including the distribution of brochures. Yet even the most expensive or supposedly best formula milk will never match the quality of breast milk; breast milk contains many useful substances that babies need for growth, development, and intelligence that formula milk does not have, which is why the Ministry of Health and the World Health Organization (WHO) emphasize the importance of exclusive breastfeeding for at least the first six months (Ministry of Health of the Republic of Indonesia, 2017). This finding is consistent with research by Dewi (2021), who reported that as many as 80% of mothers with babies aged 0-6 months were interested in the promotion of formula milk and considered formula milk as good as breast milk.
In the researchers' view, many mothers remain affected by the promotion of formula milk because the assumption that fat children are healthy children is still widespread; many mothers give formula milk in the hope that the baby will quickly become big and fat, even though a child made overweight by formula milk can face health problems later in life.
Exclusive Breastfeeding Intentions
The results showed that most respondents had a weak intention to breastfeed exclusively, suggesting that these mothers lack a strong desire to give only breast milk for six months without additional food.
Intention is an indication of a person's readiness to perform a certain behavior and is considered a direct determinant of the behavior's appearance. Intention is formed from attitudes toward the behavior, subjective norms, and perceived behavioral control, with each of these predictors carrying its own weight in relation to the behavior of interest (Ajzen, 2005, in Azwar, 2019).
In the researchers' view, mothers' commitment to giving only breast milk until the baby is six months old remains weak because of a lack of family support, such as parents or husbands encouraging mothers to give only breast milk without formula milk or other foods; in fact, parents sometimes advise mothers to add formula milk to meet the baby's needs, a problem often faced by mothers of babies aged 0-6 months.
The Relationship Between Formula Milk Promotion and Exclusive Breastfeeding Intentions
The results showed a significant relationship between the promotion of formula milk and the intention to breastfeed exclusively (p-value 0.000 < α 0.05). Mothers affected by the promotion of formula milk have 28.5 times greater odds of having a weak intention to breastfeed exclusively compared with mothers who are not affected by the promotion of formula milk.
Mothers who have confidence in breastfeeding will be better prepared to face breastfeeding problems. Maternal self-confidence is, however, related to several dimensions, including maternal health status, occupation, knowledge about breastfeeding, culture, education, and career. Maternal appreciation of breastfeeding and perception of its benefits increase the mother's intention to breastfeed exclusively (Hamilton et al., 2018).
One factor affecting exclusive breastfeeding is the promotion of infant formula. Rahmawati & Arti (2018) stated that it is currently difficult to avoid the promotion of formula milk; the ease of social media makes it a means of promotion for various products, including advertisements for formula milk, both direct and indirect. A study by Ney et al. (2019) found that 75% of respondents who were interested in formula milk advertising had no intention of breastfeeding exclusively, demonstrating the magnitude of the impact advertising has in attracting consumers to buy a product. Such promotion can undermine efforts to optimize exclusive breastfeeding coverage by changing the mother's perspective, her intention to breastfeed exclusively, and her confidence in exclusive breastfeeding (Hansen et al., 2018).
Currently, the promotion of formula milk is carried out on a large scale and through various media, including healthcare facilities. Formula milk is promoted through advertising and other print media, and manufacturers are pursuing more worrying marketing methods, namely direct marketing to mothers, to health facilities, or through health workers such as midwives and doctors (Kotler, 2017).
Such promotion violates the Decree of the Minister of Health of the Republic of Indonesia Number 237/Menkes/SK/IV/1997 concerning the Marketing of Breast Milk Substitutes, which states that health services may not be used for formula milk promotion activities or for providing and receiving samples of infant formula and follow-on infant formula for routine or research purposes.
In the researchers' view, the vigorous promotion of formula milk in the community has reduced the coverage of successful exclusive breastfeeding because mothers have more confidence in the content of formula milk than in breast milk. This can happen partly because of the low level of public education, which makes people easily influenced by information received through the mass media without first verifying it with health workers.
Conclusion
Most mothers of babies aged 0-6 months in the Tering Puskesmas working area, West Kutai Regency, in 2022 had a weak intention to breastfeed exclusively (44 people, 66.7%). Most of these mothers were affected by the promotion of formula milk (42 people, 63.6%). There is a relationship between the promotion of formula milk and the mother's intention to breastfeed exclusively among babies aged 0-6 months in the Working Area of the Tering Health Center, West Kutai Regency, in 2022, with a p-value of 0.000 and an OR of 28.500.
Table 1
Distribution of Age Frequency of Mothers who have babies aged 0-6 months
Table 2
Distribution of Frequency of Promotion of Formula Milk for Mothers who have Infants Aged 0-6 months
Table 3
Distribution of Frequency of Intention to Exclusive Breastfeeding in Mothers who have babies aged 0-6 months
Table 4
Crosstab of the Relationship of Promotion of formula milk with the intention of exclusive breastfeeding in mothers who have babies aged 0-6 months in 2022 | 2023-10-14T15:58:46.614Z | 2023-02-20T00:00:00.000 | {
"year": 2023,
"sha1": "f5299f3d75df649e40e268732b9808bfb856a994",
"oa_license": "CCBYSA",
"oa_url": "https://kesans.rifainstitute.com/index.php/kesans/article/download/137/172",
"oa_status": "HYBRID",
"pdf_src": "Anansi",
"pdf_hash": "286129ac7500d45e7aa419c5c973deb2e77b01f4",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": []
} |
267065269 | pes2o/s2orc | v3-fos-license | The Apelin/APJ System: A Potential Therapeutic Target for Sepsis
Abstract Apelin is the native ligand for the G protein-coupled receptor APJ. Numerous studies have demonstrated that the Apelin/APJ system has positive inotropic, anti-inflammatory, and anti-apoptotic effects and regulates fluid homeostasis. The Apelin/APJ system has been demonstrated to play a protective role in sepsis and may serve as a promising therapeutic target for the treatment of sepsis. Better understanding of the mechanisms of the effects of the Apelin/APJ system will aid in the development of novel drugs for the treatment of sepsis. In this review, we provide a brief overview of the physiological role of the Apelin/APJ system and its role in sepsis.
Introduction
Sepsis is a condition in which the body's response to infection becomes uncontrolled. It can lead to life-threatening organ dysfunction and is one of the main causes of death for patients in the intensive care unit.1,2 Fluid resuscitation and vasopressors are the cornerstones of sepsis treatment. The major causes of circulatory failure and of failed interventions in septic shock are microvascular leakage, unresponsiveness of arteries to vasopressors, and myocardial injury. Despite the implementation of organ support measures such as fluid resuscitation, vasoactive drugs, inotropic agents, mechanical ventilation, and hemodialysis, the overall prognosis of sepsis has not improved.3 An effective strategy for the prevention and treatment of sepsis is lacking.
APJ is a G protein-coupled receptor (GPCR) that is structurally similar to the angiotensin II type 1 receptor.4 The first identified endogenous ligand for APJ was Apelin, which was extracted from bovine stomach.5 In 2013, a second ligand, Elabela (ELA), was discovered.6 In rodents and humans, the Apelin/APJ system is widely expressed in many organs and tissues, including the heart, kidney, brain, lung, stomach, blood vessels, spinal cord, endothelium, and adipose tissue. Because the level of Apelin in plasma is lower than that in tissues,7,8 and APJ and Apelin are expressed in similar locations, Apelin may not be a major circulating hormone, and its secretion mechanism is probably paracrine or autocrine. The main subtypes of Apelin are Apelin-36, Apelin-17, and Apelin-13.9 Apelin-13 has the highest biological activity of the subtypes, exerts a variety of effects, and is the dominant form in the human circulation.10 The major subtypes of ELA are ELA-32, ELA-21, and ELA-11.11 ELA plays an important role in heart development: zebrafish embryos lacking ELA died early because of a weak or absent heart, and the addition of ELA reversed these abnormalities,6 while mice lacking ELA exhibited abnormal heart development and embryonic death.12 ELA is present in human plasma, but its expression in human tissues is not fully characterized.13 The Apelin/APJ system is involved in regulating physiological processes such as myocardial contractility and vascular tone, body fluid homeostasis, renal function, the inflammatory response, and energy metabolism.
The extensive physiological effects of the Apelin/APJ system are closely related to many diseases such as cancer, heart failure, hypertension, atherosclerosis, diabetes, and neurological diseases.Numerous trials have demonstrated the beneficial impact of the Apelin/APJ system on sepsis, especially in improving hemodynamic disturbances and regulating fluid balance, indicating the Apelin/APJ system may represent a novel therapeutic target for sepsis.Research on the beneficial role of the Apelin/APJ system in various disorders has led to the development of various analogs of Apelin and ELA.Here we review the recent literature on the roles and mechanisms of the Apelin/APJ system in relation to its protective effects against sepsis.We also discuss the agonists and antagonists of APJ and their potential in sepsis treatment.
Physiological Roles of the Apelin/APJ System in the Cardiovascular System and the Kidney
The Apelin/APJ system is involved in many physiological processes (Figure 1).In this review, we focus our discussion on its functions in cardiovascular homeostasis and the kidney.These effects, including regulatory effects on vascular tone, myocardial contractility, anticoagulation and humoral balance, may represent potential therapeutic value for sepsis.
The regulation of vascular tone by the Apelin/APJ system depends on the effector cell type. In endothelial cells, the Apelin/APJ system promotes vasodilation through the NO/L-arginine system, which is activated by NOS phosphorylation in a rapid, transient, and dose-dependent manner.14,15 In vascular smooth muscle cells (VSMCs), Apelin promotes vasoconstriction through phosphorylation of the myosin light chain (MLC), also in a dose-dependent manner.16 ELA also promotes vasodilation, but via a different mechanism that does not involve NO and only partially involves endothelial cells.17,18 The Apelin/APJ system is essential for normal vascular development. Apelin knockout mouse embryos exhibited vascular stenosis and impaired retinal angiogenesis.19,20 Hypoxia is an inducer of Apelin expression, and the hypoxia-inducible factor 1α (HIF1α) transcription factor promotes Apelin gene transcription.21,22 Additionally, the hypoxia-induced proliferation of endothelial cells can be prevented by inhibition of the Apelin signaling pathway.21 These findings indicate that the Apelin/APJ system may be a regulator of both the normal developmental and pathophysiological processes of blood vessels.
Figure 1 Activation of APJ receptors in the central and peripheral nervous systems produces a wide range of physiological effects (eg, vasodilatory, vasoconstrictive, angiogenic, and possibly antithrombotic effects; increased myocyte conduction velocity and arrhythmogenicity; decreased myocardial hypertrophy and fibrosis; enhanced renal blood flow and diuresis with reduced fibrosis; and inhibition of vasopressin release from the hypothalamus with reduced water intake). The Apelin/APJ system also has a variety of metabolic effects: it increases muscle glucose uptake and usage and improves insulin sensitivity.
Apelin can improve myocardial contractility.23 The Apelin peptide has been shown to produce inotropic effects in humans at subnanomolar concentrations.25,26 Exogenous infusion of Apelin restored myocardial contractility in Apelin knockout mice and increased myocardial shortening in healthy rats.23,27 These effects were observed in isolated perfused rat cardiac myocytes in a dose-dependent manner.23,28 [Pyr1]-Apelin-13 increased cardiac output without causing left ventricular hypertrophy, unlike other positive inotropes.29 Furthermore, the Apelin/APJ system exhibits an antagonistic effect on the renin-angiotensin system,30 and deficiency of the Apelin gene in mice exacerbated Ang II-induced cardiac dysfunction.31 The Apelin/APJ system is expressed in human platelets and exhibits anticoagulant effects in vitro.32,33 Animal experiments have demonstrated anti-thrombotic effects of Apelin in vivo: Apelin gene-deficient mice showed a shortened bleeding time, increased platelet aggregation, and rapid formation of small vein thrombosis, and Apelin-13 infusion prolonged bleeding time in both Apelin gene-deficient mice and wild-type mice.33 Furthermore, Apelin inhibited thrombin- and collagen-induced platelet activation, but not ADP- and TXA2-induced activation.33 The antithrombotic properties of Apelin have not yet been validated in humans.
In rats, Apelin induces relaxation of renal afferent and efferent arterioles pretreated with Ang II and reduces the intracellular Ca2+ level, an effect that depends on the integrity of the arteriolar endothelial cells and on NO.34 Furthermore, the vasodilator effect of Apelin increases renal medullary blood flow, which is beneficial for diuresis. In rodents, Apelin directly inhibits the insertion of aquaporin 2 into the apical plasma membrane of the collecting duct, thereby promoting water excretion.34,35
The Involvement of the Apelin/APJ System in Septic Conditions
Apelin is an inotropic agent with anti-inflammatory effects and calcium sensitization and antioxidant properties.Studies in preclinical models of sepsis have indicated excellent protective effects of the Apelin/APJ system against sepsis.Here, we outline the potential therapeutic values of Apelin in sepsis, focusing on its protective effects on cardiovascular homeostasis (Figure 2).
Role of Plasma Apelin in Sepsis Diagnosis and Prognostic Prediction
The common biomarkers of sepsis are C-reactive protein (CRP) and procalcitonin (PCT).Both are significantly elevated in sepsis. 40Various studies have demonstrated changes in Apelin in sepsis and septic shock, highlighting its potential diagnostic and prognostic role.
Several studies have used enzyme-linked immunosorbent assay to determine the serum level of Apelin in sepsis patients.Safaa et al 41 studied plasma Apelin content in 80 neonates and found that the average serum value of Apelin in septic neonates (1214.7 ± 273.06 pg/mmol) was significantly higher than that in healthy neonates(116.27± 21.96 pg/ mmol).Furthermore, neonatal survivors of early-onset sepsis had lower Apelin levels than non-survivors.Another study noted a substantial increase (eight-fold) in plasma Apelin levels among neonates with sepsis compared with healthy neonates. 42Luo et al 43 measured Apelin levels in the serum of 73 adults, including 40 patients with septic myocardial injury and 33 healthy volunteers (Figure 3A).The serum Apelin levels in the septic group were significantly higher than those in healthy volunteers.Yuan et al 44 reported that serum Apelin levels were higher in 34 septic patients (including 9 sepsis-related ARDS patients) compared with 13 healthy volunteers.Furthermore, patients with mild ARDS had lower Apelin levels than patients with severe ARDS, and survivors had higher Apelin levels than non-survivors (Figure 3B).Liu et al 45 also confirmed significantly higher serum Apelin levels in 28 patients with sepsis-related ARDS compared with 20 healthy volunteers and a higher level in survivors compared with non-survivors.
Several studies have shown elevated Apelin levels in early septic patients, indicating that it may play a protective role.43,44 Clinical research has revealed that sepsis patients have higher blood Apelin levels than healthy individuals. It is noteworthy that David et al verified elevated levels of both Apelin and ELA in patients with sepsis46 (Figure 3C); moreover, both Apelin and ELA are more prone to degradation in the septic environment.46 However, Apelin levels were markedly higher in septic shock patients than in sepsis patients,47,48 and the degree of increase was positively correlated with the severity of sepsis.41,42 Elderly survivors of sepsis also had lower plasma Apelin levels than non-survivors.49 That said, the mechanisms underlying these changes are unknown and need further elucidation, and how quickly Apelin levels rise after infection is unclear and requires further research. Overall, these studies suggest that Apelin may show utility in the diagnosis and prognostic prediction of sepsis (Table 1).
Protective Effects of Apelin on the Brain
Sepsis-associated encephalopathy (SAE) is a diffuse brain dysfunction that is secondary to sepsis and not caused by infection of the central nervous system.50 The pathophysiology of SAE is multifactorial, involving hemorrhage, blood-brain barrier (BBB) damage, changes in neuronal synaptic density, neurotransmitter dysregulation, neuroinflammation, and ischemic injury.51,52 The Apelin/APJ system is a potential therapeutic target for neurological diseases and has been suggested to be protective against several neurological disorders.53 Intracerebroventricular injection of LPS is one of the methods used to construct a model of SAE.54 Therefore, examining the mechanisms of Apelin in LPS-induced neuroinflammation may provide insights into its neuroprotective effects in SAE. In cultured mouse N9 microglia, Apelin-13 reduced the expression of the proinflammatory mediators IL-6 and iNOS and increased the expression of the anti-inflammatory mediators IL-10 and Arg-1 by inhibiting N9 microglial activation, thereby attenuating LPS-induced neuroinflammation.55 LPS has also been reported to be an activator of the NF-κB pathway in neuroinflammation, and Apelin was shown to inhibit the NF-κB pathway, thereby reducing neuroinflammation in rats.56 In septic rats, Apelin-13 promoted the expression and nuclear translocation of the glucocorticoid receptor (GR) to ameliorate LPS-induced neuroinflammation and cognitive dysfunction.57 There is also considerable evidence that LPS disrupts the BBB, causing SAE.58 Notably, Apelin has been proven to improve brain dysfunction by reducing the permeability of the BBB in rats.59 This suggests that Apelin may alleviate SAE by reducing the permeability of the BBB.
Figure 2 Potential impacts of the Apelin/APJ system on sepsis-induced organ dysfunction in humans. Both Apelin and ELA reduce organ inflammation and improve hemodynamics (eg, improved inotropy, reduced pre- and afterload, and reduced vascular permeability). Apelin also reduces inflammation in the brain, lung, kidney, and liver; furthermore, it prevents pulmonary edema and fibrosis, reduces blood-brain barrier permeability, and enhances diuresis.
The above studies suggest that Apelin-13 may reduce LPS-induced neuroinflammation and cognitive impairment.Whether Apelin improves SAE still needs further investigation, especially in septic models.
Cardioprotective Effects of Apelin
During sepsis, the heart is one of the organs that are vulnerable to damage. 60Cardiac dysfunction is a frequent comorbidity in patients with sepsis and is linked to elevated mortality. 61,62Sepsis-induced myocardial dysfunction (SIMD) is a reversible cardiac dysfunction typically characterized by reduced myocardial contractility and ventricular dilation and is usually treated with positive inotropic drugs. 63Blockage of APJ further exacerbated cardiac dysfunction and mortality in septic rats, 64 and down-regulation of cardiac Apelin expression was observed in non-surviving septic rats. 65These results demonstrated that the Apelin/APJ system may be involved in the amelioration of life-threatening septic cardiac dysfunction.
Currently, β-adrenoceptor agonists are recommended to improve cardiac dysfunction in sepsis, and the Surviving Sepsis Campaign guidelines recommend dobutamine as the first-line cardiac medication.66 However, β-adrenoceptor agonists increase myocardial oxygen consumption and the incidence of atrial fibrillation.67 Furthermore, β-adrenergic receptor sensitivity to catecholamines decreases during sepsis, rendering these agents relatively ineffective.68 In this situation, even large amounts of exogenous β-adrenoceptor agonists (including dobutamine) given to enhance myocardial contractility will not yield benefit and may instead lead to worse outcomes. Thus, effective strategies to treat cardiac dysfunction in sepsis are required. Apelin is considered a potential candidate for the treatment of cardiac dysfunction in sepsis, and animal and human experiments have confirmed its positive inotropic effect. Crucially, although sepsis reduces myocardial sensitivity to β-adrenergic receptor agonists, the cardiac response to Apelin is heightened during systemic inflammatory conditions or polymicrobial infection.64,69 Apelin's inotropic and vasodilatory effects may therefore provide significant therapeutic benefit in low-output septic shock (ie, low output and high resistance).
Systemic infusion of [Pyr1]-Apelin-13 in heart failure patients resulted in a decrease in blood pressure and systemic vascular resistance and an increase in cardiac index of approximately 10%.26 Cardiac output improved with increasing infusion time, with an ejection fraction increase of about 10%. Walley et al proposed Apelin as a new treatment to improve SIMD.75 Rat cardiomyocytes co-cultured with various inflammatory factors exhibited decreased contractility.79 Apelin has been shown to enhance myocardial contractility by reducing the production of inflammatory factors in rats43,80 (Figure 4C and D). One study reported that even mild apoptosis of cardiomyocytes can cause severe structural and functional damage to the heart, leading to cardiac dysfunction.81 In addition to increasing myocardial contractility by reducing inflammatory factors, Apelin also reduces cardiomyocyte apoptosis in septic rats, thereby improving cardiac function43,64 (Figure 4B). The Apelin/ELA-APJ system showed a positive effect on hemodynamic stability in animal models of sepsis46,69,77 (Figure 4A). Apelin and ELA also significantly improved the left ventricular pressure-volume (P-V) relationship and reduced arterial elastance (Ea) in experimental animal models of septic shock, which is beneficial for ventricular-arterial coupling.69 Notably, in a rat model of sepsis, Apelin-13 outperformed dobutamine (an inotrope commonly used in sepsis), with higher responsiveness, significantly better left ventricular function, and a higher survival rate, whereas dobutamine was associated with further myocardial damage and less responsiveness.64 These studies indicate that targeting the Apelin/APJ system may be a promising strategy to treat cardiac dysfunction in sepsis.
Protective Effect of Apelin on Blood Vessels
During sepsis and septic shock, endothelial cells undergo severe damage and vascular integrity and tone are disrupted,82 which causes and enhances vascular leakage.85 Damage to vascular endothelial cadherin (VE-Cad), a major component of adherens junctions (AJ), may disrupt these junctions and lead to loss of the endothelial barrier.86,87 LPS decreases VE-Cad expression in pulmonary vessels and increases its phosphorylation, which leads to an increase in vascular permeability.88 One study showed that increased ROS promotes the phosphorylation of VE-Cad.89 Another report in mice demonstrated that Apelin-13 restored the expression of VE-Cad and reduced its phosphorylation, thereby reducing LPS-induced pulmonary vascular leakage.90 Apelin activates the AMPK pathway to promote mitochondrial biogenesis, reduce ROS production, and inhibit VE-Cad phosphorylation, thereby reducing pulmonary vascular leakage. In addition to reducing VE-Cad phosphorylation, Apelin has been reported to reduce NF-κB p65 entry into the nucleus in cultured human umbilical vein endothelial cells (HUVECs), thereby attenuating LPS-induced permeability.91 Additionally, Apelin and ELA were recently reported to reduce vascular endothelial growth factor (VEGF) and inflammatory factor production, thereby reducing the sepsis-induced increase in vascular permeability in rats.69 Notably, the ability of the Apelin/ELA system to relieve vascular leakage further contributes to the maintenance of plasma volume and hemodynamic stability.69
Protective Effect of Apelin on the Lungs
The large amounts of inflammatory factors released in sepsis damage pulmonary capillary endothelial cells and type II alveolar epithelial cells, inducing pulmonary edema and alveolar atrophy. This cascade of events ultimately leads to ALI/ARDS, the main manifestations of which are increased pulmonary vascular permeability, hypoxemia, and respiratory distress.92 LPS-induced ALI/ARDS and pulmonary fibrosis mimic sepsis-related ALI and pulmonary fibrosis.93 Exogenous Apelin-13 was reported to attenuate the inflammatory response by reducing NF-κB and NLRP3 inflammasome activity, thereby improving ALI in mice94 (Figure 5D). Apelin-13 suppressed pulmonary inflammatory responses by inhibiting the upregulation of NADPH oxidase 4 (NOX4), reducing ROS production and ROS-activated fructose-2,6-bisphosphate kinase 3-mediated glycolysis, thereby attenuating ALI in septic mice44 (Figure 5A-C). Previous studies have shown that endothelial-mesenchymal transition (EndMT) is a common disease-causing mechanism leading to lung fibrosis, and inhibition of EndMT may be beneficial in alleviating lung fibrosis.95 The Apelin/APJ system can alleviate the fibrotic process in multiple organs; for example, Apelin/APJ reduces myocardial fibrosis by inhibiting the activation of cardiac fibroblasts, and it reduces fibrosis in the renal interstitium by blocking TGF-β signaling.96,97 Studies have shown that Apelin may be able to alleviate sepsis-induced lung fibrosis through inhibition of the TGF-β/Smad signaling pathway in mice.45 Recently, angiotensin-converting enzyme 2 (ACE2) has received significant attention as the cellular receptor for SARS-CoV-2, the causative virus of the coronavirus disease pandemic.98 ACE2 counterbalances angiotensin-converting enzyme (ACE) by converting Angiotensin II to Angiotensin-(1-7), leading to anti-inflammatory, anti-vasoconstrictive, and anti-fibrotic effects,99,100 and ACE2 can effectively alleviate ALI and pulmonary fibrosis.101 Apelin was shown to promote ACE2 expression, thereby reducing endothelial-mesenchymal transition and preventing sepsis-associated pulmonary fibrosis in mice.93 Additionally, inhibition of APJ exacerbated pulmonary fibrosis, providing further evidence of the protective role of the Apelin/APJ system against organ damage in sepsis. Together, these investigations indicate that activation of the Apelin/APJ system may have a protective effect against septic ALI and pulmonary fibrosis (Figure 5D).
Protective Effects of Apelin in the Liver
Inflammation and oxidative stress are the main mechanisms leading to liver injury in sepsis. 102In animal models of sepsis, liver injury can be attenuated by inhibiting the inflammatory response and scavenging ROS. 103Several studies have demonstrated the anti-inflammatory and antioxidant effects of Apelin. 104,105Zhou et al 106 explored the protective effects of a novel long-acting Fc-Apelin fusion protein on LPS-induced liver injury in septic mice (Figure 6).The authors found that Fc-Apelin significantly reduced the level of alanine aminotransferase (ALT) in septic mice (Figure 6A) and reduced hepatic cell apoptosis and ROS generation (Figure 6C).Additionally, Fc-Apelin alleviated the infiltration of hepatic macrophages and decreased the expressions of IL-6 and TNF-α in the liver (Figure 6B).However, systemic IL-6 was not significantly reduced as in the liver, and the reasons still need clarification.Thus, Apelin has great potential in the treatment of liver injury caused by sepsis, although the mechanisms remain to be elucidated.
Protective Effect of Apelin on the Kidney
Acute kidney injury (AKI) is a common complication of sepsis.108,109 AKI in sepsis is characterized by a sudden decline in renal function, indicated by increased serum creatinine and oliguria.110,111 Apelin-13 improved renal function in sheep with septic shock in a dose-dependent manner, restoring the kidney's ability to excrete urine and creatinine.46 Apelin and ELA have both been shown to exert protective effects in multiple kidney disease models,36 and the effects of ELA may be superior.112 In a septic rat model, both Apelin-13 and ELA improved water intake and urine output; however, ELA significantly reduced AKI and renal inflammation, whereas Apelin-13 did not appear to be effective in preventing renal injury.69 Apelin-13 also improved AKI in septic rats by reducing renal artery resistance and increasing creatinine clearance through countering the renin-angiotensin system (RAS).113 ELA was fused with the Fc domain of human immunoglobulin IgG to generate Fc-ELA-21, which was examined for the treatment of LPS-induced AKI in septic mice and showed a longer half-life than ELA (Figure 7); it reduced renal tubular apoptosis, macrophage infiltration, and inflammatory cytokine expression and improved AKI.114
Body Fluid Homeostasis Regulation by Apelin
The Apelin/APJ system and arginine vasopressin (AVP) co-localize in the hypothalamic supraoptic and paraventricular nuclei and exhibit a reciprocal regulatory relationship.115,116 These findings are in line with clinical research in healthy individuals showing that changes in plasma osmolality are paralleled by reciprocal changes in vasopressin and Apelin: blood AVP levels increase and Apelin levels decrease under hypertonic saline stimulation, whereas blood AVP levels decrease and Apelin levels increase after water loading reduces osmotic pressure.117 The cross-modulation of Apelin and AVP by osmotic stimulation has important physiological significance: it can limit renal water excretion after dehydration and promote water excretion after a water load to maintain body fluid homeostasis. This has very important clinical implications for the management of fluid resuscitation in patients with septic shock.
Disordered fluid homeostasis is common both in septic progression and during the resuscitation process.118 An increase in water intake and urine output in response to both Apelin-13 and ELA was observed in septic rats. ELA prevents plasma volume loss without altering AVP levels, whereas Apelin-13 reduces plasma AVP levels and shifts urine balance toward unwanted aquaresis, which results in a loss of plasma volume69 (Figure 7F). Apelin-13 and ELA reduced vascular permeability in several major organs, which also played a role in maintaining fluid homeostasis.69 These findings suggest that ELA and Apelin-13 have contrasting effects on the cardio-renal axis, primarily through counter-regulation of the vasopressinergic system. Additionally, Apelin prevents the reduction of plasma volume by regulating fluid homeostasis, which facilitates the amelioration of hemodynamic disturbances.69 These effects further support its potential as a therapeutic target for sepsis.
Summary of the Protective Effects of the Apelin/APJ System Against Sepsis
Much effort has been invested into exploring the protective mechanism of the Apelin/APJ system in sepsis, and multiple mechanisms have been implicated in its protective effects, including reducing inflammatory factors and ROS, enhancing myocardial contractility, and reducing vascular permeability (Table 2).It is important to mention that most studies have been conducted in animal models and human studies are rare.The protective effect of ELA on sepsis has also been reported.Apelin levels in the plasma of septic patients have been reported to be higher than those of healthy individuals and to predict prognostic outcome.Some evidence also shows that Apelin can increase ACE2 expression and inhibit TGF-β/Smad signaling to reduce sepsis-associated pulmonary fibrosis.Notably, Apelin can alleviate LPS-induced neuroinflammation, which provides a valuable basis for further study of Apelin in the treatment of SAE.ELA has also been proven to attenuate septic kidney injury.Together these findings indicate that the Apelin-ELA/APJ system may be a potential therapeutic target in sepsis.
Agonists and Antagonists That Target the Apelin/APJ System
In spite of the close association of the Apelin/APJ system with several physiological processes and diseases, no drugs that directly activate or inhibit APJ have been approved. Because the half-life of Apelin in the body is only several minutes,122 its bioavailability is low. Therefore, the identification of peptide analogs and small molecules with high biological activity and long half-lives is very important. Additionally, from a clinical perspective, ideal agonists and antagonists should not only be degradation resistant but should also be biased toward G protein signaling to avoid the receptor desensitization caused by activation of the β-arrestin signaling pathway. To prolong the half-life of Apelin peptides and increase their biological activity, extensive research has been performed and several Apelin analogues have been developed. MM07 is a cyclic peptidomimetic with a longer half-life than endogenous peptides such as Apelin-13, -36, and -17. Compared with [Pyr1]-Apelin-13, MM07 has stronger effects in enhancing myocardial contractility, dilating blood vessels, and lowering blood pressure in rats.123 MM07 also significantly increased cardiac output in human volunteers.123 A longer half-life is likewise observed for another Apelin mimetic peptide that binds anti-serum albumin domain antibodies in rats;124 in vivo, it lowers blood pressure and increases myocardial contractility, stroke volume, and cardiac output. Encapsulating [Pyr1]-Apelin-13 in lipid nanocarriers significantly prolonged its half-life, and the encapsulated form protected against ischemia and reperfusion injury better than non-encapsulated [Pyr1]-Apelin-13 in rats.125 E339-3D6 was the first reported non-peptide APJ agonist; it induces the production of intracellular cAMP and the internalization of APJ,126 and it induces vasodilation in isolated rat aortas. However, the relatively large molecular weight of E339-3D6 and the difficulty of its synthesis and isolation make it difficult to apply in clinical practice.126 A small-molecule agonist, ML233, was later developed using high-throughput screening, but its stability in human plasma proved very poor and it exhibited hepatotoxicity.127,128 Another non-peptide APJ agonist, CMF-019, was developed with high affinity for APJ and biased activation of the G protein signaling pathway; it has been shown to enhance myocardial contractility in rats.129 In a recent clinical trial, AMG-986 demonstrated good tolerability in healthy individuals but had no pharmacodynamic effects in patients with HF.134 ALX40-4C was the first reported peptide antagonist of APJ; in an in vitro study, it was shown to inhibit APJ-mediated membrane fusion and intracellular Ca2+ elevation in a dose-dependent manner.135 The F13A peptide antagonist of APJ was obtained by mutating the C-terminal phenylalanine of Apelin-13 to alanine; F13A inhibited many Apelin-13-induced physiological effects in animal models.136 ML221 is the first non-peptide APJ antagonist reported to prevent pathologic retinal angiogenesis in ischemic retinopathy in mice and is expected to be a drug candidate for the treatment of ischemic retinopathy.137
With further in-depth study of the Apelin/APJ system, more APJ agonists and antagonists will be developed. Because APJ agonists and antagonists can be made safe and well targeted, drugs that act on the Apelin/APJ system are likely to have clinical value. The feasibility of Apelin as a new drug for chronic HF was indicated by the results of the first human trial. Most APJ agonists and antagonists are currently still in preclinical research, and further work will be required to explore their utility in the treatment of sepsis.
Potential for Improvements of APJ Agonists for Sepsis
Over the past 20 years, the physiological structures of Apelin and APJ have been characterized and APJ agonists have been developed. Typically, GPCR activation triggers both G protein-dependent and -independent signaling pathways, regardless of the structure of the ligand. However, some ligands selectively activate or inhibit certain signaling pathways, resulting in biased signaling.138 In this case, the strategic design of drugs may help ensure specific and desired effects with minimized side effects. A biased APJ agonist, MM07, was previously developed for application in heart failure; this agonist selectively activates the G protein pathway and avoids activation of the β-arrestin pathway.123 Studies have indicated that APJ receptor signaling involves activation of Gαi, triggering adenylyl cyclase inhibition, which results in cAMP inhibition and subsequent physiological effects.139,140 APJ also binds other G protein trimers, especially Gq, thereby activating the phospholipase C (PLC) and AMP-activated protein kinase (AMPK) signaling pathways.23,141 Several endogenous APJ ligands (such as Apelin-17, Apelin-36, ELA-32, and ELA-21) have been shown to produce a certain bias toward β-arrestin signaling.13,25 Overexpression of β-arrestin exacerbates immunosuppression, cardiac dysfunction, and mortality in septic mice.142 The development of APJ receptor agonists that preferentially activate the G protein signaling pathway is therefore crucial for the targeted treatment of sepsis, and such agonists may show fewer adverse effects.
Summary
Sepsis is a global public health problem with a high mortality rate. Apelin has shown promising outcomes in preclinical studies of sepsis and in clinical research on heart failure. In this review, we presented a brief overview of the functions of the Apelin/APJ system and discussed the potential of Apelin and ELA to treat sepsis. The Apelin/APJ system may be a promising new target for the treatment of sepsis, and several candidate agonists and antagonists have been developed, with promising preclinical results. Apelin has been proven to have inotropic and vasodilatory effects in healthy individuals and in patients with chronic heart failure; however, whether Apelin exerts these effects in septic patients remains unclear. Studies on the treatment of sepsis with Apelin and ELA have so far been pursued in animal models and cell lines, and clinical results are currently lacking. Thus, further studies are required to determine the applicability of Apelin as a potential drug for the treatment of sepsis.
Figure 5
Figure 5 Apelin protects against septic lung injury.(A) Apelin reduces lung inflammation and edema, lung sections from various groups were examined at both 200x and 400x original magnifications, revealing intramural neutrophils within the alveolar walls, indicated by arrows in H&E-stained images.(B) Apelin reduces the expression of inflammatory factors in the lungs.(C) Apelin reduces the expression of NOX4 and alleviates oxidative stress, thereby reducing the activation of fructose-2,6-bisphosphate kinase 3.Copyright ©2022.Dove Medical Press.The above all figures are reprinted from Yuan Y, Wang W, Zhang Y, et al.Apelin-13 attenuates lipopolysaccharide-induced inflammatory responses and acute lung injury by regulating PFKFB3-driven glycolysis induced by NOX4-dependent ROS.J Inflamm Res.2022;15:2121-2139. 44(D) Apelin protects the lungs by reducing inflammation, oxidative stress, and fibrosis.*P<0.05 compared with control; # P<0.05 compared with LPS group.(Upward red arrows indicate increase, downward black/red arrows indicate decrease, the horizontal red line indicates inhibition).
Figure 6
Figure 6 Apelin reduces septic liver damage.(A) Apelin reduces the levels of AST and IL-6 in the serum of mice with septic shock.(B) Apelin improves hepatic edema and macrophage infiltration in septic shock mice, reducing the expression of inflammatory factors.(C)Apelin alleviates hepatocyte apoptosis in septic shock mice and LPSinduced apoptosis in Huh-7 cells; TUNEL staining is shown in green, and DAPI staining is shown in blue.The above all figures are reprinted from Zhou H, Yang R, Wang W, et al.Fc-apelin fusion protein attenuates lipopolysaccharide-induced liver injury in mice.Sci Rep. 2018;8(1):11428. 106(D) Mechanism diagram illustrating the protective effects of Apelin on septic liver injury.*P<0.05,**P<0.01,***P<0.001,n.s, no significant difference (Upward red arrows indicate increase, downward black arrows indicate decrease, the horizontal red line indicates inhibition).
Figure 7
Figure 7 The role of ELA in alleviating AKI and regulating body fluid homeostasis.(A) ELA increases creatinine clearance and rescues kidney function in mice with septic shock.(B-D) ELA reduced the expression of renal inflammatory factors and the infiltration of macrophages, and reduced renal edema (Red arrows indicate vacuolation and yellow arrows indicate nuclear pyknosis in the LPS group).(E) ELA alleviates tubular cell apoptosis in septic shock mice; green represents TUNEL staining, and blue represents DAPI staining.The above all figures are reprinted from Xu F, Zhou H, Wu M, et al.Fc-Elabela fusion protein attenuates lipopolysaccharide-induced kidney injury in mice.Biosci Rep. 2020;40(9). 114(F) The protective mechanism of ELA on the kidneys and the regulating mechanism of body fluids.*P<0.05,***P<0.001,n.s, no significant difference (Upward black/red arrows indicate increase, downward black arrows indicate decrease, The horizontal red line and the vertically downward red line indicate inhibition).
Table 1
Summary of Serum Apelin Levels in Patients with Sepsis
Table 2
Summary of the Animal Experiments | 2024-01-23T05:04:25.089Z | 2024-01-01T00:00:00.000 | {
"year": 2024,
"sha1": "954a8735e215e6b1574862348f5b7dd1ba0f0693",
"oa_license": "CCBYNC",
"oa_url": "https://www.dovepress.com/getfile.php?fileID=96023",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "954a8735e215e6b1574862348f5b7dd1ba0f0693",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
263826935 | pes2o/s2orc | v3-fos-license | Reflections on the surge in malaria cases after unprecedented flooding in Pakistan—A commentary
Abstract Background Malaria is a parasitic infection primarily caused by four main species of the genus Plasmodium, that is, Plasmodium falciparum, Plasmodium ovale, Plasmodium vivax, and Plasmodium malariae. It is transmitted through the bite of the female Anopheles mosquito. It holds the status of one of the leading causes of death in the developing world. Malaria is endemic to Pakistan, and the country experienced the worst floods in its history from April to October 2022. The stagnant flood water served as a breeding ground for mosquitoes, culminating in an alarming spike in malaria cases. According to the World Health Organization (WHO), the number of cases reported till August 2022 was more than in the whole year of 2021. There was more than a twofold rise in cumulative cases in 62 high‐burden Pakistani Districts in August 2022 as compared to August 2021. Aims This commentary aims to bring this emerging issue to notice and highlight the most effective probable measures to help eliminate and prevent the hazards the current outbreak poses. Results Rapid planning and execution are needed to ensure the most efficient and rapid elimination of malaria. To educate the general public, the national government must start public awareness efforts in electronic, print, and social media and deploy solar‐powered mobile healthcare units to far‐flung areas. Prophylactic and postexposure treatments should be planned because larvicidal preventive measures are less practical in flood‐affected vicinities. Conclusion The most effective preventive strategy is drug prophylaxis, followed by insecticide‐treated nets, indoor residual spraying, and untreated nets. Scientists should intensify their investigations for effective medications to alleviate the malaria burden in Pakistan.
| INTRODUCTION
Malaria, an infectious disease caused by parasites of the genus Plasmodium and transmitted through the bite of the female Anopheles mosquito, is one of the leading causes of death in several developing countries of the world. Eighty-seven countries and territories, housing nearly half of the world's population, have been deemed areas at risk of malaria transmission. Two billion people, including travelers and residents of endemic countries, are at risk of contracting malaria, with 1.5-2.7 million deaths reported annually.1,2 Five Plasmodium species have been implicated in human malaria, namely Plasmodium falciparum, Plasmodium ovale, Plasmodium vivax, Plasmodium malariae, and the more recently described Plasmodium knowlesi.3 Ninety percent of the world's malaria mortality and 99.7% of malaria cases have been attributed to P. falciparum, earning it the reputation of the most prevalent and pathogenic species of the malarial parasite.4,5 Malaria transmission begins when a female Anopheles mosquito takes a blood meal from a person with untreated malaria, ingesting red blood cells that contain male and female gametocytes; these complete their development in the mosquito's gut, are transformed into sporozoites, and are transported to its salivary glands. Inoculation of sporozoites from the salivary glands into a capillary at the puncture wound of a noninfected person then follows.6 The malarial parasites initially travel to the liver for maturation and, upon its completion, are released into the bloodstream. It is during this stage that the signs and symptoms of malaria appear.5,7 The life cycle of the malaria parasite is illustrated in Figure 1.
The mean incubation period for P. falciparum is 12 days, and symptom presentation in endemic areas begins in the first or second month after the mosquito bite; typically, signs and symptoms appear within a few weeks. These primarily include fever, chills, headache, nausea, vomiting, diarrhea, abdominal pain, fatigue, muscle weakness, joint pain, tachycardia, a rapid respiratory rate, and cough.8 These nonspecific signs and symptoms of uncomplicated malaria often make diagnosis difficult. One to two percent of P. falciparum cases progress to severe malaria, hallmark features of which include prostration, acidotic breathing, impaired consciousness, multiple convulsions, pulmonary edema, disseminated intravascular coagulation, acute kidney injury, jaundice, shock, and coma.5,9,10 High-risk populations for malaria primarily comprise people with little or no immunity against the infection (children, pregnant women) and travelers to endemic areas. All species of Plasmodium, when contracted early in pregnancy, can cause abortion or increase the risk of neonatal death as a result of intrauterine growth restriction and prematurity.11 Other modes of transmission include transplacental and blood transfusion-mediated transmission; these pose a particular problem for healthcare workers in non-endemic areas.8 Clinical diagnosis of malaria, based on signs and symptoms and on physical findings upon examination, is the traditionally employed method. Laboratory diagnostic methods such as conventional microscopy of stained thin and thick peripheral blood smears, concentration techniques including the quantitative buffy coat (QBC) method, and rapid diagnostic tests (ParaScreen, SD Bioline, Paracheck) are highly sensitive and specific, while molecular diagnostic methods, such as polymerase chain reaction, loop-mediated isothermal amplification, microarray, mass spectrometry, and flow cytometric assay techniques, are even more so, opening up new avenues for effective and rapid diagnosis and contributing to better clinical outcomes by permitting timely initiation of treatment.12
2 | STATISTICS
Pakistan is a developing country endemic to malaria. Climate change, melting glaciers, and torrential monsoon rainfall all contributed to the devastating floods in the country, which started in June 2022. 13 The havoc wreaked by the floods, the likes of which had never been seen before in the country, amounted to a death toll of 1,700, with 12,867 people injured and 7.9 million people temporarily displaced as of October 2022. 14 With no roof over their heads and no place to call home, these refugees are currently seeking shelter in temporary camps set up by the government. Living out in the open has exposed them to a number of vector-borne diseases, including malaria. 15 The stagnant flood waters covering vast expanses of land have served as breeding grounds for mosquitoes, and the number of malaria cases in these flood-affected regions, mainly Sindh and Balochistan, has skyrocketed. 16
According to the World Health Organization (WHO), more malaria cases were reported by August 2022 than in the whole year of 2021 combined. The current malaria outbreak has been predominantly attributed to the P. vivax species. A huge disparity exists between the current malaria cases in Sindh province, Balochistan province, and high-burden districts and those reported in these regions at the same time last year. The number of confirmed malaria cases in Sindh province in August 2022 was 69,123, compared to 19,826 confirmed cases in August 2021. The number of confirmed cases in Balochistan province increased from 22,032 in August 2021 to 41,368 in August 2022. The number of cumulative cases in 62 high-burden Pakistani districts was recorded at 389,372 in September 2022, as compared to 178,657 cases in August 2022. 17 Figure 2 illustrates Pakistan's high-risk malaria areas and flood-affected areas in 2022, and Figure 3 illustrates malaria cases in Pakistan during the last 5 years.

3 | CHALLENGES

The recent flooding in Pakistan has put the country in a health crisis more severe than one would assume. The toll of cases for this deadly disease may rise to 2.7 million, WHO warns, 18 unless extreme measures are undertaken on an urgent basis to combat the foe.
However, the country faces multiple challenges in this context. The post-flood influx of patients has left Pakistan's already crumbling healthcare infrastructure in shambles. The sudden emergence of malaria, followed by a failure to implement the malaria control program in time, led to a huge upsurge in patients. This rapid increase in malaria cases also resulted in a massive shortage of medications, which made the people's woes worse. 19 Considering how large this outbreak of malaria is and the damage that has been done, a major economic investment is needed to offset the losses, which cannot be borne by the country's poor economy alone at the moment. An Emergency Response was mobilized by WHO, national authorities, and humanitarian associations, but according to reports, only 15%-20% of those affected have been reached so far. 20 The inability to reach far-off areas and the lack of rapid diagnostic kits in places, and thus the failure to keep an accurate track of actual cases, became another striking issue. Rural communities, which suffered most from this havoc due to substandard living and hygiene conditions, are the target of the combined relief efforts; however, the prevalent illiteracy in these communities and a generally noncompliant attitude have been key issues. On the policymakers' part, insufficient resource allocation, ineffective training of prescribers on accepted treatment recommendations, and poor integration of the malaria control program and its outreach stand as major hurdles in the way of eliminating malaria. 21 The overlapping signs and symptoms of malaria with other viral and bacterial diseases also pose a great challenge to accurate and timely diagnosis. Table 1 provides comprehensive features of the differential diagnosis of malaria. The Government of Pakistan, in collaboration with WHO, did initiate the mass execution of indoor residual spraying (IRS) procedures and the mass distribution of insecticide-treated mosquito nets; however, this too is problematic in two respects. First, the current revenue cannot cater to the existing dire need.
Secondly, the more prevalent (>80%) form of malaria in the current upsurge is due to the P. vivax species, 17 which is notoriously less responsive to insecticide sprays and other preventive procedures, owing to the fact that it usually bites in the mornings and in outdoor settings. 22 P. vivax malaria is also more difficult to identify, since there are often fewer parasites circulating in the blood of an infected person than with P. falciparum malaria. Moreover, the existing diagnostic methods cannot detect the dormant liver hypnozoite stage early in the course of an infection. 24 P. vivax gametocytes can hence spread even before an infection is recognized or treated, posing a risk of uncontrolled transmission. 23 In addition, primaquine, the only approved drug for P. vivax infections, causes mild to severe hemolysis and anemia in patients with G6PD deficiency, the incidence of which is extremely high in the South Asia region; this severely limits the management of malaria in Pakistan. 25
4 | RECOMMENDATIONS
To ensure the elimination of malaria in the most effective and rapid way, swift planning and execution are required in multiple domains. A new, effective strategy is required for Pakistan's malaria response program, one that involves significant financial investment in systems for disease surveillance and outbreak response. Different iterations of existing systems are currently all financed by donors and are not a top concern for Pakistani policymakers, which needs to be addressed. 26 Major economic support and funding from international sources are deemed essential in this situation.
Public awareness campaigns must be launched by the government via electronic, print, and social media to educate the masses. Significant steps have to be taken to improve health infrastructure and outreach in far-flung areas. Swift vector surveillance in all areas should be ensured, along with the widespread availability of rapid diagnostic testing units based on histological parasite confirmation.
Solar-powered mobile healthcare units must be deployed quickly by the health department to improve the reach and efficacy of these measures. 27 Since larvicidal preventive procedures are not as feasible in flood-affected areas, prophylactic and post-exposure treatments should be planned. 28 Considering that primaquine, the only approved drug for P. vivax infections, is contraindicated in G6PD-deficient patients in areas like Pakistan, G6PD deficiency screening must be included in long-term treatment plans, while the current focus should remain on preventive strategies and supportive treatment alternatives.
Owing to the difficulty of diagnosing the currently prevalent P. vivax species, a considerable amount of further research is the need of the hour to develop more precise diagnostic methods. As transmission is likely to occur even before detection, timely preventive measures must be enforced. Drugs for prophylaxis have been found to be the most effective preventive method, followed by insecticide-treated nets, IRS, and untreated nets. 29 Gearing up the pursuit of highly effective prophylactic drugs is another recommendation for scientists.
In the times to come, effective planning and implementation of economic and healthcare reforms, along with continued efforts to eliminate poverty, improve living conditions, and execute the country's response to climate change, 30 are expected to yield fruitful results.
FIGURE 2 High-risk malaria regions and flood-affected regions in Pakistan.
FIGURE 3 Number of malaria cases in Pakistan during the last 5 years till September 2022.
TABLE 1 Features of differential diagnosis of malaria. | 2023-10-12T05:05:50.583Z | 2023-10-01T00:00:00.000 | {
"year": 2023,
"sha1": "db42e8ad2730187adfd159e6a4dfcbbea44dd757",
"oa_license": "CCBY",
"oa_url": null,
"oa_status": null,
"pdf_src": "PubMedCentral",
"pdf_hash": "db42e8ad2730187adfd159e6a4dfcbbea44dd757",
"s2fieldsofstudy": [
"Environmental Science",
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
2415331 | pes2o/s2orc | v3-fos-license | Renaissance of base deficit for the initial assessment of trauma patients: a base deficit-based classification for hypovolemic shock developed on data from 16,305 patients derived from the TraumaRegister DGU®
Introduction The recognition and management of hypovolemic shock still remain an important task during initial trauma assessment. Recently, we have questioned the validity of the Advanced Trauma Life Support (ATLS) classification of hypovolemic shock by demonstrating that the suggested combination of heart rate, systolic blood pressure and Glasgow Coma Scale displays substantial deficits in reflecting clinical reality. The aim of this study was to introduce and validate a new classification of hypovolemic shock based upon base deficit (BD) at emergency department (ED) arrival. Methods Between 2002 and 2010, 16,305 patients were retrieved from the TraumaRegister DGU® database, classified into four strata of worsening BD [class I (BD ≤ 2 mmol/l), class II (BD > 2.0 to 6.0 mmol/l), class III (BD > 6.0 to 10 mmol/l) and class IV (BD > 10 mmol/l)] and assessed for demographics, injury characteristics, transfusion requirements and fluid resuscitation. This new BD-based classification was validated to the current ATLS classification of hypovolemic shock. Results With worsening of BD, injury severity score (ISS) increased in a step-wise pattern from 19.1 (± 11.9) in class I to 36.7 (± 17.6) in class IV, while mortality increased in parallel from 7.4% to 51.5%. Decreasing hemoglobin and prothrombin ratios as well as the amount of transfusions and fluid resuscitation paralleled the increasing frequency of hypovolemic shock within the four classes. The number of blood units transfused increased from 1.5 (± 5.9) in class I patients to 20.3 (± 27.3) in class IV patients. Massive transfusion rates increased from 5% in class I to 52% in class IV. The new introduced BD-based classification of hypovolemic shock discriminated transfusion requirements, massive transfusion and mortality rates significantly better compared to the conventional ATLS classification of hypovolemic shock (p < 0.001). Conclusions BD may be superior to the current ATLS classification of hypovolemic shock in identifying the presence of hypovolemic shock and in risk stratifying patients in need of early blood product transfusion.
Introduction
The early recognition and management of hypovolemic shock in multiply injured patients are still among the most challenging tasks in the acute assessment and treatment of trauma patients. For the initial evaluation of circulatory depletion, the American College of Surgeons has defined in its training program 'Advanced Trauma Life Support' (ATLS) four classes of hypovolemic shock. This classification is based upon an estimated blood loss in percent together with corresponding vital signs [1,2]. For each class, ATLS allocates therapeutic recommendations (for example, the administration of intravenous fluids and blood products) [1]. Recently, the clinical validity of the ATLS classification of hypovolemic shock has been questioned by two independent analyses of two large-scale trauma databases, the TARN (Trauma Audit and Research Network) registry and the TraumaRegister DGU ® , which together comprised more than 140,000 trauma patients. According to both analyses, ATLS seems (a) to overestimate the degree of tachycardia associated with hypotension and (b) to underestimate mental disability in the presence of hypovolemic shock [3][4][5].
These observations and conclusions prompted us to develop an alternative approach for the early assessment of hypovolemic shock in the emergency department (ED). Several studies have already identified worsening base deficit (BD) as an indicator for increased transfusion requirement [6,7]. Furthermore, BD has been associated with increased mortality, intensive care unit (ICU) and in-hospital lengths of stay, and a higher incidence of shock-related complications such as acute respiratory distress syndrome, renal failure, hemocoagulative disorders, and multiorgan failure (MOF) [6][7][8][9]. Monitoring of BD has also been suggested as an indicator and monitoring parameter for the success of resuscitation efforts [7,10,11]. In times of point-of-care testing (POCT), BD can be assessed in a fast and easy manner and therefore is available within minutes after admission to the ED. The aim of this study was to introduce and validate a four-class BD-based classification of hypovolemic shock on datasets of severely injured patients derived from the TraumaRegister DGU ® database.
Materials and methods
The TraumaRegister DGU ®
The TraumaRegister DGU ® was founded in 1993 and details have been published in extenso elsewhere [3,12]. To date, datasets from approximately 70,000 patients from more than 450 hospitals have been entered into the database. The TraumaRegister DGU ® captures all severe trauma patients, who either are admitted to the hospital via the ED with subsequent ICU/intermediate care (ICU/IMC) care or reach the hospital with vital signs and die prior to ICU/IMC admission. It was approved by the review board of the German Trauma Society (DGU) and is in compliance with the institutional requirements of its members.
Data analyses
In the present study, datasets of multiply injured patients entered into the TraumaRegister DGU ® between 2002 and 2010 were analyzed. Inclusion criteria were age of at least 16 years, primary admission, and complete datasets for BD upon admission blood gas analysis as well as for systolic blood pressure (SBP), heart rate (HR), and Glasgow Coma Scale (GCS) score to rebuild the ATLS classification of hypovolemic shock for validation.
Characterization of the four classes of hypovolemic shock based upon base deficit at emergency department admission
According to Davis and colleagues [6], four different classes of shock were defined and analyzed. Class I ('no shock') was defined by a BD of not more than 2 mmol/L, class II ('mild shock') by a BD of more than 2.0 to 6.0 mmol/L, class III ('moderate shock') by a BD of more than 6.0 to 10.0 mmol/L, and class IV ('severe shock') by a BD of more than 10 mmol/L. Each patient was allocated to the corresponding shock class I to IV according to BD upon ED arrival. Vital signs (for example, HR, SBP, and GCS score) were assessed as present upon ED arrival and at the scene of the accident. Shock index (SI), defined by the ratio of HR to SBP, was calculated for both time points. Further assessments included demographics and injury patterns as well as therapeutic interventions such as administration of blood products, intravenous fluids, and vasopressors. Massive transfusion (MT) was defined by the administration of at least 10 blood products between ED and ICU admission. Coagulopathy was defined by a Quick's value of not more than 70%, which is equivalent to an international normalized ratio of approximately 1.3 [13,14].
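As a hedged illustration (a minimal Python sketch, not the authors' code), the BD strata and the shock index defined above can be expressed directly:

    def bd_shock_class(bd_mmol_l):
        """Allocate a patient to shock class I-IV from base deficit at ED arrival."""
        if bd_mmol_l <= 2.0:
            return 1  # 'no shock'
        if bd_mmol_l <= 6.0:
            return 2  # 'mild shock'
        if bd_mmol_l <= 10.0:
            return 3  # 'moderate shock'
        return 4      # 'severe shock'

    def shock_index(heart_rate, systolic_bp):
        """Shock index as defined in the text: ratio of HR to SBP."""
        return heart_rate / systolic_bp

    # Example: a patient with BD 7.3 mmol/L, HR 110/min, SBP 85 mm Hg
    print(bd_shock_class(7.3))             # -> 3 ('moderate shock')
    print(round(shock_index(110, 85), 2))  # -> 1.29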
Validation of the new base deficit-based classification to the current ATLS classification of hypovolemic shock
For the validation of the new BD-based classification to the current ATLS classification of hypovolemic shock, the latter was interpreted as previously described [3]. Briefly, SBP, HR, and GCS score were assessed to allocate the patients into the respective ATLS groups of hypovolemic shock but with some minor modifications [3]. As stated above, allocation of patients into the respective classes of hypovolemic shock was limited if a combination of all three parameters was applied. Therefore, in the present analysis, we allocated each patient into the respective shock class I to IV by the vital sign (HR, SBP, or GCS score) that matches the criteria of the highest shock class. If patients had been intubated and mechanically ventilated prior to ED admission, the GCS score at the scene of injury was considered. Patients were classified according to their BD at ED admission and according to the criteria suggested by ATLS. Transfusion requirements as well as mortality rates within the four groups were compared.
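To make the allocation rule concrete, the following sketch implements the 'highest matching class' logic described above. The numeric HR, SBP, and GCS cut-offs are illustrative assumptions for demonstration only and must be replaced with the actual ranges from the ATLS manual [1,2]:

    def atls_class(hr, sbp, gcs):
        # All cut-offs below are assumed placeholders, not the published ATLS ranges.
        c_hr = 1 if hr < 100 else 2 if hr <= 120 else 3 if hr <= 140 else 4
        c_sbp = 1 if sbp >= 110 else 2 if sbp >= 100 else 3 if sbp >= 90 else 4
        c_gcs = 1 if gcs == 15 else 2 if gcs >= 13 else 3 if gcs >= 9 else 4
        # The vital sign that matches the criteria of the highest shock class
        # determines the allocation, as described in the text.
        return max(c_hr, c_sbp, c_gcs)

    print(atls_class(hr=95, sbp=95, gcs=14))  # -> 3 under these assumed cut-offs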
Statistical methods
Data are presented as means ± standard deviations for continuous variables or percentages for categorical variables. GCS scores are presented as medians and interquartile ranges. For continuous variables, normal distribution was excluded by using the Shapiro-Wilk test. To detect differences between the four groups of worsening BD, a Kruskal-Wallis test was performed. A Mann-Whitney U test on pairwise comparisons was performed in case of a significant overall difference. Categorical variables were analyzed accordingly with the chi-square test. For all statistical analyses, a probability of less than 0.05 was considered to be statistically significant. All data were analyzed by using IBM SPSS 19 (IBM Corporation, Chicago, IL, USA).
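For readers without access to SPSS, the same sequence of tests can be reproduced, for example, with SciPy; the sketch below uses toy data and is not the authors' code:

    import numpy as np
    from scipy import stats

    # Toy continuous variable for four BD classes (e.g., number of blood units)
    rng = np.random.default_rng(0)
    groups = [rng.normal(10 + 3 * i, 3, 50) for i in range(4)]

    # Exclude normal distribution per group with the Shapiro-Wilk test
    shapiro_p = [stats.shapiro(g).pvalue for g in groups]

    # Overall difference between the four groups of worsening BD
    h, p_overall = stats.kruskal(*groups)

    # Pairwise Mann-Whitney U tests in case of a significant overall difference
    if p_overall < 0.05:
        u, p_12 = stats.mannwhitneyu(groups[0], groups[1])

    # Categorical variables: chi-square test on a contingency table
    table = np.array([[30, 20], [10, 40]])
    chi2, p_cat, dof, expected = stats.chi2_contingency(table)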
Results
Characterization of the four classes of hypovolemic shock based upon base deficit at emergency department admission
In total, 16,305 patients were identified from the TraumaRegister DGU ® for further analysis. General demographics and detailed information on injury severity, trauma mechanism, RISC (Revised Injury Severity Classification) prognosis, and outcome for the four classes of hypovolemic shock based upon BD at ED admission are shown in Table 1. Worsening of BD category was associated with increased injury severity and both increased morbidity and mortality. Consequently, ICU and overall in-hospital lengths of stay as well as times on ventilator were prolonged with worsening of BD category. Table 2 summarizes vital signs for the four classes of shock at the scene and upon ED admission. A significant increase in SI was observed through the groups I to IV. HR seemed unaltered within the four groups, and interestingly no group displayed a relevant tachycardia at all. A substantial hypotension with a mean SBP of 87 ± 45 mm Hg was observed in patients with a BD of more than 10 mmol/L (class IV) only. GCS scores decreased from a median of 14 (3 to 15) in class I patients to 3 (3 to 3) in class IV patients, whereas the percentage of patients intubated and mechanically ventilated at the scene increased from 40.2% (class I) to 83.4% (class IV), respectively. Furthermore, hemoglobin levels dropped from 12.8 ± 2.4 g/dL (class I) to 9.1 ± 3.3 g/dL (class IV), and platelet counts declined substantially throughout the classes I to IV (Table 3). Coagulopathy, defined by a Quick's value of not more than 70%, was found in patients with a BD of more than 6 mmol/L (classes III and IV).
An increase in BD category was associated with a progressively stepwise increasing number of blood products administered ( Figure 1). On average, the number of blood units transfused increased from 1.5 ± 5.9 units in class I patients to 20.3 ± 27.3 units in class IV patients. Packed red blood cells were transfused most frequently, followed by fresh frozen plasma and platelet concentrates ( Figure 1a). Simultaneously, observed and predicted transfusion requirements were concordant, as the number of blood products transfused paralleled increased TASH (Trauma-Associated Severe Hemorrhage) scores. Similarly, both fluid administration and the use of vasopressors increased through groups I to IV ( Figure 1b).
Validation of the new base deficit-based classification to the current ATLS classification of hypovolemic shock
When the two approaches to classify the extent of hypovolemic shock upon ED admission were compared, the new BD-based classification displayed a higher accuracy for discriminating the need for early blood products than the current ATLS classification of hypovolemic shock ( Figure 2). Through groups II to IV, the percentage of patients who had received at least 1 blood unit during early ED resuscitation was significantly higher compared with patients classified according to ATLS (Figure 2a). A similar pattern was noted for the frequency of MTs (Figure 2b). If patients were classified by BD, MT rates increased from 5% in class I (BD of not more than 2 mmol/L) to 52% in class IV (BD of more than 10 mmol/L). In contrast, when patients were classified according to ATLS, 4% of group I and only 25% of group IV patients received MT until ICU admission ( Figure 2b). Furthermore, BD distinguished more precisely between patients at risk of dying than the current ATLS classification of hypovolemic shock (Figure 2c). If classified by BD, 7.4% of class I and 51.5% of class IV patients, on average, died during in-hospital stay. In contrast, patients classified according to ATLS showed mortality rates of 2% in class I and 31% in class IV patients.
Discussion
The aim of this study was to introduce and validate a new BD-based classification of hypovolemic shock for the initial assessment of trauma patients. This analysis was conducted on a cohort of not less than 16,305 severely injured patients derived from the TraumaRegister DGU ® database.
The early assessment of hypovolemic shock and the prediction of transfusion requirements in multiply injured patients are still among the most challenging tasks in the initial management of trauma patients. One approach comprises the initial evaluation of vital signs as suggested by ATLS in its classification of hypovolemic shock by using combinations of HR, SBP, and GCS score. However, recent analyses on data of multiply injured patients derived from the TraumaRegister DGU ® and the TARN database indicated that the current ATLS classification of hypovolemic shock displays substantial deficits in allocating trauma patients into the corresponding classes [3,4]. Furthermore, the role of vital signs alone in the initial assessment of hypovolemic shock is still debated [3,[15][16][17][18]. Paladino and colleagues [19] recently assessed the additional use of metabolic parameters (for example, BD as a sensitive indicator of blood loss by measuring tissue perfusion) to traditional triage vital signs to distinguish major from minor trauma. In their retrospective single-center analysis, abnormal vital signs alone had a sensitivity of 40.9% for identifying major injury, but when abnormal metabolic parameters were added, the detection of major trauma increased significantly to a sensitivity of 76.4% [19].
In the present study, we propose a new classification based upon BD, a parameter that indicates the presence of hypovolemic shock and identifies patients who are at risk to require blood product transfusions. In times of POCT, BD is available within minutes after ED admission. As early as 2005, Rixen and Siegel [9] suggested the evaluation of BD as a more useful approach to quantify the extent of hypovolemic shock than the estimation of blood loss, the extent of volume resuscitation, or vital signs such as HR and SBP. Additionally, these authors proclaimed that BD may be superior to the measurement of lactate levels.
The diagnostic use and prognostic value of BD are well documented. Out of 10 clinical and 20 laboratory parameters assessed, changes of BD have been proven to be the best predictor of blood volume change in a canine model of hemorrhagic shock [20]. On the basis of 1,810 multiply injured trauma patients derived from the TraumaRegister DGU ® database, potential predictors for transfusion requirements, including BD and lactate, have been identified via logistic regression. Seven variables could be identified to independently predict MT: gender (male), SBP, HR, hemoglobin, relevant injuries to the abdomen and extremities (Abbreviated Injury Scale score of at least 3), and BD, but not lactate [21,22]. Furthermore, our group has recently compared six scoring systems to predict the risk of ongoing hemorrhage and MT, including the TASH, Prince of Wales Hospital/Rainer (PWH/Rainer), Larson, Vandromme, Schreiber, and ABC (assessment of blood consumption) scores. The TASH and PWH/Rainer scores showed the highest overall accuracy in predicting ongoing hemorrhage and MT. Interestingly, both scores include BD as a laboratory surrogate for hypoperfusion. In contrast, only one scoring system (that is, the Vandromme score) comprises lactate [23]. Similarly, several mortality scores (for example, the Emergency Trauma Score (EMTRAS) [24] and BIG score [25]) use BD as the laboratory surrogate for shock. In the present study, worsening BD paralleled worsening lactate. However, the use of Ringer's lactate in the initial fluid resuscitation as well as the presence of ketoacidosis in patients with diabetes may influence lactate levels and can falsify the initial assessment [9,26]. The present study did not intend to address the question of whether BD or lactate may be superior in risk-stratifying trauma patients, and therefore this question remains unanswered. However, the data derived from the TraumaRegister DGU ® database suggest that BD may be more accurate in detecting shock and blood loss as compared with lactate. Therefore, the proposed classification here is based on BD upon ED admission.
The present investigation revealed that increasing BD category reflected injury severity as demonstrated by an increasing injury severity score (ISS), new injury severity score (NISS), and RISC score and the incidence of MOF and sepsis. All of them are important factors influencing mortality and outcome of trauma patients. In our analysis, mortality rates rose from 7.4% to 51.5% with altered BD values. These observations are consistent with those of previous studies reporting an association between admission BD and mortality [6,7,10,11]. In a univariate logistic model, admission BD has been proven to be one of the best predictors for mortality, and a BD level of 6 mmol/L was identified as an important cutoff point for mortality [7,11]. Also, in pediatric and older trauma populations, BD has been shown to be an important indicator for injury severity and mortality [27][28][29][30]. Interestingly, the use of alcohol and drugs did not impair the predictive accuracy of admission BD with respect to trauma outcome [31].
In the present analysis, BD correlated with transfusion requirements, both in the overall amount of transfused Values are presented as mean (standard deviation). Cohort consisted of 16,305 patients. P < 0.001 for all parameters. aPTT, activated partial thromboplastin time; BD, base deficit.
blood units and in the percentage of patients who required any blood transfusion (≥ 1 blood unit). Furthermore, worsening BD paralleled increasing risk of ongoing hemorrhage as reflected by increasing TASH scores. The mean amount of blood products administered increased from 1.5 ± 5.9 to 20.3 ± 27.2 units with worsening BD category. These findings are consistent with those of a previous analysis demonstrating that worsening of BD was associated with an increased need for blood product transfusions [6,7,32]. Through the groups I to IV, the increasing amounts of intravenous fluids and vasopressors administered indicate the presence of hemodynamic instability and validated the results previously reported by Rixen and colleagues [7]. Laboratory findings such as decreases in hemoglobin levels and platelet counts and an impaired coagulation as reflected by a Quick's value of less than 70% were further interpreted as evidence for hypovolemic instability. Given these results, BD indicates the presence of hypovolemic shock related to hemostatic resuscitation need, transfusion requirements, laboratory findings, and mortality.
To the best of our knowledge, there is no gold standard to assess the presence of hypovolemic shock and to trigger therapeutic interventions. Thus, there is no option yet to test our novel approach against a gold standard. Therefore, the authors have decided to test against the current ATLS classification of hypovolemic shock given that this approach has been widely implemented in daily clinical routine as a standard protocol of care and for the initial assessment and treatment in trauma centers. Both the percentage of patients who had received at least one blood product and MTs were increased throughout the groups I to IV in both classifications. However, transfusion requirements were significantly higher when patients were classified by BD. Similar results were observed for mortality. Obviously, stratification by BD was associated with superior discrimination of trauma patients with respect to outcome and need for early blood products. In this context, ATLS seems to dramatically underestimate the need for blood product transfusion, particularly in group III and IV patients.
In summary, we suggest assessing patients in the ED on the basis of BD. Davis and colleagues [6] have already proposed that, in patients with a BD of less than 6 mmol/L, blood typing should be sufficient but that patients with a BD of at least 6 mmol/L should undergo blood typing and cross-match. Given MT rates and the identification of patients who are in need of emergent transfusion, a BD of 6 mmol/L could also be suggested as a threshold. Table 4 displays our suggestion for a modified version of the current ATLS classification of hypovolemic shock based upon BD as a principal trigger for action. Following the ATLS paradigm of 'keep algorithms simple', specific recommendations are presented with regard to preparation and use of blood products. For class I and II patients, a careful observation should be sufficient unless clinical circumstances dictate otherwise. In class III patients, preparation for transfusion should be initiated. In class IV patients, in whom MT rates were more than 50%, the trauma leader should definitely be prepared for an MT (for example, by activation of an MT protocol and corresponding logistics).
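The action triggers suggested for the modified classification could be encoded as a simple lookup; the wording below paraphrases the text and is illustrative, not a clinical recommendation:

    def bd_shock_class(bd):
        return 1 if bd <= 2.0 else 2 if bd <= 6.0 else 3 if bd <= 10.0 else 4

    ACTIONS = {
        1: "careful observation, unless clinical circumstances dictate otherwise",
        2: "careful observation, unless clinical circumstances dictate otherwise",
        3: "initiate preparation for blood product transfusion",
        4: "prepare for massive transfusion (activate MT protocol and logistics)",
    }

    def bd_trigger(bd_mmol_l):
        return ACTIONS[bd_shock_class(bd_mmol_l)]

    print(bd_trigger(11.2))  # class IV -> prepare for massive transfusion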
The retrospective nature of this study and the modifications applied to the ATLS classification in order to conduct the present analysis are clear limitations of this study, and the authors are aware of this shortcoming. Although POCT can provide BD within minutes after ED admission, not every ED is equipped with this technology. However, ATLS claims that the knowledge and skills taught are easily adapted to all venues of trauma care. This implies that every ED worldwide as well as pre-hospital systems (Pre-hospital Trauma Life Support) use similar principles and assessment tools as suggested by ATLS. However, this study may be a first step toward a 'modified ATLS classification of hypovolemic shock' with improved clinical applicability. Further validation on other trauma databases and in prospective studies is needed, especially on cohorts including higher numbers of penetrating injuries. In the absence of POCT, future research is needed to develop alternative approaches (for example, modified and clinically adopted combinations of vital signs), which can be used as an equivalent to BD in the initial assessment of hypovolemic shock. Hereby, the basic and underlying ATLS concept focusing on its intentionally simple applicability, independent of venue, technical prerequisites, and time scales, would be preserved.
Conclusions
BD upon ED admission indicates the acute presence of hypovolemic shock related to the need for hemostatic resuscitation, transfusion, laboratory findings, and mortality. The four proposed classes of worsening BD seem to predict transfusion requirements and mortality more appropriately than the current ATLS classification of hypovolemic shock. BD might be a relevant clinical approach to early risk-stratify severely injured patients in the state of hypovolemic shock and for blood product transfusion during initial assessment.
Key messages
• The early recognition and management of hypovolemic shock remain among the most challenging tasks in the initial assessment of trauma patients.
• The current Advanced Trauma Life Support (ATLS) classification of hypovolemic shock displays deficits in reflecting clinical reality; therefore, we propose a new hypovolemic shock classification based on a metabolic marker sensitive to blood loss by measuring tissue perfusion (for example, base deficit (BD)).
• A classification based on four groups of worsening BD correlates with the extent of hypovolemic shock in severely injured patients, as reflected by increased transfusion requirements and higher massive transfusion and mortality rates.
• The new BD-based classification discriminates better the need for early blood product transfusion and mortality in severely injured patients than the current ATLS classification of hypovolemic shock.
Authors' contributions
MMu contributed to study design, acquisition and interpretation of data, and drafting of the manuscript. UN and BB contributed to analysis and interpretation of data and to revision of the manuscript. TB, AW, TF, and TP contributed to study design and to revision of the manuscript. MMae contributed to study conception and design, acquisition of data, analysis and interpretation of data, and revision of the manuscript. All authors read and approved the final manuscript. | 2016-05-12T22:15:10.714Z | 2013-03-06T00:00:00.000 | {
"year": 2013,
"sha1": "9ee291891620fba45199e977ef2ffc84becf14a8",
"oa_license": "CCBY",
"oa_url": "https://ccforum.biomedcentral.com/track/pdf/10.1186/cc12555",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "11e5276e5ef8f89ee4f8a182204b903ea84f6601",
"s2fieldsofstudy": [
"Medicine",
"Biology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
14857253 | pes2o/s2orc | v3-fos-license | A study on the flexibility of enzyme active sites
Background A common assumption about enzyme active sites is that their structures are highly conserved to specifically distinguish between closely similar compounds. However, with the discovery of distinct enzymes with similar reaction chemistries, more and more studies discussing the structural flexibility of the active site have been conducted. Results Most of the existing works on the flexibility of active sites focus on a set of pre-selected active sites that were already known to be flexible. This study, on the other hand, proposes an analysis framework composed of a new data collecting strategy, a local structure alignment tool and several physicochemical measures derived from the alignments. The method proposed to identify flexible active sites is highly automated and robust so that more extensive studies will be feasible in the future. The experimental results show the proposed method is (a) consistent with previous works based on manually identified flexible active sites and (b) capable of identifying potentially new flexible active sites. Conclusions This proposed analysis framework and the former analyses on flexibility have their own advantages and disadvantages, depending on the cause of the flexibility. In this regard, this study proposes an alternative that complements previous studies and helps to construct a more comprehensive view of the flexibility of enzyme active sites.
Background
Enzymes are organic catalysts that play an important role in various biological processes. It has been shown that the speeds of enzymatic reactions are much faster than non-enzymatic ones [1]. Such catalytic processes do not occur at arbitrary regions but at specific sites (usually one or at most a few) of an enzyme. The sites of catalysis have been called "active sites". For rapidly and specifically distinguishing between closely similar compounds, the physiochemical properties of active sites are expected to be highly conserved. A good example is the Ser-His-Asp catalytic triad of serine proteases, where the relative positioning of these three catalytic residues remains rigid in enzymes with very different global structures [2].
However, this expectation of invariability is not applicable to all active sites. Several studies have found homologous enzymes that can perform different catalyses via different mechanisms [3][4][5]. Grishin has demonstrated examples of achieving similar reaction chemistries in completely different ways [6]. Such conditions often involve structural changes in the proximity of the active sites [6,7]. The variability of structural or chemical characteristics among binding sites has also been discussed as "flexibility" [8], "plasticity" [9] and "stability" [10], where stability is opposite to variability. We focus on the structure rather than sequence variations and adopt "flexibility" in this manuscript.
In 2002, Todd et al. reported 17 examples of enzyme homologues having obvious structural changes within the active sites [9]. They did not perform any computational analysis to measure the flexibility of the collected active sites. Instead, Todd and colleagues provided valuable discussion on this issue based on many examples. They confirmed the existence of flexible active sites and * Correspondence: darby@ee.ncku.edu.tw 2 Department of Electrical Engineering, National Cheng Kung University, Tainan 701, Taiwan Full list of author information is available at the end of the article proposed several evolutionary possibilities that could result in the flexibility. Another study conducted by Kahraman et al. analyzed the binding sites associated with the same ligand [11]. Kahraman and colleagues pre-selected nine ligand types of different sizes and flexibilities and analyzed 100 Protein Data Bank (PDB) [12] structures of binding sites that bind the pre-selected ligands. Their work used spherical harmonics as the shape descriptor to model the binding sites and invoked a shape comparison algorithm to quantify the flexibility of binding sites [13]. They have shown that shape variations between a binding site and its ligand counterpart are correlated. In 2009, Saranya and Selvaraj systematically analyzed the variation of protein binding sites of 200 PDB structures collected from eight protein families [14]. They focused on the cavity volume of protein binding sites and observed variations of the cavity volume of the same site when binding different ligands. Saranya and Selvaraj concluded that the volume of both the binding site and its ligand counterpart are highly correlated to the atom-atom interactions in the binding site.
The above three studies share a basic analysis philosophy: identifying some active sites known to be flexible and then investigating their flexibilities. Furthermore, a common assumption that the flexibility of active sites arises from compensating for ligand flexibility led previous studies to associate site flexibility with the ligand counterpart. This assumption largely reduces the data available for analysis (protein-ligand complexes are required) as well as the potential to understand new classes of flexible sites.
In this regard, this study proposes an analysis framework aimed at identifying flexible active sites rather than analyzing known flexible ones. We compile 58 groups of annotated active sites from the CSA (Catalytic Site Atlas) database [15], which is the largest catalogue of catalytic residues. This dataset of active site groups contains 1,612 PDB structures, spanning 46 EC codes and 188 protein families. In this study, the flexibility of an active site group is obtained from the pair-wise structure alignments among that group. Here we adopt the CLoSA (Constraint-based Local Structure Alignment) algorithm [16] as the alignment tool; this is designed for and has been shown to be successful in discriminating active sites. The proposed method requires less manual intervention and is suitable for analyzing larger datasets than existing analysis methods. Our experimental results show that it (a) gives results consistent with the previous works based on manually identified flexible active sites and (b) is capable of identifying potentially new flexible active sites.
To summarize, this study proposes a new strategy of data preparation and a corresponding analysis framework. The collected dataset is constructed using automatically extracted geometrical templates without manual intervention. This dataset is relatively large in comparison with those used in the previous studies on active site flexibility. This alleviates the bias of individual differences in protein structures due to unfavorable factors such as crystallization. In addition, the collected dataset can be used to analyze conformational changes during the catalytic reaction and/or between unbound and bound forms. This is an advantage, in that the method can detect types of flexibility caused by factors other than known ones such as flexible ligands; however, it is also a disadvantage, in that further filtering is required if specific types of flexibility are of interest. Some limitations of the proposed analysis framework are discussed at the end of the "Results and Discussion" section. Namely, different data collection strategies may observe distinct causes of flexibility. In this regard, this study proposes an alternative analysis complementing previous studies and helps to construct a more comprehensive view of the flexibility of enzyme active sites.
Results and discussion
This section first describes the dataset, including how the data is prepared and how the physicochemical properties of active sites are obtained using local structure alignment. Next, the measured flexibilities of active sites and their relation to the corresponding ligands are presented. At the end of this section, we outline some case studies and discuss the limitations of the proposed analysis framework.
Dataset
We started by collecting data from the CSA database, the largest database of active site annotations. This data preparation method has several advantages and provides analyses alternative to previous studies starting from ligands. First, the collected structures are not required to be complexes. The method can be used to analyze the flexibility of unbound structures. Second, CSA entries contain annotated catalytic residues, which can be used to "calibrate" the alignment. Because active sites are small and we focus on flexible ones, there are usually multiple alignments of two site structures with very different aligned residues. How to choose the correct alignment largely affects the measured flexibility. In this study, alignments aligning catalytic residues better are first considered.
CSA version 2.2.11 contains 91,840 entries, spanning 1,208 EC codes and 3,325 protein families. Each CSA entry is composed of two to six residues of a PDB structure. The 91,840 CSA entries are distributed across 23,449 PDB structures. To simplify the analysis, entries where the corresponding PDB structure has multiple EC codes are excluded. This study focuses on oxidoreductases, as their application in performing synthetic transformations is an important area [17]. The remaining set comprises 5,613 CSA entries, where 324 entries are obtained from the literature and the other 5,289 entries are homologues of the 324 literature entries. Some of the 324 literature entries overlap, that is, they share some residues in the same PDB structure. In the proposed analysis framework, the overlapping literature entries and their homologous entries are considered as a group. Accordingly, the 5,613 CSA entries are clustered into 151 active site groups.
Then pair-wise structure alignments are performed on each of the 151 active site groups using the CLoSA algorithm [16]. Each alignment includes information on the transformations to superimpose the two aligned local structures and a list of matched residues. More details of the structure alignment can be found in the "Methods" section. Some active site groups are further removed during this step. First, small groups with fewer than ten PDB structures are discarded. Second, a group is considered too diverse if CLoSA fails in 25% of the pair-wise alignments or if the proportion of residues successfully aligned is less than 50% in that group. This diversity could be due to the existence of subgroups of the active site, which would mislead the flexibility analysis. The final dataset of this study is a collection of 58 active site groups from 1,612 PDB structures across 46 EC codes and 188 protein families.
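A hedged sketch of this group-level quality filter follows; the data layout (one record per pairwise alignment, with a failure flag and a %align value) is an assumption for illustration:

    def keep_group(n_pdb_structures, alignments, min_size=10,
                   max_fail_rate=0.25, min_pct_aligned=50.0):
        """alignments: list of dicts with 'failed' (bool) and, when the
        alignment succeeded, 'pct_aligned' (%align, 0-100)."""
        if n_pdb_structures < min_size:                  # discard small groups
            return False
        fail_rate = sum(a["failed"] for a in alignments) / len(alignments)
        if fail_rate >= max_fail_rate:                   # CLoSA fails 25% of pairs
            return False
        ok = [a["pct_aligned"] for a in alignments if not a["failed"]]
        return sum(ok) / len(ok) >= min_pct_aligned      # >= 50% residues aligned

    print(keep_group(12, [{"failed": False, "pct_aligned": 80.0},
                          {"failed": False, "pct_aligned": 65.0}]))  # -> True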
Measuring the flexibility of enzyme active sites
Based on the results of pair-wise alignment, this study provides several physicochemical properties of local structures for measuring the flexibility (Table 1). The size of an active site is defined as the average number of residues of the associated local structures. #align and %align measure the quantity of matched residues, where #align is the number of residues successfully aligned in the pair-wise alignment and %align is the ratio of #align to the number of residues of the smaller local structure in the alignment. The stdev (standard deviation) of %align is used as the flexibility index in this study. This index, as will be elaborated in the following subsections, is consistent with previous works on flexibility. Charge in Table 1 represents the electrostatic state around the active site by averaging the charge of each associated local structure of an active site group. Here the charge of a local structure is the sum of its amino acid charges (1: positive, 0: neutral and -1: negative) according to Klein et al.'s study [18]. The calculation of Polar and ASA (accessible surface area) in Table 1 is similar to Charge, except that the per amino acid polarity is obtained from the study of Radzicka and Wolfenden [19] and the per amino acid ASA is calculated with the DSSP (Dictionary of Protein Secondary Structure) package [20].
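To make these measures concrete, here is a minimal illustrative sketch (not the authors' code); the per-residue charge mapping and the data layout are assumptions:

    from statistics import stdev

    def flexibility_index(alignments):
        """alignments: (n_aligned, size_a, size_b) per successful pairwise
        alignment in one group; returns the stdev of %align."""
        pct_align = [100.0 * n / min(sa, sb) for n, sa, sb in alignments]
        return stdev(pct_align)

    # Per-residue charges (1: positive, 0: neutral, -1: negative); the exact
    # assignment follows Klein et al. [18] and is assumed here.
    CHARGE = {"ARG": 1, "LYS": 1, "HIS": 1, "ASP": -1, "GLU": -1}

    def structure_charge(residue_names):
        return sum(CHARGE.get(r, 0) for r in residue_names)

    print(round(flexibility_index([(9, 16, 14), (11, 16, 15), (8, 14, 15)]), 1))
    print(structure_charge(["ASP", "HIS", "SER", "ARG"]))  # -> 1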
This proposed flexibility index is consistent with previous works
All information, including the physicochemical properties of the active site and some statistics such as number of CSA and PDB entries, of the 58 active site groups can be found in Additional file 1. Table 2 shows the ten most flexible active sites and the ten most rigid groups identified using the proposed method. In the ten most flexible groups, 1powA, 1getA and 1d4cA bind FAD (flavin adenine dinucleotide); while 4mdhA, 1arzA and 1emdA bind NAD (nicotinamide adenine dinucleotide). In the flexibility study of Kahraman et al. [11], FAD and NAD were selected as the biggest and most flexible molecules. 1dhfA binds NDP (nicotinamide adenine dinucleotide phosphate), which is simply NAD with a third phosphate group attached and has very similar chemistry of that of NAD [21,22]. In the ten most rigid groups, 1dveA, 3nosA, 1dj1A, 7atjA and 2cpoA bind heme; 1idtA and 1fcbA bind FMN (flavin mononucleotide); while 1aopA binds SRM (siroheme), a heme-like chromophore with a closely similar prosthetic group to heme [23,24]. Heme and FMN were reported to be slightly flexible in Kahraman et al.'s study. As a result, 15 of the 20 groups bind the ligands discussed in previous flexibility studies [11,25], where the flexible and rigid sites/ligands are manually selected. These results reveal, in addition to the good performance of the proposed analysis framework, that the flexibility of protein-ligand binding sites can be observed even with distinct data preparation strategies.
In Table 2, RMSD has the highest correlation to the flexibility index. However, the correlation (R 2 =0.35) is low, because RMSD relies heavily on the number of items under consideration. If two corresponding residues in two local structures are too distant to be aligned, a large conformational change is indicated. However, in the RMSD calculation, this residue pair will be discarded, usually leading to a smaller RMSD. Thus, a rigid active site must have a small RMSD (see the ten rigid groups in Table 2), but a small RMSD does not guarantee a rigid active site (see the ten flexible groups in Table 2). In this regard, the stdev of RMSD is more suitable than RMSD for measuring how the alignments vary among a group. The higher correlation (R 2 =0.55) of the stdev of RMSD to the flexibility index concurs with this argument. In addition to geometric properties, this study also analyzes the charge, polarity and surface area of active sites. In Table 2, the charge of all flexible active sites is in the range of [-2,2] and is either slightly negative or nearly neutral. Conversely, four rigid active sites (1idtA, 1n2cA, 2cpoA and 1aopA) have a larger charge. However, though Charge is more highly correlated to the flexibility index than Polarity and ASA, the R 2 (0.14 for Charge and 0.22 for the stdev of Charge) is still limited. This suggests that the chemical characteristics near the active sites are much more conserved than or irrelevant to the geometric characteristics. Otherwise, more chemical properties should be considered. The last observation in Table 2 is that ASA is completely uncorrelated to the flexibility index (R 2 =0.00), but its stdev has a comparable correlation (R 2 =0.15) to Charge. A reasonable explanation is the partially bound ligands, which sink only partially into an active site with the other end protruding into the solvent [11]. Such conditions make ASA a less useful measure. However, its stdev can slightly detect the surface variation of the bound end of the ligand.
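The R 2 values quoted here can be reproduced with an ordinary least-squares fit; the numbers below are toy values, not data from this study:

    import numpy as np
    from scipy.stats import linregress

    flex_index = np.array([12.1, 9.8, 15.3, 4.2, 3.1, 7.7])  # toy flexibility indices
    rmsd_stdev = np.array([1.9, 1.4, 2.3, 0.6, 0.4, 1.1])    # toy stdev of RMSD

    r = linregress(rmsd_stdev, flex_index).rvalue
    print(round(r ** 2, 2))  # coefficient of determination R^2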
Flexibility of the ligand counterpart
This subsection verifies the correlation of the flexibility of the active site to its ligand counterpart. To identify the ligand counterpart of an active site, the closest hetero molecule is considered. This study associates an active site with a set of the PDB structures, thus the closest hetero molecules could vary owing to the absence of some ligands in different PDB structures. We guarantee that the selected ligand for each active site (a) is the most frequent hetero molecule observed in the associated PDB structures and (b) has at least one heavy atom whose distance to a heavy atom of the active site is closer than 6.5Å. The selected ligands of 19 active sites in Table 2 satisfy the above conditions. The only exception is 1a05A of which the most frequent hetero molecule is the SO4 (sulfate ion), but SO4 only appears five times in the 25 associated PDB structures and three of them are distant (>6.5Å) from the active site.
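A hedged sketch of this selection rule, assuming per-structure lists of nearby hetero molecules and heavy-atom coordinates in Å:

    from collections import Counter
    import math

    def within_cutoff(site_atoms, ligand_atoms, cutoff=6.5):
        """True if any heavy-atom pair lies closer than the cutoff."""
        return any(math.dist(a, b) < cutoff
                   for a in site_atoms for b in ligand_atoms)

    def pick_ligand(structures):
        """structures: per-PDB dicts with 'het_names', the hetero molecules
        that passed the 6.5 Å check for that structure's site (assumed layout)."""
        counts = Counter(name for s in structures for name in set(s["het_names"]))
        return counts.most_common(1)[0][0] if counts else None

    print(pick_ligand([{"het_names": ["FAD"]},
                       {"het_names": ["FAD", "SO4"]}]))  # -> FAD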
The counterparts of the 20 active sites cover ten ligands (Table 3). Table 4 shows the nine properties of ligands adopted in this study, where eight properties, except Dist., are obtained by querying the PubChem database [26]. Note that identical ligands could have different identifiers in different PDB structures; the ligand name listed in Table 3 is used to query the PubChem database. In Table 3, the property most correlated to the flexibility index is #HA. However, the correlation is limited (R 2 =0.31). The next most correlated properties are TPSA (R 2 =0.26), #HD (R 2 =0.21) and Dist. (R 2 =0.16). All the remaining properties are uncorrelated to the flexibility index (R 2 <0.1). When further checking the #HA, we observe that all ligands associated with flexible sites have more H-bond acceptors except CLF, the only inorganic compound. The Fe atom may make the binding mechanism distinct from other ligands. On the other hand, all ligands associated with rigid sites have fewer H-bond acceptors except SRM. This is an outlier of the ten rigid sites in terms of MW, #HD, #rotatable and #atom. Thus, we exclude these two ligands. The correlations of all of the properties are significantly increased (Figure 1).
After excluding CLF and SRM, #HA, TPSA and #HD are still the most correlated properties with higher R 2 of 0.89, 0.83 and 0.68, respectively. The next most correlated property becomes #rotatable. It has a much better R 2 of 0.63 than Dist. (0.43), the property more correlated to the flexibility index in the analysis without excluding CLF and SRM. In principle, #rotatable is a good indicator of the flexibility of a ligand, while Dist. might be misled by a few atoms. This suggests that the analysis that excludes CLF and SRM is more reasonable.
According to the results of this study, the flexibility of an active site is correlated to both the topological polar surface area and the number of H-bond acceptors of its ligand counterpart, followed by the number of H-bond donors and of rotatable bonds. These four properties, however, are not universal to all kinds of ligands. More efforts are required to understand the flexibility of active sites binding to special ligands, such as inorganic compounds.
(Table 3 footnote: the details of each column are described in Table 4. 1 flavin adenine dinucleotide. 2 Fe(8)-S(7) cluster. 3 nicotinamide adenine dinucleotide. 4 nicotinamide adenine dinucleotide phosphate. 5 No appropriate ligand is available for this active site group. 6 adenosine triphosphate. 7 heme. 8 flavin mononucleotide. 9 3-hydroxy-3-carboxy-adipic acid. 10 4-hydroxybenzoic acid. 11 siroheme.)
Case studies
This subsection uses three examples identified by the proposed method to discuss the active site flexibility. The first example is an active site which binds FAD, where a disordered segment is observed enabling adaptation to the flexible ligand. The second example is a dephosphorylation reaction, where the active site varies according to the chemical changes between the reactant and the product. The third example is a zinc site of two geometric forms. The last two examples demonstrate how the proposed method detects flexible active sites without known flexible ligand counterparts. These results suggest that the proposed framework helps to identify novel flexible active sites as well as novel flexibility types worthy of further study.
The first example is the alignment between 2b7sA and 3cirA, both belonging to the 1d4cA active site group. The enzyme of this active site group is succinate dehydrogenase (EC code: 1.3.99.1), of which the corresponding ligand is FAD. In this case, nine of 16 residues in the local structures and three of four catalytic residues are successfully matched (Figure 2). The only unmatched catalytic residue (R402 of 2b7sA and R287 of 3cirA) locates on a helix (T401-A411 of 2b7sA and R287-H296 of 3cirA). This helix is denoted as h. We performed global structure alignment on 2b7sA and 3cirA and found that there is an obvious movement of h. We further checked the proximity of h and identified a disordered segment (L254-P286), denoted as d, in 3cirA. However, the corresponding segment of d in 2b7sA (L388-D400) is ordered. In the binding process of this active site, the helix h plays the role of a latch to fix the ligand after it enters the active site. The disordered segment d is used to fasten the latch. As a result, the ordered and disordered forms of d reveal that 2b7sA and 3cirA could represent two states of the same binding process, leading to the observed flexibility.
The second example is the alignment between 2c8vA and 1nipB, both belonging to the 1n2cE active site group. The enzyme of this active site group is nitrogenase (EC code: 1.18.6.1). There are two corresponding ligands for this active site, ATP (adenosine triphosphate) and ADP (adenosine diphosphate). This is a typical dephosphorylation reaction where ATP is a coenzyme transporting chemical energy and will be converted into its precursor, ADP, after the reaction: ATP + H2O → ADP + Pi (inorganic phosphate). Thus, in this case, although ATP and ADP are different ligands, they should be regarded as the starting and ending states of the same compound in this reaction. Figure 3 shows the alignment. Unlike the previous example, there is no specific residue that has obvious movement in the local structures. Most of the residues are successfully matched in this alignment, but the RMSD of 3.1Å is large. Figure 3 clearly reveals that both ATP and ADP attach to the active site, but they do not overlap at all. This suggests that the ATP/ADP compound "shifts" along the active site during the reaction, leading to the observed flexibility.
The third example is the zinc site of Cu,Zn superoxide dismutase (SOD). Analyzing the flexibility from its ligand counterpart, the zinc ion, is not applicable. Thus, we manually looked into the results of pair-wise structure alignments and speculated that there are two subgroups in this active site group. The structures in the same subgroup match well but those in different subgroups match badly. Here we use the alignment among 1esoA, 1e9qB, 1oezZ and 1uxlB to demonstrate the observed grouping phenomenon (Figure 4). All of them belong to the 2jcwA active site group (EC code: 1.15.1.1). The four structures form two sets, 1esoA-1e9qB and 1oezZ-1uxlB. In Figure 4, the residues connected to H63 in all the four local structures matched well. It has been shown that the H63 histidine does coordinate to the zinc ion [27]. The unmatched residues were H44 (only appears in 1esoA and 1e9qB, the first subgroup) and H80 (only appears in the second subgroup). According to our survey, the H44 histidine has been speculated to be more related to the copper ion of Cu,Zn SOD [28,29], while the H80 histidine has not been mentioned in any surveyed Cu,Zn SOD studies. This observation of the flexibility of Cu,Zn SOD concurs with the results of two previous studies. One study [27] examined the zinc site of Cu,Zn SOD and observed two different, pH-independent PAC spectra, concluding that Cu,Zn SOD has at least two geometric forms for the zinc site. Another study by Falconi et al. [30] analyzed the prokaryotic and eukaryotic Cu,Zn SODs with limited proteolysis and molecular dynamics simulation. They confirmed that a seven-residue insertion of the Escherichia coli Cu,Zn SOD forms an alternative organization of the active site, compared to the eukaryotic ones. The seven-residue insertion is observed in our first set (K54^A-A54^G of 1esoA) but not the second set. This further suggests that our speculated two forms of Cu,Zn SOD are probably the same as those observed in [30].
Limitations and problems
This study proposes an alternative analysis framework that collects data without ligand information and summarizes flexibility through properties based on atomic coordinate comparison. The results and discussion above focus on the advantages of the proposed method; however, it suffers from some limitations. The most challenging problem is the existence of subgroups within an active site group. Subgroups can arise from active sites that have multiple binding forms or that can bind multiple distinct ligands. The third example in the previous subsection reveals the potential of the proposed method to detect such subgroups, but the analysis is still manual. Active site clustering, which is an important issue in itself, is required to tackle this problem. The second problem is the limitation of the selected physicochemical properties. As shown in the case studies, the flexibility involved in binding a chemical compound is a complicated process that is difficult to describe with a few measures. A reasonable solution is to design different measures for distinct binding conditions. The CLF ligand in Table 3 is a good example demonstrating that inorganic ligands require flexibility measures other than #HA, TPSA, #HD and #rotatable.
Finally, the proposed analysis is also limited by the adopted comparison algorithm. The proposed method uses atomic coordinate comparison, which, compared to shape comparison, has the problem of superposing binding sites composed of different numbers of atoms and atom types [11]. This echoes the point that the present work is an alternative to, rather than a substitute for, other approaches, since shape representation has its own problems with active sites that lack star-like shapes [11]. A model considering both atomic details and shape characteristics is needed to address this limitation. Another possible solution is to classify sites by their shapes and use the appropriate comparison algorithm accordingly.
Conclusions
Knowing the flexibility of enzyme active sites is a crucial step in understanding the various binding mechanisms. There have been many studies examining this problem by selecting some flexible active sites and analyzing their evolutionary and structural conservation. This study, on the other hand, proposes an analysis framework to detect novel active sites with flexibility. The framework is composed of a new data collecting strategy, a local structure alignment tool and several physicochemical measures derived from the alignments. The experimental results show the applicability of combining the three components as well as its potential to identify flexible active sites. In general, the proposed analysis framework provides an alternative rather than a substitute for previous works. It is highly automated and robust, so that more extensive studies are feasible in the future.

[Figure 4 caption: The proteins are represented as strands and the active sites as sticks. 1esoA is pink, 1oezZ is green, 1e9qB is orange and 1uxlB is blue. The corresponding zinc ions are represented as balls (yellow for 1esoA and purple for 1oezZ). The seven-residue insertion of 1esoA (K54^A-A54^G) is red. The caret symbol (^) indicates an insertion code in PDB format, which reveals that these seven residues were added after the first submission of 1esoA to the PDB. All four structures share a histidine (H63 of 1esoA). The second histidine (H44 of 1esoA) appears only in 1esoA and 1e9qB, while the third histidine (H80 of 1oezZ) appears only in 1oezZ and 1uxlB.]
Methods
Pair-wise structure alignments of an active site group

In this study, an active site is represented by a group of PDB structures associated with annotated catalytic residues, and the flexibility of an active site is obtained from the pair-wise structure alignments of its associated PDB structures. We define the local structure of an active site as the catalytic residues and those within 3 Å of the catalytic residues, where the distance between two residues is the distance between their nearest heavy atoms. We define the local structure in this way because, although the catalytic residues play the most important role in ligand binding, the surrounding residues need to be conserved to provide a stable environment. A group of n local structures results in n(n-1)/2 alignments (the n self-to-self alignments are not required).
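The residue-selection rule above is easy to express in code. The following is a minimal sketch, not the authors' implementation: it assumes residues are supplied as a mapping from residue identifiers to arrays of heavy-atom coordinates (a hypothetical data layout) and returns the identifiers forming the local structure.

```python
import numpy as np

def min_heavy_atom_distance(res_a, res_b):
    """Distance between two residues = distance of their nearest heavy atoms."""
    a = np.asarray(res_a, dtype=float)          # shape (n_atoms_a, 3)
    b = np.asarray(res_b, dtype=float)          # shape (n_atoms_b, 3)
    diff = a[:, None, :] - b[None, :, :]
    return np.sqrt((diff ** 2).sum(axis=-1)).min()

def local_structure(residues, catalytic_ids, cutoff=3.0):
    """Return the catalytic residues plus all residues whose nearest heavy atom
    lies within `cutoff` Å of any catalytic residue."""
    selected = set(catalytic_ids)
    for rid, coords in residues.items():
        if rid in selected:
            continue
        if any(min_heavy_atom_distance(coords, residues[c]) <= cutoff
               for c in catalytic_ids):
            selected.add(rid)
    return sorted(selected)

# A group of n such local structures then yields n * (n - 1) // 2 pair-wise alignments.
```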
This study adopts the CLoSA (Constraint-based Local Structure Alignment) algorithm to perform pair-wise structure alignment. It is an efficient local structure alignment tool with four constraints designed for active site comparison. The CLoSA algorithm is composed of three steps: cavity identification, structure comparison, and alignment scoring. In our implementation, the cavity identification step is disabled, since this study only aligns local structures that are already cavity-like by construction. The structure comparison step of CLoSA is based on the geometric hashing algorithm [31], where the alignment frames examined are defined by the two backbone bonds connected to the alpha carbon of each residue. This definition has been widely used when applying the geometric hashing algorithm to protein structure alignments [32,33]. In this step, two residues are regarded as successfully aligned if the distance between them is ≤ 5 Å. Accordingly, the time complexity of the structure comparison step is O(n1·n2·(n1+n2)), where n1 and n2 denote the numbers of residues in the two compared local structures.
The most distinct feature of CLoSA is the inclusion of four constraints in the alignment scoring step. The first constraint ensures that ≥20% of the residues in the given structure are successfully aligned. The second constraint states that the RMSD (root mean square deviation) of the aligned alpha carbons must be ≤5 Å. In our implementation, the third constraint, on the opening direction, is disabled in order to handle active sites that bind ligands in different orientations [34]. The fourth constraint requires an SOC (sequence order conservation) ratio ≥0.37 and an sRMSD (skew RMSD) ≤5 Å for the aligned alpha carbons. SOC counts the aligned residues with inconsistent sequence orders, and the sRMSD is an adjusted RMSD that penalizes such order mismatches by assigning them larger RMSD values, up to MAX_RMSD, the maximum distance allowed between two aligned alpha carbons (5 Å in this study). Finally, the alignments that pass all the constraints are ranked by the TM-score [35], a measure of the similarity of the topologies of two proteins. The TM-score is more sensitive than the RMSD of the aligned alpha carbons in assessing the quality of a structure alignment.
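As an illustration of how these acceptance rules combine, the sketch below checks the constraints for one candidate alignment. It is a toy example, not the CLoSA code: the sRMSD check is omitted because its defining equation is not reproduced in the text, and the inputs (per-pair alpha-carbon distances and sequence-order flags) are hypothetical.

```python
import numpy as np

MAX_RMSD = 5.0  # Å, maximum distance allowed between two aligned alpha carbons

def passes_closa_constraints(n_query, aligned_dists, order_consistent):
    """Check the CLoSA acceptance constraints for one candidate alignment.

    aligned_dists    : distances (Å) between aligned alpha-carbon pairs
    order_consistent : booleans, True if a pair respects the sequence order
    (The sRMSD test is not implemented here; it penalizes order-mismatched
    pairs with larger RMSD values, as described in the text.)
    """
    d = np.asarray(aligned_dists, dtype=float)
    ok = np.asarray(order_consistent, dtype=bool)

    aligned_fraction = len(d) / n_query
    rmsd = np.sqrt(np.mean(d ** 2)) if len(d) else np.inf
    soc_ratio = ok.mean() if len(ok) else 0.0

    return (aligned_fraction >= 0.20      # constraint 1: >=20% residues aligned
            and rmsd <= MAX_RMSD          # constraint 2: RMSD <= 5 Å
            and soc_ratio >= 0.37)        # constraint 4: SOC ratio >= 0.37
```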
Additional material
Additional file 1: A collection of 58 active sites. This file includes the physicochemical properties of the 58 active site groups collected in this study.
Current Noninvasive MR-Based Imaging Methods in Assessing NAFLD Patients
The chapter will focus on the different aspects of nonalcoholic fatty liver disease (NAFLD). An update in noninvasive MR-based imaging will be offered in detail, pointing mainly to fat, iron, and fibrosis deposition and the accuracy of quantitative methods in disease grading and severity assessment. NAFLD is the most common cause of chronic liver disease (CLD) in Western countries. MRI is used to evaluate the disease, to assess the severity, and to quantify the amount of fat deposition, being also the method of choice to evaluate and quantify iron overload. Diagnosis and staging of liver fibrosis is one of the most challenging aspects of noninvasive imaging. “Virtual biopsy” refers to the possibility of imaging techniques to depict, map, and measure fibrosis minimizing the need for invasive liver biopsies in CLD. MRI allows an accurate determination of steatosis, iron overload, and fibrosis, even if they coexist.
The importance of noninvasive evaluation of liver steatosis and fibrosis in NAFLD patients
NAFLD is currently the most common cause of CLD worldwide. It is defined by lipid droplet accumulation within hepatocytes in the absence of substantial alcohol intake. NAFLD comprises a disease spectrum ranging from simple steatosis to nonalcoholic steatohepatitis (NASH), which may progress into liver fibrosis and even end-stage cirrhosis [1]. NAFLD is becoming a major concern with the increasing incidence of obesity in Europe. Available data suggest that the global prevalence of NAFLD is estimated at 24%, being the leading cause of CLD in the USA and Europe [2].
The differentiation of simple steatosis from NASH has great clinical importance. In addition to liver steatosis, NASH presents with inflammation and hepatocellular injury [3]. The differentiation between the two entities is routinely made by histopathological analysis after liver biopsy. However, liver biopsy is an invasive method, with inherent risks that include sampling error and serious complications [4].
Currently, there is an urgent need for a noninvasive method to accurately assess liver fibrosis and liver steatosis. Ultrasonography (US)-based and computer tomography (CT)-based modalities can demonstrate the morphologic alterations of cirrhosis, but they are limited in evaluating patients with earlier stages of liver disease [5].
Advancements in magnetic resonance imaging (MRI), with its unique and intrinsic imaging features, have provided the opportunity to revolutionize how we image and evaluate patients with diffuse liver diseases. In addition, with the development of new antifibrotic therapeutic agents, MRI-based techniques may play a central role in monitoring treatment response and in the clinical management of patients with NAFLD [6,7].
The recent technical developments in MRI hardware and software, including the use of three-Tesla MR devices in daily routine work, have significantly improved temporal and spatial resolution, especially in the case of contrast-enhanced T1-weighted 3D sequences. The use of various liver-specific hepatobiliary contrast agents enables not only morphological characterization but also a functional assessment of all liver lesions, as well as characterization of diffuse parenchymal changes [8].
Liver biopsy: the available but imperfect gold standard
Currently, liver biopsy is the reference standard for the diagnosis and staging of liver fibrosis [4]. However, this procedure has several major limitations, including its invasive nature, risk for potential complications, poor patient acceptance, interobserver variability, and possible sampling errors [4,9].
Liver biopsy captures only a tiny fraction of the liver (roughly 1/50,000), leading to sampling errors [10]. In an attempt to reduce sampling variability, it is recommended that liver biopsy specimens be at least 2.0 cm long and contain at least 11 portal triads. Biopsy specimens that do not meet these criteria are associated with a high risk of understaging (false negatives) [11].
In contrast to fibrosis in chronic viral hepatitis, fibrosis in alcoholic hepatitis and in the adult form of NAFLD begins adjacent to the central veins. The fibrosis is laid down in a perisinusoidal manner, and the scar tissue surrounds individual hepatocytes. As the disease advances, perisinusoidal fibrosis accumulates adjacent to portal tracts, and the fibrotic tissue eventually coalesces into fibrous bridges connecting portal triads and central veins, ultimately culminating in cirrhosis [3]. As cirrhosis develops, the characteristic histologic features of fatty liver disease may be lost. The perisinusoidal fibrosis may no longer be apparent, and other features (e.g., inflammatory cells, ballooned hepatocytes, and steatosis) may subside. Thus, cirrhosis due to fatty liver disease may be indistinguishable from cirrhosis due to viral hepatitis or other causes [12].
MRI-based methods for the noninvasive diagnosis of NAFLD
The search for the best diagnostic technique in terms of noninvasiveness and accuracy is still a major concern in recent research activity. In the recent literature, the role of several imaging diagnosis tools and specific contrast agents is reported in the evaluation of diffuse liver diseases such as steatosis, fibrosis, and cirrhosis.
The differentiation of the prognostically relatively benign simple steatosis from potentially progressive NASH is a crucial issue [13,14]. Moreover, NAFLD is a reversible condition, especially early in the course of the disease; therefore, diagnosing and correctly staging patients with NAFLD is essential in order to prevent the development of irreversible advanced liver disease. Routine biochemical laboratory tests and conventional imaging, including US, CT, and non-specific gadolinium-enhanced MRI, cannot distinguish between these entities with sufficient confidence [15,16]. Therefore, the differentiation between both entities is routinely made by histopathological analysis after liver biopsy. Liver biopsy is still considered the reference standard for the diagnosis of NASH [4]. There are several histological scoring systems to grade NASH, the most commonly used being the NAFLD activity score (NAS) [17]. The steatosis, activity, and fibrosis (SAF) score is a newly developed system for categorizing liver histology in NAFLD patients [18]. The lack of reliable, noninvasive methods for the diagnosis of disease severity and prediction of prognosis is one of the major drawbacks in the clinical management of patients with NAFLD [19].
Magnetic resonance elastography
Magnetic resonance elastography (MRE) assesses viscoelastic properties of soft tissues [20], offering a direct insight into the liver parenchymal stiffness. First step in the MRE technique is generating mechanical waves in the liver tissue. Then gradient-echo sequences are used to image wave motion, while a specialized software utilizing inversion algorithms transforms the images obtained into elastograms, revealing the tissues' stiffness quantitative map, expressed in kilopascals [21].
Studies comparing healthy volunteers and patients with CLD established that the shear viscoelastic parameters of the liver increased according to the stage of liver fibrosis, and a statistically significant difference between the patients with Metavir scores F0-F1 fibrosis versus F2-F3, F2-F3 versus F4, and F0-F1 versus F4 was found [20,22]. MRE also proved to be superior to biochemical testing using the aspartate aminotransferase-to-platelet ratio index [22]. Most importantly the authors could clearly separate the intermediate fibrosis stages, using MRE elasticity measurements.
Chen et al. [23] demonstrated that MRE-based assessments of liver stiffness in patients with NAFLD may have a high diagnostic accuracy (AUC 0.93) for discriminating NASH from simple steatosis, with a cutoff value of 2.74 kPa reaching 94% sensitivity and 73% specificity. However, a more recent study suggested that the performance of MRE for diagnosis of NASH versus simple steatosis was rather modest and did not provide a high level of accuracy. Using 2D-MRE (60 Hz), 3D-MRE (60 Hz), and 3D-MRE (40 Hz), the AUROC for diagnosing definite NASH was 0.754, 0.757, and 0.736, respectively [24].
In a prospective study, Cui et al. [25] proved that the diagnostic accuracy of 2D-MRE for the noninvasive evaluation of advanced fibrosis in patients with biopsy-proven NAFLD was significantly higher than that of five clinical prediction rules widely validated for the assessment of fibrosis in patients with NAFLD: the NAFLD fibrosis score, the BARD score, the AST-to-ALT ratio, FIB-4, and the AST-to-platelet ratio index. Using a cutoff value for 2D-MRE of 3.64 kPa, the AUROC of 2D-MRE for predicting advanced fibrosis was 0.957. This proved to be significantly higher than that of the FIB-4 score (AUROC of 0.861), the best of the analyzed clinical prediction rules. Therefore, 2D-MRE is a promising noninvasive imaging-based biomarker for the diagnosis of advanced fibrosis in NAFLD patients, used in addition to clinical prediction rules, especially when the latter have indeterminate values.
The cutoff values proposed by Loomba et al. [26] for the prediction of each fibrosis stage using 2D-SWE in patients with NAFLD were 3.02 kPa for early fibrosis, 3.58 kPa for significant fibrosis, 3.64 kPa for advanced fibrosis, and 4.67 kPa for the prediction of cirrhosis, with areas under the ROC curve of 0.838, 0.856, 0.924, and 0.894, respectively. The most promising results were obtained for discriminating advanced fibrosis (F3-F4) from fibrosis stages 0-2 with a sensitivity of 0.86 (95% confidence interval [CI]: 0.65-0.97) and a specificity of 0.91 (95% CI, 0.83-0.96).
Kim et al. showed, however, that the best cutoff for detecting advanced fibrosis value was 4.15 kPa (AUROC = 0.954, sensitivity = 85%, specificity = 92%). The performance of this technique for discriminating between other fibrosis stages was also satisfactory [27].
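Published cutoffs differ between cohorts and techniques, as the contrast between Loomba et al. and Kim et al. above shows. Purely as an illustration of how such thresholds are applied, the sketch below maps a stiffness value onto the categories and cutoffs quoted from Loomba et al. [26]; the function name and decision logic are ours, and this is not a clinical decision rule.

```python
# Cutoffs (kPa) quoted above from Loomba et al. [26], highest first.
CUTOFFS_KPA = [
    (4.67, "cirrhosis (F4)"),
    (3.64, "advanced fibrosis (F3-F4)"),
    (3.58, "significant fibrosis"),
    (3.02, "early fibrosis"),
]

def mre_fibrosis_category(stiffness_kpa: float) -> str:
    """Return the highest fibrosis category whose cutoff the measurement reaches."""
    for threshold, label in CUTOFFS_KPA:
        if stiffness_kpa >= threshold:
            return label
    return "no fibrosis suggested (F0)"

print(mre_fibrosis_category(3.9))  # -> advanced fibrosis (F3-F4)
```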
Nevertheless, this ability to stage pre-cirrhotic disease could make MRE very useful for the assessment of therapeutic success and disease progression [28].
More advanced versions of the imaging modality such as 3D-MRE allow the evaluation of a larger volume of liver parenchyma than 2D-MRE, being significantly more accurate for diagnosis of advanced fibrosis in NAFLD patients [24].
As it is not affected by the absence of an ultrasound window, MRE is more precise than ultrasonographic elastographic techniques. In patients ranging from obese to morbidly obese, MRE proved to have a better success rate than vibration-controlled transient elastography (95.8 versus 81.3%) and a higher interobserver agreement than liver biopsy (intraclass correlation coefficient, 0.95 versus 0.89) [29].
Acute inflammation, passive liver congestion caused by cardiac insufficiency, and obstructive cholestasis lead to a false increase of liver stiffness values [30]. Moreover, on a gradient-echo MRE sequence, certain conditions such as iron overload may lead to a lower MRI signal intensity, which does not allow shear wave recognition and thus decreases MRE diagnostic accuracy. Using spin-echo or echo-planar sequences, which are less susceptible to T2* effects, can alleviate this problem [30].
The technique has the advantage of not being influenced by the patient's weight or the presence of ascites. MRE remains expensive and not widely accessible in the everyday imaging routine of patients with NAFLD.
Magnetic resonance spectroscopy
MR spectroscopy (MRS) enables the noninvasive measurement of concentrations of different chemical components within tissues, which are displayed as a 1D spectrum with peaks consistent with the various chemicals detected. The major problem in obtaining MRS signals from abdominal organs is sensitivity to physiologic movement during the scan time usually exceeding several minutes [31]. Usually, the measurement is performed by manually placing a single voxel into the liver parenchyma far from the liver capsule, in an area free of large vessels or bile ducts [32].
While proton MRS is a very useful technique for the quantification of hepatic fat, its use for the estimation of hepatic fibrosis appears to be limited [33,34].
According to Abrigo et al. [34], phosphorus-MRS (31P-MRS) shows distinct biochemical changes in different NAFLD states and has fair diagnostic accuracy for NASH. However, this technique requires considerable operator skills (sequence programming, shimming, analysis of spectra) and access to special equipment (scanner, 31P coil) [28].
31P-MRS permits in vivo evaluation of energy metabolism and intracellular compartment division through different signals and provides metabolic information, which is useful when assessing fibrogenesis [28]. A significant correlation between phosphodiester concentration and the stage of fibrosis and a correlation between "anabolic charge" (phosphomonoester/[phosphomonoester + phosphodiester]) and the stage of fibrosis were found in a study comparing a group of patients with steatosis and no to moderate inflammation to a group of patients with severe fibrosis or cirrhosis [35].
Hydrogen 1 MRS (1H-MRS) has proven its efficiency in quantifying liver steatosis, by measuring lipid peaks, identified in the liver at 0.9, 1.3, 2.0, 2.2, and 5.3 parts per million. The dominant lipid peaks are caused by the resonance of methyl (-CH3) protons and methylene (-CH2) in the triglyceride molecule [36].
The absolute fat concentration can therefore be calculated using the following formula:

Triglyceride content = total lipid peak area / (total lipid peak area + water peak area)  (1)

As the steatosis grade increases, the size of the lipid peaks relative to the water peak increases as well [36].
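A minimal sketch of Eq. (1), assuming the lipid and water peak areas have already been extracted from the spectrum; the function name and example values are ours, and the T2 correction of the peak areas usually applied in practice is omitted.

```python
def mrs_fat_fraction(lipid_peak_area: float, water_peak_area: float) -> float:
    """Fat fraction (%) from single-voxel 1H-MRS peak areas, following Eq. (1):
    triglyceride content = lipid / (lipid + water)."""
    total = lipid_peak_area + water_peak_area
    if total == 0:
        raise ValueError("Both peak areas are zero.")
    return 100.0 * lipid_peak_area / total

# Example: a 9% fat fraction was the cutoff for moderate/severe steatosis in [38].
print(mrs_fat_fraction(lipid_peak_area=9.0, water_peak_area=91.0))  # -> 9.0
```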
The advantages of 1H-MRS are its very high sensitivity, a good correlation with histological analysis, and its independence from confounders such as fibrosis and iron or glycogen deposition. On the other hand, MRS currently has limited clinical availability and is prone to sampling error when single-voxel liver spectroscopy is performed [36].
Furthermore, authors assessed the diagnostic accuracy of a novel magnetic resonance protocol for liver tissue characterization, using T1 mapping, 1H spectroscopy, and T2* mapping, which quantified liver fibrosis, steatosis, and hemosiderosis, respectively [37]. According to their results, the novel scanning method provides high diagnostic accuracy for the assessment of all three histology variables.
In a recent study, Idilman et al. [38] analyzed the efficiency of MRI-proton density fat fraction (MRI-PDFF) and MRS-determined liver fat content in patients with NAFLD in comparison with liver biopsy-determined steatosis.
No superiority between the two imaging methods was observed. This study emphasized that the estimation of fat liver content using both MR imaging techniques was more accurate in the absence of liver fibrosis. MRS showed promising results for discriminating moderate/severe steatosis from none/mild steatosis with an AUROC of 0.857. A cutoff value of 9% provided a sensitivity of 92%, negative predictive value of 83.3%, specificity of 71%, and positive predictive value of 84.6%.
The accurate assessment of liver fat content in patients with NAFLD is essential in identifying those who are at greater risk of progressing into advanced fibrosis stages, being also of great value in evaluating the response to therapy. Liver steatosis also influences the successful rate of liver transplantation (LT); one of the necessary requirements in many centers is that the living donor liver must not exceed 5% steatosis, as greater values are associated with increased recipient liver dysfunction [38].
MRS proves to be a highly accurate noninvasive technique, which allows us to distinguish between individuals with simple steatosis and steatohepatitis who may benefit from early intervention and more aggressive therapy.
Diffusion-weighted MR imaging
Diffusion-weighted imaging (DWI) is a noninvasive method that allows measurement of the microscopic motion of water in tissue and generates representative apparent diffusion coefficient (ADC) values. DWI uses very fast scans with an additional series of (diffusion) gradients rapidly turned on and off [28].
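For orientation, ADC values are commonly derived voxel-wise from images acquired at two b values under the standard mono-exponential decay model S(b) = S(b0)·exp(−(b − b0)·ADC). This background model is not spelled out in the text above, and the sketch below, with hypothetical array inputs, is only illustrative.

```python
import numpy as np

def adc_map(s_b0, s_b, b0=0.0, b=800.0):
    """ADC map (mm^2/s) from two diffusion-weighted images under the standard
    mono-exponential model S(b) = S(b0) * exp(-(b - b0) * ADC).
    b values are given in s/mm^2."""
    s_b0 = np.asarray(s_b0, dtype=float)
    s_b = np.asarray(s_b, dtype=float)
    eps = 1e-6                               # avoid log(0) in background voxels
    return np.log((s_b0 + eps) / (s_b + eps)) / (b - b0)
```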
Within tissues with highly cellular component and therefore a narrowed extracellular space, the water molecule motion is impeded leading to restricted water diffusion in such tissues. In contrast, fluid-rich or necrotic structures are associated with a greater freedom of motion of water molecules, and the water diffusion in such tissues is considered to be "free." Therefore, on DWI sequences, the signal intensity reflects the tissue diffusion characteristics, which is influenced by cellularity and the integrity of cell membranes [39].
In a prospective study, Guiu et al. [40] demonstrated that both pure molecular diffusion and perfusion-related diffusion were significantly lower in the steatotic liver than in the normal liver. On a group of 89 NAFLD patients who underwent liver biopsy, Murphy et al. [41] also found a good correlation between histologic features of NAFLD liver and DWI-derived quantitative measures. Molecular diffusivity was significantly decreased with steatosis, while perfusion fraction decreased with fibrosis degree. Same associations were found between pediatric NAFLD histologic features and DWI parameters, with a high interobserver reproducibility [42]. As far as the apparent diffusion coefficient is concerned, studies show inconsistent results. One study in adults with NAFLD found that ADC decreased with steatosis, while others found no significant relationship [40,41].
Several studies have evaluated the use of DWI and ADC values for the diagnosis of hepatic fibrosis or cirrhosis in patients with diffuse hepatopathies. The complex assembly of collagen fibers, glycosaminoglycan, and proteoglycans that constitutes liver fibrosis may restrict the molecular diffusion measured by DWI [43].
DWI has been successfully applied to differentiate cirrhotic from healthy tissue. Girometti et al. reported a positive predictive value of 100%, a negative predictive value of 99.9%, and an overall accuracy of 96.4% in cirrhotic patients compared to healthy controls [44].
A recent meta-analysis suggests that DWI parameters can reliably stage hepatic fibrosis, with good diagnostic accuracy and areas under the SROC curve between 80 and 90%. A high b value for liver fibrosis imaging (between 800 and 1000 s/mm²) could significantly increase the diagnostic accuracy of diffusion imaging in identifying significant and severe fibrosis (≥F2). For diagnosing liver cirrhosis (F4), the use of 3T MRI equipment also proved to optimize DWI diagnostic accuracy compared with lower field strengths [45].
Lewin et al. found a significant relationship between the ADC values and necroinflammatory scores and suspected an influence of steatosis on apparent diffusion coefficient values [46]. In addition, the ADC of fibrotic livers was decreased as the fibrosis scores increased in some studies [46], but not in others [43]. However, differences in MR equipment and sequence parameters make it difficult to compare studies. Clearly, more research is needed to create a standard setup for DWI sequence acquisition to make studies comparable and to determine whether or not DWI can be a useful tool for the diagnosis and staging of diffuse liver diseases.
Furthermore, DWI imaging is susceptible to artifacts (e.g., blurring, ghosting, and distortions) and offers a limited image quality; therefore, DWI is currently used as complementary and not as a replacement to conventional sequences in the evaluation of NAFLD [47].
DWI does not require administration of intravenous contrast; consequently the technique might represent a reasonable option for patients with kidney failure, where gadolinium-based contrast substances represent a contraindication due to the increased risk of developing nephrogenic systemic fibrosis, while iodinated CT contrast might lead to an even greater impairment of renal function, being also contraindicated [47].
Susceptibility-weighted MR imaging
It is known that, among other factors, increased iron content of the liver and secondary changes manifesting in progressive collagen deposition are important background alterations in the development of liver fibrosis [48]. Susceptibility-weighted imaging (SWI) is well known as a three-dimensional (3D) gradient-echo (GRE) technique utilizing phase information to increase sensitivity for detecting susceptibility changes that result from, for example, iron, hemoglobin, and calcification. Initially used for neuroimaging [49,50], recent technical advances allow for possible abdominal applications.
SWI is based on T2*-weighted GRE sequences and exploits both magnitude and phase information. Traditionally, SWI sequences are high-resolution 3D sequences. Employing 3D sequences for abdominal imaging is not feasible because of long acquisition times and the large B0 variations encountered in this body area. With the advent of a multi-breath-hold GRE-sequence-based SWI, a two-dimensional (2D) sequence was developed for abdominal imaging [51]. SWI utilizes the differences in the magnetic susceptibilities of different tissues and produces a contrast superior to conventional T1- and T2-weighted MR imaging in the detection of structures that cause susceptibility artifacts [52].
The superiority of SWI over the T2*-weighted sequence has been shown, both in the detection and conspicuity of increased liver iron deposition and siderotic nodules [51] and in the detection of intratumoral hemorrhage in hepatocellular carcinoma (HCC) [53].
The liver-to-muscle signal intensity ratio on SWI proved to be a reliable measurement for grading liver fibrosis in patients with diffuse liver disease, with a high diagnostic accuracy for the differentiation of moderate to advanced (F2 and F3) liver fibrosis from liver cirrhosis (F4) (AUROC = 0.93). Multiple regression analysis showed that liver fibrosis independently influenced SWI measurements, being the main contributor to the decreasing liver-to-muscle SI ratio, followed by iron overload and necroinflammatory activity, when compared with histopathologic findings [52].
The relationship between iron load and fibrogenesis has multiple considerations. The increased iron content in the liver, either diffusely distributed or in the form of numerous siderotic nodules, does not represent the entire transformation of liver fibrosis. In the process of fibrogenesis, hepatic stellate cells are also activated by other factors such as inflammation, genetic determinants, and the immune system [52].
Using a multiparametric approach, a recent study proved that liver SWI signal intensity enhanced the diagnostic performance in diagnosing and staging liver fibrosis, when used together with the apparent diffusion coefficient of the liver parenchyma on DWI and the degree of liver enhancement on the hepatobiliary phase of dynamic contrast-enhanced MRI. The three MRI techniques used together were able to assess the severity of liver fibrosis with an AUC ranging from 0.90 to 0.95, and the best performance was obtained in predicting moderate fibrosis (F2 or greater), with a sensitivity of 86% and a specificity of 94%. This reflects the clinical significance of this diagnostic tool, as F2 or greater is the stage in which therapeutic action should be taken [54].
Proton density fat fraction
Proton density fat fraction (PDFF) measurement is a multi-echo, chemical shift-encoded MRI method for quantitatively assessing hepatic steatosis, available as an option from several manufacturers of MRI scanners. PDFF is defined as the ratio of the density of mobile protons from triglycerides to the total density of protons from mobile triglycerides and mobile water. It is expressed as an absolute percentage (%) and ranges from 0 to 100% [7].
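In pixel terms the definition amounts to fat/(fat + water) computed on co-registered fat and water signal maps. The sketch below illustrates that ratio only; the vendor PDFF reconstructions referred to in the text additionally correct for T1 bias, T2* decay and the multi-peak fat spectrum, which is not reproduced here, and the array inputs are hypothetical.

```python
import numpy as np

def pdff_map(fat, water):
    """Simplified proton density fat fraction map (%) from co-registered fat
    and water signal maps: PDFF = fat / (fat + water)."""
    fat = np.asarray(fat, dtype=float)
    water = np.asarray(water, dtype=float)
    denom = fat + water
    with np.errstate(invalid="ignore", divide="ignore"):
        pdff = np.where(denom > 0, 100.0 * fat / denom, 0.0)
    return pdff
```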
This sequence allows the measurement of fat fraction in any segment of the liver, generating a fat mapping of the entire hepatic parenchyma. This is of great value, as several studies proved the heterogeneous intrahepatic fat distribution [55].
The advantages of PDFF calculation are that it can be obtained entirely during a short breath-hold (less than 25 s) and that it minimizes errors from confounders of fat quantification encountered with conventional MRI methods (Dixon and fat saturation), such as T1 bias, T2* decay, or the spectral complexity of lipid [38].
Emerging data support the use of MRI-PDFF in evaluating the response to treatment in the setting of early-phase clinical trials in NASH, using drugs with an anti-steatotic mechanism of action [7].
In a recent study, the mean fat fraction was significantly lower in the left lobe than it was in the right, while liver segments 4 and 5 proved to be the most adequate to estimate the entire hepatic lipid content [55].
Regarding technical parameters, using a six-echo map proved to have a higher diagnostic accuracy than three, four, or five echoes [56].
Permutt et al. showed a good correlation between MRI-PDFF and histology-determined steatosis grade in adults with NAFLD. They observed an increasing average value of MRI-determined PDFF with increasing steatosis grade (8.9% for grade 1, 16.3% for grade 2, and 25% for grade 3 steatosis) [57]. PDFF was effective in differentiating moderate or severe hepatic steatosis from mild or no hepatic steatosis, with an area under the curve of 0.95, 93% sensitivity, and 85% specificity. However, the correlation between biopsy- and PDFF-determined steatosis was less pronounced when fibrosis was present (r = 0.60) than when fibrosis was absent [58].
When comparing the efficiency of MRI-PDFF to magnetic resonance spectroscopy, both techniques proved to strongly correlate with the histology-determined steatosis, with no superiority between them [38]. But the PDFF maps have the advantage of being automatically reconstructed without user input or post-processing, unlike MR spectroscopy-based methods.
Therefore, MR-PDFF represents another novel, noninvasive, and practical imaging tool in assessing patients with NAFLD, as the entire liver can be covered in assessment with a great accuracy in quantifying total hepatic fat amount [38,55].
Contrast-enhanced MRI
In the liver, contrast agents are categorized into non-specific agents that distribute into the vascular and extravascular extracellular spaces (such as the linear gadopentetate dimeglumine (Gd-DTPA) and the macrocyclic gadobutrol (Gd-DO3A-butrol) and gadoterate dimeglumine (Gd-DOTA)) and liver-specific agents taken up by liver cells. These liver-specific agents are either taken up by Kupffer cells (such as the super paramagnetic iron oxide particles ferumoxides and ferucarbotran) or by hepatocytes (such as gadolinium ethoxybenzyl dimeglumine or gadoxetic acid (Gd-EOB-DTPA) and gadobenate dimeglumine (Gd-BOPTA)) [8].
Hepato-specific contrast-enhanced MRI
Gadoxetic acid (Gd-EOB-DTPA, Eovist ® in the USA, Primovist ® in Europe) is a liver-specific MRI contrast agent which provides both morphological and functional information and can be used as an imaging biomarker in the diagnostic workup of liver fibrosis [8].
After intravenous injection, the gadoxetic acid (GA) distributes into the vascular and extravascular spaces during the arterial, portal venous, and late dynamic phases and progressively into the hepatocytes and bile ducts during the hepatobiliary phase. GA enhancement depends mainly on liver perfusion, vascular permeability, extracellular diffusion, and hepatocyte transporter expression [8,59]. All these functions are disturbed in diffuse liver diseases, and there may be a decrease in the balance between uptake and excretion of the contrast media by the impaired hepatocytes.
The transport of GA in the hepatocytes is mediated by two different transport systems located at the sinusoidal and canalicular membranes of the cell [60]. The contrast agent enters the hepatocytes through two organic anion-transporting polypeptide transporters (OATP1B1 and OATP1B3) [61], and it is excreted into the bile via the multidrug resistance protein 2 (MRP2) [62].
In patients with liver cirrhosis, the upregulation of MRP2 is associated with significant signal loss on gadoxetic acid-enhanced MR images [63]. Organic acid efflux from hepatocytes may also occur through the sinusoidal membrane because the transport through OATP is bidirectional and because the sinusoidal membrane also contains multidrug resistance proteins (MRP3 and MRP4), as it is illustrated in Figure 1. These efflux pumps are normally expressed at low levels in normal hepatocytes but can be upregulated in pathologic conditions, such as cholestasis. GA is not metabolized within hepatocytes [64].
With GA, approximately 50% of the administered dose in the normal human liver is transported through the hepatocytes and excreted into the bile, and the percentage of the contrast agent that is not cleared by the hepatobiliary system is excreted by glomerular filtration in the kidneys [65].
Hepatobiliary MR contrast agents can be used to characterize liver functional properties, and the relative enhancement quantification is a reflection of hepatocyte malfunction as a result of liver fibrosis accumulation and increased necroinflammatory activity [66].
Several MR-derived parameters can be used to estimate the amount of GA uptake, such as the relative liver enhancement, hepatic uptake index, and T1 mapping during the hepatobiliary phase on static images, or the hepatic extraction fraction and liver blood flow using dynamic assessment [67]. Importantly, there is currently no clear consensus as to which of these MR-derived parameters is the most suitable for assessing liver dysfunction. The relative liver enhancement (RLE), the most commonly used parameter, is calculated by subtracting the signal intensity (SI) on the unenhanced images from the SI in the hepatobiliary phase (HBP) and dividing the difference by the SI of the unenhanced images, using the following formula [67]:

Relative enhancement (RE) = (SI 20 minutes post-contrast − SI pre-contrast) / SI pre-contrast  (2)

In order to avoid bias due to liver parenchyma inhomogeneity, several regions of interest (ROI) are placed in different segments of both liver lobes.
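Computationally, Eq. (2) is a simple ratio; the sketch below averages it over several paired ROIs, as recommended above to reduce bias from parenchymal inhomogeneity. The function name and the example ROI values are ours and purely illustrative.

```python
import numpy as np

def relative_liver_enhancement(si_pre_rois, si_hbp_rois):
    """Relative enhancement per Eq. (2), averaged over paired ROIs placed in
    different segments of both liver lobes (same ROI positions pre-contrast
    and in the hepatobiliary phase, ~20 minutes post-contrast)."""
    pre = np.asarray(si_pre_rois, dtype=float)
    hbp = np.asarray(si_hbp_rois, dtype=float)
    return float(np.mean((hbp - pre) / pre))

# Example with hypothetical ROI signal intensities:
print(relative_liver_enhancement([310, 295, 305], [610, 580, 600]))  # ~0.97
```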
Indeed, reports on animal models also proved that gadoxetic acid-enhanced MRI could differentiate simple steatosis from NASH by comparing the signal profile or the time of maximum relative enhancement [68]. Furthermore, several recent studies have shown the ability of gadoxetic acid-enhanced MRI to evaluate patients with CLD, particularly for the staging of hepatic fibrosis, and to obtain global and territorial liver function information [69]. In a retrospective, proof-of-concept study, the mean relative enhancement of the whole liver after GA administration was significantly lower in patients with NASH (0.82 ± 0.22), compared to those with simple steatosis (1.39 ± 0.52) [70]. Therefore, the relative enhancement measurements could potentially be used to differentiate between simple steatosis and NASH [AUC = 0.85 (95% CI 0.75-0.91)], providing a high sensitivity of 97% but a low specificity of 63% [70].
Histology parameters used to stage NASH, such as lobular inflammation, hepatocellular ballooning, and the degree of liver fibrosis, proved to be independent factors that negatively correlated with RLE. On the other hand, fatty liver infiltration did not correlate with the relative enhancement. Due to its low specificity, GA-MRI cannot currently be used as the only criterion by which to differentiate simple steatosis and NASH. However, GA-MRI can be used as a valuable screening tool for identifying which NAFLD patients need to undergo liver biopsy and which do not [70].
With regard to liver fibrosis staging, the contrast enhancement index (a method that uses the paraspinal muscles' signal intensity as a reference for the liver) proved to be an efficient biomarker, with higher diagnostic accuracy than other enhancement parameters or hematologic markers [71]. RLE is best suited for detecting moderate to advanced fibrosis, but the interpretation of results should consider laboratory parameters, with special attention to liver function. Elevated levels of aspartate aminotransferase, gamma-glutamyl transpeptidase, and alkaline phosphatase were independent predictors of false-negative results [69].
The main advantages and disadvantages of each magnetic resonance imaging technique currently used in the noninvasive assessment of NAFLD are briefly synthetized in Table 1.
Conclusion
MRI is currently increasingly used in the assessment of NAFLD. Although all methods have their own advantages and disadvantages, the noninvasive diagnosis of NAFLD using innovative applications of MRI-based methods presents a promising future. Liver fibrosis can be accurately assessed using MRI methods that do not require contrast media administration, such as MRE, diffusion-weighted MRI, and susceptibility-weighted MRI, while quantitative detection of liver steatosis is better performed using MRS or chemical shift-based MRI techniques such as proton density fat fraction. Moreover, GA-enhanced MRI provides both morphological and functional information and can be used as an imaging biomarker in the diagnostic workup of liver fibrosis and may help to distinguish between the two subgroups of NAFLD, simple steatosis and nonalcoholic steatohepatitis.
Conflict of interest
There is none to declare.
Structure of neutrino mass matrix and CP violation
We reconstruct the neutrino mass matrix in the flavor basis, using all available experimental data on neutrino oscillations. Majorana nature of neutrinos, normal mass hierarchy (ordering) and validity of the LMA MSW solution of the solar neutrino problem are assumed. We study dependences of the mass matrix elements, m_{alpha beta}, on the CP violating Dirac, delta, and Majorana, rho, sigma, phases, for different values of the mixing angle theta_{13} and of the absolute mass scale, m_1. The contours of constant mass in the rho-sigma plane have been constructed for all m_{alpha beta}. These rho-sigma plots allow one to systematically scan all possible structures of the mass matrix. We identify regions of parameters in which the matrix has (i) a structure with the dominant mu tau-block, (ii) various hierarchical structures, (iii) flavor alignment, (iv) structures with special ordering or equalities of elements, (v) the democratic form. In certain cases the matrix can be parameterized by powers of a unique expansion (ordering) parameter lambda approximately 0.2 - 0.3 (lambda_{ord} approximately 0.6 - 0.7). Perspectives to further restrict the structure of the mass matrix in future experiments, in particular in the beta beta_{0 nu}-decay searches, are discussed.
Introduction
Significant amount of information about neutrino masses and mixing has already been obtained from experiments on the atmospheric [1] and solar neutrinos [2,3], from laboratory experiments, in particular the reactor experiments [4] and neutrinoless double beta decay searches [5], from astrophysics and cosmology. New substantial results are expected soon.
What are implications of these results for the fundamental theory, for the mechanism of neutrino mass generation, origin of large lepton mixing, relation between the quark and the lepton masses? The neutrino masses and mixing appear from diagonalization of the neutrino mass matrix. In a sense, the mass matrix unifies the information which is contained in the masses and mixing angles (which appear as independent physical observables). So, the questions we are asking should be considered in terms of properties of the mass matrix.
It is expected that the structure of the mass matrix can be explained by certain (broken) symmetry realized in certain basis at some high mass scale [6]. We will call this basis the symmetry basis. Thus, to approach the fundamental theory, one should find the mass matrix in the symmetry basis and at the corresponding symmetry scale. Both abelian (e.g., [7,8]) and non abelian (e.g., [9]) symmetries, broken (spontaneously) at the various symmetry scales, have been widely considered (see also the reviews [10,11]). Also, the possibilities have been studied to identify the flavor symmetry scale with other known scales in the theory like the Grand Unification scale or the string scale.
The first step to the fundamental theory is the reconstruction of the matrix in the flavor basis using all available experimental data. The flavor basis, formed by ν e , ν µ , ν τ , is determined as the basis in which the mass matrix of charge leptons is diagonal. However, the symmetry basis may not coincide with the flavor basis, while the structure of the mass matrix depends on the basis substantially. Furthermore, using the existing experimental information we can reconstruct the mass matrix at the low (electroweak) scale. The scale at which possible flavor symmetry is realized (broken) is unknown. So, the bottom-up approach would consist of determination of the structure of the mass matrix at low scale, selection of the appropriate symmetry basis and selection of the correct symmetry scale.
This picture should be taken with some caution: a priori it is not clear whether a certain symmetry is behind the properties of the mass matrix. This should be established by studying possible regularities of the mass matrix.
There is a number of attempts to reconstruct neutrino mass matrix in the flavor basis using the available experimental results [12,13,14]. Most of the studies have been performed in the context of three Majorana neutrinos and the data on atmospheric and solar neutrinos as well as from the CHOOZ reactor experiment [1,2,3,4] have been used as an input. Clearly this information is not enough to reconstruct the mass matrix completely. Apart from the oscillation parameters (mass squared differences and mixing angles), the mass matrix depends on non-oscillation parameters: the absolute mass scale, m 1 , and the CP violating Majorana phases. Furthermore, even not all the oscillation parameters are known. In particular, there is only an upper bound on the mixing angle θ 13 and there is no information about the value of the Dirac phase δ. Also the type of mass hierarchy (ordering) of the states is unknown.
The studies performed so far were concentrated, mainly, on identification of the dominant structures of the mass matrix and possible zeros of certain matrix elements. It was realized that in the case of spectrum with normal mass hierarchy, m 1 ≪ m 2 ≪ m 3 , the mass matrix has structure with the dominant µτ -block, formed by M µµ , M µτ , M τ τ elements, and small elements of the e−row (M ee , M eµ , M eτ ) [13,15]. In the case of inverted mass hierarchy, the dominant structure can be formed by elements of the e−row: M eµ and M eτ [8,12]. These structures may be related to an underlying L e − L µ − L τ symmetry.
In the case of degenerate mass spectrum new dominant structures appear depending on the CP-parities of the mass eigenstates (see, e.g., [8,12,16]). In particular, it has been found that the diagonal elements, being equal to each other, can form the dominant structure for equal CP-parities of all three neutrinos. Another interesting possibility is the dominant structure formed by the ee−, µτ − and τ µ−elements (moreover, |M ee | ≈ |M µτ |), which could imply, e.g., SO(3) flavor symmetry or U(1) symmetry, with charge prescription (0, 1, −1) and an additional permutation symmetry. Recently, the possibility that some matrix elements equal exactly zero has been considered [17].
It was shown that experimental data can be explained in models with universal Yukawa couplings [18], which lead to "democratic" mass matrices with all mass matrix elements having the same modulus but different phases.
Completely different approach is based on "Anarchy" of the mass matrix [19]. It has been proposed that the elements of the mass matrix appear as random numbers from certain interval and there is no special structure of the mass matrix dictated by certain symmetry. It was estimated how frequently neutrino oscillation data can be reproduced in this way. Random values of the complex phases of the mass matrix elements M αβ have also been considered [13].
It was realized that the structure of the mass matrix depends strongly on the unknown CP violating phases, especially in the case of degenerate spectrum. In general, in the system of three Majorana neutrinos there are three CP violating phases: the Dirac phase, δ, the unique phase in the mixing matrix relevant for oscillations, and two Majorana phases, which are relative phases of the three mass eigenstates.
In most of previous studies, the CP violating phases were neglected and CP-parities have been discussed mainly (see, however, [17] and also [20], where the role of phases in the generation of large solar mixing is considered).
In this paper we perform a systematic and comprehensive study of dependence of the neutrino mass matrix structure on the CP violating phases. We concentrate on the first step in the "bottom -up" approach: reconstruction of the mass matrix in the flavor basis. A short discussion of basis dependence (which deserves a separate study) will be presented in sect. 6.7. We suggest a way to analyze all possible structures of the mass matrix which are allowed by experimental data.
The paper is organized as follows. In section 2 we describe our approach and summarize physical inputs from neutrino oscillation experiments. In sections 3 and 4, we study the dependence of the mass matrix elements on CP violating phases. We consider spectra with mass hierarchy (section 3.1), partial degeneracy (section 4.2) and complete degeneracy (section 4.3). In section 5 we introduce and describe the (ρ − σ) plots. In section 6 we consider implications of CP violating phases for the structure of the mass matrix. In section 7 we discuss our result and draw conclusions.
Reconstructing ν mass matrix
The reconstruction of the mass matrix in the flavor basis is the first step of the bottom-up approach. The next step -selection of the symmetry basis -requires additional assumptions and therefore is more ambiguous. We will shortly discuss this issue in section 6.7. However, already in the flavor basis one can -search for regularities in the mass matrix, -study correlations of different matrix elements, -study correlations between the neutrino mass matrix and charged lepton masses, that is study of possible flavor alignment.
The flavor basis is convenient for searches of symmetries associated with the lepton numbers L e , L µ , L τ . Last but not least, it is not excluded that flavor basis is not much different from the symmetry basis.
Mass matrix in flavor basis. Parameterization
The neutrino mass matrix in the flavor basis, M, can be written as

M = U M^{diag} U^T,   (1)

where

M^{diag} = diag( m_1 e^{2iρ}, m_2, m_3 e^{2iσ} ).   (2)

Here m_i are the moduli of the neutrino mass eigenvalues and ρ and σ are the two CP violating Majorana phases, varying between 0 and π. The neutrino mixing matrix U is defined by ν_α = Σ_i U_{αi} ν_i, where ν_α are the flavor neutrino states and ν_i are the mass eigenstates. We use the standard parameterization for U:

U = ( c_12 c_13                             s_12 c_13                            s_13 e^{-iδ}
      −s_12 c_23 − c_12 s_23 s_13 e^{iδ}    c_12 c_23 − s_12 s_23 s_13 e^{iδ}    s_23 c_13
      s_12 s_23 − c_12 c_23 s_13 e^{iδ}     −c_12 s_23 − s_12 c_23 s_13 e^{iδ}   c_23 c_13 ),   (3)

where c_ij ≡ cos θ_ij, s_ij ≡ sin θ_ij and δ is the CP violating Dirac phase. The mixing angles vary between 0 and π/2 and δ varies between 0 and 2π. The matrix M is symmetric and is therefore defined by six elements. According to Eqs. (1, 2), they can be written explicitly as M_{αβ} = Σ_i U_{αi} U_{βi} (M^{diag})_{ii}; the expression for M_{αβ} in terms of m_i, θ_ij, δ, ρ, σ is given in the appendix.
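As a cross-check of these definitions, the following numerical sketch builds M from Eqs. (1)-(3) as reconstructed above. The function names (pmns, flavor_mass_matrix) are ours; masses, angles and phases are supplied in the same conventions as the text.

```python
import numpy as np

def pmns(theta12, theta23, theta13, delta):
    """Standard parameterization of the mixing matrix, Eq. (3)."""
    s12, c12 = np.sin(theta12), np.cos(theta12)
    s23, c23 = np.sin(theta23), np.cos(theta23)
    s13, c13 = np.sin(theta13), np.cos(theta13)
    em, ep = np.exp(-1j * delta), np.exp(1j * delta)
    return np.array([
        [c12 * c13,                         s12 * c13,                         s13 * em],
        [-s12 * c23 - c12 * s23 * s13 * ep,  c12 * c23 - s12 * s23 * s13 * ep,  s23 * c13],
        [s12 * s23 - c12 * c23 * s13 * ep,  -c12 * s23 - s12 * c23 * s13 * ep,  c23 * c13],
    ])

def flavor_mass_matrix(m1, m2, m3, rho, sigma, theta12, theta23, theta13, delta):
    """M = U M^diag U^T with the Majorana phases attached to m1 and m3 (Eqs. 1-2)."""
    U = pmns(theta12, theta23, theta13, delta)
    M_diag = np.diag([m1 * np.exp(2j * rho), m2, m3 * np.exp(2j * sigma)])
    return U @ M_diag @ U.T
```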
The mass matrix elements, as functions of CP violating phases, depend on the parameterization. Our choice of parameterization has the following motivations.
In contrast to previous works, e.g. [16,35], we ascribe the Majorana phases to the first and the third mass eigenstates (2), so that in the limit of strong mass hierarchy the dependence on the phase ρ disappears. Furthermore, in the limit of degeneracy the interplay of the two Majorana phases (due to mixing) is weaker if σ is attached to the third mass eigenstate.
We use the standard parameterization (Eq.(3)) of the mixing matrix for two reasons: (1) it is the most often used parameterization, in particular in studies of the CP violation in neutrino oscillations; (2) the Dirac phase is associated to s 13 . So, the influence of δ on structure of the matrix is suppressed, and moreover, with improvements of bound on s 13 the effect of the phase will decrease.
Notice that in our parameterization the m_ee element depends on all three phases. In particular, the phase δ enters in the combination σ − δ. The dependence of m_ee on σ − δ is very weak, being suppressed by s_13². Another parameterization of the mixing matrix has been used, e.g., in [16], in which m_ee does not depend on δ. We find that our parameterization is more convenient when all elements of the mass matrix (and not only m_ee) are analyzed. Moreover, the transition to our parameterization in m_ee is reduced to the simple shift σ → σ − δ. In what follows we will mainly analyze the absolute values of the mass matrix elements, m_αβ ≡ |M_αβ|, since the absolute values may give more straightforward information on a possible underlying symmetry. The phases of the mass matrix elements, φ_αβ, are also important for theory. The phases are known functions of 9 physical parameters: φ_αβ = φ_αβ(m_i, θ_ij, ρ, σ, δ). They can be found if these 9 parameters are measured. We comment on the phases and include a separate discussion in section 5.4. Notice that, in the flavor basis, the m_αβ are physical quantities, that is, they can be directly measured in physical processes. In particular, the rate of neutrinoless 2β-decay is proportional to m_ee². Other entries are in principle measurable in processes with ∆L = 2, like the decay K⁺ → π⁻µ⁺µ⁺ or the scattering e⁻p → ν_e l^± l′^± X (for a review see [21]). The rates of these processes are proportional to m_αβ², where α and β are the flavors of the two leptons produced in the final state or of the leptons in the initial and final states.
The present bounds on the elements m_αβ other than m_ee are many orders of magnitude weaker than indirect limits. For instance, the bound on m_eµ² is 16 orders of magnitude above the limit obtained from oscillations [22]. Clearly, the possibility of improving the direct limits deserves further study.
We introduce the dimensionless quantities where m 3 is the largest mass eigenvalue.
Conservation of the sum of masses squared
According to Eq. (1), the mixing matrix distributes the masses from M^{diag} to the elements of the flavor mass matrix M. The following sum rule is useful for the analysis of the flavor mass matrix:

Σ_{α,β} |M_{αβ}|² = Σ_i m_i².   (6)

That is, the sum of the moduli squared of all the elements of the mass matrix is invariant under a change of basis (rotation). The equality (6) can be checked by inserting Eq. (1) and splitting the double sum over mass eigenstates into diagonal (i = j) and off-diagonal (i ≠ j) parts. The first term is immediately reduced to Σ_i |M_i|² = Σ_i m_i², whereas the second term is zero due to orthogonality: Σ_α U*_{αi} U_{αj} = 0 for i ≠ j.
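The sum rule is easy to verify numerically with the flavor_mass_matrix sketch given earlier; the parameter values below are illustrative only.

```python
# Numerical check of the sum rule (6): the sum over all nine entries of
# |M_ab|^2 equals m1^2 + m2^2 + m3^2, independently of angles and phases.
import numpy as np

m1, m2, m3 = 0.002, 0.009, 0.05          # eV, illustrative values only
M = flavor_mass_matrix(m1, m2, m3, rho=0.3, sigma=1.1,
                       theta12=0.58, theta23=np.pi / 4, theta13=0.1, delta=1.5)
lhs = np.sum(np.abs(M) ** 2)
rhs = m1**2 + m2**2 + m3**2
print(np.isclose(lhs, rhs))              # True
```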
Experimental input
In what follows we will find m_αβ = m_αβ(m_i, θ_ij, δ, ρ, σ), using all available neutrino data. We will restrict our analysis to the case of normal mass hierarchy (ordering), m_1 ≤ m_2 ≤ m_3. The inverted hierarchy (ordering) is disfavored by supernova SN1987A data [23] (see, however, [24]). Normal hierarchy implies that m_2 and m_3 are fixed by m_1 together with the solar and atmospheric mass squared differences, m_2 = (m_1² + ∆m²_sol)^{1/2} and m_3 ≈ (m_1² + ∆m²_atm)^{1/2}. We will also restrict ourselves to the LMA MSW solution of the solar neutrino problem, which gives the best global fit of the solar neutrino data [25]. This solution looks especially plausible after the SNO data [3] and it can be tested in the already operating KamLAND experiment [26]. We accept the interpretation of the atmospheric neutrino results [1] in terms of ν_µ → ν_τ oscillations as the dominant mode.
The following experimental information is used.
In our discussion we will take into account the upper limit on the Majorana neutrino mass, m_ee, from neutrinoless 2β decay [5,28], which relaxes to m_ee ≲ 1 eV if uncertainties in the nuclear matrix elements are taken into account. We think it is premature to include in the analysis the recent result on 2β_0ν-decay [29], which has a controversial interpretation (see discussion in [16,30]). We also consider the direct kinematic bound on the mass of the electron neutrino [27]. The unknown CP violating phases δ, ρ, σ, as well as the absolute mass scale, m_1, and the angle θ_13, are treated as free parameters.
Let us emphasize that the experimental input (7)-(10) does not depend on the Majorana phases, ρ and σ, or on m_1, because only the differences of the m_i² enter the oscillation probabilities. The input does not depend on the Dirac phase, δ, either. This can be explicitly seen from the parameterization (3): at the level of present experimental accuracy, the solar and atmospheric neutrino results are determined by U_e1, U_e2 and U_µ3, U_τ3, respectively. CHOOZ gives the bound on |U_e3|. All these quantities do not depend on δ.
µτ -block and e-row elements
In view of the large 2-3 mixing, it is convenient to split the six independent elements of the mass matrix into two groups:
- elements of the µτ-block: m_µµ, m_µτ, m_ττ, with zero electron lepton number, L_e = 0;
- elements of the e-row: m_eµ, m_eτ with L_e = 1, and m_ee with L_e = 2.
As we will see later, these groups of elements have different dependences on the CP violating phases. Moreover, such a split can be motivated by phenomenology.
Small parameters, mass ratios and limits
There are several small parameters in the problem: 1) The ratio of mass squared differences, r²_∆ ≡ ∆m²_sol/∆m²_atm; its central value corresponds to the best fit values of the mass squared differences (see (7),(9)), and its allowed interval is obtained by varying ∆m²_atm and ∆m²_sol in the ranges given in (9) and (8).
Let us introduce dimensionless parameters, the ratios of mass eigenvalues: k ≡ m_1/m_2 and r ≡ m_2/m_3. In the case of strong mass hierarchy, k ≈ 0 and r = r_∆. Clearly, we may have k ∼ 1 and r ≪ 1. If r ∼ 1, then k ≈ 1.
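As a numerical illustration of how k and r track the absolute mass scale (the ∆m² values below are representative numbers of the kind quoted in the text, not a fit):

```python
import numpy as np

def mass_ratios(m1, dm2_sol=5e-5, dm2_atm=2.5e-3):
    """Return (k, r) = (m1/m2, m2/m3) for a lightest mass m1 in eV."""
    m2 = np.sqrt(m1**2 + dm2_sol)
    m3 = np.sqrt(m1**2 + dm2_atm)
    return m1 / m2, m2 / m3

for m1 in (0.0, 0.002, 0.02, 0.2):   # hierarchy ... partial ... full degeneracy
    k, r = mass_ratios(m1)
    print(f"m1 = {m1:5.3f} eV   k = {k:.2f}   r = {r:.2f}")
```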
Let us consider the mass matrix in various limits (here and below ξ ≡ cos 2θ_23 parameterizes the deviation of the 2-3 mixing from its maximal value). 1) r = s_13 = ξ = 0. In this case we arrive at a matrix with zero e-row elements and µτ-block elements equal to m_3/2. Obviously no dependence on CP-phases appears.
2) r = s_13 = 0, ξ ≠ 0 (see (A.4)-(A.6)). We get m̃_µµ = s²_23 = (1 − ξ)/2, m̃_ττ = c²_23 = (1 + ξ)/2, m̃_µτ = s_23 c_23 = √(1 − ξ²)/2 (16). The element m_µτ is almost unchanged with respect to maximal θ_23, while m_µµ and m_ττ vary with ξ significantly and in opposite directions. The determinant of the µτ-block is zero. Again, there is no dependence on CP violating phases.
3) s_13 = ξ = m_1 = 0, but r ≠ 0. Now a dependence on the Majorana phase σ appears in the µτ-block, but there is no phase dependence of the e-row elements. The influence of the CP violating phases on the matrix structure is very weak in the limit of strong mass hierarchy and small s_13. Indeed, for r → 0 the effect of the phase σ disappears; the dependence of the elements on the Dirac phase is associated with s_13, so the effect of δ decreases with s_13; and the dependence on the phase ρ is associated with the mass m_1, so it is negligible when r ≪ 1.
Analytic expressions and phase diagrams
Exact analytic expressions for the mass matrix elements in terms of mass eigenvalues, m i , mixing angles and phases are given in the Appendix. We present the matrix elements as sums of three contributions corresponding to three different mass eigenvalues in (A.1 -A.6) and as series in powers of s 13 in (A.7). Representation of m αβ as the sums of three terms with different phases is given in (A.9, A.11, A.13). We will use various approximate expressions for m αβ which can be obtained from Eqs.(A.1 -A. 13).
For small s_13, one can draw a simple graphic representation of the mass matrix elements in the complex plane (Fig.1). Neglecting terms of order s_13 in the brackets of Eqs.(A.2 - A.6) or, equivalently, the ǫ terms in Eqs.(A.9, A.11), we find that each mass m_αβ turns out to be the sum of three terms with phase factors which depend on certain combinations of the phases δ, ρ, σ. So, in the complex plane the masses m_αβ can be represented as sums of three vectors (corresponding to the three terms). The lengths of these vectors are determined by the mass eigenvalues (the ratios k and r) and the mixing angles. The angles between the vectors are given by combinations of the phases δ, ρ, σ.
We will call this graphic representation, used for m_ee in [31] and mentioned in [32,33], the phase diagram. The phase diagrams allow one to easily find the minimal and maximal values, as well as the phases, of the matrix elements, and possible correlations between them. In Fig.1 we show phase diagrams for the case of partial degeneracy: k ≈ 1, r ≲ 1.
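A minimal numerical sketch of this construction for m_ee is given below: the element is assembled from three complex vectors whose lengths are set by the mass ratios and mixing angles and whose relative angles are set by the phases. The decomposition (and the sign conventions of the phase factors) follows the approximate expressions used in this section; the parameter values are illustrative.

```python
import numpy as np

def m_ee_vectors(k, r, t12, t13, rho, sigma, delta):
    """Three contributions to m_ee in units of m3 (c13 ~ 1, O(s13) mixing
    corrections neglected); the phase-factor signs are a convention choice."""
    c12, s12, s13 = np.cos(t12), np.sin(t12), np.sin(t13)
    v1 = k * r * c12**2 * np.exp(2j * rho)        # nu_1 contribution
    v2 = r * s12**2                               # nu_2 contribution (real)
    v3 = s13**2 * np.exp(2j * (delta - sigma))    # nu_3 contribution
    return v1, v2, v3

v1, v2, v3 = m_ee_vectors(k=1.0, r=0.8, t12=0.58, t13=0.1,
                          rho=0.8, sigma=1.1, delta=0.0)
print("lengths:", abs(v1), abs(v2), abs(v3), " |m_ee|/m3 =", abs(v1 + v2 + v3))
```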
The mass matrix elements are periodic functions of the CP violating phases. In the next section we will analyze these dependences by quantifying for each phase the amplitude of variations, the period and the average value of the element. The latter we define as the average between maximal and minimal possible value of the element. We will also consider the relative phases of variations of different elements and correlations between them.
CP phases in the case of hierarchical mass spectrum
In the limit of strong mass hierarchy, when m_1 ≈ 0, m²_2 ≈ ∆m²_sol and m²_3 ≈ ∆m²_atm, so that k ≈ 0, only one Majorana phase, σ, is relevant. If also s_13 ≈ 0, we have, for the matrix of the moduli,

m̃ ≈ | r s²_12           r s_12 c_12 c_23    r s_12 c_12 s_23 |
    | ...                s²_23               s_23 c_23        |
    | ...                ...                 c²_23            | .

Notice that the e-row elements are real, φ_eα = 0, whereas for the phases of the µτ-block elements we have φ_αβ ≈ −2σ. The corrections are proportional to r.
Dependence of m αβ on CP violating phases
In Figs. 2, 3 we show the six mass matrix elements, m̃_αβ, α, β = e, µ, τ, as functions of the phase σ, for different values of the mixing angles θ_23 and θ_13 from the allowed regions given in (9) and (10). The main features of the dependences can be well understood by taking the lowest order terms in r and s_13 from Eqs.(A.9, A.11, A.13). According to (18), the dependence of the µτ-block elements on σ is the result of the interplay of the main, O(1), term and of the O(r) term. In the lowest order, the µτ-block elements do not depend on s_13 (see Figs.2,3); m_µτ has an opposite phase with respect to the two other elements. The relative amplitudes of variations, Eq. (19), are of order r c²_12; for the best fit values of the parameters the amplitudes are of order 10%. In the case of non-maximal 2-3 mixing, the amplitudes can reach ∼ 25%. The corrections ∼ r s_13 (A.10) lead to a small phase shift and a small change of the amplitude of variations. Neglecting terms of the order r s_13 s²_12, we get from (A.11) the expressions for m_eµ and m_eτ:

m̃_eµ ≈ r s_12 c_12 c_23 + s_13 s_23 e^{i(δ−2σ)} ,   (20)
m̃_eτ ≈ r s_12 c_12 s_23 − s_13 c_23 e^{i(δ−2σ)} .   (21)
So, the elements m_eµ and m_eτ depend on the phases in the combination (δ − 2σ); they change with (δ − 2σ) in opposite phases, and their values are determined by the interplay of the order-r and order-s_13 terms, which can have comparable sizes. The maximal values of m_eµ and m_eτ increase with s_13. The relative amplitude of variations of m_eµ with (δ − 2σ) is maximal when the two terms in (20) have the same modulus, which happens at

s⁰_13 ≡ r s_12 c_12 c_23 / s_23 .   (22)

If s_13 = s⁰_13, the maximal value of m_eµ equals m̃^max_eµ = 2 m̄_eµ = r sin 2θ_12 c_23. For s_13 < s⁰_13 (Fig.2a,2c,2e), the average value of m_eµ is determined by the first term in (20), whereas the relative amplitude of variations is given by s_13/s⁰_13. For s_13 > s⁰_13, the second term in (20) dominates; it determines the average value of m_eµ, around which the variations occur, and the relative amplitude of variations is given by the factor s⁰_13/s_13 (Fig.2b,2d,2f). The behavior of the element m_eτ is similar: the two terms in (21) have equal moduli at s̄⁰_13 ≡ r s_12 c_12 s_23/c_23 (s̄⁰_13 = s⁰_13 for maximal 2-3 mixing), so the same considerations apply with c_23 ↔ s_23. Corrections of order r s_13 s²_12, neglected in (20) and (21), produce a small relative shift of the phases of m̃_eµ and m̃_eτ (see Fig.2).
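A quick numerical check of the critical value of Eq. (22) as written above (the formula is used here as stated; tan²θ_12 = 0.36 and the values of r are illustrative):

```python
import numpy as np

def s13_critical(r, t12, t23):
    """s13 at which the two terms of Eq. (20) have equal modulus."""
    return r * np.sin(t12) * np.cos(t12) * np.cos(t23) / np.sin(t23)

t12 = np.arctan(np.sqrt(0.36))          # tan^2(theta_12) = 0.36
for r in (0.15, 0.3):
    s0 = s13_critical(r, t12, np.pi / 4)              # maximal 2-3 mixing
    m_max = 2 * r * np.sin(t12) * np.cos(t12) * np.cos(np.pi / 4)
    print(f"r = {r:.2f}:  s13^0 = {s0:.3f},  max m_e_mu/m3 = {m_max:.3f}")
```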
The ee-element, in the same approximation, is given by m̃_ee ≈ r s²_12 + s²_13 e^{2i(δ−σ)} (24); it depends on the combination of phases 2(δ − σ). Due to the factor r s²_12 in the first term, both contributions in (24) can be comparable in spite of the s²_13-order of the second term. The two terms are equal at tan θ_13 = s_12 √r ≈ 0.21, that is, near the upper limit for s_13. In this case the amplitude of variation can be maximal, with m̃_ee ranging between zero and about 2 r s²_12. Such a situation is approximately realized in Fig.2b,2d,2f. For small values of s_13 (s_13 ≪ 0.2), the dependence of m_ee on the phases is negligible (Fig.2a,2c,2e). The relative amplitude of variations is determined by the ratio tan²θ_13/(r s²_12) and the average value equals m̄_ee ≈ r s²_12. Let us now analyze the dependence of the matrix elements on the Dirac phase δ. The elements of the µτ-block depend on δ very weakly, via order r s_13 corrections (see (A.10)). E.g., for s_13 ≈ 0.14 (Fig.2b,2d,2f), we find ∆m^δ_µµ/m_µµ ≈ ∆m^δ_ττ/m_ττ ∼ 0.02. The dependence of m_µτ on δ is further suppressed by the factor ξ ≡ cos 2θ_23.
The elements of the e-row have a much stronger relative dependence on δ. As we pointed out, the elements m_eµ and m_eτ depend on the phases in the combination (δ − 2σ) (this feature is weakly violated by corrections ∼ r s_13, which depend on the phase δ only). So, up to corrections of order s_13 r, one can extract the information on the δ dependence of the elements directly from Fig.2 (or Fig.3 for large r). A change of δ by an amount ∆δ is equivalent to a horizontal shift, along the σ-axis, of the lines which correspond to m_eµ and m_eτ by ∆δ/2 and of the m_ee line by ∆δ, with respect to the lines of the µτ-block, which are almost unchanged. The phase δ can be selected in such a way that certain features of the m_ee line and of the other e-row lines occur at the same value of σ. For instance, according to Fig.2b, one can get m̃_ee ≪ m̃_eµ ≪ m̃_eτ.
All the elements have the same period of variation with σ, although the phases of the variations are different. There is a phase shift by π within each of the two groups: m_µτ varies in the opposite phase with respect to m_µµ and m_ττ, and m_eµ varies in the opposite phase with respect to m_eτ (Eq. (25)). There is, in addition, a relative shift of phase between the µτ-block and e-row elements which is determined by δ (Eq. (26)). These relations are weakly broken by corrections of order r s_13.
Dependence of masses on θ 12 , r and k
Variations of θ 12 within the allowed LMA region, given in (8), do not produce substantial changes of results shown in Fig.2. With increase of θ 12 , the amplitudes of variations of µτ -block elements with σ (see (19)) decrease as c 2 12 . For maximal 1-2 mixing we get ∼ 30% decrease in comparison with the best fit value of θ 12 . In contrast, the amplitude of variations of these elements with δ increases as sin 2θ 12 (see (A.10)). The dependence on δ remains weak, because the increase of the amplitude can be only 10%. For the e-row elements, the critical value s 0 13 is proportional to sin 2θ 12 (see (22)). The ee-element m ee can be two times larger for almost maximal solar mixing angle than for the best fit value (see (24)).
Changes of ∆m²_sol and ∆m²_atm within the allowed regions, (8) and (9), produce a strong effect on the structure of the mass matrix. In Fig.3 we show the dependence of the mass matrix elements on σ for r = 0.3, corresponding, e.g., to ∆m²_sol ≈ 2 · 10⁻⁴ eV² and ∆m²_atm ≈ 2 · 10⁻³ eV². For the µτ-block elements, the amplitudes increase linearly with r (see (19)) and for r ≈ 0.3 they can be larger than 30%. For the e-row elements the critical value s⁰_13 (see (22)) also increases linearly with r; for r ≈ 0.3, we get s⁰_13 ≈ 0.13. For s_13 < s⁰_13, the average values of the elements increase as m_eµ ∼ m_eτ ∼ r, but the amplitude of variations with (δ − 2σ) does not change (compare Fig.3, panels a,c,e, with the corresponding panels in Fig.2). For s_13 ≳ s⁰_13, the average values of m_eµ and m_eτ do not depend on r, while their amplitudes can be maximal (Fig.3, panels b,d,f). The average value of the ee-element increases with r: m̄_ee ∼ r; the amplitude of variations with 2(δ − σ) does not change.
Till now, we have considered the case m_1 = 0. A strong normal hierarchy among the mass eigenvalues, m_1 ≪ m_2 ≪ m_3, holds for m_1 up to approximately 0.002 eV (k < 0.3). Notice that, for m_1 ≠ 0, both Majorana phases become relevant (see (4)). We have checked that, varying m_1 between 0 and 0.002 eV, the dependence of m_αβ on the angles and CP phases shown in Figs.2,3 is qualitatively the same as for m_1 = 0, except for the dependence of m_ee. The ee-element can be about two times larger. Indeed, neglecting terms of order s²_13, we get m̃_ee ≈ r s²_12 (1 + k cot²θ_12 cos 2ρ).
The second term in the brackets is of order one for, e.g., m 1 = 0.002 eV, m 2 = 0.006 eV, tan 2 θ 12 = 0.35 and ρ = 0, π. Depending on ρ, the ratio of m ee and the other e-row elements can significantly change.
3.3 Structure of the mass matrix in the hierarchical case
1) As follows from Figs.2,3, a sharp structure with the dominant µτ-block and subdominant e-row appears for small s_13, small r and near maximal 2-3 mixing. In this case m(e−row) ≪ m(µτ−block), where m(e−row) and m(µτ−block) refer to typical masses of the e-row and µτ-block elements. Improvements of the upper bounds on s_13 and on ξ, as well as establishing ∆m²_sol near its present best fit value, would confirm this structure under the assumption of mass hierarchy. In the limit of a sharp [µτ-block]-[e-row] structure, the elements of the dominant block depend very weakly on δ and have about 10% variations (determined by r) due to the phase σ. The elements m_eµ and m_eτ depend significantly on the combination (δ − 2σ), unless a very strong upper bound on s_13 is established. The ee-element varies with 2(δ − σ), with amplitude ∼ s²_13. Thus, uncertainties in the structure of the mass matrix due to the unknown CP violating phases can be substantially reduced by further measurements of the mixing angles and mass squared differences.
According to Figs. 2,3, for a large part of the parameter space (θ 23 , θ 13 , r, δ, σ), the structure [µτ -block]-[e-row] is less profound or even disappears. Indeed, in the case of large ξ or/and large r, the split between masses within the µτ -block can be larger than the gap between m(e − row) and m(µτ − block), depending on σ. Separation of the elements in two groups loses any sense. For the extreme case of large values of r, the elements m eµ and m eτ can be even larger than m µµ or m τ τ .
2) Dependence of the gap between µτ -block and e-row elements on s 13 and r can be seen comparing left and right panels in Figs.2,3 and Fig.2 with Fig.3, respectively. The deviation of θ 23 from 45 • , leading to a spread among the µτ -block elements (see (16)), can strongly decrease the gap.
Let us quantify the size of the gap. Taking only the leading terms in ξ, r and s_13, one obtains a lower bound for the µτ-block elements, where m(µτ−block)_min is the value of m_µµ or m_ττ for σ = π/2. The upper bound on the e-row elements, m(e−row)_max, is the value of m_eµ or m_eτ for δ − 2σ = 0 or π, respectively. The minimal value of the gap is then obtained by combining these two bounds. One can also characterize the split of the elements by the ratio of the mean values of the e-row and µτ-block elements. Up to terms quadratic in ξ, r and s_13, m(µτ−block) ≈ m_3/2, while for the e-row we can take the value in accordance with (28). The ratio (30) does not depend on the CP phases or on θ_23.
3) Apart from special choices of the phases, the ee-element is typically of the order of the other e-row elements.
4) The CP violating phases can change the structure of the e-row significantly. As follows from Figs. 2,3, various orderings can be obtained; any element of the e-row can be the smallest one, and all possible orderings of the e-row elements can be realized by an appropriate choice of the phases.
5) Depending on the phases, one can find a configuration with almost uniform splits among the six mass matrix elements, so that the structure with the dominant µτ-block disappears. Still, the average value of the e-row elements is smaller than the average value of the µτ-block elements (see (30)). Thus one can get flavor alignment (a correlation of the neutrino masses with the masses of the charged leptons).
CP phases in the case of non-hierarchical mass spectrum
With respect to the hierarchical case, the structure of the mass matrix depends on two additional parameters: the mass ratio k and the phase ρ. These parameters enter the mass matrix elements through the combinations (see (A.7))

X ≡ c²_12 + k s²_12 e^{2iρ} ,   Y ≡ s_12 c_12 (1 − k e^{2iρ}) ,   Z ≡ s²_12 + k c²_12 e^{2iρ}

(in the hierarchical case, k ≈ 0, X ≈ c²_12, Y ≈ s_12 c_12 and Z ≈ s²_12). We will use the parameterization (33), in which each combination is written in terms of a modulus and a phase, φ_X ≡ arg X, etc. In the limit of very small s_13, using (A.7) and the notation (33), we obtain the matrix (34), where σ_X ≡ σ + φ_X/2. One can first analyze the matrix (34) and then consider corrections of the order s_13. Notice that now the elements of the e-row have non-zero phases which depend on ρ. The dependences of the absolute values and of the phases of these elements are correlated. The phases of the µτ-block elements depend mainly on σ, with corrections which are functions of ρ.
Non-degeneracy case
For m_1 ≲ √(∆m²_sol), we have k ≲ 1 and r ≪ 1. The largest mass is given by m_3 ≈ √(∆m²_atm). The contributions of m_1 to the µτ-block elements appear as small corrections, but they can be of order 1 for the e-row elements.
Neglecting terms of order s_13, we can use for the µτ-block elements the expressions from (34). Comparing with the hierarchical case (see Eq.(18)), we find that the effect of m_1 is reduced to a renormalization of the mass ratio r and to a shift of the phase σ, r → r_X and σ → σ_X = σ + φ_X/2. That is, the dependence of the elements on the phases can be found from Figs.2,3 by an appropriate change of r and σ.
Depending on the phase ρ, the contribution related to m_1 can suppress or enhance the amplitude of variations of the µτ-block elements with σ (see (19)). The extreme modifications correspond to r_X = r (1 ± k tan²θ_12). For k ≲ 1 and tan²θ_12 ≲ 0.5, the relative effect of m_1 is below 50%. For ρ = 0, π/2, we have φ_X = 0 and no phase shift occurs. In general, the phase φ_X lies in the interval (−φ^max_X ÷ φ^max_X), where sin φ^max_X = k tan²θ_12; this maximal phase corresponds to r_X = r √(1 − k² tan⁴θ_12). For the elements of the e-row, the s_13 corrections should be taken into account (see (A.7)). Again, the effect of m_1 is reduced to a renormalization of r and a shift of phase (compare with Eqs. (20, 21)), r → r_Y and σ → σ_Y. The minimal and maximal values of r_Y are r (1 − k) and r (1 + k), and in these extreme cases there is no phase shift. In general, for arbitrary values of ρ, |φ_Y| ≤ φ^max_Y, where sin φ^max_Y = k, and this maximal value corresponds to r_Y = r √(1 − k²). Notice that, for m_eµ and m_eτ, the modifications of r can be larger than for the elements of the µτ-block; moreover, r_Y and r_X change with ρ in opposite phases. The phases of variations of the µτ-block elements are correlated as in the hierarchical case: no phase shift among these elements is induced by the m_1 contribution, and in (25) one should simply substitute σ → σ_X. A similar conclusion is valid for the e-row elements, with the corresponding substitution in (26). For the ee-element, similarly to the previous cases, we get (including s_13 corrections) a renormalization r → r_Z and a shift σ → σ_Z. Now the difference between r and r_Z can be substantially larger, r_Z = r (1 ± k cot²θ_12) in the extreme cases, and r_Z changes with ρ in phase with r_X. Notice that, for k < tan²θ_12, the shift φ_Z is restricted to the interval (−φ^max_Z ÷ φ^max_Z), with sin φ^max_Z = k cot²θ_12; for k > tan²θ_12, the shift is unrestricted. The ee-element is zero for tan θ_13 = s_12 √r_Z (and an appropriate choice of the phases). Since r_Z can be smaller than r, or even zero, the equality m_ee = 0 can be realized for smaller values of s_13 than in the hierarchical case. Now the strongly hierarchical structure of the e-row can be easily achieved. The maximal value of m_ee equals approximately m̃^max_ee ≈ r (s²_12 + k c²_12). The dependences of the mass matrix elements on the phases can be deduced from Fig.2 and Fig.3. Since the "effective" value of r is now different for the µτ-block elements (r_X) and for the e-row elements (r_Y, r_Z), one should take, e.g., the lines which correspond to the µτ-block from Fig.2 and the lines which correspond to the e-row from Fig.3, or vice versa.
Let us analyze the dependence of the elements on the phase ρ. The relative amplitudes of variations of the µτ-block elements with ρ are suppressed by a factor s²_12 r k (see (A.9)). The influence of ρ on the e-row elements is much stronger. If s_13 ≈ 0, we have m̃_eµ ≈ r s_12 c_12 c_23 |1 − k e^{2iρ}| (for m̃_eτ one should substitute c_23 → s_23), and the relative amplitude of variations is given by k.
The amplitude of ee-element can be maximal if k ≥ tan 2 θ 12 .
Partial degeneracy
For √(∆m²_sol) ≪ m_1 ≲ √(∆m²_atm), we get a spectrum with partial degeneracy, m_1 ≈ m_2 < m_3. The mass ratios are k ≈ 1 − ∆m²_sol/(2m²_1) and r ≈ m_1/√(m²_1 + ∆m²_atm). For m_1 > 2 · 10⁻² eV, the deviation of k from 1 is smaller than 5% and we can neglect it in comparison with the other corrections (related to possible large deviations from maximal 2-3 mixing and to s_13 ≲ 0.1). Now the scale of masses is determined by m_3 ≈ √(m²_1 + ∆m²_atm) ∼ (1 ÷ 2) √(∆m²_atm). The sum (6) of all the matrix elements squared equals approximately m²_3 (1 + 2r²). Let us consider first the dependence of the masses on the phase σ (see Fig.4 panels a,c,e and the phase diagrams in Fig.1). In the limit of small s_13, we obtain the approximate expressions (42). The mass m̃_µµ oscillates with σ around s²_23; the amplitude of variations depends on the phase ρ. The maximal amplitude is attained for ρ = 0, which corresponds to X_1 = 1. The mass m̃_ττ oscillates in phase with m̃_µµ around the average value c²_23; the mass m̃_µτ varies in the opposite phase. The amplitudes of variations of all µτ-block elements decrease with increase of the phase ρ and are minimal for ρ = π/2.
In the approximation (42), all the elements of µτ -block depend on the phase ρ in the same way. So, there is no relative shift and the relative phases are determined as in (25). The phase shift seen in Fig.4c is due to the interplay of ǫ corrections and phase ρ.
The dependence of the elements of the e-row on σ (as well as on δ) appears due to terms of the order s_13 (see (A.11) and Fig.4 panels a,c,e). Neglecting corrections ∼ r s_13, we find (for ρ not too close to 0) that the masses m̃_eµ and m̃_eτ vary with (δ − 2σ) in opposite phases (a small phase shift may appear due to the interplay of order r s_13 corrections and the phase ρ). The amplitude of variations is proportional to s_13. The average values of the elements increase with ρ, and they reach their maxima, m̃^max_eµ = r sin 2θ_12 c_23 and m̃^max_eτ = r sin 2θ_12 s_23, at ρ = π/2. The configuration with ρ = 0 or ρ ≈ 0 is a special one (see Fig.4a). In this case the main terms in (A.11) vanish and the dependence on the phases appears due to the ǫ corrections, defined in (A.12). Notice that the elements m̃_eµ and m̃_eτ then vary in phase; both the average value and the amplitude are proportional to s_13. Changing the phase δ by ∆δ one shifts the lines which correspond to m̃_eµ and m̃_eτ, with respect to the lines of the µτ-block elements, by ∆σ = ∆δ/2. For instance, according to Fig.4c, one can get the equalities m̃_µµ = m̃_ττ = m̃_µτ and m̃_eµ = m̃_eτ simultaneously.
Variations of the ee-element (A.13) with σ, as well as with δ, are strongly suppressed by the factor s²_13, so that m̃_ee ≈ r |s²_12 + c²_12 e^{2iρ}|. The average value decreases with increase of the phase ρ: it varies from m̃^max_ee ≈ r for ρ ≈ 0, π down to m̃^min_ee ≈ r cos 2θ_12 for ρ ≈ π/2. Variations of m̃_ee with ρ are in opposite phase with respect to m̃_eµ and m̃_eτ.
Let us analyze the dependence of the masses on the phase ρ (Fig.5, panels a,c,e). The amplitudes of variations of the µτ-block elements with ρ, ∆m^ρ ∝ r s²_12, are smaller (for non-maximal solar mixing) than the amplitudes of the σ variations. The average values of m̃_µµ and m̃_ττ decrease, whereas the average of m̃_µτ increases, with increase of σ from 0 to π/2. The strong split of masses in the µτ-block (see Fig.5a,5e) is due to the cancellation of the contributions related to m_3 (first term in (42)) and to m_1 and m_2 (second term). For large s_13, the terms of order r s_13 can enhance the variations with ρ.
According to (45), variations of the e-row elements with ρ are strong: the amplitude can be close to maximal one. For large values of s 13 , the phase (2σ − δ) changes significantly the average values of the elements m eµ and m eτ and also modifies the amplitudes of variations with ρ.
The matrix elements are all correlated. This can be seen in the limit of very small s_13. For the partially degenerate spectrum (k = 1), the mass matrix can be written in the form (52), and from (52) we find the following relations among the elements:

m_eτ / m_eµ = tan θ_23 .   (54)

Notice that m_ττ = m_µµ either for θ_23 = 45° or for rx = 1. The latter corresponds to the completely degenerate spectrum and ρ = 0. In this case

m̃_ττ = m̃_µµ = √(1 − sin²2θ_23 sin²σ) ,   m̃_µτ = sin 2θ_23 sin σ .
Furthermore, we find, for the sum of the µτ-block elements, m̃²_µµ + m̃²_ττ + 2m̃²_µτ = 1 + m̃²_ee, and consequently m̃²_µµ + m̃²_ττ + 2m̃²_µτ − m̃²_ee = 1. For a given r, the mass matrix (52) is determined by σ_X and x. In general, σ_X and x can be treated as two independent parameters. Depending on ρ, x changes from the minimal value x_min ≡ cos 2θ_12, for ρ = π/2, to x_max ≡ 1, for ρ = 0. The phase φ_X varies in a rather narrow interval, which decreases with θ_12: sin φ_X ∼ (− tan²θ_12 ÷ tan²θ_12).
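This relation can be verified numerically in the stated limit (s_13 = 0, k = 1). The short check below uses an explicit construction of the mass matrix; the mixing-matrix convention and the parameter values are illustrative and do not affect the result.

```python
import numpy as np

def mutau_sum_rule(r, t12, t23, rho, sigma):
    """Return m~_mumu^2 + m~_tautau^2 + 2 m~_mutau^2 - m~_ee^2 for s13 = 0, k = 1."""
    c12, s12 = np.cos(t12), np.sin(t12)
    c23, s23 = np.cos(t23), np.sin(t23)
    U = np.array([[c12, s12, 0],
                  [-s12 * c23, c12 * c23, s23],
                  [s12 * s23, -c12 * s23, c23]], dtype=complex)
    U = U @ np.diag([np.exp(1j * rho), 1, np.exp(1j * sigma)])
    M = np.abs(U.conj() @ np.diag([r, r, 1.0]) @ U.conj().T)   # units of m3
    return M[1, 1]**2 + M[2, 2]**2 + 2 * M[1, 2]**2 - M[0, 0]**2

print(mutau_sum_rule(0.7, 0.58, np.pi / 4, 0.4, 1.0))   # -> 1.0
print(mutau_sum_rule(0.9, 0.55, 0.70, 1.2, 2.3))        # -> 1.0
```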
Degenerate spectrum
For m_1 ≫ √(∆m²_atm), we have m_1 ≈ m_2 ≈ m_3, and the ratio of masses is given by r ≈ 1 − ∆m²_atm/(2m²_1). For m_1 = 0.5 eV, the deviation of r from 1 is smaller than 1% and we can neglect it in comparison with the other small parameters, s_13 and ξ. The e-row elements and the µτ-block elements are given by (A.9, A.11, A.13), with k = r = 1. Notice that, in the approximation ∆m²_atm ≈ 0, the structure of the mass matrix for normal and inverted hierarchy is the same.
The transition to the degenerate case does not produce qualitative changes in the dependences of the matrix elements on the phases in comparison with the partial degeneracy case (see Fig.4 b,d,f and Fig.5 b,d,f). The amplitudes of variations of the µτ-block elements increase and can reach their maximal size for specific values of the phases. This leads to zero (small) values of certain matrix elements and therefore to the appearance of a hierarchical structure of the mass matrix. For example, in the case of maximal 2-3 mixing and ρ = 0, we find from (52) that m̃_µµ = m̃_ττ ≈ |cos σ| and m̃_µτ ≈ |sin σ|. Therefore, m̃_µµ = m̃_ττ = 0 for σ = π/2 (Fig.4b). For 2-3 mixing deviating from maximal, however, m̃_ττ differs from zero. Such a configuration is realized approximately in Fig.4d.
The average values of m̃_eµ and m̃_eτ increase with respect to the partial degeneracy case, whereas the amplitudes of variations with σ and ρ do not change. The average value of the ee-element increases with r and can reach 1 for ρ = 0 (Fig.4b).
The amplitudes of variations with ρ (Fig.5 b,d,f) increase and, for ρ ≈ 0, π, hierarchical structure of the mass matrix appears (Fig.5b,5f). For some values of phases all the elements become approximately equal to each other (see, e.g., Fig.5d at ρ = 1.3π).
From hierarchy to degeneracy
In Fig.6, we show the dependence of m_αβ on m_1 for different values of the Majorana phases σ and ρ. As follows from the figure, the hierarchical structure with the dominant µτ-block and small e-row elements exists, independently of the phases, for m_1/√(∆m²_atm) ≲ 0.1 (m_1 ≲ 0.005 eV). This interval of m_1 corresponds to hierarchical or non-degenerate spectra. The structure with the dominant µτ-block disappears for m_1/√(∆m²_atm) ∼ 0.3 ÷ 0.5 (m_1 ≈ (0.02 ÷ 0.03) eV), that is, for a partially degenerate spectrum. For m_1 ≳ √(∆m²_atm) ≈ 0.05 eV, the spectrum converges to the degenerate one. In this last case, the structure of the mass matrix depends substantially on the Majorana phases. Notice that, in general, the pairs of elements m̃_µµ and m̃_ττ, as well as m̃_eµ and m̃_eτ, have similar dependences on m_1.
For a large part of the phase parameter space, all elements of the mass matrix increase with m_1, remaining of the same order. Some accidental equalities among them may appear. Particular structures are realized for specific values of the phases, ρ, σ ≈ 0, π/4, π/2, as shown in the ρ − σ plots discussed below. Let us comment on the properties of the ρ − σ plots. The periodicity in ρ and σ implies that the opposite sides of the plots must be identified. For example, the case of equal CP parities of ν_1, ν_2 and ν_3 corresponds to any of the four corners of the plots.
The phase ρ is associated with the mass m 1 , therefore, in the case of strong normal hierarchy, the dependence of m αβ on ρ disappears and the iso-mass contours become parallel to the axis ρ. In contrast, the contours for m ee are nearly parallel to σ axis, since m ee depends on σ via O(s 2 13 ) terms. There is a relative shift of π/2, along the axis σ, between the patterns for m eµ and m eτ .
The elements m_µµ and m_ττ have the same ρ − σ pattern in the limit of maximal 2-3 mixing and zero s_13. The difference between them originates from the deviation of θ_23 from 45° and from the terms (see (A.7)) ± sin 2θ_23 s_13 r e^{−iδ} Y (57), where the plus sign corresponds to m_ττ and the minus sign to m_µµ. In the case of maximal 2-3 mixing, only the term (57) contributes to the difference. The pattern for m_µτ is complementary to that for m_µµ and m_ττ, in the sense that regions of large m_µτ correspond to regions of small m_µµ and m_ττ and vice versa. Small values of the µτ-block elements appear at the corners of the plots, ρ ≈ 0, π as well as σ ≈ 0, π, and in the region σ ∼ π/2. In the latter case, the corresponding value of ρ depends on the 2-3 mixing. For maximal mixing, the regions of small elements are at ρ ∼ 0, π; with deviation from maximal mixing, the regions shift to the center of the plot and merge at ρ ∼ π/2 for large values of ξ.
Let us comment on specific features of Figs. 7 -13.
In Fig.7 we show the plots for the non-degenerate spectrum. There is a sharp separation of the e-row and dominant µτ -block elements. Structuring within these two groups is rather weak.
In Fig.8 we show the plots for spectrum with partial degeneracy. Dependence of elements on ρ becomes stronger with increase of m 1 . The µτ -block elements have more profound structure. The elements m eµ and m eτ are small in the regions near the corners of the plots.
The plots for spectrum with strong degeneracy are shown in Figs. 9 -13. Now the e-row elements depend strongly on ρ, whereas the dependence on σ is rather weak. With increase of m 1 the ρ-dependence becomes stronger for the µτ -block elements (see (A.9)). The patterns for m µµ and m τ τ differ due to order s 13 terms (57), which also depend on δ. The contribution of the term (57) has minus sign for m µµ and therefore it adds constructively with the other ρ-dependent term (see (A.9)). For m τ τ , instead, the contribution has an opposite sign, therefore ρ-dependence remains weak.
In Fig.10 we show the plots for δ = π/2. The difference between the plots for m µµ and m τ τ becomes smaller in comparison with the case δ = 0: indeed, for δ = π/2, the term (57) has pure imaginary coefficient and its contributions to m µµ and m τ τ become similar. For δ = π, the ρ − σ plots for m µµ and m τ τ interchange as compared with those in Fig.9. The pattern for m µτ is almost unchanged. In the first approximation, the effect of δ = π/2 on the e-row elements is reduced to a shift of σ by π/4 for m eµ and m eτ and by π/2 for m ee .
In Fig. 11 we show the plots for small s 13 . With decrease of s 13 , the dependence of e-row elements on σ disappears, patterns for m µµ and m τ τ become more similar, their complementarity to the pattern for m µτ becomes sharp.
In Fig. 12 we show the plots for non-maximal 2-3 mixing (θ 23 = 35 • ). The pattern for m ee is unchanged and the one for m µτ changes weakly. In contrast, the difference between the patterns for m eµ and m eτ increases. In particular, m eµ can be large for ρ ≈ π/2 and σ ≈ 0, π. Also difference of the patterns for m µµ and m τ τ increases. Dependence of m τ τ on phases becomes weaker and regions with very small values of m τ τ disappear. In contrast, for m µµ the region of small values appears near the center of the plot: ρ ∼ σ ∼ π/2. For θ 23 > 45 • (not shown) the situation is opposite: region of small values at ρ ∼ σ ∼ π/2 appears for m τ τ . Also m eτ becomes, in general, larger than m eµ .
In Fig.13 we show the plots for maximal possible 1-2 mixing. The ρ dependence becomes strong for all the elements and especially for m ee . This element can be zero at ρ ≈ π/2.
Correlations of mass matrix elements. Extreme values
The ρ − σ plots allow one to systematically scan all possible structures of the mass matrix. The pattern of the ρ − σ plots themselves depends on the unknown parameters m_1, δ, s_13, as well as on the uncertainties of the known oscillation parameters. As follows from the figures, the dependence of the plots on m_1 is very strong, whereas the dependences on δ and s_13 are relatively weak (in view of the strong bound on s_13).
The ρ − σ plots allow one to see immediately the correlations between the values of different matrix elements. Formally, the 6 independent moduli of the matrix elements depend on 5 free parameters, m_αβ = m_αβ(m_1, ρ, σ, δ, s_13), so only one relation should exist among the matrix elements. Actually, the correlations are much stronger, due to the relatively strong upper bound on s_13 and the fact that the effect of δ is suppressed by a factor s_13. In the physically interesting limits the number of free parameters further decreases. Thus, in the case of strong mass hierarchy (m_1 → 0) the m_1 and, consequently, ρ dependences disappear: m_αβ = m_αβ(σ, δ, s_13). In the limit of strong mass degeneracy the structure of the mass matrix does not depend on the absolute mass scale: m_αβ = m_1 f_αβ(ρ, σ, δ, s_13), etc. Each point in the ρ − σ diagram (obviously, the same point should be taken in all six panels) corresponds to a mass matrix with a certain structure. A given set of ρ − σ diagrams (which corresponds to fixed values of m_1, s_13 and δ) shows the 6 elements as functions of the two parameters ρ and σ. Therefore, imposing conditions on two elements (or even on one element) one may reconstruct the whole matrix, up to a certain discrete ambiguity. E.g., in the degenerate case, imposing the condition that m_ττ is the heaviest element, we find that m_ee and m_µµ should be equally large whereas the three other elements are small.
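The construction of such plots is straightforward to sketch numerically: scan ρ and σ on a grid for fixed m_1, s_13 and δ and record the six moduli. The snippet below is an illustrative implementation with our own (standard) PMNS convention and representative oscillation parameters; it is not the code used for the figures.

```python
import numpy as np

def moduli(m1, rho, sigma, delta=0.0, s13=0.1, t12=0.58, t23=np.pi / 4,
           dm2_sol=5e-5, dm2_atm=2.5e-3):
    """|m_alpha_beta| (eV) for given lightest mass m1 and phases."""
    m = np.array([m1, np.sqrt(m1**2 + dm2_sol), np.sqrt(m1**2 + dm2_atm)])
    c12, s12 = np.cos(t12), np.sin(t12)
    c23, s23 = np.cos(t23), np.sin(t23)
    c13 = np.sqrt(1.0 - s13**2)
    e = np.exp(1j * delta)
    U = np.array([
        [c12 * c13, s12 * c13, s13 / e],
        [-s12 * c23 - c12 * s23 * s13 * e, c12 * c23 - s12 * s23 * s13 * e, s23 * c13],
        [s12 * s23 - c12 * c23 * s13 * e, -c12 * s23 - s12 * c23 * s13 * e, c23 * c13]])
    U = U @ np.diag([np.exp(1j * rho), 1.0, np.exp(1j * sigma)])
    return np.abs(U.conj() @ np.diag(m) @ U.conj().T)

grid = np.linspace(0.0, np.pi, 61)
m_ee = np.array([[moduli(0.2, rho, sig)[0, 0] for sig in grid] for rho in grid])
print(m_ee.min(), m_ee.max())     # range of m_ee over the rho-sigma plane
```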
The ρ − σ plots allow one to find immediately the maximal and minimal values of the matrix elements. Using Eq. (4) it is easy to see that the maximal value of an individual matrix element is given by m^max_αβ = Σ_i m_i |U_αi U_βi|; it does not depend on the Majorana phases. The minimal value is zero, or equals 2 max_i (m_i |U_αi U_βi|) − Σ_i m_i |U_αi U_βi|, (59) if the latter is above zero. The first term in (59) is (two times) the largest among the contributions from the three mass eigenstates. These statements have been made for the m_ee element in [34] and generalized to the other elements in [33,22].
In the limit of small s 13 , maximal values of the two other elements are m max eµ ≈ m 1 sin 2θ 12 c 23 and m max eτ ≈ m 1 sin 2θ 12 s 23 . Due to correlations among the mass matrix elements imposed by experimental data as well as the sum rule condition (6), only some elements can take their maximal or minimal values simultaneously. In particular, according to the ρ − σ plots of Fig. 7, in the hierarchical case only two e-row elements can be zero (very small) simultaneously: m ee and m eµ or m ee and m eτ . In the case of partial degeneracy m eµ and m eτ can be very small simultaneously. In the case of strong degeneracy, we see, from Figs. 9-13, that there are two groups of elements which can be simultaneously very small: 1) m eµ , m eτ , m µµ , m τ τ ; 2) m eµ , m eτ , m µτ . Similarly, from the ρ − σ plots one can get groups of elements which reach simultaneously their maxima.
ββ 0ν -decay and structure of the mass matrix
The ee-element, m ee , is the only matrix element for which we have immediate experimental access. The ρ − σ plots allow one to find immediately the implications of the results from ββ 0ν -decay searches for the structure of the mass matrix (in assumption that the exchange of the light Majorana neutrinos is the only mechanism of the decay). For m ee , the Majorana phase plots (using a different parameterization) have been considered in [35].
The iso-mass contours of m_ee are nearly parallel to the axis σ. The weak dependence of m_ee on σ appears due to terms of the order s²_13. For very small s_13 and a (partially) degenerate spectrum, the iso-mass contours are determined by m_ee ≈ m_1 √(1 − sin²2θ_12 sin²ρ), i.e., they are lines of nearly constant ρ. Suppose that experimental searches give the upper bound m_ee < m^up_ee. Then, according to Figs. 9-13, there are two iso-mass contours in the ρ − σ plots which correspond to a given value m^up_ee (m_ee(ρ, σ) = m^up_ee) and a given set of the other parameters (m_1, s_13, δ, etc.): ρ_1 = ρ_1(σ) (ρ_1 < π/2) and ρ_2 = ρ_2(σ) (ρ_2 > π/2). The upper experimental limit on m_ee excludes the regions ρ < ρ_1(σ) and ρ > ρ_2(σ) in the ρ − σ plots (obviously for all the matrix elements). The position and the shape of the contours ρ_i(σ) (i = 1, 2) depend on m_1, θ_12 and s_13. Taking, e.g., m_1 = 0.5 eV, s_13 = 0.1, tan²θ_12 = 0.36 and the bound (11), we find from Fig.9 that the regions covered by the three darkest strips are excluded. They correspond approximately to ρ < π/4 and ρ > 3π/4. These regions are excluded for all the elements. In this particular case, all the corners of the plots and the sides with ρ ≈ 0, π, which correspond to hierarchical structures of the mass matrix, are excluded. Clearly no constraint on the structure appears for a weaker bound, m^up_ee > 0.5 eV (which is allowed by the uncertainty in the nuclear matrix elements), or, more generally, for m^up_ee > m_1.
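A small sketch of this inversion, using the approximate ρ-only dependence of m_ee written above (s_13 effects and nuclear-matrix-element uncertainties are ignored; the numerical inputs are illustrative):

```python
import numpy as np

def excluded_rho_band(m1, m_up, t12):
    """Return (rho_1, rho_2) such that rho < rho_1 or rho > rho_2 is excluded
    by m_ee < m_up, using m_ee ~ m1*sqrt(1 - sin^2(2 t12) sin^2(rho))."""
    if m_up >= m1:
        return None                               # no constraint
    s2 = (1.0 - (m_up / m1) ** 2) / np.sin(2 * t12) ** 2
    if s2 >= 1.0:
        return (np.pi / 2, np.pi / 2)             # essentially all of rho excluded
    rho1 = np.arcsin(np.sqrt(s2))
    return rho1, np.pi - rho1

t12 = np.arctan(np.sqrt(0.36))
print(excluded_rho_band(m1=0.5, m_up=0.35, t12=t12))  # roughly (pi/4, 3pi/4)
```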
For small s_13, the mass matrix can be written immediately in terms of m_ee (Eq. (63)).
Here m̃_ee = xr ≤ r. This form shows how strongly the determination of m̃_ee can influence the structure of the mass matrix. The s_13 corrections to (63) can weakly modify the structure of the matrix. Positive results of ββ0ν-decay searches will select two strips in the ρ − σ plot.
Substantial bounds on the structure of the mass matrix can be obtained when future solar neutrino experiments and the KamLAND experiment [26] further restrict the allowed range of θ_12, and when future β-decay measurements (KATRIN [36]) strengthen the bound on the absolute mass scale.
ρ − σ plots for the phases of matrix elements
The phases of matrix elements are, in general, functions of all the unknown physical parameters: φ αβ = φ αβ (m 1 , ρ, σ, δ, s 13 ). In the limits of strong mass hierarchy or/and small s 13 , the expressions are simplified and for some elements the phases are zero. Also in certain situations the phases of some elements depend only on ρ or on σ.
The values of phases correlate (or anticorrelate) with the absolute values of the corresponding elements. Strong change of phase occurs typically in the regions of parameter space where the absolute value of the element is small. There are also correlations between phases of different elements.
In Fig. 14 we show the ρ − σ plots for the phases of matrix elements in the case of degenerate spectrum and the same choice of parameters as in Fig. 9. Notice that the pattern of ρ − σ plots for phases repeats partially the pattern for the absolute values. The phase φ ee depends strongly on ρ and weakly on σ. The phases φ eµ and φ eτ change with ρ and (weaker) with σ. The patterns are complementary to some extent: at ρ ∼ π/2, φ eµ has minimum whereas φ eτ maximum. The phases of the µτ -block depend both on σ and (weaker) on ρ. The patterns for φ µµ and φ τ τ are rather similar. Notice that maximal (π) values of these phases are achieved at σ ∼ π/2 and minimal (zero) values are at σ ∼ 0, π.
CP phases and structure of the mass matrix
Possible structures of the mass matrix can be classified in the following way: • Hierarchical matrices, with certain dominant and sub-dominant elements.
• Matrices with certain ordering of elements. In this case, the elements m αβ have the same order of magnitude.
• Democratic matrices, with equal moduli of all the elements: m αβ ≈ m 0 for any choice of α, β.
We will discuss these possibilities in order.
Hierarchical mass matrices
The regions of parameters which correspond to a hierarchical structure of the mass matrix can be identified as "white" zones in the ρ − σ plots, where one or several elements have small values. Notice that the "white" zones are mainly at the corners or in the center of the plots, which correspond to definite CP-parities or to small CP-violating phases. So, most of the hierarchical structures can be identified by considering definite CP-parities. A systematic search of possible hierarchical structures can be performed in the following way. In the limit s_13 = 0, the elements of the e-row equal m̃_eµ = r c_23 |Y| and m̃_eτ = r s_23 |Y| (64), with Y defined above. Since tan θ_23 ∼ 0.7 − 1.4, these elements can be either both small or both large. Let us consider first the case when m_eµ and m_eτ do not belong to the dominant structure, i.e., M_eµ ≈ M_eτ ≈ 0. According to (64), this implies either r → 0 or ρ ≈ 0, π. In the first case we arrive at the structure with the dominant µτ-block, which holds for any value of the phases (see Fig.7). A weak ordering of elements is possible in the µτ-block. In the second case, ρ = 0, π, which corresponds to the same CP-parities of ν_1 and ν_2, the ratio r can be of order 1 and new structures appear. For ρ = 0, π, we get X = Z = 1 (see (52)). Such a possibility is realized near the left and right borders of the plots in Fig.8. The determinant of the µτ-block now deviates strongly from zero with increase of r. In the first approximation, we get a mass matrix with 4 independent dominant elements of the same order: m_ee ∼ m_µµ ∼ m_ττ ∼ m_µτ. A hierarchy of elements in the µτ-block appears for special values of the phase σ. If, e.g., tan θ_23 ≤ 1, we can get M_µµ ≈ 0 provided that σ = π/2 and r = tan²θ_23. The mass matrix is then reduced to a texture with vanishing µµ-element. Let us underline that such a structure is present in the case of partial degeneracy only.
In the limit of complete degeneracy, r → 1, the condition M_µµ ≈ 0 requires tan θ_23 = 1 and therefore the matrix converges to the form (68). This type of matrix has been discussed previously, e.g., in [8]. If also δ = π/2, then Z′ = 0 (see (A.8)) and therefore the order s_13 terms are zero (see (A.7) and Fig.10). If tan θ_23 ≥ 1, one can get M_ττ ≈ 0. This, again, requires σ = π/2 but r = cot²θ_23, and, in the lowest order in s_13, the mass matrix takes the corresponding form with the µ and τ entries interchanged. It has the same limit (68) in the case of a completely degenerate spectrum. Notice that in the limit s_13 → 0, one should take into account the deviations of k from 1; this leads to the appearance of terms of order ∆m²_sol/2m²_1 instead of zeros (see (41)). According to (66), the off-diagonal elements of the µτ-block are zero for r = e^{−2iσ}, that is, for r = 1 and σ = 0, π. In this case M_ee = M_µµ = M_ττ = 1, so the dominant structure reduces to the unit matrix, Eq. (70) (as has been described, e.g., in [8]), with small off-diagonal corrections (Fig.6a). If also δ = 0, π, the order s_13 terms are zero (see (A.7, A.8)). Let us study possible equalities of the elements of the dominant structure. The conditions for the equality m_µµ = m_ττ follow from Eq.(55). All the elements of the µτ-block have the same absolute value provided that the 2-3 mixing is maximal and σ = π/4, 3π/4. In this case the common modulus of the µτ-block elements equals (1/2)√(1 + r²), with phases ±φ, where φ = arctan 1/r. For r = 1, the common modulus is 1/√2 and φ = π/4. The order s_13 terms are zero if also δ = σ or σ + π. Notice that the mass matrices considered above depend on r and s_23; the dependence on θ_12 appears only via s_13 and ∆m²_sol/m²_1 corrections.
Let us consider the case where M_eµ and M_eτ belong to the dominant structure. According to (64), this implies r ∼ 1 and ρ not too close to 0, π (see the regions with ρ ∼ π/2 in Figs. 9-13). In this case, also the element m_ee can belong to the dominant structure. Indeed, the minimal value of m_ee is achieved at ρ = π/2, where m̃_ee ≈ r cos 2θ_12. So, all the elements of the e-row have comparable values, unless the 1-2 mixing is near maximal. For θ_12 ≈ π/4, the hierarchical structure m_ee ≪ m_eµ, m_eτ is realized (see Fig.13). Let us consider the possibility of zeros in the µτ-block. Now the situation differs from that of the case ρ = 0, π (see Eqs.(50,52)). Since x < 1, we have m̃_µτ ≠ 0, and for maximal mixing all the elements of the µτ-block differ from zero. Still, for non-maximal 2-3 mixing, we can get m̃_µµ = 0 or m̃_ττ = 0. For instance, if θ_23 < 45°, m̃_µµ = 0 when σ_X = π/2 and tan²θ_23 = xr. So, in the case of large e-row elements, one or two of the diagonal elements can be zero: m_ee, for maximal 1-2 mixing, and/or m_µµ (m_ττ), for a special relation among θ_12, σ and θ_23.
Summarizing, the mass matrix has a hierarchical structure: (a) In the case of hierarchical mass spectrum: the e-row elements can be about 10 times smaller than the µτ -block elements.
(b) In the case of degenerate mass spectrum: a hierarchy characterized by a factor ∼ 10 or more appears in the regions near the corners of the ρ − σ plots (for the first corner, and in similar intervals near the three other corners). In these cases the matrix is approximately equal to the unit matrix with small off-diagonal terms. Another possibility is ρ ≈ 0 − π/6, σ ≈ (0.45 − 0.55)π, and the similar region reflected as ρ → π − ρ. In this case the mass matrix has a dominant structure with m̃_ee ≈ m̃_µτ, while all the other elements are small. (c) For non-maximal 2-3 mixing: the element m̃_µµ or m̃_ττ can be small for σ ≈ π/2 and for a value of ρ which depends on the deviation ξ of the 2-3 mixing from its maximal value. With increase of ξ, the region of small mass approaches the center of the ρ − σ plots (ρ ∼ π/2).
Flavor alignment and flavor disorder
Does the matrix show any flavor ordering (alignment), that is, a correlation of the neutrino mass terms with the charged lepton masses? To some extent, the lepton mixing matrix itself is a measure of the flavor alignment, so that small mixing would imply strong alignment. The observed large lepton mixing means weak ordering or the absence of flavor ordering. The question of flavor ordering can be studied in terms of the mass matrix in the flavor basis. In this connection, let us consider the possibility that the matrix elements decrease with the transition from the τ-flavor to the e-flavor. We will call this possibility the normal flavor ordering or alignment. The ordering with m_eτ ≳ m_µµ is also possible. Notice that, according to (55), m_ττ > m_µµ provided that θ_23 < 45°. In contrast, one gets from (54) that m_eτ > m_eµ if θ_23 > 45°. So, in the approximation s_13 ≈ 0, the "flavor ordering" is impossible. However, for near maximal 2-3 mixing, the differences (m_ττ − m_µµ) and (m_eτ − m_eµ) are so small that corrections due to non-zero s_13 become important. These corrections can produce flavor ordering, as can be seen, e.g., in Fig.3b, for the case of strong mass hierarchy, m_1 = 0, and in Fig.4e (shifting the e-row lines), for the case r ∼ 1.
There are other possibilities for flavor ordering. Sets of parameters can be found for which the matrix has 1) ττ-alignment (see, e.g., Fig. 3a, σ ≈ 2.6); 2) e-alignment (see Fig. 2d, σ ≈ 2.4), when the masses are sensitive to L_e; 3) other alignments (Fig. 2b, σ ≈ 0). Although in many cases m_ee can be the heaviest element, an inverted flavor alignment (in which the mass increases with the change of flavor from τ to e) seems to be impossible.
As follows from the Figs. 4 -6, in a number of cases (partially degenerate, degenerate spectrum) the matrix can show flavor disorder. That is, the matrix elements can take (relative) values between 0 and 1 without correlation with masses of the charge leptons.
Mass matrices with specific ordering of elements
For m_1 ≳ √(∆m²_atm) (k ≈ 1), in a large part of the phase space all the elements of the mass matrix are of the same order (see Figs.9-13). The values of the free parameters can be chosen in such a way that any element of the matrix is the smallest or the largest one. One can also reach equalities between some of the elements. A number of configurations is possible, with only a few restrictions determined by the relations among the elements discussed at the end of section 4.2. Varying r, x, σ_X and θ_23 (see (52)), one can get equalities among various elements of the matrix. In particular, 1) m_ee = m_eµ for x = c_23/√(1 + c²_23).
2) m ee = m eτ for x given by a similar expression with the substitution c 23 ↔ s 23 .
3) All elements of the e-row are equal for maximal 2-3 mixing and x = 1/ √ 3. 4) One can reach equality of the diagonal elements m ee = m µµ or m ee = m τ τ and also m ee = m µµ = m τ τ ; see, e.g., Fig.4c.
5) The equality of the elements of the second diagonal, m_eτ = m_µµ = m_τe, is possible, but in this case the other elements are not small: m_ττ ≈ m_µµ, for example. 6) According to Fig.4d, the following equalities can be satisfied: m_ee = m_µµ = m_ττ ≈ 2m_eµ = 2m_eτ = 2m_µτ for σ ≈ 0.7. 7) For σ ≈ 1.2 (Fig.4d) further approximate equalities among the elements are found. However, it is not possible to get zero values of all the diagonal elements. Indeed, m_ee vanishes for r = 0 or x = 0 (the latter corresponds to near maximal 1-2 mixing). However, for x = 0, m_µµ and m_ττ are non-zero: they belong to the dominant structure. The only possibility would be to consider an inverted hierarchy of the mass eigenvalues.
Democratic mass matrix
It is possible to have equal absolute values for all the matrix elements in the flavor basis. To obtain such a "democratic matrix" one should satisfy five equalities among independent matrix elements m αβ . In general, we have nine parameters (three masses, three mixing angles and three CP violating phases) and we should reproduce the solar as well as atmospheric mass squared differences and mixing angles (4 relations) as well as satisfy the CHOOZ bound. So, in principle, the problem is non-trivial. Let us present one realization of such a possibility.
The e-row elements should be as large as the µτ -block elements; this requires r ∼ 1 and ρ ∼ π/2. The µτ -block elements are equal to each other only for σ ∼ π/4, 3π/4. Then, if s 13 is very small, also ξ is required to be very small, otherwise m eµ differs inevitably from m eτ and the same is true for m µµ and m τ τ .
Bi-maximal mixing and its variations
Fig.13 corresponds to bi-maximal mixing (θ_12 = θ_23 = 45°). Notice that, in contrast with pure bi-maximal mixing, θ_13 is non-zero here. The limit θ_13 → 0 leads to the disappearance of the dependence of the e-row elements on σ and to the equality of the patterns for m_µµ and m_ττ. According to Fig.13, a large variety of mass matrix structures can lead to bi-maximal mixing. In particular, for ρ = 0, π and σ = 0, π (corners of the plot), we get the nearly diagonal matrix (70). For ρ = 0, π and σ = π/2, the mass matrix has the form (68). For ρ = π/2, it follows that x = 0. In this case, neglecting O(s_13) terms, for any value of σ we get the matrix discussed in the literature [8].
Apart from that, many other structures allowed, e.g. matrices with nearly equal elements, etc., can lead to bi-maximal mixing.
Notice that recent data on solar neutrinos strongly disfavor maximal 1-2 mixing [25]. Still, a mass matrix with bi-maximal mixing can be realized in the symmetry basis. In this case the observable non-maximal 1-2 mixing is the result of the rotation associated with the charged lepton mixing matrix.
Parameterization of M
Let us consider the possibility of parameterizing the mass matrix by powers of a unique expansion parameter λ ≪ 1: m̃_αβ = c_αβ λ^{n_αβ}, where the c_αβ are numbers of order 1. In the flavor symmetry context, the exponents n_αβ are related to the flavor charges of the corresponding mass terms. If n_αβ = n_α + n_β, where n_α, n_β (α, β = e, µ, τ) are numbers associated with the corresponding flavor states, factorization occurs: m̃_αβ = c_αβ λ^{n_α} λ^{n_β}.
In this case the smallness of the various mass terms is correlated: n_µµ = 2n_µτ − n_ττ, 2n_eµ = n_ee + n_µµ, etc. Let us first consider the case of a spectrum with mass hierarchy. As one can see from Eq. (18), for maximal 2-3 mixing and σ ≈ π/4, 3π/4, all elements of the dominant µτ-block can be equal to each other. Then, the elements of the e-row should be suppressed by powers of λ: m̃_eβ ∝ λ^{n_β}, β = e, µ, τ.
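As a purely illustrative piece of bookkeeping (the helper function and the toy matrix below are ours, not taken from the text), the exponents n_αβ can be read off from a matrix of moduli by taking logarithms in base λ:

```python
import numpy as np

def exponents(m_tilde, lam):
    """Estimate n_ab in m~_ab ~ lam**n_ab by rounding log_lam of each entry."""
    return np.round(np.log(m_tilde) / np.log(lam)).astype(int)

# toy pattern: dominant mu-tau block, e-row suppressed by one power of lam
m_tilde = np.array([[0.04, 0.2, 0.2],
                    [0.20, 1.0, 0.9],
                    [0.20, 0.9, 1.0]])
print(exponents(m_tilde, lam=0.2))    # [[2 1 1] [1 0 0] [1 0 0]]
```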
As follows from our analysis, we can have all the e-row elements equal among themselves, simultaneously with the equality of the µτ-block elements; this corresponds to a pattern in which the µτ-block entries are of order 1 and all e-row entries are of order λ, where (see (30)) λ ≈ 2 √(s²_13 + r² c²_12 s²_12).
A mild hierarchy of the elements of the µτ-block is realized for non-maximal 2-3 mixing and/or non-trivial CP phases. According to Fig. 3a, 3b, we may have m_ττ ≈ m_µτ > m_µµ ≈ m_eτ > m_eµ ≈ m_ee, which corresponds to a parameterization with λ ≈ 0.3. Also m_ττ can be the smallest element of the µτ-block, instead of m_µµ. In the case of partial or complete degeneracy, new dominant structures appear and therefore new types of expansion are possible. According to Fig.6e and (71), the mass matrix can take the corresponding form with λ ≈ s_13 r √2.
Two other possibilities are (see Figs. 5f, 6f): with λ ≈ s 13 / √ 2, which should be taken of order 0.1 for the left matrix and 0.2 for the right matrix.
Notice that the value of λ which appears in the matrices (84)-(88), and which is therefore consistent with present data, cannot be too small (Eq. (89)); values ∼ 0.3 − 0.4 are also allowed. The value of the parameter (89) can be equal to sin θ_c, where θ_c is the Cabibbo angle, used as an expansion parameter for quark mass matrices. In the flavor basis the structure of the charged lepton mass matrix is characterized by the two ratios m_µ/m_τ = 0.059 and m_e/m_µ = 0.0049. These ratios can also be reproduced as powers of λ: m_e : m_µ : m_τ ≈ λ⁶ : λ² : 1. In a large part of the parameter space, the elements of the mass matrix have the same order of magnitude, so that the ratios of matrix elements are close to 1. In this case we can introduce an ordering parameter λ_ord ∼ O(1). A typical value of λ_ord can be determined, e.g., by the possible spread of the µτ-block elements due to the deviation of the 2-3 mixing from its maximal value. Another possible choice for λ_ord, in the partial degeneracy case, could be r. We find structures of this kind in Figs.2,3 (omitting the subscript 'ord'); these structures require a rather large θ_13 to enhance the values of the e-row elements.
In the case of partial or complete degeneracy, a situation appears in which all the elements are of the same order with a small spread; see, e.g., Fig.4f at σ ≈ 0.7. In this connection one can consider the mass matrix as a small deviation from the democratic one, M^D + ∆M, where |M^D_αβ| = 1, ∆M ∼ O(λ) and λ is a small parameter. Here λ can be taken of order s_13 or ξ or 1 − r (the deviation from degeneracy). An interesting possibility could be to take for λ the deviation of ρ or σ from the values 0, π/2, which correspond to definite CP parities.
Remarks on the Symmetry basis
As we have outlined in the introduction, to get further theoretical inference, one needs to find the matrix in the symmetry basis and at the symmetry scale. In general, the symmetry basis differs from the flavor basis and the mass matrix of charged leptons, M l , is non-diagonal there. The neutrino mass matrix in the symmetry basis, M ν , is related to that in flavor basis as M ν = U T l MU l , where U l is the mixing matrix which diagonalizes M l .
The matrix U l is unknown and some additional assumptions are needed to fix its structure. Clearly this introduces a further ambiguity in the analysis. Here we mention two possibilities (two assumptions) which allow one to immediately relate the matrices in flavor basis and symmetry basis. (The extensive discussion of this issue will be given elsewhere [37]).
1) It may happen that due to strong hierarchy of the masses of the charged leptons, the charged lepton mixing is rather small and U l ≈ I. In this case, the structures of the mass matrix M, discussed in this paper, are not modified significantly under transition to the symmetry basis.
2) Being related to the ratio of masses of the µ and τ lepton, the 2-3 angle, θ l 23 ∼ m µ /m τ , can be the only large angle in U l (1-2 and 1-3 mixing angles are very small, if they are connected with the tiny electron mass). In this case, effect of charged lepton mixing on the neutrino mass matrix is reduced to change of the neutrino 2-3 angle in the flavor basis: θ 23 = θ sym 23 − θ l 23 . Taking into account this shift of the angle, one can use neutrino mass matrices obtained in this paper as mass matrices in the symmetry basis. This shift can justify large deviations of the neutrino 2-3 mixing from maximal value.
Structures of the mass matrix M will not be modified substantially due to running to high scales. It was found [39] that renormalization of M αβ is smaller than 10 −4 for the Standard Model and about few percents for MSSM.
Discussion and conclusions
The motivation of our study is to understand how far one can go in construction of the theory of neutrino mass using the bottom-up approach, that is, starting from experimental results. Neutrino mass matrix in flavor basis unifies information contained in masses and mixing angles measured in experiment and therefore can give deeper insight into the underlying physics.
We have elaborated a method which allows one to study the dependence of the individual matrix elements, and of the structure of the mass matrix as a whole, on the as yet unknown parameters. In particular, we have performed a systematic and comprehensive study of the dependence of the neutrino mass matrix elements on the CP violating phases.
We have introduced the ρ − σ plots which show contours of constant mass in the plane of the Majorana phases ρ and σ. We used the ρ − σ plots to analyze the possible structures of the mass matrix. Each point in the ρ − σ plot represents a certain neutrino mass matrix, so the ρ − σ plots allow one to scan all possible matrix structures.
The ρ − σ plots allow one to study, in a rather transparent and straightforward way: - the influence of the phases on the magnitudes of individual matrix elements. In particular, one can find the ranges in which the elements can change and their extremal (minimal and maximal) values.
-correlations between values of different matrix elements. Taking a given element in some range one can see immediately intervals in which other elements can change.
-correlations between the structure of the neutrino mass matrix and the charged lepton masses.
-consequences of experimental measurements of oscillation parameters and m ee on the structure of the mass matrix.
Our results can be summarized in the following way.
1) The structure of the mass matrix changes significantly with m 1 .
For strongly hierarchical mass spectrum (m 1 ≈ 0) and small s 13 , the mass matrix has a structure with the dominant µτ -block and small e-row elements. The ratio of masses of these two groups can be as small as 0.1.
The dominant structure becomes less profound for large ∆m 2 sol , large s 13 and significant deviation from maximal 2-3 mixing. For ∆m 2 sol > 2 · 10 −4 eV 2 , a separation of the elements in the dominant µτ -block and sub-dominant e-row has no sense and one can consider certain non-hierarchical ordering of the elements. In particular, a configuration with nearly equal split among masses is possible.
For a partially degenerate spectrum, the gap between the µτ -block elements and the e-row elements disappears and all elements can be of the same order. Various equalities between the elements and various orderings can be realized, depending on the CP violating phases.
In the case of a degenerate mass spectrum, the mass matrix can have a hierarchical structure with some elements (in particular, from the µτ -block) being much smaller than the others. The hierarchical structures appear for specific ranges of the phases.
In the case of complete degeneracy, the structure of the mass matrix is insensitive to the ordering of the mass eigenvalues. Therefore, our conclusions are also valid for inverted ordering.
2) The Majorana phases ρ and σ and the Dirac phase δ have different impacts on the structure of the mass matrix. These impacts depend on the values of the oscillation parameters and on m 1 .
(a) The Dirac phase δ is associated with the small parameter s 13 . The influence of this phase on the µτ -block elements is relatively weak for any type of spectrum (hierarchical or degenerate): it is suppressed by a factor of s 13 . In contrast, the elements of the e-row can be substantially influenced by δ, especially in the case of a hierarchical spectrum. In the first approximation, δ enters m eµ and m eτ in the combination (δ − 2σ) and m ee in the combination (2δ − 2σ). So, the effect of δ reduces to appropriate shifts of the phase σ for m ee , m eµ and m eτ . In the ρ − σ plot, for a fixed pattern of the µτ -block elements, the phase δ produces a shift of the patterns for m eµ and m eτ along the σ axis.
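For reference, in the standard parameterization (a hedged sketch; the paper's convention may place the phases slightly differently), the leading expression for the ee element reads

\[
m_{ee} \simeq \left| m_1 c_{12}^2 c_{13}^2\, e^{2i\rho} + m_2 s_{12}^2 c_{13}^2 + m_3 s_{13}^2\, e^{2i(\sigma-\delta)} \right| ,
\]

which makes both statements explicit: δ and σ enter m ee only in the combination 2(σ − δ), and that entire term is suppressed by s 13 2 .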
Improvements of the upper bound on s 13 in future experiments will further suppress the influence of the Dirac phase on the structure of the mass matrix.
(b) The phase ρ is associated with the mass eigenvalue m 1 . It therefore has a very small effect on the mass matrix in the case of a hierarchical spectrum. The role of ρ increases with m 1 . The influence of this phase also increases with the solar mixing angle. Therefore, future measurements of θ 12 in KamLAND and in solar neutrino experiments will allow one to further constrain the effect of ρ on the structure of the mass matrix.
For the best fit value of θ 12 , the dependence of the µτ -block elements on ρ is not very strong. However, the existence of a hierarchical structure (zeros) in this block is related to specific values of ρ. There is a strong dependence of the e-row elements on ρ. Typically, m eµ and m eτ have minima at ρ ≈ 0, π and are maximal at ρ ≈ π/2. The ee-element depends on ρ most strongly. There is a chance to measure or restrict ρ in the ββ 0ν -decay searches, provided that the absolute mass scale is determined (or further restricted) in direct kinematic measurements.
(c) The phase σ is associated with the heaviest mass eigenstate and, consequently, the σ-dependence is strong for all the elements but m ee . Variations of the ee-element with σ are suppressed by a factor of s 2 13 . The phase σ enters the e-row elements m eµ and m eτ with a factor of s 13 . In spite of this, in the case of a hierarchical spectrum the variations of m eµ and m eτ with σ can be strong. With increasing r, the relative amplitude of the variations of these elements with σ decreases. In contrast, the dependence of the µτ -block elements on σ becomes stronger with increasing r. It can be further enhanced if the 2-3 mixing is non-maximal. In the case of a degenerate spectrum, variations of the µτ -block elements with σ can be maximal, so that, at certain values of the phases, a given element can be zero or the largest one.
There are correlations among the dependences of the matrix elements on the phases. In general, the patterns of m µµ and m τ τ are complementary to the pattern of m µτ . The patterns for m eµ and m eτ are shifted by ∆σ = π/2, etc.
3) Using the dependences of the matrix elements on the unknown parameters, we have studied the possible structures of the mass matrices.
The matrix may have a hierarchical form with various dominant structures and small or zero elements. The dominant structures can be identified by considering the limit s 13 → 0. The terms of order s 13 give small corrections to the dominant elements. In contrast, the s 13 -order terms can be important for, or even give the main contribution to, the sub-dominant elements of the mass matrix. The phase δ does not determine the dominant structure.
In the case of a hierarchical mass spectrum the dominant structure is formed by the µτ -block (see Eq. (65)). The e-row elements can be about 10 times smaller than the µτ -block elements. The properties of this block depend on the 2-3 mixing and on the phase σ.
In the case of a degenerate mass spectrum, a hierarchy characterized by a factor of ∼ 10 or more appears mainly at the left- and right-hand sides of the ρ − σ plots.
One arrives at two rather stable structures: (i) a matrix which approximately equals the unit matrix, with small off-diagonal terms; (ii) a matrix which has a dominant structure with m ee ≈ m µτ , while all other elements are small.
Apart from these known hierarchical matrices, we have found several new structures with non-trivial values of the CP violating phases. In particular, for non-maximal 2-3 mixing, the element m µµ or m τ τ can be small for σ ≈ π/2 and for a value of ρ which depends on ξ. With increasing ξ, the region of small mass approaches the center of the ρ − σ plots (ρ ∼ π/2).
Typically, CP violating phases which differ substantially from 0, π/2 or π lead to non-hierarchical matrices.
We have found that the matrix may have a certain flavor ordering (alignment), when the masses increase with the change of flavor from e to τ . At the same time, we find that the data can also be reproduced by matrices with flavor disorder, when no correlation between the size of the mass terms and the flavor is observed. A democratic mass matrix is possible as well.
4) Typical separations among the elements in the hierarchical structures of the neutrino mass matrix are characterized by a factor of 0.2 - 0.3. We have found that it is possible to parameterize the matrix by powers of a single parameter λ (whose origin may lie in the breaking of some flavor symmetry at high energy). The value λ ≈ 0.2 − 0.3 is consistent with the Cabibbo angle and can also be related to ratios of the charged lepton masses. If the 2-3 mixing is not maximal, one can introduce an ordering parameter λ ord ∼ tan θ 23 ∼ 0.6 − 0.7. We find that the whole matrix can be parameterized in terms of powers of this ordering parameter. An illustrative texture of this kind is sketched below.
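As an illustration only (the exact exponents depend on the case considered and are not fixed by the summary above), a power-counting texture with a dominant µτ -block and an e-row suppressed by one power of λ would read

\[
M \sim m_0 \begin{pmatrix} \lambda^2 & \lambda & \lambda \\ \lambda & 1 & 1 \\ \lambda & 1 & 1 \end{pmatrix},
\qquad \lambda \approx 0.2\text{--}0.3 ,
\]

so that the e-row elements lie a factor of λ below the dominant block and m ee a factor of λ 2 , consistent with the typical separations quoted above.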
5) The following results from forthcoming experiments will have a crucial impact on the structure of the neutrino mass matrix:
-improvement of the bound on (or determination of) the deviation ξ from maximal 2-3 mixing;
-precise determination of the solar oscillation parameters, ∆m 2 sol and θ 12 ;
-improvement of the bound on (or determination of) s 13 ;
-improvement of the bound on (or determination of) m ee ;
-direct kinematic measurements of the neutrino mass.
Is it possible to determine the mass matrix uniquely, at least in principle? The answer depends on future experimental results. Let us take the most optimistic situation: suppose that neutrinoless 2β decay is discovered with m ee > 0.1 eV and that direct measurements of the neutrino mass give m > 0.5 eV with high precision. Let us assume also that the mixing angles are measured with high accuracy. In this case, the spectrum is strongly degenerate and one can use the neutrinoless 2β decay data to determine the CP violating phases. The problem is that m ee depends both on ρ and on σ, and moreover, the dependence on σ is very weak, being suppressed by s 2 13 . This means that ρ can be measured with rather good accuracy, whereas no bound on σ can be obtained: small variations of ρ can imitate the effect of σ over its whole possible range. The only exception is if the measured m ee is at the maximal (or minimal) possible value predicted for a given (measured) absolute mass scale. That would correspond to a certain CP-parity of ν 1 and to δ − σ = 0 or π/2. Then, measuring δ (in neutrino oscillation experiments), one could extract σ. Clearly, even this program looks very challenging. Other experimental situations are even more difficult.
The determination of σ looks practically impossible unless methods of direct measurement, or of independent reconstruction of at least one other matrix element (apart from m ee ), are found. The ρ − σ plots give an idea of the uncertainty in the structure of the mass matrix if σ is unknown. If the 2β decay searches give a positive result and direct measurements improve the bound on (or measure) m 1 , we will be able to select a narrow vertical strip in the ρ − σ diagram. This will also restrict the other elements, but significant uncertainty will remain due to their dependence on the phase σ. In particular, as follows from the figures, in the case of a degenerate spectrum the structure of the µτ -block will remain largely unfixed.
The hope is that even a partial reconstruction of the mass matrix may give an important hint in favor of a certain underlying theory. | 2014-10-01T00:00:00.000Z | 2002-02-26T00:00:00.000 | {
"year": 2002,
"sha1": "71d4ea58c95690ba9d9fb0efedbdaf1831629b1d",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/hep-ph/0202247",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "71d4ea58c95690ba9d9fb0efedbdaf1831629b1d",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
168504498 | pes2o/s2orc | v3-fos-license | The importance of the supportive control environment for internal audit effectiveness – the case of Croatian companies
Abstract The paper investigates whether a supportive control environment is associated with internal audit effectiveness and which characteristics of a control environment are important in this respect. A survey was conducted via a questionnaire in 54 mostly large companies in Croatia. Appropriate methods of statistical analysis were used to analyse the survey results. According to the research results, in the case of a supportive control environment there is a greater chance that the internal audit will be effective and that its recommendations will be taken into account to a greater extent. In addition, the survey results showed a statistically significant correlation between perceived internal audit effectiveness and a higher level of supportive control environment.
Introduction
Due to its role in corporate governance, the effectiveness of the internal audit is extremely important, and the continuous improvement of its effectiveness is one way to improve the effectiveness of corporate governance as a whole. An internal audit is defined as '...activity designed to add value and improve organisation's operations...' (IIA Global, 2015). An internal audit adds value to the company by fulfilling the specific goals for which this activity is established. In other words, the scope of the internal audit's objectives affects its ability to add value to the company. If we define the ability to achieve the objectives as effectiveness, it is possible to conclude the following: internal audit effectiveness affects the ability of the internal audit to add value to the company.
Effectiveness is usually defined as the ability to achieve planned results or to achieve set goals. The definition of internal audit effectiveness is usually derived from these general definitions as the degree of accomplishment of the internal audit's targets or the level of achievement of its raison d'être (Getie Mihret & Wondim Yismaw, 2007, p. 106). Dittenhofer (2001, p. 445) defines internal audit effectiveness as the level of achievement of a desired state and set goals, and he believes that internal audit activity affects the effectiveness of the auditee. He considers testing and measuring internal audit effectiveness to be important, but points out that, because of the complexity of the audit activity, it is difficult to determine the criteria by which to measure its effectiveness.
KEYWORDS: internal audit; internal audit effectiveness; control environment
In recent years, researchers have shown the importance of improving internal audit effectiveness so that the function retains its importance in the company (Ernst & Young, 2010). Sarens (2009, p. 3) refers to the importance of research regarding internal audit effectiveness and its impact on corporate governance, stressing that one can consider the internal audit to be effective only when its activity has a positive impact on the quality of corporate governance. His conclusion is based on the considerations of Gramling, Maletta, Schneider, and Church (2004, pp. 194-196), who considered the internal audit to be one of the 'corporate governance cornerstones'. He concludes that the quality of the internal audit affects relations with other participants in corporate governance (executive management, the Audit Committee and the external auditor) and, consequently, the quality of corporate governance. Continuous improvement of internal audit effectiveness improves internal audit quality, considering that effectiveness and efficiency are indicators of quality (Vuko, 2009, p. 63).
Research related to internal audit effectiveness, especially regarding the factors associated with it, is relatively new in the scientific literature within the field. The concept of internal audit effectiveness and the determinants associated with it have been explored only in the last few years. Research on a sample of Italian companies by Arena and Azzone (2009) is considered to be one of the first major empirical studies related to internal audit effectiveness. Other studies, mostly based on case study analyses (Ahmad, Othman, Othman, & Kamaruzaman, 2009; Al-Twaijry, Brierley, & Gwilliam, 2003; Cohen & Sayag, 2010; Getie Mihret & Wondim Yismaw, 2007; Getie Mihret & Zemenu Woldeyohannis, 2008; Getie Mihret, James, & Mula, 2010; Soh & Martinov-Bennie, 2011; Yee, Sujan, James, & Leung, 2008; etc.), have not fully answered the many open questions regarding the determinants of internal audit effectiveness. Many authors (Ahmad et al., 2009; Arena & Azzone, 2009; Coram, Ferguson, & Moroney, 2008; Gramling et al., 2004; Sarens, 2009) have identified the constraints in the existing theoretical framework, particularly given the current context of corporate governance. At the same time, they emphasised the need to upgrade the existing research through further theoretical and empirical analysis of the concept of internal audit effectiveness and its associated determinants, taking into account the characteristics of the current environment, primarily corporate governance and the requirements placed upon the internal audit. It is important to conduct research regarding the determinants of internal audit effectiveness in less-developed corporate governance settings, such as Croatia's, in order to identify variations across different cultural and economic environments.
The organisational climate affects the work of all employees, including the internal auditors. An environment in which management is aware of the importance of controls and of the functions that review their effectiveness can have a dual impact on the internal audit: it facilitates communication with other employees, who often perceive the internal audit as a 'company police', and it fosters a better understanding of the internal audit's role by management, which shapes the relationship between internal auditors and management and the benefits they both derive from it.
The Committee of Sponsoring Organisations of the Treadway Commission, known as COSO, announced in 1992 a framework for the implementation and evaluation of internal controls in the publication Internal Control-Integrated Framework. The framework has become a generally accepted model (known as the COSO Model of Internal Control) in the scientific and professional literature in the field of accounting and auditing and has been implemented in different national legislations. According to the COSO model (Committee of Sponsoring Organizations of the Treadway Commission, 1994, p. 4), the control environment 'sets the tone of an organisation' and affects employees' awareness of control.
The term 'control environment' concerns the integrity, system of values and basic attitudes of employees toward control and management. Special weight is put on the management philosophy, its leadership style and attitudes related to the sharing and acceptance of responsibility (European Confederation of Institutes of Internal Auditing, 2007, p. 29).
Establishing a strong control environment through the demonstration of integrity and ethical values, appropriate monitoring processes, adequate segregation of duties and a sense of responsibility for achieving objectives affects the company's ability to withstand internal and external pressures (Committee of Sponsoring Organizations of the Treadway Commission, 2011, p. 26). By establishing policies and procedures, the management structure provides a kind of 'tone at the top' that affects the overall ethical awareness in the company; according to some studies (e.g., White & Lean, 2008), the perceived integrity of leaders has an impact on the ethical behaviour of team members or employees, who are less inclined to take unethical actions when they perceive a high level of integrity in their leaders. The term 'tone at the top' includes the expected standards of conduct which are formed by the management, including those related to internal control (Committee of Sponsoring Organizations of the Treadway Commission, 2011, p. 255). In the accounting and audit context, the link between the ethical climate in the company, established by the management structures, and financial reporting is often explored; even the Treadway Commission (1987, p. 32, as cited in Arel, Beaudoin, & Cianci, 2011, p. 4) reported on the 'signal at the top', the environment within which financial reporting takes place, as the most important factor contributing to the integrity of the financial reporting process.
The explanation of the control environment offered by the COSO framework implies that it has an impact on all components of the internal control system, including the internal audit, which is usually considered in the context of the last component of the system, monitoring. Wallace and Kreutzfeldt (1991) examined the importance of certain characteristics of the company and the control environment for the establishment of the internal audit function. The study resulted in the following findings: companies that have established an internal audit department are significantly larger, more regulated, more competitive, more profitable and more liquid; in these companies there was greater communication regarding responsibilities and duties, and they had more conservative accounting policies, which is directly related to the management philosophy and the leadership style, given that the company's accounting policies are part of management's responsibility. Goodwin-Stewart and Kent (2006, as cited in Sarens & Abdolmohammadi, 2011, p. 6), in their research on the factors related to the existence of internal audit in a company, concluded that the establishment of an internal audit is related to the degree of development of the risk management process. Similarly, Sarens and De Beelde (2006a, 2006b, as cited in Sarens & Abdolmohammadi, 2011, p. 6), based on the findings of their research, concluded that certain characteristics of the control environment (for example, the development of ethical values, the level of awareness of the importance of control and the existence of risk) are significantly associated with the role of the internal audit in the company and affect the scope of its activities. Sarens and Abdolmohammadi (2011) in their study confirmed the relationship between the control environment and the size of the internal audit department, whereby the control environment was characterized by a formalized demonstration of ethical values, a high level of awareness of controls and risks and their importance, and clearly defined responsibilities for risk management and internal controls.
Although there is evidence of the importance of the control environment for the existence of internal audit activity (Wallace & Kreutzfeldt, 1991; Goodwin-Stewart & Kent, 2006; Sarens & De Beelde, 2006a, 2006b; Sarens & Abdolmohammadi, 2011), previous research has not greatly explored the correlation between a supportive control environment and the effectiveness of the internal audit. In an internal environment characterised by high awareness of controls and risk management, it will be easier to understand the role of an internal audit with its monitoring task. This should result in greater cooperation with, and support for, the internal audit department and the organisation of an effective internal audit. Also, in the case of a supportive environment, internal auditors will not feel restricted when they conduct their activities and communicate their results. Therefore, the research hypothesis is developed as follows.
Hypothesis: The supporting control environment has a significant positive correlation with the internal audit effectiveness.
Measurement of internal audit effectiveness
The generality of the internal audit effectiveness definition provides interpretive freedom concerning measurement criteria, which may vary with regard to the different internal audit customers. Although the report containing recommendations is the final result of the internal audit process, it cannot therefore be taken as the achievement of the objectives in itself. It may initiate changes towards the desired objective only if management decides to implement the recommended guidelines. Therefore, the value that the internal audit provides is greatly influenced by the way management understands and respects its recommendations. This approach to the concept of internal audit effectiveness is also supported by Cohen and Sayag (2010, p. 297), who took into account the views of Ransan (1955) and Albrecht (1988) (cited in Cohen & Sayag, 2010, p. 297), who considered that internal audit effectiveness is not a variable whose value it is possible to calculate, and that the success of the internal audit can only be measured relative to the expectations of significant stakeholders. However, some authors also support another approach to the concept of effectiveness (Al-Twaijry et al., 2003; Getie Mihret et al., 2010), including the Institute of Internal Auditors (2010), according to which the level of internal audit effectiveness is defined as the degree of compliance with the guidelines of the International Standards for the Professional Practice of Internal Auditing (Standards). On the other hand, Dittenhofer (2001) believes that effectiveness should be considered at the level of individual processes and considers internal audit effectiveness through the following question: has the process that was reviewed actually improved, in cases where improvement was needed? This means that effective internal audit activity corrects the failures of the process, if they existed, or, if they did not exist, the internal audit is able to determine that.
The acceptance of different standpoints is also evident from the viewpoint of the Institute of Internal Auditors (IIA Global). According to the IIA Practice Guide, which provides guidance on ways to measure internal audit efficiency and effectiveness (The Institute of Internal Auditors, 2010, p. 1), there are qualitative and quantitative ways of measuring these two dimensions, and they can also be measured with regard to compliance with the Standards. It also underlines the importance of obtaining feedback on internal audit effectiveness from its customers.
Getie Mihret et al. (2010, p. 17) consider that the context in which it operates affects internal audit effectiveness and uphold the level of compliance with the Standards as the most appropriate indicator of internal audit effectiveness. They believe that variations in the results of some previous studies related to the practice of internal audit can only be explained by differences in contextual factors arising from the environments in which they were conducted, and they encourage research on internal audit effectiveness in different corporate governance contexts in order to promote the importance of the profession in contemporary organisational settings. The results of research conducted by Burnaby, Abdolmohammadi, Hass, Sarens, and Allegrini (2009) support that view: there is a difference in the application of the Standards between countries in Europe and the US, and the research of Sarens and Abdolmohammadi (2011) showed that cultural differences between countries are associated with the level of compliance with and implementation of the Standards.
Lately there have been some research efforts directed at the development of models for measuring internal audit efficiency. Alič and Rusjan (2011) developed the Audit Record Assessment Model (ARA model) 'for quantitative assessment of a quality management system internal audit findings showing their potential to contribute to the business performance'. Assessment outcomes of the ARA model 'can be employed as indicators of the internal audit efficiency [...] and used to measure the efficiency of an IA and of the auditors involved in the same environment (organisational units, company) in the course of time' (Alič & Rusjan, 2011, p. 5403). Based on previous research, it can be concluded that there is no unique measure of internal audit effectiveness, and it is often measured using partial measures (see Arena & Azzone, 2009, p. 48). One of these measures is the degree of internal audit recommendations accepted by management. It has been identified in previous studies regarding the determinants of internal audit effectiveness (Arena & Azzone, 2009; Getie Mihret & Wondim Yismaw, 2007) and was also among the most common measures of internal audit effectiveness used in practice (Ziegenfuss, 2000). Thus, it was also used as a measure of internal audit effectiveness within this research.
In reviewing the results and methodology of previous studies, it is possible to conclude unambiguously that the absence of a unique measure of internal audit effectiveness is due to the different aspects of the factors associated with it. There is no 'ideal' measure of internal audit effectiveness; rather, its operationalisation must be adjusted to the related factors being analysed as independent variables. In this way, the concept of effectiveness retains its multidimensionality, and the ways of measuring it should be adapted to the needs and requirements of the research being conducted. An alternative understanding can have a negative effect on the possibility of understanding all aspects of the relations being analysed.
Taking into account all the above, there are two ways of measuring internal audit effectiveness within this paper: perceived effectiveness (by its primary stakeholders, management and the Audit Committee) and the degree of internal audit recommendations accepted by management.
Methodology
Perceived internal audit effectiveness was chosen as one of the measures, taking into account the fact that an internal audit is not an end in itself but is established in order to, amongst other things, assist its primary stakeholders in carrying out their duties. Measuring the perceived internal audit effectiveness was based on an analysis of the characteristics associated with the attributes of the function, the areas of its activity and its relationships with the environment, which indicate the internal audit's capability to meet the needs and demands of its customers. In this way, the multidimensionality of internal audit effectiveness is taken into account, an approach supported by previous research (Cohen & Sayag, 2010).
The perceived internal audit effectiveness was divided into two dimensions: the first contained attributes of the internal audit that point to its effectiveness, and the second contained statements describing the internal audit's impact on aspects important for company operations. There were 15 statements for measuring perceived internal audit effectiveness, and they were intended for management and members of the Audit Committee.
The first dimension, as mentioned, contained attributes of an effective internal audit (this measurement scale is encoded as IA_effect) and comprised ten statements (M1 to M10 in Appendix 1) describing: the adequacy of internal audit knowledge concerning company operations, the alignment of internal audit objectives with corporate objectives and the needs of the internal audit customers, the adequacy of the internal audit's organisational position, the scope of internal audit activities and the methodology used for internal audit planning, the internal audit's focus on testing high-risk areas of the company, the constructiveness and applicability of internal audit recommendations, and the adequacy of communication with the internal audit.
To measure the contribution of the internal audit to company performance (the second dimension of internal audit effectiveness, encoded as IA_contrib), various aspects of this contribution were analysed. There were five statements for measuring this dimension (M11 to M15 in Appendix 1), describing: the impact of internal audit recommendations on the improvement of business and governing processes, the impact of internal audit activity on improvements in the area of internal control, the value of information obtained from the internal audit as input into the managerial decision-making process, and whether internal audit recommendations are taken into account in the managerial decision-making process. One statement (M16 in Appendix 1) described the usefulness of the internal audit and was not part of any dimension.
Perceived internal audit effectiveness (overall) was measured based on the degree of agreement with all 15 statements related to the features within the two aforementioned dimensions. Respondents were able to state their level of agreement on a scale of 1 to 5 (1 - completely disagree, 5 - completely agree), and the dimension scores represent the unweighted average of the statements (presented in Appendix 1).
The degree of internal audit recommendations accepted by management was also calculated as a measure of internal audit effectiveness.
Factors describing the supportive control environment were measured by the average grade obtained from the level of agreement with statements in the questionnaire for internal auditors. They were based on elements of the control environment assessment in the COSO framework and on previous research (Ernst & Young, 2003; Sarens & Abdolmohammadi, 2011; Roth, 2010). The factors are presented through 13 statements (in Appendix 2) representing certain aspects of the control environment, and the participants expressed their agreement with the given statements on a scale from 1 to 5 (1 - completely disagree; 5 - completely agree).
The statements that constitute the variable control environment were also divided into two dimensions. The first dimension (encoded Supporting control environment 1) included statements describing ethical awareness and the philosophy and management style (A1 to A6 in Appendix 2). The other dimension (encoded Supporting control environment 2) contained the remaining statements (A7 to A13 in Appendix 2) and described the level of awareness of the importance of control, the existence of enterprise risk management and its monitoring activities (primarily internal auditing).
The level of the supportive control environment (overall) was measured by the unweighted average of the statements. Although the control environment can also be measured taking into account other factors of the COSO framework, the selected ones are considered particularly significant in the context of the research topic; they have been used in previous research (Goodwin-Stewart & Kent, 2006; Sarens & De Beelde, 2006a, 2006b; Sarens & Abdolmohammadi, 2011) and found significant in the context of internal audit establishment. In order to determine the reliability of the scale for perceived internal audit effectiveness, Cronbach's alpha was calculated for all the statements together and also for the individual dimensions (Table 1). According to the values of the calculated measure, there is high internal consistency among the statements, and the created measurement scale has very good reliability (overall and at the level of individual dimensions).
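For readers wishing to replicate the reliability check, the following minimal Python sketch computes an unweighted dimension score and Cronbach's alpha from a respondents-by-items matrix of Likert scores; the data below are randomly generated placeholders, not the study's.

```python
import numpy as np

def cronbach_alpha(items):
    # items: (n_respondents, k_items) array of Likert scores (1-5).
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    sum_item_var = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1.0 - sum_item_var / total_var)

rng = np.random.default_rng(0)
ia_effect = rng.integers(1, 6, size=(32, 10))  # hypothetical answers to M1..M10

dimension_score = ia_effect.mean(axis=1)       # unweighted average per respondent
print(dimension_score[:5], cronbach_alpha(ia_effect))
```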
Descriptive statistics for the variable 'degree of accepted internal audit recommendations', from the questionnaire for internal auditors, are presented in Table 2. According to the data in Table 2, almost 80% of the internal audit departments in the sample have more than 80% of their recommendations (corrective actions) accepted by management on an annual basis, while the rest have between 50% and 80% (5.7%) or less than 50% (15% of internal audit departments) accepted. Given the above, this distribution was used to distinguish less effective from more effective internal audit departments, and the limit value of more than 80% of accepted recommendations was taken as the reference for determining the level of internal audit effectiveness. Thus, 42 internal audit departments, which have more than 80% of their recommendations accepted, were categorised as effective, while the remaining 11 departments, with less than 80% of recommendations accepted, were categorised as less effective.
In order to determine the reliability of the measurement scale for the supportive control environment and its dimensions, Cronbach's alpha (α) values were calculated; they are presented in Table 3. Based on the obtained values of this measure of internal consistency, the measurement scales show high reliability.
A survey was conducted among Croatian companies (banks and insurance companies, public companies of special national interest and companies listed on the Zagreb Stock Exchange), and the data were collected from December 2012 to April 2013. Respondents were internal auditors and members of senior and middle management and of the Audit Committee. Questionnaires were sent to the 106 companies that declared the existence of an internal audit. Questionnaires from 54 companies were actually analysed (54 completed by internal auditors and 32 answered by managers and members of the Audit Committee). The survey return rate was 50% for the questionnaire intended for internal auditors and 30% for the questionnaires for managers and members of the Audit Committee.
Internal auditors were mainly (87.04%) from large companies, and 40.4% of the companies were listed on the Zagreb Stock Exchange. In addition, 59.3% of the companies were from the financial sector. Regarding the attributes of the internal auditors, 74.0% were Chief Audit Executives (Directors of Internal Audit), and in more than 50% of the companies the internal audit had been established for more than 10 years.
Regarding the attributes of the internal audit stakeholders in the sample, they mainly comprised Board Members (34.38%) and directors from the financial (12.50%) and other sectors (34.38%), and around 15% were members of the Audit Committee. They were mainly (81.25%) from large companies. Fifty per cent of the companies in this sample were from the financial sector, and most were not listed on the Zagreb Stock Exchange (56.25%).
The characteristics of respondents and companies that participated in the survey are presented in Appendix 3 (Tables 8-15).
The methods used for testing the hypothesis were the independent t-test and Pearson's correlation coefficient. The independent t-test was used to test the statistical significance of differences in the average grades for the supportive control environment between effective and less effective internal audit departments.
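A minimal sketch of this group comparison follows; all numbers are hypothetical and serve only to illustrate the test, not to reproduce the study's data.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
# Hypothetical control-environment scale averages for the two groups.
effective = rng.normal(4.1, 0.4, size=42)       # >80% of recommendations accepted
less_effective = rng.normal(3.6, 0.4, size=11)  # <=80% accepted

t_stat, p_value = stats.ttest_ind(effective, less_effective)
print(t_stat, p_value)  # difference significant at 5% if p_value < 0.05
```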
Results
Based on the results of the t-test for independent samples (Table 4), it is possible to conclude that there is a statistically significant difference between effective and less effective internal audit departments in the average scores on the scale of supportive control environment (at a significance level of 5%). Given this, it can be concluded that companies with effective internal audits have, on average, a more supportive control environment. If the variable supportive control environment is divided into its two dimensions, a statistically significant difference between more effective and less effective internal audit departments in the average scores exists only on the second scale (Supporting control environment 2), which describes the company's level of awareness of controls and risks; again, companies with an effective internal audit have a higher average score.
Table 5. Coefficients of correlation between the variable supporting control environment and the perceived internal audit effectiveness.
Notes to Table 5: ** correlation statistically significant at the 1% level (two-tailed); * correlation statistically significant at the 5% level (two-tailed). Symbols: N - number of respondents; r - correlation coefficient; p - calculated probability. Source: research results.
Considering the results, it can be concluded, at a significance level of 5%, that the research hypothesis is supported in the case of the variable supportive control environment (overall). In addition, companies with a more effective internal audit have, on average, a more developed control environment in terms of awareness of the importance of risk and control (Supporting control environment 2) than companies with a less effective internal audit, while there is no difference between them in the level of development of ethical awareness and of philosophy and management style (Supporting control environment 1).
The hypothesis was also tested using the perceived internal audit effectiveness (as perceived by management and members of the Audit Committee) as the dependent variable (Table 5).
Out of the two dimensions of the control environment, only supporting control environment 2 is significantly correlated with perceived internal audit effectiveness. This correlation is statistically significant for a significance level of 1%.
The two dimensions (scales) of internal audit effectiveness (IA_effect and IA_contrib) are both positively associated with the scale of the supportive control environment (overall); this correlation is statistically significant at the 1% level. Thus, the perception of the characteristics of the internal audit that point to its effectiveness, and the perception of the internal audit's usefulness to the company (M16), are positively correlated with a supportive control environment, especially with the dimension related to the level of control and risk culture; the latter correlation is statistically significant at the 5% level.
Based on the results, it can be concluded that where there is a higher degree of supportive control environment, there is a greater degree of internal audit effectiveness as perceived by management and the Audit Committee. In addition, the perceived usefulness of the internal audit to its customers is greater under these conditions. If the variable supportive control environment is divided into its two dimensions, this applies only to the second dimension, i.e., there is a positive correlation between perceived internal audit effectiveness and the level of control and risk culture, statistically significant at the 1% level. These results agree with the previous ones, so it can be concluded that a supportive control environment is a significant factor in internal audit effectiveness.
In order to determine whether companies differ regarding the level of internal audit effectiveness when the independent variable supporting control environment is dichotomised, further analysis was conducted by splitting the independent variable at its average and applying Fisher's exact test (Table 6). Zero (0) represents companies that are below the average value of the variable, and one (1) those that are above it.
According to the results, the chance that the internal audit is effective (i.e., that its recommendations will be taken into account to a greater extent) is greater where there is a higher level of supportive control environment, but the significance is established only at a level slightly above 10%.
A bivariate binary logistic regression was also conducted, with internal audit effectiveness as the dependent variable and the dichotomised variable supporting control environment as the independent variable (Table 7).
The significance of the variable supportive control environment as a predictor was established at a level of significance slightly above 10%. At this level, it can be concluded that internal audits in companies with an above-average supportive environment are almost three times more likely to be effective than internal audits operating in companies with below-average levels of supportive control environment.
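A minimal sketch of this two-step analysis (dichotomisation, Fisher's exact test, bivariate logistic regression) is shown below; the 2×2 counts are hypothetical, chosen only so that the odds ratio lands in the vicinity of three, as in the reported result.

```python
import numpy as np
from scipy.stats import fisher_exact
import statsmodels.api as sm

# Hypothetical 2x2 table: rows = below/above-average control environment,
# columns = (less effective, effective) internal audit departments.
table = np.array([[8, 18],
                  [3, 24]])
odds_ratio, p_value = fisher_exact(table)
print(odds_ratio, p_value)

# The same counts expanded into unit records for a bivariate logistic regression.
env = np.r_[np.zeros(26), np.ones(27)]                           # 0/1 predictor
eff = np.r_[np.zeros(8), np.ones(18), np.zeros(3), np.ones(24)]  # 0/1 outcome
fit = sm.Logit(eff, sm.add_constant(env)).fit(disp=0)
print(np.exp(fit.params[1]))  # odds ratio for an above-average environment
```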
Discussion and conclusion
Previous research has analysed the importance of a control environment for the existence of internal audit activity but has not greatly explored the correlation between a supportive control environment and internal audit effectiveness. This paper argues that a supportive control environment is associated with internal audit effectiveness. In order to test this hypothesis, a survey was conducted via a questionnaire in more than 50 mostly large companies in Croatia. Respondents were internal auditors, managers and members of the Audit Committee. Appropriate methods of statistical analysis were used to analyse the survey results.
According to the research results, there was a statistically significant difference between effective and less effective internal audit departments in the average scores on the scale of supportive control environment, which means that companies with a more effective internal audit have, on average, a more supportive control environment. In addition, companies with a more effective internal audit had, on average, a more developed control environment in terms of awareness of the importance of risk and control than companies with a less effective internal audit, while there was no difference between them in the level of development of ethical awareness and of philosophy and management style. This means that the existence of a more developed control environment in terms of awareness of the importance of risk and control matters greatly for internal audit effectiveness. This is consistent with the results of some previous research showing that the existence of these features of the control environment is significantly associated with the role of the internal audit in the company and affects the scope of its activities (Goodwin-Stewart & Kent, 2006; Sarens & Abdolmohammadi, 2011; Sarens & De Beelde, 2006a, 2006b; Wallace & Kreutzfeldt, 1991). Also, the survey results showed a statistically significant positive correlation between the perceived usefulness of the internal audit and a higher level of supportive control environment. Under these conditions, therefore, managers and the Audit Committee perceive the internal audit as more effective, and the usefulness they expect from it is higher.
One of the limitations of the research is the size of the sample, which influenced the probability of significance for some of the research findings. A suggestion for further empirical testing is to use a broader sample and quantitative research. In addition, it would be interesting to see whether there are differences in the findings among different sectors or industries, as well as among company sizes. Further research could also focus on how companies can evaluate their control environment in order to set the right expectations about internal audit effectiveness.
Appendix 2. Statements for measuring the supportive control environment.
Statements (measurement scale: Likert scale, 1 - completely disagree; 5 - completely agree)
Supporting control environment 1 - in your company:
• there is a code of ethics / code of conduct (A1)
• management has a low tolerance for violations of the provisions of the code of ethics / code of conduct (A2)
• management has a low tolerance for breaches of regulatory requirements (A3)
• management sets realistic goals for its employees with regard to financial results (A4)
• management gives more importance to the accuracy of the financial results disclosed in the company's financial statements than to their 'looking good' (A5)
• management communicates with employees at lower levels (open-door policy) (A6)
Supporting control environment 2 - in your company:
• management believes that the company's internal controls are important (A7)
• management respects the functions (departments) in the company that are responsible for control (A8)
• management corrects identified internal control deficiencies in a timely manner (A9)
• management gives importance to the existence of a general awareness of the importance of risk at all levels of the company and to informing employees about risk treatment (A10)
• the company has a risk management framework established through written rules and policies (A11)
• the responsibilities related to risk management and internal controls are clearly defined by management (A12)
• before making important decisions, managers use company procedures for the analysis of the associated risks (A13)
(Statements were intended for internal auditors and all contribute equally to the final score.)
Appendix 3. Attributes of respondents and companies from the sample. | 2017-05-03T04:15:32.336Z | 2016-01-01T00:00:00.000 | {
"year": 2016,
"sha1": "e0448bcaf0be4c0c0ae5f8af9a16e55b8fd297de",
"oa_license": "CCBY",
"oa_url": "https://www.tandfonline.com/doi/pdf/10.1080/1331677X.2016.1211954?needAccess=true",
"oa_status": "GOLD",
"pdf_src": "TaylorAndFrancis",
"pdf_hash": "86f14e59955b7ebdac1b77d0123f6fa0faca2338",
"s2fieldsofstudy": [
"Business",
"Economics"
],
"extfieldsofstudy": [
"Business"
]
} |
234780274 | pes2o/s2orc | v3-fos-license | Air Pollution Relates to Airway Pathology in Children with Wheezing
Rationale: Outdoor air pollution contributes to asthma development and exacerbations, yet its effects on airway pathology have not been defined in children. Objectives: To explore the possible link between air pollution and airway pathology, we retrospectively examined the relationship between environmental pollutants and pathological changes in bronchial biopsy specimens from children undergoing a clinically indicated bronchoscopy. Methods: Structural and inflammatory changes (basement membrane [BM] thickness, epithelial loss, eosinophils, neutrophils, macrophages, mast cells, and lymphocytes) were quantified in biopsy specimens by using immunohistochemistry. The association between exposure to particulate matter less than 10 μm in aerodynamic diameter (PM10), SO2 and NO2 and biopsy findings was evaluated by using a generalized additive model with Gamma family to allow for overdispersion, adjusted for atmospheric pressure, temperature, humidity, and wheezing. Results: Overall, 98 children were included (age 5.3 ± 2.9 yr; 53 with wheezing/45 without wheezing). BM thickness increased with prolonged exposure to PM10 (rate ratio [RR], 1.29; 95% confidence interval [CI], 1.09–1.52), particularly in children with wheezing. Prolonged exposure to PM10 was also associated with eosinophilic inflammation in children with wheezing (RR, 3.16; 95% CI, 1.35–7.39). Conversely, in children without wheezing, increased PM10 exposure was associated with a reduction of eosinophilic inflammation (RR, 0.12; 95% CI, 0.02–0.6) and neutrophilic inflammation (RR, 0.36; 95% CI, 0.14–0.89). Moreover, NO2 exposure was also linked to reductions in neutrophil infiltration (RR, 0.57; 95% CI, 0.34–0.93) and eosinophil infiltration (RR, 0.33; 95% CI, 0.14–0.77). Conclusions: Different patterns of association were observed in children with wheezing and in children without wheezing. In children without wheezing, exposure to PM10 and NO2 was linked to reduced eosinophilic and neutrophilic inflammation. Conversely, in children with wheezing, prolonged exposure to PM10 was associated with increased BM thickness and eosinophilic inflammation, suggesting that it might contribute to asthma development by promoting airway remodeling and inflammation.
Asthma is a heterogeneous disease, usually characterized by a range of respiratory symptoms that vary over time across the life course of individual patients (1,2). Asthma is a serious global health problem affecting all age groups, and its prevalence is increasing, especially among children, imposing an unacceptable burden on healthcare systems and society (1).
The pathological hallmarks of the disease encompass both chronic airway inflammation, usually eosinophilic, and airway remodeling. This includes thickening of the subepithelial basement membrane (BM), shedding of the epithelial layer, an increased smooth muscle area, increased mucus production, and neoangiogenesis. Our group has previously demonstrated that some of these features, particularly BM thickening, are early events in the natural history of the disease, being already present in early infancy, even in children with the mildest forms of asthma and without atopy (3)(4)(5).
Despite the growing burden of asthma, the causes and pathophysiological mechanisms underlying the disease remain to be established. Asthma is a multifactorial disorder caused by the complex interaction between genetic and environmental factors (2,6,7).
A recent statement from the American Thoracic Society concluded that there is now enough epidemiological evidence to indicate a causal link between long-term exposure to outdoor air pollution and new cases of incident asthma in children (10). Of note, Khreis and colleagues (11) have recently estimated the incidence of asthma related to air pollution by using a validated and harmonized European land-use regression model: they suggested that meeting nitrogen dioxide, particulate matter, and black carbon minimum concentrations may prevent as much as 33% of the incident cases of asthma in children. Furthermore, outdoor air pollution is associated with impaired lung function, increased hyperreactivity, and airway inflammation measured indirectly through fractional exhaled nitric oxide (12)(13)(14)(15). These observations highlight the urgent need to reduce children's exposure to outdoor air pollution.
Despite such evidence, the pathophysiological mechanisms linking air pollution and asthma development have not been fully understood. In particular, no study has addressed the influence of long-term outdoor air pollution exposure on inflammatory and structural changes in the airways of children with wheezing.
In this study, we sought to examine the relationship between previous exposure to nitrogen dioxide (NO 2 ), particulate matter less than 10 μm in aerodynamic diameter (PM 10 ), and sulfur dioxide (SO 2 ) levels and the pathological traits typical of asthma in a cohort of children living in Northeast Italy, one of the most polluted regions in Europe.
Some of these results have been published in the form of an abstract (16).
Study Population
Children were recruited at the Department of Woman and Child Health, University of Padova, Padova, Italy, from 2002 to 2014. The study was performed according to the Declaration of Helsinki and was approved by the local ethics committee. Written consent was obtained from the children's parents. All children underwent bronchoscopy (with bronchoalveolar lavage [BAL] and bronchial biopsy) on the basis of appropriate clinical indications according to the European Respiratory Society and American Thoracic Society guidelines (17,18), as summarized in Table E1 in the online supplement. An additional endobronchial biopsy for research purposes was performed with the approval of the ethics committee and consent from the parents. Fiberoptic bronchoscopy was well tolerated by all children. Before bronchoscopy, all patients were evaluated by a respiratory pediatrician. The pediatrician collected a detailed clinical history, examined the child, and administered parental interviews focused on the presence of respiratory symptoms, the frequency of respiratory tract infections (RTIs) in the previous year, and ongoing treatment, with particular attention to asthma medications (inhaled corticosteroids [ICSs] or oral corticosteroids). The presence of wheezing with a pattern suggestive of asthma was based on the reporting of repeated episodes of wheezing in the previous year, often associated with cough and dyspnea, particularly at night or in the early morning. Furthermore, wheezing had to be present even apart from colds (multitrigger) and had to be responsive to prescribed bronchodilators. Wheezing frequency was defined on a scale from 0 (no episodes) to 6 (daily episodes). At baseline, all children underwent routine blood tests, including a complete white blood cell count (total leukocytes, neutrophils, lymphocytes, monocytes, eosinophils, and basophils) and testing for total/specific IgE. As previously reported (5), the presence of atopy was defined by an increase in total IgE (above the age-related normal levels) and specific IgE (>0.35 kU/L; ImmunoCap, Phadia). In particular, specific IgEs for the following aeroallergens were investigated in all children: house dust mite (Dermatophagoides pteronyssinus and D. farinae), molds (Alternaria alternata), cat dander, and grass pollens (Lolium perenne, Poa pratensis, Phleum pratense, Dactylis glomerata, and Cynodon dactylon).
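As a small illustration of the atopy definition above, the rule can be encoded as follows. This is a sketch only: it reads the 'and' in the definition as requiring both criteria, and the age-related total-IgE reference value is a hypothetical placeholder, not the study's.

```python
SPECIFIC_IGE_CUTOFF = 0.35  # kU/L, per the definition above

def is_atopic(total_ige, age_upper_normal, specific_ige_panel):
    """Atopy = total IgE above the age-related normal level AND at least one
    aeroallergen-specific IgE above 0.35 kU/L (one reading of the definition)."""
    elevated_total = total_ige > age_upper_normal
    any_specific = any(v > SPECIFIC_IGE_CUTOFF for v in specific_ige_panel.values())
    return elevated_total and any_specific

# Hypothetical child: total IgE 250 kU/L vs. an assumed age norm of 90 kU/L.
panel = {"D. pteronyssinus": 1.2, "Alternaria alternata": 0.1, "cat dander": 0.0}
print(is_atopic(250, 90, panel))  # True
```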
Air Pollution Exposure Evaluation
Clinical and pathological data were combined with information collected from the regional air pollution monitoring system.
Data from 2002 onward on daily concentrations of PM 10 , NO 2 , and SO 2 , together with meteorological data on temperature, relative humidity, and atmospheric pressure, were retrieved from the monitoring stations of the Environmental Protection and Prevention Agency of the Veneto Region. Children were linked to the data of the monitoring station nearest to their residence (with a maximum distance set at 20 km) (19). The distribution of pollutants in the Veneto region is highly homogeneous because of its particular morphological conditions. The Po valley, of which Veneto occupies the eastern part, is a vast flat area in northern Italy surrounded by mountains and a shallow sea. This particular morphological structure leads to very low wind speeds and facilitates the stagnation of air pollutants.
Methods and technologies used to measure air pollutant concentrations were those designated by national and international consensus (20). Moreover, air pollution concentrations were compared with the 2005 World Health Organization air quality guideline cutoffs (21) of 50 μg/m 3 for PM 10 and 40 μg/m 3 for NO 2 . None of the children changed residential addresses during the study.
Bronchial Biopsy
Full details on the bronchoscopy, bronchial biopsy, and BAL procedures have been described previously (3)(4)(5). Briefly, bronchoscopy with endobronchial biopsy and BAL was performed by using a flexible bronchoscope with an external diameter of 4.9 mm. Bronchial biopsy specimens were taken by using Olympus FB 19 C-1 bronchial forceps, which were inserted through the service channel of the bronchoscope (2-mm diameter). Patients with insufficient or low-quality biopsy specimens were excluded from the study. Biopsy specimens were gently extracted, fixed in 4% formaldehyde, and dehydrated through an alcohol series. They were embedded in paraffin wax and processed for histochemical and immunohistochemical analysis. Analysis of epithelial loss and reticular BM thickness was performed on sections stained with hematoxylin and eosin.
Statistical Analysis
Children's characteristics were expressed by using the median and interquartile range (IQR) for continuous variables and counts and percentages for categorical variables. Comparisons among groups were evaluated with either the Student's t test or the Mann-Whitney U test, as appropriate. Distributions of categorical variables were compared by using the χ2 test or Fisher's exact test when the sample size was small (n < 5). We performed the analysis by using a generalized additive model with a Gamma family (23) to evaluate the association between air pollution and both the inflammatory and structural parameters measured in the biopsy specimens. We also analyzed the impact of outdoor air pollution on relevant clinical parameters (i.e., wheezing frequency, RTI frequency).
We used penalized cubic regression splines to account for the nonlinear relationship between the pollutant and the outcome. The possible delayed effect in time was analyzed by using the average concentrations of pollutants from 0 to 90 days before the bronchoscopy. Therefore, "lag 0-90" indicates the average concentration of the pollutant in the 90 days preceding the bronchoscopy, "lag 0-89" indicates the average concentration of the 89 days before, and so on. The model structure was chosen according to the lowest Akaike information criterion (24). All models were adjusted for temperature, atmospheric pressure, relative humidity, and wheezing. To assess the potential influence of crucial confounding factors on our study, such as atopy, secondhand smoke exposure, ICS treatment, and age of children at bronchoscopy, we included them in the model as possible covariates. However, these covariates did not change the fit of the models. A second analysis was performed separately on the two subpopulations of children with and without wheezing.
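A sketch of how these exposure windows could be built, assuming a daily pollutant series aligned to each child's bronchoscopy date (all names are hypothetical):

```r
# Sketch of the exposure windows: following the text, "lag 0-k" is the mean
# daily concentration over the k days preceding the bronchoscopy (here taken
# up to and including the bronchoscopy day, one reasonable convention).
# 'series' is a daily PM10 vector ordered by date; 'idx' is the position of
# the bronchoscopy day, assumed to be at least 90 days into the series.
lag_average <- function(series, idx, k) {
  mean(series[(idx - k + 1):idx], na.rm = TRUE)
}
exposures <- sapply(1:90, function(k) lag_average(pm10_series, idx, k))
# exposures[63] is then the "lag 0-63" moving-average exposure.
```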
Results reported as rate ratios (RRs) and the corresponding 95% confidence intervals (CIs) were calculated for an increase of one IQR in the pollutant. The lag with the highest RR in absolute terms for each outcome is labeled the "best RR." Missing environmental data (4%) were imputed by using multiple imputation with an expectation-maximization with bootstrapping algorithm (25). All analyses were performed by using R statistical software and the Amelia and mgcv R packages (R Foundation for Statistical Computing) (26,27).
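Putting the pieces together, a minimal sketch of the model structure described above might look as follows. This is not the authors' code: the data frame and column names are assumptions, wheezing is assumed to be coded 0/1, and the CI construction (a contrast between the 25th and 75th exposure percentiles on the log link scale) is one plausible reading of "RR per IQR increase."

```r
library(mgcv)

# 'df' has one row per child with an outcome (e.g. biopsy eosinophil count),
# a "lag 0-k" exposure 'pm10_lag', and the adjustment covariates.
fit <- gam(eosinophils ~ s(pm10_lag, bs = "cr") +   # penalized cubic spline
             temperature + pressure + humidity + wheezing,
           family = Gamma(link = "log"), data = df)
AIC(fit)  # compared across lag 0-1 ... lag 0-90 fits to pick the structure

# Rate ratio (with 95% CI) for an IQR increase in exposure, contrasting the
# fitted curve at the 25th and 75th exposure percentiles.
q <- quantile(df$pm10_lag, c(0.25, 0.75))
newd <- data.frame(pm10_lag = q,
                   temperature = mean(df$temperature),
                   pressure    = mean(df$pressure),
                   humidity    = mean(df$humidity),
                   wheezing    = 0)
Xp <- predict(fit, newd, type = "lpmatrix")
d  <- Xp[2, ] - Xp[1, ]                     # contrast on the log scale
se <- sqrt(drop(t(d) %*% vcov(fit) %*% d))
rr <- exp(sum(d * coef(fit)) + c(0, -1.96, 1.96) * se)  # RR, lower, upper
```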
Clinical and Demographic Characteristics
Initially, 121 children were enrolled, clinical data were recorded, and bronchial biopsy specimens were examined for all children. However, data on outdoor pollution exposure were available for 98 out of 120 children (81%). The clinical and pathological features of children excluded because of a lack of pollution data did not differ from those of children included in the study.
Clinical indications for bronchoscopy, according to international guidelines, are reported in Table E1. At the time of bronchoscopy, 53 children (54%) had a history of repeated wheezing episodes, present even apart from colds and responsive to bronchodilators, whereas 45 (46%) did not have wheezing reported in their clinical history. Clinical characteristics of the cohort are summarized in Table 1. The mean age at the onset of wheezing was 4.5 years, and children with wheezing had an age and sex distribution similar to those of children without wheezing. Children with wheezing showed higher levels of serum IgE (P = 0.04). ICS treatment tended to be more frequent in children with wheezing than in children without wheezing; only a minority of children in both groups were receiving oral corticosteroids. The two groups of children had comparable histories of RTIs and secondhand smoke exposure. Finally, no differences were observed between children with and without wheezing in the blood total and differential cell counts, including blood eosinophils (Table 1).
The pathological features (structural and inflammatory) measured in bronchial biopsy specimens are shown in Table 2. As reported in our previous studies, the pathological features characteristic of adult asthma were already present in children with wheezing (3)(4)(5). Children with wheezing had an increased reticular BM thickness (P < 0.0001) and epithelial shedding (P = 0.008) compared with children without wheezing. Inflammatory infiltrates also differed between the two groups, with children with wheezing displaying more eosinophils (P = 0.0002) and more mast cells (P = 0.01) than children without wheezing. No differences were observed with regard to neutrophil, macrophage, and CD4+ T lymphocyte counts in biopsy specimens. BAL fluid inflammatory cell counts were similar in the two groups (Table 2).
Outdoor Air Pollution Exposure
Data on average levels of outdoor air pollutants during the 90 days before bronchoscopy are summarized in Table 3. Of note, SO2 levels measured during the observation period were negligible across the whole region, so we did not consider this pollutant for further analyses. As shown in Table 3, all children in our cohort were exposed to high levels of pollutants. Exposure to NO2 exceeded the World Health Organization threshold for half of the days, whereas PM10 exceeded the threshold for 38% of the days. Of note, the levels of exposure to PM10 and NO2 did not differ among different districts or between children with wheezing and children without wheezing.
As shown in Figure 1A, in the whole cohort, BM thickness significantly increased with prolonged exposure to PM10. For each IQR increase in the PM10 concentration, we observed a significant enlargement of the BM of up to 30%, particularly from lag 0-15 to lag 0-90 (best RR at lag 0-63, 1.29; 95% CI, 1.09-1.52).
When stratifying our study population by the presence of wheezing (Figure 1B), a positive association was found in children with wheezing for exposures to PM10 longer than 13 days (best RR at lag 0-80, 1.34; 95% CI, 1.09-1.66), whereas an association at earlier time points was detected in children without wheezing (best RR at lag 0-7, 1.23; 95% CI, 1.1-1.36). In our cohort, we did not observe any influence of NO2 on BM thickness (see Figure E1 in the online supplement) or of PM10 or NO2 on epithelial integrity. Raw data are reported in the online supplement (Table E2).
With regard to the association between air pollution and inflammatory features (Figures 2 and 3), we observed different patterns in children with and without wheezing. In children with wheezing, there was a positive association between prolonged exposure to PM10 and tissue eosinophilic inflammation, as shown in Figure 2B. This association progressively increased with prolonged exposure, reaching its maximum value at lag 0-68, when eosinophil numbers increased more than threefold. Conversely, in children without wheezing, prolonged exposure to PM10 was associated with reduced eosinophil numbers in bronchial biopsy specimens (Figure 2B) (best RR at lag 0-77, 0.12; 95% CI, 0.02-0.6). In children without wheezing, prolonged exposure to PM10 also reduced the number of neutrophils (best RR at lag 0-70, 0.36; 95% CI, 0.14-0.89; Figure 3B), although this association only trended toward statistical significance. No influence of PM10 exposure on neutrophils was found in children with wheezing.
Regarding NO2 exposure, we observed weak negative associations during longer lags with eosinophils (best RR at lag 0-55, 0.33; 95% CI, 0.14-0.77) and during shorter lags with neutrophils (best RR at lag 0-14, 0.57; 95% CI, 0.34-0.93) (Figures E2 and E3) in the whole cohort. Raw data are reported in the online supplement (Tables E3-E6). In our cohort, the other inflammatory cell subtypes (lymphocytes, macrophages, mast cells) in bronchial biopsy specimens were not influenced by air pollution. Although exposure to PM10 and NO2 influences airway tissue inflammation, we did not observe any significant relationship between air pollution and BAL fluid or blood inflammatory cell counts (including blood eosinophils). Furthermore, there was no link with clinical parameters, such as the frequency of wheezing symptoms and the frequency of RTIs.
Discussion
To assess whether outdoor air pollution might affect airway pathology in children with wheezing, we investigated the association between the pathological hallmarks of asthma in airway biopsy specimens and exposure to the major air pollutants detected by regional air quality sensors. Our study illustrates for the first time, in vivo, a significant association between chronic air pollution exposure and histopathological changes in children. In children with wheezing, we found that exposure to high levels of particulate matter is associated with airway structural changes and chronic eosinophilic inflammation, possibly contributing to the pathophysiological mechanisms leading to asthma. Conversely, in children without wheezing, exposure to outdoor air pollution is associated with decreased eosinophilic and neutrophilic infiltrate in tissue, possibly reflecting a general impairment of innate immunity.
Thickening of the reticular BM represents a key structural change in bronchial asthma. Its pathogenetic role has been increasingly appreciated in recent decades, as it has been identified as a marker of epithelial-mesenchymal cross-talk, which may even anticipate inflammatory changes (28). Of importance, BM thickening is detectable from the beginning of the natural history (3)(4)(5) and is associated with airway hyperresponsiveness and lung function decline (29). Moreover, our group has recently shown that, among the different inflammatory and structural changes present in the airways of young children, BM thickening is the one that best correlates with the persistence of asthma across adolescence (5).
Our study shows that children with wheezing who live in areas with poor air quality (i.e., those exposed to high levels of PM10) have a significant thickening of the reticular BM. This association was independent of potential confounding factors, including age, atopy, ICS treatment, and secondhand smoke exposure, suggesting that particulate matter might have a major role in promoting remodeling early in life. To our knowledge, this is the first in vivo study to suggest a potential contribution of air pollution to airway remodeling in children with wheezing, who are at high risk for asthma. Previously, only Churg and colleagues (30) had demonstrated similar in vivo results, showing greater amounts of airway fibrosis in autopsy sections from 20 healthy women who were lifelong residents of a high-PM10 area. Our results extend previous studies that demonstrated, in in vitro or animal models, that PM enhances airway remodeling (11) and the production of proremodeling factors (VEGF, IL8, MUC5A) (31) or the activation of fibroblast activation pathways (32,33).
Eosinophilic inflammation represents the typical inflammatory trait of asthma, generally considered the hallmark of an atopy-driven T-helper cell type 2 cytokine environment (2). However, it is increasingly recognized that, besides atopy, many other factors may enhance eosinophilic inflammation (2). The present study demonstrates that chronic exposure to air pollution might be one such factor. Indeed, children with wheezing exposed to PM10 for a long time show an eosinophil count increased by even more than three times, independent of the presence of atopy and of ICS treatment.
It is conceivable that the airways of this high-risk population were more susceptible to various stimuli, thus supporting the hypothesis that particulate matter enhances existing mechanisms that amplify airway remodeling and inflammation (11). The eosinophil increase we observed in vivo is in line with the findings of many in vitro or animal model studies demonstrating that air pollution enhances the T-helper cell type 2 cytokine environment. Possible mechanisms involved can act through the release of IL-33 or IL-13 or through modulation of the antigen presentation process (34)(35)(36)(37)(38). Although there was a clear association between PM10 exposure and eosinophil amounts in airway biopsy specimens, no such association was detectable with peripheral blood eosinophils. These results confirm that blood eosinophils poorly represent the mechanisms regulating eosinophilic inflammation in the lung tissue (39).

[Figure 2. Eosinophil rate ratio (RR) in bronchial biopsy specimens per IQR increase in the moving-average PM10 concentration from 0 to 90 days before bronchoscopy, for the whole cohort (A) and for children with (blue) and without (red) wheezing (B); shaded areas are CIs, and an association is statistically significant where the CI excludes 1.]

[Figure 3. The corresponding neutrophil RR per IQR increase in the moving-average PM10 concentration, for the whole cohort (A) and by wheezing status (B).]
Of interest, chronic exposure to air pollution was related to different patterns of tissue inflammation in children with and without wheezing in our cohort. Children without wheezing who live in more polluted areas show reduced numbers of eosinophils and neutrophils in bronchial biopsy specimens. Reductions in airway neutrophils were observed with prolonged exposure to PM10 but also with short-term exposure to NO2. Our in vivo results in children are in line with recent in vitro evidence showing that air pollutants may impair innate immune responses to influenza virus infection, particularly by downregulating type 1 interferons, downregulating IL-6, and preventing NLRP3 inflammasome formation (40,41). Conversely, other studies reported an enhancement of luminal airway inflammation (in sputum or BAL fluid), particularly after acute exposure (42,43). Interestingly, Stenfors and colleagues (42), who found increased neutrophils in BAL fluid, reported a reduction in neutrophils in biopsy specimens of healthy subjects, suggesting a movement of cells from the airway wall into the airway lumen. Thus, it is conceivable that air pollution may weaken the normal innate immune mechanisms in the tissue, possibly promoting respiratory infections (44,45).
Limitations
The retrospective design of this study is certainly a limitation; however, this is the only ethical way to study the effects of chronic outdoor air pollution exposure on the lung in vivo. Furthermore, we could not assess the levels of particulate matter less than 2.5 μm in aerodynamic diameter in our cohort, which is another potential limitation of our study. We also acknowledge that a cohort of children who underwent a clinically indicated bronchoscopy may not be representative of the entire population and that the condition prompting the bronchoscopy might have influenced the results. However, these concomitant conditions were evenly distributed among the study groups and are unlikely to have affected the observed differences. Finally, the relatively low number of patients, especially when analyzing the two subgroups of children with and without wheezing, led to a widening of the confidence intervals.
Conclusions
In conclusion, this study reports an association between air pollution and histopathological changes in the airways of children with wheezing. First, prolonged exposure to high levels of PM10 promotes BM thickening and enhances eosinophilic inflammation in children with wheezing, who are at high risk for asthma. Second, in subjects without wheezing, exposure to high levels of air pollution reduces the levels of eosinophils and neutrophils, suggesting an impairment of innate immunity. These results reveal, for the first time in vivo in children, mechanisms linking air pollution to the pathological hallmarks of asthma, with important implications for understanding the pathophysiology and progression of this disease. The importance of air quality control in promoting lung health cannot be overemphasized. | 2021-05-20T06:16:17.008Z | 2021-03-30T00:00:00.000 | {
"year": 2021,
"sha1": "00e69d3ea0900b827eee6987b7c26779521168c6",
"oa_license": "CCBYNCND",
"oa_url": "https://www.atsjournals.org/doi/pdf/10.1513/AnnalsATS.202010-1321OC",
"oa_status": "BRONZE",
"pdf_src": "PubMedCentral",
"pdf_hash": "a4f86c76d5c716a7585436e21874bf1b374e7875",
"s2fieldsofstudy": [
"Environmental Science",
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
233862840 | pes2o/s2orc | v3-fos-license | Eco dome a potential experiment tool for greenhouse effect
This study aims to provide information about the characteristics of the eco dome as a tool for greenhouse effect experiments. The research used an experimental method and was conducted in 2019 at FMIPA Universitas Pendidikan Indonesia. The experiments using the eco dome included: (1) the effect of the presence of plants inside the eco dome on temperature change and (2) the effect of administering CO2 to an eco dome containing plants on temperature change. Data were collected by measuring temperature and light intensity, and data analysis used statistical tests. The results showed that the temperature in an eco dome without plants was higher than the temperature in an eco dome containing plants. In addition, an eco dome containing plants with added CO2 reached a higher temperature than one without added CO2. The experiments show that the eco dome yields significant results and can be developed in various ways.
Introduction
Science learning through experimental work is not just about studying content; it is also about gaining experience that builds skills and knowledge. Science learning activities build critical thinking, creativity, collaboration, communication skills and innovation [1]. Therefore, science learning is delivered through methods, strategies and media that build the skills required for life in the 21st century. Learning by experiment is an activity that examines and applies theory. In science learning, experiments are an effective method for reaching learning objectives [2]. Experiments play a critical role in learning activities, especially in science, because an experiment can test a theory [3]. An experiment is directed at expanding students' potential as a learning outcome [4].
An abstract concept can only be imagined without proof, so students may not gain knowledge from the chapters they have studied. One concept that is abstract and difficult for students to understand is the greenhouse effect. The greenhouse effect is an important science topic [5]; furthermore, the greenhouse effect drives global warming [6][7]. Some research has shown that many students do not understand the greenhouse effect well and hold misconceptions about it [5][8].
The planet Earth has an atmosphere, which keeps us warm. This simple fact is now often replaced by the idea that we live in a greenhouse, where the so-called greenhouse gases keep us warm. This is explained in the following way: the Earth receives visible light from the Sun, which heats the surface, which then emits infrared radiation, which is absorbed and re-radiated by the greenhouse gases in the atmosphere. The more CO2 we emit, the more radiation is re-emitted from the atmosphere and the warmer we will become [9]. Plants contribute substantially to capturing the carbon dioxide load of the atmosphere through photosynthesis; carbon is an essential building block for plants. During photosynthesis, plants take in CO2 and give off oxygen (O2) to the atmosphere. The oxygen released is available for respiration [10].
The purpose of a learning tool is to help improve the quality of the learning process so that learning objectives are reached [11]. The eco dome is one such tool. An eco dome is a miniature Earth or miniature ecosystem that shows how life on Earth works, and its use can support several experiments.
Methods
The research method was experimental. The research was conducted in 2019 at FPMIPA, Universitas Pendidikan Indonesia, Bandung. The research procedure was divided into five steps. The first step was a literature study of theories related to the experiment. The second step was designing the tool, in order to determine its shape and size. The third step was constructing the tool. The fourth step was testing the tool through experiments, including: 1) the effect of the presence of plants in the eco dome and 2) the effect of administering CO2 to an eco dome containing plants on temperature change.
The procedure of experiment 1: soil was taken with a small shovel and stirred until homogeneous, then placed into two eco domes. Fresh plants 5-15 cm in length were taken and planted in one of the eco domes; the other eco dome was left without plants. Three thermometers were then calibrated until they read identically. One thermometer was placed in each of the two eco domes, and one thermometer and a lux meter were placed outside the eco domes. The eco domes were then closed and placed under sunlight or a lamp, ensuring that both received heat energy of the same quality.
The procedure of experiment 2: soil was taken with a small shovel, placed into a container, stirred until homogeneous, and then put into the eco domes. The plants used consisted of 12-18 individuals, 10-15 cm high as measured with a ruler, which were placed inside the eco domes; both eco domes were planted with the same kind and size of plants. The next step was creating the CO2 source. First, 600 ml of hot water was poured into each of two bottles. Five spoons of sugar were added through rolled paper to avoid spills, then ½ spoon of sodium bicarbonate and ½ teaspoon of yeast were added to a bottle, and the cap was closed so the contents could mix. One end of a pipe was inserted through a hole in the perforated bottle cap, and the other end was inserted into the eco dome through plasticine; the remaining hole was closed with plasticine so that the gas flowed well. One thermometer was placed in each eco dome and one thermometer and a lux meter outside; the thermometers had been calibrated beforehand. Both eco domes were then placed under direct sunlight or a lamp, paying attention that both received the same heat energy.
The fifth step was collecting data to analyze the temperature change inside and outside the eco dome and the light intensity. Each experiment was repeated three times. Data analysis used statistical tests.
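The paper does not specify which statistical test was applied or publish its raw readings; as a hedged sketch, paired temperature readings from the two domes at matched times could be analyzed in R as follows (all numbers below are hypothetical):

```r
# Hypothetical readings (deg C) at matched times from the two domes;
# the paper does not publish its raw data.
with_plants    <- c(31.2, 32.0, 32.5, 33.1, 33.4, 33.8)
without_plants <- c(31.9, 32.8, 33.6, 34.3, 34.9, 35.4)
lux            <- c(820, 900, 980, 1050, 1110, 1180)  # light intensity

# Paired t-test: is the no-plant dome systematically warmer?
t.test(without_plants, with_plants, paired = TRUE)

# Simple linear regression of temperature on light intensity,
# matching the reported "directly proportional" relationship.
summary(lm(without_plants ~ lux))
```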
Eco dome
The Eco System Eco Dome Planet Management was produced by "Wild Science" in 2011 in Taipei, Taiwan. The eco dome was first designed by biologists, biophysicists and science education experts with experience in building and exploring climate and biological systems. The eco dome is made of transparent glass and is used as a miniature ecosystem in which plants are placed; it can thus serve as a mini biology laboratory.

[Figure 1. Design of the eco dome.]

Figure 1 shows the eco dome: 1) the lid cup serves as the roof cover and sits higher than the body of the tool, which eases the flow of water; 2) the roof or lid acts as the atmosphere, since it has a cover with the same function as Earth's atmosphere; 3) the mountain lake or highland lake represents mountains and lakes in the highlands; 4) the highland or upper terrace represents Earth's highlands; 5) the lowland or lower terrace represents Earth's lowlands; 6) the water channel is a tube used to carry water from the highland to the lowland, with two tubes per level; 7) the sea or lowland lake represents the sea or lowland life on Earth; 8) plugs, found in two parts of the tool, are 2-cm-diameter holes used to connect equipment, such as the tubes that deliver the CO2; 9) the lower platform serves as the lowland; and 10) the base represents the basic earth.
When the eco dome is placed under direct sunlight, sunlight penetrates the glass. Some of the light is absorbed by the contents of the eco dome. In an eco dome containing plants, part of the sunlight is used in photosynthesis and the remaining light is absorbed by the soil. Some energy is absorbed by the eco dome surface and the rest is reflected. The light reflected from the contents of the eco dome is retained by the glass cover, so the air inside the eco dome warms. The glass is thus analogous to the greenhouse gases of the atmosphere: greenhouse gases absorb and trap heat [12].
3.2.1 The effect of the presence of plants inside the eco dome on temperature change (experiment 1). The purpose of this step was to produce experimental data on the effect of the presence of plants in the eco dome on temperature change. The first eco dome was filled with plants while the other was not.

[Figure 4. The effect of light intensity on temperature change.]

Figure 4 shows that temperature is directly proportional to sunlight intensity: the higher the light intensity, the higher the resulting temperature. The temperature is lower in the eco dome containing plants because the plants inside it perform photosynthesis, absorbing gases such as the CO2 found inside the eco dome; the less CO2 there is in the eco dome, the lower the resulting temperature. The eco dome without plants performs no photosynthesis, so its CO2 does not decrease; CO2 is released from the soil while the eco dome retains the heat. This experiment shows that the eco dome has the characteristic of producing significant data.
3.2.2 The effect of administration of CO2 in an eco dome containing plants on temperature change (experiment 2)
This experiment aimed to produce data on the effect of CO2 administration inside an eco dome filled with plants on temperature change.

[Figure 6. The relation between time and temperature inside and outside the eco dome.]

Figure 6 shows a difference between the temperatures inside and outside the eco dome: the temperature inside the eco dome is higher than outside, and the temperature in the eco dome administered CO2 is higher than in the other eco dome. The longer the experiment runs, the higher the temperature produced. The relation between light intensity and temperature change, shown in Figure 7, again indicates that temperature is directly proportional to sunlight intensity: the greater the light intensity, the higher the resulting temperature. This experiment shows that the eco dome has the characteristic of producing significant data. The temperature is higher in the eco dome that is filled with plants and administered CO2 because the larger amount of CO2 increases the absorption of the sunlight reflected from the eco dome surface. Meanwhile, in the eco dome filled with plants but without added CO2, the temperature is lower because the plants absorb the CO2 found inside it.
Based on the experimental data presented, plants demonstrably perform photosynthesis, which absorbs CO2. This experiment used the eco dome to illustrate how pollution affects the Earth. With quantitative facts available, various competencies can be developed, including: (1) the ability to analyze relationships between variables, such as the relationship of light intensity to temperature and of time to temperature; and (2) the ability to predict the conditions that will occur due to changes in such variables.
Conclusion
Utilizing the eco dome enables a new innovative experiment, namely "the effect of the presence of plants on temperature changes and the effect of CO2 administration in eco domes filled with plants on temperature changes". The results of this experiment are that the temperature inside the eco dome filled with plants is lower than in the eco dome without plants, while the temperature in the eco dome filled with plants and administered CO2 is higher than in the eco dome without added CO2. Experiments using the eco dome create quantitative data. The eco dome shows significant results and can be developed in various ways. | 2021-05-07T00:04:19.076Z | 2021-03-01T00:00:00.000 | {
"year": 2021,
"sha1": "178705c315d35f3112e33ea8839a05b497552582",
"oa_license": null,
"oa_url": "https://doi.org/10.1088/1742-6596/1806/1/012160",
"oa_status": "GOLD",
"pdf_src": "IOP",
"pdf_hash": "637ffc5567f6679c0565cbf9fac59418e37aa678",
"s2fieldsofstudy": [
"Agricultural and Food Sciences",
"Environmental Science"
],
"extfieldsofstudy": [
"Environmental Science",
"Physics"
]
} |
86402684 | pes2o/s2orc | v3-fos-license | Rare lytic lesions of bone at uncommon locations-A study of 12 cases
Introduction: Pathological lesions in the skeletal system can appear radiologically as lytic lesions of bone. They can be either inflammatory conditions or neoplastic lesions; if neoplastic, they can again be either benign or malignant. On radiological examination alone the nature of the disease cannot be assessed, and histopathology is the ultimate tool for the final diagnosis of such conditions. Aims and objectives: The main aim was to study different lytic bone lesions, the uncommon locations at which they can occur, and their histopathological features. Materials and Methods: This study was done at Mahadevappa Rampure Medical College from June 2017 to June 2018. A total of 12 bone lesions were analysed. Different investigations were done in different cases after radiological examination: fine needle aspiration cytology, bone biopsy and bone excision. The specimens were then processed and stained, after which the histopathological diagnosis was made. Results: This study included a total of 12 cases, of which 6 cases (50%) were between 25 and 50 years of age. A female predominance was observed. The incidence of non-neoplastic lesions was 8.3% and of neoplastic lesions 91.67%. Among the neoplastic lesions, benign lesions were 25%, malignant tumours 41.67% and metastatic tumours 25%. Solitary plasmacytoma was the most common tumour among the malignant tumours, and secondary metastases were also common among the malignant bone tumours. Conclusion: Bone tumours are not routinely found in clinical practice, and when the presentation is at a rare site it is difficult to reach a diagnosis. Though clinical and radiological approaches are available, histopathology can make the correct diagnosis.
Introduction
Pathological lesions in the skeletal system can appear radiologically as lytic lesions of bone. 1 They can be either inflammatory conditions or neoplastic lesions; if neoplastic, they can again be either benign or malignant. On radiological examination the nature of the disease cannot be assessed, 1 and it is difficult to determine whether a bone lesion is benign or malignant. 2 The most frequently seen benign lesions are bone cysts, which can be aneurysmal bone cysts, fibrous dysplasia and osteoblastoma. 2 The most frequently found malignant tumors are osteosarcoma and Ewing's sarcoma. 2 Primary carcinoma of bone is found less frequently than bone metastasis. 4 In the human body, bone is the third most common site of metastatic disease, after the liver and lung. 5 Among the secondary tumors seen, the primary sites are usually the lung, kidney, thyroid and breast. Many of them produce mainly lytic lesions in bones, and a few show a mixed lytic lesion and sclerotic reaction. 5 Carcinomas metastasize to bone more commonly than sarcomas. 5
Materials and Methods
This study was done at Mahadevappa Rampure Medical College, Kalaburagi. It comprises 12 cases in which bone lesions were diagnosed radiologically over a one-year period (June 2017 to June 2018). The patients were assessed by the surgeons and advised to undergo imaging, on which different lytic bone lesions were identified in the respective cases. After radiological examination they were sent for investigations: fine needle aspiration cytology (FNAC), bone biopsy and excision biopsy. The FNAC procedure was done by inserting a needle into the lesion and aspirating material; once a sample was obtained it was smeared, stained and viewed for diagnosis. In the histopathology laboratory, the tissues were grossed, fixed in 10% formalin and then processed further. For bone specimens, 3 to 5 mm thick sections were taken and decalcified by keeping the specimens in nitric acid solution. All tissue samples were then processed through increasing concentrations of alcohol. Paraffin blocks were prepared, sections were taken and stained with hematoxylin and eosin, and after this process the slides were mounted and viewed under the microscope for the final diagnosis. Inclusion criteria: all bone lesions occurring at rare sites and diagnosed radiologically as lytic bone lesions. Exclusion criteria: tiny bone biopsies and samples inadequate for reporting were excluded from this study.
Among the neoplastic lesions, benign lesions were 3 cases (25%), malignant lesions 5 cases (41.67%) and metastatic lesions 3 cases (25%) (Table 1). The sex distribution across categories (non-neoplastic / benign / malignant / metastatic) was: male 0/2/1/2, female 1/1/4/1, total 1/3/5/3. Out of the 12 cases, 3 cases (25%) were below 25 years of age, the majority, 6 cases (50%), were between 25 and 50 years, and 3 cases (25%) were more than 50 years old (Table 3). In our study the most commonly involved bone was the femur with 4 cases (30%), followed by the frontal bone, ulna and tibia with 2 cases (16.67%) each. The other bones involved were the clavicle, humerus and fibula with 1 case (8.3%) each. In this study, fine needle aspiration cytology was done in 2 cases. The FNAC procedure was followed and the samples obtained were smeared, stained and viewed under the microscope for diagnosis; they were diagnosed as plasmacytoma and Langerhans cell histiocytosis on FNAC. Biopsy was later performed and sent for histopathological examination; the biopsy samples were processed and the final slides were reviewed for diagnosis. They were confirmed as solitary plasmacytoma of the tibia and Langerhans cell histiocytosis of the humerus, respectively. The concordance between FNAC and histopathology in this study was therefore 100%.
Discussion
Pathological lesions in the skeletal system can appear radiologically as lytic lesions of bone; they can be either inflammatory conditions or neoplastic lesions. 1 The age of the patient is an important consideration, as some conditions are common in children and others in adults. A history of any pre-existing conditions is also an important feature. 2 In our study the majority of cases were neoplastic lesions and one case (8.3%) was non-neoplastic. The total neoplastic cases were 11 (91.67%), among which 3 cases (25%) were benign, 5 cases (41.67%) were malignant and 3 cases (25%) were metastatic.
The non-neoplastic lesion included in this study was 1 case of maduramycosis. The neoplastic lesions were divided into three categories: benign, malignant and metastatic. The benign lytic lesions were 1 case of aneurysmal bone cyst, 1 case of fibrous dysplasia and 1 case of chondroblastoma. The malignant lesions were 1 case of Langerhans cell histiocytosis, 1 case of osteosarcoma, 1 case of malignant fibrous histiocytoma and 2 cases of solitary plasmacytoma. The metastatic lesions were 1 case of male breast carcinoma metastatic to the femur and tibia, 1 case of renal cell carcinoma metastatic to the fibula, and 1 case of follicular carcinoma of the thyroid metastatic to the ulna. All these cases were found at rare sites for their respective diagnoses.
In our study the most commonly involved bone was the femur with 4 cases (30%), followed by the frontal bone, ulna and tibia with 2 cases (16.67%) each. The other bones involved were the clavicle, humerus and fibula with 1 case (8.3%) each.
Maduramycosis is also known as mycetoma pedis or Madura foot. The condition was first described by Gill in 1842. 6,7 It occurs as either eumycetoma, caused by true fungi, or actinomycetoma, caused by filamentous bacteria. 6,7 It is a suppurative infection. Microscopically it contains granulation tissue along with discharging sinuses, followed by bone involvement as the disease progresses. 6 Early diagnosis is difficult because it resembles chronic bacterial infection. 6 Aneurysmal bone cyst was named by Jaffe and Lichtenstein. It is most commonly seen in patients in the first two decades of life and may occur in any portion of the skeleton. Radiologically it is seen as a radiolucent or lytic lesion that is eccentrically located and expansile in bone. Grossly it appears as a spongy mass with large blood-filled cystic spaces separated by fibrous septa, with well-defined margins; a shell of reactive periosteal bone can be seen. Microscopically, blood-filled spaces are seen, lined by fibrous septa that also contain osteoclast-like giant cells and osteoid. 3 Fibrous dysplasia was first described by Lichtenstein in 1938. It is also known as osteitis fibrosa or generalized fibrocystic disease of bone. It is seen equally in both sexes, and the most commonly affected site is the jaw. Two forms of fibrous dysplasia are known: monostotic and polyostotic. Radiologically it is seen as an intramedullary radiolucency; it can be an eccentric lesion or involve the whole bone. Microscopically it is a well-circumscribed, sharply delineated lesion lined by lamellar bone, with fibrous tissue showing proliferation of spindle cells. 3 It is also described as having a Chinese-letter pattern. 1,3 Chondroblastoma is benign in nature. It is a cartilage-producing neoplasm whose most common site is the epiphysis of bones. It is also known as calcifying giant cell tumour or epiphyseal chondromatous giant cell tumour, and it accounts for less than 1% of all bone tumours. 8 Microscopically, chondroblasts are seen arranged in sheets; they are round to polygonal in shape with slightly eosinophilic cytoplasm, and osteoclast-type giant cells are also seen. 8 Plasma cell myeloma is a monoclonal neoplastic proliferation of plasma cells, and solitary plasmacytoma of bone is a variant of it. It most commonly occurs in the 6th and 7th decades of life, affecting both sexes equally. The bones most frequently involved are the vertebrae, ribs, skull and pelvis. Microscopically it shows a rich vascular pattern with tumour cells surrounding vascular channels. The cells have an eccentric round to oval nucleus, speckled chromatin and abundant cytoplasm; Mott cells or Russell bodies may also be seen. 8 Langerhans cell histiocytosis is a clonal process, also known as histiocytosis X, Letterer-Siwe disease or eosinophilic granuloma of bone. It is a rare disease that occurs more commonly in males than in females. The most commonly affected bones are flat bones, especially the skull. Microscopically, the cells are arranged in loosely cohesive clusters; they are large and round to oval in shape, with oval to indented nuclei and abundant pale eosinophilic cytoplasm. These are histiocytes, and nuclear grooving is seen. An acute inflammatory infiltrate of eosinophils can be present. 3 Osteosarcoma is a primary intramedullary high-grade malignant tumour in which the neoplastic cells produce osteoid.
There are many variants, of which fibroblastic osteosarcoma is one. 8 It usually occurs between 10 and 25 years of age 2 and can arise in any bone of the body. 2 Its main microscopic feature is the presence of malignant spindle cells with minimal amounts of osseous matrix, with or without cartilage. 8 Malignant fibrous histiocytoma was first described in bone by Feldman and Norman in 1972. It is also known as malignant histiocytoma, xanthosarcoma, malignant fibrous xanthoma or fibroxanthosarcoma, and it accounts for less than 2% of all primary malignant bone lesions. Microscopically it consists of a mixed population of cells, including spindle cells, histiocytoid cells, pleomorphic cells, giant cells and chronic inflammatory cells. Mitoses are present, and a storiform pattern is seen in the fibroblastic areas. Many histological subtypes are recognized. 8 When metastasis is found, it should be remembered that the skeletal system is the third most common site involved. Clinical features seen in metastatic disease are pain, pathological fractures and hypercalcemia, of which pain is the most common symptom. Pain is due to periosteal stretching by the tumour along with nerve stimulation in the endosteum. Pathological fracture is the most common finding when the metastasis is from the breast. 2 Metastatic carcinoma usually involves the vertebrae, femur, sternum and pelvis. Bone scintigraphy covers the whole skeleton, so it is the most sensitive method for the detection of metastasis. The skeletal sites involved can be anticipated from the location of the primary tumour and the pattern of blood flow. Grossly, metastases from the breast are greyish white and firm, while those from renal cell carcinoma are soft, hemorrhagic deposits. 8
Conclusion
In this study, we have seen a few lytic bone lesions at uncommon locations. Bone tumours are not commonly found, and if the presentation is at an unusual site it is difficult to come to a conclusion. In spite of the availability of modern diagnostic techniques, it is difficult to analyse the nature of the disease process. Histopathology is considered the gold standard for the diagnosis of the conditions causing lytic bone lesions. | 2019-03-28T13:33:20.638Z | 2018-12-30T00:00:00.000 | {
"year": 2020,
"sha1": "c78c02b9824929a380426a86b7282b1948dc1be9",
"oa_license": "CCBYNCSA",
"oa_url": "https://www.jdpo.org/journal-article-file/7936",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "6284e9eaacfd4539bc45c4249e02888f5943888b",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
54013521 | pes2o/s2orc | v3-fos-license | Rumination and social support as predictors of posttraumatic growth in women with breast cancer : a systematic review
Abstract: Objective: Posttraumatic growth (PTG) is a perceived positive change after a stressful situation. Studies describe different predictors of PTG. The purpose of this study was to (1) review the evidence that rumination and social support are predictors of PTG and (2) analyze the results of the screened studies. Method: A systematic review was conducted by searching for articles with quantitative or mixed methods that evaluated PTG using the Posttraumatic Growth Inventory, rumination and/or social support in women with breast cancer. Results: Twelve articles that met the inclusion criteria were identified. All of them reported some degree of PTG in their samples. Rumination was evaluated in three studies and social support in ten, and both were considered to have a positive correlation with PTG. Conclusions: This review concludes that rumination and social support are predictors of PTG in women with breast cancer. These results contribute to the development of new interventions in mental health.
Background
In 1996, Richard Tedeschi and Lawrence Calhoun started publishing on posttraumatic growth (PTG), a construct that shifts the focus of investigation away from theories of pathogenesis, seeking to deepen understanding of adverse situations. PTG refers to positive cognitive remodeling resulting from the experience of a situation perceived as stressful and/or traumatic. It involves five different aspects (Relating to Others, New Possibilities, Personal Strength, Spiritual Change and Appreciation of Life), evaluated by the Posttraumatic Growth Inventory (PTGI) (1). This model states that the event affects a person's belief system; from this, people present different strategies and behaviors to face the situation, of a psychic and/or social order (2).
The model has been tested in studies with different populations and different diagnoses of organic diseases, such as rheumatoid arthritis (3). Within oncology, PTG has been associated with head and neck cancer (4), oral cavity cancer (5), breast cancer (6), hematological cancer (7) and pediatric cancer (8). Thus, the possibility of developing and perceiving positive aspects after adverse situations has been demonstrated.
Cancer is the name given to a large set of diseases characterized by the uncontrolled proliferation of malignant cells, which can reach any part of the human body. Clustering of these cells can cause malignant tumors or neoplasms. In 2015, cancer was considered the second largest cause of death in the world, responsible for 8.8 million cases. In men, the most common types of cancer are prostate, lung and colorectal; in women, the most frequent are breast, colorectal and lung cancer (9).
Breast cancer is the most prevalent diagnosis in women worldwide, reaching approximately 1.5 million people per year. It also has the highest death rate among women with cancer: in 2015, 570,000 women died from breast cancer (9). Its diagnosis and treatment have a direct effect on the patient's mental health, which can result in symptoms such as depression and anxiety (10)(11)(12). Negative symptoms sometimes persist significantly after the end of medical treatment (13). Thus, evaluation and assistance in the mental health area are important in this population.
Studies of PTG in women with breast cancer have found different associations that influence its development, such as age, morbidity caused by the treatment, and various psychosocial variables (14,15). The predictors most discussed in the literature are the perception of the situation as a stressor, perceived social support, use of adaptive coping strategies and use of rumination (16).
Rumination is a cognitive process in which the individual settles into a self-reflexive movement in a passive and repetitive way. It is a form of response to stress in which the person fixates on the problem and on negative feelings without defining assertive resolution strategies (17). In general, rumination is a non-adaptive coping strategy, since it encourages negative symptomatology (18). Coping strategies are the different ways people deal with stressful situations (19). In patients with different types of cancer, coping strategies are directly linked to their perception of the disease (20).
Social support is understood as a multidimensional concept, since it refers to the emotional, financial and material resources that the individual accesses through the social environment (21). Sidney Cobb (22) developed a model establishing that the individual internalizes social support based on three beliefs: being loved and knowing that there are people who care about one's well-being, being valued, and belonging to a social network.
Although these predictors have already been evaluated in studies across different cultures, no systematic reviews and/or meta-analyses on the subject are found in the literature. Thus, this study seeks to systematically compile articles that evaluate rumination and/or social support as predictors of PTG in populations of women with breast cancer, analyzing the results in narrative form.
Search, selection and review strategies
The protocol of this review is registered in PROSPERO under registration number CRD42017060584. The PRISMA guideline items for systematic reviews and meta-analyses were followed, from the search for articles to the extraction of results and the description of the systematic process.
The Embase, Web of Science, PsycInfo, Scopus and Cochrane Online Library databases were used. We searched for empirical studies with quantitative methodology that evaluated PTG, social support and/or rumination in populations of women with breast cancer. Per the inclusion criteria, PTG had to be evaluated using a quantitative measure. The selected papers were published up to September 2017.
According to the specificities of each database, the search strategy and Boolean connectors used were: ("posttraumatic growth" OR "posttraumatic growth inventory") AND ("breast cancer") AND ("social support") OR (rumination). The last search was conducted on October 19th, 2017. Articles found in reference lists were also included when they met the stipulated inclusion criteria, even though they were not identified in the database searches.
Figure 1 describes the process of searching for and analyzing the articles. After the manual survey, two independent judges evaluated the abstracts in order to minimize publication bias. A third judge was invited to evaluate disagreements regarding the selection of abstracts and the reading of full articles; if disagreement about the inclusion or exclusion of an abstract persisted, the article was read in full. From the selection of articles to the reading of the full texts, after applying the inclusion and exclusion criteria, the Covidence software developed by Cochrane was used to reduce the risk of bias and to support the judges' evaluation of the studies.
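The review does not report an inter-rater agreement statistic for the dual screening; purely as a sketch of how such agreement could be quantified, Cohen's kappa can be computed with the irr package in R (the decisions below are invented):

```r
library(irr)  # install.packages("irr") if needed

# Hypothetical include/exclude decisions by the two judges on ten abstracts;
# the review itself does not publish these data or a kappa statistic.
judge1 <- c("include", "exclude", "include", "include", "exclude",
            "exclude", "include", "exclude", "include", "exclude")
judge2 <- c("include", "exclude", "include", "exclude", "exclude",
            "exclude", "include", "exclude", "include", "include")

kappa2(data.frame(judge1, judge2))  # Cohen's kappa for two raters
```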
In the reading of the abstracts, 163 papers were excluded for not meeting the pre-established inclusion criteria, and 74 duplicates were also excluded. Twenty-four articles were eligible for complete reading by two judges. One was excluded because the text was not available in English, and six were excluded for being expanded abstracts rather than full articles. Three more articles were excluded during complete reading because they did not meet other inclusion criteria, and finally two articles were excluded because they were not fully available online.
Data extraction
After applying the inclusion and exclusion criteria, 24 of the 187 abstracts analyzed were considered eligible for this review. After excluding the articles that did not meet the inclusion criteria, as well as one unavailable in English and two not fully available, 12 studies remained. Two judges extracted key data from the articles in order to conduct a qualitative analysis of the general information resulting from the research. The Crowe Critical Appraisal Tool (CCAT) was also used to quantify the methodological quality of the studies (23).
Two articles come from the same longitudinal study. Although the data from the applied instruments are the same, neither was excluded, as each article focuses on different aspects important for the discussion of the results (Table 1).
Methodological Quality
The average CCAT score of the analyzed studies was 85% (range 78%-95%). No studies had low methodological quality; however, no article reached 100% on the final score (see Table 1).
Posttraumatic growth
Of the analyzed studies, five opted to use cultural adaptations of the original PTGI (24-28). Only in the studies of Chan et al. (25) and Tomita et al. (28) did the adapted instruments undergo alterations in the number of items and factors; however, the authors report having reached good reliability indexes.
All studies reported finding significant PTG scores in their samples. Although all evaluated women with breast cancer, there were some differences in sample characteristics. Kroemeke et al. (29) focused their investigation on women who had undergone mastectomy, while the study of Cohen and Numa (26) compared PTG between women who had and had not engaged in volunteering.
PTG correlated with time since diagnosis, showing more significant growth in the first 12 months and tending to stabilize afterwards (30). It was also positively correlated with optimism (24). Women who reported having a religion had higher scores than those who did not (25,28). Samples reporting worse health presented lower PTG scores compared with women with fewer side effects (26).
Social Support
The social support variable was evaluated in ten studies, with different outcomes reported. One study had as its main objective the correlation between social support and PTG (24); the authors concluded that the sources of social support contributing most to PTG are global, family, friend and spouse support. However, in another study whose main objective also focused on PTG and social support (31), the initial hypotheses were not confirmed: only cognitive support was shown to be a predictor of PTG in the regression model established by the authors, whereas spouse support showed no association.
Although almost all studies that included social support in their analyses concluded that there is a positive correlation between this variable and PTG, there was no consensus. In the study of Kroemeke et al. (29), social support showed no correlation with PTG and was therefore not included in the model tested for predictor analysis. In the study of Tomita et al. (28), on the other hand, the variable only affected the Relating to Others factor, being important especially when coming from one's spouse.
Social support directly related to breast cancer was shown to be a predictor of positive change in the PTG score, whereas general social support was not (32). In Cohen and Numa's (26) study, no significant correlation was found between PTG and perceived social support; these authors suggest that previous studies with similar results also evaluated samples with a longer time since diagnosis. Furthermore, perceived social support was related to other variables: in the Tomita et al. (28) study it was related to the reduction of depressive symptoms, and in the McDonough et al. (32) study it showed a positive correlation with subjective well-being.
Rumination
Three articles evaluated and analyzed rumination in their samples. In two studies this variable was included in the main objective (25,33), and in one it was used as a benchmark variable for differences between compared groups (27).
Across the studies found, there is a consensus that rumination is positively correlated with PTG, although there are differences among types of rumination. Ramos et al. (27) point out that only deliberate rumination showed a correlation with PTG. In the study of Soo and Sherman (33), however, the factors Intrusion, Instrumentality and Brooding correlated with different factors of the PTGI, while the instrumental rumination subscale correlated with all five PTGI factors.
Chan et al. (25) indicate that there are differences between positive and negative cancer-related rumination. The first was positively associated with PTG scores and can also mediate the relationship between PTG and positive attentional bias. Meanwhile, negative cancer-related rumination was significantly related to posttraumatic stress disorder (PTSD) symptoms.
Discussion
This review presented data on the evaluation of PTG in women with breast cancer in studies that also evaluated social support and/or rumination as predictors. Until now, no systematic review with the same objective has been published worldwide. However, the results presented here should be considered with caution, since there are important methodological differences among the analyzed articles.
All the studies considered in this review reported PTG scores in their samples, indicating that it is possible to perceive positive change in life even after the experience of breast cancer. In the study by Cordova et al. (16), equivalent total scores were found between groups, and the group with breast cancer presented higher averages in the Relating to Others, Spiritual Change and Appreciation of Life factors. Breast cancer is perceived as a stressful and traumatic situation for women in different respects, which can result in symptoms such as depression, anxiety and acute stress (12,13). Identifying the possibility of developing positive changes after such a situation allows the development of different mental health interventions focused specifically on quality of life. In addition, the study of predictors that facilitate the development of PTG supports the improvement of the model.
Social support was evaluated in ten of the analyzed studies. The instruments used in the evaluations were composed of different subscales evaluating different aspects. In the study by Soo and Sherman (33) the focus of evaluation was not social support, which was included as an affective variable; even so, the authors concluded that social support had a direct influence on the model established between rumination and PTG. Thus, it is suggested that the social factor is a determinant for the development of PTG when combined with other predictors. The emotional/informational social support subscale demonstrated correlations with the PTG score; it is understood that the possibility of expression allows the evaluation of different perspectives on the problem. In the study by Danhauer et al. (34), women who presented positive scores on different predictor variables but a low social support index also presented no increase in PTG. This underlines the importance of psychosocial support during coping with breast cancer, corroborating the established model of PTG (1).
It is important to point out that results differed with regard to types and sources of social support. Two of the analyzed studies found that spouse support is positively correlated with greater PTG (24,28); however, in the study by Hasson-Ohayon et al. (31), where this was one of the hypotheses, it was not confirmed. Those authors found that the cognitive type of social support was the most significant in their models, something already observed in other analyses (26). In the study by McDonough et al. (32), social support tied specifically to breast cancer predicted change in PTG, but general social support did not. It is therefore important to analyze carefully the different types and sources of social support in different populations, since this variable can present itself in distinct ways. In discussing their results, Danhauer et al. (30) raise considerations about the model found in their study. The authors sought to establish a causal relationship in which social support is assumed to facilitate the development of PTG; after the analysis, however, they questioned whether the reverse could also occur, that is, whether the development of PTG would facilitate an increased perception of social support. From the considerations of the study, PTG presupposes positive changes, such as Relationship with Others, and may itself predict greater social support after the stressful situation. Thus, positive change would be influenced by social support, while the occurrence of positive changes would facilitate interpersonal interaction, making these variables mutually reinforcing.
It is known that social support allows rumination to occur deliberately, since it offers a space in which the person can discuss the traumatic and/or stressful situation (2). Social support also scaffolds rumination, allowing it to occur in a healthy way. The emotional/informational factor of the MOS instrument was important in the model described by Soo and Sherman (33), in which rumination and PTG were associated. Thus, both variables interfere directly in PTG scores.
Considering rumination as a variable, if women repeatedly talk about their experience with other people, this may have a positive effect on the understanding and assimilation of the situation, leading to the perception of positive changes. Social support, whether from family, friends or others, emerges as an important component when the woman takes it into account. In this context, therapeutic groups are an important resource, as they allow the exchange of experiences between people with similar characteristics, offering both social support and information (35,36).
In several studies, rumination has been associated with depressive and stress symptoms (37,38). However, studies evaluating PTG and rumination have concluded that the latter may facilitate the perception of positive changes after adverse experiences (16,25,27,33,39). Notably, subscales addressing reflexive rumination, an active form of processing, were more predictive of PTG scores (25,33). Findings showed that negative cancer-related rumination partially mediated the relationship between negative attentional bias and PTSD symptoms, while positive cancer-related rumination partially mediated the relationship between positive attentional bias and PTG. This active processing may be purposeful, allowing the elaboration of coping strategies.
Tedeschi and Calhoun (2) described that actively talking about the stressful situation results in the remodeling of the mental schemas affected by it. Thus, the more actively a person thinks about the situation and looks for ways to re-signify it, the greater the PTG they can experience. Chan et al. (25) suggest that active, growth-oriented strategies can be understood as positive rumination, while negative thinking and recurrent fear of cancer are linked to symptoms such as PTSD and depression and are classified as negative rumination. This is corroborated by the study by Morris and Shakespeare-Finch (40), which evaluated a sample of participants with different cancers and found that deliberately ruminating on benefits is positively related to PTG. We therefore conclude that there is a complex distinction in the type of rumination that results in personal growth after a stressful situation: negative rumination strategies can facilitate negative symptoms, hindering the perception of growth and benefits.
It is worth mentioning that, although they did not evaluate rumination directly, Danhauer et al. (30) considered that PTG may be facilitated if there is some degree of tolerance of intrusive thoughts during treatment. It is thus possible to design interventions that encourage rumination in an active way, in which the patient is encouraged to talk about and consciously revisit the situation, directed along an adaptive path and developing positive coping strategies. Cordova et al. (16), for their part, did not specifically mention the term rumination; however, their sample of women with breast cancer was evaluated with the Talking About Cancer instrument, which assesses how much the person talks about what happened. Although the concept of rumination is not named, their results show that actively talking about cancer was highly correlated with greater PTG scores.
The articles analyzed used different methodologies. Data collection in the Soo and Sherman (33) survey occurred via the internet, with a convenience sample; however, the authors do not report whether a pilot study was conducted or whether sample loss may have occurred because of the time required for the evaluation. It is known that the longer a computer is used, the harder it becomes to maintain attention on a single task (41). Careful consideration of the size and number of instruments included in online surveys is therefore important to reduce possible sample loss, and a pilot study can be an effective way to evaluate whether the survey is well constructed for this form of data collection. In the study by Cohen and Numa (26), participants received and returned the instruments by email, which makes it harder to control for response bias and can compromise the trustworthiness of the results obtained.
A further limitation is the use of self-report scales, which are answered on the basis of the individual's perception of himself or herself, so responses are more likely to be biased. Given the objective of these studies, however, this type of evaluation is the most appropriate, since it aims to measure self-perceived changes.
Two articles analyzed in this study (30,34) come from the same longitudinal study. Neither was excluded, because each article addressed a different objective and discussed different aspects of the research. Nevertheless, their results derive from the same data collection and the same sample, which limits how far these articles can be explored and makes generalization difficult.
In this review, four longitudinal studies were found and only one presented an intervention protocol (27) .In this research, quantitative measures were used to evaluate the rumination variable.The authors observed that the participants in the clinical group presented higher rates of PTG compared to participants in the control group, concluding that the intervention tested was effective.In addition, intrusive rumination was a moderating variable of PTG.
Empirical evidence from intervention protocols is extremely important for everyday clinical practice: interventions that have been tested empirically tend to be more effective, which is reflected in the improvement of the care provided.
The studies analyzed in this review had sizeable but homogeneous samples. The lack of variation in participant characteristics, such as social class and educational level, may have influenced the results. It is important to replicate these studies, evaluating PTG, social support and rumination, as well as other predictor variables, in heterogeneous samples to assess whether positive change occurs across different cultures and social strata.
Despite the methodological differences and differing objectives, the results of the analyzed studies corroborate the theoretical model of PTG (1,2). The standard deviations of the overall mean total PTGI score, across the four assessment points of the study by Danhauer et al. (30) and in Soo and Sherman (33) and Cordova et al. (16), are high, being respectively SD = 23.12, SD = 20.58, and SD = 24.8 for the clinical group and SD = 26.3 for the control group. The results should therefore be interpreted with caution, since the standard deviations depart considerably from the means. Even so, in all three studies PTG appeared as a positive and relatively stable variable.
Conclusions
In clinical terms, this review allows reflection on different mental health interventions. Considering rumination and its specific relation to the perception of positive changes after breast cancer, understanding its influence in this model allows the therapist to conduct interventions more assertively. Rumination need not follow a path of free association: the therapist can help the patient revisit the situation in a way that encourages a positive re-signification of the stressor.
Study Limitations
Further studies using this methodological model are suggested, broadening the publication-period criteria to include studies published during the emergence and initial development of the PTG construct. A meta-analysis of the results found in these studies is also suggested, in order to test the described hypotheses with quantitative methods, as is the use of a standardized instrument for qualitative appraisal of the included studies.
Figure 1. Flow diagram based on the PRISMA Statement.
Table 1. Description of the analyzed studies. | 2018-11-23T06:13:49.331Z | 2018-09-06T00:00:00.000 | {
"year": 2018,
"sha1": "f2aa599ecaceb657235d980658dc31fe11b1fb46",
"oa_license": "CCBY",
"oa_url": "https://revistas.ucm.es/index.php/PSIC/article/download/61437/4564456548065",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "f2aa599ecaceb657235d980658dc31fe11b1fb46",
"s2fieldsofstudy": [
"Medicine",
"Psychology"
],
"extfieldsofstudy": [
"Psychology"
]
} |
11399772 | pes2o/s2orc | v3-fos-license | Father's occupational exposure to carcinogenic agents and childhood acute leukemia: a new method to assess exposure (a case-control study)
Background Medical research has not been able to establish whether a father's occupational exposures are associated with the development of acute leukemia (AL) in their offspring. The studies conducted have weaknesses that have generated a misclassification of such exposure. Occupations and exposures to substances associated with childhood cancer are not very frequently encountered in the general population; thus, the reported risks are both inconsistent and inaccurate. In this study, to assess exposure we used a new method, an exposure index, which took into consideration the industrial branch, specific position, use of protective equipment, substances at work, degree of contact with such substances, and time of exposure. This index allowed us to obtain a grade, which permitted the identification of individuals according to their level of exposure to known or potentially carcinogenic agents that are not necessarily specifically identified as risk factors for leukemia. The aim of this study was to determine the association between a father's occupational exposure to carcinogenic agents and the presence of AL in their offspring. Methods From 1999 to 2000, a case-control study was performed with 193 children who reside in Mexico City and had been diagnosed with AL. The initial sample-size calculation was 150 children per group, assessed with an expected odds ratio (OR) of three and a minimum exposure frequency of 15.8%. These children were matched by age, sex, and institution with 193 pediatric surgical patients at secondary-care hospitals. A questionnaire was used to determine each child's background and the characteristics of the father's occupation(s). In order to determine the level of exposure to carcinogenic agents, a previously validated exposure index (occupational exposure index, OEI) was used. The consistency and validity of the index were assessed by a questionnaire comparison, the sensory recognition of the work area, and an expert's opinion. Results The adjusted ORs and 95% confidence intervals (CI) were 1.69 (0.98, 2.92) during the preconception period; 1.98 (1.13, 3.45) during the index pregnancy; 2.11 (1.17, 3.78) during breastfeeding period; 2.17 (1.28, 3.66) after birth; and 2.06 (1.24, 3.42) for global exposure. Conclusion This is the first study in which an OEI was used to assess a father's occupational exposure to carcinogenic agents as a risk factor for the development of childhood AL in his offspring. From our results, we conclude that children whose fathers have been exposed to a high level of carcinogenic agents seem to have a greater risk of developing acute leukemia. However, confounding factors cannot be disregarded due to an incomplete control for confounding.
Background
Acute leukemias (AL) are the most frequent types of cancer in children under 15 years of age. The highest incidence rates in the world for AL have been reported for Latin American populations, and Mexico City is no exception. From 1996 to 2000, an average incidence rate of 58.4 cases per million children under 15 years of age has been reported for Mexico City [1]. Medical research has not established whether a father's occupational exposures are associated with the development of AL in his offspring. Pertinent studies had the following weaknesses [2][3][4]: 1) Information about occupational exposure was obtained from secondary sources or by using the occupation or the industrial branch as an indicator of the exposure; 2) the interviewed workers either had ignored the substances to which they were exposed or could not remember their past exposures; and 3) when exposure was characterized, only the duration of exposure was taken into account, with no consideration given either to the frequency or intensity of exposure, or to other variables such as the use of personal protective equipment. This has resulted in a misclassification of the exposure. Also, in these studies, when attempting to prove the occupational effect of a specific position or of exposure to a particular substance, the sample sizes have been unsatisfactory [2][3][4]. These are difficult problems to solve, because occupations and exposures to substances associated with childhood cancer are not very frequently found in the general population; therefore, the risks obtained have been inconsistent and inaccurate [5].
In this study, to assess exposure we used a new method, an index that considered all parameters recommended for measuring occupational exposure: industrial branch, specific position, use of protective equipment, substances at work, degree of contact with such substances, and time of exposure [2][3][4]. Even though only a few substances have been identified as having a potential leukemogenic effect, the underlying rationale for developing this new method was that it has not been possible to establish whether such substances are related to the development of childhood leukemia, because the frequency of exposure to each carcinogenic substance is very low. A method that grouped every carcinogenic, or potentially carcinogenic, substance into a single exposure index was expected to overcome this problem of low exposure frequencies. The index yields a grade that permits individuals to be classified according to their level of exposure to known and potential occupational carcinogens, not necessarily those specifically identified as risk factors for leukemia [6]. The aim of this study was to assess the association between the level of the father's occupational exposure to carcinogenic substances and the risk of AL in his offspring through the use of an occupational exposure index (OEI).
Methods
From 1999 to 2000, a case-control study was performed with 193 children with AL and 193 controls consisting of children without AL, who were matched by age, sex, and institution of origin. All children resided in Mexico City and were under 16 years of age. The initial sample-size calculation was 150 children per group, assessed with an expected odds ratio (OR) of three and a minimum exposure frequency of 15.8%.
Cases
In Mexico City, both public and private hospitals treat children with AL. Private hospitals care for fewer than 5% of all children with cancer [7,8]. Of the nine public hospitals that treat children with cancer and were invited to participate in this study, only four allowed us to identify the population base needed to select controls. However, these four are the largest and most important hospitals in Mexico City and account for 88% of all cases treated in public hospitals in the city. All cases were diagnosed through cytochemical analysis of bone marrow aspirates; specific stains were used to differentiate acute lymphoblastic leukemia (ALL) from acute myeloblastic leukemia (AML). During that period, there was a total of 230 cases; 25 were excluded by the hospital in which they were diagnosed, and 12 more were excluded because there was no information about the father (six single mothers, four abandoned mothers, one divorcee and one widow).
Controls
We decided that controls should be selected from secondary-care hospitals that had referred children with AL to tertiary-care hospitals. As in other parts of the world, in Mexico there are three levels of medical care: primary care refers to treatment of patients by the family doctor, secondary care covers the general medical specialties, and tertiary care provides treatment for difficult-to-manage diseases and highly specialized medical attention. The closer the level of care is to the general public (e.g., primary care), the more closely the cases reflect the general population from which they arise. However, we chose children only from secondary-care hospitals for the following reason: because patients are assigned to a first-level clinic according to their address, there was a risk of over-matching on the father's occupation, since some companies have built apartment complexes that concentrate their employees in the same community. In addition, considering that including a hospitalized population would increase the participation rate of the controls, we decided to include children from secondary-care hospitals as controls.
The control group was composed of children admitted for short-stay surgery (hernioplasty, circumcision, orchidopexy) who lived with both biological parents and could be matched to the cases by age (maximum 18-month difference) and gender. When these secondary-care hospitals were visited, 415 children were potentially eligible. However, the parents of 71 patients refused to participate, giving a non-response rate of 17%, and it was not possible to locate the father of 46 patients. Of the remaining 298 controls, only 193 met the two criteria for pair matching by age and sex.
The protocol of the study was approved by the Ethics and Investigation Committee of the Instituto Mexicano del Seguro Social (No. 2003-243-003). The parents of each child signed an informed consent form.
Data collection
Trained and standardized personnel conducted an individual, in-person interview with both parents of the indexed child. A questionnaire, adapted from the United States National Cancer Institute Questionnaire Modules [9], was used to obtain demographic information such as birth weight, gender, age of the father and of the mother during pregnancy, family history of cancer, and socioeconomic status. Each interview with the mother of a child with AL was conducted during the first two months after the diagnosis, and that of the father was completed within the first five months after the diagnosis. In the questionnaire, parents were asked to write what they thought was the reason their children had developed leukemia; in no case did they associate occupation as a cause.
The birth weight of the indexed child was divided into two groups, <3,500 g and ≥ 3,500 g. The parent's age was divided into two groups, >35 and ≤ 35 years of age. For both variables, the cut-off was determined as in other studies [10][11][12]. The level of crowding, part of a validated index in the Mexican population [13], was used as a proxy of the socioeconomic status. The crowding index is also that part of the socioeconomic level, which has been most frequently related to the risk of developing childhood leukemia [14]. The level of crowding, calculated as the number of people divided by the number of rooms in a home, was classified according to the criteria of Bronfman et al. [13]: not crowded, ≤ 3.5 persons per room; crowded, >3.5 persons per room. Parents were asked about cigarette smoking and alcohol consumption because, in a study carried out in Mexico City, it was determined that smoking and alcohol consumption by the parents are associated with the development of childhood AL [15] and that these variables are related to the occupation. The parents were also asked about their exposure to wood dust, fertilizers, pesticides, and hydrocarbons and derivatives thereof; such exposure was designated as "exposure to carcinogenic agents at home". All these factors were selected because, theoretically, they meet the confounding criteria described by Rothman and Greenland [16], who pointed out that a confounding factor must be a risk factor for the disease, must be associated with the exposure under study in the source population, and must not be in the causal pathway.
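As a minimal illustration of the covariate coding described above, the sketch below derives the dichotomous variables in Python; the field names and the example record are hypothetical and do not come from the study data.

```python
# Illustrative sketch of the covariate coding described above (hypothetical field names).

def code_covariates(record):
    """Derive the dichotomous covariates used in the analysis from raw values."""
    persons_per_room = record["household_members"] / record["rooms"]
    return {
        # Crowding index (Bronfman et al.): >3.5 persons per room = crowded
        "crowded": persons_per_room > 3.5,
        # Birth weight groups: <3,500 g vs >=3,500 g
        "high_birth_weight": record["birth_weight_g"] >= 3500,
        # Parental age groups: >35 vs <=35 years at pregnancy
        "mother_over_35": record["mother_age"] > 35,
        "father_over_35": record["father_age"] > 35,
    }

example = {"household_members": 6, "rooms": 2, "birth_weight_g": 3200,
           "mother_age": 29, "father_age": 38}
print(code_covariates(example))
```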
Exposure assessment

Occupation
Through an individual, in-person interview, parents were asked to list all occupations in which they had been involved, for at least six months, during the following four periods: 1) the two years period prior to the conception of the indexed child; 2) during pregnancy; 3) during the breastfeeding period, and 4) after pregnancy either until diagnosis (for all cases) or until the date of the interview (for all controls). Each of the occupations was classified according to the International Standard Classification of Occupations version 1988 (ISCO-88) of the International Labour Organization [17].
Level of occupational exposure to carcinogenic agents

An exposure index (occupational exposure index, OEI) was used in which the following indicators were considered for each position, based on the father's occupational history: type of economic activity, type of specific position, use of personal protective equipment, toxic agents to which the individual was exposed, exposure frequency, exposure intensity, and degree of contact. Two specialists in occupational medicine assigned each of these indicators, for each reported occupation, a pre-established weighted value according to the probability of contact with carcinogenic agents in that occupation. The criteria were as follows:

a) Type of economic activity

According to the review by Savitz and Chen [2], two categories were considered: a value of 0 was given to activities not related to cancer in offspring and a value of 1 to those that were associated.

b) Type of specific position

A value was assigned according to the position occupied within the workplace: 1 for office workers, 2 for supervisors, and 3 for workers directly involved in the process.

c) Use of personal protective equipment

A value of 0 was given to those who used appropriate protective equipment, 1 to those who used inappropriate equipment, and 2 to those who did not use any equipment at all.

d) Exposure to carcinogens

The list suggested by the International Agency for Research on Cancer [18] was used. According to the degree of evidence of carcinogenicity, each group of compounds was weighted: proved carcinogens were assigned a value of 5; probable human carcinogens, 4; possible carcinogens, 3; and others, 0. Substances of unknown composition were arbitrarily assigned a value of 1. Two databases were also used to identify and classify substances: the "Haz-Map Occupational Exposure to Hazardous Agents" [19] and the "Report on Carcinogens, 11th Edition" [20].

e) Daily exposure frequency

This indicator was weighted with a value of 0.2 per hour on day(s) of exposure.

f) Exposure intensity or degree of contact

A value of 1 was given when there had been no contact with the substance; 2, if there had been contact by smell but without handling the substance; and 3, when the individual both smelled and handled the substance.
In order to calculate the OEI for each occupation, the values for industrial branch (a), type of position (b), and use of protective equipment (c) were added together; to this was added the sum, over all substances, of the product of the values for the substance (d), the frequency of exposure (e), and the degree of contact (f), giving the formula OEI = a + b + c + Σ(d·e·f).
When applying the formula and in accordance with the validation, "high exposure" was considered to be ≥ 25 points and "non-high exposure" to be <25 points, where "non-high exposure" includes moderate, low, and null levels.
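As a sketch of the arithmetic described above, the OEI and its dichotomization at the validated cut-off can be written as follows; the data structure and the example values are hypothetical and only illustrate the formula, not the authors' actual scoring procedure.

```python
def compute_oei(branch, position, protection, substances):
    """OEI = a + b + c + sum(d * e * f) over all substances reported for one occupation.

    branch     (a): 0 = industry not associated with childhood cancer, 1 = associated
    position   (b): 1 = office worker, 2 = supervisor, 3 = directly involved in the process
    protection (c): 0 = appropriate equipment, 1 = inappropriate, 2 = none
    substances    : list of (d, e, f) tuples, where
                    d = carcinogenicity weight (5 proven, 4 probable, 3 possible, 1 unknown, 0 other)
                    e = 0.2 per hour of daily exposure
                    f = contact degree (1 no contact, 2 smelled only, 3 smelled and handled)
    """
    return branch + position + protection + sum(d * e * f for d, e, f in substances)

def classify(oei, cutoff=25):
    """Dichotomize the index at the validated cut-off of 25 points."""
    return "high exposure" if oei >= cutoff else "non-high exposure"

# Hypothetical example: a process worker in an associated industry, no protective
# equipment, handling a proven carcinogen for 8 h/day (e = 8 * 0.2 = 1.6).
score = compute_oei(branch=1, position=3, protection=2, substances=[(5, 1.6, 3)])
print(score, classify(score))  # 30.0 high exposure
```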
Instrument validation
Workers (n = 52) from nine different industries were studied [6]. The companies were selected considering the industrial branch, their processes, and their raw materials; they were distributed according to the risk of exposure to carcinogenic agents into low-, medium-, and high-risk categories, with three companies in each category. A sensory recognition of the work environment and the assessment instrument were applied independently to workers from different areas, so as to represent the different positions involved in the process. An exposure index evaluation was carried out to assess exposure to carcinogenic agents, considering the above-mentioned indicators, and each position was assessed at its highest probable level of exposure to carcinogenic agents. The consistency and validity of the index were assessed by a questionnaire comparison, the sensory recognition of the work area, and an expert's opinion. Although the responses to the questionnaire were not validated for every individual interviewed, a sensory analysis of the specific position was done in order to evaluate consistency with the worker's responses. The person who conducted the interview did not know the results of the sensory analysis and vice versa. When checking the questionnaires, the expert classified workers as having high, moderate, or low exposure on two occasions one month apart and showed high consistency, with a weighted kappa of 0.806. The sensory recognition report was also evaluated twice by the expert, with a weighted kappa of 0.973. For this reason, the sensory recognition and its interpretation by an expert were chosen as the gold standard against which to measure the validity of the index obtained from the questionnaire. Receiver operating characteristic (ROC) curves were plotted to identify the best cut-off level for the index. The exposure index did not differentiate between high and moderate degrees of exposure, nor between moderate and low. Sensitivity and specificity, and especially the likelihood ratio, increased when the low and moderate degrees of exposure were combined. The cut-off level distinguishing these degrees of exposure was 25 points, with 100% sensitivity, 93% specificity, and a likelihood ratio of 16.66. One limitation observed was that, when the number of years of exposure was taken into account, the specificity and the likelihood ratio decreased. This did not affect the present study, because the exposure time necessary for a child to develop AL appears to be no greater than two years [21].
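The validation statistics mentioned above (sensitivity, specificity and positive likelihood ratio of the index at candidate cut-offs, against the expert's judgment as gold standard) can be computed schematically as below; the worker scores and expert classifications are invented for illustration.

```python
def diagnostic_metrics(scores, expert_high, cutoff):
    """Sensitivity, specificity and positive likelihood ratio of the OEI at a given cut-off,
    taking the expert's classification from the sensory recognition as the gold standard."""
    tp = sum(s >= cutoff and e for s, e in zip(scores, expert_high))
    fn = sum(s < cutoff and e for s, e in zip(scores, expert_high))
    tn = sum(s < cutoff and not e for s, e in zip(scores, expert_high))
    fp = sum(s >= cutoff and not e for s, e in zip(scores, expert_high))
    sens = tp / (tp + fn)
    spec = tn / (tn + fp)
    lr_pos = sens / (1 - spec) if spec < 1 else float("inf")
    return sens, spec, lr_pos

# Hypothetical data: OEI scores for workers and the expert's high/non-high judgement.
scores      = [40, 31, 27, 24, 18, 12, 9, 30, 22, 6]
expert_high = [1,  1,  1,  0,  0,  0, 0, 1,  0, 0]
for cutoff in (15, 20, 25, 30):
    sens, spec, lr = diagnostic_metrics(scores, expert_high, cutoff)
    print(f"cutoff {cutoff}: sensitivity={sens:.2f} specificity={spec:.2f} LR+={lr:.2f}")
```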
Statistical analyses
Simple, stratified, and logistic regression analyses were performed to calculate ORs with 95% CIs. These analyses were performed for four life periods: the two years before the conception of the indexed child, during pregnancy, during breastfeeding, and the period after breastfeeding until either diagnosis (for the cases) or the date of the interview (for the controls). Analyses were also performed 1) without including the breastfeeding period and 2) including one additional period in which global exposure was analyzed over all four periods.
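For reference, a crude OR with a log-based (Woolf) 95% CI for a single exposure period can be computed from a 2×2 table as sketched below; the counts are hypothetical, and the adjusted estimates reported in the paper come from logistic regression rather than this simple calculation.

```python
import math

def odds_ratio_ci(exposed_cases, unexposed_cases, exposed_controls, unexposed_controls, z=1.96):
    """Crude OR from a 2x2 table with a Woolf-type 95% confidence interval."""
    or_ = (exposed_cases * unexposed_controls) / (unexposed_cases * exposed_controls)
    se_log_or = math.sqrt(1 / exposed_cases + 1 / unexposed_cases +
                          1 / exposed_controls + 1 / unexposed_controls)
    lo = math.exp(math.log(or_) - z * se_log_or)
    hi = math.exp(math.log(or_) + z * se_log_or)
    return or_, lo, hi

# Hypothetical counts for one exposure period (highly vs non-highly exposed fathers).
print(odds_ratio_ci(exposed_cases=50, unexposed_cases=143,
                    exposed_controls=27, unexposed_controls=166))
```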
A complete model was built. It included 1) the father's occupational level of exposure (beta); 2) all the potentially confounding variables (family cancer history, sex of the child, age of the child at diagnosis or interview, weight at birth, crowding level, father's and mother's age at pregnancy, father's and mother's alcohol consumption, father's and mother's tobacco use, and exposure to carcinogenic agents at home) (gammas); and 3) all the potential interactions between the father's occupational level of exposure and the potentially confounding variables (deltas) [22]. By constructing a model in which all the interactions were eliminated and comparing its −2 log-likelihood (−2LL) with that of the complete model, a value of P = 0.64 was obtained; we therefore concluded that the interactions had no influence. The model with all the potentially confounding variables was then compared with a model without these variables; from the result (P = 0.003), we concluded that confounding was present.
Variables with a difference of less than 10% between the crude OR and the adjusted OR were discarded. Several partial models were run until a P > 0.10 was obtained when comparing the −2LL of the complete model with that of the partial model.
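A schematic of the model-comparison step, a likelihood-ratio test between nested logistic models (equivalent to comparing their −2 log-likelihoods), is shown below using statsmodels; the synthetic data frame and variable names are placeholders, not the study dataset.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
from scipy.stats import chi2

def lr_test(full, reduced):
    """Likelihood-ratio test between nested logistic fits (comparison of -2 log-likelihoods)."""
    stat = 2 * (full.llf - reduced.llf)
    df = full.df_model - reduced.df_model
    return stat, chi2.sf(stat, df)

# Synthetic placeholder data standing in for the study dataset.
rng = np.random.default_rng(0)
n = 386
df = pd.DataFrame({
    "high_exposure": rng.integers(0, 2, n),
    "crowded": rng.integers(0, 2, n),
    "paternal_smoking": rng.integers(0, 2, n),
    "home_exposure": rng.integers(0, 2, n),
})
logit = -0.8 + 0.7 * df["high_exposure"] + 0.4 * df["home_exposure"]
df["case"] = rng.binomial(1, 1 / (1 + np.exp(-logit)))

confounders = ["crowded", "paternal_smoking", "home_exposure"]
X_full = sm.add_constant(df[["high_exposure"] + confounders])
X_reduced = sm.add_constant(df[["high_exposure"]])
full = sm.Logit(df["case"], X_full).fit(disp=0)
reduced = sm.Logit(df["case"], X_reduced).fit(disp=0)

stat, p = lr_test(full, reduced)
print(f"LR statistic = {stat:.2f}, p = {p:.3f}")
# Change-in-estimate check: compare adjusted and crude ORs for the exposure term.
print("adjusted OR:", np.exp(full.params["high_exposure"]),
      "crude OR:", np.exp(reduced.params["high_exposure"]))
```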
Population description
In this study, 193 cases and 193 controls were analyzed. There were 163 cases of ALL (84.5%) and the rest of the cases were myeloid leukemias. For the sociodemographic variables, groups were similar; however, the cases showed a greater frequency of being positive for the following variables: family history of cancer, father's cigarette smoking during child's gestation, mother's cigarette smoking during the breastfeeding period, and exposure to carcinogenic agents at home (Table 1). Table 2 shows the various occupations that each father had before conception of the indexed child and for which a non-significant increased risk of developing AL was reported. The only occupation that showed a statistically significant increased risk was insurance agent. Occupations that remained as risk occupations during the four periods were the following: insurance agent, farmer, machinery operator, mechanic, packer, and builder (data not shown).
Exposure level
For this variable, all occupations in each period were considered; a period was classified as "exposed" if the index indicated that the father had been "highly exposed" in at least one of his occupations during that period (Table 3). Logistic regression showed that interactions had no influence but that confounding did exist. The final logistic regression model for the father's occupational exposure level included nine variables: age, gender, institution where the child received treatment, maternal occupation, family history of cancer, weight at birth, socioeconomic status, paternal cigarette smoking, and exposure at home. The adjusted ORs showed a significantly increased risk in all periods, with the exception of the pregestational period, for which the increase in OR was not statistically significant.
Because some fathers reported more than one occupation in the period after the birth of the indexed child, the number of occupations with high exposure to carcinogenic agents was also analyzed. As shown in Table 4, the greater the number of occupations with high exposure, the greater the risk, with a significant trend (p < 0.001).
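The test for trend across the number of highly exposed occupations can be illustrated with a Cochran-Armitage-type statistic, as sketched below; the group counts are hypothetical and chosen only to show the calculation.

```python
import math
from scipy.stats import norm

def trend_test(cases, controls, scores=None):
    """Cochran-Armitage test for trend in proportions across ordered exposure groups."""
    k = len(cases)
    scores = scores or list(range(k))
    n = [c + d for c, d in zip(cases, controls)]
    big_n, big_r = sum(n), sum(cases)
    p = big_r / big_n
    num = sum(s * (r - ni * p) for s, r, ni in zip(scores, cases, n))
    var = p * (1 - p) * (sum(s * s * ni for s, ni in zip(scores, n)) -
                         sum(s * ni for s, ni in zip(scores, n)) ** 2 / big_n)
    z = num / math.sqrt(var)
    return z, 2 * norm.sf(abs(z))

# Hypothetical counts: 0, 1 and >=2 occupations with high exposure after birth.
z, p = trend_test(cases=[120, 50, 23], controls=[160, 27, 6])
print(f"z = {z:.2f}, two-sided p = {p:.4f}")
```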
Discussion
This is the first study in which the father's occupation is assessed as a risk factor for the development of childhood AL in his offspring by using an OEI to carcinogenic agents.
From the first published study by Fabia and Thuy in 1974 [23], which showed the association between the father's occupation and the development of malignant diseases in his offspring, several articles have been published on this topic; however, these studies have been inaccurate and have had inconsistent results [2][3][4]. For this reason, Linet et al. in 2003 [24], when classifying the evidence for risk factors for AL into known, stimulating, and limited, classified parental occupational exposures as a risk factor of limited evidence.
One way to increase accuracy in this type of study was recommended by Ward et al. [25], who pointed out that large sample sizes are preferable when studying specific substances as risk factors, because such exposures are very rare in the general population. The present study did not require a large sample size; instead, the strategy was to reduce measurement variability by applying a strict protocol for measuring exposure through the use of an exposure index [26].
Regarding the possibility of selection bias with the cases, it is important to point out that the children included in this study were drawn from highly specialized, public pediatric hospitals that, on the whole, give treatment to about 95% of the cases of childhood AL in Mexico City [7,8]. Although these hospitals had only 88% of all cases in public hospitals, these cases represented 100% of the cases for which it was possible to identify an appropriate control; that is, for which it was possible to identify the secondary-care hospital that had referred them to the tertiary-care hospital for leukemia diagnosis and treatment.
For the controls, individuals were included from general hospitals under the aegis of the two institutions from which the cases were obtained: Instituto Mexicano del Seguro Social (Mexican Social Security Institute) and Secretaria de Salud (Health Secretariat). The hospitals were located in different parts of Mexico City: the south, north, center-west, and east sections of the city. Controls were not drawn from the same tertiary-care hospitals as the cases, because the diseases those hospitals treat are associated with different risk factors that would make such controls entirely different from the study's base population [26].

Table footnotes: OR, odds ratio; CI, confidence interval. a Only "highly exposed" fathers' values are reported; reference values, corresponding to the "non-highly exposed" fathers, are not shown. b This analysis was adjusted for age, sex, source institution, level of crowding, paternal cigarette smoking, exposures at home, and mother's occupation.
In this study, hospital controls obtained from medical assistance centers were used. Such centers work as reference units for the hospital from which the cases were drawn. If any of the controls were to have developed AL, the case would have gone directly to the case-source hospitals. Moreover, because of the lack of differences between sociodemographic variables among groups, we could conclude that cases, as well as controls, came from the same population base [27].
In regard to interviewer bias, cases and controls were interviewed under similar conditions; however, the cases reported higher frequencies of some non-occupational exposures. We cannot rule out recall bias, but we applied the techniques suggested to minimize it: a structured and standardized questionnaire providing memory aids was used, trained personnel obtained the data as accurately as possible, and hospital controls were used [28,29]. Interviewer bias was limited because the trained interviewers were standardized and did not know the main hypothesis of the study.
The way to obtain the necessary information to estimate the index was through direct questioning, which is considered to increase the participation rate of both cases and controls; direct questioning is also considered to increase the reliability of the information so obtained [30]. There is no evidence available to suppose that fathers from either group would over-report the frequency of occupational exposures. It is possible that fathers could not remember all exposures throughout their work life; however, such lack of precision would be similar for parents from the cases and the controls; therefore, the estimated ORs would be an underestimation of the real OR [31]. It has been recommended that, in epidemiological studies, interviews with fathers of children with cancer be performed before they seek an explanation to their children's disease, because such situation could bias their answers [30]. In this study, we had the advantage that, for 100% of the cases, interviews with the mothers were performed within the first month after the diagnosis had been made and the father's interview within the first five months. Moreover, none of the fathers stated, in any of the questionnaires, that they thought that one of the causes for their children's illness could have been an occupation that they had had.
The prevalence of occupational exposure to carcinogenic agents was from 11.4 to 15.0% among the controls and from 20.2 to 28.0% among the cases. This frequency was high because this index grouped together all the known and potentially carcinogenic substances reported by the worker and not one substance in particular. At present, it is not possible to state that the frequency of exposure to carcinogenic agents in the studied population was greater than that in the rest of the population, due to the fact that no other study has used the instrument that we employed to evaluate exposure. Nonetheless, it is known that about 23% of the working population in the European Union is exposed to carcinogenic substances [32].
Confounding was controlled for by logistic regression analysis. A conditional regression analysis was not performed because none of the matching variables in the study was considered a risk factor for the disease, an implicit criterion for a variable to be considered a true confounder; the matching variables were therefore retained in the analysis [33]. Through logistic regression analysis we determined that the risks found were confounded by the occupation of the mother. Maternal occupation has been studied less than paternal occupation, and associations have also been less consistent [2,4]; however, two recent studies identified an increased risk of AL in the offspring of mothers occupationally exposed to electromagnetic fields [34] or solvents [35] during pregnancy. The cut-off levels used for the mother's age (≤ 35 and >35 years) and for the child's weight at birth (≤ 3,500 and >3,500 g) are those most frequently reported in the medical literature [11,12]. There are no consistent data showing that the mother's age is a risk factor for childhood leukemia in her offspring; Little interpreted this inconsistency as indicating that maternal age may reflect sociological rather than biological influences [10]. A maternal age effect may reflect the increase in the frequency of non-disjunction during oogenesis, which rises with maternal age, and polygenic or imprinting mechanisms may involve a tendency to non-disjunction; these mechanisms may have implications for the etiology of leukemia in children [36]. The most frequently reported birth-weight cut-off is >3,500 g [11]; recent studies have used a cut-off of >4,000 g, but a weight between 3,000 and 3,500 g is considered average [37]. There are no consistent data showing that birth weight is a risk factor for childhood leukemia; however, one proposed mechanism is that a high birth weight may result from high levels of growth factors in utero, and these growth factors may increase the risk of acute leukemia by inducing proliferative stress in the bone marrow [37].
Another factor evaluated as a possible confounding variable was exposure to carcinogenic substances at home. We included this factor because it has consistently been associated with acute leukemia [38,39]; occupational and residential exposures have been the most studied, whereas exposure at home has received less attention [40]. Hydrocarbons associated with pesticides have been studied the most, and this is where the strongest associations have been found [40,41]. The mechanisms by which some hydrocarbons, including those contained in pesticides, increase the risk of developing cancer are not thoroughly understood [42]. Proposed mechanisms include chromosomal damage, disruption of cell division, and reduction in host resistance to cancer-initiating viruses, such as the Epstein-Barr virus, which can provoke a breakdown in immune surveillance [42]. Some compounds in this group of chemicals are immunotoxic [43].
Using occupations and industrial branches as risk factors for the development of cancer in offspring has given rather inaccurate results [44]; for this reason, most recent studies have used occupation and economic activity to infer the substances to which workers are exposed by means of exposure matrices [45]. Some studies that inferred exposures obtained information on occupation from secondary sources. On this point, Swaen et al. [46] have commented that when information is obtained from cancer registries or other secondary sources, false-positive results are possible in studies of cancer and occupational exposure. Such false positives would be reduced when the information obtained permits analysis of dose-response relationships. In the present study we were able to estimate such a trend, finding an exposure gradient for the number of occupations with high exposure in the period after the birth of the indexed child, with statistically significant values in the trend assessment. When the job position was evaluated in a specific way, an association was found with insurance agents; however, this finding could be the result of chance.
The use of experts is another strategy to assess exposure, which is not exempt from misclassification errors [47].
Reiner et al. [48] stated that exposure misclassification has been the main limitation of studies assessing parental occupational exposure as a risk factor for disease in offspring. This has mainly given rise to suggestions to improve the quality of questionnaires and data-collection techniques [49].
Another proposal is the development of more sophisticated methods to assess exposure [24], preferably quantitative ones [50], through the use of estimation models that incorporate the determining factors of the phenomenon (frequency, intensity, duration, etc.); this would increase the accuracy and reliability of the exposure estimate. Two articles were recently published on new instruments to assess occupational exposures in studies of childhood AL. One suggests the use of a questionnaire with job-specific modules to achieve a better description of exposure [48]; the other assesses exposure to pesticides and suggests the use of icons to facilitate the worker's understanding [51]. These instruments attempt to improve the measurement of exposure, but neither evaluates occupational exposure in a quantitative or semi-quantitative way.
The main strength and contribution of the present study is that, through the use of the OEI, we were able to take all these suggestions into account when obtaining the information, integrate them into the study, and combine them in a formula yielding a value that represents the level of exposure.
Another strength of this study was the analysis of the father's exposures during different periods of the indexed child's life. A cohort study that assessed the father's occupational exposure to fungicide as a risk factor for cancer in his offspring classified exposure as low, medium, or high [52]. That study identified increased risks for the highly exposed in the periods 1) prior to conception, 2) during pregnancy, and 3) after the birth of the indexed child, with ORs of 1.7, 1.3, and 1.7, respectively, but with broad CIs and without statistical significance in the trend test. These findings are consistent with the present study, in which farming was an occupation carrying high risk in all four periods. They also accord with the higher incidence of AL in the southwestern part of Mexico City, where agricultural zones remain [53]. In another study that used exposure windows, an association was found only for the father's exposure to plastic materials during the period prior to conception [54]. McKinney et al. found risks only for exposure to exhaust fumes and inhaled hydrocarbon particles during the period prior to conception, which was the only period evaluated [55].
A weak point of this study was the small sample size. Although most of the adjusted ORs were statistically significant, the role of chance cannot be disregarded.
Another weak point is that neither population mixing nor exposure to infections was considered as a possible confounding variable. There is sufficient evidence to suggest that childhood AL may have an infectious etiology [56] and that fathers in certain occupations, with frequent contact with other people around the time of the child's birth, can be a source of contagion for the child [57]. Additionally, the working population may need to migrate from rural to urban areas in search of better jobs, which is why evaluation of population migration has been recommended in studies of paternal occupation [56]. This variable was not evaluated in the present study either; however, when we assessed the number of children who had been born in a rural community and now live in Mexico City (an urban community), we found only seven cases and six controls born in rural communities (OR 1.17; 95% CI 0.38-3.5). With such small numbers, it was not feasible to evaluate whether migration from rural to urban areas differed according to the father's level of exposure to carcinogenic substances, much less according to a specific paternal occupation.
Conclusion
With the results obtained from this study, we concluded that, among the children of fathers exposed to a high level of carcinogenic substances at work, there seemed to be a greater risk of developing AL. However, confounding factors cannot be disregarded due to incomplete control for confounding. | 2018-04-03T00:22:49.293Z | 0001-01-01T00:00:00.000 | {
"year": 2008,
"sha1": "7dca831e390397ce581fb95cd010cfbed1d55d53",
"oa_license": "CCBY",
"oa_url": "https://bmccancer.biomedcentral.com/track/pdf/10.1186/1471-2407-8-7",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "7dca831e390397ce581fb95cd010cfbed1d55d53",
"s2fieldsofstudy": [
"Medicine",
"Environmental Science"
],
"extfieldsofstudy": []
} |
202849225 | pes2o/s2orc | v3-fos-license | Function of Pumilio Genes in Human Embryonic Stem Cells and Their Effect in Stemness and Cardiomyogenesis
Posttranscriptional regulation plays a fundamental role in the biology of embryonic stem cells (ESCs). Many studies have demonstrated that multiple mRNAs are coregulated by one or more RNA binding proteins (RBPs) that orchestrate the expression of these molecules. A family of RBPs, known as PUF (Pumilio-FBF), is highly conserved among species and has been associated with the undifferentiated and differentiated states of different cell lines. In humans, two homologs of the PUF family have been found: Pumilio 1 (PUM1) and Pumilio 2 (PUM2). To understand the role of these proteins in human ESCs (hESCs), we first demonstrated the influence of the silencing of PUM1 and PUM2 on pluripotency genes. OCT4 and NANOG mRNA levels decreased significantly with the knockdown of Pumilio, suggesting that PUMILIO proteins play a role in the maintenance of pluripotency in hESCs. Furthermore, we observed that the hESCs silenced for PUM1 and 2 exhibited an improvement in efficiency of in vitro cardiomyogenic differentiation. Using in silico analysis, we identified mRNA targets of PUM1 and PUM2 expressed during cardiomyogenesis. With the reduction of PUM1 and 2, these target mRNAs would be active and could be involved in the progression of cardiomyogenesis.
INTRODUCTION
Human embryonic stem cells (hESCs) are pluripotent cells derived from the inner cell mass of the blastocyst that have the potential to differentiate into the three germ layers (1-3). In an undifferentiated state, hESCs are characterized by the expression of stemness factors such as OCT4 (POU5F1), SOX2 and NANOG (4). These three transcription factors, which are positively regulated, are responsible for pluripotency maintenance and contribute to the repression of lineage-specific genes (reviewed by 5). When hESCs are stimulated to initiate the differentiation process, expression of genes associated with pluripotency is negatively regulated and germ layer-associated genes begin to be positively regulated (6).
A complex network of gene expression underlies the molecular signaling that will give rise to the adult heart. Cardiomyogenic differentiation is a highly regulated process that depends on the fine regulation of gene expression (7). In vitro cardiomyogenic differentiation of hESCs mimics embryonic development and can be used as a model for cardiac development studies per se and as a model for research ranging from tissue electrophysiology to drug screening (reviewed by 8).

The expression of PUM1 and PUM2 has been observed in hESCs and several human fetal and adult tissues, indicating a possible participation in the maintenance of germ cells (11,12). Furthermore, in mammals, the disruption of PUM proteins promotes defective germline phenotypes (18,19)

and 10 ng/ml human βFGF. The cells were passaged every 3-4 days by enzymatic dissociation using 0.25% trypsin/EDTA. Cardiomyogenic differentiation assays were conducted using an embryoid body (EB) protocol adapted from those previously described (31,32) or a monolayer protocol previously reported (33).
Regarding the EB cardiac differentiation protocol, briefly, 7×10^5 cells/well were plated onto Growth Factor Reduced Matrigel® Matrix (Corning)-coated 6-well dishes.

hours. Then, the medium was replaced with supplemented DMEM, as described above.
After 48 and 72 hours, the medium was collected and centrifuged twice at 141000 x g. The cell pellet was resuspended in 1X PBS and stored at -80 ºC.

Table S1). We generated standard curves for

The immunofluorescence protocol followed as previously described (7). Briefly, monolayer cultures were fixed with 4% paraformaldehyde and rinsed with PBS, followed by

Statistical analysis was performed using GraphPad Prism 7 software. The data sets are expressed as means ± standard deviation. Depending on the data set, an unpaired Student's t-test or a one-way ANOVA followed by Tukey's post hoc test was used.
Differences with p<0.05 were considered statistically significant.
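The comparisons described (unpaired Student's t-test, or one-way ANOVA followed by Tukey's post hoc test) can be reproduced outside GraphPad, for example in Python as below; the expression values and group labels are invented placeholders, not the study's measurements.

```python
import numpy as np
from scipy.stats import ttest_ind, f_oneway
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# Hypothetical relative-expression values for three treatments (e.g., shSc, shPUM1, shPUM2).
sh_sc   = np.array([1.00, 0.95, 1.08, 1.02])
sh_pum1 = np.array([0.55, 0.62, 0.48, 0.59])
sh_pum2 = np.array([0.70, 0.66, 0.74, 0.61])

# Two-group comparison: unpaired Student's t-test.
t, p = ttest_ind(sh_sc, sh_pum1)
print(f"t-test shSc vs shPUM1: t = {t:.2f}, p = {p:.4f}")

# Three-group comparison: one-way ANOVA followed by Tukey's post hoc test.
f, p_anova = f_oneway(sh_sc, sh_pum1, sh_pum2)
print(f"ANOVA: F = {f:.2f}, p = {p_anova:.4f}")

values = np.concatenate([sh_sc, sh_pum1, sh_pum2])
groups = ["shSc"] * 4 + ["shPUM1"] * 4 + ["shPUM2"] * 4
print(pairwise_tukeyhsd(values, groups, alpha=0.05))
```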
To understand the role of PUM1 and PUM2 in hESC maintenance and during cardiomyogenic differentiation, we silenced their expression using short hairpin RNAs. We produced lentiviral particles containing shRNAs that recognize PUM1, PUM2 and

the percentage of cTnT+ cells did not change between the different treatments (Figure 3C).
These results demonstrated that when PUM was silenced, hESCs followed EB cardiac differentiation efficiently, with no statistically significant changes.

We performed a monolayer cardiomyogenic differentiation protocol, as previously described (33) (Figure 4A). In this protocol we transfected hESCs with shSc

Table S2). (22,41), and a compensatory regulation mechanism has been observed when one of these genes is silenced, increasing the expression of the other (43). We evaluated the levels of PUM1 and PUM2 mRNAs after silencing these genes individually, and we did not observe this compensation, at least at the mRNA level. We hypothesized that, due to their high similarity, the shRNA used to knock down one transcript impacted the stability of the other transcript, at least in this cell type.

We observed that the PUM1 and PUM2 silencing altered the mRNA levels of | 2019-09-17T02:46:04.665Z | 2019-08-29T00:00:00.000 | {
"year": 2019,
"sha1": "c640a6a028b526f53963884eaec215b324f0d858",
"oa_license": "CCBY",
"oa_url": "https://www.biorxiv.org/content/biorxiv/early/2019/08/29/751537.full.pdf",
"oa_status": "GREEN",
"pdf_src": "BioRxiv",
"pdf_hash": "a9aeec0a91df7724d6f6696aedd86a0e90d03218",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Biology"
]
} |
3450899 | pes2o/s2orc | v3-fos-license | Recent Advances and Future Prospects in Bacterial and Archaeal Locomotion and Signal Transduction
ABSTRACT The structure and function of two-component and chemotactic signaling and different aspects related to the motility of bacteria and archaea are key research areas in modern microbiology. Escherichia coli is the traditional model organism used to study chemotaxis signaling and motility. However, the recent study of a wide range of bacteria and even some archaea with different lifestyles has provided new insight into the ecophysiology of chemotaxis, which is essential for the establishment of different pathogens or beneficial bacteria in a host. The expanded range of model organisms has also permitted the study of chemosensory pathways unrelated to chemotaxis, multiple chemotaxis pathways within an organism, and new types of chemoreceptors. This research has greatly benefitted from technical advances in the field of cryomicroscopy, which continues to reveal with increasing resolution the complexity and diversity of large protein complexes like the flagellar motor or chemoreceptor arrays. In addition, sensitive instruments now allow an increasing number of experiments to be conducted at the single-cell level, thereby revealing information that is beginning to bridge the gap between individual cells and population behavior. Evidence has also accumulated showing that bacteria have evolved different mechanisms for surface sensing, which appears to be mediated by flagella and possibly type IV pili, and that the downstream signaling involves chemosensory pathways and two-component-system-based processes. Herein, we summarize the recent advances and research tendencies in this field as presented at the latest Bacterial Locomotion and Signal Transduction (BLAST XIV) conference.
The capacity to sense and respond to changes in environmental cues is an essential feature of the prokaryotic lifestyle. As a consequence, bacteria and archaea have evolved an array of different molecular mechanisms that permit the detection of signals in order to generate appropriate cellular responses. These responses are mediated primarily by one- and two-component systems, as well as chemosensory signaling pathways (1)(2)(3)(4). Whereas the former systems mediate changes primarily at the transcriptional level, chemosensory pathways form the basis for chemotaxis, the directed movement of prokaryotes in compound gradients. The study of signaling processes is not only of fundamental interest but may also contribute to tackling one of the central clinical problems, the increasing number of antibiotic-resistant pathogens. A significant amount of data now indicates that interference with signal transduction systems, motility, and chemotaxis can be an alternative strategy to weaken or block pathogens (5,6).
In January 2017, over 130 researchers from around the world met in New Orleans, LA, for edition XIV of the Bacterial Locomotion and Signal Transduction (BLAST) conference. The ability to migrate toward compounds that promote growth is generally considered the major ecophysiological reason for chemotaxis. However, there is now a significant body of data showing that chemotaxis is essential for many beneficial and pathogenic bacteria to recognize and attach to different hosts.
To address these issues, the scientific community has turned to studying chemotaxis in a series of alternative model systems, a representative selection of which is shown in Table 1. These species belong to different taxonomic groups, have different lifestyles, and possess various numbers of chemoreceptors. Some of the insight gained and further questions that arose from these studies are summarized here.
Multiple chemotaxis pathways. In contrast to E. coli, other bacteria possess multiple chemotaxis pathways. A bioinformatic study has indicated that more than half of chemotaxis pathway-containing genomes contain multiple pathways (2). Rhodobacter sphaeroides is the best-studied model bacterium used to investigate such additional chemotaxis pathways. The extensive work of the Armitage laboratory has shown that there is a membrane-bound polar signaling cluster, as well as a cytosolic cluster containing the soluble receptors that are activated by as-yet-unidentified signals. Multiple routes of communication exist between the two pathways because of the action of several CheY and CheB paralogues on both pathways (4). In this respect, several parallels to Pseudomonas aeruginosa exist. Two of its four chemotaxis gene clusters are involved in the che and che2 chemotaxis pathways (13). Data suggest that, in analogy to R. sphaeroides, the transmembrane receptors signal into the che pathway, whereas the soluble Aer-2/McpB chemoreceptor signals into the che2 pathway. Future studies will show to what extent the differential response to cytosolic or extracytoplasmic signals via different pathways is of general relevance.
Chemosensory pathways with nonchemotactic functions. Although most chemosensory pathways appear to mediate chemotaxis, not all do. Other pathways were shown to possess alternative cellular functions (ACF), like modulation of the levels of the second messengers cyclic di-GMP (c-di-GMP) and cyclic AMP (cAMP), or are related to type IV pilus (TFP)-mediated motility (14)(15)(16). This discovery hence raised the question of whether information on pathway function can be obtained by sequence analysis. Bioinformatic studies have classified chemosensory pathways into 19 different groups, of which 17 are associated with chemotaxis, whereas each of the remaining groups is associated with either ACF or TFP motility (2). Single-domain CheY response regulators, composed of a receiver domain, have been associated with chemotaxis (17). However, the CheY homologues with ACF possess additional domains (2) and inspection of such additional domains gives only a glimpse of the underlying complexity of the corresponding signaling processes. In most cases, the CheY receiver domains are fused to multiple domains. Such additional domains include GGDEF and EAL domains for the synthesis and hydrolysis of c-di-GMP, further receiver and histidine autokinase domains, additional sensor domains of the PAS or GAF type, and various combinations thereof (2). The notion that not all chemosensory pathways are associated with flagellum-mediated chemotaxis still needs to be promulgated in the scientific community. For example, there is a significant number of transcriptomic studies that interpret changes in chemosensory signaling genes as changes in bacterial motility without considering the possibility that some chemosensory pathways are not associated with motility. Questions to be addressed in the future are to what degree there is cross talk between the different types of chemosensory pathways and what else these systems might control in cells.
Other chemoreceptors and chemoeffectors. All four E. coli chemotaxis receptors contain a four-helix bundle sensor domain. The molecular mechanism by which ligand binding activates this receptor type has been studied extensively by using the Tar receptor. Aspartate binds with high negative cooperativity to the sensor domain dimer, which in turn causes translational and rotational displacements of the final helix of this domain. These displacements are then relayed to the transmembrane region and transmitted to the other end of the receptor, where they modulate CheA activity (18,19).
However, genome analyses showed that chemoreceptors employ a wide range of different sensor domains. Interestingly, the most abundant domain type in chemoreceptors is not the four-helix bundle but the CACHE domain, in either its monomodular (sCACHE) or its bimodular (dCACHE) conformation (Fig. 1) (20). CACHE domains are composed of a long N-terminal helix followed by either one (sCACHE) or two (dCACHE) primarily β-strand-containing globular modules. The abundance of CACHE domains in chemoreceptors agrees with another study demonstrating that dCACHE domains (previously referred to as dPDC domains) are the predominant sensor domains in histidine kinases (21). In recent years, significant progress in the structural biology of different CACHE domains and cocrystal structures with the bound chemoeffector has revealed the determinants of signal recognition. For example, high-resolution structures were reported for the sensor domains of the H. pylori TlpB receptor in complex with urea (22), a carboxylic acid sensor of Pseudomonas syringae (23) (both sCACHE), the taurine-bound structure of the Vibrio cholerae Mlp37 receptor (24), and the amino acid-complexed sensor domain of Campylobacter jejuni Tlp3 (25) (Fig. 1). Current investigations are aimed at understanding the molecular mechanism by which ligand binding causes activation of these different receptors. In addition, all of the biochemical and structural data available on dCACHE domains indicate that ligands bind to the membrane-distal module. This raises the question of the role of the membrane-proximal module in signaling, which is a topic that is currently being investigated.
Plant root colonization by beneficial bacteria. The colonization of plant roots by many bacteria is mutually beneficial to both organisms. On the one hand, root colonization can promote plant growth or an induction of systemic plant resistance to pathogens, which are both processes of significant agrobiological interest (26). On the other hand, bacteria gain access to the carbon and nitrogen sources present in root exudates. Chemotaxis to root exudates was found to be essential for efficient root colonization by many rhizobacteria (27). Work on different rhizobacteria has allowed the identification of plant signals that are central to colonization-relevant chemotaxis (Fig. 2). In Azospirillum brasilense, an energy taxis receptor was shown to be essential for root colonization (28). The receptor mutant was deficient in chemotaxis to several rapidly oxidizable substrates and terminal electron acceptors like oxygen and nitrate and had a largely reduced capacity to colonize roots. A. brasilense has two chemotaxis pathways, one of which is required for efficient root colonization (29). Other studies have assessed the chemotaxis system of the alfalfa symbiont Sinorhizobium meliloti to germinating seeds. Chemoreceptor single mutants were screened for chemotaxis to exudates and indicated a dominant role for the McpU chemoreceptor (30). Subsequent studies have shown that this receptor recognizes proline, which is abundantly present in root exudates (31). Quaternary amines such as betaine or choline are also secreted by seeds and roots. Webb et al. have previously shown that S. meliloti contains a chemoreceptor that specifically binds such quaternary amines (32). Additional studies underline the central role of amino acid chemotaxis in root colonization. For example, three amino acid-responsive receptors were found to be important in this process in Bacillus subtilis (33). In P. fluorescens, the deletion of the three genes that encode the amino acid chemoreceptors resulted in a significant decrease in root colonization (34).
In the same species, chemotaxis to Krebs cycle intermediates was identified as another important component of root colonization, and similar observations have also been made for Bacillus amyloliquefaciens (35). Furthermore, chemotaxis to GABA and polyamines was associated with root colonization in Pseudomonas putida KT2440 (36,37). Taken together, this knowledge forms the rational basis for attempts to enhance the colonization of plants by beneficial bacteria.
Human pathogens. Chemotaxis is an essential requirement for effective host infection by many animal or human pathogens. Borrelia burgdorferi, the causative agent of Lyme disease, has advanced to be an important model organism used to study the relevance of chemotaxis in infection. This spirochete has a complex life cycle that involves both the tick vector and a mammalian host. During its enzootic life cycle, the bacterium migrates from the midgut to the salivary glands within the infected tick, which allows transmission to the next host during tick feeding. Once the bacteria have entered the mammalian host, they disseminate through the skin matrix to reach a multitude of target tissues. Subsequent feeding of ticks on an infected mammalian host allows the bacteria to return to their arthropod host. The spirochete successfully switches between the different hosts by sensing its current environment to determine its next optimal direction and to evade the host's immune system (38,39). Data demonstrate that chemotaxis is crucial for the dissemination and viability of the spirochete within each host, as well as between mice and ticks (38,(40)(41)(42). For example, a cheA2 mutant was chemotactically unresponsive to attractants and failed to infect mice (40). Another study showed that a cheY3 mutant was unable to reverse direction and failed to disseminate from the skin matrix to distant tissues or migrate from an infected tick to the murine host (38). The second chemotaxis response regulator, CheY2, does not appear to affect motility or chemotaxis despite having all of the domains and conserved amino acid residues seen in a classical CheY protein; however, ΔcheY2 mutant cells were not able to establish persistent infection in mice by needle inoculation or tick bite. CheY2 is therefore thought to be a virulence determinant (43). Studies are under way to characterize the ligand profiles of the six chemoreceptors of B. burgdorferi (M. A. Motaleb, unpublished data). These studies will provide crucial insight into the signals that trigger chemotaxis in the different tissues the bacterium encounters during its life cycle.
Recently, further evidence has accumulated showing that pathogenic bacteria have evolved specific chemotactic mechanisms to sense host-derived and niche-specific signals in order to efficiently colonize target tissues. An emerging model organism used to study such mechanisms is Helicobacter pylori, which colonizes the human stomach. This organism has three transmembrane chemoreceptors (TlpA, TlpB, and TlpC) and one cytoplasmic chemoreceptor (TlpD) that together feed into a single chemosensory pathway (44). Mounting evidence suggests that signaling through all four chemoreceptors is necessary for efficient colonization of the gastric epithelium (45). It has been shown that H. pylori exhibits chemotaxis toward metabolites emanating from the human gastric epithelium and that urea is the primary host-derived metabolite that attracts the bacterium (46). Urea is sensed by TlpB, and its very high affinity enables responses to concentrations as low as 50 nM.
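To give a feel for what an affinity in this range implies, and assuming purely for illustration a simple 1:1 binding equilibrium with an effective dissociation constant of about 50 nM (the actual sensing mechanism may be more complex), the fraction of occupied sensor domains at a given urea concentration can be estimated from the standard binding isotherm, as in the short sketch below.

```python
# Minimal sketch: fractional occupancy of a receptor assuming simple 1:1 binding.
# The Kd value below is an illustrative assumption, not a value taken from the
# TlpB structural or biochemical studies cited in the text.

def fractional_occupancy(ligand_nM: float, kd_nM: float = 50.0) -> float:
    """Return the fraction of receptors bound at a given ligand concentration."""
    return ligand_nM / (ligand_nM + kd_nM)

if __name__ == "__main__":
    for conc in (5, 50, 500, 5000):  # nM
        print(f"{conc:>5} nM urea -> occupancy {fractional_occupancy(conc):.2f}")
```

Under this simplified picture, even micromolar urea would keep the sensor essentially saturated, which is consistent with the idea that such a high-affinity receptor can respond to the very low ligand concentrations found away from the epithelium.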
Another model organism for studying chemotaxis in gastrointestinal pathogenesis is C. jejuni. Li et al. showed that the bacterium exhibits chemotaxis to bile in general, as well as to its major component sodium deoxycholate (SDC) (47). Mutants lacking either the Tlp3 or the Tlp4 chemoreceptor showed decreased SDC chemotaxis and a reduced ability to colonize the jejunal mucosa. A double mutant deficient in both receptors completely lacked the ability to colonize the mucosa. These data suggest that chemotaxis to bile and SDC is necessary for efficient C. jejuni infection (47). Another study has led to the identification of a galactose receptor in invasive C. jejuni strains, and receptor inactivation has resulted in a significant decrease in virulence (48). Understanding the specificity of these chemotactic reactions may provide the basis for the development of therapeutic strategies to reduce host colonization.
EXPANDING THE TOOLBOX
Detailed insights into the structure and function of microbial machineries that are involved in locomotion, sensing of environmental signals, and cellular behavior are increasingly gained by employing new technologies. New tools are also used to determine the characteristic biophysical forces that influence cells in their three-dimensional (3D) environments.
High-speed digital holographic microscopy. Flow fields around cells are now being studied in detail by high-speed digital holographic microscopy (49,50). This technique is based on the interference patterns around particles, which change their size and appearance depending on the axial (z) position of the object in the imaging field. This allows the accurate localization and tracing of particles in a 3D volume. The presence of tracer beads can make the flow fields around bacteria visible. This technique provides a new tool for studying how biophysical forces influence the formation of nascent biofilms. Comparison with theoretical models reveals that the flow field around attached cells differs from that around swimming cells (51). While the flow field around swimming cells behaves like a dipole and decays rapidly with distance (1/r²), the one around surface-attached cells behaves similarly to a Stokeslet, which decays significantly more gradually (1/r). This research provides testable models of how flow fields and shear forces together influence the behavior of as-yet-unattached cells approaching surface-attached cells (N. Farthing, M. Bees, and L. Wilson, unpublished data).
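The practical difference between these two decay laws can be made concrete with a small numerical sketch. The snippet below simply evaluates a 1/r² (dipole-like, swimming cell) and a 1/r (Stokeslet-like, attached cell) scaling at increasing distances; the amplitude prefactor and the distance units are arbitrary placeholders, not measured flow parameters.

```python
# Minimal sketch comparing the distance scaling of the flow field around a
# free-swimming cell (~1/r^2, force-dipole-like) and a surface-attached cell
# (~1/r, Stokeslet-like). The prefactor A is an arbitrary placeholder, not a
# measured flow amplitude.

A = 1.0  # arbitrary amplitude (placeholder)

def dipole_flow(r: float) -> float:
    return A / r**2

def stokeslet_flow(r: float) -> float:
    return A / r

for r in (1, 2, 5, 10, 20):  # distance in cell-body lengths (illustrative units)
    print(f"r={r:>2}: dipole ~ {dipole_flow(r):.4f}, stokeslet ~ {stokeslet_flow(r):.4f}")
```

Already at ten body lengths the dipole-like field has fallen by two orders of magnitude while the Stokeslet-like field has decayed only tenfold, which is why attached cells are expected to influence approaching swimmers over much longer ranges.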
Cryo-EM. New tools also provide different insights into the structure and function of cellular machineries involved in environmental sensing, motility, and surface attachment. Cryoelectron microscopy (cryo-EM) provides the means to study these microbial structures at the molecular level. Here, intact cells are flash frozen without the need for any additional sample preparation or staining procedures. The contrast during imaging in the electron microscope thus originates solely from the biological sample itself. This provides images of nearly native samples with unprecedented detail. One main focus in the field is the study of motility structures in bacteria and archaea, the flagella and archaella, respectively. While rotary motors anchored in the cell envelope propel the cells forward in both motility structures, they are structurally not homologous. The archaellum is a homologue of the bacterial TFP, and in contrast to their bacterial counterparts, archaella rotate to propel cells forward instead of using the extension-and-retraction motion typical of bacterial type IVa pili (T4aP) (52). A homolog of the circadian clock protein KaiC interacts with the base of the archaellar motor and is thought to generate a rotational motion (53).
Single-particle cryo-EM. Electron microscopy of isolated components provides high-resolution maps of individual components of the machinery. The micrographs provide 2D projections of identical particles with different orientations with respect to the electron beam. These different orientations are necessary to computationally generate a 3D, high-resolution density map of the sample. This technique has recently been used to visualize the interaction of the archaellar core protein FlaH inside a ring of FlaX in vitro (54).
Cryo-EM can also be used to analyze the structure of filaments with helical symmetry. This approach has been applied to archaellar filaments by using helical reconstruction. It revealed the archaellum filament structure of Methanospirillum hungatei to a resolution of 3.4 Å and gave new insight into how its structure is distinct from that of the bacterial TFP; the archaellum is heavily posttranslationally modified by primarily O-linked glycans. Furthermore, the filament lacks a central pore. Instead, the extensive interactions between neighboring archaellins may provide the necessary structural support (55).
Another flavor of cryo-EM is electron cryotomography. This technique is used to image intact molecular complexes inside intact cells. Here, individual cells are rotated in the electron microscope while a series of 2D projections is collected. These images are then used to computationally generate a 3D volume of the microbe (56). This method has recently been used to unravel the composition of bacterial flagellar motors in diverse species. To improve the signal-to-noise ratio of the electron density maps, many individual flagellar motors can be computationally averaged together by a method called subvolume averaging. The data sets reveal the in situ structure of the motors at molecular (~4-nm) resolution. The comparison of subvolume averages of wild-type and mutant strains helped to identify known components and revealed new structural components, such as the sheath ring characteristic of the motors of Vibrio alginolyticus (S. Zhu, T. Nishikino, M. Homma, and J. Liu, unpublished data). The combination of genetic methods, electron cryotomography and subvolume averaging can not only be used for structural studies but can also provide functional insight into how differences in torque are related to additional scaffold and stator complexes of different motors (Fig. 3). Altogether, these studies can be used to gain further insight into the evolution of the multiprotein complex of the flagellar motor (57).
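The benefit of subvolume averaging follows from elementary averaging statistics: for N independent subvolumes with uncorrelated noise, the signal-to-noise ratio is expected to improve roughly as the square root of N. The following toy sketch illustrates this scaling on synthetic one-dimensional "density profiles"; it models only the averaging step, not particle alignment or tomographic reconstruction, and all values are invented for illustration.

```python
# Toy illustration of subvolume averaging: averaging N noisy copies of the same
# underlying signal improves SNR roughly as sqrt(N). Synthetic data only.
import numpy as np

rng = np.random.default_rng(0)
signal = np.sin(np.linspace(0, 2 * np.pi, 200))  # stand-in for a motor density profile
noise_sigma = 2.0

def empirical_snr(n_copies: int) -> float:
    copies = signal + rng.normal(0.0, noise_sigma, size=(n_copies, signal.size))
    average = copies.mean(axis=0)
    residual = average - signal
    return signal.std() / residual.std()

for n in (1, 10, 100, 1000):
    print(f"N={n:>4}: empirical SNR ~ {empirical_snr(n):.2f} (expected ~ sqrt(N) scaling)")
```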
Correlative cryo-EM methods. Cryo-EM methods are especially powerful when paired with other techniques. For example, cryo-EM can be paired with light microscopy to correlate localization information from fluorescent light microscopy with the high-resolution information from cryo-EM (58). This method has been used in the past to identify the structures of both membrane-bound and cytoplasmic chemoreceptor arrays (59,60) and is now being used to study the retraction function of T4bP in Caulobacter crescentus (E. R. Wright, unpublished data).
Cryo-EM can also be paired with comparative genomic and bioinformatic methods to gain insights into chemotaxis-related pathways in bacteria (61)(62)(63). Most motile bacteria and archaea contain a chemotaxis system that controls the cell's flagellar or archaellar motility, respectively. Additionally, more than half of all chemotactic microbes have additional operons that contain genes with high homology to canonical chemotaxis genes (2). However, these gene products appear to control cell functions unrelated to chemotaxis behavior and, as discussed in further detail above, the structure and function of these systems are still poorly understood. Comparative genomics permits the classification of these additional systems into groups likely to have the same biological function and the determination of which organisms harbor pathways belonging to each group. Electron cryotomography of wild-type and mutant strains of multiple organisms was then used to study the structure of these systems and to determine which genes are indispensable for the formation of the related protein cluster in vivo. In addition, this method provides the means to determine a range of growth conditions in which the same system is expressed and assembled in different organisms. Taken together, the combination of results from these techniques allows the generation of testable hypotheses about the biological functions and molecular mechanisms of novel, evolutionarily conserved, chemotaxis-related biological pathways.
Single-cell FRET. Fluorescence resonance energy transfer (FRET) microscopy has been widely used to characterize intracellular kinase activity in bacterial chemotaxis in vivo. FRET between fluorophores fused to the response regulator CheY and the phosphatase CheZ gave great insight into chemotactic signaling dynamics at the population level (64). FRET microscopy has been applied at the single-cell level (65), but the quantitative extraction of signaling parameters was limited to population measurements averaging over hundreds of cells, in which effects of fluctuations are lost. This assay has since been optimized for the measurement of signaling dynamics in single cells over extended times (Fig. 4). First results from single-cell FRET revealed large cell-to-cell variability in many signaling parameters, which most likely result from the stochastic expression of chemotaxis genes (66). In addition to cell-to-cell variability, the baseline network activity in a single cell shows slow temporal noise that is augmented in the presence of the methylation-demethylation enzymes CheR and CheB. These fluctuations likely reflect the stochastic enzyme kinetics of the adaptation system and are not detectable in population level FRET because fluctuations uncorrelated across cells are averaged out. Altogether, these results provide a deeper understanding of how molecular noise of multiple origins propagates through chemotaxis signaling to tune the bacterium's mode of environmental exploration (66).
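As a purely illustrative sketch of the kind of single-cell readout involved, the fragment below converts synthetic donor and acceptor intensity traces into a ratio time series and normalizes it to a pre-stimulus baseline. The ratio-based readout, the trace shapes, and all numbers are assumptions for illustration and are not taken from the analysis pipelines of the cited FRET studies.

```python
# Minimal sketch: turning donor/acceptor fluorescence traces from a single cell
# into a FRET ratio time series and a baseline-normalized response.
# Synthetic traces and a ratio-based readout are used purely for illustration.
import numpy as np

def fret_ratio(donor: np.ndarray, acceptor: np.ndarray) -> np.ndarray:
    """Acceptor/donor emission ratio; a higher ratio indicates more FRET."""
    return acceptor / donor

def normalized_response(ratio: np.ndarray, baseline_frames: int = 50) -> np.ndarray:
    """Express the ratio relative to its pre-stimulus baseline."""
    baseline = ratio[:baseline_frames].mean()
    return ratio / baseline

if __name__ == "__main__":
    t = np.arange(200)
    donor = 1000.0 + 20.0 * np.random.default_rng(1).normal(size=t.size)
    # synthetic drop in acceptor emission after an attractant stimulus at frame 100
    acceptor = 800.0 - 100.0 * (t > 100) + 20.0 * np.random.default_rng(2).normal(size=t.size)
    resp = normalized_response(fret_ratio(donor, acceptor))
    print("mean normalized response after stimulus:", resp[100:].mean())
```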
Fluorescent labeling of filaments. When bacterial filaments were labeled with fluorophores for the first time, an entirely new world of possibilities for the study of bacterial motility opened up (67). Real-time visualization of the flagella of swimming bacteria enabled, for example, the study of polymorphic transformations in the filaments of different bacterial species or the interactions between the filaments of swarming cells (68,69). At the latest BLAST meeting, we heard from two groups who independently used this labeling technique to study how bacteria can back out of a dead end when swimming in a restricted environment (K. Thormann and S. Rainville laboratories, unpublished data). Their observations brought to light new ways in which bacterial filaments can move (a screwing motion and a locked-hook mode). In addition, by sequentially labeling filaments with fluorophores of different colors as they grow outside the cell, it was shown that the rate of flagellum growth decreases with length, as illustrated in Fig. 5 (70). The same collaboration even succeeded in monitoring the growth of flagellar filaments in real time. These results, combined with mathematical modeling, demonstrated that a simple injection-diffusion mechanism controls bacterial flagellar growth outside the cell. Therefore, the previously proposed chain mechanism (71) cannot contribute to filament elongation dynamics because that model predicts a constant growth rate versus length, which is incompatible with new observations.

MD simulations. Oftentimes, experimental techniques provide static snapshots of molecular assemblies. While this information is critical for understanding the structural composition of a biological system, in many cases, this knowledge alone is insufficient to discern function. The combination of such structural data with molecular dynamics (MD) simulation has been shown to provide a powerful approach by which to gain insight into the function of molecular machines (72). Briefly, MD simulation is a computational method that uses an empirically based potential energy function to characterize the chemical and physical interactions between atoms in a molecule, enabling calculation of the forces between individual atoms and ultimately the molecule's conformation over time.
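To make the basic idea concrete, the fragment below sketches velocity-Verlet integration of two particles interacting through a Lennard-Jones potential in reduced units, one of the simplest ingredients of an empirical force field. It is a didactic toy only; production engines such as NAMD use vastly more elaborate force fields, integrators, and parallelization, and none of the parameter values below correspond to a real molecular system.

```python
# Didactic sketch of molecular dynamics: velocity-Verlet integration of two
# particles interacting through a Lennard-Jones potential (reduced units).
# This toy only illustrates "forces from an empirical potential -> trajectory
# over time"; it is not a stand-in for production MD codes.
import numpy as np

epsilon, sigma = 1.0, 1.0   # LJ well depth and size (reduced units)
mass, dt = 1.0, 0.001       # particle mass and time step (reduced units)

def lj_force(r_vec: np.ndarray) -> np.ndarray:
    """Force on particle 1 due to particle 2 for separation vector r_vec = x1 - x2."""
    r = np.linalg.norm(r_vec)
    # F(r) = 24*eps*(2*(sigma/r)^12 - (sigma/r)^6)/r, directed along r_vec
    magnitude = 24 * epsilon * (2 * (sigma / r) ** 12 - (sigma / r) ** 6) / r
    return magnitude * r_vec / r

# two particles at rest, slightly farther apart than the LJ minimum
x = np.array([[0.0, 0.0, 0.0], [1.3, 0.0, 0.0]])
v = np.zeros_like(x)
f = lj_force(x[0] - x[1])
forces = np.array([f, -f])          # Newton's third law

for step in range(1000):
    v += 0.5 * dt * forces / mass   # half-kick
    x += dt * v                     # drift
    f = lj_force(x[0] - x[1])
    forces = np.array([f, -f])
    v += 0.5 * dt * forces / mass   # second half-kick

print("final separation:", np.linalg.norm(x[0] - x[1]))
```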
Recently, a combination of experimental and computational techniques has been applied to chemotaxis arrays (73). Here, crystallographic structures of an individual receptor, CheA, and CheW proteins from Thermotoga maritima were assembled and refined according to cryo-EM maps of the extended E. coli signaling complex by MD flexible fitting (74), a technique based on MD simulation. A series of subsequent simulations with durations of up to 450 ns allowed investigation of the dynamic behavior of the intact array. Most notably, these simulations revealed a characteristic dipping motion of the kinase domain (P4) of CheA, providing testable predictions that were supported by genetic mutations and behavioral analysis, as well as cross-linking experiments (73; J. S. Parkinson, unpublished data). These results demonstrate the power of all-atom MD simulations but highlight the intense computing power and high-performance software required to investigate large multiprotein complexes such as the chemotaxis array. For example, to achieve a simulated time of 450 ns, the all-atom extended array structure (1.25 million atoms) required the use of the highly scalable NAMD code (75) and ~100 graphics processing unit (GPU)-accelerated nodes (16 central processing units plus one GPU per node) on the Blue Waters supercomputer for 360 h. MD simulations are also being used to study the behavior of isolated E. coli receptor trimers of dimers in situ (I. B. Zhulin, unpublished data), as well as transmembrane signaling in single receptors (K. Schulten, unpublished data).
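For scale, the throughput quoted above, 450 ns of simulated time in 360 h of wall-clock time, corresponds to roughly 1.25 ns per hour, or about 30 ns per day of sustained computation on that hardware.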
INDIVIDUAL CELLS VERSUS COLLECTIVE BEHAVIOR
A very exciting and promising area of research is the study of how the behavior of individual cells maps onto collective behaviors at the population level. A good example is the process of fruiting body formation in Myxococcus xanthus during the development stage, which can be induced by starvation. It was found that this density-dependent process closely resembles phase separation in passive systems (76). More specifically, it can be described by the phenomena of coarsening, nucleation and growth, and spinodal decomposition observed in material science. Together, these processes can explain the remarkable uniformity in size and distribution of fruiting bodies that is observed in Petri dishes. Since speed and reversal frequency are controlled by the genetics of individual cells (77), this realization offers the promising opportunity to begin to bridge the gap between individual cell behavior and population behavior.
Experimental work on biofilms has also taught us that variability at the cell level has an impact on collective behaviors. It was observed that E. coli biofilms containing a mixture of motile and nonmotile cells remained intact longer (many weeks) and contained more biomass than biofilms composed of only motile or only nonmotile bacteria. A heterogeneity in motility, caused by spontaneous mutations in the flhD operon, therefore seems to be an advantage for increasing and maintaining a biofilm (78). In studying this relationship between individual and collective behaviors, one is quickly confronted by the notion of noise. Variability is everywhere, in stochastic MD, single-cell response, and collective behavior. How do variations in protein abundance, gene expression, and environmental stimuli affect the behavior of a single cell and its performance? How do these individual actions affect the population? The origin of that variability and how it is controlled and exploited by cells are important areas of research, and many new tools enable their study in exquisite detail.
Monitoring of single motors demonstrated that stochastic fluctuations in (de)methylation reactions in the chemotaxis network generate fluctuations in CheY-P, which in turn cause behavioral variability in a single cell over time (79)(80)(81). Single-cell FRET imaging (described above) enabled the measurement of CheY-P activity in individual cells, showing that these differences give rise to phenotypic variations (65,82). For example, the response to ligand stimuli was found to be highly variable from cell to cell, which can be explained in terms of variability in the Tar/Tsr receptor expression ratio. We have also learned that microbes have evolved effective ways to control that variability. In particular, the CheB phosphorylation feedback loop (which is not needed for perfect adaptation) reduces the cell-cell variability in CheA kinase output (79,80). Indeed, this feedback loop provides robustness in the chemotaxis pathway by reducing fluctuations in CheY-P levels (83). On the other hand, we know that such variability can also be exploited; the same adaptation system that is responsible for precise adaptation introduces large fluctuations in the time domain that lead to extended runs (Levy flights). These extended runs are found to be beneficial in the absence of any gradient, allowing the population to sample a larger volume more effectively (79). Another clear illustration is the observation of individual trajectories of swimming bacteria, which shows substantial cell-to-cell variability (82) in parameters such as swimming speed, average turning angle, and effective rotational diffusion coefficient. Large data sets obtained by high-throughput 3D tracking (84) reveal that the variations in these parameters are not entirely random but display substantial correlations with each other. Theoretical predictions suggest that this coordinated variation maintains a compromise between high drift velocity and high localization performance in chemoattractant gradients (K. M. Taute, S. Gude, S. J. Tans, and T. S. Shimizu, unpublished data).
We also learned from observations of tens of thousands of individual cells "racing" in a microfluidic device that the shape of the distribution of a given phenotype is also important, since the mapping between phenotype (tumble bias) and chemotactic performance (drift speed in a gradient) is nonlinear. In other words, changing the standard deviation of a distribution can have as large an effect on performances as changing its mean. Both the shape and the mean can be independently affected (82), suggesting that both are under selection pressure from evolution (85).
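A toy calculation makes the point about distribution shape concrete: if performance is a nonlinear function of the phenotype, two populations with the same mean phenotype but different spreads achieve different mean performance. The peaked mapping used below is an arbitrary placeholder, not the empirically measured relationship between tumble bias and drift speed.

```python
# Toy illustration: with a nonlinear phenotype-to-performance mapping, changing
# the spread of the phenotype distribution changes mean performance even when
# the mean phenotype is fixed. The mapping is an arbitrary placeholder.
import numpy as np

rng = np.random.default_rng(42)

def performance(tumble_bias: np.ndarray) -> np.ndarray:
    """Arbitrary peaked mapping: performance is best at an intermediate bias."""
    return np.exp(-((tumble_bias - 0.2) ** 2) / (2 * 0.05 ** 2))

mean_bias = 0.2
for spread in (0.01, 0.05, 0.10):
    biases = np.clip(rng.normal(mean_bias, spread, size=100_000), 0.0, 1.0)
    print(f"std={spread:.2f}: mean performance = {performance(biases).mean():.3f}")
```

Because the mapping is concave around its peak, widening the phenotype distribution lowers the population-average performance even though the mean phenotype is unchanged, illustrating why the shape of the distribution can be under selection in its own right.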
The emerging conclusion is that variability in individual cell behavior (or protein concentration, for example) should no longer be called noise because it seems to be an important part of the function and might even be selected for (86). Recent decades have seen impressive developments in both theory and experimentation in this field. This has greatly improved our understanding and resulted in a shift in how we think about noise and variability.
SURFACE SENSING AND SIGNAL TRANSDUCTION
Another current area of focus in the community is examining signal transduction and motility outside the traditional test tube and swim plates. For both pathogens and environmental microbes, the signal transduction that results upon contact with a surface is critical for the transition from the motile to the sessile state and eventual biofilm formation. This developmental process may involve altered levels of the second messengers cAMP and c-di-GMP or altered gene expression. Ultimately, this transition results in dramatically different cell physiology, including increased antibiotic resistance (87). Because adherence is the first required step in this transition, understanding how bacteria sense surface contact and the mechanisms driving the subsequent phenotypic changes is critical.
Sensing the surface through flagella and TFP. Some bacteria use their motility organelles, such as flagella or TFP, to recognize interaction with surfaces. The concept of motility organelles as mechanosensors was proposed as early as the late 1980s in Vibrio parahaemolyticus (88). Increased external viscosity was found to reduce the rotation of the polar flagellum and thereby increase the expression of lateral flagellar genes (88). More recently, interest in the flagellum as a mechanosensor has expanded to include a breadth of microbes (89,90). There is an emphasis on the changes in flagellar structure and gene expression as a result of a load increase due to surface contact. In E. coli, dramatic load changes lead to an initial reduction in speed, followed by stepwise increases in speed, concomitant with an increase in stator subunits (91). This incorporation of additional stator subunits (MotB) into the preexisting flagellar structure was termed stator remodeling. The stator proteins form the ion channel and work with the rotor protein (FliG) to generate torque. While stator remodeling occurs in response to the mechanical load, it remains unclear how this would contribute to differential gene expression, as seen in Vibrio bacteria (92).
Stator remodeling is not unique to E. coli. The genome of P. aeruginosa encodes two sets of stator proteins, MotAB and MotCD, that generate the torque required for swimming and swarming motility, respectively (93,94). Current studies are focused on the differential role and possible exchange of these stator complexes in the immediate response of P. aeruginosa to load changes resulting from surface contact and adhesion (B. Kazmierczak, unpublished data). The T4aP is another motility organelle found in P. aeruginosa and other bacteria that may participate in surface sensing. These thin filaments extend and retract from the poles of cells, mediating attachment and surface translocation (twitching, walking, and slingshot motility) (95). The production and function of T4aP are controlled through the Chp chemosensory system (96,97). Recently, this chemosensory system was found to have a second function, regulation of intracellular cAMP levels through CyaB (15). Signal transduction through the Chp system is proposed to be mediated through direct PilA-PilJ interactions wherein PilA is the major pilin subunit of the T4aP and PilJ is the sole chemoreceptor of the Chp chemosensory system (16,97). The increase in cAMP is dependent on surface contact, although the mechanism by which the T4aP senses the surface is still unknown. The O'Toole lab is investigating the role of the motility organelles in surface sensing during the transition to biofilm formation (G. A. O'Toole, unpublished data).
Two-component signaling and surface sensing. In B. subtilis, an increased load on the bacterial flagellum activates the two-component signal transduction system DegS-DegU (90). DegU is a transcriptional regulator that, when phosphorylated (DegU~P), controls several different processes, including motility and biofilm formation (Fig. 6). Low levels of DegU~P lead to hyperflagellation in the presence of SwrA. SwrA is the master swarming regulator that accumulates following surface contact. This SwrA accumulation results from a lack of proteolysis by the LonA AAA+ protease/SmiA adaptor (98). While the exact mechanism that relieves proteolysis of SwrA is under investigation (D. Kearns, unpublished data), this surface contact-controlled transcription regulation allows differentiation into swarmer cells. While it is unclear how the signal of an increased load on the flagellum is transduced to the DegS-DegU system, the basal body appears to play an important role. The loss of the flagellar stator (MotB), inhibition of flagellar rotation through overexpression of the clutch protein (EpsE), and tangling of the flagella all resulted in increased phosphorylated DegU through the histidine kinase DegS (90). The consequences of this flagellum-based activation of the DegS-DegU system appear to be far reaching, and the effects on competence are under investigation (D. Dubnau, unpublished data).
The motile-to-sessile transition is also affected by the regulation of two-component signal transduction systems independently of motility/mechanosensors. Agrobacterium tumefaciens is a facultative plant pathogen responsible for crown gall disease through the transfer of its transfer DNA (T-DNA). In response to acidic conditions, the ChvG-ChvI two-component signal transduction system is activated. The activity of this system is repressed at neutral pH because of periplasmic interactions between ExoR and the sensor kinase ChvG (99). ExoR is a periplasmic protein containing tetratricopeptide repeats (100). Derepression of ChvG leads to phosphorylation of ChvI and, surprisingly, the reduction of both motility and biofilm formation. As motility and biofilm formation are thought to be mutually exclusive phenotypes, the mechanisms behind this parallel phenotypic pattern are intriguing. These mechanisms are currently being deciphered (C. Fuqua lab, unpublished data).
In this review, we have painted a broad picture of our progress in understanding how microbes sense and respond to their environments by using two-component systems, chemotaxis, and motility organelles. This field greatly benefits from the expanding array of cutting-edge tools used to study in exquisite detail phenomena ranging from collective behaviors to the molecular scale. This level of understanding is also made possible by rich interactions among theory, modeling, and experiments. We have illustrated the general themes of research by citing a few specific examples, but there are obviously countless other exciting and important results that could not be included. In addition, it would be remiss of us not to acknowledge the work that has come before. Years of dedicated study by the founding members of the BLAST community have resulted in a high level of understanding of E. coli signal transduction, chemotaxis, and motility. This canonical paradigm forms our baseline of knowledge and allows us to better identify and understand the complex variations that occur throughout the microbial community. | 2018-04-03T00:33:45.580Z | 2017-05-08T00:00:00.000 | {
"year": 2017,
"sha1": "b15c3396da4ee2095d5bd695d9c59e9644265fdb",
"oa_license": "CCBY",
"oa_url": "https://jb.asm.org/content/jb/199/18/e00203-17.full.pdf",
"oa_status": "HYBRID",
"pdf_src": "ASMUSA",
"pdf_hash": "0c42ea5986b8cfd67c0fd128dba470972a580416",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Medicine",
"Biology"
]
} |
54798641 | pes2o/s2orc | v3-fos-license | Advances in Parkinson’s Disease: 200 Years Later
When James Parkinson described the classical symptoms of the disease, he could hardly foresee the evolution of our understanding over the next two hundred years. Nowadays, Parkinson's disease is considered a complex multifactorial disease in which genetic factors (either causative or susceptibility variants), unknown environmental cues, and the potential interaction of both could ultimately trigger the pathology. Noteworthy advances have been made in many areas, from the clinical phenotype to the decoding of potential neuropathological features, including genetics, drug discovery and biomaterials for drug delivery; although recent in origin, these fields have evolved swiftly to become the basis of current research into the disease. In this review, we highlight some of the key advances in the field over the past two centuries and discuss current challenges, focusing on exciting new research developments likely to come in the next few years. The importance of pre-motor symptoms and early diagnosis in the search for more effective therapeutic options is also discussed.
A LITTLE BIT OF HISTORY
Two centuries have passed since James Parkinson's Essay on the Shaking Palsy described a handful of patients who showed tremor at rest, bradykinesia and, in some cases, akinesia. In his essay, he characterized the motor symptoms of the disease that now takes his name (Parkinson, 1817). Although this was the first description of the disease as a neurological condition, it was not until 50 years later that new scientific evidence obtained by Jean-Martin Charcot contributed to a definition of the clinical and anatomopathological basis of Parkinson's Disease (PD) (Charcot, 1872). Years later, in 1893, Blocq and Marinescu noticed resting tremor resembling parkinsonian symptoms in a patient. The tremor was due to a tuberculous granuloma on the right cerebral peduncle that was affecting the ipsilateral Substantia nigra pars compacta (SNc) (Blocq and Marinescu, 1893). It was Brissaud a few years later who suggested that the SNc might be the site affected in PD (Brissaud, 1899). Two decades later, Trétiakoff first reported neuropathological changes in the SNc in PD patients. He observed a large loss of neuromelanin in the SNc resulting from the absence of SNc neurons containing this pigment, and also the presence of cytoplasmic inclusions named Lewy bodies (LB) (Trétiakoff, 1919). These structures had been described some years earlier by Friedrich Lewy and, from that moment onward, this feature of PD became the focal point of neuropathological studies on PD (Lewy, 1912). The presence of both loss of dopaminergic neurons in the SNc and LB was established as the anatomopathological hallmark and diagnostic criterion of PD (Postuma and Berg, 2016).
At this point, the diagnostic criteria were established, but the main challenge was to achieve successful treatment. The first neurosurgery of the basal ganglia (BG) to treat PD took place in 1940. Between the late 1950s and the mid-1960s, many discoveries were made about the existence of dopamine (DA) as a neurotransmitter (Montagu, 1957; Carlsson et al., 1958) and its role in the striatum (Bertler and Rosengren, 1959; Carlsson, 1959; Sano et al., 1959). In 1957, Carlsson reported the first evidence for a functional role for DA, describing the effect of reserpine in reducing motor activity in animals, which was reversed by administration of L-3,4-dihydroxyphenylalanine (L-DOPA), a precursor in DA synthesis. DA signaling proved to play a crucial role in motor control by the BG (Carlsson et al., 1957). Soon after this, evidence emerged of the striatal DA deficiency in PD (Sano, 2000). In particular, Ehringer and Hornykiewicz described a deficit in both the striatum and the SNc in brains from parkinsonian patients (Ehringer and Hornykiewicz, 1960). Furthermore, some studies supported the existence of dopaminergic nigrostriatal projections and also revealed that the dorsolateral striatum mainly receives terminals from SNc neurons. This area of the striatum happens to be the most affected in PD (Dahlstroem and Fuxe, 1964; Anden et al., 1965). After these discoveries, the L-DOPA era began. During these years, it was demonstrated that intravenous injection of L-DOPA, as well as small oral doses of L-DOPA in humans, had antiparkinsonian effects (Cotzias, 1968). From that moment, L-DOPA became the gold-standard treatment for PD, since many authors consistently reported a marked improvement in PD with large oral doses of L-DOPA (Hornykiewicz, 2002). Since then, significant progress has been made in the development of new pharmacological and surgical tools to treat PD motor symptoms (Smith et al., 2012).
Another important breakthrough took place in 1983, when Langston and colleagues reported a group of drug users who developed acute parkinsonism after MPTP (1-methyl-4-phenyl-1,2,3,6-tetrahydropyridine) exposure (Langston et al., 1983). These patients developed an acute syndrome indistinguishable from PD. This is because the MPTP metabolite, MPP+, destroys the dopaminergic neurons in the substantia nigra after a series of alterations in the mitochondrial matrix and the electron transport chain. The SNc of Parkinson patients was also described as exhibiting a marked decrease in complex I activity (Davis et al., 1979; Schapira, 1993). The fact that some PD patients have certain polymorphisms in genes that express subunits of complex I suggests that this could be a vulnerability factor in PD (van der Walt et al., 2003).
New models based on MPTP intoxication allowed researchers to ascertain PD hallmarks both in vitro and in vivo (Langston, 2017). Building on the achievements of pharmacological DA treatments, the search for cell-based DA replacement approaches was initiated, with largely disappointing results (Barker et al., 2015). From the surgical and therapeutic point of view, discrete lesions of the BG improved parkinsonism (Meyers, 1942). A monkey model of PD showed improvement of motor signs as a result of the chemical destruction of the subthalamic nucleus (STN) (Bergman et al., 1990), providing evidence of reversal of experimental parkinsonism by STN lesions. In the same year, deep brain stimulation (DBS) of the STN emerged as an effective treatment for PD (Hammond et al., 2007; Rosin et al., 2011; de Hemptinne et al., 2015).
In the late 1990s, due to the improvement and sophistication of genetic analysis techniques, mutations in the SNCA gene, which encodes the alpha-synuclein (α-syn) protein, were identified as the first genetic cause of PD (Polymeropoulos et al., 1997). Once it was clear that SNCA mutations cause parkinsonism, and more than 80 years after the discovery of LB, the α-syn protein was found to be the main component of LB (Spillantini et al., 1997, 1998). Later on, and based on these discoveries, Braak et al. (2003) proposed a pathological staging of the disease. From that time on, genetic studies have revealed many other mutations in other genes related to PD (PINK1, LRRK2, Parkin, DJ1, etc.; see Advances in genetics below). The discovery of different genetic variants affecting the risk of PD has provided the field with a new battery of potential therapies ready to be tested in clinical trials. The initial findings have been followed by intensive research and the identification of several genes linked to PD pathogenesis in the last few years. Developments in genetics and molecular techniques such as CRISPR have enabled the development of new experimental models based on transgenic animals carrying PD-associated mutations. These examples open the door to studies in which genetic animal models can be used to assess the potential role of α-syn aggregation and spreading and to evaluate potential therapeutics, imaging tracers or biomarkers (Dehay et al., 2016; Koprich et al., 2017; Ko and Bezard, 2017; Marmion and Kordower, 2018); cellular models also offer unique opportunities for the identification of therapeutic strategies capable of modulating the disease (Lázaro et al., 2017). These new models join the well-known classic neurotoxin-based animal models, such as MPTP or 6-OHDA, that have provided valuable insight into potential new targets for disease intervention (Blesa et al., 2012; Morissette and Di Paolo, 2018).
At present, appealing theories linking alterations in the gut microbiota to PD are opening new research areas in the search for the etiology of the disease (Sampson et al., 2016). Many efforts are concentrated on decoding the pre-symptomatic phases and on turning scientific progress into disease-modifying therapies for PD (Blesa et al., 2017). In this sense, exciting cutting-edge approaches with less invasive technologies, such as gamma knife or focused ultrasound for the treatment of motor symptoms in PD, have been advanced (Martínez-Fernández et al., 2018) (see New technologies for the diagnosis, clinical assessment and treatment of Parkinson's Disease below) (Figure 1).
ADVANCES IN GENETICS
The etiology of PD remains largely unknown. The majority of patients are classified as idiopathic PD cases, i.e., arising 'spontaneously' or from an unknown cause. Prevalence is estimated at 1% in people above 65 years and increases exponentially in subsequent decades of life (Fahn, 2003); in fact, aging is considered the major known risk factor. Yet, continuous and intense efforts have been undertaken to improve our incomplete comprehension of the disease. In this context, genetic research has played a pivotal role in elucidating the cause of disease, most especially during the last 20 years. Before then, the genetic contribution to PD was unrecognized because classic reports of PD familial clustering or twin concordance studies were scarce and controversial (Farrer, 2006). It was in 1997 that a linkage study first identified unequivocal familial segregation of the missense mutation A53T in the SNCA gene with an adult-onset autosomal-dominant PD phenotype (Polymeropoulos et al., 1997). Subsequently, other pathogenic missense mutations in SNCA were identified, including A30P, E46K, H50Q, G51D, and A53T (Krüger et al., 1998; Zarranz et al., 2004; Lesage et al., 2013; Proukakis et al., 2013). In 1998, another pioneering study reported that the α-synuclein protein was the main component of the proteinaceous aggregates termed Lewy bodies and Lewy neurites, which are found in the soma and neurites of the few surviving dopaminergic neurons of the SNc in PD patients (Spillantini et al., 1997, 1998). The identification of SNCA led to a paradigm shift in the classification of PD patients into monogenic or familial PD (fPD) cases, caused by pathogenic mutations in genes associated with the disease (5-10% of cases), and the vast majority of patients encompassing sporadic PD (sPD) cases (95%). The identification of SNCA was also seminal in setting the basis for the subsequent intense genetic cell and animal modeling of the disease in the lab (Singleton et al., 2003; Chartier-Harlin et al., 2004; Ibáñez et al., 2004). More recently, multiplications of the SNCA locus, duplications and triplications, were found to cause PD with an inverse correlation between gene dose and age-at-onset, but a direct effect on disease severity (Chartier-Harlin et al., 2004; Ibáñez et al., 2004; Singleton et al., 2013). Overall, mutations in SNCA are uncommon in frequency and lead to a DOPA-responsive, early-onset parkinsonism, often severe and with dementia, that is pathologically characterized by nigral neurodegeneration and widespread brainstem and cortical LB pathology.
To date, a total of 23 loci and 19 causative genes have been associated with PD, yet with a certain degree of heterogeneity regarding phenotypes (PD only or PD plus syndromes), age-at-onset (juvenile or adult onset), and inheritance mode (autosomal dominant, recessive or X-linked) (Table 1). Whereas some of the genes associated with the PARK loci have not yet been identified (PARK3, PARK10, PARK12, and PARK16), the pathogenicity of a few PD-associated genes still remains controversial due to novelty or to lack of replication of the original study (UCHL1, GIGYF2, EIF4G1, SYNJ1, TMEM230, and CHCHD2). Yet, mutations in the remaining genes, although rare in frequency, have been unequivocally established as PD-causative and account for the majority of autosomal dominant (SNCA, LRRK2, HTRA2, and VPS35) or recessive PD cases (PRKN, PINK1, DJ-1, ATP13A2, PLA2G6, FBXO7, DNAJC6, and VPS13C). Among the dominant genes, the simultaneous identification by two groups, using linkage analysis, of mutations in the leucine-rich repeat kinase 2 gene (LRRK2) in PD families with adult-onset autosomal-dominant inheritance (Paisan-Ruiz et al., 2004; Zimprich et al., 2004) represented another major milestone in PD research. Subsequently, three different groups identified in parallel the mutation G2019S in the kinase domain of LRRK2 as the most common pathogenic variant of LRRK2-associated PD (Di Fonzo et al., 2005; Hernandez et al., 2005; Kachergus et al., 2005); remarkably, this variant is found not only in monogenic cases but also in sPD cases lacking Mendelian segregation. The LRRK2-associated PD form uniquely resembles common sPD at the clinical and neuropathological levels, yet with slight clinical differences (Marras et al., 2016; Pont-Sunyer et al., 2017) and occasionally pleomorphic pathology (Zimprich et al., 2004). Of note, the penetrance of the G2019S mutation is limited but rises progressively with age (Healy et al., 2008; Marder et al., 2015) and has been shown to be modified by additional factors such as genetic risk polymorphisms and other still unknown factors (Trinh et al., 2016; Fernández-Santiago et al., 2018). Moreover, mutations in HtrA serine peptidase 2 (HTRA2) (Strauss et al., 2005) and vacuolar protein sorting 35 (VPS35) (Vilariño-Güell et al., 2011; Zimprich et al., 2011) are responsible for typical L-DOPA-responsive PD, although no neuropathological data are available yet. On the other hand, mutations in the recessive genes, including parkin (PRKN), PTEN-induced putative kinase 1 (PINK1) and DJ-1, are causative of early-onset parkinsonism sharing largely identical clinical phenotypes but distinct neuropathology. PRKN-associated PD is characterized by pure degeneration in the SNc and locus coeruleus without LB pathology and occasional Tau inclusions (Schneider and Alcalay, 2017), whereas PINK1 mutations lead to nigral neurodegeneration with LB and neurites (Samaranch et al., 2010), and DJ1-associated pathology includes severe degeneration in the SNc and locus coeruleus with diffuse LBs and axonal spheroids (Taipa et al., 2016).
In addition, pathogenic mutations in the genes ATPase 13A2 (ATP13A2) (Bras et al., 2012), phospholipase A2 (PLA2G6) (Gregory et al., 2008), F-Box protein 7 (FBXO7) (Shojaee et al., 2008), DNA J Heat Shock Protein Family (Hsp40) Member C6 (DNAJC6) (Edvardson et al., 2012) and Vacuolar Protein Sorting 13 Homolog C (VPS13C) (Lesage et al., 2016) are linked to autosomal recessive, early-onset atypical parkinsonism that often comprises additional clinical features such as pyramidal degeneration, ataxia or dementia, with or without LBs. Overall, although the relative contribution of pathogenic Mendelian genes to overall PD is limited, genetic research in PD has been instrumental since it has uniquely permitted the identification of disease molecular alterations, pathophysiological pathways, and candidate therapeutic targets, most of which are believed to be largely common to sPD. Thus, genetic findings in PD have undoubtedly paved the way for tackling the pathology of all PD cases.
In addition to PD-causative mutations, classical candidate gene association approaches and, more recently, large genome-wide association studies (GWAS) have identified common genetic variants in genes such as SNCA, LRRK2, the microtubule-associated protein tau gene (MAPT) or glucosylceramidase beta (GBA) which contribute to increased PD susceptibility (Lill, 2016). The variants in the MAPT (Pastor et al., 2000; Golbe et al., 2001; Caffrey and Wade-Martins, 2007) and SNCA (Botta-Orfila et al., 2011; Cardo et al., 2012; Brockmann et al., 2013) loci showed the strongest association with PD risk across populations and, most importantly, also at the GWAS level (Simón-Sánchez et al., 2009; Bonifati, 2010; Nalls et al., 2014), correlating not only with higher risk but also with disease age-at-onset (Wang G. et al., 2016). On the other hand, common variants in LRRK2 increase the risk of PD only in Asian populations but not in Europeans (Farrer et al., 2007; Lu et al., 2008). In addition, mutations in GBA, which encodes the lysosomal enzyme β-glucocerebrosidase, are causative of the recessive lysosomal storage disorder Gaucher's disease. Both homozygous and heterozygous GBA variants increase the risk of developing PD (Thaler et al., 2017). Moreover, GBA-mutation carriers show a more severe parkinsonism than idiopathic patients, with earlier age-at-onset and more frequent dementia (Thaler et al., 2017). Besides genetics, epigenetic alterations have also been suggested in recent years to play a role in the pathogenesis of PD. Thus, abnormal changes in various epigenetic mechanisms regulating gene expression, such as DNA methylation (Masliah et al., 2013; Coupland et al., 2014; Fernández-Santiago et al., 2015; Pihlstrom et al., 2015), histone modifications (Park et al., 2016) or microRNAs (miRNAs) (Kim et al., 2007), have been linked to disease, thus opening a new avenue of epigenetic research in PD.
FROM MOTOR TO NON-MOTOR SYMPTOMS
Motor disturbances in PD have been widely investigated, leading to better diagnosis and the development of validated rating scales and therapies. However, the non-motor symptoms (NMS) of PD are also of major importance when evaluating the quality of life of patients and the impact on health economics, and they have attracted growing interest in recent years. The incidence of NMS increases with disease duration, and NMS can even precede the motor symptoms or signs by several years. Symptoms such as olfactory dysfunction, REM sleep behavior disorder (RBD), constipation, depression, and pain (Chaudhuri et al., 2006; Tolosa et al., 2009) appear to be clear indicators of a preclinical phase of the disease. This concept is reinforced by studies showing an increased risk for patients with idiopathic RBD or idiopathic hyposmia of developing a synucleinopathy (Boeve et al., 2001; Iranzo et al., 2014; Sakakibara et al., 2014; Postuma et al., 2017). In early phases of the disease, some of these NMS persist in many patients. Up to 21% of patients report pain, depression or anxiety. Importantly, in many cases, patients report greater disturbance from these NMS than from motor symptoms at the beginning of the disease (Gulati et al., 2004; Politis et al., 2010).
We currently know that PD involves alterations in several neurotransmitter pathways, including the cholinergic, noradrenergic, and serotonergic systems (Wolters, 2009;Halliday et al., 2011;Jellinger, 2012;Buddhala et al., 2015). This may link symptoms such as depression to the loss of dopaminergic and noradrenergic transmission in the limbic system, and anxiety and apathy to low dopaminergic transmission (Thobois et al., 2010). On the other hand, an excess of dopaminergic transmission due to DA agonist therapy can prompt some NMS. Impulse control disorders (ICDs) are among the most common side effects of the dopamine replacement therapy used in PD, with an estimated prevalence of 4.9-19% (Voon et al., 2009;Weintraub et al., 2010). ICDs are behavioral addictions involving exaggerated behaviors such as gambling or shopping related to the administration of D2/D3 agonists. Research on this topic remains limited, and preclinical studies are scarce because of the lack of alternatives for pharmacological treatment (Voon et al., 2009). It seems that the greater denervation of the ventral striatum in PD-ICD patients leads to an "over-dose" of ventral areas and limbic pathways when dopaminergic medication is administered (Weintraub, 2009;Voon et al., 2011). Nevertheless, further studies are needed to achieve a better comprehension of the disorder and to develop successful treatments.
Non-motor fluctuations (uncomfortable anxiety, slowness of thinking, fatigue, and dysphoria) are another DA-dependent psychiatric disturbance; they are reported primarily during "OFF" periods and can be reversed with continuous dopaminergic replacement (Chaudhuri and Schapira, 2009).
Nowadays, NMS represent some of the most relevant sources of disability and impaired quality of life in parkinsonian patients, and the recognition of these symptoms has become critical for improvements and advances in the diagnosis of the disease (Chaudhuri et al., 2006). Still, in many cases the NMS of PD are not identified in routine clinical evaluations, since their origin is not obviously related to PD (Shulman et al., 2002).
These circumstances underline the importance of developing effective tools to identify NMS, both for their assessment and for their treatment (Grosset et al., 2007). The development of valuable instruments capable of supporting neurologists at the time of diagnosis would also benefit the development of valid therapeutic strategies (Seppi et al., 2011) (see Diagnosis and clinical assessment devices below). This is reflected in the scarce therapies available for non-motor deficits (Zesiewicz et al., 2010). Currently, dopaminergic treatments are the most broadly used therapies, but they have no impact on those aspects of the disease that are associated with other neurotransmitter deficits. Conversely, the use of anticholinergics, for example, classically worsens the cognitive symptoms of PD, as does deep brain stimulation surgery (Witt et al., 2008).
To sum up, the increasing prevalence of non-motor complications adds considerable complexity and marks a new concept in the landscape of PD. These problems are linked with a marked decrease in patients' quality of life and in the social life of their families. Their etiology is multifaceted and still poorly understood. Thus, specific NMS treatments are required, as current treatment options for NMS in PD remain incomplete and large areas of therapeutic need remain unmet.
NEW TECHNOLOGIES FOR THE DIAGNOSIS, CLINICAL ASSESSMENT AND TREATMENT OF PARKINSON'S DISEASE
In the last decade, new technology-based tools and technology-based therapies have been developed with the objective of refining the diagnosis, clinical assessment and treatment of patients with movement disorders. The development and sophistication of molecular and cellular techniques, as well as extraordinary progress in technology, have marked a milestone in our general understanding of the disease.
Drug Delivery Systems
The clinical use of neuroprotective molecules has been hampered by several issues, and among these, drug delivery to the brain remains a particular challenge. To address these limitations, drug delivery systems and methods that allow enhanced brain delivery of neuroprotective molecules have been investigated. These new technologies offer unprecedented advantages enabling protection of sensitive molecules from degradation and controlled release over days or months. Drug delivery systems can also be engineered to target diseased regions within the body, thereby enhancing the specificity of therapeutics. Therefore, the delivery and efficacy of many pharmaceutical compounds can be improved and their side effects reduced. Among drug delivery systems, microparticles (MPs), nanoparticles (NPs) and hydrogels (HGs) seem to be the most effective in providing neuroprotection, although liposomes and micelles have also been investigated (Figure 2) (Garbayo et al., 2009;Rodríguez-Nogales et al., 2016). MPs and NPs are particulate carrier systems in the micrometer and nanometer size range, respectively. MPs are generally used for the long-term delivery of drugs while NPs are commonly used as carriers of small molecules for targeted and intracellular delivery. On the other hand, HGs are tridimensional polymeric networks that absorb a large amount of water, which becomes their principal component. Formulations can be designed either for local administration into the brain or for systemic delivery to achieve targeted action in the central nervous system. The examples below show that drug delivery systems are in the initial stages of the drug development process, but the potential for using this technology for PD treatment is very high.
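As a rough illustration of what "controlled release over days or months" means in practice, the sketch below evaluates a simple first-order release model for a hypothetical depot formulation; the dose and half-life are assumptions chosen for illustration, not parameters of any formulation cited here.

    # First-order cumulative release from a hypothetical microparticle depot (illustration only)
    import math

    total_dose_ug = 25.0     # encapsulated drug per depot, hypothetical
    half_life_days = 14.0    # release half-life of the formulation, hypothetical
    k = math.log(2) / half_life_days

    for day in (1, 7, 14, 30, 60):
        released = total_dose_ug * (1 - math.exp(-k * day))
        print(f"day {day:3d}: cumulative release = {released:5.1f} ug")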
Drug Delivery Systems for Neurotrophic Factor Therapy
Neurotrophic factors, and glial cell line-derived neurotrophic factor (GDNF) in particular, have been regarded as one of the most promising molecules for PD. In this regard, several delivery systems have been designed focused on increasing GDNF stability and retention in the brain. Several studies have demonstrated the preclinical efficacy of microencapsulated GDNF in different PD animal models (rodents and monkeys) (Garbayo et al., 2009). In those studies, a single injection of microencapsulated GDNF achieved long-term improvement of motor function and restoration of dopaminergic function in parkinsonian monkeys with severe nigrostriatal degeneration. The injectable formulation localized GDNF within the putamen and prevented systemic off-target effects. GDNF showed trophic effects on the nigrostriatal pathway, increasing striatal and nigral dopaminergic neurons. Moreover, microencapsulated GDNF did not elicit immunogenicity or cerebellar degeneration. This example demonstrates that MPs are an efficient vehicle for sustained GDNF delivery to the brain. In another approach, vascular endothelial growth factor (VEGF), a potent angiogenic factor with prosurvival effects in neuronal cultures, was combined with GDNF to enhance the action of the latter (Herrán et al., 2013). A pronounced tyrosine hydroxylase (TH) neuron recovery was observed in the SNc of parkinsonian rats. Later, a combinatorial strategy of NPs containing GDNF and VEGF was locally applied in a partially lesioned rat PD model. Behavioral improvement was observed together with a significant enhancement of dopaminergic neurons both in the striatum and SNc, which corroborates previous work on GDNF and VEGF encapsulation. Interestingly, the synergistic effect of the therapeutic proteins allows dose reduction while still providing neurogenerative/neuroreparative effects. The direct nose-to-brain administration of GDNF-NPs is another promising trend. One of the most recent examples uses nanoencapsulated GDNF in lipid NPs (Hernando et al., 2018). In order to enhance targeted NP delivery to the brain, the nanocarrier surface was modified with a cell-penetrating peptide named TAT. The formulation improved the nose-to-brain delivery of GDNF, thereby improving motor function recovery and GDNF neuroprotective effects when tested in a mouse PD model. An alternative approach to NPs is the use of liposomes. Uptake of the neurotrophic factor to the brain via intranasal delivery is enhanced when GDNF is encapsulated in a liposomal formulation (Migliore et al., 2014). In order to move forward with nose-to-brain delivery strategies, greater formulation retention in the olfactory region needs to be achieved, together with better targeting of specific brain regions. Finally, another promising approach that has been undertaken for GDNF brain delivery is the use of nanoformulations able to cross the blood brain barrier through receptor-mediated delivery. This strategy would allow non-invasive drug delivery to the brain. Based on this concept, neuroprotection has been observed after the intravenous administration of a GDNF nanoformulation (Huang et al., 2009). The NPs improved locomotor activity, reduced dopaminergic neuronal loss and enhanced monoamine neurotransmitter levels in parkinsonian rats. A remaining challenge is to target specific brain areas in order to avoid unwanted side effects.
Besides GDNF, other neurotrophic factors such as basic fibroblast growth factor (bFGF) have been evaluated. One example involves gelatin nanostructured lipid carriers encapsulating bFGF that can be targeted to the brain via nasal administration (Zhao et al., 2014). Overall, the nanoformulation stimulated dopaminergic function in surviving synapses and played a neuroprotective role in 6-OHDA hemiparkinsonian rats. A very recent study took advantage of the neuroprotective properties of Activin B, which was administered to parkinsonian mice using a thermosensitive injectable HG (Li et al., 2016). The biomaterial allowed sustained protein release over 5 weeks and contributed to substantial cellular protection and behavioral improvement.
Drug Delivery Systems for Stem Cell Therapy
In recent years, stem cells have attracted considerable attention as a means of achieving neuroprotection. However, cell therapy has been limited by the low engraftment of the administered cells. By applying a combination of biomaterials, cells and bioactive molecules, brain repair can be facilitated. In an early example, MPs loaded with neurotrophin-3 were used to retain injected adult stem cells in the striatum and to support cell viability and differentiation (Delcroix et al., 2011). When tested in a PD rat model, a potent behavioral recovery was observed together with nigrostriatal pathway protection/repair. Going a step further, BDNF-loaded MPs have been encapsulated in a HG embedded with mesenchymal stem cells for neural differentiation and secretome enhancement (Kandalam et al., 2017). This strategy not only provides neuroprotective BDNF but also stem cells that benefit from that environment by displaying neural commitment and an improved neuroprotective/reparative secretome. Likewise, HGs have also been used to improve dopaminergic progenitor survival and integration after transplantation. A report by T. Wang and coworkers pioneered the development of a composite scaffold made of nanofibers embedded within a xyloglucan HG. The biomaterial was further functionalized with GDNF to improve the niche surrounding the implanted cells (Wang T.Y. et al., 2016). The scaffold enhanced graft survival and striatal re-innervation. A similar strategy was followed by Adil and coworkers, who determined the impact of a heparin/RGD-functionalized hyaluronic acid HG on the survival of embryonic stem cell-derived dopaminergic neurons (Adil et al., 2017). These examples demonstrate the potential of biologically functionalized HGs to improve stem cell delivery. Beyond HGs, the use of NPs as a tool to optimize MSC therapeutics was underlined in a recent study by T. Chung and coworkers, who successfully developed a dextran-coated iron oxide nanosystem to improve the rescuing effect of mesenchymal stem cells (Chung et al., 2018).
In addition to stem cell delivery, biomaterials can also be used to deliver mesenchymal stem cell secretome at the site of injury. By way of example, adipose mesenchymal stem cell secretome has been encapsulated in a biodegradable injectable HG that was able to increase the controlled release of the neuroprotective factors in a PD-relevant experimental context (Chierchia et al., 2017). NPs can also be used to modulate the subventricular neurogenic niche and boost endogenous brain repair mechanisms using microRNAs. Due to the short half-life and poor stability of these molecules, their efficient delivery into cells is a challenge. NPs can provide a shielded environment and controlled release. One example involves microRNA-124, a potent pro-neurogenic factor for neural stem cells which has been nanoencapsulated, demonstrating the feasibility of this approach as well as its efficacy in parkinsonian mice (Saraiva et al., 2016). The nanoformulation promoted not only neurogenesis but also the migration and maturation of new neurons in the lesioned striatum. Specifically, this example illustrates the potential of nanotechnology for improving not only the safety and efficacy of conventional drugs, but also the delivery of newer drugs based on microRNAs to the brain. Overall, these promising results suggest that biomaterials and drug delivery systems are a valid alternative to enhance stem cell neuroprotective properties. Further studies are needed for the advancement of this technology from preclinical studies to clinical trials.
Nanomedicines for Antioxidant Delivery
Mitochondrial damage and oxidative stress have been proposed as the major contributing factors to PD pathogenesis. Accordingly, coenzyme Q10 has been considered a promising molecule in PD management due to its ability to enhance mitochondrial function. However, its efficacy has been hindered by insolubility, poor bioavailability and lack of brain penetration. In order to solve these issues, a nanomicellar coenzyme Q10 formulation able to stop, but not reverse, ongoing neurodegeneration has shown efficacy in a mouse PD model (Sikorska et al., 2014). Moreover, this neuroprotective treatment activated an astrocytic reaction, suggesting that these cells played a significant role in neuron protection. In addition to coenzyme Q10, curcumin counteracts oxidative stress and mitochondrial dysfunction. However, its clinical efficacy has been limited by its poor aqueous solubility, rapid metabolism and inadequate tissue absorption. Piperine has been used as an adjuvant to improve curcumin's bioavailability; thus, the combination of curcumin and piperine seems beneficial. Moreover, nanomedicines could also help to enhance drug transport from the blood to the brain. In one example, both therapeutics were loaded in a lipid-based nanoformulation blended with different surfactants and orally administered in a PD mouse model (Kundu et al., 2016). A higher density of nigral TH + neurons was found in the animals treated with dual drug-loaded NPs, demonstrating that the system was able to cross the blood brain barrier and prevent dopaminergic neuronal degeneration. This may be due to the improved curcumin bioavailability and the synergistic effect exhibited by both drugs. Another strategy to counteract oxidative stress and achieve neuroprotection is the use of nanoencapsulated resveratrol (da Rocha Lindner et al., 2015). The nanoformulation was able to attenuate MPTP-induced lipid peroxidation and prevent the decrease of striatal TH protein in parkinsonian mice. These findings suggest that resveratrol-loaded NPs are a promising nanomedical tool for PD.
Nanomedicines That Interfere With α-syn Expression
Strategies that interfere with α-syn expression in neurons have also received widespread attention. One remarkable approach is the targeted gene therapy proposed by Niu et al. (2017), which provided effective repair in a PD mouse model using magnetic NPs loaded with an shRNA plasmid against α-syn. The multifunctional magnetic NPs were effectively delivered through the blood brain barrier, prevented DA neuron degeneration, as reflected by TH up-regulation and α-syn down-regulation, and inhibited further apoptosis in the brain. Alternatively, suppression of α-syn overexpression has been demonstrated using gold NPs which could load plasmid DNA, cross the blood-brain barrier and target specific cells. For example, the group of Y. Guan achieved successful results in carrying pDNA into neurons, thus inhibiting dopaminergic neuron apoptosis (Hu et al., 2018). These approaches have the potential to suppress α-syn expression, providing a highly efficient treatment for PD.
Focused Ultrasound
In the last few years, the use of focused ultrasound (FUS) therapies has been revolutionizing the treatment of neurological disorders. This non-invasive technique consists of the application of focused acoustic energy (ultrasound) on selected brain areas. MR-guided FUS (MRgFUS) allows computer-calculated targeting and achieves high accuracy with real-time feedback on the effect of the treatment. The first studies using MRgFUS thalamotomy in patients with essential tremor showed a significant clinical reduction in hand tremor (Elias et al., 2016). In PD, MRgFUS is being explored as a way to non-invasively ablate the brain areas responsible for the motor features associated with the disease. In 2014, MRgFUS of the pallidothalamic tract was used in PD patients for the first time, with a significant clinical improvement (Magara et al., 2014). Subsequent studies using MRgFUS in the ventral intermediate thalamic nuclei (Vim) reported a clinically significant reduction in mean UPDRS scores post procedure in PD patients (Schlesinger et al., 2015). In a recent pilot study, MRgFUS unilateral subthalamotomy was reported to be well tolerated and to improve the motor features of noticeably asymmetric PD patients (Martínez-Fernández et al., 2018). The questions of the best target for treating PD symptoms and whether different targets should be chosen for different patients are currently unresolved. Other unanswered questions are the long-term durability of FUS ablation outcomes and the safety and feasibility of bilateral procedures. The possibility of this non-invasive approach, with its immediate and apparently permanent clinical outcome, makes this treatment suitable for an increasing number of patients who are either unable or unwilling to undergo DBS therapy. Large randomized controlled trials are necessary to validate these preliminary findings and to assess the potential use of ablative FUS therapy in the treatment of PD patients. Other applications of FUS under current research are the opening of the blood-brain barrier (BBB) and neuromodulation (Krishna et al., 2017). Low-intensity ultrasound decreased α-syn in PC12 cells (Karmacharya et al., 2017). More recently, using a non-invasive approach combining MRgFUS, intravenous microbubbles and an shRNA sequence targeting α-syn, the immunoreactivity of this protein was decreased in several regions such as the hippocampus, SNpc, olfactory bulb and dorsal motor nucleus (Xhima et al., 2018). This technology could be useful in the near future to alter the progression of LB pathology, in combination with improved early diagnosis of the disease.
Deep Brain Stimulation
Device-aided therapies, such as levodopa-carbidopa intestinal gel (LCIG) infusion, subcutaneous apomorphine pump infusion and deep brain stimulation (DBS), are essential tools in the treatment of advanced PD patients. During the last decade, evidence regarding their safety, validity and efficacy has been obtained in large prospective clinical studies (Antonini et al., 2018). Deep brain stimulation is a surgical therapy that involves the implantation of one or more electrodes in specific regions of the brain. There is substantial and consistent evidence indicating that DBS of both the STN and GPi improves motor fluctuations, dyskinesia and quality of life in advanced PD (Rodriguez-Oroz et al., 2005;Follett et al., 2010). These benefits are maintained for more than 10 years (Zibetti et al., 2011). Additionally, DBS treatment has been evaluated in patients with relatively short disease duration, providing better motor outcomes and quality of life compared to the control group receiving best medical treatment (Tinkhauser et al., 2018).
Deep brain stimulation has notably improved due to the development of new neurosurgical approaches (asleep surgery), devices (microelectrodes, directional electrodes), and programming and stimulation algorithms. Particularly relevant is the implementation of directional electrodes, which allow segmented stimulation. They provide a more accurate therapeutic window and potentially reduce the adverse effects related to DBS (Steigerwald et al., 2016).
The control of fluctuations could be improved and the adverse effects of DBS could be reduced by selective stimulation within a short time window using adaptive DBS (aDBS). aDBS is intended to personalize stimulation by recording local field potentials (LFP) directly from the stimulating electrode; stimulation is only activated when the LFP beta power exceeds a customized threshold. Therefore, it can modulate stimulation according to changes in the LFP beta power. aDBS seems to be more effective than conventional DBS in improving motor scores and controlling levodopa-induced dyskinesias. Further research over more extended time periods and in larger cohorts is needed to confirm the benefit and efficacy of this novel strategy (Meidahl et al., 2017).
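The core control rule of aDBS can be sketched as a simple threshold on beta-band power; the code below is an illustrative outline only, in which the sampling rate, filter band, window length and threshold are assumptions rather than settings of any clinical device.

    # Illustrative beta-power trigger for adaptive DBS (assumed parameters, not device settings)
    import numpy as np
    from scipy.signal import butter, filtfilt

    fs = 1000.0                                                     # LFP sampling rate in Hz (assumed)
    b, a = butter(4, [13 / (fs / 2), 30 / (fs / 2)], btype="band")  # beta band, 13-30 Hz

    def stimulation_on(lfp_window, threshold):
        """Return True if beta-band power in this LFP window exceeds the threshold."""
        beta = filtfilt(b, a, lfp_window)
        return np.mean(beta ** 2) > threshold

    # Example with a synthetic 200-ms LFP window containing a 20-Hz oscillation
    t = np.arange(0, 0.2, 1 / fs)
    lfp = 5e-6 * np.sin(2 * np.pi * 20 * t) + 1e-6 * np.random.randn(t.size)
    print(stimulation_on(lfp, threshold=5e-12))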
Diagnosis and Clinical Assessment Devices
The use of new technology-based tools allows quantitative assessment of the motor function of PD patients. Sensors, video-assessment methods and mobile phone applications are some of the techniques that improve the sensitivity, accuracy and reproducibility of the evaluation of PD patients (Espay et al., 2016). Portable devices that include inertial measurement units (IMUs) measure the orientation, amplitude and frequency of movement, as well as the speed of the part of the body where they are located. IMUs are usually made up of accelerometers and gyroscopes, and occasionally magnetometers. IMUs situated on different parts of the patient's body make a precise record of tremor, bradykinesia, dyskinesias and even gait patterns (Heldman et al., 2014). On the other hand, continual monitoring of motor status in the domestic environment (regarding baseline motor status, motor fluctuations, and benefit of treatment, among other factors) is also possible by using these technology-based tools (Ossig et al., 2016). These new technology-based systems open up an unexpected range of specific and real-time data, thereby offering the prospect of (1) better diagnostic accuracy, (2) more sensitive monitoring of the motor and non-motor symptoms, and (3) more precise adjustment of medical therapies. However, their use in routine clinical practice is limited due to the heterogeneity of the studies, which limits the extrapolation of results, and the high cost of the devices (Sánchez-Ferro et al., 2016).
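As a simple illustration of how such sensor data can be turned into a quantitative measure, the sketch below estimates the dominant tremor frequency from a wrist accelerometer trace; the sampling rate and signal are synthetic assumptions, not recordings from any of the cited devices.

    # Dominant tremor frequency from a synthetic accelerometer trace (illustration only)
    import numpy as np

    fs = 100.0                                    # accelerometer sampling rate in Hz (assumed)
    t = np.arange(0, 10, 1 / fs)
    accel = 0.3 * np.sin(2 * np.pi * 5.0 * t) + 0.05 * np.random.randn(t.size)  # 5-Hz tremor + noise

    spectrum = np.abs(np.fft.rfft(accel - accel.mean()))
    freqs = np.fft.rfftfreq(accel.size, d=1 / fs)
    band = (freqs >= 3) & (freqs <= 12)           # frequency band where parkinsonian tremor typically lies
    dominant = freqs[band][np.argmax(spectrum[band])]
    print(f"dominant tremor frequency ~ {dominant:.1f} Hz")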
CONCLUSION
In the future, population aging in developed countries will increase the burden of neurodegenerative diseases. In the case of PD, where the treatment of symptoms needs to be patient-customized, balancing the control of symptoms, drug dose, presence of side effects and the patient's expectations, clinicians and researchers face a situation in which a synergy of medicine and research is urgently needed. In summary, 200 years after the publication of James Parkinson's essay, our understanding of the disease has made remarkable progress and is still advancing, generating a considerable array of tools. Nowadays, fields such as functional genetics, novel molecular mechanisms, brain imaging and biomarker detection seem to be the major issues guiding our research strategies. Nevertheless, despite the progress made, improved early clinical diagnosis is still necessary and the disease lacks a cure. In this regard, research in drug delivery might provide safer and more effective treatments for PD. Years of research have revealed the need to take into account the role of environmental factors in addition to genetics when studying PD progression. However, further research is needed to decipher the mechanisms by which this pathology spreads from cell to cell within the brain and from other organs to the central nervous system. Importantly, studies should also address early diagnosis (screening) tools, and more information is needed concerning the differential vulnerability of dopaminergic neurons to pathogenic factors.
AUTHOR CONTRIBUTIONS
NDR, AQ-V, EG, IC-C, RF-S, MM, IT-D, MB-P, and JB reviewed the literature, composed and wrote the manuscript. NDR, IT-D, and JB organized the paper. IC-C and RF-S prepared Table 1 | 2018-12-14T14:05:08.127Z | 2018-12-14T00:00:00.000 | {
"year": 2018,
"sha1": "5ee461006f6eefa310b07c700ff2e908686a8f7e",
"oa_license": "CCBY",
"oa_url": "https://www.frontiersin.org/articles/10.3389/fnana.2018.00113/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "5ee461006f6eefa310b07c700ff2e908686a8f7e",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
23743425 | pes2o/s2orc | v3-fos-license | RTN4 3'-UTR Insertion/Deletion Polymorphism and Susceptibility to Non-Small Cell Lung Cancer in Chinese Han Population
Introduction
Lung cancer is one of the most common cancers, with an increasing incidence in the human population. More than one million patients die of lung cancer around the world every year (Hecht, 1999). Lung cancer ranks first in male cancer mortality and ranks second in females, just following breast cancer. Lung cancer is also known as bronchus lung cancer, because nearly all lung cancers originate in the bronchial epithelium. Lung cancer includes two subtypes, non-small cell lung cancer (NSCLC) and small cell lung cancer (SCLC). 80-85% of lung cancer cases are NSCLC (adeno-, squamous cell- and large-cell carcinoma) (Liam et al., 2014). Surgical resection should be the first choice for the treatment of NSCLC. The 5-year survival rate of NSCLC after surgery can be up to 50% before lymph node metastasis occurs. However, most patients are diagnosed with advanced NSCLC at their first surgical treatment, losing the opportunity for radical resection. Therefore, early diagnosis and early treatment still play a pivotal role in the treatment of lung cancer. Genetic variations are thought to lead to different susceptibilities to NSCLC for individuals (Piao et al., 2013). So, finding available molecular genetic markers is important for early diagnosis and treatment.
The RTN4 (reticulon-4) gene, which maps to chromosome 2p12-14 (Yang et al., 2000), plays an important role in the inhibition of axonal regeneration, vascular remodeling, apoptosis and tumor inhibition. The RTN4 gene produces three Nogo isoforms, A, B and C, through differential splicing and varied promoter usage (Oertle et al., 2003). Nogo-A is mainly expressed in the central nervous system; Nogo-B is expressed in various tissues, such as endothelial cells and smooth muscle cells (Acevedo et al., 2004); Nogo-C is highly expressed in the central nervous system, as well as in skeletal muscle (GrandPre et al., 2000). Recently, many researchers have paid much attention to the role of Nogo proteins in nervous tissue, but less to their role in other tissues. Some evidence has shown that Nogo proteins play an important role in apoptosis (Tagami et al., 2000;Li et al., 2001;Chen et al., 2006;Kuang et al., 2006;Tashiro et al., 2013).
Nowadays, more and more researchers have started to focus on polymorphisms of the RTN4 3'-UTR. The 3'-UTR of eukaryotic mRNAs has been shown to regulate gene expression and to be involved in the regulation of translation initiation, mRNA stability and subcellular localization (Gray et al., 1998;Jansen, 2001;Mitchell et al., 2001). rs34917480 is located in the RTN4 3'-UTR, and this CAA insertion/deletion polymorphism is associated with the occurrence of schizophrenia (Novak et al., 2002;Novak et al., 2006). Thus, the polymorphism may affect the expression of RTN4. However, the association between the polymorphism and susceptibility to NSCLC in the Chinese Han population has remained unknown. Therefore, we conducted this case-control study to investigate the association between rs34917480 and NSCLC risk in the Chinese population.
Study populations
411 unrelated non-small cell lung cancer patients (292 males and 119 females) were recruited from Soochow municipal hospital between July 2011 and September 2012. Control subjects were 471 unrelated healthy individuals (338 males and 133 females) from a routine health survey at Soochow municipal hospital during the same period. Control subjects were matched to cases for sex and age at a ratio of 1:1.15. This project was approved by the Soochow University Ethics Committee. All participants signed written informed consent for donating their blood samples.
Determination of genotypes
All blood samples were collected and stored in EDTA-anticoagulant tubes. A Chelex method was used to extract genomic DNA from the blood samples (Walsh et al., 1991). The PCR primers were according to the method of Shi et al. (Shi et al., 2012). PCR was performed in a total volume of 20 μL, including 2 μL 10×PCR buffer, 1.5 mM MgCl2, 0.15 mM dNTPs, 0.5 mM of each primer, 50 ng of genomic DNA, and 1.0 U of Taq DNA polymerase. The PCR conditions were 94°C for 5 min, followed by 35 cycles of 30 s at 94°C, 30 s at 61°C, and 30 s at 72°C, with a final elongation at 72°C for 10 min. The PCR products were analyzed by 6% polyacrylamide gel electrophoresis and visualized by silver nitrate staining. For the CAA polymorphism, the CAA deletion yields a 124-bp band, and the CAA insertion yields a 127-bp band.
About 10% of the samples were randomly selected to perform the repeated assay and the reproducibility was 100%.
Statistical analysis
The Hardy-Weinberg equilibrium in control subjects was tested using a goodness-of-fit χ2 test. Differences in the frequency distributions of genotypes, alleles and the selected demographic variables between cases and controls were evaluated by the χ2 test. The mean ages were compared using the t-test. Logistic regression analyses were conducted to calculate odds ratios (OR) and 95% confidence intervals (95%CI) to evaluate the risk of NSCLC. Multivariate adjustments were made for age, sex and smoking status. We further performed stratification analyses according to age (≤60, >60), sex (male, female) and smoking status (non-smoker, smoker). The genetic models used were as follows: codominant model (AA vs AB and AA vs BB); dominant model (AA vs AB+BB); recessive model (AA+AB vs BB) and overdominant model (AA+BB vs AB), assuming B is the risk allele. p<0.05 was regarded as statistically significant. All data analyses were carried out using SPSS 18.0 statistical software.
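For illustration, the goodness-of-fit test for Hardy-Weinberg equilibrium mentioned above can be computed as in the sketch below; the genotype counts are hypothetical placeholders, not the counts of this study's control group.

    # Hardy-Weinberg goodness-of-fit chi-square test (hypothetical genotype counts)
    from scipy.stats import chi2

    n_ins_ins, n_ins_del, n_del_del = 210, 205, 56   # hypothetical control genotype counts
    n = n_ins_ins + n_ins_del + n_del_del
    p_ins = (2 * n_ins_ins + n_ins_del) / (2 * n)    # ins allele frequency
    p_del = 1 - p_ins

    expected = [n * p_ins**2, 2 * n * p_ins * p_del, n * p_del**2]
    observed = [n_ins_ins, n_ins_del, n_del_del]
    chi_sq = sum((o - e) ** 2 / e for o, e in zip(observed, expected))
    p_value = 1 - chi2.cdf(chi_sq, df=1)             # df = 3 classes - 1 - 1 estimated allele frequency
    print(f"chi2 = {chi_sq:.3f}, p = {p_value:.3f}")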
Subject characteristics
The distributions of characteristics in cases and controls are summarized in Table 1. The final analysis included 411 NSCLC cases and 471 healthy controls. Owing to the frequency-matched study design, there were no statistical differences in the distributions of age (p=0.105), sex (p=0.823) or smoking status (p=0.418) between cases and controls.
Genotype distributions and NSCLC risk
The genotype frequencies of rs34917480 in this analysis are summarized in Table 2. The observed genotype frequencies in controls agreed with Hardy-Weinberg equilibrium (p=0.413). The genotype frequencies were significantly different between the cases and controls (p=0.014). The del allele was more frequent among cases than among controls, and the difference was statistically significant (p=0.008).
Multivariate logistic regression analysis was conducted after adjustment for age, sex and smoking in the genetic models, with the results shown in Table 3. In the dominant model, the del allele of rs34917480 was associated with a significantly increased risk of NSCLC compared with the ins/ins genotype (OR=1.47, 95%CI=1.13-1.92, p=0.004). The ins/del genotype significantly increased lung cancer susceptibility in the codominant model (OR=1.46, 95%CI=1.11-1.93, p=0.007) and in the overdominant model (OR=1.41, 95%CI=1.07-1.85, p=0.014).
Stratification analysis of NSCLC risk
We further calculated the association between rs34917480 and NSCLC risk stratified by variables including age, sex and smoking status. The results are shown in Table 4. The potential association of the rs34917480 del allele with the risk of NSCLC is more evident in older subjects and smokers. For males, significant results were observed in the codominant model (OR=1.43
Discussion
The occurrence of lung cancer is the comprehensive result of gene-environment interactions, and increasing numbers of studies have confirmed that genetic variants of important genes play major roles in susceptibility to lung cancer. In this study, we investigated the association between a genetic variant within RTN4 (rs34917480) and NSCLC risk; as far as we know, this is the first such study conducted on NSCLC. The result showed that rs34917480 increased NSCLC susceptibility in the Chinese population.
The RTN4 gene, containing eight introns and nine exons and located on chromosome 2p12-14, can encode three proteins by differential splicing, named Nogo-A, Nogo-B and Nogo-C (Oertle et al., 2003). In recent years, more and more attention has been drawn to the functions of the RTN4 gene. One study has reported that the absence of Nogo-B enhances apoptosis of hepatic stellate cells and that the overexpression of Nogo-B inhibits apoptosis (Tashiro et al., 2013). However, there is an inconsistent result showing that Nogo-B interacts with Bcl-XL and Bcl-2, thus promoting the localization of Bcl-XL and Bcl-2 on the endoplasmic reticulum (ER) and decreasing their anti-apoptotic activity (Tagami et al., 2000). Some researchers believe that the overexpression of Nogo-B induces cell apoptosis through ER stress and ER-specific signal pathways (Kuang et al., 2006). Strikingly, Nogo-B was claimed to be a potent pro-apoptotic protein in certain tumor cells when ectopically overexpressed. Transient transfection of Nogo-B into carcinoma cell lines (CGL4, SaOS-2) can induce cell apoptosis (Li et al., 2001). Nogo-B induces vascular smooth muscle cell apoptosis by activation of the JNK/p38 MAPK signaling pathway (Zheng et al., 2011). The overexpression of Nogo-B protein can induce apoptosis in cancer cells, but not in normal cell lines (Watari et al., 2003). Nogo-C expressed in HEK 293 cells confers apoptosis by inducing caspase-3 and p53 activation through the JNK-c-Jun-dependent pathway (Chen et al., 2006). Moreover, by transferring mutant p53 protein from the nucleus to the cytoplasm and decreasing the expression of c-Fos and Hsp70 protein, Nogo-C inhibited SMMC7721 cell growth and promoted its apoptosis. Nogo-C is expressed differently in hepatocellular carcinoma and its paracancerous tissues (Chen et al., 2005). Some researchers have shown that knockdown of Nogo-A in cardiomyocytes markedly attenuated hypoxia/reoxygenation-induced apoptosis (Sarkey et al., 2011). Increasing evidence shows that Nogo proteins play an important role in the apoptosis of cells, especially tumor cells.
The 3'-UTR of eukaryotic mRNAs takes part in translation initiation, mRNA stability and localization. The CAA insertion/deletion polymorphism (rs34917480) is located at 4548-4554 of the RTN4 (AY102279) 3'-UTR, and we found that the absence of the CAA allele increases NSCLC risk. This result suggests that the RTN4 3'-UTR is associated with NSCLC risk and that rs34917480 alters the function of the 3'-UTR. Recently, two studies have shown that this polymorphism of the RTN4 3'-UTR was significantly associated with increased cervical squamous cell carcinoma risk (Shi et al., 2012) and uterine leiomyoma (UL) risk (Zhang et al., 2013). This polymorphism within the RTN4 3'-UTR could be a molecular marker for detecting malignancy.
In the subgroup analysis, we also found that the association between rs34917480 and NSCLC risk was more apparent among older subjects, males and smokers. These results indicate that these factors can influence the effect of this SNP site. As the functional role of rs34917480 remains unknown, this investigation provides an experimental basis for further research.
There were some limitations to this study. The relatively small sample size may make the results less stable. In addition, the information on environmental exposure was not detailed, such as explicit cigarette smoking history and drinking consumption.
In summary, we have provided initial evidence that the CAA polymorphism in the RTN4 3'-UTR is associated with non-small cell lung cancer in the Chinese Han population. However, further study will be required to investigate the mechanism of this polymorphism in the development of NSCLC.
Table 1. Characteristics of the Cases and Controls
*SD: standard deviation; a: two-sided χ2 test for distributions between cases and controls
Table 4. Stratification Analysis for Associations Between rs34917480 and Lung Cancer Risk in Genetic Models
*Adjusted by age, sex and smoking | 2018-04-03T04:50:06.585Z | 2014-01-01T00:00:00.000 | {
"year": 2014,
"sha1": "5240e303fbafcfa4d0a0cff4013d39c644193704",
"oa_license": "CCBY",
"oa_url": "http://society.kisti.re.kr/sv/SV_svpsbs03V.do?cn1=JAKO201424635095715&method=download",
"oa_status": "GOLD",
"pdf_src": "ScienceParseMerged",
"pdf_hash": "5240e303fbafcfa4d0a0cff4013d39c644193704",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine",
"Biology"
]
} |
15319651 | pes2o/s2orc | v3-fos-license | Relation Between Erectile Dysfunction and Silent Myocardial Ischemia in Diabetic Patients: A Multidetector Computed Tomographic Coronary Angiographic Study
Introduction Erectile dysfunction (ED) can precede coronary artery disease. In addition, silent myocardial ischemia (SMI) is more common in diabetic patients and is a strong predictor of cardiac events and death. Aim To evaluate the presence of SMI in patients with diabetes and ED using multidetector computed tomographic coronary angiography (MDCT-CA). Methods This study evaluated patients with diabetes and ED without any history of cardiac symptoms or signs. Erectile function was evaluated with the Sexual Health Inventory for Men score, erection hardness score (EHS), and maximal penile circumferential change by an erectometer. MDCT-CA was used for the detection of coronary artery stenosis. Main Outcome Measures Sexual Health Inventory for Men score, EHS, maximal penile circumferential change, and coronary artery stenosis by MDCT-CA. Results Of 20 patients (mean age = 61.45 ± 10.7 years), MDCT-CA showed coronary artery stenosis in 13 (65%) in the form of one-vessel disease (n = 6, 30%), two-vessel disease (n = 2, 10%), and three-vessel disease (n = 5, 25%). Fifty percent of patients showed at least 50% vessel lumen obstruction of the left anterior descending coronary artery, which was the most commonly affected vessel (55%). Fifteen percent (3 of 20) of patients had greater than 90% stenosis, and two of them underwent an immediate coronary angioplasty with stenting to prevent myocardial infarction. Maximum coronary artery stenosis was positively correlated with age (P = 0.016, r = 0.529) and negatively correlated with EHS (P = .046, r = −0.449). Multivariate regression analysis using age and EHS showed that age was the only independent predictor of SMI (P = .04). Conclusion MDCT-CA can be a useful tool to identify SMI in diabetic patients with ED, especially in those of advanced age and/or with severe ED.
INTRODUCTION
Diabetes mellitus is a major public health problem around the world. It is estimated that the number of adults with diabetes will increase by 69% in developing countries and by 20% in developed countries from 2010 through 2030. 1 Most patients with diabetes (90-95%) have type 2 diabetes mellitus. 2 The death rate of diabetic adults is two to four times higher than for non-diabetic adults, 2 with cardiovascular disease (CVD) being the commonest cause of death. 3 The chronic hyperglycemia of diabetes is associated with macrovascular complications, including coronary artery disease (CAD), and microvascular complications that contribute to the pathogenesis of erectile dysfunction (ED). 4 A recent systematic review has interpreted the link between CAD and ED as an interaction of several factors, including cardiovascular risk factors, androgens, and chronic inflammation, which can lead to endothelial dysfunction and atherosclerosis, suggesting ED and CAD might be two different presentations of the same systemic disease. 5 The prevalence of ED in diabetic patients varies from 35% to 90%, with risk factors such as age, diabetes duration, glycemic control, sedentary lifestyle, smoking, and associated comorbidities. 6 A meta-analysis has associated ED with increased risk of CVD events in diabetic patients. 7 Even prediabetes identification in patients with ED has been associated with CVD prediction. 8 Penile color Doppler ultrasound has been recognized as a potential tool for predicting silent myocardial ischemia (SMI) in patients with ED. 9 Patients with SMI exhibit objective findings suggestive of myocardial infarction in the absence of angina or equivalent symptoms. 10 Although the prevalence of SMI is highly variable depending on the targeted population, age, and diagnostic tools, diabetes is associated with a marked increase in SMI prevalence. 11 Several studies have been conducted to screen for SMI in patients with diabetes using different tools with varying sensitivity and specificity, 12 including electrocardiography, 13,14 the ankle-brachial index, 15 nuclear myocardial perfusion imaging studies, 16-18 coronary artery calcium scoring using electron-beam computed tomography or multidetector computed tomography (MDCT), 19,20 or a combination of such tests. 21 MDCT coronary angiography (MDCT-CA) has become a reliable non-invasive imaging modality with high specificity and sensitivity for the evaluation of CAD. 22,23 MDCT-CA has been used to screen patients with asymptomatic diabetes for SMI, 24 providing long-term prognostic value. 25 Some studies have used MDCT-CA to screen for SMI in patients with ED. 26-28 However, no previous studies have used MDCT-CA to screen patients with diabetes and ED.
AIM
This prospective study aimed to evaluate the presence of SMI in diabetic patients with ED using MDCT-CA.
METHODS
A prospective clinical study was conducted in diabetic men with ED seeking treatment at the Men's Health Clinic at Juntendo University Hospital (Tokyo, Japan) from March 2014 through March 2015. The inclusion criteria for the study were the absence of current and/or previous cardiac symptoms and signs. The study design was approved by the ethical and scientific research committee of Juntendo University Hospital (number 14-065). The ethical principles of the Declaration of Helsinki were followed and an informed consent was obtained from all patients. Diagnosis of diabetes was based on criteria of the American Diabetes Association 2013 guidelines. 29 Exclusion criteria included patients with cerebrovascular disease, congestive heart failure, congenital or valvular heart disease, cardiomyopathy, arrhythmia, advanced kidney (creatinine > 1.3 mg/dL) or liver disease, psychiatric disease, history of pelvic trauma, and pelvic surgery.
Initial Evaluation
History taking included a patient's personal history, special habits, duration and type of diabetes, associated medical diseases (hypertension, dyslipidemia), diabetic treatment, and ED history. History of chronic diabetic complications, including retinopathy and neuropathy, was obtained. General examination included weight, height, body mass index, and blood pressure.
Laboratory Investigations
Patients' glycemic control was evaluated by fasting blood glucose level, glycosylated hemoglobin level, and homeostasis model assessment of insulin resistance. Hemoglobin, high-sensitivity C-reactive protein, prostate-specific antigen, and uric acid were evaluated because they could reflect cardiovascular risk burden. Diabetic nephropathy was evaluated by measuring albumin, urine β-microglobulins, serum creatinine, and estimated glomerular filtration rate. Patients with albuminuria (albumin > 30 mg/L) were considered to have nephropathy. A complete lipid profile, including triglyceride, very low-density lipoprotein cholesterol, total cholesterol, high-density lipoprotein cholesterol, low-density lipoprotein cholesterol, ratio of total cholesterol to high-density lipoprotein cholesterol, apolipoprotein A1, and apolipoprotein B, was obtained. Hormonal assessment of total and free testosterone, luteinizing hormone, and follicle-stimulating hormone levels was performed.
Erectile Function Evaluation
Patients' erectile function was evaluated by three validated tools. The first tool was the Sexual Health Inventory for Men (SHIM) questionnaire, which evaluated erectile function during the past 6 months. According to the SHIM score, patients were categorized as having mild ED (17-21), mild to moderate ED (12-16), moderate ED (8-11), or severe ED (1-7). 30 The second tool was the erection hardness score (EHS). According to the EHS, patients were categorized as having optimal erection (grade = 4), suboptimal erection (grade = 3), moderate ED (grade = 2), or severe ED (grade = 1). 31 The third tool was the maximal penile circumferential change (MPCC) using an erectometer (Nippon Medical Products, Asahikawa, Japan) during sleep for three nights. The MPCC measurement has a good correlation with the RigiScan 32 and EHS. 33 The ED cutoff point was an MPCC less than 20 mm, as reported in previous studies. 32
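The scoring cut-offs above can be expressed as a small helper, shown below as an illustrative sketch; the "no ED" branch for SHIM scores of 22-25 is the conventional cut-off and is an assumption, since it is not listed in the text above.

    # Helpers mapping SHIM and EHS values to the ED categories described above (sketch)
    def shim_category(score):
        if score >= 22:
            return "no ED"  # conventional cut-off; assumption, not stated in the text above
        if score >= 17:
            return "mild ED"
        if score >= 12:
            return "mild to moderate ED"
        if score >= 8:
            return "moderate ED"
        return "severe ED"

    def ehs_category(grade):
        return {4: "optimal erection", 3: "suboptimal erection",
                2: "moderate ED", 1: "severe ED"}[grade]

    print(shim_category(14), "/", ehs_category(3))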
Evaluation of SMI
A 64-row MDCT scanner (Sensation Cardiac 64; Somatom, Munich, Germany) was used to evaluate patients. Before MDCT scanning, oral metoprolol was administered to patients to slow their heart rates to lower than 70 beats/min. After electrocardiographic electrodes were connected, patients were asked to hold their breath during the scan. The scan parameters included 0.5-mm slice thickness, 120-kV tube voltage, and 500-mA tube current. MDCT-CA results were evaluated as the presence (positive) or absence (negative) of coronary artery stenosis. Quantitative grading of maximum coronary artery stenosis as minimal (<25%), mild (25-49%), moderate (50-69%), and severe (70-99%) was performed as recommended by the Society of Cardiovascular Computed Tomography. 34 Patients with coronary artery stenosis were classified according to the decrease in luminal diameter as having obstructive CAD (≥50%) or non-obstructive CAD (<50%). 35 The number of stenotic vessels (one, two, or three) was estimated. Also, coronary artery stenosis was classified according to coronary artery nomenclature (right, left main trunk, left anterior descending, and left circumflex coronary arteries). Patients with positive MDCT-CA results were informed and followed up (follow-up data incomplete).
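For clarity, the stenosis grading and the obstructive/non-obstructive classification described above can be written as the following illustrative helper functions; this is a sketch of the stated cut-offs, not code used in the study.

    # Stenosis grading and CAD classification following the cut-offs described above (sketch)
    def stenosis_grade(percent):
        if percent < 25:
            return "minimal"
        if percent < 50:
            return "mild"
        if percent < 70:
            return "moderate"
        return "severe"

    def cad_class(percent):
        return "obstructive CAD" if percent >= 50 else "non-obstructive CAD"

    print(stenosis_grade(62), "/", cad_class(62))   # -> moderate / obstructive CAD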
Statistical Analysis
Correlations between age and maximum stenosis by MDCT-CA and between EHS and maximum stenosis by MDCT-CA were performed using the Pearson correlation test with a two-tailed P value. To identify predictors of SMI, multivariate regression analysis using age and EHS was performed. JMP 11.0 (SAS Institute, Cary, NC, USA) was used for data analysis. A P value less than .05 was considered significant.
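The two analyses named above, a Pearson correlation and a two-predictor regression, can be sketched as follows; the data are synthetic placeholders generated for illustration, not the patient measurements of this study.

    # Pearson correlation and a two-predictor least-squares regression (synthetic data)
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(1)
    age = rng.uniform(45, 80, 20)
    ehs = rng.integers(1, 5, 20).astype(float)               # EHS grades 1-4
    stenosis = 1.2 * age - 4 * ehs + rng.normal(0, 10, 20)   # synthetic maximum stenosis (%)

    r_age, p_age = stats.pearsonr(age, stenosis)             # two-tailed P value
    r_ehs, p_ehs = stats.pearsonr(ehs, stenosis)
    print(f"age: r={r_age:.2f}, p={p_age:.3f}; EHS: r={r_ehs:.2f}, p={p_ehs:.3f}")

    # multivariate model: stenosis ~ intercept + age + EHS
    X = np.column_stack([np.ones_like(age), age, ehs])
    beta, *_ = np.linalg.lstsq(X, stenosis, rcond=None)
    print("intercept, age and EHS coefficients:", beta.round(2))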
MAIN OUTCOME MEASURES
The main outcome measurements were the SHIM score, the EHS, MPCC, and coronary artery stenosis by MDCT-CA.
MDCT-CA showed positive coronary artery stenosis in 65% of subjects. Fifty percent of patients showed obstructive CAD (≥50% lumen obstruction). One-vessel CAD (30%) was the commonest presentation. The left anterior descending coronary artery was the commonest coronary artery with stenosis. Data related to MDCT-CA are presented in Table 2. Fifteen percent (3 of 20) of patients had greater than 90% stenosis, and two of them underwent an immediate coronary angioplasty with stenting to prevent myocardial infarction. Maximum coronary artery stenosis was positively correlated with age (P = .016, r = 0.529; Figure 1) and negatively correlated with EHS (P = .046, r = −0.449; Figure 2). Multivariate regression analysis using age and EHS showed that age was the only independent predictor for SMI (P = .04; Table 3). A representative MDCT-CA result of one patient is shown in Figure 3.
DISCUSSION
The present study examined the magnitude of the effect of ED on cardiovascular status in diabetic patients using the non-invasive diagnostic modality of MDCT-CA. The present study is the first to use MDCT-CA in patients with asymptomatic diabetes and ED; other studies have used MDCT-CA to investigate patients with only diabetes 24,25 or patients with only ED. 26-28 For the relation between ED and CAD, Montorsi et al 35 proposed the artery size hypothesis, which states that because the penile arteries are smaller (1-2 mm) than the coronary arteries (3-4 mm), the penile vasculature is affected sooner by cardiovascular risk factors, which makes ED a predictor of CVD events. The association between ED and CVD events is well established, 36 especially in diabetic patients. 37 Therefore, ED should be considered an independent CVD risk until proved otherwise. 38 In the present study, MDCT-CA depicted coronary artery stenosis in 65% of patients with ED and asymptomatic CAD. The rate of CAD in diabetic patients with concomitant ED is controversial. One study of patients with asymptomatic diabetes screened with MDCT-CA showed stenosis in 36.5% (19 of 52). 24 The present study found that SMI was very common in diabetic patients with ED without any cardiac symptoms and signs. It showed that 50% of patients had significant obstructive CAD (≥50% decrease in vessel lumen) and 25% of patients had three-vessel CAD. This is considered an alarming sign because obstructive CAD and three-vessel CAD were reported as predictors of all cardiac events after more than 5 years of follow-up of 405 diabetic patients. 34 In addition, left anterior descending coronary artery stenosis was reported in 55% of patients, which was associated with the worst prognosis among myocardial infarction types owing to a larger infarct, especially with advanced age. 39 Therefore, ED identification, especially in diabetic patients younger than 60 years, could assist in CVD risk evaluation and decrease the risk of an event. 40 A significant positive correlation was observed between ED severity and maximum coronary artery stenosis by MDCT-CA in diabetic patients. The present results are supported by studies that used MDCT-CA to screen patients with ED. 26,28 In diabetic patients, increasing ED severity was associated with increased total CVD risk, 41 with poor CVD prognosis. 42 Therefore, ED could be used as a warning sign for SMI. The leading interval from ED to CVD events was estimated at 2 to 5 years. 43 In diabetic patients, ED is a predictor of CAD and cardiac events, with a 1.4-fold higher CAD risk compared with diabetic patients without ED. 44 Therefore, ED in diabetic patients is considered an atherosclerosis marker that could assist in the detection of subclinical vascular disorders. 45 In patients with diabetes and ED, SMI has been screened using different stress tests, 46-48 showing that the presence vs absence of ED can improve the sensitivity of screening guidelines for SMI in diabetic patients. 49 In the present study, age was the only predictor for coronary artery stenosis in diabetic patients with ED. This result is consistent with that of a study that showed that increasing age was an independent risk factor for a high Agatston coronary artery calcium score in patients with ED. 28 In another study, ED predicted CAD in patients with type 2 diabetes without clinically evident CVD.
50 A prospective study found that predictors of SMI were diabetes duration, intima-media thickness, and statin therapy at MDCT-CA screening of patients with asymptomatic diabetes. 24 Recently, MDCT-CA screening of 320 patients with asymptomatic diabetes showed that a glycosylated hemoglobin level of at least 7.4%, dyslipidemia, diabetes duration, and retinopathy were predictors for SMI. 51 Therefore, MDCT might be helpful to identify CAD in diabetic patients with ED and high risk for CVD (including aging) in the future. Radiation exposure from these screening modalities also should be considered.
The present study has several limitations. The sample was extracted from outpatients who presented at our clinic seeking for treatment for ED. Thus, the subjects were from a strongly biased population. Furthermore, we could not set the controls; therefore, we could not compare the CAD prevalence in diabetic patients with ED with other combinations of disease status, such as subjects with vs without diabetes and/or with vs without ED. Comparative data on CAD prevalence in diabetic patients are needed. In addition, CAD should be assessed separately for type 1 and for type 2 diabetes owing to different pathogeneses and outcomes. The sample size was limited; therefore, only two parameters (age and ED severity) were used to assess the predictive factor for SMI. We did this to distinguish the stronger predictor for CAD, and we believe this information is important for patients with diabetes and ED. The diagnosis of CAD in diabetic patients with ED was performed using only MDCT-CA, which is an indicator of atherosclerosis but cannot assess inducible ischemia as stress testing can. Also, coronary artery calcium scoring was not calculated for the MDCT-CA scan because we used coronary artery stenosis grading as an outcome measurement for the MDCT-CA results. In the future, a large-scale comparative study including healthy men should be performed.
CONCLUSIONS
CAD was highly prevalent (65%) in diabetic patients with ED in our outpatient clinic. Furthermore, 15% of patients showed severe coronary artery stenosis (!90%), which might lead to myocardial infarction. Age was the single significant predictor for coronary artery stenosis in diabetic patients with ED. One should consider the possibility of SMI in elderly patients with diabetes who have ED. | 2018-04-03T05:12:12.899Z | 2016-06-30T00:00:00.000 | {
"year": 2016,
"sha1": "244c5be8a8300d462203013180577d0f0267608a",
"oa_license": "CCBYNCND",
"oa_url": "http://www.smoa.jsexmed.org/article/S2050116116300393/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "244c5be8a8300d462203013180577d0f0267608a",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
225210052 | pes2o/s2orc | v3-fos-license | Improving the Running Conditions of Diesel Engine with Grape Seed Oil Additives by Response Surface Design
Abstract
In this study, an optimization study was carried out using Response Surface Methodology (RSM) to determine the optimum operating conditions of a single-cylinder diesel engine fueled with blends created by mixing biodiesel obtained from grape seed oil (GSO) into diesel in different proportions (5%, 10% and 15% by vol.). Experiments were carried out with the three fuel mixtures at three different injection pressures (200, 225 and 250 bar) and three different engine loads (400, 1000 and 1600 W). Since the minimum number of experiments proposed by the RSM application for three factors, each at three levels, is 20, an RSM model was created from the data obtained by performing 20 trials. The GSO ratio, injection pressure and engine load were chosen as input factors, while brake specific fuel consumption (BSFC), exhaust gas temperature (EGT), carbon monoxide (CO), hydrocarbon (HC), nitrogen oxides (NOx) and smoke were chosen as responses in the RSM model. According to the findings of the RSM model, the operating conditions giving the best engine outputs were determined as 13% GSO, 245 bar injection pressure and 850 W engine load. A verification study showed that the optimization results were reproduced with an error of less than 9%.
Keywords: Response surface, Optimization approach, Grape seed oil, Diesel engine
Introduction
Diesel engines are widely used in large sectors of the global economy, such as industry, transportation, and agriculture, due to their high efficiency [1][2][3]. Despite the high efficiency of diesel engines, their emissions, especially NOx and smoke emissions, have many negative effects on human health and the environment [4,5]. On the other hand, petroleum-based fuel reserves are running out rapidly due to the increase in world population and increasing energy demand in parallel with the development of the industry [6][7][8][9]. Studies on diesel engines are carried out for simultaneous reduction of fuel consumption and emissions due to both the depletion of fossil fuel reserves and high emission levels [10]. In this context, environmentally friendly renewable fuel research has accelerated in recent years and biofuels have emerged as an important alternative to fossil fuels [11][12][13].
Biodiesel is one step ahead in biofuels due to its advantage of being produced from many different substances [14][15][16]. There are various biodiesel raw materials such as fish oil, frying oil, and also oils of animal origin, as well as various vegetable origin substances, from soybeans to sunflower, canola to cotton seed oil [17][18][19]. In this study, GSO, which is in the category of biodiesel species of vegetable origin, was used. There are a limited number of studies in the literature about the use of GSO as fuel in diesel engines, and these studies are mostly done in marine engines or related to the production of GSO [20][21][22]. Azad and Rasul [23] examined the effects of using GSO and waste cooking oil as fuel in a fourcylinder, four-stroke diesel engine and compared two biodiesel results. According to their results, they stated that GSO gives better results in terms of both emission and performance. Vedagiri et al. [24] investigated the performance,
Abstract
In this study, an optimization study was carried out by using Response Surface Methodology (RSM) to determine the optimum conditions by improving the working conditions in a single cylinder diesel engine using fuel blends created by mixing the biodiesel obtained from grape seed oil (GSO) to diesel in different proportions (5%, 10% and 15% by vol.). Experiments were carried out with three different fuel mixtures with three different injection pressures (200, 225 and 250 bar) at three different engine loads (400, 1000 and 1600-Watt). Since the minimum number of experiments proposed by the RSM application is 20 for optimization according to three different factors and three different levels of each factor, an RSM model was created from the experiment data obtained by performing 20 trials. While the GSO ratio, the injection pressure and engine load was determined as input factors, brake specific fuel consumption (BSFC), exhaust gas temperature (EGT), carbon monoxide (CO), hydrocarbon (HC), nitrogen oxides (NOx) and smoke were chosen as responses on the RSM model. Considering the findings taken from the RSM model, the working conditions in which the best output can be obtained from the engine; it has been determined as 13% GSO percentage, 245 bar injection pressure and 850-W engine load. The study to verify the results obtained from the optimization study reveals that the results were obtained with an error of less than 9%.
Keywords: Response surface, Optimization approach, Grape seed oil, Diesel engine 186 combustion and emission parameters of a diesel engine powered by GSO biodiesel. In addition to GSO, they added nanocerium oxide and zinc oxide solids. They stated that by adding cerium oxide and zinc oxide emulsion mixtures, they achieved a significant decrease in NOx emission and that GSO is an effective alternative fuel for diesel engines without any engine changes.
The search for alternative fuels has been accompanied by an increase in the number of experiments, since experiments are needed to measure the suitability of a new type of fuel for use in internal combustion engines. For this reason, the number of experiments, the time spent, and the experimental costs have all increased considerably in recent years [25,26]. To reduce the number of experiments, computer applications have been developed that can simulate many more experiments using a limited number of experimental data points. Among these applications, RSM stands out due to its ability to optimize in a shorter time, as it creates the most suitable test matrix, unlike other applications [6,27]. There are many recent studies in which diesel engines have been optimized with RSM using different alternative fuels [28-31].
Although there are a few studies evaluating the usability of GSO as a fuel in a diesel engine, no optimization study of a diesel engine using GSO as a fuel has been found in the literature. For this reason, in this study, an optimization study of a diesel engine was performed using RSM, with the GSO ratio, injection pressure and load selected as the input variables.
Material and method
In this study, which was done to improve the engine running conditions and determine the best conditions, the experimental data required for the creation of the RSM model were obtained using the experimental setup shown schematically in Fig. 1. In the tests, the exhaust gas temperature values were measured with a type J (Fe-Const) TMX-B12F08 thermocouple, which can measure between -200 °C and 800 °C. Fuel consumption was measured by mass with a Weightlab WH-2002 scale with 0.01 g precision. The resistive load set with control panel used for loading the test engine (seen in Fig. 1) is composed of General brand 200 W and 1000 W halogen bulbs and switches. The Bilsa MOD 2210 exhaust gas emission device used for the measurement of exhaust emissions can perform CO, HC, NOx, air-fuel ratio, lambda and smoke opacity measurements according to the principles specified in the TS 11365/T1 standard. Technical characteristics of the engine/generator and the exhaust emission device used in the experiments are shown in Table 1 and Table 2, respectively. In the experiments, GSO5 (5% GSO + 95% diesel), GSO10 (10% GSO + 90% diesel) and GSO15 (15% GSO + 85% diesel), obtained by adding GSO to diesel in three different proportions, were used as fuel. These three test fuels were tested at different injection pressure values (200, 225 and 250 bar) and at different engine loads (400, 1000 and 1600 W). Properties of the test fuels are shown in Table 3. The RSM model was created with the data obtained from the experimental study. RSM is one of the primary optimization applications that can be used to minimize the money and time spent in academic and commercial tests. It can both derive a basic equation for the parameters to be optimized using a minimum number of experimental data and present it with 3D graphics. In addition, it can determine the effect of the working parameters on the outputs through analysis of variance (ANOVA) and Pareto charts. RSM optimization is based on the equations given below. The basic model based on a first-degree polynomial available in RSM is

y = β₀ + Σ_{i=1}^{k} β_i x_i + ε   (1)

and if the model is second-order,

y = β₀ + Σ_{i=1}^{k} β_i x_i + Σ_{i=1}^{k} β_ii x_i² + Σ_{i<j} β_ij x_i x_j + ε   (2)

where ε is the random test error, k is the number of factors, y is the predicted response and the x_i are the independent factors [32]; β₀ is the constant term, β_i are the linear coefficients, β_ij the interaction coefficients and β_ii the quadratic coefficients.
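For readers who wish to see this step concretely, the second-order model of Eq. (2) can be fitted by ordinary least squares as sketched below. This is a minimal illustration with synthetic data, not the statistical package used by the authors (which the text does not name); the factor values and coefficients are placeholders.

```python
# Minimal sketch: fitting the second-order RSM model of Eq. (2) for three
# factors by ordinary least squares. Data and coefficients are synthetic.
import numpy as np

def quadratic_design_matrix(X):
    """Expand an (n, 3) coded-factor matrix into second-order RSM terms."""
    x1, x2, x3 = X[:, 0], X[:, 1], X[:, 2]
    return np.column_stack([
        np.ones(len(X)),       # beta_0
        x1, x2, x3,            # linear terms
        x1**2, x2**2, x3**2,   # quadratic terms
        x1*x2, x1*x3, x2*x3,   # two-way interactions
    ])

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(20, 3))   # 20 runs in coded units (-1..+1)
y = 3.0 + 1.5*X[:, 0] - 2.0*X[:, 2] + X[:, 1]**2 + rng.normal(0, 0.1, 20)

A = quadratic_design_matrix(X)
coef, *_ = np.linalg.lstsq(A, y, rcond=None)   # least-squares fit
print("fitted coefficients:", np.round(coef, 3))
```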
The correlation coefficient (R²) is assigned as per Eq. (3), the adjusted correlation coefficient (Adj. R²) using Eq. (4), and the predicted correlation coefficient (Pred. R²) using Eq. (5) together with Eqs. (6) and (7) [6]:

R² = 1 − SS_res / SS_tot   (3)
Adj. R² = 1 − (1 − R²)(n − 1)/(n − p)   (4)
Pred. R² = 1 − PRESS / SS_tot   (5)
PRESS = Σ_{i=1}^{n} (e_i / (1 − h_ii))²   (6)
SS_tot = Σ_{i=1}^{n} (y_i − ȳ)²   (7)

where n is the number of experimental runs, p is the number of model terms, e_i are the residuals, h_ii is the leverage of run i, and ȳ is the mean response (the standard least-squares definitions). In this optimization study, the input factors to be optimized were selected as the GSO ratio, the injection pressure and the engine load, while the responses to achieve the best values were selected as BSFC, EGT, CO, HC, NOx and smoke. Factors selected for input are shown in Table 4, along with their levels.
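The three fit statistics above can be computed compactly, assuming their standard least-squares forms (Pred. R² via the leave-one-out PRESS shortcut); a sketch with hypothetical toy data:

```python
# Sketch of R^2, adjusted R^2 and PRESS-based predicted R^2 for an OLS fit.
import numpy as np

def fit_statistics(A, y):
    n, p = A.shape
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    resid = y - A @ coef
    ss_res = np.sum(resid**2)
    ss_tot = np.sum((y - y.mean())**2)
    r2 = 1.0 - ss_res / ss_tot
    adj_r2 = 1.0 - (1.0 - r2) * (n - 1) / (n - p)    # p counts the intercept
    H = A @ np.linalg.pinv(A.T @ A) @ A.T            # hat matrix
    press = np.sum((resid / (1.0 - np.diag(H)))**2)  # leave-one-out residuals
    return r2, adj_r2, 1.0 - press / ss_tot

rng = np.random.default_rng(1)
x = rng.uniform(-1, 1, 20)
A = np.column_stack([np.ones(20), x, x**2])
y = 1.0 + 2.0*x - 0.5*x**2 + rng.normal(0, 0.05, 20)
print([round(v, 4) for v in fit_statistics(A, y)])
```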
Results and Discussion
The R² values indicate how well the test data fit the models. The R² of BSFC, EGT, HC, CO, NOx, and smoke are 99.67%, 90.17%, 94.93%, 94.88%, 91.22% and 90.07%, respectively, showing that the models reproduce the results with high accuracy. In the fitted regression equations, GSO, IP and L denote the GSO percentage, injection pressure and engine load, respectively. Fig. 2 shows the effects of the selected variables on BSFC and EGT simultaneously. It is desirable that BSFC and EGT are at low levels. Looking at the 3D graphics, the increase in GSO ratio caused BSFC to increase. Considering Table 3, where the fuel properties are shown, the lower heating value of diesel is 42.6 MJ/kg, while that of GSO is 36.54 MJ/kg. Therefore, as the amount of GSO increases in the fuel mixture, the lower heating value of the mixture will decrease. To obtain the same output power from the engine, more of the fuel with the lower heating value must be consumed; therefore, BSFC increased with the use of GSO. On the other hand, if the BSFC change is examined with respect to load variation, it is clear from the graph that BSFC decreases as the load increases. It is a known fact that as the load increases, the temperature inside the cylinder increases. Along with the increased in-cylinder temperature, the combustion temperature also rises, and as a result the rate of complete combustion increases. This situation reduces BSFC. The injection pressure changes increased BSFC up to 225 bar, after which BSFC tended to decrease again. Since the viscosity of GSO is very high, incomplete combustion occurred at low pressures and BSFC increased. BSFC is thought to decrease as the rate of incomplete combustion decreases with increasing pressure.
Fig. 2. Simultaneous effects of engine variables on BSFC and EGT
Simultaneous effects of the engine variables on CO and HC emissions are demonstrated in Fig. 3. The main factor triggering the formation of both CO and HC emissions is incomplete combustion; all factors that increase incomplete combustion also cause CO and HC to increase. As noted above, the rate of complete combustion increases with increasing engine load. Consequently, as the engine load increases, CO and HC emissions should decrease, and the graph gives supporting results. On the other hand, CO and HC emissions increased as incomplete combustion increased with the increase of GSO, whose kinematic viscosity is quite high compared to diesel. Since the spraying difficulty caused by the high viscosity is relatively resolved at high injection pressure values, CO and HC emissions decrease as the pressure increases.
Changes in smoke and NOx depending on the engine variables are shown in Fig. 4. NOx is a type of emission that occurs mostly due to high temperatures and excess oxygen. Consequently, it increased with increasing engine load, which raises the in-cylinder temperature, and decreased with increasing GSO due to the cooling effect of GSO. Similarly, smoke emission increased with increasing engine load and decreased with increasing GSO ratio. The injection pressure value with the highest smoke and NOx emissions was determined as 225 bar; emissions decreased as the rate of the complete combustion reaction increased at higher pressures.
Optimization and Validation
The main purpose of this study is to optimize the selected input variables and the responses arising from these variables. Accordingly, the criteria of the optimization study are shown in Table 5. All of the selected responses are required to be at a minimum in internal combustion engine operation; therefore, minimizing all responses was chosen as the optimization criterion.
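One common way to encode a "minimize all responses" criterion is a smaller-is-better desirability per response, combined geometrically, followed by a search over the factor space. The sketch below illustrates the idea only; the surrogate response functions, their coefficients and the response bounds are placeholders, not the paper's fitted models.

```python
# Hedged sketch of a desirability-based minimization over the factor space.
import itertools
import numpy as np

def desirability_smaller(y, lo, hi):
    """1 at the best (lowest) value, 0 at the worst, linear in between."""
    return np.clip((hi - y) / (hi - lo), 0.0, 1.0)

def responses(gso, ip, load):
    """Placeholder surrogates for two of the six responses (illustrative)."""
    bsfc = 700 + 5*gso - 0.5*(ip - 225) - 0.1*(load - 1000)
    nox = 30 + 0.02*load - 0.5*gso
    return {"bsfc": (bsfc, 600, 900), "nox": (nox, 20, 80)}

best = max(
    (np.prod([desirability_smaller(y, lo, hi)
              for y, lo, hi in responses(g, p, l).values()]) ** 0.5,  # geometric mean
     (g, p, l))
    for g, p, l in itertools.product(range(5, 16),          # GSO %
                                     range(200, 251, 5),    # injection pressure, bar
                                     range(400, 1601, 50))  # load, W
)
print("best composite desirability %.3f at (GSO%%, bar, W) =" % best[0], best[1])
```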
Results obtained from the optimization study based on the selected criteria are shown in Fig. 5. Considering the findings from the RSM model, the working conditions giving the best responses from the engine were determined as 13% GSO percentage, 245 bar injection pressure and 850 W load. Under the optimum working conditions obtained, the responses are 675.82 g/kWh, 130.32 °C, 25.732 ppm, 5.245%, 3.542 ppm and 0.023% for BSFC, EGT, NOx, smoke, HC, and CO, respectively. Confirmation experiments were performed under these conditions and compared with the model predictions in Table 6, with the error rate as the scale of comparison. Looking at the error rates, they are all lower than 9%. The lowest error rate was obtained for EGT with 3.44%, while the highest error occurred for HC with 8.71%. When the literature is examined, it is understood that error rates of less than 9% are at acceptable levels. According to the results of the study, a diesel engine with a GSO contribution can be successfully optimized with RSM according to the level change of different variables.
"year": 2020,
"sha1": "3d3a352b91d0130c19f30eb81845f84194f2d463",
"oa_license": "CCBY",
"oa_url": "https://dergipark.org.tr/en/download/article-file/1203864",
"oa_status": "GOLD",
"pdf_src": "Adhoc",
"pdf_hash": "9033c11a3014daa42211f208af456b4e91b980e7",
"s2fieldsofstudy": [
"Engineering"
],
"extfieldsofstudy": [
"Environmental Science"
]
} |
A GP-MOEA/D Approach for Modelling Total Electron Content over Cyprus
Vertical Total Electron Content (vTEC) is an ionospheric characteristic used to derive the signal delay imposed by the ionosphere on near-vertical trans-ionospheric links. The major aim of this paper is to design a prediction model based on the main factors that influence the variability of this parameter on diurnal, seasonal and long-term time-scales. The model should be accurate and general (comprehensive) enough to efficiently approximate the high variability of vTEC. However, good approximation and generalization are conflicting objectives. For this reason, a Genetic Programming (GP) approach with Multi-objective Evolutionary Algorithm based on Decomposition characteristics (GP-MOEA/D) is designed and proposed for modeling vTEC over Cyprus. Experimental results show that the multi-objective GP model, considering real vTEC measurements obtained over a period of 11 years, has produced a good approximation of the modeled parameter and can be implemented as a local model to account for the ionosphere-imposed error in positioning. Particularly, the GP-MOEA/D approach performs better than a single-objective optimization GP, a GP with Non-dominated Sorting Genetic Algorithm-II (NSGA-II) characteristics and the previously proposed Neural Network-based approach in most cases.
I. INTRODUCTION
The ionosphere is defined as the region of the earth's upper atmosphere where sufficient ionisation exists to affect the propagation of radio waves. It ranges in height above the surface of the earth from approximately 50 km to 1000 km. The influence of this region on radio waves is attributed to the presence of free electrons. The impact of the ionosphere on communication, navigation, positioning and surveillance systems is determined by variations in its electron density profile and total electron content along the signal propagation path [1], [2]. As a result, satellite systems for communication, navigation, surveillance and control that are based on trans-ionospheric propagation may be affected by complex variations in the ionospheric structure in space and time. This often leads to degradation of the accuracy, reliability and availability of their service. Vertical Total Electron Content (vTEC) is an important parameter in trans-ionospheric links since, when multiplied by a factor which is a function of the signal frequency, it yields an estimate of the delay imposed on the signal by the ionosphere due to its dispersive nature.
This paper describes an attempt to develop a model to predict vTEC over Cyprus and encapsulate its variability on diurnal, seasonal and long-term scales. The model development is based on around 60000 hourly vTEC measurements recorded above Cyprus from 1998 to 2009. The practical application of this model lies in its possible use as an alternative candidate local model to the existing Klobuchar global model [3] that is currently being used in single-frequency GPS navigation system receivers to improve positioning accuracy.
Metaheuristics, and more specifically Evolutionary Algorithms, have been shown to be efficient and effective in dealing with difficult-to-solve real-life problems [4]. Particularly, Genetic Programming (GP) based approaches have performed well in evolving computer programs, controllers and models [5] in the past. GP approaches deal with this kind of problem by learning from historical data and designing a model for predicting future events. One of the major drawbacks of GP approaches is their bias towards improving predictive accuracy on the examples available for training [6]. This often results in a good approximation while evolving the model but a poor approximation in predicting future events, especially in highly distorted cases. In this paper, we have designed a Genetic Programming (GP) approach with Multi-Objective Evolutionary Algorithm based on Decomposition (MOEA/D) [7] characteristics, coined GP-MOEA/D, for alleviating the aforementioned drawback and dealing with the vTEC prediction problem in the context of Multi-Objective Optimization (MOO) [8]. In MOO, there is no single solution that optimizes all objectives in a single run, but a set of mathematically equally important (or non-dominated) solutions, commonly known as the Pareto Front (PF) [8]. Therefore, our major goal is to obtain a set of Pareto-optimal models, i.e. models with high predictive accuracy on the training data that are also comprehensible and general enough.
The main contribution of our paper is as follows: • A newly proposed vTEC prediction problem is formulated in the context of MOO, using a real data set of vTEC measurements recorded over Cyprus for a period of 11 years.
• A GP-MOEA/D approach, i.e. a panmictic, generational, elitist Genetic Programming (GP) approach having characteristics of the Multi-Objective Evolutionary Algorithm based on Decomposition (MOEA/D) with an expression-tree representation, is designed for dealing with the vTEC prediction problem.
• A GP-based prediction model is derived for vTEC over Cyprus, showing a better performance than a single-objective optimization GP, a GP with Non-Dominated Sorting Genetic Algorithm-II (NSGA-II) [9] characteristics and the previously proposed Neural Network [10] models. The rest of the paper is organized as follows: Section II introduces background material and related work. Section III defines the vTEC prediction problem by describing vTEC characteristics and measurements performed during the period 1998-2009. The proposed approach is detailed in Section IV. The experimental methodology and results are reported and discussed in Sections V and VI, respectively. Section VII concludes the paper.
II. BACKGROUND AND RELATED WORK
The importance of accurate spatial and temporal vTEC specification [11] in the context of a wide spectrum of space-based telecommunication, radar and navigation systems was a decisive factor encouraging a number of studies with various modeling approaches and prediction techniques [12]. These techniques have ranged from statistical time-series analysis [13] and harmonic analysis [14], [15] to AI techniques. Neural networks were widely adopted as a favourable option in ionospheric modeling [16], and specifically for vTEC, for which local [10] and regional [17] models have been published. Additional studies have also been conducted on the application of related techniques to vTEC modelling, such as recurrent [18] and radial basis function (RBF) [19] neural networks.
Genetic Programming (GP) [5] is an Evolutionary Computation (EC) technique that evolves populations of computer programs as solutions to problems. The term evolutionary algorithm [4] describes a class of stochastic search processes that operate through a simulated evolution process on a population of solution structures, which represent candidate solutions in the search space. Evolution occurs through (i) a selection mechanism that implements a survival-of-the-fittest strategy, and (ii) diversification of the selected solutions to produce offspring for the next generation. In GP, programs are usually expressed using hierarchical representations taking the form of syntax trees. It is common to evolve programs in a constrained, and often problem-specific, user-defined language. The variables and constants in the program are leaves in the tree (collectively named the terminal set), whilst arithmetic operators are internal nodes (collectively named the function set). GP finds out how well a program works by running it and then comparing its behaviour to some ideal; this is quantified to give a numeric value called fitness. Those programs that do well are chosen to breed and produce new programs for the next generation. The primary variation operators used to perform transitions within the space of computer programs are crossover (e.g. subtree crossover) and mutation (e.g. point, bit-flip, subtree mutation) [5]. Like other evolutionary algorithms, GP randomly generates individuals for the initial population. Two dominant methods are full and grow, as well as the widely used combination of the two known as ramped half-and-half [5]. In both methods, the initial individuals are generated so that they do not exceed a user-specified maximum depth. The depth of a node is the number of edges that need to be traversed to reach the node starting from the tree's root node (the depth of the tree is the depth of its deepest leaf). Once a stopping criterion has been met, the algorithm terminates and the best program is designated as the output of the run.
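To make the tree machinery above concrete, the sketch below builds random expression trees with the grow, full and ramped half-and-half methods. The function and terminal names are illustrative, and this is not the authors' implementation (theirs was written in Java).

```python
# Minimal sketch of GP tree creation over a function set and terminal set.
import random

FUNCS = {"+": 2, "-": 2, "*": 2, "/": 2}         # function set (with arity)
TERMS = ["sinhour", "coshour", "sindaynum", "cosdaynum", "ssn", "const"]

def make_tree(depth, method="grow"):
    """Return a nested-list expression tree no deeper than `depth`."""
    if depth == 0 or (method == "grow" and random.random() < 0.3):
        t = random.choice(TERMS)
        return round(random.uniform(-1, 1), 3) if t == "const" else t
    f = random.choice(list(FUNCS))
    return [f] + [make_tree(depth - 1, method) for _ in range(FUNCS[f])]

def ramped_half_and_half(pop_size, max_depth=6):
    """Half 'full', half 'grow', over a ramp of depths 2..max_depth."""
    pop = []
    for i in range(pop_size):
        depth = 2 + i % (max_depth - 1)
        pop.append(make_tree(depth, "full" if i % 2 else "grow"))
    return pop

random.seed(42)
print(ramped_half_and_half(4))
```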
In some cases of prediction modelling, the trees produced by tree-generation algorithms are not comprehensible to users due to their size and complexity [6]. It is often desirable that the proposed approaches provide insight and understanding into the predictive structure of the data, so as to be able to explain each individual prediction [20]. In [6], it is argued that the incomprehensibility of some models is caused by the model induction process being primarily based on predictive accuracy or performance. To address this concern, we use a multi-objective Genetic Programming algorithm to optimize trees for both predictive performance and comprehensibility, without discriminating against either. MOO is a relatively new field in the area of telecommunications, and it is difficult to apply an existing linear/single-objective method to effectively tackle a Multi-objective Optimization Problem (MOP) and obtain a set of non-dominated solutions. The literature hosts several interesting approaches for tackling MOPs, with Multi-Objective Evolutionary Algorithms (MOEAs) [8] possessing all the desired characteristics for obtaining a set of non-dominated solutions in a single run. The two major classes of MOEAs are the Pareto-dominance based approaches [8] and the approaches based on decomposition [21]. Research studies that used GP approaches having MOEA characteristics for dealing with MOPs include the following: In [6], a Pareto-dominance based GP approach is used to optimize three objectives, i.e. classification accuracy, tree size and performance, for medical data mining. [22] proposes a Pareto-dominance based GP variant, coined Traceless Genetic Programming, for dealing with five multiobjective test problems. More recently, in 2009, [23] used a GP with Pareto-dominance based MOEA characteristics to automatically construct stochastic processes.
However, all the research studies just mentioned use Pareto-dominance based approaches. Recently, a new and promising MOEA based on Decomposition (MOEA/D) [7] approach was proposed, and it has shown good performance in both continuous [7] and combinatorial problems [24], [25]. MOEA/D decomposes a MOP into a set of scalar subproblems and solves them using neighborhood information and scalar techniques in a single run. In this paper, a GP with MOEA/D characteristics is proposed to find a good prediction model of vTEC over Cyprus, focusing on optimizing performance (i.e. predictive accuracy) and complexity (i.e. comprehensibility measured in terms of tree size). To the best of our knowledge, this is the first time that the vTEC prediction problem is studied in the context of MOO, and a GP-MOEA/D based approach has never been applied to this problem before.
III. PROBLEM DEFINITION AND MODEL
In this section, the characteristics of vertical Total Electron Content (vTEC) are introduced and particularly discussed for vTEC over Cyprus for a period of 11 years. The model parameters are also presented.
A. Total Electron Content Characteristics
Dual-frequency GPS data recorded by GPS receivers enable an estimation of the Total Electron Content (TEC), measured in total electron content units (1 TECU = 10^16 electrons/m^2). This is the total amount of electrons along a particular line of sight between the receiver and a GPS satellite in a column of 1 m^2 cross-sectional area (illustrated in Figure 1), and it represents a typical quantitative parameter of interest to GPS users. vTEC corresponds to the integral of the vertical electron density profile, an example of which is shown in Figure 2, from the ground to an infinite height (practically the height of the satellite). The analysis used in the present work to estimate vTEC from GPS data was carried out by means of the procedure developed by Ciraolo [26]. The density of free electrons within the ionosphere, and therefore vTEC, depends upon the strength of the solar ionizing radiation, which is a function of time of day, season, geographical location and solar activity [1], [2]. Since solar activity has an impact on ionospheric dynamics, which in turn influences the electron density of the ionosphere, vTEC also exhibits variability on daily, seasonal and long-term time scales in response to the effect of solar radiation. It is also subject to abrupt variations due to enhancements of geomagnetic activity following extreme manifestations of solar activity disturbing the ionosphere, from minutes to days, on a local or global scale. The most profound solar effect on vTEC is reflected in its daily variation, as shown in the typical examples for three days at different parts of the sunspot cycle in Figure 3. As clearly depicted in this figure, there is a strong dependency of vTEC on local time, which follows a sharp increase of vTEC around sunrise and a gradual decrease around sunset. This is attributed to the rapid increase in the production of electrons due to the photo-ionization process during the day and the more gradual decrease due to the recombination of ions and electrons during the night.
There is also a seasonal component in the variability of vTEC, which can be attributed to the seasonal change in extreme ultraviolet (EUV) radiation from the Sun. This can be clearly identified in Figure 4 for all daily noon values of vTEC collected for high and low solar activity periods (years 2001 and 2008). The long-term effect of solar activity on vTEC, which follows an eleven-year cycle, is also clearly shown in both Figures 3 and 4, in which we can observe higher vTEC variability for higher solar activity in both diurnal and seasonal time-scales.
B. Model Parameters
The diurnal variation of vTEC is clearly evident from Figure 3. We therefore include the hour number as an input to the model. The hour number, hour, is an integer in the range 0 ≤ hour ≤ 23. In order to avoid an unrealistic discontinuity at the midnight boundary, hour is converted into its quadrature components according to

sinhour = sin(2π · hour/24) and coshour = cos(2π · hour/24).

A seasonal variation is also an underlying characteristic of vTEC, as shown in Figure 4, and is represented by the day number daynum in the range 1 ≤ daynum ≤ 365. Again, to avoid an unrealistic discontinuity between December 31st and January 1st, daynum is converted into its quadrature components according to

sindaynum = sin(2π · daynum/365) and cosdaynum = cos(2π · daynum/365).

Long-term solar activity has a prominent effect on vTEC. To include this effect in the model specification we need to incorporate an index which is a good indicator of solar activity. In ionospheric work the 12-month smoothed sunspot number is usually used, yet this has the disadvantage that the most recent value available corresponds to vTEC measurements made six months ago. To enable vTEC data to be modeled as soon as they are measured, and for future predictions of vTEC to be made, the monthly mean sunspot number values were modeled using a smooth curve defined by a summation of sinusoids.
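The five model inputs described above can be assembled as follows; a minimal sketch in which the smoothed sunspot index is passed in directly rather than reproduced by the authors' sinusoid fit.

```python
# Sketch of the five model inputs: quadrature components of hour and day
# number plus a solar-activity index (here supplied by the caller).
import math

def vtec_inputs(hour, daynum, sunspot_index):
    return (
        math.sin(2 * math.pi * hour / 24),
        math.cos(2 * math.pi * hour / 24),
        math.sin(2 * math.pi * daynum / 365),
        math.cos(2 * math.pi * daynum / 365),
        sunspot_index,
    )

print(vtec_inputs(hour=12, daynum=172, sunspot_index=110.0))
```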
IV. GENETIC PROGRAMMING + MOEA/D
In this section the problem representation is introduced and the vTEC prediction problem is formulated in the context of MOO. A description of the evolutionary algorithm employed, coined GP-MOEA/D, follows. GP-MOEA/D is a standard elitist (i.e. the best solution is always preserved), generational (i.e. populations are arranged in generations, not steady-state), panmictic (i.e. no mating restrictions) [27] GP with Multi-Objective Evolutionary Algorithm based on Decomposition (MOEA/D) characteristics. Interested readers are referred to [7] for details on MOEA/D.
A. Problem Representation and MOO Formulation
In this paper, a prediction model is represented by ramped half-and-half trees X with an initial maximum depth of 6 that are allowed to grow up to a depth of 12 during evolution. The models are evolved in a constrained, problem-specific, user-defined language. The variables and constants of the model are leaves in the tree (collectively named the terminal set T), whilst arithmetic operators are internal nodes (collectively named the function set F). It is common in the GP literature to represent expressions in the prefix notation similar to that used in LISP or Scheme. For example, x+3*y becomes (+ x (* 3 y)). This representation eases the formation of the expression-tree data structure and its manipulation during the application of the variation operators, which will be explained shortly. GP finds out how well a program works by running it and then comparing its behaviour to some ideal, i.e. the exact measurements.
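A prefix expression such as (+ x (* 3 y)) maps naturally onto a nested-list structure with a small recursive evaluator, as sketched below. The protected division is a common GP convention assumed here, since the paper does not state its division rule.

```python
# Sketch: evaluating a prefix (LISP-style) expression tree.
OPS = {"+": lambda a, b: a + b, "-": lambda a, b: a - b,
       "*": lambda a, b: a * b,
       "/": lambda a, b: a / b if b != 0 else 1.0}   # protected division

def evaluate(node, env):
    if isinstance(node, list):                       # internal node: operator
        return OPS[node[0]](*(evaluate(c, env) for c in node[1:]))
    if isinstance(node, str):                        # leaf: variable
        return env[node]
    return node                                      # leaf: constant

tree = ["+", "x", ["*", 3, "y"]]                     # (+ x (* 3 y))
print(evaluate(tree, {"x": 2.0, "y": 4.0}))          # -> 14.0
```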
In this paper, we are interested in how well a model X predicts vTEC in a given data set D of size n, denoted as X(in_j), j = 1, . . . , n, where in_j is the vector consisting of the input parameters (defined in Subsection III-B) of instance j in D. This comparison is quantified to give a numeric fitness value of tree X, which in our case is RMSE(X, D). Besides, on the one hand, it is accepted that smaller decision trees are more comprehensible and have better generalization capabilities to adapt to the variations of the parameters in the whole data set. On the other hand, the bigger the tree size is, the less generalized (and more complex) the tree is, and consequently the more biased in terms of RMSE (i.e. more accurate prediction structures). Therefore, RMSE(X, D) and the size of the tree, Size(X), are conflicting objectives and should be optimized in the context of MOO. The proposed vTEC prediction MOP formulation is as follows. Given:
• D: data set
• T: terminal set
• F: function set
Decision variables of a prediction tree X:
• variables and constants from the terminal set T
• operands from the function set F
• the connections between variables/constants and operands.
Objectives: minimize the RMSE and the size of tree X:

min RMSE(X, D) = sqrt( (1/n) Σ_{j=1}^{n} (X(in_j) − vTEC_j)² )   (5)

where in_j is the vector consisting of the input parameters of instance j in data set D and vTEC_j is the corresponding measured vTEC value.
min Size(X) = |X|   (6)

which is the number of nodes composing the tree solution X. In a MOP [8], there is no single solution X that optimizes all objectives simultaneously, but a set of trade-off candidates. The set of trade-off solutions is often defined in terms of Pareto optimality [8]. That is, consider a minimization MOP with m decision variables and n objectives, minimize F(x) = (f_1(x), . . . , f_n(x))^T subject to x ∈ Ω, where Ω is the decision space.
• Definition 1 (Pareto dominance). An objective vector u = (u_1, . . . , u_n)^T is said to dominate another objective vector v = (v_1, . . . , v_n)^T if u_i ≤ v_i for all i and u_j < v_j for at least one j. A solution x is said to be non-dominated if there is no y ∈ Ω whose objective vector dominates that of x.
• Definition 2 (Pareto optimality). An objective vector u = (u_1, . . . , u_n)^T is said to be (globally) Pareto-optimal if there does not exist another objective vector v = (v_1, . . . , v_n)^T such that v dominates u; u is then called a Pareto objective vector. The set of all Pareto-optimal objective vectors is called the Pareto-optimal front, denoted by PF. The set of all Pareto-optimal solutions in the decision space is called the Pareto-optimal set, denoted by PS.
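The two definitions translate directly into code for the two-objective case used here (RMSE and tree size, both minimized); a minimal sketch:

```python
# Pareto dominance and non-dominated front extraction for minimization.
def dominates(u, v):
    """u dominates v: no worse in every objective, strictly better in one."""
    return all(a <= b for a, b in zip(u, v)) and any(a < b for a, b in zip(u, v))

def pareto_front(points):
    """Non-dominated subset of a list of objective vectors."""
    return [p for p in points
            if not any(dominates(q, p) for q in points if q != p)]

models = [(0.9, 5), (0.7, 9), (0.7, 20), (0.5, 30), (0.6, 30)]  # (RMSE, size)
print(pareto_front(models))   # -> [(0.9, 5), (0.7, 9), (0.5, 30)]
```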
B. The Proposed Methodology
The proposed GP-MOEA/D proceeds as in Algorithm 1 and is described in the following.
1) Setup-Decomposition: Initially, the MOP is decomposed into m subproblems by adopting any technique for aggregating functions [7], e.g. the Tchebycheff approach used here. In this paper, the i-th subproblem has the form

g^i(X | w^i, z*) = max_{j ∈ {1,2}} { w^i_j · |f_j(X) − z*_j| }   (7)

where f_j, j = 1, 2 are the objectives of the MOP in Subsection IV-A, z* = (z*_1, z*_2) is the reference point, i.e. the maximum objective value z*_j = max{f_j(X), X ∈ Ω} of each objective f_j, j = 1, 2, and Ω is the decision space. For each Pareto-optimal solution X* there exists a weight vector w such that X* is the optimal solution of (7), and each such solution is a Pareto-optimal solution of the MOP in Subsection IV-A. For the remainder of this paper, we consider a uniform spread of the weights w^i_j, which remain fixed for each subproblem i throughout the evolution, with Σ_{j=1}^{2} w^i_j = 1. By decomposing the MOP into a set of scalar subproblems, one can predict the objective preference of a particular prediction tree X, and therefore its position in the objective space, from the weight coefficient w^i of subproblem i. For example, g^i(X | w^i, z*) with w^i = (1, 0) means that subproblem g^i focuses on optimizing objective f_1 (in this case RMSE), ignoring the other objective and consequently devoting all its effort to obtaining a prediction tree of minimum RMSE. In the same way, g^i(X | w^i, z*) with w^i = (0, 1) focuses on prediction trees of minimum size only. The goal in the vTEC prediction problem, however, is to obtain the solutions of these extreme cases as well as the trade-offs between them, e.g. w^i = (0.3, 0.7). Consequently, appropriate scalar strategies can be employed and controlled to optimize different feasible areas of the objective space accordingly. Note that this beneficial procedure cannot be utilized by any non-decompositional MOEA framework.
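A minimal sketch of the Tchebycheff subproblem of Eq. (7) with a uniform spread of two-objective weight vectors follows; the reference point and objective values are illustrative numbers.

```python
# Tchebycheff aggregation g_i(X | w_i, z*) = max_j w_ij * |f_j(X) - z*_j|.
def tchebycheff(f, w, z_star):
    return max(wj * abs(fj - zj) for wj, fj, zj in zip(w, f, z_star))

def uniform_weights(m):
    """m two-objective weight vectors with w1 + w2 = 1."""
    return [(i / (m - 1), 1 - i / (m - 1)) for i in range(m)]

z_star = (2.0, 40.0)                  # reference point (one entry per objective)
f_x = (0.8, 12.0)                     # (RMSE, tree size) of a candidate tree
for w in uniform_weights(5):
    print(w, round(tchebycheff(f_x, w, z_star), 3))
```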
2) Setup-10 fold validation: The data-set was segmented in 10 continuous folds similarly to [10]. In each cross-validation cycle, 9 folds were used as the training set, whereas the evolved model was tested on the remaining 10 th fold. The training set was further randomly divided into two data-sets (with no overlapping): the fitness evaluation data-set, with 67% of the training data, and the validation data-set with the remaining 33%.
3) Initialization: In Step 1 of Algorithm 1, we adopt a random method to generate m solutions for the initial internal population (i.e. IP_0). Namely, a tree solution X is initialized using ramped half-and-half tree creation with a maximum depth of 6 to perform a random sampling of rules. Each tree X is composed of variables and constants from the terminal set T as well as operands from the function set F. Each tree solution X ∈ IP_0 is then evaluated using the training set generated during setup.
Step 2.2 (Evaluation on Training Set): Evaluate Y using the training set.
Step 2.3 (Update Populations): Use Y to update IP_gen, EP and the T closest neighbor solutions of Y.
Step 4 (Evaluation on Validation Set): Evaluate all solutions Z ∈ EP using the validation set.
Step 5 (Output): Select the solution X* ∈ EP having the lowest RMSE with respect to the validation set and evaluate it on the 10th fold.
4) Genetic Operators: In Step 2.1 of Algorithm 1, the genetic operators are invoked on IP for offspring reproduction for each subproblem g^i, where i = 1 to m. Initially, the popular tournament selection [4] is utilized. Tournament selection randomly chooses a finite-size set of tree solutions X from the current population IP_gen. From this set, the solution with the best fitness, i.e. g^i(X | w^i, z*), is selected for reproduction and forwarded to the breeding operators. In this paper, neither recombination nor reproduction was used for breeding, but only mutation. Particularly, a mixture of mutation-based variation operators is employed, where subtree mutation is combined with point mutation to generate a new solution Y. The two mutation operators are probabilistically selected using a pre-defined parameter.
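The mutation mixture just described can be sketched as follows, using the same nested-list tree representation as the earlier GP example; the grow rule, path selection and depth are simplified stand-ins for the paper's Java implementation.

```python
# Sketch: subtree mutation (probability 0.6) mixed with point mutation.
import copy
import random

FUNCS = ["+", "-", "*", "/"]                      # all arity-2 operators
VARS = ["sinhour", "coshour", "sindaynum", "cosdaynum", "ssn"]

def grow(depth):
    """Grow-method tree creation: may stop early at any level."""
    if depth == 0 or random.random() < 0.3:
        return random.choice(VARS)
    return [random.choice(FUNCS), grow(depth - 1), grow(depth - 1)]

def all_paths(tree, path=()):
    """Yield the index path of every node in a nested-list tree."""
    yield path
    if isinstance(tree, list):
        for i, child in enumerate(tree[1:], start=1):
            yield from all_paths(child, path + (i,))

def mutate(tree, p_subtree=0.6):
    tree = copy.deepcopy(tree)
    path = random.choice(list(all_paths(tree)))
    parent, idx, node = None, None, tree
    for i in path:                                # walk to the chosen node
        parent, idx, node = node, i, node[i]
    if random.random() < p_subtree:
        new = grow(3)                             # replace with a fresh subtree
    elif isinstance(node, list):
        new = [random.choice(FUNCS)] + node[1:]   # point mutation: swap operator
    else:
        new = random.choice(VARS)                 # point mutation: swap terminal
    if parent is None:
        return new
    parent[idx] = new
    return tree

random.seed(7)
t = grow(4)
print("before:", t)
print("after :", mutate(t))
```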
5) Evaluation (training set) and update of populations: In Step 2.2, the new solution Y is evaluated using the training set generated in the setup phase. Then the update of the populations, which proceeds in two steps, follows. (1) Update IP, which keeps the best solution found so far for each subproblem. (2) Update the External Population (EP), which stores all the non-dominated solutions found so far during the search: EP = EP ∪ {Y_i} if Y_i is not dominated by any solution X_j ∈ EP, and EP = EP \ {X_j} for all X_j dominated by Y_i. The two-objective sort conducted in this step extracts a set of non-dominated individuals [8] (the Pareto Front) with regard to the lowest fitness-evaluation-set RMSE as well as the smallest model complexity in terms of expression-tree size (measured by the number of tree nodes). The rationale behind this is to create selection pressure towards accurate but simpler prediction models that have the potential to generalise better. These non-dominated individuals are then evaluated on the validation data-set, with the best-of-generation prediction model selected as the one with the smallest RMSE. During tournament selection based on the fitness-evaluation data-set performance, we used the model complexity as a second point of comparison in cases of identical error rates.
6) Stopping criterion, evaluation (validation set) and output: In Step 3, the search stops after a pre-defined number of generations, gen_max. When the termination criterion in Step 3 is satisfied, the EP, which holds all non-dominated solutions found during the search, is evaluated using the validation set (generated in setup) in Step 4. Finally, in Step 5, the best solution X* found in terms of RMSE, evaluated using the validation set, is evaluated on the 10th fold (generated during setup) and output as the best prediction model.
V. EXPERIMENTAL METHODOLOGY
A. Data Set
The vTEC data-set used in this work consists of around 60000 values recorded between 1998 and 2009. In this paper, the data-set was segmented into 10 continuous folds similarly to [10]. In each cross-validation cycle, 9 folds are used as the training set, whereas the evolved model is tested on the remaining 10th fold. The training set is further randomly divided into two data-sets (with no overlapping): the fitness evaluation data-set, with 67% of the training data, and the validation data-set with the remaining 33%. The fitness measure (of Step 2.2) consists of minimizing the RMSE on the fitness evaluation data-set.
B. Algorithms
Two GP-based approaches are used for evaluating the performance of our GP+MOEA/D based approach: (i) The conventional single objective GP (i.e. sGP) that uses all the GP characteristics of the proposed approach described in Section IV except the multi-objective optimization characteristics of Algorithm 1, i.e. Steps 2.3 and 4 related to Multi-objective Pareto-dominance ranking. Particularly, this approach evolves a prediction model in the training set and validates it on the validation set. The evolution stops when no further convergence is noticed in five consecutive generations.
(ii) The Pareto-dominance based GP, i.e. GP-NSGA-II, which is a GP approach having the characteristics of the state-of-the-art MOEA based on Pareto dominance, the Non-Dominated Sorting Genetic Algorithm II (NSGA-II) [9]. Particularly, NSGA-II maintains a population IP_gen of size m at each generation gen, for gen_max generations. NSGA-II adopts the same evolutionary operators (i.e. selection, crossover and mutation) for offspring reproduction as MOEA/D. The key characteristic of NSGA-II is that it uses fast non-dominated sorting and a crowding-distance estimation for comparing the quality of different solutions during selection and for updating IP_gen and the EP. We refer interested readers to [9] for details.
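For reference, NSGA-II's crowding-distance estimator mentioned above can be sketched as follows (two minimization objectives; boundary points receive infinite distance):

```python
# Crowding distance: per objective, sort the front and accumulate the
# normalized gap between each point's two neighbors.
def crowding_distance(front):
    n, m = len(front), len(front[0])
    dist = [0.0] * n
    for j in range(m):
        order = sorted(range(n), key=lambda i: front[i][j])
        lo, hi = front[order[0]][j], front[order[-1]][j]
        dist[order[0]] = dist[order[-1]] = float("inf")   # boundary points
        if hi == lo:
            continue
        for k in range(1, n - 1):
            dist[order[k]] += (front[order[k + 1]][j]
                               - front[order[k - 1]][j]) / (hi - lo)
    return dist

front = [(0.9, 5), (0.7, 9), (0.5, 30)]   # (RMSE, tree size)
print(crowding_distance(front))           # -> [inf, 2.0, inf]
```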
The GP-based algorithms use tournament selection with a tournament size of 7. Evolution proceeds for 50 generations, and the population size is set to 1000 individuals. Ramped half-and-half tree creation with a maximum depth of 6 is used to perform a random sampling of rules during run initialisation. Throughout evolution, expression trees are allowed to grow up to a depth of 12. The evolutionary search employs a mixture of subtree mutation combined with point mutation, with the probability governing the application of each set to 0.6 in favour of subtree mutation. The primitive language consists of the basic arithmetic operators (+, -, *, /) serving as the function set, whereas the terminal set consists of the five independent variables described in Section III.
Finally, the performance of the proposed approach is studied against the previously proposed Neural Network (NN) approach [10]. The NN approach has a fully connected twolayer structure, with 5 input, 10 hidden and 1 output neurons. Both the hidden and output neurons of the NN consisted of hyperbolic tangent sigmoid activation functions. The number of hidden neurons was determined by trial and error. The training algorithm used was the Levenberg-Marquardt back propagation algorithm.
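For comparison, the structure of the baseline network as described (fully connected 5-10-1 with tanh activations on both layers) is sketched below with random stand-in weights; the original was trained with Levenberg-Marquardt, and any output scaling of vTEC is omitted here.

```python
# Forward pass of a 5-10-1 tanh network (untrained stand-in weights).
import numpy as np

rng = np.random.default_rng(3)
W1, b1 = rng.normal(0, 0.5, (10, 5)), np.zeros(10)   # input -> hidden
W2, b2 = rng.normal(0, 0.5, (1, 10)), np.zeros(1)    # hidden -> output

def predict_vtec(x):                 # x: the five inputs from Section III-B
    h = np.tanh(W1 @ x + b1)
    return np.tanh(W2 @ h + b2)[0]

print(predict_vtec(np.array([0.0, -1.0, 0.9, -0.4, 0.6])))
```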
All approaches were coded in Java and run on an Intel Pentium 4 3.2 GHz Windows XP server with 1.5 GB RAM. We performed 50 independent evolutionary runs for each test fold, in order to account for the stochastic nature of the adaptive search algorithms and obtain statistically meaningful results.
C. Performance Metrics
For evaluating the performance of the approaches, the RMSE metric is mainly utilized, together with some statistical measures, e.g. mean, max, min and standard deviation. Furthermore, the Multi-Objective Evolutionary Algorithms (i.e. MOEA/D and NSGA-II) were studied according to the quality and diversity of the PF obtained during evolution. Since MOEAs generate a set of solutions approximating the PF, it is not easy to compare the algorithms' performances, and there is no single metric that can satisfy all requirements [9], [28], [29]. For this purpose, the following three metrics are adopted. The ∆-metric [9] measures the extent of spread achieved among the obtained solutions. In the case of two objectives, the ∆ value of a set of candidate solutions A is defined as

∆(A) = (d_f + d_l + Σ_{j=1}^{N−1} |d_j − d̄|) / (d_f + d_l + (N − 1) · d̄)

where d_f and d_l are the distances to the extreme Pareto-optimal solutions in the objective space, d_j is the distance between two neighboring solutions, d̄ is the mean of all these distances, and N = |A|. The smaller the ∆(A) metric is, the better the diversity performance of A; ∆(A) = 0 means a uniform spread of solutions in the objective space. A straightforward comparison metric between two sets of non-dominated solutions A and B is the C-metric [9], [29]. The C(A, B) metric, which is usually considered a MOEA quality metric, evaluates the ratio of the non-dominated solutions in A dominated by the non-dominated solutions in B,

C(A, B) = |{x ∈ A : ∃ y ∈ B such that y dominates x}| / |A|.

Another commonly used metric, usually considered in cases of real-life discrete optimization problems [30], [31], is the number of Non-Dominated Solutions in set A, i.e.

NDS(A) = |A|.
In the type of problem considered in this paper it is very difficult to obtain many different non-dominated solutions. Therefore, a high NDS(A) is desirable, to provide an adequate number of Pareto-optimal choices. However, the NDS should be considered in combination with other metrics (e.g. the ∆ and C metrics), since a high number of NDS is usually desirable only when the solutions are of high quality and well spread in the objective space. In contrast, and usually in cases of continuous optimization [7], a high number of NDS is not desirable, since the decision-making procedure becomes more complicated.
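The three metrics can be computed as sketched below for two minimization objectives; for the demo, the set's own extreme points stand in for the true Pareto-front extremes required by ∆, and the C-metric follows the wording above (ratio of A dominated by B).

```python
# Sketch of the Delta spread metric, the C coverage metric and NDS.
import math

def dist(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

def delta_metric(front, f_extreme, l_extreme):
    """Spread metric: 0 means perfectly uniform spacing along the front."""
    pts = sorted(front)                      # order along a 2-objective front
    gaps = [dist(pts[i], pts[i + 1]) for i in range(len(pts) - 1)]
    d_mean = sum(gaps) / len(gaps)
    d_f, d_l = dist(f_extreme, pts[0]), dist(l_extreme, pts[-1])
    num = d_f + d_l + sum(abs(g - d_mean) for g in gaps)
    return num / (d_f + d_l + (len(pts) - 1) * d_mean)

def c_metric(A, B):
    """Ratio of solutions in A dominated by at least one solution in B."""
    def dom(u, v):   # u dominates v (minimization)
        return all(a <= b for a, b in zip(u, v)) and u != v
    return sum(any(dom(b, a) for b in B) for a in A) / len(A)

A = [(0.5, 30.0), (0.7, 9.0), (0.9, 5.0)]       # (RMSE, tree size) pairs
B = [(0.8, 10.0), (0.95, 6.0)]
print(round(delta_metric(A, A[0], A[-1]), 3))   # extremes: demo assumption
print(c_metric(A, B), c_metric(B, A), len(A))   # C both ways; NDS(A) = |A|
```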
VI. EXPERIMENTAL RESULTS AND DISCUSSION
The primary goal of our experimental studies is to investigate the performance of our GP-based approach in designing a prediction model for vTEC over Cyprus with which to approximate the measured values, compared to other GP variants and the previously proposed Neural Network based model.
A. Conventional single-objective GP versus GP-MOEA/D
Initially, the proposed GP-MOEA/D is compared with the conventional single-objective GP (described in Subsection V-B). Figure 5 shows the performance of the two approaches during evolution using the training set (left), based on the RMSE of the final proposed prediction model on each test fold (center), and based on the minimum, maximum and average RMSE of the proposed models on all 10 folds (right). The results clearly demonstrate the superiority of the proposed GP-MOEA/D due to its ability to increase the selection pressure towards prediction models that have the potential to generalise better. The left subfigure of Figure 5 shows that the two approaches provide similar RMSE during evolution when evaluated on the training set. However, the proposed prediction models of GP-MOEA/D (in the center subfigure) perform better than those of the sGP in terms of the final RMSE on each fold. The right subfigure supports these observations, since GP-MOEA/D provides the lowest minimum, maximum and mean RMSE considering all folds, with the smallest standard deviation as well.
B. GP-NSGA-II versus GP-MOEA/D
In this subsection, we evaluate the performance of the proposed GP-MOEA/D (i.e. GP with the decompositional approach MOEA/D) against GP-NSGA-II (i.e. GP with the Pareto-dominance based approach NSGA-II described in Subsection V-B). The two MOEA approaches, using the training set, obtained a set of non-dominated prediction models, i.e. the PF, for each fold, as illustrated in Figure 6. Figure 6 shows the Pareto-optimal solutions of each approach per fold, where the solutions of GP-MOEA/D are denoted by red crosses and those of GP-NSGA-II by green diamonds. The results show that the PF of GP-MOEA/D outperforms the PF of GP-NSGA-II in most cases. The solutions of the proposed approach are of better quality as well as diversity, providing a higher number of non-dominated prediction models that are spread across the objective space, indicating better exploration. In most cases, the two approaches perform similarly for high RMSE and low model sizes. However, the decompositional nature of GP-MOEA/D forces the proposed approach to converge towards complex models of lower RMSE more efficiently than GP-NSGA-II, giving more prediction model choices. The observations just mentioned are also supported by the statistical results summarized in Table I, where the best results are denoted in bold.
The statistical results show that GP-MOEA/D's Pareto-optimal solutions dominate all solutions obtained by GP-NSGA-II in four out of ten folds, providing better quality in two more (this is indicated by the C-metric in columns two and three of Table I). On average, the Pareto-optimal solutions of GP-MOEA/D dominate 53% of the Pareto-optimal solutions obtained by GP-NSGA-II, with a lower standard deviation as well. In terms of diversity, the superiority of GP-MOEA/D is clearer, since it provides a more diverse PF on all 10 folds (as indicated by the ∆-metric in columns four and five of Table I), giving a higher number of non-dominated solutions (the NDS metric in columns six and seven of Table I) and consequently more prediction model choices. The PF obtained by GP-MOEA/D is about nine times more diverse, with five more Pareto-optimal solutions than the PF obtained by GP-NSGA-II, on average. Finally, Figure 7 shows a comparison of the two MOEAs with respect to the RMSE of the final proposed prediction model on each fold (left) and based on the minimum, maximum and average RMSE of the proposed models on all 10 folds (right). The results show that GP-MOEA/D obtains a better prediction model on all ten folds. GP-MOEA/D provides around 50% lower RMSE compared to that of GP-NSGA-II in the worst case (i.e. the maximum RMSE obtained by both MOEAs, in fold 5), around 20% lower RMSE in the best case (i.e. the minimum RMSE obtained by both MOEAs, in fold 9) and about 37% lower RMSE on average.
C. Neural Networks versus GP-MOEA/D
Based on the conclusions drawn in Subsections VI-A and VI-B, one can say that the best GP with MOEA characteristics approach presented in this paper is the GP-MOEA/D. In this subsection, the GP-MOEA/D is compared with a Neural Network based approach, which was already shown to be efficient in predicting vTEC over Cyprus in [10]. The comparison between the two approaches is illustrated in Figure 8 with respect to the RMSE of the final proposed prediction model on each fold (left) and based on the minimum, maximum and average RMSE of the proposed models on all 10 folds (right). The results show that GP-MOEA/D performs better than the Neural Network approach in six out of ten folds.
GP-MOEA/D provides around 7.5% lower RMSE compared to that of the Neural Network in the worst case (i.e. the maximum RMSE obtained by GP-MOEA/D is in fold 5 and the maximum RMSE obtained by the NN is in fold 1), around 24% lower RMSE in the best case (i.e. the minimum RMSE obtained by both approaches is in fold 9) and about 7% lower RMSE on average.
Additionally, it is important to note that all approaches converge towards similar values in the last three folds of all experimental studies (i.e. Figures 5, 7 and 8). This is due to the fact that the variability of vTEC in these three folds is low, and it is therefore much easier to obtain accurate predictions.
D. Measured (exact) versus GP-predicted values
Finally, in this subsection we demonstrate the effectiveness and efficiency of GP-MOEA/D in approximating the actual measurements of the diurnal variation of vTEC over Cyprus with respect to the Neural Network approach. Figures 9 and 10 show the good performance of GP-MOEA/D in approximating vTEC during a period of 24 hours on different days of the year. The results support the observations of Subsection VI-C that GP-MOEA/D performs better than the Neural Network approach in most cases. GP-MOEA/D approximates the measured values of vTEC to within around 2% in Case 1 of Figure 9 and 4% in Case 2 of Figure 10, whereas the Neural Network approach approximates the measured vTEC values of Cases 1 and 2 to within 4% and 10%, respectively. From the ionospheric perspective, in Case 1 we observe that during the night both models exhibit similar performance, but during the day, where the variability in the ionosphere is significantly higher, GP-MOEA/D clearly outperforms the Neural Network approach. This is also true for Case 2, in addition to the fact that GP-MOEA/D significantly outperforms the Neural Network approach after sunset as well.
VII. CONCLUSIONS
In this paper, a Genetic Programming (GP) based approach is used to design a prediction model for Total Electron Content over Cyprus in the context of Multi-Objective Optimization. Particularly, a panmictic, generational, elitist GP with an expression-tree representation, having the characteristics of the Multi-Objective Evolutionary Algorithm based on Decomposition (MOEA/D), coined GP-MOEA/D is used. A prediction model is developed based on a data set obtained during a period of eleven years covering a full sunspot cycle. The experimental results have shown the superiority of the proposed approach with respect to a conventional (Single Objective Optimization) GP approach, a GP having the characteristics of the Pareto-dominance NSGA-II approach and a Neural Network (NN) approach. The GP-model has shown a good approximation of the different time-scales in the variability of the modelled parameter and it has outperformed its counterparts.
There are a number of avenues for future research. For example, it will be interesting to investigate different genetic operators and primitive languages to further improve the performance of the GP approach. Moreover, the hybridization of GP with NNs and the design of a more robust approach is also a future possibility.
"year": 2011,
"sha1": "6008a19ac4d90e81f5d3ffe76bb834136e5d6a4c",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "87df3c85524ba3124c7f8ab15c592b02bbfbeb5d",
"s2fieldsofstudy": [
"Environmental Science",
"Physics"
],
"extfieldsofstudy": [
"Computer Science"
]
} |
Synthesis of polyhydroquinolines and propargylamines through one-pot multicomponent reactions using an acidic ionic liquid immobilized onto magnetic Fe3O4 as an efficient heterogeneous catalyst under solvent-free sonication
A nano-sized Fe3O4-supported Lewis acid ionic liquid catalyst for the synthesis of polyhydroquinolines and propargylamines under ultrasound irradiation has been developed. LAIL@MNP was synthesized from an imidazolium chlorozincate(II) ionic liquid grafted onto the surface of Fe3O4 nanoparticles and characterized by FT-IR, TGA, SEM, Raman, TEM, ICP-OES, and EDS. The multicomponent syntheses of polyhydroquinolines and propargylamines proceeded smoothly to afford the desired products in high yields. LAIL@MNP can be separated easily from the reaction mixture and reused for several runs without significant degradation of its catalytic activity.
Synthesis of LAIL@MNP
The LAIL@MNP was synthesized following a previously reported procedure [65]. The magnetic nanoparticle supported ionic liquid catalyst was prepared in a few steps. MNPs were obtained by a simple co-precipitation method in the presence of KOH solution. Next, the imidazole chloride ionic liquid was synthesized from 3-chloroethoxypropylsilane and imidazole. Then, the imidazole-functionalized magnetic Fe3O4 nanoparticle (IL@MNP) was reacted with ZnCl2 to afford the LAIL@MNP. Characterization of the synthesized nano-Fe3O4 and LAIL@MNP was performed using Fourier transform infrared (FT-IR) spectroscopy, scanning electron microscopy (SEM), transmission electron microscopy (TEM), thermogravimetric analysis (TGA), energy dispersive spectroscopy (EDS), and Raman spectroscopy (ESI, Fig. S1-S4). The amount of zinc in LAIL@MNP was found to be 0.3 mmol g⁻¹ by ICP-MS.
Synthesis of polyhydroquinolines
The multicomponent reaction is one of the attractive tools in the synthesis of bioactive compounds [66,67]. The catalytic activity of LAIL@MNP was evaluated in the synthesis of polyhydroquinolines via the Hantzsch reaction, a four-component condensation of dimedone, ethyl acetoacetate, ammonium acetate and aldehydes under solvent-free sonication. As shown in Table S1 (see ESI), the condensation reaction between benzaldehyde, dimedone, ethyl acetoacetate, and ammonium acetate in the presence of LAIL@MNP (15 mg) provided the product in 68% yield under sonication at room temperature for 60 min (Table S1, entry 5). Interestingly, an excellent yield was observed when the reaction was performed at 80 °C within 45 min (Table S1, entry 9). Then, the loading of LAIL@MNP was examined at various amounts ranging from 1 mg to 20 mg, and the best yield was attained with 15 mg of LAIL@MNP. Table 1 shows that LAIL@MNP is also a suitable catalyst for the synthesis of polyhydroquinolines.
LAIL@MNP catalyzed the Hantzsch condensation of aldehyde (1.0 mmol), dimedone (1.0 mmol), ethyl acetoacetate (1.0 mmol), and ammonium acetate (2.0 mmol) under solvent-free sonication. As shown in Table 2, the reaction proceeded smoothly with cyclohexanecarbaldehyde to provide the product in 88% yield within 45 min under sonication. Ortho- or para-substituted benzaldehydes containing electron-poor groups such as chloro and fluoro exhibited weaker activity than benzaldehyde. For those bearing electron-rich groups such as methyl, methoxy, and tert-butyl at the para position, the yields were nearly equal to that of benzaldehyde. Substrates containing ortho-substituted polar functional groups, such as -OH and -COOH, afforded the respective products in yields 20% lower than that of benzaldehyde. The method was also efficient for furan aldehydes, giving the desired products in 79-89% yields.
A plausible mechanism for the synthesis of polyhydroquinolines using LAIL@MNP is shown in Scheme 3. The zinc species of the LAIL@MNP catalyst coordinates with the oxygen of the carbonyl group of benzaldehyde, which enhances the electrophilicity of the carbonyl carbon.
Scheme 2. Some bioactive propargylamines.
Synthesis of propargylamines
The catalytic activity of LAIL@MNP was also demonstrated through a one-pot multicomponent reaction of phenylacetylene, piperidine, and aldehydes under solvent-free sonication. The effect of various parameters, including time, temperature, catalyst amount, and solvent, was investigated. As can be seen from Table S3,† the reaction of phenylacetylene (1.5 mmol), piperidine (1.2 mmol) and benzaldehyde (1.0 mmol) was conducted at 30 °C or 80 °C under sonication. Interestingly, a good yield of propargylamine was obtained at 80 °C within 45 min in the presence of LAIL@MNP. An excellent yield of (1,3-diphenylprop-2-yn-1-yl)piperidine was achieved at an optimized molar ratio of 1 : 1.5 : 1.2 of benzaldehyde, phenylacetylene, and piperidine (entry 19). The effects of catalyst loading and solvent were also examined (entries 24-27); the use of 10 mg of LAIL@MNP afforded the highest yield of the product under solvent-free sonication (Table S4†). The current method was compared with other reports (Table 3), and the LAIL@MNP catalyst demonstrated good catalytic performance in the preparation of propargylamines. The substrate scope was explored with a series of aromatic aldehydes. Under the optimal conditions, the reaction proceeded smoothly to produce the corresponding propargylamines in good to excellent yields. Aryl aldehydes bearing either electron-donating or electron-withdrawing groups could be used. p-Methylbenzaldehyde reacted smoothly under the optimized conditions, whereas p-tert-butylbenzaldehyde reacted more slowly and gave the desired product in moderate yield after 60 min. Aldehydes with halo or hydroxyl substituents in the para position were also reactive when the reaction time was prolonged to 60-70 min. Heterocyclic aldehydes, such as furfural and pyridine-4-carbaldehyde, afforded the desired products in acceptable yields (Table 4).
Table 2 Synthesis of various polyhydroquinolines using LAIL@MNP under solvent-free sonication. Conditions: aldehyde (1 mmol), ethyl acetoacetate (1.0 mmol), dimedone (1 mmol), and ammonium acetate (2.0 mmol) in the presence of LAIL@MNP (15 mg) under solvent-free sonication; yields are isolated yields.
A proposed mechanism for the preparation of propargylamines is shown in Scheme 4. The zinc species of the LAIL@MNP catalyst coordinates with the oxygen of the carbonyl group of benzaldehyde, which enhances the electrophilicity of the carbonyl carbon and favours nucleophilic attack of piperidine on benzaldehyde to produce intermediate (E). This intermediate is dehydrated to form the benzylidenepiperidinium ion (F). Next, phenylacetylene reacts with the benzylidenepiperidinium ion (F) to form the desired product.
The recyclability of LAIL@MNP was tested in the preparation of propargylamine under optimal reaction conditions (Fig. 1).
After completion, the reaction mixture was diluted with ethyl acetate (30 mL) and the LAIL@MNP was removed with an external magnet. The LAIL@MNP was then washed with ethyl acetate (3 × 3 mL) and ethanol (3 × 3 mL), and dried in vacuo. The activity of the recovered catalyst was assessed over five consecutive recycling tests. The FT-IR spectrum of the recovered LAIL@MNP showed no significant change in functionality (see ESI, Fig. S7†).
Experimental
Synthesis of LAIL@MNP
LAIL@MNP was prepared according to a procedure reported previously in the literature.78,79
Synthesis of polyhydroquinolines
In a typical experiment, a mixture of aldehyde (1.0 mmol, 0.106 g), ethyl acetoacetate (1.0 mmol, 0.130 g), dimedone (1.0 mmol, 0.140 g), ammonium acetate (2.0 mmol, 0.144 g), and LAIL@MNP (15 mg) was sonicated at 80 °C. After completion, ethyl acetate (10 mL) was added and the LAIL@MNP was separated from the organic phase with an external magnet. The ethyl acetate layer was washed with water (3 × 15 mL), dried over anhydrous Na2SO4 and evaporated under vacuum. The residue was recrystallized from hot ethanol to give the polyhydroquinoline. The synthesized products were confirmed by 1H NMR, 13C NMR, and MS.
Synthesis of propargylamines
In a typical experiment, a mixture of aldehyde (1.0 mmol, 0.106 g), phenylacetylene (1.5 mmol, 0.153 g), piperidine (1.2 mmol, 0.102 g), and LAIL@MNP (10 mg) was sonicated at 80 °C. After completion, ethyl acetate (10 mL) was added and the LAIL@MNP was separated from the organic phase with an external magnet. The organic solution was then dried over anhydrous Na2SO4. The solvent was removed under vacuum, and the crude product was purified by column chromatography using n-hexane/ethyl acetate (9/1) to provide the pure propargylamine. The synthesized products were confirmed by 1H NMR, 13C NMR, and MS.
Conclusions
In summary, we have developed a recyclable and efficient LAIL@MNP catalyst for the synthesis of polyhydroquinolines and propargylamines.The present method demonstrated a facile and green approach toward polyhydroquinolines and propargylamines under ultrasound irradiation.The LAIL@MNP can be quickly recovered and reused without a considerable decline in catalytic activity.
Scheme 3 Proposed mechanism for the one-pot four-component synthesis of polyhydroquinolines.
Scheme 4 A plausible mechanism for the synthesis of propargylamines.
Table 1 Comparative effectiveness for the four-component synthesis of polyhydroquinolines
Table 3 Comparison of methods for the preparation of propargylamines
Table 4 LAIL@MNP-catalyzed synthesis of propargylamines | 2020-07-09T09:02:57.354Z | 2020-06-29T00:00:00.000 | {
"year": 2020,
"sha1": "0eb0dcb6ec0d5435e8377443b41a557fa267d34f",
"oa_license": "CCBYNC",
"oa_url": "https://pubs.rsc.org/en/content/articlepdf/2020/ra/d0ra04008h",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "c322ed2da44adfbbb613d09cc1bf960e150c750e",
"s2fieldsofstudy": [
"Chemistry"
],
"extfieldsofstudy": [
"Medicine",
"Chemistry"
]
} |
48361704 | pes2o/s2orc | v3-fos-license | Transposon-associated epigenetic silencing during Pleurotus ostreatus life cycle
Abstract Transposable elements constitute an important fraction of eukaryotic genomes. Given their mutagenic potential, host genomes have evolved epigenetic defense mechanisms to limit their expansion. In fungi, epigenetic modifications have been widely studied in ascomycetes, although we lack a global picture of the epigenetic landscape in basidiomycetes. In this study, we analysed the genome-wide epigenetic and transcriptional patterns of the white-rot basidiomycete Pleurotus ostreatus throughout its life cycle. Our high-throughput sequencing analyses revealed that strain-specific DNA methylation profiles are primarily involved in the repression of transposon activity and suggest that 21 nt small RNAs play a key role in transposon silencing. Furthermore, we provide evidence that transposon-associated DNA methylation, but not sRNA production, is directly involved in the silencing of genes surrounded by transposons. Remarkably, we found that nucleus-specific methylation levels varied in dikaryotic strains sharing an identical genetic complement but different subculture histories. Finally, we identified key genes activated in the fruiting process through the comparative analysis of transcriptomes. This study provides an integrated picture of the epigenetic defense mechanisms leading to the transcriptional silencing of transposons and surrounding genes in basidiomycetes. Moreover, our findings suggest that transcriptional, but not methylation, reprogramming triggers fruitbody development in P. ostreatus.
Introduction
The extraordinary increase in genomic data released during the last decade has made it possible to begin to unravel the impact of transposable elements (TEs) on a wide range of eukaryotic genomes.1,2 TEs are 'selfish' genetic units that can mobilize and increase their copy number in the host genome. TEs can also interrupt genes, produce rearrangements and lead to illegitimate recombination events.3 Most TEs are usually present as defective copies that have accumulated mutations and deletions, which ultimately inactivate their transposition potential. In many organisms, TEs accumulate in centromeric and pericentromeric regions, where they play a crucial role in genome plasticity and heterochromatin maintenance.4,5 Nevertheless, active transposons constitute a significant source of mutations that can lead to harmful effects in the host genome.6,7 Thus, most eukaryotes have evolved epigenetic defense mechanisms to limit TE expansion. More specifically, transcriptional and post-transcriptional gene silencing pathways (TGS and PTGS) have been described that limit transposon activity in plants and animals.8,9 TGS operates through DNA methylation, whereas PTGS is mediated by small RNAs and orchestrated by the RNA interference (RNAi) pathway. DNA methylation is an epigenetic modification involved in several cellular processes and described in a variety of eukaryotic genomes.10,11 This modification occurs by the addition of a methyl group to the C-5 position of cytosine in DNA. PTGS mediated by siRNAs (short-interfering RNAs, a class of small RNAs) starts with the production of aberrant double-stranded RNAs (dsRNAs) originating from transposons, viruses, or RNA hairpins, among other sources. The dsRNAs are processed by Dicer, an RNase III enzyme, producing short fragments of 21-25 nucleotides (siRNAs), which are loaded into the RNA-induced silencing complex and guided to complementary mRNAs that are subsequently degraded by the Argonaute slicer activity.12,13 siRNAs can also drive TGS by promoting DNA methylation, as described in plants and fungi.13,14 Recent studies in plants and mammals have described a link between DNA methylation and RNAi pathways15,16 in transposon silencing, indicating that these mechanisms are functionally related.17 In fungi, epigenetic silencing mechanisms have been extensively explored in the filamentous ascomycete Neurospora crassa and are beginning to be documented in other lineages. This model organism uses TGS and PTGS mechanisms to inactivate TEs at the vegetative and sexual stages. Specifically, repeat-induced point mutation (RIP) operates during the sexual cycle to transcriptionally silence repetitive sequences through a homology-dependent mechanism. RIP induces C:G to A:T hypermutations in these sequences during the sexual cycle, leading to their degeneration and silencing by DNA methylation.18,19 In ascomycetes, transposons can be inactivated by DNA methylation linked to RIP mutations,20 whereas in basidiomycetes this mechanism has been examined in silico in members of the Pucciniomycotina21 and in the ustilaginomycete Microbotryum violaceum.22 A related mechanism, called methylation induced premeiotically, has been detected in Ascobolus immersus and in the basidiomycete Coprinus cinereus, and it displays hallmarks similar to RIP.23,24
Recent comprehensive methylome analyses carried out in five fungi belonging to the Zygomycota, Ascomycota and Basidiomycota reported a marked preference for methylation at CG sites within transposons and other repeated sequences, in contrast to the low methylation levels found in gene-coding regions.26,27 Regarding PTGS and TGS of TEs, three RNAi mechanisms have been described in N. crassa: quelling28 (related to plant cosuppression), MSUD (meiotic silencing by unpaired DNA)29 and DNA methylation associated with disiRNA loci.14 Quelling and MSUD are based on the production of small RNAs and rely on the core components of the RNAi pathway. The former leads to the silencing of repetitive DNA sequences (i.e. transposons or multi-copy genes) in the vegetative phase, and the latter silences unpaired regions between two parental chromosomes during meiosis. Moreover, an RNAi-dependent mechanism, sex-induced silencing (SIS), has recently been discovered to silence TEs post-transcriptionally during sexual development in the basidiomycete Cryptococcus neoformans.30 Although epigenetic mechanisms for TE control have been extensively described in the ascomycete model N. crassa, only a few studies have analysed DNA methylation31,32 and RNA interference mechanisms33 in basidiomycetes. Beyond its biotechnological applications, the lignin-degrading fungus Pleurotus ostreatus has gained relevance in genetic and genomic studies in recent years. Its simple life cycle, the ease of cultivation under laboratory conditions and the availability of an un-gapped telomere-to-telomere genome sequence make P. ostreatus a good model for basidiomycete studies. Its life cycle alternates between monokaryotic (cells contain only one haploid nucleus) and dikaryotic (cells are dihaploid and contain two haploid nuclei) phases.34 A dikaryon is formed when two compatible monokaryotic strains mate; the two haploid nuclei remain independent throughout vegetative growth and fruit-body development. Karyogamy takes place when the two haploid nuclei fuse to form a diploid nucleus, which then undergoes meiosis. Recent comprehensive analyses carried out by our group characterized the landscape of TEs in two compatible monokaryotic strains of P. ostreatus35 (PC9 and PC15). This study uncovered the presence of 80 TE families, which encompass 2.5 and 6.2% of the total genome sizes of PC9 and PC15, respectively. Most TEs were aggregated in 40 non-homologous clusters spread across the 12 chromosomes. Moreover, it was observed that genes having a TE inserted upstream or downstream of the gene body have lower transcription levels than average, especially when the genes are enclosed in TE-rich clusters.35 Regarding the potential RNAi activity of P. ostreatus, preliminary data suggest the presence of the core RNAi machinery, detected in silico by screening for orthologues of the Neurospora MSUD and quelling proteins.36 Using high-throughput sequencing, we describe here the genome-wide epigenetic (DNA methylation and small RNAs) and transcriptional (mRNA) profiles of two compatible P.
ostreatus monokaryons as well as dikaryons at different stages of fruit-body development. Our results support strain-specific DNA methylation and small RNA production primarily involved in the repression of transposon activity. We also provide evidence that the TE-associated gene silencing effect previously described by Castanera et al.35 correlates with the spread of DNA methylation from the surrounding transposons. Finally, the comparative analysis of the transcriptomes at different stages of the P. ostreatus life cycle identifies the genes and functions that might be involved in triggering fruit-body primordia formation and development.
Fungal strains and growth conditions
Four P. ostreatus strains were used: PC9 (Spanish Type Culture Collection accession CECT20312), PC15 (CECT20312), N001 (CECT20600) and N001-HyB.PC9 and PC15 are two compatible monokaryotic protoclones obtained by de-dikaryotization of the N001 commercial strain in 1999.PC15 and PC9 strains have been described previously by our group, 37 and display slow-and fast growing phenotypes, respectively (Supplementary Fig. S1).N001-HyB is a dikaryotic strain regenerated ad hoc by mating PC9 and PC15 in 2014, 36 which contains the same genetic complement as N001.All strains were cultured in Erlenmeyer flasks containing 200 ml of Malt Extract (ME, 20 g/l) in the dark, at 24 C under orbital shaking (125 rpm).After 6 days, the cultures were homogenized using an Omni mixer and used as inoculum (15 ml) for Submerged Fermentation (SmF) and Solid-state Fermentation (SSF) cultures.SmF was carried out in Erlenmeyer flasks containing 135 ml of liquid ME medium and maintained in the dark for 7 days at 24 C under orbital shaking.SSF was carried out in polycarbonate Magenta boxes containing 15 g of the total dry substrate (v/v) (88% sawdust, 10% millet and 2% CaCO 3 ) and adjusted to an 80% water content.A total of six samples representing the main stages of the P. ostreatus life cycle were obtained for further analysis: (i) vegetative mycelium in SmF (PC9 and PC15); (ii) mycelium under fruiting induction in SSF (M_N001 and N001-HyB); (iii) primordia (P_N001); and (iv) mature fruitbodies (F_N001) (Fig. 1) in SSF.To induce fruiting conditions, completely colonized SSF cultures were maintained at 18 C under a light/dark photoperiod of 12 h until fruit-body formation ($15 days).Three biological replicates per condition were separately sampled, ground in a sterile mortar in the presence of liquid nitrogen and stored before nucleic acids extraction.
Construction and sequencing of whole genome bisulphite libraries
Global DNA methylation levels were estimated by performing sodium bisulphite treatment, based on the chemical conversion of unmethylated cytosines, 38 followed by high-throughput sequencing (BS-seq).Whole genome bisulphite (WGBS) libraries were prepared as described in Morselli et al. 39 Briefly, genomic DNA (gDNA) from the fungal samples was extracted using an E.Z.N.A Fungal DNA Mini Kit (Omega Bio-Tek, Norcross, GA).After additional RNase A treatment (10 mg/ml for 60 min at 37 C), gDNA was purified using phenol: chloroform solution (3:1), precipitated overnight with ethanol (2:1) and the pellet resuspended in nuclease-free water.For additional purification after extraction, gDNA was treated with the Genomic DNA Clean & Concentrator Kit (Zymo Research, Irvine, CA).The concentrations were quantified using a Qubit 2.0 fluorometer (Life Technology, Carlsbad, CA) and total gDNA was fragmented with a Covaris S-2 ultrasonicator to obtain fragments spanning from 150 to 300 bp size range.Library preparation was performed using the Illumina TruSeq DNA Sample Prep 40 according to the manufacturer's instruction.Bisulphite conversion was carried out with the EpiTect Kit (QIAGEN), performing two consecutive rounds of conversion for a total of 10 h of incubation.Converted DNA was amplified according to the following PCR programme: denaturation at 98 C for 2 min, 12 cycles of 98 C for 15 s, 60 C for 30 s, 72 C for 30 s and final extension at 72 C for 5 min.All libraries obtained after bisulphite treatment were sequenced by an Illumina HiSeq2000 system (Illumina, San Diego, CA, USA) using 100 bp single-end reads.Quantitative DNA methylation assays were also performed in a selected set of genes by bisulphite-free real-time PCR following the MSRE-qPCR approach (detailed protocol shown described in Supplementary Material S1).
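The MSRE-qPCR read-out can be reduced to a simple ΔCt calculation in which template surviving digestion by the methylation-sensitive enzyme is taken as the methylated fraction. The following minimal sketch illustrates that calculation; the gene names and Ct values are hypothetical placeholders, and the actual assay design is the one described in Supplementary Material S1.

```python
# Minimal sketch of an MSRE-qPCR methylation estimate.
# Assumes the common interpretation that template amplifying after digestion with a
# methylation-sensitive restriction enzyme corresponds to the methylated (protected) fraction.

def percent_methylation(ct_digested: float, ct_mock: float) -> float:
    """Estimate % methylation as 100 * 2^-(Ct_digested - Ct_mock), capped at 100%."""
    return min(100.0, 100.0 * 2.0 ** -(ct_digested - ct_mock))

# Hypothetical Ct values (digested, mock-digested) for one gene in two strains.
ct_values = {
    ("gene_A", "PC9"): (26.1, 24.9),
    ("gene_A", "PC15"): (29.8, 25.0),
}

for (gene, strain), (ct_dig, ct_mock) in ct_values.items():
    print(f"{gene} {strain}: {percent_methylation(ct_dig, ct_mock):.1f}% methylated")
```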
Preparation of mRNA- and small RNA-sequencing libraries
Total RNA isolation for mRNA (mRNA-seq) and small RNA (sRNA-seq) sequencing was performed using a Fungal RNA E.Z.N.A Kit (Omega Bio-Tek, Norcross, GA, USA) according to the manufacturer's guidelines. The integrity and quantity of RNA were validated with a Bioanalyzer (version 2100) and a Qubit 2.0 fluorometer. mRNA-seq libraries were prepared using the TruSeq RNA Sample Prep Kit (Illumina) following the manufacturer's instructions. Total RNA was used for the isolation of poly(A)-carrying mRNA molecules and the synthesis of double-stranded cDNA before adapter ligation. For sRNA libraries, small RNA molecules were resolved by electrophoresis on a 6% (w/v) polyacrylamide gel and the fraction corresponding to <200 nt in length was eluted from the gel. Adapter-ligated molecules were reverse transcribed and enriched by PCR. The final libraries were quantified by real-time PCR in a LightCycler 480 (Roche) and sequenced on an Illumina HiSeq 2000 system using 75 and 100 bp paired-end reads.
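Before mapping, the size selection of the sRNA libraries can be checked by tabulating read lengths directly from the adapter-trimmed FASTQ files. The sketch below is a minimal example of that step; the file name and the 17-30 nt window are assumptions chosen to match the length range reported in the Results.

```python
# Sketch: length distribution of adapter-trimmed small-RNA reads (17-30 nt window).
import gzip
from collections import Counter

def read_length_histogram(fastq_path, min_len=17, max_len=30):
    opener = gzip.open if fastq_path.endswith(".gz") else open
    counts = Counter()
    with opener(fastq_path, "rt") as handle:
        for i, line in enumerate(handle):
            if i % 4 == 1:  # sequence lines in a FASTQ record
                length = len(line.strip())
                if min_len <= length <= max_len:
                    counts[length] += 1
    return counts

if __name__ == "__main__":
    hist = read_length_histogram("srna_trimmed.fastq.gz")  # assumed file name
    total = sum(hist.values()) or 1
    for length in sorted(hist):
        print(f"{length} nt\t{100 * hist[length] / total:.1f}%")
```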
DNA methylation and sRNA profiles during the P. ostreatus life cycle
Whole-genome bisulphite sequencing (BS-seq) was carried out on six representative samples of the P. ostreatus life cycle (Fig. 1) to investigate the profile of 5-cytosine DNA methylation. Sequencing of the BS-seq libraries yielded an average of 40 ± 6 million total reads per sample. Reads were aligned to the PC15 v2.0 reference genome, obtaining coverages ranging from 66× to 97× (Supplementary Table S2A). Global methylation levels ranged from 2.8 to 6.7% of the total cytosines, and the bisulphite non-conversion rate was 0.27%. P. ostreatus has the lowest methylation levels in the monokaryotic stage, although the two strains tested differed from each other (2.8% in PC15 vs. 4.4% in PC9) (Supplementary Table S2B). Interestingly, we found that the reconstructed N001-HyB dikaryotic strain (obtained by mating the PC15 and PC9 monokaryons, see Section 2) exhibited methylation levels substantially lower than the 'natural' N001 dikaryotic strain (3.96 vs. 6.48%), which displayed nearly identical levels in the three developmental stages (Fig. 2A). Cytosine methylation was clearly predominant in the CpG context in all samples (4.0 ± 2% in CpG vs. 0.5 ± 0.1% in CHG and 0.6 ± 0.2% in CHH; Fig. 2A and Supplementary Table S2B), and hereafter we focus only on this context. Reads were also aligned to the P. ostreatus PC9 reference genome, yielding methylation levels similar to those obtained with the PC15 reference genome in all samples (Supplementary Table S2C and D). Owing to the lower assembly quality of the PC9 reference genome (572 scaffolds and a total of 476 gaps covering 9.72% of the whole assembly), subsequent analyses were performed on the fully assembled PC15 reference genome. To validate the findings obtained by BS-seq, methylation levels were also estimated by a bisulphite-free real-time PCR method. The MSRE-qPCR (methylation-sensitive restriction enzyme qPCR) profiles of five genes in the PC9 and PC15 monokaryotic samples confirmed the trend outlined by bisulphite treatment followed by NGS sequencing (Supplementary Fig. S2). The distribution of 5-methylcytosines (5mC) was analysed in two different genomic contexts: genes and TEs. The results uncovered significant differences between these two features. Genes showed patterns of hypomethylation, with average methylation levels ranging from 1.5 to 4%. In contrast, TEs were heavily methylated, with levels ranging from 20 to 60% (Fig. 2B). We also observed sharp 5mC increments in the regions adjacent to both the initial and terminal TE insertion sites, with maximum methylation levels reached along the whole transposon body. This trend was absent in protein-coding genes. Regarding the differences between samples, the methylation levels of N001 were the highest and showed no variation during fruitbody development, neither within genes nor in TEs. The genome-wide production of small RNAs was investigated in the same six samples by sRNA sequencing (mapping statistics using PC15 and PC9 as reference genomes are shown in Supplementary Table S3A and B). Most of the small RNAs originated from non-annotated genomic features, which could correspond to heterochromatic regions spread across the genome of P. ostreatus (Fig. 2C). The amount of repeat-associated small interfering RNA (rasiRNA) varied between the six samples, ranging from 14% to 37% of the total mapped reads, whereas the percentage of sRNAs mapping to genes ranged from 16 to 44%. The population of small RNAs was further characterized by analysing the length distribution. A maximum peak was found at 21 nt in all strains and samples (Fig. 2D and Supplementary Fig. S3).
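Per-context methylation levels of the kind reported above can be summarized from the per-cytosine output of the bisulphite aligner. The sketch below assumes a BS-Seeker2-style CGmap file and an arbitrary minimum-coverage filter; the file name and the coverage cut-off are illustrative assumptions rather than the exact settings used in this study.

```python
# Sketch: weighted genome-wide methylation per context from a CGmap-style file.
# Assumed column layout: chrom, base, position, context (CG/CHG/CHH), dinucleotide,
# methylation level, methylated reads, total reads.
import gzip
from collections import defaultdict

def context_methylation(cgmap_path, min_cov=4):
    meth, cov = defaultdict(int), defaultdict(int)
    opener = gzip.open if cgmap_path.endswith(".gz") else open
    with opener(cgmap_path, "rt") as handle:
        for line in handle:
            fields = line.rstrip("\n").split("\t")
            context, m_reads, t_reads = fields[3], int(fields[6]), int(fields[7])
            if t_reads >= min_cov:
                meth[context] += m_reads
                cov[context] += t_reads
    return {ctx: 100.0 * meth[ctx] / cov[ctx] for ctx in cov}

if __name__ == "__main__":
    levels = context_methylation("M_N001_rep1.CGmap.gz")  # assumed file name
    for ctx in ("CG", "CHG", "CHH"):
        print(f"{ctx}: {levels.get(ctx, 0.0):.2f}% methylated cytosines")
```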
Genome-wide transcriptional profiles during P. ostreatus development
Next, we sought to perform mRNA-seq on the same six samples to analyse gene expression changes during P. ostreatus development.An average of 24.16 6 2.6 millions of uniquely aligned reads per sample (Supplementary Table S4) were used to calculate the transcriptional levels of all non-TE genes (genes overlapping with TEs were excluded) and perform differential expression analyses.We performed all-by-all sample comparisons and found that 3,531 out of the 11,828 genes were differentially expressed in the samples analysed (3-fold Log 2 cutoff, FDR-corrected P-value < 0.05).The biggest differences were found between PC15 and M_N001 (1,528 DEGs, Table 1), and the smallest between primordia (P_N001) and fruitbody (F_N001) samples of N001 (6 DEGs).To analyse the expression trends of DEGs under the six samples, we performed a hierarchical clustering of the 3,531 genes.We identified a total of nine clusters of co-expression, consisting of variable numbers of genes (heatmap vertical axis, Fig. 3).Three of the nine clusters were further analysed due to their relevance for the study of fruitbody triggering and development (Clusters B, C and D) and one for the role of dominance in the expression profiles of the dikaryotic stage (Cluster A).Specifically, the expression of genes belonging to Cluster A (906 genes) was high in PC15 and low in PC9, and the dikaryons showed either intermediate profile (N001-HyB) or PC9like profile (M_N001, P_N001 and F_N001).Cluster C (142 genes) showed an increased transcription during fruitbody triggering and higher expression than monokaryons in primordia and fruitbodies.Finally, Clusters B and D (614 and 36 genes, respectively) were upregulated during fruitbody development.To better understand the link between transcription and DNA methylation in the different stages of P. ostreatus development, we represented the average value of these two marks in genes belonging to the selected clusters (Fig. 3).Genes belonging to Clusters B, C and D showed a nearly complete lack of methylation.Notably, genes included in Cluster A showed higher methylation levels in PC9 and N001 (about 35% average methylation level for N001 samples).Moreover, this hypermethylation pattern coincided with a low expression in the two strains.Further analysis showed that 29% of the genes belonging to Cluster A were present inside the TE-rich regions defined by Castanera et al. 35 In contrast, this percentage decreased to 13% in Cluster B, 20% in Cluster C and 8% in Cluster D. (Supplementary Fig. S4).Next, we retrieved the functional annotation of all genes and performed gene ontology (GO) enrichment focussing on the selected Clusters.This approach revealed over-represented molecular functions (MFs), biological processes (BPs) and cellular components.Specifically, the most enriched ontology of cluster A was 3'-5' exonuclease activity (MF), in Cluster B was monooxygenase activity (MF), in cluster C was structural constituent of cell wall (MF), and in Cluster D was fruiting body development (Supplementary Table S5).Interestingly, other significantly enriched BP enriched during fruiting induction was multicellular organism development (Table 2, Cluster C).These data also provided evidence of the different transcriptional profiles between the natural N001 (M_N001) and the regenerated N001-HyB.We found that a total of 395 genes were differentially transcribed between these two strains.The comparison displayed that 349 genes were upregulated in the N001-HyB and 46 in M_N001 strain (Supplementary Fig. 
S5).Not surprisingly, when we performed GO enrichment analysis between these two dikaryotic strains, the subset of enriched functions revealed that BP involved in fruitbody development was exclusively represented in the genes upregulated in M_N001 strain (Supplementary Table S6).
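A minimal sketch of the differential-expression filtering and co-expression clustering used above is shown below. The file names, column names and the scipy-based clustering are assumptions about pre-computed intermediate tables; only the |log2 fold change| ≥ 3 and FDR < 0.05 cut-offs and the grouping into nine clusters are taken from the text.

```python
# Sketch: select differentially expressed genes and group them into co-expression clusters.
import pandas as pd
from scipy.cluster.hierarchy import linkage, fcluster

# Hypothetical inputs:
#   de_results.tsv      gene_id, log2fc, fdr   (strongest contrast per gene)
#   expression_tpm.tsv  gene_id + one expression column per sample
de = pd.read_csv("de_results.tsv", sep="\t")
expr = pd.read_csv("expression_tpm.tsv", sep="\t", index_col="gene_id")

# Differentially expressed genes: |log2 fold change| >= 3 and FDR < 0.05
deg_ids = de.loc[(de["log2fc"].abs() >= 3) & (de["fdr"] < 0.05), "gene_id"].unique()
deg = expr.loc[expr.index.intersection(deg_ids)]

# z-score each gene across samples; drop genes with zero variance
scaled = deg.sub(deg.mean(axis=1), axis=0).div(deg.std(axis=1), axis=0).dropna()

# Average-linkage clustering on correlation distance, cut into nine clusters
tree = linkage(scaled.values, method="average", metric="correlation")
clusters = pd.Series(fcluster(tree, t=9, criterion="maxclust"),
                     index=scaled.index, name="cluster")
print(clusters.value_counts().sort_index())
```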
Different nucleus-specific methylation profiles operate in the P. ostreatus dikaryotic stage
In previous sections, we have shown that the dikaryotic mycelium of the natural N001 strain (represented here by the M_N001 sample) and the ad hoc generated N001-HyB strain displayed different methylation profiles under identical conditions, although they share the same genetic complement. Thus, we considered the possibility of an unequal contribution of each monokaryotic nucleus to the dikaryons. To test this hypothesis, we mapped the BS-seq data to a new set of pseudo-genomes consisting of the nucleus-specific regions concatenated with the common regions sharing a similarity <90% (one pseudo-genome for PC15 and another for PC9). To build these reference sequence sets, we performed a whole-genome alignment between PC15 and PC9 using the NUCmer software.42 The direct pairwise comparison revealed an average of 97.2% similarity in the aligned regions, which spanned 87.3% of the PC15 assembly and 83.2% of PC9. Afterwards, we determined the nucleus-specific methylation levels in M_N001 and N001-HyB by performing BS-Seeker2 analyses43 on the unique regions (pseudo-genomes of PC9 and PC15 used as references). We observed similarly low mappability rates for the M_N001 and N001-HyB strains when aligned to each reference genome, which could reflect the presence of repetitive sequences along both pseudo-genomes. Interestingly, when we looked at the global DNA methylation values, we found that each nucleus contributed differently in the two dikaryotic strains (Table 3). In fact, while genomic regions deriving from the PC9 nucleus exhibited comparable methylation levels in both the natural and ad hoc dikaryons (coefficient of variation of 13.4%), differences in regions associated with the PC15 nucleus were higher (coefficient of variation of 37.9%). Specifically, methylation of the PC15 nucleus in N001-HyB was considerably lower than in M_N001 (18.11 vs. 34.45%), whereas the methylation levels of the two PC9 nuclei were similar (32.23 vs. 39.62%). Next, we sought to identify genomic regions showing significant differential methylation between these two strains. Using the SMART2 software, we detected a total of 2,199 differentially methylated regions (DMRs) of 200-5,714 bp in length (Supplementary Fig. S6A). Among the resulting DMRs, 98% were significantly hypermethylated in M_N001 vs. N001-HyB (Supplementary Fig. S6B).
DNA methylation, small RNA production and transcriptome landscape in P. ostreatus
The overall distribution and levels of DNA methylation, mRNA and sRNA production were analysed along the twelve P. ostreatus chromosomes using the dikaryotic N001 strain (sample M_N001) as reference. We noticed that DNA methylation and sRNA production were tightly associated with the TE-rich clusters spread along the genome, where transcriptional activity was dramatically depleted (Fig. 4A and B). Moreover, we found that the levels of methylation and sRNA production varied regionally and between chromosomes.
Considering these data, we sought to analyse the correlation of methylation levels with mRNA and sRNA expression across the entire genome.For this purpose, the genome was divided into 200 bp windows.Windows were split into three groups according to their methylation levels (Group I: 0-20%, Group II: 20-60% and Group III: > 60%), and plotted with their corresponding sRNA and mRNA expression.In addition, we analysed the correlation across genes and promoter regions according to the same methylation ranges previously mentioned.As shown in Figure 4C, DNA methylation exhibited a negative correlation with mRNA transcription.Although the strongest repression was found in windows with >60% average methylation, the sharpest decrease in expression was found to occur when methylation exceeded 20%.Regarding DNA methylation and sRNAs expression, the two marks were positively correlated (Fig. 4D).Notably, we noticed that the majority of the sRNA production (approximately 95%) derived exclusively from a very small portion of the whole genome (3.3% of entire genome).When we analysed both mRNA and sRNA abundances at genes and promoter regions, a similar negative correlation between methylation and mRNA expression was uncovered among the three methylation ranges.In particular, promoter regions displayed lower mRNA transcriptional levels when compared with genes.On the contrary, a positive correlation was detected between sRNA and methylation in both genes and promoter regions, with slightly higher sRNA expression levels in correspondence to promoter regions.
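The window-based comparison can be sketched as a simple binning of pre-computed per-window values. The input table layout and file name below are assumptions; only the 200 bp window size and the 0-20%, 20-60% and >60% methylation groups come from the text.

```python
# Sketch: bin 200-bp windows by methylation level and summarize mRNA and sRNA abundance.
import pandas as pd

# Assumed pre-computed table: window_id, meth_pct, mrna_rpkm, srna_rpm
windows = pd.read_csv("windows_200bp.tsv", sep="\t")

bins = [0, 20, 60, 100]
labels = ["0-20%", "20-60%", "60-100%"]
windows["meth_group"] = pd.cut(windows["meth_pct"], bins=bins, labels=labels,
                               include_lowest=True)

summary = (windows
           .groupby("meth_group", observed=True)[["mrna_rpkm", "srna_rpm"]]
           .median())
print(summary)
```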
Repeat-associated DNA methylation, expression and small RNA profiles
A clearly opposite trend was observed in transcription and methylation levels between genes (high transcription with low methylation) and TEs (low transcription with high methylation) (Fig. 5A and B). Within the different TE orders, terminal inverted repeats (TIR), long terminal repeats (LTR) and Dictyostelium intermediate repeat sequences (DIRS) were the most heavily methylated and showed the lowest expression. Helitron and long interspersed nuclear element (LINE) elements had slightly lower methylation and higher expression values. This general trend was maintained in all samples and strains, although PC15 had the lowest 5mC rate in all TE orders, which corresponded to the highest expression ratios. We performed a deeper analysis accounting for the 80 TE families present in P. ostreatus and found that PC15 TE families were consistently hypomethylated compared with the other strains (Fig. 5D). Hierarchical clustering of TE methylation and expression showed that the most invasive TE families (LTR/Gypsy) were strongly methylated (26-59% on average) and transcriptionally repressed (Fig. 5D and Supplementary Fig. S7A). Nevertheless, other less abundant families, such as TIR_1, displayed higher methylation levels (35-67%) and complete transcriptional repression. Next, we analysed the production of sRNAs by genes, TE orders and families in the context previously described. The amount of sRNAs per element (i.e. gene or TE) was higher in TEs than in genes (Fig. 5C), although it varied greatly depending on the strain and TE order. sRNAs were abundantly produced by TIR transposons in all samples (reaching average levels of up to ~7,000 RPM/copy in the PC9 strain), with the only exception of PC15, which displayed much lower production of sRNAs associated with TIR elements. The LTRs of the Gypsy superfamily and Helitrons also showed high sRNA production, especially in the PC15 and N001-HyB strains, and the remaining orders had low amounts of mapped sRNAs. Considering the percentage of sRNAs mapped to TEs (rasiRNAs), Gypsy and Copia retroelements and TIRs were the main sources, with great differences between families (Supplementary Fig. S7B). Next, we tested whether the production of rasiRNAs by TE families was related to (i) size (family copy number) or (ii) age (divergence between TE copies and the family consensus). Despite detecting differences between strains, we found that TE families containing more than 10 copies tend to have the highest rasiRNA expression (Supplementary Fig. S8A). This effect was especially relevant in samples producing abundant sRNAs, such as PC9, where rasiRNA abundance (RPM/copy) was positively correlated with family copy number (Pearson correlation coefficient = 0.42, P-value = 1.54 × 10−5) (Fig. 6A).
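The order- and family-level summaries above amount to grouping a per-copy table of methylation, expression and rasiRNA abundance. A minimal sketch is given below; the per-copy table and its columns are assumed intermediate data, not files released with the study.

```python
# Sketch: per-order and per-family summaries of TE methylation, transcription and
# rasiRNA abundance (cf. Fig. 5). Assumed input: te_copies.tsv with columns
# copy_id, family, order, meth_pct, expr_rpkm, rasirna_rpm.
import pandas as pd

copies = pd.read_csv("te_copies.tsv", sep="\t")

order_summary = (copies
                 .groupby("order")[["meth_pct", "expr_rpkm", "rasirna_rpm"]]
                 .mean()
                 .sort_values("meth_pct", ascending=False))
print(order_summary)

# Family-level means give the input for a heatmap such as Fig. 5D.
family_meth = copies.groupby("family")["meth_pct"].mean()
print(family_meth.nlargest(5))
```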
To explore the relationship between rasiRNA production and family age, we correlated the rasiRNA data with the average divergence rate, which increases linearly with family age (Supplementary Fig. S8B). Using PC9 as a reference, we found that average family divergence and rasiRNA expression were negatively correlated (Pearson correlation coefficient = −0.25, P-value = 0.01617) (Fig. 6B).
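Both family-level relationships reported here reduce to Pearson tests between per-family rasiRNA abundance and either copy number or mean divergence. A minimal sketch is shown below; the summary-table format and file name are assumptions.

```python
# Sketch: Pearson correlation of rasiRNA abundance with TE family copy number and
# mean divergence (cf. Fig. 6A and B). Assumed input: te_family_summary_PC9.tsv with
# columns family, copies, divergence_pct, rasirna_rpm_per_copy.
import pandas as pd
from scipy.stats import pearsonr

families = pd.read_csv("te_family_summary_PC9.tsv", sep="\t").dropna()
families = families[families["copies"] > 0]

for predictor in ("copies", "divergence_pct"):
    r, p = pearsonr(families[predictor], families["rasirna_rpm_per_copy"])
    print(f"rasiRNA vs {predictor}: r = {r:.2f}, P = {p:.3g}")
```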
Role of epigenetic modifications on TE-mediated gene silencing
As discussed earlier, TEs represent the primary target of cytosine methylation in P. ostreatus. Our previous investigations found that TE insertions lead to a significant reduction in the expression levels of genes located within 1 kb.35 We also found that this TE-mediated gene silencing effect was stronger when genes were located inside any of the 40 TE clusters described in P. ostreatus. We explored the possibility that this phenomenon has an epigenetic explanation, testing the hypothesis that methylation could spread outside TE boundaries, reaching the surrounding genes and blocking their transcription. Therefore, we compared the methylation and expression levels of genes surrounded by transposons (Fig. 7) (with a TE within a window of 1 kb, either upstream or downstream of the gene body; Group I, labelled +TE) and genes not surrounded by TEs (Group II, labelled Ctl). Additionally, to uncover the impact of TE clusters on this phenomenon, we split the first group of genes into two contexts: (i) genes located inside a TE cluster (cluster) and (ii) genes located outside TE clusters (isolated). As shown in Figure 7A and B, using the N001-HyB and M_N001 strains as a model, genes carrying a TE insertion displayed higher methylation levels than the control groups (P < 0.05). In the case of genes in TE clusters (Fig. 7B), we found a bimodal distribution of methylation, with approximately half of the genes having high 5mC levels (up to 40% in N001-HyB and 70% in M_N001). Regarding transcriptional profiles (Fig. 7C and D), genes surrounded by TEs had lower expression than controls (P < 0.05), especially genes located inside a TE cluster (Fig. 7D). This phenomenon was present in the six samples, although with different intensities (Supplementary Fig. S9A-D and Table S7). Furthermore, we analysed the distribution of sRNAs in genes grouped into the contexts described earlier. Notably, no significant differences in sRNA production were found between genes surrounded by a TE and the control group (Supplementary Fig. S9E and F). Next, we studied the distribution of methylation across the gene body and the regions adjacent to the TSS and TTS. The aim was to understand whether the methylation was spreading from the TE to the whole gene body or only to the promoter sequence in both contexts (TE cluster or isolated TE). We found that methylation levels mildly decreased from the adjacent regions towards the gene body. Nevertheless, gene-body methylation of genes in both contexts was much higher than in the control, reaching average values of around 13% for isolated genes with TE insertions and up to 25% for genes with TE insertions located in a TE cluster (Fig. 8A). Further analyses of the relationship between transcription and gene-body methylation found that genes carrying TE insertions (within 1 kb upstream or downstream) had only two activity states. As represented in Figure 8B and C, the clear majority of genes surrounded by a TE were methylated and silenced, whereas only a few copies were unmethylated and transcriptionally active, similarly to almost all genes included in the control group not enclosed by TE insertions (Fig. 8D).
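The gene grouping used in this comparison can be sketched as a simple interval query: flag genes with any TE within 1 kb of the gene body and split them by TE-cluster membership. The table layouts and file names below are assumptions, and the naive overlap search is only meant to illustrate the classification into the +TE (cluster or isolated) and control groups.

```python
# Sketch: classify genes by the presence of a TE within 1 kb and compare gene-body
# methylation between groups (cf. Fig. 7). Assumed inputs:
#   genes.tsv  gene_id, chrom, start, end, meth_pct, in_te_cluster (True/False)
#   tes.tsv    chrom, start, end
import pandas as pd

genes = pd.read_csv("genes.tsv", sep="\t")
genes["in_te_cluster"] = genes["in_te_cluster"].astype(bool)
tes = pd.read_csv("tes.tsv", sep="\t")
te_by_chrom = {c: g[["start", "end"]].to_numpy() for c, g in tes.groupby("chrom")}

def te_within_1kb(row, window=1000):
    """Naive scan for any TE overlapping the gene body extended by +/- 1 kb."""
    for te_start, te_end in te_by_chrom.get(row.chrom, []):
        if te_start <= row.end + window and te_end >= row.start - window:
            return True
    return False

genes["near_te"] = genes.apply(te_within_1kb, axis=1)
genes["group"] = "Ctl"
genes.loc[genes["near_te"] & ~genes["in_te_cluster"], "group"] = "+TE isolated"
genes.loc[genes["near_te"] & genes["in_te_cluster"], "group"] = "+TE cluster"

print(genes.groupby("group")["meth_pct"].describe()[["count", "mean", "50%"]])
```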
Epigenetic factors contribute to P. ostreatus genome regulation
Previous findings have illustrated the presence and importance of epigenetic modifications in fungi (for review, see 44 ).Most of the data on fungal epigenetics comes from DNA methylation and RNAi studies performed in ascomycete fungi.However, few studies have reported evidence of the presence of such mechanisms in basidiomycetes. 26,32n N. crassa, it was observed that Dim-2 DMTase is responsible for methylation at both symmetrical and asymmetrical sites and is required for de novo and maintenance of DNA methylation. 45,46n plants, asymmetric methylation is maintained by the activity of de novo methyltransferases drm1, drm2 and chromomethylase 3 (cmt3).In fact, Cao et al. 47 reported the absence of asymmetric methylation only in drm1 drm2 and cmt3 triple mutant plants, suggesting that the regulation of non-CG methylation is complex.Castanera et al. 35 reported that Dim-2 DMTase was transcriptionally active in the genome of P. ostreatus, and it also carries a transcriptionally active cmt3.Nevertheless, it lacks dmr1 and dmr2 homologs, as it happens in N. crassa.According to these results, we hypothesize that the differences in the non-CG methylation levels between basidiomycetes and ascomycetes might be associated with the different evolutionary trajectories of their respective Dim-2 DMTases.Our results confirm the presence of 5-cytosine methylation in this basidiomycete, predominantly localized within repetitive regions, in the symmetric CpG context.This predominance has also been detected by methylome analyses of five species belonging to the three major fungal groups (ascomycetes, basidiomycetes and zygomycetes). 26Despite an estimated divergence time of >1 billion years, similar methylation patterns were found between basidiomycetes and zygomycetes, displaying a marked trend toward CpG methylation of regions located within TEs or other repetitive loci, and global weak methylation associated to genes.The methylation pattern of P. osteratus was similar to what has been described for other basidiomycete species, showing substantial amount of CG methylation and a small amount of non-CG methylation both concentrated in repetitive regions.These divergent patterns might underline evolutionary differences in mechanisms related to DNA methylation within fungal species characterized by different genome structure and lifestyle.Selker et al 48 , stated that in N. crassa, asymmetric methylation might be associated to methylation events occurring during premeiosis in the parental nuclei.This condition is maintained during the vegetative state in those genomic regions exhibiting RIP mutations and where the methylation is present.
Based on these observations, the low asymmetric methylation levels found in P. ostreatus could be due to its negligible RIP levels (data not shown), which could make up a strategy to avoid silencing of genes involved in growth and development processes in monokaryon as well as dikaryons mycelia, the most common developmental stage in nature.In this regard, Chan et al. 49 suggested that non-CG DNA methylation can be inherited via a network of different and persistent signals that have been co-opted to regulate developmentally important genes, though we have not carried out in Pleurotus experiments to test such hypothesis.Thus, this statement is only a speculation.Our findings showed that cytosine methylation ranged from 2 to 6% in the six samples analysed at different growing stages.These methylation levels are comparable to those described for other fungi so far, 26,27,50 except the highly methylated genome of Tuber melanosporum, displaying up to 44% cytosine methylation. 51Previous studies in plants 52 and fungi 27 have demonstrated that the activity of transposons can be efficiently shut down by chromatin modifications linked to 5mC, as a defense mechanism aimed at controlling their expansion.In higher fungi, this trend has been reported in T. melanosporum, suggesting that methylation principally targets TEs. 51onsistent with this observation, we found that P. ostreatus transposons are highly methylated whereas gene-coding regions are hypomethylated, independently of the strain and developmental stage (Fig. 2B).It is worth noting that the highest methylation levels are maintained across the whole transposon body, while they sharply decrease beyond the transposon borders.A plausible explanation for this locally-restricted methylation might be the 'mosaic methylation' described in invertebrates and plants.As reviewed by Suzuki et al., 10 this strategy attempts to delimit potential detrimental changes induced by methylation and prevent spurious transcriptional silencing across the genome.In addition to 5-cytosine methylation, we describe the genomic hallmark associated with the production of small RNAs.We show that endogenous sRNAs produced by P. ostreatus are enriched in the 21 nt fraction (Fig. 2D) which could reflect the presence of RNAi-like silencing mechanisms such as those described in other eukaryotes. 5,53In the basidiomycete C. neoformans, experimental analyses showed the presence of 21-23 nt endogenous sRNAs involved in the control of mobile elements. 30Also, the plant-pathogen Puccinia striiformis produces 20-22 nt sRNA likely involved in PTGS. 33In this sense, P. ostreatus contains a highly conserved, transcriptionally active RNAi machinery composed of Argonaute, Dicer and RNA-dependent RNA polymerase proteins 36 (18 proteins sharing 93-100% similarity between PC9 and PC15 strains).
TEs are targeted by DNA methylation and RNAi machinery
TEs play a major role in genome stability and evolution 54 but are also a source of mutation that can result in deleterious or lethal effects to the host.Thus, epigenetic mechanism encoded by the host genome has been developed to silence their expression at the transcriptional and post-transcriptional levels. 5Here, we found that DNA methylation and sRNA production were correlated with the presence of silent TEs, suggesting a role in their activity leading to transcriptional suppression.This association was even more striking in TE-rich regions.Specifically, TE clusters were shown to be transcriptionally silenced and highly methylated, similarly to that described in the basidiomycete Laccaria bicolor. 26In this sense, the accumulation of silent transposon knobs could correspond to heterochromatic regions matching to centromeric and pericentromeric zones, as described in plants. 55egarding DNA methylation, both Classes I and II transposons had higher methylation compared with genes.This observation supports the hypothesis that transcriptional silencing mediated by DNA methylation could be primarily responsible for TE inactivation in P. ostreatus.Similarly to the ascomycete T. melanosporum, 51 we found that both LTR-retrotransposons and Class II transposons were abundantly methylated.It is interesting to note that the most methylated orders, TIR and LTR/Gyspy, also accounted for the larger portion of 21 nt rasiRNAs (Fig. 5C), especially TIR transposons belonging to the Mariner superfamily, the only cut-and-paste elements described in P. ostreatus.An example of this association has been reported in wheat, where most of the 21-22 nt sRNAs perfectly targeted within TIR regions of MITE elements, indicating that they are subjected to post-transcriptional control. 56An explanation for such an impressive amount of rasiRNA might reside in the hairpin RNA structure that elements from the TIR1 family can adopt due to the presence of Terminal Inverted Repeats (Supplementary Fig. S10), a conformation that promotes the presence of dsRNA and triggers RNAi machinery.
In Caenorhabditis elegans, it has been proposed that such conformations lead to rasiRNA production from Mariner elements. 5Previous studies have described that Class I transposons predominate in basidiomycetes genomes whereas Class II transposons show limited expansion.This is especially relevant in the agaricomycotina subphylum, 57 to which P. ostreatus belongs, and might be a consequence of the efficient replicative mechanism of Class I elements.According to our results, the under-representation of Class II transposons in the P. ostreatus genome might also be related to a stronger posttranscriptional inactivation in comparison to Class I elements.Based on our results, the production of rasiRNAs positively correlates with family size and negatively correlates with mean family divergence (Fig. 6A).Thus, the production of sRNA molecules might reflect an attempt to limit the expansion of the youngest, most invasive TE families of the genome.This observation is reminiscent of co-suppression studies carried out in the basidiomycete C. neoformans.In this species, mitotic-induced silencing pathway, a quelling-like asexual mechanism operating in trans, lead to RNAi-mediated silencing of homologous sequences and repeated elements during vegetative growth. 58Also, a similar mechanism was shown to induce post-transcriptional inactivation of repetitive transgenes and transposons mediated by 21-23-nt sRNAs during the sexual reproduction (SIS). 30Interestingly, the production of sRNAs in this fungal model increased according to the transgenes copy number with both mechanisms.Our finding suggests that P. ostreatus can inactivate transposons at the transcriptional and post-transcriptional level, by epigenetic modifications associated with DNA methylation and 21 nt sRNAs production.In this scenario, we speculate that sRNAs might be involved in the methylation of repetitive regions spread in the genome, guiding their silencing at the transcriptional level similarly to what has been reported in plants, 59 where non-canonical RdDM has been described to be mediated by 21-22 nt long sRNAs. 16
Methylated TEs induce transcriptional silencing of nearby genes
According to our results, we propose that TE-associated transcriptional silencing of nearby genes occurs due to the extension of DNA methylation from TEs to the surrounding genes, which contain significantly higher methylation levels compared with controls.This is consistent with the results described in Magnaporthe oryzae, where genes with upstream, downstream or gene body methylation show lower expression than controls. 50Similarly, in T. melanosporum TEs close to highly expressed genes (1 kb upstream/downstream the TSS or TTS) tend to be less methylated than transposons located in the proximity of lowly expressed genes. 51In our model, methylation of genes surrounded by TE insertions decreases from regions adjacent to the TSS and TTS, reaching the lowest levels in the gene body.This observation fits with the hypothesis of methylation being extended from the closest TE, repressing the transcriptional activity of neighbour genes as documented in plants 60 and animals. 61Another intriguing point is to understand how much methylation is needed to impact the transcriptional activity.
In light of our findings, we propose that low to intermediate methylation levels (<20%) can prompt transcriptional silencing, although the strongest repression is found when methylation exceeds 60% (Figs 4C and 8B).Another striking observation is that genes displaying TEs insertion upstream or downstream of gene bodies have equal or even lower sRNA levels than control genes (Supplementary Fig. S9E and F).This suggests that TE insertions are presumably not involved in posttranscriptional silencing of nearby genes, and indicates that TE-mediated gene silencing is promoted by DNA methylation.Nevertheless, other mechanisms associated to silent heterochromatin structures such as methylation of histone H3 at Lys9 (H3K9me3) 62 should be further investigated to understand their putative role in this phenomenon.
P. ostreatus fruiting stage is associated with methylation-independent transcriptional reprogramming
The exploratory analyses carried out during P. ostreatus development yielded some insights into the transcriptional changes underlying fruitbody induction and development.The transition from vegetative mycelium to primordium stage is a complex process that requires the aggregation of cells into compact hyphal knots which later experience tissue differentiation.In P. ostreatus, fruiting is triggered (among other environmental conditions) by lowering temperature and introducing light-dark cycles. 63In our study, these conditions lead to fruiting induction in N001, accompanied by the activation of a set of genes (especially Cluster D) expressed at low levels in monokaryons and also in N001-HyB (a strain unable to fruit), suggesting their putative role in the fruiting process.According to the number of DEGs between M_N001 and N001-HyB, at most 3% of the P. ostreatus genes are necessary for the early induction, while very few genes are presumably involved in changes from primordial to mature fruit bodies, similarly to what has been found in the basidiomycete Coprinopsis cinerea. 64Within this relatively small gene pool, the impressive overexpression of the pleurotolysin B (GO: 0030582, > 15-fold increase in M_N001, P_N001 and F_N001 vs. any of the monokaryotic stages and N001_Hyb), suggests its important role in the fruiting process.In this sense, previous experimental work has shown that the expression of these hemolytic proteins is activated during the formation of primordia and young fruit bodies in P. ostreatus and Agrocybe aegerita. 65Within the clusters of genes expressed during fruiting induction, we also found enriched functions related to multicellular development (Cluster C), although the most enriched biological function was related to transport, similarly to what was reported for C. cinerea, were such activity is upregulated prior to enlargement of fruiting bodies. 64The analysis of MFs activated during fruitbody triggering and development in P. ostreatus suggest that oxidorreductase and binding activities are the most enriched.Glycoside hydrolases were also found to be enriched in Cluster B, along with other proteins such as oxidative enzymes, hydrophobins involved in the aggregation of aerial hyphae and lectins, previously described to be involved in this process in basidiomycetes. 64,66This indicates that the core genes involved in the fruiting process are conserved across the Basidiomycota phylum.
Interestingly, despite the differences in transcription of fruitingassociated genes, methylation levels were invariably low along the six samples.These results suggest that fruiting body formation in P. ostreatus is not triggered by epigenetic modifications linked to DNA methylation.
Nucleus-specific methylation is compensated in the long-term dikaryotic stage
The sample-specific profiles presented in this study can provide a framework for understanding the epigenetic and transcriptomic differentiation observed during the P. ostreatus life cycle (Fig. 1). Our experimental design also allowed us to compare the epigenetic profiles of short-term (N001-HyB) vs. long-term (N001) cultured dikaryons. Striking differences were found between these two strains, which share the same genetic complement but have clearly different methylation profiles. In particular, N001 shows heterotic TE methylation levels, higher than those of the parental strains PC15 and PC9, whereas N001-HyB shows mid-parent values (Fig. 5A). Our results suggest that this difference can be explained by the different contribution of each nucleus to the overall methylation levels in the dikaryon: in N001-HyB the PC9 nucleus is more methylated than PC15, whereas in N001 both nuclei show similar levels (Table 3). The N001 strain has been sub-cultured for >20 years as a dikaryon, whereas N001-HyB is the result of a very recent mating (<10 subcultures) between the compatible protoclones of PC15 and PC9, which had been stored as isolated strains in a culture collection for more than 15 years. In fungi, the dikaryotic stage is established through the migration of nuclei from one cell to another. Thus, the unequal contribution of the two nuclei could indicate that dikaryotization was not completely established in the N001-HyB mycelium, despite the presence of clamp connections in the culture. The two nuclei present in dikaryons co-evolve over the long term.67 In this context, the dikaryotic stage is thought to be favored over the monokaryotic one, as deleterious mutations in one nucleus can be compensated by the healthy allele of the other, or even reverted by compensatory mutations, as described for Schizophyllum commune.68 Our results suggest that, in addition to permanent modifications (such as mutations), epigenetic profiles are also compensated in long-lasting dikaryons. This compensation led to higher methylation in the N001 strain, which can account for its better defense against TEs compared with its monokaryotic counterparts. A similar phenomenon was observed in Arabidopsis, where the distribution of epialleles inherited from the parental lines may reflect selection against demethylated traits possibly influencing plant fitness.69 Nevertheless, this phenomenon is not observed in recently formed dikaryons such as N001-HyB, where the methylation profiles of the independent nuclei resemble those of their original monokaryotic parentals. To our knowledge, no similar event has been reported in fungi. In plant hybrids, however, an siRNA-mediated mechanism called trans-chromosomal methylation is responsible for equilibrating the methylation levels of alleles, leading to an increase of methylation in the 'low parent allele' and resulting in overall higher methylation levels in the F1 hybrids.70 The N001 profile mimics this phenomenon, where the increase in methylation of PC15 alleles would lead to the balanced nucleus-specific methylation that we have described. The intermediate values found in N001-HyB suggest that this could be a slow process requiring progressive co-adaptation of the two nuclei in a shared cytoplasm, which seems reasonable as nuclei in dikaryons remain unfused during most of their lives. The case of Cluster A (Fig. 3) is an interesting example of how long-term compensation can affect the methylation levels of many genes. This group of genes lies in TE-rich clusters showing opposite methylation profiles in PC15 and PC9, which leads to the presence of epialleles in the recently formed dikaryon N001-HyB. In this case, the hypomethylated profile of PC15 is dominant in many Cluster A genes of the latter strain but shifts to a hypermethylated and transcriptionally silent PC9-like profile in the long-term dikaryon N001. This observation suggests that the two nuclei coexisting in the recently formed dikaryon retain some degree of independence prior to establishing crosstalk interactions. Based on our findings, we propose that TE dynamics could rewire the epigenetic landscape of the fungal genome, promoting gene silencing in their surroundings and generating epialleles in the dikaryotic stage. The methylation profile of these epialleles is subject to compensation after long-term culture, leading to balanced methylation levels in each nucleus.
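To make the comparison between dikaryon and parental methylation levels concrete, the short sketch below (Python, purely illustrative and not part of the original analysis; the numeric values are hypothetical placeholders, not measurements from this study) computes the mid-parent value and classifies a dikaryon's TE methylation level as mid-parent-like or heterotic.

```python
def classify_dikaryon_methylation(parent_a: float, parent_b: float,
                                  dikaryon: float, tol: float = 0.05) -> str:
    """Compare a dikaryon's TE methylation level with its parents.

    Levels are fractions of methylated cytosines (0-1). `tol` is an
    arbitrary tolerance for calling a value 'mid-parent-like'.
    """
    mid_parent = (parent_a + parent_b) / 2.0
    if dikaryon > max(parent_a, parent_b) + tol:
        return "heterotic (above both parents)"
    if abs(dikaryon - mid_parent) <= tol:
        return "mid-parent-like"
    return "intermediate/other"

# Hypothetical illustration only (not measured values from this study):
print(classify_dikaryon_methylation(0.30, 0.60, 0.45))  # mid-parent-like, as for N001-HyB
print(classify_dikaryon_methylation(0.30, 0.60, 0.70))  # heterotic, as for N001
```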
Figure 1. Summary of P. ostreatus samples used in this study. PC15 and PC9 represent two monokaryotic strains that were mated to generate the dikaryon N001-HyB ad hoc. Given its inability to fructify, N001-HyB is examined exclusively at the mycelium stage. The N001 dikaryotic strain, bearing both the PC15 and PC9 haploid nuclei and maintained by continuous subculturing for several years, is analyzed at different developmental stages (mycelium M_N001, primordium P_N001, and fruitbodies F_N001). N001-HyB and N001 harbor the same genetic complement although they show different fruiting ability.
Figure 2. Global DNA methylation and small RNA profiles in the six samples. (A) Average methylation levels in the CG, CHG and CHH contexts. Error bars represent the standard deviation of three biological replicates (except for P_N001, where n = 2). (B) Metaplots showing DNA methylation across adjacent regions, gene bodies and transposon bodies. (C) Percentage of sRNA reads mapped to transposons (TEs), genes (Genes) and other regions (Others). (D) Line chart displaying the percentage of mapped sRNA reads ranging from 17 to 30 nt in length.
Figure 3. Hierarchical clustering of differentially expressed genes. Heatmap illustrating the differentially expressed genes (3,531 genes). Nine main clusters are shown, grouping genes with similar expression profiles. Gene expression levels are shown from lower to higher. The more intensively studied Clusters A-D are highlighted at the right of the heatmap. Expression and methylation profiles of the genes included in Clusters A-D are shown in the right panels. A color version of this figure is available online.
Figure 4. Global association of CpG methylation, mRNA and sRNA expression in the M_N001 strain. (A) Circular genome and data visualization with Circos for the BS-seq, mRNA-seq and sRNA-seq profiles. From inside to outside: mRNA expression (yellow), small RNA production (violet) and DNA methylation (light blue). The outer track reports TEs (grey bands). All tracks represent mean values of three biological replicates. TE-rich clusters are indicated by blue asterisks. (B) Integrative Genomics Viewer (IGV) browser visualization of a representative 260 kb region in scaffold 2 (location: 210,736-472,880 bp) of one replicate of M_N001. Plotted from top to bottom are: TE annotation, DNA methylation, sRNA and mRNA transcription, and gene annotation (logarithmic scale). Boxplots showing the correlation of DNA methylation with mRNA (C) and sRNA (D) transcription. Each boxplot represents the genome split into 200-bp windows, along with genes and promoters grouped based on their methylation levels (0-20, 20-60 and 60-100%). In panel D, the 'no producing' group represents genomic regions, genes and promoters having <10 mapped sRNA reads. In panel C, genes are grouped as follows according to methylation level: 10,692 genes in 0-20%, 428 in 20-60% and 398 in 60-100%. In (D): 6,505 no producing, 4,969 in 0-20%, 70 in 20-60% and 104 in 60-100%.
Figure 5. DNA methylation, transcriptome and sRNA expression over annotated genomic features. Histograms showing average methylation levels (A), mRNA transcription (B) and sRNA production (C) overlapping with genes and with TE orders belonging to Classes I and II. All tracks represent the mean values of three biological replicates. In (C), the y-axis is reported at different scales in the lower and upper parts. Percentage values below each order term indicate the relative occupancy in the total P. ostreatus TE landscape. (D) Hierarchical clustering reporting DNA methylation levels within 80 TE families. Blue and yellow colours indicate high and low values, respectively, expressed as mCG (%).
Figure 6. Association of transposon size and age with rasiRNA production. Line charts showing the correlation of sRNA production with copy number (A) and divergence rate (B) in 80 TE families in the PC9 strain. The coefficients of determination are reported at the top right of each graph.
Figure 7. Influence of TE-associated methylation on nearby gene transcription. Violin plots showing methylation (A and B) and transcription (C and D) levels of genes in the N001-HyB and M_N001 dikaryotic strains. Genes are classified as surrounded (+TE) or not surrounded (Ctl) by a transposon within a 1 kb window (either upstream or downstream). Isolated (A and C) and Cluster (B and D) contexts indicate genes located outside and inside TE-rich clusters, respectively. White dot inside each plot represents the median. Below each violin plot, n represents the number of genes, and P represents the P-value of the Mann-Whitney-Wilcoxon test.
Figure 8. Association between methylation and transcription in the M_N001 strain. (A) Metagene plot showing the average methylation levels across gene bodies and adjacent regions for genes: (i) surrounded by TEs within 1 kb upstream and downstream and located inside a TE-rich cluster (Cluster, upper line); (ii) surrounded by TEs within 1 kb upstream and downstream and located outside a TE-rich cluster (Isolated, intermediate line); and (iii) isolated and not surrounded by TEs (Control, bottom line). These groups are represented by a dashed (upper), dotted (intermediate) and solid (bottom) line, respectively. Scatterplots at different scales reporting the relationship between methylation levels and expression for genes classified in the cluster (B), isolated (C) and control (D) groups.
Table 2. BPs enriched in co-expression clusters under study
Table 3. Summary of BS-seq mapping to PC15 and PC9 unique pseudo-genomes | 2018-06-30T00:51:45.440Z | 2018-06-08T00:00:00.000 | {
"year": 2018,
"sha1": "c8bc5ecaea80fd8b8041c0e799820a678ac82983",
"oa_license": "CCBYNC",
"oa_url": "https://academic.oup.com/dnaresearch/article-pdf/25/5/451/28011654/dsy016.pdf",
"oa_status": "GOLD",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "c5b77381070da01db91646b56a2e580d302752e2",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
219921454 | pes2o/s2orc | v3-fos-license | Using a Multimedia Aquivalence Model to Evaluate the Environmental Fate of Fe, Mn and Trace Metals in an Industrial City, China
The rapid expansion of urban impervious surface areas complicates urban-scale heavy metal circulation among various environmental compartments (air, soil, sediment, water, and road dust). Herein, a level III steady-state aquivalence model evaluated the fate of heavy metals in Nanjing, China. Iron was the most abundant heavy metal in all environmental compartments, while cadmium was the rarest. Most simulated concentrations agreed with measured values within three logarithmic residuals. In the simulated heavy metal cycle, industrial emission contributed almost the entire input, whereas sediment burial was the dominant output pathway. The transfer fluxes between bottom sediment and water were the highest. Among these fluxes, the contribution of sediment resuspension for Fe and Mn was significantly higher than that of the other metal elements, which could partly explain why Fe and Mn are the major blackening ingredients in malodorous black rivers. Road dust was also an important migration destination for heavy metals, accounting for 3–45%, although soil and sediment were the main repositories of heavy metals in the urban environment. The impact of road dust on surface water should not be neglected, with its contribution reaching 4–31%. The wash-off rate constant W for the road dust–water process proved to be consistent with that for the film–water process and was independent of the type of heavy metal. Sensitivity analysis highlighted the notable effect of the background values of Fe and Mn.
Introduction
Urban streams, rivers, and lakes are the final link of the urban water cycle. They receive a variety of point source and diffuse pollutants from other urban environmental media [1], including atmospheric deposition [2][3][4], soil erosion [5,6], and tailwater discharge [7][8][9]. Consequently, the receiving water readily deteriorates and even turns black and malodorous. As the city grows, the hardened impermeable area of urban roads is constantly expanding. In Shenzhen, it has been found that, when the impermeable area of the urban watershed exceeds 36.9%, irreversible river water quality degradation will occur [10]. In Korea, it is suggested that the proportion of the impervious surface area should be controlled within 10% in watersheds to mitigate water quality degradation [11]. Although the threshold of the impervious surface impact on stream health varies at different locations, the impact of urbanization on water quality in watersheds has been increasing in recent years [12]. Road dust accumulated on impervious surface areas can absorb various pollutants under the influence of intrinsic and extrinsic factors [13]. The contribution of urban runoff caused by rain wash-off to river water pollutants cannot be ignored [14,15]. Accordingly, road dust has gradually become a major environmental phase potentially threatening the urban water environment. In addition to exogenous
Characteristics of the Study Area
Located in the lower reaches of the Yangtze River in the southeastern coastal area of China (118°22′ E–119°14′ E, 31°14′ N–32°37′ N), Nanjing has a subtropical monsoon climate with four distinct seasons and abundant rainfall. The average annual rainfall is 1106.5 mm, the relative humidity is 76%, and the annual average temperature is 15.4 °C. The terrain is relatively flat, urban rivers flow slowly, and black and odorous water bodies form easily. In addition, Nanjing is a nationally important comprehensive industrial production base, with a GDP totaling 95.15 billion dollars in 2011. Nanjing has a high electronic chemical production capacity, a very large vehicle manufacturing scale, and advanced rail transit equipment and power grid industries. In this study, five environmental media, namely air, soil, water, sediment, and road dust, were selected. Road dust collection and heavy metal content determination were conducted by our research group.
Collection of Road Dust and Determination of Heavy Metal Content
As the impervious surfaces of Nanjing are relatively concentrated, the sampling points were located according to the land use types of the city. In total, 23 road dust sampling points were arranged; at each sampling point, approximately 300 g of sample was collected with a brush and transferred to a sealed polyethylene bag with a plastic shovel. The samples were sent to the laboratory as soon as possible after collection. The bulk samples were separated by an electric vibrating machine fitted with a series of nylon screens with mesh openings of 250, 125, 75, and 37 µm overlaid in sequence. About 150 g of dried bulk sample was put on the top screen with mesh openings of 250 µm. When the electric vibrating machine was started, the particles were sieved through the overlaid screens and separated into sub-samples (250-500, 125-250, 75-125, 37-75, and <37 µm). The heavy metal content was determined after the bulk and sub-samples were freeze-dried. Following the detection method of national standard GB 15618-1995, 0.5 g samples were digested with hydrofluoric, nitric and perchloric acids and aqua regia. Cadmium (Cd), copper (Cu), nickel (Ni), lead (Pb), and zinc (Zn) were detected by inductively coupled plasma mass spectrometry (ICP-MS). Chromium (Cr), manganese (Mn), and iron (Fe) were detected by inductively coupled plasma optical emission spectrometry (ICP-OES).
For air, soil, water, and sediment, heavy metal concentration data were collected through a literature survey. Descriptive statistics of the reference data, including the arithmetic mean (AM), geometric mean (GM), median, minimum value (Min), maximum value (Max), number of observations (N), and standard deviation (SD), are provided in the Supplementary Materials (Table S1). Table 1 is a partial summary of these descriptive statistics. A level III steady-state aquivalence model was then constructed for the five environmental compartments, namely air, surface water, soil, sediment, and road dust, in Nanjing, China. The migration and transformation processes involved in this model are shown in Figure 1, including atmospheric dry and wet deposition, diffusion, sediment resuspension and deposition, road dust wash-off, etc.
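As a concrete illustration of this pre-processing step, the sketch below (Python, not the authors' MATLAB code; the input values are hypothetical placeholders, not measured data) computes the descriptive statistics listed above (AM, GM, median, Min, Max, N, SD) for one metal's concentration series.

```python
import math
import statistics

def describe(concentrations):
    """Descriptive statistics (AM, GM, median, Min, Max, N, SD) for a
    list of strictly positive concentration values."""
    n = len(concentrations)
    am = statistics.mean(concentrations)
    gm = math.exp(statistics.mean(math.log(c) for c in concentrations))
    return {
        "AM": am,
        "GM": gm,
        "Median": statistics.median(concentrations),
        "Min": min(concentrations),
        "Max": max(concentrations),
        "N": n,
        "SD": statistics.stdev(concentrations) if n > 1 else 0.0,
    }

# Hypothetical Pb concentrations in road dust (mg/kg), for illustration only.
print(describe([48.2, 75.1, 120.4, 64.3, 98.7]))
```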
The basic principle of the multimedia model is the law of mass conservation, and the modeling basis consists of establishing a series of mass balance equations for the pollutants in the study area. In this study, metal circulation was assumed to be at steady state; that is, for each of the five main environmental compartments in the system, the inflow fluxes equal the outflow fluxes. The metal flux exchanges between adjacent compartments were calculated using the aquivalence approach (N = D × Q). Based on the conceptual model, the mass balance equation for each compartment (air, surface water, soil, sediment, and road dust) is summarized in Equations (1)-(5); in each equation, the sum of the input transfer fluxes, plus the emission term E_A or E_W where applicable, equals the sum of the output transfer fluxes. In these equations, N (mol/h) is the transfer flux and D (m³/h) is the transport parameter of the pollutant; D reflects the speed of mass transfer in a process, so the higher the D value, the faster the transfer. The equilibrium criterion, aquivalence (Q, mol/m³), was proposed to replace fugacity, and Z (dimensionless) is the fugacity capacity. When the Z value is high, the corresponding fugacity increases little after the pollutant is absorbed by the compartment, and the pollutant tends to remain in that compartment; otherwise, it tends to escape. The molar concentration (C, mol/m³) is the product of the aquivalence Q and the fugacity capacity Z (C = Z × Q). E_A and E_W refer to the amounts of heavy metal pollutants discharged into the air and water in Nanjing, respectively (Table S2). The model initialized the compartmental (water, soil, sediment, and road dust) concentrations to the corresponding background values, and the initial concentration in the air compartment was set to 0.
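The steady-state balances can be solved as a linear system in the compartment aquivalences. The sketch below is a minimal Python illustration of that idea (the paper's actual computation used MATLAB, and the D values, losses, and emissions here are hypothetical placeholders, not the study's parameters): writing each balance as "inputs = outputs" yields A·Q = E, after which concentrations follow from C = Z × Q.

```python
import numpy as np

def solve_steady_state(D, losses, emissions, Z):
    """Minimal level III steady-state aquivalence solver (illustrative).

    D[i][j] (m^3/h): transport parameter for transfer from compartment i to j.
    losses[i] (m^3/h): D value for removal from the system (advection out,
        burial, etc.) acting on compartment i.
    emissions[i] (mol/h): direct emission into compartment i.
    Z[i]: fugacity (aquivalence) capacity of compartment i.

    Each balance  emissions[i] + sum_j D[j][i]*Q[j]
                  = (losses[i] + sum_j D[i][j]) * Q[i]
    gives the linear system A.Q = emissions solved below.
    Returns aquivalences Q (mol/m^3) and concentrations C = Z*Q.
    """
    D = np.asarray(D, dtype=float)
    n = D.shape[0]
    A = -D.T.copy()                          # inputs received from other compartments
    A[np.diag_indices(n)] = D.sum(axis=1) + np.asarray(losses, dtype=float)
    Q = np.linalg.solve(A, np.asarray(emissions, dtype=float))
    return Q, np.asarray(Z, dtype=float) * Q

# Toy example with three compartments (air, water, sediment); all numbers
# are hypothetical placeholders, not the parameters used in the study.
D = [[0.0, 50.0, 0.0],   # air -> water (deposition)
     [10.0, 0.0, 30.0],  # water -> air, water -> sediment
     [0.0, 5.0, 0.0]]    # sediment -> water (resuspension/diffusion)
Q, C = solve_steady_state(D, losses=[200.0, 40.0, 8.0],
                          emissions=[100.0, 20.0, 0.0], Z=[1.0, 1.0, 100.0])
print(Q, C)
```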
The Z value for heavy metals in road dust, as particle-sorbed chemicals, is the same as that for aerosols, whose major source of heavy metals is re-suspended road dust [60]. For the environmental compartments in direct contact with water, the Z values for heavy metals in soil, sediment and SPM were calculated based on the partition coefficients summarized in the Supplementary Materials (Table S3). By analogy with the film-water transfer process in the fugacity model, the mass transfer coefficient k_rw (m/h) of the road dust-water process is described by k_rw = T_r × W, where T_r is the road dust thickness (m) and W is the wash-off rate constant (h⁻¹). To estimate a value applicable to steady-state rather than event-specific conditions, W was determined by dividing the total amount of contaminants washed off across the five grain-size ranges of road dust (250-500, 125-250, 75-125, 37-75, and <37 µm) by the amount remaining unwashed.
Here, P_i is the wash-off percentage of road dust in grain size fraction i (%), M_i is the mass of that grain size fraction per unit area (mg/m²), and C_i is the measured concentration of the metal in road dust of grain size i (mg/kg). The wash-off percentage of road dust in each grain size fraction from impervious surfaces was determined on the basis of the mean mass load per unit area (around 20 mg/m²) [39]. According to the transport factor F_w (%) [40], P_i for the five grain size fractions (250-500, 125-250, 75-125, 37-75, and <37 µm) was estimated to be 6%, 11.6%, 18%, 40%, and 68%, respectively. The other parameters associated with air, water, soil, and sediment were determined from the empirical values listed in the Supplementary Materials. MATLAB R2019a was used to calculate the simulated concentrations in the five environmental compartments (see the MATLAB Live Script in the Supplementary Materials).
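One possible reading of this calculation (an assumed interpretation of "the amount washed out divided by the amount unwashed"; the P_i values are those quoted above, while the M_i, C_i and T_r values below are hypothetical placeholders) is sketched here, together with the resulting road dust-water mass transfer coefficient k_rw = T_r × W.

```python
def wash_off_rate_constant(P, M, C):
    """Wash-off rate constant W (h^-1), read here as the contaminant mass
    washed off, summed over grain-size fractions, divided by the mass that
    stays on the road surface (an assumed interpretation of the text)."""
    washed = sum(p * m * c for p, m, c in zip(P, M, C))
    unwashed = sum((1.0 - p) * m * c for p, m, c in zip(P, M, C))
    return washed / unwashed

# P_i for the fractions 250-500, 125-250, 75-125, 37-75 and <37 um (from the text);
# M_i (mg/m^2) and C_i (mg/kg) below are illustrative placeholders only.
P = [0.06, 0.116, 0.18, 0.40, 0.68]
M = [6.0, 5.0, 4.0, 3.0, 2.0]
C = [30.0, 40.0, 55.0, 70.0, 90.0]

W = wash_off_rate_constant(P, M, C)
T_r = 0.001            # assumed road dust layer thickness (m), placeholder
k_rw = T_r * W         # road dust-water mass transfer coefficient (m/h)
print(round(W, 3), k_rw)
```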
As shown in Table 2, the calculated results are close to the wash-off rate constant reported for organic pollutants (0.25) [61]. Therefore, this rate constant is also applicable to heavy metals in road dust and is not affected by the chemical properties of the pollutants. However, residues in the rainwater pipe network and stormwater treatment are not considered here.
Sensitivity Analysis
Sensitivity analysis was performed to assess how each input parameter affects the model output and to identify the most influential inputs. The sensitivity coefficient (SC) was calculated as the relative change in the output divided by the relative change in the tested parameter, SC = ((Y_1.01 - Y) / Y) / ((X_1.01 - X) / X), where X_1.01 indicates that the input parameter is increased by 1% (i.e., the parameter takes 101% of its mean value X), and Y_1.01 represents the model output when the tested parameter is increased by 1% (Y being the output at the mean parameter value).
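As a minimal sketch of this procedure (assuming the standard one-at-a-time form of SC given above; the model function, parameter names and values below are placeholders, not the study's model), one can perturb each parameter by 1% and record the normalized response:

```python
def sensitivity_coefficients(model, params, delta=0.01):
    """One-at-a-time sensitivity: perturb each parameter by `delta` (1% by
    default) and return SC = (relative change in output) / (relative change
    in the parameter) for every parameter."""
    baseline = model(params)
    sc = {}
    for name, value in params.items():
        perturbed = dict(params)
        perturbed[name] = value * (1.0 + delta)
        sc[name] = ((model(perturbed) - baseline) / baseline) / delta
    return sc

# Placeholder model: a made-up function standing in for the simulated
# concentration in one compartment (not the aquivalence model itself).
def toy_model(p):
    return p["emission"] * p["K_se"] / (p["Q_10"] + p["G_SD"])

params = {"emission": 100.0, "K_se": 2.5, "Q_10": 40.0, "G_SD": 8.0}
print(sensitivity_coefficients(toy_model, params))
```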
Model Parameterization for Nanjing
Model parameters involving environmental and physicochemical properties were obtained from the relevant literature and databases and were used in the model to simulate the fate and transport of heavy metals in Nanjing. Herein, a total surface area of 6.41 × 10⁹ m² was selected. The height of the air compartment was defined as 1000 m according to previous studies [62]. The density of the total suspended particulates was an empirical value of 1.5 × 10¹⁵ µg/m³ [63]. The water area covered 7.42 × 10⁸ m², with an average depth of 2.48 m [64]. A depth of 10 cm was assumed for the sediment underlying the water. The urban soil and impervious surface areas were 3.08 × 10⁹ and 2.10 × 10⁹ m², respectively (Figure 2).
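For illustration, these environmental dimensions can be collected into a single configuration object from which the compartment volumes are derived (a sketch only; the soil depth and road dust layer thickness are not stated in this excerpt and are marked as placeholders):

```python
from dataclasses import dataclass

@dataclass
class NanjingEnvironment:
    """Environmental dimensions used to parameterize the model (values from
    the text, except where noted as placeholders)."""
    total_area_m2: float = 6.41e9
    air_height_m: float = 1000.0
    water_area_m2: float = 7.42e8
    water_depth_m: float = 2.48
    sediment_depth_m: float = 0.10
    soil_area_m2: float = 3.08e9
    impervious_area_m2: float = 2.10e9
    soil_depth_m: float = 0.10            # placeholder, not given in this excerpt
    road_dust_thickness_m: float = 0.001  # placeholder, not given in this excerpt

    def volumes(self):
        """Bulk volume (m^3) of each compartment."""
        return {
            "air": self.total_area_m2 * self.air_height_m,
            "water": self.water_area_m2 * self.water_depth_m,
            "sediment": self.water_area_m2 * self.sediment_depth_m,
            "soil": self.soil_area_m2 * self.soil_depth_m,
            "road_dust": self.impervious_area_m2 * self.road_dust_thickness_m,
        }

print(NanjingEnvironment().volumes())
```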
Model Simulation and Verification
The pre-processed data involving road dust, soil, atmospheric fine particulate matter (PM2.5), surface water, sediment, and suspended particulate matter (SPM) are summarized in Table 3. The iron concentration is the highest in all listed environmental compartments, while cadmium has the lowest concentration, which is broadly in line with the abundance of these elements in the crust. Compared with the background values for river water in the Yangtze River system [65], the concentration ratios of Fe and Mn were not more than 0.5 (0.31 and 0.32, respectively), while the value for Cd was as high as 43.68. For the solid media, including road dust, soil, sediment, and suspended particles, the Cd concentration was 2.09-13.65 times the soil background value [66], while the Fe and Mn variations were small (1.16-2.15 and 1.04-1.64, respectively). This suggests that iron and manganese are less affected by human activities, and hence, compared with Cd, Fe and Mn are strongly influenced by the initial value of the model, i.e., the background value. Furthermore, different environmental media are affected by human activities to different degrees. Many countries and organizations have developed sophisticated systems of standards for controlling the discharge of pollutants to air, water, and soil. However, emission standards for pollutants associated with urban street dust, such as restrictions on the use of heavy metals in tires and brake pads, are lacking. The Fe, Mn, Zn, Cr, Ni, Cu, Pb, and Cd concentrations simulated by the multimedia aquivalence model were compared with the observed concentrations (Figure 3). The difference between the simulated and measured logarithmic molar concentrations did not exceed 3, indicating that the model can successfully simulate the heavy metal concentrations in the various environmental compartments. Nevertheless, the largest deviation, for cadmium in road dust, reached 2.96. This might be related to the uncertainty of the input parameters, most of which were obtained from the literature rather than from field investigations conducted specifically for modeling purposes [25,31]. Consequently, the sensitivity of the input parameters is examined below. Across the five environmental compartments, the average logarithmic residual errors of air (1.60), soil (0.70), sediment (1.27), and road dust (0.32) were lower than that of water (2.03), consistent with the results of a previous study [68]. It is speculated that the larger deviation observed for the aqueous phase may be attributed to the assumptions that the water depth remained constant and that the water was uniformly mixed. Moreover, most of the simulated concentrations were higher than the measured concentrations, which might be because the maximum allowable emission concentrations were used as model inputs, whereas actual emission amounts vary from industry to industry.
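The model-observation comparison described here reduces to computing logarithmic residuals between simulated and measured molar concentrations; a short illustrative sketch (placeholder numbers, not the study's data) is given below.

```python
import math

def log_residuals(simulated, observed):
    """Absolute residuals |log10(C_sim) - log10(C_obs)| per compartment and
    their average; concentrations must be positive and in the same units."""
    res = {k: abs(math.log10(simulated[k]) - math.log10(observed[k]))
           for k in observed}
    return res, sum(res.values()) / len(res)

# Hypothetical molar concentrations (mol/m^3) for one metal, for illustration.
simulated = {"air": 2.0e-9, "water": 5.0e-6, "soil": 3.0e-2,
             "sediment": 6.0e-2, "road_dust": 1.5e-2}
observed = {"air": 5.0e-10, "water": 8.0e-8, "soil": 1.5e-2,
            "sediment": 4.0e-3, "road_dust": 9.0e-3}
per_compartment, average = log_residuals(simulated, observed)
print(per_compartment, round(average, 2))
```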
The Transport and Fate of Fe, Mn, and Trace Metals in the Urban Multimedia Environment
The transfer flux N (mol/h), the product of the aquivalence Q (mol/m³) and the transport or transformation parameter D (m³/h), describes the transport and fate of heavy metals in urban environments. For the whole open environmental system, the simulation results reveal that, of all the input pathways, industrial atmospheric pollutant emissions accounted for almost all of the heavy metal inputs, while sediment burial (77.52%) was the main output for most heavy metals, consistent with a study in an oilfield [25]. Water advection outflow and infiltration into groundwater accounted for 13.59% and 8.89% of the output fluxes, respectively. Diffusion into the stratosphere and air advection outflow contributed little to the output flux. For the exchanges between adjacent environmental media within the system, the transport fluxes varied greatly among the different metals. The iron flux was the highest, reaching 117,585 mol/h. The main transport flux was contributed by the interaction between the sediment and the overlying water (Figure 4a). The migration of heavy metals from the water phase to the sediment phase was dominant (41%), although there were differences among the metals, with higher proportions for cadmium (46.37%) and chromium (45.15%). Thus, sediment burial is not the best way to treat heavy metal pollutants, and it is important to explore alternative strategies to inhibit the release of heavy metals from sediment. The mass distribution of each heavy metal was basically in agreement with its concentration distribution (Table 3). The total mass of iron accumulated in the various environmental compartments was still the largest, reaching 60 billion tons, followed by manganese (3 billion tons); cadmium had the smallest mass, with only 350,000 tons. However, as shown in Figure 4b, the mass proportion of each environmental phase shows a certain regularity. Soil and sediment were the largest sinks of heavy metals in the urban environment, accounting for 27-51% and 13-68%, respectively. This also suggests that sediment and soil can be difficult to clean thoroughly once contaminated with heavy metal pollutants. Road dust was also an important migration destination for heavy metals, accounting for 3-45%. This indicates that impermeable roads are a significant site of metal accumulation, which has been ignored in numerous studies [23,68,69]. Hence, targeted restoration measures should be developed and taken for the sustainable development of the city. In contrast, the masses in the water and air phases are negligible.
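To illustrate how such figures are derived from the model outputs, the sketch below computes a transfer flux as N = D × Q and the percentage mass share of each compartment from M = C × V (all numbers are hypothetical placeholders, not the study's results):

```python
def transfer_flux(D_m3_per_h, Q_mol_per_m3):
    """Transfer flux N (mol/h) for one inter-compartment process."""
    return D_m3_per_h * Q_mol_per_m3

def mass_shares(concentration_mol_m3, volume_m3):
    """Percentage of the total metal mass held in each compartment."""
    masses = {k: concentration_mol_m3[k] * volume_m3[k] for k in volume_m3}
    total = sum(masses.values())
    return {k: 100.0 * m / total for k, m in masses.items()}

# Hypothetical example for a single metal.
print(transfer_flux(D_m3_per_h=3.0e4, Q_mol_per_m3=2.0e-3))  # mol/h
conc = {"soil": 4.0e-2, "sediment": 6.0e-2, "road_dust": 2.0e-2,
        "water": 1.0e-6, "air": 1.0e-9}
vol = {"soil": 3.1e8, "sediment": 7.4e7, "road_dust": 2.1e6,
       "water": 1.8e9, "air": 6.4e12}
print(mass_shares(conc, vol))
```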
Sensitivity Analysis
The sensitivities of 25 parameters were evaluated for the heavy metals. As shown in Figure 5, compared with the other environmental media, the number of highly sensitive parameters for the heavy metal concentration in the air phase is much lower: only the atmospheric background concentration (C_A) and the advection output rate (Q_10) significantly affect the heavy metal concentration in the gas phase. The water and sediment phases show almost the same parameter sensitivities except for density and background value. Interestingly, several common parameters have opposite effects on the two phases, including the partition coefficients between sediment and water (K_se) and between suspended particles and water (K_p), the sediment deposition rate (G_SD), the molecular diffusion coefficient in sediment pore water (B_4), and the volume fraction of solids in water (v_23). This may be due to the dynamic equilibrium of heavy metals between the aqueous phase and the sediment. For road dust and soil, the response of the heavy metal concentration to the partition coefficients (K_so) is still positive, but for the parameters related to wash-off and erosion (W and G_SW) the response is negative. Iron and manganese are less sensitive to most parameters than the other elements; however, the opposite holds for the background value and density. This may be because the soil background values of Fe and Mn in the model were much higher than the concentration changes caused by transfer and transformation.
Distribution and Migration of Heavy Metals Among Multimedia in Urban Environment
Although soil and sediment are the main repositories of heavy metals in the urban environment, the total amount of heavy metals in urban street dust cannot be ignored, with a mass fraction ranging from 3.2% to 44.9%. However, many multimedia modeling studies have paid little attention to this increasingly important environmental phase [23,36]. Even when it is studied, most researchers have focused on the organic film attached to impervious road surfaces, including polychlorinated biphenyls (PCBs) [70] and polycyclic aromatic hydrocarbons (PAHs) [35], with little research on heavy metals. As for the toxic heavy metal lead, its mass load in street dust is almost half that in sediment, probably owing to its extensive sources, including automobile exhaust, brake pad wear, tire wear, paint, and mining emissions [71][72][73]. Moreover, lead accumulated more than cadmium, copper, and zinc, consistent with the research results in Xi'an, China [74]. Unfortunately, this study only considered the influence of weather on the composition of road dust, including atmospheric deposition and precipitation scour, and did not compile statistics on heavy metal emissions from road dust, which will be addressed in subsequent studies. Nevertheless, the smallest accumulation of heavy metals in road dust is on the order of one million tons. This can be attributed to the fact that the heavy metal composition of road dust is influenced by both internal and external adsorption [13].
Although iron itself is not toxic, iron and manganese (Mn), two abundant metals in the Earth's crust, are the major blackening ingredients in malodorous black rivers. Table 3 shows that the abundance of Fe, Mn, and the other trace metals in the five environmental phases was similar to that in the Earth's crust and the soil background. Studies have shown that iron and manganese in river water are mainly derived from major clay minerals in sediment, including nontronite, saponite, and pennantite [21]. This was supported by the large transfer fluxes of iron and manganese between sediment and water (Figure 4a), which can be regulated by microorganisms [75]. Moreover, the exchange rates between sediment and water increase with organic pollution of urban rivers [76]. It can therefore be inferred that the transfer of heavy metals, especially Fe and Mn, between sediment and water in black and odorous water bodies will be more frequent, which will inform our further study of heavy metals in black and odorous water.
Effects of Heavy Metals in Multimedia on Urban Water Quality
The contributions, calculated as the ratio of the flux from one medium to the total input flux, quantify the sources of the metal pollutants associated with water quality impairment. The results indicate that, despite the differences, endogenous sediment had the largest impact on urban water bodies (19.7-89.9%; Table 4). This may be affected by hydrodynamic conditions and bioturbation/bioirrigation [77,78], and is the reason many studies have focused on the immobilization of heavy metals in sediment [79][80][81][82]. Research has shown that hydraulic mulches exhibit the highest release potential for heavy metals (4-70% of the total concentration), while netting/blanketing has the lowest release potential, particularly for Pb (0-8%) [83]. Furthermore, the main migration routes of the different metals from the sediment to the overlying water also differ. Among them, a surprising finding is that the main migration pathway for Fe and Mn is sediment resuspension; in particular, the iron proportion is as high as 17.3%. This also explains why iron and manganese are more likely to adhere to suspended particles and cause water to turn black. For the other metallic elements, such as Ni, Cu, Zn, and Cd, it has been suggested that resuspension events could cause water quality deterioration over time in both anoxic and oxic sediments [84]; notwithstanding, the main release behavior of these metal elements is diffusion. This is likely due to a combination of the fact that iron occurs at much higher concentrations than the other metals and that iron is transformed to its reduced (soluble) form at a much lower redox potential. Consequently, reductive dissolution is less effective at transforming particulate iron [85], followed by manganese and lead; their dissolution conditions (low pH and oxidation-reduction potential (ORP)) are difficult to achieve in a natural neutral water environment. Atmospheric deposition did not contribute as much as sediment (1.57-10.78%), but it is still important for urban waters; this conclusion also applies to oligotrophic open oceans and alpine lakes [86][87][88]. Compared with wet deposition, the contribution of atmospheric dry deposition is smaller for the eight heavy metals studied, ranging from 0.47% to 3.22%. This proportion is similar to that in Lake Tahoe, where dry deposition contributed 0.03-5.7% of heavy metals [88], indicating that deposition in Nanjing is dominated by wet rather than dry deposition. Wet deposition of heavy metals is more susceptible to regional climate characteristics, especially precipitation patterns. The eastern coastal areas of China, such as the Pearl River Delta Region [89], are affected by monsoons and experience abundant rainfall, leading to an increase in wet deposition, while dry deposition of heavy metals tends to predominate in arid and semiarid inland areas [90]. In addition, the deposition flux of wet particles is much higher than that of dry particles, which also indicates that, although wet particle deposition has a larger influence on polluting the urban receiving water, it also has a larger positive effect on the removal of atmospheric particles. Regrettably, this model does not consider the difference between the dry and wet deposition fluxes of the different metal elements, which are often affected by more complex factors, such as the water solubility of the metallic elements [91]. Therefore, parameters need to be added to further optimize the model.
The impact of urban nonpoint source pollution, including road dust and soil, is similar to the contributions (18-50%) reported in some U.S. cities [85,92,93]. Moreover, for the eight heavy metals, which are commonly found in urban storm runoff [94,95], the proportion carried by road runoff exceeds that of point source tailwater discharge, which originates mainly from 82 industrial enterprises in the 13 districts and counties of Nanjing. Although metal concentrations in runoff can vary with the time, location, and intensity of rainfall [96], this result is in line with that of the greater Los Angeles region (CA, USA), where the annual cumulative loading of total copper, lead, and zinc from three watersheds (the Los Angeles River, Ballona Creek, and Dominguez Channel) far exceeded the pollutant discharge from industrial point sources such as power-generating stations and oil refineries [97]. As noted in the literature, unlike industrial point source pollution, which has fixed treatment sites and mature treatment technologies, control and mitigation measures for urban nonpoint source pollution are generally still at the research stage [98][99][100] and are relatively lacking in practical application. Therefore, urban road runoff should be given sufficient management attention.
Conclusions
A multimedia aquivalence model coupled with road dust on impervious surfaces was applied to assess the fate and transport of Fe, Mn, Zn, Cr, Ni, Cu, Pb, and Cd in a developed industrial city of China. The results indicate that the model can reproduce the observed concentrations well, with average logarithmic residuals smaller than 3. Iron was the most abundant heavy metal in the five environmental compartments. In the simulated heavy metal cycle, the transfer fluxes between bottom sediment and water were the highest. Among these fluxes, the contribution of sediment resuspension for Fe and Mn was significantly higher than that of the other metal elements, which could partly explain why Fe and Mn are the major blackening ingredients in malodorous black rivers. Road dust was also an important migration destination for heavy metals, accounting for 3-45%, although soil and sediment were the main repositories of heavy metals in the urban environment. The impact of road dust on surface water should not be neglected, with its contribution reaching 4-31%. The wash-off rate constant W for the road dust-water process proved to be consistent with that for the film-water process and was independent of the type of heavy metal. This study not only lays a foundation for follow-up studies on the migration and transformation of heavy metals in black and odorous water, but also properly couples road dust into the multimedia model, revealing its potential impact on urban water bodies, which is conducive to the sustainable development of the urban water environment.
Supplementary Materials: The following are available online at http://www.mdpi.com/2073-4441/12/6/1580/s1, Table S1: The descriptive statistics of observed concentration of metals in multimedia in Nanjing, Table S2: Discharge of heavy metal pollutants, Table S3: Summary of partition coefficients used in the model; MATLAB Live Script. | 2020-06-04T09:06:33.665Z | 2020-06-02T00:00:00.000 | {
"year": 2020,
"sha1": "c8d98943659dc76319720e5f63d9cc430bcd8d26",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2073-4441/12/6/1580/pdf",
"oa_status": "GOLD",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "74bbf259c13521e68161501e0de5668e36354a71",
"s2fieldsofstudy": [
"Environmental Science"
],
"extfieldsofstudy": [
"Environmental Science"
]
} |